# Self Hosting

Logflare can be self-hosted. As of now, only single-machine deployments are supported.

Two different backends are supported:

- BigQuery
- PostgreSQL (experimental)

Docker Compose is the recommended way to manage single-node deployments.
## Limitations

Inviting team members and other team-related functionality is currently not supported, as Logflare self-hosted is intended for a single-user experience only.

All browser authentication is disabled when in single-tenant mode.
## Configuration

### Common Configuration

| Env Var | Type | Description |
| --- | --- | --- |
| `LOGFLARE_SINGLE_TENANT` | Boolean, defaults to `false` | If enabled, a single user is seeded and all browser usage defaults to that user. |
| `LOGFLARE_API_KEY` | string, defaults to `nil` | If set, this API key can be used for interacting with the Logflare API. An API key is generated automatically if not set. |
| `LOGFLARE_SUPABASE_MODE` | Boolean, defaults to `false` | A special mode for Logflare in which Supabase-specific resources are seeded. Intended for Supabase self-hosted usage. |
| `PHX_HTTP_PORT` | Integer, defaults to `4000` | Configures the HTTP server port. |
| `DB_SCHEMA` | String, defaults to `nil` | Configures the database schema to which Logflare operations are scoped. |
| `LOGFLARE_LOG_LEVEL` | String, defaults to `info`. Options: `error`, `warn`, `info`, `debug` | Configures the log level at runtime. |
| `LOGFLARE_NODE_HOST` | string, defaults to `127.0.0.1` | Sets the node host on startup, which affects the node name `logflare@<host>`. |
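With `LOGFLARE_SINGLE_TENANT` and `LOGFLARE_API_KEY` set, you can send events to the HTTP ingest API. A minimal sketch, assuming a running instance on port 4000 and an existing source whose UUID you substitute for the placeholder; adjust the route if your version differs:

```bash
# Sketch: send one event to a self-hosted instance.
# <source-uuid> is a placeholder for a source's UUID from the dashboard.
curl -X POST "http://localhost:4000/api/logs?source=<source-uuid>" \
  -H "Content-Type: application/json" \
  -H "X-API-KEY: my-cool-api-key" \
  -d '{"message": "hello from self-hosted Logflare"}'
```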
### BigQuery Backend Configuration

| Env Var | Type | Description |
| --- | --- | --- |
| `GOOGLE_PROJECT_ID` | string, required | The ID of the GCP project to use. |
| `GOOGLE_PROJECT_NUMBER` | string, required | The number of the GCP project to use. |
| `GOOGLE_DATASET_ID_APPEND` | string, defaults to `_prod` | Customizes the suffix of the dataset created in BigQuery. |
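For a quick run outside of Docker, these variables can be exported in the shell before starting the server; the values below are placeholders, not real project identifiers:

```bash
# Placeholders only; substitute your own project ID and number.
export GOOGLE_PROJECT_ID=logflare-docker-example
export GOOGLE_PROJECT_NUMBER=123123123213
export GOOGLE_DATASET_ID_APPEND=_staging
```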
### PostgreSQL Backend Configuration

| Env Var | Type | Description |
| --- | --- | --- |
| `POSTGRES_BACKEND_URL` | string, required | PostgreSQL connection string for connecting to the database. The user must have sufficient permissions to manage the schema. |
| `POSTGRES_BACKEND_SCHEMA` | string, optional, defaults to `public` | The database schema to which all operations are scoped. |
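A sketch of the equivalent shell configuration, with placeholder credentials; point the URL at a database where the user can manage the schema:

```bash
# Placeholders only; use your own connection details and schema name.
export POSTGRES_BACKEND_URL=postgresql://user:pass@host:5432/db
export POSTGRES_BACKEND_SCHEMA=my_schema
```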
## BigQuery Setup

### Pre-requisites

You will need a Google Cloud project with billing enabled in order to proceed. After creating the project, the following are required for server startup:

- Project ID
- Project number
- A service account key
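If you use the `gcloud` CLI, the project number can be looked up from the project ID; a sketch, with `my-project-id` as a placeholder:

```bash
# Look up the project number for an existing project.
gcloud projects describe my-project-id --format='value(projectNumber)'
```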
### Setting up the BigQuery Service Account

To ensure that Logflare has sufficient permissions to insert into BigQuery, create a service account with either:

- the BigQuery Admin role; or
- the following permissions:
  - `bigquery.datasets.create`
  - `bigquery.datasets.get`
  - `bigquery.datasets.getIamPolicy`
  - `bigquery.datasets.update`
  - `bigquery.jobs.create`
  - `bigquery.routines.create`
  - `bigquery.routines.update`
  - `bigquery.tables.create`
  - `bigquery.tables.delete`
  - `bigquery.tables.get`
  - `bigquery.tables.getData`
  - `bigquery.tables.update`
  - `bigquery.tables.updateData`

We recommend using the BigQuery Admin role, as it simplifies permissions setup.
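As a sketch, the service account can be created and granted the BigQuery Admin role via the `gcloud` CLI; the account name and project ID below are placeholders:

```bash
# Create a service account named "logflare" (placeholder name).
gcloud iam service-accounts create logflare \
  --project my-project-id \
  --display-name "Logflare"

# Grant it the BigQuery Admin role on the project.
gcloud projects add-iam-policy-binding my-project-id \
  --member "serviceAccount:logflare@my-project-id.iam.gserviceaccount.com" \
  --role "roles/bigquery.admin"
```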
### Obtaining the BigQuery Service Account Key

In order for Logflare to connect sources to their relevant BigQuery tables, it needs a service account key that can sign the JWTs used to authenticate with the Google Cloud APIs.

To obtain the key after creating the service account, navigate to IAM > Service Accounts in the web console and click the "Manage Keys" action. Then click "Add Key" to create a new key. The key is in JSON format; store it securely on your host machine.

You can also obtain the key via the `gcloud` CLI by following the official documentation.
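A sketch of the `gcloud` equivalent, assuming the placeholder service account created above:

```bash
# Create and download a JSON key for the service account.
gcloud iam service-accounts keys create gcloud.json \
  --iam-account logflare@my-project-id.iam.gserviceaccount.com
```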
## Deployment with Docker Compose

Using Docker Compose is the recommended method for self-hosting.

1. Create the `docker-compose.yml`:
   ```yaml
   services:
     db:
       image: postgres:13.4-alpine
       environment:
         POSTGRES_PASSWORD: postgres
         POSTGRES_USER: postgres
         POSTGRES_DATABASE: logflare_docker
       ports:
         - "5432:5432"
       volumes:
         - ./priv/wal.sql:/docker-entrypoint-initdb.d/wal.sql
         - pg-data:/var/lib/postgresql/data
     logflare:
       image: supabase/logflare:1.0.1
       ports:
         - "4000:4000"
       hostname: 127.0.0.1
       environment:
         - DB_DATABASE=logflare_docker
         - DB_HOSTNAME=db
         - DB_PORT=5432
         - DB_PASSWORD=postgres
         - DB_USERNAME=postgres
         - LOGFLARE_SINGLE_TENANT=true
         - LOGFLARE_API_KEY=my-cool-api-key
         # Required for BigQuery backend
         - GOOGLE_DATASET_ID_APPEND=_your_env
         - GOOGLE_PROJECT_ID=logflare-docker-example
         - GOOGLE_PROJECT_NUMBER=123123123213
         # Required for Postgres backend
         - POSTGRES_BACKEND_URL=postgresql://user:pass@host:port/db
         - POSTGRES_BACKEND_SCHEMA=my_schema
       volumes:
         - type: bind
           source: ${PWD}/.env
           target: /tmp/.secrets.env
           read_only: true
         - type: bind
           source: ${PWD}/gcloud.json
           target: /opt/app/rel/logflare/bin/gcloud.json
           read_only: true
       depends_on:
         - db

   # Declares the named volume referenced by the db service above.
   volumes:
     pg-data:
   ```
2. Using the service account key that you obtained under the pre-requisites section, move and rename the JSON file to `gcloud.json` in your working directory. The directory structure should be as follows:

   ```
   \
   |- gcloud.json
   |- docker-compose.yml
   ```
3. Run `docker-compose up -d` and visit http://localhost:4000.
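To verify the deployment from the shell, a quick probe; the `/health` endpoint is an assumption here, and the landing page at `/` works as a fallback:

```bash
# Probe the server after `docker-compose up -d` finishes starting.
# /health is assumed; if unavailable on your version, open / instead.
curl -i http://localhost:4000/health
```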
### Using an .env file

You can optionally use a `.env` file to manage your environment variables. You can base the file contents on this reference file.

You cannot have comments in the env file, as it is loaded at startup via `xargs`.
```yaml
# ... the rest is the same
volumes:
  # Add in this bind mount. If you have a different name or location, update the source.
  - type: bind
    source: ${PWD}/.env
    target: /tmp/.secrets.env
    read_only: true
```
The directory structure will now be as follows:

```
\
|- gcloud.json
|- .env
|- docker-compose.yml
```
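A sketch of possible `.env` contents, reusing the placeholder values from the compose file above; remember that the real file must not contain comment lines:

```bash
LOGFLARE_SINGLE_TENANT=true
LOGFLARE_API_KEY=my-cool-api-key
GOOGLE_DATASET_ID_APPEND=_your_env
GOOGLE_PROJECT_ID=logflare-docker-example
GOOGLE_PROJECT_NUMBER=123123123213
```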