Configuration
Important settings in .env
Set these in .env before starting the instance for the first time:
- `NEODB_SECRET_KEY` - 50 characters of random string, no white space
- `NEODB_SITE_DOMAIN` - the domain name of your site
NEODB_SECRET_KEY and NEODB_SITE_DOMAIN MUST NOT be changed later.
If you are debugging or developing, the following may also be relevant (an example .env excerpt follows the list):
- `NEODB_DEBUG` - `True` will turn on debug for both neodb and takahe, turn off relay, and reveal self as debug mode in nodeinfo (so peers won't try to run fedi search on this node)
- `NEODB_IMAGE` - the docker image to use, `neodb/neodb:edge` for the main branch
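A minimal sketch of such a .env for a first start (all values below are placeholders; one way to generate the random string is `openssl rand -hex 25`, which prints 50 hex characters with no whitespace):

```
# set before the first start; the secret key and site domain must never change afterwards
NEODB_SECRET_KEY=ReplaceThisWith50RandomCharactersNoWhitespace00000
NEODB_SITE_DOMAIN=neodb.example.org

# optional, for debug/development only
# NEODB_DEBUG=True
# NEODB_IMAGE=neodb/neodb:edge
```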
Site Settings UI
Most configuration settings can be managed through the web-based Site Settings page at /manage/, accessible to superusers. This includes:
- Branding - site name, logo, icon, color theme, description, footer links, custom HTML head
- Discover - minimum marks, update interval, language filtering, local-only mode, popular posts/tags
- Access - invite-only mode, local-only posting, Mastodon login whitelist, Bluesky/Threads login, preferred languages
- Federation - default relay, fanout limit, prune horizon, search sites/peers, hidden categories
- API Keys - Spotify, TMDB, Google Books, Discogs, IGDB, Steam, DeepL, LibreTranslate, Threads, Sentry, Discord webhooks
- Downloader - scraping providers, proxy list, provider API keys, timeouts
- Advanced - alternative domains, Mastodon client scope, cron jobs, index aliases
Settings configured in the UI take effect immediately (within 30 seconds) without restarting the server. Values set in the UI override .env values. If a setting has not been configured in the UI, the .env value is used as fallback.
Settings that must remain in .env
These settings require infrastructure access or a process restart and cannot be managed from the UI (a short example excerpt follows the list):
- `NEODB_SECRET_KEY` - Django secret key
- `NEODB_SITE_DOMAIN` - primary domain (identity-critical)
- `NEODB_DB_URL`, `TAKAHE_DB_URL` - database connection strings
- `NEODB_REDIS_URL` - Redis URL for cache and job queue
- `NEODB_SEARCH_URL` - Typesense search backend URL
- `NEODB_EMAIL_URL` - email sender configuration, e.g.
    - `smtp://<username>:<password>@<host>:<port>`
    - `smtp+tls://<username>:<password>@<host>:<port>`
    - `smtp+ssl://<username>:<password>@<host>:<port>`
    - `anymail://<anymail_backend_name>?<anymail_args>`, see anymail doc
- `NEODB_EMAIL_FROM` - the email address to send email from
- `MEDIA_BACKEND` - storage backend (local/s3)
- `NEODB_MEDIA_ROOT`, `NEODB_MEDIA_URL` - media storage paths
- `SSL_ONLY` - force HTTPS
- `NEODB_DATA` - data directory for docker volumes (database, redis, typesense, media), default `../data`
- `NEODB_PORT` - the port to expose the main web server on
- `NEODB_IMAGE` - docker image to pull from
- `TAKAHE_NO_FEDERATION` - disable federation (test/development only)
- `TAKAHE_SENTRY_DSN` - Sentry DSN for takahe container
- `NEODB_ADMIN_HANDLES` - comma-separated list of handles to auto-promote to superuser on registration, in `type:handle` format (e.g. `mastodon:user@mastodon.social,email:admin@example.com`). Supported types: `mastodon`, `email`, `bluesky`, `threads`.
- `NEODB_LOG_LEVEL` - logging level (DEBUG, INFO, WARNING, ERROR). Requires restart.
- `SKIP_MIGRATIONS` - deprecated, retained as a fallback only. Configure skipped post-migration job keys in Admin > Advanced > "Skip Migration Jobs" instead. The UI value is read by the worker at dequeue time (no restart needed); skipped jobs log a warning and post a `[migration] <key>: skipped` notice to the Discord `system` channel.
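For illustration, a hedged .env excerpt combining a few of these settings (host names, credentials and handles are placeholders):

```
NEODB_EMAIL_URL=smtp+tls://mailer:app_password@smtp.example.org:587
NEODB_EMAIL_FROM=no-reply@example.org
NEODB_ADMIN_HANDLES=mastodon:admin@mastodon.example.org,email:admin@example.org
NEODB_LOG_LEVEL=INFO
```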
S3 and Compatible Storage
To test storage configuration, you can use the following command to upload a test file and check if it's accessible:
```bash
neodb-manage catalog storage-test
```
Minio
If you are using Minio or one of its forks for local S3-compatible storage, add the following configuration to compose.override.yml (change minio/minio to your chosen fork, as the original image is unmaintained and may have known security issues):
```yaml
services:
  minio:
    image: minio/minio:latest
    command: server --console-address :9001
    environment:
      MINIO_DOMAIN: ${MINIO_DOMAIN}
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: change_password
      MINIO_VOLUMES: /var/lib/minio
    volumes:
      - ${NEODB_DATA:-../data}/minio-files:/var/lib/minio
    healthcheck:
      test: ["CMD", "mc", "ready", "local"]
    ports:
      - 9000:9000
      - 9001:9001
```
And add these settings to .env:
```
MINIO_DOMAIN=my.media.domain
MEDIA_BACKEND=s3-insecure://minioadmin:change_password@minio:9000/media
MEDIA_URL=https://my.media.domain/media/
```
Also make sure my.media.domain resolves to your Minio server (port 9000 as configured above).
Garage
Garage is a lightweight S3-compatible storage engine. Add the following to compose.override.yml:
```yaml
services:
  garage:
    image: dxflrs/garage:v2.2.0
    volumes:
      - ${NEODB_DATA:-../data}/garage/garage.toml:/etc/garage.toml
      - ${NEODB_DATA:-../data}/garage/data:/var/lib/garage/data
      - ${NEODB_DATA:-../data}/garage/meta:/var/lib/garage/meta
    ports:
      - 3900:3900
      - 3902:3902
```
Create a garage.toml configuration file (see Garage quick start for details), then initialize after first start:
```bash
# Assign node layout
docker compose exec garage /garage -c /etc/garage.toml status
docker compose exec garage /garage -c /etc/garage.toml layout assign -z dc1 -c 1G <node_id>
docker compose exec garage /garage -c /etc/garage.toml layout apply --version 1

# Create bucket and key
docker compose exec garage /garage -c /etc/garage.toml bucket create media
docker compose exec garage /garage -c /etc/garage.toml key create neodb-app-key
docker compose exec garage /garage -c /etc/garage.toml bucket allow --read --write --owner media --key neodb-app-key
docker compose exec garage /garage -c /etc/garage.toml bucket website --allow media
```
Add these settings to .env, using the key ID and secret from the output of key create above:
```
MEDIA_BACKEND=s3-insecure://KEY_ID:SECRET_KEY@garage:3900/media
MEDIA_URL=https://media.my.media.domain/
```
Garage serves files publicly via its S3 Web endpoint (port 3902) using virtual-host-style routing. The MEDIA_URL hostname must match {bucket}.{root_domain} as configured in the [s3_web] section of garage.toml. For example, with root_domain = ".my.media.domain" and bucket media, the public URL becomes https://media.my.media.domain/. Make sure DNS for that hostname points to Garage's port 3902.
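As a rough sketch, the sections of garage.toml relevant to the example above could look like the following (the rest of the file still needs the usual metadata/data/RPC settings; follow the Garage quick start for the exact keys for your Garage version):

```toml
# garage.toml fragment - only the S3 API and S3 Web sections shown
[s3_api]
api_bind_addr = "[::]:3900"
s3_region = "garage"
root_domain = ".s3.my.media.domain"

[s3_web]
bind_addr = "[::]:3902"
# bucket "media" will then be served as media.my.media.domain
root_domain = ".my.media.domain"
index = "index.html"
```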
SeaweedFS
SeaweedFS is a distributed storage system with S3 API support. Add the following to compose.override.yml, mounting an S3 credentials config file that defines an anonymous read identity and an admin identity (see Docker Compose for S3 for details):
```yaml
services:
  seaweedfs:
    image: chrislusf/seaweedfs
    command: "server -s3 -s3.config /etc/seaweedfs/config.json"
    volumes:
      - ${NEODB_DATA:-../data}/seaweedfs/config.json:/etc/seaweedfs/config.json
      - ${NEODB_DATA:-../data}/seaweedfs/data:/data
    ports:
      - 8333:8333
```
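A minimal sketch of that config.json, assuming the identity name and keys below are placeholders you will replace (check the SeaweedFS documentation for the authoritative schema):

```json
{
  "identities": [
    {
      "name": "anonymous",
      "actions": ["Read"]
    },
    {
      "name": "neodb_admin",
      "credentials": [
        { "accessKey": "some_access_key", "secretKey": "some_secret_key" }
      ],
      "actions": ["Admin", "Read", "List", "Tagging", "Write"]
    }
  ]
}
```

The access key and secret here must match the MEDIA_BACKEND credentials in .env shown further below.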
Create the media bucket after first start (using awscli or any S3 client):
```bash
aws --endpoint-url http://localhost:8333 s3 mb s3://media
```
Add these settings to .env, matching the credentials in the config file:
```
MEDIA_BACKEND=s3-insecure://some_access_key:some_secret_key@seaweedfs:8333/media
MEDIA_URL=https://my.media.domain/media/
```
Make sure my.media.domain maps to your SeaweedFS server (port 8333). Files are publicly readable via the same port thanks to the anonymous read identity.
Scaling Parameters
For a high-traffic instance, raise these settings in .env as long as the host server can handle the extra load (a sample excerpt follows the list):
- `NEODB_WEB_WORKER_NUM`
- `NEODB_API_WORKER_NUM`
- `NEODB_RQ_WORKER_NUM`
- `TAKAHE_WEB_WORKER_NUM`
- `TAKAHE_STATOR_CONCURRENCY`
- `TAKAHE_STATOR_CONCURRENCY_PER_MODEL`
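For example, a purely illustrative starting point (not a recommendation; tune the numbers against your host's CPU and memory):

```
NEODB_WEB_WORKER_NUM=8
NEODB_API_WORKER_NUM=4
NEODB_RQ_WORKER_NUM=8
TAKAHE_WEB_WORKER_NUM=8
TAKAHE_STATOR_CONCURRENCY=10
TAKAHE_STATOR_CONCURRENCY_PER_MODEL=5
```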
Scaling further across multiple nodes (e.g. via Kubernetes) is beyond the scope of this document. In short: run db/redis/typesense separately, then duplicate the web/worker/stator containers, as long as connections and mounts are properly configured; the migration job runs only once per start or upgrade, and it should stay that way.
Other Maintenance Tasks
Add an alias to your shell for easier access. It is not necessary, just convenient.
```bash
alias neodb-manage='docker-compose --profile production run --rm shell neodb-manage'
```
Manage user tasks and cron jobs
```bash
neodb-manage task --list
neodb-manage cron --list
```
Rebuild search index
```bash
neodb-manage catalog idx-reindex
```
There are more commands available to manage the catalog; take a look at Manage Accounts to learn how to create an admin/staff account, create an invitation code, and more.
Run PostgreSQL/Redis/Typesense without Docker
Running entirely without Docker is currently possible but quite cumbersome, hence not recommended. However, you can use Docker only for the neodb server and reuse existing PostgreSQL/Redis/Typesense servers via compose.override.yml; an example for reference:
```yaml
services:
  redis:
    profiles: ['disabled']
  typesense:
    profiles: ['disabled']
  neodb-db:
    profiles: ['disabled']
  takahe-db:
    profiles: ['disabled']
  migration:
    extra_hosts:
      - "host.docker.internal:host-gateway"
    depends_on: !reset []
  neodb-web:
    extra_hosts:
      - "host.docker.internal:host-gateway"
    depends_on: !reset []
    healthcheck: !reset {}
  neodb-web-api:
    extra_hosts:
      - "host.docker.internal:host-gateway"
    depends_on: !reset []
    healthcheck: !reset {}
  neodb-worker:
    extra_hosts:
      - "host.docker.internal:host-gateway"
    depends_on: !reset []
  neodb-worker-extra:
    extra_hosts:
      - "host.docker.internal:host-gateway"
    depends_on: !reset []
  takahe-web:
    extra_hosts:
      - "host.docker.internal:host-gateway"
    depends_on: !reset []
  takahe-stator:
    extra_hosts:
      - "host.docker.internal:host-gateway"
    depends_on: !reset []
  shell:
    extra_hosts:
      - "host.docker.internal:host-gateway"
    depends_on: !reset []
  root:
    extra_hosts:
      - "host.docker.internal:host-gateway"
    depends_on: !reset []
  dev-neodb-web:
    extra_hosts:
      - "host.docker.internal:host-gateway"
    depends_on: !reset []
  dev-neodb-worker:
    extra_hosts:
      - "host.docker.internal:host-gateway"
    depends_on: !reset []
  dev-takahe-web:
    extra_hosts:
      - "host.docker.internal:host-gateway"
    depends_on: !reset []
  dev-takahe-stator:
    extra_hosts:
      - "host.docker.internal:host-gateway"
    depends_on: !reset []
  dev-shell:
    extra_hosts:
      - "host.docker.internal:host-gateway"
    depends_on: !reset []
  dev-root:
    extra_hosts:
      - "host.docker.internal:host-gateway"
    depends_on: !reset []
```
extra_hosts is only needed if PostgreSQL/Redis/Typesense runs on your host server.
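For reference, a sketch of the connection settings in .env when these services listen on the Docker host (the postgres:// and redis:// formats are standard Django-style connection URLs; names, passwords and ports are placeholders):

```
NEODB_DB_URL=postgres://neodb:password@host.docker.internal:5432/neodb
TAKAHE_DB_URL=postgres://takahe:password@host.docker.internal:5432/takahe
NEODB_REDIS_URL=redis://host.docker.internal:6379/0
# also point NEODB_SEARCH_URL at your existing Typesense server;
# check the exact URL format against your current configuration
```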
Multiple instances on one server
It's possible to run multiple instances on one host server with docker compose, as long as NEODB_SITE_DOMAIN, NEODB_PORT and NEODB_DATA differ between them (see the sketch below).
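For example, two checkouts (or two compose project directories), each with its own .env, might differ like this (domains, ports and paths are placeholders):

```
# instance A (.env in its own directory)
NEODB_SITE_DOMAIN=books.example.org
NEODB_PORT=8000
NEODB_DATA=/srv/neodb-books/data

# instance B (.env in a second directory)
NEODB_SITE_DOMAIN=movies.example.org
NEODB_PORT=8001
NEODB_DATA=/srv/neodb-movies/data
```

Docker Compose derives the project name from the directory (or COMPOSE_PROJECT_NAME), so keeping each instance in its own directory keeps their containers and volumes separate.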
Deprecated .env settings
The following settings can still be set in .env for backward compatibility, but should be configured through the Site Settings UI (/manage/) instead. .env values are used as initial defaults when the UI has not been configured.
Customization
- `NEODB_SITE_LOGO`
- `NEODB_SITE_ICON`
- `NEODB_SITE_NAME`
- `NEODB_USER_ICON`
- `NEODB_SITE_COLOR`
- `NEODB_SITE_INTRO`
- `NEODB_SITE_HEAD`
- `NEODB_SITE_DESCRIPTION`
- `NEODB_SITE_LINKS`
- `NEODB_PREFERRED_LANGUAGES`
- `NEODB_ALTERNATIVE_DOMAINS`
- `NEODB_INVITE_ONLY`
- `NEODB_ENABLE_LOCAL_ONLY`
- `NEODB_LOGIN_MASTODON_WHITELIST`
- `NEODB_ENABLE_LOGIN_BLUESKY`
- `NEODB_ENABLE_LOGIN_THREADS`
Discover
- `NEODB_DISCOVER_FILTER_LANGUAGE`
- `NEODB_DISCOVER_SHOW_LOCAL_ONLY`
- `NEODB_DISCOVER_UPDATE_INTERVAL`
- `NEODB_DISCOVER_SHOW_POPULAR_POSTS`
- `NEODB_DISCOVER_SHOW_POPULAR_TAGS`
- `NEODB_MIN_MARKS_FOR_DISCOVER`
Federation
- `NEODB_DISABLE_DEFAULT_RELAY`
- `NEODB_SEARCH_PEERS`
- `NEODB_SEARCH_SITES`
- `NEODB_FANOUT_LIMIT_DAYS`
- `TAKAHE_REMOTE_PRUNE_HORIZON`
- `NEODB_HIDDEN_CATEGORIES`
- `NEODB_REVIEW_AS_ARTICLE` (default `True`; set `False` to federate Reviews as `Note` for backward compatibility)
External item sources
- `SPOTIFY_API_KEY`
- `TMDB_API_V3_KEY`
- `GOOGLE_API_KEY`
- `DISCOGS_API_KEY`
- `IGDB_API_CLIENT_ID`, `IGDB_API_CLIENT_SECRET`
- `STEAM_API_KEY`
Scraping providers
- `NEODB_DOWNLOADER_PROVIDERS`
- `NEODB_DOWNLOADER_SCRAPFLY_KEY`
- `NEODB_DOWNLOADER_DECODO_TOKEN`
- `NEODB_DOWNLOADER_SCRAPERAPI_KEY`
- `NEODB_DOWNLOADER_SCRAPINGBEE_KEY`
- `NEODB_DOWNLOADER_CUSTOMSCRAPER_URL`
- `NEODB_DOWNLOADER_PROXY_LIST`
- `NEODB_DOWNLOADER_BACKUP_PROXY`
- `NEODB_DOWNLOADER_REQUEST_TIMEOUT`
- `NEODB_DOWNLOADER_CACHE_TIMEOUT`
- `NEODB_DOWNLOADER_RETRIES`
Translation
- `DEEPL_API_KEY`
- `LT_API_URL`, `LT_API_KEY`
Administration
- `DISCORD_WEBHOOKS`
- `NEODB_SENTRY_DSN`
- `NEODB_SENTRY_SAMPLE_RATE`
- `THREADS_APP_ID`, `THREADS_APP_SECRET`
- `NEODB_MASTODON_CLIENT_SCOPE`
- `NEODB_DISABLE_CRON_JOBS`
- `INDEX_ALIASES`