Harden S3 DC handling

Andreas Dueren
2025-11-18 11:57:32 -06:00
parent 42c4c1f38f
commit 3b7a853c71
5 changed files with 37 additions and 20 deletions


@@ -58,7 +58,7 @@ After installing on Cloudron remember to:
1. Open the File Manager for the app, edit `/app/data/config/s3.env`, and set the S3-compatible credentials that belong in `museum.yaml`. The upstream documentation expects the canonical keys `b2-eu-cen` (primary), `wasabi-eu-central-2-v3` (secondary), and `scw-eu-fr-v3` (cold); this package renders those blocks automatically from the environment variables below so you don't have to touch the generated config. At minimum set `S3_ENDPOINT`, `S3_REGION`, `S3_BUCKET`, `S3_ACCESS_KEY`, `S3_SECRET_KEY`, plus the optional `S3_PREFIX`. To enable replication you must also define **both** `S3_SECONDARY_*` and `S3_COLD_*` (endpoint, region, bucket, key, secret, optional prefix/DC overrides); after a restart the package will flip `replication.enabled` on your behalf when all three buckets are present. Advanced knobs from the documentation map to the following variables (a sample `s3.env` follows this list):
- `S3_ARE_LOCAL_BUCKETS=false` switches to SSL/subdomain-style URLs (`are_local_buckets` in `museum.yaml`); leave it at `true` for MinIO-style setups.
- `S3_FORCE_PATH_STYLE=true` translates to `use_path_style_urls=true` (required for R2/MinIO and most LAN storage).
- `S3_PRIMARY_DC`, `S3_SECONDARY_DC`, `S3_COLD_DC`, and `S3_DERIVED_DC` let you override the default data-center identifiers if your Museum database already uses different keys.
- The data-center identifiers (`b2-eu-cen`, `wasabi-eu-central-2-v3`, `scw-eu-fr-v3`, etc.) are **hard-coded upstream**. Keep the defaults unless you know you are targeting one of the legacy names (as listed in the Ente docs). The start script will ignore unknown values to prevent replication from breaking with empty bucket names.
- Leave the generated `museum/configurations/local.yaml` alone—if you need extra settings, add them via `/app/data/config/museum.override.yaml` and include only the keys you actually want to change (a minimal override example follows this list). Copy-pasting the full sample `s3:` block from the docs will overwrite the generated credentials with blanks.
- If you are using Cloudflare R2 or another hosted S3 provider, configure your bucket's CORS policy to allow the Ente frontends (e.g. `https://ente.due.ren`, `https://accounts.due.ren`, `https://cast.due.ren`, etc.) so that cast/slideshow playback can fetch signed URLs directly from storage. Backblaze B2 also requires clearing its “native” CORS rules; see the script in `POSTINSTALL.md`.
2. When prompted during installation, pick hostnames for the Accounts/Auth/Cast/Albums/Family web apps (they are exposed via Cloudron `httpPorts`). Ensure matching DNS records exist; Cloudron-managed DNS creates them automatically, otherwise point CNAME/A records such as `accounts.<app-domain>` at the primary hostname.
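For orientation, here is a sketch of a fully populated `/app/data/config/s3.env`. The primary variable names are the ones documented above; the exact suffixes of the `S3_SECONDARY_*`/`S3_COLD_*` variables and every value are illustrative assumptions, so compare against the sample file shipped with the package before copying anything.

```ini
# Primary ("hot") bucket -- required (values are placeholders)
S3_ENDPOINT=s3.eu-central-003.backblazeb2.com
S3_REGION=eu-central-003
S3_BUCKET=ente-primary
S3_ACCESS_KEY=AKIAEXAMPLE
S3_SECRET_KEY=change-me
S3_PREFIX=

# Secondary ("hot") and cold buckets -- define BOTH to enable replication
# (suffixes below are assumed to mirror the primary variables)
S3_SECONDARY_ENDPOINT=s3.eu-central-2.wasabisys.com
S3_SECONDARY_REGION=eu-central-2
S3_SECONDARY_BUCKET=ente-secondary
S3_SECONDARY_ACCESS_KEY=change-me
S3_SECONDARY_SECRET_KEY=change-me

S3_COLD_ENDPOINT=s3.fr-par.scw.cloud
S3_COLD_REGION=fr-par
S3_COLD_BUCKET=ente-cold
S3_COLD_ACCESS_KEY=change-me
S3_COLD_SECRET_KEY=change-me

# Transport toggles (see the bullets above)
S3_ARE_LOCAL_BUCKETS=false
S3_FORCE_PATH_STYLE=true

# Only override the data-center identifiers if your Museum database
# already uses one of the documented legacy names
#S3_PRIMARY_DC=b2-eu-cen
#S3_SECONDARY_DC=wasabi-eu-central-2-v3
#S3_COLD_DC=scw-eu-fr-v3
```

Likewise, a minimal sketch of `/app/data/config/museum.override.yaml`: add only the keys you want to change on top of the generated config. The `smtp` keys shown mirror the upstream `museum.yaml` sample and are only an illustration.

```yaml
# Add only the keys you want to change; never paste the full sample s3: block.
smtp:
  host: smtp.example.com     # illustrative values
  port: 587
  username: ente@example.com
  password: change-me
```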
@@ -104,7 +104,7 @@ The main photos UI continues to live on the hostname you selected during install
The upstream documentation at [ente.io/help/self-hosting/administration/object-storage](https://ente.io/help/self-hosting/administration/object-storage) is written for bare-metal installs where you edit `museum.yaml` by hand. The Cloudron package wraps those steps so you only maintain `/app/data/config/s3.env`, but the same concepts apply:
- **Canonical bucket names.** Museum's schema ships with `b2-eu-cen`, `wasabi-eu-central-2-v3`, and `scw-eu-fr-v3`. You can point those names at any S3-compatible provider—the labels are just keys. If you do override them, update `S3_PRIMARY_DC`, `S3_SECONDARY_DC`, `S3_COLD_DC`, and restart so the package renders matching blocks.
- **Canonical bucket names.** Museum's schema ships with `b2-eu-cen`, `wasabi-eu-central-2-v3`, and `scw-eu-fr-v3`. You can point those identifiers at any S3-compatible provider, but you cannot rename them—replication logic only understands the upstream keys (or their documented legacy aliases). Leave the defaults in `s3.env` and only change the credentials/endpoints under each key.
- **Three buckets for replication.** Replication only works when two “hot” buckets and one “cold” bucket are configured. Populate `S3_*`, `S3_SECONDARY_*`, and `S3_COLD_*`; once all three have endpoints/keys/secrets the package automatically writes the `replication.enabled: true` stanza.
- **Transport settings.** Set `S3_ARE_LOCAL_BUCKETS=true`/`false` and `S3_FORCE_PATH_STYLE=true` to mirror the documentation's `are_local_buckets`/`use_path_style_urls` toggles when talking to MinIO, Cloudflare R2, or other providers that require path-style URLs over HTTPS.
- **CORS.** If browsers cannot upload/download because of CORS, apply the recommended JSON from the docs (or the Backblaze helper script in `POSTINSTALL.md`). Ensure `Content-MD5` is listed in `AllowedHeaders` for providers with allow-lists.
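For illustration, here is a generic S3-style CORS document in the shape accepted by `aws s3api put-bucket-cors` (Cloudflare R2 takes a very similar payload). The origins reuse the hostnames mentioned above; the method and header lists are assumptions to adapt to your deployment, and Backblaze B2's native rules instead need the helper script from `POSTINSTALL.md`.

```json
{
  "CORSRules": [
    {
      "AllowedOrigins": [
        "https://ente.due.ren",
        "https://accounts.due.ren",
        "https://cast.due.ren"
      ],
      "AllowedMethods": ["GET", "HEAD", "PUT", "POST"],
      "AllowedHeaders": ["Authorization", "Content-Type", "Content-MD5", "Range"],
      "ExposeHeaders": ["ETag"],
      "MaxAgeSeconds": 3600
    }
  ]
}
```

Apply it with `aws s3api put-bucket-cors --bucket <bucket> --cors-configuration file://cors.json --endpoint-url https://<s3-endpoint>`, or through your provider's dashboard.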