Support optional S3 replication

Andreas Dueren
2025-11-17 13:18:12 -06:00
parent 8f3a34a277
commit 632dea4517
4 changed files with 312 additions and 45 deletions


@@ -20,7 +20,7 @@ cloudron install \
```
## After Install
1. **S3** In Cloudron File Manager open `/app/data/config/s3.env`, fill in your endpoint/region/bucket/access/secret, then restart the app from the dashboard. Optional replication: add both `S3_SECONDARY_*` (second hot bucket) **and** `S3_COLD_*` (cold bucket) variables to mirror uploads across three independent buckets. Replication is only enabled when all three buckets are present. See Ente's [object storage guide](https://ente.io/help/self-hosting/administration/object-storage) for example configs (a sample `s3.env` is sketched below).
2. **Secondary hostnames** During installation Cloudron now prompts for hostnames for the Accounts/Auth/Cast/Albums/Family web apps (powered by `httpPorts`). Ensure matching DNS records exist that point to the primary app domain. If you use Cloudron-managed DNS, those records are created automatically; otherwise create CNAME/A records such as `accounts.<app-domain> → <app-domain>`.
Once DNS propagates, use the dedicated hosts (defaults shown below — substitute the names you selected during install):
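For reference, a filled-in `/app/data/config/s3.env` with replication enabled might look like the sketch below. All values are placeholders; the variable names match the template the package writes on first start, and `S3_SECONDARY_DC`/`S3_COLD_DC` may be omitted to accept the defaults.

```
# Primary hot bucket
S3_ENDPOINT=https://s3.us-west-002.backblazeb2.com
S3_REGION=us-west-002
S3_BUCKET=ente-primary
S3_ACCESS_KEY=<primary-access-key>
S3_SECRET_KEY=<primary-secret-key>

# Secondary hot bucket (mirrored copy)
S3_SECONDARY_ENDPOINT=https://hel1.your-objectstorage.com
S3_SECONDARY_REGION=hel1
S3_SECONDARY_BUCKET=ente-secondary
S3_SECONDARY_ACCESS_KEY=<secondary-access-key>
S3_SECONDARY_SECRET_KEY=<secondary-secret-key>

# Cold bucket (third copy; required for replication to turn on)
S3_COLD_ENDPOINT=https://<account-id>.r2.cloudflarestorage.com
S3_COLD_REGION=auto
S3_COLD_BUCKET=ente-cold
S3_COLD_ACCESS_KEY=<cold-access-key>
S3_COLD_SECRET_KEY=<cold-secret-key>
```

If only one of the two extra blocks is filled in, the start script logs a warning and keeps running in single-bucket mode.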


@@ -53,6 +53,7 @@ Supported variables:
- `S3_ACCESS_KEY`
- `S3_SECRET_KEY`
- `S3_PREFIX` (optional path prefix)
- Optional replication: define `S3_SECONDARY_*` **and** `S3_COLD_*` (endpoints, keys, secrets, DC names) to mirror uploads to a second hot bucket and a third cold bucket. Replication is only enabled when all three buckets are configured; otherwise the app stays in single-bucket mode. See [Ente's object storage guide](https://ente.io/help/self-hosting/administration/object-storage) for sample setups and discussion of reliability.
## Required: Secondary Hostnames
@@ -81,4 +82,4 @@ The installer now asks for dedicated hostnames for the Auth/Accounts/Cast/Albums
# inspect available commands
sudo -u cloudron ente --help
```
After you're signed in you can follow the upstream docs for tasks like increasing storage: see [user administration](https://ente.io/help/self-hosting/administration/users) and the [CLI reference](https://ente.io/help/self-hosting/administration/cli). The [object storage guide](https://ente.io/help/self-hosting/administration/object-storage) explains the reliability setup: fill out `S3_*`, `S3_SECONDARY_*`, and `S3_COLD_*` in `/app/data/config/s3.env`, and the package will automatically enable three-bucket replication when you restart (no extra toggle needed).
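As a quick check after the restart, the startup log can be searched for the replication messages the start script prints. A minimal sketch; the log path comes from this package's docs and the phrasing from `start.sh`:

```
# Shows whether the secondary and cold buckets were detected,
# or why replication stayed disabled.
grep -Ei 'replication|storage target' /app/data/logs/startup.log
```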


@@ -55,7 +55,7 @@ The app is configured automatically using Cloudron's environment variables for:
After installing on Cloudron remember to:
1. Open the File Manager for the app, edit `/app/data/config/s3.env` with your object storage endpoint/keys, and restart the app. If you are using Cloudflare R2 or another S3-compatible service, configure the bucket's CORS policy to allow the Ente frontends (e.g. `https://ente.due.ren`, `https://accounts.due.ren`, `https://cast.due.ren`, etc.) so that cast/slideshow playback can fetch signed URLs directly from storage. Replication requires **three** buckets: the primary (`S3_*`), a secondary hot bucket (`S3_SECONDARY_*`) and a cold bucket (`S3_COLD_*`). Once all three are configured the package will automatically enable replication on startup (watch `/app/data/logs/startup.log` for the “replication enabled” log line). See the [object storage guide](https://ente.io/help/self-hosting/administration/object-storage) for sample layouts and reliability notes.
2. When prompted during installation, pick hostnames for the Accounts/Auth/Cast/Albums/Family web apps (they are exposed via Cloudron `httpPorts`). Ensure matching DNS records exist; Cloudron-managed DNS creates them automatically, otherwise point CNAME/A records such as `accounts.<app-domain>` at the primary hostname.
3. To persist tweaks to Museum (for example, seeding super-admin or whitelist entries), create `/app/data/config/museum.override.yaml`. Its contents are appended to the generated `museum/configurations/local.yaml` on every start, so you only need to declare the keys you want to override.
```yaml

start.sh

@@ -177,28 +177,56 @@ if [ ! -f "$S3_CONFIG_FILE" ]; then
# S3_ACCESS_KEY=your-access-key
# S3_SECRET_KEY=your-secret-key
# S3_PREFIX=optional/path/prefix
# Optional replication settings (secondary object storage):
# S3_SECONDARY_ENDPOINT=https://secondary.s3-provider.com
# S3_SECONDARY_REGION=us-west-1
# S3_SECONDARY_BUCKET=ente-data-backup
# S3_SECONDARY_ACCESS_KEY=secondary-access-key
# S3_SECONDARY_SECRET_KEY=secondary-secret-key
# S3_SECONDARY_PREFIX=optional/path/prefix
# S3_SECONDARY_DC=b2-us-west
# S3_COLD_ENDPOINT=https://cold.s3-provider.com
# S3_COLD_REGION=eu-central-1
# S3_COLD_BUCKET=ente-cold
# S3_COLD_ACCESS_KEY=cold-access-key
# S3_COLD_SECRET_KEY=cold-secret-key
# S3_COLD_PREFIX=optional/path/prefix
# S3_COLD_DC=scw-eu-fr-v3
# Replication requires configuring both the secondary hot storage and the cold
# storage buckets. Leave these unset to run with a single bucket. (Derived storage
# is optional and defaults to the primary bucket.)
#
# Example for Cloudflare R2 (replace placeholders):
#S3_ENDPOINT=https://<account-id>.r2.cloudflarestorage.com
#S3_REGION=auto
#S3_BUCKET=ente
#S3_ACCESS_KEY=R2_ACCESS_KEY
#S3_SECRET_KEY=R2_SECRET_KEY
#S3_FORCE_PATH_STYLE=true
#S3_PRIMARY_DC=b2-eu-cen
#S3_SECONDARY_DC=b2-eu-cen
#S3_DERIVED_DC=b2-eu-cen
#
# Example layout (replication):
#
# Primary hot bucket (Backblaze B2):
#S3_ENDPOINT=https://s3.us-west-002.backblazeb2.com
#S3_REGION=us-west-002
#S3_BUCKET=ente-due-ren
#S3_ACCESS_KEY=<B2_PRIMARY_ACCESS_KEY>
#S3_SECRET_KEY=<B2_PRIMARY_SECRET_KEY>
#S3_FORCE_PATH_STYLE=true
#S3_PRIMARY_DC=b2-eu-cen
#
# Secondary hot bucket (Hetzner Object Storage, hel1):
#S3_SECONDARY_ENDPOINT=https://hel1.your-objectstorage.com
#S3_SECONDARY_REGION=hel1
#S3_SECONDARY_BUCKET=ente-secondary
#S3_SECONDARY_ACCESS_KEY=<HETZNER_ACCESS_KEY>
#S3_SECONDARY_SECRET_KEY=<HETZNER_SECRET_KEY>
#S3_SECONDARY_FORCE_PATH_STYLE=true
#S3_SECONDARY_DC=wasabi-eu-central-2-v3
#
# Cold bucket (Cloudflare R2):
#S3_COLD_ENDPOINT=https://<account-id>.r2.cloudflarestorage.com
#S3_COLD_REGION=auto
#S3_COLD_BUCKET=ente-cold
#S3_COLD_ACCESS_KEY=<R2_ACCESS_KEY>
#S3_COLD_SECRET_KEY=<R2_SECRET_KEY>
#S3_COLD_FORCE_PATH_STYLE=true
#S3_COLD_DC=scw-eu-fr-v3
#
# When all three blocks are configured, replication is enabled automatically.
EOF_S3
chown cloudron:cloudron "$S3_CONFIG_FILE"
chmod 600 "$S3_CONFIG_FILE"
@@ -212,6 +240,25 @@ if [ -f "$S3_CONFIG_FILE" ]; then
fi
set -u
parse_s3_endpoint() {
local raw="$1"
local prefix="$2"
local host_var="$3"
local prefix_var="$4"
local host="${raw#https://}"
host="${host#http://}"
host="${host%%/}"
local path="${host#*/}"
if [ "$path" != "$host" ]; then
if [ -z "$prefix" ]; then
prefix="$path"
fi
host="${host%%/*}"
fi
printf -v "$host_var" "%s" "$host"
printf -v "$prefix_var" "%s" "$prefix"
}
S3_ENDPOINT="${S3_ENDPOINT:-${ENTE_S3_ENDPOINT:-}}"
S3_REGION="${S3_REGION:-${ENTE_S3_REGION:-}}"
S3_BUCKET="${S3_BUCKET:-${ENTE_S3_BUCKET:-}}"
@@ -219,6 +266,35 @@ S3_ACCESS_KEY="${S3_ACCESS_KEY:-${ENTE_S3_ACCESS_KEY:-}}"
S3_SECRET_KEY="${S3_SECRET_KEY:-${ENTE_S3_SECRET_KEY:-}}"
S3_PREFIX="${S3_PREFIX:-${ENTE_S3_PREFIX:-}}"
S3_SECONDARY_ENDPOINT="${S3_SECONDARY_ENDPOINT:-${ENTE_S3_SECONDARY_ENDPOINT:-}}"
S3_SECONDARY_REGION="${S3_SECONDARY_REGION:-${ENTE_S3_SECONDARY_REGION:-}}"
S3_SECONDARY_BUCKET="${S3_SECONDARY_BUCKET:-${ENTE_S3_SECONDARY_BUCKET:-}}"
S3_SECONDARY_ACCESS_KEY="${S3_SECONDARY_ACCESS_KEY:-${ENTE_S3_SECONDARY_ACCESS_KEY:-}}"
S3_SECONDARY_SECRET_KEY="${S3_SECONDARY_SECRET_KEY:-${ENTE_S3_SECONDARY_SECRET_KEY:-}}"
S3_SECONDARY_PREFIX="${S3_SECONDARY_PREFIX:-${ENTE_S3_SECONDARY_PREFIX:-}}"
S3_SECONDARY_DC_RAW="${ENTE_S3_SECONDARY_DC:-}"
S3_SECONDARY_ENABLED=false
S3_SECONDARY_ENDPOINT_HOST=""
S3_COLD_ENDPOINT="${S3_COLD_ENDPOINT:-${ENTE_S3_COLD_ENDPOINT:-}}"
S3_COLD_REGION="${S3_COLD_REGION:-${ENTE_S3_COLD_REGION:-}}"
S3_COLD_BUCKET="${S3_COLD_BUCKET:-${ENTE_S3_COLD_BUCKET:-}}"
S3_COLD_ACCESS_KEY="${S3_COLD_ACCESS_KEY:-${ENTE_S3_COLD_ACCESS_KEY:-}}"
S3_COLD_SECRET_KEY="${S3_COLD_SECRET_KEY:-${ENTE_S3_COLD_SECRET_KEY:-}}"
S3_COLD_PREFIX="${S3_COLD_PREFIX:-${ENTE_S3_COLD_PREFIX:-}}"
S3_COLD_DC_RAW="${ENTE_S3_COLD_DC:-}"
S3_COLD_ENABLED=false
S3_COLD_ENDPOINT_HOST=""
S3_DERIVED_ENDPOINT="${S3_DERIVED_ENDPOINT:-${ENTE_S3_DERIVED_ENDPOINT:-}}"
S3_DERIVED_REGION="${S3_DERIVED_REGION:-${ENTE_S3_DERIVED_REGION:-}}"
S3_DERIVED_BUCKET="${S3_DERIVED_BUCKET:-${ENTE_S3_DERIVED_BUCKET:-}}"
S3_DERIVED_ACCESS_KEY="${S3_DERIVED_ACCESS_KEY:-${ENTE_S3_DERIVED_ACCESS_KEY:-}}"
S3_DERIVED_SECRET_KEY="${S3_DERIVED_SECRET_KEY:-${ENTE_S3_DERIVED_SECRET_KEY:-}}"
S3_DERIVED_PREFIX="${S3_DERIVED_PREFIX:-${ENTE_S3_DERIVED_PREFIX:-}}"
S3_DERIVED_CUSTOM=false
S3_DERIVED_ENDPOINT_HOST=""
if [ -z "$S3_ENDPOINT" ] || [ -z "$S3_REGION" ] || [ -z "$S3_BUCKET" ] || [ -z "$S3_ACCESS_KEY" ] || [ -z "$S3_SECRET_KEY" ]; then if [ -z "$S3_ENDPOINT" ] || [ -z "$S3_REGION" ] || [ -z "$S3_BUCKET" ] || [ -z "$S3_ACCESS_KEY" ] || [ -z "$S3_SECRET_KEY" ]; then
log ERROR "Missing S3 configuration. Update $S3_CONFIG_FILE or set environment variables." log ERROR "Missing S3 configuration. Update $S3_CONFIG_FILE or set environment variables."
log ERROR "The application will start in configuration mode. Please configure S3 and restart." log ERROR "The application will start in configuration mode. Please configure S3 and restart."
@@ -228,27 +304,28 @@ else
fi
if [ "$S3_NOT_CONFIGURED" = "false" ]; then
parse_s3_endpoint "$S3_ENDPOINT" "$S3_PREFIX" S3_ENDPOINT_HOST S3_PREFIX
parse_s3_endpoint "$S3_DERIVED_ENDPOINT" "$S3_DERIVED_PREFIX" S3_DERIVED_ENDPOINT_HOST S3_DERIVED_PREFIX
S3_REGION_LOWER="$(printf '%s' "$S3_REGION" | tr '[:upper:]' '[:lower:]')"
if printf '%s' "$S3_ENDPOINT_HOST" | grep -q '\.r2\.cloudflarestorage\.com$' && [ "$S3_REGION_LOWER" != "auto" ]; then
log WARN "Cloudflare R2 endpoints require S3_REGION=auto; current value '$S3_REGION' may cause upload failures"
fi
else
S3_ENDPOINT_HOST="s3.example.com"
S3_DERIVED_ENDPOINT_HOST="$S3_ENDPOINT_HOST"
log WARN "S3 not configured - using placeholder values"
fi
# Ensure AWS SDK always has a region when Museum needs to presign URLs (e.g. replication)
if [ "$S3_NOT_CONFIGURED" = "false" ]; then
if [ -n "$S3_REGION" ] && [ -z "${AWS_REGION:-}" ]; then
export AWS_REGION="$S3_REGION"
fi
if [ -n "${AWS_REGION:-}" ] && [ -z "${AWS_DEFAULT_REGION:-}" ]; then
export AWS_DEFAULT_REGION="$AWS_REGION"
fi
fi
DEFAULT_FORCE_PATH_STYLE="true"
if printf '%s' "$S3_ENDPOINT_HOST" | grep -q '\.r2\.cloudflarestorage\.com$'; then
if [ -z "${S3_FORCE_PATH_STYLE:-}" ] && [ -z "${ENTE_S3_FORCE_PATH_STYLE:-}" ]; then
@@ -261,9 +338,136 @@ S3_FORCE_PATH_STYLE="$(printf '%s' "$S3_FORCE_PATH_STYLE_RAW" | tr '[:upper:]' '
S3_ARE_LOCAL_BUCKETS="$(printf '%s' "${S3_ARE_LOCAL_BUCKETS:-${ENTE_S3_ARE_LOCAL_BUCKETS:-false}}" | tr '[:upper:]' '[:lower:]')"
S3_PRIMARY_DC="${ENTE_S3_PRIMARY_DC:-b2-eu-cen}"
S3_COLD_DC="${ENTE_S3_COLD_DC:-scw-eu-fr-v3}"
S3_DERIVED_DC="${ENTE_S3_DERIVED_DC:-$S3_PRIMARY_DC}"
S3_SECONDARY_ENV_PRESENT=false
for value in "$S3_SECONDARY_ENDPOINT" "$S3_SECONDARY_REGION" "$S3_SECONDARY_BUCKET" "$S3_SECONDARY_ACCESS_KEY" "$S3_SECONDARY_SECRET_KEY" "$S3_SECONDARY_PREFIX" "$S3_SECONDARY_DC_RAW"; do
if [ -n "$value" ]; then
S3_SECONDARY_ENV_PRESENT=true
break
fi
done
if [ "$S3_NOT_CONFIGURED" = "false" ] && [ "$S3_SECONDARY_ENV_PRESENT" = true ]; then
S3_SECONDARY_REGION="${S3_SECONDARY_REGION:-$S3_REGION}"
S3_SECONDARY_BUCKET="${S3_SECONDARY_BUCKET:-$S3_BUCKET}"
S3_SECONDARY_PREFIX="${S3_SECONDARY_PREFIX:-$S3_PREFIX}"
MISSING_SECONDARY_VARS=()
[ -z "$S3_SECONDARY_ENDPOINT" ] && MISSING_SECONDARY_VARS+=("S3_SECONDARY_ENDPOINT")
[ -z "$S3_SECONDARY_ACCESS_KEY" ] && MISSING_SECONDARY_VARS+=("S3_SECONDARY_ACCESS_KEY")
[ -z "$S3_SECONDARY_SECRET_KEY" ] && MISSING_SECONDARY_VARS+=("S3_SECONDARY_SECRET_KEY")
if [ "${#MISSING_SECONDARY_VARS[@]}" -gt 0 ]; then
log ERROR "Secondary S3 configuration incomplete (missing: ${MISSING_SECONDARY_VARS[*]}). Replication disabled."
S3_SECONDARY_ENABLED=false
S3_SECONDARY_DC=""
else
S3_SECONDARY_ENABLED=true
if [ -n "$S3_SECONDARY_DC_RAW" ]; then
S3_SECONDARY_DC="$S3_SECONDARY_DC_RAW"
else
S3_SECONDARY_DC="${S3_PRIMARY_DC}-secondary"
fi
fi
else
S3_SECONDARY_ENABLED=false
S3_SECONDARY_DC=""
fi
S3_COLD_ENV_PRESENT=false
for value in "$S3_COLD_ENDPOINT" "$S3_COLD_REGION" "$S3_COLD_BUCKET" "$S3_COLD_ACCESS_KEY" "$S3_COLD_SECRET_KEY" "$S3_COLD_PREFIX" "$S3_COLD_DC_RAW"; do
if [ -n "$value" ]; then
S3_COLD_ENV_PRESENT=true
break
fi
done
if [ "$S3_NOT_CONFIGURED" = "false" ] && [ "$S3_COLD_ENV_PRESENT" = true ]; then
S3_COLD_REGION="${S3_COLD_REGION:-$S3_REGION}"
S3_COLD_BUCKET="${S3_COLD_BUCKET:-$S3_BUCKET}"
S3_COLD_PREFIX="${S3_COLD_PREFIX:-$S3_PREFIX}"
MISSING_COLD_VARS=()
[ -z "$S3_COLD_ENDPOINT" ] && MISSING_COLD_VARS+=("S3_COLD_ENDPOINT")
[ -z "$S3_COLD_ACCESS_KEY" ] && MISSING_COLD_VARS+=("S3_COLD_ACCESS_KEY")
[ -z "$S3_COLD_SECRET_KEY" ] && MISSING_COLD_VARS+=("S3_COLD_SECRET_KEY")
if [ "${#MISSING_COLD_VARS[@]}" -gt 0 ]; then
log ERROR "Cold storage configuration incomplete (missing: ${MISSING_COLD_VARS[*]}). Replication disabled."
S3_COLD_ENABLED=false
S3_COLD_DC=""
else
S3_COLD_ENABLED=true
if [ -n "$S3_COLD_DC_RAW" ]; then
S3_COLD_DC="$S3_COLD_DC_RAW"
fi
fi
else
S3_COLD_ENABLED=false
S3_COLD_DC="${S3_COLD_DC:-}"
fi
S3_DERIVED_ENV_PRESENT=false
for value in "$S3_DERIVED_ENDPOINT" "$S3_DERIVED_REGION" "$S3_DERIVED_BUCKET" "$S3_DERIVED_ACCESS_KEY" "$S3_DERIVED_SECRET_KEY" "$S3_DERIVED_PREFIX"; do
if [ -n "$value" ]; then
S3_DERIVED_ENV_PRESENT=true
break
fi
done
if [ "$S3_NOT_CONFIGURED" = "false" ]; then
if [ "$S3_DERIVED_ENV_PRESENT" = true ]; then
S3_DERIVED_REGION="${S3_DERIVED_REGION:-$S3_REGION}"
S3_DERIVED_BUCKET="${S3_DERIVED_BUCKET:-$S3_BUCKET}"
S3_DERIVED_PREFIX="${S3_DERIVED_PREFIX:-$S3_PREFIX}"
MISSING_DERIVED_VARS=()
[ -z "$S3_DERIVED_ENDPOINT" ] && MISSING_DERIVED_VARS+=("S3_DERIVED_ENDPOINT")
[ -z "$S3_DERIVED_ACCESS_KEY" ] && MISSING_DERIVED_VARS+=("S3_DERIVED_ACCESS_KEY")
[ -z "$S3_DERIVED_SECRET_KEY" ] && MISSING_DERIVED_VARS+=("S3_DERIVED_SECRET_KEY")
if [ "${#MISSING_DERIVED_VARS[@]}" -gt 0 ]; then
log ERROR "Derived S3 configuration incomplete (missing: ${MISSING_DERIVED_VARS[*]}). Falling back to primary bucket for derived assets."
S3_DERIVED_CUSTOM=false
S3_DERIVED_ENDPOINT="$S3_ENDPOINT"
S3_DERIVED_REGION="$S3_REGION"
S3_DERIVED_BUCKET="$S3_BUCKET"
S3_DERIVED_ACCESS_KEY="$S3_ACCESS_KEY"
S3_DERIVED_SECRET_KEY="$S3_SECRET_KEY"
S3_DERIVED_PREFIX="$S3_PREFIX"
else
S3_DERIVED_CUSTOM=true
fi
else
S3_DERIVED_CUSTOM=false
S3_DERIVED_ENDPOINT="$S3_ENDPOINT"
S3_DERIVED_REGION="$S3_REGION"
S3_DERIVED_BUCKET="$S3_BUCKET"
S3_DERIVED_ACCESS_KEY="$S3_ACCESS_KEY"
S3_DERIVED_SECRET_KEY="$S3_SECRET_KEY"
S3_DERIVED_PREFIX="$S3_PREFIX"
fi
else
S3_DERIVED_CUSTOM=false
fi
if [ "$S3_NOT_CONFIGURED" = "false" ] && [ "$S3_SECONDARY_ENABLED" = true ]; then
parse_s3_endpoint "$S3_SECONDARY_ENDPOINT" "$S3_SECONDARY_PREFIX" S3_SECONDARY_ENDPOINT_HOST S3_SECONDARY_PREFIX
else
S3_SECONDARY_ENDPOINT_HOST=""
fi
if [ "$S3_NOT_CONFIGURED" = "false" ] && [ "$S3_COLD_ENABLED" = true ]; then
parse_s3_endpoint "$S3_COLD_ENDPOINT" "$S3_COLD_PREFIX" S3_COLD_ENDPOINT_HOST S3_COLD_PREFIX
else
S3_COLD_ENDPOINT_HOST=""
fi
S3_REPLICATION_ENABLED=false
if [ "$S3_SECONDARY_ENABLED" = true ] && [ "$S3_COLD_ENABLED" = true ]; then
S3_REPLICATION_ENABLED=true
elif [ "$S3_SECONDARY_ENABLED" = true ] && [ "$S3_COLD_ENABLED" = false ]; then
log WARN "Secondary hot bucket configured without a cold storage bucket; S3 replication remains disabled."
elif [ "$S3_SECONDARY_ENABLED" = false ] && [ "$S3_COLD_ENABLED" = true ]; then
log WARN "Cold storage bucket configured without a secondary hot bucket; S3 replication remains disabled."
fi
S3_DCS=()
add_s3_dc() {
local candidate="$1"
@@ -279,11 +483,56 @@ add_s3_dc() {
}
add_s3_dc "$S3_PRIMARY_DC" add_s3_dc "$S3_PRIMARY_DC"
add_s3_dc "$S3_SECONDARY_DC" if [ "$S3_SECONDARY_ENABLED" = true ]; then
add_s3_dc "$S3_SECONDARY_DC"
fi
if [ "$S3_COLD_ENABLED" = true ]; then
add_s3_dc "$S3_COLD_DC"
fi
add_s3_dc "$S3_DERIVED_DC" add_s3_dc "$S3_DERIVED_DC"
write_s3_dc_block() {
local dc="$1"
local key="$2"
local secret="$3"
local endpoint="$4"
local region="$5"
local bucket="$6"
local prefix="$7"
cat >> "$MUSEUM_CONFIG" <<EOF_CFG
$dc:
key: "$key"
secret: "$secret"
endpoint: "$endpoint"
region: "$region"
bucket: "$bucket"
EOF_CFG
if [ -n "$prefix" ]; then
printf ' path_prefix: "%s"\n' "$prefix" >> "$MUSEUM_CONFIG"
fi
printf '\n' >> "$MUSEUM_CONFIG"
}
S3_PREFIX_DISPLAY="${S3_PREFIX:-<none>}"
log INFO "Resolved S3 configuration: host=$S3_ENDPOINT_HOST region=$S3_REGION pathStyle=$S3_FORCE_PATH_STYLE localBuckets=$S3_ARE_LOCAL_BUCKETS primaryDC=$S3_PRIMARY_DC derivedDC=$S3_DERIVED_DC prefix=$S3_PREFIX_DISPLAY"
if [ "$S3_SECONDARY_ENABLED" = true ]; then
S3_SECONDARY_PREFIX_DISPLAY="${S3_SECONDARY_PREFIX:-<none>}"
log INFO "Secondary replication target: host=$S3_SECONDARY_ENDPOINT_HOST region=$S3_SECONDARY_REGION dc=$S3_SECONDARY_DC prefix=$S3_SECONDARY_PREFIX_DISPLAY"
else
log INFO "Secondary hot-storage bucket not configured; replication disabled."
fi
if [ "$S3_COLD_ENABLED" = true ]; then
S3_COLD_PREFIX_DISPLAY="${S3_COLD_PREFIX:-<none>}"
log INFO "Cold storage target: host=$S3_COLD_ENDPOINT_HOST region=$S3_COLD_REGION dc=$S3_COLD_DC prefix=$S3_COLD_PREFIX_DISPLAY"
else
log INFO "Cold storage bucket not configured."
fi
if [ "$S3_DERIVED_CUSTOM" = true ]; then
S3_DERIVED_PREFIX_DISPLAY="${S3_DERIVED_PREFIX:-<none>}"
log INFO "Derived storage target: host=$S3_DERIVED_ENDPOINT_HOST region=$S3_DERIVED_REGION dc=$S3_DERIVED_DC prefix=$S3_DERIVED_PREFIX_DISPLAY"
else
log INFO "Derived storage reuses the primary bucket."
fi
DEFAULT_GIN_TRUSTED_PROXIES="127.0.0.1,::1,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16"
GIN_TRUSTED_PROXIES="${GIN_TRUSTED_PROXIES:-$DEFAULT_GIN_TRUSTED_PROXIES}"
@@ -412,6 +661,10 @@ fi
# Always regenerate Museum config to pick up S3 changes
log INFO "Rendering Museum configuration"
HOT_STORAGE_SECONDARY_LINE=""
if [ "$S3_SECONDARY_ENABLED" = true ]; then
HOT_STORAGE_SECONDARY_LINE=" secondary: ${S3_SECONDARY_DC}"
fi
cat > "$MUSEUM_CONFIG" <<EOF_CFG cat > "$MUSEUM_CONFIG" <<EOF_CFG
log-file: "" log-file: ""
http: http:
@@ -440,25 +693,38 @@ s3:
use_path_style_urls: ${S3_FORCE_PATH_STYLE}
hot_storage:
primary: ${S3_PRIMARY_DC}
${HOT_STORAGE_SECONDARY_LINE}
derived-storage: ${S3_DERIVED_DC}
EOF_CFG
for dc in "${S3_DCS[@]}"; do
if [ "$dc" = "$S3_PRIMARY_DC" ]; then
write_s3_dc_block "$dc" "$S3_ACCESS_KEY" "$S3_SECRET_KEY" "$S3_ENDPOINT_HOST" "$S3_REGION" "$S3_BUCKET" "$S3_PREFIX"
elif [ "$S3_SECONDARY_ENABLED" = true ] && [ "$dc" = "$S3_SECONDARY_DC" ]; then
write_s3_dc_block "$dc" "$S3_SECONDARY_ACCESS_KEY" "$S3_SECONDARY_SECRET_KEY" "$S3_SECONDARY_ENDPOINT_HOST" "$S3_SECONDARY_REGION" "$S3_SECONDARY_BUCKET" "$S3_SECONDARY_PREFIX"
elif [ "$S3_COLD_ENABLED" = true ] && [ "$dc" = "$S3_COLD_DC" ]; then
write_s3_dc_block "$dc" "$S3_COLD_ACCESS_KEY" "$S3_COLD_SECRET_KEY" "$S3_COLD_ENDPOINT_HOST" "$S3_COLD_REGION" "$S3_COLD_BUCKET" "$S3_COLD_PREFIX"
elif [ "$dc" = "$S3_DERIVED_DC" ]; then
if [ "$S3_DERIVED_CUSTOM" = true ]; then
write_s3_dc_block "$dc" "$S3_DERIVED_ACCESS_KEY" "$S3_DERIVED_SECRET_KEY" "$S3_DERIVED_ENDPOINT_HOST" "$S3_DERIVED_REGION" "$S3_DERIVED_BUCKET" "$S3_DERIVED_PREFIX"
else
write_s3_dc_block "$dc" "$S3_ACCESS_KEY" "$S3_SECRET_KEY" "$S3_ENDPOINT_HOST" "$S3_REGION" "$S3_BUCKET" "$S3_PREFIX"
fi
fi
done
if [ "$S3_REPLICATION_ENABLED" = true ]; then
cat >> "$MUSEUM_CONFIG" <<'EOF_CFG'
replication:
enabled: true
EOF_CFG
else
cat >> "$MUSEUM_CONFIG" <<'EOF_CFG'
replication:
enabled: false
EOF_CFG
fi
cat >> "$MUSEUM_CONFIG" <<EOF_CFG cat >> "$MUSEUM_CONFIG" <<EOF_CFG
smtp: smtp:
host: "${SMTP_HOST}" host: "${SMTP_HOST}"