{% extends "base.html" %} {% block content %}

Documentation

Your guide to MyFSIO

Follow these steps to install, authenticate, master the console, and automate everything through the API.

API base URL
{{ api_base }}
01

Set up & run locally

Prepare a virtual environment, install dependencies, and launch both servers for a complete console + API experience.

  1. Install Python 3.11+ plus system build tools.
  2. Create a virtual environment and install the dependencies from requirements.txt.
  3. Start the services with python run.py.
python -m venv .venv
source .venv/bin/activate      # Linux/macOS (PowerShell: .\.venv\Scripts\Activate.ps1)
pip install -r requirements.txt

# Run both API and UI (Development)
python run.py

# Run in Production (Waitress server)
python run.py --prod

# Or run individually
python run.py --mode api
python run.py --mode ui

Configuration

Configuration defaults live in app/config.py. You can override them using environment variables. This is critical for production deployments behind proxies.

Variable Default Description
API_BASE_URL None The public URL of the API. Required if running behind a proxy. Ensures presigned URLs are generated correctly.
STORAGE_ROOT ./data Directory for buckets and objects.
MAX_UPLOAD_SIZE 1 GB Max request body size in bytes.
SECRET_KEY (Auto-generated) Flask session key. Auto-generates if not set. Set explicitly in production.
APP_HOST 0.0.0.0 Bind interface.
APP_PORT 5000 Listen port (UI uses 5100).
CORS Settings
CORS_ORIGINS * Allowed origins. Restrict in production.
CORS_METHODS GET,PUT,POST,DELETE,OPTIONS,HEAD Allowed HTTP methods.
CORS_ALLOW_HEADERS * Allowed request headers.
CORS_EXPOSE_HEADERS * Response headers visible to browsers (e.g., ETag).
Security Settings
AUTH_MAX_ATTEMPTS 5 Failed login attempts before lockout.
AUTH_LOCKOUT_MINUTES 15 Lockout duration after max failed attempts.
RATE_LIMIT_DEFAULT 200 per minute Default API rate limit.
Encryption Settings
ENCRYPTION_ENABLED false Enable server-side encryption support.
KMS_ENABLED false Enable KMS key management for encryption.
Logging Settings
LOG_LEVEL INFO Log verbosity: DEBUG, INFO, WARNING, ERROR.
LOG_TO_FILE true Enable file logging.
Metrics History Settings
METRICS_HISTORY_ENABLED false Enable metrics history recording and charts (opt-in).
METRICS_HISTORY_RETENTION_HOURS 24 How long to retain metrics history data.
METRICS_HISTORY_INTERVAL_MINUTES 5 Interval between history snapshots.
Production Checklist: Set SECRET_KEY, restrict CORS_ORIGINS, configure API_BASE_URL, enable HTTPS via a reverse proxy, and run with the --prod flag.
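The override mechanism is plain environment-variable lookup. A minimal sketch of how such defaults might be wired up — the helper names below are illustrative, not the actual contents of app/config.py:

```python
import os

def env_int(name: str, default: int) -> int:
    """Read an integer setting from the environment, falling back to a default."""
    raw = os.environ.get(name)
    return int(raw) if raw is not None else default

def env_bool(name: str, default: bool) -> bool:
    """Interpret common truthy strings ('true', '1', 'yes') as True."""
    raw = os.environ.get(name)
    if raw is None:
        return default
    return raw.strip().lower() in {"true", "1", "yes"}

# Defaults mirror the table above
MAX_UPLOAD_SIZE = env_int("MAX_UPLOAD_SIZE", 1 * 1024 ** 3)  # 1 GB in bytes
ENCRYPTION_ENABLED = env_bool("ENCRYPTION_ENABLED", False)
APP_PORT = env_int("APP_PORT", 5000)
```

Exporting a variable before launch (for example, `export APP_PORT=8080`) is all it takes to override a default with this pattern.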
02

Running in background

For production or server deployments, run MyFSIO as a background service so it persists after you close the terminal.

Quick Start (nohup)

The simplest way to run in the background; the process survives closing the terminal:

# Using Python
nohup python run.py --prod > /dev/null 2>&1 &

# Using compiled binary
nohup ./myfsio > /dev/null 2>&1 &

# Check if running
ps aux | grep myfsio

Screen / Tmux

Attach/detach from a persistent session:

# Start in a detached screen session
screen -dmS myfsio ./myfsio

# Attach to view logs
screen -r myfsio

# Detach: press Ctrl+A, then D

Systemd (Recommended for Production)

Create /etc/systemd/system/myfsio.service:

[Unit]
Description=MyFSIO S3-Compatible Storage
After=network.target

[Service]
Type=simple
User=myfsio
WorkingDirectory=/opt/myfsio
ExecStart=/opt/myfsio/myfsio
Restart=on-failure
RestartSec=5
Environment=STORAGE_ROOT=/var/lib/myfsio
Environment=API_BASE_URL=https://s3.example.com

[Install]
WantedBy=multi-user.target

Then enable and start:

sudo systemctl daemon-reload
sudo systemctl enable myfsio
sudo systemctl start myfsio

# Check status
sudo systemctl status myfsio
sudo journalctl -u myfsio -f   # View logs
03

Authenticate & manage IAM

MyFSIO seeds data/.myfsio.sys/config/iam.json with localadmin/localadmin. Sign in once, rotate it, then grant least-privilege access to teammates and tools.

  1. Visit /ui/login, enter the bootstrap credentials, and rotate them immediately from the IAM page.
  2. Create additional users with descriptive display names and AWS-style inline policies (for example {"bucket": "*", "actions": ["list", "read"]}).
  3. Rotate secrets when sharing with CI jobs; new secrets are displayed once and persist to data/.myfsio.sys/config/iam.json.
  4. Bucket policies layer on top of IAM. Apply Private/Public presets or paste custom JSON; changes reload instantly.

All API calls require X-Access-Key and X-Secret-Key headers. The UI stores them in the Flask session after you log in.

04

Use the console effectively

Each workspace models an S3 workflow so you can administer buckets end-to-end.

Buckets

  • Create/delete buckets from the overview. Badges reveal IAM-only, public-read, or custom-policy states.
  • Summary stats show live object counts and total capacity; click through for inventories.

Uploads

  • Drag and drop folders or files into the upload modal. Objects above 16 MB switch to multipart automatically.
  • Progress rows highlight retries, throughput, and completion even if you close the modal.

Object browser

  • Navigate folder hierarchies using breadcrumbs. Objects with / in keys display as folders.
  • Infinite scroll loads more objects automatically. Choose batch size (50–250) from the footer dropdown.
  • Bulk select objects for multi-delete or multi-download. Filter by name using the search box.
  • If loading fails, click Retry; no page refresh is needed.

Object details

  • Selecting an object opens the preview card with metadata, inline viewers, presign generator, and version history.
  • Trigger downloads, deletes, restores, or metadata refreshes without leaving the panel.

Policies & versioning

  • Toggle versioning (requires write access). Archived-only keys are flagged so you can restore them quickly.
  • The policy editor saves drafts, ships with presets, and hot-reloads data/.myfsio.sys/config/bucket_policies.json.
05

Automate with CLI & tools

Point standard S3 clients at {{ api_base }} and reuse the same IAM credentials.

AWS CLI

aws configure set aws_access_key_id <access_key>
aws configure set aws_secret_access_key <secret_key>
aws configure set default.region us-east-1

aws --endpoint-url {{ api_base }} s3 ls
aws --endpoint-url {{ api_base }} s3api create-bucket --bucket demo
aws --endpoint-url {{ api_base }} s3 cp ./sample.txt s3://demo/sample.txt

s3cmd

cat > ~/.s3cfg-myfsio <<'EOF'
host_base = {{ api_host }}
host_bucket = %(bucket)s.{{ api_host }}
access_key = <access_key>
secret_key = <secret_key>
use_https = False
signature_v2 = False
EOF

s3cmd --config ~/.s3cfg-myfsio ls
s3cmd --config ~/.s3cfg-myfsio put notes.txt s3://demo/notes.txt

curl / HTTPie

curl {{ api_base }}/ \
  -H "X-Access-Key: <access_key>" \
  -H "X-Secret-Key: <secret_key>"

curl -X PUT {{ api_base }}/demo/notes.txt \
  -H "X-Access-Key: <access_key>" \
  -H "X-Secret-Key: <secret_key>" \
  --data-binary @notes.txt

# Presigned URLs are generated via the UI
# Use the "Presign" button in the object browser
06

Key REST endpoints

Method Path Purpose
GET / List buckets accessible to the caller.
PUT /<bucket> Create a bucket.
DELETE /<bucket> Delete a bucket (must be empty).
GET /<bucket> List objects (supports prefix / max-keys queries).
PUT /<bucket>/<key> Upload or overwrite an object; UI helper handles multipart flows.
GET /<bucket>/<key> Download an object (UI adds ?download=1 to force attachment).
DELETE /<bucket>/<key> Delete an object.
GET/PUT/DELETE /<bucket>?policy Fetch, upsert, or remove a bucket policy (S3-compatible).

All responses include X-Request-Id for tracing. Logs land in logs/api.log and logs/ui.log.

07

API Examples

Common operations using popular SDKs and tools.

Python (boto3)

import boto3

s3 = boto3.client(
    's3',
    endpoint_url='{{ api_base }}',
    aws_access_key_id='<access_key>',
    aws_secret_access_key='<secret_key>'
)

# List buckets
buckets = s3.list_buckets()['Buckets']

# Create bucket
s3.create_bucket(Bucket='mybucket')

# Upload file
s3.upload_file('local.txt', 'mybucket', 'remote.txt')

# Download file
s3.download_file('mybucket', 'remote.txt', 'downloaded.txt')

# Generate presigned URL (valid 1 hour)
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'mybucket', 'Key': 'remote.txt'},
    ExpiresIn=3600
)

JavaScript (AWS SDK v3)

import { S3Client, ListBucketsCommand, PutObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({
  endpoint: '{{ api_base }}',
  region: 'us-east-1',
  credentials: {
    accessKeyId: '<access_key>',
    secretAccessKey: '<secret_key>'
  },
  forcePathStyle: true  // Required for S3-compatible services
});

// List buckets
const { Buckets } = await s3.send(new ListBucketsCommand({}));

// Upload object
await s3.send(new PutObjectCommand({
  Bucket: 'mybucket',
  Key: 'hello.txt',
  Body: 'Hello, World!'
}));

Multipart Upload (Python)

import boto3

s3 = boto3.client('s3', endpoint_url='{{ api_base }}')

# Initiate
response = s3.create_multipart_upload(Bucket='mybucket', Key='large.bin')
upload_id = response['UploadId']

# Upload parts (minimum 5 MB each, except the last part)
parts = []
chunks = [b'chunk1...', b'chunk2...']
for part_number, chunk in enumerate(chunks, start=1):
    response = s3.upload_part(
        Bucket='mybucket',
        Key='large.bin',
        PartNumber=part_number,
        UploadId=upload_id,
        Body=chunk
    )
    parts.append({'PartNumber': part_number, 'ETag': response['ETag']})

# Complete
s3.complete_multipart_upload(
    Bucket='mybucket',
    Key='large.bin',
    UploadId=upload_id,
    MultipartUpload={'Parts': parts}
)

Presigned URLs for Sharing

# Generate presigned URLs via the UI:
# 1. Navigate to your bucket in the object browser
# 2. Select the object you want to share
# 3. Click the "Presign" button
# 4. Choose method (GET/PUT/DELETE) and expiration time
# 5. Copy the generated URL

# Supported options:
# - Method: GET (download), PUT (upload), DELETE (remove)
# - Expiration: 1 second to 7 days (604800 seconds)
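A presigned URL needs no credential headers: anyone holding the link can perform the embedded operation until it expires. A stdlib sketch of consuming a presigned GET link copied from the dialog (the URL argument is a placeholder):

```python
import urllib.request

def download_presigned(presigned_url: str, dest_path: str) -> None:
    """Fetch an object through a presigned GET URL; no auth headers required."""
    with urllib.request.urlopen(presigned_url) as resp, open(dest_path, "wb") as out:
        out.write(resp.read())

# download_presigned("<paste URL from the Presign dialog>", "notes.txt")
```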
08

Site Replication

Automatically copy new objects to another MyFSIO instance or S3-compatible service for backup or disaster recovery.

Setup Guide

  1. Prepare Target: On the destination server, create a bucket (e.g., backup-bucket) and an IAM user with write permissions.
  2. Connect Source: On this server, go to Connections and add the target's API URL and credentials.
  3. Enable Rule: Go to the source bucket's Replication tab, select the connection, and enter the target bucket name.
Headless Target Setup

If your target server has no UI, create a setup_target.py script to bootstrap credentials:

# setup_target.py
from pathlib import Path
from app.iam import IamService
from app.storage import ObjectStorage

# Initialize services (paths match default config)
data_dir = Path("data")
iam = IamService(data_dir / ".myfsio.sys" / "config" / "iam.json")
storage = ObjectStorage(data_dir)

# 1. Create the bucket
bucket_name = "backup-bucket"
try:
    storage.create_bucket(bucket_name)
    print(f"Bucket '{bucket_name}' created.")
except Exception as e:
    print(f"Bucket creation skipped: {e}")

# 2. Create the user
try:
    creds = iam.create_user(
        display_name="Replication User",
        policies=[{"bucket": bucket_name, "actions": ["write", "read", "list"]}]
    )
    print("\n--- CREDENTIALS GENERATED ---")
    print(f"Access Key: {creds['access_key']}")
    print(f"Secret Key: {creds['secret_key']}")
    print("-----------------------------")
except Exception as e:
    print(f"User creation failed: {e}")

Save and run: python setup_target.py

Bidirectional Replication (Active-Active)

To set up two-way replication (Server A ↔ Server B):

  1. Follow the steps above to replicate A → B.
  2. Repeat the process on Server B to replicate B → A (create a connection to A, enable rule).

Loop Prevention: The system automatically detects replication traffic using a custom User-Agent (S3ReplicationAgent). This prevents infinite loops where an object replicated from A to B is immediately replicated back to A.
Deletes: Deleting an object on one server will propagate the deletion to the other server.
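The loop-prevention check can be sketched as a simple guard — a hypothetical helper, not the actual replication hook:

```python
REPLICATION_UA = "S3ReplicationAgent"

def should_replicate(request_headers: dict[str, str]) -> bool:
    """Skip re-replication when the incoming write came from the replication agent."""
    user_agent = request_headers.get("User-Agent", "")
    return REPLICATION_UA not in user_agent
```

A write arriving with the agent's User-Agent is treated as replication traffic and never forwarded again, which breaks the A → B → A cycle.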

Error Handling & Rate Limits

The replication system handles transient failures automatically:

Behavior Details
Retry Logic boto3 automatically handles 429 (rate limit) errors using exponential backoff with max_attempts=2
Concurrency Uses a ThreadPoolExecutor with 4 parallel workers for replication tasks
Timeouts Connect: 5s, Read: 30s. Large files use streaming transfers
Large File Counts: When replicating buckets with many objects, the target server's rate limits may cause delays. There is no built-in pause mechanism. Consider increasing RATE_LIMIT_DEFAULT on the target server during bulk replication operations.
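The concurrency row above can be sketched with the stdlib executor; replicate_object here is a stand-in for the real per-object copy, and the keys are placeholders:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def replicate_object(key: str) -> str:
    """Stand-in for the real per-object copy; returns the key on success."""
    return key

keys = ["a.txt", "b.txt", "c.txt", "d.txt", "e.txt"]
replicated = []
with ThreadPoolExecutor(max_workers=4) as pool:  # 4 parallel workers, as in the table
    futures = [pool.submit(replicate_object, k) for k in keys]
    for fut in as_completed(futures):
        replicated.append(fut.result())
```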
09

Object Versioning

Keep multiple versions of objects to protect against accidental deletions and overwrites. Restore previous versions at any time.

Enabling Versioning

  1. Navigate to your bucket's Properties tab.
  2. Find the Versioning card and click Enable.
  3. All subsequent uploads will create new versions instead of overwriting.

Version Operations

Operation Description
View Versions Click the version icon on any object to see all historical versions with timestamps and sizes.
Restore Version Click Restore on any version to make it the current version (creates a copy).
Delete Current Deleting an object archives it. Previous versions remain accessible.
Purge All Permanently delete an object and all its versions. This cannot be undone.

Archived Objects

When you delete a versioned object, it becomes "archived": the current version is removed, but historical versions remain. The Archived tab shows these objects so you can restore them.

API Usage

# Enable versioning
curl -X PUT "{{ api_base }}/<bucket>?versioning" \
  -H "Content-Type: application/json" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -d '{"Status": "Enabled"}'

# Get versioning status
curl "{{ api_base }}/<bucket>?versioning" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

# List object versions
curl "{{ api_base }}/<bucket>?versions" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

# Get specific version
curl "{{ api_base }}/<bucket>/<key>?versionId=<version-id>" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"
Storage Impact: Each version consumes storage. Enable quotas to limit total bucket size including all versions.
10

Bucket Quotas

Limit how much data a bucket can hold using storage quotas. Quotas are enforced on uploads and multipart completions.

Quota Types

Limit Description
Max Size (MB) Maximum total storage in megabytes (includes current objects + archived versions)
Max Objects Maximum number of objects (includes current objects + archived versions)

Managing Quotas (Admin Only)

Quota management is restricted to administrators (users with iam:* permissions).

  1. Navigate to your bucket → Properties tab → Storage Quota card.
  2. Enter limits: Max Size (MB) and/or Max Objects. Leave empty for unlimited.
  3. Click Update Quota to save, or Remove Quota to clear limits.

API Usage

# Set quota (max 100MB, max 1000 objects)
curl -X PUT "{{ api_base }}/bucket/<bucket>?quota" \
  -H "Content-Type: application/json" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -d '{"max_bytes": 104857600, "max_objects": 1000}'

# Get current quota
curl "{{ api_base }}/bucket/<bucket>?quota" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

# Remove quota
curl -X PUT "{{ api_base }}/bucket/<bucket>?quota" \
  -H "Content-Type: application/json" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -d '{"max_bytes": null, "max_objects": null}'
Version Counting: When versioning is enabled, archived versions count toward the quota. The quota is checked against total storage, not just current objects.
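max_bytes is specified in raw bytes; a quick way to compute the request body used in the curl example:

```python
MB = 1024 ** 2  # 1 MiB in bytes

# Body for PUT ?quota: 100 MB size cap, 1000-object cap
quota_body = {"max_bytes": 100 * MB, "max_objects": 1000}
```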
11

Encryption

Protect data at rest with server-side encryption using AES-256-GCM. Objects are encrypted before being written to disk and decrypted transparently on read.

Encryption Types

Type Description
AES-256 (SSE-S3) Server-managed encryption using a local master key
KMS (SSE-KMS) Encryption using customer-managed keys via the built-in KMS

Enabling Encryption

  1. Set environment variables:
    # PowerShell
    $env:ENCRYPTION_ENABLED = "true"
    $env:KMS_ENABLED = "true"  # Optional
    python run.py
    
    # Bash
    export ENCRYPTION_ENABLED=true
    export KMS_ENABLED=true
    python run.py
  2. Configure bucket encryption: Navigate to your bucket → Properties tab → Default Encryption card → Click Enable Encryption.
  3. Choose algorithm: Select AES-256 for server-managed keys or aws:kms to use a KMS-managed key.
Important: Only new uploads after enabling encryption will be encrypted. Existing objects remain unencrypted.

KMS Key Management

When KMS_ENABLED=true, manage encryption keys via the API:

# Create a new KMS key
curl -X POST {{ api_base }}/kms/keys \
  -H "Content-Type: application/json" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -d '{"alias": "my-key", "description": "Production key"}'

# List all keys
curl {{ api_base }}/kms/keys \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

# Rotate a key (creates new key material)
curl -X POST {{ api_base }}/kms/keys/{key-id}/rotate \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

# Disable/Enable a key
curl -X POST {{ api_base }}/kms/keys/{key-id}/disable \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

# Schedule key deletion (30-day waiting period)
curl -X DELETE "{{ api_base }}/kms/keys/{key-id}?waiting_period_days=30" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

How It Works

Envelope Encryption: Each object is encrypted with a unique Data Encryption Key (DEK). The DEK is then encrypted (wrapped) by the master key or KMS key and stored alongside the ciphertext. On read, the DEK is unwrapped and used to decrypt the object transparently.
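The flow can be illustrated end to end with a deliberately toy XOR stream cipher. This is NOT the real AES-256-GCM implementation and is not secure; it only shows the wrap/unwrap sequence:

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Toy counter-mode keystream from SHA-256. Illustration only, NOT secure."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor(data: bytes, key: bytes) -> bytes:
    """XOR data against the keystream derived from key (toy encrypt/decrypt)."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# 1. A fresh Data Encryption Key (DEK) is generated per object
dek = secrets.token_bytes(32)
# 2. The object bytes are encrypted with the DEK
ciphertext = xor(b"object contents", dek)
# 3. The DEK is wrapped by the master (or KMS) key and stored with the ciphertext
master_key = secrets.token_bytes(32)
wrapped_dek = xor(dek, master_key)
# On read: unwrap the DEK, then decrypt the object transparently
plaintext = xor(ciphertext, xor(wrapped_dek, master_key))
```

Rotating the master key only requires re-wrapping the small DEKs, not re-encrypting every object, which is the main operational benefit of envelope encryption.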

12

Lifecycle Rules

Automatically delete expired objects, clean up old versions, and abort incomplete multipart uploads using time-based lifecycle rules.

How It Works

Lifecycle rules run on a background timer (Python threading.Timer), not a system cron job. By default an enforcement cycle runs every 3600 seconds (1 hour); each cycle scans all buckets with lifecycle configurations and applies the matching rules.
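The scheduling pattern resembles a self-re-arming timer; a sketch, not the actual scheduler code:

```python
import threading

ENFORCE_INTERVAL_SECONDS = 3600  # one enforcement cycle per hour, as described above

def enforce_lifecycle_rules() -> None:
    """Stand-in for the real scan: visit each bucket and apply its rules."""
    pass

def schedule_next_cycle() -> threading.Timer:
    """Run one enforcement cycle, then re-arm the timer for the next one."""
    enforce_lifecycle_rules()
    timer = threading.Timer(ENFORCE_INTERVAL_SECONDS, schedule_next_cycle)
    timer.daemon = True  # don't block interpreter shutdown
    timer.start()
    return timer
```

Because each cycle schedules the next, a slow scan delays the following run rather than overlapping with it.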

Expiration Types

Type Description
Expiration (Days) Delete current objects older than N days from their last modification
Expiration (Date) Delete current objects after a specific date (ISO 8601 format)
NoncurrentVersionExpiration Delete non-current (archived) versions older than N days from when they became non-current
AbortIncompleteMultipartUpload Abort multipart uploads that have been in progress longer than N days

API Usage

# Set lifecycle rule (delete objects older than 30 days)
curl -X PUT "{{ api_base }}/<bucket>?lifecycle" \
  -H "Content-Type: application/json" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -d '[{
    "ID": "expire-old-objects",
    "Status": "Enabled",
    "Prefix": "",
    "Expiration": {"Days": 30}
  }]'

# Abort incomplete multipart uploads after 7 days
curl -X PUT "{{ api_base }}/<bucket>?lifecycle" \
  -H "Content-Type: application/json" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -d '[{
    "ID": "cleanup-multipart",
    "Status": "Enabled",
    "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7}
  }]'

# Get current lifecycle configuration
curl "{{ api_base }}/<bucket>?lifecycle" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"
Prefix Filtering: Use the Prefix field to scope rules to specific paths (e.g., "logs/"). Leave empty to apply to all objects in the bucket.
13

Metrics History

Track CPU, memory, and disk usage over time with optional metrics history. Disabled by default to minimize overhead.

Enabling Metrics History

Set the environment variable to opt-in:

# PowerShell
$env:METRICS_HISTORY_ENABLED = "true"
python run.py

# Bash
export METRICS_HISTORY_ENABLED=true
python run.py

Configuration Options

Variable Default Description
METRICS_HISTORY_ENABLED false Enable/disable metrics history recording
METRICS_HISTORY_RETENTION_HOURS 24 How long to keep history data (hours)
METRICS_HISTORY_INTERVAL_MINUTES 5 Interval between snapshots (minutes)
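With the defaults above, 24 hours at 5-minute intervals retains 288 snapshots. Pruning by timestamp might look like this — the snapshot shape is assumed for illustration, not taken from the actual file format:

```python
from datetime import datetime, timedelta, timezone

def prune(history: list[dict], retention_hours: int = 24) -> list[dict]:
    """Keep only snapshots newer than the retention window (ISO 8601 timestamps)."""
    cutoff = datetime.now(timezone.utc) - timedelta(hours=retention_hours)
    return [s for s in history if datetime.fromisoformat(s["timestamp"]) >= cutoff]

# 24 h of history at one snapshot every 5 minutes:
snapshots_retained = 24 * 60 // 5  # 288
```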

API Endpoints

# Get metrics history (last 24 hours by default)
curl "{{ api_base | replace('/api', '/ui') }}/metrics/history" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

# Get history for specific time range
curl "{{ api_base | replace('/api', '/ui') }}/metrics/history?hours=6" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

# Get current settings
curl "{{ api_base | replace('/api', '/ui') }}/metrics/settings" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

# Update settings at runtime
curl -X PUT "{{ api_base | replace('/api', '/ui') }}/metrics/settings" \
  -H "Content-Type: application/json" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -d '{"enabled": true, "retention_hours": 48, "interval_minutes": 10}'

Storage Location

History data is stored at:

data/.myfsio.sys/config/metrics_history.json
UI Charts: When enabled, the Metrics dashboard displays line charts showing CPU, memory, and disk usage trends with a time range selector (1h, 6h, 24h, 7d).
14

Operation Metrics

Track API request statistics including request counts, latency, error rates, and bandwidth usage. Provides real-time visibility into API operations.

Enabling Operation Metrics

Set the environment variable to opt-in:

# PowerShell
$env:OPERATION_METRICS_ENABLED = "true"
python run.py

# Bash
export OPERATION_METRICS_ENABLED=true
python run.py

Configuration Options

Variable Default Description
OPERATION_METRICS_ENABLED false Enable/disable operation metrics collection
OPERATION_METRICS_INTERVAL_MINUTES 5 Interval between snapshots (minutes)
OPERATION_METRICS_RETENTION_HOURS 24 How long to keep history data (hours)

What's Tracked

Request Statistics
  • Request counts by HTTP method (GET, PUT, POST, DELETE)
  • Response status codes (2xx, 3xx, 4xx, 5xx)
  • Average, min, max latency
  • Bytes transferred in/out
Endpoint Breakdown
  • object - Object operations (GET/PUT/DELETE)
  • bucket - Bucket operations
  • ui - Web UI requests
  • service - Health checks, etc.

S3 Error Codes

The dashboard tracks S3 API-specific error codes like NoSuchKey, AccessDenied, BucketNotFound. These are separate from HTTP status codes: a 404 from the UI won't appear here, only S3 API errors.

API Endpoints

# Get current operation metrics
curl "{{ api_base | replace('/api', '/ui') }}/metrics/operations" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

# Get operation metrics history
curl "{{ api_base | replace('/api', '/ui') }}/metrics/operations/history" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

# Filter history by time range
curl "{{ api_base | replace('/api', '/ui') }}/metrics/operations/history?hours=6" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

Storage Location

Operation metrics data is stored at:

data/.myfsio.sys/config/operation_metrics.json
UI Dashboard: When enabled, the Metrics page shows an "API Operations" section with summary cards, charts for requests by method/status/endpoint, and an S3 error codes table. Data refreshes every 5 seconds.
15

Troubleshooting & tips

Symptom Likely cause Fix
403 from API despite Public preset Policy not saved or ARN mismatch Reapply the preset and confirm arn:aws:s3:::bucket/* matches the bucket name.
UI shows stale policy/object data Browser cached prior state Refresh; the server hot-reloads data/.myfsio.sys/config/bucket_policies.json and storage metadata.
Presign dialog returns 403 User lacks required read/write/delete action or bucket policy denies Update IAM inline policies or remove conflicting deny statements.
Large uploads fail instantly MAX_UPLOAD_SIZE exceeded Raise the env var or split the object.
Requests hit the wrong host Proxy headers missing or API_BASE_URL incorrect Ensure your proxy sends X-Forwarded-Host/Proto headers, or explicitly set API_BASE_URL to your public domain.
Large folder uploads hit rate limits (429) RATE_LIMIT_DEFAULT exceeded (200/min) Increase the rate limit in the env config, use a Redis backend (RATE_LIMIT_STORAGE_URI=redis://host:port) for distributed setups, or upload in smaller batches.
{% endblock %}