{% extends "base.html" %} {% block content %}

Documentation

Your guide to MyFSIO

Follow these steps to install, authenticate, master the console, and automate everything through the API.

API base URL
{{ api_base }}
01

Set up & run locally

Prepare a virtual environment, install dependencies, and launch both servers for a complete console + API experience.

  1. Install Python 3.11+ plus system build tools.
  2. Create a virtual environment and install requirements.txt.
  3. Start the services with python run.py.
python -m venv .venv
. .venv/bin/activate          # PowerShell: .\.venv\Scripts\Activate.ps1
pip install -r requirements.txt

# Run both API and UI (Development)
python run.py

# Run in Production (Granian server)
python run.py --prod

# Or run individually
python run.py --mode api
python run.py --mode ui

Configuration

Configuration defaults live in app/config.py. You can override them using environment variables. This is critical for production deployments behind proxies.

Variable Default Description
API_BASE_URL http://127.0.0.1:5000 Internal S3 API URL used by the web UI proxy. Also used for presigned URL generation. Set to your public URL if running behind a reverse proxy.
STORAGE_ROOT ./data Directory for buckets and objects.
MAX_UPLOAD_SIZE 1 GB Max request body size in bytes.
SECRET_KEY (Auto-generated) Flask session key. Auto-generates if not set. Set explicitly in production.
APP_HOST 0.0.0.0 Bind interface.
APP_PORT 5000 Listen port (UI uses 5100).
DISPLAY_TIMEZONE UTC Timezone for UI timestamps (e.g., US/Eastern, Asia/Tokyo).
CORS Settings
CORS_ORIGINS * Allowed origins. Restrict in production.
CORS_METHODS GET,PUT,POST,DELETE,OPTIONS,HEAD Allowed HTTP methods.
CORS_ALLOW_HEADERS * Allowed request headers.
CORS_EXPOSE_HEADERS * Response headers visible to browsers (e.g., ETag).
Security Settings
AUTH_MAX_ATTEMPTS 5 Failed login attempts before lockout.
AUTH_LOCKOUT_MINUTES 15 Lockout duration after max failed attempts.
RATE_LIMIT_DEFAULT 200 per minute Default API rate limit.
RATE_LIMIT_LIST_BUCKETS 60 per minute Rate limit for listing buckets.
RATE_LIMIT_BUCKET_OPS 120 per minute Rate limit for bucket operations.
RATE_LIMIT_OBJECT_OPS 240 per minute Rate limit for object operations.
RATE_LIMIT_HEAD_OPS 100 per minute Rate limit for HEAD requests.
RATE_LIMIT_ADMIN 60 per minute Rate limit for admin API endpoints (/admin/*).
ADMIN_ACCESS_KEY (none) Custom access key for the admin user on first run or credential reset. Random if unset.
ADMIN_SECRET_KEY (none) Custom secret key for the admin user on first run or credential reset. Random if unset.
Server Settings
SERVER_THREADS 0 (auto) Granian blocking threads (1-64). 0 = auto (CPU cores × 2).
SERVER_CONNECTION_LIMIT 0 (auto) Max concurrent connections (10-1000). 0 = auto (RAM-based).
SERVER_BACKLOG 0 (auto) TCP listen backlog (64-4096). 0 = auto (conn_limit × 2).
SERVER_CHANNEL_TIMEOUT 120 Idle connection timeout in seconds (10-300).
Encryption Settings
ENCRYPTION_ENABLED false Enable server-side encryption support.
KMS_ENABLED false Enable KMS key management for encryption.
Logging Settings
LOG_LEVEL INFO Log verbosity: DEBUG, INFO, WARNING, ERROR.
LOG_TO_FILE true Enable file logging.
Metrics History Settings
METRICS_HISTORY_ENABLED false Enable metrics history recording and charts (opt-in).
METRICS_HISTORY_RETENTION_HOURS 24 How long to retain metrics history data.
METRICS_HISTORY_INTERVAL_MINUTES 5 Interval between history snapshots.
Site Sync Settings (Bidirectional Replication)
SITE_SYNC_ENABLED false Enable bi-directional site sync background worker.
SITE_SYNC_INTERVAL_SECONDS 60 Interval between sync cycles (seconds).
SITE_SYNC_BATCH_SIZE 100 Max objects to pull per sync cycle.
SITE_SYNC_CONNECT_TIMEOUT_SECONDS 10 Connection timeout for site sync (seconds).
SITE_SYNC_READ_TIMEOUT_SECONDS 120 Read timeout for site sync (seconds).
SITE_SYNC_MAX_RETRIES 2 Max retry attempts for site sync operations.
SITE_SYNC_CLOCK_SKEW_TOLERANCE_SECONDS 1.0 Clock skew tolerance for conflict resolution.
Replication Settings
REPLICATION_CONNECT_TIMEOUT_SECONDS 5 Connection timeout for replication (seconds).
REPLICATION_READ_TIMEOUT_SECONDS 30 Read timeout for replication (seconds).
REPLICATION_MAX_RETRIES 2 Max retry attempts for replication operations.
REPLICATION_STREAMING_THRESHOLD_BYTES 10485760 Objects larger than this use streaming upload (10 MB).
REPLICATION_MAX_FAILURES_PER_BUCKET 50 Max failure records to keep per bucket.
Security & Auth Settings
SIGV4_TIMESTAMP_TOLERANCE_SECONDS 900 Max time skew for SigV4 requests (15 minutes).
PRESIGNED_URL_MIN_EXPIRY_SECONDS 1 Minimum presigned URL expiry time.
PRESIGNED_URL_MAX_EXPIRY_SECONDS 604800 Maximum presigned URL expiry time (7 days).
Proxy & Network Settings
NUM_TRUSTED_PROXIES 1 Number of trusted reverse proxies for X-Forwarded-* headers.
ALLOWED_REDIRECT_HOSTS (empty) Comma-separated whitelist of safe redirect targets.
ALLOW_INTERNAL_ENDPOINTS false Allow connections to internal/private IPs (webhooks, replication).
Storage Limits
OBJECT_KEY_MAX_LENGTH_BYTES 1024 Maximum object key length in bytes.
OBJECT_CACHE_MAX_SIZE 100 Maximum number of objects in cache.
BUCKET_CONFIG_CACHE_TTL_SECONDS 30 Bucket config cache TTL in seconds.
OBJECT_TAG_LIMIT 50 Maximum number of tags per object.
LIFECYCLE_MAX_HISTORY_PER_BUCKET 50 Max lifecycle history records per bucket.
OBJECT_CACHE_TTL 60 Seconds to cache object metadata.
BULK_DOWNLOAD_MAX_BYTES 1 GB Max total size for bulk ZIP downloads.
ENCRYPTION_CHUNK_SIZE_BYTES 65536 Chunk size for streaming encryption (64 KB).
KMS_GENERATE_DATA_KEY_MIN_BYTES 1 Minimum data key size for KMS generation.
KMS_GENERATE_DATA_KEY_MAX_BYTES 1024 Maximum data key size for KMS generation.
Production Checklist: Set SECRET_KEY (also enables IAM config encryption at rest), restrict CORS_ORIGINS, configure API_BASE_URL, enable HTTPS via reverse proxy, use --prod flag, and set credential expiry on non-admin users.
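The checklist above can be applied with a few environment variables before launch. A sketch for a POSIX shell; the domain names are placeholders for your deployment, and SECRET_KEY should be generated once and kept stable across restarts:

```shell
# Production hardening sketch; substitute your own domain and keys.
export SECRET_KEY="replace-with-a-long-random-value"  # stable session key; also enables IAM config encryption at rest
export CORS_ORIGINS="https://app.example.com"         # restrict from the default "*"
export API_BASE_URL="https://s3.example.com"          # public URL when behind a reverse proxy
export NUM_TRUSTED_PROXIES=1                          # trust X-Forwarded-* from one proxy

python run.py --prod
```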
02

Running in background

For production or server deployments, run MyFSIO as a background service so it persists after you close the terminal.

Quick Start (nohup)

The simplest way to run in the background; the process survives closing the terminal:

# Using Python
nohup python run.py --prod > /dev/null 2>&1 &

# Using compiled binary
nohup ./myfsio > /dev/null 2>&1 &

# Check if running
ps aux | grep myfsio

Screen / Tmux

Attach/detach from a persistent session:

# Start in a detached screen session
screen -dmS myfsio ./myfsio

# Attach to view logs
screen -r myfsio

# Detach: press Ctrl+A, then D

Systemd (Recommended for Production)

Create /etc/systemd/system/myfsio.service:

[Unit]
Description=MyFSIO S3-Compatible Storage
After=network.target

[Service]
Type=simple
User=myfsio
WorkingDirectory=/opt/myfsio
ExecStart=/opt/myfsio/myfsio
Restart=on-failure
RestartSec=5
Environment=STORAGE_ROOT=/var/lib/myfsio
Environment=API_BASE_URL=https://s3.example.com

[Install]
WantedBy=multi-user.target

Then enable and start:

sudo systemctl daemon-reload
sudo systemctl enable myfsio
sudo systemctl start myfsio

# Check status
sudo systemctl status myfsio
sudo journalctl -u myfsio -f   # View logs
03

Authenticate & manage IAM

On first startup, MyFSIO generates random admin credentials and prints them to the console. Set ADMIN_ACCESS_KEY and ADMIN_SECRET_KEY env vars for custom credentials. When SECRET_KEY is configured, the IAM config is encrypted at rest. To reset credentials, run python run.py --reset-cred.

  1. Check the console output for the generated Access Key and Secret Key, then visit /ui/login.
  2. Create additional users with descriptive display names, AWS-style inline policies (for example {"bucket": "*", "actions": ["list", "read"]}), and optional credential expiry dates.
  3. Set credential expiry on users to grant time-limited access. The UI shows expiry badges and provides preset durations (1h, 24h, 7d, 30d, 90d). Expired credentials are rejected at authentication.
  4. Rotate secrets when sharing with CI jobs; new secrets display once and persist to data/.myfsio.sys/config/iam.json.
  5. Bucket policies layer on top of IAM. Apply Private/Public presets or paste custom JSON; changes reload instantly.

All API calls require X-Access-Key and X-Secret-Key headers. The UI stores them in the Flask session after you log in.
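For scripted access outside an S3 SDK, the same two headers work over plain HTTP. A minimal standard-library sketch; the `auth_headers` helper and `list_buckets` function are illustrative, not part of MyFSIO:

```python
import urllib.request

API_BASE = "{{ api_base }}"

def auth_headers(access_key: str, secret_key: str) -> dict:
    # MyFSIO authenticates every API call via these two headers.
    return {"X-Access-Key": access_key, "X-Secret-Key": secret_key}

def list_buckets(access_key: str, secret_key: str) -> bytes:
    # GET / returns the buckets visible to the caller.
    req = urllib.request.Request(API_BASE + "/",
                                 headers=auth_headers(access_key, secret_key))
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```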

04

Use the console effectively

Each workspace models an S3 workflow so you can administer buckets end-to-end.

Buckets

  • Create/delete buckets from the overview. Badges reveal IAM-only, public-read, or custom-policy states.
  • Summary stats show live object counts and total capacity; click through for inventories.

Uploads

  • Drag and drop folders or files into the upload modal. Objects above 16 MB switch to multipart automatically.
  • Progress rows highlight retries, throughput, and completion even if you close the modal.

Object browser

  • Navigate folder hierarchies using breadcrumbs. Objects with / in keys display as folders.
  • Infinite scroll loads more objects automatically. Choose batch size (50–250) from the footer dropdown.
  • Bulk select objects for multi-delete or multi-download (ZIP archive, up to 1 GiB). Filter by name using the search box.
  • If loading fails, click Retry to attempt again; no page refresh is needed.

Object details

  • Selecting an object opens the preview card with metadata, inline viewers, presign generator, and version history.
  • Trigger downloads, deletes, restores, or metadata refreshes without leaving the panel.

Policies & versioning

  • Toggle versioning (requires write access). Archived-only keys are flagged so you can restore them quickly.
  • The policy editor saves drafts, ships with presets, and hot-reloads data/.myfsio.sys/config/bucket_policies.json.
05

Automate with CLI & tools

Point standard S3 clients at {{ api_base }} and reuse the same IAM credentials.

AWS CLI

aws configure set aws_access_key_id <access_key>
aws configure set aws_secret_access_key <secret_key>
aws configure set default.region us-east-1

aws --endpoint-url {{ api_base }} s3 ls
aws --endpoint-url {{ api_base }} s3api create-bucket --bucket demo
aws --endpoint-url {{ api_base }} s3 cp ./sample.txt s3://demo/sample.txt

s3cmd

cat > ~/.s3cfg-myfsio <<'EOF'
host_base = {{ api_host }}
host_bucket = %(bucket)s.{{ api_host }}
access_key = <access_key>
secret_key = <secret_key>
use_https = False
signature_v2 = False
EOF

s3cmd --config ~/.s3cfg-myfsio ls
s3cmd --config ~/.s3cfg-myfsio put notes.txt s3://demo/notes.txt

curl / HTTPie

curl {{ api_base }}/ \
  -H "X-Access-Key: <access_key>" \
  -H "X-Secret-Key: <secret_key>"

curl -X PUT {{ api_base }}/demo/notes.txt \
  -H "X-Access-Key: <access_key>" \
  -H "X-Secret-Key: <secret_key>" \
  --data-binary @notes.txt

# Presigned URLs are generated via the UI
# Use the "Presign" button in the object browser
06

Key REST endpoints

Method Path Purpose
GET / List buckets accessible to the caller.
PUT /<bucket> Create a bucket.
DELETE /<bucket> Delete a bucket (must be empty).
GET /<bucket> List objects (supports prefix / max-keys queries).
PUT /<bucket>/<key> Upload or overwrite an object; UI helper handles multipart flows.
GET /<bucket>/<key> Download an object (UI adds ?download=1 to force attachment).
DELETE /<bucket>/<key> Delete an object.
HEAD /<bucket> Check if a bucket exists.
HEAD /<bucket>/<key> Get object metadata without downloading.
POST /<bucket>?delete Bulk delete objects (XML body).
GET/PUT/DELETE /<bucket>?policy Bucket policy management.
GET/PUT /<bucket>?versioning Versioning status.
GET/PUT/DELETE /<bucket>?lifecycle Lifecycle rules.
GET/PUT/DELETE /<bucket>?cors CORS configuration.
GET/PUT/DELETE /<bucket>?encryption Default encryption.
GET/PUT /<bucket>?acl Bucket ACL.
GET/PUT/DELETE /<bucket>?tagging Bucket tags.
GET/PUT/DELETE /<bucket>/<key>?tagging Object tags.
POST /<bucket>/<key>?uploads Initiate multipart upload.
POST /<bucket>/<key>?select SQL query (SelectObjectContent).

All responses include X-Request-Id for tracing. See the Full API Reference for the complete endpoint list. Logs land in logs/api.log and logs/ui.log.
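The bulk-delete endpoint (POST /<bucket>?delete) takes the standard S3 Delete XML body. A sketch of building that payload with the Python standard library; optional fields such as Quiet are omitted and may or may not be supported:

```python
import xml.etree.ElementTree as ET

def delete_objects_body(keys):
    # Standard S3 DeleteObjects payload:
    # <Delete><Object><Key>...</Key></Object>...</Delete>
    root = ET.Element("Delete")
    for key in keys:
        obj = ET.SubElement(root, "Object")
        ET.SubElement(obj, "Key").text = key
    return ET.tostring(root, encoding="unicode")

body = delete_objects_body(["old/a.txt", "old/b.txt"])
```

POST the resulting string to {{ api_base }}/<bucket>?delete with the usual X-Access-Key / X-Secret-Key headers.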

07

API Examples

Common operations using popular SDKs and tools.

Python (boto3)

import boto3

s3 = boto3.client(
    's3',
    endpoint_url='{{ api_base }}',
    aws_access_key_id='<access_key>',
    aws_secret_access_key='<secret_key>'
)

# List buckets
buckets = s3.list_buckets()['Buckets']

# Create bucket
s3.create_bucket(Bucket='mybucket')

# Upload file
s3.upload_file('local.txt', 'mybucket', 'remote.txt')

# Download file
s3.download_file('mybucket', 'remote.txt', 'downloaded.txt')

# Generate presigned URL (valid 1 hour)
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'mybucket', 'Key': 'remote.txt'},
    ExpiresIn=3600
)

JavaScript (AWS SDK v3)

import { S3Client, ListBucketsCommand, PutObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({
  endpoint: '{{ api_base }}',
  region: 'us-east-1',
  credentials: {
    accessKeyId: '<access_key>',
    secretAccessKey: '<secret_key>'
  },
  forcePathStyle: true  // Required for S3-compatible services
});

// List buckets
const { Buckets } = await s3.send(new ListBucketsCommand({}));

// Upload object
await s3.send(new PutObjectCommand({
  Bucket: 'mybucket',
  Key: 'hello.txt',
  Body: 'Hello, World!'
}));

Multipart Upload (Python)

import boto3

s3 = boto3.client('s3', endpoint_url='{{ api_base }}')

# Initiate
response = s3.create_multipart_upload(Bucket='mybucket', Key='large.bin')
upload_id = response['UploadId']

# Upload parts (minimum 5MB each, except last part)
parts = []
chunks = [b'chunk1...', b'chunk2...']
for part_number, chunk in enumerate(chunks, start=1):
    response = s3.upload_part(
        Bucket='mybucket',
        Key='large.bin',
        PartNumber=part_number,
        UploadId=upload_id,
        Body=chunk
    )
    parts.append({'PartNumber': part_number, 'ETag': response['ETag']})

# Complete
s3.complete_multipart_upload(
    Bucket='mybucket',
    Key='large.bin',
    UploadId=upload_id,
    MultipartUpload={'Parts': parts}
)

Presigned URLs for Sharing

# Generate presigned URLs via the UI:
# 1. Navigate to your bucket in the object browser
# 2. Select the object you want to share
# 3. Click the "Presign" button
# 4. Choose method (GET/PUT/DELETE) and expiration time
# 5. Copy the generated URL

# Supported options:
# - Method: GET (download), PUT (upload), DELETE (remove)
# - Expiration: 1 second to 7 days (604800 seconds)
08

Site Replication & Sync

Replicate objects to another MyFSIO instance or S3-compatible service. Supports one-way replication for backup and bi-directional sync for geo-distributed deployments.

Setup Guide

  1. Prepare Target: On the destination server, create a bucket (e.g., backup-bucket) and an IAM user with write permissions.
  2. Connect Source: On this server, go to Connections and add the target's API URL and credentials.
  3. Enable Rule: Go to the source bucket's Replication tab, select the connection, and enter the target bucket name.
Headless Target Setup

If your target server has no UI, create a setup_target.py script to bootstrap credentials:

# setup_target.py
from pathlib import Path
from app.iam import IamService
from app.storage import ObjectStorage

# Initialize services (paths match default config)
data_dir = Path("data")
iam = IamService(data_dir / ".myfsio.sys" / "config" / "iam.json")
storage = ObjectStorage(data_dir)

# 1. Create the bucket
bucket_name = "backup-bucket"
try:
    storage.create_bucket(bucket_name)
    print(f"Bucket '{bucket_name}' created.")
except Exception as e:
    print(f"Bucket creation skipped: {e}")

# 2. Create the user
try:
    creds = iam.create_user(
        display_name="Replication User",
        policies=[{"bucket": bucket_name, "actions": ["write", "read", "list"]}]
    )
    print("\n--- CREDENTIALS GENERATED ---")
    print(f"Access Key: {creds['access_key']}")
    print(f"Secret Key: {creds['secret_key']}")
    print("-----------------------------")
except Exception as e:
    print(f"User creation failed: {e}")

Save and run: python setup_target.py

Replication Modes

Mode Description
new_only Only replicate new/modified objects (default, one-way)
all Sync all existing objects when rule is enabled (one-way)
bidirectional Two-way sync with Last-Write-Wins conflict resolution

Bidirectional Site Replication

For true two-way synchronization with automatic conflict resolution, use the bidirectional mode. Both sites must be configured to sync with each other.

Each site pushes its changes and pulls from the other, so you must set up connections and replication rules on both ends.

Step 1: Enable Site Sync on Both Sites

Set these environment variables on both Site A and Site B:

SITE_SYNC_ENABLED=true
SITE_SYNC_INTERVAL_SECONDS=60   # How often to pull changes
SITE_SYNC_BATCH_SIZE=100        # Max objects per sync cycle

Step 2: Create IAM Users for Cross-Site Access

On each site, create an IAM user that the other site will use to connect:

Site Create User For Required Permissions
Site A Site B to connect read, write, list, delete on target bucket
Site B Site A to connect read, write, list, delete on target bucket

Step 3: Create Connections

On each site, add a connection pointing to the other:

On Site A

Go to Connections and add:

  • Endpoint: https://site-b.example.com
  • Credentials: Site B's IAM user
On Site B

Go to Connections and add:

  • Endpoint: https://site-a.example.com
  • Credentials: Site A's IAM user

Step 4: Enable Bidirectional Replication

On each site, go to the bucket's Replication tab and enable with mode bidirectional:

On Site A
  • Source bucket: my-bucket
  • Target: Site B connection
  • Target bucket: my-bucket
  • Mode: Bidirectional sync
On Site B
  • Source bucket: my-bucket
  • Target: Site A connection
  • Target bucket: my-bucket
  • Mode: Bidirectional sync

How It Works

  • PUSH: Local changes replicate to remote immediately on write/delete
  • PULL: Background worker fetches remote changes every SITE_SYNC_INTERVAL_SECONDS
  • Conflict Resolution: Last-Write-Wins based on last_modified timestamps (1-second clock skew tolerance)
  • Deletion Sync: Remote deletions propagate locally only for objects originally synced from remote
  • Loop Prevention: S3ReplicationAgent and SiteSyncAgent User-Agents prevent infinite sync loops
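The Last-Write-Wins rule with skew tolerance reduces to a small decision function. A conceptual sketch only; MyFSIO's internal tie-breaking may differ in detail:

```python
from datetime import timedelta

# Mirrors SITE_SYNC_CLOCK_SKEW_TOLERANCE_SECONDS (default 1.0).
CLOCK_SKEW_TOLERANCE = timedelta(seconds=1.0)

def resolve_conflict(local_modified, remote_modified) -> str:
    # Timestamps within the tolerance are treated as equal and the
    # local copy is kept; otherwise the newer last_modified wins.
    if abs(remote_modified - local_modified) <= CLOCK_SKEW_TOLERANCE:
        return "keep_local"
    return "take_remote" if remote_modified > local_modified else "keep_local"
```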

Error Handling & Rate Limits

The replication system handles transient failures automatically:

Behavior Details
Retry Logic boto3 automatically handles 429 (rate limit) errors using exponential backoff with max_attempts=2
Concurrency Uses a ThreadPoolExecutor with 4 parallel workers for replication tasks
Timeouts Connect: 5s, Read: 30s. Large files use streaming transfers
Large Object Counts: When replicating buckets with many objects, the target server's rate limits may cause delays; there is no built-in pause mechanism. Consider raising RATE_LIMIT_DEFAULT on the target server during bulk replication.
09

Site Registry

Track cluster membership and site identity for geo-distributed deployments. The site registry stores local site identity and peer site information.

Connections vs Sites

Understanding the difference between Connections and Sites is key to configuring geo-distribution:

Aspect Connections Sites
Purpose Store credentials to authenticate with remote S3 endpoints Track cluster membership and site identity
Contains Endpoint URL, access key, secret key, region Site ID, endpoint, region, priority, display name
Used by Replication rules, site sync workers Geo-distribution awareness, cluster topology
Analogy "How do I log in to that server?" "Who are the members of my cluster?"

Sites can optionally link to a Connection (via connection_id) to perform health checks against peer sites.

Configuration

Set environment variables to bootstrap local site identity on startup:

Variable Default Description
SITE_ID None Unique identifier for this site (e.g., us-west-1)
SITE_ENDPOINT None Public URL for this site (e.g., https://s3.us-west-1.example.com)
SITE_REGION us-east-1 AWS-style region identifier
SITE_PRIORITY 100 Routing priority (lower = preferred)
# Example: Configure site identity
export SITE_ID=us-west-1
export SITE_ENDPOINT=https://s3.us-west-1.example.com
export SITE_REGION=us-west-1
export SITE_PRIORITY=100
python run.py

Using the Sites UI

Navigate to Sites in the sidebar to manage site configuration:

Local Site Identity
  • Configure this site's ID, endpoint, region, and priority
  • Display name for easier identification
  • Changes persist to site_registry.json
Peer Sites
  • Register remote sites in your cluster
  • Link to a Connection for health checks
  • View health status (green/red/unknown)
  • Edit or delete peers as needed

Admin API Endpoints

The /admin API provides programmatic access to site registry:

# Get local site configuration
curl {{ api_base }}/admin/site \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

# Update local site
curl -X PUT {{ api_base }}/admin/site \
  -H "Content-Type: application/json" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -d '{"site_id": "us-west-1", "endpoint": "https://s3.example.com", "region": "us-west-1"}'

# List all peer sites
curl {{ api_base }}/admin/sites \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

# Add a peer site
curl -X POST {{ api_base }}/admin/sites \
  -H "Content-Type: application/json" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -d '{"site_id": "us-east-1", "endpoint": "https://s3.us-east-1.example.com"}'

# Check peer health
curl {{ api_base }}/admin/sites/us-east-1/health \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

# Get cluster topology
curl {{ api_base }}/admin/topology \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

Storage Location

Site registry data is stored at:

data/.myfsio.sys/config/site_registry.json
Planned: The site registry lays the groundwork for features like automatic failover, intelligent routing, and multi-site consistency. Currently it provides cluster awareness and health monitoring.
10

Object Versioning

Keep multiple versions of objects to protect against accidental deletions and overwrites. Restore previous versions at any time.

Enabling Versioning

  1. Navigate to your bucket's Properties tab.
  2. Find the Versioning card and click Enable.
  3. All subsequent uploads will create new versions instead of overwriting.

Version Operations

Operation Description
View Versions Click the version icon on any object to see all historical versions with timestamps and sizes.
Restore Version Click Restore on any version to make it the current version (creates a copy).
Delete Current Deleting an object archives it. Previous versions remain accessible.
Purge All Permanently delete an object and all its versions. This cannot be undone.

Archived Objects

When you delete a versioned object, it becomes "archived": the current version is removed but historical versions remain. The Archived tab shows these objects so you can restore them.

API Usage

# Enable versioning
curl -X PUT "{{ api_base }}/<bucket>?versioning" \
  -H "Content-Type: application/json" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -d '{"Status": "Enabled"}'

# Get versioning status
curl "{{ api_base }}/<bucket>?versioning" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

# List object versions
curl "{{ api_base }}/<bucket>?versions" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

# Get specific version
curl "{{ api_base }}/<bucket>/<key>?versionId=<version-id>" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"
Storage Impact: Each version consumes storage. Enable quotas to limit total bucket size including all versions.
11

Bucket Quotas

Limit how much data a bucket can hold using storage quotas. Quotas are enforced on uploads and multipart completions.

Quota Types

Limit Description
Max Size (MB) Maximum total storage in megabytes (includes current objects + archived versions)
Max Objects Maximum number of objects (includes current objects + archived versions)

Managing Quotas (Admin Only)

Quota management is restricted to administrators (users with iam:* permissions).

  1. Navigate to your bucket → Properties tab → Storage Quota card.
  2. Enter limits: Max Size (MB) and/or Max Objects. Leave empty for unlimited.
  3. Click Update Quota to save, or Remove Quota to clear limits.

API Usage

# Set quota (max 100MB, max 1000 objects)
curl -X PUT "{{ api_base }}/bucket/<bucket>?quota" \
  -H "Content-Type: application/json" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -d '{"max_bytes": 104857600, "max_objects": 1000}'

# Get current quota
curl "{{ api_base }}/bucket/<bucket>?quota" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

# Remove quota
curl -X PUT "{{ api_base }}/bucket/<bucket>?quota" \
  -H "Content-Type: application/json" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -d '{"max_bytes": null, "max_objects": null}'
Version Counting: When versioning is enabled, archived versions count toward the quota. The quota is checked against total storage, not just current objects.
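The enforcement described above amounts to an admission check at upload time. This sketch mirrors the documented semantics (None = unlimited, archived versions already included in the current totals) but is not MyFSIO's actual code:

```python
def quota_allows(current_bytes, current_objects, incoming_bytes,
                 max_bytes=None, max_objects=None):
    # None means unlimited. current_bytes/current_objects are assumed to
    # already include archived versions, per the version-counting rule.
    if max_bytes is not None and current_bytes + incoming_bytes > max_bytes:
        return False
    if max_objects is not None and current_objects + 1 > max_objects:
        return False
    return True
```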
12

Encryption

Protect data at rest with server-side encryption using AES-256-GCM. Objects are encrypted before being written to disk and decrypted transparently on read.

Encryption Types

Type Description
AES-256 (SSE-S3) Server-managed encryption using a local master key
KMS (SSE-KMS) Encryption using customer-managed keys via the built-in KMS
SSE-C Server-side encryption with customer-provided keys (per-request)

Enabling Encryption

  1. Set environment variables:
    # PowerShell
    $env:ENCRYPTION_ENABLED = "true"
    $env:KMS_ENABLED = "true"  # Optional
    python run.py
    
    # Bash
    export ENCRYPTION_ENABLED=true
    export KMS_ENABLED=true
    python run.py
  2. Configure bucket encryption: Navigate to your bucket → Properties tab → Default Encryption card → Click Enable Encryption.
  3. Choose algorithm: Select AES-256 for server-managed keys or aws:kms to use a KMS-managed key.
Important: Only new uploads after enabling encryption will be encrypted. Existing objects remain unencrypted.

KMS Key Management

When KMS_ENABLED=true, manage encryption keys via the API:

# Create a new KMS key
curl -X POST {{ api_base }}/kms/keys \
  -H "Content-Type: application/json" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -d '{"alias": "my-key", "description": "Production key"}'

# List all keys
curl {{ api_base }}/kms/keys \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

# Rotate a key (creates new key material)
curl -X POST {{ api_base }}/kms/keys/{key-id}/rotate \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

# Disable/Enable a key
curl -X POST {{ api_base }}/kms/keys/{key-id}/disable \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

# Schedule key deletion (30-day waiting period)
curl -X DELETE "{{ api_base }}/kms/keys/{key-id}?waiting_period_days=30" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

How It Works

Envelope Encryption: Each object is encrypted with a unique Data Encryption Key (DEK). The DEK is then encrypted (wrapped) by the master key or KMS key and stored alongside the ciphertext. On read, the DEK is unwrapped and used to decrypt the object transparently.
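The envelope scheme can be illustrated with the cryptography package's AES-GCM primitive. A conceptual sketch only: the master_key here stands in for the real master/KMS key, and MyFSIO's on-disk format is not shown:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Stand-in for the server's master or KMS key.
master_key = AESGCM.generate_key(bit_length=256)

def encrypt_object(plaintext: bytes) -> dict:
    dek = AESGCM.generate_key(bit_length=256)   # unique per-object data key
    nonce = os.urandom(12)
    ciphertext = AESGCM(dek).encrypt(nonce, plaintext, None)
    # Wrap the DEK under the master key; only the wrapped form is stored.
    wrap_nonce = os.urandom(12)
    wrapped_dek = AESGCM(master_key).encrypt(wrap_nonce, dek, None)
    return {"ciphertext": ciphertext, "nonce": nonce,
            "wrapped_dek": wrapped_dek, "wrap_nonce": wrap_nonce}

def decrypt_object(blob: dict) -> bytes:
    # Unwrap the DEK, then decrypt the object transparently.
    dek = AESGCM(master_key).decrypt(blob["wrap_nonce"], blob["wrapped_dek"], None)
    return AESGCM(dek).decrypt(blob["nonce"], blob["ciphertext"], None)
```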

SSE-C (Customer-Provided Keys)

With SSE-C, you supply your own 256-bit AES key with each request. The server encrypts/decrypts using your key but never stores it. You must provide the same key for both upload and download.

Header Value
x-amz-server-side-encryption-customer-algorithm AES256
x-amz-server-side-encryption-customer-key Base64-encoded 256-bit key
x-amz-server-side-encryption-customer-key-MD5 Base64-encoded MD5 of the key
# Generate a 256-bit key
KEY=$(openssl rand -base64 32)
KEY_MD5=$(echo -n "$KEY" | base64 -d | openssl dgst -md5 -binary | base64)

# Upload with SSE-C
curl -X PUT "{{ api_base }}/my-bucket/secret.txt" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -H "x-amz-server-side-encryption-customer-algorithm: AES256" \
  -H "x-amz-server-side-encryption-customer-key: $KEY" \
  -H "x-amz-server-side-encryption-customer-key-MD5: $KEY_MD5" \
  --data-binary @secret.txt

# Download with SSE-C (same key required)
curl "{{ api_base }}/my-bucket/secret.txt" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -H "x-amz-server-side-encryption-customer-algorithm: AES256" \
  -H "x-amz-server-side-encryption-customer-key: $KEY" \
  -H "x-amz-server-side-encryption-customer-key-MD5: $KEY_MD5"
Note: SSE-C does not require ENCRYPTION_ENABLED or KMS_ENABLED. If you lose your key, the data is irrecoverable.
13

Lifecycle Rules

Automatically delete expired objects, clean up old versions, and abort incomplete multipart uploads using time-based lifecycle rules.

How It Works

Lifecycle rules run on a background timer (Python threading.Timer), not a system cron job. The enforcement cycle triggers every 3600 seconds (1 hour) by default. Each cycle scans all buckets with lifecycle configurations and applies matching rules.
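The self-rearming timer pattern described above looks roughly like this (a sketch of the pattern, not MyFSIO's actual scheduler):

```python
import threading

def schedule_enforcement(enforce, interval_seconds=3600.0):
    """Run `enforce` every interval using a self-rearming threading.Timer."""
    def run():
        try:
            enforce()
        finally:
            # Re-arm after each cycle so enforcement keeps recurring.
            schedule_enforcement(enforce, interval_seconds)
    t = threading.Timer(interval_seconds, run)
    t.daemon = True   # don't block process shutdown
    t.start()
    return t
```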

Expiration Types

Type Description
Expiration (Days) Delete current objects older than N days from their last modification
Expiration (Date) Delete current objects after a specific date (ISO 8601 format)
NoncurrentVersionExpiration Delete non-current (archived) versions older than N days from when they became non-current
AbortIncompleteMultipartUpload Abort multipart uploads that have been in progress longer than N days

API Usage

# Set lifecycle rule (delete objects older than 30 days)
curl -X PUT "{{ api_base }}/<bucket>?lifecycle" \
  -H "Content-Type: application/json" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -d '[{
    "ID": "expire-old-objects",
    "Status": "Enabled",
    "Prefix": "",
    "Expiration": {"Days": 30}
  }]'

# Abort incomplete multipart uploads after 7 days
curl -X PUT "{{ api_base }}/<bucket>?lifecycle" \
  -H "Content-Type: application/json" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -d '[{
    "ID": "cleanup-multipart",
    "Status": "Enabled",
    "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7}
  }]'

# Get current lifecycle configuration
curl "{{ api_base }}/<bucket>?lifecycle" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"
Prefix Filtering: Use the Prefix field to scope rules to specific paths (e.g., "logs/"). Leave empty to apply to all objects in the bucket.
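Rule matching reduces to a prefix check plus an age check. A sketch using the rule shape from the examples above, simplified to Expiration.Days only (Date-based expiration and the other rule types are not covered):

```python
from datetime import timedelta

def rule_matches(key, last_modified, rule, now):
    # A rule applies when it is enabled, the key starts with its Prefix
    # (empty prefix matches everything), and the object is older than
    # the Expiration.Days threshold.
    if rule.get("Status") != "Enabled":
        return False
    if not key.startswith(rule.get("Prefix", "")):
        return False
    days = rule.get("Expiration", {}).get("Days")
    return days is not None and now - last_modified > timedelta(days=days)
```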
14

Garbage Collection

Automatically clean up orphaned data that accumulates over time: stale temp files, abandoned multipart uploads, stale lock files, orphaned metadata, orphaned versions, and empty directories.

Enabling GC

Disabled by default. Enable via environment variable:

GC_ENABLED=true python run.py

Configuration

Variable Default Description
GC_ENABLED false Enable garbage collection
GC_INTERVAL_HOURS 6 Hours between GC cycles
GC_TEMP_FILE_MAX_AGE_HOURS 24 Delete temp files older than this
GC_MULTIPART_MAX_AGE_DAYS 7 Delete orphaned multipart uploads older than this
GC_LOCK_FILE_MAX_AGE_HOURS 1 Delete stale lock files older than this
GC_DRY_RUN false Log what would be deleted without removing

What Gets Cleaned

Type Location Condition
Temp files .myfsio.sys/tmp/ Older than configured max age
Orphaned multipart .myfsio.sys/multipart/ Older than configured max age
Stale lock files .myfsio.sys/buckets/<bucket>/locks/ Older than configured max age
Orphaned metadata .myfsio.sys/buckets/<bucket>/meta/ Object file no longer exists
Orphaned versions .myfsio.sys/buckets/<bucket>/versions/ Main object no longer exists
Empty directories Various internal dirs Directory is empty after cleanup

Admin API

Method Route Description
GET /admin/gc/status Get GC status and configuration
POST /admin/gc/run Trigger manual GC run
GET /admin/gc/history Get execution history
# Trigger a dry run (preview what would be cleaned)
curl -X POST "{{ api_base }}/admin/gc/run" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -H "Content-Type: application/json" \
  -d '{"dry_run": true}'

# Trigger actual GC
curl -X POST "{{ api_base }}/admin/gc/run" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

# Check status
curl "{{ api_base }}/admin/gc/status" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

# View history
curl "{{ api_base }}/admin/gc/history?limit=10" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"
Dry Run: Use GC_DRY_RUN=true or pass {"dry_run": true} to the API to preview what would be deleted without actually removing anything. Check the logs or API response for details.
15

Integrity Scanner

Detect and optionally auto-repair data inconsistencies: corrupted objects, orphaned files, phantom metadata, stale versions, ETag cache drift, and unmigrated legacy metadata.

Enabling Integrity Scanner

Disabled by default. Enable via environment variable:

INTEGRITY_ENABLED=true python run.py

Configuration

Variable Default Description
INTEGRITY_ENABLED false Enable background integrity scanning
INTEGRITY_INTERVAL_HOURS 24 Hours between scan cycles
INTEGRITY_BATCH_SIZE 1000 Max objects to scan per cycle
INTEGRITY_AUTO_HEAL false Automatically repair detected issues
INTEGRITY_DRY_RUN false Log issues without healing

What Gets Checked

Check Detection Heal Action
Corrupted objects File MD5 does not match stored ETag Update ETag in index (disk is authoritative)
Orphaned objects File exists without metadata entry Create index entry with computed MD5/size/mtime
Phantom metadata Index entry exists but file is missing Remove stale entry from index
Stale versions Manifest without data or vice versa Remove orphaned version file
ETag cache etag_index.json differs from metadata Delete cache file (auto-rebuilt)
Legacy metadata Legacy .meta.json differs or unmigrated Migrate to index, delete legacy file
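The corruption check compares each file's MD5 against the stored ETag. A sketch of that comparison (for single-part objects the S3-style ETag is the plain MD5 hex digest; multipart ETags are computed differently and are not covered here):

```python
import hashlib
import os
import tempfile

def file_md5(path: str, chunk_size: int = 1 << 20) -> str:
    """Streaming MD5 of a file's contents, the value compared against the
    stored ETag (illustrative sketch, not MyFSIO's code)."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Demo: an object whose on-disk MD5 matches its recorded ETag is healthy.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"hello world")
    path = tmp.name
stored_etag = hashlib.md5(b"hello world").hexdigest()  # what the index holds
corrupted = file_md5(path) != stored_etag
os.unlink(path)
```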

Admin API

Method Route Description
GET /admin/integrity/status Get scanner status and configuration
POST /admin/integrity/run Trigger manual scan
GET /admin/integrity/history Get scan history
# Trigger a dry run with auto-heal preview
curl -X POST "{{ api_base }}/admin/integrity/run" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -H "Content-Type: application/json" \
  -d '{"dry_run": true, "auto_heal": true}'

# Trigger actual scan with healing
curl -X POST "{{ api_base }}/admin/integrity/run" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -H "Content-Type: application/json" \
  -d '{"auto_heal": true}'

# Check status
curl "{{ api_base }}/admin/integrity/status" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

# View history
curl "{{ api_base }}/admin/integrity/history?limit=10" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"
Dry Run: Use INTEGRITY_DRY_RUN=true or pass {"dry_run": true} to the API to preview detected issues without making any changes. Combine with {"auto_heal": true} to see what would be repaired.
16

Metrics History

Track CPU, memory, and disk usage over time with optional metrics history. Disabled by default to minimize overhead.

Enabling Metrics History

Set the environment variable to opt-in:

# PowerShell
$env:METRICS_HISTORY_ENABLED = "true"
python run.py

# Bash
export METRICS_HISTORY_ENABLED=true
python run.py

Configuration Options

Variable Default Description
METRICS_HISTORY_ENABLED false Enable/disable metrics history recording
METRICS_HISTORY_RETENTION_HOURS 24 How long to keep history data (hours)
METRICS_HISTORY_INTERVAL_MINUTES 5 Interval between snapshots (minutes)

API Endpoints

# Get metrics history (last 24 hours by default)
curl "{{ api_base | replace(from="/api", to="/ui") }}/metrics/history" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

# Get history for specific time range
curl "{{ api_base | replace(from="/api", to="/ui") }}/metrics/history?hours=6" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

# Get current settings
curl "{{ api_base | replace(from="/api", to="/ui") }}/metrics/settings" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

# Update settings at runtime
curl -X PUT "{{ api_base | replace(from="/api", to="/ui") }}/metrics/settings" \
  -H "Content-Type: application/json" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -d '{"enabled": true, "retention_hours": 48, "interval_minutes": 10}'

Storage Location

History data is stored at:

data/.myfsio.sys/config/metrics_history.json
UI Charts: When enabled, the Metrics dashboard displays line charts showing CPU, memory, and disk usage trends with a time range selector (1h, 6h, 24h, 7d).
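Retention works by dropping snapshots older than the configured window. A sketch of that pruning; the snapshot shape used here (an epoch-seconds `ts` field plus readings) is an assumption, so inspect data/.myfsio.sys/config/metrics_history.json for the real schema:

```python
import time

def prune_history(snapshots, retention_hours=24, now=None):
    """Drop snapshots older than the retention window, as implied by
    METRICS_HISTORY_RETENTION_HOURS. Snapshot shape is illustrative."""
    now = time.time() if now is None else now
    cutoff = now - retention_hours * 3600
    return [s for s in snapshots if s["ts"] >= cutoff]

now = 1_700_000_000
history = [
    {"ts": now - 30 * 3600, "cpu": 12.0},  # older than 24h: dropped
    {"ts": now - 2 * 3600, "cpu": 48.5},   # within the window: kept
]
kept = prune_history(history, retention_hours=24, now=now)
```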
17

Operation Metrics

Track API request statistics including request counts, latency, error rates, and bandwidth usage. Provides real-time visibility into API operations.

Enabling Operation Metrics

Set the environment variable to opt-in:

# PowerShell
$env:OPERATION_METRICS_ENABLED = "true"
python run.py

# Bash
export OPERATION_METRICS_ENABLED=true
python run.py

Configuration Options

Variable Default Description
OPERATION_METRICS_ENABLED false Enable/disable operation metrics collection
OPERATION_METRICS_INTERVAL_MINUTES 5 Interval between snapshots (minutes)
OPERATION_METRICS_RETENTION_HOURS 24 How long to keep history data (hours)

What's Tracked

Request Statistics
  • Request counts by HTTP method (GET, PUT, POST, DELETE)
  • Response status codes (2xx, 3xx, 4xx, 5xx)
  • Average, min, max latency
  • Bytes transferred in/out
Endpoint Breakdown
  • object - Object operations (GET/PUT/DELETE)
  • bucket - Bucket operations
  • ui - Web UI requests
  • service - Health checks, etc.

S3 Error Codes

The dashboard tracks S3 API-specific error codes like NoSuchKey, AccessDenied, BucketNotFound. These are separate from HTTP status codes – a 404 from the UI won't appear here, only S3 API errors.

API Endpoints

# Get current operation metrics
curl "{{ api_base | replace(from="/api", to="/ui") }}/metrics/operations" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

# Get operation metrics history
curl "{{ api_base | replace(from="/api", to="/ui") }}/metrics/operations/history" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

# Filter history by time range
curl "{{ api_base | replace(from="/api", to="/ui") }}/metrics/operations/history?hours=6" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

Storage Location

Operation metrics data is stored at:

data/.myfsio.sys/config/operation_metrics.json
UI Dashboard: When enabled, the Metrics page shows an "API Operations" section with summary cards, charts for requests by method/status/endpoint, and an S3 error codes table. Data refreshes every 5 seconds.
18

Troubleshooting & tips

Symptom Likely cause Fix
403 from API despite Public preset Policy not saved or ARN mismatch Reapply the preset and confirm arn:aws:s3:::bucket/* matches the bucket name.
UI shows stale policy/object data Browser cached prior state Refresh; the server hot-reloads data/.myfsio.sys/config/bucket_policies.json and storage metadata.
Presign dialog returns 403 User lacks required read/write/delete action or bucket policy denies Update IAM inline policies or remove conflicting deny statements.
Large uploads fail instantly MAX_UPLOAD_SIZE exceeded Raise the env var or split the object.
Requests hit the wrong host Proxy headers missing or API_BASE_URL incorrect Ensure your proxy sends X-Forwarded-Host/Proto headers, or explicitly set API_BASE_URL to your public domain.
Large folder uploads hit rate limits (429) RATE_LIMIT_DEFAULT exceeded (200/min) Increase the rate limit in env config, use a Redis backend (RATE_LIMIT_STORAGE_URI=redis://host:port) for distributed setups, or upload in smaller batches.
19

Health Check Endpoint

The API exposes a health check endpoint for monitoring and load balancer integration.

# Check API health
curl {{ api_base }}/myfsio/health

# Response
{"status": "ok", "version": "0.1.7"}

Use this endpoint for:

  • Load balancer health checks
  • Kubernetes liveness/readiness probes
  • Monitoring system integration (Prometheus, Datadog, etc.)
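A probe built on this endpoint only needs to fetch the JSON and check `status`. A minimal sketch, demonstrated here against a local stand-in server that mimics the documented response (in practice you would point `base_url` at your API, e.g. http://localhost:5000):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def is_healthy(base_url: str, timeout: float = 2.0) -> bool:
    """Probe /myfsio/health and report whether status == 'ok'."""
    try:
        with urllib.request.urlopen(f"{base_url}/myfsio/health",
                                    timeout=timeout) as resp:
            return json.load(resp).get("status") == "ok"
    except OSError:
        return False  # connection refused, timeout, HTTP error, ...

class _Stub(BaseHTTPRequestHandler):
    """Stand-in returning the documented health payload."""
    def do_GET(self):
        body = json.dumps({"status": "ok", "version": "0.1.7"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), _Stub)
threading.Thread(target=server.serve_forever, daemon=True).start()
healthy = is_healthy(f"http://127.0.0.1:{server.server_port}")
server.shutdown()
```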
20

Object Lock & Retention

Object Lock prevents objects from being deleted or overwritten for a specified retention period.

Retention Modes

Mode Description
GOVERNANCE Objects can't be deleted by normal users, but admins with bypass permission can override
COMPLIANCE Objects can't be deleted or overwritten by anyone until the retention period expires

API Usage

# Set object retention
curl -X PUT "{{ api_base }}/<bucket>/<key>?retention" \
  -H "Content-Type: application/json" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -d '{"Mode": "GOVERNANCE", "RetainUntilDate": "2025-12-31T23:59:59Z"}'

# Enable legal hold (indefinite protection)
curl -X PUT "{{ api_base }}/<bucket>/<key>?legal-hold" \
  -H "Content-Type: application/json" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -d '{"Status": "ON"}'

# Get legal hold status
curl "{{ api_base }}/<bucket>/<key>?legal-hold" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"
Legal Hold: Provides indefinite protection independent of retention settings. Use for litigation holds or regulatory requirements.
21

Access Logging

Enable S3-style access logging to track all requests to your buckets for audit and analysis.

# Enable access logging
curl -X PUT "{{ api_base }}/<bucket>?logging" \
  -H "Content-Type: application/json" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -d '{
    "LoggingEnabled": {
      "TargetBucket": "log-bucket",
      "TargetPrefix": "logs/my-bucket/"
    }
  }'

# Get logging configuration
curl "{{ api_base }}/<bucket>?logging" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

Log Contents

Logs include: timestamp, bucket, key, operation type, request ID, requester, source IP, HTTP status, error codes, bytes transferred, timing, referrer, and User-Agent.

22

Notifications & Webhooks

Configure event notifications to trigger webhooks when objects are created or deleted.

Supported Events

Event Type Description
s3:ObjectCreated:* Any object creation (PUT, POST, COPY, multipart)
s3:ObjectRemoved:* Any object deletion
# Set notification configuration
curl -X PUT "{{ api_base }}/<bucket>?notification" \
  -H "Content-Type: application/json" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -d '{
    "TopicConfigurations": [{
      "Id": "upload-notify",
      "TopicArn": "https://webhook.example.com/s3-events",
      "Events": ["s3:ObjectCreated:*"],
      "Filter": {
        "Key": {
          "FilterRules": [
            {"Name": "prefix", "Value": "uploads/"},
            {"Name": "suffix", "Value": ".jpg"}
          ]
        }
      }
    }]
  }'
Security: Webhook URLs are validated to prevent SSRF attacks. Internal/private IP ranges are blocked.
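To illustrate the kind of screening described (not MyFSIO's actual validation, which may also resolve hostnames and apply further checks), here is a sketch that rejects webhook URLs whose host is a literal private, loopback, or link-local IP:

```python
import ipaddress
from urllib.parse import urlparse

def looks_safe(url: str) -> bool:
    """Reject URLs whose host is a literal private/loopback/link-local IP.
    Sketch only: a real SSRF check must also resolve hostnames before
    deciding, or an attacker can point a DNS name at an internal IP."""
    host = urlparse(url).hostname or ""
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        return True  # hostname, not a literal IP; needs DNS resolution to vet
    return not (ip.is_private or ip.is_loopback or ip.is_link_local)

blocked = looks_safe("http://10.0.0.5/hook")      # private range: rejected
allowed = looks_safe("https://8.8.8.8/s3-events") # public address: accepted
```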
23

SelectObjectContent (SQL)

Query CSV, JSON, or Parquet files directly using SQL without downloading the entire object.

Prerequisite: Requires DuckDB to be installed (pip install duckdb)
# Query a CSV file
curl -X POST "{{ api_base }}/<bucket>/data.csv?select" \
  -H "Content-Type: application/json" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -d '{
    "Expression": "SELECT name, age FROM s3object WHERE age > 25",
    "ExpressionType": "SQL",
    "InputSerialization": {
      "CSV": {"FileHeaderInfo": "USE", "FieldDelimiter": ","}
    },
    "OutputSerialization": {"JSON": {}}
  }'

Supported Formats

Format Supports
CSV Headers, delimiters
JSON Document or lines
Parquet Auto schema
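For intuition, the sample query above is equivalent to the following filter over the same hypothetical data.csv, expressed with Python's csv module (this shows what the SQL computes; the server itself executes it via DuckDB without downloading the object):

```python
import csv
import io

# Stand-in for data.csv; FileHeaderInfo "USE" means the first row is a header.
data = io.StringIO("name,age\nalice,30\nbob,22\ncarol,41\n")

# SELECT name, age FROM s3object WHERE age > 25
rows = [
    {"name": r["name"], "age": int(r["age"])}
    for r in csv.DictReader(data)
    if int(r["age"]) > 25
]
```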
24

Advanced S3 Operations

Copy, move, and partially download objects using advanced S3 operations.

CopyObject

# Copy within same bucket
curl -X PUT "{{ api_base }}/<bucket>/copy-of-file.txt" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -H "x-amz-copy-source: /<bucket>/original-file.txt"

# Copy with metadata replacement
curl -X PUT "{{ api_base }}/<bucket>/file.txt" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -H "x-amz-copy-source: /<bucket>/file.txt" \
  -H "x-amz-metadata-directive: REPLACE" \
  -H "x-amz-meta-newkey: newvalue"

MoveObject (UI)

Move an object to a different key or bucket via the UI. Performs a copy then deletes the source. Requires read+delete on source and write on destination.

# Move via UI API (session-authenticated)
curl -X POST "http://localhost:5100/ui/buckets/<bucket>/objects/<key>/move" \
  -H "Content-Type: application/json" --cookie "session=..." \
  -d '{"dest_bucket": "other-bucket", "dest_key": "new-path/file.txt"}'

UploadPartCopy

Copy data from an existing object into a multipart upload part:

# Copy bytes 0-10485759 from source as part 1
curl -X PUT "{{ api_base }}/<bucket>/<key>?uploadId=X&partNumber=1" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -H "x-amz-copy-source: /source-bucket/source-file.bin" \
  -H "x-amz-copy-source-range: bytes=0-10485759"
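Getting the inclusive byte ranges right is the fiddly part of UploadPartCopy. A small helper that splits an object into 10 MiB parts and emits the matching `x-amz-copy-source-range` values:

```python
def part_ranges(total_size: int, part_size: int = 10 * 1024 * 1024):
    """Yield (part_number, 'bytes=start-end') pairs covering an object of
    total_size bytes. S3-style ranges are inclusive on both ends, hence
    the -1 on the end offset."""
    part_number = 1
    for start in range(0, total_size, part_size):
        end = min(start + part_size, total_size) - 1
        yield part_number, f"bytes={start}-{end}"
        part_number += 1

# A 25 MiB source split into 10 MiB parts yields three ranges.
ranges = list(part_ranges(25 * 1024 * 1024))
```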

Range Requests

# Get first 1000 bytes
curl "{{ api_base }}/<bucket>/<key>" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -H "Range: bytes=0-999"

# Get last 500 bytes
curl "{{ api_base }}/<bucket>/<key>" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -H "Range: bytes=-500"

Conditional Requests

Header Behavior
If-Modified-Since Only download if changed after date
If-None-Match Only download if ETag differs
If-Match Only download if ETag matches
25

Access Control Lists (ACLs)

ACLs provide legacy-style permission management for buckets and objects.

Canned ACLs

ACL Description
private Owner gets FULL_CONTROL (default)
public-read Owner FULL_CONTROL, public READ
public-read-write Owner FULL_CONTROL, public READ and WRITE
authenticated-read Owner FULL_CONTROL, authenticated users READ
# Set bucket ACL
curl -X PUT "{{ api_base }}/<bucket>?acl" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -H "x-amz-acl: public-read"

# Set object ACL during upload
curl -X PUT "{{ api_base }}/<bucket>/<key>" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -H "x-amz-acl: private" \
  --data-binary @file.txt
Recommendation: For most use cases, prefer bucket policies over ACLs for more flexible access control.
26

Object & Bucket Tagging

Add metadata tags to buckets and objects for organization, cost allocation, or lifecycle rule filtering.

Object Tagging

# Set object tags
curl -X PUT "{{ api_base }}/<bucket>/<key>?tagging" \
  -H "Content-Type: application/json" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -d '{
    "TagSet": [
      {"Key": "Classification", "Value": "Confidential"},
      {"Key": "Owner", "Value": "john@example.com"}
    ]
  }'

# Get object tags
curl "{{ api_base }}/<bucket>/<key>?tagging" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

# Set tags during upload
curl -X PUT "{{ api_base }}/<bucket>/<key>" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -H "x-amz-tagging: Environment=Staging&Team=QA" \
  --data-binary @file.txt

Bucket Tagging

# Set bucket tags
curl -X PUT "{{ api_base }}/<bucket>?tagging" \
  -H "Content-Type: application/json" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -d '{
    "TagSet": [
      {"Key": "Environment", "Value": "Production"},
      {"Key": "Team", "Value": "Engineering"}
    ]
  }'

Use Cases

  • Filter objects for lifecycle expiration by tag
  • Use tag conditions in bucket policies
  • Group objects by project or department
  • Trigger automation based on object tags
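The `x-amz-tagging` header shown in the upload example is just a URL query string, so the standard library encodes it safely (including values with spaces or special characters):

```python
from urllib.parse import parse_qs, urlencode

def tagging_header(tags: dict) -> str:
    """Encode tags as an x-amz-tagging header value (URL query-string
    format, as in the upload example above)."""
    return urlencode(tags)

header = tagging_header({"Environment": "Staging", "Team": "QA"})
# Round-trip to confirm the encoding is lossless.
roundtrip = {k: v[0] for k, v in parse_qs(header).items()}
```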
27

Static Website Hosting

Host static websites directly from S3 buckets with custom index and error pages, served via custom domain mapping.

Prerequisite: Set WEBSITE_HOSTING_ENABLED=true to enable this feature.

1. Configure bucket for website hosting

# Enable website hosting with index and error documents
curl -X PUT "{{ api_base }}/<bucket>?website" \
  -H "Content-Type: application/xml" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -d '<WebsiteConfiguration>
    <IndexDocument><Suffix>index.html</Suffix></IndexDocument>
    <ErrorDocument><Key>404.html</Key></ErrorDocument>
  </WebsiteConfiguration>'

# Get website configuration
curl "{{ api_base }}/<bucket>?website" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

# Remove website configuration
curl -X DELETE "{{ api_base }}/<bucket>?website" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

2. Map a custom domain to the bucket

# Create domain mapping (admin only)
curl -X POST "{{ api_base }}/admin/website-domains" \
  -H "Content-Type: application/json" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -d '{"domain": "example.com", "bucket": "my-site"}'

# List all domain mappings
curl "{{ api_base }}/admin/website-domains" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

# Update a mapping
curl -X PUT "{{ api_base }}/admin/website-domains/example.com" \
  -H "Content-Type: application/json" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -d '{"bucket": "new-site-bucket"}'

# Delete a mapping
curl -X DELETE "{{ api_base }}/admin/website-domains/example.com" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

3. Point your domain

MyFSIO handles domain routing natively via the Host header — no path-based proxy rules needed. Just point your domain to the MyFSIO API server.

Direct access (HTTP only): Point your domain's DNS (A or CNAME) directly to the MyFSIO server on port 5000.

For HTTPS, place a reverse proxy in front. The proxy only needs to forward traffic — MyFSIO handles the domain-to-bucket routing:

# nginx example
server {
    server_name example.com;
    location / {
        proxy_pass http://127.0.0.1:5000;
        proxy_set_header Host $host;  # Required: passes the domain to MyFSIO
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Important: The proxy_set_header Host $host; directive is required. MyFSIO matches the incoming Host header against domain mappings to determine which bucket to serve.

How it works

  • / serves the configured index document
  • /about/ serves about/index.html
  • Objects served with correct Content-Type
  • Missing objects return the error document with 404
  • Website endpoints are public (no auth required)
  • Normal S3 API with auth continues to work
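The path-to-key rules above can be sketched as a small resolver (illustrative of the documented behavior, not MyFSIO's code; the error-document fallback on missing objects is omitted):

```python
def website_key(path: str, index_doc: str = "index.html") -> str:
    """Map a request path to the object key served: bare paths and paths
    ending in '/' get the configured index document appended."""
    key = path.lstrip("/")
    if key == "" or key.endswith("/"):
        key += index_doc
    return key

root = website_key("/")            # index document at the bucket root
about = website_key("/about/")     # directory-style path
asset = website_key("/css/site.css")
```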
28

CORS Configuration

Configure per-bucket Cross-Origin Resource Sharing rules to control which origins can access your bucket from a browser.

Setting CORS Rules

# Set CORS configuration
curl -X PUT "{{ api_base }}/<bucket>?cors" \
  -H "Content-Type: application/xml" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -d '<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>https://example.com</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
    <ExposeHeader>ETag</ExposeHeader>
    <MaxAgeSeconds>3600</MaxAgeSeconds>
  </CORSRule>
</CORSConfiguration>'

# Get CORS configuration
curl "{{ api_base }}/<bucket>?cors" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

# Delete CORS configuration
curl -X DELETE "{{ api_base }}/<bucket>?cors" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

Rule Fields

Field Description
AllowedOrigin Origins allowed to make requests (supports * wildcard)
AllowedMethod HTTP methods: GET, PUT, POST, DELETE, HEAD
AllowedHeader Request headers allowed in preflight (supports *)
ExposeHeader Response headers visible to the browser (e.g., ETag, x-amz-request-id)
MaxAgeSeconds How long the browser caches preflight results
29

PostObject (HTML Form Upload)

Upload objects directly from an HTML form using browser-based POST uploads with policy-based authorization.

Form Fields

Field Description
key Object key (supports the ${filename} variable)
file The file to upload
policy Base64-encoded policy document (JSON)
x-amz-signature HMAC-SHA256 signature of the policy
x-amz-credential Access key / date / region / s3 / aws4_request
x-amz-algorithm AWS4-HMAC-SHA256
x-amz-date ISO 8601 date (e.g., 20250101T000000Z)
Content-Type MIME type of the uploaded file
x-amz-meta-* Custom metadata headers

Simple Upload (No Signing)

<form action="{{ api_base }}/my-bucket" method="POST" enctype="multipart/form-data">
  <input type="hidden" name="key" value="uploads/${filename}">
  <input type="file" name="file">
  <button type="submit">Upload</button>
</form>

Signed Upload (With Policy)

For authenticated uploads, include a base64-encoded policy and SigV4 signature fields. The policy constrains allowed keys, content types, and size limits. See docs.md Section 20 for full signing examples.
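As a sketch of the signing step (following the standard AWS4-HMAC-SHA256 key-derivation chain; the policy conditions, region, and `<secret>` placeholder here are illustrative, so check them against your deployment and the full examples in docs.md):

```python
import base64
import hashlib
import hmac
import json

def sign_post_policy(secret_key: str, policy: dict, date: str,
                     region: str = "us-east-1") -> tuple[str, str]:
    """Base64-encode a POST policy and sign it with the SigV4 derivation
    chain. `date` is YYYYMMDD and must match x-amz-credential/x-amz-date."""
    policy_b64 = base64.b64encode(json.dumps(policy).encode()).decode()

    def _hmac(key: bytes, msg: str) -> bytes:
        return hmac.new(key, msg.encode(), hashlib.sha256).digest()

    k_date = _hmac(("AWS4" + secret_key).encode(), date)
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, "s3")
    k_signing = _hmac(k_service, "aws4_request")
    signature = hmac.new(k_signing, policy_b64.encode(),
                         hashlib.sha256).hexdigest()
    return policy_b64, signature

policy = {  # illustrative policy: constrain key prefix and upload size
    "expiration": "2025-01-02T00:00:00Z",
    "conditions": [
        {"bucket": "my-bucket"},
        ["starts-with", "$key", "uploads/"],
        ["content-length-range", 0, 10 * 1024 * 1024],
    ],
}
policy_b64, signature = sign_post_policy("<secret>", policy, date="20250101")
```

The `policy` and `x-amz-signature` form fields then carry `policy_b64` and `signature` respectively.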

30

List Objects API v2

Use the v2 list API for improved pagination with continuation tokens instead of markers.

Usage

# List with v2 API
curl "{{ api_base }}/<bucket>?list-type=2&prefix=logs/&delimiter=/&max-keys=100" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

# Paginate with continuation token
curl "{{ api_base }}/<bucket>?list-type=2&continuation-token=<token>" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

# Start listing after a specific key
curl "{{ api_base }}/<bucket>?list-type=2&start-after=photos/2025/" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

Query Parameters

Parameter Description
list-type=2 Enables v2 API (required)
prefix Filter to keys starting with this prefix
delimiter Group keys by delimiter (typically / for folders)
max-keys Maximum objects to return (default 1000)
continuation-token Token from previous response for pagination
start-after Start listing after this key (first page only)
fetch-owner Include owner info in response
encoding-type Set to url to URL-encode keys in response
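The pagination loop itself is simple: keep requesting with the previous response's continuation token until none is returned. A sketch, with `fetch_page` as a stand-in for the GET ?list-type=2 call above (a real client would issue that HTTP request and parse the response):

```python
def list_all_keys(fetch_page):
    """Drain a ListObjectsV2-style paginated listing. `fetch_page` must
    return (keys, next_continuation_token_or_None)."""
    keys, token = [], None
    while True:
        page, token = fetch_page(continuation_token=token)
        keys.extend(page)
        if token is None:
            return keys

# Stub simulating a bucket that pages two keys at a time.
PAGES = {None: (["a.txt", "b.txt"], "t1"), "t1": (["c.txt"], None)}

def fake_fetch(continuation_token=None):
    return PAGES[continuation_token]

all_keys = list_all_keys(fake_fetch)
```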
31

Upgrading & Updates

How to safely update MyFSIO to a new version.

Pre-Update Backup

Always back up before updating:

# Back up configuration
cp -r data/.myfsio.sys/config/ config-backup/

# Back up data (optional, for critical deployments)
tar czf myfsio-backup-$(date +%Y%m%d).tar.gz data/

# Back up logs
cp -r logs/ logs-backup/

Update Procedure

  1. Stop the service: sudo systemctl stop myfsio (or kill the process)
  2. Pull new version: git pull origin main or download the new binary
  3. Install dependencies: pip install -r requirements.txt
  4. Validate config: python run.py --check-config
  5. Start the service: sudo systemctl start myfsio
  6. Verify: curl http://localhost:5000/myfsio/health

Rollback

If something goes wrong, stop the service, restore the backed-up config and data directories, then restart with the previous binary or code version. See docs.md Section 4 for detailed rollback procedures including blue-green deployment strategies.

32

Full API Reference

Complete list of all S3-compatible, admin, and KMS endpoints.

# Service
GET    /myfsio/health                   # Health check

# Bucket Operations
GET    /                               # List buckets
PUT    /<bucket>                        # Create bucket
DELETE /<bucket>                        # Delete bucket
GET    /<bucket>                        # List objects (?list-type=2)
HEAD   /<bucket>                        # Check bucket exists
POST   /<bucket>                        # POST object / form upload
POST   /<bucket>?delete                 # Bulk delete

# Bucket Configuration
GET|PUT|DELETE /<bucket>?policy          # Bucket policy
GET|PUT        /<bucket>?quota           # Bucket quota
GET|PUT        /<bucket>?versioning      # Versioning
GET|PUT|DELETE /<bucket>?lifecycle        # Lifecycle rules
GET|PUT|DELETE /<bucket>?cors             # CORS config
GET|PUT|DELETE /<bucket>?encryption       # Default encryption
GET|PUT        /<bucket>?acl              # Bucket ACL
GET|PUT|DELETE /<bucket>?tagging          # Bucket tags
GET|PUT|DELETE /<bucket>?replication      # Replication rules
GET|PUT        /<bucket>?logging          # Access logging
GET|PUT        /<bucket>?notification     # Event notifications
GET|PUT        /<bucket>?object-lock      # Object lock config
GET|PUT|DELETE /<bucket>?website          # Static website
GET            /<bucket>?uploads          # List multipart uploads
GET            /<bucket>?versions         # List object versions
GET            /<bucket>?location         # Bucket region

# Object Operations
PUT    /<bucket>/<key>                  # Upload object
GET    /<bucket>/<key>                  # Download (Range supported)
DELETE /<bucket>/<key>                  # Delete object
HEAD   /<bucket>/<key>                  # Object metadata
POST   /<bucket>/<key>?select           # SQL query (SelectObjectContent)

# Object Configuration
GET|PUT|DELETE /<bucket>/<key>?tagging    # Object tags
GET|PUT        /<bucket>/<key>?acl        # Object ACL
GET|PUT        /<bucket>/<key>?retention  # Object retention
GET|PUT        /<bucket>/<key>?legal-hold # Legal hold

# Multipart Upload
POST   /<bucket>/<key>?uploads          # Initiate
PUT    /<bucket>/<key>?uploadId=X&partNumber=N  # Upload part
POST   /<bucket>/<key>?uploadId=X       # Complete
DELETE /<bucket>/<key>?uploadId=X       # Abort
GET    /<bucket>/<key>?uploadId=X       # List parts

# Copy (via x-amz-copy-source header)
PUT    /<bucket>/<key>                  # CopyObject
PUT    /<bucket>/<key>?uploadId&partNumber # UploadPartCopy

# Admin API
GET|PUT /admin/site                     # Local site config
GET     /admin/sites                    # List peers
POST    /admin/sites                    # Register peer
GET|PUT|DELETE /admin/sites/<id>        # Manage peer
GET     /admin/sites/<id>/health        # Peer health
GET     /admin/topology                 # Cluster topology
GET|POST|PUT|DELETE /admin/website-domains  # Domain mappings

# KMS API
GET|POST /kms/keys                      # List / Create keys
GET|DELETE /kms/keys/<id>               # Get / Delete key
POST   /kms/keys/<id>/enable            # Enable key
POST   /kms/keys/<id>/disable           # Disable key
POST   /kms/keys/<id>/rotate            # Rotate key
POST   /kms/encrypt                     # Encrypt data
POST   /kms/decrypt                     # Decrypt data
POST   /kms/generate-data-key           # Generate data key
POST   /kms/generate-random             # Generate random bytes
{% endblock %}