Compare commits: v0.1.0...a2745ff2ee (73 commits)
| SHA1 |
|---|
| a2745ff2ee |
| 9165e365e6 |
| 01e26754e8 |
| b592fa9fdb |
| cd9734b398 |
| 90893cac27 |
| 6e659902bd |
| 39a707ecbc |
| 4199f8e6c7 |
| adc6770273 |
| f5451c162b |
| aab9ef696a |
| be48f59452 |
| 86c04f85f6 |
| 28cb656d94 |
| 992d9eccd9 |
| 40f3192c5c |
| 2498b950f6 |
| 97435f15e5 |
| 3c44152fc6 |
| 97860669ec |
| 4a5dd76286 |
| d2dc293722 |
| 397515edce |
| 563bb8fa6a |
| 980fced7e4 |
| 5ccf53b688 |
| 4d4256830a |
| 137e3b7b68 |
| bae5009ec4 |
| 114e684cb8 |
| 5d161c1d92 |
| f160827b41 |
| 9368715b16 |
| 453ac6ea30 |
| 804f46d11e |
| 766dbb18be |
| 590a39ca80 |
| 53326f4e41 |
| 233780617f |
| 6a31a9082e |
| aaa230b19b |
| 86138636db |
| b2f4d1b5db |
| cee28c9f81 |
| 85ee5b9388 |
| e6ee341b93 |
| 92cf8825cf |
| ef781ae0b1 |
| 37d372c617 |
| fd8fb21517 |
| a095616569 |
| c6cbe822e1 |
| dddab6dbbc |
| 015c9cb52d |
| c8b1c33118 |
| ebef3dfa57 |
| 1116353d0f |
| e4b92a32a1 |
| 57c40dcdcc |
| 7d1735a59f |
| 9064f9d60e |
| 36c08b0ac1 |
| ec5d52f208 |
| 96de6164d1 |
| 8c00d7bd4b |
| a32d9dbd77 |
| fe3eacd2be |
| 471cf5a305 |
| 840fd176d3 |
| 5350d04ba5 |
| f2daa8a8a3 |
| e287b59645 |
.dockerignore (new file, 13 lines)

```
.git
.gitignore
.venv
__pycache__
*.pyc
*.pyo
*.pyd
.pytest_cache
.coverage
htmlcov
logs
data
tmp
```
Dockerfile (11 changes)

```diff
@@ -16,9 +16,14 @@ RUN pip install --no-cache-dir -r requirements.txt
 
 COPY . .
 
-# Drop privileges
-RUN useradd -m -u 1000 myfsio \
+# Make entrypoint executable
+RUN chmod +x docker-entrypoint.sh
+
+# Create data directory and set permissions
+RUN mkdir -p /app/data \
+    && useradd -m -u 1000 myfsio \
     && chown -R myfsio:myfsio /app
 
 USER myfsio
 
 EXPOSE 5000 5100
@@ -29,4 +34,4 @@ ENV APP_HOST=0.0.0.0 \
 HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
     CMD python -c "import requests; requests.get('http://localhost:5000/healthz', timeout=2)"
 
-CMD ["python", "run.py", "--mode", "both"]
+CMD ["./docker-entrypoint.sh"]
```
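The container healthcheck above polls `/healthz` with a short timeout; the same probe works from the host as a quick smoke test. A minimal sketch, assuming the API container publishes port 5000 on localhost:

```python
# Host-side smoke test mirroring the container HEALTHCHECK command.
# Assumes the API is reachable on localhost:5000.
import requests

resp = requests.get("http://localhost:5000/healthz", timeout=2)
# The in-container check only needs the request not to raise;
# raise_for_status() is slightly stricter and also rejects 5xx responses.
resp.raise_for_status()
print("healthy:", resp.status_code)
```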
README.md

```diff
@@ -8,7 +8,7 @@ MyFSIO is a batteries-included, Flask-based recreation of Amazon S3 and IAM work
 - **IAM + access keys:** Users, access keys, key rotation, and bucket-scoped actions (`list/read/write/delete/policy`) now live in `data/.myfsio.sys/config/iam.json` and are editable from the IAM dashboard.
 - **Bucket policies + hot reload:** `data/.myfsio.sys/config/bucket_policies.json` uses AWS' policy grammar (Version `2012-10-17`) with a built-in watcher, so editing the JSON file applies immediately. The UI also ships Public/Private/Custom presets for faster edits.
 - **Presigned URLs everywhere:** Signature Version 4 presigned URLs respect IAM + bucket policies and replace the now-removed "share link" feature for public access scenarios.
-- **Modern UI:** Responsive tables, quick filters, preview sidebar, object-level delete buttons, a presign modal, and an inline JSON policy editor that respects dark mode keep bucket management friendly.
+- **Modern UI:** Responsive tables, quick filters, preview sidebar, object-level delete buttons, a presign modal, and an inline JSON policy editor that respects dark mode keep bucket management friendly. The object browser supports folder navigation, infinite scroll pagination, bulk operations, and automatic retry on load failures.
 - **Tests & health:** `/healthz` for smoke checks and `pytest` coverage for IAM, CRUD, presign, and policy flows.
 
 ## Architecture at a Glance
@@ -86,7 +86,7 @@ Presigned URLs follow the AWS CLI playbook:
 | `AWS_REGION` | `us-east-1` | Region used in Signature V4 scope |
 | `AWS_SERVICE` | `s3` | Service used in Signature V4 scope |
 
-> Buckets now live directly under `data/` while system metadata (versions, IAM, bucket policies, multipart uploads, etc.) lives in `data/.myfsio.sys`. Existing installs can keep their environment variables, but the defaults now match MinIO's `data/.system` pattern for easier bind-mounting.
+> Buckets now live directly under `data/` while system metadata (versions, IAM, bucket policies, multipart uploads, etc.) lives in `data/.myfsio.sys`.
 
 ## API Cheatsheet (IAM headers required)
 
```
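Since the presigned URLs advertised in the README are SigV4-compatible, a standard AWS SDK can mint them against the local endpoint. A minimal sketch with boto3; the endpoint address, bucket, key, and credentials below are placeholders, not values from this repo:

```python
# Sketch: SigV4 presigned GET URL against a local MyFSIO endpoint via boto3.
import boto3
from botocore.config import Config

s3 = boto3.client(
    "s3",
    endpoint_url="http://127.0.0.1:5000",       # placeholder local API address
    aws_access_key_id="EXAMPLE_ACCESS_KEY",      # placeholder IAM access key
    aws_secret_access_key="EXAMPLE_SECRET_KEY",  # placeholder secret key
    region_name="us-east-1",                     # matches the AWS_REGION default
    config=Config(signature_version="s3v4"),
)

url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-bucket", "Key": "hello.txt"},
    ExpiresIn=3600,  # URL valid for one hour
)
print(url)
```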
app/__init__.py (152 changes)

```diff
@@ -2,28 +2,59 @@
 from __future__ import annotations
 
 import logging
+import shutil
+import sys
 import time
 import uuid
 from logging.handlers import RotatingFileHandler
 from pathlib import Path
 from datetime import timedelta
-from typing import Any, Dict, Optional
+from typing import Any, Dict, List, Optional
 
 from flask import Flask, g, has_request_context, redirect, render_template, request, url_for
 from flask_cors import CORS
 from flask_wtf.csrf import CSRFError
+from werkzeug.middleware.proxy_fix import ProxyFix
 
 from .bucket_policies import BucketPolicyStore
 from .config import AppConfig
 from .connections import ConnectionStore
+from .encryption import EncryptionManager
 from .extensions import limiter, csrf
 from .iam import IamService
+from .kms import KMSManager
 from .replication import ReplicationManager
 from .secret_store import EphemeralSecretStore
 from .storage import ObjectStorage
 from .version import get_version
 
 
+def _migrate_config_file(active_path: Path, legacy_paths: List[Path]) -> Path:
+    """Migrate config file from legacy locations to the active path.
+
+    Checks each legacy path in order and moves the first one found to the active path.
+    This ensures backward compatibility for users upgrading from older versions.
+    """
+    active_path.parent.mkdir(parents=True, exist_ok=True)
+
+    if active_path.exists():
+        return active_path
+
+    for legacy_path in legacy_paths:
+        if legacy_path.exists():
+            try:
+                shutil.move(str(legacy_path), str(active_path))
+            except OSError:
+                shutil.copy2(legacy_path, active_path)
+                try:
+                    legacy_path.unlink(missing_ok=True)
+                except OSError:
+                    pass
+            break
+
+    return active_path
+
+
 def create_app(
     test_config: Optional[Dict[str, Any]] = None,
     *,
@@ -33,7 +64,11 @@ def create_app(
     """Create and configure the Flask application."""
     config = AppConfig.from_env(test_config)
 
-    project_root = Path(__file__).resolve().parent.parent
+    if getattr(sys, "frozen", False):
+        project_root = Path(sys._MEIPASS)
+    else:
+        project_root = Path(__file__).resolve().parent.parent
+
     app = Flask(
         __name__,
         static_folder=str(project_root / "static"),
@@ -47,6 +82,9 @@ def create_app(
     if app.config.get("TESTING"):
         app.config.setdefault("WTF_CSRF_ENABLED", False)
 
+    # Trust X-Forwarded-* headers from proxies
+    app.wsgi_app = ProxyFix(app.wsgi_app, x_for=1, x_proto=1, x_host=1, x_prefix=1)
+
     _configure_cors(app)
     _configure_logging(app)
 
@@ -62,13 +100,46 @@ def create_app(
     bucket_policies = BucketPolicyStore(Path(app.config["BUCKET_POLICY_PATH"]))
     secret_store = EphemeralSecretStore(default_ttl=app.config.get("SECRET_TTL_SECONDS", 300))
 
-    # Initialize Replication components
-    connections_path = Path(app.config["STORAGE_ROOT"]) / ".connections.json"
-    replication_rules_path = Path(app.config["STORAGE_ROOT"]) / ".replication_rules.json"
+    storage_root = Path(app.config["STORAGE_ROOT"])
+    config_dir = storage_root / ".myfsio.sys" / "config"
+    config_dir.mkdir(parents=True, exist_ok=True)
+
+    connections_path = _migrate_config_file(
+        active_path=config_dir / "connections.json",
+        legacy_paths=[
+            storage_root / ".myfsio.sys" / "connections.json",
+            storage_root / ".connections.json",
+        ],
+    )
+    replication_rules_path = _migrate_config_file(
+        active_path=config_dir / "replication_rules.json",
+        legacy_paths=[
+            storage_root / ".myfsio.sys" / "replication_rules.json",
+            storage_root / ".replication_rules.json",
+        ],
+    )
+
     connections = ConnectionStore(connections_path)
     replication = ReplicationManager(storage, connections, replication_rules_path)
 
+    encryption_config = {
+        "encryption_enabled": app.config.get("ENCRYPTION_ENABLED", False),
+        "encryption_master_key_path": app.config.get("ENCRYPTION_MASTER_KEY_PATH"),
+        "default_encryption_algorithm": app.config.get("DEFAULT_ENCRYPTION_ALGORITHM", "AES256"),
+    }
+    encryption_manager = EncryptionManager(encryption_config)
+
+    kms_manager = None
+    if app.config.get("KMS_ENABLED", False):
+        kms_keys_path = Path(app.config.get("KMS_KEYS_PATH", ""))
+        kms_master_key_path = Path(app.config.get("ENCRYPTION_MASTER_KEY_PATH", ""))
+        kms_manager = KMSManager(kms_keys_path, kms_master_key_path)
+        encryption_manager.set_kms_provider(kms_manager)
+
+    if app.config.get("ENCRYPTION_ENABLED", False):
+        from .encrypted_storage import EncryptedObjectStorage
+        storage = EncryptedObjectStorage(storage, encryption_manager)
+
     app.extensions["object_storage"] = storage
     app.extensions["iam"] = iam
     app.extensions["bucket_policies"] = bucket_policies
@@ -76,6 +147,8 @@ def create_app(
     app.extensions["limiter"] = limiter
     app.extensions["connections"] = connections
     app.extensions["replication"] = replication
+    app.extensions["encryption"] = encryption_manager
+    app.extensions["kms"] = kms_manager
 
     @app.errorhandler(500)
     def internal_error(error):
@@ -96,11 +169,35 @@ def create_app(
             value /= 1024.0
         return f"{value:.1f} PB"
 
+    @app.template_filter("timestamp_to_datetime")
+    def timestamp_to_datetime(value: float) -> str:
+        """Format Unix timestamp as human-readable datetime in configured timezone."""
+        from datetime import datetime, timezone as dt_timezone
+        from zoneinfo import ZoneInfo
+        if not value:
+            return "Never"
+        try:
+            dt_utc = datetime.fromtimestamp(value, dt_timezone.utc)
+            display_tz = app.config.get("DISPLAY_TIMEZONE", "UTC")
+            if display_tz and display_tz != "UTC":
+                try:
+                    tz = ZoneInfo(display_tz)
+                    dt_local = dt_utc.astimezone(tz)
+                    return dt_local.strftime("%Y-%m-%d %H:%M:%S")
+                except (KeyError, ValueError):
+                    pass
+            return dt_utc.strftime("%Y-%m-%d %H:%M:%S UTC")
+        except (ValueError, OSError):
+            return "Unknown"
+
     if include_api:
         from .s3_api import s3_api_bp
+        from .kms_api import kms_api_bp
 
         app.register_blueprint(s3_api_bp)
+        app.register_blueprint(kms_api_bp)
         csrf.exempt(s3_api_bp)
+        csrf.exempt(kms_api_bp)
 
     if include_ui:
         from .ui import ui_bp
@@ -137,14 +234,12 @@ def create_ui_app(test_config: Optional[Dict[str, Any]] = None) -> Flask:
 
 def _configure_cors(app: Flask) -> None:
     origins = app.config.get("CORS_ORIGINS", ["*"])
-    methods = app.config.get("CORS_METHODS", ["GET", "PUT", "POST", "DELETE", "OPTIONS"])
-    allow_headers = app.config.get(
-        "CORS_ALLOW_HEADERS",
-        ["Content-Type", "X-Access-Key", "X-Secret-Key", "X-Amz-Date", "X-Amz-SignedHeaders"],
-    )
+    methods = app.config.get("CORS_METHODS", ["GET", "PUT", "POST", "DELETE", "OPTIONS", "HEAD"])
+    allow_headers = app.config.get("CORS_ALLOW_HEADERS", ["*"])
+    expose_headers = app.config.get("CORS_EXPOSE_HEADERS", ["*"])
     CORS(
         app,
-        resources={r"/*": {"origins": origins, "methods": methods, "allow_headers": allow_headers}},
+        resources={r"/*": {"origins": origins, "methods": methods, "allow_headers": allow_headers, "expose_headers": expose_headers}},
         supports_credentials=True,
     )
 
@@ -152,7 +247,7 @@ def _configure_cors(app: Flask) -> None:
 class _RequestContextFilter(logging.Filter):
     """Inject request-specific attributes into log records."""
 
-    def filter(self, record: logging.LogRecord) -> bool:  # pragma: no cover - simple boilerplate
+    def filter(self, record: logging.LogRecord) -> bool:
         if has_request_context():
             record.request_id = getattr(g, "request_id", "-")
             record.path = request.path
@@ -167,23 +262,33 @@ class _RequestContextFilter(logging.Filter):
 
 
 def _configure_logging(app: Flask) -> None:
-    log_file = Path(app.config["LOG_FILE"])
-    log_file.parent.mkdir(parents=True, exist_ok=True)
-    handler = RotatingFileHandler(
-        log_file,
-        maxBytes=int(app.config.get("LOG_MAX_BYTES", 5 * 1024 * 1024)),
-        backupCount=int(app.config.get("LOG_BACKUP_COUNT", 3)),
-        encoding="utf-8",
-    )
     formatter = logging.Formatter(
         "%(asctime)s | %(levelname)s | %(request_id)s | %(method)s %(path)s | %(message)s"
     )
-    handler.setFormatter(formatter)
-    handler.addFilter(_RequestContextFilter())
+    # Stream Handler (stdout) - Primary for Docker
+    stream_handler = logging.StreamHandler(sys.stdout)
+    stream_handler.setFormatter(formatter)
+    stream_handler.addFilter(_RequestContextFilter())
 
     logger = app.logger
     logger.handlers.clear()
-    logger.addHandler(handler)
+    logger.addHandler(stream_handler)
+
+    # File Handler (optional, if configured)
+    if app.config.get("LOG_TO_FILE"):
+        log_file = Path(app.config["LOG_FILE"])
+        log_file.parent.mkdir(parents=True, exist_ok=True)
+        file_handler = RotatingFileHandler(
+            log_file,
+            maxBytes=int(app.config.get("LOG_MAX_BYTES", 5 * 1024 * 1024)),
+            backupCount=int(app.config.get("LOG_BACKUP_COUNT", 3)),
+            encoding="utf-8",
+        )
+        file_handler.setFormatter(formatter)
+        file_handler.addFilter(_RequestContextFilter())
+        logger.addHandler(file_handler)
 
     logger.setLevel(getattr(logging, app.config.get("LOG_LEVEL", "INFO"), logging.INFO))
 
     @app.before_request
@@ -211,5 +316,4 @@
             },
         )
         response.headers["X-Request-Duration-ms"] = f"{duration_ms:.2f}"
-        response.headers["Server"] = "MyFISO"
        return response
```
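The new `_migrate_config_file` helper checks legacy locations in order and moves the first hit into the new `config/` directory. A small sketch of that behavior in isolation; the temp paths are invented for the demo, and the import assumes the helper is reachable from the `app` package as defined above:

```python
# Demo: the first existing legacy path wins and is moved to the active location.
import tempfile
from pathlib import Path

from app import _migrate_config_file  # assumption: importable from the package

root = Path(tempfile.mkdtemp())
legacy = root / ".connections.json"  # old flat location
legacy.write_text("{}")

active = _migrate_config_file(
    active_path=root / ".myfsio.sys" / "config" / "connections.json",
    legacy_paths=[
        root / ".myfsio.sys" / "connections.json",  # checked first, absent here
        legacy,                                     # checked second, gets moved
    ],
)
print(active.exists(), legacy.exists())  # True False
```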
app/bucket_policies.py

```diff
@@ -11,17 +11,51 @@ from typing import Any, Dict, Iterable, List, Optional, Sequence
 RESOURCE_PREFIX = "arn:aws:s3:::"
 
 ACTION_ALIASES = {
-    "s3:getobject": "read",
-    "s3:getobjectversion": "read",
+    # List actions
     "s3:listbucket": "list",
     "s3:listallmybuckets": "list",
+    "s3:listbucketversions": "list",
+    "s3:listmultipartuploads": "list",
+    "s3:listparts": "list",
+    # Read actions
+    "s3:getobject": "read",
+    "s3:getobjectversion": "read",
+    "s3:getobjecttagging": "read",
+    "s3:getobjectversiontagging": "read",
+    "s3:getobjectacl": "read",
+    "s3:getbucketversioning": "read",
+    "s3:headobject": "read",
+    "s3:headbucket": "read",
+    # Write actions
     "s3:putobject": "write",
     "s3:createbucket": "write",
+    "s3:putobjecttagging": "write",
+    "s3:putbucketversioning": "write",
+    "s3:createmultipartupload": "write",
+    "s3:uploadpart": "write",
+    "s3:completemultipartupload": "write",
+    "s3:abortmultipartupload": "write",
+    "s3:copyobject": "write",
+    # Delete actions
     "s3:deleteobject": "delete",
     "s3:deleteobjectversion": "delete",
     "s3:deletebucket": "delete",
+    "s3:deleteobjecttagging": "delete",
+    # Share actions (ACL)
     "s3:putobjectacl": "share",
+    "s3:putbucketacl": "share",
+    "s3:getbucketacl": "share",
+    # Policy actions
     "s3:putbucketpolicy": "policy",
+    "s3:getbucketpolicy": "policy",
+    "s3:deletebucketpolicy": "policy",
+    # Replication actions
+    "s3:getreplicationconfiguration": "replication",
+    "s3:putreplicationconfiguration": "replication",
+    "s3:deletereplicationconfiguration": "replication",
+    "s3:replicateobject": "replication",
+    "s3:replicatetags": "replication",
+    "s3:replicatedelete": "replication",
 }
 
 
@@ -154,7 +188,6 @@ class BucketPolicyStore:
         except FileNotFoundError:
             return None
 
-    # ------------------------------------------------------------------
     def evaluate(
         self,
         access_key: Optional[str],
@@ -195,7 +228,6 @@
         self._policies.pop(bucket, None)
         self._persist()
 
-    # ------------------------------------------------------------------
     def _load(self) -> None:
         try:
             content = self.policy_path.read_text(encoding='utf-8')
```
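The expanded `ACTION_ALIASES` table maps lowercase S3 action names onto the coarse verbs the policy store evaluates. A short sketch of that lookup; `normalize_action` is a hypothetical helper written for illustration, assuming actions are compared case-insensitively (which is why the keys are all lowercase):

```python
# Sketch: collapsing raw S3 action names to the store's coarse verbs.
from app.bucket_policies import ACTION_ALIASES

def normalize_action(action: str) -> str | None:
    # Hypothetical helper: lowercase the wire-format action and look it up.
    return ACTION_ALIASES.get(action.lower())

print(normalize_action("s3:GetObject"))        # read
print(normalize_action("s3:UploadPart"))       # write
print(normalize_action("s3:ReplicateDelete"))  # replication
print(normalize_action("s3:Unknown"))          # None (not aliased)
```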
app/config.py (185 changes)

```diff
@@ -4,12 +4,18 @@ from __future__ import annotations
 import os
 import secrets
 import shutil
+import sys
 import warnings
 from dataclasses import dataclass
 from pathlib import Path
 from typing import Any, Dict, Optional
 
-PROJECT_ROOT = Path(__file__).resolve().parent.parent
+if getattr(sys, "frozen", False):
+    # Running in a PyInstaller bundle
+    PROJECT_ROOT = Path(sys._MEIPASS)
+else:
+    # Running in a normal Python environment
+    PROJECT_ROOT = Path(__file__).resolve().parent.parent
 
 
 def _prepare_config_file(active_path: Path, legacy_path: Optional[Path] = None) -> Path:
@@ -39,11 +45,12 @@ class AppConfig:
     secret_key: str
     iam_config_path: Path
     bucket_policy_path: Path
-    api_base_url: str
+    api_base_url: Optional[str]
     aws_region: str
     aws_service: str
     ui_enforce_bucket_policies: bool
     log_level: str
+    log_to_file: bool
     log_path: Path
     log_max_bytes: int
     log_backup_count: int
@@ -52,6 +59,7 @@
     cors_origins: list[str]
     cors_methods: list[str]
     cors_allow_headers: list[str]
+    cors_expose_headers: list[str]
     session_lifetime_days: int
     auth_max_attempts: int
     auth_lockout_minutes: int
@@ -59,6 +67,13 @@
     secret_ttl_seconds: int
     stream_chunk_size: int
     multipart_min_part_size: int
+    bucket_stats_cache_ttl: int
+    encryption_enabled: bool
+    encryption_master_key_path: Path
+    kms_enabled: bool
+    kms_keys_path: Path
+    default_encryption_algorithm: str
+    display_timezone: str
 
     @classmethod
     def from_env(cls, overrides: Optional[Dict[str, Any]] = None) -> "AppConfig":
@@ -78,34 +93,49 @@
         multipart_min_part_size = int(_get("MULTIPART_MIN_PART_SIZE", 5 * 1024 * 1024))
         default_secret = "dev-secret-key"
         secret_key = str(_get("SECRET_KEY", default_secret))
 
         if not secret_key or secret_key == default_secret:
-            generated = secrets.token_urlsafe(32)
-            if secret_key == default_secret:
-                warnings.warn("Using insecure default SECRET_KEY. A random value has been generated; set SECRET_KEY for production", RuntimeWarning)
-            secret_key = generated
+            secret_file = storage_root / ".myfsio.sys" / "config" / ".secret"
+            if secret_file.exists():
+                secret_key = secret_file.read_text().strip()
+            else:
+                generated = secrets.token_urlsafe(32)
+                if secret_key == default_secret:
+                    warnings.warn("Using insecure default SECRET_KEY. A random value has been generated and persisted; set SECRET_KEY for production", RuntimeWarning)
+                try:
+                    secret_file.parent.mkdir(parents=True, exist_ok=True)
+                    secret_file.write_text(generated)
+                    secret_key = generated
+                except OSError:
+                    secret_key = generated
 
         iam_env_override = "IAM_CONFIG" in overrides or "IAM_CONFIG" in os.environ
         bucket_policy_override = "BUCKET_POLICY_PATH" in overrides or "BUCKET_POLICY_PATH" in os.environ
 
-        default_iam_path = PROJECT_ROOT / "data" / ".myfsio.sys" / "config" / "iam.json"
-        default_bucket_policy_path = PROJECT_ROOT / "data" / ".myfsio.sys" / "config" / "bucket_policies.json"
+        default_iam_path = storage_root / ".myfsio.sys" / "config" / "iam.json"
+        default_bucket_policy_path = storage_root / ".myfsio.sys" / "config" / "bucket_policies.json"
 
         iam_config_path = Path(_get("IAM_CONFIG", default_iam_path)).resolve()
         bucket_policy_path = Path(_get("BUCKET_POLICY_PATH", default_bucket_policy_path)).resolve()
 
         iam_config_path = _prepare_config_file(
             iam_config_path,
-            legacy_path=None if iam_env_override else PROJECT_ROOT / "data" / "iam.json",
+            legacy_path=None if iam_env_override else storage_root / "iam.json",
         )
         bucket_policy_path = _prepare_config_file(
             bucket_policy_path,
-            legacy_path=None if bucket_policy_override else PROJECT_ROOT / "data" / "bucket_policies.json",
+            legacy_path=None if bucket_policy_override else storage_root / "bucket_policies.json",
         )
-        api_base_url = str(_get("API_BASE_URL", "http://127.0.0.1:5000"))
+        api_base_url = _get("API_BASE_URL", None)
+        if api_base_url:
+            api_base_url = str(api_base_url)
+
         aws_region = str(_get("AWS_REGION", "us-east-1"))
         aws_service = str(_get("AWS_SERVICE", "s3"))
         enforce_ui_policies = str(_get("UI_ENFORCE_BUCKET_POLICIES", "0")).lower() in {"1", "true", "yes", "on"}
         log_level = str(_get("LOG_LEVEL", "INFO")).upper()
-        log_dir = Path(_get("LOG_DIR", PROJECT_ROOT / "logs")).resolve()
+        log_to_file = str(_get("LOG_TO_FILE", "1")).lower() in {"1", "true", "yes", "on"}
+        log_dir = Path(_get("LOG_DIR", storage_root.parent / "logs")).resolve()
         log_dir.mkdir(parents=True, exist_ok=True)
         log_path = log_dir / str(_get("LOG_FILE", "app.log"))
         log_max_bytes = int(_get("LOG_MAX_BYTES", 5 * 1024 * 1024))
@@ -120,19 +150,19 @@
             return parts or default
 
         cors_origins = _csv(str(_get("CORS_ORIGINS", "*")), ["*"])
-        cors_methods = _csv(str(_get("CORS_METHODS", "GET,PUT,POST,DELETE,OPTIONS")), ["GET", "PUT", "POST", "DELETE", "OPTIONS"])
-        cors_allow_headers = _csv(str(_get("CORS_ALLOW_HEADERS", "Content-Type,X-Access-Key,X-Secret-Key,X-Amz-Algorithm,X-Amz-Credential,X-Amz-Date,X-Amz-Expires,X-Amz-SignedHeaders,X-Amz-Signature")), [
-            "Content-Type",
-            "X-Access-Key",
-            "X-Secret-Key",
-            "X-Amz-Algorithm",
-            "X-Amz-Credential",
-            "X-Amz-Date",
-            "X-Amz-Expires",
-            "X-Amz-SignedHeaders",
-            "X-Amz-Signature",
-        ])
+        cors_methods = _csv(str(_get("CORS_METHODS", "GET,PUT,POST,DELETE,OPTIONS,HEAD")), ["GET", "PUT", "POST", "DELETE", "OPTIONS", "HEAD"])
+        cors_allow_headers = _csv(str(_get("CORS_ALLOW_HEADERS", "*")), ["*"])
+        cors_expose_headers = _csv(str(_get("CORS_EXPOSE_HEADERS", "*")), ["*"])
         session_lifetime_days = int(_get("SESSION_LIFETIME_DAYS", 30))
+        bucket_stats_cache_ttl = int(_get("BUCKET_STATS_CACHE_TTL", 60))
+
+        encryption_enabled = str(_get("ENCRYPTION_ENABLED", "0")).lower() in {"1", "true", "yes", "on"}
+        encryption_keys_dir = storage_root / ".myfsio.sys" / "keys"
+        encryption_master_key_path = Path(_get("ENCRYPTION_MASTER_KEY_PATH", encryption_keys_dir / "master.key")).resolve()
+        kms_enabled = str(_get("KMS_ENABLED", "0")).lower() in {"1", "true", "yes", "on"}
+        kms_keys_path = Path(_get("KMS_KEYS_PATH", encryption_keys_dir / "kms_keys.json")).resolve()
+        default_encryption_algorithm = str(_get("DEFAULT_ENCRYPTION_ALGORITHM", "AES256"))
+        display_timezone = str(_get("DISPLAY_TIMEZONE", "UTC"))
 
         return cls(storage_root=storage_root,
                    max_upload_size=max_upload_size,
@@ -145,6 +175,7 @@
                    aws_service=aws_service,
                    ui_enforce_bucket_policies=enforce_ui_policies,
                    log_level=log_level,
+                   log_to_file=log_to_file,
                    log_path=log_path,
                    log_max_bytes=log_max_bytes,
                    log_backup_count=log_backup_count,
@@ -153,13 +184,108 @@
                    cors_origins=cors_origins,
                    cors_methods=cors_methods,
                    cors_allow_headers=cors_allow_headers,
+                   cors_expose_headers=cors_expose_headers,
                    session_lifetime_days=session_lifetime_days,
                    auth_max_attempts=auth_max_attempts,
                    auth_lockout_minutes=auth_lockout_minutes,
                    bulk_delete_max_keys=bulk_delete_max_keys,
                    secret_ttl_seconds=secret_ttl_seconds,
                    stream_chunk_size=stream_chunk_size,
-                   multipart_min_part_size=multipart_min_part_size)
+                   multipart_min_part_size=multipart_min_part_size,
+                   bucket_stats_cache_ttl=bucket_stats_cache_ttl,
+                   encryption_enabled=encryption_enabled,
+                   encryption_master_key_path=encryption_master_key_path,
+                   kms_enabled=kms_enabled,
+                   kms_keys_path=kms_keys_path,
+                   default_encryption_algorithm=default_encryption_algorithm,
+                   display_timezone=display_timezone)
+
+    def validate_and_report(self) -> list[str]:
+        """Validate configuration and return a list of warnings/issues.
+
+        Call this at startup to detect potential misconfigurations before
+        the application fully commits to running.
+        """
+        issues = []
+
+        try:
+            test_file = self.storage_root / ".write_test"
+            test_file.touch()
+            test_file.unlink()
+        except (OSError, PermissionError) as e:
+            issues.append(f"CRITICAL: STORAGE_ROOT '{self.storage_root}' is not writable: {e}")
+
+        storage_str = str(self.storage_root).lower()
+        if "/tmp" in storage_str or "\\temp" in storage_str or "appdata\\local\\temp" in storage_str:
+            issues.append(f"WARNING: STORAGE_ROOT '{self.storage_root}' appears to be a temporary directory. Data may be lost on reboot!")
+
+        try:
+            self.iam_config_path.relative_to(self.storage_root)
+        except ValueError:
+            issues.append(f"WARNING: IAM_CONFIG '{self.iam_config_path}' is outside STORAGE_ROOT '{self.storage_root}'. Consider setting IAM_CONFIG explicitly or ensuring paths are aligned.")
+
+        try:
+            self.bucket_policy_path.relative_to(self.storage_root)
+        except ValueError:
+            issues.append(f"WARNING: BUCKET_POLICY_PATH '{self.bucket_policy_path}' is outside STORAGE_ROOT '{self.storage_root}'. Consider setting BUCKET_POLICY_PATH explicitly.")
+
+        try:
+            self.log_path.parent.mkdir(parents=True, exist_ok=True)
+            test_log = self.log_path.parent / ".write_test"
+            test_log.touch()
+            test_log.unlink()
+        except (OSError, PermissionError) as e:
+            issues.append(f"WARNING: Log directory '{self.log_path.parent}' is not writable: {e}")
+
+        log_str = str(self.log_path).lower()
+        if "/tmp" in log_str or "\\temp" in log_str or "appdata\\local\\temp" in log_str:
+            issues.append(f"WARNING: LOG_DIR '{self.log_path.parent}' appears to be a temporary directory. Logs may be lost on reboot!")
+
+        if self.encryption_enabled:
+            try:
+                self.encryption_master_key_path.relative_to(self.storage_root)
+            except ValueError:
+                issues.append(f"WARNING: ENCRYPTION_MASTER_KEY_PATH '{self.encryption_master_key_path}' is outside STORAGE_ROOT. Ensure proper backup procedures.")
+
+        if self.kms_enabled:
+            try:
+                self.kms_keys_path.relative_to(self.storage_root)
+            except ValueError:
+                issues.append(f"WARNING: KMS_KEYS_PATH '{self.kms_keys_path}' is outside STORAGE_ROOT. Ensure proper backup procedures.")
+
+        if self.secret_key == "dev-secret-key":
+            issues.append("WARNING: Using default SECRET_KEY. Set SECRET_KEY environment variable for production.")
+
+        if "*" in self.cors_origins:
+            issues.append("INFO: CORS_ORIGINS is set to '*'. Consider restricting to specific domains in production.")
+
+        return issues
+
+    def print_startup_summary(self) -> None:
+        """Print a summary of the configuration at startup."""
+        print("\n" + "=" * 60)
+        print("MyFSIO Configuration Summary")
+        print("=" * 60)
+        print(f"  STORAGE_ROOT:  {self.storage_root}")
+        print(f"  IAM_CONFIG:    {self.iam_config_path}")
+        print(f"  BUCKET_POLICY: {self.bucket_policy_path}")
+        print(f"  LOG_PATH:      {self.log_path}")
+        if self.api_base_url:
+            print(f"  API_BASE_URL:  {self.api_base_url}")
+        if self.encryption_enabled:
+            print(f"  ENCRYPTION:    Enabled (Master key: {self.encryption_master_key_path})")
+        if self.kms_enabled:
+            print(f"  KMS:           Enabled (Keys: {self.kms_keys_path})")
+        print("=" * 60)
+
+        issues = self.validate_and_report()
+        if issues:
+            print("\nConfiguration Issues Detected:")
+            for issue in issues:
+                print(f"  • {issue}")
+            print()
+        else:
+            print("  ✓ Configuration validated successfully\n")
 
     def to_flask_config(self) -> Dict[str, Any]:
         return {
@@ -179,7 +305,9 @@
             "SECRET_TTL_SECONDS": self.secret_ttl_seconds,
             "STREAM_CHUNK_SIZE": self.stream_chunk_size,
             "MULTIPART_MIN_PART_SIZE": self.multipart_min_part_size,
+            "BUCKET_STATS_CACHE_TTL": self.bucket_stats_cache_ttl,
             "LOG_LEVEL": self.log_level,
+            "LOG_TO_FILE": self.log_to_file,
             "LOG_FILE": str(self.log_path),
             "LOG_MAX_BYTES": self.log_max_bytes,
             "LOG_BACKUP_COUNT": self.log_backup_count,
@@ -188,5 +316,12 @@
             "CORS_ORIGINS": self.cors_origins,
             "CORS_METHODS": self.cors_methods,
             "CORS_ALLOW_HEADERS": self.cors_allow_headers,
+            "CORS_EXPOSE_HEADERS": self.cors_expose_headers,
             "SESSION_LIFETIME_DAYS": self.session_lifetime_days,
+            "ENCRYPTION_ENABLED": self.encryption_enabled,
+            "ENCRYPTION_MASTER_KEY_PATH": str(self.encryption_master_key_path),
+            "KMS_ENABLED": self.kms_enabled,
+            "KMS_KEYS_PATH": str(self.kms_keys_path),
+            "DEFAULT_ENCRYPTION_ALGORITHM": self.default_encryption_algorithm,
+            "DISPLAY_TIMEZONE": self.display_timezone,
         }
```
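`validate_and_report` and `print_startup_summary` together give a startup preflight. A usage sketch; the `STORAGE_ROOT` override key is an assumption based on the Flask config names in this diff, and the path is a throwaway:

```python
# Sketch: running the new config preflight at startup.
from app.config import AppConfig

config = AppConfig.from_env({"STORAGE_ROOT": "/tmp/myfsio-demo"})  # assumed override key

for issue in config.validate_and_report():
    print(issue)  # expect the WARNING about /tmp being a temporary directory

config.print_startup_summary()  # banner plus the same issue list
```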
app/encrypted_storage.py (new file, 279 lines)

```python
"""Encrypted storage layer that wraps ObjectStorage with encryption support."""
from __future__ import annotations

import io
from pathlib import Path
from typing import Any, BinaryIO, Dict, Optional

from .encryption import EncryptionManager, EncryptionMetadata, EncryptionError
from .storage import ObjectStorage, ObjectMeta, StorageError


class EncryptedObjectStorage:
    """Object storage with transparent server-side encryption.

    This class wraps ObjectStorage and provides transparent encryption/decryption
    of objects based on bucket encryption configuration.

    Encryption is applied when:
    1. Bucket has default encryption configured (SSE-S3 or SSE-KMS)
    2. Client explicitly requests encryption via headers

    The encryption metadata is stored alongside object metadata.
    """

    STREAMING_THRESHOLD = 64 * 1024

    def __init__(self, storage: ObjectStorage, encryption_manager: EncryptionManager):
        self.storage = storage
        self.encryption = encryption_manager

    @property
    def root(self) -> Path:
        return self.storage.root

    def _should_encrypt(self, bucket_name: str,
                        server_side_encryption: str | None = None) -> tuple[bool, str, str | None]:
        """Determine if object should be encrypted.

        Returns:
            Tuple of (should_encrypt, algorithm, kms_key_id)
        """
        if not self.encryption.enabled:
            return False, "", None

        if server_side_encryption:
            if server_side_encryption == "AES256":
                return True, "AES256", None
            elif server_side_encryption.startswith("aws:kms"):
                parts = server_side_encryption.split(":")
                kms_key_id = parts[2] if len(parts) > 2 else None
                return True, "aws:kms", kms_key_id

        try:
            encryption_config = self.storage.get_bucket_encryption(bucket_name)
            if encryption_config and encryption_config.get("Rules"):
                rule = encryption_config["Rules"][0]
                # AWS format: Rules[].ApplyServerSideEncryptionByDefault.SSEAlgorithm
                sse_default = rule.get("ApplyServerSideEncryptionByDefault", {})
                algorithm = sse_default.get("SSEAlgorithm", "AES256")
                kms_key_id = sse_default.get("KMSMasterKeyID")
                return True, algorithm, kms_key_id
        except StorageError:
            pass

        return False, "", None

    def _is_encrypted(self, metadata: Dict[str, str]) -> bool:
        """Check if object is encrypted based on its metadata."""
        return "x-amz-server-side-encryption" in metadata

    def put_object(
        self,
        bucket_name: str,
        object_key: str,
        stream: BinaryIO,
        *,
        metadata: Optional[Dict[str, str]] = None,
        server_side_encryption: Optional[str] = None,
        kms_key_id: Optional[str] = None,
    ) -> ObjectMeta:
        """Store an object, optionally with encryption.

        Args:
            bucket_name: Name of the bucket
            object_key: Key for the object
            stream: Binary stream of object data
            metadata: Optional user metadata
            server_side_encryption: Encryption algorithm ("AES256" or "aws:kms")
            kms_key_id: KMS key ID (for aws:kms encryption)

        Returns:
            ObjectMeta with object information
        """
        should_encrypt, algorithm, detected_kms_key = self._should_encrypt(
            bucket_name, server_side_encryption
        )

        if kms_key_id is None:
            kms_key_id = detected_kms_key

        if should_encrypt:
            data = stream.read()

            try:
                ciphertext, enc_metadata = self.encryption.encrypt_object(
                    data,
                    algorithm=algorithm,
                    kms_key_id=kms_key_id,
                    context={"bucket": bucket_name, "key": object_key},
                )

                combined_metadata = metadata.copy() if metadata else {}
                combined_metadata.update(enc_metadata.to_dict())

                encrypted_stream = io.BytesIO(ciphertext)
                result = self.storage.put_object(
                    bucket_name,
                    object_key,
                    encrypted_stream,
                    metadata=combined_metadata,
                )

                result.metadata = combined_metadata
                return result

            except EncryptionError as exc:
                raise StorageError(f"Encryption failed: {exc}") from exc
        else:
            return self.storage.put_object(
                bucket_name,
                object_key,
                stream,
                metadata=metadata,
            )

    def get_object_data(self, bucket_name: str, object_key: str) -> tuple[bytes, Dict[str, str]]:
        """Get object data, decrypting if necessary.

        Returns:
            Tuple of (data, metadata)
        """
        path = self.storage.get_object_path(bucket_name, object_key)
        metadata = self.storage.get_object_metadata(bucket_name, object_key)

        with path.open("rb") as f:
            data = f.read()

        enc_metadata = EncryptionMetadata.from_dict(metadata)
        if enc_metadata:
            try:
                data = self.encryption.decrypt_object(
                    data,
                    enc_metadata,
                    context={"bucket": bucket_name, "key": object_key},
                )
            except EncryptionError as exc:
                raise StorageError(f"Decryption failed: {exc}") from exc

        clean_metadata = {
            k: v for k, v in metadata.items()
            if not k.startswith("x-amz-encryption")
            and k != "x-amz-encrypted-data-key"
        }

        return data, clean_metadata

    def get_object_stream(self, bucket_name: str, object_key: str) -> tuple[BinaryIO, Dict[str, str], int]:
        """Get object as a stream, decrypting if necessary.

        Returns:
            Tuple of (stream, metadata, original_size)
        """
        data, metadata = self.get_object_data(bucket_name, object_key)
        return io.BytesIO(data), metadata, len(data)

    def list_buckets(self):
        return self.storage.list_buckets()

    def bucket_exists(self, bucket_name: str) -> bool:
        return self.storage.bucket_exists(bucket_name)

    def create_bucket(self, bucket_name: str) -> None:
        return self.storage.create_bucket(bucket_name)

    def delete_bucket(self, bucket_name: str) -> None:
        return self.storage.delete_bucket(bucket_name)

    def bucket_stats(self, bucket_name: str, cache_ttl: int = 60):
        return self.storage.bucket_stats(bucket_name, cache_ttl)

    def list_objects(self, bucket_name: str, **kwargs):
        return self.storage.list_objects(bucket_name, **kwargs)

    def list_objects_all(self, bucket_name: str):
        return self.storage.list_objects_all(bucket_name)

    def get_object_path(self, bucket_name: str, object_key: str):
        return self.storage.get_object_path(bucket_name, object_key)

    def get_object_metadata(self, bucket_name: str, object_key: str):
        return self.storage.get_object_metadata(bucket_name, object_key)

    def delete_object(self, bucket_name: str, object_key: str) -> None:
        return self.storage.delete_object(bucket_name, object_key)

    def purge_object(self, bucket_name: str, object_key: str) -> None:
        return self.storage.purge_object(bucket_name, object_key)

    def is_versioning_enabled(self, bucket_name: str) -> bool:
        return self.storage.is_versioning_enabled(bucket_name)

    def set_bucket_versioning(self, bucket_name: str, enabled: bool) -> None:
        return self.storage.set_bucket_versioning(bucket_name, enabled)

    def get_bucket_tags(self, bucket_name: str):
        return self.storage.get_bucket_tags(bucket_name)

    def set_bucket_tags(self, bucket_name: str, tags):
        return self.storage.set_bucket_tags(bucket_name, tags)

    def get_bucket_cors(self, bucket_name: str):
        return self.storage.get_bucket_cors(bucket_name)

    def set_bucket_cors(self, bucket_name: str, rules):
        return self.storage.set_bucket_cors(bucket_name, rules)

    def get_bucket_encryption(self, bucket_name: str):
        return self.storage.get_bucket_encryption(bucket_name)

    def set_bucket_encryption(self, bucket_name: str, config_payload):
        return self.storage.set_bucket_encryption(bucket_name, config_payload)

    def get_bucket_lifecycle(self, bucket_name: str):
        return self.storage.get_bucket_lifecycle(bucket_name)

    def set_bucket_lifecycle(self, bucket_name: str, rules):
        return self.storage.set_bucket_lifecycle(bucket_name, rules)

    def get_object_tags(self, bucket_name: str, object_key: str):
        return self.storage.get_object_tags(bucket_name, object_key)

    def set_object_tags(self, bucket_name: str, object_key: str, tags):
        return self.storage.set_object_tags(bucket_name, object_key, tags)

    def delete_object_tags(self, bucket_name: str, object_key: str):
        return self.storage.delete_object_tags(bucket_name, object_key)

    def list_object_versions(self, bucket_name: str, object_key: str):
        return self.storage.list_object_versions(bucket_name, object_key)

    def restore_object_version(self, bucket_name: str, object_key: str, version_id: str):
        return self.storage.restore_object_version(bucket_name, object_key, version_id)

    def list_orphaned_objects(self, bucket_name: str):
        return self.storage.list_orphaned_objects(bucket_name)

    def initiate_multipart_upload(self, bucket_name: str, object_key: str, *, metadata=None) -> str:
        return self.storage.initiate_multipart_upload(bucket_name, object_key, metadata=metadata)

    def upload_multipart_part(self, bucket_name: str, upload_id: str, part_number: int, stream: BinaryIO) -> str:
        return self.storage.upload_multipart_part(bucket_name, upload_id, part_number, stream)

    def complete_multipart_upload(self, bucket_name: str, upload_id: str, ordered_parts):
        return self.storage.complete_multipart_upload(bucket_name, upload_id, ordered_parts)

    def abort_multipart_upload(self, bucket_name: str, upload_id: str) -> None:
        return self.storage.abort_multipart_upload(bucket_name, upload_id)

    def list_multipart_parts(self, bucket_name: str, upload_id: str):
        return self.storage.list_multipart_parts(bucket_name, upload_id)

    def get_bucket_quota(self, bucket_name: str):
        return self.storage.get_bucket_quota(bucket_name)

    def set_bucket_quota(self, bucket_name: str, *, max_bytes=None, max_objects=None):
        return self.storage.set_bucket_quota(bucket_name, max_bytes=max_bytes, max_objects=max_objects)

    def _compute_etag(self, path: Path) -> str:
        return self.storage._compute_etag(path)
```
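`EncryptedObjectStorage` mirrors the plain `ObjectStorage` surface while encrypting on `put_object` and decrypting on `get_object_data`. A wiring sketch matching how `create_app()` composes it; the `ObjectStorage(Path(...))` constructor argument is an assumption, since that class is not shown in this compare:

```python
# Sketch: transparent SSE round trip through the encrypted wrapper.
import io
from pathlib import Path

from app.encrypted_storage import EncryptedObjectStorage
from app.encryption import EncryptionManager
from app.storage import ObjectStorage

storage = ObjectStorage(Path("./data"))  # assumed constructor signature
manager = EncryptionManager({
    "encryption_enabled": True,
    "encryption_master_key_path": "./data/.myfsio.sys/keys/master.key",
    "default_encryption_algorithm": "AES256",
})
encrypted = EncryptedObjectStorage(storage, manager)

encrypted.create_bucket("demo-bucket")
encrypted.put_object(
    "demo-bucket", "secret.txt", io.BytesIO(b"top secret"),
    server_side_encryption="AES256",
)
data, metadata = encrypted.get_object_data("demo-bucket", "secret.txt")
print(data)  # b'top secret', decrypted transparently; metadata is scrubbed of key material
```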
395
app/encryption.py
Normal file
395
app/encryption.py
Normal file
@@ -0,0 +1,395 @@
|
|||||||
|
"""Encryption providers for server-side and client-side encryption."""
|
||||||
|
from __future__ import annotations
|
||||||
|
|
||||||
|
import base64
|
||||||
|
import io
|
||||||
|
import json
|
||||||
|
import secrets
|
||||||
|
from dataclasses import dataclass
|
||||||
|
from pathlib import Path
|
||||||
|
from typing import Any, BinaryIO, Dict, Generator, Optional
|
||||||
|
|
||||||
|
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
|
||||||
|
|
||||||
|
|
||||||
|
class EncryptionError(Exception):
|
||||||
|
"""Raised when encryption/decryption fails."""
|
||||||
|
|
||||||
|
|
||||||
|
@dataclass
|
||||||
|
class EncryptionResult:
|
||||||
|
"""Result of encrypting data."""
|
||||||
|
ciphertext: bytes
|
||||||
|
nonce: bytes
|
||||||
|
key_id: str
|
||||||
|
encrypted_data_key: bytes
|
||||||
|
|
||||||
|
|
||||||
|
@dataclass
|
||||||
|
class EncryptionMetadata:
|
||||||
|
"""Metadata stored with encrypted objects."""
|
||||||
|
algorithm: str
|
||||||
|
key_id: str
|
||||||
|
nonce: bytes
|
||||||
|
encrypted_data_key: bytes
|
||||||
|
|
||||||
|
def to_dict(self) -> Dict[str, str]:
|
||||||
|
return {
|
||||||
|
"x-amz-server-side-encryption": self.algorithm,
|
||||||
|
"x-amz-encryption-key-id": self.key_id,
|
||||||
|
"x-amz-encryption-nonce": base64.b64encode(self.nonce).decode(),
|
||||||
|
"x-amz-encrypted-data-key": base64.b64encode(self.encrypted_data_key).decode(),
|
||||||
|
}
|
||||||
|
|
||||||
|
@classmethod
|
||||||
|
def from_dict(cls, data: Dict[str, str]) -> Optional["EncryptionMetadata"]:
|
||||||
|
algorithm = data.get("x-amz-server-side-encryption")
|
||||||
|
if not algorithm:
|
||||||
|
return None
|
||||||
|
try:
|
||||||
|
return cls(
|
||||||
|
algorithm=algorithm,
|
||||||
|
key_id=data.get("x-amz-encryption-key-id", "local"),
|
||||||
|
nonce=base64.b64decode(data.get("x-amz-encryption-nonce", "")),
|
||||||
|
encrypted_data_key=base64.b64decode(data.get("x-amz-encrypted-data-key", "")),
|
||||||
|
)
|
||||||
|
except Exception:
|
||||||
|
return None
|
||||||
|
|
||||||
|
|
||||||
|
class EncryptionProvider:
|
||||||
|
"""Base class for encryption providers."""
|
||||||
|
|
||||||
|
def encrypt(self, plaintext: bytes, context: Dict[str, str] | None = None) -> EncryptionResult:
|
||||||
|
raise NotImplementedError
|
||||||
|
|
||||||
|
def decrypt(self, ciphertext: bytes, nonce: bytes, encrypted_data_key: bytes,
|
||||||
|
key_id: str, context: Dict[str, str] | None = None) -> bytes:
|
||||||
|
raise NotImplementedError
|
||||||
|
|
||||||
|
def generate_data_key(self) -> tuple[bytes, bytes]:
|
||||||
|
"""Generate a data key and its encrypted form.
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
Tuple of (plaintext_key, encrypted_key)
|
||||||
|
"""
|
||||||
|
raise NotImplementedError
|
||||||
|
|
||||||
|
|
||||||
|
class LocalKeyEncryption(EncryptionProvider):
    """SSE-S3 style encryption using a local master key.

    Uses envelope encryption:
    1. Generate a unique data key for each object
    2. Encrypt the data with the data key (AES-256-GCM)
    3. Encrypt the data key with the master key
    4. Store the encrypted data key alongside the ciphertext
    """

    KEY_ID = "local"

    def __init__(self, master_key_path: Path):
        self.master_key_path = master_key_path
        self._master_key: bytes | None = None

    @property
    def master_key(self) -> bytes:
        if self._master_key is None:
            self._master_key = self._load_or_create_master_key()
        return self._master_key

    def _load_or_create_master_key(self) -> bytes:
        """Load master key from file or generate a new one."""
        if self.master_key_path.exists():
            try:
                return base64.b64decode(self.master_key_path.read_text().strip())
            except Exception as exc:
                raise EncryptionError(f"Failed to load master key: {exc}") from exc

        key = secrets.token_bytes(32)
        try:
            self.master_key_path.parent.mkdir(parents=True, exist_ok=True)
            self.master_key_path.write_text(base64.b64encode(key).decode())
        except OSError as exc:
            raise EncryptionError(f"Failed to save master key: {exc}") from exc
        return key

    def _encrypt_data_key(self, data_key: bytes) -> bytes:
        """Encrypt the data key with the master key."""
        aesgcm = AESGCM(self.master_key)
        nonce = secrets.token_bytes(12)
        encrypted = aesgcm.encrypt(nonce, data_key, None)
        return nonce + encrypted

    def _decrypt_data_key(self, encrypted_data_key: bytes) -> bytes:
        """Decrypt the data key using the master key."""
        if len(encrypted_data_key) < 12 + 32 + 16:  # nonce + key + tag
            raise EncryptionError("Invalid encrypted data key")
        aesgcm = AESGCM(self.master_key)
        nonce = encrypted_data_key[:12]
        ciphertext = encrypted_data_key[12:]
        try:
            return aesgcm.decrypt(nonce, ciphertext, None)
        except Exception as exc:
            raise EncryptionError(f"Failed to decrypt data key: {exc}") from exc

    def generate_data_key(self) -> tuple[bytes, bytes]:
        """Generate a data key and its encrypted form."""
        plaintext_key = secrets.token_bytes(32)
        encrypted_key = self._encrypt_data_key(plaintext_key)
        return plaintext_key, encrypted_key

    def encrypt(self, plaintext: bytes, context: Dict[str, str] | None = None) -> EncryptionResult:
        """Encrypt data using envelope encryption."""
        data_key, encrypted_data_key = self.generate_data_key()

        aesgcm = AESGCM(data_key)
        nonce = secrets.token_bytes(12)
        ciphertext = aesgcm.encrypt(nonce, plaintext, None)

        return EncryptionResult(
            ciphertext=ciphertext,
            nonce=nonce,
            key_id=self.KEY_ID,
            encrypted_data_key=encrypted_data_key,
        )

    def decrypt(self, ciphertext: bytes, nonce: bytes, encrypted_data_key: bytes,
                key_id: str, context: Dict[str, str] | None = None) -> bytes:
        """Decrypt data using envelope encryption."""
        # Decrypt the data key
        data_key = self._decrypt_data_key(encrypted_data_key)

        # Decrypt the data
        aesgcm = AESGCM(data_key)
        try:
            return aesgcm.decrypt(nonce, ciphertext, None)
        except Exception as exc:
            raise EncryptionError(f"Failed to decrypt data: {exc}") from exc

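As a quick sanity check of the envelope scheme above, a round-trip sketch (illustrative, not part of the commit; the master-key path is an arbitrary example):

# Minimal sketch, assuming only the class above; the path is an example.
provider = LocalKeyEncryption(Path("/tmp/myfsio-example/master.key"))
result = provider.encrypt(b"hello world")
plain = provider.decrypt(result.ciphertext, result.nonce,
                         result.encrypted_data_key, result.key_id)
assert plain == b"hello world"
# Each call mints a fresh 32-byte data key; only its master-key-wrapped
# form (result.encrypted_data_key) ever leaves the provider.
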
class StreamingEncryptor:
    """Encrypts/decrypts data in streaming fashion for large files.

    For large files, we encrypt in chunks. Each chunk is encrypted with the
    same data key but a unique nonce derived from the base nonce + chunk index.
    """

    CHUNK_SIZE = 64 * 1024
    HEADER_SIZE = 4

    def __init__(self, provider: EncryptionProvider, chunk_size: int = CHUNK_SIZE):
        self.provider = provider
        self.chunk_size = chunk_size

    def _derive_chunk_nonce(self, base_nonce: bytes, chunk_index: int) -> bytes:
        """Derive a unique nonce for each chunk."""
        # XOR the base nonce with the chunk index
        nonce_int = int.from_bytes(base_nonce, "big")
        derived = nonce_int ^ chunk_index
        return derived.to_bytes(12, "big")

    def encrypt_stream(self, stream: BinaryIO,
                       context: Dict[str, str] | None = None) -> tuple[BinaryIO, EncryptionMetadata]:
        """Encrypt a stream and return encrypted stream + metadata."""
        data_key, encrypted_data_key = self.provider.generate_data_key()
        base_nonce = secrets.token_bytes(12)

        aesgcm = AESGCM(data_key)
        encrypted_chunks = []
        chunk_index = 0

        while True:
            chunk = stream.read(self.chunk_size)
            if not chunk:
                break

            chunk_nonce = self._derive_chunk_nonce(base_nonce, chunk_index)
            encrypted_chunk = aesgcm.encrypt(chunk_nonce, chunk, None)

            size_prefix = len(encrypted_chunk).to_bytes(self.HEADER_SIZE, "big")
            encrypted_chunks.append(size_prefix + encrypted_chunk)
            chunk_index += 1

        header = chunk_index.to_bytes(4, "big")
        encrypted_data = header + b"".join(encrypted_chunks)

        metadata = EncryptionMetadata(
            algorithm="AES256",
            key_id=self.provider.KEY_ID if hasattr(self.provider, "KEY_ID") else "local",
            nonce=base_nonce,
            encrypted_data_key=encrypted_data_key,
        )

        return io.BytesIO(encrypted_data), metadata

    def decrypt_stream(self, stream: BinaryIO, metadata: EncryptionMetadata) -> BinaryIO:
        """Decrypt a stream using the provided metadata."""
        if isinstance(self.provider, LocalKeyEncryption):
            data_key = self.provider._decrypt_data_key(metadata.encrypted_data_key)
        else:
            raise EncryptionError("Unsupported provider for streaming decryption")

        aesgcm = AESGCM(data_key)
        base_nonce = metadata.nonce

        chunk_count_bytes = stream.read(4)
        if len(chunk_count_bytes) < 4:
            raise EncryptionError("Invalid encrypted stream: missing header")
        chunk_count = int.from_bytes(chunk_count_bytes, "big")

        decrypted_chunks = []
        for chunk_index in range(chunk_count):
            size_bytes = stream.read(self.HEADER_SIZE)
            if len(size_bytes) < self.HEADER_SIZE:
                raise EncryptionError(f"Invalid encrypted stream: truncated at chunk {chunk_index}")
            chunk_size = int.from_bytes(size_bytes, "big")

            encrypted_chunk = stream.read(chunk_size)
            if len(encrypted_chunk) < chunk_size:
                raise EncryptionError(f"Invalid encrypted stream: incomplete chunk {chunk_index}")

            chunk_nonce = self._derive_chunk_nonce(base_nonce, chunk_index)
            try:
                decrypted_chunk = aesgcm.decrypt(chunk_nonce, encrypted_chunk, None)
                decrypted_chunks.append(decrypted_chunk)
            except Exception as exc:
                raise EncryptionError(f"Failed to decrypt chunk {chunk_index}: {exc}") from exc

        return io.BytesIO(b"".join(decrypted_chunks))

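The wire layout this produces is worth spelling out, since `decrypt_stream` depends on it exactly: a 4-byte big-endian chunk count, then per chunk a 4-byte length prefix followed by the AES-GCM ciphertext (chunk plus 16-byte tag), with `nonce_i = base_nonce XOR i` guaranteeing no nonce repeats under one data key. Note that despite the name, the implementation above buffers all encrypted chunks before returning a `BytesIO`. A round-trip sketch (illustrative; the path and sizes are examples):

# Layout produced by encrypt_stream (derived from the code above):
#   [chunk_count: 4 bytes BE]
#   repeat chunk_count times:
#       [len(ct_i): 4 bytes BE] [ct_i = AES-GCM(data_key, nonce_i, chunk_i)]
enc = StreamingEncryptor(LocalKeyEncryption(Path("/tmp/myfsio-example/master.key")))
stream, meta = enc.encrypt_stream(io.BytesIO(b"x" * 200_000))   # ~4 chunks at 64 KiB
assert enc.decrypt_stream(stream, meta).read() == b"x" * 200_000
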
class EncryptionManager:
    """Manages encryption providers and operations."""

    def __init__(self, config: Dict[str, Any]):
        self.config = config
        self._local_provider: LocalKeyEncryption | None = None
        self._kms_provider: Any = None  # Set by KMS module
        self._streaming_encryptor: StreamingEncryptor | None = None

    @property
    def enabled(self) -> bool:
        return self.config.get("encryption_enabled", False)

    @property
    def default_algorithm(self) -> str:
        return self.config.get("default_encryption_algorithm", "AES256")

    def get_local_provider(self) -> LocalKeyEncryption:
        if self._local_provider is None:
            key_path = Path(self.config.get("encryption_master_key_path", "data/.myfsio.sys/keys/master.key"))
            self._local_provider = LocalKeyEncryption(key_path)
        return self._local_provider

    def set_kms_provider(self, kms_provider: Any) -> None:
        """Set the KMS provider (injected from kms module)."""
        self._kms_provider = kms_provider

    def get_provider(self, algorithm: str, kms_key_id: str | None = None) -> EncryptionProvider:
        """Get the appropriate encryption provider for the algorithm."""
        if algorithm == "AES256":
            return self.get_local_provider()
        elif algorithm == "aws:kms":
            if self._kms_provider is None:
                raise EncryptionError("KMS is not configured")
            return self._kms_provider.get_provider(kms_key_id)
        else:
            raise EncryptionError(f"Unsupported encryption algorithm: {algorithm}")

    def get_streaming_encryptor(self) -> StreamingEncryptor:
        if self._streaming_encryptor is None:
            self._streaming_encryptor = StreamingEncryptor(self.get_local_provider())
        return self._streaming_encryptor

    def encrypt_object(self, data: bytes, algorithm: str = "AES256",
                       kms_key_id: str | None = None,
                       context: Dict[str, str] | None = None) -> tuple[bytes, EncryptionMetadata]:
        """Encrypt object data."""
        provider = self.get_provider(algorithm, kms_key_id)
        result = provider.encrypt(data, context)

        metadata = EncryptionMetadata(
            algorithm=algorithm,
            key_id=result.key_id,
            nonce=result.nonce,
            encrypted_data_key=result.encrypted_data_key,
        )

        return result.ciphertext, metadata

    def decrypt_object(self, ciphertext: bytes, metadata: EncryptionMetadata,
                       context: Dict[str, str] | None = None) -> bytes:
        """Decrypt object data."""
        provider = self.get_provider(metadata.algorithm, metadata.key_id)
        return provider.decrypt(
            ciphertext,
            metadata.nonce,
            metadata.encrypted_data_key,
            metadata.key_id,
            context,
        )

    def encrypt_stream(self, stream: BinaryIO, algorithm: str = "AES256",
                       context: Dict[str, str] | None = None) -> tuple[BinaryIO, EncryptionMetadata]:
        """Encrypt a stream for large files."""
        encryptor = self.get_streaming_encryptor()
        return encryptor.encrypt_stream(stream, context)

    def decrypt_stream(self, stream: BinaryIO, metadata: EncryptionMetadata) -> BinaryIO:
        """Decrypt a stream."""
        encryptor = self.get_streaming_encryptor()
        return encryptor.decrypt_stream(stream, metadata)

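A minimal usage sketch for the manager (illustrative; the config keys are the ones the class reads above, the values are examples):

manager = EncryptionManager({
    "encryption_enabled": True,
    "default_encryption_algorithm": "AES256",
    "encryption_master_key_path": "data/.myfsio.sys/keys/master.key",
})
ciphertext, meta = manager.encrypt_object(b"secret bytes")           # SSE-S3 style
assert manager.decrypt_object(ciphertext, meta) == b"secret bytes"
# algorithm="aws:kms" routes through the injected provider and raises
# EncryptionError("KMS is not configured") until set_kms_provider() is called.
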
class ClientEncryptionHelper:
    """Helpers for client-side encryption.

    Client-side encryption is performed by the client, but this helper
    provides key generation and materials for clients that need them.
    """

    @staticmethod
    def generate_client_key() -> Dict[str, str]:
        """Generate a new client encryption key."""
        from datetime import datetime, timezone
        key = secrets.token_bytes(32)
        return {
            "key": base64.b64encode(key).decode(),
            "algorithm": "AES-256-GCM",
            "created_at": datetime.now(timezone.utc).isoformat(),
        }

    @staticmethod
    def encrypt_with_key(plaintext: bytes, key_b64: str) -> Dict[str, str]:
        """Encrypt data with a client-provided key."""
        key = base64.b64decode(key_b64)
        if len(key) != 32:
            raise EncryptionError("Key must be 256 bits (32 bytes)")

        aesgcm = AESGCM(key)
        nonce = secrets.token_bytes(12)
        ciphertext = aesgcm.encrypt(nonce, plaintext, None)

        return {
            "ciphertext": base64.b64encode(ciphertext).decode(),
            "nonce": base64.b64encode(nonce).decode(),
            "algorithm": "AES-256-GCM",
        }

    @staticmethod
    def decrypt_with_key(ciphertext_b64: str, nonce_b64: str, key_b64: str) -> bytes:
        """Decrypt data with a client-provided key."""
        key = base64.b64decode(key_b64)
        nonce = base64.b64decode(nonce_b64)
        ciphertext = base64.b64decode(ciphertext_b64)

        if len(key) != 32:
            raise EncryptionError("Key must be 256 bits (32 bytes)")

        aesgcm = AESGCM(key)
        try:
            return aesgcm.decrypt(nonce, ciphertext, None)
        except Exception as exc:
            raise EncryptionError(f"Decryption failed: {exc}") from exc

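The client-side helpers compose as follows (illustrative sketch using only the static methods above):

key_info = ClientEncryptionHelper.generate_client_key()
blob = ClientEncryptionHelper.encrypt_with_key(b"client data", key_info["key"])
plain = ClientEncryptionHelper.decrypt_with_key(
    blob["ciphertext"], blob["nonce"], key_info["key"]
)
assert plain == b"client data"
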
187 app/errors.py Normal file
@@ -0,0 +1,187 @@
"""Standardized error handling for API and UI responses."""
|
||||||
|
from __future__ import annotations
|
||||||
|
|
||||||
|
import logging
|
||||||
|
from dataclasses import dataclass, field
|
||||||
|
from typing import Optional, Dict, Any
|
||||||
|
from xml.etree.ElementTree import Element, SubElement, tostring
|
||||||
|
|
||||||
|
from flask import Response, jsonify, request, flash, redirect, url_for, g
|
||||||
|
|
||||||
|
logger = logging.getLogger(__name__)
|
||||||
|
|
||||||
|
|
||||||
|
@dataclass
|
||||||
|
class AppError(Exception):
|
||||||
|
"""Base application error with multi-format response support."""
|
||||||
|
code: str
|
||||||
|
message: str
|
||||||
|
status_code: int = 500
|
||||||
|
details: Optional[Dict[str, Any]] = field(default=None)
|
||||||
|
|
||||||
|
def __post_init__(self):
|
||||||
|
super().__init__(self.message)
|
||||||
|
|
||||||
|
def to_xml_response(self) -> Response:
|
||||||
|
"""Convert to S3 API XML error response."""
|
||||||
|
error = Element("Error")
|
||||||
|
SubElement(error, "Code").text = self.code
|
||||||
|
SubElement(error, "Message").text = self.message
|
||||||
|
request_id = getattr(g, 'request_id', None) if g else None
|
||||||
|
SubElement(error, "RequestId").text = request_id or "unknown"
|
||||||
|
xml_bytes = tostring(error, encoding="utf-8")
|
||||||
|
return Response(xml_bytes, status=self.status_code, mimetype="application/xml")
|
||||||
|
|
||||||
|
def to_json_response(self) -> tuple[Response, int]:
|
||||||
|
"""Convert to JSON error response for UI AJAX calls."""
|
||||||
|
payload: Dict[str, Any] = {
|
||||||
|
"success": False,
|
||||||
|
"error": {
|
||||||
|
"code": self.code,
|
||||||
|
"message": self.message
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if self.details:
|
||||||
|
payload["error"]["details"] = self.details
|
||||||
|
return jsonify(payload), self.status_code
|
||||||
|
|
||||||
|
def to_flash_message(self) -> str:
|
||||||
|
"""Convert to user-friendly flash message."""
|
||||||
|
return self.message
|
||||||
|
|
||||||
|
|
||||||
|
@dataclass
|
||||||
|
class BucketNotFoundError(AppError):
|
||||||
|
"""Bucket does not exist."""
|
||||||
|
code: str = "NoSuchBucket"
|
||||||
|
message: str = "The specified bucket does not exist"
|
||||||
|
status_code: int = 404
|
||||||
|
|
||||||
|
|
||||||
|
@dataclass
|
||||||
|
class BucketAlreadyExistsError(AppError):
|
||||||
|
"""Bucket already exists."""
|
||||||
|
code: str = "BucketAlreadyExists"
|
||||||
|
message: str = "The requested bucket name is not available"
|
||||||
|
status_code: int = 409
|
||||||
|
|
||||||
|
|
||||||
|
@dataclass
|
||||||
|
class BucketNotEmptyError(AppError):
|
||||||
|
"""Bucket is not empty."""
|
||||||
|
code: str = "BucketNotEmpty"
|
||||||
|
message: str = "The bucket you tried to delete is not empty"
|
||||||
|
status_code: int = 409
|
||||||
|
|
||||||
|
|
||||||
|
@dataclass
|
||||||
|
class ObjectNotFoundError(AppError):
|
||||||
|
"""Object does not exist."""
|
||||||
|
code: str = "NoSuchKey"
|
||||||
|
message: str = "The specified key does not exist"
|
||||||
|
status_code: int = 404
|
||||||
|
|
||||||
|
|
||||||
|
@dataclass
|
||||||
|
class InvalidObjectKeyError(AppError):
|
||||||
|
"""Invalid object key."""
|
||||||
|
code: str = "InvalidKey"
|
||||||
|
message: str = "The specified key is not valid"
|
||||||
|
status_code: int = 400
|
||||||
|
|
||||||
|
|
||||||
|
@dataclass
|
||||||
|
class AccessDeniedError(AppError):
|
||||||
|
"""Access denied."""
|
||||||
|
code: str = "AccessDenied"
|
||||||
|
message: str = "Access Denied"
|
||||||
|
status_code: int = 403
|
||||||
|
|
||||||
|
|
||||||
|
@dataclass
|
||||||
|
class InvalidCredentialsError(AppError):
|
||||||
|
"""Invalid credentials."""
|
||||||
|
code: str = "InvalidAccessKeyId"
|
||||||
|
message: str = "The access key ID you provided does not exist"
|
||||||
|
status_code: int = 403
|
||||||
|
|
||||||
|
@dataclass
|
||||||
|
class MalformedRequestError(AppError):
|
||||||
|
"""Malformed request."""
|
||||||
|
code: str = "MalformedXML"
|
||||||
|
message: str = "The XML you provided was not well-formed"
|
||||||
|
status_code: int = 400
|
||||||
|
|
||||||
|
|
||||||
|
@dataclass
|
||||||
|
class InvalidArgumentError(AppError):
|
||||||
|
"""Invalid argument."""
|
||||||
|
code: str = "InvalidArgument"
|
||||||
|
message: str = "Invalid argument"
|
||||||
|
status_code: int = 400
|
||||||
|
|
||||||
|
|
||||||
|
@dataclass
|
||||||
|
class EntityTooLargeError(AppError):
|
||||||
|
"""Entity too large."""
|
||||||
|
code: str = "EntityTooLarge"
|
||||||
|
message: str = "Your proposed upload exceeds the maximum allowed size"
|
||||||
|
status_code: int = 413
|
||||||
|
|
||||||
|
|
||||||
|
@dataclass
|
||||||
|
class QuotaExceededAppError(AppError):
|
||||||
|
"""Bucket quota exceeded."""
|
||||||
|
code: str = "QuotaExceeded"
|
||||||
|
message: str = "The bucket quota has been exceeded"
|
||||||
|
status_code: int = 403
|
||||||
|
quota: Optional[Dict[str, Any]] = None
|
||||||
|
usage: Optional[Dict[str, int]] = None
|
||||||
|
|
||||||
|
def __post_init__(self):
|
||||||
|
if self.quota or self.usage:
|
||||||
|
self.details = {}
|
||||||
|
if self.quota:
|
||||||
|
self.details["quota"] = self.quota
|
||||||
|
if self.usage:
|
||||||
|
self.details["usage"] = self.usage
|
||||||
|
super().__post_init__()
|
||||||
|
|
||||||
|
|
||||||
|
def handle_app_error(error: AppError) -> Response:
|
||||||
|
"""Handle application errors with appropriate response format."""
|
||||||
|
log_extra = {"error_code": error.code}
|
||||||
|
if error.details:
|
||||||
|
log_extra["details"] = error.details
|
||||||
|
|
||||||
|
logger.error(f"{error.code}: {error.message}", extra=log_extra)
|
||||||
|
|
||||||
|
if request.path.startswith('/ui'):
|
||||||
|
wants_json = (
|
||||||
|
request.is_json or
|
||||||
|
request.headers.get('X-Requested-With') == 'XMLHttpRequest' or
|
||||||
|
'application/json' in request.accept_mimetypes.values()
|
||||||
|
)
|
||||||
|
if wants_json:
|
||||||
|
return error.to_json_response()
|
||||||
|
flash(error.to_flash_message(), 'danger')
|
||||||
|
referrer = request.referrer
|
||||||
|
if referrer and request.host in referrer:
|
||||||
|
return redirect(referrer)
|
||||||
|
return redirect(url_for('ui.buckets_overview'))
|
||||||
|
else:
|
||||||
|
return error.to_xml_response()
|
||||||
|
|
||||||
|
|
||||||
|
def register_error_handlers(app):
|
||||||
|
"""Register error handlers with a Flask app."""
|
||||||
|
app.register_error_handler(AppError, handle_app_error)
|
||||||
|
|
||||||
|
for error_class in [
|
||||||
|
BucketNotFoundError, BucketAlreadyExistsError, BucketNotEmptyError,
|
||||||
|
ObjectNotFoundError, InvalidObjectKeyError,
|
||||||
|
AccessDeniedError, InvalidCredentialsError,
|
||||||
|
MalformedRequestError, InvalidArgumentError, EntityTooLargeError,
|
||||||
|
QuotaExceededAppError,
|
||||||
|
]:
|
||||||
|
app.register_error_handler(error_class, handle_app_error)
|
||||||
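For orientation, this is how the hierarchy is meant to be used from a route: raise the typed error and let the registered handler pick XML (S3 API paths), JSON (AJAX callers), or a flash-plus-redirect (browser UI). A minimal sketch; the route, template, and `storage` lookup are hypothetical stand-ins, not code from this commit:

# Hypothetical route illustrating the error flow; `storage` is a stand-in.
from flask import render_template

@app.route("/ui/buckets/<name>")
def show_bucket(name: str):
    bucket = storage.get_bucket(name)
    if bucket is None:
        # handle_app_error() turns this into JSON for XHR callers or a
        # flashed redirect for browser navigation, with status 404.
        raise BucketNotFoundError()
    return render_template("bucket.html", bucket=bucket)
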
app/extensions.py
@@ -1,10 +1,17 @@
 """Application-wide extension instances."""
+from flask import g
 from flask_limiter import Limiter
 from flask_limiter.util import get_remote_address
 from flask_wtf import CSRFProtect
 
+
+def get_rate_limit_key():
+    """Generate rate limit key based on authenticated user."""
+    if hasattr(g, 'principal') and g.principal:
+        return g.principal.access_key
+    return get_remote_address()
+
 # Shared rate limiter instance; configured in app factory.
-limiter = Limiter(key_func=get_remote_address)
+limiter = Limiter(key_func=get_rate_limit_key)
 
 # Global CSRF protection for UI routes.
 csrf = CSRFProtect()
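The effect of this change: once the auth layer sets `g.principal`, limits are tracked per access key instead of per client IP, so one busy tenant behind a shared NAT cannot exhaust another's budget, and unauthenticated traffic still falls back to the remote address. A sketch of the wiring, assuming the usual Flask-Limiter app-factory pattern (the `create_app` shape is illustrative, not this repo's factory):

# Illustrative app-factory wiring; only limiter/csrf come from the module above.
from flask import Flask

def create_app() -> Flask:
    app = Flask(__name__)
    limiter.init_app(app)   # key_func=get_rate_limit_key is resolved per request
    csrf.init_app(app)
    return app
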
66 app/iam.py
@@ -6,7 +6,7 @@ import math
 import secrets
 from collections import deque
 from dataclasses import dataclass
-from datetime import datetime, timedelta
+from datetime import datetime, timedelta, timezone
 from pathlib import Path
 from typing import Any, Deque, Dict, Iterable, List, Optional, Sequence, Set
 
@@ -15,7 +15,7 @@ class IamError(RuntimeError):
     """Raised when authentication or authorization fails."""
 
 
-S3_ACTIONS = {"list", "read", "write", "delete", "share", "policy"}
+S3_ACTIONS = {"list", "read", "write", "delete", "share", "policy", "replication"}
 IAM_ACTIONS = {
     "iam:list_users",
     "iam:create_user",
@@ -26,22 +26,59 @@ IAM_ACTIONS = {
 ALLOWED_ACTIONS = (S3_ACTIONS | IAM_ACTIONS) | {"iam:*"}
 
 ACTION_ALIASES = {
+    # List actions
     "list": "list",
     "s3:listbucket": "list",
     "s3:listallmybuckets": "list",
+    "s3:listbucketversions": "list",
+    "s3:listmultipartuploads": "list",
+    "s3:listparts": "list",
+    # Read actions
     "read": "read",
     "s3:getobject": "read",
     "s3:getobjectversion": "read",
+    "s3:getobjecttagging": "read",
+    "s3:getobjectversiontagging": "read",
+    "s3:getobjectacl": "read",
+    "s3:getbucketversioning": "read",
+    "s3:headobject": "read",
+    "s3:headbucket": "read",
+    # Write actions
     "write": "write",
     "s3:putobject": "write",
     "s3:createbucket": "write",
+    "s3:putobjecttagging": "write",
+    "s3:putbucketversioning": "write",
+    "s3:createmultipartupload": "write",
+    "s3:uploadpart": "write",
+    "s3:completemultipartupload": "write",
+    "s3:abortmultipartupload": "write",
+    "s3:copyobject": "write",
+    # Delete actions
     "delete": "delete",
     "s3:deleteobject": "delete",
+    "s3:deleteobjectversion": "delete",
     "s3:deletebucket": "delete",
+    "s3:deleteobjecttagging": "delete",
+    # Share actions (ACL)
     "share": "share",
     "s3:putobjectacl": "share",
+    "s3:putbucketacl": "share",
+    "s3:getbucketacl": "share",
+    # Policy actions
     "policy": "policy",
     "s3:putbucketpolicy": "policy",
+    "s3:getbucketpolicy": "policy",
+    "s3:deletebucketpolicy": "policy",
+    # Replication actions
+    "replication": "replication",
+    "s3:getreplicationconfiguration": "replication",
+    "s3:putreplicationconfiguration": "replication",
+    "s3:deletereplicationconfiguration": "replication",
+    "s3:replicateobject": "replication",
+    "s3:replicatetags": "replication",
+    "s3:replicatedelete": "replication",
+    # IAM actions
     "iam:listusers": "iam:list_users",
     "iam:createuser": "iam:create_user",
     "iam:deleteuser": "iam:delete_user",
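These aliases let bucket policies use AWS-style action names while the engine reasons in its coarse verbs; the keys are all lowercase, which implies a lowercase-first lookup. A sketch (the `normalize_action` helper is hypothetical; the table lookup is the real mechanism):

# Hypothetical helper showing how ACTION_ALIASES is consulted.
def normalize_action(action: str) -> str | None:
    return ACTION_ALIASES.get(action.lower())

assert normalize_action("s3:GetObject") == "read"
assert normalize_action("s3:ReplicateObject") == "replication"
assert normalize_action("s3:MadeUpAction") is None
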
@@ -77,10 +114,19 @@ class IamService:
         self._users: Dict[str, Dict[str, Any]] = {}
         self._raw_config: Dict[str, Any] = {}
         self._failed_attempts: Dict[str, Deque[datetime]] = {}
+        self._last_load_time = 0.0
         self._load()
 
-    # ---------------------- authz helpers ----------------------
+    def _maybe_reload(self) -> None:
+        """Reload configuration if the file has changed on disk."""
+        try:
+            if self.config_path.stat().st_mtime > self._last_load_time:
+                self._load()
+        except OSError:
+            pass
+
     def authenticate(self, access_key: str, secret_key: str) -> Principal:
+        self._maybe_reload()
         access_key = (access_key or "").strip()
         secret_key = (secret_key or "").strip()
         if not access_key or not secret_key:
@@ -102,7 +148,7 @@ class IamService:
             return
         attempts = self._failed_attempts.setdefault(access_key, deque())
         self._prune_attempts(attempts)
-        attempts.append(datetime.now())
+        attempts.append(datetime.now(timezone.utc))
 
     def _clear_failed_attempts(self, access_key: str) -> None:
         if not access_key:
@@ -110,7 +156,7 @@ class IamService:
         self._failed_attempts.pop(access_key, None)
 
     def _prune_attempts(self, attempts: Deque[datetime]) -> None:
-        cutoff = datetime.now() - self.auth_lockout_window
+        cutoff = datetime.now(timezone.utc) - self.auth_lockout_window
         while attempts and attempts[0] < cutoff:
             attempts.popleft()
 
@@ -131,16 +177,18 @@ class IamService:
         if len(attempts) < self.auth_max_attempts:
             return 0
         oldest = attempts[0]
-        elapsed = (datetime.now() - oldest).total_seconds()
+        elapsed = (datetime.now(timezone.utc) - oldest).total_seconds()
         return int(max(0, self.auth_lockout_window.total_seconds() - elapsed))
 
     def principal_for_key(self, access_key: str) -> Principal:
+        self._maybe_reload()
         record = self._users.get(access_key)
         if not record:
             raise IamError("Unknown access key")
         return self._build_principal(access_key, record)
 
     def secret_for_key(self, access_key: str) -> str:
+        self._maybe_reload()
         record = self._users.get(access_key)
         if not record:
             raise IamError("Unknown access key")
@@ -169,7 +217,6 @@ class IamService:
             return True
         return False
 
-    # ---------------------- management helpers ----------------------
     def list_users(self) -> List[Dict[str, Any]]:
         listing: List[Dict[str, Any]] = []
         for access_key, record in self._users.items():
@@ -242,9 +289,9 @@ class IamService:
         self._save()
         self._load()
 
-    # ---------------------- config helpers ----------------------
     def _load(self) -> None:
         try:
+            self._last_load_time = self.config_path.stat().st_mtime
            content = self.config_path.read_text(encoding='utf-8')
            raw = json.loads(content)
         except FileNotFoundError:
@@ -287,7 +334,6 @@ class IamService:
         except (OSError, PermissionError) as e:
             raise IamError(f"Cannot save IAM config: {e}")
 
-    # ---------------------- insight helpers ----------------------
     def config_summary(self) -> Dict[str, Any]:
         return {
             "path": str(self.config_path),
@@ -396,9 +442,11 @@ class IamService:
             raise IamError("User not found")
 
     def get_secret_key(self, access_key: str) -> str | None:
+        self._maybe_reload()
        record = self._users.get(access_key)
        return record["secret_key"] if record else None
 
     def get_principal(self, access_key: str) -> Principal | None:
+        self._maybe_reload()
        record = self._users.get(access_key)
        return self._build_principal(access_key, record) if record else None
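The `_maybe_reload` pattern trades a `stat()` call per auth operation for picking up external edits to `iam.json` without a restart. One caveat worth knowing: mtime granularity means an edit landing within the same timestamp as the previous load can be missed, which is usually acceptable for hand-edited config. The same technique in isolation (illustrative sketch, not code from this commit):

# The mtime-gated reload technique on its own (illustrative).
import json
from pathlib import Path

class HotReloadingConfig:
    def __init__(self, path: Path) -> None:
        self.path = path
        self._last_load_time = 0.0
        self.data: dict = {}

    def maybe_reload(self) -> None:
        try:
            mtime = self.path.stat().st_mtime
        except OSError:
            return                      # missing file: keep the last good config
        if mtime > self._last_load_time:
            self._last_load_time = mtime
            self.data = json.loads(self.path.read_text(encoding="utf-8"))
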
344 app/kms.py Normal file
@@ -0,0 +1,344 @@
"""Key Management Service (KMS) for encryption key management."""
|
||||||
|
from __future__ import annotations
|
||||||
|
|
||||||
|
import base64
|
||||||
|
import json
|
||||||
|
import secrets
|
||||||
|
import uuid
|
||||||
|
from dataclasses import dataclass, field
|
||||||
|
from datetime import datetime, timezone
|
||||||
|
from pathlib import Path
|
||||||
|
from typing import Any, Dict, List, Optional
|
||||||
|
|
||||||
|
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
|
||||||
|
|
||||||
|
from .encryption import EncryptionError, EncryptionProvider, EncryptionResult
|
||||||
|
|
||||||
|
|
||||||
|
@dataclass
|
||||||
|
class KMSKey:
|
||||||
|
"""Represents a KMS encryption key."""
|
||||||
|
key_id: str
|
||||||
|
description: str
|
||||||
|
created_at: str
|
||||||
|
enabled: bool = True
|
||||||
|
key_material: bytes = field(default_factory=lambda: b"", repr=False)
|
||||||
|
|
||||||
|
@property
|
||||||
|
def arn(self) -> str:
|
||||||
|
return f"arn:aws:kms:local:000000000000:key/{self.key_id}"
|
||||||
|
|
||||||
|
def to_dict(self, include_key: bool = False) -> Dict[str, Any]:
|
||||||
|
data = {
|
||||||
|
"KeyId": self.key_id,
|
||||||
|
"Arn": self.arn,
|
||||||
|
"Description": self.description,
|
||||||
|
"CreationDate": self.created_at,
|
||||||
|
"Enabled": self.enabled,
|
||||||
|
"KeyState": "Enabled" if self.enabled else "Disabled",
|
||||||
|
"KeyUsage": "ENCRYPT_DECRYPT",
|
||||||
|
"KeySpec": "SYMMETRIC_DEFAULT",
|
||||||
|
}
|
||||||
|
if include_key:
|
||||||
|
data["KeyMaterial"] = base64.b64encode(self.key_material).decode()
|
||||||
|
return data
|
||||||
|
|
||||||
|
@classmethod
|
||||||
|
def from_dict(cls, data: Dict[str, Any]) -> "KMSKey":
|
||||||
|
key_material = b""
|
||||||
|
if "KeyMaterial" in data:
|
||||||
|
key_material = base64.b64decode(data["KeyMaterial"])
|
||||||
|
return cls(
|
||||||
|
key_id=data["KeyId"],
|
||||||
|
description=data.get("Description", ""),
|
||||||
|
created_at=data.get("CreationDate", datetime.now(timezone.utc).isoformat()),
|
||||||
|
enabled=data.get("Enabled", True),
|
||||||
|
key_material=key_material,
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
class KMSEncryptionProvider(EncryptionProvider):
    """Encryption provider using a specific KMS key."""

    def __init__(self, kms: "KMSManager", key_id: str):
        self.kms = kms
        self.key_id = key_id

    @property
    def KEY_ID(self) -> str:
        return self.key_id

    def generate_data_key(self) -> tuple[bytes, bytes]:
        """Generate a data key encrypted with the KMS key."""
        return self.kms.generate_data_key(self.key_id)

    def encrypt(self, plaintext: bytes, context: Dict[str, str] | None = None) -> EncryptionResult:
        """Encrypt data using envelope encryption with KMS."""
        data_key, encrypted_data_key = self.generate_data_key()

        aesgcm = AESGCM(data_key)
        nonce = secrets.token_bytes(12)
        ciphertext = aesgcm.encrypt(nonce, plaintext,
                                    json.dumps(context).encode() if context else None)

        return EncryptionResult(
            ciphertext=ciphertext,
            nonce=nonce,
            key_id=self.key_id,
            encrypted_data_key=encrypted_data_key,
        )

    def decrypt(self, ciphertext: bytes, nonce: bytes, encrypted_data_key: bytes,
                key_id: str, context: Dict[str, str] | None = None) -> bytes:
        """Decrypt data using envelope encryption with KMS."""
        # Note: Data key is encrypted without context (AAD), so we decrypt without context
        data_key = self.kms.decrypt_data_key(key_id, encrypted_data_key, context=None)

        aesgcm = AESGCM(data_key)
        try:
            return aesgcm.decrypt(nonce, ciphertext,
                                  json.dumps(context).encode() if context else None)
        except Exception as exc:
            raise EncryptionError(f"Failed to decrypt data: {exc}") from exc

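Tying the two modules together: the manager below hands out per-key providers, and `EncryptionManager.set_kms_provider` is the injection point. A wiring sketch (illustrative; the file locations and the `enc_manager` name are examples, not this repo's startup code):

kms = KMSManager(
    keys_path=Path("data/.myfsio.sys/keys/kms.json"),            # example location
    master_key_path=Path("data/.myfsio.sys/keys/kms-master.key"),  # example location
)
enc_manager.set_kms_provider(kms)        # enc_manager: an EncryptionManager instance
ct, meta = enc_manager.encrypt_object(b"doc", algorithm="aws:kms")
assert enc_manager.decrypt_object(ct, meta) == b"doc"
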
class KMSManager:
    """Manages KMS keys and operations.

    This is a local implementation that mimics AWS KMS functionality.
    Keys are stored encrypted on disk.
    """

    def __init__(self, keys_path: Path, master_key_path: Path):
        self.keys_path = keys_path
        self.master_key_path = master_key_path
        self._keys: Dict[str, KMSKey] = {}
        self._master_key: bytes | None = None
        self._loaded = False

    @property
    def master_key(self) -> bytes:
        """Load or create the master key for encrypting KMS keys."""
        if self._master_key is None:
            if self.master_key_path.exists():
                self._master_key = base64.b64decode(
                    self.master_key_path.read_text().strip()
                )
            else:
                self._master_key = secrets.token_bytes(32)
                self.master_key_path.parent.mkdir(parents=True, exist_ok=True)
                self.master_key_path.write_text(
                    base64.b64encode(self._master_key).decode()
                )
        return self._master_key

    def _load_keys(self) -> None:
        """Load keys from disk."""
        if self._loaded:
            return

        if self.keys_path.exists():
            try:
                data = json.loads(self.keys_path.read_text(encoding="utf-8"))
                for key_data in data.get("keys", []):
                    key = KMSKey.from_dict(key_data)
                    if key_data.get("EncryptedKeyMaterial"):
                        encrypted = base64.b64decode(key_data["EncryptedKeyMaterial"])
                        key.key_material = self._decrypt_key_material(encrypted)
                    self._keys[key.key_id] = key
            except Exception:
                pass

        self._loaded = True

    def _save_keys(self) -> None:
        """Save keys to disk (with encrypted key material)."""
        keys_data = []
        for key in self._keys.values():
            data = key.to_dict(include_key=False)
            encrypted = self._encrypt_key_material(key.key_material)
            data["EncryptedKeyMaterial"] = base64.b64encode(encrypted).decode()
            keys_data.append(data)

        self.keys_path.parent.mkdir(parents=True, exist_ok=True)
        self.keys_path.write_text(
            json.dumps({"keys": keys_data}, indent=2),
            encoding="utf-8"
        )

    def _encrypt_key_material(self, key_material: bytes) -> bytes:
        """Encrypt key material with the master key."""
        aesgcm = AESGCM(self.master_key)
        nonce = secrets.token_bytes(12)
        ciphertext = aesgcm.encrypt(nonce, key_material, None)
        return nonce + ciphertext

    def _decrypt_key_material(self, encrypted: bytes) -> bytes:
        """Decrypt key material with the master key."""
        aesgcm = AESGCM(self.master_key)
        nonce = encrypted[:12]
        ciphertext = encrypted[12:]
        return aesgcm.decrypt(nonce, ciphertext, None)

    def create_key(self, description: str = "", key_id: str | None = None) -> KMSKey:
        """Create a new KMS key."""
        self._load_keys()

        if key_id is None:
            key_id = str(uuid.uuid4())

        if key_id in self._keys:
            raise EncryptionError(f"Key already exists: {key_id}")

        key = KMSKey(
            key_id=key_id,
            description=description,
            created_at=datetime.now(timezone.utc).isoformat(),
            enabled=True,
            key_material=secrets.token_bytes(32),
        )

        self._keys[key_id] = key
        self._save_keys()
        return key

    def get_key(self, key_id: str) -> KMSKey | None:
        """Get a key by ID."""
        self._load_keys()
        return self._keys.get(key_id)

    def list_keys(self) -> List[KMSKey]:
        """List all keys."""
        self._load_keys()
        return list(self._keys.values())

    def enable_key(self, key_id: str) -> None:
        """Enable a key."""
        self._load_keys()
        key = self._keys.get(key_id)
        if not key:
            raise EncryptionError(f"Key not found: {key_id}")
        key.enabled = True
        self._save_keys()

    def disable_key(self, key_id: str) -> None:
        """Disable a key."""
        self._load_keys()
        key = self._keys.get(key_id)
        if not key:
            raise EncryptionError(f"Key not found: {key_id}")
        key.enabled = False
        self._save_keys()

    def delete_key(self, key_id: str) -> None:
        """Delete a key (schedule for deletion in real KMS)."""
        self._load_keys()
        if key_id not in self._keys:
            raise EncryptionError(f"Key not found: {key_id}")
        del self._keys[key_id]
        self._save_keys()

    def encrypt(self, key_id: str, plaintext: bytes,
                context: Dict[str, str] | None = None) -> bytes:
        """Encrypt data directly with a KMS key."""
        self._load_keys()
        key = self._keys.get(key_id)
        if not key:
            raise EncryptionError(f"Key not found: {key_id}")
        if not key.enabled:
            raise EncryptionError(f"Key is disabled: {key_id}")

        aesgcm = AESGCM(key.key_material)
        nonce = secrets.token_bytes(12)
        aad = json.dumps(context).encode() if context else None
        ciphertext = aesgcm.encrypt(nonce, plaintext, aad)

        key_id_bytes = key_id.encode("utf-8")
        return len(key_id_bytes).to_bytes(2, "big") + key_id_bytes + nonce + ciphertext

    def decrypt(self, ciphertext: bytes,
                context: Dict[str, str] | None = None) -> tuple[bytes, str]:
        """Decrypt data directly with a KMS key.

        Returns:
            Tuple of (plaintext, key_id)
        """
        self._load_keys()

        key_id_len = int.from_bytes(ciphertext[:2], "big")
        key_id = ciphertext[2:2 + key_id_len].decode("utf-8")
        rest = ciphertext[2 + key_id_len:]

        key = self._keys.get(key_id)
        if not key:
            raise EncryptionError(f"Key not found: {key_id}")
        if not key.enabled:
            raise EncryptionError(f"Key is disabled: {key_id}")

        nonce = rest[:12]
        encrypted = rest[12:]

        aesgcm = AESGCM(key.key_material)
        aad = json.dumps(context).encode() if context else None
        try:
            plaintext = aesgcm.decrypt(nonce, encrypted, aad)
            return plaintext, key_id
        except Exception as exc:
            raise EncryptionError(f"Decryption failed: {exc}") from exc

    def generate_data_key(self, key_id: str,
                          context: Dict[str, str] | None = None) -> tuple[bytes, bytes]:
        """Generate a data key and return both plaintext and encrypted versions.

        Returns:
            Tuple of (plaintext_key, encrypted_key)
        """
        self._load_keys()
        key = self._keys.get(key_id)
        if not key:
            raise EncryptionError(f"Key not found: {key_id}")
        if not key.enabled:
            raise EncryptionError(f"Key is disabled: {key_id}")

        plaintext_key = secrets.token_bytes(32)

        encrypted_key = self.encrypt(key_id, plaintext_key, context)

        return plaintext_key, encrypted_key

    def decrypt_data_key(self, key_id: str, encrypted_key: bytes,
                         context: Dict[str, str] | None = None) -> bytes:
        """Decrypt a data key."""
        plaintext, _ = self.decrypt(encrypted_key, context)
        return plaintext

    def get_provider(self, key_id: str | None = None) -> KMSEncryptionProvider:
        """Get an encryption provider for a specific key."""
        self._load_keys()

        if key_id is None:
            if not self._keys:
                key = self.create_key("Default KMS Key")
                key_id = key.key_id
            else:
                key_id = next(iter(self._keys.keys()))

        if key_id not in self._keys:
            raise EncryptionError(f"Key not found: {key_id}")

        return KMSEncryptionProvider(self, key_id)

    def re_encrypt(self, ciphertext: bytes, destination_key_id: str,
                   source_context: Dict[str, str] | None = None,
                   destination_context: Dict[str, str] | None = None) -> bytes:
        """Re-encrypt data with a different key."""
        plaintext, source_key_id = self.decrypt(ciphertext, source_context)

        return self.encrypt(destination_key_id, plaintext, destination_context)

    def generate_random(self, num_bytes: int = 32) -> bytes:
        """Generate cryptographically secure random bytes."""
        if num_bytes < 1 or num_bytes > 1024:
            raise EncryptionError("Number of bytes must be between 1 and 1024")
        return secrets.token_bytes(num_bytes)

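One detail worth calling out from `KMSManager.encrypt`: the returned blob is self-describing, so `decrypt` needs no separate key-id argument. The layout, as implemented above, is a 2-byte big-endian key-id length, the UTF-8 key id, a 12-byte nonce, then the AES-GCM ciphertext and tag. A round-trip sketch (illustrative; `kms` is a KMSManager as in the earlier sketch):

# CiphertextBlob layout (derived from encrypt/decrypt above):
#   [key_id length: 2B BE] [key_id: UTF-8] [nonce: 12B] [AES-GCM ciphertext+tag]
key = kms.create_key("example key")
blob = kms.encrypt(key.key_id, b"payload")
plaintext, recovered_key_id = kms.decrypt(blob)
assert plaintext == b"payload" and recovered_key_id == key.key_id
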
445 app/kms_api.py Normal file
@@ -0,0 +1,445 @@
"""KMS and encryption API endpoints."""
|
||||||
|
from __future__ import annotations
|
||||||
|
|
||||||
|
import base64
|
||||||
|
import uuid
|
||||||
|
from typing import Any, Dict
|
||||||
|
|
||||||
|
from flask import Blueprint, Response, current_app, jsonify, request
|
||||||
|
|
||||||
|
from .encryption import ClientEncryptionHelper, EncryptionError
|
||||||
|
from .extensions import limiter
|
||||||
|
from .iam import IamError
|
||||||
|
|
||||||
|
kms_api_bp = Blueprint("kms_api", __name__, url_prefix="/kms")
|
||||||
|
|
||||||
|
|
||||||
|
def _require_principal():
|
||||||
|
"""Require authentication for KMS operations."""
|
||||||
|
from .s3_api import _require_principal as s3_require_principal
|
||||||
|
return s3_require_principal()
|
||||||
|
|
||||||
|
|
||||||
|
def _kms():
|
||||||
|
"""Get KMS manager from app extensions."""
|
||||||
|
return current_app.extensions.get("kms")
|
||||||
|
|
||||||
|
|
||||||
|
def _encryption():
|
||||||
|
"""Get encryption manager from app extensions."""
|
||||||
|
return current_app.extensions.get("encryption")
|
||||||
|
|
||||||
|
|
||||||
|
def _error_response(code: str, message: str, status: int) -> tuple[Dict[str, Any], int]:
|
||||||
|
return {"__type": code, "message": message}, status
|
||||||
|
|
||||||
|
@kms_api_bp.route("/keys", methods=["GET", "POST"])
|
||||||
|
@limiter.limit("30 per minute")
|
||||||
|
def list_or_create_keys():
|
||||||
|
"""List all KMS keys or create a new key."""
|
||||||
|
principal, error = _require_principal()
|
||||||
|
if error:
|
||||||
|
return error
|
||||||
|
|
||||||
|
kms = _kms()
|
||||||
|
if not kms:
|
||||||
|
return _error_response("KMSNotEnabled", "KMS is not configured", 400)
|
||||||
|
|
||||||
|
if request.method == "POST":
|
||||||
|
payload = request.get_json(silent=True) or {}
|
||||||
|
key_id = payload.get("KeyId") or payload.get("key_id")
|
||||||
|
description = payload.get("Description") or payload.get("description", "")
|
||||||
|
|
||||||
|
try:
|
||||||
|
key = kms.create_key(description=description, key_id=key_id)
|
||||||
|
current_app.logger.info(
|
||||||
|
"KMS key created",
|
||||||
|
extra={"key_id": key.key_id, "principal": principal.access_key},
|
||||||
|
)
|
||||||
|
return jsonify({
|
||||||
|
"KeyMetadata": key.to_dict(),
|
||||||
|
})
|
||||||
|
except EncryptionError as exc:
|
||||||
|
return _error_response("KMSInternalException", str(exc), 400)
|
||||||
|
|
||||||
|
keys = kms.list_keys()
|
||||||
|
return jsonify({
|
||||||
|
"Keys": [{"KeyId": k.key_id, "KeyArn": k.arn} for k in keys],
|
||||||
|
"Truncated": False,
|
||||||
|
})
|
||||||
|
|
||||||
|
|
||||||
|
@kms_api_bp.route("/keys/<key_id>", methods=["GET", "DELETE"])
|
||||||
|
@limiter.limit("30 per minute")
|
||||||
|
def get_or_delete_key(key_id: str):
|
||||||
|
"""Get or delete a specific KMS key."""
|
||||||
|
principal, error = _require_principal()
|
||||||
|
if error:
|
||||||
|
return error
|
||||||
|
|
||||||
|
kms = _kms()
|
||||||
|
if not kms:
|
||||||
|
return _error_response("KMSNotEnabled", "KMS is not configured", 400)
|
||||||
|
|
||||||
|
if request.method == "DELETE":
|
||||||
|
try:
|
||||||
|
kms.delete_key(key_id)
|
||||||
|
current_app.logger.info(
|
||||||
|
"KMS key deleted",
|
||||||
|
extra={"key_id": key_id, "principal": principal.access_key},
|
||||||
|
)
|
||||||
|
return Response(status=204)
|
||||||
|
except EncryptionError as exc:
|
||||||
|
return _error_response("NotFoundException", str(exc), 404)
|
||||||
|
|
||||||
|
key = kms.get_key(key_id)
|
||||||
|
if not key:
|
||||||
|
return _error_response("NotFoundException", f"Key not found: {key_id}", 404)
|
||||||
|
|
||||||
|
return jsonify({"KeyMetadata": key.to_dict()})
|
||||||
|
|
||||||
|
|
||||||
|
@kms_api_bp.route("/keys/<key_id>/enable", methods=["POST"])
|
||||||
|
@limiter.limit("30 per minute")
|
||||||
|
def enable_key(key_id: str):
|
||||||
|
"""Enable a KMS key."""
|
||||||
|
principal, error = _require_principal()
|
||||||
|
if error:
|
||||||
|
return error
|
||||||
|
|
||||||
|
kms = _kms()
|
||||||
|
if not kms:
|
||||||
|
return _error_response("KMSNotEnabled", "KMS is not configured", 400)
|
||||||
|
|
||||||
|
try:
|
||||||
|
kms.enable_key(key_id)
|
||||||
|
current_app.logger.info(
|
||||||
|
"KMS key enabled",
|
||||||
|
extra={"key_id": key_id, "principal": principal.access_key},
|
||||||
|
)
|
||||||
|
return Response(status=200)
|
||||||
|
except EncryptionError as exc:
|
||||||
|
return _error_response("NotFoundException", str(exc), 404)
|
||||||
|
|
||||||
|
|
||||||
|
@kms_api_bp.route("/keys/<key_id>/disable", methods=["POST"])
|
||||||
|
@limiter.limit("30 per minute")
|
||||||
|
def disable_key(key_id: str):
|
||||||
|
"""Disable a KMS key."""
|
||||||
|
principal, error = _require_principal()
|
||||||
|
if error:
|
||||||
|
return error
|
||||||
|
|
||||||
|
kms = _kms()
|
||||||
|
if not kms:
|
||||||
|
return _error_response("KMSNotEnabled", "KMS is not configured", 400)
|
||||||
|
|
||||||
|
try:
|
||||||
|
kms.disable_key(key_id)
|
||||||
|
current_app.logger.info(
|
||||||
|
"KMS key disabled",
|
||||||
|
extra={"key_id": key_id, "principal": principal.access_key},
|
||||||
|
)
|
||||||
|
return Response(status=200)
|
||||||
|
except EncryptionError as exc:
|
||||||
|
return _error_response("NotFoundException", str(exc), 404)
|
||||||
|
|
||||||
|
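Key management is plain JSON over HTTP, so the request/response shapes are worth summarizing (host and credentials are elided; authentication goes through the same scheme as the S3 API via `_require_principal`):

POST   /kms/keys                    {"Description": "my key"}
  -> {"KeyMetadata": {"KeyId": "...", "Arn": "arn:aws:kms:local:...", ...}}
GET    /kms/keys
  -> {"Keys": [{"KeyId": "...", "KeyArn": "..."}], "Truncated": false}
GET    /kms/keys/<key_id>           -> {"KeyMetadata": {...}} or 404
POST   /kms/keys/<key_id>/enable    -> 200
POST   /kms/keys/<key_id>/disable   -> 200
DELETE /kms/keys/<key_id>           -> 204
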
@kms_api_bp.route("/encrypt", methods=["POST"])
|
||||||
|
@limiter.limit("60 per minute")
|
||||||
|
def encrypt_data():
|
||||||
|
"""Encrypt data using a KMS key."""
|
||||||
|
principal, error = _require_principal()
|
||||||
|
if error:
|
||||||
|
return error
|
||||||
|
|
||||||
|
kms = _kms()
|
||||||
|
if not kms:
|
||||||
|
return _error_response("KMSNotEnabled", "KMS is not configured", 400)
|
||||||
|
|
||||||
|
payload = request.get_json(silent=True) or {}
|
||||||
|
key_id = payload.get("KeyId")
|
||||||
|
plaintext_b64 = payload.get("Plaintext")
|
||||||
|
context = payload.get("EncryptionContext")
|
||||||
|
|
||||||
|
if not key_id:
|
||||||
|
return _error_response("ValidationException", "KeyId is required", 400)
|
||||||
|
if not plaintext_b64:
|
||||||
|
return _error_response("ValidationException", "Plaintext is required", 400)
|
||||||
|
|
||||||
|
try:
|
||||||
|
plaintext = base64.b64decode(plaintext_b64)
|
||||||
|
except Exception:
|
||||||
|
return _error_response("ValidationException", "Plaintext must be base64 encoded", 400)
|
||||||
|
|
||||||
|
try:
|
||||||
|
ciphertext = kms.encrypt(key_id, plaintext, context)
|
||||||
|
return jsonify({
|
||||||
|
"CiphertextBlob": base64.b64encode(ciphertext).decode(),
|
||||||
|
"KeyId": key_id,
|
||||||
|
"EncryptionAlgorithm": "SYMMETRIC_DEFAULT",
|
||||||
|
})
|
||||||
|
except EncryptionError as exc:
|
||||||
|
return _error_response("KMSInternalException", str(exc), 400)
|
||||||
|
|
||||||
|
|
||||||
|
@kms_api_bp.route("/decrypt", methods=["POST"])
|
||||||
|
@limiter.limit("60 per minute")
|
||||||
|
def decrypt_data():
|
||||||
|
"""Decrypt data using a KMS key."""
|
||||||
|
principal, error = _require_principal()
|
||||||
|
if error:
|
||||||
|
return error
|
||||||
|
|
||||||
|
kms = _kms()
|
||||||
|
if not kms:
|
||||||
|
return _error_response("KMSNotEnabled", "KMS is not configured", 400)
|
||||||
|
|
||||||
|
payload = request.get_json(silent=True) or {}
|
||||||
|
ciphertext_b64 = payload.get("CiphertextBlob")
|
||||||
|
context = payload.get("EncryptionContext")
|
||||||
|
|
||||||
|
if not ciphertext_b64:
|
||||||
|
return _error_response("ValidationException", "CiphertextBlob is required", 400)
|
||||||
|
|
||||||
|
try:
|
||||||
|
ciphertext = base64.b64decode(ciphertext_b64)
|
||||||
|
except Exception:
|
||||||
|
return _error_response("ValidationException", "CiphertextBlob must be base64 encoded", 400)
|
||||||
|
|
||||||
|
try:
|
||||||
|
plaintext, key_id = kms.decrypt(ciphertext, context)
|
||||||
|
return jsonify({
|
||||||
|
"Plaintext": base64.b64encode(plaintext).decode(),
|
||||||
|
"KeyId": key_id,
|
||||||
|
"EncryptionAlgorithm": "SYMMETRIC_DEFAULT",
|
||||||
|
})
|
||||||
|
except EncryptionError as exc:
|
||||||
|
return _error_response("InvalidCiphertextException", str(exc), 400)
|
||||||
|
|
||||||
|
|
||||||
|
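A round-trip through these two endpoints, sketched with `requests` (the base URL matches the app's default port but is still an example, auth headers are omitted, and `key_id` is assumed to come from a prior POST to /kms/keys):

import base64
import requests

BASE = "http://localhost:5000/kms"        # example host/port; auth omitted
enc = requests.post(f"{BASE}/encrypt", json={
    "KeyId": key_id,                      # a key created via POST /kms/keys
    "Plaintext": base64.b64encode(b"secret").decode(),
}).json()
dec = requests.post(f"{BASE}/decrypt", json={
    "CiphertextBlob": enc["CiphertextBlob"],
}).json()
assert base64.b64decode(dec["Plaintext"]) == b"secret"
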
@kms_api_bp.route("/generate-data-key", methods=["POST"])
|
||||||
|
@limiter.limit("60 per minute")
|
||||||
|
def generate_data_key():
|
||||||
|
"""Generate a data encryption key."""
|
||||||
|
principal, error = _require_principal()
|
||||||
|
if error:
|
||||||
|
return error
|
||||||
|
|
||||||
|
kms = _kms()
|
||||||
|
if not kms:
|
||||||
|
return _error_response("KMSNotEnabled", "KMS is not configured", 400)
|
||||||
|
|
||||||
|
payload = request.get_json(silent=True) or {}
|
||||||
|
key_id = payload.get("KeyId")
|
||||||
|
context = payload.get("EncryptionContext")
|
||||||
|
key_spec = payload.get("KeySpec", "AES_256")
|
||||||
|
|
||||||
|
if not key_id:
|
||||||
|
return _error_response("ValidationException", "KeyId is required", 400)
|
||||||
|
|
||||||
|
if key_spec not in {"AES_256", "AES_128"}:
|
||||||
|
return _error_response("ValidationException", "KeySpec must be AES_256 or AES_128", 400)
|
||||||
|
|
||||||
|
try:
|
||||||
|
plaintext_key, encrypted_key = kms.generate_data_key(key_id, context)
|
||||||
|
|
||||||
|
if key_spec == "AES_128":
|
||||||
|
plaintext_key = plaintext_key[:16]
|
||||||
|
|
||||||
|
return jsonify({
|
||||||
|
"Plaintext": base64.b64encode(plaintext_key).decode(),
|
||||||
|
"CiphertextBlob": base64.b64encode(encrypted_key).decode(),
|
||||||
|
"KeyId": key_id,
|
||||||
|
})
|
||||||
|
except EncryptionError as exc:
|
||||||
|
return _error_response("KMSInternalException", str(exc), 400)
|
||||||
|
|
||||||
|
|
||||||
|
@kms_api_bp.route("/generate-data-key-without-plaintext", methods=["POST"])
|
||||||
|
@limiter.limit("60 per minute")
|
||||||
|
def generate_data_key_without_plaintext():
|
||||||
|
"""Generate a data encryption key without returning the plaintext."""
|
||||||
|
principal, error = _require_principal()
|
||||||
|
if error:
|
||||||
|
return error
|
||||||
|
|
||||||
|
kms = _kms()
|
||||||
|
if not kms:
|
||||||
|
return _error_response("KMSNotEnabled", "KMS is not configured", 400)
|
||||||
|
|
||||||
|
payload = request.get_json(silent=True) or {}
|
||||||
|
key_id = payload.get("KeyId")
|
||||||
|
context = payload.get("EncryptionContext")
|
||||||
|
|
||||||
|
if not key_id:
|
||||||
|
return _error_response("ValidationException", "KeyId is required", 400)
|
||||||
|
|
||||||
|
try:
|
||||||
|
_, encrypted_key = kms.generate_data_key(key_id, context)
|
||||||
|
return jsonify({
|
||||||
|
"CiphertextBlob": base64.b64encode(encrypted_key).decode(),
|
||||||
|
"KeyId": key_id,
|
||||||
|
})
|
||||||
|
except EncryptionError as exc:
|
||||||
|
return _error_response("KMSInternalException", str(exc), 400)
|
||||||
|
|
||||||
|
|
||||||
|
@kms_api_bp.route("/re-encrypt", methods=["POST"])
|
||||||
|
@limiter.limit("30 per minute")
|
||||||
|
def re_encrypt():
|
||||||
|
"""Re-encrypt data with a different key."""
|
||||||
|
principal, error = _require_principal()
|
||||||
|
if error:
|
||||||
|
return error
|
||||||
|
|
||||||
|
kms = _kms()
|
||||||
|
if not kms:
|
||||||
|
return _error_response("KMSNotEnabled", "KMS is not configured", 400)
|
||||||
|
|
||||||
|
payload = request.get_json(silent=True) or {}
|
||||||
|
ciphertext_b64 = payload.get("CiphertextBlob")
|
||||||
|
destination_key_id = payload.get("DestinationKeyId")
|
||||||
|
source_context = payload.get("SourceEncryptionContext")
|
||||||
|
destination_context = payload.get("DestinationEncryptionContext")
|
||||||
|
|
||||||
|
if not ciphertext_b64:
|
||||||
|
return _error_response("ValidationException", "CiphertextBlob is required", 400)
|
||||||
|
if not destination_key_id:
|
||||||
|
return _error_response("ValidationException", "DestinationKeyId is required", 400)
|
||||||
|
|
||||||
|
try:
|
||||||
|
ciphertext = base64.b64decode(ciphertext_b64)
|
||||||
|
except Exception:
|
||||||
|
return _error_response("ValidationException", "CiphertextBlob must be base64 encoded", 400)
|
||||||
|
|
||||||
|
try:
|
||||||
|
plaintext, source_key_id = kms.decrypt(ciphertext, source_context)
|
||||||
|
new_ciphertext = kms.encrypt(destination_key_id, plaintext, destination_context)
|
||||||
|
|
||||||
|
return jsonify({
|
||||||
|
"CiphertextBlob": base64.b64encode(new_ciphertext).decode(),
|
||||||
|
"SourceKeyId": source_key_id,
|
||||||
|
"KeyId": destination_key_id,
|
||||||
|
})
|
||||||
|
except EncryptionError as exc:
|
||||||
|
return _error_response("KMSInternalException", str(exc), 400)
|
||||||
|
|
||||||
|
|
||||||
|
@kms_api_bp.route("/generate-random", methods=["POST"])
|
||||||
|
@limiter.limit("60 per minute")
|
||||||
|
def generate_random():
|
||||||
|
"""Generate random bytes."""
|
||||||
|
principal, error = _require_principal()
|
||||||
|
if error:
|
||||||
|
return error
|
||||||
|
|
||||||
|
kms = _kms()
|
||||||
|
if not kms:
|
||||||
|
return _error_response("KMSNotEnabled", "KMS is not configured", 400)
|
||||||
|
|
||||||
|
payload = request.get_json(silent=True) or {}
|
||||||
|
num_bytes = payload.get("NumberOfBytes", 32)
|
||||||
|
|
||||||
|
try:
|
||||||
|
num_bytes = int(num_bytes)
|
||||||
|
except (TypeError, ValueError):
|
||||||
|
return _error_response("ValidationException", "NumberOfBytes must be an integer", 400)
|
||||||
|
|
||||||
|
try:
|
||||||
|
random_bytes = kms.generate_random(num_bytes)
|
||||||
|
return jsonify({
|
||||||
|
"Plaintext": base64.b64encode(random_bytes).decode(),
|
||||||
|
})
|
||||||
|
except EncryptionError as exc:
|
||||||
|
return _error_response("ValidationException", str(exc), 400)
|
||||||
|
|
||||||
|
@kms_api_bp.route("/client/generate-key", methods=["POST"])
|
||||||
|
@limiter.limit("30 per minute")
|
||||||
|
def generate_client_key():
|
||||||
|
"""Generate a client-side encryption key."""
|
||||||
|
principal, error = _require_principal()
|
||||||
|
if error:
|
||||||
|
return error
|
||||||
|
|
||||||
|
key_info = ClientEncryptionHelper.generate_client_key()
|
||||||
|
return jsonify(key_info)
|
||||||
|
|
||||||
|
|
||||||
|
@kms_api_bp.route("/client/encrypt", methods=["POST"])
|
||||||
|
@limiter.limit("60 per minute")
|
||||||
|
def client_encrypt():
|
||||||
|
"""Encrypt data using client-side encryption."""
|
||||||
|
principal, error = _require_principal()
|
||||||
|
if error:
|
||||||
|
return error
|
||||||
|
|
||||||
|
payload = request.get_json(silent=True) or {}
|
||||||
|
plaintext_b64 = payload.get("Plaintext")
|
||||||
|
key_b64 = payload.get("Key")
|
||||||
|
|
||||||
|
if not plaintext_b64 or not key_b64:
|
||||||
|
return _error_response("ValidationException", "Plaintext and Key are required", 400)
|
||||||
|
|
||||||
|
try:
|
||||||
|
plaintext = base64.b64decode(plaintext_b64)
|
||||||
|
result = ClientEncryptionHelper.encrypt_with_key(plaintext, key_b64)
|
||||||
|
return jsonify(result)
|
||||||
|
except Exception as exc:
|
||||||
|
return _error_response("EncryptionError", str(exc), 400)
|
||||||
|
|
||||||
|
|
||||||
|
@kms_api_bp.route("/client/decrypt", methods=["POST"])
|
||||||
|
@limiter.limit("60 per minute")
|
||||||
|
def client_decrypt():
|
||||||
|
"""Decrypt data using client-side encryption."""
|
||||||
|
principal, error = _require_principal()
|
||||||
|
if error:
|
||||||
|
return error
|
||||||
|
|
||||||
|
payload = request.get_json(silent=True) or {}
|
||||||
|
ciphertext_b64 = payload.get("Ciphertext") or payload.get("ciphertext")
|
||||||
|
nonce_b64 = payload.get("Nonce") or payload.get("nonce")
|
||||||
|
key_b64 = payload.get("Key") or payload.get("key")
|
||||||
|
|
||||||
|
if not ciphertext_b64 or not nonce_b64 or not key_b64:
|
||||||
|
return _error_response("ValidationException", "Ciphertext, Nonce, and Key are required", 400)
|
||||||
|
|
||||||
|
try:
|
||||||
|
plaintext = ClientEncryptionHelper.decrypt_with_key(ciphertext_b64, nonce_b64, key_b64)
|
||||||
|
return jsonify({
|
||||||
|
"Plaintext": base64.b64encode(plaintext).decode(),
|
||||||
|
})
|
||||||
|
except Exception as exc:
|
||||||
|
return _error_response("DecryptionError", str(exc), 400)
|
||||||
|
|
||||||
|
@kms_api_bp.route("/materials/<key_id>", methods=["POST"])
|
||||||
|
@limiter.limit("60 per minute")
|
||||||
|
def get_encryption_materials(key_id: str):
|
||||||
|
"""Get encryption materials for client-side S3 encryption.
|
||||||
|
|
||||||
|
This is used by S3 encryption clients that want to use KMS for
|
||||||
|
key management but perform encryption client-side.
|
||||||
|
"""
|
||||||
|
principal, error = _require_principal()
|
||||||
|
if error:
|
||||||
|
return error
|
||||||
|
|
||||||
|
kms = _kms()
|
||||||
|
if not kms:
|
||||||
|
return _error_response("KMSNotEnabled", "KMS is not configured", 400)
|
||||||
|
|
||||||
|
payload = request.get_json(silent=True) or {}
|
||||||
|
context = payload.get("EncryptionContext")
|
||||||
|
|
||||||
|
try:
|
||||||
|
plaintext_key, encrypted_key = kms.generate_data_key(key_id, context)
|
||||||
|
|
||||||
|
return jsonify({
|
||||||
|
"PlaintextKey": base64.b64encode(plaintext_key).decode(),
|
||||||
|
"EncryptedKey": base64.b64encode(encrypted_key).decode(),
|
||||||
|
"KeyId": key_id,
|
||||||
|
"Algorithm": "AES-256-GCM",
|
||||||
|
"KeyWrapAlgorithm": "kms",
|
||||||
|
})
|
||||||
|
except EncryptionError as exc:
|
||||||
|
return _error_response("KMSInternalException", str(exc), 400)
|
||||||
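A minimal client-side sketch of calling the data-key endpoint above. The base URL and the blueprint prefix are assumptions (adapt them to wherever kms_api_bp is registered, and to however _require_principal resolves credentials in this deployment):

    import base64
    import requests  # assumed available in the client environment

    # Hypothetical URL; match it to the registered blueprint prefix.
    resp = requests.post(
        "http://localhost:5000/kms/generate-data-key",
        json={"KeyId": "my-key", "KeySpec": "AES_256"},
    )
    resp.raise_for_status()
    body = resp.json()
    plaintext_key = base64.b64decode(body["Plaintext"])       # use for local AES, never persist
    encrypted_key = base64.b64decode(body["CiphertextBlob"])  # safe to store next to the data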
@@ -1,21 +1,65 @@
 """Background replication worker."""
 from __future__ import annotations

+import json
 import logging
+import mimetypes
 import threading
+import time
 from concurrent.futures import ThreadPoolExecutor
-from dataclasses import dataclass
+from dataclasses import dataclass, field
 from pathlib import Path
 from typing import Dict, Optional

 import boto3
+from botocore.config import Config
 from botocore.exceptions import ClientError
+from boto3.exceptions import S3UploadFailedError

 from .connections import ConnectionStore, RemoteConnection
-from .storage import ObjectStorage
+from .storage import ObjectStorage, StorageError

 logger = logging.getLogger(__name__)

+REPLICATION_USER_AGENT = "S3ReplicationAgent/1.0"
+REPLICATION_CONNECT_TIMEOUT = 5
+REPLICATION_READ_TIMEOUT = 30
+
+REPLICATION_MODE_NEW_ONLY = "new_only"
+REPLICATION_MODE_ALL = "all"
+
+
+@dataclass
+class ReplicationStats:
+    """Statistics for replication operations - computed dynamically."""
+    objects_synced: int = 0
+    objects_pending: int = 0
+    objects_orphaned: int = 0
+    bytes_synced: int = 0
+    last_sync_at: Optional[float] = None
+    last_sync_key: Optional[str] = None
+
+    def to_dict(self) -> dict:
+        return {
+            "objects_synced": self.objects_synced,
+            "objects_pending": self.objects_pending,
+            "objects_orphaned": self.objects_orphaned,
+            "bytes_synced": self.bytes_synced,
+            "last_sync_at": self.last_sync_at,
+            "last_sync_key": self.last_sync_key,
+        }
+
+    @classmethod
+    def from_dict(cls, data: dict) -> "ReplicationStats":
+        return cls(
+            objects_synced=data.get("objects_synced", 0),
+            objects_pending=data.get("objects_pending", 0),
+            objects_orphaned=data.get("objects_orphaned", 0),
+            bytes_synced=data.get("bytes_synced", 0),
+            last_sync_at=data.get("last_sync_at"),
+            last_sync_key=data.get("last_sync_key"),
+        )
+
+
 @dataclass
 class ReplicationRule:
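The round-trip contract the new ReplicationStats helpers establish, in a quick sketch (plain Python, nothing assumed beyond the dataclass above):

    stats = ReplicationStats(objects_synced=3, bytes_synced=4096)
    payload = stats.to_dict()                      # JSON-safe dict for rules persistence
    restored = ReplicationStats.from_dict(payload)
    assert restored == stats                       # dataclass equality compares all fields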
@@ -23,6 +67,31 @@ class ReplicationRule:
     target_connection_id: str
     target_bucket: str
     enabled: bool = True
+    mode: str = REPLICATION_MODE_NEW_ONLY
+    created_at: Optional[float] = None
+    stats: ReplicationStats = field(default_factory=ReplicationStats)
+
+    def to_dict(self) -> dict:
+        return {
+            "bucket_name": self.bucket_name,
+            "target_connection_id": self.target_connection_id,
+            "target_bucket": self.target_bucket,
+            "enabled": self.enabled,
+            "mode": self.mode,
+            "created_at": self.created_at,
+            "stats": self.stats.to_dict(),
+        }
+
+    @classmethod
+    def from_dict(cls, data: dict) -> "ReplicationRule":
+        stats_data = data.pop("stats", {})
+        if "mode" not in data:
+            data["mode"] = REPLICATION_MODE_NEW_ONLY
+        if "created_at" not in data:
+            data["created_at"] = None
+        rule = cls(**data)
+        rule.stats = ReplicationStats.from_dict(stats_data) if stats_data else ReplicationStats()
+        return rule
+
+
 class ReplicationManager:
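The from_dict defaults matter for rules files written by older builds, which lack mode, created_at, and stats entirely; a sketch of how such a legacy record loads:

    legacy = {
        "bucket_name": "photos",
        "target_connection_id": "conn-1",
        "target_bucket": "photos-mirror",
        "enabled": True,
    }
    rule = ReplicationRule.from_dict(legacy)
    assert rule.mode == REPLICATION_MODE_NEW_ONLY    # backfilled default
    assert isinstance(rule.stats, ReplicationStats)  # fresh, zeroed stats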
@@ -31,6 +100,7 @@ class ReplicationManager:
         self.connections = connections
         self.rules_path = rules_path
         self._rules: Dict[str, ReplicationRule] = {}
+        self._stats_lock = threading.Lock()
         self._executor = ThreadPoolExecutor(max_workers=4, thread_name_prefix="ReplicationWorker")
         self.reload_rules()

@@ -39,21 +109,46 @@ class ReplicationManager:
             self._rules = {}
             return
         try:
-            import json
             with open(self.rules_path, "r") as f:
                 data = json.load(f)
             for bucket, rule_data in data.items():
-                self._rules[bucket] = ReplicationRule(**rule_data)
+                self._rules[bucket] = ReplicationRule.from_dict(rule_data)
         except (OSError, ValueError) as e:
             logger.error(f"Failed to load replication rules: {e}")

     def save_rules(self) -> None:
-        import json
-        data = {b: rule.__dict__ for b, rule in self._rules.items()}
+        data = {b: rule.to_dict() for b, rule in self._rules.items()}
         self.rules_path.parent.mkdir(parents=True, exist_ok=True)
         with open(self.rules_path, "w") as f:
             json.dump(data, f, indent=2)

+    def check_endpoint_health(self, connection: RemoteConnection) -> bool:
+        """Check if a remote endpoint is reachable and responsive.
+
+        Returns True if endpoint is healthy, False otherwise.
+        Uses short timeouts to prevent blocking.
+        """
+        try:
+            config = Config(
+                user_agent_extra=REPLICATION_USER_AGENT,
+                connect_timeout=REPLICATION_CONNECT_TIMEOUT,
+                read_timeout=REPLICATION_READ_TIMEOUT,
+                retries={'max_attempts': 1}
+            )
+            s3 = boto3.client(
+                "s3",
+                endpoint_url=connection.endpoint_url,
+                aws_access_key_id=connection.access_key,
+                aws_secret_access_key=connection.secret_key,
+                region_name=connection.region,
+                config=config,
+            )
+            s3.list_buckets()
+            return True
+        except Exception as e:
+            logger.warning(f"Endpoint health check failed for {connection.name} ({connection.endpoint_url}): {e}")
+            return False
+
     def get_rule(self, bucket_name: str) -> Optional[ReplicationRule]:
         return self._rules.get(bucket_name)
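The switch from rule.__dict__ to rule.to_dict() in save_rules is what keeps persistence working once stats became a nested dataclass; a sketch of the failure the old code would hit:

    import json

    rule = ReplicationRule(
        bucket_name="photos",
        target_connection_id="conn-1",
        target_bucket="photos-mirror",
    )
    # json.dumps(rule.__dict__) raises:
    #   TypeError: Object of type ReplicationStats is not JSON serializable
    json.dumps(rule.to_dict())  # works: stats is flattened via ReplicationStats.to_dict()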
@@ -66,7 +161,115 @@ class ReplicationManager:
         del self._rules[bucket_name]
         self.save_rules()

-    def trigger_replication(self, bucket_name: str, object_key: str) -> None:
+    def _update_last_sync(self, bucket_name: str, object_key: str = "") -> None:
+        """Update last sync timestamp after a successful operation."""
+        with self._stats_lock:
+            rule = self._rules.get(bucket_name)
+            if not rule:
+                return
+            rule.stats.last_sync_at = time.time()
+            rule.stats.last_sync_key = object_key
+            self.save_rules()
+
+    def get_sync_status(self, bucket_name: str) -> Optional[ReplicationStats]:
+        """Dynamically compute replication status by comparing source and destination buckets."""
+        rule = self.get_rule(bucket_name)
+        if not rule:
+            return None
+
+        connection = self.connections.get(rule.target_connection_id)
+        if not connection:
+            return rule.stats
+
+        try:
+            source_objects = self.storage.list_objects_all(bucket_name)
+            source_keys = {obj.key: obj.size for obj in source_objects}
+
+            s3 = boto3.client(
+                "s3",
+                endpoint_url=connection.endpoint_url,
+                aws_access_key_id=connection.access_key,
+                aws_secret_access_key=connection.secret_key,
+                region_name=connection.region,
+            )
+
+            dest_keys = set()
+            bytes_synced = 0
+            paginator = s3.get_paginator('list_objects_v2')
+            try:
+                for page in paginator.paginate(Bucket=rule.target_bucket):
+                    for obj in page.get('Contents', []):
+                        dest_keys.add(obj['Key'])
+                        if obj['Key'] in source_keys:
+                            bytes_synced += obj.get('Size', 0)
+            except ClientError as e:
+                if e.response['Error']['Code'] == 'NoSuchBucket':
+                    dest_keys = set()
+                else:
+                    raise
+
+            synced = source_keys.keys() & dest_keys
+            orphaned = dest_keys - source_keys.keys()
+
+            if rule.mode == REPLICATION_MODE_ALL:
+                pending = source_keys.keys() - dest_keys
+            else:
+                pending = set()
+
+            rule.stats.objects_synced = len(synced)
+            rule.stats.objects_pending = len(pending)
+            rule.stats.objects_orphaned = len(orphaned)
+            rule.stats.bytes_synced = bytes_synced
+
+            return rule.stats
+
+        except (ClientError, StorageError) as e:
+            logger.error(f"Failed to compute sync status for {bucket_name}: {e}")
+            return rule.stats
+
+    def replicate_existing_objects(self, bucket_name: str) -> None:
+        """Trigger replication for all existing objects in a bucket."""
+        rule = self.get_rule(bucket_name)
+        if not rule or not rule.enabled:
+            return
+
+        connection = self.connections.get(rule.target_connection_id)
+        if not connection:
+            logger.warning(f"Cannot replicate existing objects: Connection {rule.target_connection_id} not found")
+            return
+
+        if not self.check_endpoint_health(connection):
+            logger.warning(f"Cannot replicate existing objects: Endpoint {connection.name} ({connection.endpoint_url}) is not reachable")
+            return
+
+        try:
+            objects = self.storage.list_objects_all(bucket_name)
+            logger.info(f"Starting replication of {len(objects)} existing objects from {bucket_name}")
+            for obj in objects:
+                self._executor.submit(self._replicate_task, bucket_name, obj.key, rule, connection, "write")
+        except StorageError as e:
+            logger.error(f"Failed to list objects for replication: {e}")
+
+    def create_remote_bucket(self, connection_id: str, bucket_name: str) -> None:
+        """Create a bucket on the remote connection."""
+        connection = self.connections.get(connection_id)
+        if not connection:
+            raise ValueError(f"Connection {connection_id} not found")
+
+        try:
+            s3 = boto3.client(
+                "s3",
+                endpoint_url=connection.endpoint_url,
+                aws_access_key_id=connection.access_key,
+                aws_secret_access_key=connection.secret_key,
+                region_name=connection.region,
+            )
+            s3.create_bucket(Bucket=bucket_name)
+        except ClientError as e:
+            logger.error(f"Failed to create remote bucket {bucket_name}: {e}")
+            raise
+
+    def trigger_replication(self, bucket_name: str, object_key: str, action: str = "write") -> None:
         rule = self.get_rule(bucket_name)
         if not rule or not rule.enabled:
             return
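The status computation above is plain set algebra over key names; the same classification, condensed (the keys and sizes here are illustrative):

    source_keys = {"a.txt": 10, "b.txt": 20}      # key -> size on the source
    dest_keys = {"b.txt", "stale.txt"}            # keys present on the target

    synced = source_keys.keys() & dest_keys       # {"b.txt"}
    orphaned = dest_keys - source_keys.keys()     # {"stale.txt"}: gone locally, still remote
    pending = source_keys.keys() - dest_keys      # {"a.txt"}: only counted in "all" mode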
@@ -76,46 +279,125 @@ class ReplicationManager:
             logger.warning(f"Replication skipped for {bucket_name}/{object_key}: Connection {rule.target_connection_id} not found")
             return

-        self._executor.submit(self._replicate_task, bucket_name, object_key, rule, connection)
-
-    def _replicate_task(self, bucket_name: str, object_key: str, rule: ReplicationRule, conn: RemoteConnection) -> None:
-        try:
-            # 1. Get local file path
-            # Note: We are accessing internal storage structure here.
-            # Ideally storage.py should expose a 'get_file_path' or we read the stream.
-            # For efficiency, we'll try to read the file directly if we can, or use storage.get_object
-
-            # Using boto3 to upload
-            s3 = boto3.client(
-                "s3",
-                endpoint_url=conn.endpoint_url,
-                aws_access_key_id=conn.access_key,
-                aws_secret_access_key=conn.secret_key,
-                region_name=conn.region,
-            )
-
-            # We need the file content.
-            # Since ObjectStorage is filesystem based, let's get the stream.
-            # We need to be careful about closing it.
-            meta = self.storage.get_object_meta(bucket_name, object_key)
-            if not meta:
-                return
-
-            with self.storage.open_object(bucket_name, object_key) as f:
-                extra_args = {}
-                if meta.metadata:
-                    extra_args["Metadata"] = meta.metadata
-
-                s3.upload_fileobj(
-                    f,
-                    rule.target_bucket,
-                    object_key,
-                    ExtraArgs=extra_args
-                )
-
-            logger.info(f"Replicated {bucket_name}/{object_key} to {conn.name} ({rule.target_bucket})")
-        except (ClientError, OSError, ValueError) as e:
-            logger.error(f"Replication failed for {bucket_name}/{object_key}: {e}")
-        except Exception:
-            logger.exception(f"Unexpected error during replication for {bucket_name}/{object_key}")
+        if not self.check_endpoint_health(connection):
+            logger.warning(f"Replication skipped for {bucket_name}/{object_key}: Endpoint {connection.name} ({connection.endpoint_url}) is not reachable")
+            return
+
+        self._executor.submit(self._replicate_task, bucket_name, object_key, rule, connection, action)
+
+    def _replicate_task(self, bucket_name: str, object_key: str, rule: ReplicationRule, conn: RemoteConnection, action: str) -> None:
+        if ".." in object_key or object_key.startswith("/") or object_key.startswith("\\"):
+            logger.error(f"Invalid object key in replication (path traversal attempt): {object_key}")
+            return
+
+        try:
+            from .storage import ObjectStorage
+            ObjectStorage._sanitize_object_key(object_key)
+        except StorageError as e:
+            logger.error(f"Object key validation failed in replication: {e}")
+            return
+
+        file_size = 0
+        try:
+            config = Config(
+                user_agent_extra=REPLICATION_USER_AGENT,
+                connect_timeout=REPLICATION_CONNECT_TIMEOUT,
+                read_timeout=REPLICATION_READ_TIMEOUT,
+                retries={'max_attempts': 2},
+                signature_version='s3v4',
+                s3={
+                    'addressing_style': 'path',
+                },
+                # Disable SDK automatic checksums - they cause SignatureDoesNotMatch errors
+                # with S3-compatible servers that don't support CRC32 checksum headers
+                request_checksum_calculation='when_required',
+                response_checksum_validation='when_required',
+            )
+            s3 = boto3.client(
+                "s3",
+                endpoint_url=conn.endpoint_url,
+                aws_access_key_id=conn.access_key,
+                aws_secret_access_key=conn.secret_key,
+                region_name=conn.region or 'us-east-1',
+                config=config,
+            )
+
+            if action == "delete":
+                try:
+                    s3.delete_object(Bucket=rule.target_bucket, Key=object_key)
+                    logger.info(f"Replicated DELETE {bucket_name}/{object_key} to {conn.name} ({rule.target_bucket})")
+                    self._update_last_sync(bucket_name, object_key)
+                except ClientError as e:
+                    logger.error(f"Replication DELETE failed for {bucket_name}/{object_key}: {e}")
+                return
+
+            try:
+                path = self.storage.get_object_path(bucket_name, object_key)
+            except StorageError:
+                logger.error(f"Source object not found: {bucket_name}/{object_key}")
+                return
+
+            # Don't replicate metadata - destination server will generate its own
+            # __etag__ and __size__. Replicating them causes signature mismatches when they have None/empty values.
+
+            content_type, _ = mimetypes.guess_type(path)
+            file_size = path.stat().st_size
+
+            logger.info(f"Replicating {bucket_name}/{object_key}: Size={file_size}, ContentType={content_type}")
+
+            def do_put_object() -> None:
+                """Helper to upload object.
+
+                Reads the file content into memory first to avoid signature calculation
+                issues with certain binary file types (like GIFs) when streaming.
+                Do NOT set ContentLength explicitly - boto3 calculates it from the bytes
+                and setting it manually can cause SignatureDoesNotMatch errors.
+                """
+                file_content = path.read_bytes()
+                put_kwargs = {
+                    "Bucket": rule.target_bucket,
+                    "Key": object_key,
+                    "Body": file_content,
+                }
+                if content_type:
+                    put_kwargs["ContentType"] = content_type
+                s3.put_object(**put_kwargs)
+
+            try:
+                do_put_object()
+            except (ClientError, S3UploadFailedError) as e:
+                error_code = None
+                if isinstance(e, ClientError):
+                    error_code = e.response['Error']['Code']
+                elif isinstance(e, S3UploadFailedError):
+                    if "NoSuchBucket" in str(e):
+                        error_code = 'NoSuchBucket'
+
+                if error_code == 'NoSuchBucket':
+                    logger.info(f"Target bucket {rule.target_bucket} not found. Attempting to create it.")
+                    bucket_ready = False
+                    try:
+                        s3.create_bucket(Bucket=rule.target_bucket)
+                        bucket_ready = True
+                        logger.info(f"Created target bucket {rule.target_bucket}")
+                    except ClientError as bucket_err:
+                        if bucket_err.response['Error']['Code'] in ('BucketAlreadyExists', 'BucketAlreadyOwnedByYou'):
+                            logger.debug(f"Bucket {rule.target_bucket} already exists (created by another thread)")
+                            bucket_ready = True
+                        else:
+                            logger.error(f"Failed to create target bucket {rule.target_bucket}: {bucket_err}")
+                            raise e
+
+                    if bucket_ready:
+                        do_put_object()
+                    else:
+                        raise e
+
+            logger.info(f"Replicated {bucket_name}/{object_key} to {conn.name} ({rule.target_bucket})")
+            self._update_last_sync(bucket_name, object_key)
+
+        except (ClientError, OSError, ValueError) as e:
+            logger.error(f"Replication failed for {bucket_name}/{object_key}: {e}")
+        except Exception:
+            logger.exception(f"Unexpected error during replication for {bucket_name}/{object_key}")
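With the action parameter added above, callers propagate deletes through the same path as writes; a usage sketch (manager here stands in for a configured ReplicationManager instance):

    # After a successful local write:
    manager.trigger_replication("photos", "2024/cat.gif")  # action defaults to "write"

    # After a successful local delete:
    manager.trigger_replication("photos", "2024/cat.gif", action="delete")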
1271	app/s3_api.py
File diff suppressed because it is too large

805	app/storage.py
@@ -10,10 +10,40 @@ import stat
 import time
 import unicodedata
 import uuid
+from contextlib import contextmanager
 from dataclasses import dataclass
 from datetime import datetime, timezone
 from pathlib import Path
-from typing import Any, BinaryIO, Dict, List, Optional
+from typing import Any, BinaryIO, Dict, Generator, List, Optional

+# Platform-specific file locking
+if os.name == "nt":
+    import msvcrt
+
+    @contextmanager
+    def _file_lock(file_handle) -> Generator[None, None, None]:
+        """Acquire an exclusive lock on a file (Windows)."""
+        try:
+            msvcrt.locking(file_handle.fileno(), msvcrt.LK_NBLCK, 1)
+            yield
+        finally:
+            try:
+                file_handle.seek(0)
+                msvcrt.locking(file_handle.fileno(), msvcrt.LK_UNLCK, 1)
+            except OSError:
+                pass
+else:
+    import fcntl  # type: ignore
+
+    @contextmanager
+    def _file_lock(file_handle) -> Generator[None, None, None]:
+        """Acquire an exclusive lock on a file (Unix)."""
+        try:
+            fcntl.flock(file_handle.fileno(), fcntl.LOCK_EX)
+            yield
+        finally:
+            fcntl.flock(file_handle.fileno(), fcntl.LOCK_UN)
+
+
 WINDOWS_RESERVED_NAMES = {
     "CON",
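Both branches expose the same _file_lock context manager, so callers stay platform-agnostic; a usage sketch (note the semantics differ slightly: the Unix branch blocks until the lock is free, while the Windows branch is non-blocking and raises OSError if the byte is already locked):

    from pathlib import Path

    counter = Path("counter.txt")
    counter.touch()
    with counter.open("r+") as fh:
        with _file_lock(fh):  # advisory exclusive lock around the read-modify-write
            value = int(fh.read() or 0) + 1
            fh.seek(0)
            fh.write(str(value))
            fh.truncate()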
@@ -45,6 +75,15 @@ class StorageError(RuntimeError):
     """Raised when the storage layer encounters an unrecoverable problem."""


+class QuotaExceededError(StorageError):
+    """Raised when an operation would exceed bucket quota limits."""
+
+    def __init__(self, message: str, quota: Dict[str, Any], usage: Dict[str, int]):
+        super().__init__(message)
+        self.quota = quota
+        self.usage = usage
+
+
 @dataclass
 class ObjectMeta:
     key: str
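Because QuotaExceededError subclasses StorageError and carries the limits and usage that tripped it, an API layer can map it to a structured error before falling back to the generic handler; a sketch (storage stands in for an ObjectStorage instance):

    try:
        storage.put_object(bucket, key, stream)
    except QuotaExceededError as exc:
        # exc.quota -> {"max_bytes": ..., "max_objects": ...}
        # exc.usage -> {"bytes": ..., "objects": ..., "version_count": ..., "version_bytes": ...}
        print(f"rejected: {exc} (usage={exc.usage}, quota={exc.quota})")
    except StorageError as exc:
        print(f"storage failure: {exc}")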
@@ -60,6 +99,15 @@ class BucketMeta:
     created_at: datetime


+@dataclass
+class ListObjectsResult:
+    """Paginated result for object listing."""
+    objects: List[ObjectMeta]
+    is_truncated: bool
+    next_continuation_token: Optional[str]
+    total_count: Optional[int] = None  # Total objects in bucket (from stats cache)
+
+
 def _utcnow() -> datetime:
     return datetime.now(timezone.utc)
@@ -80,13 +128,14 @@ class ObjectStorage:
     BUCKET_VERSIONS_DIR = "versions"
     MULTIPART_MANIFEST = "manifest.json"
     BUCKET_CONFIG_FILE = ".bucket.json"
+    KEY_INDEX_CACHE_TTL = 30

     def __init__(self, root: Path) -> None:
         self.root = Path(root)
         self.root.mkdir(parents=True, exist_ok=True)
         self._ensure_system_roots()
+        self._object_cache: Dict[str, tuple[Dict[str, ObjectMeta], float]] = {}

-    # ---------------------- Bucket helpers ----------------------
     def list_buckets(self) -> List[BucketMeta]:
         buckets: List[BucketMeta] = []
         for bucket in sorted(self.root.iterdir()):
@@ -95,7 +144,7 @@ class ObjectStorage:
             buckets.append(
                 BucketMeta(
                     name=bucket.name,
-                    created_at=datetime.fromtimestamp(stat.st_ctime),
+                    created_at=datetime.fromtimestamp(stat.st_ctime, timezone.utc),
                 )
             )
         return buckets
@@ -119,22 +168,73 @@ class ObjectStorage:
         bucket_path.mkdir(parents=True, exist_ok=False)
         self._system_bucket_root(bucket_path.name).mkdir(parents=True, exist_ok=True)

-    def bucket_stats(self, bucket_name: str) -> dict[str, int]:
-        """Return object count and total size for the bucket without hashing files."""
+    def bucket_stats(self, bucket_name: str, cache_ttl: int = 60) -> dict[str, int]:
+        """Return object count and total size for the bucket (cached).
+
+        Args:
+            bucket_name: Name of the bucket
+            cache_ttl: Cache time-to-live in seconds (default 60)
+        """
         bucket_path = self._bucket_path(bucket_name)
         if not bucket_path.exists():
             raise StorageError("Bucket does not exist")
+
+        cache_path = self._system_bucket_root(bucket_name) / "stats.json"
+        if cache_path.exists():
+            try:
+                if time.time() - cache_path.stat().st_mtime < cache_ttl:
+                    return json.loads(cache_path.read_text(encoding="utf-8"))
+            except (OSError, json.JSONDecodeError):
+                pass
+
         object_count = 0
         total_bytes = 0
+        version_count = 0
+        version_bytes = 0
+
         for path in bucket_path.rglob("*"):
             if path.is_file():
                 rel = path.relative_to(bucket_path)
-                if rel.parts and rel.parts[0] in self.INTERNAL_FOLDERS:
+                if not rel.parts:
                     continue
-                stat = path.stat()
-                object_count += 1
-                total_bytes += stat.st_size
-        return {"objects": object_count, "bytes": total_bytes}
+                top_folder = rel.parts[0]
+                if top_folder not in self.INTERNAL_FOLDERS:
+                    stat = path.stat()
+                    object_count += 1
+                    total_bytes += stat.st_size
+
+        versions_root = self._bucket_versions_root(bucket_name)
+        if versions_root.exists():
+            for path in versions_root.rglob("*.bin"):
+                if path.is_file():
+                    stat = path.stat()
+                    version_count += 1
+                    version_bytes += stat.st_size
+
+        stats = {
+            "objects": object_count,
+            "bytes": total_bytes,
+            "version_count": version_count,
+            "version_bytes": version_bytes,
+            "total_objects": object_count + version_count,
+            "total_bytes": total_bytes + version_bytes,
+        }
+
+        try:
+            cache_path.parent.mkdir(parents=True, exist_ok=True)
+            cache_path.write_text(json.dumps(stats), encoding="utf-8")
+        except OSError:
+            pass
+
+        return stats
+
+    def _invalidate_bucket_stats_cache(self, bucket_id: str) -> None:
+        """Invalidate the cached bucket statistics."""
+        cache_path = self._system_bucket_root(bucket_id) / "stats.json"
+        try:
+            cache_path.unlink(missing_ok=True)
+        except OSError:
+            pass

     def delete_bucket(self, bucket_name: str) -> None:
         bucket_path = self._bucket_path(bucket_name)
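The stats cache is keyed purely on the mtime of stats.json, so consecutive calls within the TTL never re-walk the tree; a sketch of the observable behavior:

    first = storage.bucket_stats("photos")   # walks the bucket, writes stats.json
    second = storage.bucket_stats("photos")  # within the TTL: served from stats.json
    assert second == first

    fresh = storage.bucket_stats("photos", cache_ttl=0)  # forces a re-walk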
@@ -150,32 +250,76 @@ class ObjectStorage:
         self._remove_tree(self._system_bucket_root(bucket_path.name))
         self._remove_tree(self._multipart_bucket_root(bucket_path.name))

-    # ---------------------- Object helpers ----------------------
-    def list_objects(self, bucket_name: str) -> List[ObjectMeta]:
+    def list_objects(
+        self,
+        bucket_name: str,
+        *,
+        max_keys: int = 1000,
+        continuation_token: Optional[str] = None,
+        prefix: Optional[str] = None,
+    ) -> ListObjectsResult:
+        """List objects in a bucket with pagination support.
+
+        Args:
+            bucket_name: Name of the bucket
+            max_keys: Maximum number of objects to return (default 1000)
+            continuation_token: Token from previous request for pagination
+            prefix: Filter objects by key prefix
+
+        Returns:
+            ListObjectsResult with objects, truncation status, and continuation token
+        """
         bucket_path = self._bucket_path(bucket_name)
         if not bucket_path.exists():
             raise StorageError("Bucket does not exist")
         bucket_id = bucket_path.name
-        objects: List[ObjectMeta] = []
-        for path in bucket_path.rglob("*"):
-            if path.is_file():
-                stat = path.stat()
-                rel = path.relative_to(bucket_path)
-                if rel.parts and rel.parts[0] in self.INTERNAL_FOLDERS:
-                    continue
-                metadata = self._read_metadata(bucket_id, rel)
-                objects.append(
-                    ObjectMeta(
-                        key=str(rel.as_posix()),
-                        size=stat.st_size,
-                        last_modified=datetime.fromtimestamp(stat.st_mtime),
-                        etag=self._compute_etag(path),
-                        metadata=metadata or None,
-                    )
-                )
-        objects.sort(key=lambda meta: meta.key)
-        return objects
+
+        object_cache = self._get_object_cache(bucket_id, bucket_path)
+
+        all_keys = sorted(object_cache.keys())
+
+        if prefix:
+            all_keys = [k for k in all_keys if k.startswith(prefix)]
+
+        total_count = len(all_keys)
+        start_index = 0
+        if continuation_token:
+            try:
+                import bisect
+                start_index = bisect.bisect_right(all_keys, continuation_token)
+                if start_index >= total_count:
+                    return ListObjectsResult(
+                        objects=[],
+                        is_truncated=False,
+                        next_continuation_token=None,
+                        total_count=total_count,
+                    )
+            except Exception:
+                pass
+
+        end_index = start_index + max_keys
+        keys_slice = all_keys[start_index:end_index]
+        is_truncated = end_index < total_count
+
+        objects: List[ObjectMeta] = []
+        for key in keys_slice:
+            obj = object_cache.get(key)
+            if obj:
+                objects.append(obj)
+
+        next_token = keys_slice[-1] if is_truncated and keys_slice else None
+
+        return ListObjectsResult(
+            objects=objects,
+            is_truncated=is_truncated,
+            next_continuation_token=next_token,
+            total_count=total_count,
+        )
+
+    def list_objects_all(self, bucket_name: str) -> List[ObjectMeta]:
+        """List all objects in a bucket (no pagination). Use with caution for large buckets."""
+        result = self.list_objects(bucket_name, max_keys=100000)
+        return result.objects

     def put_object(
         self,
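Draining a large bucket with the new paginated API is the usual continuation-token loop; a sketch:

    token = None
    while True:
        page = storage.list_objects("photos", max_keys=500, continuation_token=token)
        for obj in page.objects:
            print(obj.key, obj.size)
        if not page.is_truncated:
            break
        token = page.next_continuation_token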
@@ -184,6 +328,7 @@ class ObjectStorage:
         stream: BinaryIO,
         *,
         metadata: Optional[Dict[str, str]] = None,
+        enforce_quota: bool = True,
     ) -> ObjectMeta:
         bucket_path = self._bucket_path(bucket_name)
         if not bucket_path.exists():
@@ -194,23 +339,62 @@ class ObjectStorage:
         destination = bucket_path / safe_key
         destination.parent.mkdir(parents=True, exist_ok=True)
-        if self._is_versioning_enabled(bucket_path) and destination.exists():
+
+        is_overwrite = destination.exists()
+        existing_size = destination.stat().st_size if is_overwrite else 0
+
+        if self._is_versioning_enabled(bucket_path) and is_overwrite:
             self._archive_current_version(bucket_id, safe_key, reason="overwrite")
-        checksum = hashlib.md5()
-        with destination.open("wb") as target:
-            shutil.copyfileobj(_HashingReader(stream, checksum), target)
+
+        tmp_dir = self._system_root_path() / self.SYSTEM_TMP_DIR
+        tmp_dir.mkdir(parents=True, exist_ok=True)
+        tmp_path = tmp_dir / f"{uuid.uuid4().hex}.tmp"
+
+        try:
+            checksum = hashlib.md5()
+            with tmp_path.open("wb") as target:
+                shutil.copyfileobj(_HashingReader(stream, checksum), target)
+
+            new_size = tmp_path.stat().st_size
+
+            if enforce_quota:
+                size_delta = new_size - existing_size
+                object_delta = 0 if is_overwrite else 1
+
+                quota_check = self.check_quota(
+                    bucket_name,
+                    additional_bytes=max(0, size_delta),
+                    additional_objects=object_delta,
+                )
+                if not quota_check["allowed"]:
+                    raise QuotaExceededError(
+                        quota_check["message"] or "Quota exceeded",
+                        quota_check["quota"],
+                        quota_check["usage"],
+                    )
+
+            shutil.move(str(tmp_path), str(destination))
+
+        finally:
+            try:
+                tmp_path.unlink(missing_ok=True)
+            except OSError:
+                pass
+
         stat = destination.stat()
-        if metadata:
-            self._write_metadata(bucket_id, safe_key, metadata)
-        else:
-            self._delete_metadata(bucket_id, safe_key)
+        etag = checksum.hexdigest()
+
+        internal_meta = {"__etag__": etag, "__size__": str(stat.st_size)}
+        combined_meta = {**internal_meta, **(metadata or {})}
+        self._write_metadata(bucket_id, safe_key, combined_meta)
+
+        self._invalidate_bucket_stats_cache(bucket_id)
+        self._invalidate_object_cache(bucket_id)
+
         return ObjectMeta(
             key=safe_key.as_posix(),
             size=stat.st_size,
-            last_modified=datetime.fromtimestamp(stat.st_mtime),
+            last_modified=datetime.fromtimestamp(stat.st_mtime, timezone.utc),
-            etag=checksum.hexdigest(),
+            etag=etag,
             metadata=metadata,
         )
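The tmp-file dance in put_object is the standard write-then-rename pattern: readers see either the old object or the complete new one, and a failed quota check never touches the destination. A generic sketch of the same idiom (atomic only when tmp_dir and destination share a filesystem):

    import shutil
    import uuid
    from pathlib import Path

    def atomic_write(destination: Path, data: bytes, tmp_dir: Path) -> None:
        tmp_dir.mkdir(parents=True, exist_ok=True)
        tmp_path = tmp_dir / f"{uuid.uuid4().hex}.tmp"
        try:
            tmp_path.write_bytes(data)                # failures here never touch destination
            shutil.move(str(tmp_path), str(destination))
        finally:
            tmp_path.unlink(missing_ok=True)          # no-op once the move has succeeded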
@@ -227,6 +411,25 @@ class ObjectStorage:
         safe_key = self._sanitize_object_key(object_key)
         return self._read_metadata(bucket_path.name, safe_key) or {}

+    def _cleanup_empty_parents(self, path: Path, stop_at: Path) -> None:
+        """Remove empty parent directories up to (but not including) stop_at.
+
+        On Windows/OneDrive, directories may be locked briefly after file deletion.
+        This method retries with a small delay to handle that case.
+        """
+        for parent in path.parents:
+            if parent == stop_at:
+                break
+            for attempt in range(3):
+                try:
+                    if parent.exists() and not any(parent.iterdir()):
+                        parent.rmdir()
+                    break
+                except OSError:
+                    if attempt < 2:
+                        time.sleep(0.1)
+                    else:
+                        break
+
     def delete_object(self, bucket_name: str, object_key: str) -> None:
         bucket_path = self._bucket_path(bucket_name)
         path = self._object_path(bucket_name, object_key)
@@ -239,12 +442,10 @@ class ObjectStorage:
         rel = path.relative_to(bucket_path)
         self._safe_unlink(path)
         self._delete_metadata(bucket_id, rel)
-        # Clean up now empty parents inside the bucket.
-        for parent in path.parents:
-            if parent == bucket_path:
-                break
-            if parent.exists() and not any(parent.iterdir()):
-                parent.rmdir()
+
+        self._invalidate_bucket_stats_cache(bucket_id)
+        self._invalidate_object_cache(bucket_id)
+        self._cleanup_empty_parents(path, bucket_path)

     def purge_object(self, bucket_name: str, object_key: str) -> None:
         bucket_path = self._bucket_path(bucket_name)
@@ -263,13 +464,11 @@ class ObjectStorage:
         legacy_version_dir = self._legacy_version_dir(bucket_id, rel)
         if legacy_version_dir.exists():
             shutil.rmtree(legacy_version_dir, ignore_errors=True)
-        for parent in target.parents:
-            if parent == bucket_path:
-                break
-            if parent.exists() and not any(parent.iterdir()):
-                parent.rmdir()
-
-    # ---------------------- Versioning helpers ----------------------
+
+        self._invalidate_bucket_stats_cache(bucket_id)
+        self._invalidate_object_cache(bucket_id)
+        self._cleanup_empty_parents(target, bucket_path)
+
     def is_versioning_enabled(self, bucket_name: str) -> bool:
         bucket_path = self._bucket_path(bucket_name)
         if not bucket_path.exists():
@@ -282,7 +481,6 @@ class ObjectStorage:
         config["versioning_enabled"] = bool(enabled)
         self._write_bucket_config(bucket_path.name, config)

-    # ---------------------- Bucket configuration helpers ----------------------
     def get_bucket_tags(self, bucket_name: str) -> List[Dict[str, str]]:
         bucket_path = self._require_bucket_path(bucket_name)
         config = self._read_bucket_config(bucket_path.name)
@@ -335,6 +533,195 @@ class ObjectStorage:
         bucket_path = self._require_bucket_path(bucket_name)
         self._set_bucket_config_entry(bucket_path.name, "encryption", config_payload or None)

+    def get_bucket_lifecycle(self, bucket_name: str) -> Optional[List[Dict[str, Any]]]:
+        """Get lifecycle configuration for bucket."""
+        bucket_path = self._require_bucket_path(bucket_name)
+        config = self._read_bucket_config(bucket_path.name)
+        lifecycle = config.get("lifecycle")
+        return lifecycle if isinstance(lifecycle, list) else None
+
+    def set_bucket_lifecycle(self, bucket_name: str, rules: Optional[List[Dict[str, Any]]]) -> None:
+        """Set lifecycle configuration for bucket."""
+        bucket_path = self._require_bucket_path(bucket_name)
+        self._set_bucket_config_entry(bucket_path.name, "lifecycle", rules)
+
+    def get_bucket_quota(self, bucket_name: str) -> Dict[str, Any]:
+        """Get quota configuration for bucket.
+
+        Returns:
+            Dict with 'max_bytes' and 'max_objects' (None if unlimited).
+        """
+        bucket_path = self._require_bucket_path(bucket_name)
+        config = self._read_bucket_config(bucket_path.name)
+        quota = config.get("quota")
+        if isinstance(quota, dict):
+            return {
+                "max_bytes": quota.get("max_bytes"),
+                "max_objects": quota.get("max_objects"),
+            }
+        return {"max_bytes": None, "max_objects": None}
+
+    def set_bucket_quota(
+        self,
+        bucket_name: str,
+        *,
+        max_bytes: Optional[int] = None,
+        max_objects: Optional[int] = None,
+    ) -> None:
+        """Set quota limits for a bucket.
+
+        Args:
+            bucket_name: Name of the bucket
+            max_bytes: Maximum total size in bytes (None to remove limit)
+            max_objects: Maximum number of objects (None to remove limit)
+        """
+        bucket_path = self._require_bucket_path(bucket_name)
+
+        if max_bytes is None and max_objects is None:
+            self._set_bucket_config_entry(bucket_path.name, "quota", None)
+            return
+
+        quota: Dict[str, Any] = {}
+        if max_bytes is not None:
+            if max_bytes < 0:
+                raise StorageError("max_bytes must be non-negative")
+            quota["max_bytes"] = max_bytes
+        if max_objects is not None:
+            if max_objects < 0:
+                raise StorageError("max_objects must be non-negative")
+            quota["max_objects"] = max_objects
+
+        self._set_bucket_config_entry(bucket_path.name, "quota", quota)
+
+    def check_quota(
+        self,
+        bucket_name: str,
+        additional_bytes: int = 0,
+        additional_objects: int = 0,
+    ) -> Dict[str, Any]:
+        """Check if an operation would exceed bucket quota.
+
+        Args:
+            bucket_name: Name of the bucket
+            additional_bytes: Bytes that would be added
+            additional_objects: Objects that would be added
+
+        Returns:
+            Dict with 'allowed' (bool), 'quota' (current limits),
+            'usage' (current usage), and 'message' (if not allowed).
+        """
+        quota = self.get_bucket_quota(bucket_name)
+        if not quota:
+            return {
+                "allowed": True,
+                "quota": None,
+                "usage": None,
+                "message": None,
+            }
+
+        stats = self.bucket_stats(bucket_name)
+        current_bytes = stats.get("total_bytes", stats.get("bytes", 0))
+        current_objects = stats.get("total_objects", stats.get("objects", 0))
+
+        result = {
+            "allowed": True,
+            "quota": quota,
+            "usage": {
+                "bytes": current_bytes,
+                "objects": current_objects,
+                "version_count": stats.get("version_count", 0),
+                "version_bytes": stats.get("version_bytes", 0),
+            },
+            "message": None,
+        }
+
+        max_bytes_limit = quota.get("max_bytes")
+        max_objects = quota.get("max_objects")
+
+        if max_bytes_limit is not None:
+            projected_bytes = current_bytes + additional_bytes
+            if projected_bytes > max_bytes_limit:
+                result["allowed"] = False
+                result["message"] = (
+                    f"Quota exceeded: adding {additional_bytes} bytes would result in "
+                    f"{projected_bytes} bytes, exceeding limit of {max_bytes_limit} bytes"
+                )
+                return result
+
+        if max_objects is not None:
+            projected_objects = current_objects + additional_objects
+            if projected_objects > max_objects:
+                result["allowed"] = False
+                result["message"] = (
+                    f"Quota exceeded: adding {additional_objects} objects would result in "
+                    f"{projected_objects} objects, exceeding limit of {max_objects} objects"
+                )
+                return result
+
+        return result
+
+    def get_object_tags(self, bucket_name: str, object_key: str) -> List[Dict[str, str]]:
+        """Get tags for an object."""
+        bucket_path = self._bucket_path(bucket_name)
+        if not bucket_path.exists():
+            raise StorageError("Bucket does not exist")
+        safe_key = self._sanitize_object_key(object_key)
+        object_path = bucket_path / safe_key
+        if not object_path.exists():
+            raise StorageError("Object does not exist")
+
+        for meta_file in (self._metadata_file(bucket_path.name, safe_key), self._legacy_metadata_file(bucket_path.name, safe_key)):
+            if not meta_file.exists():
+                continue
+            try:
+                payload = json.loads(meta_file.read_text(encoding="utf-8"))
+                tags = payload.get("tags")
+                if isinstance(tags, list):
+                    return tags
+                return []
+            except (OSError, json.JSONDecodeError):
+                return []
+        return []
+
+    def set_object_tags(self, bucket_name: str, object_key: str, tags: Optional[List[Dict[str, str]]]) -> None:
+        """Set tags for an object."""
+        bucket_path = self._bucket_path(bucket_name)
+        if not bucket_path.exists():
+            raise StorageError("Bucket does not exist")
+        safe_key = self._sanitize_object_key(object_key)
+        object_path = bucket_path / safe_key
+        if not object_path.exists():
+            raise StorageError("Object does not exist")
+
+        meta_file = self._metadata_file(bucket_path.name, safe_key)
+
+        existing_payload: Dict[str, Any] = {}
+        if meta_file.exists():
+            try:
+                existing_payload = json.loads(meta_file.read_text(encoding="utf-8"))
+            except (OSError, json.JSONDecodeError):
+                pass
+
+        if tags:
+            existing_payload["tags"] = tags
+        else:
+            existing_payload.pop("tags", None)
+
+        if existing_payload.get("metadata") or existing_payload.get("tags"):
+            meta_file.parent.mkdir(parents=True, exist_ok=True)
+            meta_file.write_text(json.dumps(existing_payload), encoding="utf-8")
+        elif meta_file.exists():
+            meta_file.unlink()
+            parent = meta_file.parent
+            meta_root = self._bucket_meta_root(bucket_path.name)
+            while parent != meta_root and parent.exists() and not any(parent.iterdir()):
+                parent.rmdir()
+                parent = parent.parent
+
+    def delete_object_tags(self, bucket_name: str, object_key: str) -> None:
+        """Delete all tags from an object."""
+        self.set_object_tags(bucket_name, object_key, None)
+
     def list_object_versions(self, bucket_name: str, object_key: str) -> List[Dict[str, Any]]:
         bucket_path = self._bucket_path(bucket_name)
         if not bucket_path.exists():
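Putting the quota helpers together, the admin-facing flow is: set limits, then let check_quota veto writes; a sketch:

    storage.set_bucket_quota("photos", max_bytes=10 * 1024 * 1024, max_objects=1000)

    verdict = storage.check_quota("photos", additional_bytes=4096, additional_objects=1)
    if not verdict["allowed"]:
        print(verdict["message"])

    storage.set_bucket_quota("photos")  # both limits None -> quota removed entirely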
@@ -389,10 +776,11 @@ class ObjectStorage:
         else:
             self._delete_metadata(bucket_id, safe_key)
         stat = destination.stat()
+        self._invalidate_bucket_stats_cache(bucket_id)
         return ObjectMeta(
             key=safe_key.as_posix(),
             size=stat.st_size,
-            last_modified=datetime.fromtimestamp(stat.st_mtime),
+            last_modified=datetime.fromtimestamp(stat.st_mtime, timezone.utc),
             etag=self._compute_etag(destination),
             metadata=metadata or None,
         )
@@ -459,7 +847,6 @@ class ObjectStorage:
         record.pop("_latest_sort", None)
         return sorted(aggregated.values(), key=lambda item: item["key"])

-    # ---------------------- Multipart helpers ----------------------
     def initiate_multipart_upload(
         self,
         bucket_name: str,
@@ -495,7 +882,13 @@ class ObjectStorage:
         if part_number < 1:
             raise StorageError("part_number must be >= 1")
         bucket_path = self._bucket_path(bucket_name)
-        manifest, upload_root = self._load_multipart_manifest(bucket_path.name, upload_id)
+
+        upload_root = self._multipart_dir(bucket_path.name, upload_id)
+        if not upload_root.exists():
+            upload_root = self._legacy_multipart_dir(bucket_path.name, upload_id)
+            if not upload_root.exists():
+                raise StorageError("Multipart upload not found")
+
         checksum = hashlib.md5()
         part_filename = f"part-{part_number:05d}.part"
         part_path = upload_root / part_filename
@@ -506,9 +899,21 @@ class ObjectStorage:
             "size": part_path.stat().st_size,
             "filename": part_filename,
         }
-        parts = manifest.setdefault("parts", {})
-        parts[str(part_number)] = record
-        self._write_multipart_manifest(upload_root, manifest)
+
+        manifest_path = upload_root / self.MULTIPART_MANIFEST
+        lock_path = upload_root / ".manifest.lock"
+
+        with lock_path.open("w") as lock_file:
+            with _file_lock(lock_file):
+                try:
+                    manifest = json.loads(manifest_path.read_text(encoding="utf-8"))
+                except (OSError, json.JSONDecodeError) as exc:
+                    raise StorageError("Multipart manifest unreadable") from exc
+
+                parts = manifest.setdefault("parts", {})
+                parts[str(part_number)] = record
+                manifest_path.write_text(json.dumps(manifest), encoding="utf-8")
+
         return record["etag"]

     def complete_multipart_upload(
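The lock-then-reread sequence above is what makes concurrent upload_part calls safe: each writer re-reads the manifest under the exclusive lock before merging its own part record, so no writer can clobber another's entry. The same read-modify-write shape, in miniature (lock_path, state_path, part_number, and record as in the surrounding code):

    with lock_path.open("w") as lock_file:
        with _file_lock(lock_file):
            state = json.loads(state_path.read_text(encoding="utf-8"))
            state["parts"][str(part_number)] = record  # merge only this writer's entry
            state_path.write_text(json.dumps(state), encoding="utf-8")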
@@ -516,6 +921,7 @@ class ObjectStorage:
         bucket_name: str,
         upload_id: str,
         ordered_parts: List[Dict[str, Any]],
+        enforce_quota: bool = True,
     ) -> ObjectMeta:
         if not ordered_parts:
             raise StorageError("parts list required")
@@ -526,6 +932,7 @@ class ObjectStorage:
         if not parts_map:
             raise StorageError("No uploaded parts found")
         validated: List[tuple[int, Dict[str, Any]]] = []
+        total_size = 0
         for part in ordered_parts:
             raw_number = part.get("part_number")
             if raw_number is None:
@@ -545,39 +952,77 @@ class ObjectStorage:
             if supplied_etag and record.get("etag") and supplied_etag.strip('"') != record["etag"]:
                 raise StorageError(f"ETag mismatch for part {number}")
             validated.append((number, record))
+            total_size += record.get("size", 0)
         validated.sort(key=lambda entry: entry[0])

         safe_key = self._sanitize_object_key(manifest["object_key"])
         destination = bucket_path / safe_key
-        destination.parent.mkdir(parents=True, exist_ok=True)
-        if self._is_versioning_enabled(bucket_path) and destination.exists():
-            self._archive_current_version(bucket_id, safe_key, reason="overwrite")
-        checksum = hashlib.md5()
-        with destination.open("wb") as target:
-            for _, record in validated:
-                part_path = upload_root / record["filename"]
-                if not part_path.exists():
-                    raise StorageError(f"Missing part file {record['filename']}")
-                with part_path.open("rb") as chunk:
-                    while True:
-                        data = chunk.read(1024 * 1024)
-                        if not data:
-                            break
-                        checksum.update(data)
-                        target.write(data)

-        metadata = manifest.get("metadata")
-        if metadata:
-            self._write_metadata(bucket_id, safe_key, metadata)
-        else:
-            self._delete_metadata(bucket_id, safe_key)
+        is_overwrite = destination.exists()
+        existing_size = destination.stat().st_size if is_overwrite else 0
+
+        if enforce_quota:
+            size_delta = total_size - existing_size
+            object_delta = 0 if is_overwrite else 1
+
+            quota_check = self.check_quota(
+                bucket_name,
+                additional_bytes=max(0, size_delta),
+                additional_objects=object_delta,
+            )
+            if not quota_check["allowed"]:
+                raise QuotaExceededError(
+                    quota_check["message"] or "Quota exceeded",
+                    quota_check["quota"],
+                    quota_check["usage"],
+                )
+
+        destination.parent.mkdir(parents=True, exist_ok=True)
+
+        lock_file_path = self._system_bucket_root(bucket_id) / "locks" / f"{safe_key.as_posix().replace('/', '_')}.lock"
+        lock_file_path.parent.mkdir(parents=True, exist_ok=True)
+
+        try:
+            with lock_file_path.open("w") as lock_file:
+                with _file_lock(lock_file):
+                    if self._is_versioning_enabled(bucket_path) and destination.exists():
+                        self._archive_current_version(bucket_id, safe_key, reason="overwrite")
+                    checksum = hashlib.md5()
+                    with destination.open("wb") as target:
+                        for _, record in validated:
+                            part_path = upload_root / record["filename"]
+                            if not part_path.exists():
+                                raise StorageError(f"Missing part file {record['filename']}")
+                            with part_path.open("rb") as chunk:
+                                while True:
+                                    data = chunk.read(1024 * 1024)
+                                    if not data:
+                                        break
+                                    checksum.update(data)
+                                    target.write(data)
+
+                    metadata = manifest.get("metadata")
+                    if metadata:
+                        self._write_metadata(bucket_id, safe_key, metadata)
+                    else:
+                        self._delete_metadata(bucket_id, safe_key)
+        except BlockingIOError:
+            raise StorageError("Another upload to this key is in progress")
+        finally:
+            try:
+                lock_file_path.unlink(missing_ok=True)
+            except OSError:
+                pass
+
         shutil.rmtree(upload_root, ignore_errors=True)
+
+        self._invalidate_bucket_stats_cache(bucket_id)
+
         stat = destination.stat()
         return ObjectMeta(
             key=safe_key.as_posix(),
             size=stat.st_size,
-            last_modified=datetime.fromtimestamp(stat.st_mtime),
+            last_modified=datetime.fromtimestamp(stat.st_mtime, timezone.utc),
             etag=checksum.hexdigest(),
             metadata=metadata,
         )
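Putting the new quota path together: a hypothetical caller of `complete_multipart_upload`. The class and exception names come from this diff; the constructor argument, bucket, upload id, and etag are invented for illustration:

```python
# Usage sketch only; names beyond those shown in the diff are assumptions.
from app.storage import ObjectStorage, QuotaExceededError, StorageError

storage = ObjectStorage(root="data")  # constructor signature assumed
parts = [{"part_number": 1, "etag": "0123456789abcdef0123456789abcdef"}]
try:
    meta = storage.complete_multipart_upload("my-bucket", "upload-123", parts)
    print(meta.key, meta.size, meta.etag)
except QuotaExceededError as exc:
    # Raised before any bytes are written when the assembled size or the
    # new object count would exceed the bucket quota.
    print("quota exceeded:", exc)
except StorageError as exc:
    print("upload failed:", exc)
```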
@@ -592,7 +1037,33 @@ class ObjectStorage:
         if legacy_root.exists():
             shutil.rmtree(legacy_root, ignore_errors=True)

-    # ---------------------- internal helpers ----------------------
+    def list_multipart_parts(self, bucket_name: str, upload_id: str) -> List[Dict[str, Any]]:
+        """List uploaded parts for a multipart upload."""
+        bucket_path = self._bucket_path(bucket_name)
+        manifest, upload_root = self._load_multipart_manifest(bucket_path.name, upload_id)
+
+        parts = []
+        parts_map = manifest.get("parts", {})
+        for part_num_str, record in parts_map.items():
+            part_num = int(part_num_str)
+            part_filename = record.get("filename")
+            if not part_filename:
+                continue
+            part_path = upload_root / part_filename
+            if not part_path.exists():
+                continue
+
+            stat = part_path.stat()
+            parts.append({
+                "PartNumber": part_num,
+                "Size": stat.st_size,
+                "ETag": record.get("etag"),
+                "LastModified": datetime.fromtimestamp(stat.st_mtime, timezone.utc)
+            })
+
+        parts.sort(key=lambda x: x["PartNumber"])
+        return parts
+
     def _bucket_path(self, bucket_name: str) -> Path:
         safe_name = self._sanitize_bucket_name(bucket_name)
         return self.root / safe_name
@@ -649,6 +1120,172 @@ class ObjectStorage:
     def _legacy_multipart_dir(self, bucket_name: str, upload_id: str) -> Path:
         return self._legacy_multipart_bucket_root(bucket_name) / upload_id

+    def _fast_list_keys(self, bucket_path: Path) -> List[str]:
+        """Fast directory walk using os.scandir instead of pathlib.rglob.
+
+        This is significantly faster for large directories (10K+ files).
+        Returns just the keys (for backward compatibility).
+        """
+        return list(self._build_object_cache(bucket_path).keys())
+
+    def _build_object_cache(self, bucket_path: Path) -> Dict[str, ObjectMeta]:
+        """Build a complete object metadata cache for a bucket.
+
+        Uses os.scandir for fast directory walking and a persistent etag index.
+        """
+        from concurrent.futures import ThreadPoolExecutor
+
+        bucket_id = bucket_path.name
+        objects: Dict[str, ObjectMeta] = {}
+        bucket_str = str(bucket_path)
+        bucket_len = len(bucket_str) + 1
+
+        etag_index_path = self._system_bucket_root(bucket_id) / "etag_index.json"
+        meta_cache: Dict[str, str] = {}
+        index_mtime: float = 0
+
+        if etag_index_path.exists():
+            try:
+                index_mtime = etag_index_path.stat().st_mtime
+                with open(etag_index_path, 'r', encoding='utf-8') as f:
+                    meta_cache = json.load(f)
+            except (OSError, json.JSONDecodeError):
+                meta_cache = {}
+
+        meta_root = self._bucket_meta_root(bucket_id)
+        needs_rebuild = False
+
+        if meta_root.exists() and index_mtime > 0:
+            def check_newer(dir_path: str) -> bool:
+                try:
+                    with os.scandir(dir_path) as it:
+                        for entry in it:
+                            if entry.is_dir(follow_symlinks=False):
+                                if check_newer(entry.path):
+                                    return True
+                            elif entry.is_file(follow_symlinks=False) and entry.name.endswith('.meta.json'):
+                                if entry.stat().st_mtime > index_mtime:
+                                    return True
+                except OSError:
+                    pass
+                return False
+            needs_rebuild = check_newer(str(meta_root))
+        elif not meta_cache:
+            needs_rebuild = True
+
+        if needs_rebuild and meta_root.exists():
+            meta_str = str(meta_root)
+            meta_len = len(meta_str) + 1
+            meta_files: list[tuple[str, str]] = []
+
+            def collect_meta_files(dir_path: str) -> None:
+                try:
+                    with os.scandir(dir_path) as it:
+                        for entry in it:
+                            if entry.is_dir(follow_symlinks=False):
+                                collect_meta_files(entry.path)
+                            elif entry.is_file(follow_symlinks=False) and entry.name.endswith('.meta.json'):
+                                rel = entry.path[meta_len:]
+                                key = rel[:-10].replace(os.sep, '/')
+                                meta_files.append((key, entry.path))
+                except OSError:
+                    pass
+
+            collect_meta_files(meta_str)
+
+            def read_meta_file(item: tuple[str, str]) -> tuple[str, str | None]:
+                key, path = item
+                try:
+                    with open(path, 'rb') as f:
+                        content = f.read()
+                    etag_marker = b'"__etag__"'
+                    idx = content.find(etag_marker)
+                    if idx != -1:
+                        start = content.find(b'"', idx + len(etag_marker) + 1)
+                        if start != -1:
+                            end = content.find(b'"', start + 1)
+                            if end != -1:
+                                return key, content[start+1:end].decode('utf-8')
+                    return key, None
+                except (OSError, UnicodeDecodeError):
+                    return key, None
+
+            if meta_files:
+                meta_cache = {}
+                with ThreadPoolExecutor(max_workers=min(64, len(meta_files))) as executor:
+                    for key, etag in executor.map(read_meta_file, meta_files):
+                        if etag:
+                            meta_cache[key] = etag
+
+            try:
+                etag_index_path.parent.mkdir(parents=True, exist_ok=True)
+                with open(etag_index_path, 'w', encoding='utf-8') as f:
+                    json.dump(meta_cache, f)
+            except OSError:
+                pass
+
+        def scan_dir(dir_path: str) -> None:
+            try:
+                with os.scandir(dir_path) as it:
+                    for entry in it:
+                        if entry.is_dir(follow_symlinks=False):
+                            rel_start = entry.path[bucket_len:].split(os.sep)[0] if len(entry.path) > bucket_len else entry.name
+                            if rel_start in self.INTERNAL_FOLDERS:
+                                continue
+                            scan_dir(entry.path)
+                        elif entry.is_file(follow_symlinks=False):
+                            rel = entry.path[bucket_len:]
+                            first_part = rel.split(os.sep)[0] if os.sep in rel else rel
+                            if first_part in self.INTERNAL_FOLDERS:
+                                continue
+
+                            key = rel.replace(os.sep, '/')
+                            try:
+                                stat = entry.stat()
+
+                                etag = meta_cache.get(key)
+
+                                if not etag:
+                                    etag = f'"{stat.st_size}-{int(stat.st_mtime)}"'
+
+                                objects[key] = ObjectMeta(
+                                    key=key,
+                                    size=stat.st_size,
+                                    last_modified=datetime.fromtimestamp(stat.st_mtime, timezone.utc),
+                                    etag=etag,
+                                    metadata=None,
+                                )
+                            except OSError:
+                                pass
+            except OSError:
+                pass
+
+        scan_dir(bucket_str)
+        return objects
+
+    def _get_object_cache(self, bucket_id: str, bucket_path: Path) -> Dict[str, ObjectMeta]:
+        """Get cached object metadata for a bucket, refreshing if stale."""
+        now = time.time()
+        cached = self._object_cache.get(bucket_id)
+
+        if cached:
+            objects, timestamp = cached
+            if now - timestamp < self.KEY_INDEX_CACHE_TTL:
+                return objects
+
+        objects = self._build_object_cache(bucket_path)
+        self._object_cache[bucket_id] = (objects, now)
+        return objects
+
+    def _invalidate_object_cache(self, bucket_id: str) -> None:
+        """Invalidate the object cache and etag index for a bucket."""
+        self._object_cache.pop(bucket_id, None)
+        etag_index_path = self._system_bucket_root(bucket_id) / "etag_index.json"
+        try:
+            etag_index_path.unlink(missing_ok=True)
+        except OSError:
+            pass
+
     def _ensure_system_roots(self) -> None:
         for path in (
             self._system_root_path(),
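The rebuild path deliberately avoids `json.loads` on every `.meta.json` file: it scans the raw bytes for the `"__etag__"` field. A standalone illustration of that probe, with an invented payload:

```python
# Mirrors the byte-level etag extraction in _build_object_cache above.
# The sample payload and etag value are made up.
content = b'{"__etag__": "9a0364b9e99bb480dd25e1f0284c8555", "content-type": "text/plain"}'

etag_marker = b'"__etag__"'
idx = content.find(etag_marker)
etag = None
if idx != -1:
    start = content.find(b'"', idx + len(etag_marker) + 1)  # opening quote of the value
    if start != -1:
        end = content.find(b'"', start + 1)                 # closing quote
        if end != -1:
            etag = content[start + 1:end].decode('utf-8')

assert etag == "9a0364b9e99bb480dd25e1f0284c8555"
```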
@@ -886,7 +1523,11 @@ class ObjectStorage:
         normalized = unicodedata.normalize("NFC", object_key)
         if normalized != object_key:
             raise StorageError("Object key must use normalized Unicode")

         candidate = Path(normalized)
+        if ".." in candidate.parts:
+            raise StorageError("Object key contains parent directory references")
+
         if candidate.is_absolute():
             raise StorageError("Absolute object keys are not allowed")
         if getattr(candidate, "drive", ""):
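The new guard rejects parent-directory references anywhere in the key, not just at the start. A standalone illustration of the same `Path.parts` check (keys invented):

```python
from pathlib import Path

for key in ("reports/2024/q1.csv", "../etc/passwd", "a/../../b"):
    candidate = Path(key)
    # ".." survives as its own path component wherever it appears in the key
    blocked = ".." in candidate.parts or candidate.is_absolute()
    print(f"{key!r}: {'rejected' if blocked else 'accepted'}")
```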
603 app/ui.py
@@ -3,10 +3,14 @@ from __future__ import annotations

 import json
 import uuid
+import psutil
+import shutil
 from typing import Any
-from urllib.parse import urlparse
+from urllib.parse import quote, urlparse

+import boto3
 import requests
+from botocore.exceptions import ClientError
 from flask import (
     Blueprint,
     Response,
@@ -26,6 +30,7 @@ from .bucket_policies import BucketPolicyStore
 from .connections import ConnectionStore, RemoteConnection
 from .extensions import limiter
 from .iam import IamError
+from .kms import KMSManager
 from .replication import ReplicationManager, ReplicationRule
 from .secret_store import EphemeralSecretStore
 from .storage import ObjectStorage, StorageError
@@ -38,10 +43,17 @@ def _storage() -> ObjectStorage:
     return current_app.extensions["object_storage"]


+def _replication_manager() -> ReplicationManager:
+    return current_app.extensions["replication"]
+
+
 def _iam():
     return current_app.extensions["iam"]


+def _kms() -> KMSManager | None:
+    return current_app.extensions.get("kms")
+
+
 def _bucket_policies() -> BucketPolicyStore:
     store: BucketPolicyStore = current_app.extensions["bucket_policies"]
@@ -177,6 +189,7 @@ def inject_nav_state() -> dict[str, Any]:
     return {
         "principal": principal,
         "can_manage_iam": can_manage,
+        "can_view_metrics": can_manage,
         "csrf_token": generate_csrf,
     }

@@ -241,14 +254,15 @@ def buckets_overview():
         if bucket.name not in allowed_names:
             continue
         policy = policy_store.get_policy(bucket.name)
-        stats = _storage().bucket_stats(bucket.name)
+        cache_ttl = current_app.config.get("BUCKET_STATS_CACHE_TTL", 60)
+        stats = _storage().bucket_stats(bucket.name, cache_ttl=cache_ttl)
         access_label, access_badge = _bucket_access_descriptor(policy)
         visible_buckets.append({
             "meta": bucket,
             "summary": {
-                "objects": stats["objects"],
-                "total_bytes": stats["bytes"],
-                "human_size": _format_bytes(stats["bytes"]),
+                "objects": stats["total_objects"],
+                "total_bytes": stats["total_bytes"],
+                "human_size": _format_bytes(stats["total_bytes"]),
             },
             "access_label": access_label,
             "access_badge": access_badge,
@@ -280,7 +294,8 @@ def bucket_detail(bucket_name: str):
     storage = _storage()
     try:
         _authorize_ui(principal, bucket_name, "list")
-        objects = storage.list_objects(bucket_name)
+        if not storage.bucket_exists(bucket_name):
+            raise StorageError("Bucket does not exist")
     except (StorageError, IamError) as exc:
         flash(_friendly_error_message(exc), "danger")
         return redirect(url_for("ui.buckets_overview"))
@@ -327,26 +342,124 @@ def bucket_detail(bucket_name: str):
     except IamError:
         can_manage_versioning = False

-    # Replication info
+    can_manage_replication = False
+    if principal:
+        try:
+            _iam().authorize(principal, bucket_name, "replication")
+            can_manage_replication = True
+        except IamError:
+            can_manage_replication = False
+
+    is_replication_admin = False
+    if principal:
+        try:
+            _iam().authorize(principal, None, "iam:list_users")
+            is_replication_admin = True
+        except IamError:
+            is_replication_admin = False
+
     replication_rule = _replication().get_rule(bucket_name)
-    connections = _connections().list()
+    connections = _connections().list() if (is_replication_admin or replication_rule) else []
+
+    encryption_config = storage.get_bucket_encryption(bucket_name)
+    kms_manager = _kms()
+    kms_keys = kms_manager.list_keys() if kms_manager else []
+    kms_enabled = current_app.config.get("KMS_ENABLED", False)
+    encryption_enabled = current_app.config.get("ENCRYPTION_ENABLED", False)
+    can_manage_encryption = can_manage_versioning  # Same as other bucket properties
+
+    bucket_quota = storage.get_bucket_quota(bucket_name)
+    bucket_stats = storage.bucket_stats(bucket_name)
+    can_manage_quota = False
+    try:
+        _iam().authorize(principal, None, "iam:list_users")
+        can_manage_quota = True
+    except IamError:
+        pass
+
+    objects_api_url = url_for("ui.list_bucket_objects", bucket_name=bucket_name)
+
     return render_template(
         "bucket_detail.html",
         bucket_name=bucket_name,
-        objects=objects,
+        objects_api_url=objects_api_url,
         principal=principal,
         bucket_policy_text=policy_text,
         bucket_policy=bucket_policy,
         can_edit_policy=can_edit_policy,
         can_manage_versioning=can_manage_versioning,
+        can_manage_replication=can_manage_replication,
+        can_manage_encryption=can_manage_encryption,
+        is_replication_admin=is_replication_admin,
         default_policy=default_policy,
         versioning_enabled=versioning_enabled,
         replication_rule=replication_rule,
         connections=connections,
+        encryption_config=encryption_config,
+        kms_keys=kms_keys,
+        kms_enabled=kms_enabled,
+        encryption_enabled=encryption_enabled,
+        bucket_quota=bucket_quota,
+        bucket_stats=bucket_stats,
+        can_manage_quota=can_manage_quota,
     )
+
+
+@ui_bp.get("/buckets/<bucket_name>/objects")
+def list_bucket_objects(bucket_name: str):
+    """API endpoint for paginated object listing."""
+    principal = _current_principal()
+    storage = _storage()
+    try:
+        _authorize_ui(principal, bucket_name, "list")
+    except IamError as exc:
+        return jsonify({"error": str(exc)}), 403
+
+    max_keys = min(int(request.args.get("max_keys", 1000)), 10000)
+    continuation_token = request.args.get("continuation_token") or None
+    prefix = request.args.get("prefix") or None
+
+    try:
+        result = storage.list_objects(
+            bucket_name,
+            max_keys=max_keys,
+            continuation_token=continuation_token,
+            prefix=prefix,
+        )
+    except StorageError as exc:
+        return jsonify({"error": str(exc)}), 400
+
+    try:
+        versioning_enabled = storage.is_versioning_enabled(bucket_name)
+    except StorageError:
+        versioning_enabled = False
+
+    objects_data = []
+    for obj in result.objects:
+        objects_data.append({
+            "key": obj.key,
+            "size": obj.size,
+            "last_modified": obj.last_modified.isoformat(),
+            "last_modified_display": obj.last_modified.strftime("%b %d, %Y %H:%M"),
+            "etag": obj.etag,
+            "metadata": obj.metadata or {},
+            "preview_url": url_for("ui.object_preview", bucket_name=bucket_name, object_key=obj.key),
+            "download_url": url_for("ui.object_preview", bucket_name=bucket_name, object_key=obj.key) + "?download=1",
+            "presign_endpoint": url_for("ui.object_presign", bucket_name=bucket_name, object_key=obj.key),
+            "delete_endpoint": url_for("ui.delete_object", bucket_name=bucket_name, object_key=obj.key),
+            "versions_endpoint": url_for("ui.object_versions", bucket_name=bucket_name, object_key=obj.key),
+            "restore_template": url_for("ui.restore_object_version", bucket_name=bucket_name, object_key=obj.key, version_id="VERSION_ID_PLACEHOLDER"),
+        })
+
+    return jsonify({
+        "objects": objects_data,
+        "is_truncated": result.is_truncated,
+        "next_continuation_token": result.next_continuation_token,
+        "total_count": result.total_count,
+        "versioning_enabled": versioning_enabled,
+    })
+
+
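A sketch of consuming the new paginated listing route from a script. The route and response fields come from the diff; the host, port, bucket, and the absent session cookie are simplifying assumptions:

```python
import requests

# UI conventionally runs on port 5100; bucket name is invented.
base = "http://127.0.0.1:5100/buckets/demo-bucket/objects"
token = None
while True:
    params = {"max_keys": 1000}
    if token:
        params["continuation_token"] = token
    page = requests.get(base, params=params, timeout=5).json()
    for obj in page["objects"]:
        print(obj["key"], obj["size"])
    if not page["is_truncated"]:
        break
    token = page["next_continuation_token"]
```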
 @ui_bp.post("/buckets/<bucket_name>/upload")
 @limiter.limit("30 per minute")
 def upload_object(bucket_name: str):
@@ -463,8 +576,6 @@ def complete_multipart_upload(bucket_name: str, upload_id: str):
         normalized.append({"part_number": number, "etag": etag})
     try:
         result = _storage().complete_multipart_upload(bucket_name, upload_id, normalized)
-
-        # Trigger replication
         _replication().trigger_replication(bucket_name, result["key"])

         return jsonify(result)
@@ -494,6 +605,7 @@ def delete_bucket(bucket_name: str):
         _authorize_ui(principal, bucket_name, "delete")
         _storage().delete_bucket(bucket_name)
         _bucket_policies().delete_policy(bucket_name)
+        _replication_manager().delete_rule(bucket_name)
         flash(f"Bucket '{bucket_name}' removed", "success")
     except (StorageError, IamError) as exc:
         flash(_friendly_error_message(exc), "danger")
@@ -512,6 +624,7 @@ def delete_object(bucket_name: str, object_key: str):
             flash(f"Permanently deleted '{object_key}' and all versions", "success")
         else:
             _storage().delete_object(bucket_name, object_key)
+            _replication_manager().trigger_replication(bucket_name, object_key, action="delete")
             flash(f"Deleted '{object_key}'", "success")
     except (IamError, StorageError) as exc:
         flash(_friendly_error_message(exc), "danger")
@@ -572,6 +685,7 @@ def bulk_delete_objects(bucket_name: str):
                 storage.purge_object(bucket_name, key)
             else:
                 storage.delete_object(bucket_name, key)
+                _replication_manager().trigger_replication(bucket_name, key, action="delete")
             deleted.append(key)
         except StorageError as exc:
             errors.append({"key": key, "error": str(exc)})
@@ -616,32 +730,30 @@ def bulk_download_objects(bucket_name: str):
     unique_keys = list(dict.fromkeys(cleaned))
     storage = _storage()

-    # Check permissions for all keys first (or at least bucket read)
-    # We'll check bucket read once, then object read for each if needed?
-    # _authorize_ui checks bucket level if object_key is None, but we need to check each object if fine-grained policies exist.
-    # For simplicity/performance, we check bucket list/read.
+    # Verify permission to read bucket contents
     try:
         _authorize_ui(principal, bucket_name, "read")
     except IamError as exc:
         return jsonify({"error": str(exc)}), 403

-    # Create ZIP
+    # Create ZIP archive of selected objects
     buffer = io.BytesIO()
     with zipfile.ZipFile(buffer, "w", zipfile.ZIP_DEFLATED) as zf:
         for key in unique_keys:
             try:
-                # Verify individual object permission if needed?
-                # _authorize_ui(principal, bucket_name, "read", object_key=key)
-                # This might be slow for many objects. Assuming bucket read is enough for now or we accept the overhead.
-                # Let's skip individual check for bulk speed, assuming bucket read implies object read unless denied.
-                # But strictly we should check. Let's check.
                 _authorize_ui(principal, bucket_name, "read", object_key=key)
-                path = storage.get_object_path(bucket_name, key)
-                # Use the key as the filename in the zip
-                zf.write(path, arcname=key)
+
+                metadata = storage.get_object_metadata(bucket_name, key)
+                is_encrypted = "x-amz-server-side-encryption" in metadata
+
+                if is_encrypted and hasattr(storage, 'get_object_data'):
+                    data, _ = storage.get_object_data(bucket_name, key)
+                    zf.writestr(key, data)
+                else:
+                    path = storage.get_object_path(bucket_name, key)
+                    zf.write(path, arcname=key)
             except (StorageError, IamError):
-                # Skip files we can't read or don't exist
+                # Skip objects that can't be accessed
                 continue

     buffer.seek(0)
@@ -681,13 +793,34 @@ def purge_object_versions(bucket_name: str, object_key: str):
 @ui_bp.get("/buckets/<bucket_name>/objects/<path:object_key>/preview")
 def object_preview(bucket_name: str, object_key: str) -> Response:
     principal = _current_principal()
+    storage = _storage()
     try:
         _authorize_ui(principal, bucket_name, "read", object_key=object_key)
-        path = _storage().get_object_path(bucket_name, object_key)
+        path = storage.get_object_path(bucket_name, object_key)
+        metadata = storage.get_object_metadata(bucket_name, object_key)
     except (StorageError, IamError) as exc:
         status = 403 if isinstance(exc, IamError) else 404
         return Response(str(exc), status=status)

     download = request.args.get("download") == "1"

+    # Check if object is encrypted and needs decryption
+    is_encrypted = "x-amz-server-side-encryption" in metadata
+    if is_encrypted and hasattr(storage, 'get_object_data'):
+        try:
+            data, _ = storage.get_object_data(bucket_name, object_key)
+            import io
+            import mimetypes
+            mimetype = mimetypes.guess_type(object_key)[0] or "application/octet-stream"
+            return send_file(
+                io.BytesIO(data),
+                mimetype=mimetype,
+                as_attachment=download,
+                download_name=path.name
+            )
+        except StorageError as exc:
+            return Response(f"Decryption failed: {exc}", status=500)
+
     return send_file(path, as_attachment=download, download_name=path.name)


@@ -701,22 +834,31 @@ def object_presign(bucket_name: str, object_key: str):
         _authorize_ui(principal, bucket_name, action, object_key=object_key)
     except IamError as exc:
         return jsonify({"error": str(exc)}), 403
-    api_base = current_app.config["API_BASE_URL"].rstrip("/")
-    url = f"{api_base}/presign/{bucket_name}/{object_key}"
+    api_base = current_app.config.get("API_BASE_URL") or "http://127.0.0.1:5000"
+    api_base = api_base.rstrip("/")
+    encoded_key = quote(object_key, safe="/")
+    url = f"{api_base}/presign/{bucket_name}/{encoded_key}"
+
+    # Use API base URL for forwarded headers so presigned URLs point to API, not UI
+    parsed_api = urlparse(api_base)
+    headers = _api_headers()
+    headers["X-Forwarded-Host"] = parsed_api.netloc or "127.0.0.1:5000"
+    headers["X-Forwarded-Proto"] = parsed_api.scheme or "http"
+    headers["X-Forwarded-For"] = request.remote_addr or "127.0.0.1"
+
     try:
-        response = requests.post(url, headers=_api_headers(), json=payload, timeout=5)
+        response = requests.post(url, headers=headers, json=payload, timeout=5)
     except requests.RequestException as exc:
         return jsonify({"error": f"API unavailable: {exc}"}), 502
     try:
         body = response.json()
     except ValueError:
-        # Handle XML error responses from S3 backend
         text = response.text or ""
         if text.strip().startswith("<"):
             import xml.etree.ElementTree as ET
             try:
                 root = ET.fromstring(text)
-                # Try to find Message or Code
                 message = root.findtext(".//Message") or root.findtext(".//Code") or "Unknown S3 error"
                 body = {"error": message}
             except ET.ParseError:
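The forwarded headers matter because the API signs the host into the presigned URL, so they must name the API endpoint rather than the UI. A condensed sketch of the request the UI now makes (the JSON payload shape is an assumption, and authentication headers are omitted):

```python
from urllib.parse import quote, urlparse
import requests

api_base = "http://127.0.0.1:5000"           # API host; invented for the sketch
object_key = "reports/2024 summary.pdf"
url = f"{api_base}/presign/demo-bucket/{quote(object_key, safe='/')}"

parsed = urlparse(api_base)
headers = {
    "X-Forwarded-Host": parsed.netloc,   # presigned URL must name the API host
    "X-Forwarded-Proto": parsed.scheme,  # ...and scheme, or the signature breaks
}
# Payload shape assumed; the real payload is built earlier in object_presign.
resp = requests.post(url, headers=headers, json={"method": "GET", "expires_in": 300}, timeout=5)
print(resp.json())
```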
@@ -838,6 +980,124 @@ def update_bucket_versioning(bucket_name: str):
     return redirect(url_for("ui.bucket_detail", bucket_name=bucket_name, tab="properties"))


+@ui_bp.post("/buckets/<bucket_name>/quota")
+def update_bucket_quota(bucket_name: str):
+    """Update bucket quota configuration (admin only)."""
+    principal = _current_principal()
+
+    # Quota management is admin-only
+    is_admin = False
+    try:
+        _iam().authorize(principal, None, "iam:list_users")
+        is_admin = True
+    except IamError:
+        pass
+
+    if not is_admin:
+        flash("Only administrators can manage bucket quotas", "danger")
+        return redirect(url_for("ui.bucket_detail", bucket_name=bucket_name, tab="properties"))
+
+    action = request.form.get("action", "set")
+
+    if action == "remove":
+        try:
+            _storage().set_bucket_quota(bucket_name, max_bytes=None, max_objects=None)
+            flash("Bucket quota removed", "info")
+        except StorageError as exc:
+            flash(_friendly_error_message(exc), "danger")
+        return redirect(url_for("ui.bucket_detail", bucket_name=bucket_name, tab="properties"))
+
+    # Parse quota values
+    max_mb_str = request.form.get("max_mb", "").strip()
+    max_objects_str = request.form.get("max_objects", "").strip()
+
+    max_bytes = None
+    max_objects = None
+
+    if max_mb_str:
+        try:
+            max_mb = int(max_mb_str)
+            if max_mb < 1:
+                raise ValueError("Size must be at least 1 MB")
+            max_bytes = max_mb * 1024 * 1024  # Convert MB to bytes
+        except ValueError as exc:
+            flash(f"Invalid size value: {exc}", "danger")
+            return redirect(url_for("ui.bucket_detail", bucket_name=bucket_name, tab="properties"))
+
+    if max_objects_str:
+        try:
+            max_objects = int(max_objects_str)
+            if max_objects < 0:
+                raise ValueError("Object count must be non-negative")
+        except ValueError as exc:
+            flash(f"Invalid object count: {exc}", "danger")
+            return redirect(url_for("ui.bucket_detail", bucket_name=bucket_name, tab="properties"))
+
+    try:
+        _storage().set_bucket_quota(bucket_name, max_bytes=max_bytes, max_objects=max_objects)
+        if max_bytes is None and max_objects is None:
+            flash("Bucket quota removed", "info")
+        else:
+            flash("Bucket quota updated", "success")
+    except StorageError as exc:
+        flash(_friendly_error_message(exc), "danger")
+
+    return redirect(url_for("ui.bucket_detail", bucket_name=bucket_name, tab="properties"))
+
+
+@ui_bp.post("/buckets/<bucket_name>/encryption")
+def update_bucket_encryption(bucket_name: str):
+    """Update bucket default encryption configuration."""
+    principal = _current_principal()
+    try:
+        _authorize_ui(principal, bucket_name, "write")
+    except IamError as exc:
+        flash(_friendly_error_message(exc), "danger")
+        return redirect(url_for("ui.bucket_detail", bucket_name=bucket_name, tab="properties"))
+
+    action = request.form.get("action", "enable")
+
+    if action == "disable":
+        try:
+            _storage().set_bucket_encryption(bucket_name, None)
+            flash("Default encryption disabled", "info")
+        except StorageError as exc:
+            flash(_friendly_error_message(exc), "danger")
+        return redirect(url_for("ui.bucket_detail", bucket_name=bucket_name, tab="properties"))
+
+    algorithm = request.form.get("algorithm", "AES256")
+    kms_key_id = request.form.get("kms_key_id", "").strip() or None
+
+    if algorithm not in ("AES256", "aws:kms"):
+        flash("Invalid encryption algorithm", "danger")
+        return redirect(url_for("ui.bucket_detail", bucket_name=bucket_name, tab="properties"))
+
+    # Build encryption configuration in AWS S3 format
+    encryption_config: dict[str, Any] = {
+        "Rules": [
+            {
+                "ApplyServerSideEncryptionByDefault": {
+                    "SSEAlgorithm": algorithm,
+                }
+            }
+        ]
+    }
+
+    if algorithm == "aws:kms" and kms_key_id:
+        encryption_config["Rules"][0]["ApplyServerSideEncryptionByDefault"]["KMSMasterKeyID"] = kms_key_id
+
+    try:
+        _storage().set_bucket_encryption(bucket_name, encryption_config)
+        if algorithm == "aws:kms":
+            flash("Default KMS encryption enabled", "success")
+        else:
+            flash("Default AES-256 encryption enabled", "success")
+    except StorageError as exc:
+        flash(_friendly_error_message(exc), "danger")
+
+    return redirect(url_for("ui.bucket_detail", bucket_name=bucket_name, tab="properties"))
+
+
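For reference, the payload this route persists for `algorithm="aws:kms"` with a key id looks like the following; the field names follow the AWS S3 encryption-configuration schema used in the diff, and the key id itself is invented:

```python
encryption_config = {
    "Rules": [
        {
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "key-7f3a2b",  # hypothetical key id
            }
        }
    ]
}
```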
@ui_bp.get("/iam")
|
@ui_bp.get("/iam")
|
||||||
def iam_dashboard():
|
def iam_dashboard():
|
||||||
principal = _current_principal()
|
principal = _current_principal()
|
||||||
@@ -926,6 +1186,11 @@ def rotate_iam_secret(access_key: str):
|
|||||||
return redirect(url_for("ui.iam_dashboard"))
|
return redirect(url_for("ui.iam_dashboard"))
|
||||||
try:
|
try:
|
||||||
new_secret = _iam().rotate_secret(access_key)
|
new_secret = _iam().rotate_secret(access_key)
|
||||||
|
if principal and principal.access_key == access_key:
|
||||||
|
creds = session.get("credentials", {})
|
||||||
|
creds["secret_key"] = new_secret
|
||||||
|
session["credentials"] = creds
|
||||||
|
session.modified = True
|
||||||
except IamError as exc:
|
except IamError as exc:
|
||||||
if request.accept_mimetypes.accept_json and not request.accept_mimetypes.accept_html:
|
if request.accept_mimetypes.accept_json and not request.accept_mimetypes.accept_html:
|
||||||
return jsonify({"error": str(exc)}), 400
|
return jsonify({"error": str(exc)}), 400
|
||||||
@@ -983,7 +1248,6 @@ def delete_iam_user(access_key: str):
|
|||||||
return redirect(url_for("ui.iam_dashboard"))
|
return redirect(url_for("ui.iam_dashboard"))
|
||||||
|
|
||||||
if access_key == principal.access_key:
|
if access_key == principal.access_key:
|
||||||
# Self-deletion
|
|
||||||
try:
|
try:
|
||||||
_iam().delete_user(access_key)
|
_iam().delete_user(access_key)
|
||||||
session.pop("credentials", None)
|
session.pop("credentials", None)
|
||||||
@@ -1012,7 +1276,6 @@ def update_iam_policies(access_key: str):
|
|||||||
|
|
||||||
policies_raw = request.form.get("policies", "").strip()
|
policies_raw = request.form.get("policies", "").strip()
|
||||||
if not policies_raw:
|
if not policies_raw:
|
||||||
# Empty policies list is valid (clears permissions)
|
|
||||||
policies = []
|
policies = []
|
||||||
else:
|
else:
|
||||||
try:
|
try:
|
||||||
@@ -1064,6 +1327,90 @@ def create_connection():
|
|||||||
return redirect(url_for("ui.connections_dashboard"))
|
return redirect(url_for("ui.connections_dashboard"))
|
||||||
|
|
||||||
|
|
||||||
|
@ui_bp.post("/connections/test")
|
||||||
|
def test_connection():
|
||||||
|
from botocore.config import Config as BotoConfig
|
||||||
|
from botocore.exceptions import ConnectTimeoutError, EndpointConnectionError, ReadTimeoutError
|
||||||
|
|
||||||
|
principal = _current_principal()
|
||||||
|
try:
|
||||||
|
_iam().authorize(principal, None, "iam:list_users")
|
||||||
|
except IamError:
|
||||||
|
return jsonify({"status": "error", "message": "Access denied"}), 403
|
||||||
|
|
||||||
|
data = request.get_json(silent=True) or request.form
|
||||||
|
endpoint = data.get("endpoint_url", "").strip()
|
||||||
|
access_key = data.get("access_key", "").strip()
|
||||||
|
secret_key = data.get("secret_key", "").strip()
|
||||||
|
region = data.get("region", "us-east-1").strip()
|
||||||
|
|
||||||
|
if not all([endpoint, access_key, secret_key]):
|
||||||
|
return jsonify({"status": "error", "message": "Missing credentials"}), 400
|
||||||
|
|
||||||
|
try:
|
||||||
|
config = BotoConfig(
|
||||||
|
connect_timeout=5,
|
||||||
|
read_timeout=10,
|
||||||
|
retries={'max_attempts': 1}
|
||||||
|
)
|
||||||
|
s3 = boto3.client(
|
||||||
|
"s3",
|
||||||
|
endpoint_url=endpoint,
|
||||||
|
aws_access_key_id=access_key,
|
||||||
|
aws_secret_access_key=secret_key,
|
||||||
|
region_name=region,
|
||||||
|
config=config,
|
||||||
|
)
|
||||||
|
|
||||||
|
s3.list_buckets()
|
||||||
|
return jsonify({"status": "ok", "message": "Connection successful"})
|
||||||
|
except (ConnectTimeoutError, ReadTimeoutError):
|
||||||
|
return jsonify({"status": "error", "message": f"Connection timed out - endpoint may be down or unreachable: {endpoint}"}), 400
|
||||||
|
except EndpointConnectionError:
|
||||||
|
return jsonify({"status": "error", "message": f"Could not connect to endpoint: {endpoint}"}), 400
|
||||||
|
except ClientError as e:
|
||||||
|
error_code = e.response.get('Error', {}).get('Code', 'Unknown')
|
||||||
|
error_msg = e.response.get('Error', {}).get('Message', str(e))
|
||||||
|
return jsonify({"status": "error", "message": f"Connection failed ({error_code}): {error_msg}"}), 400
|
||||||
|
except Exception as e:
|
||||||
|
return jsonify({"status": "error", "message": f"Connection failed: {str(e)}"}), 400
|
||||||
|
|
||||||
|
|
||||||
|
@ui_bp.post("/connections/<connection_id>/update")
|
||||||
|
def update_connection(connection_id: str):
|
||||||
|
principal = _current_principal()
|
||||||
|
try:
|
||||||
|
_iam().authorize(principal, None, "iam:list_users")
|
||||||
|
except IamError:
|
||||||
|
flash("Access denied", "danger")
|
||||||
|
return redirect(url_for("ui.buckets_overview"))
|
||||||
|
|
||||||
|
conn = _connections().get(connection_id)
|
||||||
|
if not conn:
|
||||||
|
flash("Connection not found", "danger")
|
||||||
|
return redirect(url_for("ui.connections_dashboard"))
|
||||||
|
|
||||||
|
name = request.form.get("name", "").strip()
|
||||||
|
endpoint = request.form.get("endpoint_url", "").strip()
|
||||||
|
access_key = request.form.get("access_key", "").strip()
|
||||||
|
secret_key = request.form.get("secret_key", "").strip()
|
||||||
|
region = request.form.get("region", "us-east-1").strip()
|
||||||
|
|
||||||
|
if not all([name, endpoint, access_key, secret_key]):
|
||||||
|
flash("All fields are required", "danger")
|
||||||
|
return redirect(url_for("ui.connections_dashboard"))
|
||||||
|
|
||||||
|
conn.name = name
|
||||||
|
conn.endpoint_url = endpoint
|
||||||
|
conn.access_key = access_key
|
||||||
|
conn.secret_key = secret_key
|
||||||
|
conn.region = region
|
||||||
|
|
||||||
|
_connections().save()
|
||||||
|
flash(f"Connection '{name}' updated", "success")
|
||||||
|
return redirect(url_for("ui.connections_dashboard"))
|
||||||
|
|
||||||
|
|
||||||
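Exercising the new test route from a script; the endpoint and credential values are invented, and the admin session cookie the UI would normally send is omitted:

```python
import requests

resp = requests.post(
    "http://127.0.0.1:5100/connections/test",
    json={
        "endpoint_url": "http://127.0.0.1:5000",
        "access_key": "AKIAEXAMPLE",
        "secret_key": "secret-example",
        "region": "us-east-1",
    },
    timeout=15,
)
# Expected on success: {"status": "ok", "message": "Connection successful"}
print(resp.status_code, resp.json())
```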
@ui_bp.post("/connections/<connection_id>/delete")
|
@ui_bp.post("/connections/<connection_id>/delete")
|
||||||
def delete_connection(connection_id: str):
|
def delete_connection(connection_id: str):
|
||||||
principal = _current_principal()
|
principal = _current_principal()
|
||||||
@@ -1082,19 +1429,53 @@ def delete_connection(connection_id: str):
|
|||||||
def update_bucket_replication(bucket_name: str):
|
def update_bucket_replication(bucket_name: str):
|
||||||
principal = _current_principal()
|
principal = _current_principal()
|
||||||
try:
|
try:
|
||||||
_authorize_ui(principal, bucket_name, "write")
|
_authorize_ui(principal, bucket_name, "replication")
|
||||||
except IamError as exc:
|
except IamError as exc:
|
||||||
flash(str(exc), "danger")
|
flash(str(exc), "danger")
|
||||||
return redirect(url_for("ui.bucket_detail", bucket_name=bucket_name, tab="replication"))
|
return redirect(url_for("ui.bucket_detail", bucket_name=bucket_name, tab="replication"))
|
||||||
|
|
||||||
|
is_admin = False
|
||||||
|
try:
|
||||||
|
_iam().authorize(principal, None, "iam:list_users")
|
||||||
|
is_admin = True
|
||||||
|
except IamError:
|
||||||
|
is_admin = False
|
||||||
|
|
||||||
action = request.form.get("action")
|
action = request.form.get("action")
|
||||||
|
|
||||||
if action == "delete":
|
if action == "delete":
|
||||||
|
if not is_admin:
|
||||||
|
flash("Only administrators can remove replication configuration", "danger")
|
||||||
|
return redirect(url_for("ui.bucket_detail", bucket_name=bucket_name, tab="replication"))
|
||||||
_replication().delete_rule(bucket_name)
|
_replication().delete_rule(bucket_name)
|
||||||
flash("Replication disabled", "info")
|
flash("Replication configuration removed", "info")
|
||||||
else:
|
elif action == "pause":
|
||||||
|
rule = _replication().get_rule(bucket_name)
|
||||||
|
if rule:
|
||||||
|
rule.enabled = False
|
||||||
|
_replication().set_rule(rule)
|
||||||
|
flash("Replication paused", "info")
|
||||||
|
else:
|
||||||
|
flash("No replication configuration to pause", "warning")
|
||||||
|
elif action == "resume":
|
||||||
|
rule = _replication().get_rule(bucket_name)
|
||||||
|
if rule:
|
||||||
|
rule.enabled = True
|
||||||
|
_replication().set_rule(rule)
|
||||||
|
flash("Replication resumed", "success")
|
||||||
|
else:
|
||||||
|
flash("No replication configuration to resume", "warning")
|
||||||
|
elif action == "create":
|
||||||
|
if not is_admin:
|
||||||
|
flash("Only administrators can configure replication settings", "danger")
|
||||||
|
return redirect(url_for("ui.bucket_detail", bucket_name=bucket_name, tab="replication"))
|
||||||
|
|
||||||
|
from .replication import REPLICATION_MODE_NEW_ONLY, REPLICATION_MODE_ALL
|
||||||
|
import time
|
||||||
|
|
||||||
target_conn_id = request.form.get("target_connection_id")
|
target_conn_id = request.form.get("target_connection_id")
|
||||||
target_bucket = request.form.get("target_bucket", "").strip()
|
target_bucket = request.form.get("target_bucket", "").strip()
|
||||||
|
replication_mode = request.form.get("replication_mode", REPLICATION_MODE_NEW_ONLY)
|
||||||
|
|
||||||
if not target_conn_id or not target_bucket:
|
if not target_conn_id or not target_bucket:
|
||||||
flash("Target connection and bucket are required", "danger")
|
flash("Target connection and bucket are required", "danger")
|
||||||
@@ -1103,14 +1484,94 @@ def update_bucket_replication(bucket_name: str):
|
|||||||
bucket_name=bucket_name,
|
bucket_name=bucket_name,
|
||||||
target_connection_id=target_conn_id,
|
target_connection_id=target_conn_id,
|
||||||
target_bucket=target_bucket,
|
target_bucket=target_bucket,
|
||||||
enabled=True
|
enabled=True,
|
||||||
|
mode=replication_mode,
|
||||||
|
created_at=time.time(),
|
||||||
)
|
)
|
||||||
_replication().set_rule(rule)
|
_replication().set_rule(rule)
|
||||||
flash("Replication configured", "success")
|
|
||||||
|
if replication_mode == REPLICATION_MODE_ALL:
|
||||||
|
_replication().replicate_existing_objects(bucket_name)
|
||||||
|
flash("Replication configured. Existing objects are being replicated in the background.", "success")
|
||||||
|
else:
|
||||||
|
flash("Replication configured. Only new uploads will be replicated.", "success")
|
||||||
|
else:
|
||||||
|
flash("Invalid action", "danger")
|
||||||
|
|
||||||
return redirect(url_for("ui.bucket_detail", bucket_name=bucket_name, tab="replication"))
|
return redirect(url_for("ui.bucket_detail", bucket_name=bucket_name, tab="replication"))
|
||||||
|
|
||||||
|
|
||||||
|
@ui_bp.get("/buckets/<bucket_name>/replication/status")
|
||||||
|
def get_replication_status(bucket_name: str):
|
||||||
|
"""Async endpoint to fetch replication sync status without blocking page load."""
|
||||||
|
principal = _current_principal()
|
||||||
|
try:
|
||||||
|
_authorize_ui(principal, bucket_name, "replication")
|
||||||
|
except IamError:
|
||||||
|
return jsonify({"error": "Access denied"}), 403
|
||||||
|
|
||||||
|
rule = _replication().get_rule(bucket_name)
|
||||||
|
if not rule:
|
||||||
|
return jsonify({"error": "No replication rule"}), 404
|
||||||
|
|
||||||
|
connection = _connections().get(rule.target_connection_id)
|
||||||
|
endpoint_healthy = False
|
||||||
|
endpoint_error = None
|
||||||
|
if connection:
|
||||||
|
endpoint_healthy = _replication().check_endpoint_health(connection)
|
||||||
|
if not endpoint_healthy:
|
||||||
|
endpoint_error = f"Cannot reach endpoint: {connection.endpoint_url}"
|
||||||
|
else:
|
||||||
|
endpoint_error = "Target connection not found"
|
||||||
|
|
||||||
|
stats = None
|
||||||
|
if endpoint_healthy:
|
||||||
|
stats = _replication().get_sync_status(bucket_name)
|
||||||
|
|
||||||
|
if not stats:
|
||||||
|
return jsonify({
|
||||||
|
"objects_synced": 0,
|
||||||
|
"objects_pending": 0,
|
||||||
|
"objects_orphaned": 0,
|
||||||
|
"bytes_synced": 0,
|
||||||
|
"last_sync_at": rule.stats.last_sync_at if rule.stats else None,
|
||||||
|
"last_sync_key": rule.stats.last_sync_key if rule.stats else None,
|
||||||
|
"endpoint_healthy": endpoint_healthy,
|
||||||
|
"endpoint_error": endpoint_error,
|
||||||
|
})
|
||||||
|
|
||||||
|
return jsonify({
|
||||||
|
"objects_synced": stats.objects_synced,
|
||||||
|
"objects_pending": stats.objects_pending,
|
||||||
|
"objects_orphaned": stats.objects_orphaned,
|
||||||
|
"bytes_synced": stats.bytes_synced,
|
||||||
|
"last_sync_at": stats.last_sync_at,
|
||||||
|
"last_sync_key": stats.last_sync_key,
|
||||||
|
"endpoint_healthy": endpoint_healthy,
|
||||||
|
"endpoint_error": endpoint_error,
|
||||||
|
})
|
||||||
|
|
||||||
|
|
||||||
|
@ui_bp.get("/connections/<connection_id>/health")
|
||||||
|
def check_connection_health(connection_id: str):
|
||||||
|
"""Check if a connection endpoint is reachable."""
|
||||||
|
principal = _current_principal()
|
||||||
|
try:
|
||||||
|
_iam().authorize(principal, None, "iam:list_users")
|
||||||
|
except IamError:
|
||||||
|
return jsonify({"error": "Access denied"}), 403
|
||||||
|
|
||||||
|
conn = _connections().get(connection_id)
|
||||||
|
if not conn:
|
||||||
|
return jsonify({"healthy": False, "error": "Connection not found"}), 404
|
||||||
|
|
||||||
|
healthy = _replication().check_endpoint_health(conn)
|
||||||
|
return jsonify({
|
||||||
|
"healthy": healthy,
|
||||||
|
"error": None if healthy else f"Cannot reach endpoint: {conn.endpoint_url}"
|
||||||
|
})
|
||||||
|
|
||||||
|
|
||||||
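A sketch of polling the async status route the way the bucket page's JavaScript would; the field names come from the diff, the host and bucket are invented, and auth is omitted:

```python
import time
import requests

url = "http://127.0.0.1:5100/buckets/demo-bucket/replication/status"
for _ in range(10):
    status = requests.get(url, timeout=5).json()
    if not status.get("endpoint_healthy"):
        print("endpoint unreachable:", status.get("endpoint_error"))
        break
    print(f"synced={status['objects_synced']} pending={status['objects_pending']}")
    if status["objects_pending"] == 0:
        break
    time.sleep(2)
```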
@ui_bp.get("/connections")
|
@ui_bp.get("/connections")
|
||||||
def connections_dashboard():
|
def connections_dashboard():
|
||||||
principal = _current_principal()
|
principal = _current_principal()
|
||||||
@@ -1124,6 +1585,72 @@ def connections_dashboard():
|
|||||||
return render_template("connections.html", connections=connections, principal=principal)
|
return render_template("connections.html", connections=connections, principal=principal)
|
||||||
|
|
||||||
|
|
||||||
|
@ui_bp.get("/metrics")
|
||||||
|
def metrics_dashboard():
|
||||||
|
principal = _current_principal()
|
||||||
|
|
||||||
|
try:
|
||||||
|
_iam().authorize(principal, None, "iam:list_users")
|
||||||
|
except IamError:
|
||||||
|
flash("Access denied: Metrics require admin permissions", "danger")
|
||||||
|
return redirect(url_for("ui.buckets_overview"))
|
||||||
|
|
||||||
|
from app.version import APP_VERSION
|
||||||
|
import time
|
||||||
|
|
||||||
|
cpu_percent = psutil.cpu_percent(interval=0.1)
|
||||||
|
memory = psutil.virtual_memory()
|
||||||
|
|
||||||
|
storage_root = current_app.config["STORAGE_ROOT"]
|
||||||
|
disk = psutil.disk_usage(storage_root)
|
||||||
|
|
||||||
|
storage = _storage()
|
||||||
|
buckets = storage.list_buckets()
|
||||||
|
total_buckets = len(buckets)
|
||||||
|
|
||||||
|
total_objects = 0
|
||||||
|
total_bytes_used = 0
|
||||||
|
total_versions = 0
|
||||||
|
|
||||||
|
cache_ttl = current_app.config.get("BUCKET_STATS_CACHE_TTL", 60)
|
||||||
|
for bucket in buckets:
|
||||||
|
stats = storage.bucket_stats(bucket.name, cache_ttl=cache_ttl)
|
||||||
|
total_objects += stats.get("total_objects", stats.get("objects", 0))
|
||||||
|
total_bytes_used += stats.get("total_bytes", stats.get("bytes", 0))
|
||||||
|
total_versions += stats.get("version_count", 0)
|
||||||
|
|
||||||
|
boot_time = psutil.boot_time()
|
||||||
|
uptime_seconds = time.time() - boot_time
|
||||||
|
uptime_days = int(uptime_seconds / 86400)
|
||||||
|
|
||||||
|
return render_template(
|
||||||
|
"metrics.html",
|
||||||
|
principal=principal,
|
||||||
|
cpu_percent=cpu_percent,
|
||||||
|
memory={
|
||||||
|
"total": _format_bytes(memory.total),
|
||||||
|
"available": _format_bytes(memory.available),
|
||||||
|
"used": _format_bytes(memory.used),
|
||||||
|
"percent": memory.percent,
|
||||||
|
},
|
||||||
|
disk={
|
||||||
|
"total": _format_bytes(disk.total),
|
||||||
|
"free": _format_bytes(disk.free),
|
||||||
|
"used": _format_bytes(disk.used),
|
||||||
|
"percent": disk.percent,
|
||||||
|
},
|
||||||
|
app={
|
||||||
|
"buckets": total_buckets,
|
||||||
|
"objects": total_objects,
|
||||||
|
"versions": total_versions,
|
||||||
|
"storage_used": _format_bytes(total_bytes_used),
|
||||||
|
"storage_raw": total_bytes_used,
|
||||||
|
"version": APP_VERSION,
|
||||||
|
"uptime_days": uptime_days,
|
||||||
|
}
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
@ui_bp.app_errorhandler(404)
|
@ui_bp.app_errorhandler(404)
|
||||||
def ui_not_found(error): # type: ignore[override]
|
def ui_not_found(error): # type: ignore[override]
|
||||||
prefix = ui_bp.url_prefix or ""
|
prefix = ui_bp.url_prefix or ""
|
||||||
|

app/version.py
@@ -1,7 +1,7 @@
"""Central location for the application version string."""
from __future__ import annotations

APP_VERSION = "0.1.8"


def get_version() -> str:

5 docker-entrypoint.sh Normal file
@@ -0,0 +1,5 @@
#!/bin/sh
set -e

# Run both services using the python runner in production mode
exec python run.py --prod

812 docs.md
@@ -33,6 +33,63 @@ python run.py --mode api # API only (port 5000)
python run.py --mode ui # UI only (port 5100)
```

### Configuration validation

Validate your configuration before deploying:

```bash
# Show configuration summary
python run.py --show-config
./myfsio --show-config

# Validate and check for issues (exits with code 1 if critical issues found)
python run.py --check-config
./myfsio --check-config
```

### Linux Installation (Recommended for Production)

For production deployments on Linux, use the provided installation script:

```bash
# Download the binary and install script
# Then run the installer with sudo:
sudo ./scripts/install.sh --binary ./myfsio

# Or with custom paths:
sudo ./scripts/install.sh \
  --binary ./myfsio \
  --install-dir /opt/myfsio \
  --data-dir /mnt/storage/myfsio \
  --log-dir /var/log/myfsio \
  --api-url https://s3.example.com \
  --user myfsio

# Non-interactive mode (for automation):
sudo ./scripts/install.sh --binary ./myfsio -y
```

The installer will:

1. Create a dedicated system user
2. Set up directories with proper permissions
3. Generate a secure `SECRET_KEY`
4. Create an environment file at `/opt/myfsio/myfsio.env`
5. Install and configure a systemd service

After installation:

```bash
sudo systemctl start myfsio   # Start the service
sudo systemctl enable myfsio  # Enable on boot
sudo systemctl status myfsio  # Check status
sudo journalctl -u myfsio -f  # View logs
```

To uninstall:

```bash
sudo ./scripts/uninstall.sh             # Full removal
sudo ./scripts/uninstall.sh --keep-data # Keep data directory
```

### Docker quickstart

The repo now ships a `Dockerfile` so you can run both services in one container:

@@ -69,19 +126,433 @@ The repo now tracks a human-friendly release string inside `app/version.py` (see

## 3. Configuration Reference

All configuration is done via environment variables. The table below lists every supported variable.

### Core Settings

| Variable | Default | Notes |
| --- | --- | --- |
| `STORAGE_ROOT` | `<repo>/data` | Filesystem home for all buckets/objects. |
| `MAX_UPLOAD_SIZE` | `1073741824` (1 GiB) | Bytes. Caps incoming uploads in both API + UI. |
| `UI_PAGE_SIZE` | `100` | `MaxKeys` hint shown in listings. |
| `SECRET_KEY` | Auto-generated | Flask session key. Auto-generates and persists if not set. **Set explicitly in production.** |
| `API_BASE_URL` | `None` | Public URL for presigned URLs. Required behind proxies. |
| `AWS_REGION` | `us-east-1` | Region embedded in SigV4 credential scope. |
| `AWS_SERVICE` | `s3` | Service string for SigV4. |

### IAM & Security

| Variable | Default | Notes |
| --- | --- | --- |
| `IAM_CONFIG` | `data/.myfsio.sys/config/iam.json` | Stores users, secrets, and inline policies. |
| `BUCKET_POLICY_PATH` | `data/.myfsio.sys/config/bucket_policies.json` | Bucket policy store (auto hot-reload). |
| `AUTH_MAX_ATTEMPTS` | `5` | Failed login attempts before lockout. |
| `AUTH_LOCKOUT_MINUTES` | `15` | Lockout duration after max failed attempts. |
| `SESSION_LIFETIME_DAYS` | `30` | How long UI sessions remain valid. |
| `SECRET_TTL_SECONDS` | `300` | TTL for ephemeral secrets (presigned URLs). |
| `UI_ENFORCE_BUCKET_POLICIES` | `false` | Whether the UI should enforce bucket policies. |

### CORS (Cross-Origin Resource Sharing)

| Variable | Default | Notes |
| --- | --- | --- |
| `CORS_ORIGINS` | `*` | Comma-separated allowed origins. Use specific domains in production. |
| `CORS_METHODS` | `GET,PUT,POST,DELETE,OPTIONS,HEAD` | Allowed HTTP methods. |
| `CORS_ALLOW_HEADERS` | `*` | Allowed request headers. |
| `CORS_EXPOSE_HEADERS` | `*` | Response headers visible to browsers (e.g., `ETag`). |
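
For a sense of how these variables behave at runtime, here is a minimal sketch of wiring them into `Flask-Cors` (which `requirements.txt` pins). This illustrates the mapping only; it is not necessarily MyFSIO's actual initialization code:

```python
# Hypothetical sketch: mapping the CORS_* env vars onto Flask-Cors.
# Only the variable names and defaults come from the table above.
import os

from flask import Flask
from flask_cors import CORS

app = Flask(__name__)
CORS(
    app,
    origins=os.getenv("CORS_ORIGINS", "*").split(","),
    methods=os.getenv("CORS_METHODS", "GET,PUT,POST,DELETE,OPTIONS,HEAD").split(","),
    allow_headers=os.getenv("CORS_ALLOW_HEADERS", "*"),
    expose_headers=os.getenv("CORS_EXPOSE_HEADERS", "*"),
)
```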

### Rate Limiting

| Variable | Default | Notes |
| --- | --- | --- |
| `RATE_LIMIT_DEFAULT` | `200 per minute` | Default rate limit for API endpoints. |
| `RATE_LIMIT_STORAGE_URI` | `memory://` | Storage backend for rate limits. Use `redis://host:port` for distributed setups. |
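
Similarly, a minimal sketch of how these two variables could feed `Flask-Limiter` (also pinned in `requirements.txt`); the project's real setup may differ:

```python
# Hypothetical sketch: RATE_LIMIT_* env vars feeding Flask-Limiter.
import os

from flask import Flask
from flask_limiter import Limiter
from flask_limiter.util import get_remote_address

app = Flask(__name__)
limiter = Limiter(
    get_remote_address,  # rate-limit per client IP
    app=app,
    default_limits=[os.getenv("RATE_LIMIT_DEFAULT", "200 per minute")],
    storage_uri=os.getenv("RATE_LIMIT_STORAGE_URI", "memory://"),
)
```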

### Logging

| Variable | Default | Notes |
| --- | --- | --- |
| `LOG_LEVEL` | `INFO` | Log verbosity: `DEBUG`, `INFO`, `WARNING`, `ERROR`. |
| `LOG_TO_FILE` | `true` | Enable file logging. |
| `LOG_DIR` | `<repo>/logs` | Directory for log files. |
| `LOG_FILE` | `app.log` | Log filename. |
| `LOG_MAX_BYTES` | `5242880` (5 MB) | Max log file size before rotation. |
| `LOG_BACKUP_COUNT` | `3` | Number of rotated log files to keep. |

### Encryption

| Variable | Default | Notes |
| --- | --- | --- |
| `ENCRYPTION_ENABLED` | `false` | Enable server-side encryption support. |
| `ENCRYPTION_MASTER_KEY_PATH` | `data/.myfsio.sys/keys/master.key` | Path to the master encryption key file. |
| `DEFAULT_ENCRYPTION_ALGORITHM` | `AES256` | Default algorithm for new encrypted objects. |
| `KMS_ENABLED` | `false` | Enable KMS key management for encryption. |
| `KMS_KEYS_PATH` | `data/.myfsio.sys/keys/kms_keys.json` | Path to store KMS key metadata. |

### Performance Tuning

| Variable | Default | Notes |
| --- | --- | --- |
| `STREAM_CHUNK_SIZE` | `65536` (64 KB) | Chunk size for streaming large files. |
| `MULTIPART_MIN_PART_SIZE` | `5242880` (5 MB) | Minimum part size for multipart uploads. |
| `BUCKET_STATS_CACHE_TTL` | `60` | Seconds to cache bucket statistics. |
| `BULK_DELETE_MAX_KEYS` | `500` | Maximum keys per bulk delete request. |

### Server Settings

| Variable | Default | Notes |
| --- | --- | --- |
| `APP_HOST` | `0.0.0.0` | Network interface to bind to. |
| `APP_PORT` | `5000` | API server port (UI uses 5100). |
| `FLASK_DEBUG` | `0` | Enable Flask debug mode. **Never enable in production.** |

### Production Checklist

Before deploying to production, ensure you:

1. **Set `SECRET_KEY`** - Use a strong, unique value (e.g., `openssl rand -base64 32`)
2. **Restrict CORS** - Set `CORS_ORIGINS` to your specific domains instead of `*`
3. **Configure `API_BASE_URL`** - Required for correct presigned URLs behind proxies
4. **Enable HTTPS** - Use a reverse proxy (nginx, Cloudflare) with TLS termination
5. **Review rate limits** - Adjust `RATE_LIMIT_DEFAULT` based on your needs
6. **Secure master keys** - Back up `ENCRYPTION_MASTER_KEY_PATH` if using encryption
7. **Use `--prod` flag** - Runs with Waitress instead of the Flask dev server

### Proxy Configuration

If running behind a reverse proxy (e.g., Nginx, Cloudflare, or a tunnel), ensure the proxy sets the standard forwarding headers:

- `X-Forwarded-Host`
- `X-Forwarded-Proto`

The application automatically trusts these headers to generate correct presigned URLs (e.g., `https://s3.example.com/...` instead of `http://127.0.0.1:5000/...`). Alternatively, you can explicitly set `API_BASE_URL` to your public endpoint.
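
If you are assembling a similar Flask service yourself, the standard way to trust these headers is Werkzeug's `ProxyFix` middleware; the sketch below shows the generic pattern (whether MyFSIO uses `ProxyFix` internally is not specified here, only that it honors the headers):

```python
# Generic Werkzeug pattern for trusting one proxy hop's X-Forwarded-* headers.
from flask import Flask
from werkzeug.middleware.proxy_fix import ProxyFix

app = Flask(__name__)
# x_proto=1 / x_host=1: trust X-Forwarded-Proto and X-Forwarded-Host from
# one hop, so externally visible URLs (e.g. presigned links) use the public
# scheme and hostname instead of http://127.0.0.1:5000.
app.wsgi_app = ProxyFix(app.wsgi_app, x_proto=1, x_host=1)
```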

## 4. Upgrading and Updates

### Version Checking

The application version is tracked in `app/version.py` and exposed via:

- **Health endpoint:** `GET /healthz` returns JSON with a `version` field
- **Metrics dashboard:** Navigate to `/ui/metrics` to see the running version in the System Status card

To check your current version:

```bash
# API health endpoint
curl http://localhost:5000/healthz

# Or inspect version.py directly
grep APP_VERSION app/version.py
```

### Pre-Update Backup Procedures

**Always back up before upgrading to prevent data loss:**

```bash
# 1. Stop the application
# Ctrl+C if running in terminal, or:
docker stop myfsio  # if using Docker

# 2. Backup configuration files (CRITICAL)
TS=$(date +%Y%m%d_%H%M%S)  # evaluate once so every step uses the same folder
mkdir -p "backups/$TS"
cp -r data/.myfsio.sys/config "backups/$TS/"

# 3. Backup all data (optional but recommended)
tar -czf "backups/data_$TS.tar.gz" data/

# 4. Backup logs for audit trail
cp -r logs "backups/$TS/"
```

**Windows PowerShell:**

```powershell
# Create timestamped backup
$timestamp = Get-Date -Format "yyyyMMdd_HHmmss"
New-Item -ItemType Directory -Path "backups\$timestamp" -Force

# Backup configs
Copy-Item -Recurse "data\.myfsio.sys\config" "backups\$timestamp\"

# Backup entire data directory
Compress-Archive -Path "data\" -DestinationPath "backups\data_$timestamp.zip"
```

**Critical files to back up:**

- `data/.myfsio.sys/config/iam.json` – User accounts and access keys
- `data/.myfsio.sys/config/bucket_policies.json` – Bucket access policies
- `data/.myfsio.sys/config/kms_keys.json` – Encryption keys (if using KMS)
- `data/.myfsio.sys/config/secret_store.json` – Application secrets

### Update Procedures

#### Source Installation Updates

```bash
# 1. Backup (see above)

# 2. Pull latest code
git fetch origin
git checkout main  # or your target branch/tag
git pull

# 3. Check for dependency changes
pip install -r requirements.txt

# 4. Review CHANGELOG/release notes for breaking changes
cat CHANGELOG.md  # if available

# 5. Run migration scripts (if any)
# python scripts/migrate_vX_to_vY.py  # example

# 6. Restart application
python run.py
```

#### Docker Updates

```bash
# 1. Backup (see above)

# 2. Pull/rebuild image
docker pull yourregistry/myfsio:latest
# OR rebuild from source:
docker build -t myfsio:latest .

# 3. Stop and remove old container
docker stop myfsio
docker rm myfsio

# 4. Start new container with same volumes
docker run -d \
  --name myfsio \
  -p 5000:5000 -p 5100:5100 \
  -v "$(pwd)/data:/app/data" \
  -v "$(pwd)/logs:/app/logs" \
  -e SECRET_KEY="your-secret" \
  myfsio:latest

# 5. Verify health
curl http://localhost:5000/healthz
```

### Version Compatibility Checks

Before upgrading across major versions, verify compatibility:

| From Version | To Version | Breaking Changes | Migration Required |
| --- | --- | --- | --- |
| 0.1.x | 0.2.x | None expected | No |
| 0.1.6 | 0.1.7 | None | No |
| < 0.1.0 | >= 0.1.0 | New IAM config format | Yes - run migration script |

**Automatic compatibility detection:**

The application will log warnings on startup if config files need migration:

```
WARNING: IAM config format is outdated (v1). Please run: python scripts/migrate_iam.py
```

**Manual compatibility check:**

```bash
# Compare version schemas
python -c "from app.version import APP_VERSION; print(f'Running: {APP_VERSION}')"
python scripts/check_compatibility.py data/.myfsio.sys/config/
```

### Migration Steps for Breaking Changes

When release notes indicate breaking changes, follow these steps:

#### Config Format Migrations

```bash
# 1. Backup first (critical!)
cp data/.myfsio.sys/config/iam.json data/.myfsio.sys/config/iam.json.backup

# 2. Run provided migration script
python scripts/migrate_iam_v1_to_v2.py

# 3. Validate migration
python scripts/validate_config.py

# 4. Test with read-only mode first (if available)
# python run.py --read-only

# 5. Restart normally
python run.py
```

#### Database/Storage Schema Changes

If the object metadata format changes:

```bash
# 1. Run storage migration script
python scripts/migrate_storage.py --dry-run  # preview changes

# 2. Apply migration
python scripts/migrate_storage.py --apply

# 3. Verify integrity
python scripts/verify_storage.py
```

#### IAM Policy Updates

If IAM action names change (e.g., `s3:Get` → `s3:GetObject`):

```bash
# Migration script will update all policies
python scripts/migrate_policies.py \
  --input data/.myfsio.sys/config/iam.json \
  --backup data/.myfsio.sys/config/iam.json.v1

# Review changes before committing
python scripts/diff_policies.py \
  data/.myfsio.sys/config/iam.json.v1 \
  data/.myfsio.sys/config/iam.json
```

### Rollback Procedures

If an update causes issues, roll back to the previous version:

#### Quick Rollback (Source)

```bash
# 1. Stop application
# Ctrl+C or kill process

# 2. Revert code
git checkout <previous-version-tag>
# OR
git reset --hard HEAD~1

# 3. Restore configs from backup
cp backups/20241213_103000/config/* data/.myfsio.sys/config/

# 4. Downgrade dependencies if needed
pip install -r requirements.txt

# 5. Restart
python run.py
```

#### Docker Rollback

```bash
# 1. Stop current container
docker stop myfsio
docker rm myfsio

# 2. Start previous version
docker run -d \
  --name myfsio \
  -p 5000:5000 -p 5100:5100 \
  -v "$(pwd)/data:/app/data" \
  -v "$(pwd)/logs:/app/logs" \
  -e SECRET_KEY="your-secret" \
  myfsio:0.1.3  # specify previous version tag

# 3. Verify
curl http://localhost:5000/healthz
```

#### Emergency Config Restore

If only the config is corrupted but the code is fine:

```bash
# Stop app
# Restore from latest backup
cp backups/20241213_103000/config/iam.json data/.myfsio.sys/config/
cp backups/20241213_103000/config/bucket_policies.json data/.myfsio.sys/config/

# Restart app
python run.py
```

### Blue-Green Deployment (Zero Downtime)

For production environments requiring zero downtime:

```bash
# 1. Run the new version on different ports (e.g., 5001/5101)
APP_PORT=5001 UI_PORT=5101 python run.py &

# 2. Health check the new instance
curl http://localhost:5001/healthz

# 3. Update the load balancer to route to the new ports

# 4. Monitor for issues

# 5. Gracefully stop the old instance
kill -SIGTERM <old-pid>
```

### Post-Update Verification

After any update, verify functionality:

```bash
# 1. Health check
curl http://localhost:5000/healthz

# 2. Log in to the UI
open http://localhost:5100/ui

# 3. Test IAM authentication (the API expects these two headers)
curl -H "X-Access-Key: <your-access-key>" -H "X-Secret-Key: <your-secret>" \
  http://localhost:5000/

# 4. Test presigned URL generation
# Via UI or API

# 5. Check logs for errors
tail -n 100 logs/myfsio.log
```

### Automated Update Scripts

Create a custom update script for your environment:

```bash
#!/bin/bash
# update.sh - Automated update with rollback capability

set -e  # Exit on error

VERSION_NEW="$1"
BACKUP_DIR="backups/$(date +%Y%m%d_%H%M%S)"

echo "Creating backup..."
mkdir -p "$BACKUP_DIR"
cp -r data/.myfsio.sys/config "$BACKUP_DIR/"

echo "Updating to version $VERSION_NEW..."
git fetch origin
git checkout "v$VERSION_NEW"
pip install -r requirements.txt

echo "Starting application..."
python run.py &
APP_PID=$!

# Wait and health check
sleep 5
if curl -f http://localhost:5000/healthz; then
    echo "Update successful!"
else
    echo "Health check failed, rolling back..."
    kill $APP_PID
    git checkout -
    # Note: the glob must stay outside the quotes so it expands
    cp -r "$BACKUP_DIR"/config/* data/.myfsio.sys/config/
    python run.py &
    exit 1
fi
```
## 5. Authentication & IAM

@@ -94,6 +565,46 @@ Set env vars (or pass overrides to `create_app`) to point the servers at custom

The API expects every request to include `X-Access-Key` and `X-Secret-Key` headers. The UI persists them in the Flask session after login.

### Available IAM Actions

| Action | Description | AWS Aliases |
| --- | --- | --- |
| `list` | List buckets and objects | `s3:ListBucket`, `s3:ListAllMyBuckets`, `s3:ListBucketVersions`, `s3:ListMultipartUploads`, `s3:ListParts` |
| `read` | Download objects | `s3:GetObject`, `s3:GetObjectVersion`, `s3:GetObjectTagging`, `s3:HeadObject`, `s3:HeadBucket` |
| `write` | Upload objects, create buckets | `s3:PutObject`, `s3:CreateBucket`, `s3:CreateMultipartUpload`, `s3:UploadPart`, `s3:CompleteMultipartUpload`, `s3:AbortMultipartUpload`, `s3:CopyObject` |
| `delete` | Remove objects and buckets | `s3:DeleteObject`, `s3:DeleteObjectVersion`, `s3:DeleteBucket` |
| `share` | Manage ACLs | `s3:PutObjectAcl`, `s3:PutBucketAcl`, `s3:GetBucketAcl` |
| `policy` | Manage bucket policies | `s3:PutBucketPolicy`, `s3:GetBucketPolicy`, `s3:DeleteBucketPolicy` |
| `replication` | Configure and manage replication | `s3:GetReplicationConfiguration`, `s3:PutReplicationConfiguration`, `s3:ReplicateObject`, `s3:ReplicateTags`, `s3:ReplicateDelete` |
| `iam:list_users` | View IAM users | `iam:ListUsers` |
| `iam:create_user` | Create IAM users | `iam:CreateUser` |
| `iam:delete_user` | Delete IAM users | `iam:DeleteUser` |
| `iam:rotate_key` | Rotate user secrets | `iam:RotateAccessKey` |
| `iam:update_policy` | Modify user policies | `iam:PutUserPolicy` |
| `iam:*` | All IAM actions (admin wildcard) | — |

### Example Policies

**Full Control (admin):**

```json
[{"bucket": "*", "actions": ["list", "read", "write", "delete", "share", "policy", "replication", "iam:*"]}]
```

**Read-Only:**

```json
[{"bucket": "*", "actions": ["list", "read"]}]
```

**Single Bucket Access (no listing other buckets):**

```json
[{"bucket": "user-bucket", "actions": ["read", "write", "delete"]}]
```

**Bucket Access with Replication:**

```json
[{"bucket": "my-bucket", "actions": ["list", "read", "write", "delete", "replication"]}]
```

## 6. Bucket Policies & Presets

- **Storage**: Policies are persisted in `data/.myfsio.sys/config/bucket_policies.json` under `{"policies": {"bucket": {...}}}`.
@@ -124,6 +635,48 @@ curl -X PUT http://127.0.0.1:5000/bucket-policy/test \

The UI will reflect this change as soon as the request completes thanks to the hot reload.

### UI Object Browser

The bucket detail page includes a powerful object browser with the following features:

#### Folder Navigation

Objects with forward slashes (`/`) in their keys are displayed as a folder hierarchy. Click a folder row to navigate into it. A breadcrumb navigation bar shows your current path and allows quick navigation back to parent folders or the root.
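
There is no real directory tree involved; the hierarchy is derived from key prefixes, the same way an S3 delimiter listing works. A minimal sketch of the idea (illustrative only, not the UI's actual code):

```python
# Illustrative only: group flat object keys into "folders" by the next
# path segment under a prefix, as S3 delimiter listings do.
def split_listing(keys: list[str], prefix: str = "", delimiter: str = "/"):
    folders: set[str] = set()
    objects: list[str] = []
    for key in keys:
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        if delimiter in rest:
            # Everything up to the next delimiter becomes a "folder" row.
            folders.add(prefix + rest.split(delimiter, 1)[0] + delimiter)
        else:
            objects.append(key)
    return sorted(folders), objects

# split_listing(["a/b.txt", "a/c/d.txt", "e.txt"], prefix="a/")
# -> (["a/c/"], ["a/b.txt"])
```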

#### Pagination & Infinite Scroll

- Objects load in configurable batches (50, 100, 150, 200, or 250 per page)
- Scroll to the bottom to automatically load more objects (infinite scroll)
- A **Load more** button is available as a fallback for touch devices or when infinite scroll doesn't trigger
- The footer shows the current load status (e.g., "Showing 100 of 500 objects")

#### Bulk Operations

- Select multiple objects using checkboxes
- **Bulk Delete**: Delete multiple objects at once
- **Bulk Download**: Download selected objects as individual files

#### Search & Filter

Use the search box to filter objects by name in real time. The filter applies to the currently loaded objects.

#### Error Handling

If object loading fails (e.g., network error), a friendly error message is displayed with a **Retry** button to attempt loading again.

#### Object Preview

Click any object row to view its details in the preview sidebar:

- File size and last modified date
- ETag (content hash)
- Custom metadata (if present)
- Download and presign (share link) buttons
- Version history (when versioning is enabled)

#### Drag & Drop Upload

Drag files directly onto the objects table to upload them to the current bucket and folder path.

## 7. Presigned URLs

- Trigger from the UI using the **Presign** button after selecting an object.
@@ -165,9 +718,207 @@ s3.complete_multipart_upload(
)
```

## 8. Encryption

MyFSIO supports **server-side encryption at rest** to protect your data. When enabled, objects are encrypted using AES-256-GCM before being written to disk.

### Encryption Types

| Type | Description |
| --- | --- |
| **AES-256 (SSE-S3)** | Server-managed encryption using a local master key |
| **KMS (SSE-KMS)** | Encryption using customer-managed keys via the built-in KMS |

### Enabling Encryption

#### 1. Set Environment Variables

```powershell
# PowerShell
$env:ENCRYPTION_ENABLED = "true"
$env:KMS_ENABLED = "true"  # Optional, for KMS key management
python run.py
```

```bash
# Bash
export ENCRYPTION_ENABLED=true
export KMS_ENABLED=true
python run.py
```

#### 2. Configure Bucket Default Encryption (UI)

1. Navigate to your bucket in the UI
2. Click the **Properties** tab
3. Find the **Default Encryption** card
4. Click **Enable Encryption**
5. Choose an algorithm:
   - **AES-256**: Uses the server's master key
   - **aws:kms**: Uses a KMS-managed key (select from dropdown)
6. Save changes

Once enabled, all **new objects** uploaded to the bucket will be automatically encrypted.

### KMS Key Management

When `KMS_ENABLED=true`, you can manage encryption keys via the KMS API:

```bash
# Create a new KMS key
curl -X POST http://localhost:5000/kms/keys \
  -H "Content-Type: application/json" \
  -H "X-Access-Key: ..." -H "X-Secret-Key: ..." \
  -d '{"alias": "my-key", "description": "Production encryption key"}'

# List all keys
curl http://localhost:5000/kms/keys \
  -H "X-Access-Key: ..." -H "X-Secret-Key: ..."

# Get key details
curl http://localhost:5000/kms/keys/{key-id} \
  -H "X-Access-Key: ..." -H "X-Secret-Key: ..."

# Rotate a key (creates new key material)
curl -X POST http://localhost:5000/kms/keys/{key-id}/rotate \
  -H "X-Access-Key: ..." -H "X-Secret-Key: ..."

# Disable/Enable a key
curl -X POST http://localhost:5000/kms/keys/{key-id}/disable \
  -H "X-Access-Key: ..." -H "X-Secret-Key: ..."

curl -X POST http://localhost:5000/kms/keys/{key-id}/enable \
  -H "X-Access-Key: ..." -H "X-Secret-Key: ..."

# Schedule key deletion (30-day waiting period)
curl -X DELETE "http://localhost:5000/kms/keys/{key-id}?waiting_period_days=30" \
  -H "X-Access-Key: ..." -H "X-Secret-Key: ..."
```

### How It Works

1. **Envelope Encryption**: Each object is encrypted with a unique Data Encryption Key (DEK)
2. **Key Wrapping**: The DEK is encrypted (wrapped) by the master key or KMS key (see the sketch below)
3. **Storage**: The encrypted DEK is stored alongside the encrypted object
4. **Decryption**: On read, the DEK is unwrapped and used to decrypt the object
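
As a conceptual sketch of steps 1-2 using AES-256-GCM from the `cryptography` package (which `requirements.txt` pins) — the actual on-disk format and key-wrapping details used by MyFSIO are not specified here:

```python
# Conceptual envelope-encryption sketch, NOT MyFSIO's real implementation:
# a fresh DEK encrypts each object, and the master key wraps the DEK.
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

master_key = AESGCM.generate_key(bit_length=256)  # stands in for master.key


def encrypt_object(plaintext: bytes) -> dict:
    dek = AESGCM.generate_key(bit_length=256)  # per-object data key
    obj_nonce = os.urandom(12)
    ciphertext = AESGCM(dek).encrypt(obj_nonce, plaintext, None)
    wrap_nonce = os.urandom(12)
    wrapped_dek = AESGCM(master_key).encrypt(wrap_nonce, dek, None)
    # Wrapped DEK + nonces are persisted alongside the ciphertext (step 3).
    return {"ciphertext": ciphertext, "obj_nonce": obj_nonce,
            "wrapped_dek": wrapped_dek, "wrap_nonce": wrap_nonce}


def decrypt_object(blob: dict) -> bytes:
    dek = AESGCM(master_key).decrypt(blob["wrap_nonce"], blob["wrapped_dek"], None)
    return AESGCM(dek).decrypt(blob["obj_nonce"], blob["ciphertext"], None)
```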

### Client-Side Encryption

For additional security, you can use client-side encryption. The `ClientEncryptionHelper` class provides utilities:

```python
from app.encryption import ClientEncryptionHelper

# Generate a client-side key
key = ClientEncryptionHelper.generate_key()
key_b64 = ClientEncryptionHelper.key_to_base64(key)

# Encrypt before upload
plaintext = b"sensitive data"
encrypted, metadata = ClientEncryptionHelper.encrypt_for_upload(plaintext, key)

# Upload with metadata headers
# x-amz-meta-x-amz-key: <wrapped-key>
# x-amz-meta-x-amz-iv: <iv>
# x-amz-meta-x-amz-matdesc: <material-description>

# Decrypt after download
decrypted = ClientEncryptionHelper.decrypt_from_download(encrypted, metadata, key)
```

### Important Notes

- **Existing objects are NOT encrypted** - Only new uploads after enabling encryption are encrypted
- **Master key security** - The master key file (`master.key`) should be backed up securely and protected
- **Key rotation** - Rotating a KMS key creates new key material; existing objects remain encrypted with the old material
- **Disabled keys** - Objects encrypted with a disabled key cannot be decrypted until the key is re-enabled
- **Deleted keys** - Once a key is deleted (after the waiting period), objects encrypted with it are permanently inaccessible

### Verifying Encryption

To verify an object is encrypted:

1. Check the raw file in `data/<bucket>/` - it should be unreadable binary
2. Look for `.meta` files containing encryption metadata
3. Download via the API/UI - the object should be automatically decrypted

## 9. Bucket Quotas

MyFSIO supports **storage quotas** to limit how much data a bucket can hold. Quotas are enforced on uploads and multipart completions.

### Quota Types

| Limit | Description |
| --- | --- |
| **Max Size (MB)** | Maximum total storage in megabytes (includes current objects + archived versions) |
| **Max Objects** | Maximum number of objects (includes current objects + archived versions) |

### Managing Quotas (Admin Only)

Quota management is restricted to administrators (users with `iam:*` or `iam:list_users` permissions).

#### Via UI

1. Navigate to your bucket in the UI
2. Click the **Properties** tab
3. Find the **Storage Quota** card
4. Enter limits:
   - **Max Size (MB)**: Leave empty for unlimited
   - **Max Objects**: Leave empty for unlimited
5. Click **Update Quota**

To remove a quota, click **Remove Quota**.

#### Via API

```bash
# Set quota (max 100 MB, max 1000 objects)
curl -X PUT "http://localhost:5000/bucket/<bucket>?quota" \
  -H "Content-Type: application/json" \
  -H "X-Access-Key: ..." -H "X-Secret-Key: ..." \
  -d '{"max_bytes": 104857600, "max_objects": 1000}'

# Get current quota
curl "http://localhost:5000/bucket/<bucket>?quota" \
  -H "X-Access-Key: ..." -H "X-Secret-Key: ..."

# Remove quota
curl -X PUT "http://localhost:5000/bucket/<bucket>?quota" \
  -H "Content-Type: application/json" \
  -H "X-Access-Key: ..." -H "X-Secret-Key: ..." \
  -d '{"max_bytes": null, "max_objects": null}'
```

### Quota Behavior

- **Version Counting**: When versioning is enabled, archived versions count toward the quota
- **Enforcement Points**: Quotas are checked during `PUT` object and `CompleteMultipartUpload` operations (see the sketch below)
- **Error Response**: When a quota is exceeded, the API returns `HTTP 400` with error code `QuotaExceeded`
- **Visibility**: All users can view quota usage in the bucket detail page, but only admins can modify quotas

### Example Error

```xml
<Error>
  <Code>QuotaExceeded</Code>
  <Message>Bucket quota exceeded: storage limit reached</Message>
  <BucketName>my-bucket</BucketName>
</Error>
```
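
Conceptually, enforcement is a pre-write check of the incoming object against cached bucket stats. A hypothetical sketch (the helper names and stats shape below are invented for illustration, not MyFSIO's real API):

```python
# Hypothetical sketch of the quota check behind the QuotaExceeded error above.
class QuotaExceededError(Exception):
    """Maps to the HTTP 400 QuotaExceeded response shown above."""


def check_quota(stats: dict, quota: dict, incoming_bytes: int) -> None:
    max_bytes = quota.get("max_bytes")
    max_objects = quota.get("max_objects")
    # Current objects plus archived versions both count toward the limits.
    if max_bytes is not None and stats["total_bytes"] + incoming_bytes > max_bytes:
        raise QuotaExceededError("storage limit reached")
    if max_objects is not None and stats["total_objects"] + 1 > max_objects:
        raise QuotaExceededError("object count limit reached")
```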

## 10. Site Replication

### Permission Model

Replication uses a two-tier permission system:

| Role | Capabilities |
| --- | --- |
| **Admin** (users with `iam:*` permissions) | Create/delete replication rules, configure connections and target buckets |
| **Users** (with `replication` permission) | Enable/disable (pause/resume) existing replication rules |

> **Note:** The Replication tab is hidden for users without the `replication` permission on the bucket.

This separation allows administrators to pre-configure where data should replicate, while allowing authorized users to toggle replication on and off without accessing connection credentials.

### Architecture

@@ -245,13 +996,15 @@ Now, configure the primary instance to replicate to the target.
- **Secret Key**: The secret you generated on the Target.
- Click **Add Connection**.

3. **Enable Replication** (Admin):
   - Navigate to **Buckets** and select the source bucket.
   - Switch to the **Replication** tab.
   - Select the `Secondary Site` connection.
   - Enter the target bucket name (`backup-bucket`).
   - Click **Enable Replication**.

Once configured, users with `replication` permission on this bucket can pause/resume replication without needing access to connection details.

### Verification

1. Upload a file to the source bucket.
@@ -262,7 +1015,34 @@ Now, configure the primary instance to replicate to the target.
aws --endpoint-url http://target-server:5002 s3 ls s3://backup-bucket
```

### Pausing and Resuming Replication

Users with the `replication` permission (but not admin rights) can pause and resume existing replication rules:

1. Navigate to the bucket's **Replication** tab.
2. If replication is **Active**, click **Pause Replication** to temporarily stop syncing.
3. If replication is **Paused**, click **Resume Replication** to continue syncing.

When paused, new objects uploaded to the source will not replicate until replication is resumed. Objects uploaded while paused will be replicated once resumed.

> **Note:** Only admins can create new replication rules, change the target connection/bucket, or delete rules entirely.

### Bidirectional Replication (Active-Active)

To set up two-way replication (Server A ↔ Server B):

1. Follow the steps above to replicate **A → B**.
2. Repeat the process on Server B to replicate **B → A**:
   - Create a connection on Server B pointing to Server A.
   - Enable replication on the target bucket on Server B.

**Loop Prevention**: The system automatically detects replication traffic using a custom User-Agent (`S3ReplicationAgent`). This prevents infinite loops where an object replicated from A to B is immediately replicated back to A.
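
A minimal sketch of the idea — the `S3ReplicationAgent` string comes from this document, but the helper name and exact header handling are illustrative, not MyFSIO's real code:

```python
# Illustrative loop-prevention check: a PUT that arrived from a peer's
# replication agent should not itself be queued for outbound replication.
from flask import Request


def is_replication_traffic(request: Request) -> bool:
    return "S3ReplicationAgent" in request.headers.get("User-Agent", "")
```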

**Deletes**: Deleting an object on one server will propagate the deletion to the other server.

**Note**: Deleting a bucket will automatically remove its associated replication configuration.

## 11. Running Tests

```bash
pytest -q
@@ -272,7 +1052,7 @@ The suite now includes a boto3 integration test that spins up a live HTTP server

The suite covers bucket CRUD, presigned downloads, bucket policy enforcement, and regression tests for anonymous reads when a Public policy is attached.

## 12. Troubleshooting

| Symptom | Likely Cause | Fix |
| --- | --- | --- |
@@ -281,7 +1061,7 @@ The suite covers bucket CRUD, presigned downloads, bucket policy enforcement, an
| Presign modal errors with 403 | IAM user lacks `read/write/delete` for target bucket or bucket policy denies | Update IAM inline policies or remove conflicting deny statements. |
| Large upload rejected immediately | File exceeds `MAX_UPLOAD_SIZE` | Increase env var or shrink object. |

## 13. API Matrix

```
GET /                          # List buckets
@@ -295,10 +1075,6 @@ POST /presign/<bucket>/<key> # Generate SigV4 URL
GET /bucket-policy/<bucket>    # Fetch policy
PUT /bucket-policy/<bucket>    # Upsert policy
DELETE /bucket-policy/<bucket> # Delete policy
GET /<bucket>?quota            # Get bucket quota
PUT /<bucket>?quota            # Set bucket quota (admin only)
```

3 pytest.ini Normal file
@@ -0,0 +1,3 @@
[pytest]
testpaths = tests
norecursedirs = data .git __pycache__ .venv

requirements.txt
@@ -1,7 +1,10 @@
Flask>=3.1.2
Flask-Limiter>=4.1.1
Flask-Cors>=6.0.2
Flask-WTF>=1.2.2
pytest>=9.0.2
requests>=2.32.5
boto3>=1.42.14
waitress>=3.0.2
psutil>=7.1.3
cryptography>=46.0.3

86 run.py
@@ -3,10 +3,12 @@ from __future__ import annotations

import argparse
import os
import sys
import warnings
from multiprocessing import Process

from app import create_api_app, create_ui_app
from app.config import AppConfig


def _server_host() -> str:
@@ -18,20 +20,33 @@ def _is_debug_enabled() -> bool:
    return os.getenv("FLASK_DEBUG", "0").lower() in ("1", "true", "yes")


def _is_frozen() -> bool:
    """Check if running as a compiled binary (PyInstaller/Nuitka)."""
    return getattr(sys, 'frozen', False) or '__compiled__' in globals()


def serve_api(port: int, prod: bool = False) -> None:
    app = create_api_app()
    if prod:
        from waitress import serve
        serve(app, host=_server_host(), port=port, ident="MyFSIO")
    else:
        debug = _is_debug_enabled()
        if debug:
            warnings.warn("DEBUG MODE ENABLED - DO NOT USE IN PRODUCTION", RuntimeWarning)
        app.run(host=_server_host(), port=port, debug=debug)


def serve_ui(port: int, prod: bool = False) -> None:
    app = create_ui_app()
    if prod:
        from waitress import serve
        serve(app, host=_server_host(), port=port, ident="MyFSIO")
    else:
        debug = _is_debug_enabled()
        if debug:
            warnings.warn("DEBUG MODE ENABLED - DO NOT USE IN PRODUCTION", RuntimeWarning)
        app.run(host=_server_host(), port=port, debug=debug)


if __name__ == "__main__":
@@ -39,18 +54,65 @@ if __name__ == "__main__":
    parser.add_argument("--mode", choices=["api", "ui", "both"], default="both")
    parser.add_argument("--api-port", type=int, default=5000)
    parser.add_argument("--ui-port", type=int, default=5100)
    parser.add_argument("--prod", action="store_true", help="Run in production mode using Waitress")
    parser.add_argument("--dev", action="store_true", help="Force development mode (Flask dev server)")
    parser.add_argument("--check-config", action="store_true", help="Validate configuration and exit")
    parser.add_argument("--show-config", action="store_true", help="Show configuration summary and exit")
    args = parser.parse_args()

    # Handle config check/show modes
    if args.check_config or args.show_config:
        config = AppConfig.from_env()
        config.print_startup_summary()
        if args.check_config:
            issues = config.validate_and_report()
            critical = [i for i in issues if i.startswith("CRITICAL:")]
            sys.exit(1 if critical else 0)
        sys.exit(0)

    # Default to production mode when running as a compiled binary
    # unless --dev is explicitly passed
    prod_mode = args.prod or (_is_frozen() and not args.dev)

    # Validate configuration before starting
    config = AppConfig.from_env()

    # Show startup summary only on first run (when marker file doesn't exist)
    first_run_marker = config.storage_root / ".myfsio.sys" / ".initialized"
    is_first_run = not first_run_marker.exists()

    if is_first_run:
        config.print_startup_summary()

        # Check for critical issues that should prevent startup
        issues = config.validate_and_report()
        critical_issues = [i for i in issues if i.startswith("CRITICAL:")]
        if critical_issues:
            print("ABORTING: Critical configuration issues detected. Fix them before starting.")
            sys.exit(1)

        # Create the marker file to indicate successful first run
        try:
            first_run_marker.parent.mkdir(parents=True, exist_ok=True)
            first_run_marker.write_text(f"Initialized on {__import__('datetime').datetime.now().isoformat()}\n")
        except OSError:
            pass  # Non-critical, just skip marker creation

    if prod_mode:
        print("Running in production mode (Waitress)")
    else:
        print("Running in development mode (Flask dev server)")

    if args.mode in {"api", "both"}:
        print(f"Starting API server on port {args.api_port}...")
        api_proc = Process(target=serve_api, args=(args.api_port, prod_mode), daemon=True)
        api_proc.start()
    else:
        api_proc = None

    if args.mode in {"ui", "both"}:
        print(f"Starting UI server on port {args.ui_port}...")
        serve_ui(args.ui_port, prod_mode)
    elif api_proc:
        try:
            api_proc.join()
370
scripts/install.sh
Normal file
370
scripts/install.sh
Normal file
@@ -0,0 +1,370 @@
|
|||||||
|
#!/bin/bash
|
||||||
|
#
|
||||||
|
# MyFSIO Installation Script
|
||||||
|
# This script sets up MyFSIO for production use on Linux systems.
|
||||||
|
#
|
||||||
|
# Usage:
|
||||||
|
# ./install.sh [OPTIONS]
|
||||||
|
#
|
||||||
|
# Options:
|
||||||
|
# --install-dir DIR Installation directory (default: /opt/myfsio)
|
||||||
|
# --data-dir DIR Data directory (default: /var/lib/myfsio)
|
||||||
|
# --log-dir DIR Log directory (default: /var/log/myfsio)
|
||||||
|
# --user USER System user to run as (default: myfsio)
|
||||||
|
# --port PORT API port (default: 5000)
|
||||||
|
# --ui-port PORT UI port (default: 5100)
|
||||||
|
# --api-url URL Public API URL (for presigned URLs behind proxy)
|
||||||
|
# --no-systemd Skip systemd service creation
|
||||||
|
# --binary PATH Path to myfsio binary (will download if not provided)
|
||||||
|
# -y, --yes Skip confirmation prompts
|
||||||
|
#
|
||||||
|
|
||||||
|
set -e
|
||||||
|
|
||||||
|
INSTALL_DIR="/opt/myfsio"
|
||||||
|
DATA_DIR="/var/lib/myfsio"
|
||||||
|
LOG_DIR="/var/log/myfsio"
|
||||||
|
SERVICE_USER="myfsio"
|
||||||
|
API_PORT="5000"
|
||||||
|
UI_PORT="5100"
|
||||||
|
API_URL=""
|
||||||
|
SKIP_SYSTEMD=false
|
||||||
|
BINARY_PATH=""
|
||||||
|
AUTO_YES=false
|
||||||
|
|
||||||
|
while [[ $# -gt 0 ]]; do
|
||||||
|
case $1 in
|
||||||
|
--install-dir)
|
||||||
|
INSTALL_DIR="$2"
|
||||||
|
shift 2
|
||||||
|
;;
|
||||||
|
--data-dir)
|
||||||
|
DATA_DIR="$2"
|
||||||
|
shift 2
|
||||||
|
;;
|
||||||
|
--log-dir)
|
||||||
|
LOG_DIR="$2"
|
||||||
|
shift 2
|
||||||
|
;;
|
||||||
|
--user)
|
||||||
|
SERVICE_USER="$2"
|
||||||
|
shift 2
|
||||||
|
;;
|
||||||
|
--port)
|
||||||
|
API_PORT="$2"
|
||||||
|
shift 2
|
||||||
|
;;
|
||||||
|
--ui-port)
|
||||||
|
UI_PORT="$2"
|
||||||
|
shift 2
|
||||||
|
;;
|
||||||
|
--api-url)
|
||||||
|
API_URL="$2"
|
||||||
|
shift 2
|
||||||
|
;;
|
||||||
|
--no-systemd)
|
||||||
|
SKIP_SYSTEMD=true
|
||||||
|
shift
|
||||||
|
;;
|
||||||
|
--binary)
|
||||||
|
BINARY_PATH="$2"
|
||||||
|
shift 2
|
||||||
|
;;
|
||||||
|
-y|--yes)
|
||||||
|
AUTO_YES=true
|
||||||
|
shift
|
||||||
|
;;
|
||||||
|
-h|--help)
|
||||||
|
head -30 "$0" | tail -25
|
||||||
|
exit 0
|
||||||
|
;;
|
||||||
|
*)
|
||||||
|
echo "Unknown option: $1"
|
||||||
|
exit 1
|
||||||
|
;;
|
||||||
|
esac
|
||||||
|
done
|
||||||
|
|
||||||
|
echo ""
|
||||||
|
echo "============================================================"
|
||||||
|
echo " MyFSIO Installation Script"
|
||||||
|
echo " S3-Compatible Object Storage"
|
||||||
|
echo "============================================================"
|
||||||
|
echo ""
|
||||||
|
echo "Documentation: https://go.jzwsite.com/myfsio"
|
||||||
|
echo ""
|
||||||
|
|
||||||
|
if [[ $EUID -ne 0 ]]; then
|
||||||
|
echo "Error: This script must be run as root (use sudo)"
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
|
||||||
|
echo "------------------------------------------------------------"
|
||||||
|
echo "STEP 1: Review Installation Configuration"
|
||||||
|
echo "------------------------------------------------------------"
|
||||||
|
echo ""
|
||||||
|
echo " Install directory: $INSTALL_DIR"
|
||||||
|
echo " Data directory: $DATA_DIR"
|
||||||
|
echo " Log directory: $LOG_DIR"
|
||||||
|
echo " Service user: $SERVICE_USER"
|
||||||
|
echo " API port: $API_PORT"
|
||||||
|
echo " UI port: $UI_PORT"
|
||||||
|
if [[ -n "$API_URL" ]]; then
|
||||||
|
echo " Public API URL: $API_URL"
|
||||||
|
fi
|
||||||
|
if [[ -n "$BINARY_PATH" ]]; then
|
||||||
|
echo " Binary path: $BINARY_PATH"
|
||||||
|
fi
|
||||||
|
echo ""
|
||||||
|
|
||||||
|
if [[ "$AUTO_YES" != true ]]; then
|
||||||
|
read -p "Do you want to proceed with these settings? [y/N] " -n 1 -r
|
||||||
|
echo
|
||||||
|
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
|
||||||
|
echo "Installation cancelled."
|
||||||
|
exit 0
|
||||||
|
fi
|
||||||
|
fi
|
||||||
|
|
||||||
|
echo ""
|
||||||
|
echo "------------------------------------------------------------"
|
||||||
|
echo "STEP 2: Creating System User"
|
||||||
|
echo "------------------------------------------------------------"
|
||||||
|
echo ""
|
||||||
|
if id "$SERVICE_USER" &>/dev/null; then
|
||||||
|
echo " [OK] User '$SERVICE_USER' already exists"
|
||||||
|
else
|
||||||
|
useradd --system --no-create-home --shell /usr/sbin/nologin "$SERVICE_USER"
|
||||||
|
echo " [OK] Created user '$SERVICE_USER'"
|
||||||
|
fi
|
||||||
|
|
||||||
|
echo ""
|
||||||
|
echo "------------------------------------------------------------"
|
||||||
|
echo "STEP 3: Creating Directories"
|
||||||
|
echo "------------------------------------------------------------"
|
||||||
|
echo ""
|
||||||
|
mkdir -p "$INSTALL_DIR"
|
||||||
|
echo " [OK] Created $INSTALL_DIR"
|
||||||
|
mkdir -p "$DATA_DIR"
|
||||||
|
echo " [OK] Created $DATA_DIR"
|
||||||
|
mkdir -p "$LOG_DIR"
|
||||||
|
echo " [OK] Created $LOG_DIR"
|
||||||
|
|
||||||
|
echo ""
|
||||||
|
echo "------------------------------------------------------------"
|
||||||
|
echo "STEP 4: Installing Binary"
|
||||||
|
echo "------------------------------------------------------------"
|
||||||
|
echo ""
|
||||||
|
if [[ -n "$BINARY_PATH" ]]; then
|
||||||
|
if [[ -f "$BINARY_PATH" ]]; then
|
||||||
|
cp "$BINARY_PATH" "$INSTALL_DIR/myfsio"
|
||||||
|
echo " [OK] Copied binary from $BINARY_PATH"
|
||||||
|
else
|
||||||
|
echo " [ERROR] Binary not found at $BINARY_PATH"
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
elif [[ -f "./myfsio" ]]; then
|
||||||
|
cp "./myfsio" "$INSTALL_DIR/myfsio"
|
||||||
|
echo " [OK] Copied binary from ./myfsio"
|
||||||
|
else
|
||||||
|
echo " [ERROR] No binary provided."
|
||||||
|
echo " Use --binary PATH or place 'myfsio' in current directory"
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
chmod +x "$INSTALL_DIR/myfsio"
|
||||||
|
echo " [OK] Set executable permissions"
|
||||||
|
|
||||||
|
echo ""
|
||||||
|
echo "------------------------------------------------------------"
|
||||||
|
echo "STEP 5: Generating Secret Key"
|
||||||
|
echo "------------------------------------------------------------"
|
||||||
|
echo ""
|
||||||
|
SECRET_KEY=$(openssl rand -base64 32)
|
||||||
|
echo " [OK] Generated secure SECRET_KEY"
|
||||||
|
|
||||||
|
echo ""
|
||||||
|
echo "------------------------------------------------------------"
|
||||||
|
echo "STEP 6: Creating Configuration File"
|
||||||
|
echo "------------------------------------------------------------"
|
||||||
|
echo ""
|
||||||
|
cat > "$INSTALL_DIR/myfsio.env" << EOF
|
||||||
|
# MyFSIO Configuration
|
||||||
|
# Generated by install.sh on $(date)
|
||||||
|
# Documentation: https://go.jzwsite.com/myfsio
|
||||||
|
|
||||||
|
# Storage paths
|
||||||
|
STORAGE_ROOT=$DATA_DIR
|
||||||
|
LOG_DIR=$LOG_DIR
|
||||||
|
|
||||||
|
# Network
|
||||||
|
APP_HOST=0.0.0.0
|
||||||
|
APP_PORT=$API_PORT
|
||||||
|
|
||||||
|
# Security - CHANGE IN PRODUCTION
|
||||||
|
SECRET_KEY=$SECRET_KEY
|
||||||
|
CORS_ORIGINS=*
|
||||||
|
|
||||||
|
# Public URL (set this if behind a reverse proxy)
|
||||||
|
$(if [[ -n "$API_URL" ]]; then echo "API_BASE_URL=$API_URL"; else echo "# API_BASE_URL=https://s3.example.com"; fi)
|
||||||
|
|
||||||
|
# Logging
|
||||||
|
LOG_LEVEL=INFO
|
||||||
|
LOG_TO_FILE=true
|
||||||
|
|
||||||
|
# Rate limiting
|
||||||
|
RATE_LIMIT_DEFAULT=200 per minute
|
||||||
|
|
||||||
|
# Optional: Encryption (uncomment to enable)
|
||||||
|
# ENCRYPTION_ENABLED=true
|
||||||
|
# KMS_ENABLED=true
|
||||||
|
EOF
|
||||||
|
chmod 600 "$INSTALL_DIR/myfsio.env"
|
||||||
|
echo " [OK] Created $INSTALL_DIR/myfsio.env"
|
||||||
|
|
||||||
|
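# Sketch: pre-filling the public URL when MyFSIO sits behind a reverse proxy.
# The exact flag name depends on this installer's argument parsing (not shown
# in this excerpt); the API_URL variable above suggests something like:
#   sudo ./install.sh --api-url https://s3.example.com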
echo ""
|
||||||
|
echo "------------------------------------------------------------"
|
||||||
|
echo "STEP 7: Setting Permissions"
|
||||||
|
echo "------------------------------------------------------------"
|
||||||
|
echo ""
|
||||||
|
chown -R "$SERVICE_USER:$SERVICE_USER" "$INSTALL_DIR"
|
||||||
|
echo " [OK] Set ownership for $INSTALL_DIR"
|
||||||
|
chown -R "$SERVICE_USER:$SERVICE_USER" "$DATA_DIR"
|
||||||
|
echo " [OK] Set ownership for $DATA_DIR"
|
||||||
|
chown -R "$SERVICE_USER:$SERVICE_USER" "$LOG_DIR"
|
||||||
|
echo " [OK] Set ownership for $LOG_DIR"
|
||||||
|
|
||||||
|
if [[ "$SKIP_SYSTEMD" != true ]]; then
|
||||||
|
echo ""
|
||||||
|
echo "------------------------------------------------------------"
|
||||||
|
echo "STEP 8: Creating Systemd Service"
|
||||||
|
echo "------------------------------------------------------------"
|
||||||
|
echo ""
|
||||||
|
cat > /etc/systemd/system/myfsio.service << EOF
|
||||||
|
[Unit]
|
||||||
|
Description=MyFSIO S3-Compatible Storage
|
||||||
|
Documentation=https://go.jzwsite.com/myfsio
|
||||||
|
After=network.target
|
||||||
|
|
||||||
|
[Service]
|
||||||
|
Type=simple
|
||||||
|
User=$SERVICE_USER
|
||||||
|
Group=$SERVICE_USER
|
||||||
|
WorkingDirectory=$INSTALL_DIR
|
||||||
|
EnvironmentFile=$INSTALL_DIR/myfsio.env
|
||||||
|
ExecStart=$INSTALL_DIR/myfsio
|
||||||
|
Restart=on-failure
|
||||||
|
RestartSec=5
|
||||||
|
|
||||||
|
# Security hardening
|
||||||
|
NoNewPrivileges=true
|
||||||
|
ProtectSystem=strict
|
||||||
|
ProtectHome=true
|
||||||
|
ReadWritePaths=$DATA_DIR $LOG_DIR
|
||||||
|
PrivateTmp=true
|
||||||
|
|
||||||
|
# Resource limits (adjust as needed)
|
||||||
|
# LimitNOFILE=65535
|
||||||
|
# MemoryMax=2G
|
||||||
|
|
||||||
|
[Install]
|
||||||
|
WantedBy=multi-user.target
|
||||||
|
EOF
|
||||||
|
|
||||||
|
systemctl daemon-reload
|
||||||
|
echo " [OK] Created /etc/systemd/system/myfsio.service"
|
||||||
|
echo " [OK] Reloaded systemd daemon"
|
||||||
|
else
|
||||||
|
echo ""
|
||||||
|
echo "------------------------------------------------------------"
|
||||||
|
echo "STEP 8: Skipping Systemd Service (--no-systemd flag used)"
|
||||||
|
echo "------------------------------------------------------------"
|
||||||
|
fi
|
||||||
|
|
||||||
|
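# The generated unit can be linted before its first start; systemd-analyze
# accepts a unit file path directly:
#   systemd-analyze verify /etc/systemd/system/myfsio.service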
echo ""
|
||||||
|
echo "============================================================"
|
||||||
|
echo " Installation Complete!"
|
||||||
|
echo "============================================================"
|
||||||
|
echo ""
|
||||||
|
|
||||||
|
if [[ "$SKIP_SYSTEMD" != true ]]; then
|
||||||
|
echo "------------------------------------------------------------"
|
||||||
|
echo "STEP 9: Start the Service"
|
||||||
|
echo "------------------------------------------------------------"
|
||||||
|
echo ""
|
||||||
|
|
||||||
|
if [[ "$AUTO_YES" != true ]]; then
|
||||||
|
read -p "Would you like to start MyFSIO now? [Y/n] " -n 1 -r
|
||||||
|
echo
|
||||||
|
START_SERVICE=true
|
||||||
|
if [[ $REPLY =~ ^[Nn]$ ]]; then
|
||||||
|
START_SERVICE=false
|
||||||
|
fi
|
||||||
|
else
|
||||||
|
START_SERVICE=true
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [[ "$START_SERVICE" == true ]]; then
|
||||||
|
echo " Starting MyFSIO service..."
|
||||||
|
systemctl start myfsio
|
||||||
|
echo " [OK] Service started"
|
||||||
|
echo ""
|
||||||
|
|
||||||
|
read -p "Would you like to enable MyFSIO to start on boot? [Y/n] " -n 1 -r
|
||||||
|
echo
|
||||||
|
if [[ ! $REPLY =~ ^[Nn]$ ]]; then
|
||||||
|
systemctl enable myfsio
|
||||||
|
echo " [OK] Service enabled on boot"
|
||||||
|
fi
|
||||||
|
echo ""
|
||||||
|
|
||||||
|
sleep 2
|
||||||
|
echo " Service Status:"
|
||||||
|
echo " ---------------"
|
||||||
|
if systemctl is-active --quiet myfsio; then
|
||||||
|
echo " [OK] MyFSIO is running"
|
||||||
|
else
|
||||||
|
echo " [WARNING] MyFSIO may not have started correctly"
|
||||||
|
echo " Check logs with: journalctl -u myfsio -f"
|
||||||
|
fi
|
||||||
|
else
|
||||||
|
echo " [SKIPPED] Service not started"
|
||||||
|
echo ""
|
||||||
|
echo " To start manually, run:"
|
||||||
|
echo " sudo systemctl start myfsio"
|
||||||
|
echo ""
|
||||||
|
echo " To enable on boot, run:"
|
||||||
|
echo " sudo systemctl enable myfsio"
|
||||||
|
fi
|
||||||
|
fi
|
||||||
|
|
||||||
|
echo ""
|
||||||
|
echo "============================================================"
|
||||||
|
echo " Summary"
|
||||||
|
echo "============================================================"
|
||||||
|
echo ""
|
||||||
|
echo "Access Points:"
|
||||||
|
echo " API: http://$(hostname -I 2>/dev/null | awk '{print $1}' || echo "localhost"):$API_PORT"
|
||||||
|
echo " UI: http://$(hostname -I 2>/dev/null | awk '{print $1}' || echo "localhost"):$UI_PORT/ui"
|
||||||
|
echo ""
|
||||||
|
echo "Default Credentials:"
|
||||||
|
echo " Username: localadmin"
|
||||||
|
echo " Password: localadmin"
|
||||||
|
echo " [!] WARNING: Change these immediately after first login!"
|
||||||
|
echo ""
|
||||||
|
echo "Configuration Files:"
|
||||||
|
echo " Environment: $INSTALL_DIR/myfsio.env"
|
||||||
|
echo " IAM Users: $DATA_DIR/.myfsio.sys/config/iam.json"
|
||||||
|
echo " Bucket Policies: $DATA_DIR/.myfsio.sys/config/bucket_policies.json"
|
||||||
|
echo ""
|
||||||
|
echo "Useful Commands:"
|
||||||
|
echo " Check status: sudo systemctl status myfsio"
|
||||||
|
echo " View logs: sudo journalctl -u myfsio -f"
|
||||||
|
echo " Restart: sudo systemctl restart myfsio"
|
||||||
|
echo " Stop: sudo systemctl stop myfsio"
|
||||||
|
echo ""
|
||||||
|
echo "Documentation: https://go.jzwsite.com/myfsio"
|
||||||
|
echo ""
|
||||||
|
echo "============================================================"
|
||||||
|
echo " Thank you for installing MyFSIO!"
|
||||||
|
echo "============================================================"
|
||||||
|
echo ""
|
||||||
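
A quick smoke test once the service is running is to point the AWS CLI at the
local endpoint (sketch: assumes the default API port of 5000 and an access key
pair created in the IAM dashboard, exported as AWS_ACCESS_KEY_ID and
AWS_SECRET_ACCESS_KEY):

    aws --endpoint-url http://localhost:5000 s3 ls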
244
scripts/uninstall.sh
Normal file

@@ -0,0 +1,244 @@
#!/bin/bash
#
# MyFSIO Uninstall Script
# This script removes MyFSIO from your system.
#
# Usage:
#   ./uninstall.sh [OPTIONS]
#
# Options:
#   --keep-data        Don't remove data directory
#   --keep-logs        Don't remove log directory
#   --install-dir DIR  Installation directory (default: /opt/myfsio)
#   --data-dir DIR     Data directory (default: /var/lib/myfsio)
#   --log-dir DIR      Log directory (default: /var/log/myfsio)
#   --user USER        System user (default: myfsio)
#   -y, --yes          Skip confirmation prompts
#

set -e

INSTALL_DIR="/opt/myfsio"
DATA_DIR="/var/lib/myfsio"
LOG_DIR="/var/log/myfsio"
SERVICE_USER="myfsio"
KEEP_DATA=false
KEEP_LOGS=false
AUTO_YES=false

while [[ $# -gt 0 ]]; do
    case $1 in
        --keep-data)
            KEEP_DATA=true
            shift
            ;;
        --keep-logs)
            KEEP_LOGS=true
            shift
            ;;
        --install-dir)
            INSTALL_DIR="$2"
            shift 2
            ;;
        --data-dir)
            DATA_DIR="$2"
            shift 2
            ;;
        --log-dir)
            LOG_DIR="$2"
            shift 2
            ;;
        --user)
            SERVICE_USER="$2"
            shift 2
            ;;
        -y|--yes)
            AUTO_YES=true
            shift
            ;;
        -h|--help)
            head -20 "$0" | tail -15
            exit 0
            ;;
        *)
            echo "Unknown option: $1"
            exit 1
            ;;
    esac
done

echo ""
echo "============================================================"
echo " MyFSIO Uninstallation Script"
echo "============================================================"
echo ""
echo "Documentation: https://go.jzwsite.com/myfsio"
echo ""

if [[ $EUID -ne 0 ]]; then
    echo "Error: This script must be run as root (use sudo)"
    exit 1
fi

echo "------------------------------------------------------------"
echo "STEP 1: Review What Will Be Removed"
echo "------------------------------------------------------------"
echo ""
echo "The following items will be removed:"
echo ""
echo " Install directory: $INSTALL_DIR"
if [[ "$KEEP_DATA" != true ]]; then
    echo " Data directory: $DATA_DIR (ALL YOUR DATA WILL BE DELETED!)"
else
    echo " Data directory: $DATA_DIR (WILL BE KEPT)"
fi
if [[ "$KEEP_LOGS" != true ]]; then
    echo " Log directory: $LOG_DIR"
else
    echo " Log directory: $LOG_DIR (WILL BE KEPT)"
fi
echo " Systemd service: /etc/systemd/system/myfsio.service"
echo " System user: $SERVICE_USER"
echo ""

if [[ "$AUTO_YES" != true ]]; then
    echo "WARNING: This action cannot be undone!"
    echo ""
    read -p "Are you sure you want to uninstall MyFSIO? [y/N] " -n 1 -r
    echo
    if [[ ! $REPLY =~ ^[Yy]$ ]]; then
        echo ""
        echo "Uninstallation cancelled."
        exit 0
    fi

    if [[ "$KEEP_DATA" != true ]]; then
        echo ""
        read -p "This will DELETE ALL YOUR DATA. Type 'DELETE' to confirm: " CONFIRM
        if [[ "$CONFIRM" != "DELETE" ]]; then
            echo ""
            echo "Uninstallation cancelled."
            echo "Tip: Use --keep-data to preserve your data directory"
            exit 0
        fi
    fi
fi

echo ""
echo "------------------------------------------------------------"
echo "STEP 2: Stopping Service"
echo "------------------------------------------------------------"
echo ""
if systemctl is-active --quiet myfsio 2>/dev/null; then
    systemctl stop myfsio
    echo " [OK] Stopped myfsio service"
else
    echo " [SKIP] Service not running"
fi

echo ""
echo "------------------------------------------------------------"
echo "STEP 3: Disabling Service"
echo "------------------------------------------------------------"
echo ""
if systemctl is-enabled --quiet myfsio 2>/dev/null; then
    systemctl disable myfsio
    echo " [OK] Disabled myfsio service"
else
    echo " [SKIP] Service not enabled"
fi

echo ""
echo "------------------------------------------------------------"
echo "STEP 4: Removing Systemd Service File"
echo "------------------------------------------------------------"
echo ""
if [[ -f /etc/systemd/system/myfsio.service ]]; then
    rm -f /etc/systemd/system/myfsio.service
    systemctl daemon-reload
    echo " [OK] Removed /etc/systemd/system/myfsio.service"
    echo " [OK] Reloaded systemd daemon"
else
    echo " [SKIP] Service file not found"
fi

echo ""
echo "------------------------------------------------------------"
echo "STEP 5: Removing Installation Directory"
echo "------------------------------------------------------------"
echo ""
if [[ -d "$INSTALL_DIR" ]]; then
    rm -rf "$INSTALL_DIR"
    echo " [OK] Removed $INSTALL_DIR"
else
    echo " [SKIP] Directory not found: $INSTALL_DIR"
fi

echo ""
echo "------------------------------------------------------------"
echo "STEP 6: Removing Data Directory"
echo "------------------------------------------------------------"
echo ""
if [[ "$KEEP_DATA" != true ]]; then
    if [[ -d "$DATA_DIR" ]]; then
        rm -rf "$DATA_DIR"
        echo " [OK] Removed $DATA_DIR"
    else
        echo " [SKIP] Directory not found: $DATA_DIR"
    fi
else
    echo " [KEPT] Data preserved at: $DATA_DIR"
fi

echo ""
echo "------------------------------------------------------------"
echo "STEP 7: Removing Log Directory"
echo "------------------------------------------------------------"
echo ""
if [[ "$KEEP_LOGS" != true ]]; then
    if [[ -d "$LOG_DIR" ]]; then
        rm -rf "$LOG_DIR"
        echo " [OK] Removed $LOG_DIR"
    else
        echo " [SKIP] Directory not found: $LOG_DIR"
    fi
else
    echo " [KEPT] Logs preserved at: $LOG_DIR"
fi

echo ""
echo "------------------------------------------------------------"
echo "STEP 8: Removing System User"
echo "------------------------------------------------------------"
echo ""
if id "$SERVICE_USER" &>/dev/null; then
    userdel "$SERVICE_USER" 2>/dev/null || true
    echo " [OK] Removed user '$SERVICE_USER'"
else
    echo " [SKIP] User not found: $SERVICE_USER"
fi

echo ""
echo "============================================================"
echo " Uninstallation Complete!"
echo "============================================================"
echo ""

if [[ "$KEEP_DATA" == true ]]; then
    echo "Your data has been preserved at: $DATA_DIR"
    echo ""
    echo "To reinstall MyFSIO with existing data, run:"
    echo " curl -fsSL https://go.jzwsite.com/myfsio-install | sudo bash"
    echo ""
fi

if [[ "$KEEP_LOGS" == true ]]; then
    echo "Your logs have been preserved at: $LOG_DIR"
    echo ""
fi

echo "Thank you for using MyFSIO."
echo "Documentation: https://go.jzwsite.com/myfsio"
echo ""
echo "============================================================"
echo ""
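
For a scripted removal that preserves stored objects and IAM configuration,
the options above combine as:

    sudo ./uninstall.sh --keep-data -y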
File diff suppressed because it is too large
@@ -3,7 +3,7 @@
   <head>
     <meta charset="utf-8" />
     <meta name="viewport" content="width=device-width, initial-scale=1" />
-    <meta name="csrf-token" content="{{ csrf_token() }}" />
+    {% if principal %}<meta name="csrf-token" content="{{ csrf_token() }}" />{% endif %}
     <title>MyFSIO Console</title>
     <link rel="icon" type="image/png" href="{{ url_for('static', filename='images/MyFISO.png') }}" />
     <link rel="icon" type="image/x-icon" href="{{ url_for('static', filename='images/MyFISO.ico') }}" />
@@ -51,18 +51,17 @@
         <li class="nav-item">
           <a class="nav-link" href="{{ url_for('ui.buckets_overview') }}">Buckets</a>
         </li>
+        {% if can_manage_iam %}
         <li class="nav-item">
-          <a class="nav-link {% if not can_manage_iam %}nav-link-muted{% endif %}" href="{{ url_for('ui.iam_dashboard') }}">
-            IAM
-            {% if not can_manage_iam %}<span class="badge ms-2 text-bg-warning">Restricted</span>{% endif %}
-          </a>
+          <a class="nav-link" href="{{ url_for('ui.iam_dashboard') }}">IAM</a>
         </li>
         <li class="nav-item">
-          <a class="nav-link {% if not can_manage_iam %}nav-link-muted{% endif %}" href="{{ url_for('ui.connections_dashboard') }}">
-            Connections
-            {% if not can_manage_iam %}<span class="badge ms-2 text-bg-warning">Restricted</span>{% endif %}
-          </a>
+          <a class="nav-link" href="{{ url_for('ui.connections_dashboard') }}">Connections</a>
+        </li>
+        <li class="nav-item">
+          <a class="nav-link" href="{{ url_for('ui.metrics_dashboard') }}">Metrics</a>
         </li>
+        {% endif %}
         {% endif %}
         {% if principal %}
         <li class="nav-item">
@@ -200,7 +199,7 @@
       })();
     </script>
     <script>
-      // Toast utility
       window.showToast = function(message, title = 'Notification', type = 'info') {
         const toastEl = document.getElementById('liveToast');
         const toastTitle = document.getElementById('toastTitle');
@@ -209,7 +208,6 @@
         toastTitle.textContent = title;
         toastMessage.textContent = message;

-        // Reset classes
         toastEl.classList.remove('text-bg-primary', 'text-bg-success', 'text-bg-danger', 'text-bg-warning');

         if (type === 'success') toastEl.classList.add('text-bg-success');
@@ -222,13 +220,11 @@
     </script>
     <script>
       (function () {
-        // Show flashed messages as toasts
         {% with messages = get_flashed_messages(with_categories=true) %}
         {% if messages %}
         {% for category, message in messages %}
-        // Map Flask categories to Toast types
-        // Flask: success, danger, warning, info
-        // Toast: success, error, warning, info
         var type = "{{ category }}";
         if (type === "danger") type = "error";
         window.showToast({{ message | tojson | safe }}, "Notification", type);
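
Making the csrf-token meta conditional on principal means anonymous pages no
longer carry a token, so client scripts should read it defensively. A minimal
sketch of that pattern (the project's actual fetch wrappers live in the
suppressed static JS and may differ):

    const csrf = document.querySelector('meta[name="csrf-token"]')?.content ?? '';
    if (csrf) headers['X-CSRFToken'] = csrf;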
|
|||||||
File diff suppressed because it is too large
Load Diff
@@ -40,48 +40,53 @@
 <div class="row g-3" id="buckets-container">
   {% for bucket in buckets %}
   <div class="col-md-6 col-xl-4 bucket-item">
-    <div class="card h-100 shadow-sm border-0 bucket-card" data-bucket-row data-href="{{ bucket.detail_url }}">
+    <div class="card h-100 shadow-sm bucket-card" data-bucket-row data-href="{{ bucket.detail_url }}">
       <div class="card-body">
-        <div class="d-flex justify-content-between align-items-start mb-3">
-          <div class="d-flex align-items-center gap-2">
-            <div class="bg-primary-subtle text-primary rounded p-2">
-              <svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" class="bi bi-hdd-network" viewBox="0 0 16 16">
+        <div class="d-flex justify-content-between align-items-start mb-2">
+          <div class="d-flex align-items-center gap-3">
+            <div class="bucket-icon">
+              <svg xmlns="http://www.w3.org/2000/svg" width="22" height="22" fill="currentColor" viewBox="0 0 16 16">
                 <path d="M4.5 5a.5.5 0 1 0 0-1 .5.5 0 0 0 0 1zM3 4.5a.5.5 0 1 1-1 0 .5.5 0 0 1 1 0z"/>
-                <path d="M0 4a2 2 0 0 1 2-2h12a2 2 0 0 1 2 2v1a2 2 0 0 1-2 2H8.5v3a1.5 1.5 0 0 1 1.5 1.5v3.375a.5.5 0 0 1-.5.5h-2a.5.5 0 0 1-.5-.5V11.5a.5.5 0 0 1 .5-.5h1V9.5a.5.5 0 0 0-.5-.5h-1a.5.5 0 0 0-.5.5v1.5a.5.5 0 0 1 .5.5h1v3.375a.5.5 0 0 1-.5.5h-2a.5.5 0 0 1-.5-.5V11.5a.5.5 0 0 1 .5-.5h1V9.5a.5.5 0 0 0-.5-.5h-1a.5.5 0 0 0-.5.5v1.5a.5.5 0 0 1 .5.5h1v3.375a.5.5 0 0 1-.5.5h-2a.5.5 0 0 1-.5-.5V11.5a.5.5 0 0 1 .5-.5h1V9.5a.5.5 0 0 0-.5-.5h-1a.5.5 0 0 0-.5.5v1.5a.5.5 0 0 1 .5.5h1v3.375a.5.5 0 0 1-.5.5h-2a.5.5 0 0 1-.5-.5V11.5a.5.5 0 0 1 .5-.5h1V9.5a.5.5 0 0 0-.5-.5h-1a.5.5 0 0 0-.5.5v1.5a.5.5 0 0 1 .5.5h1V13.5a1.5 1.5 0 0 1 1.5-1.5h3V7H2a2 2 0 0 1-2-2V4zm1 0a1 1 0 0 0 1 1h12a1 1 0 0 0 1-1V4a1 1 0 0 0-1-1H2a1 1 0 0 0-1 1v1z"/>
+                <path d="M0 4a2 2 0 0 1 2-2h12a2 2 0 0 1 2 2v1a2 2 0 0 1-2 2H8.5v3a1.5 1.5 0 0 1 1.5 1.5H11a.5.5 0 0 1 0 1h-1v1h1a.5.5 0 0 1 0 1h-1v1a.5.5 0 0 1-1 0v-1H6v1a.5.5 0 0 1-1 0v-1H4a.5.5 0 0 1 0-1h1v-1H4a.5.5 0 0 1 0-1h1.5A1.5 1.5 0 0 1 7 10.5V7H2a2 2 0 0 1-2-2V4zm1 0v1a1 1 0 0 0 1 1h12a1 1 0 0 0 1-1V4a1 1 0 0 0-1-1H2a1 1 0 0 0-1 1zm5 7.5v1h3v-1a.5.5 0 0 0-.5-.5h-2a.5.5 0 0 0-.5.5z"/>
               </svg>
             </div>
-            <h5 class="card-title mb-0 text-break">{{ bucket.meta.name }}</h5>
+            <div>
+              <h5 class="bucket-name text-break">{{ bucket.meta.name }}</h5>
+              <small class="text-muted">Created {{ bucket.meta.created_at.strftime('%b %d, %Y') }}</small>
+            </div>
           </div>
-          <span class="badge {{ bucket.access_badge }} rounded-pill">{{ bucket.access_label }}</span>
+          <span class="badge {{ bucket.access_badge }} bucket-access-badge">{{ bucket.access_label }}</span>
         </div>

-        <div class="d-flex justify-content-between align-items-end mt-4">
-          <div>
-            <div class="text-muted small mb-1">Storage Used</div>
-            <div class="fw-semibold">{{ bucket.summary.human_size }}</div>
-          </div>
-          <div class="text-end">
-            <div class="text-muted small mb-1">Objects</div>
-            <div class="fw-semibold">{{ bucket.summary.objects }}</div>
-          </div>
-        </div>
+        <div class="bucket-stats">
+          <div class="bucket-stat">
+            <div class="bucket-stat-value">{{ bucket.summary.human_size }}</div>
+            <div class="bucket-stat-label">Storage</div>
+          </div>
+          <div class="bucket-stat">
+            <div class="bucket-stat-value">{{ bucket.summary.objects }}</div>
+            <div class="bucket-stat-label">Objects</div>
+          </div>
+        </div>
       </div>
-      <div class="card-footer bg-transparent border-top-0 pt-0 pb-3">
-        <small class="text-muted">Created {{ bucket.meta.created_at.strftime('%b %d, %Y') }}</small>
-      </div>
     </div>
   </div>
   {% else %}
   <div class="col-12">
-    <div class="text-center py-5 bg-panel rounded-3 border border-dashed">
-      <div class="mb-3 text-muted">
-        <svg xmlns="http://www.w3.org/2000/svg" width="48" height="48" fill="currentColor" class="bi bi-bucket" viewBox="0 0 16 16">
+    <div class="empty-state bg-panel rounded-3 border border-dashed">
+      <div class="empty-state-icon">
+        <svg xmlns="http://www.w3.org/2000/svg" width="36" height="36" fill="currentColor" viewBox="0 0 16 16">
          <path d="M2.522 5H2a.5.5 0 0 0-.494.574l1.372 9.149A1.5 1.5 0 0 0 4.36 16h7.278a1.5 1.5 0 0 0 1.483-1.277l1.373-9.149A.5.5 0 0 0 14 5h-.522A5.5 5.5 0 0 0 2.522 5zm1.005 0a4.5 4.5 0 0 1 8.945 0H3.527z"/>
         </svg>
       </div>
-      <h5>No buckets found</h5>
-      <p class="text-muted mb-4">Get started by creating your first storage bucket.</p>
-      <button class="btn btn-primary" data-bs-toggle="modal" data-bs-target="#createBucketModal">Create Bucket</button>
+      <h5 class="mb-2">No buckets yet</h5>
+      <p class="text-muted mb-4">Create your first storage bucket to start organizing your files.</p>
+      <button class="btn btn-primary" data-bs-toggle="modal" data-bs-target="#createBucketModal">
+        <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="me-1" viewBox="0 0 16 16">
+          <path fill-rule="evenodd" d="M8 2a.5.5 0 0 1 .5.5v5h5a.5.5 0 0 1 0 1h-5v5a.5.5 0 0 1-1 0v-5h-5a.5.5 0 0 1 0-1h5v-5A.5.5 0 0 1 8 2Z"/>
+        </svg>
+        Create Bucket
+      </button>
     </div>
   </div>
   {% endfor %}
@@ -90,20 +95,31 @@
 <div class="modal fade" id="createBucketModal" tabindex="-1" aria-hidden="true">
   <div class="modal-dialog modal-dialog-centered">
     <div class="modal-content">
-      <div class="modal-header">
-        <h1 class="modal-title fs-5">Create bucket</h1>
+      <div class="modal-header border-0">
+        <h1 class="modal-title fs-5">
+          <svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" class="text-primary" viewBox="0 0 16 16">
+            <path d="M.5 9.9a.5.5 0 0 1 .5.5v2.5a1 1 0 0 0 1 1h12a1 1 0 0 0 1-1v-2.5a.5.5 0 0 1 1 0v2.5a2 2 0 0 1-2 2H2a2 2 0 0 1-2-2v-2.5a.5.5 0 0 1 .5-.5z"/>
+            <path d="M7.646 1.146a.5.5 0 0 1 .708 0l3 3a.5.5 0 0 1-.708.708L8.5 2.707V11.5a.5.5 0 0 1-1 0V2.707L5.354 4.854a.5.5 0 1 1-.708-.708l3-3z"/>
+          </svg>
+          Create bucket
+        </h1>
         <button type="button" class="btn-close" data-bs-dismiss="modal" aria-label="Close"></button>
       </div>
       <form method="post" action="{{ url_for('ui.create_bucket') }}">
         <input type="hidden" name="csrf_token" value="{{ csrf_token() }}" />
-        <div class="modal-body">
-          <label class="form-label">Bucket name</label>
-          <input class="form-control" type="text" name="bucket_name" pattern="[a-z0-9.-]{3,63}" placeholder="team-assets" required />
-          <div class="form-text">Must be 3-63 chars, lowercase letters, numbers, dots, or hyphens.</div>
+        <div class="modal-body pt-0">
+          <label class="form-label fw-medium">Bucket name</label>
+          <input class="form-control" type="text" name="bucket_name" pattern="[a-z0-9.-]{3,63}" placeholder="my-bucket-name" required autofocus />
+          <div class="form-text">Use 3-63 characters: lowercase letters, numbers, dots, or hyphens.</div>
         </div>
         <div class="modal-footer">
           <button type="button" class="btn btn-outline-secondary" data-bs-dismiss="modal">Cancel</button>
-          <button class="btn btn-primary" type="submit">Create</button>
+          <button class="btn btn-primary" type="submit">
+            <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="me-1" viewBox="0 0 16 16">
+              <path fill-rule="evenodd" d="M8 2a.5.5 0 0 1 .5.5v5h5a.5.5 0 0 1 0 1h-5v5a.5.5 0 0 1-1 0v-5h-5a.5.5 0 0 1 0-1h5v-5A.5.5 0 0 1 8 2Z"/>
+            </svg>
+            Create
+          </button>
         </div>
       </form>
     </div>
@@ -115,7 +131,7 @@
 {{ super() }}
 <script>
   (function () {
-    // Search functionality
     const searchInput = document.getElementById('bucket-search');
     const bucketItems = document.querySelectorAll('.bucket-item');
     const noBucketsMsg = document.querySelector('.text-center.py-5'); // The "No buckets found" empty state
@@ -137,7 +153,6 @@
     });
   }

-  // View toggle functionality
   const viewGrid = document.getElementById('view-grid');
   const viewList = document.getElementById('view-list');
   const container = document.getElementById('buckets-container');
@@ -152,8 +167,7 @@
     });
     cards.forEach(card => {
       card.classList.remove('h-100');
-      // Optional: Add flex-row to card-body content if we want a horizontal layout
-      // For now, full-width stacked cards is a good list view
     });
     localStorage.setItem('bucket-view-pref', 'list');
   } else {
@@ -172,7+186,6 @@
   viewGrid.addEventListener('change', () => setView('grid'));
   viewList.addEventListener('change', () => setView('list'));

-  // Restore preference
   const pref = localStorage.getItem('bucket-view-pref');
   if (pref === 'list') {
     viewList.checked = true;
@@ -3,76 +3,167 @@
|
|||||||
{% block title %}Connections - S3 Compatible Storage{% endblock %}
|
{% block title %}Connections - S3 Compatible Storage{% endblock %}
|
||||||
|
|
||||||
{% block content %}
|
{% block content %}
|
||||||
<div class="row mb-4">
|
<div class="page-header d-flex justify-content-between align-items-center mb-4">
|
||||||
<div class="col-md-12">
|
<div>
|
||||||
<h2>Remote Connections</h2>
|
<p class="text-uppercase text-muted small mb-1">Replication</p>
|
||||||
<p class="text-muted">Manage connections to other S3-compatible services for replication.</p>
|
<h1 class="h3 mb-1 d-flex align-items-center gap-2">
|
||||||
</div>
|
<svg xmlns="http://www.w3.org/2000/svg" width="28" height="28" fill="currentColor" class="text-primary" viewBox="0 0 16 16">
|
||||||
|
<path d="M4.5 5a.5.5 0 1 0 0-1 .5.5 0 0 0 0 1zM3 4.5a.5.5 0 1 1-1 0 .5.5 0 0 1 1 0z"/>
|
||||||
|
<path d="M0 4a2 2 0 0 1 2-2h12a2 2 0 0 1 2 2v1a2 2 0 0 1-2 2H8.5v3a1.5 1.5 0 0 1 1.5 1.5H12a.5.5 0 0 1 0 1H4a.5.5 0 0 1 0-1h2A1.5 1.5 0 0 1 7.5 10V7H2a2 2 0 0 1-2-2V4zm1 0v1a1 1 0 0 0 1 1h12a1 1 0 0 0 1-1V4a1 1 0 0 0-1-1H2a1 1 0 0 0-1 1z"/>
|
||||||
|
</svg>
|
||||||
|
Remote Connections
|
||||||
|
</h1>
|
||||||
|
<p class="text-muted mb-0 mt-1">Manage connections to other S3-compatible services for replication.</p>
|
||||||
|
</div>
|
||||||
|
<div class="d-none d-md-block">
|
||||||
|
<span class="badge bg-primary bg-opacity-10 text-primary fs-6 px-3 py-2">
|
||||||
|
{{ connections|length }} connection{{ 's' if connections|length != 1 else '' }}
|
||||||
|
</span>
|
||||||
|
</div>
|
||||||
</div>
|
</div>
|
||||||
|
|
||||||
<div class="row">
|
<div class="row g-4">
|
||||||
<div class="col-md-4">
|
<div class="col-lg-4 col-md-5">
|
||||||
<div class="card">
|
<div class="card shadow-sm border-0" style="border-radius: 1rem;">
|
||||||
<div class="card-header">
|
<div class="card-header bg-transparent border-0 pt-4 pb-0 px-4">
|
||||||
Add New Connection
|
<h5 class="fw-semibold d-flex align-items-center gap-2 mb-1">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" class="text-primary" viewBox="0 0 16 16">
|
||||||
|
<path fill-rule="evenodd" d="M8 2a.5.5 0 0 1 .5.5v5h5a.5.5 0 0 1 0 1h-5v5a.5.5 0 0 1-1 0v-5h-5a.5.5 0 0 1 0-1h5v-5A.5.5 0 0 1 8 2Z"/>
|
||||||
|
</svg>
|
||||||
|
Add New Connection
|
||||||
|
</h5>
|
||||||
|
<p class="text-muted small mb-0">Connect to an S3-compatible endpoint</p>
|
||||||
</div>
|
</div>
|
||||||
<div class="card-body">
|
<div class="card-body px-4 pb-4">
|
||||||
<form method="POST" action="{{ url_for('ui.create_connection') }}">
|
<form method="POST" action="{{ url_for('ui.create_connection') }}" id="createConnectionForm">
|
||||||
|
<input type="hidden" name="csrf_token" value="{{ csrf_token() }}"/>
|
||||||
<div class="mb-3">
|
<div class="mb-3">
|
||||||
<label for="name" class="form-label">Name</label>
|
<label for="name" class="form-label fw-medium">Name</label>
|
||||||
<input type="text" class="form-control" id="name" name="name" required placeholder="e.g. Production Backup">
|
<input type="text" class="form-control" id="name" name="name" required placeholder="Production Backup">
|
||||||
</div>
|
</div>
|
||||||
<div class="mb-3">
|
<div class="mb-3">
|
||||||
<label for="endpoint_url" class="form-label">Endpoint URL</label>
|
<label for="endpoint_url" class="form-label fw-medium">Endpoint URL</label>
|
||||||
<input type="url" class="form-control" id="endpoint_url" name="endpoint_url" required placeholder="https://s3.us-east-1.amazonaws.com">
|
<input type="url" class="form-control" id="endpoint_url" name="endpoint_url" required placeholder="https://s3.us-east-1.amazonaws.com">
|
||||||
</div>
|
</div>
|
||||||
<div class="mb-3">
|
<div class="mb-3">
|
||||||
<label for="region" class="form-label">Region</label>
|
<label for="region" class="form-label fw-medium">Region</label>
|
||||||
<input type="text" class="form-control" id="region" name="region" value="us-east-1">
|
<input type="text" class="form-control" id="region" name="region" value="us-east-1">
|
||||||
</div>
|
</div>
|
||||||
<div class="mb-3">
|
<div class="mb-3">
|
||||||
<label for="access_key" class="form-label">Access Key</label>
|
<label for="access_key" class="form-label fw-medium">Access Key</label>
|
||||||
<input type="text" class="form-control" id="access_key" name="access_key" required>
|
<input type="text" class="form-control font-monospace" id="access_key" name="access_key" required>
|
||||||
</div>
|
</div>
|
||||||
<div class="mb-3">
|
<div class="mb-3">
|
||||||
<label for="secret_key" class="form-label">Secret Key</label>
|
<label for="secret_key" class="form-label fw-medium">Secret Key</label>
|
||||||
<input type="password" class="form-control" id="secret_key" name="secret_key" required>
|
<div class="input-group">
|
||||||
|
<input type="password" class="form-control font-monospace" id="secret_key" name="secret_key" required>
|
||||||
|
<button class="btn btn-outline-secondary" type="button" onclick="togglePassword('secret_key')" title="Toggle visibility">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" viewBox="0 0 16 16">
|
||||||
|
<path d="M16 8s-3-5.5-8-5.5S0 8 0 8s3 5.5 8 5.5S16 8 16 8zM1.173 8a13.133 13.133 0 0 1 1.66-2.043C4.12 4.668 5.88 3.5 8 3.5c2.12 0 3.879 1.168 5.168 2.457A13.133 13.133 0 0 1 14.828 8c-.058.087-.122.183-.195.288-.335.48-.83 1.12-1.465 1.755C11.879 11.332 10.119 12.5 8 12.5c-2.12 0-3.879-1.168-5.168-2.457A13.134 13.134 0 0 1 1.172 8z"/>
|
||||||
|
<path d="M8 5.5a2.5 2.5 0 1 0 0 5 2.5 2.5 0 0 0 0-5zM4.5 8a3.5 3.5 0 1 1 7 0 3.5 3.5 0 0 1-7 0z"/>
|
||||||
|
</svg>
|
||||||
|
</button>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
<div id="testResult" class="mb-3"></div>
|
||||||
|
<div class="d-grid gap-2">
|
||||||
|
<button type="button" class="btn btn-outline-secondary" id="testConnectionBtn">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-1" viewBox="0 0 16 16">
|
||||||
|
<path d="M11.251.068a.5.5 0 0 1 .227.58L9.677 6.5H13a.5.5 0 0 1 .364.843l-8 8.5a.5.5 0 0 1-.842-.49L6.323 9.5H3a.5.5 0 0 1-.364-.843l8-8.5a.5.5 0 0 1 .615-.09z"/>
|
||||||
|
</svg>
|
||||||
|
Test Connection
|
||||||
|
</button>
|
||||||
|
<button type="submit" class="btn btn-primary">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-1" viewBox="0 0 16 16">
|
||||||
|
<path fill-rule="evenodd" d="M8 2a.5.5 0 0 1 .5.5v5h5a.5.5 0 0 1 0 1h-5v5a.5.5 0 0 1-1 0v-5h-5a.5.5 0 0 1 0-1h5v-5A.5.5 0 0 1 8 2Z"/>
|
||||||
|
</svg>
|
||||||
|
Add Connection
|
||||||
|
</button>
|
||||||
</div>
|
</div>
|
||||||
<button type="submit" class="btn btn-primary">Add Connection</button>
|
|
||||||
</form>
|
</form>
|
||||||
</div>
|
</div>
|
||||||
</div>
|
</div>
|
||||||
</div>
|
</div>
|
||||||
|
|
||||||
<div class="col-md-8">
|
<div class="col-lg-8 col-md-7">
|
||||||
<div class="card">
|
<div class="card shadow-sm border-0" style="border-radius: 1rem;">
|
||||||
<div class="card-header">
|
<div class="card-header bg-transparent border-0 pt-4 pb-0 px-4 d-flex justify-content-between align-items-center">
|
||||||
Existing Connections
|
<div>
|
||||||
|
<h5 class="fw-semibold d-flex align-items-center gap-2 mb-1">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" class="text-muted" viewBox="0 0 16 16">
|
||||||
|
<path d="M0 1.5A1.5 1.5 0 0 1 1.5 0h2A1.5 1.5 0 0 1 5 1.5v2A1.5 1.5 0 0 1 3.5 5h-2A1.5 1.5 0 0 1 0 3.5v-2zM1.5 1a.5.5 0 0 0-.5.5v2a.5.5 0 0 0 .5.5h2a.5.5 0 0 0 .5-.5v-2a.5.5 0 0 0-.5-.5h-2zM0 8a2 2 0 0 1 2-2h12a2 2 0 0 1 2 2v5a2 2 0 0 1-2 2H2a2 2 0 0 1-2-2V8zm1 3v2a1 1 0 0 0 1 1h12a1 1 0 0 0 1-1v-2H1zm14-1V8a1 1 0 0 0-1-1H2a1 1 0 0 0-1 1v2h14zM2 8.5a.5.5 0 0 1 .5-.5h9a.5.5 0 0 1 0 1h-9a.5.5 0 0 1-.5-.5zm0 4a.5.5 0 0 1 .5-.5h6a.5.5 0 0 1 0 1h-6a.5.5 0 0 1-.5-.5z"/>
|
||||||
|
</svg>
|
||||||
|
Existing Connections
|
||||||
|
</h5>
|
||||||
|
<p class="text-muted small mb-0">Configured remote endpoints</p>
|
||||||
|
</div>
|
||||||
</div>
|
</div>
|
||||||
<div class="card-body">
|
<div class="card-body px-4 pb-4">
|
||||||
{% if connections %}
|
{% if connections %}
|
||||||
<div class="table-responsive">
|
<div class="table-responsive">
|
||||||
<table class="table table-hover">
|
<table class="table table-hover align-middle mb-0">
|
||||||
<thead>
|
<thead class="table-light">
|
||||||
<tr>
|
<tr>
|
||||||
<th>Name</th>
|
<th scope="col" style="width: 50px;">Status</th>
|
||||||
<th>Endpoint</th>
|
<th scope="col">Name</th>
|
||||||
<th>Region</th>
|
<th scope="col">Endpoint</th>
|
||||||
<th>Access Key</th>
|
<th scope="col">Region</th>
|
||||||
<th>Actions</th>
|
<th scope="col">Access Key</th>
|
||||||
|
<th scope="col" class="text-end">Actions</th>
|
||||||
</tr>
|
</tr>
|
||||||
</thead>
|
</thead>
|
||||||
<tbody>
|
<tbody>
|
||||||
{% for conn in connections %}
|
{% for conn in connections %}
|
||||||
<tr>
|
<tr data-connection-id="{{ conn.id }}">
|
||||||
<td>{{ conn.name }}</td>
|
<td class="text-center">
|
||||||
<td>{{ conn.endpoint_url }}</td>
|
<span class="connection-status" data-status="checking" title="Checking...">
|
||||||
<td>{{ conn.region }}</td>
|
<span class="spinner-border spinner-border-sm text-muted" role="status" style="width: 12px; height: 12px;"></span>
|
||||||
<td><code>{{ conn.access_key }}</code></td>
|
</span>
|
||||||
|
</td>
|
||||||
<td>
|
<td>
|
||||||
<form method="POST" action="{{ url_for('ui.delete_connection', connection_id=conn.id) }}" onsubmit="return confirm('Are you sure?');" style="display: inline;">
|
<div class="d-flex align-items-center gap-2">
|
||||||
<button type="submit" class="btn btn-sm btn-danger">Delete</button>
|
<div class="connection-icon">
|
||||||
</form>
|
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" viewBox="0 0 16 16">
|
||||||
|
<path d="M4.5 5a.5.5 0 1 0 0-1 .5.5 0 0 0 0 1zM3 4.5a.5.5 0 1 1-1 0 .5.5 0 0 1 1 0z"/>
|
||||||
|
<path d="M0 4a2 2 0 0 1 2-2h12a2 2 0 0 1 2 2v1a2 2 0 0 1-2 2H8.5v3a1.5 1.5 0 0 1 1.5 1.5H12a.5.5 0 0 1 0 1H4a.5.5 0 0 1 0-1h2A1.5 1.5 0 0 1 7.5 10V7H2a2 2 0 0 1-2-2V4zm1 0v1a1 1 0 0 0 1 1h12a1 1 0 0 0 1-1V4a1 1 0 0 0-1-1H2a1 1 0 0 0-1 1z"/>
|
||||||
|
</svg>
|
||||||
|
</div>
|
||||||
|
<span class="fw-medium">{{ conn.name }}</span>
|
||||||
|
</div>
|
||||||
|
</td>
|
||||||
|
<td>
|
||||||
|
<span class="text-muted small text-truncate d-inline-block" style="max-width: 200px;" title="{{ conn.endpoint_url }}">{{ conn.endpoint_url }}</span>
|
||||||
|
</td>
|
||||||
|
<td><span class="badge bg-primary bg-opacity-10 text-primary">{{ conn.region }}</span></td>
|
||||||
|
<td><code class="small">{{ conn.access_key[:8] }}...{{ conn.access_key[-4:] }}</code></td>
|
||||||
|
<td class="text-end">
|
||||||
|
<div class="btn-group btn-group-sm" role="group">
|
||||||
|
<button type="button" class="btn btn-outline-secondary"
|
||||||
|
data-bs-toggle="modal"
|
||||||
|
data-bs-target="#editConnectionModal"
|
||||||
|
data-id="{{ conn.id }}"
|
||||||
|
data-name="{{ conn.name }}"
|
||||||
|
data-endpoint="{{ conn.endpoint_url }}"
|
||||||
|
data-region="{{ conn.region }}"
|
||||||
|
data-access="{{ conn.access_key }}"
|
||||||
|
data-secret="{{ conn.secret_key }}"
|
||||||
|
title="Edit connection">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" viewBox="0 0 16 16">
|
||||||
|
<path d="M12.146.146a.5.5 0 0 1 .708 0l3 3a.5.5 0 0 1 0 .708l-10 10a.5.5 0 0 1-.168.11l-5 2a.5.5 0 0 1-.65-.65l2-5a.5.5 0 0 1 .11-.168l10-10zM11.207 2.5 13.5 4.793 14.793 3.5 12.5 1.207 11.207 2.5zm1.586 3L10.5 3.207 4 9.707V10h.5a.5.5 0 0 1 .5.5v.5h.5a.5.5 0 0 1 .5.5v.5h.293l6.5-6.5z"/>
|
||||||
|
</svg>
|
||||||
|
</button>
|
||||||
|
<button type="button" class="btn btn-outline-danger"
|
||||||
|
data-bs-toggle="modal"
|
||||||
|
data-bs-target="#deleteConnectionModal"
|
||||||
|
data-id="{{ conn.id }}"
|
||||||
|
data-name="{{ conn.name }}"
|
||||||
|
title="Delete connection">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" viewBox="0 0 16 16">
|
||||||
|
<path d="M5.5 5.5A.5.5 0 0 1 6 6v6a.5.5 0 0 1-1 0V6a.5.5 0 0 1 .5-.5zm2.5 0a.5.5 0 0 1 .5.5v6a.5.5 0 0 1-1 0V6a.5.5 0 0 1 .5-.5zm3 .5a.5.5 0 0 0-1 0v6a.5.5 0 0 0 1 0V6z"/>
|
||||||
|
<path fill-rule="evenodd" d="M14.5 3a1 1 0 0 1-1 1H13v9a2 2 0 0 1-2 2H5a2 2 0 0 1-2-2V4h-.5a1 1 0 0 1-1-1V2a1 1 0 0 1 1-1H6a1 1 0 0 1 1-1h2a1 1 0 0 1 1 1h3.5a1 1 0 0 1 1 1v1zM4.118 4 4 4.059V13a1 1 0 0 0 1 1h6a1 1 0 0 0 1-1V4.059L11.882 4H4.118zM2.5 3V2h11v1h-11z"/>
|
||||||
|
</svg>
|
||||||
|
</button>
|
||||||
|
</div>
|
||||||
</td>
|
</td>
|
||||||
</tr>
|
</tr>
|
||||||
{% endfor %}
|
{% endfor %}
|
||||||
@@ -80,10 +171,278 @@
|
|||||||
</table>
|
</table>
|
||||||
</div>
|
</div>
|
||||||
{% else %}
|
{% else %}
|
||||||
<p class="text-muted text-center my-4">No remote connections configured.</p>
|
<div class="empty-state text-center py-5">
|
||||||
|
<div class="empty-state-icon mx-auto mb-3">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="48" height="48" fill="currentColor" viewBox="0 0 16 16">
|
||||||
|
<path d="M4.5 5a.5.5 0 1 0 0-1 .5.5 0 0 0 0 1zM3 4.5a.5.5 0 1 1-1 0 .5.5 0 0 1 1 0z"/>
|
||||||
|
<path d="M0 4a2 2 0 0 1 2-2h12a2 2 0 0 1 2 2v1a2 2 0 0 1-2 2H8.5v3a1.5 1.5 0 0 1 1.5 1.5H12a.5.5 0 0 1 0 1H4a.5.5 0 0 1 0-1h2A1.5 1.5 0 0 1 7.5 10V7H2a2 2 0 0 1-2-2V4zm1 0v1a1 1 0 0 0 1 1h12a1 1 0 0 0 1-1V4a1 1 0 0 0-1-1H2a1 1 0 0 0-1 1z"/>
|
||||||
|
</svg>
|
||||||
|
</div>
|
||||||
|
<h5 class="fw-semibold mb-2">No connections yet</h5>
|
||||||
|
<p class="text-muted mb-0">Add your first remote connection to enable bucket replication.</p>
|
||||||
|
</div>
|
||||||
{% endif %}
|
{% endif %}
|
||||||
</div>
|
</div>
|
||||||
</div>
|
</div>
|
||||||
</div>
|
</div>
|
||||||
</div>
|
</div>
|
||||||
|
|
||||||
|
<div class="modal fade" id="editConnectionModal" tabindex="-1" aria-hidden="true">
|
||||||
|
<div class="modal-dialog modal-dialog-centered">
|
||||||
|
<div class="modal-content">
|
||||||
|
<div class="modal-header border-0 pb-0">
|
||||||
|
<h5 class="modal-title fw-semibold">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" class="text-primary" viewBox="0 0 16 16">
|
||||||
|
<path d="M12.146.146a.5.5 0 0 1 .708 0l3 3a.5.5 0 0 1 0 .708l-10 10a.5.5 0 0 1-.168.11l-5 2a.5.5 0 0 1-.65-.65l2-5a.5.5 0 0 1 .11-.168l10-10zM11.207 2.5 13.5 4.793 14.793 3.5 12.5 1.207 11.207 2.5zm1.586 3L10.5 3.207 4 9.707V10h.5a.5.5 0 0 1 .5.5v.5h.5a.5.5 0 0 1 .5.5v.5h.293l6.5-6.5zm-9.761 5.175-.106.106-1.528 3.821 3.821-1.528.106-.106A.5.5 0 0 1 5 12.5V12h-.5a.5.5 0 0 1-.5-.5V11h-.5a.5.5 0 0 1-.468-.325z"/>
|
||||||
|
</svg>
|
||||||
|
Edit Connection
|
||||||
|
</h5>
|
||||||
|
<button type="button" class="btn-close" data-bs-dismiss="modal" aria-label="Close"></button>
|
||||||
|
</div>
|
||||||
|
<form method="POST" id="editConnectionForm">
|
||||||
|
<input type="hidden" name="csrf_token" value="{{ csrf_token() }}"/>
|
||||||
|
<div class="modal-body">
|
||||||
|
<div class="mb-3">
|
||||||
|
<label for="edit_name" class="form-label fw-medium">Name</label>
|
||||||
|
<input type="text" class="form-control" id="edit_name" name="name" required>
|
||||||
|
</div>
|
||||||
|
<div class="mb-3">
|
||||||
|
<label for="edit_endpoint_url" class="form-label fw-medium">Endpoint URL</label>
|
||||||
|
<input type="url" class="form-control" id="edit_endpoint_url" name="endpoint_url" required>
|
||||||
|
</div>
|
||||||
|
<div class="mb-3">
|
||||||
|
<label for="edit_region" class="form-label fw-medium">Region</label>
|
||||||
|
<input type="text" class="form-control" id="edit_region" name="region" required>
|
||||||
|
</div>
|
||||||
|
<div class="mb-3">
|
||||||
|
<label for="edit_access_key" class="form-label fw-medium">Access Key</label>
|
||||||
|
<input type="text" class="form-control font-monospace" id="edit_access_key" name="access_key" required>
|
||||||
|
</div>
|
||||||
|
<div class="mb-3">
|
||||||
|
<label for="edit_secret_key" class="form-label fw-medium">Secret Key</label>
|
||||||
|
<div class="input-group">
|
||||||
|
<input type="password" class="form-control font-monospace" id="edit_secret_key" name="secret_key" required>
|
||||||
|
<button class="btn btn-outline-secondary" type="button" onclick="togglePassword('edit_secret_key')">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" viewBox="0 0 16 16">
|
||||||
|
<path d="M16 8s-3-5.5-8-5.5S0 8 0 8s3 5.5 8 5.5S16 8 16 8zM1.173 8a13.133 13.133 0 0 1 1.66-2.043C4.12 4.668 5.88 3.5 8 3.5c2.12 0 3.879 1.168 5.168 2.457A13.133 13.133 0 0 1 14.828 8c-.058.087-.122.183-.195.288-.335.48-.83 1.12-1.465 1.755C11.879 11.332 10.119 12.5 8 12.5c-2.12 0-3.879-1.168-5.168-2.457A13.134 13.134 0 0 1 1.172 8z"/>
|
||||||
|
<path d="M8 5.5a2.5 2.5 0 1 0 0 5 2.5 2.5 0 0 0 0-5zM4.5 8a3.5 3.5 0 1 1 7 0 3.5 3.5 0 0 1-7 0z"/>
|
||||||
|
</svg>
|
||||||
|
</button>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
<div id="editTestResult" class="mt-2"></div>
|
||||||
|
</div>
|
||||||
|
<div class="modal-footer">
|
||||||
|
<button type="button" class="btn btn-outline-secondary" id="editTestConnectionBtn">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-1" viewBox="0 0 16 16">
|
||||||
|
<path d="M11.251.068a.5.5 0 0 1 .227.58L9.677 6.5H13a.5.5 0 0 1 .364.843l-8 8.5a.5.5 0 0 1-.842-.49L6.323 9.5H3a.5.5 0 0 1-.364-.843l8-8.5a.5.5 0 0 1 .615-.09z"/>
|
||||||
|
</svg>
|
||||||
|
Test
|
||||||
|
</button>
|
||||||
|
<button type="button" class="btn btn-outline-secondary" data-bs-dismiss="modal">Cancel</button>
|
||||||
|
<button type="submit" class="btn btn-primary">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-1" viewBox="0 0 16 16">
|
||||||
|
<path d="M10.97 4.97a.75.75 0 0 1 1.07 1.05l-3.99 4.99a.75.75 0 0 1-1.08.02L4.324 8.384a.75.75 0 1 1 1.06-1.06l2.094 2.093 3.473-4.425a.267.267 0 0 1 .02-.022z"/>
|
||||||
|
</svg>
|
||||||
|
Save
|
||||||
|
</button>
|
||||||
|
</div>
|
||||||
|
</form>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
<div class="modal fade" id="deleteConnectionModal" tabindex="-1" aria-hidden="true">
|
||||||
|
<div class="modal-dialog modal-dialog-centered">
|
||||||
|
<div class="modal-content">
|
||||||
|
<div class="modal-header border-0 pb-0">
|
||||||
|
<h5 class="modal-title fw-semibold">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" class="text-danger" viewBox="0 0 16 16">
|
||||||
|
<path d="M5.5 5.5A.5.5 0 0 1 6 6v6a.5.5 0 0 1-1 0V6a.5.5 0 0 1 .5-.5zm2.5 0a.5.5 0 0 1 .5.5v6a.5.5 0 0 1-1 0V6a.5.5 0 0 1 .5-.5zm3 .5a.5.5 0 0 0-1 0v6a.5.5 0 0 0 1 0V6z"/>
|
||||||
|
<path fill-rule="evenodd" d="M14.5 3a1 1 0 0 1-1 1H13v9a2 2 0 0 1-2 2H5a2 2 0 0 1-2-2V4h-.5a1 1 0 0 1-1-1V2a1 1 0 0 1 1-1H6a1 1 0 0 1 1-1h2a1 1 0 0 1 1 1h3.5a1 1 0 0 1 1 1v1zM4.118 4 4 4.059V13a1 1 0 0 0 1 1h6a1 1 0 0 0 1-1V4.059L11.882 4H4.118zM2.5 3V2h11v1h-11z"/>
|
||||||
|
</svg>
|
||||||
|
Delete Connection
|
||||||
|
</h5>
|
||||||
|
<button type="button" class="btn-close" data-bs-dismiss="modal" aria-label="Close"></button>
|
||||||
|
</div>
|
||||||
|
<div class="modal-body">
|
||||||
|
<p>Are you sure you want to delete <strong id="deleteConnectionName"></strong>?</p>
|
||||||
|
<div class="alert alert-warning d-flex align-items-start small" role="alert">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="flex-shrink-0 me-2 mt-0" viewBox="0 0 16 16">
|
||||||
|
<path d="M8 16A8 8 0 1 0 8 0a8 8 0 0 0 0 16zm.93-9.412-1 4.705c-.07.34.029.533.304.533.194 0 .487-.07.686-.246l-.088.416c-.287.346-.92.598-1.465.598-.703 0-1.002-.422-.808-1.319l.738-3.468c.064-.293.006-.399-.287-.47l-.451-.081.082-.381 2.29-.287zM8 5.5a1 1 0 1 1 0-2 1 1 0 0 1 0 2z"/>
|
||||||
|
</svg>
|
||||||
|
<div>This will stop any replication rules using this connection.</div>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
<div class="modal-footer">
|
||||||
|
<button type="button" class="btn btn-outline-secondary" data-bs-dismiss="modal">Cancel</button>
|
||||||
|
<form method="POST" id="deleteConnectionForm">
|
||||||
|
<input type="hidden" name="csrf_token" value="{{ csrf_token() }}"/>
|
||||||
|
<button type="submit" class="btn btn-danger">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-1" viewBox="0 0 16 16">
|
||||||
|
<path d="M5.5 5.5A.5.5 0 0 1 6 6v6a.5.5 0 0 1-1 0V6a.5.5 0 0 1 .5-.5zm2.5 0a.5.5 0 0 1 .5.5v6a.5.5 0 0 1-1 0V6a.5.5 0 0 1 .5-.5zm3 .5a.5.5 0 0 0-1 0v6a.5.5 0 0 0 1 0V6z"/>
|
||||||
|
<path fill-rule="evenodd" d="M14.5 3a1 1 0 0 1-1 1H13v9a2 2 0 0 1-2 2H5a2 2 0 0 1-2-2V4h-.5a1 1 0 0 1-1-1V2a1 1 0 0 1 1-1H6a1 1 0 0 1 1-1h2a1 1 0 0 1 1 1h3.5a1 1 0 0 1 1 1v1zM4.118 4 4 4.059V13a1 1 0 0 0 1 1h6a1 1 0 0 0 1-1V4.059L11.882 4H4.118zM2.5 3V2h11v1h-11z"/>
|
||||||
|
</svg>
|
||||||
|
Delete
|
||||||
|
</button>
|
||||||
|
</form>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
<script>
|
||||||
|
function togglePassword(id) {
|
||||||
|
const input = document.getElementById(id);
|
||||||
|
if (input.type === "password") {
|
||||||
|
input.type = "text";
|
||||||
|
} else {
|
||||||
|
input.type = "password";
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
async function testConnection(formId, resultId) {
|
||||||
|
const form = document.getElementById(formId);
|
||||||
|
const resultDiv = document.getElementById(resultId);
|
||||||
|
const formData = new FormData(form);
|
||||||
|
const data = Object.fromEntries(formData.entries());
|
||||||
|
|
||||||
|
resultDiv.innerHTML = '<div class="text-info"><span class="spinner-border spinner-border-sm" role="status" aria-hidden="true"></span> Testing connection...</div>';
|
||||||
|
|
||||||
|
// Use AbortController to timeout client-side after 20 seconds
|
||||||
|
const controller = new AbortController();
|
||||||
|
const timeoutId = setTimeout(() => controller.abort(), 20000);
|
||||||
|
|
||||||
|
try {
|
||||||
|
const response = await fetch("{{ url_for('ui.test_connection') }}", {
|
||||||
|
method: "POST",
|
||||||
|
headers: {
|
||||||
|
"Content-Type": "application/json",
|
||||||
|
"X-CSRFToken": "{{ csrf_token() }}"
|
||||||
|
},
|
||||||
|
body: JSON.stringify(data),
|
||||||
|
signal: controller.signal
|
||||||
|
});
|
||||||
|
clearTimeout(timeoutId);
|
||||||
|
|
||||||
|
const result = await response.json();
|
||||||
|
if (response.ok) {
|
||||||
|
resultDiv.innerHTML = `<div class="text-success">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="me-1" viewBox="0 0 16 16">
          <path d="M16 8A8 8 0 1 1 0 8a8 8 0 0 1 16 0zm-3.97-3.03a.75.75 0 0 0-1.08.022L7.477 9.417 5.384 7.323a.75.75 0 0 0-1.06 1.06L6.97 11.03a.75.75 0 0 0 1.079-.02l3.992-4.99a.75.75 0 0 0-.01-1.05z"/>
        </svg>
        ${result.message}
      </div>`;
    } else {
      resultDiv.innerHTML = `<div class="text-danger">
        <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="me-1" viewBox="0 0 16 16">
          <path d="M16 8A8 8 0 1 1 0 8a8 8 0 0 1 16 0zM5.354 4.646a.5.5 0 1 0-.708.708L7.293 8l-2.647 2.646a.5.5 0 0 0 .708.708L8 8.707l2.646 2.647a.5.5 0 0 0 .708-.708L8.707 8l2.647-2.646a.5.5 0 0 0-.708-.708L8 7.293 5.354 4.646z"/>
        </svg>
        ${result.message}
      </div>`;
    }
  } catch (error) {
    clearTimeout(timeoutId);
    if (error.name === 'AbortError') {
      resultDiv.innerHTML = `<div class="text-danger">
        <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="me-1" viewBox="0 0 16 16">
          <path d="M16 8A8 8 0 1 1 0 8a8 8 0 0 1 16 0zM5.354 4.646a.5.5 0 1 0-.708.708L7.293 8l-2.647 2.646a.5.5 0 0 0 .708.708L8 8.707l2.646 2.647a.5.5 0 0 0 .708-.708L8.707 8l2.647-2.646a.5.5 0 0 0-.708-.708L8 7.293 5.354 4.646z"/>
        </svg>
        Connection test timed out - endpoint may be unreachable
      </div>`;
    } else {
      resultDiv.innerHTML = `<div class="text-danger">
        <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="me-1" viewBox="0 0 16 16">
          <path d="M16 8A8 8 0 1 1 0 8a8 8 0 0 1 16 0zM5.354 4.646a.5.5 0 1 0-.708.708L7.293 8l-2.647 2.646a.5.5 0 0 0 .708.708L8 8.707l2.646 2.647a.5.5 0 0 0 .708-.708L8.707 8l2.647-2.646a.5.5 0 0 0-.708-.708L8 7.293 5.354 4.646z"/>
        </svg>
        Connection failed: Network error
      </div>`;
    }
  }
}

document.getElementById('testConnectionBtn').addEventListener('click', () => {
  testConnection('createConnectionForm', 'testResult');
});

document.getElementById('editTestConnectionBtn').addEventListener('click', () => {
  testConnection('editConnectionForm', 'editTestResult');
});

const editModal = document.getElementById('editConnectionModal');
editModal.addEventListener('show.bs.modal', event => {
  const button = event.relatedTarget;
  const id = button.getAttribute('data-id');

  document.getElementById('edit_name').value = button.getAttribute('data-name');
  document.getElementById('edit_endpoint_url').value = button.getAttribute('data-endpoint');
  document.getElementById('edit_region').value = button.getAttribute('data-region');
  document.getElementById('edit_access_key').value = button.getAttribute('data-access');
  document.getElementById('edit_secret_key').value = button.getAttribute('data-secret');
  document.getElementById('editTestResult').innerHTML = '';

  const form = document.getElementById('editConnectionForm');
  form.action = "{{ url_for('ui.update_connection', connection_id='CONN_ID') }}".replace('CONN_ID', id);
});

const deleteModal = document.getElementById('deleteConnectionModal');
deleteModal.addEventListener('show.bs.modal', event => {
  const button = event.relatedTarget;
  const id = button.getAttribute('data-id');
  const name = button.getAttribute('data-name');

  document.getElementById('deleteConnectionName').textContent = name;
  const form = document.getElementById('deleteConnectionForm');
  form.action = "{{ url_for('ui.delete_connection', connection_id='CONN_ID') }}".replace('CONN_ID', id);
});

// Check connection health for each connection in the table
// Uses staggered requests to avoid overwhelming the server
async function checkConnectionHealth(connectionId, statusEl) {
  try {
    const controller = new AbortController();
    const timeoutId = setTimeout(() => controller.abort(), 15000);

    const response = await fetch(`/ui/connections/${connectionId}/health`, {
      signal: controller.signal
    });
    clearTimeout(timeoutId);

    const data = await response.json();
    if (data.healthy) {
      statusEl.innerHTML = `
        <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="text-success" viewBox="0 0 16 16">
          <path d="M16 8A8 8 0 1 1 0 8a8 8 0 0 1 16 0zm-3.97-3.03a.75.75 0 0 0-1.08.022L7.477 9.417 5.384 7.323a.75.75 0 0 0-1.06 1.06L6.97 11.03a.75.75 0 0 0 1.079-.02l3.992-4.99a.75.75 0 0 0-.01-1.05z"/>
        </svg>`;
      statusEl.setAttribute('data-status', 'healthy');
      statusEl.setAttribute('title', 'Connected');
    } else {
      statusEl.innerHTML = `
        <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="text-danger" viewBox="0 0 16 16">
          <path d="M16 8A8 8 0 1 1 0 8a8 8 0 0 1 16 0zM5.354 4.646a.5.5 0 1 0-.708.708L7.293 8l-2.647 2.646a.5.5 0 0 0 .708.708L8 8.707l2.646 2.647a.5.5 0 0 0 .708-.708L8.707 8l2.647-2.646a.5.5 0 0 0-.708-.708L8 7.293 5.354 4.646z"/>
        </svg>`;
      statusEl.setAttribute('data-status', 'unhealthy');
      statusEl.setAttribute('title', data.error || 'Unreachable');
    }
  } catch (error) {
    statusEl.innerHTML = `
      <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="text-warning" viewBox="0 0 16 16">
        <path d="M8.982 1.566a1.13 1.13 0 0 0-1.96 0L.165 13.233c-.457.778.091 1.767.98 1.767h13.713c.889 0 1.438-.99.98-1.767L8.982 1.566zM8 5c.535 0 .954.462.9.995l-.35 3.507a.552.552 0 0 1-1.1 0L7.1 5.995A.905.905 0 0 1 8 5zm.002 6a1 1 0 1 1 0 2 1 1 0 0 1 0-2z"/>
      </svg>`;
    statusEl.setAttribute('data-status', 'unknown');
    statusEl.setAttribute('title', 'Could not check status');
  }
}

// Stagger health checks to avoid all requests at once
const connectionRows = document.querySelectorAll('tr[data-connection-id]');
connectionRows.forEach((row, index) => {
  const connectionId = row.getAttribute('data-connection-id');
  const statusEl = row.querySelector('.connection-status');
  if (statusEl) {
    // Stagger requests by 200ms each
    setTimeout(() => checkConnectionHealth(connectionId, statusEl), index * 200);
  }
});
</script>
{% endblock %}
{% endblock %}
@@ -31,20 +31,194 @@
. .venv/Scripts/activate  # PowerShell: .\\.venv\\Scripts\\Activate.ps1
pip install -r requirements.txt

# Run both API and UI (Development)
python run.py

# Run in Production (Waitress server)
python run.py --prod

# Or run individually
python run.py --mode api
python run.py --mode ui
</code></pre>
<h3 class="h6 mt-4 mb-2">Configuration</h3>
<p class="text-muted small">Configuration defaults live in <code>app/config.py</code>. You can override them using environment variables. This is critical for production deployments behind proxies.</p>
<div class="table-responsive">
  <table class="table table-sm table-bordered small mb-0">
    <thead class="table-light">
      <tr>
        <th style="min-width: 180px;">Variable</th>
        <th style="min-width: 120px;">Default</th>
        <th class="text-wrap" style="min-width: 250px;">Description</th>
      </tr>
    </thead>
    <tbody>
      <tr>
        <td><code>API_BASE_URL</code></td>
        <td><code>None</code></td>
        <td>The public URL of the API. <strong>Required</strong> if running behind a proxy. Ensures presigned URLs are generated correctly.</td>
      </tr>
      <tr>
        <td><code>STORAGE_ROOT</code></td>
        <td><code>./data</code></td>
        <td>Directory for buckets and objects.</td>
      </tr>
      <tr>
        <td><code>MAX_UPLOAD_SIZE</code></td>
        <td><code>1 GB</code></td>
        <td>Max request body size in bytes.</td>
      </tr>
      <tr>
        <td><code>SECRET_KEY</code></td>
        <td>(Auto-generated)</td>
        <td>Flask session key. Auto-generates if not set. <strong>Set explicitly in production.</strong></td>
      </tr>
      <tr>
        <td><code>APP_HOST</code></td>
        <td><code>0.0.0.0</code></td>
        <td>Bind interface.</td>
      </tr>
      <tr>
        <td><code>APP_PORT</code></td>
        <td><code>5000</code></td>
        <td>Listen port (UI uses 5100).</td>
      </tr>
      <tr class="table-secondary">
        <td colspan="3" class="fw-semibold">CORS Settings</td>
      </tr>
      <tr>
        <td><code>CORS_ORIGINS</code></td>
        <td><code>*</code></td>
        <td>Allowed origins. <strong>Restrict in production.</strong></td>
      </tr>
      <tr>
        <td><code>CORS_METHODS</code></td>
        <td><code>GET,PUT,POST,DELETE,OPTIONS,HEAD</code></td>
        <td>Allowed HTTP methods.</td>
      </tr>
      <tr>
        <td><code>CORS_ALLOW_HEADERS</code></td>
        <td><code>*</code></td>
        <td>Allowed request headers.</td>
      </tr>
      <tr>
        <td><code>CORS_EXPOSE_HEADERS</code></td>
        <td><code>*</code></td>
        <td>Response headers visible to browsers (e.g., <code>ETag</code>).</td>
      </tr>
      <tr class="table-secondary">
        <td colspan="3" class="fw-semibold">Security Settings</td>
      </tr>
      <tr>
        <td><code>AUTH_MAX_ATTEMPTS</code></td>
        <td><code>5</code></td>
        <td>Failed login attempts before lockout.</td>
      </tr>
      <tr>
        <td><code>AUTH_LOCKOUT_MINUTES</code></td>
        <td><code>15</code></td>
        <td>Lockout duration after max failed attempts.</td>
      </tr>
      <tr>
        <td><code>RATE_LIMIT_DEFAULT</code></td>
        <td><code>200 per minute</code></td>
        <td>Default API rate limit.</td>
      </tr>
      <tr class="table-secondary">
        <td colspan="3" class="fw-semibold">Encryption Settings</td>
      </tr>
      <tr>
        <td><code>ENCRYPTION_ENABLED</code></td>
        <td><code>false</code></td>
        <td>Enable server-side encryption support.</td>
      </tr>
      <tr>
        <td><code>KMS_ENABLED</code></td>
        <td><code>false</code></td>
        <td>Enable KMS key management for encryption.</td>
      </tr>
      <tr class="table-secondary">
        <td colspan="3" class="fw-semibold">Logging Settings</td>
      </tr>
      <tr>
        <td><code>LOG_LEVEL</code></td>
        <td><code>INFO</code></td>
        <td>Log verbosity: DEBUG, INFO, WARNING, ERROR.</td>
      </tr>
      <tr>
        <td><code>LOG_TO_FILE</code></td>
        <td><code>true</code></td>
        <td>Enable file logging.</td>
      </tr>
    </tbody>
  </table>
</div>
<div class="alert alert-warning mt-3 mb-0 small">
  <strong>Production Checklist:</strong> Set <code>SECRET_KEY</code>, restrict <code>CORS_ORIGINS</code>, configure <code>API_BASE_URL</code>, enable HTTPS via reverse proxy, and use the <code>--prod</code> flag.
</div>
</div>
</article>
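<p class="text-muted small">Why <code>API_BASE_URL</code> matters: presigned URLs embed the host the server believes it is published at. A minimal sketch of checking this from a client, assuming boto3 is installed and using placeholder endpoint and credentials:</p>
<pre class="mb-0"><code class="language-python">import boto3

# Placeholder endpoint and credentials; substitute your own deployment's values.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example.com",  # should match API_BASE_URL
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
    region_name="us-east-1",
)

# The signature covers the host, so a URL generated against the wrong
# endpoint fails validation when fetched through the public domain.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "demo", "Key": "notes.txt"},
    ExpiresIn=3600,
)
print(url)  # the printed host should be your public domain</code></pre>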
<article id="background" class="card shadow-sm docs-section">
  <div class="card-body">
    <div class="d-flex align-items-center gap-2 mb-3">
      <span class="docs-section-kicker">02</span>
      <h2 class="h4 mb-0">Running in background</h2>
    </div>
    <p class="text-muted">For production or server deployments, run MyFSIO as a background service so it persists after you close the terminal.</p>

    <h3 class="h6 text-uppercase text-muted mt-4">Quick Start (nohup)</h3>
    <p class="text-muted small">Simplest way to run in background—survives terminal close:</p>
    <pre class="mb-3"><code class="language-bash"># Using Python
nohup python run.py --prod > /dev/null 2>&1 &

# Using compiled binary
nohup ./myfsio > /dev/null 2>&1 &

# Check if running
ps aux | grep myfsio</code></pre>

    <h3 class="h6 text-uppercase text-muted mt-4">Screen / Tmux</h3>
    <p class="text-muted small">Attach/detach from a persistent session:</p>
    <pre class="mb-3"><code class="language-bash"># Start in a detached screen session
screen -dmS myfsio ./myfsio

# Attach to view logs
screen -r myfsio

# Detach: press Ctrl+A, then D</code></pre>

    <h3 class="h6 text-uppercase text-muted mt-4">Systemd (Recommended for Production)</h3>
    <p class="text-muted small">Create <code>/etc/systemd/system/myfsio.service</code>:</p>
    <pre class="mb-3"><code class="language-ini">[Unit]
Description=MyFSIO S3-Compatible Storage
After=network.target

[Service]
Type=simple
User=myfsio
WorkingDirectory=/opt/myfsio
ExecStart=/opt/myfsio/myfsio
Restart=on-failure
RestartSec=5
Environment=STORAGE_ROOT=/var/lib/myfsio
Environment=API_BASE_URL=https://s3.example.com

[Install]
WantedBy=multi-user.target</code></pre>
    <p class="text-muted small">Then enable and start:</p>
    <pre class="mb-0"><code class="language-bash">sudo systemctl daemon-reload
sudo systemctl enable myfsio
sudo systemctl start myfsio

# Check status
sudo systemctl status myfsio
sudo journalctl -u myfsio -f  # View logs</code></pre>
  </div>
</article>
<article id="auth" class="card shadow-sm docs-section">
  <div class="card-body">
    <div class="d-flex align-items-center gap-2 mb-3">
      <span class="docs-section-kicker">03</span>
      <h2 class="h4 mb-0">Authenticate & manage IAM</h2>
    </div>
    <p class="text-muted">MyFSIO seeds <code>data/.myfsio.sys/config/iam.json</code> with <code>localadmin/localadmin</code>. Sign in once, rotate it, then grant least-privilege access to teammates and tools.</p>
@@ -62,7 +236,7 @@ python run.py --mode ui
<article id="console" class="card shadow-sm docs-section">
  <div class="card-body">
    <div class="d-flex align-items-center gap-2 mb-3">
      <span class="docs-section-kicker">04</span>
      <h2 class="h4 mb-0">Use the console effectively</h2>
    </div>
    <p class="text-muted">Each workspace models an S3 workflow so you can administer buckets end-to-end.</p>
@@ -81,6 +255,15 @@ python run.py --mode ui
      <li>Progress rows highlight retries, throughput, and completion even if you close the modal.</li>
    </ul>
  </div>
  <div>
    <h3 class="h6 text-uppercase text-muted">Object browser</h3>
    <ul>
      <li>Navigate folder hierarchies using breadcrumbs. Objects with <code>/</code> in keys display as folders (see the listing sketch after this list).</li>
      <li>Infinite scroll loads more objects automatically. Choose batch size (50–250) from the footer dropdown.</li>
      <li>Bulk select objects for multi-delete or multi-download. Filter by name using the search box.</li>
      <li>If loading fails, click <strong>Retry</strong> to attempt again—no page refresh needed.</li>
    </ul>
  </div>
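  <p class="text-muted small">The folder view is an S3 convention, not a real directory tree: keys sharing a prefix up to a <code>/</code> are grouped. A minimal sketch of how any S3 client reproduces it, assuming boto3 and a placeholder bucket named <code>demo</code>:</p>
  <pre class="mb-0"><code class="language-python">import boto3

# Placeholder endpoint; credentials come from the default chain or env vars.
s3 = boto3.client("s3", endpoint_url="https://s3.example.com")

# Delimiter="/" folds keys like "reports/2024/jan.csv" into the
# common prefix "reports/", which the console renders as a folder.
resp = s3.list_objects_v2(Bucket="demo", Delimiter="/")
for prefix in resp.get("CommonPrefixes", []):
    print("folder:", prefix["Prefix"])
for obj in resp.get("Contents", []):
    print("object:", obj["Key"], obj["Size"])</code></pre>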
  <div>
    <h3 class="h6 text-uppercase text-muted">Object details</h3>
    <ul>
@@ -101,7 +284,7 @@ python run.py --mode ui
<article id="automation" class="card shadow-sm docs-section">
  <div class="card-body">
    <div class="d-flex align-items-center gap-2 mb-3">
      <span class="docs-section-kicker">05</span>
      <h2 class="h4 mb-0">Automate with CLI & tools</h2>
    </div>
    <p class="text-muted">Point standard S3 clients at {{ api_base }} and reuse the same IAM credentials.</p>
@@ -154,7 +337,7 @@ curl -X POST {{ api_base }}/presign/demo/notes.txt \
<article id="api" class="card shadow-sm docs-section">
  <div class="card-body">
    <div class="d-flex align-items-center gap-2 mb-3">
      <span class="docs-section-kicker">06</span>
      <h2 class="h4 mb-0">Key REST endpoints</h2>
    </div>
    <div class="table-responsive">
@@ -221,7 +404,7 @@ curl -X POST {{ api_base }}/presign/demo/notes.txt \
<article id="examples" class="card shadow-sm docs-section">
  <div class="card-body">
    <div class="d-flex align-items-center gap-2 mb-3">
      <span class="docs-section-kicker">07</span>
      <h2 class="h4 mb-0">API Examples</h2>
    </div>
    <p class="text-muted">Common operations using boto3.</p>
@@ -260,7 +443,7 @@ s3.complete_multipart_upload(
<article id="replication" class="card shadow-sm docs-section">
  <div class="card-body">
    <div class="d-flex align-items-center gap-2 mb-3">
      <span class="docs-section-kicker">08</span>
      <h2 class="h4 mb-0">Site Replication</h2>
    </div>
    <p class="text-muted">Automatically copy new objects to another MyFSIO instance or S3-compatible service for backup or disaster recovery.</p>
@@ -290,12 +473,186 @@ s3.complete_multipart_upload(
        </div>
      </div>
    </div>

    <h3 class="h6 text-uppercase text-muted mt-4">Bidirectional Replication (Active-Active)</h3>
    <p class="small text-muted">To set up two-way replication (Server A ↔ Server B):</p>
    <ol class="docs-steps mb-3">
      <li>Follow the steps above to replicate <strong>A → B</strong>.</li>
      <li>Repeat the process on Server B to replicate <strong>B → A</strong> (create a connection to A, enable rule).</li>
    </ol>
    <p class="small text-muted mb-0">
      <strong>Loop Prevention:</strong> The system automatically detects replication traffic using a custom User-Agent (<code>S3ReplicationAgent</code>). This prevents infinite loops where an object replicated from A to B is immediately replicated back to A.
      <br>
      <strong>Deletes:</strong> Deleting an object on one server will propagate the deletion to the other server.
    </p>
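    <p class="text-muted small">For intuition only, here is how a replication client can tag its traffic with a marker User-Agent via botocore, which a receiving side could match to skip re-replicating. This is an illustrative sketch of the mechanism, not MyFSIO's actual replication code:</p>
    <pre class="mb-0"><code class="language-python">import boto3
from botocore.config import Config

# user_agent_extra appends a marker to every request's User-Agent header,
# which the destination can match (e.g., "S3ReplicationAgent") to avoid loops.
replication_client = boto3.client(
    "s3",
    endpoint_url="https://server-b.example.com",  # placeholder destination
    config=Config(user_agent_extra="S3ReplicationAgent"),
)

# Copies made through this client carry the marker, so Server B's own
# replication rule can recognize and ignore them instead of echoing to A.
replication_client.put_object(Bucket="demo", Key="notes.txt", Body=b"replicated")</code></pre>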
  </div>
</article>

<article id="quotas" class="card shadow-sm docs-section">
  <div class="card-body">
    <div class="d-flex align-items-center gap-2 mb-3">
      <span class="docs-section-kicker">10</span>
      <h2 class="h4 mb-0">Bucket Quotas</h2>
    </div>
    <p class="text-muted">Limit how much data a bucket can hold using storage quotas. Quotas are enforced on uploads and multipart completions.</p>

    <h3 class="h6 text-uppercase text-muted mt-4">Quota Types</h3>
    <div class="table-responsive mb-3">
      <table class="table table-sm table-bordered small">
        <thead class="table-light">
          <tr>
            <th>Limit</th>
            <th>Description</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td><strong>Max Size (MB)</strong></td>
            <td>Maximum total storage in megabytes (includes current objects + archived versions)</td>
          </tr>
          <tr>
            <td><strong>Max Objects</strong></td>
            <td>Maximum number of objects (includes current objects + archived versions)</td>
          </tr>
        </tbody>
      </table>
    </div>

    <h3 class="h6 text-uppercase text-muted mt-4">Managing Quotas (Admin Only)</h3>
    <p class="small text-muted">Quota management is restricted to administrators (users with <code>iam:*</code> permissions).</p>
    <ol class="docs-steps mb-3">
      <li>Navigate to your bucket → <strong>Properties</strong> tab → <strong>Storage Quota</strong> card.</li>
      <li>Enter limits: <strong>Max Size (MB)</strong> and/or <strong>Max Objects</strong>. Leave empty for unlimited.</li>
      <li>Click <strong>Update Quota</strong> to save, or <strong>Remove Quota</strong> to clear limits.</li>
    </ol>

    <h3 class="h6 text-uppercase text-muted mt-4">API Usage</h3>
    <pre class="mb-3"><code class="language-bash"># Set quota (max 100MB, max 1000 objects)
curl -X PUT "{{ api_base }}/bucket/<bucket>?quota" \
  -H "Content-Type: application/json" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -d '{"max_bytes": 104857600, "max_objects": 1000}'

# Get current quota
curl "{{ api_base }}/bucket/<bucket>?quota" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

# Remove quota
curl -X PUT "{{ api_base }}/bucket/<bucket>?quota" \
  -H "Content-Type: application/json" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -d '{"max_bytes": null, "max_objects": null}'</code></pre>

    <div class="alert alert-light border mb-0">
      <div class="d-flex gap-2">
        <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-info-circle text-muted mt-1" viewBox="0 0 16 16">
          <path d="M8 15A7 7 0 1 1 8 1a7 7 0 0 1 0 14zm0 1A8 8 0 1 0 8 0a8 8 0 0 0 0 16z"/>
          <path d="m8.93 6.588-2.29.287-.082.38.45.083c.294.07.352.176.288.469l-.738 3.468c-.194.897.105 1.319.808 1.319.545 0 1.178-.252 1.465-.598l.088-.416c-.2.176-.492.246-.686.246-.275 0-.375-.193-.304-.533L8.93 6.588zM9 4.5a1 1 0 1 1-2 0 1 1 0 0 1 2 0z"/>
        </svg>
        <div>
          <strong>Version Counting:</strong> When versioning is enabled, archived versions count toward the quota. The quota is checked against total storage, not just current objects.
        </div>
      </div>
    </div>
  </div>
</article>

<article id="encryption" class="card shadow-sm docs-section">
  <div class="card-body">
    <div class="d-flex align-items-center gap-2 mb-3">
      <span class="docs-section-kicker">11</span>
      <h2 class="h4 mb-0">Encryption</h2>
    </div>
    <p class="text-muted">Protect data at rest with server-side encryption using AES-256-GCM. Objects are encrypted before being written to disk and decrypted transparently on read.</p>

    <h3 class="h6 text-uppercase text-muted mt-4">Encryption Types</h3>
    <div class="table-responsive mb-3">
      <table class="table table-sm table-bordered small">
        <thead class="table-light">
          <tr>
            <th>Type</th>
            <th>Description</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td><strong>AES-256 (SSE-S3)</strong></td>
            <td>Server-managed encryption using a local master key</td>
          </tr>
          <tr>
            <td><strong>KMS (SSE-KMS)</strong></td>
            <td>Encryption using customer-managed keys via the built-in KMS</td>
          </tr>
        </tbody>
      </table>
    </div>

    <h3 class="h6 text-uppercase text-muted mt-4">Enabling Encryption</h3>
    <ol class="docs-steps mb-3">
      <li>
        <strong>Set environment variables:</strong>
        <pre class="mb-2"><code class="language-bash"># PowerShell
$env:ENCRYPTION_ENABLED = "true"
$env:KMS_ENABLED = "true"  # Optional
python run.py

# Bash
export ENCRYPTION_ENABLED=true
export KMS_ENABLED=true
python run.py</code></pre>
      </li>
      <li>
        <strong>Configure bucket encryption:</strong> Navigate to your bucket → <strong>Properties</strong> tab → <strong>Default Encryption</strong> card → Click <strong>Enable Encryption</strong>.
      </li>
      <li>
        <strong>Choose algorithm:</strong> Select <strong>AES-256</strong> for server-managed keys or <strong>aws:kms</strong> to use a KMS-managed key.
      </li>
    </ol>

    <div class="alert alert-warning border-warning bg-warning-subtle mb-3">
      <div class="d-flex gap-2">
        <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-exclamation-triangle mt-1" viewBox="0 0 16 16">
          <path d="M7.938 2.016A.13.13 0 0 1 8.002 2a.13.13 0 0 1 .063.016.146.146 0 0 1 .054.057l6.857 11.667c.036.06.035.124.002.183a.163.163 0 0 1-.054.06.116.116 0 0 1-.066.017H1.146a.115.115 0 0 1-.066-.017.163.163 0 0 1-.054-.06.176.176 0 0 1 .002-.183L7.884 2.073a.147.147 0 0 1 .054-.057zm1.044-.45a1.13 1.13 0 0 0-1.96 0L.165 13.233c-.457.778.091 1.767.98 1.767h13.713c.889 0 1.438-.99.98-1.767L8.982 1.566z"/>
          <path d="M7.002 12a1 1 0 1 1 2 0 1 1 0 0 1-2 0zM7.1 5.995a.905.905 0 1 1 1.8 0l-.35 3.507a.552.552 0 0 1-1.1 0L7.1 5.995z"/>
        </svg>
        <div>
          <strong>Important:</strong> Only <em>new uploads</em> after enabling encryption will be encrypted. Existing objects remain unencrypted.
        </div>
      </div>
    </div>

    <h3 class="h6 text-uppercase text-muted mt-4">KMS Key Management</h3>
    <p class="small text-muted">When <code>KMS_ENABLED=true</code>, manage encryption keys via the API:</p>
    <pre class="mb-3"><code class="language-bash"># Create a new KMS key
curl -X POST {{ api_base }}/kms/keys \
  -H "Content-Type: application/json" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -d '{"alias": "my-key", "description": "Production key"}'

# List all keys
curl {{ api_base }}/kms/keys \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

# Rotate a key (creates new key material)
curl -X POST {{ api_base }}/kms/keys/{key-id}/rotate \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

# Disable/Enable a key
curl -X POST {{ api_base }}/kms/keys/{key-id}/disable \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

# Schedule key deletion (30-day waiting period)
curl -X DELETE "{{ api_base }}/kms/keys/{key-id}?waiting_period_days=30" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"</code></pre>

    <h3 class="h6 text-uppercase text-muted mt-4">How It Works</h3>
    <p class="small text-muted mb-0">
      <strong>Envelope Encryption:</strong> Each object is encrypted with a unique Data Encryption Key (DEK). The DEK is then encrypted (wrapped) by the master key or KMS key and stored alongside the ciphertext. On read, the DEK is unwrapped and used to decrypt the object transparently.
    </p>
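    <p class="text-muted small">A minimal, self-contained sketch of the envelope pattern using Python's <code>cryptography</code> package. This illustrates the concept only; it is not MyFSIO's actual implementation, and key handling is deliberately simplified:</p>
    <pre class="mb-0"><code class="language-python">import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

master_key = AESGCM.generate_key(bit_length=256)   # stands in for the master/KMS key

def encrypt_object(plaintext: bytes) -> dict:
    dek = AESGCM.generate_key(bit_length=256)      # unique per-object DEK
    obj_nonce, key_nonce = os.urandom(12), os.urandom(12)
    ciphertext = AESGCM(dek).encrypt(obj_nonce, plaintext, None)
    wrapped_dek = AESGCM(master_key).encrypt(key_nonce, dek, None)  # wrap the DEK
    # Ciphertext and the wrapped DEK are stored together on disk.
    return {"ct": ciphertext, "ct_nonce": obj_nonce,
            "wrapped_dek": wrapped_dek, "dek_nonce": key_nonce}

def decrypt_object(record: dict) -> bytes:
    dek = AESGCM(master_key).decrypt(record["dek_nonce"], record["wrapped_dek"], None)
    return AESGCM(dek).decrypt(record["ct_nonce"], record["ct"], None)

assert decrypt_object(encrypt_object(b"hello")) == b"hello"</code></pre>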
  </div>
</article>
<article id="troubleshooting" class="card shadow-sm docs-section">
  <div class="card-body">
    <div class="d-flex align-items-center gap-2 mb-3">
      <span class="docs-section-kicker">12</span>
      <h2 class="h4 mb-0">Troubleshooting & tips</h2>
    </div>
    <div class="table-responsive">
@@ -330,8 +687,8 @@ s3.complete_multipart_upload(
        </tr>
        <tr>
          <td>Requests hit the wrong host</td>
          <td>Proxy headers missing or <code>API_BASE_URL</code> incorrect</td>
          <td>Ensure your proxy sends <code>X-Forwarded-Host</code>/<code>Proto</code> headers, or explicitly set <code>API_BASE_URL</code> to your public domain (see the proxy sketch below).</td>
        </tr>
      </tbody>
    </table>
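    <p class="text-muted small">For background on the forwarded-header fix: in a generic Flask deployment, trusting the proxy's headers usually means Werkzeug's <code>ProxyFix</code> middleware. The sketch below is that standard pattern, offered as context rather than MyFSIO's actual wiring:</p>
    <pre class="mb-0"><code class="language-python">from flask import Flask
from werkzeug.middleware.proxy_fix import ProxyFix

app = Flask(__name__)
# Trust exactly one proxy hop for each forwarded field, so generated
# URLs (including presigned ones) use the public scheme and host.
app.wsgi_app = ProxyFix(app.wsgi_app, x_for=1, x_proto=1, x_host=1, x_prefix=1)</code></pre>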
@@ -345,11 +702,15 @@ s3.complete_multipart_upload(
<h3 class="h6 text-uppercase text-muted mb-3">On this page</h3>
<ul class="list-unstyled docs-toc mb-4">
  <li><a href="#setup">Set up & run</a></li>
  <li><a href="#background">Running in background</a></li>
  <li><a href="#auth">Authentication & IAM</a></li>
  <li><a href="#console">Console tour</a></li>
  <li><a href="#automation">Automation / CLI</a></li>
  <li><a href="#api">REST endpoints</a></li>
  <li><a href="#examples">API Examples</a></li>
  <li><a href="#replication">Site Replication</a></li>
  <li><a href="#quotas">Bucket Quotas</a></li>
  <li><a href="#encryption">Encryption</a></li>
  <li><a href="#troubleshooting">Troubleshooting</a></li>
</ul>
<div class="docs-sidebar-callouts">
@@ -4,7 +4,12 @@
<div class="page-header d-flex justify-content-between align-items-center mb-4">
  <div>
    <p class="text-uppercase text-muted small mb-1">Identity & Access Management</p>
    <h1 class="h3 mb-1 d-flex align-items-center gap-2">
      <svg xmlns="http://www.w3.org/2000/svg" width="28" height="28" fill="currentColor" class="text-primary" viewBox="0 0 16 16">
        <path d="M8 8a3 3 0 1 0 0-6 3 3 0 0 0 0 6zm2-3a2 2 0 1 1-4 0 2 2 0 0 1 4 0zm4 8c0 1-1 1-1 1H3s-1 0-1-1 1-4 6-4 6 3 6 4zm-1-.004c-.001-.246-.154-.986-.832-1.664C11.516 10.68 10.289 10 8 10c-2.29 0-3.516.68-4.168 1.332-.678.678-.83 1.418-.832 1.664h10z"/>
      </svg>
      IAM Configuration
    </h1>
  </div>
  <div class="d-flex gap-2">
    {% if not iam_locked %}
@@ -79,123 +84,188 @@
</div>
{% endif %}
<div class="card shadow-sm border-0" style="border-radius: 1rem;">
  <div class="card-header bg-transparent border-0 pt-4 pb-0 px-4 d-flex justify-content-between align-items-center">
    <div>
      <h5 class="fw-semibold d-flex align-items-center gap-2 mb-1">
        <svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" class="text-muted" viewBox="0 0 16 16">
          <path d="M15 14s1 0 1-1-1-4-5-4-5 3-5 4 1 1 1 1h8zm-7.978-1A.261.261 0 0 1 7 12.996c.001-.264.167-1.03.76-1.72C8.312 10.629 9.282 10 11 10c1.717 0 2.687.63 3.24 1.276.593.69.758 1.457.76 1.72l-.008.002a.274.274 0 0 1-.014.002H7.022zM11 7a2 2 0 1 0 0-4 2 2 0 0 0 0 4zm3-2a3 3 0 1 1-6 0 3 3 0 0 1 6 0zM6.936 9.28a5.88 5.88 0 0 0-1.23-.247A7.35 7.35 0 0 0 5 9c-4 0-5 3-5 4 0 .667.333 1 1 1h4.216A2.238 2.238 0 0 1 5 13c0-1.01.377-2.042 1.09-2.904.243-.294.526-.569.846-.816zM4.92 10A5.493 5.493 0 0 0 4 13H1c0-.26.164-1.03.76-1.724.545-.636 1.492-1.256 3.16-1.275zM1.5 5.5a3 3 0 1 1 6 0 3 3 0 0 1-6 0zm3-2a2 2 0 1 0 0 4 2 2 0 0 0 0-4z"/>
        </svg>
        Users
      </h5>
      <p class="text-muted small mb-0">{{ users|length if not iam_locked else '?' }} user{{ 's' if (users|length if not iam_locked else 0) != 1 else '' }} configured</p>
    </div>
    {% if iam_locked %}<span class="badge bg-warning bg-opacity-10 text-warning">View only</span>{% endif %}
  </div>
  {% if iam_locked %}
  <div class="card-body px-4 pb-4">
    <div class="alert alert-secondary d-flex align-items-center mb-0" role="alert">
      <svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" fill="currentColor" class="flex-shrink-0 me-2" viewBox="0 0 16 16">
        <path d="M8 1a2 2 0 0 1 2 2v4H6V3a2 2 0 0 1 2-2zm3 6V3a3 3 0 0 0-6 0v4a2 2 0 0 0-2 2v5a2 2 0 0 0 2 2h6a2 2 0 0 0 2-2V9a2 2 0 0 0-2-2z"/>
      </svg>
      <div>Sign in with an administrator account to list or edit IAM users.</div>
    </div>
  </div>
  {% else %}
  <div class="card-body px-4 pb-4">
    {% if users %}
    <div class="table-responsive">
      <table class="table table-hover align-middle mb-0">
        <thead class="table-light">
          <tr>
            <th scope="col">User</th>
            <th scope="col">Policies</th>
            <th scope="col" class="text-end">Actions</th>
          </tr>
        </thead>
        <tbody>
          {% for user in users %}
          <tr>
            <td>
              <div class="d-flex align-items-center gap-3">
                <div class="user-avatar">
                  <svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" fill="currentColor" viewBox="0 0 16 16">
                    <path d="M8 8a3 3 0 1 0 0-6 3 3 0 0 0 0 6zm2-3a2 2 0 1 1-4 0 2 2 0 0 1 4 0zm4 8c0 1-1 1-1 1H3s-1 0-1-1 1-4 6-4 6 3 6 4zm-1-.004c-.001-.246-.154-.986-.832-1.664C11.516 10.68 10.289 10 8 10c-2.29 0-3.516.68-4.168 1.332-.678.678-.83 1.418-.832 1.664h10z"/>
                  </svg>
                </div>
                <div>
                  <div class="fw-medium">{{ user.display_name }}</div>
                  <code class="small text-muted">{{ user.access_key }}</code>
                </div>
              </div>
            </td>
            <td>
              <div class="d-flex flex-wrap gap-1">
                {% for policy in user.policies %}
                <span class="badge bg-primary bg-opacity-10 text-primary">
                  {{ policy.bucket }}
                  {% if '*' in policy.actions %}
                  <span class="opacity-75">(full)</span>
                  {% else %}
                  <span class="opacity-75">({{ policy.actions|length }})</span>
                  {% endif %}
                </span>
                {% else %}
                <span class="badge bg-secondary bg-opacity-10 text-secondary">No policies</span>
                {% endfor %}
              </div>
            </td>
            <td class="text-end">
              <div class="btn-group btn-group-sm" role="group">
                <button class="btn btn-outline-primary" type="button" data-rotate-user="{{ user.access_key }}" title="Rotate Secret">
                  <svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" viewBox="0 0 16 16">
                    <path d="M11.534 7h3.932a.25.25 0 0 1 .192.41l-1.966 2.36a.25.25 0 0 1-.384 0l-1.966-2.36a.25.25 0 0 1 .192-.41zm-11 2h3.932a.25.25 0 0 0 .192-.41L2.692 6.23a.25.25 0 0 0-.384 0L.342 8.59A.25.25 0 0 0 .534 9z"/>
                    <path fill-rule="evenodd" d="M8 3c-1.552 0-2.94.707-3.857 1.818a.5.5 0 1 1-.771-.636A6.002 6.002 0 0 1 13.917 7H12.9A5.002 5.002 0 0 0 8 3zM3.1 9a5.002 5.002 0 0 0 8.757 2.182.5.5 0 1 1 .771.636A6.002 6.002 0 0 1 2.083 9H3.1z"/>
                  </svg>
                </button>
                <button class="btn btn-outline-secondary" type="button" data-edit-user="{{ user.access_key }}" data-display-name="{{ user.display_name }}" title="Edit User">
                  <svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" viewBox="0 0 16 16">
                    <path d="M12.146.146a.5.5 0 0 1 .708 0l3 3a.5.5 0 0 1 0 .708l-10 10a.5.5 0 0 1-.168.11l-5 2a.5.5 0 0 1-.65-.65l2-5a.5.5 0 0 1 .11-.168l10-10zM11.207 2.5 13.5 4.793 14.793 3.5 12.5 1.207 11.207 2.5zm1.586 3L10.5 3.207 4 9.707V10h.5a.5.5 0 0 1 .5.5v.5h.5a.5.5 0 0 1 .5.5v.5h.293l6.5-6.5z"/>
                  </svg>
                </button>
                <button class="btn btn-outline-secondary" type="button" data-policy-editor data-access-key="{{ user.access_key }}" title="Edit Policies">
                  <svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" viewBox="0 0 16 16">
                    <path d="M8 4.754a3.246 3.246 0 1 0 0 6.492 3.246 3.246 0 0 0 0-6.492zM5.754 8a2.246 2.246 0 1 1 4.492 0 2.246 2.246 0 0 1-4.492 0z"/>
                    <path d="M9.796 1.343c-.527-1.79-3.065-1.79-3.592 0l-.094.319a.873.873 0 0 1-1.255.52l-.292-.16c-1.64-.892-3.433.902-2.54 2.541l.159.292a.873.873 0 0 1-.52 1.255l-.319.094c-1.79.527-1.79 3.065 0 3.592l.319.094a.873.873 0 0 1 .52 1.255l-.16.292c-.892 1.64.901 3.434 2.541 2.54l.292-.159a.873.873 0 0 1 1.255.52l.094.319c.527 1.79 3.065 1.79 3.592 0l.094-.319a.873.873 0 0 1 1.255-.52l.292.16c1.64.893 3.434-.902 2.54-2.541l-.159-.292a.873.873 0 0 1 .52-1.255l.319-.094c1.79-.527 1.79-3.065 0-3.592l-.319-.094a.873.873 0 0 1-.52-1.255l.16-.292c.893-1.64-.902-3.433-2.541-2.54l-.292.159a.873.873 0 0 1-1.255-.52l-.094-.319z"/>
                  </svg>
                </button>
                <button class="btn btn-outline-danger" type="button" data-delete-user="{{ user.access_key }}" title="Delete User">
                  <svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" viewBox="0 0 16 16">
                    <path d="M5.5 5.5a.5.5 0 0 1 .5.5v6a.5.5 0 0 1-1 0v-6a.5.5 0 0 1 .5-.5zm2.5 0a.5.5 0 0 1 .5.5v6a.5.5 0 0 1-1 0v-6a.5.5 0 0 1 .5-.5zm3 .5v6a.5.5 0 0 1-1 0v-6a.5.5 0 0 1 1 0z"/>
                    <path fill-rule="evenodd" d="M14.5 3a1 1 0 0 1-1 1H13v9a2 2 0 0 1-2 2H5a2 2 0 0 1-2-2V4h-.5a1 1 0 0 1-1-1V2a1 1 0 0 1 1-1H6a1 1 0 0 1 1-1h2a1 1 0 0 1 1 1h3.5a1 1 0 0 1 1 1v1zM4.118 4 4 4.059V13a1 1 0 0 0 1 1h6a1 1 0 0 0 1-1V4.059L11.882 4H4.118zM2.5 3V2h11v1h-11z"/>
                  </svg>
                </button>
              </div>
            </td>
          </tr>
          {% endfor %}
        </tbody>
      </table>
    </div>
    {% else %}
    <div class="empty-state text-center py-5">
      <div class="empty-state-icon mx-auto mb-3">
        <svg xmlns="http://www.w3.org/2000/svg" width="48" height="48" fill="currentColor" viewBox="0 0 16 16">
          <path d="M15 14s1 0 1-1-1-4-5-4-5 3-5 4 1 1 1 1h8zm-7.978-1A.261.261 0 0 1 7 12.996c.001-.264.167-1.03.76-1.72C8.312 10.629 9.282 10 11 10c1.717 0 2.687.63 3.24 1.276.593.69.758 1.457.76 1.72l-.008.002a.274.274 0 0 1-.014.002H7.022zM11 7a2 2 0 1 0 0-4 2 2 0 0 0 0 4zm3-2a3 3 0 1 1-6 0 3 3 0 0 1 6 0zM6.936 9.28a5.88 5.88 0 0 0-1.23-.247A7.35 7.35 0 0 0 5 9c-4 0-5 3-5 4 0 .667.333 1 1 1h4.216A2.238 2.238 0 0 1 5 13c0-1.01.377-2.042 1.09-2.904.243-.294.526-.569.846-.816zM4.92 10A5.493 5.493 0 0 0 4 13H1c0-.26.164-1.03.76-1.724.545-.636 1.492-1.256 3.16-1.275zM1.5 5.5a3 3 0 1 1 6 0 3 3 0 0 1-6 0zm3-2a2 2 0 1 0 0 4 2 2 0 0 0 0-4z"/>
        </svg>
      </div>
      <h5 class="fw-semibold mb-2">No users yet</h5>
      <p class="text-muted mb-3">Create your first IAM user to manage access to your storage.</p>
      <button class="btn btn-primary" data-bs-toggle="modal" data-bs-target="#createUserModal">
        <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="me-1" viewBox="0 0 16 16">
          <path fill-rule="evenodd" d="M8 2a.5.5 0 0 1 .5.5v5h5a.5.5 0 0 1 0 1h-5v5a.5.5 0 0 1-1 0v-5h-5a.5.5 0 0 1 0-1h5v-5A.5.5 0 0 1 8 2Z"/>
        </svg>
        Create First User
      </button>
    </div>
    {% endif %}
  </div>
  {% endif %}
</div>
<!-- Create User Modal -->
<div class="modal fade" id="createUserModal" tabindex="-1" aria-hidden="true">
  <div class="modal-dialog modal-dialog-centered">
    <div class="modal-content">
      <div class="modal-header border-0 pb-0">
        <h1 class="modal-title fs-5 fw-semibold">
          <svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" class="text-primary" viewBox="0 0 16 16">
            <path d="M1 14s-1 0-1-1 1-4 6-4 6 3 6 4-1 1-1 1H1zm5-6a3 3 0 1 0 0-6 3 3 0 0 0 0 6z"/>
            <path fill-rule="evenodd" d="M13.5 5a.5.5 0 0 1 .5.5V7h1.5a.5.5 0 0 1 0 1H14v1.5a.5.5 0 0 1-1 0V8h-1.5a.5.5 0 0 1 0-1H13V5.5a.5.5 0 0 1 .5-.5z"/>
          </svg>
          Create IAM User
        </h1>
        <button type="button" class="btn-close" data-bs-dismiss="modal" aria-label="Close"></button>
      </div>
      <form method="post" action="{{ url_for('ui.create_iam_user') }}">
        <input type="hidden" name="csrf_token" value="{{ csrf_token() }}" />
        <div class="modal-body">
          <div class="mb-3">
            <label class="form-label fw-medium">Display Name</label>
            <input class="form-control" type="text" name="display_name" placeholder="Analytics Team" required autofocus />
          </div>
          <div class="mb-3">
            <label class="form-label fw-medium">Initial Policies (JSON)</label>
            <textarea class="form-control font-monospace" name="policies" id="createUserPolicies" rows="6" spellcheck="false" placeholder='[
  {"bucket": "*", "actions": ["list", "read"]}
]'></textarea>
            <div class="form-text">Leave blank to grant full control (for bootstrap admins only).</div>
          </div>
          <div class="d-flex flex-wrap gap-2">
            <span class="text-muted small me-2 align-self-center">Quick templates:</span>
            <button class="btn btn-outline-secondary btn-sm" type="button" data-create-policy-template="full">Full Control</button>
            <button class="btn btn-outline-secondary btn-sm" type="button" data-create-policy-template="readonly">Read-Only</button>
            <button class="btn btn-outline-secondary btn-sm" type="button" data-create-policy-template="writer">Read + Write</button>
          </div>
        </div>
        <div class="modal-footer">
          <button type="button" class="btn btn-outline-secondary" data-bs-dismiss="modal">Cancel</button>
          <button class="btn btn-primary" type="submit">
            <svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-1" viewBox="0 0 16 16">
              <path fill-rule="evenodd" d="M8 2a.5.5 0 0 1 .5.5v5h5a.5.5 0 0 1 0 1h-5v5a.5.5 0 0 1-1 0v-5h-5a.5.5 0 0 1 0-1h5v-5A.5.5 0 0 1 8 2Z"/>
            </svg>
            Create User
          </button>
        </div>
      </form>
    </div>
  </div>
</div>

<!-- Policy Editor Modal -->
<div class="modal fade" id="policyEditorModal" tabindex="-1" aria-hidden="true">
  <div class="modal-dialog modal-lg modal-dialog-centered">
    <div class="modal-content">
      <div class="modal-header border-0 pb-0">
        <h1 class="modal-title fs-5 fw-semibold">
          <svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" class="text-primary" viewBox="0 0 16 16">
            <path d="M8 4.754a3.246 3.246 0 1 0 0 6.492 3.246 3.246 0 0 0 0-6.492zM5.754 8a2.246 2.246 0 1 1 4.492 0 2.246 2.246 0 0 1-4.492 0z"/>
            <path d="M9.796 1.343c-.527-1.79-3.065-1.79-3.592 0l-.094.319a.873.873 0 0 1-1.255.52l-.292-.16c-1.64-.892-3.433.902-2.54 2.541l.159.292a.873.873 0 0 1-.52 1.255l-.319.094c-1.79.527-1.79 3.065 0 3.592l.319.094a.873.873 0 0 1 .52 1.255l-.16.292c-.892 1.64.901 3.434 2.541 2.54l.292-.159a.873.873 0 0 1 1.255.52l.094.319c.527 1.79 3.065 1.79 3.592 0l.094-.319a.873.873 0 0 1 1.255-.52l.292.16c1.64.893 3.434-.902 2.54-2.541l-.159-.292a.873.873 0 0 1 .52-1.255l.319-.094c1.79-.527 1.79-3.065 0-3.592l-.319-.094a.873.873 0 0 1-.52-1.255l.16-.292c.893-1.64-.902-3.433-2.541-2.54l-.292.159a.873.873 0 0 1-1.255-.52l-.094-.319zm-2.633.283c.246-.835 1.428-.835 1.674 0l.094.319a1.873 1.873 0 0 0 2.693 1.115l.291-.16c.764-.415 1.6.42 1.184 1.185l-.159.292a1.873 1.873 0 0 0 1.116 2.692l.318.094c.835.246.835 1.428 0 1.674l-.319.094a1.873 1.873 0 0 0-1.115 2.693l.16.291c.415.764-.42 1.6-1.185 1.184l-.291-.159a1.873 1.873 0 0 0-2.693 1.116l-.094.318c-.246.835-1.428.835-1.674 0l-.094-.319a1.873 1.873 0 0 0-2.692-1.115l-.292.16c-.764.415-1.6-.42-1.184-1.185l.159-.291A1.873 1.873 0 0 0 1.945 8.93l-.319-.094c-.835-.246-.835-1.428 0-1.674l.319-.094A1.873 1.873 0 0 0 3.06 4.377l-.16-.292c-.415-.764.42-1.6 1.185-1.184l.292.159a1.873 1.873 0 0 0 2.692-1.115l.094-.319z"/>
          </svg>
          Edit Policies
        </h1>
        <button type="button" class="btn-close" data-bs-dismiss="modal" aria-label="Close"></button>
      </div>
      <div class="modal-body">
        <p class="text-muted small mb-3">Editing policies for <code id="policyEditorUserLabel"></code></p>
        <form
          id="policyEditorForm"
          method="post"
@@ -206,11 +276,12 @@
          <input type="hidden" id="policyEditorUser" name="access_key" />

          <div>
            <label class="form-label fw-medium">Inline Policies (JSON array)</label>
            <textarea class="form-control font-monospace" id="policyEditorDocument" name="policies" rows="12" spellcheck="false"></textarea>
            <div class="form-text">Use standard MyFSIO policy format. Validation happens server-side.</div>
          </div>
          <div class="d-flex flex-wrap gap-2">
            <span class="text-muted small me-2 align-self-center">Quick templates:</span>
            <button class="btn btn-outline-secondary btn-sm" type="button" data-policy-template="full">Full Control</button>
            <button class="btn btn-outline-secondary btn-sm" type="button" data-policy-template="readonly">Read-Only</button>
            <button class="btn btn-outline-secondary btn-sm" type="button" data-policy-template="writer">Read + Write</button>
@@ -219,91 +290,145 @@
      </div>
      <div class="modal-footer">
        <button type="button" class="btn btn-outline-secondary" data-bs-dismiss="modal">Cancel</button>
        <button class="btn btn-primary" type="submit" form="policyEditorForm">
          <svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-1" viewBox="0 0 16 16">
            <path d="M10.97 4.97a.75.75 0 0 1 1.07 1.05l-3.99 4.99a.75.75 0 0 1-1.08.02L4.324 8.384a.75.75 0 1 1 1.06-1.06l2.094 2.093 3.473-4.425a.267.267 0 0 1 .02-.022z"/>
          </svg>
          Save Policies
        </button>
      </div>
    </div>
  </div>
</div>

<!-- Edit User Modal -->
<div class="modal fade" id="editUserModal" tabindex="-1" aria-hidden="true">
  <div class="modal-dialog modal-dialog-centered">
    <div class="modal-content">
      <div class="modal-header border-0 pb-0">
        <h1 class="modal-title fs-5 fw-semibold">
          <svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" class="text-primary" viewBox="0 0 16 16">
            <path d="M12.146.146a.5.5 0 0 1 .708 0l3 3a.5.5 0 0 1 0 .708l-10 10a.5.5 0 0 1-.168.11l-5 2a.5.5 0 0 1-.65-.65l2-5a.5.5 0 0 1 .11-.168l10-10zM11.207 2.5 13.5 4.793 14.793 3.5 12.5 1.207 11.207 2.5zm1.586 3L10.5 3.207 4 9.707V10h.5a.5.5 0 0 1 .5.5v.5h.5a.5.5 0 0 1 .5.5v.5h.293l6.5-6.5zm-9.761 5.175-.106.106-1.528 3.821 3.821-1.528.106-.106A.5.5 0 0 1 5 12.5V12h-.5a.5.5 0 0 1-.5-.5V11h-.5a.5.5 0 0 1-.468-.325z"/>
          </svg>
          Edit User
        </h1>
        <button type="button" class="btn-close" data-bs-dismiss="modal" aria-label="Close"></button>
      </div>
      <form method="post" id="editUserForm">
        <input type="hidden" name="csrf_token" value="{{ csrf_token() }}" />
        <div class="modal-body">
          <div class="mb-3">
            <label class="form-label fw-medium">Display Name</label>
            <input class="form-control" type="text" name="display_name" id="editUserDisplayName" required />
          </div>
        </div>
        <div class="modal-footer">
          <button type="button" class="btn btn-outline-secondary" data-bs-dismiss="modal">Cancel</button>
          <button class="btn btn-primary" type="submit">
            <svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-1" viewBox="0 0 16 16">
              <path d="M10.97 4.97a.75.75 0 0 1 1.07 1.05l-3.99 4.99a.75.75 0 0 1-1.08.02L4.324 8.384a.75.75 0 1 1 1.06-1.06l2.094 2.093 3.473-4.425a.267.267 0 0 1 .02-.022z"/>
            </svg>
            Save Changes
          </button>
        </div>
      </form>
    </div>
  </div>
</div>

<!-- Delete User Modal -->
<div class="modal fade" id="deleteUserModal" tabindex="-1" aria-hidden="true">
  <div class="modal-dialog modal-dialog-centered">
    <div class="modal-content">
      <div class="modal-header border-0 pb-0">
        <h1 class="modal-title fs-5 fw-semibold">
          <svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" class="text-danger" viewBox="0 0 16 16">
            <path d="M1 14s-1 0-1-1 1-4 6-4 6 3 6 4-1 1-1 1H1zm5-6a3 3 0 1 0 0-6 3 3 0 0 0 0 6z"/>
            <path fill-rule="evenodd" d="M11 1.5v1h5v1h-1v9a2 2 0 0 1-2 2H3a2 2 0 0 1-2-2v-9H0v-1h5v-1a1 1 0 0 1 1-1h4a1 1 0 0 1 1 1zM4.118 4 4 4.059V13a1 1 0 0 0 1 1h6a1 1 0 0 0 1-1V4.059L11.882 4H4.118z"/>
          </svg>
          Delete User
        </h1>
        <button type="button" class="btn-close" data-bs-dismiss="modal" aria-label="Close"></button>
      </div>
      <div class="modal-body">
        <p>Are you sure you want to delete user <strong id="deleteUserLabel"></strong>?</p>
        <div id="deleteSelfWarning" class="alert alert-danger d-flex align-items-start d-none">
          <svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" fill="currentColor" class="flex-shrink-0 me-2 mt-1" viewBox="0 0 16 16">
            <path d="M7.938 2.016A.13.13 0 0 1 8.002 2a.13.13 0 0 1 .063.016.146.146 0 0 1 .054.057l6.857 11.667c.036.06.035.124.002.183a.163.163 0 0 1-.054.06.116.116 0 0 1-.066.017H1.146a.115.115 0 0 1-.066-.017.163.163 0 0 1-.054-.06.176.176 0 0 1 .002-.183L7.884 2.073a.147.147 0 0 1 .054-.057zm1.044-.45a1.13 1.13 0 0 0-1.96 0L.165 13.233c-.457.778.091 1.767.98 1.767h13.713c.889 0 1.438-.99.98-1.767L8.982 1.566z"/>
            <path d="M7.002 12a1 1 0 1 1 2 0 1 1 0 0 1-2 0zM7.1 5.995a.905.905 0 1 1 1.8 0l-.35 3.507a.552.552 0 0 1-1.1 0L7.1 5.995z"/>
          </svg>
          <div>
            <strong>Warning:</strong> You are deleting your own account. You will be logged out immediately.
          </div>
        </div>
        <p class="text-danger small mb-0">This action cannot be undone.</p>
      </div>
      <div class="modal-footer">
        <button type="button" class="btn btn-outline-secondary" data-bs-dismiss="modal">Cancel</button>
        <form method="post" id="deleteUserForm">
|
<form method="post" id="deleteUserForm">
|
||||||
<input type="hidden" name="csrf_token" value="{{ csrf_token() }}" />
|
<input type="hidden" name="csrf_token" value="{{ csrf_token() }}" />
|
||||||
<button class="btn btn-danger" type="submit">Delete User</button>
|
<button class="btn btn-danger" type="submit">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-1" viewBox="0 0 16 16">
|
||||||
|
<path d="M5.5 5.5A.5.5 0 0 1 6 6v6a.5.5 0 0 1-1 0V6a.5.5 0 0 1 .5-.5zm2.5 0a.5.5 0 0 1 .5.5v6a.5.5 0 0 1-1 0V6a.5.5 0 0 1 .5-.5zm3 .5a.5.5 0 0 0-1 0v6a.5.5 0 0 0 1 0V6z"/>
|
||||||
|
<path fill-rule="evenodd" d="M14.5 3a1 1 0 0 1-1 1H13v9a2 2 0 0 1-2 2H5a2 2 0 0 1-2-2V4h-.5a1 1 0 0 1-1-1V2a1 1 0 0 1 1-1H6a1 1 0 0 1 1-1h2a1 1 0 0 1 1 1h3.5a1 1 0 0 1 1 1v1zM4.118 4 4 4.059V13a1 1 0 0 0 1 1h6a1 1 0 0 0 1-1V4.059L11.882 4H4.118zM2.5 3V2h11v1h-11z"/>
|
||||||
|
</svg>
|
||||||
|
Delete User
|
||||||
|
</button>
|
||||||
</form>
|
</form>
|
||||||
</div>
|
</div>
|
||||||
</div>
|
</div>
|
||||||
</div>
|
</div>
|
||||||
</div>
|
</div>
|
||||||
|
|
||||||
<!-- Rotate Secret Modal -->
|
|
||||||
<div class="modal fade" id="rotateSecretModal" tabindex="-1" aria-hidden="true">
|
<div class="modal fade" id="rotateSecretModal" tabindex="-1" aria-hidden="true">
|
||||||
<div class="modal-dialog modal-dialog-centered">
|
<div class="modal-dialog modal-dialog-centered">
|
||||||
<div class="modal-content">
|
<div class="modal-content">
|
||||||
<div class="modal-header">
|
<div class="modal-header border-0 pb-0">
|
||||||
<h1 class="modal-title fs-5">Rotate Secret Key</h1>
|
<h1 class="modal-title fs-5 fw-semibold">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" class="text-warning" viewBox="0 0 16 16">
|
||||||
|
<path fill-rule="evenodd" d="M8 3a5 5 0 1 0 4.546 2.914.5.5 0 0 1 .908-.417A6 6 0 1 1 8 2v1z"/>
|
||||||
|
<path d="M8 4.466V.534a.25.25 0 0 1 .41-.192l2.36 1.966c.12.1.12.284 0 .384L8.41 4.658A.25.25 0 0 1 8 4.466z"/>
|
||||||
|
</svg>
|
||||||
|
Rotate Secret Key
|
||||||
|
</h1>
|
||||||
<button type="button" class="btn-close" data-bs-dismiss="modal" aria-label="Close"></button>
|
<button type="button" class="btn-close" data-bs-dismiss="modal" aria-label="Close"></button>
|
||||||
</div>
|
</div>
|
||||||
<div class="modal-body" id="rotateSecretConfirm">
|
<div class="modal-body" id="rotateSecretConfirm">
|
||||||
<p>Are you sure you want to rotate the secret key for <strong id="rotateUserLabel"></strong>?</p>
|
<p>Rotate the secret key for <strong id="rotateUserLabel"></strong>?</p>
|
||||||
<div id="rotateSelfWarning" class="alert alert-warning d-none">
|
<div class="alert alert-warning d-flex align-items-start mb-0">
|
||||||
<strong>Warning:</strong> You are rotating your own secret key. You will need to sign in again with the new key.
|
<svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" fill="currentColor" class="flex-shrink-0 me-2 mt-1" viewBox="0 0 16 16">
|
||||||
</div>
|
<path d="M7.938 2.016A.13.13 0 0 1 8.002 2a.13.13 0 0 1 .063.016.146.146 0 0 1 .054.057l6.857 11.667c.036.06.035.124.002.183a.163.163 0 0 1-.054.06.116.116 0 0 1-.066.017H1.146a.115.115 0 0 1-.066-.017.163.163 0 0 1-.054-.06.176.176 0 0 1 .002-.183L7.884 2.073a.147.147 0 0 1 .054-.057zm1.044-.45a1.13 1.13 0 0 0-1.96 0L.165 13.233c-.457.778.091 1.767.98 1.767h13.713c.889 0 1.438-.99.98-1.767L8.982 1.566z"/>
|
||||||
<div class="alert alert-warning mb-0">
|
<path d="M7.002 12a1 1 0 1 1 2 0 1 1 0 0 1-2 0zM7.1 5.995a.905.905 0 1 1 1.8 0l-.35 3.507a.552.552 0 0 1-1.1 0L7.1 5.995z"/>
|
||||||
The old secret key will stop working immediately. Any applications using it must be updated.
|
</svg>
|
||||||
|
<div>The old secret key will stop working immediately. Update any applications using it.</div>
|
||||||
</div>
|
</div>
|
||||||
</div>
|
</div>
|
||||||
<div class="modal-body d-none" id="rotateSecretResult">
|
<div class="modal-body d-none" id="rotateSecretResult">
|
||||||
<p class="mb-2">Secret rotated successfully!</p>
|
<div class="alert alert-success d-flex align-items-center mb-3">
|
||||||
<div class="input-group mb-3">
|
<svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" fill="currentColor" class="flex-shrink-0 me-2" viewBox="0 0 16 16">
|
||||||
<input type="text" class="form-control font-monospace" id="newSecretKey" readonly>
|
<path d="M16 8A8 8 0 1 1 0 8a8 8 0 0 1 16 0zm-3.97-3.03a.75.75 0 0 0-1.08.022L7.477 9.417 5.384 7.323a.75.75 0 0 0-1.06 1.06L6.97 11.03a.75.75 0 0 0 1.079-.02l3.992-4.99a.75.75 0 0 0-.01-1.05z"/>
|
||||||
<button class="btn btn-outline-primary" type="button" id="copyNewSecret">Copy</button>
|
</svg>
|
||||||
|
<div>Secret rotated successfully!</div>
|
||||||
</div>
|
</div>
|
||||||
<p class="small text-muted mb-0">Copy this now. It will not be shown again.</p>
|
<label class="form-label fw-medium">New Secret Key</label>
|
||||||
|
<div class="input-group">
|
||||||
|
<input type="text" class="form-control font-monospace bg-body-tertiary" id="newSecretKey" readonly>
|
||||||
|
<button class="btn btn-outline-primary" type="button" id="copyNewSecret">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" viewBox="0 0 16 16">
|
||||||
|
<path d="M4 1.5H3a2 2 0 0 0-2 2V14a2 2 0 0 0 2 2h10a2 2 0 0 0 2-2V3.5a2 2 0 0 0-2-2h-1v1h1a1 1 0 0 1 1 1V14a1 1 0 0 1-1 1H3a1 1 0 0 1-1-1V3.5a1 1 0 0 1 1-1h1v-1z"/>
|
||||||
|
<path d="M9.5 1a.5.5 0 0 1 .5.5v1a.5.5 0 0 1-.5.5h-3a.5.5 0 0 1-.5-.5v-1a.5.5 0 0 1 .5-.5h3zm-3-1A1.5 1.5 0 0 0 5 1.5v1A1.5 1.5 0 0 0 6.5 4h3A1.5 1.5 0 0 0 11 2.5v-1A1.5 1.5 0 0 0 9.5 0h-3z"/>
|
||||||
|
</svg>
|
||||||
|
</button>
|
||||||
|
</div>
|
||||||
|
<p class="form-text mb-0">Copy this now. It will not be shown again.</p>
|
||||||
</div>
|
</div>
|
||||||
<div class="modal-footer">
|
<div class="modal-footer">
|
||||||
<button type="button" class="btn btn-outline-secondary" data-bs-dismiss="modal" id="rotateCancelBtn">Cancel</button>
|
<button type="button" class="btn btn-outline-secondary" data-bs-dismiss="modal" id="rotateCancelBtn">Cancel</button>
|
||||||
<button type="button" class="btn btn-primary" id="confirmRotateBtn">Rotate Key</button>
|
<button type="button" class="btn btn-warning" id="confirmRotateBtn">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-1" viewBox="0 0 16 16">
|
||||||
|
<path fill-rule="evenodd" d="M8 3a5 5 0 1 0 4.546 2.914.5.5 0 0 1 .908-.417A6 6 0 1 1 8 2v1z"/>
|
||||||
|
<path d="M8 4.466V.534a.25.25 0 0 1 .41-.192l2.36 1.966c.12.1.12.284 0 .384L8.41 4.658A.25.25 0 0 1 8 4.466z"/>
|
||||||
|
</svg>
|
||||||
|
Rotate Key
|
||||||
|
</button>
|
||||||
<button type="button" class="btn btn-primary d-none" data-bs-dismiss="modal" id="rotateDoneBtn">Done</button>
|
<button type="button" class="btn btn-primary d-none" data-bs-dismiss="modal" id="rotateDoneBtn">Done</button>
|
||||||
</div>
|
</div>
|
||||||
</div>
|
</div>
|
||||||
@@ -357,7 +482,6 @@
 const iamUsersData = document.getElementById('iamUsersJson');
 const users = iamUsersData ? JSON.parse(iamUsersData.textContent || '[]') : [];
 
-// Policy Editor Logic
 const policyModalEl = document.getElementById('policyEditorModal');
 const policyModal = new bootstrap.Modal(policyModalEl);
 const userLabelEl = document.getElementById('policyEditorUserLabel');
@@ -379,7 +503,7 @@
   full: [
     {
       bucket: '*',
-      actions: ['list', 'read', 'write', 'delete', 'share', 'policy', 'iam:list_users', 'iam:*'],
+      actions: ['list', 'read', 'write', 'delete', 'share', 'policy', 'replication', 'iam:list_users', 'iam:*'],
     },
   ],
   readonly: [
@@ -404,6 +528,39 @@
   button.addEventListener('click', () => applyTemplate(button.dataset.policyTemplate));
 });
 
+const createUserPoliciesEl = document.getElementById('createUserPolicies');
+const createTemplateButtons = document.querySelectorAll('[data-create-policy-template]');
+
+const applyCreateTemplate = (name) => {
+  const templates = {
+    full: [
+      {
+        bucket: '*',
+        actions: ['list', 'read', 'write', 'delete', 'share', 'policy', 'replication', 'iam:list_users', 'iam:*'],
+      },
+    ],
+    readonly: [
+      {
+        bucket: '*',
+        actions: ['list', 'read'],
+      },
+    ],
+    writer: [
+      {
+        bucket: '*',
+        actions: ['list', 'read', 'write'],
+      },
+    ],
+  };
+  if (templates[name] && createUserPoliciesEl) {
+    createUserPoliciesEl.value = JSON.stringify(templates[name], null, 2);
+  }
+};
+
+createTemplateButtons.forEach((button) => {
+  button.addEventListener('click', () => applyCreateTemplate(button.dataset.createPolicyTemplate));
+});
+
 formEl?.addEventListener('submit', (event) => {
   const key = userInputEl.value;
   if (!key) {
@@ -427,7 +584,6 @@
   });
 });
 
-// Edit User Logic
 const editUserModal = new bootstrap.Modal(document.getElementById('editUserModal'));
 const editUserForm = document.getElementById('editUserForm');
 const editUserDisplayName = document.getElementById('editUserDisplayName');
@@ -442,7 +598,6 @@
   });
 });
 
-// Delete User Logic
 const deleteUserModal = new bootstrap.Modal(document.getElementById('deleteUserModal'));
 const deleteUserForm = document.getElementById('deleteUserForm');
 const deleteUserLabel = document.getElementById('deleteUserLabel');
@@ -464,7 +619,6 @@
   });
 });
 
-// Rotate Secret Logic
 const rotateSecretModal = new bootstrap.Modal(document.getElementById('rotateSecretModal'));
 const rotateUserLabel = document.getElementById('rotateUserLabel');
 const confirmRotateBtn = document.getElementById('confirmRotateBtn');
@@ -474,7 +628,6 @@
 const rotateSecretResult = document.getElementById('rotateSecretResult');
 const newSecretKeyInput = document.getElementById('newSecretKey');
 const copyNewSecretBtn = document.getElementById('copyNewSecret');
-const rotateSelfWarning = document.getElementById('rotateSelfWarning');
 let currentRotateKey = null;
 
 document.querySelectorAll('[data-rotate-user]').forEach(btn => {
@@ -482,13 +635,6 @@
   currentRotateKey = btn.dataset.rotateUser;
   rotateUserLabel.textContent = currentRotateKey;
 
-  if (currentRotateKey === currentUserKey) {
-    rotateSelfWarning.classList.remove('d-none');
-  } else {
-    rotateSelfWarning.classList.add('d-none');
-  }
-
-  // Reset Modal State
   rotateSecretConfirm.classList.remove('d-none');
   rotateSecretResult.classList.add('d-none');
   confirmRotateBtn.classList.remove('d-none');
@@ -523,7 +669,6 @@
   const data = await response.json();
   newSecretKeyInput.value = data.secret_key;
 
-  // Show Result
   rotateSecretConfirm.classList.add('d-none');
   rotateSecretResult.classList.remove('d-none');
   confirmRotateBtn.classList.add('d-none');
@@ -531,7 +676,9 @@
   rotateDoneBtn.classList.remove('d-none');
 
 } catch (err) {
-  alert(err.message);
+  if (window.showToast) {
+    window.showToast(err.message, 'Error', 'danger');
+  }
   rotateSecretModal.hide();
 } finally {
   confirmRotateBtn.disabled = false;

@@ -1,29 +1,102 @@
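The rotate handler above reads `data.secret_key` from a JSON response and shows it exactly once. A minimal sketch of a server endpoint that would satisfy that contract; the route path and the persistence step are assumptions for illustration, not MyFSIO's actual implementation:

```python
# Hypothetical Flask endpoint matching the client contract above:
# the fetch() handler expects {"secret_key": "..."} in the response.
import secrets
from flask import Blueprint, jsonify

bp = Blueprint("iam_api", __name__)

@bp.post("/iam/api/users/<access_key>/rotate")  # assumed route
def rotate_secret(access_key: str):
    new_secret = secrets.token_urlsafe(32)
    # Persisting the new secret (and invalidating the old one) is
    # elided here; the real handler must do both atomically.
    return jsonify({"secret_key": new_secret})
```

Returning the plaintext secret only in this response matches the UI's "Copy this now. It will not be shown again." behavior.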
 {% extends "base.html" %}
 {% block content %}
-<div class="row align-items-center mt-5 g-4">
-  <div class="col-lg-6">
-    <h1 class="display-6 mb-3">Welcome to <span class="text-primary">MyFSIO</span></h1>
-    <p class="lead text-muted">A developer-friendly object storage solution for prototyping and local development.</p>
-    <p class="text-muted mb-0">Need help getting started? Review the project README and docs for bootstrap credentials, IAM walkthroughs, and bucket policy samples.</p>
+<div class="row align-items-center justify-content-center min-vh-75 g-5">
+  <div class="col-lg-5 d-none d-lg-block">
+    <div class="text-center mb-4">
+      <div class="position-relative d-inline-block mb-4">
+        <div class="login-hero-icon">
+          <svg xmlns="http://www.w3.org/2000/svg" width="64" height="64" fill="currentColor" class="bi bi-cloud-arrow-up" viewBox="0 0 16 16">
+            <path fill-rule="evenodd" d="M7.646 5.146a.5.5 0 0 1 .708 0l2 2a.5.5 0 0 1-.708.708L8.5 6.707V10.5a.5.5 0 0 1-1 0V6.707L6.354 7.854a.5.5 0 1 1-.708-.708l2-2z"/>
+            <path d="M4.406 3.342A5.53 5.53 0 0 1 8 2c2.69 0 4.923 2 5.166 4.579C14.758 6.804 16 8.137 16 9.773 16 11.569 14.502 13 12.687 13H3.781C1.708 13 0 11.366 0 9.318c0-1.763 1.266-3.223 2.942-3.593.143-.863.698-1.723 1.464-2.383z"/>
+          </svg>
+        </div>
+      </div>
+      <h1 class="display-5 fw-bold mb-3">Welcome to <span class="text-gradient">MyFSIO</span></h1>
+      <p class="lead text-muted mb-4">A developer-friendly object storage solution for prototyping and local development.</p>
+      <div class="d-flex justify-content-center gap-4 text-muted">
+        <div class="text-center">
+          <div class="h4 fw-bold text-gradient mb-1">S3</div>
+          <small>Compatible</small>
+        </div>
+        <div class="vr"></div>
+        <div class="text-center">
+          <div class="h4 fw-bold text-gradient mb-1">Fast</div>
+          <small>Local Storage</small>
+        </div>
+        <div class="vr"></div>
+        <div class="text-center">
+          <div class="h4 fw-bold text-gradient mb-1">Secure</div>
+          <small>IAM Support</small>
+        </div>
+      </div>
+    </div>
   </div>
-  <div class="col-lg-5 ms-auto">
-    <div class="card shadow-sm">
-      <div class="card-body">
-        <h2 class="h4 mb-3">Sign in</h2>
+  <div class="col-lg-5 col-md-8 col-sm-10">
+    <div class="card shadow-lg login-card position-relative">
+      <div class="card-body p-4 p-md-5">
+        <div class="text-center mb-4 d-lg-none">
+          <img src="{{ url_for('static', filename='images/MyFISO.png') }}" alt="MyFSIO" width="48" height="48" class="mb-3 rounded-3">
+          <h2 class="h4 fw-bold">MyFSIO</h2>
+        </div>
+        <h2 class="h4 mb-1 d-none d-lg-block">Sign in</h2>
+        <p class="text-muted mb-4 d-none d-lg-block">Enter your credentials to continue</p>
         <form method="post" action="{{ url_for('ui.login') }}">
           <input type="hidden" name="csrf_token" value="{{ csrf_token() }}" />
           <div class="mb-3">
-            <label class="form-label">Access key</label>
-            <input class="form-control" type="text" name="access_key" required autofocus />
+            <label class="form-label fw-medium">Access key</label>
+            <div class="input-group">
+              <span class="input-group-text bg-transparent">
+                <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-key text-muted" viewBox="0 0 16 16">
+                  <path d="M0 8a4 4 0 0 1 7.465-2H14a.5.5 0 0 1 .354.146l1.5 1.5a.5.5 0 0 1 0 .708l-1.5 1.5a.5.5 0 0 1-.708 0L13 9.207l-.646.647a.5.5 0 0 1-.708 0L11 9.207l-.646.647a.5.5 0 0 1-.708 0L9 9.207l-.646.647A.5.5 0 0 1 8 10h-.535A4 4 0 0 1 0 8zm4-3a3 3 0 1 0 2.712 4.285A.5.5 0 0 1 7.163 9h.63l.853-.854a.5.5 0 0 1 .708 0l.646.647.646-.647a.5.5 0 0 1 .708 0l.646.647.646-.647a.5.5 0 0 1 .708 0l.646.647.793-.793-1-1h-6.63a.5.5 0 0 1-.451-.285A3 3 0 0 0 4 5z"/>
+                  <path d="M4 8a1 1 0 1 1-2 0 1 1 0 0 1 2 0z"/>
+                </svg>
+              </span>
+              <input class="form-control" type="text" name="access_key" required autofocus placeholder="Enter your access key" />
+            </div>
           </div>
           <div class="mb-4">
-            <label class="form-label">Secret key</label>
-            <input class="form-control" type="password" name="secret_key" required />
+            <label class="form-label fw-medium">Secret key</label>
+            <div class="input-group">
+              <span class="input-group-text bg-transparent">
+                <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-shield-lock text-muted" viewBox="0 0 16 16">
+                  <path d="M5.338 1.59a61.44 61.44 0 0 0-2.837.856.481.481 0 0 0-.328.39c-.554 4.157.726 7.19 2.253 9.188a10.725 10.725 0 0 0 2.287 2.233c.346.244.652.42.893.533.12.057.218.095.293.118a.55.55 0 0 0 .101.025.615.615 0 0 0 .1-.025c.076-.023.174-.061.294-.118.24-.113.547-.29.893-.533a10.726 10.726 0 0 0 2.287-2.233c1.527-1.997 2.807-5.031 2.253-9.188a.48.48 0 0 0-.328-.39c-.651-.213-1.75-.56-2.837-.855C9.552 1.29 8.531 1.067 8 1.067c-.53 0-1.552.223-2.662.524zM5.072.56C6.157.265 7.31 0 8 0s1.843.265 2.928.56c1.11.3 2.229.655 2.887.87a1.54 1.54 0 0 1 1.044 1.262c.596 4.477-.787 7.795-2.465 9.99a11.775 11.775 0 0 1-2.517 2.453 7.159 7.159 0 0 1-1.048.625c-.28.132-.581.24-.829.24s-.548-.108-.829-.24a7.158 7.158 0 0 1-1.048-.625 11.777 11.777 0 0 1-2.517-2.453C1.928 10.487.545 7.169 1.141 2.692A1.54 1.54 0 0 1 2.185 1.43 62.456 62.456 0 0 1 5.072.56z"/>
+                  <path d="M9.5 6.5a1.5 1.5 0 0 1-1 1.415l.385 1.99a.5.5 0 0 1-.491.595h-.788a.5.5 0 0 1-.49-.595l.384-1.99a1.5 1.5 0 1 1 2-1.415z"/>
+                </svg>
+              </span>
+              <input class="form-control" type="password" name="secret_key" required placeholder="Enter your secret key" />
+            </div>
          </div>
-          <button class="btn btn-primary w-100" type="submit">Continue</button>
+          <button class="btn btn-primary btn-lg w-100 fw-medium" type="submit">
+            Sign in
+            <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-arrow-right ms-2" viewBox="0 0 16 16">
+              <path fill-rule="evenodd" d="M1 8a.5.5 0 0 1 .5-.5h11.793l-3.147-3.146a.5.5 0 0 1 .708-.708l4 4a.5.5 0 0 1 0 .708l-4 4a.5.5 0 0 1-.708-.708L13.293 8.5H1.5A.5.5 0 0 1 1 8z"/>
+            </svg>
+          </button>
         </form>
+        <div class="text-center mt-4">
+          <small class="text-muted">Need help? Check the <a href="#" class="text-decoration-none">documentation</a></small>
+        </div>
      </div>
    </div>
  </div>
</div>

+<style>
+  .min-vh-75 { min-height: 75vh; }
+  .login-hero-icon {
+    width: 120px;
+    height: 120px;
+    display: flex;
+    align-items: center;
+    justify-content: center;
+    background: linear-gradient(135deg, rgba(59, 130, 246, 0.15) 0%, rgba(139, 92, 246, 0.15) 100%);
+    border-radius: 50%;
+    color: #3b82f6;
+    margin: 0 auto;
+  }
+  [data-theme='dark'] .login-hero-icon {
+    background: linear-gradient(135deg, rgba(59, 130, 246, 0.25) 0%, rgba(139, 92, 246, 0.25) 100%);
+    color: #60a5fa;
+  }
+</style>
 {% endblock %}
254 templates/metrics.html Normal file
@@ -0,0 +1,254 @@
{% extends "base.html" %}
{% block content %}
<div class="d-flex justify-content-between align-items-center mb-4">
  <div>
    <h1 class="h3 mb-1 fw-bold">System Metrics</h1>
    <p class="text-muted mb-0">Real-time server performance and storage usage</p>
  </div>
  <div class="d-flex gap-2 align-items-center">
    <span class="d-flex align-items-center gap-2 text-muted small">
      <span class="live-indicator"></span>
      Live
    </span>
    <button class="btn btn-outline-secondary btn-sm" onclick="window.location.reload()">
      <svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="bi bi-arrow-clockwise me-1" viewBox="0 0 16 16">
        <path fill-rule="evenodd" d="M8 3a5 5 0 1 0 4.546 2.914.5.5 0 0 1 .908-.417A6 6 0 1 1 8 2v1z"/>
        <path d="M8 4.466V.534a.25.25 0 0 1 .41-.192l2.36 1.966c.12.1.12.284 0 .384L8.41 4.658A.25.25 0 0 1 8 4.466z"/>
      </svg>
      Refresh
    </button>
  </div>
</div>

<div class="row g-4 mb-4">
  <div class="col-md-6 col-xl-3">
    <div class="card shadow-sm h-100 border-0 metric-card">
      <div class="card-body">
        <div class="d-flex align-items-center justify-content-between mb-3">
          <h6 class="card-subtitle text-muted text-uppercase small fw-bold mb-0">CPU Usage</h6>
          <div class="icon-box bg-primary-subtle text-primary rounded-circle p-2">
            <svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" class="bi bi-cpu" viewBox="0 0 16 16">
              <path d="M5 0a.5.5 0 0 1 .5.5V2h1V.5a.5.5 0 0 1 1 0V2h1V.5a.5.5 0 0 1 1 0V2h1V.5a.5.5 0 0 1 1 0V2A2.5 2.5 0 0 1 14 4.5h1.5a.5.5 0 0 1 0 1H14v1h1.5a.5.5 0 0 1 0 1H14v1h1.5a.5.5 0 0 1 0 1H14v1h1.5a.5.5 0 0 1 0 1H14a2.5 2.5 0 0 1-2.5 2.5v1.5a.5.5 0 0 1-1 0V14h-1v1.5a.5.5 0 0 1-1 0V14h-1v1.5a.5.5 0 0 1-1 0V14h-1v1.5a.5.5 0 0 1-1 0V14A2.5 2.5 0 0 1 2 11.5H.5a.5.5 0 0 1 0-1H2v-1H.5a.5.5 0 0 1 0-1H2v-1H.5a.5.5 0 0 1 0-1H2v-1H.5a.5.5 0 0 1 0-1H2A2.5 2.5 0 0 1 4.5 2V.5a.5.5 0 0 1 .5-.5zM5 4H5v8h6V4H5z"/>
            </svg>
          </div>
        </div>
        <h2 class="display-6 fw-bold mb-2 stat-value">{{ cpu_percent }}<span class="fs-4 fw-normal text-muted">%</span></h2>
        <div class="progress" style="height: 8px; border-radius: 4px;">
          <div class="progress-bar {% if cpu_percent > 80 %}bg-danger{% elif cpu_percent > 50 %}bg-warning{% else %}bg-primary{% endif %}" role="progressbar" style="width: {{ cpu_percent }}%"></div>
        </div>
        <div class="mt-2 d-flex justify-content-between">
          <small class="text-muted">Current load</small>
          <small class="{% if cpu_percent > 80 %}text-danger{% elif cpu_percent > 50 %}text-warning{% else %}text-success{% endif %}">
            {% if cpu_percent > 80 %}High{% elif cpu_percent > 50 %}Medium{% else %}Normal{% endif %}
          </small>
        </div>
      </div>
    </div>
  </div>

  <div class="col-md-6 col-xl-3">
    <div class="card shadow-sm h-100 border-0 metric-card">
      <div class="card-body">
        <div class="d-flex align-items-center justify-content-between mb-3">
          <h6 class="card-subtitle text-muted text-uppercase small fw-bold mb-0">Memory</h6>
          <div class="icon-box bg-info-subtle text-info rounded-circle p-2">
            <svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" class="bi bi-memory" viewBox="0 0 16 16">
              <path d="M1 3a1 1 0 0 0-1 1v8a1 1 0 0 0 1 1h4.586a1 1 0 0 0 .707-.293l.353-.353a.5.5 0 0 1 .708 0l.353.353a1 1 0 0 0 .707.293H15a1 1 0 0 0 1-1V4a1 1 0 0 0-1-1H1Zm.5 1h3a.5.5 0 0 1 .5.5v4a.5.5 0 0 1-.5.5h-3a.5.5 0 0 1-.5-.5v-4a.5.5 0 0 1 .5-.5Zm5 0h3a.5.5 0 0 1 .5.5v4a.5.5 0 0 1-.5.5h-3a.5.5 0 0 1-.5-.5v-4a.5.5 0 0 1 .5-.5Zm4.5.5a.5.5 0 0 1 .5-.5h3a.5.5 0 0 1 .5.5v4a.5.5 0 0 1-.5.5h-3a.5.5 0 0 1-.5-.5v-4Z"/>
            </svg>
          </div>
        </div>
        <h2 class="display-6 fw-bold mb-2 stat-value">{{ memory.percent }}<span class="fs-4 fw-normal text-muted">%</span></h2>
        <div class="progress" style="height: 8px; border-radius: 4px;">
          <div class="progress-bar bg-info" role="progressbar" style="width: {{ memory.percent }}%"></div>
        </div>
        <div class="mt-2 d-flex justify-content-between">
          <small class="text-muted">{{ memory.used }} used</small>
          <small class="text-muted">{{ memory.total }} total</small>
        </div>
      </div>
    </div>
  </div>

  <div class="col-md-6 col-xl-3">
    <div class="card shadow-sm h-100 border-0 metric-card">
      <div class="card-body">
        <div class="d-flex align-items-center justify-content-between mb-3">
          <h6 class="card-subtitle text-muted text-uppercase small fw-bold mb-0">Disk Space</h6>
          <div class="icon-box bg-warning-subtle text-warning rounded-circle p-2">
            <svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" class="bi bi-hdd" viewBox="0 0 16 16">
              <path d="M4.5 11a.5.5 0 1 0 0-1 .5.5 0 0 0 0 1zM3 10.5a.5.5 0 1 1-1 0 .5.5 0 0 1 1 0z"/>
              <path d="M16 11a2 2 0 0 1-2 2H2a2 2 0 0 1-2-2V9.51c0-.418.105-.83.305-1.197l2.472-4.531A1.5 1.5 0 0 1 4.094 3h7.812a1.5 1.5 0 0 1 1.317.782l2.472 4.53c.2.368.305.78.305 1.198V11zM3.655 4.26 1.592 8.043C1.724 8.014 1.86 8 2 8h12c.14 0 .276.014.408.042L12.345 4.26a.5.5 0 0 0-.439-.26H4.094a.5.5 0 0 0-.439.26zM1 10v1a1 1 0 0 0 1 1h12a1 1 0 0 0 1-1v-1a1 1 0 0 0-1-1H2a1 1 0 0 0-1 1z"/>
            </svg>
          </div>
        </div>
        <h2 class="display-6 fw-bold mb-2 stat-value">{{ disk.percent }}<span class="fs-4 fw-normal text-muted">%</span></h2>
        <div class="progress" style="height: 8px; border-radius: 4px;">
          <div class="progress-bar {% if disk.percent > 90 %}bg-danger{% elif disk.percent > 75 %}bg-warning{% else %}bg-warning{% endif %}" role="progressbar" style="width: {{ disk.percent }}%"></div>
        </div>
        <div class="mt-2 d-flex justify-content-between">
          <small class="text-muted">{{ disk.free }} free</small>
          <small class="text-muted">{{ disk.total }} total</small>
        </div>
      </div>
    </div>
  </div>

  <div class="col-md-6 col-xl-3">
    <div class="card shadow-sm h-100 border-0 metric-card">
      <div class="card-body">
        <div class="d-flex align-items-center justify-content-between mb-3">
          <h6 class="card-subtitle text-muted text-uppercase small fw-bold mb-0">Storage</h6>
          <div class="icon-box bg-success-subtle text-success rounded-circle p-2">
            <svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" class="bi bi-database" viewBox="0 0 16 16">
              <path d="M4.318 2.687C5.234 2.271 6.536 2 8 2s2.766.27 3.682.687C12.644 3.125 13 3.627 13 4c0 .374-.356.875-1.318 1.313C10.766 5.729 9.464 6 8 6s-2.766-.27-3.682-.687C3.356 4.875 3 4.373 3 4c0-.374.356-.875 1.318-1.313ZM13 5.698V7c0 .374-.356.875-1.318 1.313C10.766 8.729 9.464 9 8 9s-2.766-.27-3.682-.687C3.356 7.875 3 7.373 3 7V5.698c.271.202.58.378.904.525C4.978 6.711 6.427 7 8 7s3.022-.289 4.096-.777A4.92 4.92 0 0 0 13 5.698ZM14 4c0-1.007-.875-1.755-1.904-2.223C11.022 1.289 9.573 1 8 1s-3.022.289-4.096.777C2.875 2.245 2 2.993 2 4v9c0 1.007.875 1.755 1.904 2.223C4.978 15.71 6.427 16 8 16s3.022-.289 4.096-.777C13.125 14.755 14 14.007 14 13V4Zm-1 4.698V10c0 .374-.356.875-1.318 1.313C10.766 11.729 9.464 12 8 12s-2.766-.27-3.682-.687C3.356 10.875 3 10.373 3 10V8.698c.271.202.58.378.904.525C4.978 9.71 6.427 10 8 10s3.022-.289 4.096-.777A4.92 4.92 0 0 0 13 8.698Zm0 3V13c0 .374-.356.875-1.318 1.313C10.766 14.729 9.464 15 8 15s-2.766-.27-3.682-.687C3.356 13.875 3 13.373 3 13v-1.302c.271.202.58.378.904.525C4.978 12.71 6.427 13 8 13s3.022-.289 4.096-.777c.324-.147.633-.323.904-.525Z"/>
            </svg>
          </div>
        </div>
        <h2 class="display-6 fw-bold mb-2 stat-value">{{ app.storage_used }}</h2>
        <div class="d-flex gap-3 mt-3">
          <div class="text-center flex-fill">
            <div class="h5 fw-bold mb-0">{{ app.buckets }}</div>
            <small class="text-muted">Buckets</small>
          </div>
          <div class="vr"></div>
          <div class="text-center flex-fill">
            <div class="h5 fw-bold mb-0">{{ app.objects }}</div>
            <small class="text-muted">Objects</small>
          </div>
        </div>
      </div>
    </div>
  </div>
</div>

<div class="row g-4">
  <div class="col-lg-8">
    <div class="card shadow-sm border-0">
      <div class="card-header bg-transparent border-0 pt-4 px-4 d-flex justify-content-between align-items-center">
        <h5 class="card-title mb-0 fw-semibold">System Overview</h5>
      </div>
      <div class="card-body p-4">
        <div class="table-responsive">
          <table class="table table-hover align-middle mb-0">
            <thead>
              <tr class="text-muted small text-uppercase">
                <th class="fw-semibold border-0 pb-3">Resource</th>
                <th class="fw-semibold border-0 pb-3">Value</th>
                <th class="fw-semibold border-0 pb-3">Status</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td class="py-3">
                  <div class="d-flex align-items-center gap-2">
                    <div class="bg-secondary-subtle rounded p-2">
                      <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-hdd-stack text-secondary" viewBox="0 0 16 16">
                        <path d="M14 10a1 1 0 0 1 1 1v1a1 1 0 0 1-1 1H2a1 1 0 0 1-1-1v-1a1 1 0 0 1 1-1h12zM2 9a2 2 0 0 0-2 2v1a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2v-1a2 2 0 0 0-2-2H2z"/>
                        <path d="M5 11.5a.5.5 0 1 1-1 0 .5.5 0 0 1 1 0zm-2 0a.5.5 0 1 1-1 0 .5.5 0 0 1 1 0zM14 3a1 1 0 0 1 1 1v1a1 1 0 0 1-1 1H2a1 1 0 0 1-1-1V4a1 1 0 0 1 1-1h12zM2 2a2 2 0 0 0-2 2v1a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V4a2 2 0 0 0-2-2H2z"/>
                        <path d="M5 4.5a.5.5 0 1 1-1 0 .5.5 0 0 1 1 0zm-2 0a.5.5 0 1 1-1 0 .5.5 0 0 1 1 0z"/>
                      </svg>
                    </div>
                    <span class="fw-medium">Total Disk Capacity</span>
                  </div>
                </td>
                <td class="py-3 fw-semibold">{{ disk.total }}</td>
                <td class="py-3"><span class="badge bg-secondary-subtle text-secondary">Hardware</span></td>
              </tr>
              <tr>
                <td class="py-3">
                  <div class="d-flex align-items-center gap-2">
                    <div class="bg-success-subtle rounded p-2">
                      <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-check-circle text-success" viewBox="0 0 16 16">
                        <path d="M8 15A7 7 0 1 1 8 1a7 7 0 0 1 0 14zm0 1A8 8 0 1 0 8 0a8 8 0 0 0 0 16z"/>
                        <path d="M10.97 4.97a.235.235 0 0 0-.02.022L7.477 9.417 5.384 7.323a.75.75 0 0 0-1.06 1.06L6.97 11.03a.75.75 0 0 0 1.079-.02l3.992-4.99a.75.75 0 0 0-1.071-1.05z"/>
                      </svg>
                    </div>
                    <span class="fw-medium">Available Space</span>
                  </div>
                </td>
                <td class="py-3 fw-semibold">{{ disk.free }}</td>
                <td class="py-3">
                  {% if disk.percent > 90 %}
                  <span class="status-badge status-badge-danger badge bg-danger-subtle text-danger">
                    <span class="status-badge-dot"></span>Critical
                  </span>
                  {% elif disk.percent > 75 %}
                  <span class="status-badge status-badge-warning badge bg-warning-subtle text-warning">
                    <span class="status-badge-dot"></span>Low
                  </span>
                  {% else %}
                  <span class="status-badge status-badge-success badge bg-success-subtle text-success">
                    <span class="status-badge-dot"></span>Good
                  </span>
                  {% endif %}
                </td>
              </tr>
              <tr>
                <td class="py-3">
                  <div class="d-flex align-items-center gap-2">
                    <div class="bg-primary-subtle rounded p-2">
                      <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-bucket text-primary" viewBox="0 0 16 16">
                        <path d="M2.522 5H2a.5.5 0 0 0-.494.574l1.372 9.149A1.5 1.5 0 0 0 4.36 16h7.278a1.5 1.5 0 0 0 1.483-1.277l1.373-9.149A.5.5 0 0 0 14 5h-.522A5.5 5.5 0 0 0 2.522 5zm1.005 0a4.5 4.5 0 0 1 8.945 0H3.527z"/>
                      </svg>
                    </div>
                    <span class="fw-medium">MyFSIO Data</span>
                  </div>
                </td>
                <td class="py-3 fw-semibold">{{ app.storage_used }}</td>
                <td class="py-3"><span class="badge bg-primary-subtle text-primary">Application</span></td>
              </tr>
              <tr>
                <td class="py-3">
                  <div class="d-flex align-items-center gap-2">
                    <div class="bg-info-subtle rounded p-2">
                      <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-file-earmark text-info" viewBox="0 0 16 16">
                        <path d="M14 4.5V14a2 2 0 0 1-2 2H4a2 2 0 0 1-2-2V2a2 2 0 0 1 2-2h5.5L14 4.5zm-3 0A1.5 1.5 0 0 1 9.5 3V1H4a1 1 0 0 0-1 1v12a1 1 0 0 0 1 1h8a1 1 0 0 0 1-1V4.5h-2z"/>
                      </svg>
                    </div>
                    <span class="fw-medium">Total Objects</span>
                  </div>
                </td>
                <td class="py-3 fw-semibold">{{ app.objects }}</td>
                <td class="py-3"><span class="badge bg-info-subtle text-info">Count</span></td>
              </tr>
            </tbody>
          </table>
        </div>
      </div>
    </div>
  </div>

  <div class="col-lg-4">
    <div class="card shadow-sm border-0 h-100 overflow-hidden" style="background: linear-gradient(135deg, #3b82f6 0%, #8b5cf6 100%);">
      <div class="card-body p-4 d-flex flex-column justify-content-center text-white position-relative">
        <div class="position-absolute top-0 end-0 opacity-25" style="transform: translate(20%, -20%);">
          <svg xmlns="http://www.w3.org/2000/svg" width="160" height="160" fill="currentColor" class="bi bi-cloud-check" viewBox="0 0 16 16">
            <path fill-rule="evenodd" d="M10.354 6.146a.5.5 0 0 1 0 .708l-3 3a.5.5 0 0 1-.708 0l-1.5-1.5a.5.5 0 1 1 .708-.708L7 8.793l2.646-2.647a.5.5 0 0 1 .708 0z"/>
            <path d="M4.406 3.342A5.53 5.53 0 0 1 8 2c2.69 0 4.923 2 5.166 4.579C14.758 6.804 16 8.137 16 9.773 16 11.569 14.502 13 12.687 13H3.781C1.708 13 0 11.366 0 9.318c0-1.763 1.266-3.223 2.942-3.593.143-.863.698-1.723 1.464-2.383z"/>
          </svg>
        </div>
        <div class="mb-3">
          <span class="badge bg-white text-primary fw-semibold px-3 py-2">
            <svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="bi bi-check-circle-fill me-1" viewBox="0 0 16 16">
              <path d="M16 8A8 8 0 1 1 0 8a8 8 0 0 1 16 0zm-3.97-3.03a.75.75 0 0 0-1.08.022L7.477 9.417 5.384 7.323a.75.75 0 0 0-1.06 1.06L6.97 11.03a.75.75 0 0 0 1.079-.02l3.992-4.99a.75.75 0 0 0-.01-1.05z"/>
            </svg>
            v{{ app.version }}
          </span>
        </div>
        <h4 class="card-title fw-bold mb-3">System Status</h4>
        <p class="card-text opacity-90 mb-4">All systems operational. Your storage infrastructure is running smoothly with no detected issues.</p>
        <div class="d-flex gap-4">
          <div>
            <div class="h3 fw-bold mb-0">{{ app.uptime_days }}d</div>
            <small class="opacity-75">Uptime</small>
          </div>
          <div>
            <div class="h3 fw-bold mb-0">{{ app.buckets }}</div>
            <small class="opacity-75">Active Buckets</small>
          </div>
        </div>
      </div>
    </div>
  </div>
</div>
{% endblock %}
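The template consumes `cpu_percent`, a `memory`/`disk` pair, and an `app` summary. A sketch of how a view could assemble that context with `psutil`; the function name is hypothetical, and the humanized size strings ({{ memory.used }} and friends) are left as raw byte counts for brevity:

```python
# Hypothetical context builder for templates/metrics.html, using psutil.
import psutil

def metrics_context() -> dict:
    mem = psutil.virtual_memory()
    disk = psutil.disk_usage("/")
    return {
        "cpu_percent": psutil.cpu_percent(interval=0.1),
        "memory": {"percent": mem.percent, "used": mem.used, "total": mem.total},
        "disk": {"percent": disk.percent, "free": disk.free, "total": disk.total},
        # The "app" values (storage_used, buckets, objects, version,
        # uptime_days) would come from the storage layer, not psutil.
    }
```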
93 tests/test_api_multipart.py Normal file
@@ -0,0 +1,93 @@
import io
import pytest
from xml.etree.ElementTree import fromstring

@pytest.fixture
def client(app):
    return app.test_client()

@pytest.fixture
def auth_headers(app):
    # Create a test user and return headers
    # Using the user defined in conftest.py
    return {
        "X-Access-Key": "test",
        "X-Secret-Key": "secret"
    }

def test_multipart_upload_flow(client, auth_headers):
    # 1. Create bucket
    client.put("/test-bucket", headers=auth_headers)

    # 2. Initiate Multipart Upload
    resp = client.post("/test-bucket/large-file.txt?uploads", headers=auth_headers)
    assert resp.status_code == 200
    root = fromstring(resp.data)
    upload_id = root.find("UploadId").text
    assert upload_id

    # 3. Upload Part 1
    resp = client.put(
        f"/test-bucket/large-file.txt?partNumber=1&uploadId={upload_id}",
        headers=auth_headers,
        data=b"part1"
    )
    assert resp.status_code == 200
    etag1 = resp.headers["ETag"]
    assert etag1

    # 4. Upload Part 2
    resp = client.put(
        f"/test-bucket/large-file.txt?partNumber=2&uploadId={upload_id}",
        headers=auth_headers,
        data=b"part2"
    )
    assert resp.status_code == 200
    etag2 = resp.headers["ETag"]
    assert etag2

    # 5. Complete Multipart Upload
    xml_body = f"""
    <CompleteMultipartUpload>
        <Part>
            <PartNumber>1</PartNumber>
            <ETag>{etag1}</ETag>
        </Part>
        <Part>
            <PartNumber>2</PartNumber>
            <ETag>{etag2}</ETag>
        </Part>
    </CompleteMultipartUpload>
    """
    resp = client.post(
        f"/test-bucket/large-file.txt?uploadId={upload_id}",
        headers=auth_headers,
        data=xml_body
    )
    assert resp.status_code == 200
    root = fromstring(resp.data)
    assert root.find("Key").text == "large-file.txt"

    # 6. Verify object content
    resp = client.get("/test-bucket/large-file.txt", headers=auth_headers)
    assert resp.status_code == 200
    assert resp.data == b"part1part2"

def test_abort_multipart_upload(client, auth_headers):
    client.put("/abort-bucket", headers=auth_headers)

    # Initiate
    resp = client.post("/abort-bucket/file.txt?uploads", headers=auth_headers)
    upload_id = fromstring(resp.data).find("UploadId").text

    # Abort
    resp = client.delete(f"/abort-bucket/file.txt?uploadId={upload_id}", headers=auth_headers)
    assert resp.status_code == 204

    # Try to upload part (should fail)
    resp = client.put(
        f"/abort-bucket/file.txt?partNumber=1&uploadId={upload_id}",
        headers=auth_headers,
        data=b"data"
    )
    assert resp.status_code == 404  # NoSuchUpload
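The `auth_headers` fixture leans on a `test`/`secret` user "defined in conftest.py", which this diff does not show. A plausible minimal shape for that fixture, with the factory name and config keys as assumptions:

```python
# Hypothetical tests/conftest.py implied by the fixtures above.
import pytest

@pytest.fixture
def app(tmp_path):
    from app import create_app  # assumed application factory

    application = create_app({"TESTING": True, "DATA_DIR": str(tmp_path)})
    # An IAM user with access key "test" and secret "secret" would be
    # seeded here so the X-Access-Key / X-Secret-Key headers authenticate.
    return application
```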
@@ -24,14 +24,6 @@ def test_boto3_basic_operations(live_server):
         ),
     )
 
-    # No need to inject custom headers anymore, as we support SigV4
-    # def _inject_headers(params, **_kwargs):
-    #     headers = params.setdefault("headers", {})
-    #     headers["X-Access-Key"] = "test"
-    #     headers["X-Secret-Key"] = "secret"
-
-    # s3.meta.events.register("before-call.s3", _inject_headers)
-
     s3.create_bucket(Bucket=bucket_name)
 
     try:
28 tests/test_boto3_multipart.py Normal file
@@ -0,0 +1,28 @@
import uuid
import pytest
import boto3
from botocore.client import Config

@pytest.mark.integration
def test_boto3_multipart_upload(live_server):
    bucket_name = f'mp-test-{uuid.uuid4().hex[:8]}'
    object_key = 'large-file.bin'
    s3 = boto3.client('s3', endpoint_url=live_server, aws_access_key_id='test', aws_secret_access_key='secret', region_name='us-east-1', use_ssl=False, config=Config(signature_version='s3v4', retries={'max_attempts': 1}, s3={'addressing_style': 'path'}))
    s3.create_bucket(Bucket=bucket_name)
    try:
        response = s3.create_multipart_upload(Bucket=bucket_name, Key=object_key)
        upload_id = response['UploadId']
        parts = []
        part1_data = b'A' * 1024
        part2_data = b'B' * 1024
        resp1 = s3.upload_part(Bucket=bucket_name, Key=object_key, PartNumber=1, UploadId=upload_id, Body=part1_data)
        parts.append({'PartNumber': 1, 'ETag': resp1['ETag']})
        resp2 = s3.upload_part(Bucket=bucket_name, Key=object_key, PartNumber=2, UploadId=upload_id, Body=part2_data)
        parts.append({'PartNumber': 2, 'ETag': resp2['ETag']})
        s3.complete_multipart_upload(Bucket=bucket_name, Key=object_key, UploadId=upload_id, MultipartUpload={'Parts': parts})
        obj = s3.get_object(Bucket=bucket_name, Key=object_key)
        content = obj['Body'].read()
        assert content == part1_data + part2_data
        s3.delete_object(Bucket=bucket_name, Key=object_key)
    finally:
        s3.delete_bucket(Bucket=bucket_name)
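The test verifies the reassembled content but not the final ETag. For reference, AWS computes a multipart ETag as the MD5 of the concatenated binary part digests, suffixed with the part count; whether MyFSIO reproduces this exactly is not shown in this diff:

```python
# AWS-style multipart ETag (not necessarily MyFSIO's behavior).
import hashlib

def multipart_etag(parts: list[bytes]) -> str:
    combined = b"".join(hashlib.md5(p).digest() for p in parts)
    return f"{hashlib.md5(combined).hexdigest()}-{len(parts)}"
```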
@@ -38,7 +38,7 @@ def test_unicode_bucket_and_object_names(tmp_path: Path):
     assert storage.get_object_path("unicode-test", key).exists()
 
     # Verify listing
-    objects = storage.list_objects("unicode-test")
+    objects = storage.list_objects_all("unicode-test")
     assert any(o.key == key for o in objects)
 
 def test_special_characters_in_metadata(tmp_path: Path):
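The `list_objects` → `list_objects_all` rename suggests the plain listing call became paginated, with a separate helper that drains every page. The storage API itself is not shown in this diff, so the signatures below are assumptions sketching that split:

```python
# Hypothetical pagination split implied by the rename: list_objects()
# would return one page plus a continuation token, and
# list_objects_all() would drain the pages.
def list_objects_all(storage, bucket: str) -> list:
    objects, token = [], None
    while True:
        page, token = storage.list_objects(bucket, continuation=token)  # assumed signature
        objects.extend(page)
        if token is None:
            return objects
```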
763 tests/test_encryption.py Normal file
@@ -0,0 +1,763 @@
"""Tests for encryption functionality."""
from __future__ import annotations

import base64
import io
import json
import os
import secrets
import tempfile
from pathlib import Path

import pytest


class TestLocalKeyEncryption:
    """Tests for LocalKeyEncryption provider."""

    def test_create_master_key(self, tmp_path):
        """Test that master key is created if it doesn't exist."""
        from app.encryption import LocalKeyEncryption

        key_path = tmp_path / "keys" / "master.key"
        provider = LocalKeyEncryption(key_path)

        # Access master key to trigger creation
        key = provider.master_key

        assert key_path.exists()
        assert len(key) == 32  # 256-bit key

    def test_load_existing_master_key(self, tmp_path):
        """Test loading an existing master key."""
        from app.encryption import LocalKeyEncryption

        key_path = tmp_path / "master.key"
        original_key = secrets.token_bytes(32)
        key_path.write_text(base64.b64encode(original_key).decode())

        provider = LocalKeyEncryption(key_path)
        loaded_key = provider.master_key

        assert loaded_key == original_key

    def test_encrypt_decrypt_roundtrip(self, tmp_path):
        """Test that data can be encrypted and decrypted correctly."""
        from app.encryption import LocalKeyEncryption

        key_path = tmp_path / "master.key"
        provider = LocalKeyEncryption(key_path)

        plaintext = b"Hello, World! This is a test message."

        # Encrypt
        result = provider.encrypt(plaintext)

        assert result.ciphertext != plaintext
        assert result.key_id == "local"
        assert len(result.nonce) == 12
        assert len(result.encrypted_data_key) > 0

        # Decrypt
        decrypted = provider.decrypt(
            result.ciphertext,
            result.nonce,
            result.encrypted_data_key,
            result.key_id,
        )

        assert decrypted == plaintext

    def test_different_data_keys_per_encryption(self, tmp_path):
        """Test that each encryption uses a different data key."""
        from app.encryption import LocalKeyEncryption

        key_path = tmp_path / "master.key"
        provider = LocalKeyEncryption(key_path)

        plaintext = b"Same message"

        result1 = provider.encrypt(plaintext)
        result2 = provider.encrypt(plaintext)

        # Different encrypted data keys
        assert result1.encrypted_data_key != result2.encrypted_data_key
        # Different nonces
        assert result1.nonce != result2.nonce
        # Different ciphertexts
        assert result1.ciphertext != result2.ciphertext

    def test_generate_data_key(self, tmp_path):
        """Test data key generation."""
        from app.encryption import LocalKeyEncryption

        key_path = tmp_path / "master.key"
        provider = LocalKeyEncryption(key_path)

        plaintext_key, encrypted_key = provider.generate_data_key()

        assert len(plaintext_key) == 32
        assert len(encrypted_key) > 32  # nonce + ciphertext + tag

        # Verify we can decrypt the key
        decrypted_key = provider._decrypt_data_key(encrypted_key)
        assert decrypted_key == plaintext_key

    def test_decrypt_with_wrong_key_fails(self, tmp_path):
        """Test that decryption fails with wrong master key."""
        from app.encryption import LocalKeyEncryption, EncryptionError

        # Create two providers with different keys
        key_path1 = tmp_path / "master1.key"
        key_path2 = tmp_path / "master2.key"

        provider1 = LocalKeyEncryption(key_path1)
        provider2 = LocalKeyEncryption(key_path2)

        # Encrypt with provider1
        plaintext = b"Secret message"
        result = provider1.encrypt(plaintext)

        # Try to decrypt with provider2
        with pytest.raises(EncryptionError):
            provider2.decrypt(
                result.ciphertext,
                result.nonce,
                result.encrypted_data_key,
                result.key_id,
            )


class TestEncryptionMetadata:
    """Tests for EncryptionMetadata class."""

    def test_to_dict(self):
        """Test converting metadata to dictionary."""
        from app.encryption import EncryptionMetadata

        nonce = secrets.token_bytes(12)
        encrypted_key = secrets.token_bytes(60)

        metadata = EncryptionMetadata(
            algorithm="AES256",
            key_id="local",
            nonce=nonce,
            encrypted_data_key=encrypted_key,
        )

        result = metadata.to_dict()

        assert result["x-amz-server-side-encryption"] == "AES256"
        assert result["x-amz-encryption-key-id"] == "local"
        assert base64.b64decode(result["x-amz-encryption-nonce"]) == nonce
        assert base64.b64decode(result["x-amz-encrypted-data-key"]) == encrypted_key

    def test_from_dict(self):
        """Test creating metadata from dictionary."""
        from app.encryption import EncryptionMetadata

        nonce = secrets.token_bytes(12)
        encrypted_key = secrets.token_bytes(60)

        data = {
            "x-amz-server-side-encryption": "AES256",
            "x-amz-encryption-key-id": "local",
            "x-amz-encryption-nonce": base64.b64encode(nonce).decode(),
            "x-amz-encrypted-data-key": base64.b64encode(encrypted_key).decode(),
        }

        metadata = EncryptionMetadata.from_dict(data)

        assert metadata is not None
        assert metadata.algorithm == "AES256"
        assert metadata.key_id == "local"
        assert metadata.nonce == nonce
        assert metadata.encrypted_data_key == encrypted_key

    def test_from_dict_returns_none_for_unencrypted(self):
        """Test that from_dict returns None for unencrypted objects."""
        from app.encryption import EncryptionMetadata

        data = {"some-other-key": "value"}

        metadata = EncryptionMetadata.from_dict(data)

        assert metadata is None


class TestStreamingEncryptor:
    """Tests for streaming encryption."""

    def test_encrypt_decrypt_stream(self, tmp_path):
        """Test streaming encryption and decryption."""
        from app.encryption import LocalKeyEncryption, StreamingEncryptor

        key_path = tmp_path / "master.key"
        provider = LocalKeyEncryption(key_path)
        encryptor = StreamingEncryptor(provider, chunk_size=1024)

        # Create test data
        original_data = b"A" * 5000 + b"B" * 5000 + b"C" * 5000  # 15KB
        stream = io.BytesIO(original_data)

        # Encrypt
        encrypted_stream, metadata = encryptor.encrypt_stream(stream)
        encrypted_data = encrypted_stream.read()

        assert encrypted_data != original_data
        assert metadata.algorithm == "AES256"

        # Decrypt
        encrypted_stream = io.BytesIO(encrypted_data)
        decrypted_stream = encryptor.decrypt_stream(encrypted_stream, metadata)
        decrypted_data = decrypted_stream.read()

        assert decrypted_data == original_data

    def test_encrypt_small_data(self, tmp_path):
        """Test encrypting data smaller than chunk size."""
        from app.encryption import LocalKeyEncryption, StreamingEncryptor

        key_path = tmp_path / "master.key"
        provider = LocalKeyEncryption(key_path)
        encryptor = StreamingEncryptor(provider, chunk_size=1024)

        original_data = b"Small data"
        stream = io.BytesIO(original_data)

        encrypted_stream, metadata = encryptor.encrypt_stream(stream)
        encrypted_stream.seek(0)

        decrypted_stream = encryptor.decrypt_stream(encrypted_stream, metadata)
        decrypted_data = decrypted_stream.read()

        assert decrypted_data == original_data

    def test_encrypt_empty_data(self, tmp_path):
        """Test encrypting empty data."""
        from app.encryption import LocalKeyEncryption, StreamingEncryptor

        key_path = tmp_path / "master.key"
        provider = LocalKeyEncryption(key_path)
        encryptor = StreamingEncryptor(provider)

        stream = io.BytesIO(b"")

        encrypted_stream, metadata = encryptor.encrypt_stream(stream)
        encrypted_stream.seek(0)

        decrypted_stream = encryptor.decrypt_stream(encrypted_stream, metadata)
        decrypted_data = decrypted_stream.read()

        assert decrypted_data == b""


class TestEncryptionManager:
    """Tests for EncryptionManager."""

    def test_encryption_disabled_by_default(self, tmp_path):
        """Test that encryption is disabled by default."""
        from app.encryption import EncryptionManager

        config = {
            "encryption_enabled": False,
            "encryption_master_key_path": str(tmp_path / "master.key"),
        }

        manager = EncryptionManager(config)

        assert not manager.enabled

    def test_encryption_enabled(self, tmp_path):
        """Test enabling encryption."""
        from app.encryption import EncryptionManager

        config = {
            "encryption_enabled": True,
            "encryption_master_key_path": str(tmp_path / "master.key"),
            "default_encryption_algorithm": "AES256",
        }

        manager = EncryptionManager(config)

        assert manager.enabled
        assert manager.default_algorithm == "AES256"

    def test_encrypt_decrypt_object(self, tmp_path):
        """Test encrypting and decrypting an object."""
        from app.encryption import EncryptionManager

        config = {
            "encryption_enabled": True,
            "encryption_master_key_path": str(tmp_path / "master.key"),
        }

        manager = EncryptionManager(config)

        plaintext = b"Object data to encrypt"

        ciphertext, metadata = manager.encrypt_object(plaintext)

        assert ciphertext != plaintext
        assert metadata.algorithm == "AES256"

        decrypted = manager.decrypt_object(ciphertext, metadata)

        assert decrypted == plaintext


class TestClientEncryptionHelper:
    """Tests for client-side encryption helpers."""

    def test_generate_client_key(self):
        """Test generating a client encryption key."""
        from app.encryption import ClientEncryptionHelper

        key_info = ClientEncryptionHelper.generate_client_key()

        assert "key" in key_info
        assert key_info["algorithm"] == "AES-256-GCM"
|
||||||
|
assert "created_at" in key_info
|
||||||
|
|
||||||
|
# Verify key is 256 bits
|
||||||
|
key = base64.b64decode(key_info["key"])
|
||||||
|
assert len(key) == 32
|
||||||
|
|
||||||
|
def test_encrypt_with_key(self):
|
||||||
|
"""Test encrypting data with a client key."""
|
||||||
|
from app.encryption import ClientEncryptionHelper
|
||||||
|
|
||||||
|
key = base64.b64encode(secrets.token_bytes(32)).decode()
|
||||||
|
plaintext = b"Client-side encrypted data"
|
||||||
|
|
||||||
|
result = ClientEncryptionHelper.encrypt_with_key(plaintext, key)
|
||||||
|
|
||||||
|
assert "ciphertext" in result
|
||||||
|
assert "nonce" in result
|
||||||
|
assert result["algorithm"] == "AES-256-GCM"
|
||||||
|
|
||||||
|
def test_encrypt_decrypt_with_key(self):
|
||||||
|
"""Test round-trip client-side encryption."""
|
||||||
|
from app.encryption import ClientEncryptionHelper
|
||||||
|
|
||||||
|
key = base64.b64encode(secrets.token_bytes(32)).decode()
|
||||||
|
plaintext = b"Client-side encrypted data"
|
||||||
|
|
||||||
|
encrypted = ClientEncryptionHelper.encrypt_with_key(plaintext, key)
|
||||||
|
|
||||||
|
decrypted = ClientEncryptionHelper.decrypt_with_key(
|
||||||
|
encrypted["ciphertext"],
|
||||||
|
encrypted["nonce"],
|
||||||
|
key,
|
||||||
|
)
|
||||||
|
|
||||||
|
assert decrypted == plaintext
|
||||||
|
|
||||||
|
def test_wrong_key_fails(self):
|
||||||
|
"""Test that decryption with wrong key fails."""
|
||||||
|
from app.encryption import ClientEncryptionHelper, EncryptionError
|
||||||
|
|
||||||
|
key1 = base64.b64encode(secrets.token_bytes(32)).decode()
|
||||||
|
key2 = base64.b64encode(secrets.token_bytes(32)).decode()
|
||||||
|
plaintext = b"Secret data"
|
||||||
|
|
||||||
|
encrypted = ClientEncryptionHelper.encrypt_with_key(plaintext, key1)
|
||||||
|
|
||||||
|
with pytest.raises(EncryptionError):
|
||||||
|
ClientEncryptionHelper.decrypt_with_key(
|
||||||
|
encrypted["ciphertext"],
|
||||||
|
encrypted["nonce"],
|
||||||
|
key2,
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
class TestKMSManager:
|
||||||
|
"""Tests for KMS key management."""
|
||||||
|
|
||||||
|
def test_create_key(self, tmp_path):
|
||||||
|
"""Test creating a KMS key."""
|
||||||
|
from app.kms import KMSManager
|
||||||
|
|
||||||
|
keys_path = tmp_path / "kms_keys.json"
|
||||||
|
master_key_path = tmp_path / "master.key"
|
||||||
|
|
||||||
|
kms = KMSManager(keys_path, master_key_path)
|
||||||
|
|
||||||
|
key = kms.create_key("Test key", key_id="test-key-1")
|
||||||
|
|
||||||
|
assert key.key_id == "test-key-1"
|
||||||
|
assert key.description == "Test key"
|
||||||
|
assert key.enabled
|
||||||
|
assert keys_path.exists()
|
||||||
|
|
||||||
|
def test_list_keys(self, tmp_path):
|
||||||
|
"""Test listing KMS keys."""
|
||||||
|
from app.kms import KMSManager
|
||||||
|
|
||||||
|
keys_path = tmp_path / "kms_keys.json"
|
||||||
|
master_key_path = tmp_path / "master.key"
|
||||||
|
|
||||||
|
kms = KMSManager(keys_path, master_key_path)
|
||||||
|
|
||||||
|
kms.create_key("Key 1", key_id="key-1")
|
||||||
|
kms.create_key("Key 2", key_id="key-2")
|
||||||
|
|
||||||
|
keys = kms.list_keys()
|
||||||
|
|
||||||
|
assert len(keys) == 2
|
||||||
|
key_ids = {k.key_id for k in keys}
|
||||||
|
assert "key-1" in key_ids
|
||||||
|
assert "key-2" in key_ids
|
||||||
|
|
||||||
|
def test_get_key(self, tmp_path):
|
||||||
|
"""Test getting a specific key."""
|
||||||
|
from app.kms import KMSManager
|
||||||
|
|
||||||
|
keys_path = tmp_path / "kms_keys.json"
|
||||||
|
master_key_path = tmp_path / "master.key"
|
||||||
|
|
||||||
|
kms = KMSManager(keys_path, master_key_path)
|
||||||
|
|
||||||
|
kms.create_key("Test key", key_id="test-key")
|
||||||
|
|
||||||
|
key = kms.get_key("test-key")
|
||||||
|
|
||||||
|
assert key is not None
|
||||||
|
assert key.key_id == "test-key"
|
||||||
|
|
||||||
|
# Non-existent key
|
||||||
|
assert kms.get_key("non-existent") is None
|
||||||
|
|
||||||
|
def test_enable_disable_key(self, tmp_path):
|
||||||
|
"""Test enabling and disabling keys."""
|
||||||
|
from app.kms import KMSManager
|
||||||
|
|
||||||
|
keys_path = tmp_path / "kms_keys.json"
|
||||||
|
master_key_path = tmp_path / "master.key"
|
||||||
|
|
||||||
|
kms = KMSManager(keys_path, master_key_path)
|
||||||
|
|
||||||
|
kms.create_key("Test key", key_id="test-key")
|
||||||
|
|
||||||
|
# Initially enabled
|
||||||
|
assert kms.get_key("test-key").enabled
|
||||||
|
|
||||||
|
# Disable
|
||||||
|
kms.disable_key("test-key")
|
||||||
|
assert not kms.get_key("test-key").enabled
|
||||||
|
|
||||||
|
# Enable
|
||||||
|
kms.enable_key("test-key")
|
||||||
|
assert kms.get_key("test-key").enabled
|
||||||
|
|
||||||
|
def test_delete_key(self, tmp_path):
|
||||||
|
"""Test deleting a key."""
|
||||||
|
from app.kms import KMSManager
|
||||||
|
|
||||||
|
keys_path = tmp_path / "kms_keys.json"
|
||||||
|
master_key_path = tmp_path / "master.key"
|
||||||
|
|
||||||
|
kms = KMSManager(keys_path, master_key_path)
|
||||||
|
|
||||||
|
kms.create_key("Test key", key_id="test-key")
|
||||||
|
assert kms.get_key("test-key") is not None
|
||||||
|
|
||||||
|
kms.delete_key("test-key")
|
||||||
|
assert kms.get_key("test-key") is None
|
||||||
|
|
||||||
|
def test_encrypt_decrypt(self, tmp_path):
|
||||||
|
"""Test KMS encrypt and decrypt."""
|
||||||
|
from app.kms import KMSManager
|
||||||
|
|
||||||
|
keys_path = tmp_path / "kms_keys.json"
|
||||||
|
master_key_path = tmp_path / "master.key"
|
||||||
|
|
||||||
|
kms = KMSManager(keys_path, master_key_path)
|
||||||
|
|
||||||
|
key = kms.create_key("Test key", key_id="test-key")
|
||||||
|
|
||||||
|
plaintext = b"Secret data to encrypt"
|
||||||
|
|
||||||
|
ciphertext = kms.encrypt("test-key", plaintext)
|
||||||
|
|
||||||
|
assert ciphertext != plaintext
|
||||||
|
|
||||||
|
decrypted, key_id = kms.decrypt(ciphertext)
|
||||||
|
|
||||||
|
assert decrypted == plaintext
|
||||||
|
assert key_id == "test-key"
|
||||||
|
|
||||||
|
def test_encrypt_with_context(self, tmp_path):
|
||||||
|
"""Test encryption with encryption context."""
|
||||||
|
from app.kms import KMSManager, EncryptionError
|
||||||
|
|
||||||
|
keys_path = tmp_path / "kms_keys.json"
|
||||||
|
master_key_path = tmp_path / "master.key"
|
||||||
|
|
||||||
|
kms = KMSManager(keys_path, master_key_path)
|
||||||
|
|
||||||
|
kms.create_key("Test key", key_id="test-key")
|
||||||
|
|
||||||
|
plaintext = b"Secret data"
|
||||||
|
context = {"bucket": "test-bucket", "key": "test-key"}
|
||||||
|
|
||||||
|
ciphertext = kms.encrypt("test-key", plaintext, context)
|
||||||
|
|
||||||
|
# Decrypt with same context succeeds
|
||||||
|
decrypted, _ = kms.decrypt(ciphertext, context)
|
||||||
|
assert decrypted == plaintext
|
||||||
|
|
||||||
|
# Decrypt with different context fails
|
||||||
|
with pytest.raises(EncryptionError):
|
||||||
|
kms.decrypt(ciphertext, {"different": "context"})
|
||||||
|
|
||||||
|
def test_generate_data_key(self, tmp_path):
|
||||||
|
"""Test generating a data key."""
|
||||||
|
from app.kms import KMSManager
|
||||||
|
|
||||||
|
keys_path = tmp_path / "kms_keys.json"
|
||||||
|
master_key_path = tmp_path / "master.key"
|
||||||
|
|
||||||
|
kms = KMSManager(keys_path, master_key_path)
|
||||||
|
|
||||||
|
kms.create_key("Test key", key_id="test-key")
|
||||||
|
|
||||||
|
plaintext_key, encrypted_key = kms.generate_data_key("test-key")
|
||||||
|
|
||||||
|
assert len(plaintext_key) == 32
|
||||||
|
assert len(encrypted_key) > 0
|
||||||
|
|
||||||
|
# Decrypt the encrypted key
|
||||||
|
decrypted_key = kms.decrypt_data_key("test-key", encrypted_key)
|
||||||
|
|
||||||
|
assert decrypted_key == plaintext_key
|
||||||
|
|
||||||
|
def test_disabled_key_cannot_encrypt(self, tmp_path):
|
||||||
|
"""Test that disabled keys cannot be used for encryption."""
|
||||||
|
from app.kms import KMSManager, EncryptionError
|
||||||
|
|
||||||
|
keys_path = tmp_path / "kms_keys.json"
|
||||||
|
master_key_path = tmp_path / "master.key"
|
||||||
|
|
||||||
|
kms = KMSManager(keys_path, master_key_path)
|
||||||
|
|
||||||
|
kms.create_key("Test key", key_id="test-key")
|
||||||
|
kms.disable_key("test-key")
|
||||||
|
|
||||||
|
with pytest.raises(EncryptionError, match="disabled"):
|
||||||
|
kms.encrypt("test-key", b"data")
|
||||||
|
|
||||||
|
def test_re_encrypt(self, tmp_path):
|
||||||
|
"""Test re-encrypting data with a different key."""
|
||||||
|
from app.kms import KMSManager
|
||||||
|
|
||||||
|
keys_path = tmp_path / "kms_keys.json"
|
||||||
|
master_key_path = tmp_path / "master.key"
|
||||||
|
|
||||||
|
kms = KMSManager(keys_path, master_key_path)
|
||||||
|
|
||||||
|
kms.create_key("Key 1", key_id="key-1")
|
||||||
|
kms.create_key("Key 2", key_id="key-2")
|
||||||
|
|
||||||
|
plaintext = b"Data to re-encrypt"
|
||||||
|
|
||||||
|
# Encrypt with key-1
|
||||||
|
ciphertext1 = kms.encrypt("key-1", plaintext)
|
||||||
|
|
||||||
|
# Re-encrypt with key-2
|
||||||
|
ciphertext2 = kms.re_encrypt(ciphertext1, "key-2")
|
||||||
|
|
||||||
|
# Decrypt with key-2
|
||||||
|
decrypted, key_id = kms.decrypt(ciphertext2)
|
||||||
|
|
||||||
|
assert decrypted == plaintext
|
||||||
|
assert key_id == "key-2"
|
||||||
|
|
||||||
|
def test_generate_random(self, tmp_path):
|
||||||
|
"""Test generating random bytes."""
|
||||||
|
from app.kms import KMSManager
|
||||||
|
|
||||||
|
keys_path = tmp_path / "kms_keys.json"
|
||||||
|
master_key_path = tmp_path / "master.key"
|
||||||
|
|
||||||
|
kms = KMSManager(keys_path, master_key_path)
|
||||||
|
|
||||||
|
random1 = kms.generate_random(32)
|
||||||
|
random2 = kms.generate_random(32)
|
||||||
|
|
||||||
|
assert len(random1) == 32
|
||||||
|
assert len(random2) == 32
|
||||||
|
assert random1 != random2 # Very unlikely to be equal
|
||||||
|
|
||||||
|
def test_keys_persist_across_instances(self, tmp_path):
|
||||||
|
"""Test that keys persist and can be loaded by new instances."""
|
||||||
|
from app.kms import KMSManager
|
||||||
|
|
||||||
|
keys_path = tmp_path / "kms_keys.json"
|
||||||
|
master_key_path = tmp_path / "master.key"
|
||||||
|
|
||||||
|
# Create key with first instance
|
||||||
|
kms1 = KMSManager(keys_path, master_key_path)
|
||||||
|
kms1.create_key("Test key", key_id="test-key")
|
||||||
|
|
||||||
|
plaintext = b"Persistent encryption test"
|
||||||
|
ciphertext = kms1.encrypt("test-key", plaintext)
|
||||||
|
|
||||||
|
# Create new instance and verify key works
|
||||||
|
kms2 = KMSManager(keys_path, master_key_path)
|
||||||
|
|
||||||
|
decrypted, key_id = kms2.decrypt(ciphertext)
|
||||||
|
|
||||||
|
assert decrypted == plaintext
|
||||||
|
assert key_id == "test-key"
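# Presumably the JSON key file stores key material wrapped by the master key
# (an implementation assumption); the test itself only verifies the observable
# behavior that a fresh instance can decrypt existing ciphertext.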
|
||||||
|
|
||||||
|
|
||||||
|
class TestKMSEncryptionProvider:
|
||||||
|
"""Tests for KMS encryption provider."""
|
||||||
|
|
||||||
|
def test_kms_encryption_provider(self, tmp_path):
|
||||||
|
"""Test using KMS as an encryption provider."""
|
||||||
|
from app.kms import KMSManager
|
||||||
|
|
||||||
|
keys_path = tmp_path / "kms_keys.json"
|
||||||
|
master_key_path = tmp_path / "master.key"
|
||||||
|
|
||||||
|
kms = KMSManager(keys_path, master_key_path)
|
||||||
|
kms.create_key("Test key", key_id="test-key")
|
||||||
|
|
||||||
|
provider = kms.get_provider("test-key")
|
||||||
|
|
||||||
|
plaintext = b"Data encrypted with KMS provider"
|
||||||
|
|
||||||
|
result = provider.encrypt(plaintext)
|
||||||
|
|
||||||
|
assert result.key_id == "test-key"
|
||||||
|
assert result.ciphertext != plaintext
|
||||||
|
|
||||||
|
decrypted = provider.decrypt(
|
||||||
|
result.ciphertext,
|
||||||
|
result.nonce,
|
||||||
|
result.encrypted_data_key,
|
||||||
|
result.key_id,
|
||||||
|
)
|
||||||
|
|
||||||
|
assert decrypted == plaintext
|
||||||
|
|
||||||
|
|
||||||
|
class TestEncryptedStorage:
|
||||||
|
"""Tests for encrypted storage layer."""
|
||||||
|
|
||||||
|
def test_put_and_get_encrypted_object(self, tmp_path):
|
||||||
|
"""Test storing and retrieving an encrypted object."""
|
||||||
|
from app.storage import ObjectStorage
|
||||||
|
from app.encryption import EncryptionManager
|
||||||
|
from app.encrypted_storage import EncryptedObjectStorage
|
||||||
|
|
||||||
|
storage_root = tmp_path / "storage"
|
||||||
|
storage = ObjectStorage(storage_root)
|
||||||
|
|
||||||
|
config = {
|
||||||
|
"encryption_enabled": True,
|
||||||
|
"encryption_master_key_path": str(tmp_path / "master.key"),
|
||||||
|
"default_encryption_algorithm": "AES256",
|
||||||
|
}
|
||||||
|
encryption = EncryptionManager(config)
|
||||||
|
|
||||||
|
encrypted_storage = EncryptedObjectStorage(storage, encryption)
|
||||||
|
|
||||||
|
# Create bucket with encryption config
|
||||||
|
storage.create_bucket("test-bucket")
|
||||||
|
storage.set_bucket_encryption("test-bucket", {
|
||||||
|
"Rules": [{"SSEAlgorithm": "AES256"}]
|
||||||
|
})
|
||||||
|
|
||||||
|
# Put object
|
||||||
|
original_data = b"This is secret data that should be encrypted"
|
||||||
|
stream = io.BytesIO(original_data)
|
||||||
|
|
||||||
|
meta = encrypted_storage.put_object(
|
||||||
|
"test-bucket",
|
||||||
|
"secret.txt",
|
||||||
|
stream,
|
||||||
|
)
|
||||||
|
|
||||||
|
assert meta is not None
|
||||||
|
|
||||||
|
# Verify file on disk is encrypted (not plaintext)
|
||||||
|
file_path = storage_root / "test-bucket" / "secret.txt"
|
||||||
|
stored_data = file_path.read_bytes()
|
||||||
|
assert stored_data != original_data
|
||||||
|
|
||||||
|
# Get object - should be decrypted
|
||||||
|
data, metadata = encrypted_storage.get_object_data("test-bucket", "secret.txt")
|
||||||
|
|
||||||
|
assert data == original_data
|
||||||
|
|
||||||
|
def test_no_encryption_without_config(self, tmp_path):
|
||||||
|
"""Test that objects are not encrypted without bucket config."""
|
||||||
|
from app.storage import ObjectStorage
|
||||||
|
from app.encryption import EncryptionManager
|
||||||
|
from app.encrypted_storage import EncryptedObjectStorage
|
||||||
|
|
||||||
|
storage_root = tmp_path / "storage"
|
||||||
|
storage = ObjectStorage(storage_root)
|
||||||
|
|
||||||
|
config = {
|
||||||
|
"encryption_enabled": True,
|
||||||
|
"encryption_master_key_path": str(tmp_path / "master.key"),
|
||||||
|
}
|
||||||
|
encryption = EncryptionManager(config)
|
||||||
|
|
||||||
|
encrypted_storage = EncryptedObjectStorage(storage, encryption)
|
||||||
|
|
||||||
|
storage.create_bucket("test-bucket")
|
||||||
|
# No encryption config
|
||||||
|
|
||||||
|
original_data = b"Unencrypted data"
|
||||||
|
stream = io.BytesIO(original_data)
|
||||||
|
|
||||||
|
encrypted_storage.put_object("test-bucket", "plain.txt", stream)
|
||||||
|
|
||||||
|
# Verify file on disk is NOT encrypted
|
||||||
|
file_path = storage_root / "test-bucket" / "plain.txt"
|
||||||
|
stored_data = file_path.read_bytes()
|
||||||
|
assert stored_data == original_data
|
||||||
|
|
||||||
|
def test_explicit_encryption_request(self, tmp_path):
|
||||||
|
"""Test explicitly requesting encryption."""
|
||||||
|
from app.storage import ObjectStorage
|
||||||
|
from app.encryption import EncryptionManager
|
||||||
|
from app.encrypted_storage import EncryptedObjectStorage
|
||||||
|
|
||||||
|
storage_root = tmp_path / "storage"
|
||||||
|
storage = ObjectStorage(storage_root)
|
||||||
|
|
||||||
|
config = {
|
||||||
|
"encryption_enabled": True,
|
||||||
|
"encryption_master_key_path": str(tmp_path / "master.key"),
|
||||||
|
}
|
||||||
|
encryption = EncryptionManager(config)
|
||||||
|
|
||||||
|
encrypted_storage = EncryptedObjectStorage(storage, encryption)
|
||||||
|
|
||||||
|
storage.create_bucket("test-bucket")
|
||||||
|
|
||||||
|
original_data = b"Explicitly encrypted data"
|
||||||
|
stream = io.BytesIO(original_data)
|
||||||
|
|
||||||
|
# Request encryption explicitly
|
||||||
|
encrypted_storage.put_object(
|
||||||
|
"test-bucket",
|
||||||
|
"encrypted.txt",
|
||||||
|
stream,
|
||||||
|
server_side_encryption="AES256",
|
||||||
|
)
|
||||||
|
|
||||||
|
# Verify file is encrypted
|
||||||
|
file_path = storage_root / "test-bucket" / "encrypted.txt"
|
||||||
|
stored_data = file_path.read_bytes()
|
||||||
|
assert stored_data != original_data
|
||||||
|
|
||||||
|
# Get object - should be decrypted
|
||||||
|
data, _ = encrypted_storage.get_object_data("test-bucket", "encrypted.txt")
|
||||||
|
assert data == original_data
|
||||||
506
tests/test_kms_api.py
Normal file
@@ -0,0 +1,506 @@
|
|||||||
|
"""Tests for KMS API endpoints."""
|
||||||
|
from __future__ import annotations
|
||||||
|
|
||||||
|
import base64
|
||||||
|
import json
|
||||||
|
import secrets
|
||||||
|
|
||||||
|
import pytest
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.fixture
|
||||||
|
def kms_client(tmp_path):
|
||||||
|
"""Create a test client with KMS enabled."""
|
||||||
|
from app import create_app
|
||||||
|
|
||||||
|
app = create_app({
|
||||||
|
"TESTING": True,
|
||||||
|
"STORAGE_ROOT": str(tmp_path / "storage"),
|
||||||
|
"IAM_CONFIG": str(tmp_path / "iam.json"),
|
||||||
|
"BUCKET_POLICY_PATH": str(tmp_path / "policies.json"),
|
||||||
|
"ENCRYPTION_ENABLED": True,
|
||||||
|
"KMS_ENABLED": True,
|
||||||
|
"ENCRYPTION_MASTER_KEY_PATH": str(tmp_path / "master.key"),
|
||||||
|
"KMS_KEYS_PATH": str(tmp_path / "kms_keys.json"),
|
||||||
|
})
|
||||||
|
|
||||||
|
# Create default IAM config with admin user
|
||||||
|
iam_config = {
|
||||||
|
"users": [
|
||||||
|
{
|
||||||
|
"access_key": "test-access-key",
|
||||||
|
"secret_key": "test-secret-key",
|
||||||
|
"display_name": "Test User",
|
||||||
|
"permissions": ["*"]
|
||||||
|
}
|
||||||
|
]
|
||||||
|
}
|
||||||
|
(tmp_path / "iam.json").write_text(json.dumps(iam_config))
|
||||||
|
|
||||||
|
return app.test_client()
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.fixture
|
||||||
|
def auth_headers():
|
||||||
|
"""Get authentication headers."""
|
||||||
|
return {
|
||||||
|
"X-Access-Key": "test-access-key",
|
||||||
|
"X-Secret-Key": "test-secret-key",
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
class TestKMSKeyManagement:
|
||||||
|
"""Tests for KMS key management endpoints."""
|
||||||
|
|
||||||
|
def test_create_key(self, kms_client, auth_headers):
|
||||||
|
"""Test creating a KMS key."""
|
||||||
|
response = kms_client.post(
|
||||||
|
"/kms/keys",
|
||||||
|
json={"Description": "Test encryption key"},
|
||||||
|
headers=auth_headers,
|
||||||
|
)
|
||||||
|
|
||||||
|
assert response.status_code == 200
|
||||||
|
data = response.get_json()
|
||||||
|
|
||||||
|
assert "KeyMetadata" in data
|
||||||
|
assert data["KeyMetadata"]["Description"] == "Test encryption key"
|
||||||
|
assert data["KeyMetadata"]["Enabled"] is True
|
||||||
|
assert "KeyId" in data["KeyMetadata"]
|
||||||
|
|
||||||
|
def test_create_key_with_custom_id(self, kms_client, auth_headers):
|
||||||
|
"""Test creating a key with a custom ID."""
|
||||||
|
response = kms_client.post(
|
||||||
|
"/kms/keys",
|
||||||
|
json={"KeyId": "my-custom-key", "Description": "Custom key"},
|
||||||
|
headers=auth_headers,
|
||||||
|
)
|
||||||
|
|
||||||
|
assert response.status_code == 200
|
||||||
|
data = response.get_json()
|
||||||
|
|
||||||
|
assert data["KeyMetadata"]["KeyId"] == "my-custom-key"
|
||||||
|
|
||||||
|
def test_list_keys(self, kms_client, auth_headers):
|
||||||
|
"""Test listing KMS keys."""
|
||||||
|
# Create some keys
|
||||||
|
kms_client.post("/kms/keys", json={"Description": "Key 1"}, headers=auth_headers)
|
||||||
|
kms_client.post("/kms/keys", json={"Description": "Key 2"}, headers=auth_headers)
|
||||||
|
|
||||||
|
response = kms_client.get("/kms/keys", headers=auth_headers)
|
||||||
|
|
||||||
|
assert response.status_code == 200
|
||||||
|
data = response.get_json()
|
||||||
|
|
||||||
|
assert "Keys" in data
|
||||||
|
assert len(data["Keys"]) == 2
|
||||||
|
|
||||||
|
def test_get_key(self, kms_client, auth_headers):
|
||||||
|
"""Test getting a specific key."""
|
||||||
|
# Create a key
|
||||||
|
create_response = kms_client.post(
|
||||||
|
"/kms/keys",
|
||||||
|
json={"KeyId": "test-key", "Description": "Test key"},
|
||||||
|
headers=auth_headers,
|
||||||
|
)
|
||||||
|
|
||||||
|
response = kms_client.get("/kms/keys/test-key", headers=auth_headers)
|
||||||
|
|
||||||
|
assert response.status_code == 200
|
||||||
|
data = response.get_json()
|
||||||
|
|
||||||
|
assert data["KeyMetadata"]["KeyId"] == "test-key"
|
||||||
|
assert data["KeyMetadata"]["Description"] == "Test key"
|
||||||
|
|
||||||
|
def test_get_nonexistent_key(self, kms_client, auth_headers):
|
||||||
|
"""Test getting a key that doesn't exist."""
|
||||||
|
response = kms_client.get("/kms/keys/nonexistent", headers=auth_headers)
|
||||||
|
|
||||||
|
assert response.status_code == 404
|
||||||
|
|
||||||
|
def test_delete_key(self, kms_client, auth_headers):
|
||||||
|
"""Test deleting a key."""
|
||||||
|
# Create a key
|
||||||
|
kms_client.post("/kms/keys", json={"KeyId": "test-key"}, headers=auth_headers)
|
||||||
|
|
||||||
|
# Delete it
|
||||||
|
response = kms_client.delete("/kms/keys/test-key", headers=auth_headers)
|
||||||
|
|
||||||
|
assert response.status_code == 204
|
||||||
|
|
||||||
|
# Verify it's gone
|
||||||
|
get_response = kms_client.get("/kms/keys/test-key", headers=auth_headers)
|
||||||
|
assert get_response.status_code == 404
|
||||||
|
|
||||||
|
def test_enable_disable_key(self, kms_client, auth_headers):
|
||||||
|
"""Test enabling and disabling a key."""
|
||||||
|
# Create a key
|
||||||
|
kms_client.post("/kms/keys", json={"KeyId": "test-key"}, headers=auth_headers)
|
||||||
|
|
||||||
|
# Disable
|
||||||
|
response = kms_client.post("/kms/keys/test-key/disable", headers=auth_headers)
|
||||||
|
assert response.status_code == 200
|
||||||
|
|
||||||
|
# Verify disabled
|
||||||
|
get_response = kms_client.get("/kms/keys/test-key", headers=auth_headers)
|
||||||
|
assert get_response.get_json()["KeyMetadata"]["Enabled"] is False
|
||||||
|
|
||||||
|
# Enable
|
||||||
|
response = kms_client.post("/kms/keys/test-key/enable", headers=auth_headers)
|
||||||
|
assert response.status_code == 200
|
||||||
|
|
||||||
|
# Verify enabled
|
||||||
|
get_response = kms_client.get("/kms/keys/test-key", headers=auth_headers)
|
||||||
|
assert get_response.get_json()["KeyMetadata"]["Enabled"] is True
|
||||||
|
|
||||||
|
|
||||||
|
class TestKMSEncryption:
|
||||||
|
"""Tests for KMS encryption operations."""
|
||||||
|
|
||||||
|
def test_encrypt_decrypt(self, kms_client, auth_headers):
|
||||||
|
"""Test encrypting and decrypting data."""
|
||||||
|
# Create a key
|
||||||
|
kms_client.post("/kms/keys", json={"KeyId": "test-key"}, headers=auth_headers)
|
||||||
|
|
||||||
|
plaintext = b"Hello, World!"
|
||||||
|
plaintext_b64 = base64.b64encode(plaintext).decode()
|
||||||
|
|
||||||
|
# Encrypt
|
||||||
|
encrypt_response = kms_client.post(
|
||||||
|
"/kms/encrypt",
|
||||||
|
json={"KeyId": "test-key", "Plaintext": plaintext_b64},
|
||||||
|
headers=auth_headers,
|
||||||
|
)
|
||||||
|
|
||||||
|
assert encrypt_response.status_code == 200
|
||||||
|
encrypt_data = encrypt_response.get_json()
|
||||||
|
|
||||||
|
assert "CiphertextBlob" in encrypt_data
|
||||||
|
assert encrypt_data["KeyId"] == "test-key"
|
||||||
|
|
||||||
|
# Decrypt
|
||||||
|
decrypt_response = kms_client.post(
|
||||||
|
"/kms/decrypt",
|
||||||
|
json={"CiphertextBlob": encrypt_data["CiphertextBlob"]},
|
||||||
|
headers=auth_headers,
|
||||||
|
)
|
||||||
|
|
||||||
|
assert decrypt_response.status_code == 200
|
||||||
|
decrypt_data = decrypt_response.get_json()
|
||||||
|
|
||||||
|
decrypted = base64.b64decode(decrypt_data["Plaintext"])
|
||||||
|
assert decrypted == plaintext
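# The field names used here (KeyId, Plaintext, CiphertextBlob) follow AWS KMS
# conventions, with binary payloads carried as base64 strings in the JSON body.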
|
||||||
|
|
||||||
|
def test_encrypt_with_context(self, kms_client, auth_headers):
|
||||||
|
"""Test encryption with encryption context."""
|
||||||
|
kms_client.post("/kms/keys", json={"KeyId": "test-key"}, headers=auth_headers)
|
||||||
|
|
||||||
|
plaintext = b"Contextualized data"
|
||||||
|
plaintext_b64 = base64.b64encode(plaintext).decode()
|
||||||
|
context = {"purpose": "testing", "bucket": "my-bucket"}
|
||||||
|
|
||||||
|
# Encrypt with context
|
||||||
|
encrypt_response = kms_client.post(
|
||||||
|
"/kms/encrypt",
|
||||||
|
json={
|
||||||
|
"KeyId": "test-key",
|
||||||
|
"Plaintext": plaintext_b64,
|
||||||
|
"EncryptionContext": context,
|
||||||
|
},
|
||||||
|
headers=auth_headers,
|
||||||
|
)
|
||||||
|
|
||||||
|
assert encrypt_response.status_code == 200
|
||||||
|
ciphertext = encrypt_response.get_json()["CiphertextBlob"]
|
||||||
|
|
||||||
|
# Decrypt with same context succeeds
|
||||||
|
decrypt_response = kms_client.post(
|
||||||
|
"/kms/decrypt",
|
||||||
|
json={
|
||||||
|
"CiphertextBlob": ciphertext,
|
||||||
|
"EncryptionContext": context,
|
||||||
|
},
|
||||||
|
headers=auth_headers,
|
||||||
|
)
|
||||||
|
|
||||||
|
assert decrypt_response.status_code == 200
|
||||||
|
|
||||||
|
# Decrypt with wrong context fails
|
||||||
|
wrong_context_response = kms_client.post(
|
||||||
|
"/kms/decrypt",
|
||||||
|
json={
|
||||||
|
"CiphertextBlob": ciphertext,
|
||||||
|
"EncryptionContext": {"wrong": "context"},
|
||||||
|
},
|
||||||
|
headers=auth_headers,
|
||||||
|
)
|
||||||
|
|
||||||
|
assert wrong_context_response.status_code == 400
|
||||||
|
|
||||||
|
def test_encrypt_missing_key_id(self, kms_client, auth_headers):
|
||||||
|
"""Test encryption without KeyId."""
|
||||||
|
response = kms_client.post(
|
||||||
|
"/kms/encrypt",
|
||||||
|
json={"Plaintext": base64.b64encode(b"data").decode()},
|
||||||
|
headers=auth_headers,
|
||||||
|
)
|
||||||
|
|
||||||
|
assert response.status_code == 400
|
||||||
|
assert "KeyId is required" in response.get_json()["message"]
|
||||||
|
|
||||||
|
def test_encrypt_missing_plaintext(self, kms_client, auth_headers):
|
||||||
|
"""Test encryption without Plaintext."""
|
||||||
|
kms_client.post("/kms/keys", json={"KeyId": "test-key"}, headers=auth_headers)
|
||||||
|
|
||||||
|
response = kms_client.post(
|
||||||
|
"/kms/encrypt",
|
||||||
|
json={"KeyId": "test-key"},
|
||||||
|
headers=auth_headers,
|
||||||
|
)
|
||||||
|
|
||||||
|
assert response.status_code == 400
|
||||||
|
assert "Plaintext is required" in response.get_json()["message"]
|
||||||
|
|
||||||
|
|
||||||
|
class TestKMSDataKey:
|
||||||
|
"""Tests for KMS data key generation."""
|
||||||
|
|
||||||
|
def test_generate_data_key(self, kms_client, auth_headers):
|
||||||
|
"""Test generating a data key."""
|
||||||
|
kms_client.post("/kms/keys", json={"KeyId": "test-key"}, headers=auth_headers)
|
||||||
|
|
||||||
|
response = kms_client.post(
|
||||||
|
"/kms/generate-data-key",
|
||||||
|
json={"KeyId": "test-key"},
|
||||||
|
headers=auth_headers,
|
||||||
|
)
|
||||||
|
|
||||||
|
assert response.status_code == 200
|
||||||
|
data = response.get_json()
|
||||||
|
|
||||||
|
assert "Plaintext" in data
|
||||||
|
assert "CiphertextBlob" in data
|
||||||
|
assert data["KeyId"] == "test-key"
|
||||||
|
|
||||||
|
# Verify plaintext key is 256 bits (32 bytes)
|
||||||
|
plaintext_key = base64.b64decode(data["Plaintext"])
|
||||||
|
assert len(plaintext_key) == 32
|
||||||
|
|
||||||
|
def test_generate_data_key_aes_128(self, kms_client, auth_headers):
|
||||||
|
"""Test generating an AES-128 data key."""
|
||||||
|
kms_client.post("/kms/keys", json={"KeyId": "test-key"}, headers=auth_headers)
|
||||||
|
|
||||||
|
response = kms_client.post(
|
||||||
|
"/kms/generate-data-key",
|
||||||
|
json={"KeyId": "test-key", "KeySpec": "AES_128"},
|
||||||
|
headers=auth_headers,
|
||||||
|
)
|
||||||
|
|
||||||
|
assert response.status_code == 200
|
||||||
|
data = response.get_json()
|
||||||
|
|
||||||
|
# Verify plaintext key is 128 bits (16 bytes)
|
||||||
|
plaintext_key = base64.b64decode(data["Plaintext"])
|
||||||
|
assert len(plaintext_key) == 16
|
||||||
|
|
||||||
|
def test_generate_data_key_without_plaintext(self, kms_client, auth_headers):
|
||||||
|
"""Test generating a data key without plaintext."""
|
||||||
|
kms_client.post("/kms/keys", json={"KeyId": "test-key"}, headers=auth_headers)
|
||||||
|
|
||||||
|
response = kms_client.post(
|
||||||
|
"/kms/generate-data-key-without-plaintext",
|
||||||
|
json={"KeyId": "test-key"},
|
||||||
|
headers=auth_headers,
|
||||||
|
)
|
||||||
|
|
||||||
|
assert response.status_code == 200
|
||||||
|
data = response.get_json()
|
||||||
|
|
||||||
|
assert "CiphertextBlob" in data
|
||||||
|
assert "Plaintext" not in data
|
||||||
|
|
||||||
|
|
||||||
|
class TestKMSReEncrypt:
|
||||||
|
"""Tests for KMS re-encryption."""
|
||||||
|
|
||||||
|
def test_re_encrypt(self, kms_client, auth_headers):
|
||||||
|
"""Test re-encrypting data with a different key."""
|
||||||
|
# Create two keys
|
||||||
|
kms_client.post("/kms/keys", json={"KeyId": "key-1"}, headers=auth_headers)
|
||||||
|
kms_client.post("/kms/keys", json={"KeyId": "key-2"}, headers=auth_headers)
|
||||||
|
|
||||||
|
# Encrypt with key-1
|
||||||
|
plaintext = b"Data to re-encrypt"
|
||||||
|
encrypt_response = kms_client.post(
|
||||||
|
"/kms/encrypt",
|
||||||
|
json={
|
||||||
|
"KeyId": "key-1",
|
||||||
|
"Plaintext": base64.b64encode(plaintext).decode(),
|
||||||
|
},
|
||||||
|
headers=auth_headers,
|
||||||
|
)
|
||||||
|
|
||||||
|
ciphertext = encrypt_response.get_json()["CiphertextBlob"]
|
||||||
|
|
||||||
|
# Re-encrypt with key-2
|
||||||
|
re_encrypt_response = kms_client.post(
|
||||||
|
"/kms/re-encrypt",
|
||||||
|
json={
|
||||||
|
"CiphertextBlob": ciphertext,
|
||||||
|
"DestinationKeyId": "key-2",
|
||||||
|
},
|
||||||
|
headers=auth_headers,
|
||||||
|
)
|
||||||
|
|
||||||
|
assert re_encrypt_response.status_code == 200
|
||||||
|
data = re_encrypt_response.get_json()
|
||||||
|
|
||||||
|
assert data["SourceKeyId"] == "key-1"
|
||||||
|
assert data["KeyId"] == "key-2"
|
||||||
|
|
||||||
|
# Verify new ciphertext can be decrypted
|
||||||
|
decrypt_response = kms_client.post(
|
||||||
|
"/kms/decrypt",
|
||||||
|
json={"CiphertextBlob": data["CiphertextBlob"]},
|
||||||
|
headers=auth_headers,
|
||||||
|
)
|
||||||
|
|
||||||
|
decrypted = base64.b64decode(decrypt_response.get_json()["Plaintext"])
|
||||||
|
assert decrypted == plaintext
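# The decrypt/encrypt round trip happens server-side, which is the point of
# ReEncrypt: data can be rotated to a new key without the caller ever
# handling the plaintext.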
|
||||||
|
|
||||||
|
|
||||||
|
class TestKMSRandom:
|
||||||
|
"""Tests for random number generation."""
|
||||||
|
|
||||||
|
def test_generate_random(self, kms_client, auth_headers):
|
||||||
|
"""Test generating random bytes."""
|
||||||
|
response = kms_client.post(
|
||||||
|
"/kms/generate-random",
|
||||||
|
json={"NumberOfBytes": 64},
|
||||||
|
headers=auth_headers,
|
||||||
|
)
|
||||||
|
|
||||||
|
assert response.status_code == 200
|
||||||
|
data = response.get_json()
|
||||||
|
|
||||||
|
random_bytes = base64.b64decode(data["Plaintext"])
|
||||||
|
assert len(random_bytes) == 64
|
||||||
|
|
||||||
|
def test_generate_random_default_size(self, kms_client, auth_headers):
|
||||||
|
"""Test generating random bytes with default size."""
|
||||||
|
response = kms_client.post(
|
||||||
|
"/kms/generate-random",
|
||||||
|
json={},
|
||||||
|
headers=auth_headers,
|
||||||
|
)
|
||||||
|
|
||||||
|
assert response.status_code == 200
|
||||||
|
data = response.get_json()
|
||||||
|
|
||||||
|
random_bytes = base64.b64decode(data["Plaintext"])
|
||||||
|
assert len(random_bytes) == 32 # Default is 32 bytes
|
||||||
|
|
||||||
|
|
||||||
|
class TestClientSideEncryption:
|
||||||
|
"""Tests for client-side encryption helpers."""
|
||||||
|
|
||||||
|
def test_generate_client_key(self, kms_client, auth_headers):
|
||||||
|
"""Test generating a client encryption key."""
|
||||||
|
response = kms_client.post(
|
||||||
|
"/kms/client/generate-key",
|
||||||
|
headers=auth_headers,
|
||||||
|
)
|
||||||
|
|
||||||
|
assert response.status_code == 200
|
||||||
|
data = response.get_json()
|
||||||
|
|
||||||
|
assert "key" in data
|
||||||
|
assert data["algorithm"] == "AES-256-GCM"
|
||||||
|
|
||||||
|
key = base64.b64decode(data["key"])
|
||||||
|
assert len(key) == 32
|
||||||
|
|
||||||
|
def test_client_encrypt_decrypt(self, kms_client, auth_headers):
|
||||||
|
"""Test client-side encryption and decryption."""
|
||||||
|
# Generate a key
|
||||||
|
key_response = kms_client.post("/kms/client/generate-key", headers=auth_headers)
|
||||||
|
key = key_response.get_json()["key"]
|
||||||
|
|
||||||
|
# Encrypt
|
||||||
|
plaintext = b"Client-side encrypted data"
|
||||||
|
encrypt_response = kms_client.post(
|
||||||
|
"/kms/client/encrypt",
|
||||||
|
json={
|
||||||
|
"Plaintext": base64.b64encode(plaintext).decode(),
|
||||||
|
"Key": key,
|
||||||
|
},
|
||||||
|
headers=auth_headers,
|
||||||
|
)
|
||||||
|
|
||||||
|
assert encrypt_response.status_code == 200
|
||||||
|
encrypted = encrypt_response.get_json()
|
||||||
|
|
||||||
|
# Decrypt
|
||||||
|
decrypt_response = kms_client.post(
|
||||||
|
"/kms/client/decrypt",
|
||||||
|
json={
|
||||||
|
"Ciphertext": encrypted["ciphertext"],
|
||||||
|
"Nonce": encrypted["nonce"],
|
||||||
|
"Key": key,
|
||||||
|
},
|
||||||
|
headers=auth_headers,
|
||||||
|
)
|
||||||
|
|
||||||
|
assert decrypt_response.status_code == 200
|
||||||
|
decrypted = base64.b64decode(decrypt_response.get_json()["Plaintext"])
|
||||||
|
assert decrypted == plaintext
|
||||||
|
|
||||||
|
|
||||||
|
class TestEncryptionMaterials:
|
||||||
|
"""Tests for S3 encryption materials endpoint."""
|
||||||
|
|
||||||
|
def test_get_encryption_materials(self, kms_client, auth_headers):
|
||||||
|
"""Test getting encryption materials for client-side S3 encryption."""
|
||||||
|
# Create a key
|
||||||
|
kms_client.post("/kms/keys", json={"KeyId": "s3-key"}, headers=auth_headers)
|
||||||
|
|
||||||
|
response = kms_client.post(
|
||||||
|
"/kms/materials/s3-key",
|
||||||
|
json={},
|
||||||
|
headers=auth_headers,
|
||||||
|
)
|
||||||
|
|
||||||
|
assert response.status_code == 200
|
||||||
|
data = response.get_json()
|
||||||
|
|
||||||
|
assert "PlaintextKey" in data
|
||||||
|
assert "EncryptedKey" in data
|
||||||
|
assert data["KeyId"] == "s3-key"
|
||||||
|
assert data["Algorithm"] == "AES-256-GCM"
|
||||||
|
|
||||||
|
# Verify key is 256 bits
|
||||||
|
key = base64.b64decode(data["PlaintextKey"])
|
||||||
|
assert len(key) == 32
|
||||||
|
|
||||||
|
|
||||||
|
class TestKMSAuthentication:
|
||||||
|
"""Tests for KMS authentication requirements."""
|
||||||
|
|
||||||
|
def test_unauthenticated_request_fails(self, kms_client):
|
||||||
|
"""Test that unauthenticated requests are rejected."""
|
||||||
|
response = kms_client.get("/kms/keys")
|
||||||
|
|
||||||
|
# Should fail with 403 (no credentials)
|
||||||
|
assert response.status_code == 403
|
||||||
|
|
||||||
|
def test_invalid_credentials_fail(self, kms_client):
|
||||||
|
"""Test that invalid credentials are rejected."""
|
||||||
|
response = kms_client.get(
|
||||||
|
"/kms/keys",
|
||||||
|
headers={
|
||||||
|
"X-Access-Key": "wrong-key",
|
||||||
|
"X-Secret-Key": "wrong-secret",
|
||||||
|
},
|
||||||
|
)
|
||||||
|
|
||||||
|
assert response.status_code == 403
|
||||||
286
tests/test_new_api_endpoints.py
Normal file
@@ -0,0 +1,286 @@
|
|||||||
|
"""Tests for newly implemented S3 API endpoints."""
|
||||||
|
import io
|
||||||
|
import pytest
|
||||||
|
from xml.etree.ElementTree import fromstring
|
||||||
|
|
||||||
|
|
||||||
|
# Helper to create file-like stream
|
||||||
|
def _stream(data: bytes):
|
||||||
|
return io.BytesIO(data)
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.fixture
|
||||||
|
def storage(app):
|
||||||
|
"""Get the storage instance from the app."""
|
||||||
|
return app.extensions["object_storage"]
|
||||||
|
|
||||||
|
|
||||||
|
class TestListObjectsV2:
|
||||||
|
"""Tests for ListObjectsV2 endpoint."""
|
||||||
|
|
||||||
|
def test_list_objects_v2_basic(self, client, signer, storage):
|
||||||
|
# Create bucket and objects
|
||||||
|
storage.create_bucket("v2-test")
|
||||||
|
storage.put_object("v2-test", "file1.txt", _stream(b"hello"))
|
||||||
|
storage.put_object("v2-test", "file2.txt", _stream(b"world"))
|
||||||
|
storage.put_object("v2-test", "folder/file3.txt", _stream(b"nested"))
|
||||||
|
|
||||||
|
# ListObjectsV2 request
|
||||||
|
headers = signer("GET", "/v2-test?list-type=2")
|
||||||
|
resp = client.get("/v2-test", query_string={"list-type": "2"}, headers=headers)
|
||||||
|
assert resp.status_code == 200
|
||||||
|
|
||||||
|
root = fromstring(resp.data)
|
||||||
|
assert root.find("KeyCount").text == "3"
|
||||||
|
assert root.find("IsTruncated").text == "false"
|
||||||
|
|
||||||
|
keys = [el.find("Key").text for el in root.findall("Contents")]
|
||||||
|
assert "file1.txt" in keys
|
||||||
|
assert "file2.txt" in keys
|
||||||
|
assert "folder/file3.txt" in keys
|
||||||
|
|
||||||
|
def test_list_objects_v2_with_prefix_and_delimiter(self, client, signer, storage):
|
||||||
|
storage.create_bucket("prefix-test")
|
||||||
|
storage.put_object("prefix-test", "photos/2023/jan.jpg", _stream(b"jan"))
|
||||||
|
storage.put_object("prefix-test", "photos/2023/feb.jpg", _stream(b"feb"))
|
||||||
|
storage.put_object("prefix-test", "photos/2024/mar.jpg", _stream(b"mar"))
|
||||||
|
storage.put_object("prefix-test", "docs/readme.md", _stream(b"readme"))
|
||||||
|
|
||||||
|
# List with prefix and delimiter
|
||||||
|
headers = signer("GET", "/prefix-test?list-type=2&prefix=photos/&delimiter=/")
|
||||||
|
resp = client.get(
|
||||||
|
"/prefix-test",
|
||||||
|
query_string={"list-type": "2", "prefix": "photos/", "delimiter": "/"},
|
||||||
|
headers=headers
|
||||||
|
)
|
||||||
|
assert resp.status_code == 200
|
||||||
|
|
||||||
|
root = fromstring(resp.data)
|
||||||
|
# Should show common prefixes for 2023/ and 2024/
|
||||||
|
prefixes = [el.find("Prefix").text for el in root.findall("CommonPrefixes")]
|
||||||
|
assert "photos/2023/" in prefixes
|
||||||
|
assert "photos/2024/" in prefixes
|
||||||
|
assert len(root.findall("Contents")) == 0 # No direct files under photos/
|
||||||
|
|
||||||
|
|
||||||
|
class TestPutBucketVersioning:
|
||||||
|
"""Tests for PutBucketVersioning endpoint."""
|
||||||
|
|
||||||
|
def test_put_versioning_enabled(self, client, signer, storage):
|
||||||
|
storage.create_bucket("version-test")
|
||||||
|
|
||||||
|
payload = b"""<?xml version="1.0" encoding="UTF-8"?>
|
||||||
|
<VersioningConfiguration>
|
||||||
|
<Status>Enabled</Status>
|
||||||
|
</VersioningConfiguration>"""
|
||||||
|
|
||||||
|
headers = signer("PUT", "/version-test?versioning", body=payload)
|
||||||
|
resp = client.put("/version-test", query_string={"versioning": ""}, data=payload, headers=headers)
|
||||||
|
assert resp.status_code == 200
|
||||||
|
|
||||||
|
# Verify via GET
|
||||||
|
headers = signer("GET", "/version-test?versioning")
|
||||||
|
resp = client.get("/version-test", query_string={"versioning": ""}, headers=headers)
|
||||||
|
root = fromstring(resp.data)
|
||||||
|
assert root.find("Status").text == "Enabled"
|
||||||
|
|
||||||
|
def test_put_versioning_suspended(self, client, signer, storage):
|
||||||
|
storage.create_bucket("suspend-test")
|
||||||
|
storage.set_bucket_versioning("suspend-test", True)
|
||||||
|
|
||||||
|
payload = b"""<?xml version="1.0" encoding="UTF-8"?>
|
||||||
|
<VersioningConfiguration>
|
||||||
|
<Status>Suspended</Status>
|
||||||
|
</VersioningConfiguration>"""
|
||||||
|
|
||||||
|
headers = signer("PUT", "/suspend-test?versioning", body=payload)
|
||||||
|
resp = client.put("/suspend-test", query_string={"versioning": ""}, data=payload, headers=headers)
|
||||||
|
assert resp.status_code == 200
|
||||||
|
|
||||||
|
headers = signer("GET", "/suspend-test?versioning")
|
||||||
|
resp = client.get("/suspend-test", query_string={"versioning": ""}, headers=headers)
|
||||||
|
root = fromstring(resp.data)
|
||||||
|
assert root.find("Status").text == "Suspended"
|
||||||
|
|
||||||
|
|
||||||
|
class TestDeleteBucketTagging:
|
||||||
|
"""Tests for DeleteBucketTagging endpoint."""
|
||||||
|
|
||||||
|
def test_delete_bucket_tags(self, client, signer, storage):
|
||||||
|
storage.create_bucket("tag-delete-test")
|
||||||
|
storage.set_bucket_tags("tag-delete-test", [{"Key": "env", "Value": "test"}])
|
||||||
|
|
||||||
|
# Delete tags
|
||||||
|
headers = signer("DELETE", "/tag-delete-test?tagging")
|
||||||
|
resp = client.delete("/tag-delete-test", query_string={"tagging": ""}, headers=headers)
|
||||||
|
assert resp.status_code == 204
|
||||||
|
|
||||||
|
# Verify tags are gone
|
||||||
|
headers = signer("GET", "/tag-delete-test?tagging")
|
||||||
|
resp = client.get("/tag-delete-test", query_string={"tagging": ""}, headers=headers)
|
||||||
|
assert resp.status_code == 404 # NoSuchTagSet
|
||||||
|
|
||||||
|
|
||||||
|
class TestDeleteBucketCors:
|
||||||
|
"""Tests for DeleteBucketCors endpoint."""
|
||||||
|
|
||||||
|
def test_delete_bucket_cors(self, client, signer, storage):
|
||||||
|
storage.create_bucket("cors-delete-test")
|
||||||
|
storage.set_bucket_cors("cors-delete-test", [
|
||||||
|
{"AllowedOrigins": ["*"], "AllowedMethods": ["GET"]}
|
||||||
|
])
|
||||||
|
|
||||||
|
# Delete CORS
|
||||||
|
headers = signer("DELETE", "/cors-delete-test?cors")
|
||||||
|
resp = client.delete("/cors-delete-test", query_string={"cors": ""}, headers=headers)
|
||||||
|
assert resp.status_code == 204
|
||||||
|
|
||||||
|
# Verify CORS is gone
|
||||||
|
headers = signer("GET", "/cors-delete-test?cors")
|
||||||
|
resp = client.get("/cors-delete-test", query_string={"cors": ""}, headers=headers)
|
||||||
|
assert resp.status_code == 404 # NoSuchCORSConfiguration
|
||||||
|
|
||||||
|
|
||||||
|
class TestGetBucketLocation:
|
||||||
|
"""Tests for GetBucketLocation endpoint."""
|
||||||
|
|
||||||
|
def test_get_bucket_location(self, client, signer, storage):
|
||||||
|
storage.create_bucket("location-test")
|
||||||
|
|
||||||
|
headers = signer("GET", "/location-test?location")
|
||||||
|
resp = client.get("/location-test", query_string={"location": ""}, headers=headers)
|
||||||
|
assert resp.status_code == 200
|
||||||
|
|
||||||
|
root = fromstring(resp.data)
|
||||||
|
assert root.tag == "LocationConstraint"
|
||||||
|
|
||||||
|
|
||||||
|
class TestBucketAcl:
|
||||||
|
"""Tests for Bucket ACL operations."""
|
||||||
|
|
||||||
|
def test_get_bucket_acl(self, client, signer, storage):
|
||||||
|
storage.create_bucket("acl-test")
|
||||||
|
|
||||||
|
headers = signer("GET", "/acl-test?acl")
|
||||||
|
resp = client.get("/acl-test", query_string={"acl": ""}, headers=headers)
|
||||||
|
assert resp.status_code == 200
|
||||||
|
|
||||||
|
root = fromstring(resp.data)
|
||||||
|
assert root.tag == "AccessControlPolicy"
|
||||||
|
assert root.find("Owner/ID") is not None
|
||||||
|
assert root.find(".//Permission").text == "FULL_CONTROL"
|
||||||
|
|
||||||
|
def test_put_bucket_acl(self, client, signer, storage):
|
||||||
|
storage.create_bucket("acl-put-test")
|
||||||
|
|
||||||
|
# PUT with canned ACL header
|
||||||
|
headers = signer("PUT", "/acl-put-test?acl")
|
||||||
|
headers["x-amz-acl"] = "public-read"
|
||||||
|
resp = client.put("/acl-put-test", query_string={"acl": ""}, headers=headers)
|
||||||
|
assert resp.status_code == 200
|
||||||
|
|
||||||
|
|
||||||
|
class TestCopyObject:
|
||||||
|
"""Tests for CopyObject operation."""
|
||||||
|
|
||||||
|
def test_copy_object_basic(self, client, signer, storage):
|
||||||
|
storage.create_bucket("copy-src")
|
||||||
|
storage.create_bucket("copy-dst")
|
||||||
|
storage.put_object("copy-src", "original.txt", _stream(b"original content"))
|
||||||
|
|
||||||
|
# Copy object
|
||||||
|
headers = signer("PUT", "/copy-dst/copied.txt")
|
||||||
|
headers["x-amz-copy-source"] = "/copy-src/original.txt"
|
||||||
|
resp = client.put("/copy-dst/copied.txt", headers=headers)
|
||||||
|
assert resp.status_code == 200
|
||||||
|
|
||||||
|
root = fromstring(resp.data)
|
||||||
|
assert root.tag == "CopyObjectResult"
|
||||||
|
assert root.find("ETag") is not None
|
||||||
|
assert root.find("LastModified") is not None
|
||||||
|
|
||||||
|
# Verify copy exists
|
||||||
|
path = storage.get_object_path("copy-dst", "copied.txt")
|
||||||
|
assert path.read_bytes() == b"original content"
|
||||||
|
|
||||||
|
def test_copy_object_with_metadata_replace(self, client, signer, storage):
|
||||||
|
storage.create_bucket("meta-src")
|
||||||
|
storage.create_bucket("meta-dst")
|
||||||
|
storage.put_object("meta-src", "source.txt", _stream(b"data"), metadata={"old": "value"})
|
||||||
|
|
||||||
|
# Copy with REPLACE directive
|
||||||
|
headers = signer("PUT", "/meta-dst/target.txt")
|
||||||
|
headers["x-amz-copy-source"] = "/meta-src/source.txt"
|
||||||
|
headers["x-amz-metadata-directive"] = "REPLACE"
|
||||||
|
headers["x-amz-meta-new"] = "metadata"
|
||||||
|
resp = client.put("/meta-dst/target.txt", headers=headers)
|
||||||
|
assert resp.status_code == 200
|
||||||
|
|
||||||
|
# Verify new metadata (note: header keys are Title-Cased)
|
||||||
|
meta = storage.get_object_metadata("meta-dst", "target.txt")
|
||||||
|
assert "New" in meta or "new" in meta
|
||||||
|
assert "old" not in meta and "Old" not in meta
|
||||||
|
|
||||||
|
|
||||||
|
class TestObjectTagging:
|
||||||
|
"""Tests for Object tagging operations."""
|
||||||
|
|
||||||
|
def test_put_get_delete_object_tags(self, client, signer, storage):
|
||||||
|
storage.create_bucket("obj-tag-test")
|
||||||
|
storage.put_object("obj-tag-test", "tagged.txt", _stream(b"content"))
|
||||||
|
|
||||||
|
# PUT tags
|
||||||
|
payload = b"""<?xml version="1.0" encoding="UTF-8"?>
|
||||||
|
<Tagging>
|
||||||
|
<TagSet>
|
||||||
|
<Tag><Key>project</Key><Value>demo</Value></Tag>
|
||||||
|
<Tag><Key>env</Key><Value>test</Value></Tag>
|
||||||
|
</TagSet>
|
||||||
|
</Tagging>"""
|
||||||
|
|
||||||
|
headers = signer("PUT", "/obj-tag-test/tagged.txt?tagging", body=payload)
|
||||||
|
resp = client.put(
|
||||||
|
"/obj-tag-test/tagged.txt",
|
||||||
|
query_string={"tagging": ""},
|
||||||
|
data=payload,
|
||||||
|
headers=headers
|
||||||
|
)
|
||||||
|
assert resp.status_code == 204
|
||||||
|
|
||||||
|
# GET tags
|
||||||
|
headers = signer("GET", "/obj-tag-test/tagged.txt?tagging")
|
||||||
|
resp = client.get("/obj-tag-test/tagged.txt", query_string={"tagging": ""}, headers=headers)
|
||||||
|
assert resp.status_code == 200
|
||||||
|
|
||||||
|
root = fromstring(resp.data)
|
||||||
|
tags = {el.find("Key").text: el.find("Value").text for el in root.findall(".//Tag")}
|
||||||
|
assert tags["project"] == "demo"
|
||||||
|
assert tags["env"] == "test"
|
||||||
|
|
||||||
|
# DELETE tags
|
||||||
|
headers = signer("DELETE", "/obj-tag-test/tagged.txt?tagging")
|
||||||
|
resp = client.delete("/obj-tag-test/tagged.txt", query_string={"tagging": ""}, headers=headers)
|
||||||
|
assert resp.status_code == 204
|
||||||
|
|
||||||
|
# Verify empty
|
||||||
|
headers = signer("GET", "/obj-tag-test/tagged.txt?tagging")
|
||||||
|
resp = client.get("/obj-tag-test/tagged.txt", query_string={"tagging": ""}, headers=headers)
|
||||||
|
root = fromstring(resp.data)
|
||||||
|
assert len(root.findall(".//Tag")) == 0
|
||||||
|
|
||||||
|
def test_object_tags_limit(self, client, signer, storage):
|
||||||
|
storage.create_bucket("tag-limit")
|
||||||
|
storage.put_object("tag-limit", "file.txt", _stream(b"x"))
|
||||||
|
|
||||||
|
# Try to set 11 tags (limit is 10)
|
||||||
|
tags = "".join(f"<Tag><Key>key{i}</Key><Value>val{i}</Value></Tag>" for i in range(11))
|
||||||
|
payload = f"<Tagging><TagSet>{tags}</TagSet></Tagging>".encode()
|
||||||
|
|
||||||
|
headers = signer("PUT", "/tag-limit/file.txt?tagging", body=payload)
|
||||||
|
resp = client.put(
|
||||||
|
"/tag-limit/file.txt",
|
||||||
|
query_string={"tagging": ""},
|
||||||
|
data=payload,
|
||||||
|
headers=headers
|
||||||
|
)
|
||||||
|
assert resp.status_code == 400
|
||||||
191
tests/test_security.py
Normal file
@@ -0,0 +1,191 @@
|
|||||||
|
import hashlib
|
||||||
|
import hmac
|
||||||
|
import pytest
|
||||||
|
from datetime import datetime, timedelta, timezone
|
||||||
|
from urllib.parse import quote
|
||||||
|
|
||||||
|
def _sign(key, msg):
|
||||||
|
return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()
|
||||||
|
|
||||||
|
def _get_signature_key(key, date_stamp, region_name, service_name):
|
||||||
|
k_date = _sign(("AWS4" + key).encode("utf-8"), date_stamp)
|
||||||
|
k_region = _sign(k_date, region_name)
|
||||||
|
k_service = _sign(k_region, service_name)
|
||||||
|
k_signing = _sign(k_service, "aws4_request")
|
||||||
|
return k_signing
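# The chain above is the standard AWS Signature Version 4 key derivation:
# starting from "AWS4" + secret key, HMAC-SHA256 is applied successively to
# the date stamp, region, service name, and the literal "aws4_request".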
|
||||||
|
|
||||||
|
def create_signed_headers(
|
||||||
|
method,
|
||||||
|
path,
|
||||||
|
headers=None,
|
||||||
|
body=None,
|
||||||
|
access_key="test",
|
||||||
|
secret_key="secret",
|
||||||
|
region="us-east-1",
|
||||||
|
service="s3",
|
||||||
|
timestamp=None
|
||||||
|
):
|
||||||
|
if headers is None:
|
||||||
|
headers = {}
|
||||||
|
|
||||||
|
if timestamp is None:
|
||||||
|
now = datetime.now(timezone.utc)
|
||||||
|
else:
|
||||||
|
now = timestamp
|
||||||
|
|
||||||
|
amz_date = now.strftime("%Y%m%dT%H%M%SZ")
|
||||||
|
date_stamp = now.strftime("%Y%m%d")
|
||||||
|
|
||||||
|
headers["X-Amz-Date"] = amz_date
|
||||||
|
headers["Host"] = "testserver"
|
||||||
|
|
||||||
|
canonical_uri = quote(path, safe="/-_.~")
|
||||||
|
canonical_query_string = ""
|
||||||
|
|
||||||
|
canonical_headers = ""
|
||||||
|
signed_headers_list = []
|
||||||
|
for k, v in sorted(headers.items(), key=lambda x: x[0].lower()):
|
||||||
|
canonical_headers += f"{k.lower()}:{v.strip()}\n"
|
||||||
|
signed_headers_list.append(k.lower())
|
||||||
|
|
||||||
|
signed_headers = ";".join(signed_headers_list)
|
||||||
|
|
||||||
|
payload_hash = hashlib.sha256(body or b"").hexdigest()
|
||||||
|
headers["X-Amz-Content-Sha256"] = payload_hash
|
||||||
|
|
||||||
|
canonical_request = f"{method}\n{canonical_uri}\n{canonical_query_string}\n{canonical_headers}\n{signed_headers}\n{payload_hash}"
|
||||||
|
|
||||||
|
credential_scope = f"{date_stamp}/{region}/{service}/aws4_request"
|
||||||
|
string_to_sign = f"AWS4-HMAC-SHA256\n{amz_date}\n{credential_scope}\n{hashlib.sha256(canonical_request.encode('utf-8')).hexdigest()}"
|
||||||
|
|
||||||
|
signing_key = _get_signature_key(secret_key, date_stamp, region, service)
|
||||||
|
signature = hmac.new(signing_key, string_to_sign.encode("utf-8"), hashlib.sha256).hexdigest()
|
||||||
|
|
||||||
|
headers["Authorization"] = (
|
||||||
|
f"AWS4-HMAC-SHA256 Credential={access_key}/{credential_scope}, "
|
||||||
|
f"SignedHeaders={signed_headers}, Signature={signature}"
|
||||||
|
)
|
||||||
|
return headers
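# A minimal usage sketch (the bucket name is hypothetical; access_key and
# secret_key default to the helper's "test"/"secret"):
#
#     headers = create_signed_headers("GET", "/my-bucket")
#     response = client.get("/my-bucket", headers=headers)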
|
||||||
|
|
||||||
|
def test_sigv4_old_date(client):
|
||||||
|
# Test with a date 20 minutes in the past
|
||||||
|
old_time = datetime.now(timezone.utc) - timedelta(minutes=20)
|
||||||
|
headers = create_signed_headers("GET", "/", timestamp=old_time)
|
||||||
|
|
||||||
|
response = client.get("/", headers=headers)
|
||||||
|
assert response.status_code == 403
|
||||||
|
assert b"Request timestamp too old" in response.data
|
||||||
|
|
||||||
|
def test_sigv4_future_date(client):
|
||||||
|
# Test with a date 20 minutes in the future
|
||||||
|
future_time = datetime.now(timezone.utc) + timedelta(minutes=20)
|
||||||
|
headers = create_signed_headers("GET", "/", timestamp=future_time)
|
||||||
|
|
||||||
|
response = client.get("/", headers=headers)
|
||||||
|
assert response.status_code == 403
|
||||||
|
assert b"Request timestamp too old" in response.data # The error message is the same
|
||||||
|
|
||||||
|
def test_path_traversal_in_key(client, signer):
    headers = signer("PUT", "/test-bucket")
    client.put("/test-bucket", headers=headers)

    # Try to upload with ".." in the key. Flask/Werkzeug may normalize
    # "/test-bucket/../secret.txt" to "/secret.txt" before it reaches the app,
    # in which case this returns 404/403 without exercising the storage check.
    headers = signer("PUT", "/test-bucket/../secret.txt", body=b"attack")
    client.put("/test-bucket/../secret.txt", headers=headers, data=b"attack")

    # A nested key such as "folder/../file.txt" is more likely to survive the
    # /<bucket_name>/<path:object_key> route intact and reach the app.
    headers = signer("PUT", "/test-bucket/folder/../file.txt", body=b"attack")
    client.put("/test-bucket/folder/../file.txt", headers=headers, data=b"attack")

    # Because the outcome depends on Flask's URL normalization, the storage-layer
    # check itself is verified directly in test_storage_path_traversal below.
|
||||||
|
|
||||||
|
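# Direct check of the storage layer's key sanitizer: rejecting ".." segments
# is what keeps a crafted key from escaping the bucket root on disk,
# independent of any URL normalization Flask performs.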
def test_storage_path_traversal(app):
    storage = app.extensions["object_storage"]
    from app.storage import StorageError, ObjectStorage
    from app.encrypted_storage import EncryptedObjectStorage

    # Get the underlying ObjectStorage if wrapped
    if isinstance(storage, EncryptedObjectStorage):
        storage = storage.storage

    with pytest.raises(StorageError, match="Object key contains parent directory references"):
        storage._sanitize_object_key("folder/../file.txt")

    with pytest.raises(StorageError, match="Object key contains parent directory references"):
        storage._sanitize_object_key("..")


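# HEAD requests mirror S3 semantics: status code and headers only, no body,
# so these tests assert on status and response headers alone.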
def test_head_bucket(client, signer):
    headers = signer("PUT", "/head-test")
    client.put("/head-test", headers=headers)

    headers = signer("HEAD", "/head-test")
    response = client.head("/head-test", headers=headers)
    assert response.status_code == 200

    headers = signer("HEAD", "/non-existent")
    response = client.head("/non-existent", headers=headers)
    assert response.status_code == 404


def test_head_object(client, signer):
    headers = signer("PUT", "/head-obj-test")
    client.put("/head-obj-test", headers=headers)

    headers = signer("PUT", "/head-obj-test/obj", body=b"content")
    client.put("/head-obj-test/obj", headers=headers, data=b"content")

    headers = signer("HEAD", "/head-obj-test/obj")
    response = client.head("/head-obj-test/obj", headers=headers)
    assert response.status_code == 200
    assert response.headers["ETag"]
    assert response.headers["Content-Length"] == "7"

    headers = signer("HEAD", "/head-obj-test/missing")
    response = client.head("/head-obj-test/missing", headers=headers)
    assert response.status_code == 404


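# Exercises the S3 multipart flow end to end: initiate (POST ?uploads),
# upload parts (PUT ?partNumber=N&uploadId=...), then list them back
# (GET ?uploadId=...) and inspect the ListPartsResult XML.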
def test_list_parts(client, signer):
    # Create bucket
    headers = signer("PUT", "/multipart-test")
    client.put("/multipart-test", headers=headers)

    # Initiate multipart upload
    headers = signer("POST", "/multipart-test/obj?uploads")
    response = client.post("/multipart-test/obj?uploads", headers=headers)
    assert response.status_code == 200
    from xml.etree.ElementTree import fromstring
    upload_id = fromstring(response.data).find("UploadId").text

    # Upload part 1
    headers = signer("PUT", f"/multipart-test/obj?partNumber=1&uploadId={upload_id}", body=b"part1")
    client.put(f"/multipart-test/obj?partNumber=1&uploadId={upload_id}", headers=headers, data=b"part1")

    # Upload part 2
    headers = signer("PUT", f"/multipart-test/obj?partNumber=2&uploadId={upload_id}", body=b"part2")
    client.put(f"/multipart-test/obj?partNumber=2&uploadId={upload_id}", headers=headers, data=b"part2")

    # List parts
    headers = signer("GET", f"/multipart-test/obj?uploadId={upload_id}")
    response = client.get(f"/multipart-test/obj?uploadId={upload_id}", headers=headers)
    assert response.status_code == 200

    root = fromstring(response.data)
    assert root.tag == "ListPartsResult"
    parts = root.findall("Part")
    assert len(parts) == 2
    assert parts[0].find("PartNumber").text == "1"
    assert parts[1].find("PartNumber").text == "2"

@@ -99,11 +99,11 @@ def test_delete_object_retries_when_locked(tmp_path, monkeypatch):
     original_unlink = Path.unlink
     attempts = {"count": 0}

-    def flaky_unlink(self):
+    def flaky_unlink(self, missing_ok=False):
         if self == target_path and attempts["count"] < 1:
             attempts["count"] += 1
             raise PermissionError("locked")
-        return original_unlink(self)
+        return original_unlink(self, missing_ok=missing_ok)

     monkeypatch.setattr(Path, "unlink", flaky_unlink)

@@ -220,7 +220,7 @@ def test_bucket_config_filename_allowed(tmp_path):
     storage.create_bucket("demo")
     storage.put_object("demo", ".bucket.json", io.BytesIO(b"{}"))

-    objects = storage.list_objects("demo")
+    objects = storage.list_objects_all("demo")
     assert any(meta.key == ".bucket.json" for meta in objects)

@@ -62,7 +62,7 @@ def test_bulk_delete_json_route(tmp_path: Path):
     assert set(payload["deleted"]) == {"first.txt", "missing.txt"}
     assert payload["errors"] == []

-    listing = storage.list_objects("demo")
+    listing = storage.list_objects_all("demo")
     assert {meta.key for meta in listing} == {"second.txt"}

@@ -92,5 +92,5 @@ def test_bulk_delete_validation(tmp_path: Path):
     assert limit_response.status_code == 400
     assert limit_response.get_json()["status"] == "error"

-    still_there = storage.list_objects("demo")
+    still_there = storage.list_objects_all("demo")
     assert {meta.key for meta in still_there} == {"keep.txt"}
268
tests/test_ui_encryption.py
Normal file
@@ -0,0 +1,268 @@
"""Tests for UI-based encryption configuration."""
import json
from pathlib import Path

import pytest

from app import create_app


def get_csrf_token(response):
    """Extract CSRF token from response HTML."""
    html = response.data.decode("utf-8")
    import re
    match = re.search(r'name="csrf_token"\s+value="([^"]+)"', html)
    return match.group(1) if match else None


def _make_encryption_app(tmp_path: Path, *, kms_enabled: bool = True):
    """Create an app with encryption enabled."""
    storage_root = tmp_path / "data"
    iam_config = tmp_path / "iam.json"
    bucket_policies = tmp_path / "bucket_policies.json"
    iam_payload = {
        "users": [
            {
                "access_key": "test",
                "secret_key": "secret",
                "display_name": "Test User",
                "policies": [{"bucket": "*", "actions": ["list", "read", "write", "delete", "policy"]}],
            },
            {
                "access_key": "readonly",
                "secret_key": "secret",
                "display_name": "Read Only User",
                "policies": [{"bucket": "*", "actions": ["list", "read"]}],
            },
        ]
    }
    iam_config.write_text(json.dumps(iam_payload))

    config = {
        "TESTING": True,
        "STORAGE_ROOT": storage_root,
        "IAM_CONFIG": iam_config,
        "BUCKET_POLICY_PATH": bucket_policies,
        "API_BASE_URL": "http://testserver",
        "SECRET_KEY": "testing",
        "ENCRYPTION_ENABLED": True,
    }

    if kms_enabled:
        config["KMS_ENABLED"] = True
        config["KMS_KEYS_PATH"] = str(tmp_path / "kms_keys.json")
        config["ENCRYPTION_MASTER_KEY_PATH"] = str(tmp_path / "master.key")

    app = create_app(config)
    storage = app.extensions["object_storage"]
    storage.create_bucket("test-bucket")
    return app


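# The UI stores a bucket's default-encryption settings in an S3-style
# GetBucketEncryption shape, e.g. (inferred from the assertions below):
#   {"Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]}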
class TestUIBucketEncryption:
    """Test bucket encryption configuration via UI."""

    def test_bucket_detail_shows_encryption_card(self, tmp_path):
        """Encryption card should be visible on bucket detail page."""
        app = _make_encryption_app(tmp_path)
        client = app.test_client()

        # Login first
        client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)

        response = client.get("/ui/buckets/test-bucket?tab=properties")
        assert response.status_code == 200

        html = response.data.decode("utf-8")
        assert "Default Encryption" in html
        assert "Encryption Algorithm" in html or "Default encryption disabled" in html

    def test_enable_aes256_encryption(self, tmp_path):
        """Should be able to enable AES-256 encryption."""
        app = _make_encryption_app(tmp_path)
        client = app.test_client()

        # Login
        client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)

        # Get CSRF token
        response = client.get("/ui/buckets/test-bucket?tab=properties")
        csrf_token = get_csrf_token(response)

        # Enable AES-256 encryption
        response = client.post(
            "/ui/buckets/test-bucket/encryption",
            data={
                "csrf_token": csrf_token,
                "action": "enable",
                "algorithm": "AES256",
            },
            follow_redirects=True,
        )

        assert response.status_code == 200
        html = response.data.decode("utf-8")
        # Should see success message or enabled state
        assert "AES-256" in html or "encryption enabled" in html.lower()

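    # SSE-KMS mirrors S3's aws:kms algorithm: the form posts algorithm
    # "aws:kms" plus a kms_key_id naming a key created through the app's
    # KMS extension.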
    def test_enable_kms_encryption(self, tmp_path):
        """Should be able to enable KMS encryption."""
        app = _make_encryption_app(tmp_path, kms_enabled=True)
        client = app.test_client()

        # Create a KMS key first
        with app.app_context():
            kms = app.extensions.get("kms")
            if kms:
                key = kms.create_key("test-key")
                key_id = key.key_id
            else:
                pytest.skip("KMS not available")

        # Login
        client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)

        # Get CSRF token
        response = client.get("/ui/buckets/test-bucket?tab=properties")
        csrf_token = get_csrf_token(response)

        # Enable KMS encryption
        response = client.post(
            "/ui/buckets/test-bucket/encryption",
            data={
                "csrf_token": csrf_token,
                "action": "enable",
                "algorithm": "aws:kms",
                "kms_key_id": key_id,
            },
            follow_redirects=True,
        )

        assert response.status_code == 200
        html = response.data.decode("utf-8")
        assert "KMS" in html or "encryption enabled" in html.lower()

    def test_disable_encryption(self, tmp_path):
        """Should be able to disable encryption."""
        app = _make_encryption_app(tmp_path)
        client = app.test_client()

        # Login
        client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)

        # First enable encryption
        response = client.get("/ui/buckets/test-bucket?tab=properties")
        csrf_token = get_csrf_token(response)

        client.post(
            "/ui/buckets/test-bucket/encryption",
            data={
                "csrf_token": csrf_token,
                "action": "enable",
                "algorithm": "AES256",
            },
        )

        # Now disable it
        response = client.get("/ui/buckets/test-bucket?tab=properties")
        csrf_token = get_csrf_token(response)

        response = client.post(
            "/ui/buckets/test-bucket/encryption",
            data={
                "csrf_token": csrf_token,
                "action": "disable",
            },
            follow_redirects=True,
        )

        assert response.status_code == 200
        html = response.data.decode("utf-8")
        assert "disabled" in html.lower() or "Default encryption disabled" in html

    def test_invalid_algorithm_rejected(self, tmp_path):
        """Invalid encryption algorithm should be rejected."""
        app = _make_encryption_app(tmp_path)
        client = app.test_client()

        # Login
        client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)

        response = client.get("/ui/buckets/test-bucket?tab=properties")
        csrf_token = get_csrf_token(response)

        response = client.post(
            "/ui/buckets/test-bucket/encryption",
            data={
                "csrf_token": csrf_token,
                "action": "enable",
                "algorithm": "INVALID",
            },
            follow_redirects=True,
        )

        assert response.status_code == 200
        html = response.data.decode("utf-8")
        assert "Invalid" in html or "danger" in html

    def test_encryption_persists_in_config(self, tmp_path):
        """Encryption config should persist in bucket config."""
        app = _make_encryption_app(tmp_path)
        client = app.test_client()

        # Login
        client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)

        # Enable encryption
        response = client.get("/ui/buckets/test-bucket?tab=properties")
        csrf_token = get_csrf_token(response)

        client.post(
            "/ui/buckets/test-bucket/encryption",
            data={
                "csrf_token": csrf_token,
                "action": "enable",
                "algorithm": "AES256",
            },
        )

        # Verify it's stored
        with app.app_context():
            storage = app.extensions["object_storage"]
            config = storage.get_bucket_encryption("test-bucket")

        assert "Rules" in config
        assert len(config["Rules"]) == 1
        assert config["Rules"][0]["ApplyServerSideEncryptionByDefault"]["SSEAlgorithm"] == "AES256"


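# The read-only user defined in _make_encryption_app holds only the "list"
# and "read" actions, so the encryption POST below should be refused.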
class TestUIEncryptionWithoutPermission:
    """Test encryption UI when user lacks permissions."""

    def test_readonly_user_cannot_change_encryption(self, tmp_path):
        """Read-only user should not be able to change encryption settings."""
        app = _make_encryption_app(tmp_path)
        client = app.test_client()

        # Login as readonly user
        client.post("/ui/login", data={"access_key": "readonly", "secret_key": "secret"}, follow_redirects=True)

        # This should fail or be rejected
        response = client.get("/ui/buckets/test-bucket?tab=properties")
        csrf_token = get_csrf_token(response)

        response = client.post(
            "/ui/buckets/test-bucket/encryption",
            data={
                "csrf_token": csrf_token,
                "action": "enable",
                "algorithm": "AES256",
            },
            follow_redirects=True,
        )

        # Should either redirect with error or show permission denied
        assert response.status_code == 200
        html = response.data.decode("utf-8")
        # Should contain error about permission denied
        assert "Access denied" in html or "permission" in html.lower() or "not authorized" in html.lower()

183
tests/test_ui_pagination.py
Normal file
@@ -0,0 +1,183 @@
"""Tests for UI pagination of bucket objects."""
import json
from io import BytesIO
from pathlib import Path

import pytest

from app import create_app


def _make_app(tmp_path: Path):
    """Create an app for testing."""
    storage_root = tmp_path / "data"
    iam_config = tmp_path / "iam.json"
    bucket_policies = tmp_path / "bucket_policies.json"
    iam_payload = {
        "users": [
            {
                "access_key": "test",
                "secret_key": "secret",
                "display_name": "Test User",
                "policies": [{"bucket": "*", "actions": ["list", "read", "write", "delete", "policy"]}],
            },
        ]
    }
    iam_config.write_text(json.dumps(iam_payload))

    flask_app = create_app(
        {
            "TESTING": True,
            "WTF_CSRF_ENABLED": False,
            "STORAGE_ROOT": storage_root,
            "IAM_CONFIG": iam_config,
            "BUCKET_POLICY_PATH": bucket_policies,
        }
    )
    return flask_app


class TestPaginatedObjectListing:
    """Test paginated object listing API."""

    def test_objects_api_returns_paginated_results(self, tmp_path):
        """Objects API should return paginated results."""
        app = _make_app(tmp_path)
        storage = app.extensions["object_storage"]
        storage.create_bucket("test-bucket")

        # Create 10 test objects
        for i in range(10):
            storage.put_object("test-bucket", f"file{i:02d}.txt", BytesIO(b"content"))

        with app.test_client() as client:
            # Login first
            client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)

            # Request first page of 3 objects
            resp = client.get("/ui/buckets/test-bucket/objects?max_keys=3")
            assert resp.status_code == 200

            data = resp.get_json()
            assert len(data["objects"]) == 3
            assert data["is_truncated"] is True
            assert data["next_continuation_token"] is not None
            assert data["total_count"] == 10

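    # The continuation token mirrors S3's ListObjectsV2 semantics: an opaque
    # cursor returned when a page is truncated and passed back verbatim to
    # fetch the next page.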
    def test_objects_api_pagination_continuation(self, tmp_path):
        """Objects API should support continuation tokens."""
        app = _make_app(tmp_path)
        storage = app.extensions["object_storage"]
        storage.create_bucket("test-bucket")

        # Create 5 test objects
        for i in range(5):
            storage.put_object("test-bucket", f"file{i:02d}.txt", BytesIO(b"content"))

        with app.test_client() as client:
            client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)

            # Get first page
            resp = client.get("/ui/buckets/test-bucket/objects?max_keys=2")
            assert resp.status_code == 200
            data = resp.get_json()

            first_page_keys = [obj["key"] for obj in data["objects"]]
            assert len(first_page_keys) == 2
            assert data["is_truncated"] is True

            # Get second page
            token = data["next_continuation_token"]
            resp = client.get(f"/ui/buckets/test-bucket/objects?max_keys=2&continuation_token={token}")
            assert resp.status_code == 200
            data = resp.get_json()

            second_page_keys = [obj["key"] for obj in data["objects"]]
            assert len(second_page_keys) == 2

            # No overlap between pages
            assert set(first_page_keys).isdisjoint(set(second_page_keys))

    def test_objects_api_prefix_filter(self, tmp_path):
        """Objects API should support prefix filtering."""
        app = _make_app(tmp_path)
        storage = app.extensions["object_storage"]
        storage.create_bucket("test-bucket")

        # Create objects with different prefixes
        storage.put_object("test-bucket", "logs/access.log", BytesIO(b"log"))
        storage.put_object("test-bucket", "logs/error.log", BytesIO(b"log"))
        storage.put_object("test-bucket", "data/file.txt", BytesIO(b"data"))

        with app.test_client() as client:
            client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)

            # Filter by prefix
            resp = client.get("/ui/buckets/test-bucket/objects?prefix=logs/")
            assert resp.status_code == 200
            data = resp.get_json()

            keys = [obj["key"] for obj in data["objects"]]
            assert all(k.startswith("logs/") for k in keys)
            assert len(keys) == 2

    def test_objects_api_requires_authentication(self, tmp_path):
        """Objects API should require login."""
        app = _make_app(tmp_path)
        storage = app.extensions["object_storage"]
        storage.create_bucket("test-bucket")

        with app.test_client() as client:
            # Don't login
            resp = client.get("/ui/buckets/test-bucket/objects")
            # Should redirect to login
            assert resp.status_code == 302
            assert "/ui/login" in resp.headers.get("Location", "")

    def test_objects_api_returns_object_metadata(self, tmp_path):
        """Objects API should return complete object metadata."""
        app = _make_app(tmp_path)
        storage = app.extensions["object_storage"]
        storage.create_bucket("test-bucket")
        storage.put_object("test-bucket", "test.txt", BytesIO(b"test content"))

        with app.test_client() as client:
            client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)

            resp = client.get("/ui/buckets/test-bucket/objects")
            assert resp.status_code == 200
            data = resp.get_json()

            assert len(data["objects"]) == 1
            obj = data["objects"][0]

            # Check all expected fields
            assert obj["key"] == "test.txt"
            assert obj["size"] == 12  # len("test content")
            assert "last_modified" in obj
            assert "last_modified_display" in obj
            assert "etag" in obj
            assert "preview_url" in obj
            assert "download_url" in obj
            assert "delete_endpoint" in obj

    def test_bucket_detail_page_loads_without_objects(self, tmp_path):
        """Bucket detail page should load even with many objects."""
        app = _make_app(tmp_path)
        storage = app.extensions["object_storage"]
        storage.create_bucket("test-bucket")

        # Create many objects
        for i in range(100):
            storage.put_object("test-bucket", f"file{i:03d}.txt", BytesIO(b"x"))

        with app.test_client() as client:
            client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)

            # The page should load quickly (objects loaded via JS)
            resp = client.get("/ui/buckets/test-bucket")
            assert resp.status_code == 200

            html = resp.data.decode("utf-8")
            # Should have the JavaScript loading infrastructure
            assert "loadObjects" in html or "objectsApiUrl" in html

@@ -70,8 +70,12 @@ def test_ui_bucket_policy_enforcement_toggle(tmp_path: Path, enforce: bool):
         assert b"Access denied by bucket policy" in response.data
     else:
         assert response.status_code == 200
-        assert b"vid.mp4" in response.data
         assert b"Access denied by bucket policy" not in response.data
+        # Objects are now loaded via async API - check the objects endpoint
+        objects_response = client.get("/ui/buckets/testbucket/objects")
+        assert objects_response.status_code == 200
+        data = objects_response.get_json()
+        assert any(obj["key"] == "vid.mp4" for obj in data["objects"])


 def test_ui_bucket_policy_disabled_by_default(tmp_path: Path):

@@ -109,5 +113,9 @@ def test_ui_bucket_policy_disabled_by_default(tmp_path: Path):
     client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)
     response = client.get("/ui/buckets/testbucket", follow_redirects=True)
     assert response.status_code == 200
-    assert b"vid.mp4" in response.data
     assert b"Access denied by bucket policy" not in response.data
+    # Objects are now loaded via async API - check the objects endpoint
+    objects_response = client.get("/ui/buckets/testbucket/objects")
+    assert objects_response.status_code == 200
+    data = objects_response.get_json()
+    assert any(obj["key"] == "vid.mp4" for obj in data["objects"])