13 Commits

Author SHA1 Message Date
e792b86485 (UI): Add lifecycle, CORS, ACL, and move/copy object features 2026-01-01 16:48:44 +08:00
cdb86aeea7 Implement Object Lock, Event Notifications, SSE-C, and Access Logging 2025-12-31 23:40:46 +08:00
cdbc156b5b Implement 9 S3 compatibility features: ACLs, range requests, lifecycle enforcement, replication ALL mode, bulk delete with VersionId, KMS integration, copy conditionals, response header overrides, and SigV4 session tokens 2025-12-31 19:12:54 +08:00
1df8ff9d25 Clean up code comments 2025-12-31 18:00:03 +08:00
05f1b00473 Update Dockerfile Python runtime 2025-12-31 14:13:30 +08:00
5ebc97300e Update README 2025-12-31 14:12:37 +08:00
d2f9c3bded Update README 2025-12-31 14:10:55 +08:00
9f347f2caa Fix brand typos 2025-12-31 14:02:50 +08:00
4ab58e59c2 Optimize S3 performance: add caching, per-bucket locks, streaming encryption 2025-12-29 18:12:28 +08:00
32232211a1 Revamp UI/UX: bucket icons, dynamic metrics, mobile docs navigation, rework IAM UI, add JSON auto-indent to policy editors 2025-12-29 17:37:56 +08:00
1cacb80dd6 Fix replication pause, multipart cache, and select all with virtual scroll 2025-12-29 14:46:06 +08:00
e89bbb62dc Fix: pausing and then resuming replication did not continue replicating the remaining pending objects; improve documentation 2025-12-29 14:05:17 +08:00
c8eb3de629 Fix issues -- Bug fixes:
      - Fix duplicate _legacy_version_dir check in storage.py
      - Fix max_size_bytes -> max_bytes param in quota handler
      - Move base64 import to module level in s3_api.py
      - Add retry logic and atomic file ops to multipart upload
      - Add shutdown() method to ReplicationManager

      Performance:
      - Add LRU eviction with OrderedDict to object cache
      - Add cache version tracking for stale read detection
      - Add streaming uploads for large files (>10 MiB) in replication
      - Create _find_element() XML parsing helpers

      Security:
      - Gate SigV4 debug logging behind DEBUG_SIGV4 config
2025-12-29 12:46:23 +08:00
43 changed files with 5190 additions and 940 deletions

View File

@@ -1,5 +1,5 @@
# syntax=docker/dockerfile:1.7
FROM python:3.11-slim
FROM python:3.12.12-slim
ENV PYTHONDONTWRITEBYTECODE=1 \
PYTHONUNBUFFERED=1

300
README.md
View File

@@ -1,117 +1,251 @@
# MyFSIO (Flask S3 + IAM)
# MyFSIO
MyFSIO is a batteries-included, Flask-based recreation of Amazon S3 and IAM workflows built for local development. The design mirrors the [AWS S3 documentation](https://docs.aws.amazon.com/s3/) wherever practical: bucket naming, Signature Version 4 presigning, Version 2012-10-17 bucket policies, IAM-style users, and familiar REST endpoints.
A lightweight, S3-compatible object storage system built with Flask. MyFSIO implements core AWS S3 REST API operations with filesystem-backed storage, making it ideal for local development, testing, and self-hosted storage scenarios.
## Why MyFSIO?
## Features
- **Dual servers:** Run both the API (port 5000) and UI (port 5100) with a single command: `python run.py`.
- **IAM + access keys:** Users, access keys, key rotation, and bucket-scoped actions (`list/read/write/delete/policy`) now live in `data/.myfsio.sys/config/iam.json` and are editable from the IAM dashboard.
- **Bucket policies + hot reload:** `data/.myfsio.sys/config/bucket_policies.json` uses AWS' policy grammar (Version `2012-10-17`) with a built-in watcher, so editing the JSON file applies immediately. The UI also ships Public/Private/Custom presets for faster edits.
- **Presigned URLs everywhere:** Signature Version 4 presigned URLs respect IAM + bucket policies and replace the now-removed "share link" feature for public access scenarios.
- **Modern UI:** Responsive tables, quick filters, preview sidebar, object-level delete buttons, a presign modal, and an inline JSON policy editor that respects dark mode keep bucket management friendly. The object browser supports folder navigation, infinite scroll pagination, bulk operations, and automatic retry on load failures.
- **Tests & health:** `/healthz` for smoke checks and `pytest` coverage for IAM, CRUD, presign, and policy flows.
**Core Storage**
- S3-compatible REST API with AWS Signature Version 4 authentication
- Bucket and object CRUD operations
- Object versioning with version history
- Multipart uploads for large files
- Presigned URLs (1 second to 7 days validity)
## Architecture at a Glance
**Security & Access Control**
- IAM users with access key management and rotation
- Bucket policies (AWS Policy Version 2012-10-17)
- Server-side encryption (SSE-S3 and SSE-KMS)
- Built-in Key Management Service (KMS)
- Rate limiting per endpoint
**Advanced Features**
- Cross-bucket replication to remote S3-compatible endpoints
- Hot-reload for bucket policies (no restart required)
- CORS configuration per bucket
**Management UI**
- Web console for bucket and object management
- IAM dashboard for user administration
- Inline JSON policy editor with presets
- Object browser with folder navigation and bulk operations
- Dark mode support
## Architecture
```
+-----------------+       +----------------+
|   API Server    |<----->| Object storage |
|   (port 5000)   |       |  (filesystem)  |
| - S3 routes     |       +----------------+
| - Presigned URLs|
| - Bucket policy |
+-----------------+
         ^
         |
+-----------------+
|   UI Server     |
|   (port 5100)   |
| - Auth console  |
| - IAM dashboard |
| - Bucket editor |
+-----------------+

+------------------+         +------------------+
|   API Server     |         |   UI Server      |
|   (port 5000)    |         |   (port 5100)    |
|                  |         |                  |
| - S3 REST API    |<------->| - Web Console    |
| - SigV4 Auth     |         | - IAM Dashboard  |
| - Presign URLs   |         | - Bucket Editor  |
+--------+---------+         +------------------+
         |
         v
+------------------+         +------------------+
|  Object Storage  |         | System Metadata  |
|  (filesystem)    |         | (.myfsio.sys/)   |
|                  |         |                  |
| data/<bucket>/   |         | - IAM config     |
|   <objects>      |         | - Bucket policies|
|                  |         | - Encryption keys|
+------------------+         +------------------+
```
Both apps load the same configuration via `AppConfig` so IAM data and bucket policies stay consistent no matter which process you run.
Bucket policies are automatically reloaded whenever `bucket_policies.json` changes—no restarts required.
## Getting Started
## Quick Start
```bash
# Clone and setup
git clone https://gitea.jzwsite.com/kqjy/MyFSIO
cd MyFSIO
python -m venv .venv
. .venv/Scripts/activate # PowerShell: .\.venv\Scripts\Activate.ps1
# Activate virtual environment
# Windows PowerShell:
.\.venv\Scripts\Activate.ps1
# Windows CMD:
.venv\Scripts\activate.bat
# Linux/macOS:
source .venv/bin/activate
# Install dependencies
pip install -r requirements.txt
# Run both API and UI (default)
# Start both servers
python run.py
# Or run individually:
# python run.py --mode api
# python run.py --mode ui
# Or start individually
python run.py --mode api # API only (port 5000)
python run.py --mode ui # UI only (port 5100)
```
Visit `http://127.0.0.1:5100/ui` for the console and `http://127.0.0.1:5000/` for the raw API. Override ports/hosts with the environment variables listed below.
**Default Credentials:** `localadmin` / `localadmin`
## IAM, Access Keys, and Bucket Policies
- First run creates `data/.myfsio.sys/config/iam.json` with `localadmin / localadmin` (full control). Sign in via the UI, then use the **IAM** tab to create users, rotate secrets, or edit inline policies without touching JSON by hand.
- Bucket policies live in `data/.myfsio.sys/config/bucket_policies.json` and follow the AWS `arn:aws:s3:::bucket/key` resource syntax with Version `2012-10-17`. Attach/replace/remove policies from the bucket detail page or edit the JSON by hand—changes hot reload automatically.
- IAM actions include extended verbs (`iam:list_users`, `iam:create_user`, `iam:update_policy`, etc.) so you can control who is allowed to manage other users and policies.
### Bucket Policy Presets & Hot Reload
- **Presets:** Every bucket detail view includes Public (read-only), Private (detach policy), and Custom presets. Public auto-populates a policy that grants anonymous `s3:ListBucket` + `s3:GetObject` access to the entire bucket.
- **Custom drafts:** Switching back to Custom restores your last manual edit so you can toggle between presets without losing work.
- **Hot reload:** The server watches `bucket_policies.json` and reloads statements on the fly, which is ideal for editing policies in your favorite editor while testing via curl or the UI.
## Presigned URLs
Presigned URLs follow the AWS CLI playbook:
- Call `POST /presign/<bucket>/<key>` (or use the "Presign" button in the UI) to request a Signature Version 4 URL valid for 1 second to 7 days.
- The generated URL honors IAM permissions and bucket-policy decisions both at generation time and again when it is fetched.
- Because presigned URLs cover both authenticated and public sharing scenarios, the legacy "share link" feature has been removed.
- **Web Console:** http://127.0.0.1:5100/ui
- **API Endpoint:** http://127.0.0.1:5000
## Configuration
| Variable | Default | Description |
| --- | --- | --- |
| `STORAGE_ROOT` | `<project>/data` | Filesystem root for bucket directories |
| `MAX_UPLOAD_SIZE` | `1073741824` | Maximum upload size (bytes) |
| `UI_PAGE_SIZE` | `100` | `MaxKeys` hint for listings |
| `SECRET_KEY` | `dev-secret-key` | Flask session secret for the UI |
| `IAM_CONFIG` | `<project>/data/.myfsio.sys/config/iam.json` | IAM user + policy store |
| `BUCKET_POLICY_PATH` | `<project>/data/.myfsio.sys/config/bucket_policies.json` | Bucket policy store |
| `API_BASE_URL` | `http://127.0.0.1:5000` | Used by the UI when calling API endpoints (presign, bucket policy) |
| `AWS_REGION` | `us-east-1` | Region used in Signature V4 scope |
| `AWS_SERVICE` | `s3` | Service used in Signature V4 scope |
|----------|---------|-------------|
| `STORAGE_ROOT` | `./data` | Filesystem root for bucket storage |
| `IAM_CONFIG` | `.myfsio.sys/config/iam.json` | IAM user and policy store |
| `BUCKET_POLICY_PATH` | `.myfsio.sys/config/bucket_policies.json` | Bucket policy store |
| `API_BASE_URL` | `http://127.0.0.1:5000` | API endpoint for UI calls |
| `MAX_UPLOAD_SIZE` | `1073741824` | Maximum upload size in bytes (1 GB) |
| `MULTIPART_MIN_PART_SIZE` | `5242880` | Minimum multipart part size (5 MB) |
| `UI_PAGE_SIZE` | `100` | Default page size for listings |
| `SECRET_KEY` | `dev-secret-key` | Flask session secret |
| `AWS_REGION` | `us-east-1` | Region for SigV4 signing |
| `AWS_SERVICE` | `s3` | Service name for SigV4 signing |
| `ENCRYPTION_ENABLED` | `false` | Enable server-side encryption |
| `KMS_ENABLED` | `false` | Enable Key Management Service |
| `LOG_LEVEL` | `INFO` | Logging verbosity |
> Buckets now live directly under `data/` while system metadata (versions, IAM, bucket policies, multipart uploads, etc.) lives in `data/.myfsio.sys`.
## API Cheatsheet (IAM headers required)
## Data Layout
```
GET    /                           -> List buckets (XML)
PUT    /<bucket>                   -> Create bucket
DELETE /<bucket>                   -> Delete bucket (must be empty)
GET    /<bucket>                   -> List objects (XML)
PUT    /<bucket>/<key>             -> Upload object (binary stream)
GET    /<bucket>/<key>             -> Download object
DELETE /<bucket>/<key>             -> Delete object
POST   /presign/<bucket>/<key>     -> Generate AWS SigV4 presigned URL (JSON)
GET    /bucket-policy/<bucket>     -> Fetch bucket policy (JSON)
PUT    /bucket-policy/<bucket>     -> Attach/replace bucket policy (JSON)
DELETE /bucket-policy/<bucket>     -> Remove bucket policy

data/
├── <bucket>/                     # User buckets with objects
└── .myfsio.sys/                  # System metadata
    ├── config/
    │   ├── iam.json              # IAM users and policies
    │   ├── bucket_policies.json  # Bucket policies
    │   ├── replication_rules.json
    │   └── connections.json      # Remote S3 connections
    ├── buckets/<bucket>/
    │   ├── meta/                 # Object metadata (.meta.json)
    │   ├── versions/             # Archived object versions
    │   └── .bucket.json          # Bucket config (versioning, CORS)
    ├── multipart/                # Active multipart uploads
    └── keys/                     # Encryption keys (SSE-S3/KMS)
```
## API Reference
All endpoints require AWS Signature Version 4 authentication unless using presigned URLs or public bucket policies.
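Any SigV4-capable S3 client can talk to the API directly. A minimal sketch with boto3 pointed at the local endpoint; the access key pair below is a placeholder for one created in the IAM dashboard, and not every client call is guaranteed to be supported by this server:

```python
import boto3
from botocore.config import Config

# Placeholder credentials: create a real key pair in the IAM dashboard first.
s3 = boto3.client(
    "s3",
    endpoint_url="http://127.0.0.1:5000",
    aws_access_key_id="EXAMPLEKEY",
    aws_secret_access_key="EXAMPLESECRET",
    region_name="us-east-1",                       # must match AWS_REGION in the SigV4 scope
    config=Config(s3={"addressing_style": "path"}),  # path-style URLs for a local endpoint
)

s3.create_bucket(Bucket="demo")                                        # PUT /<bucket>
s3.put_object(Bucket="demo", Key="hello.txt", Body=b"hi there")        # PUT /<bucket>/<key>
print(s3.get_object(Bucket="demo", Key="hello.txt")["Body"].read())    # GET /<bucket>/<key>
print([b["Name"] for b in s3.list_buckets()["Buckets"]])               # GET /
```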
### Bucket Operations
| Method | Endpoint | Description |
|--------|----------|-------------|
| `GET` | `/` | List all buckets |
| `PUT` | `/<bucket>` | Create bucket |
| `DELETE` | `/<bucket>` | Delete bucket (must be empty) |
| `HEAD` | `/<bucket>` | Check bucket exists |
### Object Operations
| Method | Endpoint | Description |
|--------|----------|-------------|
| `GET` | `/<bucket>` | List objects (supports `list-type=2`) |
| `PUT` | `/<bucket>/<key>` | Upload object |
| `GET` | `/<bucket>/<key>` | Download object |
| `DELETE` | `/<bucket>/<key>` | Delete object |
| `HEAD` | `/<bucket>/<key>` | Get object metadata |
| `POST` | `/<bucket>/<key>?uploads` | Initiate multipart upload |
| `PUT` | `/<bucket>/<key>?partNumber=N&uploadId=X` | Upload part |
| `POST` | `/<bucket>/<key>?uploadId=X` | Complete multipart upload |
| `DELETE` | `/<bucket>/<key>?uploadId=X` | Abort multipart upload |
### Presigned URLs
| Method | Endpoint | Description |
|--------|----------|-------------|
| `POST` | `/presign/<bucket>/<key>` | Generate presigned URL |
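Presigning is a client-side SigV4 computation, so an S3 client pointed at this API can also mint presigned URLs without calling `/presign`; a short sketch with the same placeholder credentials as above:

```python
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://127.0.0.1:5000",
    aws_access_key_id="EXAMPLEKEY",
    aws_secret_access_key="EXAMPLESECRET",
    region_name="us-east-1",
)

# Shareable GET link, valid for one hour; the server still evaluates
# IAM and bucket policy when the URL is fetched.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "demo", "Key": "hello.txt"},
    ExpiresIn=3600,
)
print(url)
```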
### Bucket Policies
| Method | Endpoint | Description |
|--------|----------|-------------|
| `GET` | `/bucket-policy/<bucket>` | Get bucket policy |
| `PUT` | `/bucket-policy/<bucket>` | Set bucket policy |
| `DELETE` | `/bucket-policy/<bucket>` | Delete bucket policy |
### Versioning
| Method | Endpoint | Description |
|--------|----------|-------------|
| `GET` | `/<bucket>/<key>?versionId=X` | Get specific version |
| `DELETE` | `/<bucket>/<key>?versionId=X` | Delete specific version |
| `GET` | `/<bucket>?versions` | List object versions |
### Health Check
| Method | Endpoint | Description |
|--------|----------|-------------|
| `GET` | `/healthz` | Health check endpoint |
## IAM & Access Control
### Users and Access Keys
On first run, MyFSIO creates a default admin user (`localadmin`/`localadmin`). Use the IAM dashboard to:
- Create and delete users
- Generate and rotate access keys
- Attach inline policies to users
- Control IAM management permissions
### Bucket Policies
Bucket policies follow AWS policy grammar (Version `2012-10-17`) with support for:
- Principal-based access (`*` for anonymous, specific users)
- Action-based permissions (`s3:GetObject`, `s3:PutObject`, etc.)
- Resource patterns (`arn:aws:s3:::bucket/*`)
- Condition keys
**Policy Presets:**
- **Public:** Grants anonymous read access (`s3:GetObject`, `s3:ListBucket`)
- **Private:** Removes bucket policy (IAM-only access)
- **Custom:** Manual policy editing with draft preservation
Policies hot-reload when the JSON file changes.
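For illustration, the Public preset produces a policy along these lines (sketched here as a Python dict; the Sid values are made up). Attach it from the bucket detail page, via `PUT /bucket-policy/<bucket>`, or by editing `bucket_policies.json` directly and letting the watcher pick it up:

```python
import json

bucket = "demo"
public_read_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicList",                       # illustrative Sid
            "Effect": "Allow",
            "Principal": "*",                          # anonymous access
            "Action": ["s3:ListBucket"],
            "Resource": [f"arn:aws:s3:::{bucket}"],
        },
        {
            "Sid": "PublicRead",                       # illustrative Sid
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject"],
            "Resource": [f"arn:aws:s3:::{bucket}/*"],
        },
    ],
}
print(json.dumps(public_read_policy, indent=2))
```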
## Server-Side Encryption
MyFSIO supports two encryption modes:
- **SSE-S3:** Server-managed keys with automatic key rotation
- **SSE-KMS:** Customer-managed keys via built-in KMS
Enable encryption with:
```bash
ENCRYPTION_ENABLED=true python run.py
```
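A hedged boto3 sketch of requesting encryption per object through the standard S3 headers (placeholder credentials again; how per-request headers interact with a bucket-level default is up to the server configuration):

```python
import boto3

s3 = boto3.client("s3", endpoint_url="http://127.0.0.1:5000",
                  aws_access_key_id="EXAMPLEKEY", aws_secret_access_key="EXAMPLESECRET",
                  region_name="us-east-1")

# SSE-S3: server-managed key
s3.put_object(Bucket="demo", Key="doc.txt", Body=b"secret",
              ServerSideEncryption="AES256")

# SSE-KMS via the built-in KMS (requires KMS_ENABLED=true)
s3.put_object(Bucket="demo", Key="doc2.txt", Body=b"secret",
              ServerSideEncryption="aws:kms")
```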
## Cross-Bucket Replication
Replicate objects to remote S3-compatible endpoints:
1. Configure remote connections in the UI
2. Create replication rules specifying source/destination
3. Objects are automatically replicated on upload
## Docker
```bash
docker build -t myfsio .
docker run -p 5000:5000 -p 5100:5100 -v "$(pwd)/data:/app/data" myfsio
```
## Testing
```bash
pytest -q
# Run all tests
pytest tests/ -v
# Run specific test file
pytest tests/test_api.py -v
# Run with coverage
pytest tests/ --cov=app --cov-report=html
```
## References
- [Amazon Simple Storage Service Documentation](https://docs.aws.amazon.com/s3/)
- [Signature Version 4 Signing Process](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html)
- [Amazon S3 Bucket Policy Examples](https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-bucket-policies.html)
- [Amazon S3 Documentation](https://docs.aws.amazon.com/s3/)
- [AWS Signature Version 4](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html)
- [S3 Bucket Policy Examples](https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-bucket-policies.html)

View File

@@ -1,4 +1,3 @@
"""Application factory for the mini S3-compatible object store."""
from __future__ import annotations
import logging
@@ -16,6 +15,8 @@ from flask_cors import CORS
from flask_wtf.csrf import CSRFError
from werkzeug.middleware.proxy_fix import ProxyFix
from .access_logging import AccessLoggingService
from .acl import AclService
from .bucket_policies import BucketPolicyStore
from .config import AppConfig
from .connections import ConnectionStore
@@ -23,6 +24,9 @@ from .encryption import EncryptionManager
from .extensions import limiter, csrf
from .iam import IamService
from .kms import KMSManager
from .lifecycle import LifecycleManager
from .notifications import NotificationService
from .object_lock import ObjectLockService
from .replication import ReplicationManager
from .secret_store import EphemeralSecretStore
from .storage import ObjectStorage
@@ -140,6 +144,21 @@ def create_app(
from .encrypted_storage import EncryptedObjectStorage
storage = EncryptedObjectStorage(storage, encryption_manager)
acl_service = AclService(storage_root)
object_lock_service = ObjectLockService(storage_root)
notification_service = NotificationService(storage_root)
access_logging_service = AccessLoggingService(storage_root)
access_logging_service.set_storage(storage)
lifecycle_manager = None
if app.config.get("LIFECYCLE_ENABLED", False):
base_storage = storage.storage if hasattr(storage, 'storage') else storage
lifecycle_manager = LifecycleManager(
base_storage,
interval_seconds=app.config.get("LIFECYCLE_INTERVAL_SECONDS", 3600),
)
lifecycle_manager.start()
app.extensions["object_storage"] = storage
app.extensions["iam"] = iam
app.extensions["bucket_policies"] = bucket_policies
@@ -149,6 +168,11 @@ def create_app(
app.extensions["replication"] = replication
app.extensions["encryption"] = encryption_manager
app.extensions["kms"] = kms_manager
app.extensions["acl"] = acl_service
app.extensions["lifecycle"] = lifecycle_manager
app.extensions["object_lock"] = object_lock_service
app.extensions["notifications"] = notification_service
app.extensions["access_logging"] = access_logging_service
@app.errorhandler(500)
def internal_error(error):

262
app/access_logging.py Normal file
View File

@@ -0,0 +1,262 @@
from __future__ import annotations
import io
import json
import logging
import queue
import threading
import time
import uuid
from dataclasses import dataclass, field
from datetime import datetime, timezone
from pathlib import Path
from typing import Any, Dict, List, Optional
logger = logging.getLogger(__name__)
@dataclass
class AccessLogEntry:
bucket_owner: str = "-"
bucket: str = "-"
timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
remote_ip: str = "-"
requester: str = "-"
request_id: str = field(default_factory=lambda: uuid.uuid4().hex[:16].upper())
operation: str = "-"
key: str = "-"
request_uri: str = "-"
http_status: int = 200
error_code: str = "-"
bytes_sent: int = 0
object_size: int = 0
total_time_ms: int = 0
turn_around_time_ms: int = 0
referrer: str = "-"
user_agent: str = "-"
version_id: str = "-"
host_id: str = "-"
signature_version: str = "SigV4"
cipher_suite: str = "-"
authentication_type: str = "AuthHeader"
host_header: str = "-"
tls_version: str = "-"
def to_log_line(self) -> str:
time_str = self.timestamp.strftime("[%d/%b/%Y:%H:%M:%S %z]")
return (
f'{self.bucket_owner} {self.bucket} {time_str} {self.remote_ip} '
f'{self.requester} {self.request_id} {self.operation} {self.key} '
f'"{self.request_uri}" {self.http_status} {self.error_code or "-"} '
f'{self.bytes_sent or "-"} {self.object_size or "-"} {self.total_time_ms or "-"} '
f'{self.turn_around_time_ms or "-"} "{self.referrer}" "{self.user_agent}" {self.version_id}'
)
def to_dict(self) -> Dict[str, Any]:
return {
"bucket_owner": self.bucket_owner,
"bucket": self.bucket,
"timestamp": self.timestamp.isoformat(),
"remote_ip": self.remote_ip,
"requester": self.requester,
"request_id": self.request_id,
"operation": self.operation,
"key": self.key,
"request_uri": self.request_uri,
"http_status": self.http_status,
"error_code": self.error_code,
"bytes_sent": self.bytes_sent,
"object_size": self.object_size,
"total_time_ms": self.total_time_ms,
"referrer": self.referrer,
"user_agent": self.user_agent,
"version_id": self.version_id,
}
@dataclass
class LoggingConfiguration:
target_bucket: str
target_prefix: str = ""
enabled: bool = True
def to_dict(self) -> Dict[str, Any]:
return {
"LoggingEnabled": {
"TargetBucket": self.target_bucket,
"TargetPrefix": self.target_prefix,
}
}
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> Optional["LoggingConfiguration"]:
logging_enabled = data.get("LoggingEnabled")
if not logging_enabled:
return None
return cls(
target_bucket=logging_enabled.get("TargetBucket", ""),
target_prefix=logging_enabled.get("TargetPrefix", ""),
enabled=True,
)
class AccessLoggingService:
def __init__(self, storage_root: Path, flush_interval: int = 60, max_buffer_size: int = 1000):
self.storage_root = storage_root
self.flush_interval = flush_interval
self.max_buffer_size = max_buffer_size
self._configs: Dict[str, LoggingConfiguration] = {}
self._buffer: Dict[str, List[AccessLogEntry]] = {}
self._buffer_lock = threading.Lock()
self._shutdown = threading.Event()
self._storage = None
self._flush_thread = threading.Thread(target=self._flush_loop, name="access-log-flush", daemon=True)
self._flush_thread.start()
def set_storage(self, storage: Any) -> None:
self._storage = storage
def _config_path(self, bucket_name: str) -> Path:
return self.storage_root / ".myfsio.sys" / "buckets" / bucket_name / "logging.json"
def get_bucket_logging(self, bucket_name: str) -> Optional[LoggingConfiguration]:
if bucket_name in self._configs:
return self._configs[bucket_name]
config_path = self._config_path(bucket_name)
if not config_path.exists():
return None
try:
data = json.loads(config_path.read_text(encoding="utf-8"))
config = LoggingConfiguration.from_dict(data)
if config:
self._configs[bucket_name] = config
return config
except (json.JSONDecodeError, OSError) as e:
logger.warning(f"Failed to load logging config for {bucket_name}: {e}")
return None
def set_bucket_logging(self, bucket_name: str, config: LoggingConfiguration) -> None:
config_path = self._config_path(bucket_name)
config_path.parent.mkdir(parents=True, exist_ok=True)
config_path.write_text(json.dumps(config.to_dict(), indent=2), encoding="utf-8")
self._configs[bucket_name] = config
def delete_bucket_logging(self, bucket_name: str) -> None:
config_path = self._config_path(bucket_name)
try:
if config_path.exists():
config_path.unlink()
except OSError:
pass
self._configs.pop(bucket_name, None)
def log_request(
self,
bucket_name: str,
*,
operation: str,
key: str = "-",
remote_ip: str = "-",
requester: str = "-",
request_uri: str = "-",
http_status: int = 200,
error_code: str = "",
bytes_sent: int = 0,
object_size: int = 0,
total_time_ms: int = 0,
referrer: str = "-",
user_agent: str = "-",
version_id: str = "-",
request_id: str = "",
) -> None:
config = self.get_bucket_logging(bucket_name)
if not config or not config.enabled:
return
entry = AccessLogEntry(
bucket_owner="local-owner",
bucket=bucket_name,
remote_ip=remote_ip,
requester=requester,
request_id=request_id or uuid.uuid4().hex[:16].upper(),
operation=operation,
key=key,
request_uri=request_uri,
http_status=http_status,
error_code=error_code,
bytes_sent=bytes_sent,
object_size=object_size,
total_time_ms=total_time_ms,
referrer=referrer,
user_agent=user_agent,
version_id=version_id,
)
target_key = f"{config.target_bucket}:{config.target_prefix}"
with self._buffer_lock:
if target_key not in self._buffer:
self._buffer[target_key] = []
self._buffer[target_key].append(entry)
if len(self._buffer[target_key]) >= self.max_buffer_size:
self._flush_buffer(target_key)
def _flush_loop(self) -> None:
while not self._shutdown.is_set():
time.sleep(self.flush_interval)
self._flush_all()
def _flush_all(self) -> None:
with self._buffer_lock:
targets = list(self._buffer.keys())
for target_key in targets:
self._flush_buffer(target_key)
def _flush_buffer(self, target_key: str) -> None:
with self._buffer_lock:
entries = self._buffer.pop(target_key, [])
if not entries or not self._storage:
return
try:
bucket_name, prefix = target_key.split(":", 1)
except ValueError:
logger.error(f"Invalid target key: {target_key}")
return
now = datetime.now(timezone.utc)
log_key = f"{prefix}{now.strftime('%Y-%m-%d-%H-%M-%S')}-{uuid.uuid4().hex[:8]}"
log_content = "\n".join(entry.to_log_line() for entry in entries) + "\n"
try:
stream = io.BytesIO(log_content.encode("utf-8"))
self._storage.put_object(bucket_name, log_key, stream, enforce_quota=False)
logger.info(f"Flushed {len(entries)} access log entries to {bucket_name}/{log_key}")
except Exception as e:
logger.error(f"Failed to write access log to {bucket_name}/{log_key}: {e}")
with self._buffer_lock:
if target_key not in self._buffer:
self._buffer[target_key] = []
self._buffer[target_key] = entries + self._buffer[target_key]
def flush(self) -> None:
self._flush_all()
def shutdown(self) -> None:
self._shutdown.set()
self._flush_all()
self._flush_thread.join(timeout=5.0)
def get_stats(self) -> Dict[str, Any]:
with self._buffer_lock:
buffered = sum(len(entries) for entries in self._buffer.values())
return {
"buffered_entries": buffered,
"target_buckets": len(self._buffer),
}
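A minimal usage sketch for the service above, assuming `storage` is the application's `ObjectStorage` instance and `data/` is the storage root:

```python
from pathlib import Path

# Assumes `storage` is the app's ObjectStorage instance (as wired in create_app).
logging_service = AccessLoggingService(Path("data"), flush_interval=60)
logging_service.set_storage(storage)

# Deliver access logs for "photos" into the "logs" bucket under the given prefix.
logging_service.set_bucket_logging("photos", LoggingConfiguration(
    target_bucket="logs",
    target_prefix="photos-access/",
))

# Record one request; entries are buffered and flushed on the interval.
logging_service.log_request("photos", operation="REST.GET.OBJECT", key="cat.jpg",
                            remote_ip="127.0.0.1", http_status=200, bytes_sent=1024)
logging_service.flush()  # force delivery instead of waiting for the next flush cycle
```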

204
app/acl.py Normal file
View File

@@ -0,0 +1,204 @@
from __future__ import annotations
import json
from dataclasses import dataclass, field
from pathlib import Path
from typing import Any, Dict, List, Optional, Set
ACL_PERMISSION_FULL_CONTROL = "FULL_CONTROL"
ACL_PERMISSION_WRITE = "WRITE"
ACL_PERMISSION_WRITE_ACP = "WRITE_ACP"
ACL_PERMISSION_READ = "READ"
ACL_PERMISSION_READ_ACP = "READ_ACP"
ALL_PERMISSIONS = {
ACL_PERMISSION_FULL_CONTROL,
ACL_PERMISSION_WRITE,
ACL_PERMISSION_WRITE_ACP,
ACL_PERMISSION_READ,
ACL_PERMISSION_READ_ACP,
}
PERMISSION_TO_ACTIONS = {
ACL_PERMISSION_FULL_CONTROL: {"read", "write", "delete", "list", "share"},
ACL_PERMISSION_WRITE: {"write", "delete"},
ACL_PERMISSION_WRITE_ACP: {"share"},
ACL_PERMISSION_READ: {"read", "list"},
ACL_PERMISSION_READ_ACP: {"share"},
}
GRANTEE_ALL_USERS = "*"
GRANTEE_AUTHENTICATED_USERS = "authenticated"
@dataclass
class AclGrant:
grantee: str
permission: str
def to_dict(self) -> Dict[str, str]:
return {"grantee": self.grantee, "permission": self.permission}
@classmethod
def from_dict(cls, data: Dict[str, str]) -> "AclGrant":
return cls(grantee=data["grantee"], permission=data["permission"])
@dataclass
class Acl:
owner: str
grants: List[AclGrant] = field(default_factory=list)
def to_dict(self) -> Dict[str, Any]:
return {
"owner": self.owner,
"grants": [g.to_dict() for g in self.grants],
}
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> "Acl":
return cls(
owner=data.get("owner", ""),
grants=[AclGrant.from_dict(g) for g in data.get("grants", [])],
)
def get_allowed_actions(self, principal_id: Optional[str], is_authenticated: bool = True) -> Set[str]:
actions: Set[str] = set()
if principal_id and principal_id == self.owner:
actions.update(PERMISSION_TO_ACTIONS[ACL_PERMISSION_FULL_CONTROL])
for grant in self.grants:
if grant.grantee == GRANTEE_ALL_USERS:
actions.update(PERMISSION_TO_ACTIONS.get(grant.permission, set()))
elif grant.grantee == GRANTEE_AUTHENTICATED_USERS and is_authenticated:
actions.update(PERMISSION_TO_ACTIONS.get(grant.permission, set()))
elif principal_id and grant.grantee == principal_id:
actions.update(PERMISSION_TO_ACTIONS.get(grant.permission, set()))
return actions
CANNED_ACLS = {
"private": lambda owner: Acl(
owner=owner,
grants=[AclGrant(grantee=owner, permission=ACL_PERMISSION_FULL_CONTROL)],
),
"public-read": lambda owner: Acl(
owner=owner,
grants=[
AclGrant(grantee=owner, permission=ACL_PERMISSION_FULL_CONTROL),
AclGrant(grantee=GRANTEE_ALL_USERS, permission=ACL_PERMISSION_READ),
],
),
"public-read-write": lambda owner: Acl(
owner=owner,
grants=[
AclGrant(grantee=owner, permission=ACL_PERMISSION_FULL_CONTROL),
AclGrant(grantee=GRANTEE_ALL_USERS, permission=ACL_PERMISSION_READ),
AclGrant(grantee=GRANTEE_ALL_USERS, permission=ACL_PERMISSION_WRITE),
],
),
"authenticated-read": lambda owner: Acl(
owner=owner,
grants=[
AclGrant(grantee=owner, permission=ACL_PERMISSION_FULL_CONTROL),
AclGrant(grantee=GRANTEE_AUTHENTICATED_USERS, permission=ACL_PERMISSION_READ),
],
),
"bucket-owner-read": lambda owner: Acl(
owner=owner,
grants=[
AclGrant(grantee=owner, permission=ACL_PERMISSION_FULL_CONTROL),
],
),
"bucket-owner-full-control": lambda owner: Acl(
owner=owner,
grants=[
AclGrant(grantee=owner, permission=ACL_PERMISSION_FULL_CONTROL),
],
),
}
def create_canned_acl(canned_acl: str, owner: str) -> Acl:
factory = CANNED_ACLS.get(canned_acl)
if not factory:
return CANNED_ACLS["private"](owner)
return factory(owner)
class AclService:
def __init__(self, storage_root: Path):
self.storage_root = storage_root
self._bucket_acl_cache: Dict[str, Acl] = {}
def _bucket_acl_path(self, bucket_name: str) -> Path:
return self.storage_root / ".myfsio.sys" / "buckets" / bucket_name / ".acl.json"
def get_bucket_acl(self, bucket_name: str) -> Optional[Acl]:
if bucket_name in self._bucket_acl_cache:
return self._bucket_acl_cache[bucket_name]
acl_path = self._bucket_acl_path(bucket_name)
if not acl_path.exists():
return None
try:
data = json.loads(acl_path.read_text(encoding="utf-8"))
acl = Acl.from_dict(data)
self._bucket_acl_cache[bucket_name] = acl
return acl
except (OSError, json.JSONDecodeError):
return None
def set_bucket_acl(self, bucket_name: str, acl: Acl) -> None:
acl_path = self._bucket_acl_path(bucket_name)
acl_path.parent.mkdir(parents=True, exist_ok=True)
acl_path.write_text(json.dumps(acl.to_dict(), indent=2), encoding="utf-8")
self._bucket_acl_cache[bucket_name] = acl
def set_bucket_canned_acl(self, bucket_name: str, canned_acl: str, owner: str) -> Acl:
acl = create_canned_acl(canned_acl, owner)
self.set_bucket_acl(bucket_name, acl)
return acl
def delete_bucket_acl(self, bucket_name: str) -> None:
acl_path = self._bucket_acl_path(bucket_name)
if acl_path.exists():
acl_path.unlink()
self._bucket_acl_cache.pop(bucket_name, None)
def evaluate_bucket_acl(
self,
bucket_name: str,
principal_id: Optional[str],
action: str,
is_authenticated: bool = True,
) -> bool:
acl = self.get_bucket_acl(bucket_name)
if not acl:
return False
allowed_actions = acl.get_allowed_actions(principal_id, is_authenticated)
return action in allowed_actions
def get_object_acl(self, bucket_name: str, object_key: str, object_metadata: Dict[str, Any]) -> Optional[Acl]:
acl_data = object_metadata.get("__acl__")
if not acl_data:
return None
try:
return Acl.from_dict(acl_data)
except (TypeError, KeyError):
return None
def create_object_acl_metadata(self, acl: Acl) -> Dict[str, Any]:
return {"__acl__": acl.to_dict()}
def evaluate_object_acl(
self,
object_metadata: Dict[str, Any],
principal_id: Optional[str],
action: str,
is_authenticated: bool = True,
) -> bool:
acl = self.get_object_acl("", "", object_metadata)
if not acl:
return False
allowed_actions = acl.get_allowed_actions(principal_id, is_authenticated)
return action in allowed_actions
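A short usage sketch for `AclService`, assuming a `data/` storage root and a hypothetical user id `alice` as bucket owner:

```python
from pathlib import Path

acl_service = AclService(Path("data"))

# Apply the canned "public-read" ACL: owner keeps FULL_CONTROL, everyone gets READ.
acl_service.set_bucket_canned_acl("photos", "public-read", owner="alice")

# Anonymous callers may read/list but not write.
assert acl_service.evaluate_bucket_acl("photos", None, "read", is_authenticated=False)
assert not acl_service.evaluate_bucket_acl("photos", None, "write", is_authenticated=False)

# The owner retains full control.
assert acl_service.evaluate_bucket_acl("photos", "alice", "delete")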

View File

@@ -1,11 +1,12 @@
"""Bucket policy loader/enforcer with a subset of AWS semantics."""
from __future__ import annotations
import json
import re
import time
from dataclasses import dataclass
from fnmatch import fnmatch
from fnmatch import fnmatch, translate
from pathlib import Path
from typing import Any, Dict, Iterable, List, Optional, Sequence
from typing import Any, Dict, Iterable, List, Optional, Pattern, Sequence, Tuple
RESOURCE_PREFIX = "arn:aws:s3:::"
@@ -133,7 +134,22 @@ class BucketPolicyStatement:
effect: str
principals: List[str] | str
actions: List[str]
resources: List[tuple[str | None, str | None]]
resources: List[Tuple[str | None, str | None]]
# Performance: Pre-compiled regex patterns for resource matching
_compiled_patterns: List[Tuple[str | None, Optional[Pattern[str]]]] | None = None
def _get_compiled_patterns(self) -> List[Tuple[str | None, Optional[Pattern[str]]]]:
"""Lazily compile fnmatch patterns to regex for faster matching."""
if self._compiled_patterns is None:
self._compiled_patterns = []
for resource_bucket, key_pattern in self.resources:
if key_pattern is None:
self._compiled_patterns.append((resource_bucket, None))
else:
# Convert fnmatch pattern to regex
regex_pattern = translate(key_pattern)
self._compiled_patterns.append((resource_bucket, re.compile(regex_pattern)))
return self._compiled_patterns
def matches_principal(self, access_key: Optional[str]) -> bool:
if self.principals == "*":
@@ -149,15 +165,16 @@ class BucketPolicyStatement:
def matches_resource(self, bucket: Optional[str], object_key: Optional[str]) -> bool:
bucket = (bucket or "*").lower()
key = object_key or ""
for resource_bucket, key_pattern in self.resources:
for resource_bucket, compiled_pattern in self._get_compiled_patterns():
resource_bucket = (resource_bucket or "*").lower()
if resource_bucket not in {"*", bucket}:
continue
if key_pattern is None:
if compiled_pattern is None:
if not key:
return True
continue
if fnmatch(key, key_pattern):
# Performance: Use pre-compiled regex instead of fnmatch
if compiled_pattern.match(key):
return True
return False
@@ -174,8 +191,16 @@ class BucketPolicyStore:
self._policies: Dict[str, List[BucketPolicyStatement]] = {}
self._load()
self._last_mtime = self._current_mtime()
# Performance: Avoid stat() on every request
self._last_stat_check = 0.0
self._stat_check_interval = 1.0 # Only check mtime every 1 second
def maybe_reload(self) -> None:
# Performance: Skip stat check if we checked recently
now = time.time()
if now - self._last_stat_check < self._stat_check_interval:
return
self._last_stat_check = now
current = self._current_mtime()
if current is None or current == self._last_mtime:
return

View File

@@ -1,4 +1,3 @@
"""Configuration helpers for the S3 clone application."""
from __future__ import annotations
import os
@@ -74,6 +73,8 @@ class AppConfig:
kms_keys_path: Path
default_encryption_algorithm: str
display_timezone: str
lifecycle_enabled: bool
lifecycle_interval_seconds: int
@classmethod
def from_env(cls, overrides: Optional[Dict[str, Any]] = None) -> "AppConfig":
@@ -91,6 +92,8 @@ class AppConfig:
secret_ttl_seconds = int(_get("SECRET_TTL_SECONDS", 300))
stream_chunk_size = int(_get("STREAM_CHUNK_SIZE", 64 * 1024))
multipart_min_part_size = int(_get("MULTIPART_MIN_PART_SIZE", 5 * 1024 * 1024))
lifecycle_enabled = _get("LIFECYCLE_ENABLED", "false").lower() in ("true", "1", "yes")
lifecycle_interval_seconds = int(_get("LIFECYCLE_INTERVAL_SECONDS", 3600))
default_secret = "dev-secret-key"
secret_key = str(_get("SECRET_KEY", default_secret))
@@ -198,7 +201,9 @@ class AppConfig:
kms_enabled=kms_enabled,
kms_keys_path=kms_keys_path,
default_encryption_algorithm=default_encryption_algorithm,
display_timezone=display_timezone)
display_timezone=display_timezone,
lifecycle_enabled=lifecycle_enabled,
lifecycle_interval_seconds=lifecycle_interval_seconds)
def validate_and_report(self) -> list[str]:
"""Validate configuration and return a list of warnings/issues.

View File

@@ -1,4 +1,3 @@
"""Manage remote S3 connections."""
from __future__ import annotations
import json

View File

@@ -1,4 +1,3 @@
"""Encrypted storage layer that wraps ObjectStorage with encryption support."""
from __future__ import annotations
import io
@@ -90,6 +89,8 @@ class EncryptedObjectStorage:
Returns:
ObjectMeta with object information
Performance: Uses streaming encryption for large files to reduce memory usage.
"""
should_encrypt, algorithm, detected_kms_key = self._should_encrypt(
bucket_name, server_side_encryption
@@ -99,20 +100,17 @@ class EncryptedObjectStorage:
kms_key_id = detected_kms_key
if should_encrypt:
data = stream.read()
try:
ciphertext, enc_metadata = self.encryption.encrypt_object(
data,
# Performance: Use streaming encryption to avoid loading entire file into memory
encrypted_stream, enc_metadata = self.encryption.encrypt_stream(
stream,
algorithm=algorithm,
kms_key_id=kms_key_id,
context={"bucket": bucket_name, "key": object_key},
)
combined_metadata = metadata.copy() if metadata else {}
combined_metadata.update(enc_metadata.to_dict())
encrypted_stream = io.BytesIO(ciphertext)
result = self.storage.put_object(
bucket_name,
object_key,
@@ -138,23 +136,24 @@ class EncryptedObjectStorage:
Returns:
Tuple of (data, metadata)
Performance: Uses streaming decryption to reduce memory usage.
"""
path = self.storage.get_object_path(bucket_name, object_key)
metadata = self.storage.get_object_metadata(bucket_name, object_key)
with path.open("rb") as f:
data = f.read()
enc_metadata = EncryptionMetadata.from_dict(metadata)
if enc_metadata:
try:
data = self.encryption.decrypt_object(
data,
enc_metadata,
context={"bucket": bucket_name, "key": object_key},
)
# Performance: Use streaming decryption to avoid loading entire file into memory
with path.open("rb") as f:
decrypted_stream = self.encryption.decrypt_stream(f, enc_metadata)
data = decrypted_stream.read()
except EncryptionError as exc:
raise StorageError(f"Decryption failed: {exc}") from exc
else:
with path.open("rb") as f:
data = f.read()
clean_metadata = {
k: v for k, v in metadata.items()

View File

@@ -157,10 +157,7 @@ class LocalKeyEncryption(EncryptionProvider):
def decrypt(self, ciphertext: bytes, nonce: bytes, encrypted_data_key: bytes,
key_id: str, context: Dict[str, str] | None = None) -> bytes:
"""Decrypt data using envelope encryption."""
# Decrypt the data key
data_key = self._decrypt_data_key(encrypted_data_key)
# Decrypt the data
aesgcm = AESGCM(data_key)
try:
return aesgcm.decrypt(nonce, ciphertext, None)
@@ -183,21 +180,26 @@ class StreamingEncryptor:
self.chunk_size = chunk_size
def _derive_chunk_nonce(self, base_nonce: bytes, chunk_index: int) -> bytes:
"""Derive a unique nonce for each chunk."""
# XOR the base nonce with the chunk index
nonce_int = int.from_bytes(base_nonce, "big")
derived = nonce_int ^ chunk_index
return derived.to_bytes(12, "big")
"""Derive a unique nonce for each chunk.
Performance: Use direct byte manipulation instead of full int conversion.
"""
# Performance: Only modify last 4 bytes instead of full 12-byte conversion
return base_nonce[:8] + (chunk_index ^ int.from_bytes(base_nonce[8:], "big")).to_bytes(4, "big")
def encrypt_stream(self, stream: BinaryIO,
context: Dict[str, str] | None = None) -> tuple[BinaryIO, EncryptionMetadata]:
"""Encrypt a stream and return encrypted stream + metadata."""
"""Encrypt a stream and return encrypted stream + metadata.
Performance: Writes chunks directly to output buffer instead of accumulating in list.
"""
data_key, encrypted_data_key = self.provider.generate_data_key()
base_nonce = secrets.token_bytes(12)
aesgcm = AESGCM(data_key)
encrypted_chunks = []
# Performance: Write directly to BytesIO instead of accumulating chunks
output = io.BytesIO()
output.write(b"\x00\x00\x00\x00") # Placeholder for chunk count
chunk_index = 0
while True:
@@ -208,12 +210,15 @@ class StreamingEncryptor:
chunk_nonce = self._derive_chunk_nonce(base_nonce, chunk_index)
encrypted_chunk = aesgcm.encrypt(chunk_nonce, chunk, None)
size_prefix = len(encrypted_chunk).to_bytes(self.HEADER_SIZE, "big")
encrypted_chunks.append(size_prefix + encrypted_chunk)
# Write size prefix + encrypted chunk directly
output.write(len(encrypted_chunk).to_bytes(self.HEADER_SIZE, "big"))
output.write(encrypted_chunk)
chunk_index += 1
header = chunk_index.to_bytes(4, "big")
encrypted_data = header + b"".join(encrypted_chunks)
# Write actual chunk count to header
output.seek(0)
output.write(chunk_index.to_bytes(4, "big"))
output.seek(0)
metadata = EncryptionMetadata(
algorithm="AES256",
@@ -222,10 +227,13 @@ class StreamingEncryptor:
encrypted_data_key=encrypted_data_key,
)
return io.BytesIO(encrypted_data), metadata
return output, metadata
def decrypt_stream(self, stream: BinaryIO, metadata: EncryptionMetadata) -> BinaryIO:
"""Decrypt a stream using the provided metadata."""
"""Decrypt a stream using the provided metadata.
Performance: Writes chunks directly to output buffer instead of accumulating in list.
"""
if isinstance(self.provider, LocalKeyEncryption):
data_key = self.provider._decrypt_data_key(metadata.encrypted_data_key)
else:
@@ -239,7 +247,8 @@ class StreamingEncryptor:
raise EncryptionError("Invalid encrypted stream: missing header")
chunk_count = int.from_bytes(chunk_count_bytes, "big")
decrypted_chunks = []
# Performance: Write directly to BytesIO instead of accumulating chunks
output = io.BytesIO()
for chunk_index in range(chunk_count):
size_bytes = stream.read(self.HEADER_SIZE)
if len(size_bytes) < self.HEADER_SIZE:
@@ -253,11 +262,12 @@ class StreamingEncryptor:
chunk_nonce = self._derive_chunk_nonce(base_nonce, chunk_index)
try:
decrypted_chunk = aesgcm.decrypt(chunk_nonce, encrypted_chunk, None)
decrypted_chunks.append(decrypted_chunk)
output.write(decrypted_chunk) # Write directly instead of appending to list
except Exception as exc:
raise EncryptionError(f"Failed to decrypt chunk {chunk_index}: {exc}") from exc
return io.BytesIO(b"".join(decrypted_chunks))
output.seek(0)
return output
class EncryptionManager:
@@ -343,6 +353,106 @@ class EncryptionManager:
return encryptor.decrypt_stream(stream, metadata)
class SSECEncryption(EncryptionProvider):
"""SSE-C: Server-Side Encryption with Customer-Provided Keys.
The client provides the encryption key with each request.
Server encrypts/decrypts but never stores the key.
Required headers for PUT:
- x-amz-server-side-encryption-customer-algorithm: AES256
- x-amz-server-side-encryption-customer-key: Base64-encoded 256-bit key
- x-amz-server-side-encryption-customer-key-MD5: Base64-encoded MD5 of key
"""
KEY_ID = "customer-provided"
def __init__(self, customer_key: bytes):
if len(customer_key) != 32:
raise EncryptionError("Customer key must be exactly 256 bits (32 bytes)")
self.customer_key = customer_key
@classmethod
def from_headers(cls, headers: Dict[str, str]) -> "SSECEncryption":
algorithm = headers.get("x-amz-server-side-encryption-customer-algorithm", "")
if algorithm.upper() != "AES256":
raise EncryptionError(f"Unsupported SSE-C algorithm: {algorithm}. Only AES256 is supported.")
key_b64 = headers.get("x-amz-server-side-encryption-customer-key", "")
if not key_b64:
raise EncryptionError("Missing x-amz-server-side-encryption-customer-key header")
key_md5_b64 = headers.get("x-amz-server-side-encryption-customer-key-md5", "")
try:
customer_key = base64.b64decode(key_b64)
except Exception as e:
raise EncryptionError(f"Invalid base64 in customer key: {e}") from e
if len(customer_key) != 32:
raise EncryptionError(f"Customer key must be 256 bits, got {len(customer_key) * 8} bits")
if key_md5_b64:
import hashlib
expected_md5 = base64.b64encode(hashlib.md5(customer_key).digest()).decode()
if key_md5_b64 != expected_md5:
raise EncryptionError("Customer key MD5 mismatch")
return cls(customer_key)
def encrypt(self, plaintext: bytes, context: Dict[str, str] | None = None) -> EncryptionResult:
aesgcm = AESGCM(self.customer_key)
nonce = secrets.token_bytes(12)
ciphertext = aesgcm.encrypt(nonce, plaintext, None)
return EncryptionResult(
ciphertext=ciphertext,
nonce=nonce,
key_id=self.KEY_ID,
encrypted_data_key=b"",
)
def decrypt(self, ciphertext: bytes, nonce: bytes, encrypted_data_key: bytes,
key_id: str, context: Dict[str, str] | None = None) -> bytes:
aesgcm = AESGCM(self.customer_key)
try:
return aesgcm.decrypt(nonce, ciphertext, None)
except Exception as exc:
raise EncryptionError(f"SSE-C decryption failed: {exc}") from exc
def generate_data_key(self) -> tuple[bytes, bytes]:
return self.customer_key, b""
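A client-side sketch of producing the SSE-C headers that `SSECEncryption.from_headers` expects; the key is generated and kept by the client and is never stored by the server:

```python
import base64
import hashlib
import secrets

customer_key = secrets.token_bytes(32)  # 256-bit key, held only by the client
key_b64 = base64.b64encode(customer_key).decode()
key_md5 = base64.b64encode(hashlib.md5(customer_key).digest()).decode()

ssec_headers = {
    "x-amz-server-side-encryption-customer-algorithm": "AES256",
    "x-amz-server-side-encryption-customer-key": key_b64,
    "x-amz-server-side-encryption-customer-key-MD5": key_md5,
}

# from_headers() validates the algorithm, decodes the key, and checks the MD5
# (header lookup is case-sensitive on lowercase names in this implementation).
provider = SSECEncryption.from_headers({k.lower(): v for k, v in ssec_headers.items()})
```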
@dataclass
class SSECMetadata:
algorithm: str = "AES256"
nonce: bytes = b""
key_md5: str = ""
def to_dict(self) -> Dict[str, str]:
return {
"x-amz-server-side-encryption-customer-algorithm": self.algorithm,
"x-amz-encryption-nonce": base64.b64encode(self.nonce).decode(),
"x-amz-server-side-encryption-customer-key-MD5": self.key_md5,
}
@classmethod
def from_dict(cls, data: Dict[str, str]) -> Optional["SSECMetadata"]:
algorithm = data.get("x-amz-server-side-encryption-customer-algorithm")
if not algorithm:
return None
try:
nonce = base64.b64decode(data.get("x-amz-encryption-nonce", ""))
return cls(
algorithm=algorithm,
nonce=nonce,
key_md5=data.get("x-amz-server-side-encryption-customer-key-MD5", ""),
)
except Exception:
return None
class ClientEncryptionHelper:
"""Helpers for client-side encryption.

View File

@@ -1,4 +1,3 @@
"""Standardized error handling for API and UI responses."""
from __future__ import annotations
import logging

View File

@@ -1,4 +1,3 @@
"""Application-wide extension instances."""
from flask import g
from flask_limiter import Limiter
from flask_limiter.util import get_remote_address

View File

@@ -1,14 +1,14 @@
"""Lightweight IAM-style user and policy management."""
from __future__ import annotations
import json
import math
import secrets
import time
from collections import deque
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from pathlib import Path
from typing import Any, Deque, Dict, Iterable, List, Optional, Sequence, Set
from typing import Any, Deque, Dict, Iterable, List, Optional, Sequence, Set, Tuple
class IamError(RuntimeError):
@@ -115,13 +115,25 @@ class IamService:
self._raw_config: Dict[str, Any] = {}
self._failed_attempts: Dict[str, Deque[datetime]] = {}
self._last_load_time = 0.0
# Performance: credential cache with TTL
self._credential_cache: Dict[str, Tuple[str, Principal, float]] = {}
self._cache_ttl = 60.0 # Cache credentials for 60 seconds
self._last_stat_check = 0.0
self._stat_check_interval = 1.0 # Only stat() file every 1 second
self._sessions: Dict[str, Dict[str, Any]] = {}
self._load()
def _maybe_reload(self) -> None:
"""Reload configuration if the file has changed on disk."""
# Performance: Skip stat check if we checked recently
now = time.time()
if now - self._last_stat_check < self._stat_check_interval:
return
self._last_stat_check = now
try:
if self.config_path.stat().st_mtime > self._last_load_time:
self._load()
self._credential_cache.clear() # Invalidate cache on reload
except OSError:
pass
@@ -180,18 +192,72 @@ class IamService:
elapsed = (datetime.now(timezone.utc) - oldest).total_seconds()
return int(max(0, self.auth_lockout_window.total_seconds() - elapsed))
def principal_for_key(self, access_key: str) -> Principal:
def create_session_token(self, access_key: str, duration_seconds: int = 3600) -> str:
"""Create a temporary session token for an access key."""
self._maybe_reload()
record = self._users.get(access_key)
if not record:
raise IamError("Unknown access key")
return self._build_principal(access_key, record)
self._cleanup_expired_sessions()
token = secrets.token_urlsafe(32)
expires_at = time.time() + duration_seconds
self._sessions[token] = {
"access_key": access_key,
"expires_at": expires_at,
}
return token
def validate_session_token(self, access_key: str, session_token: str) -> bool:
"""Validate a session token for an access key."""
session = self._sessions.get(session_token)
if not session:
return False
if session["access_key"] != access_key:
return False
if time.time() > session["expires_at"]:
del self._sessions[session_token]
return False
return True
def _cleanup_expired_sessions(self) -> None:
"""Remove expired session tokens."""
now = time.time()
expired = [token for token, data in self._sessions.items() if now > data["expires_at"]]
for token in expired:
del self._sessions[token]
def principal_for_key(self, access_key: str) -> Principal:
# Performance: Check cache first
now = time.time()
cached = self._credential_cache.get(access_key)
if cached:
secret, principal, cached_time = cached
if now - cached_time < self._cache_ttl:
return principal
self._maybe_reload()
record = self._users.get(access_key)
if not record:
raise IamError("Unknown access key")
principal = self._build_principal(access_key, record)
self._credential_cache[access_key] = (record["secret_key"], principal, now)
return principal
def secret_for_key(self, access_key: str) -> str:
# Performance: Check cache first
now = time.time()
cached = self._credential_cache.get(access_key)
if cached:
secret, principal, cached_time = cached
if now - cached_time < self._cache_ttl:
return secret
self._maybe_reload()
record = self._users.get(access_key)
if not record:
raise IamError("Unknown access key")
principal = self._build_principal(access_key, record)
self._credential_cache[access_key] = (record["secret_key"], principal, now)
return record["secret_key"]
def authorize(self, principal: Principal, bucket_name: str | None, action: str) -> None:
@@ -442,11 +508,36 @@ class IamService:
raise IamError("User not found")
def get_secret_key(self, access_key: str) -> str | None:
# Performance: Check cache first
now = time.time()
cached = self._credential_cache.get(access_key)
if cached:
secret, principal, cached_time = cached
if now - cached_time < self._cache_ttl:
return secret
self._maybe_reload()
record = self._users.get(access_key)
return record["secret_key"] if record else None
if record:
# Cache the result
principal = self._build_principal(access_key, record)
self._credential_cache[access_key] = (record["secret_key"], principal, now)
return record["secret_key"]
return None
def get_principal(self, access_key: str) -> Principal | None:
# Performance: Check cache first
now = time.time()
cached = self._credential_cache.get(access_key)
if cached:
secret, principal, cached_time = cached
if now - cached_time < self._cache_ttl:
return principal
self._maybe_reload()
record = self._users.get(access_key)
return self._build_principal(access_key, record) if record else None
if record:
principal = self._build_principal(access_key, record)
self._credential_cache[access_key] = (record["secret_key"], principal, now)
return principal
return None
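A sketch of the new session-token flow, assuming `iam` is the `IamService` instance from `app.extensions["iam"]` and `AKIAEXAMPLE` is a placeholder access key that exists in `iam.json`:

```python
# Issue a temporary token bound to an access key (one hour here).
token = iam.create_session_token("AKIAEXAMPLE", duration_seconds=3600)

# A request presenting this token (e.g. alongside its SigV4 signature) can be validated;
# tokens are bound to the issuing access key and expire automatically.
assert iam.validate_session_token("AKIAEXAMPLE", token)
assert not iam.validate_session_token("AKIAOTHERKEY", token)
```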

View File

@@ -1,4 +1,3 @@
"""Key Management Service (KMS) for encryption key management."""
from __future__ import annotations
import base64
@@ -212,6 +211,26 @@ class KMSManager:
self._load_keys()
return list(self._keys.values())
def get_default_key_id(self) -> str:
"""Get the default KMS key ID, creating one if none exist."""
self._load_keys()
for key in self._keys.values():
if key.enabled:
return key.key_id
default_key = self.create_key(description="Default KMS Key")
return default_key.key_id
def get_provider(self, key_id: str | None = None) -> "KMSEncryptionProvider":
"""Get a KMS encryption provider for the specified key."""
if key_id is None:
key_id = self.get_default_key_id()
key = self.get_key(key_id)
if not key:
raise EncryptionError(f"Key not found: {key_id}")
if not key.enabled:
raise EncryptionError(f"Key is disabled: {key_id}")
return KMSEncryptionProvider(self, key_id)
def enable_key(self, key_id: str) -> None:
"""Enable a key."""
self._load_keys()

View File

@@ -1,4 +1,3 @@
"""KMS and encryption API endpoints."""
from __future__ import annotations
import base64

235
app/lifecycle.py Normal file
View File

@@ -0,0 +1,235 @@
from __future__ import annotations
import logging
import threading
import time
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
from pathlib import Path
from typing import Any, Dict, List, Optional
from .storage import ObjectStorage, StorageError
logger = logging.getLogger(__name__)
@dataclass
class LifecycleResult:
bucket_name: str
objects_deleted: int = 0
versions_deleted: int = 0
uploads_aborted: int = 0
errors: List[str] = field(default_factory=list)
execution_time_seconds: float = 0.0
class LifecycleManager:
def __init__(self, storage: ObjectStorage, interval_seconds: int = 3600):
self.storage = storage
self.interval_seconds = interval_seconds
self._timer: Optional[threading.Timer] = None
self._shutdown = False
self._lock = threading.Lock()
def start(self) -> None:
if self._timer is not None:
return
self._shutdown = False
self._schedule_next()
logger.info(f"Lifecycle manager started with interval {self.interval_seconds}s")
def stop(self) -> None:
self._shutdown = True
if self._timer:
self._timer.cancel()
self._timer = None
logger.info("Lifecycle manager stopped")
def _schedule_next(self) -> None:
if self._shutdown:
return
self._timer = threading.Timer(self.interval_seconds, self._run_enforcement)
self._timer.daemon = True
self._timer.start()
def _run_enforcement(self) -> None:
if self._shutdown:
return
try:
self.enforce_all_buckets()
except Exception as e:
logger.error(f"Lifecycle enforcement failed: {e}")
finally:
self._schedule_next()
def enforce_all_buckets(self) -> Dict[str, LifecycleResult]:
results = {}
try:
buckets = self.storage.list_buckets()
for bucket in buckets:
result = self.enforce_rules(bucket.name)
if result.objects_deleted > 0 or result.versions_deleted > 0 or result.uploads_aborted > 0:
results[bucket.name] = result
except StorageError as e:
logger.error(f"Failed to list buckets for lifecycle: {e}")
return results
def enforce_rules(self, bucket_name: str) -> LifecycleResult:
start_time = time.time()
result = LifecycleResult(bucket_name=bucket_name)
try:
lifecycle = self.storage.get_bucket_lifecycle(bucket_name)
if not lifecycle:
return result
for rule in lifecycle:
if rule.get("Status") != "Enabled":
continue
rule_id = rule.get("ID", "unknown")
prefix = rule.get("Prefix", rule.get("Filter", {}).get("Prefix", ""))
self._enforce_expiration(bucket_name, rule, prefix, result)
self._enforce_noncurrent_expiration(bucket_name, rule, prefix, result)
self._enforce_abort_multipart(bucket_name, rule, result)
except StorageError as e:
result.errors.append(str(e))
logger.error(f"Lifecycle enforcement error for {bucket_name}: {e}")
result.execution_time_seconds = time.time() - start_time
if result.objects_deleted > 0 or result.versions_deleted > 0 or result.uploads_aborted > 0:
logger.info(
f"Lifecycle enforcement for {bucket_name}: "
f"deleted={result.objects_deleted}, versions={result.versions_deleted}, "
f"aborted={result.uploads_aborted}, time={result.execution_time_seconds:.2f}s"
)
return result
def _enforce_expiration(
self, bucket_name: str, rule: Dict[str, Any], prefix: str, result: LifecycleResult
) -> None:
expiration = rule.get("Expiration", {})
if not expiration:
return
days = expiration.get("Days")
date_str = expiration.get("Date")
if days:
cutoff = datetime.now(timezone.utc) - timedelta(days=days)
elif date_str:
try:
cutoff = datetime.fromisoformat(date_str.replace("Z", "+00:00"))
except ValueError:
return
else:
return
try:
objects = self.storage.list_objects_all(bucket_name)
for obj in objects:
if prefix and not obj.key.startswith(prefix):
continue
if obj.last_modified < cutoff:
try:
self.storage.delete_object(bucket_name, obj.key)
result.objects_deleted += 1
except StorageError as e:
result.errors.append(f"Failed to delete {obj.key}: {e}")
except StorageError as e:
result.errors.append(f"Failed to list objects: {e}")
def _enforce_noncurrent_expiration(
self, bucket_name: str, rule: Dict[str, Any], prefix: str, result: LifecycleResult
) -> None:
noncurrent = rule.get("NoncurrentVersionExpiration", {})
noncurrent_days = noncurrent.get("NoncurrentDays")
if not noncurrent_days:
return
cutoff = datetime.now(timezone.utc) - timedelta(days=noncurrent_days)
try:
objects = self.storage.list_objects_all(bucket_name)
for obj in objects:
if prefix and not obj.key.startswith(prefix):
continue
try:
versions = self.storage.list_object_versions(bucket_name, obj.key)
for version in versions:
archived_at_str = version.get("archived_at", "")
if not archived_at_str:
continue
try:
archived_at = datetime.fromisoformat(archived_at_str.replace("Z", "+00:00"))
if archived_at < cutoff:
version_id = version.get("version_id")
if version_id:
self.storage.delete_object_version(bucket_name, obj.key, version_id)
result.versions_deleted += 1
except (ValueError, StorageError) as e:
result.errors.append(f"Failed to process version: {e}")
except StorageError:
pass
except StorageError as e:
result.errors.append(f"Failed to list objects: {e}")
try:
orphaned = self.storage.list_orphaned_objects(bucket_name)
for item in orphaned:
obj_key = item.get("key", "")
if prefix and not obj_key.startswith(prefix):
continue
try:
versions = self.storage.list_object_versions(bucket_name, obj_key)
for version in versions:
archived_at_str = version.get("archived_at", "")
if not archived_at_str:
continue
try:
archived_at = datetime.fromisoformat(archived_at_str.replace("Z", "+00:00"))
if archived_at < cutoff:
version_id = version.get("version_id")
if version_id:
self.storage.delete_object_version(bucket_name, obj_key, version_id)
result.versions_deleted += 1
except (ValueError, StorageError) as e:
result.errors.append(f"Failed to process orphaned version: {e}")
except StorageError:
pass
except StorageError as e:
result.errors.append(f"Failed to list orphaned objects: {e}")
def _enforce_abort_multipart(
self, bucket_name: str, rule: Dict[str, Any], result: LifecycleResult
) -> None:
abort_config = rule.get("AbortIncompleteMultipartUpload", {})
days_after = abort_config.get("DaysAfterInitiation")
if not days_after:
return
cutoff = datetime.now(timezone.utc) - timedelta(days=days_after)
try:
uploads = self.storage.list_multipart_uploads(bucket_name)
for upload in uploads:
created_at_str = upload.get("created_at", "")
if not created_at_str:
continue
try:
created_at = datetime.fromisoformat(created_at_str.replace("Z", "+00:00"))
if created_at < cutoff:
upload_id = upload.get("upload_id")
if upload_id:
self.storage.abort_multipart_upload(bucket_name, upload_id)
result.uploads_aborted += 1
except (ValueError, StorageError) as e:
result.errors.append(f"Failed to abort upload: {e}")
except StorageError as e:
result.errors.append(f"Failed to list multipart uploads: {e}")
def run_now(self, bucket_name: Optional[str] = None) -> Dict[str, LifecycleResult]:
if bucket_name:
return {bucket_name: self.enforce_rules(bucket_name)}
return self.enforce_all_buckets()
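The enforcer is typically driven by a periodic scheduler. A minimal sketch, assuming an enforcer object that exposes run_now() and LifecycleResult exactly as shown above; the interval and logging are illustrative only:

import threading

def schedule_lifecycle(enforcer, interval_seconds: float = 3600.0) -> threading.Timer:
    # Re-run enforcement on a timer; run_now() with no bucket enforces every bucket.
    def tick() -> None:
        results = enforcer.run_now()
        for bucket, result in results.items():
            if result.errors:
                print(f"{bucket}: {len(result.errors)} lifecycle error(s)")
        schedule_lifecycle(enforcer, interval_seconds)

    timer = threading.Timer(interval_seconds, tick)
    timer.daemon = True
    timer.start()
    return timer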

app/notifications.py Normal file

@@ -0,0 +1,334 @@
from __future__ import annotations
import json
import logging
import queue
import threading
import time
import uuid
from dataclasses import dataclass, field
from datetime import datetime, timezone
from pathlib import Path
from typing import Any, Dict, List, Optional
from urllib.parse import urlparse
import requests
logger = logging.getLogger(__name__)
@dataclass
class NotificationEvent:
event_name: str
bucket_name: str
object_key: str
object_size: int = 0
etag: str = ""
version_id: Optional[str] = None
timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
request_id: str = field(default_factory=lambda: uuid.uuid4().hex)
source_ip: str = ""
user_identity: str = ""
def to_s3_event(self) -> Dict[str, Any]:
return {
"Records": [
{
"eventVersion": "2.1",
"eventSource": "myfsio:s3",
"awsRegion": "local",
"eventTime": self.timestamp.strftime("%Y-%m-%dT%H:%M:%S.000Z"),
"eventName": self.event_name,
"userIdentity": {
"principalId": self.user_identity or "ANONYMOUS",
},
"requestParameters": {
"sourceIPAddress": self.source_ip or "127.0.0.1",
},
"responseElements": {
"x-amz-request-id": self.request_id,
"x-amz-id-2": self.request_id,
},
"s3": {
"s3SchemaVersion": "1.0",
"configurationId": "notification",
"bucket": {
"name": self.bucket_name,
"ownerIdentity": {"principalId": "local"},
"arn": f"arn:aws:s3:::{self.bucket_name}",
},
"object": {
"key": self.object_key,
"size": self.object_size,
"eTag": self.etag,
"versionId": self.version_id or "null",
"sequencer": f"{int(time.time() * 1000):016X}",
},
},
}
]
}
@dataclass
class WebhookDestination:
url: str
headers: Dict[str, str] = field(default_factory=dict)
timeout_seconds: int = 30
retry_count: int = 3
retry_delay_seconds: int = 1
def to_dict(self) -> Dict[str, Any]:
return {
"url": self.url,
"headers": self.headers,
"timeout_seconds": self.timeout_seconds,
"retry_count": self.retry_count,
"retry_delay_seconds": self.retry_delay_seconds,
}
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> "WebhookDestination":
return cls(
url=data.get("url", ""),
headers=data.get("headers", {}),
timeout_seconds=data.get("timeout_seconds", 30),
retry_count=data.get("retry_count", 3),
retry_delay_seconds=data.get("retry_delay_seconds", 1),
)
@dataclass
class NotificationConfiguration:
id: str
events: List[str]
destination: WebhookDestination
prefix_filter: str = ""
suffix_filter: str = ""
def matches_event(self, event_name: str, object_key: str) -> bool:
event_match = False
for pattern in self.events:
if pattern.endswith("*"):
base = pattern[:-1]
if event_name.startswith(base):
event_match = True
break
elif pattern == event_name:
event_match = True
break
if not event_match:
return False
if self.prefix_filter and not object_key.startswith(self.prefix_filter):
return False
if self.suffix_filter and not object_key.endswith(self.suffix_filter):
return False
return True
def to_dict(self) -> Dict[str, Any]:
return {
"Id": self.id,
"Events": self.events,
"Destination": self.destination.to_dict(),
"Filter": {
"Key": {
"FilterRules": [
{"Name": "prefix", "Value": self.prefix_filter},
{"Name": "suffix", "Value": self.suffix_filter},
]
}
},
}
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> "NotificationConfiguration":
prefix = ""
suffix = ""
filter_data = data.get("Filter", {})
key_filter = filter_data.get("Key", {})
for rule in key_filter.get("FilterRules", []):
if rule.get("Name") == "prefix":
prefix = rule.get("Value", "")
elif rule.get("Name") == "suffix":
suffix = rule.get("Value", "")
return cls(
id=data.get("Id", uuid.uuid4().hex),
events=data.get("Events", []),
destination=WebhookDestination.from_dict(data.get("Destination", {})),
prefix_filter=prefix,
suffix_filter=suffix,
)
class NotificationService:
def __init__(self, storage_root: Path, worker_count: int = 2):
self.storage_root = storage_root
self._configs: Dict[str, List[NotificationConfiguration]] = {}
self._queue: queue.Queue[tuple[NotificationEvent, WebhookDestination]] = queue.Queue()
self._workers: List[threading.Thread] = []
self._shutdown = threading.Event()
self._stats = {
"events_queued": 0,
"events_sent": 0,
"events_failed": 0,
}
for i in range(worker_count):
worker = threading.Thread(target=self._worker_loop, name=f"notification-worker-{i}", daemon=True)
worker.start()
self._workers.append(worker)
def _config_path(self, bucket_name: str) -> Path:
return self.storage_root / ".myfsio.sys" / "buckets" / bucket_name / "notifications.json"
def get_bucket_notifications(self, bucket_name: str) -> List[NotificationConfiguration]:
if bucket_name in self._configs:
return self._configs[bucket_name]
config_path = self._config_path(bucket_name)
if not config_path.exists():
return []
try:
data = json.loads(config_path.read_text(encoding="utf-8"))
configs = [NotificationConfiguration.from_dict(c) for c in data.get("configurations", [])]
self._configs[bucket_name] = configs
return configs
except (json.JSONDecodeError, OSError) as e:
logger.warning(f"Failed to load notification config for {bucket_name}: {e}")
return []
def set_bucket_notifications(
self, bucket_name: str, configurations: List[NotificationConfiguration]
) -> None:
config_path = self._config_path(bucket_name)
config_path.parent.mkdir(parents=True, exist_ok=True)
data = {"configurations": [c.to_dict() for c in configurations]}
config_path.write_text(json.dumps(data, indent=2), encoding="utf-8")
self._configs[bucket_name] = configurations
def delete_bucket_notifications(self, bucket_name: str) -> None:
config_path = self._config_path(bucket_name)
try:
if config_path.exists():
config_path.unlink()
except OSError:
pass
self._configs.pop(bucket_name, None)
def emit_event(self, event: NotificationEvent) -> None:
configurations = self.get_bucket_notifications(event.bucket_name)
if not configurations:
return
for config in configurations:
if config.matches_event(event.event_name, event.object_key):
self._queue.put((event, config.destination))
self._stats["events_queued"] += 1
logger.debug(
f"Queued notification for {event.event_name} on {event.bucket_name}/{event.object_key}"
)
def emit_object_created(
self,
bucket_name: str,
object_key: str,
*,
size: int = 0,
etag: str = "",
version_id: Optional[str] = None,
request_id: str = "",
source_ip: str = "",
user_identity: str = "",
operation: str = "Put",
) -> None:
event = NotificationEvent(
event_name=f"s3:ObjectCreated:{operation}",
bucket_name=bucket_name,
object_key=object_key,
object_size=size,
etag=etag,
version_id=version_id,
request_id=request_id or uuid.uuid4().hex,
source_ip=source_ip,
user_identity=user_identity,
)
self.emit_event(event)
def emit_object_removed(
self,
bucket_name: str,
object_key: str,
*,
version_id: Optional[str] = None,
request_id: str = "",
source_ip: str = "",
user_identity: str = "",
operation: str = "Delete",
) -> None:
event = NotificationEvent(
event_name=f"s3:ObjectRemoved:{operation}",
bucket_name=bucket_name,
object_key=object_key,
version_id=version_id,
request_id=request_id or uuid.uuid4().hex,
source_ip=source_ip,
user_identity=user_identity,
)
self.emit_event(event)
def _worker_loop(self) -> None:
while not self._shutdown.is_set():
try:
event, destination = self._queue.get(timeout=1.0)
except queue.Empty:
continue
try:
self._send_notification(event, destination)
self._stats["events_sent"] += 1
except Exception as e:
self._stats["events_failed"] += 1
logger.error(f"Failed to send notification: {e}")
finally:
self._queue.task_done()
def _send_notification(self, event: NotificationEvent, destination: WebhookDestination) -> None:
payload = event.to_s3_event()
headers = {"Content-Type": "application/json", **destination.headers}
last_error = None
for attempt in range(destination.retry_count):
try:
response = requests.post(
destination.url,
json=payload,
headers=headers,
timeout=destination.timeout_seconds,
)
if response.status_code < 400:
logger.info(
f"Notification sent: {event.event_name} -> {destination.url} (status={response.status_code})"
)
return
last_error = f"HTTP {response.status_code}: {response.text[:200]}"
except requests.RequestException as e:
last_error = str(e)
if attempt < destination.retry_count - 1:
time.sleep(destination.retry_delay_seconds * (attempt + 1))
raise RuntimeError(f"Failed after {destination.retry_count} attempts: {last_error}")
def get_stats(self) -> Dict[str, int]:
return dict(self._stats)
def shutdown(self) -> None:
self._shutdown.set()
for worker in self._workers:
worker.join(timeout=5.0)
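A minimal usage sketch for the service above; the bucket name, webhook URL, and storage path are placeholders, while the class and method names mirror the code:

from pathlib import Path

service = NotificationService(storage_root=Path("data"), worker_count=2)
dest = WebhookDestination(url="https://example.invalid/hook", retry_count=3)
config = NotificationConfiguration(
    id="uploads-to-webhook",
    events=["s3:ObjectCreated:*"],  # wildcard handled by matches_event()
    destination=dest,
    prefix_filter="uploads/",
)
service.set_bucket_notifications("my-bucket", [config])

# Called by the API layer after a successful PUT; delivery happens on the worker threads.
service.emit_object_created("my-bucket", "uploads/report.pdf", size=1024, etag="abc123")
print(service.get_stats())
service.shutdown()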

app/object_lock.py Normal file

@@ -0,0 +1,234 @@
from __future__ import annotations
import json
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum
from pathlib import Path
from typing import Any, Dict, Optional
class RetentionMode(Enum):
GOVERNANCE = "GOVERNANCE"
COMPLIANCE = "COMPLIANCE"
class ObjectLockError(Exception):
pass
@dataclass
class ObjectLockRetention:
mode: RetentionMode
retain_until_date: datetime
def to_dict(self) -> Dict[str, str]:
return {
"Mode": self.mode.value,
"RetainUntilDate": self.retain_until_date.isoformat(),
}
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> Optional["ObjectLockRetention"]:
if not data:
return None
mode_str = data.get("Mode")
date_str = data.get("RetainUntilDate")
if not mode_str or not date_str:
return None
try:
mode = RetentionMode(mode_str)
retain_until = datetime.fromisoformat(date_str.replace("Z", "+00:00"))
return cls(mode=mode, retain_until_date=retain_until)
except (ValueError, KeyError):
return None
def is_expired(self) -> bool:
return datetime.now(timezone.utc) > self.retain_until_date
@dataclass
class ObjectLockConfig:
enabled: bool = False
default_retention: Optional[ObjectLockRetention] = None
def to_dict(self) -> Dict[str, Any]:
result: Dict[str, Any] = {"ObjectLockEnabled": "Enabled" if self.enabled else "Disabled"}
if self.default_retention:
result["Rule"] = {
"DefaultRetention": {
"Mode": self.default_retention.mode.value,
"Days": None,
"Years": None,
}
}
return result
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> "ObjectLockConfig":
enabled = data.get("ObjectLockEnabled") == "Enabled"
default_retention = None
rule = data.get("Rule")
if rule and "DefaultRetention" in rule:
dr = rule["DefaultRetention"]
mode_str = dr.get("Mode", "GOVERNANCE")
days = dr.get("Days")
years = dr.get("Years")
if days or years:
from datetime import timedelta
now = datetime.now(timezone.utc)
if years:
delta = timedelta(days=int(years) * 365)
else:
delta = timedelta(days=int(days))
default_retention = ObjectLockRetention(
mode=RetentionMode(mode_str),
retain_until_date=now + delta,
)
return cls(enabled=enabled, default_retention=default_retention)
class ObjectLockService:
def __init__(self, storage_root: Path):
self.storage_root = storage_root
self._config_cache: Dict[str, ObjectLockConfig] = {}
def _bucket_lock_config_path(self, bucket_name: str) -> Path:
return self.storage_root / ".myfsio.sys" / "buckets" / bucket_name / "object_lock.json"
def _object_lock_meta_path(self, bucket_name: str, object_key: str) -> Path:
safe_key = object_key.replace("/", "_").replace("\\", "_")
return (
self.storage_root / ".myfsio.sys" / "buckets" / bucket_name /
"locks" / f"{safe_key}.lock.json"
)
def get_bucket_lock_config(self, bucket_name: str) -> ObjectLockConfig:
if bucket_name in self._config_cache:
return self._config_cache[bucket_name]
config_path = self._bucket_lock_config_path(bucket_name)
if not config_path.exists():
return ObjectLockConfig(enabled=False)
try:
data = json.loads(config_path.read_text(encoding="utf-8"))
config = ObjectLockConfig.from_dict(data)
self._config_cache[bucket_name] = config
return config
except (json.JSONDecodeError, OSError):
return ObjectLockConfig(enabled=False)
def set_bucket_lock_config(self, bucket_name: str, config: ObjectLockConfig) -> None:
config_path = self._bucket_lock_config_path(bucket_name)
config_path.parent.mkdir(parents=True, exist_ok=True)
config_path.write_text(json.dumps(config.to_dict()), encoding="utf-8")
self._config_cache[bucket_name] = config
def enable_bucket_lock(self, bucket_name: str) -> None:
config = self.get_bucket_lock_config(bucket_name)
config.enabled = True
self.set_bucket_lock_config(bucket_name, config)
def is_bucket_lock_enabled(self, bucket_name: str) -> bool:
return self.get_bucket_lock_config(bucket_name).enabled
def get_object_retention(self, bucket_name: str, object_key: str) -> Optional[ObjectLockRetention]:
meta_path = self._object_lock_meta_path(bucket_name, object_key)
if not meta_path.exists():
return None
try:
data = json.loads(meta_path.read_text(encoding="utf-8"))
return ObjectLockRetention.from_dict(data.get("retention", {}))
except (json.JSONDecodeError, OSError):
return None
def set_object_retention(
self,
bucket_name: str,
object_key: str,
retention: ObjectLockRetention,
bypass_governance: bool = False,
) -> None:
existing = self.get_object_retention(bucket_name, object_key)
if existing and not existing.is_expired():
if existing.mode == RetentionMode.COMPLIANCE:
raise ObjectLockError(
"Cannot modify retention on object with COMPLIANCE mode until retention expires"
)
if existing.mode == RetentionMode.GOVERNANCE and not bypass_governance:
raise ObjectLockError(
"Cannot modify GOVERNANCE retention without bypass-governance permission"
)
meta_path = self._object_lock_meta_path(bucket_name, object_key)
meta_path.parent.mkdir(parents=True, exist_ok=True)
existing_data: Dict[str, Any] = {}
if meta_path.exists():
try:
existing_data = json.loads(meta_path.read_text(encoding="utf-8"))
except (json.JSONDecodeError, OSError):
pass
existing_data["retention"] = retention.to_dict()
meta_path.write_text(json.dumps(existing_data), encoding="utf-8")
def get_legal_hold(self, bucket_name: str, object_key: str) -> bool:
meta_path = self._object_lock_meta_path(bucket_name, object_key)
if not meta_path.exists():
return False
try:
data = json.loads(meta_path.read_text(encoding="utf-8"))
return data.get("legal_hold", False)
except (json.JSONDecodeError, OSError):
return False
def set_legal_hold(self, bucket_name: str, object_key: str, enabled: bool) -> None:
meta_path = self._object_lock_meta_path(bucket_name, object_key)
meta_path.parent.mkdir(parents=True, exist_ok=True)
existing_data: Dict[str, Any] = {}
if meta_path.exists():
try:
existing_data = json.loads(meta_path.read_text(encoding="utf-8"))
except (json.JSONDecodeError, OSError):
pass
existing_data["legal_hold"] = enabled
meta_path.write_text(json.dumps(existing_data), encoding="utf-8")
def can_delete_object(
self,
bucket_name: str,
object_key: str,
bypass_governance: bool = False,
) -> tuple[bool, str]:
if self.get_legal_hold(bucket_name, object_key):
return False, "Object is under legal hold"
retention = self.get_object_retention(bucket_name, object_key)
if retention and not retention.is_expired():
if retention.mode == RetentionMode.COMPLIANCE:
return False, f"Object is locked in COMPLIANCE mode until {retention.retain_until_date.isoformat()}"
if retention.mode == RetentionMode.GOVERNANCE:
if not bypass_governance:
return False, f"Object is locked in GOVERNANCE mode until {retention.retain_until_date.isoformat()}"
return True, ""
def can_overwrite_object(
self,
bucket_name: str,
object_key: str,
bypass_governance: bool = False,
) -> tuple[bool, str]:
return self.can_delete_object(bucket_name, object_key, bypass_governance)
def delete_object_lock_metadata(self, bucket_name: str, object_key: str) -> None:
meta_path = self._object_lock_meta_path(bucket_name, object_key)
try:
if meta_path.exists():
meta_path.unlink()
except OSError:
pass
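A minimal sketch of the retention flow using the service above; the bucket name, object key, and storage path are placeholders, the calls mirror ObjectLockService:

from datetime import datetime, timedelta, timezone
from pathlib import Path

locks = ObjectLockService(storage_root=Path("data"))
locks.enable_bucket_lock("archive")

retention = ObjectLockRetention(
    mode=RetentionMode.GOVERNANCE,
    retain_until_date=datetime.now(timezone.utc) + timedelta(days=30),
)
locks.set_object_retention("archive", "2025/report.csv", retention)

# GOVERNANCE locks block deletes unless the caller has bypass permission.
allowed, reason = locks.can_delete_object("archive", "2025/report.csv")
if not allowed:
    print(f"delete blocked: {reason}")
allowed, _ = locks.can_delete_object("archive", "2025/report.csv", bypass_governance=True)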


@@ -1,4 +1,3 @@
"""Background replication worker."""
from __future__ import annotations
import json
@@ -9,7 +8,7 @@ import time
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass, field
from pathlib import Path
from typing import Dict, Optional
from typing import Any, Dict, Optional
import boto3
from botocore.config import Config
@@ -24,11 +23,38 @@ logger = logging.getLogger(__name__)
REPLICATION_USER_AGENT = "S3ReplicationAgent/1.0"
REPLICATION_CONNECT_TIMEOUT = 5
REPLICATION_READ_TIMEOUT = 30
STREAMING_THRESHOLD_BYTES = 10 * 1024 * 1024 # 10 MiB - use streaming for larger files
REPLICATION_MODE_NEW_ONLY = "new_only"
REPLICATION_MODE_ALL = "all"
def _create_s3_client(connection: RemoteConnection, *, health_check: bool = False) -> Any:
"""Create a boto3 S3 client for the given connection.
Args:
connection: Remote S3 connection configuration
health_check: If True, use minimal retries for quick health checks
"""
config = Config(
user_agent_extra=REPLICATION_USER_AGENT,
connect_timeout=REPLICATION_CONNECT_TIMEOUT,
read_timeout=REPLICATION_READ_TIMEOUT,
retries={'max_attempts': 1 if health_check else 2},
signature_version='s3v4',
s3={'addressing_style': 'path'},
request_checksum_calculation='when_required',
response_checksum_validation='when_required',
)
return boto3.client(
"s3",
endpoint_url=connection.endpoint_url,
aws_access_key_id=connection.access_key,
aws_secret_access_key=connection.secret_key,
region_name=connection.region or 'us-east-1',
config=config,
)
@dataclass
class ReplicationStats:
"""Statistics for replication operations - computed dynamically."""
@@ -102,8 +128,19 @@ class ReplicationManager:
self._rules: Dict[str, ReplicationRule] = {}
self._stats_lock = threading.Lock()
self._executor = ThreadPoolExecutor(max_workers=4, thread_name_prefix="ReplicationWorker")
self._shutdown = False
self.reload_rules()
def shutdown(self, wait: bool = True) -> None:
"""Shutdown the replication executor gracefully.
Args:
wait: If True, wait for pending tasks to complete
"""
self._shutdown = True
self._executor.shutdown(wait=wait)
logger.info("Replication manager shut down")
def reload_rules(self) -> None:
if not self.rules_path.exists():
self._rules = {}
@@ -129,20 +166,7 @@ class ReplicationManager:
Uses short timeouts to prevent blocking.
"""
try:
config = Config(
user_agent_extra=REPLICATION_USER_AGENT,
connect_timeout=REPLICATION_CONNECT_TIMEOUT,
read_timeout=REPLICATION_READ_TIMEOUT,
retries={'max_attempts': 1}
)
s3 = boto3.client(
"s3",
endpoint_url=connection.endpoint_url,
aws_access_key_id=connection.access_key,
aws_secret_access_key=connection.secret_key,
region_name=connection.region,
config=config,
)
s3 = _create_s3_client(connection, health_check=True)
s3.list_buckets()
return True
except Exception as e:
@@ -153,9 +177,15 @@ class ReplicationManager:
return self._rules.get(bucket_name)
def set_rule(self, rule: ReplicationRule) -> None:
old_rule = self._rules.get(rule.bucket_name)
was_all_mode = bool(old_rule and old_rule.mode == REPLICATION_MODE_ALL)
self._rules[rule.bucket_name] = rule
self.save_rules()
if rule.mode == REPLICATION_MODE_ALL and rule.enabled and not was_all_mode:
logger.info(f"Replication mode ALL enabled for {rule.bucket_name}, triggering sync of existing objects")
self._executor.submit(self.replicate_existing_objects, rule.bucket_name)
def delete_rule(self, bucket_name: str) -> None:
if bucket_name in self._rules:
del self._rules[bucket_name]
@@ -185,13 +215,7 @@ class ReplicationManager:
source_objects = self.storage.list_objects_all(bucket_name)
source_keys = {obj.key: obj.size for obj in source_objects}
s3 = boto3.client(
"s3",
endpoint_url=connection.endpoint_url,
aws_access_key_id=connection.access_key,
aws_secret_access_key=connection.secret_key,
region_name=connection.region,
)
s3 = _create_s3_client(connection)
dest_keys = set()
bytes_synced = 0
@@ -257,13 +281,7 @@ class ReplicationManager:
raise ValueError(f"Connection {connection_id} not found")
try:
s3 = boto3.client(
"s3",
endpoint_url=connection.endpoint_url,
aws_access_key_id=connection.access_key,
aws_secret_access_key=connection.secret_key,
region_name=connection.region,
)
s3 = _create_s3_client(connection)
s3.create_bucket(Bucket=bucket_name)
except ClientError as e:
logger.error(f"Failed to create remote bucket {bucket_name}: {e}")
@@ -286,6 +304,15 @@ class ReplicationManager:
self._executor.submit(self._replicate_task, bucket_name, object_key, rule, connection, action)
def _replicate_task(self, bucket_name: str, object_key: str, rule: ReplicationRule, conn: RemoteConnection, action: str) -> None:
if self._shutdown:
return
# Re-check if rule is still enabled (may have been paused after task was submitted)
current_rule = self.get_rule(bucket_name)
if not current_rule or not current_rule.enabled:
logger.debug(f"Replication skipped for {bucket_name}/{object_key}: rule disabled or removed")
return
if ".." in object_key or object_key.startswith("/") or object_key.startswith("\\"):
logger.error(f"Invalid object key in replication (path traversal attempt): {object_key}")
return
@@ -297,30 +324,8 @@ class ReplicationManager:
logger.error(f"Object key validation failed in replication: {e}")
return
file_size = 0
try:
config = Config(
user_agent_extra=REPLICATION_USER_AGENT,
connect_timeout=REPLICATION_CONNECT_TIMEOUT,
read_timeout=REPLICATION_READ_TIMEOUT,
retries={'max_attempts': 2},
signature_version='s3v4',
s3={
'addressing_style': 'path',
},
# Disable SDK automatic checksums - they cause SignatureDoesNotMatch errors
# with S3-compatible servers that don't support CRC32 checksum headers
request_checksum_calculation='when_required',
response_checksum_validation='when_required',
)
s3 = boto3.client(
"s3",
endpoint_url=conn.endpoint_url,
aws_access_key_id=conn.access_key,
aws_secret_access_key=conn.secret_key,
region_name=conn.region or 'us-east-1',
config=config,
)
s3 = _create_s3_client(conn)
if action == "delete":
try:
@@ -337,34 +342,42 @@ class ReplicationManager:
logger.error(f"Source object not found: {bucket_name}/{object_key}")
return
# Don't replicate metadata - destination server will generate its own
# __etag__ and __size__. Replicating them causes signature mismatches when they have None/empty values.
content_type, _ = mimetypes.guess_type(path)
file_size = path.stat().st_size
logger.info(f"Replicating {bucket_name}/{object_key}: Size={file_size}, ContentType={content_type}")
def do_put_object() -> None:
"""Helper to upload object.
def do_upload() -> None:
"""Upload object using appropriate method based on file size.
Reads the file content into memory first to avoid signature calculation
issues with certain binary file types (like GIFs) when streaming.
Do NOT set ContentLength explicitly - boto3 calculates it from the bytes
and setting it manually can cause SignatureDoesNotMatch errors.
For small files (< 10 MiB): Read into memory for simpler handling
For large files: Use streaming upload to avoid memory issues
"""
file_content = path.read_bytes()
put_kwargs = {
"Bucket": rule.target_bucket,
"Key": object_key,
"Body": file_content,
}
extra_args = {}
if content_type:
put_kwargs["ContentType"] = content_type
s3.put_object(**put_kwargs)
extra_args["ContentType"] = content_type
if file_size >= STREAMING_THRESHOLD_BYTES:
# Use multipart upload for large files
s3.upload_file(
str(path),
rule.target_bucket,
object_key,
ExtraArgs=extra_args if extra_args else None,
)
else:
# Read small files into memory
file_content = path.read_bytes()
put_kwargs = {
"Bucket": rule.target_bucket,
"Key": object_key,
"Body": file_content,
**extra_args,
}
s3.put_object(**put_kwargs)
try:
do_put_object()
do_upload()
except (ClientError, S3UploadFailedError) as e:
error_code = None
if isinstance(e, ClientError):
@@ -389,7 +402,7 @@ class ReplicationManager:
raise e
if bucket_ready:
do_put_object()
do_upload()
else:
raise e

File diff suppressed because it is too large


@@ -1,4 +1,3 @@
"""Ephemeral store for one-time secrets communicated to the UI."""
from __future__ import annotations
import secrets


@@ -1,4 +1,3 @@
"""Filesystem-backed object storage helpers."""
from __future__ import annotations
import hashlib
@@ -7,9 +6,11 @@ import os
import re
import shutil
import stat
import threading
import time
import unicodedata
import uuid
from collections import OrderedDict
from contextlib import contextmanager
from dataclasses import dataclass
from datetime import datetime, timezone
@@ -129,12 +130,29 @@ class ObjectStorage:
MULTIPART_MANIFEST = "manifest.json"
BUCKET_CONFIG_FILE = ".bucket.json"
KEY_INDEX_CACHE_TTL = 30
OBJECT_CACHE_MAX_SIZE = 100 # Maximum number of buckets to cache
def __init__(self, root: Path) -> None:
self.root = Path(root)
self.root.mkdir(parents=True, exist_ok=True)
self._ensure_system_roots()
self._object_cache: Dict[str, tuple[Dict[str, ObjectMeta], float]] = {}
# LRU cache for object metadata with thread-safe access
self._object_cache: OrderedDict[str, tuple[Dict[str, ObjectMeta], float]] = OrderedDict()
self._cache_lock = threading.Lock() # Global lock for cache structure
# Performance: Per-bucket locks to reduce contention
self._bucket_locks: Dict[str, threading.Lock] = {}
# Cache version counter for detecting stale reads
self._cache_version: Dict[str, int] = {}
# Performance: Bucket config cache with TTL
self._bucket_config_cache: Dict[str, tuple[dict[str, Any], float]] = {}
self._bucket_config_cache_ttl = 30.0 # 30 second TTL
def _get_bucket_lock(self, bucket_id: str) -> threading.Lock:
"""Get or create a lock for a specific bucket. Reduces global lock contention."""
with self._cache_lock:
if bucket_id not in self._bucket_locks:
self._bucket_locks[bucket_id] = threading.Lock()
return self._bucket_locks[bucket_id]
def list_buckets(self) -> List[BucketMeta]:
buckets: List[BucketMeta] = []
@@ -240,11 +258,13 @@ class ObjectStorage:
bucket_path = self._bucket_path(bucket_name)
if not bucket_path.exists():
raise StorageError("Bucket does not exist")
if self._has_visible_objects(bucket_path):
# Performance: Single check instead of three separate traversals
has_objects, has_versions, has_multipart = self._check_bucket_contents(bucket_path)
if has_objects:
raise StorageError("Bucket not empty")
if self._has_archived_versions(bucket_path):
if has_versions:
raise StorageError("Bucket contains archived object versions")
if self._has_active_multipart_uploads(bucket_path):
if has_multipart:
raise StorageError("Bucket has active multipart uploads")
self._remove_tree(bucket_path)
self._remove_tree(self._system_bucket_root(bucket_path.name))
@@ -388,15 +408,18 @@ class ObjectStorage:
self._write_metadata(bucket_id, safe_key, combined_meta)
self._invalidate_bucket_stats_cache(bucket_id)
self._invalidate_object_cache(bucket_id)
return ObjectMeta(
# Performance: Lazy update - only update the affected key instead of invalidating whole cache
obj_meta = ObjectMeta(
key=safe_key.as_posix(),
size=stat.st_size,
last_modified=datetime.fromtimestamp(stat.st_mtime, timezone.utc),
etag=etag,
metadata=metadata,
)
self._update_object_cache_entry(bucket_id, safe_key.as_posix(), obj_meta)
return obj_meta
def get_object_path(self, bucket_name: str, object_key: str) -> Path:
path = self._object_path(bucket_name, object_key)
@@ -444,7 +467,8 @@ class ObjectStorage:
self._delete_metadata(bucket_id, rel)
self._invalidate_bucket_stats_cache(bucket_id)
self._invalidate_object_cache(bucket_id)
# Performance: Lazy update - only remove the affected key instead of invalidating whole cache
self._update_object_cache_entry(bucket_id, safe_key.as_posix(), None)
self._cleanup_empty_parents(path, bucket_path)
def purge_object(self, bucket_name: str, object_key: str) -> None:
@@ -466,7 +490,8 @@ class ObjectStorage:
shutil.rmtree(legacy_version_dir, ignore_errors=True)
self._invalidate_bucket_stats_cache(bucket_id)
self._invalidate_object_cache(bucket_id)
# Performance: Lazy update - only remove the affected key instead of invalidating whole cache
self._update_object_cache_entry(bucket_id, rel.as_posix(), None)
self._cleanup_empty_parents(target, bucket_path)
def is_versioning_enabled(self, bucket_name: str) -> bool:
@@ -729,8 +754,6 @@ class ObjectStorage:
bucket_id = bucket_path.name
safe_key = self._sanitize_object_key(object_key)
version_dir = self._version_dir(bucket_id, safe_key)
if not version_dir.exists():
version_dir = self._legacy_version_dir(bucket_id, safe_key)
if not version_dir.exists():
version_dir = self._legacy_version_dir(bucket_id, safe_key)
if not version_dir.exists():
@@ -785,6 +808,29 @@ class ObjectStorage:
metadata=metadata or None,
)
def delete_object_version(self, bucket_name: str, object_key: str, version_id: str) -> None:
bucket_path = self._bucket_path(bucket_name)
if not bucket_path.exists():
raise StorageError("Bucket does not exist")
bucket_id = bucket_path.name
safe_key = self._sanitize_object_key(object_key)
version_dir = self._version_dir(bucket_id, safe_key)
data_path = version_dir / f"{version_id}.bin"
meta_path = version_dir / f"{version_id}.json"
if not data_path.exists() and not meta_path.exists():
legacy_version_dir = self._legacy_version_dir(bucket_id, safe_key)
data_path = legacy_version_dir / f"{version_id}.bin"
meta_path = legacy_version_dir / f"{version_id}.json"
if not data_path.exists() and not meta_path.exists():
raise StorageError(f"Version {version_id} not found")
if data_path.exists():
data_path.unlink()
if meta_path.exists():
meta_path.unlink()
parent = data_path.parent
if parent.exists() and not any(parent.iterdir()):
parent.rmdir()
def list_orphaned_objects(self, bucket_name: str) -> List[Dict[str, Any]]:
bucket_path = self._bucket_path(bucket_name)
if not bucket_path.exists():
@@ -879,6 +925,10 @@ class ObjectStorage:
part_number: int,
stream: BinaryIO,
) -> str:
"""Upload a part for a multipart upload.
Uses file locking to safely update the manifest and handle concurrent uploads.
"""
if part_number < 1:
raise StorageError("part_number must be >= 1")
bucket_path = self._bucket_path(bucket_name)
@@ -889,11 +939,26 @@ class ObjectStorage:
if not upload_root.exists():
raise StorageError("Multipart upload not found")
# Write part to temporary file first, then rename atomically
checksum = hashlib.md5()
part_filename = f"part-{part_number:05d}.part"
part_path = upload_root / part_filename
with part_path.open("wb") as target:
shutil.copyfileobj(_HashingReader(stream, checksum), target)
temp_path = upload_root / f".{part_filename}.tmp"
try:
with temp_path.open("wb") as target:
shutil.copyfileobj(_HashingReader(stream, checksum), target)
# Atomic rename (or replace on Windows)
temp_path.replace(part_path)
except OSError:
# Clean up temp file on failure
try:
temp_path.unlink(missing_ok=True)
except OSError:
pass
raise
record = {
"etag": checksum.hexdigest(),
"size": part_path.stat().st_size,
@@ -903,16 +968,29 @@ class ObjectStorage:
manifest_path = upload_root / self.MULTIPART_MANIFEST
lock_path = upload_root / ".manifest.lock"
with lock_path.open("w") as lock_file:
with _file_lock(lock_file):
try:
manifest = json.loads(manifest_path.read_text(encoding="utf-8"))
except (OSError, json.JSONDecodeError) as exc:
raise StorageError("Multipart manifest unreadable") from exc
# Retry loop for handling transient lock/read failures
max_retries = 3
for attempt in range(max_retries):
try:
with lock_path.open("w") as lock_file:
with _file_lock(lock_file):
try:
manifest = json.loads(manifest_path.read_text(encoding="utf-8"))
except (OSError, json.JSONDecodeError) as exc:
if attempt < max_retries - 1:
time.sleep(0.1 * (attempt + 1))
continue
raise StorageError("Multipart manifest unreadable") from exc
parts = manifest.setdefault("parts", {})
parts[str(part_number)] = record
manifest_path.write_text(json.dumps(manifest), encoding="utf-8")
parts = manifest.setdefault("parts", {})
parts[str(part_number)] = record
manifest_path.write_text(json.dumps(manifest), encoding="utf-8")
break
except OSError as exc:
if attempt < max_retries - 1:
time.sleep(0.1 * (attempt + 1))
continue
raise StorageError(f"Failed to update multipart manifest: {exc}") from exc
return record["etag"]
@@ -1019,13 +1097,17 @@ class ObjectStorage:
self._invalidate_bucket_stats_cache(bucket_id)
stat = destination.stat()
return ObjectMeta(
# Performance: Lazy update - only update the affected key instead of invalidating whole cache
obj_meta = ObjectMeta(
key=safe_key.as_posix(),
size=stat.st_size,
last_modified=datetime.fromtimestamp(stat.st_mtime, timezone.utc),
etag=checksum.hexdigest(),
metadata=metadata,
)
self._update_object_cache_entry(bucket_id, safe_key.as_posix(), obj_meta)
return obj_meta
def abort_multipart_upload(self, bucket_name: str, upload_id: str) -> None:
bucket_path = self._bucket_path(bucket_name)
@@ -1064,6 +1146,49 @@ class ObjectStorage:
parts.sort(key=lambda x: x["PartNumber"])
return parts
def list_multipart_uploads(self, bucket_name: str) -> List[Dict[str, Any]]:
"""List all active multipart uploads for a bucket."""
bucket_path = self._bucket_path(bucket_name)
if not bucket_path.exists():
raise StorageError("Bucket does not exist")
bucket_id = bucket_path.name
uploads = []
multipart_root = self._bucket_multipart_root(bucket_id)
if multipart_root.exists():
for upload_dir in multipart_root.iterdir():
if not upload_dir.is_dir():
continue
manifest_path = upload_dir / "manifest.json"
if not manifest_path.exists():
continue
try:
manifest = json.loads(manifest_path.read_text(encoding="utf-8"))
uploads.append({
"upload_id": manifest.get("upload_id", upload_dir.name),
"object_key": manifest.get("object_key", ""),
"created_at": manifest.get("created_at", ""),
})
except (OSError, json.JSONDecodeError):
continue
legacy_root = self._legacy_multipart_root(bucket_id)
if legacy_root.exists():
for upload_dir in legacy_root.iterdir():
if not upload_dir.is_dir():
continue
manifest_path = upload_dir / "manifest.json"
if not manifest_path.exists():
continue
try:
manifest = json.loads(manifest_path.read_text(encoding="utf-8"))
uploads.append({
"upload_id": manifest.get("upload_id", upload_dir.name),
"object_key": manifest.get("object_key", ""),
"created_at": manifest.get("created_at", ""),
})
except (OSError, json.JSONDecodeError):
continue
return uploads
def _bucket_path(self, bucket_name: str) -> Path:
safe_name = self._sanitize_bucket_name(bucket_name)
return self.root / safe_name
@@ -1264,28 +1389,85 @@ class ObjectStorage:
return objects
def _get_object_cache(self, bucket_id: str, bucket_path: Path) -> Dict[str, ObjectMeta]:
"""Get cached object metadata for a bucket, refreshing if stale."""
"""Get cached object metadata for a bucket, refreshing if stale.
Uses LRU eviction to prevent unbounded cache growth.
Thread-safe with per-bucket locks to reduce contention.
"""
now = time.time()
cached = self._object_cache.get(bucket_id)
if cached:
objects, timestamp = cached
if now - timestamp < self.KEY_INDEX_CACHE_TTL:
return objects
# Quick check with global lock (brief)
with self._cache_lock:
cached = self._object_cache.get(bucket_id)
if cached:
objects, timestamp = cached
if now - timestamp < self.KEY_INDEX_CACHE_TTL:
self._object_cache.move_to_end(bucket_id)
return objects
cache_version = self._cache_version.get(bucket_id, 0)
# Use per-bucket lock for cache building (allows parallel builds for different buckets)
bucket_lock = self._get_bucket_lock(bucket_id)
with bucket_lock:
# Double-check cache after acquiring per-bucket lock
with self._cache_lock:
cached = self._object_cache.get(bucket_id)
if cached:
objects, timestamp = cached
if now - timestamp < self.KEY_INDEX_CACHE_TTL:
self._object_cache.move_to_end(bucket_id)
return objects
# Build cache with per-bucket lock held (prevents duplicate work)
objects = self._build_object_cache(bucket_path)
with self._cache_lock:
# Check if cache was invalidated while we were building
current_version = self._cache_version.get(bucket_id, 0)
if current_version != cache_version:
objects = self._build_object_cache(bucket_path)
# Evict oldest entries if cache is full
while len(self._object_cache) >= self.OBJECT_CACHE_MAX_SIZE:
self._object_cache.popitem(last=False)
self._object_cache[bucket_id] = (objects, time.time())
self._object_cache.move_to_end(bucket_id)
objects = self._build_object_cache(bucket_path)
self._object_cache[bucket_id] = (objects, now)
return objects
def _invalidate_object_cache(self, bucket_id: str) -> None:
"""Invalidate the object cache and etag index for a bucket."""
self._object_cache.pop(bucket_id, None)
"""Invalidate the object cache and etag index for a bucket.
Increments version counter to signal stale reads.
"""
with self._cache_lock:
self._object_cache.pop(bucket_id, None)
self._cache_version[bucket_id] = self._cache_version.get(bucket_id, 0) + 1
etag_index_path = self._system_bucket_root(bucket_id) / "etag_index.json"
try:
etag_index_path.unlink(missing_ok=True)
except OSError:
pass
def _update_object_cache_entry(self, bucket_id: str, key: str, meta: Optional[ObjectMeta]) -> None:
"""Update a single entry in the object cache instead of invalidating the whole cache.
This is a performance optimization - lazy update instead of full invalidation.
"""
with self._cache_lock:
cached = self._object_cache.get(bucket_id)
if cached:
objects, timestamp = cached
if meta is None:
# Delete operation - remove key from cache
objects.pop(key, None)
else:
# Put operation - update/add key in cache
objects[key] = meta
# Keep same timestamp - don't reset TTL for single key updates
def _ensure_system_roots(self) -> None:
for path in (
self._system_root_path(),
@@ -1305,19 +1487,33 @@ class ObjectStorage:
return self._system_bucket_root(bucket_name) / self.BUCKET_CONFIG_FILE
def _read_bucket_config(self, bucket_name: str) -> dict[str, Any]:
# Performance: Check cache first
now = time.time()
cached = self._bucket_config_cache.get(bucket_name)
if cached:
config, cached_time = cached
if now - cached_time < self._bucket_config_cache_ttl:
return config.copy() # Return copy to prevent mutation
config_path = self._bucket_config_path(bucket_name)
if not config_path.exists():
self._bucket_config_cache[bucket_name] = ({}, now)
return {}
try:
data = json.loads(config_path.read_text(encoding="utf-8"))
return data if isinstance(data, dict) else {}
config = data if isinstance(data, dict) else {}
self._bucket_config_cache[bucket_name] = (config, now)
return config.copy()
except (OSError, json.JSONDecodeError):
self._bucket_config_cache[bucket_name] = ({}, now)
return {}
def _write_bucket_config(self, bucket_name: str, payload: dict[str, Any]) -> None:
config_path = self._bucket_config_path(bucket_name)
config_path.parent.mkdir(parents=True, exist_ok=True)
config_path.write_text(json.dumps(payload), encoding="utf-8")
# Performance: Update cache immediately after write
self._bucket_config_cache[bucket_name] = (payload.copy(), time.time())
def _set_bucket_config_entry(self, bucket_name: str, key: str, value: Any | None) -> None:
config = self._read_bucket_config(bucket_name)
@@ -1439,33 +1635,68 @@ class ObjectStorage:
except OSError:
continue
def _has_visible_objects(self, bucket_path: Path) -> bool:
def _check_bucket_contents(self, bucket_path: Path) -> tuple[bool, bool, bool]:
"""Check bucket for objects, versions, and multipart uploads in a single pass.
Performance optimization: Combines three separate rglob traversals into one.
Returns (has_visible_objects, has_archived_versions, has_active_multipart_uploads).
Each check exits its traversal as soon as its flag is set.
"""
has_objects = False
has_versions = False
has_multipart = False
bucket_name = bucket_path.name
# Check visible objects in bucket
for path in bucket_path.rglob("*"):
if has_objects:
break
if not path.is_file():
continue
rel = path.relative_to(bucket_path)
if rel.parts and rel.parts[0] in self.INTERNAL_FOLDERS:
continue
return True
return False
has_objects = True
# Check archived versions (only if needed)
for version_root in (
self._bucket_versions_root(bucket_name),
self._legacy_versions_root(bucket_name),
):
if has_versions:
break
if version_root.exists():
for path in version_root.rglob("*"):
if path.is_file():
has_versions = True
break
# Check multipart uploads (only if needed)
for uploads_root in (
self._multipart_bucket_root(bucket_name),
self._legacy_multipart_bucket_root(bucket_name),
):
if has_multipart:
break
if uploads_root.exists():
for path in uploads_root.rglob("*"):
if path.is_file():
has_multipart = True
break
return has_objects, has_versions, has_multipart
def _has_visible_objects(self, bucket_path: Path) -> bool:
has_objects, _, _ = self._check_bucket_contents(bucket_path)
return has_objects
def _has_archived_versions(self, bucket_path: Path) -> bool:
for version_root in (
self._bucket_versions_root(bucket_path.name),
self._legacy_versions_root(bucket_path.name),
):
if version_root.exists() and any(path.is_file() for path in version_root.rglob("*")):
return True
return False
_, has_versions, _ = self._check_bucket_contents(bucket_path)
return has_versions
def _has_active_multipart_uploads(self, bucket_path: Path) -> bool:
for uploads_root in (
self._multipart_bucket_root(bucket_path.name),
self._legacy_multipart_bucket_root(bucket_path.name),
):
if uploads_root.exists() and any(path.is_file() for path in uploads_root.rglob("*")):
return True
return False
_, _, has_multipart = self._check_bucket_contents(bucket_path)
return has_multipart
def _remove_tree(self, path: Path) -> None:
if not path.exists():

app/ui.py

@@ -1,4 +1,3 @@
"""Authenticated HTML UI for browsing buckets and objects."""
from __future__ import annotations
import json
@@ -26,6 +25,7 @@ from flask import (
)
from flask_wtf.csrf import generate_csrf
from .acl import AclService, create_canned_acl, CANNED_ACLS
from .bucket_policies import BucketPolicyStore
from .connections import ConnectionStore, RemoteConnection
from .extensions import limiter
@@ -75,6 +75,10 @@ def _secret_store() -> EphemeralSecretStore:
return store
def _acl() -> AclService:
return current_app.extensions["acl"]
def _format_bytes(num: int) -> str:
step = 1024
units = ["B", "KB", "MB", "GB", "TB", "PB"]
@@ -379,10 +383,21 @@ def bucket_detail(bucket_name: str):
objects_api_url = url_for("ui.list_bucket_objects", bucket_name=bucket_name)
lifecycle_url = url_for("ui.bucket_lifecycle", bucket_name=bucket_name)
cors_url = url_for("ui.bucket_cors", bucket_name=bucket_name)
acl_url = url_for("ui.bucket_acl", bucket_name=bucket_name)
folders_url = url_for("ui.create_folder", bucket_name=bucket_name)
buckets_for_copy_url = url_for("ui.list_buckets_for_copy", bucket_name=bucket_name)
return render_template(
"bucket_detail.html",
bucket_name=bucket_name,
objects_api_url=objects_api_url,
lifecycle_url=lifecycle_url,
cors_url=cors_url,
acl_url=acl_url,
folders_url=folders_url,
buckets_for_copy_url=buckets_for_copy_url,
principal=principal,
bucket_policy_text=policy_text,
bucket_policy=bucket_policy,
@@ -415,7 +430,7 @@ def list_bucket_objects(bucket_name: str):
except IamError as exc:
return jsonify({"error": str(exc)}), 403
max_keys = min(int(request.args.get("max_keys", 1000)), 10000)
max_keys = min(int(request.args.get("max_keys", 1000)), 100000)
continuation_token = request.args.get("continuation_token") or None
prefix = request.args.get("prefix") or None
@@ -434,6 +449,17 @@ def list_bucket_objects(bucket_name: str):
except StorageError:
versioning_enabled = False
# Pre-compute URL templates once (not per-object) for performance
# Frontend will construct actual URLs by replacing KEY_PLACEHOLDER
preview_template = url_for("ui.object_preview", bucket_name=bucket_name, object_key="KEY_PLACEHOLDER")
delete_template = url_for("ui.delete_object", bucket_name=bucket_name, object_key="KEY_PLACEHOLDER")
presign_template = url_for("ui.object_presign", bucket_name=bucket_name, object_key="KEY_PLACEHOLDER")
versions_template = url_for("ui.object_versions", bucket_name=bucket_name, object_key="KEY_PLACEHOLDER")
restore_template = url_for("ui.restore_object_version", bucket_name=bucket_name, object_key="KEY_PLACEHOLDER", version_id="VERSION_ID_PLACEHOLDER")
tags_template = url_for("ui.object_tags", bucket_name=bucket_name, object_key="KEY_PLACEHOLDER")
copy_template = url_for("ui.copy_object", bucket_name=bucket_name, object_key="KEY_PLACEHOLDER")
move_template = url_for("ui.move_object", bucket_name=bucket_name, object_key="KEY_PLACEHOLDER")
objects_data = []
for obj in result.objects:
objects_data.append({
@@ -442,13 +468,6 @@ def list_bucket_objects(bucket_name: str):
"last_modified": obj.last_modified.isoformat(),
"last_modified_display": obj.last_modified.strftime("%b %d, %Y %H:%M"),
"etag": obj.etag,
"metadata": obj.metadata or {},
"preview_url": url_for("ui.object_preview", bucket_name=bucket_name, object_key=obj.key),
"download_url": url_for("ui.object_preview", bucket_name=bucket_name, object_key=obj.key) + "?download=1",
"presign_endpoint": url_for("ui.object_presign", bucket_name=bucket_name, object_key=obj.key),
"delete_endpoint": url_for("ui.delete_object", bucket_name=bucket_name, object_key=obj.key),
"versions_endpoint": url_for("ui.object_versions", bucket_name=bucket_name, object_key=obj.key),
"restore_template": url_for("ui.restore_object_version", bucket_name=bucket_name, object_key=obj.key, version_id="VERSION_ID_PLACEHOLDER"),
})
return jsonify({
@@ -457,6 +476,17 @@ def list_bucket_objects(bucket_name: str):
"next_continuation_token": result.next_continuation_token,
"total_count": result.total_count,
"versioning_enabled": versioning_enabled,
"url_templates": {
"preview": preview_template,
"download": preview_template + "?download=1",
"presign": presign_template,
"delete": delete_template,
"versions": versions_template,
"restore": restore_template,
"tags": tags_template,
"copy": copy_template,
"move": move_template,
},
})
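For reference, a client consuming this endpoint can expand the shared templates itself. A small sketch, assuming the frontend percent-encodes keys before substitution (the restore template additionally carries VERSION_ID_PLACEHOLDER):

from urllib.parse import quote

def object_urls(payload: dict, object_key: str) -> dict:
    # payload is the JSON body returned above; one substitution per template.
    encoded = quote(object_key, safe="")
    return {
        name: template.replace("KEY_PLACEHOLDER", encoded)
        for name, template in payload["url_templates"].items()
    }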
@@ -1458,11 +1488,17 @@ def update_bucket_replication(bucket_name: str):
else:
flash("No replication configuration to pause", "warning")
elif action == "resume":
from .replication import REPLICATION_MODE_ALL
rule = _replication().get_rule(bucket_name)
if rule:
rule.enabled = True
_replication().set_rule(rule)
flash("Replication resumed", "success")
# When resuming, sync any pending objects that accumulated while paused
if rule.mode == REPLICATION_MODE_ALL:
_replication().replicate_existing_objects(bucket_name)
flash("Replication resumed. Syncing pending objects in background.", "success")
else:
flash("Replication resumed", "success")
else:
flash("No replication configuration to resume", "warning")
elif action == "create":
@@ -1651,6 +1687,327 @@ def metrics_dashboard():
)
@ui_bp.route("/buckets/<bucket_name>/lifecycle", methods=["GET", "POST", "DELETE"])
def bucket_lifecycle(bucket_name: str):
principal = _current_principal()
try:
_authorize_ui(principal, bucket_name, "policy")
except IamError as exc:
return jsonify({"error": str(exc)}), 403
storage = _storage()
if not storage.bucket_exists(bucket_name):
return jsonify({"error": "Bucket does not exist"}), 404
if request.method == "GET":
rules = storage.get_bucket_lifecycle(bucket_name) or []
return jsonify({"rules": rules})
if request.method == "DELETE":
storage.set_bucket_lifecycle(bucket_name, None)
return jsonify({"status": "ok", "message": "Lifecycle configuration deleted"})
payload = request.get_json(silent=True) or {}
rules = payload.get("rules", [])
if not isinstance(rules, list):
return jsonify({"error": "rules must be a list"}), 400
validated_rules = []
for i, rule in enumerate(rules):
if not isinstance(rule, dict):
return jsonify({"error": f"Rule {i} must be an object"}), 400
validated = {
"ID": str(rule.get("ID", f"rule-{i+1}")),
"Status": "Enabled" if rule.get("Status", "Enabled") == "Enabled" else "Disabled",
}
if rule.get("Prefix"):
validated["Prefix"] = str(rule["Prefix"])
if rule.get("Expiration"):
exp = rule["Expiration"]
if isinstance(exp, dict) and exp.get("Days"):
validated["Expiration"] = {"Days": int(exp["Days"])}
if rule.get("NoncurrentVersionExpiration"):
nve = rule["NoncurrentVersionExpiration"]
if isinstance(nve, dict) and nve.get("NoncurrentDays"):
validated["NoncurrentVersionExpiration"] = {"NoncurrentDays": int(nve["NoncurrentDays"])}
if rule.get("AbortIncompleteMultipartUpload"):
aimu = rule["AbortIncompleteMultipartUpload"]
if isinstance(aimu, dict) and aimu.get("DaysAfterInitiation"):
validated["AbortIncompleteMultipartUpload"] = {"DaysAfterInitiation": int(aimu["DaysAfterInitiation"])}
validated_rules.append(validated)
storage.set_bucket_lifecycle(bucket_name, validated_rules if validated_rules else None)
return jsonify({"status": "ok", "message": "Lifecycle configuration saved", "rules": validated_rules})
@ui_bp.route("/buckets/<bucket_name>/cors", methods=["GET", "POST", "DELETE"])
def bucket_cors(bucket_name: str):
principal = _current_principal()
try:
_authorize_ui(principal, bucket_name, "policy")
except IamError as exc:
return jsonify({"error": str(exc)}), 403
storage = _storage()
if not storage.bucket_exists(bucket_name):
return jsonify({"error": "Bucket does not exist"}), 404
if request.method == "GET":
rules = storage.get_bucket_cors(bucket_name) or []
return jsonify({"rules": rules})
if request.method == "DELETE":
storage.set_bucket_cors(bucket_name, None)
return jsonify({"status": "ok", "message": "CORS configuration deleted"})
payload = request.get_json(silent=True) or {}
rules = payload.get("rules", [])
if not isinstance(rules, list):
return jsonify({"error": "rules must be a list"}), 400
validated_rules = []
for i, rule in enumerate(rules):
if not isinstance(rule, dict):
return jsonify({"error": f"Rule {i} must be an object"}), 400
origins = rule.get("AllowedOrigins", [])
methods = rule.get("AllowedMethods", [])
if not origins or not methods:
return jsonify({"error": f"Rule {i} must have AllowedOrigins and AllowedMethods"}), 400
validated = {
"AllowedOrigins": [str(o) for o in origins if o],
"AllowedMethods": [str(m).upper() for m in methods if m],
}
if rule.get("AllowedHeaders"):
validated["AllowedHeaders"] = [str(h) for h in rule["AllowedHeaders"] if h]
if rule.get("ExposeHeaders"):
validated["ExposeHeaders"] = [str(h) for h in rule["ExposeHeaders"] if h]
if rule.get("MaxAgeSeconds") is not None:
try:
validated["MaxAgeSeconds"] = int(rule["MaxAgeSeconds"])
except (ValueError, TypeError):
pass
validated_rules.append(validated)
storage.set_bucket_cors(bucket_name, validated_rules if validated_rules else None)
return jsonify({"status": "ok", "message": "CORS configuration saved", "rules": validated_rules})
@ui_bp.route("/buckets/<bucket_name>/acl", methods=["GET", "POST"])
def bucket_acl(bucket_name: str):
principal = _current_principal()
action = "read" if request.method == "GET" else "write"
try:
_authorize_ui(principal, bucket_name, action)
except IamError as exc:
return jsonify({"error": str(exc)}), 403
storage = _storage()
if not storage.bucket_exists(bucket_name):
return jsonify({"error": "Bucket does not exist"}), 404
acl_service = _acl()
owner_id = principal.access_key if principal else "anonymous"
if request.method == "GET":
try:
acl = acl_service.get_bucket_acl(bucket_name)
if not acl:
acl = create_canned_acl("private", owner_id)
return jsonify({
"owner": acl.owner,
"grants": [g.to_dict() for g in acl.grants],
"canned_acls": list(CANNED_ACLS.keys()),
})
except Exception as exc:
return jsonify({"error": str(exc)}), 500
payload = request.get_json(silent=True) or {}
canned_acl = payload.get("canned_acl")
if canned_acl:
if canned_acl not in CANNED_ACLS:
return jsonify({"error": f"Invalid canned ACL: {canned_acl}"}), 400
acl_service.set_bucket_canned_acl(bucket_name, canned_acl, owner_id)
return jsonify({"status": "ok", "message": f"ACL set to {canned_acl}"})
return jsonify({"error": "canned_acl is required"}), 400
@ui_bp.route("/buckets/<bucket_name>/objects/<path:object_key>/tags", methods=["GET", "POST"])
def object_tags(bucket_name: str, object_key: str):
principal = _current_principal()
try:
_authorize_ui(principal, bucket_name, "read", object_key=object_key)
except IamError as exc:
return jsonify({"error": str(exc)}), 403
storage = _storage()
if request.method == "GET":
try:
tags = storage.get_object_tags(bucket_name, object_key)
return jsonify({"tags": tags})
except StorageError as exc:
return jsonify({"error": str(exc)}), 404
try:
_authorize_ui(principal, bucket_name, "write", object_key=object_key)
except IamError as exc:
return jsonify({"error": str(exc)}), 403
payload = request.get_json(silent=True) or {}
tags = payload.get("tags", [])
if not isinstance(tags, list):
return jsonify({"error": "tags must be a list"}), 400
if len(tags) > 10:
return jsonify({"error": "Maximum 10 tags allowed"}), 400
validated_tags = []
for tag in tags:
if isinstance(tag, dict) and tag.get("Key"):
validated_tags.append({
"Key": str(tag["Key"]),
"Value": str(tag.get("Value", ""))
})
try:
storage.set_object_tags(bucket_name, object_key, validated_tags if validated_tags else None)
return jsonify({"status": "ok", "message": "Tags saved", "tags": validated_tags})
except StorageError as exc:
return jsonify({"error": str(exc)}), 400
@ui_bp.post("/buckets/<bucket_name>/folders")
def create_folder(bucket_name: str):
principal = _current_principal()
try:
_authorize_ui(principal, bucket_name, "write")
except IamError as exc:
return jsonify({"error": str(exc)}), 403
payload = request.get_json(silent=True) or {}
folder_name = str(payload.get("folder_name", "")).strip()
prefix = str(payload.get("prefix", "")).strip()
if not folder_name:
return jsonify({"error": "folder_name is required"}), 400
folder_name = folder_name.rstrip("/")
if "/" in folder_name:
return jsonify({"error": "Folder name cannot contain /"}), 400
folder_key = f"{prefix}{folder_name}/" if prefix else f"{folder_name}/"
import io
try:
_storage().put_object(bucket_name, folder_key, io.BytesIO(b""))
return jsonify({"status": "ok", "message": f"Folder '{folder_name}' created", "key": folder_key})
except StorageError as exc:
return jsonify({"error": str(exc)}), 400
@ui_bp.post("/buckets/<bucket_name>/objects/<path:object_key>/copy")
def copy_object(bucket_name: str, object_key: str):
principal = _current_principal()
try:
_authorize_ui(principal, bucket_name, "read", object_key=object_key)
except IamError as exc:
return jsonify({"error": str(exc)}), 403
payload = request.get_json(silent=True) or {}
dest_bucket = str(payload.get("dest_bucket", bucket_name)).strip()
dest_key = str(payload.get("dest_key", "")).strip()
if not dest_key:
return jsonify({"error": "dest_key is required"}), 400
try:
_authorize_ui(principal, dest_bucket, "write", object_key=dest_key)
except IamError as exc:
return jsonify({"error": str(exc)}), 403
storage = _storage()
try:
source_path = storage.get_object_path(bucket_name, object_key)
source_metadata = storage.get_object_metadata(bucket_name, object_key)
except StorageError as exc:
return jsonify({"error": str(exc)}), 404
try:
with source_path.open("rb") as stream:
storage.put_object(dest_bucket, dest_key, stream, metadata=source_metadata or None)
return jsonify({
"status": "ok",
"message": f"Copied to {dest_bucket}/{dest_key}",
"dest_bucket": dest_bucket,
"dest_key": dest_key,
})
except StorageError as exc:
return jsonify({"error": str(exc)}), 400
@ui_bp.post("/buckets/<bucket_name>/objects/<path:object_key>/move")
def move_object(bucket_name: str, object_key: str):
principal = _current_principal()
try:
_authorize_ui(principal, bucket_name, "read", object_key=object_key)
_authorize_ui(principal, bucket_name, "delete", object_key=object_key)
except IamError as exc:
return jsonify({"error": str(exc)}), 403
payload = request.get_json(silent=True) or {}
dest_bucket = str(payload.get("dest_bucket", bucket_name)).strip()
dest_key = str(payload.get("dest_key", "")).strip()
if not dest_key:
return jsonify({"error": "dest_key is required"}), 400
if dest_bucket == bucket_name and dest_key == object_key:
return jsonify({"error": "Cannot move object to the same location"}), 400
try:
_authorize_ui(principal, dest_bucket, "write", object_key=dest_key)
except IamError as exc:
return jsonify({"error": str(exc)}), 403
storage = _storage()
try:
source_path = storage.get_object_path(bucket_name, object_key)
source_metadata = storage.get_object_metadata(bucket_name, object_key)
except StorageError as exc:
return jsonify({"error": str(exc)}), 404
try:
import io
with source_path.open("rb") as f:
data = f.read()
storage.put_object(dest_bucket, dest_key, io.BytesIO(data), metadata=source_metadata or None)
storage.delete_object(bucket_name, object_key)
return jsonify({
"status": "ok",
"message": f"Moved to {dest_bucket}/{dest_key}",
"dest_bucket": dest_bucket,
"dest_key": dest_key,
})
except StorageError as exc:
return jsonify({"error": str(exc)}), 400
@ui_bp.get("/buckets/<bucket_name>/list-for-copy")
def list_buckets_for_copy(bucket_name: str):
principal = _current_principal()
buckets = _storage().list_buckets()
allowed = []
for bucket in buckets:
try:
_authorize_ui(principal, bucket.name, "write")
allowed.append(bucket.name)
except IamError:
pass
return jsonify({"buckets": allowed})
@ui_bp.app_errorhandler(404)
def ui_not_found(error): # type: ignore[override]
prefix = ui_bp.url_prefix or ""

View File

@@ -1,7 +1,6 @@
"""Central location for the application version string."""
from __future__ import annotations
APP_VERSION = "0.1.8"
APP_VERSION = "0.2.0"
def get_version() -> str:

View File

@@ -362,6 +362,68 @@ code {
color: #2563eb;
}
.docs-sidebar-mobile {
border-radius: 0.75rem;
border: 1px solid var(--myfsio-card-border);
}
.docs-sidebar-mobile .docs-toc {
display: flex;
flex-wrap: wrap;
gap: 0.5rem 1rem;
padding-top: 0.5rem;
}
.docs-sidebar-mobile .docs-toc li {
flex: 1 0 45%;
}
.min-width-0 {
min-width: 0;
}
/* Ensure pre blocks don't overflow on mobile */
.alert pre {
max-width: 100%;
overflow-x: auto;
-webkit-overflow-scrolling: touch;
}
/* IAM User Cards */
.iam-user-card {
border: 1px solid var(--myfsio-card-border);
border-radius: 0.75rem;
transition: box-shadow 0.2s ease, transform 0.2s ease;
}
.iam-user-card:hover {
box-shadow: 0 4px 12px rgba(0, 0, 0, 0.1);
}
[data-theme='dark'] .iam-user-card:hover {
box-shadow: 0 4px 12px rgba(0, 0, 0, 0.3);
}
.user-avatar-lg {
width: 48px;
height: 48px;
border-radius: 12px;
}
.btn-icon {
padding: 0.25rem;
line-height: 1;
border: none;
background: transparent;
color: var(--myfsio-muted);
border-radius: 0.375rem;
}
.btn-icon:hover {
background: var(--myfsio-hover-bg);
color: var(--myfsio-text);
}
.badge {
font-weight: 500;
padding: 0.35em 0.65em;
@@ -1035,6 +1097,9 @@ pre code {
.modal-body {
padding: 1.5rem;
overflow-wrap: break-word;
word-wrap: break-word;
word-break: break-word;
}
.modal-footer {
@@ -1688,3 +1753,67 @@ body.theme-transitioning * {
border: 2px solid transparent;
background: linear-gradient(var(--myfsio-card-bg), var(--myfsio-card-bg)) padding-box, linear-gradient(135deg, #3b82f6, #8b5cf6) border-box;
}
#objects-table .dropdown-menu {
position: fixed !important;
z-index: 1050;
}
.objects-header-responsive {
display: flex;
flex-wrap: wrap;
gap: 0.5rem;
align-items: center;
}
.objects-header-responsive > .header-title {
flex: 0 0 auto;
}
.objects-header-responsive > .header-actions {
display: flex;
flex-wrap: wrap;
gap: 0.5rem;
align-items: center;
flex: 1;
}
@media (max-width: 640px) {
.objects-header-responsive {
flex-direction: column;
align-items: stretch;
}
.objects-header-responsive > .header-title {
margin-bottom: 0.5rem;
}
.objects-header-responsive > .header-actions {
display: grid;
grid-template-columns: 1fr 1fr;
gap: 0.5rem;
}
.objects-header-responsive > .header-actions .btn {
justify-content: center;
}
.objects-header-responsive > .header-actions .search-wrapper {
grid-column: span 2;
}
.objects-header-responsive > .header-actions .search-wrapper input {
max-width: 100% !important;
width: 100%;
}
.objects-header-responsive > .header-actions .bulk-actions {
grid-column: span 2;
display: flex;
gap: 0.5rem;
}
.objects-header-responsive > .header-actions .bulk-actions .btn {
flex: 1;
}
}

Binary file not shown. (removed image, 200 KiB)

Binary file not shown. (removed image, 628 KiB)

BIN  static/images/MyFSIO.ico  (new file, 200 KiB, binary not shown)

BIN  static/images/MyFSIO.png  (new file, 872 KiB, binary not shown)

View File

@@ -5,8 +5,8 @@
<meta name="viewport" content="width=device-width, initial-scale=1" />
{% if principal %}<meta name="csrf-token" content="{{ csrf_token() }}" />{% endif %}
<title>MyFSIO Console</title>
<link rel="icon" type="image/png" href="{{ url_for('static', filename='images/MyFISO.png') }}" />
<link rel="icon" type="image/x-icon" href="{{ url_for('static', filename='images/MyFISO.ico') }}" />
<link rel="icon" type="image/png" href="{{ url_for('static', filename='images/MyFSIO.png') }}" />
<link rel="icon" type="image/x-icon" href="{{ url_for('static', filename='images/MyFSIO.ico') }}" />
<link
href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.2/dist/css/bootstrap.min.css"
rel="stylesheet"
@@ -33,7 +33,7 @@
<div class="container-fluid">
<a class="navbar-brand fw-semibold" href="{{ url_for('ui.buckets_overview') }}">
<img
src="{{ url_for('static', filename='images/MyFISO.png') }}"
src="{{ url_for('static', filename='images/MyFSIO.png') }}"
alt="MyFSIO logo"
class="myfsio-logo"
width="32"

File diff suppressed because it is too large.

View File

@@ -46,8 +46,7 @@
<div class="d-flex align-items-center gap-3">
<div class="bucket-icon">
<svg xmlns="http://www.w3.org/2000/svg" width="22" height="22" fill="currentColor" viewBox="0 0 16 16">
<path d="M4.5 5a.5.5 0 1 0 0-1 .5.5 0 0 0 0 1zM3 4.5a.5.5 0 1 1-1 0 .5.5 0 0 1 1 0z"/>
<path d="M0 4a2 2 0 0 1 2-2h12a2 2 0 0 1 2 2v1a2 2 0 0 1-2 2H8.5v3a1.5 1.5 0 0 1 1.5 1.5H11a.5.5 0 0 1 0 1h-1v1h1a.5.5 0 0 1 0 1h-1v1a.5.5 0 0 1-1 0v-1H6v1a.5.5 0 0 1-1 0v-1H4a.5.5 0 0 1 0-1h1v-1H4a.5.5 0 0 1 0-1h1.5A1.5 1.5 0 0 1 7 10.5V7H2a2 2 0 0 1-2-2V4zm1 0v1a1 1 0 0 0 1 1h12a1 1 0 0 0 1-1V4a1 1 0 0 0-1-1H2a1 1 0 0 0-1 1zm5 7.5v1h3v-1a.5.5 0 0 0-.5-.5h-2a.5.5 0 0 0-.5.5z"/>
<path d="M2.522 5H2a.5.5 0 0 0-.494.574l1.372 9.149A1.5 1.5 0 0 0 4.36 16h7.278a1.5 1.5 0 0 0 1.483-1.277l1.373-9.149A.5.5 0 0 0 14 5h-.522A5.5 5.5 0 0 0 2.522 5zm1.005 0a4.5 4.5 0 0 1 8.945 0H3.527z"/>
</svg>
</div>
<div>
@@ -134,7 +133,7 @@
const searchInput = document.getElementById('bucket-search');
const bucketItems = document.querySelectorAll('.bucket-item');
const noBucketsMsg = document.querySelector('.text-center.py-5'); // The "No buckets found" empty state
const noBucketsMsg = document.querySelector('.text-center.py-5');
if (searchInput) {
searchInput.addEventListener('input', (e) => {

View File

@@ -8,8 +8,8 @@
<p class="text-uppercase text-muted small mb-1">Replication</p>
<h1 class="h3 mb-1 d-flex align-items-center gap-2">
<svg xmlns="http://www.w3.org/2000/svg" width="28" height="28" fill="currentColor" class="text-primary" viewBox="0 0 16 16">
<path d="M4.5 5a.5.5 0 1 0 0-1 .5.5 0 0 0 0 1zM3 4.5a.5.5 0 1 1-1 0 .5.5 0 0 1 1 0z"/>
<path d="M0 4a2 2 0 0 1 2-2h12a2 2 0 0 1 2 2v1a2 2 0 0 1-2 2H8.5v3a1.5 1.5 0 0 1 1.5 1.5H12a.5.5 0 0 1 0 1H4a.5.5 0 0 1 0-1h2A1.5 1.5 0 0 1 7.5 10V7H2a2 2 0 0 1-2-2V4zm1 0v1a1 1 0 0 0 1 1h12a1 1 0 0 0 1-1V4a1 1 0 0 0-1-1H2a1 1 0 0 0-1 1z"/>
<path d="M4.406 3.342A5.53 5.53 0 0 1 8 2c2.69 0 4.923 2 5.166 4.579C14.758 6.804 16 8.137 16 9.773 16 11.569 14.502 13 12.687 13H3.781C1.708 13 0 11.366 0 9.318c0-1.763 1.266-3.223 2.942-3.593.143-.863.698-1.723 1.464-2.383z"/>
<path d="M10.232 8.768l.546-.353a.25.25 0 0 0 0-.418l-.546-.354a.25.25 0 0 1-.116-.21V6.25a.25.25 0 0 0-.25-.25h-.5a.25.25 0 0 0-.25.25v1.183a.25.25 0 0 1-.116.21l-.546.354a.25.25 0 0 0 0 .418l.546.353a.25.25 0 0 1 .116.21v1.183a.25.25 0 0 0 .25.25h.5a.25.25 0 0 0 .25-.25V8.978a.25.25 0 0 1 .116-.21z"/>
</svg>
Remote Connections
</h1>
@@ -124,8 +124,7 @@
<div class="d-flex align-items-center gap-2">
<div class="connection-icon">
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" viewBox="0 0 16 16">
<path d="M4.5 5a.5.5 0 1 0 0-1 .5.5 0 0 0 0 1zM3 4.5a.5.5 0 1 1-1 0 .5.5 0 0 1 1 0z"/>
<path d="M0 4a2 2 0 0 1 2-2h12a2 2 0 0 1 2 2v1a2 2 0 0 1-2 2H8.5v3a1.5 1.5 0 0 1 1.5 1.5H12a.5.5 0 0 1 0 1H4a.5.5 0 0 1 0-1h2A1.5 1.5 0 0 1 7.5 10V7H2a2 2 0 0 1-2-2V4zm1 0v1a1 1 0 0 0 1 1h12a1 1 0 0 0 1-1V4a1 1 0 0 0-1-1H2a1 1 0 0 0-1 1z"/>
<path d="M4.406 3.342A5.53 5.53 0 0 1 8 2c2.69 0 4.923 2 5.166 4.579C14.758 6.804 16 8.137 16 9.773 16 11.569 14.502 13 12.687 13H3.781C1.708 13 0 11.366 0 9.318c0-1.763 1.266-3.223 2.942-3.593.143-.863.698-1.723 1.464-2.383z"/>
</svg>
</div>
<span class="fw-medium">{{ conn.name }}</span>
@@ -174,8 +173,7 @@
<div class="empty-state text-center py-5">
<div class="empty-state-icon mx-auto mb-3">
<svg xmlns="http://www.w3.org/2000/svg" width="48" height="48" fill="currentColor" viewBox="0 0 16 16">
<path d="M4.5 5a.5.5 0 1 0 0-1 .5.5 0 0 0 0 1zM3 4.5a.5.5 0 1 1-1 0 .5.5 0 0 1 1 0z"/>
<path d="M0 4a2 2 0 0 1 2-2h12a2 2 0 0 1 2 2v1a2 2 0 0 1-2 2H8.5v3a1.5 1.5 0 0 1 1.5 1.5H12a.5.5 0 0 1 0 1H4a.5.5 0 0 1 0-1h2A1.5 1.5 0 0 1 7.5 10V7H2a2 2 0 0 1-2-2V4zm1 0v1a1 1 0 0 0 1 1h12a1 1 0 0 0 1-1V4a1 1 0 0 0-1-1H2a1 1 0 0 0-1 1z"/>
<path d="M4.406 3.342A5.53 5.53 0 0 1 8 2c2.69 0 4.923 2 5.166 4.579C14.758 6.804 16 8.137 16 9.773 16 11.569 14.502 13 12.687 13H3.781C1.708 13 0 11.366 0 9.318c0-1.763 1.266-3.223 2.942-3.593.143-.863.698-1.723 1.464-2.383z"/>
</svg>
</div>
<h5 class="fw-semibold mb-2">No connections yet</h5>
@@ -309,7 +307,6 @@
resultDiv.innerHTML = '<div class="text-info"><span class="spinner-border spinner-border-sm" role="status" aria-hidden="true"></span> Testing connection...</div>';
// Use AbortController to timeout client-side after 20 seconds
const controller = new AbortController();
const timeoutId = setTimeout(() => controller.abort(), 20000);
@@ -396,8 +393,6 @@
form.action = "{{ url_for('ui.delete_connection', connection_id='CONN_ID') }}".replace('CONN_ID', id);
});
// Check connection health for each connection in the table
// Uses staggered requests to avoid overwhelming the server
async function checkConnectionHealth(connectionId, statusEl) {
try {
const controller = new AbortController();
@@ -434,13 +429,11 @@
}
}
// Stagger health checks to avoid all requests at once
const connectionRows = document.querySelectorAll('tr[data-connection-id]');
connectionRows.forEach((row, index) => {
const connectionId = row.getAttribute('data-connection-id');
const statusEl = row.querySelector('.connection-status');
if (statusEl) {
// Stagger requests by 200ms each
setTimeout(() => checkConnectionHealth(connectionId, statusEl), index * 200);
}
});

View File

@@ -14,6 +14,36 @@
</div>
</section>
<div class="row g-4">
<div class="col-12 d-xl-none">
<div class="card shadow-sm docs-sidebar-mobile mb-0">
<div class="card-body py-3">
<div class="d-flex align-items-center justify-content-between mb-2">
<h3 class="h6 text-uppercase text-muted mb-0">On this page</h3>
<button class="btn btn-sm btn-outline-secondary" type="button" data-bs-toggle="collapse" data-bs-target="#mobileDocsToc" aria-expanded="false" aria-controls="mobileDocsToc">
<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" viewBox="0 0 16 16">
<path fill-rule="evenodd" d="M1.646 4.646a.5.5 0 0 1 .708 0L8 10.293l5.646-5.647a.5.5 0 0 1 .708.708l-6 6a.5.5 0 0 1-.708 0l-6-6a.5.5 0 0 1 0-.708z"/>
</svg>
</button>
</div>
<div class="collapse" id="mobileDocsToc">
<ul class="list-unstyled docs-toc mb-0 small">
<li><a href="#setup">Set up &amp; run</a></li>
<li><a href="#background">Running in background</a></li>
<li><a href="#auth">Authentication &amp; IAM</a></li>
<li><a href="#console">Console tour</a></li>
<li><a href="#automation">Automation / CLI</a></li>
<li><a href="#api">REST endpoints</a></li>
<li><a href="#examples">API Examples</a></li>
<li><a href="#replication">Site Replication</a></li>
<li><a href="#versioning">Object Versioning</a></li>
<li><a href="#quotas">Bucket Quotas</a></li>
<li><a href="#encryption">Encryption</a></li>
<li><a href="#troubleshooting">Troubleshooting</a></li>
</ul>
</div>
</div>
</div>
</div>
<div class="col-xl-8">
<article id="setup" class="card shadow-sm docs-section">
<div class="card-body">
@@ -407,10 +437,62 @@ curl -X POST {{ api_base }}/presign/demo/notes.txt \
<span class="docs-section-kicker">07</span>
<h2 class="h4 mb-0">API Examples</h2>
</div>
<p class="text-muted">Common operations using boto3.</p>
<p class="text-muted">Common operations using popular SDKs and tools.</p>
<h5 class="mt-4">Multipart Upload</h5>
<pre><code class="language-python">import boto3
<h3 class="h6 text-uppercase text-muted mt-4">Python (boto3)</h3>
<pre class="mb-4"><code class="language-python">import boto3
s3 = boto3.client(
's3',
endpoint_url='{{ api_base }}',
aws_access_key_id='&lt;access_key&gt;',
aws_secret_access_key='&lt;secret_key&gt;'
)
# List buckets
buckets = s3.list_buckets()['Buckets']
# Create bucket
s3.create_bucket(Bucket='mybucket')
# Upload file
s3.upload_file('local.txt', 'mybucket', 'remote.txt')
# Download file
s3.download_file('mybucket', 'remote.txt', 'downloaded.txt')
# Generate presigned URL (valid 1 hour)
url = s3.generate_presigned_url(
'get_object',
Params={'Bucket': 'mybucket', 'Key': 'remote.txt'},
ExpiresIn=3600
)</code></pre>
<h3 class="h6 text-uppercase text-muted mt-4">JavaScript (AWS SDK v3)</h3>
<pre class="mb-4"><code class="language-javascript">import { S3Client, ListBucketsCommand, PutObjectCommand } from '@aws-sdk/client-s3';
const s3 = new S3Client({
endpoint: '{{ api_base }}',
region: 'us-east-1',
credentials: {
accessKeyId: '&lt;access_key&gt;',
secretAccessKey: '&lt;secret_key&gt;'
},
forcePathStyle: true // Required for S3-compatible services
});
// List buckets
const { Buckets } = await s3.send(new ListBucketsCommand({}));
// Upload object
await s3.send(new PutObjectCommand({
Bucket: 'mybucket',
Key: 'hello.txt',
Body: 'Hello, World!'
}));</code></pre>
<h3 class="h6 text-uppercase text-muted mt-4">Multipart Upload (Python)</h3>
<pre class="mb-4"><code class="language-python">import boto3
s3 = boto3.client('s3', endpoint_url='{{ api_base }}')
@@ -418,9 +500,9 @@ s3 = boto3.client('s3', endpoint_url='{{ api_base }}')
response = s3.create_multipart_upload(Bucket='mybucket', Key='large.bin')
upload_id = response['UploadId']
# Upload parts
# Upload parts (minimum 5MB each, except last part)
parts = []
chunks = [b'chunk1', b'chunk2'] # Example data chunks
chunks = [b'chunk1...', b'chunk2...']
for part_number, chunk in enumerate(chunks, start=1):
response = s3.upload_part(
Bucket='mybucket',
@@ -438,6 +520,19 @@ s3.complete_multipart_upload(
UploadId=upload_id,
MultipartUpload={'Parts': parts}
)</code></pre>
<h3 class="h6 text-uppercase text-muted mt-4">Presigned URLs for Sharing</h3>
<pre class="mb-0"><code class="language-bash"># Generate a download link valid for 15 minutes
curl -X POST "{{ api_base }}/presign/mybucket/photo.jpg" \
-H "Content-Type: application/json" \
-H "X-Access-Key: &lt;key&gt;" -H "X-Secret-Key: &lt;secret&gt;" \
-d '{"method": "GET", "expires_in": 900}'
# Generate an upload link (PUT) valid for 1 hour
curl -X POST "{{ api_base }}/presign/mybucket/upload.bin" \
-H "Content-Type: application/json" \
-H "X-Access-Key: &lt;key&gt;" -H "X-Secret-Key: &lt;secret&gt;" \
-d '{"method": "PUT", "expires_in": 3600}'</code></pre>
</div>
</article>
<article id="replication" class="card shadow-sm docs-section">
@@ -461,15 +556,46 @@ s3.complete_multipart_upload(
</li>
</ol>
<div class="alert alert-light border mb-0">
<div class="d-flex gap-2">
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-terminal text-muted mt-1" viewBox="0 0 16 16">
<div class="alert alert-light border mb-3 overflow-hidden">
<div class="d-flex flex-column flex-sm-row gap-2 mb-2">
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-terminal text-muted mt-1 flex-shrink-0 d-none d-sm-block" viewBox="0 0 16 16">
<path d="M6 9a.5.5 0 0 1 .5-.5h3a.5.5 0 0 1 0 1h-3A.5.5 0 0 1 6 9zM3.854 4.146a.5.5 0 1 0-.708.708L4.793 6.5 3.146 8.146a.5.5 0 1 0 .708.708l2-2a.5.5 0 0 0 0-.708l-2-2z"/>
<path d="M2 1a2 2 0 0 0-2 2v10a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V3a2 2 0 0 0-2-2H2zm12 1a1 1 0 0 1 1 1v10a1 1 0 0 1-1 1H2a1 1 0 0 1-1-1V3a1 1 0 0 1 1-1h12z"/>
</svg>
<div>
<strong>Headless Target Setup?</strong>
<p class="small text-muted mb-0">If your target server has no UI, use the Python API directly to bootstrap credentials. See <code>docs.md</code> in the project root for the <code>setup_target.py</code> script.</p>
<div class="flex-grow-1 min-width-0">
<strong>Headless Target Setup</strong>
<p class="small text-muted mb-2">If your target server has no UI, create a <code>setup_target.py</code> script to bootstrap credentials:</p>
<pre class="mb-0 overflow-auto" style="max-width: 100%;"><code class="language-python"># setup_target.py
from pathlib import Path
from app.iam import IamService
from app.storage import ObjectStorage
# Initialize services (paths match default config)
data_dir = Path("data")
iam = IamService(data_dir / ".myfsio.sys" / "config" / "iam.json")
storage = ObjectStorage(data_dir)
# 1. Create the bucket
bucket_name = "backup-bucket"
try:
storage.create_bucket(bucket_name)
print(f"Bucket '{bucket_name}' created.")
except Exception as e:
print(f"Bucket creation skipped: {e}")
# 2. Create the user
try:
creds = iam.create_user(
display_name="Replication User",
policies=[{"bucket": bucket_name, "actions": ["write", "read", "list"]}]
)
print("\n--- CREDENTIALS GENERATED ---")
print(f"Access Key: {creds['access_key']}")
print(f"Secret Key: {creds['secret_key']}")
print("-----------------------------")
except Exception as e:
print(f"User creation failed: {e}")</code></pre>
<p class="small text-muted mt-2 mb-0">Save and run: <code>python setup_target.py</code></p>
</div>
</div>
</div>
@@ -487,6 +613,86 @@ s3.complete_multipart_upload(
</p>
</div>
</article>
<article id="versioning" class="card shadow-sm docs-section">
<div class="card-body">
<div class="d-flex align-items-center gap-2 mb-3">
<span class="docs-section-kicker">09</span>
<h2 class="h4 mb-0">Object Versioning</h2>
</div>
<p class="text-muted">Keep multiple versions of objects to protect against accidental deletions and overwrites. Restore previous versions at any time.</p>
<h3 class="h6 text-uppercase text-muted mt-4">Enabling Versioning</h3>
<ol class="docs-steps mb-3">
<li>Navigate to your bucket's <strong>Properties</strong> tab.</li>
<li>Find the <strong>Versioning</strong> card and click <strong>Enable</strong>.</li>
<li>All subsequent uploads will create new versions instead of overwriting.</li>
</ol>
<h3 class="h6 text-uppercase text-muted mt-4">Version Operations</h3>
<div class="table-responsive mb-3">
<table class="table table-sm table-bordered small">
<thead class="table-light">
<tr>
<th>Operation</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>View Versions</strong></td>
<td>Click the version icon on any object to see all historical versions with timestamps and sizes.</td>
</tr>
<tr>
<td><strong>Restore Version</strong></td>
<td>Click <strong>Restore</strong> on any version to make it the current version (creates a copy).</td>
</tr>
<tr>
<td><strong>Delete Current</strong></td>
<td>Deleting an object archives it. Previous versions remain accessible.</td>
</tr>
<tr>
<td><strong>Purge All</strong></td>
<td>Permanently delete an object and all its versions. This cannot be undone.</td>
</tr>
</tbody>
</table>
</div>
<h3 class="h6 text-uppercase text-muted mt-4">Archived Objects</h3>
<p class="small text-muted mb-3">When you delete a versioned object, it becomes "archived" - the current version is removed but historical versions remain. The <strong>Archived</strong> tab shows these objects so you can restore them.</p>
<h3 class="h6 text-uppercase text-muted mt-4">API Usage</h3>
<pre class="mb-3"><code class="language-bash"># Enable versioning
curl -X PUT "{{ api_base }}/&lt;bucket&gt;?versioning" \
-H "Content-Type: application/json" \
-H "X-Access-Key: &lt;key&gt;" -H "X-Secret-Key: &lt;secret&gt;" \
-d '{"Status": "Enabled"}'
# Get versioning status
curl "{{ api_base }}/&lt;bucket&gt;?versioning" \
-H "X-Access-Key: &lt;key&gt;" -H "X-Secret-Key: &lt;secret&gt;"
# List object versions
curl "{{ api_base }}/&lt;bucket&gt;?versions" \
-H "X-Access-Key: &lt;key&gt;" -H "X-Secret-Key: &lt;secret&gt;"
# Get specific version
curl "{{ api_base }}/&lt;bucket&gt;/&lt;key&gt;?versionId=&lt;version-id&gt;" \
-H "X-Access-Key: &lt;key&gt;" -H "X-Secret-Key: &lt;secret&gt;"</code></pre>
<div class="alert alert-light border mb-0">
<div class="d-flex gap-2">
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-info-circle text-muted mt-1" viewBox="0 0 16 16">
<path d="M8 15A7 7 0 1 1 8 1a7 7 0 0 1 0 14zm0 1A8 8 0 1 0 8 0a8 8 0 0 0 0 16z"/>
<path d="m8.93 6.588-2.29.287-.082.38.45.083c.294.07.352.176.288.469l-.738 3.468c-.194.897.105 1.319.808 1.319.545 0 1.178-.252 1.465-.598l.088-.416c-.2.176-.492.246-.686.246-.275 0-.375-.193-.304-.533L8.93 6.588zM9 4.5a1 1 0 1 1-2 0 1 1 0 0 1 2 0z"/>
</svg>
<div>
<strong>Storage Impact:</strong> Each version consumes storage. Enable quotas to limit total bucket size including all versions.
</div>
</div>
</div>
</div>
</article>
<article id="quotas" class="card shadow-sm docs-section">
<div class="card-body">
<div class="d-flex align-items-center gap-2 mb-3">
@@ -709,6 +915,7 @@ curl -X DELETE "{{ api_base }}/kms/keys/{key-id}?waiting_period_days=30" \
<li><a href="#api">REST endpoints</a></li>
<li><a href="#examples">API Examples</a></li>
<li><a href="#replication">Site Replication</a></li>
<li><a href="#versioning">Object Versioning</a></li>
<li><a href="#quotas">Bucket Quotas</a></li>
<li><a href="#encryption">Encryption</a></li>
<li><a href="#troubleshooting">Troubleshooting</a></li>

View File

@@ -10,6 +10,7 @@
</svg>
IAM Configuration
</h1>
<p class="text-muted mb-0 mt-1">Create and manage users with fine-grained bucket permissions.</p>
</div>
<div class="d-flex gap-2">
{% if not iam_locked %}
@@ -109,35 +110,68 @@
{% else %}
<div class="card-body px-4 pb-4">
{% if users %}
<div class="table-responsive">
<table class="table table-hover align-middle mb-0">
<thead class="table-light">
<tr>
<th scope="col">User</th>
<th scope="col">Policies</th>
<th scope="col" class="text-end">Actions</th>
</tr>
</thead>
<tbody>
{% for user in users %}
<tr>
<td>
<div class="row g-3">
{% for user in users %}
<div class="col-md-6 col-xl-4">
<div class="card h-100 iam-user-card">
<div class="card-body">
<div class="d-flex align-items-start justify-content-between mb-3">
<div class="d-flex align-items-center gap-3">
<div class="user-avatar">
<svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" fill="currentColor" viewBox="0 0 16 16">
<div class="user-avatar user-avatar-lg">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" fill="currentColor" viewBox="0 0 16 16">
<path d="M8 8a3 3 0 1 0 0-6 3 3 0 0 0 0 6zm2-3a2 2 0 1 1-4 0 2 2 0 0 1 4 0zm4 8c0 1-1 1-1 1H3s-1 0-1-1 1-4 6-4 6 3 6 4zm-1-.004c-.001-.246-.154-.986-.832-1.664C11.516 10.68 10.289 10 8 10c-2.29 0-3.516.68-4.168 1.332-.678.678-.83 1.418-.832 1.664h10z"/>
</svg>
</div>
<div>
<div class="fw-medium">{{ user.display_name }}</div>
<code class="small text-muted">{{ user.access_key }}</code>
<div class="min-width-0">
<h6 class="fw-semibold mb-0 text-truncate" title="{{ user.display_name }}">{{ user.display_name }}</h6>
<code class="small text-muted d-block text-truncate" title="{{ user.access_key }}">{{ user.access_key }}</code>
</div>
</div>
</td>
<td>
<div class="dropdown">
<button class="btn btn-sm btn-icon" type="button" data-bs-toggle="dropdown" aria-expanded="false">
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" viewBox="0 0 16 16">
<path d="M9.5 13a1.5 1.5 0 1 1-3 0 1.5 1.5 0 0 1 3 0zm0-5a1.5 1.5 0 1 1-3 0 1.5 1.5 0 0 1 3 0zm0-5a1.5 1.5 0 1 1-3 0 1.5 1.5 0 0 1 3 0z"/>
</svg>
</button>
<ul class="dropdown-menu dropdown-menu-end">
<li>
<button class="dropdown-item" type="button" data-edit-user="{{ user.access_key }}" data-display-name="{{ user.display_name }}">
<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-2" viewBox="0 0 16 16">
<path d="M12.146.146a.5.5 0 0 1 .708 0l3 3a.5.5 0 0 1 0 .708l-10 10a.5.5 0 0 1-.168.11l-5 2a.5.5 0 0 1-.65-.65l2-5a.5.5 0 0 1 .11-.168l10-10zM11.207 2.5 13.5 4.793 14.793 3.5 12.5 1.207 11.207 2.5zm1.586 3L10.5 3.207 4 9.707V10h.5a.5.5 0 0 1 .5.5v.5h.5a.5.5 0 0 1 .5.5v.5h.293l6.5-6.5z"/>
</svg>
Edit Name
</button>
</li>
<li>
<button class="dropdown-item" type="button" data-rotate-user="{{ user.access_key }}">
<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-2" viewBox="0 0 16 16">
<path d="M11.534 7h3.932a.25.25 0 0 1 .192.41l-1.966 2.36a.25.25 0 0 1-.384 0l-1.966-2.36a.25.25 0 0 1 .192-.41zm-11 2h3.932a.25.25 0 0 0 .192-.41L2.692 6.23a.25.25 0 0 0-.384 0L.342 8.59A.25.25 0 0 0 .534 9z"/>
<path fill-rule="evenodd" d="M8 3c-1.552 0-2.94.707-3.857 1.818a.5.5 0 1 1-.771-.636A6.002 6.002 0 0 1 13.917 7H12.9A5.002 5.002 0 0 0 8 3zM3.1 9a5.002 5.002 0 0 0 8.757 2.182.5.5 0 1 1 .771.636A6.002 6.002 0 0 1 2.083 9H3.1z"/>
</svg>
Rotate Secret
</button>
</li>
<li><hr class="dropdown-divider"></li>
<li>
<button class="dropdown-item text-danger" type="button" data-delete-user="{{ user.access_key }}">
<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-2" viewBox="0 0 16 16">
<path d="M5.5 5.5a.5.5 0 0 1 .5.5v6a.5.5 0 0 1-1 0v-6a.5.5 0 0 1 .5-.5zm2.5 0a.5.5 0 0 1 .5.5v6a.5.5 0 0 1-1 0v-6a.5.5 0 0 1 .5-.5zm3 .5v6a.5.5 0 0 1-1 0v-6a.5.5 0 0 1 1 0z"/>
<path fill-rule="evenodd" d="M14.5 3a1 1 0 0 1-1 1H13v9a2 2 0 0 1-2 2H5a2 2 0 0 1-2-2V4h-.5a1 1 0 0 1-1-1V2a1 1 0 0 1 1-1H6a1 1 0 0 1 1-1h2a1 1 0 0 1 1 1h3.5a1 1 0 0 1 1 1v1zM4.118 4 4 4.059V13a1 1 0 0 0 1 1h6a1 1 0 0 0 1-1V4.059L11.882 4H4.118zM2.5 3V2h11v1h-11z"/>
</svg>
Delete User
</button>
</li>
</ul>
</div>
</div>
<div class="mb-3">
<div class="small text-muted mb-2">Bucket Permissions</div>
<div class="d-flex flex-wrap gap-1">
{% for policy in user.policies %}
<span class="badge bg-primary bg-opacity-10 text-primary">
<svg xmlns="http://www.w3.org/2000/svg" width="10" height="10" fill="currentColor" class="me-1" viewBox="0 0 16 16">
<path d="M2.522 5H2a.5.5 0 0 0-.494.574l1.372 9.149A1.5 1.5 0 0 0 4.36 16h7.278a1.5 1.5 0 0 0 1.483-1.277l1.373-9.149A.5.5 0 0 0 14 5h-.522A5.5 5.5 0 0 0 2.522 5zm1.005 0a4.5 4.5 0 0 1 8.945 0H3.527z"/>
</svg>
{{ policy.bucket }}
{% if '*' in policy.actions %}
<span class="opacity-75">(full)</span>
@@ -149,38 +183,18 @@
<span class="badge bg-secondary bg-opacity-10 text-secondary">No policies</span>
{% endfor %}
</div>
</td>
<td class="text-end">
<div class="btn-group btn-group-sm" role="group">
<button class="btn btn-outline-primary" type="button" data-rotate-user="{{ user.access_key }}" title="Rotate Secret">
<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" viewBox="0 0 16 16">
<path d="M11.534 7h3.932a.25.25 0 0 1 .192.41l-1.966 2.36a.25.25 0 0 1-.384 0l-1.966-2.36a.25.25 0 0 1 .192-.41zm-11 2h3.932a.25.25 0 0 0 .192-.41L2.692 6.23a.25.25 0 0 0-.384 0L.342 8.59A.25.25 0 0 0 .534 9z"/>
<path fill-rule="evenodd" d="M8 3c-1.552 0-2.94.707-3.857 1.818a.5.5 0 1 1-.771-.636A6.002 6.002 0 0 1 13.917 7H12.9A5.002 5.002 0 0 0 8 3zM3.1 9a5.002 5.002 0 0 0 8.757 2.182.5.5 0 1 1 .771.636A6.002 6.002 0 0 1 2.083 9H3.1z"/>
</svg>
</button>
<button class="btn btn-outline-secondary" type="button" data-edit-user="{{ user.access_key }}" data-display-name="{{ user.display_name }}" title="Edit User">
<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" viewBox="0 0 16 16">
<path d="M12.146.146a.5.5 0 0 1 .708 0l3 3a.5.5 0 0 1 0 .708l-10 10a.5.5 0 0 1-.168.11l-5 2a.5.5 0 0 1-.65-.65l2-5a.5.5 0 0 1 .11-.168l10-10zM11.207 2.5 13.5 4.793 14.793 3.5 12.5 1.207 11.207 2.5zm1.586 3L10.5 3.207 4 9.707V10h.5a.5.5 0 0 1 .5.5v.5h.5a.5.5 0 0 1 .5.5v.5h.293l6.5-6.5z"/>
</svg>
</button>
<button class="btn btn-outline-secondary" type="button" data-policy-editor data-access-key="{{ user.access_key }}" title="Edit Policies">
<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" viewBox="0 0 16 16">
<path d="M8 4.754a3.246 3.246 0 1 0 0 6.492 3.246 3.246 0 0 0 0-6.492zM5.754 8a2.246 2.246 0 1 1 4.492 0 2.246 2.246 0 0 1-4.492 0z"/>
<path d="M9.796 1.343c-.527-1.79-3.065-1.79-3.592 0l-.094.319a.873.873 0 0 1-1.255.52l-.292-.16c-1.64-.892-3.433.902-2.54 2.541l.159.292a.873.873 0 0 1-.52 1.255l-.319.094c-1.79.527-1.79 3.065 0 3.592l.319.094a.873.873 0 0 1 .52 1.255l-.16.292c-.892 1.64.901 3.434 2.541 2.54l.292-.159a.873.873 0 0 1 1.255.52l.094.319c.527 1.79 3.065 1.79 3.592 0l.094-.319a.873.873 0 0 1 1.255-.52l.292.16c1.64.893 3.434-.902 2.54-2.541l-.159-.292a.873.873 0 0 1 .52-1.255l.319-.094c1.79-.527 1.79-3.065 0-3.592l-.319-.094a.873.873 0 0 1-.52-1.255l.16-.292c.893-1.64-.902-3.433-2.541-2.54l-.292.159a.873.873 0 0 1-1.255-.52l-.094-.319z"/>
</svg>
</button>
<button class="btn btn-outline-danger" type="button" data-delete-user="{{ user.access_key }}" title="Delete User">
<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" viewBox="0 0 16 16">
<path d="M5.5 5.5a.5.5 0 0 1 .5.5v6a.5.5 0 0 1-1 0v-6a.5.5 0 0 1 .5-.5zm2.5 0a.5.5 0 0 1 .5.5v6a.5.5 0 0 1-1 0v-6a.5.5 0 0 1 .5-.5zm3 .5v6a.5.5 0 0 1-1 0v-6a.5.5 0 0 1 1 0z"/>
<path fill-rule="evenodd" d="M14.5 3a1 1 0 0 1-1 1H13v9a2 2 0 0 1-2 2H5a2 2 0 0 1-2-2V4h-.5a1 1 0 0 1-1-1V2a1 1 0 0 1 1-1H6a1 1 0 0 1 1-1h2a1 1 0 0 1 1 1h3.5a1 1 0 0 1 1 1v1zM4.118 4 4 4.059V13a1 1 0 0 0 1 1h6a1 1 0 0 0 1-1V4.059L11.882 4H4.118zM2.5 3V2h11v1h-11z"/>
</svg>
</button>
</div>
</td>
</tr>
{% endfor %}
</tbody>
</table>
</div>
<button class="btn btn-outline-primary btn-sm w-100" type="button" data-policy-editor data-access-key="{{ user.access_key }}">
<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-1" viewBox="0 0 16 16">
<path d="M8 4.754a3.246 3.246 0 1 0 0 6.492 3.246 3.246 0 0 0 0-6.492zM5.754 8a2.246 2.246 0 1 1 4.492 0 2.246 2.246 0 0 1-4.492 0z"/>
<path d="M9.796 1.343c-.527-1.79-3.065-1.79-3.592 0l-.094.319a.873.873 0 0 1-1.255.52l-.292-.16c-1.64-.892-3.433.902-2.54 2.541l.159.292a.873.873 0 0 1-.52 1.255l-.319.094c-1.79.527-1.79 3.065 0 3.592l.319.094a.873.873 0 0 1 .52 1.255l-.16.292c-.892 1.64.901 3.434 2.541 2.54l.292-.159a.873.873 0 0 1 1.255.52l.094.319c.527 1.79 3.065 1.79 3.592 0l.094-.319a.873.873 0 0 1 1.255-.52l.292.16c1.64.893 3.434-.902 2.54-2.541l-.159-.292a.873.873 0 0 1 .52-1.255l.319-.094c1.79-.527 1.79-3.065 0-3.592l-.319-.094a.873.873 0 0 1-.52-1.255l.16-.292c.893-1.64-.902-3.433-2.541-2.54l-.292.159a.873.873 0 0 1-1.255-.52l-.094-.319z"/>
</svg>
Manage Policies
</button>
</div>
</div>
</div>
{% endfor %}
</div>
{% else %}
<div class="empty-state text-center py-5">
@@ -442,6 +456,80 @@
{{ super() }}
<script>
(function () {
function setupJsonAutoIndent(textarea) {
if (!textarea) return;
textarea.addEventListener('keydown', function(e) {
if (e.key === 'Enter') {
e.preventDefault();
const start = this.selectionStart;
const end = this.selectionEnd;
const value = this.value;
const lineStart = value.lastIndexOf('\n', start - 1) + 1;
const currentLine = value.substring(lineStart, start);
const indentMatch = currentLine.match(/^(\s*)/);
let indent = indentMatch ? indentMatch[1] : '';
const trimmedLine = currentLine.trim();
const lastChar = trimmedLine.slice(-1);
const charBeforeCursor = value.substring(start - 1, start).trim();
let newIndent = indent;
let insertAfter = '';
if (lastChar === '{' || lastChar === '[') {
newIndent = indent + ' ';
const charAfterCursor = value.substring(start, start + 1).trim();
if ((lastChar === '{' && charAfterCursor === '}') ||
(lastChar === '[' && charAfterCursor === ']')) {
insertAfter = '\n' + indent;
}
} else if (lastChar === ',' || lastChar === ':') {
newIndent = indent;
}
const insertion = '\n' + newIndent + insertAfter;
const newValue = value.substring(0, start) + insertion + value.substring(end);
this.value = newValue;
const newCursorPos = start + 1 + newIndent.length;
this.selectionStart = this.selectionEnd = newCursorPos;
this.dispatchEvent(new Event('input', { bubbles: true }));
}
if (e.key === 'Tab') {
e.preventDefault();
const start = this.selectionStart;
const end = this.selectionEnd;
if (e.shiftKey) {
const lineStart = this.value.lastIndexOf('\n', start - 1) + 1;
const lineContent = this.value.substring(lineStart, start);
if (lineContent.startsWith(' ')) {
this.value = this.value.substring(0, lineStart) +
this.value.substring(lineStart + 2);
this.selectionStart = this.selectionEnd = Math.max(lineStart, start - 2);
}
} else {
this.value = this.value.substring(0, start) + ' ' + this.value.substring(end);
this.selectionStart = this.selectionEnd = start + 2;
}
this.dispatchEvent(new Event('input', { bubbles: true }));
}
});
}
setupJsonAutoIndent(document.getElementById('policyEditorDocument'));
setupJsonAutoIndent(document.getElementById('createUserPolicies'));
const currentUserKey = {{ principal.access_key | tojson }};
const configCopyButtons = document.querySelectorAll('.config-copy');
configCopyButtons.forEach((button) => {

View File

@@ -35,7 +35,7 @@
<div class="card shadow-lg login-card position-relative">
<div class="card-body p-4 p-md-5">
<div class="text-center mb-4 d-lg-none">
<img src="{{ url_for('static', filename='images/MyFISO.png') }}" alt="MyFSIO" width="48" height="48" class="mb-3 rounded-3">
<img src="{{ url_for('static', filename='images/MyFSIO.png') }}" alt="MyFSIO" width="48" height="48" class="mb-3 rounded-3">
<h2 class="h4 fw-bold">MyFSIO</h2>
</div>
<h2 class="h4 mb-1 d-none d-lg-block">Sign in</h2>

View File

@@ -219,24 +219,42 @@
</div>
<div class="col-lg-4">
<div class="card shadow-sm border-0 h-100 overflow-hidden" style="background: linear-gradient(135deg, #3b82f6 0%, #8b5cf6 100%);">
{% set has_issues = (cpu_percent > 80) or (memory.percent > 85) or (disk.percent > 90) %}
<div class="card shadow-sm border-0 h-100 overflow-hidden" style="background: linear-gradient(135deg, {% if has_issues %}#ef4444 0%, #f97316{% else %}#3b82f6 0%, #8b5cf6{% endif %} 100%);">
<div class="card-body p-4 d-flex flex-column justify-content-center text-white position-relative">
<div class="position-absolute top-0 end-0 opacity-25" style="transform: translate(20%, -20%);">
<svg xmlns="http://www.w3.org/2000/svg" width="160" height="160" fill="currentColor" class="bi bi-cloud-check" viewBox="0 0 16 16">
<svg xmlns="http://www.w3.org/2000/svg" width="160" height="160" fill="currentColor" class="bi bi-{% if has_issues %}exclamation-triangle{% else %}cloud-check{% endif %}" viewBox="0 0 16 16">
{% if has_issues %}
<path d="M7.938 2.016A.13.13 0 0 1 8.002 2a.13.13 0 0 1 .063.016.146.146 0 0 1 .054.057l6.857 11.667c.036.06.035.124.002.183a.163.163 0 0 1-.054.06.116.116 0 0 1-.066.017H1.146a.115.115 0 0 1-.066-.017.163.163 0 0 1-.054-.06.176.176 0 0 1 .002-.183L7.884 2.073a.147.147 0 0 1 .054-.057zm1.044-.45a1.13 1.13 0 0 0-1.96 0L.165 13.233c-.457.778.091 1.767.98 1.767h13.713c.889 0 1.438-.99.98-1.767L8.982 1.566z"/>
<path d="M7.002 12a1 1 0 1 1 2 0 1 1 0 0 1-2 0zM7.1 5.995a.905.905 0 1 1 1.8 0l-.35 3.507a.552.552 0 0 1-1.1 0L7.1 5.995z"/>
{% else %}
<path fill-rule="evenodd" d="M10.354 6.146a.5.5 0 0 1 0 .708l-3 3a.5.5 0 0 1-.708 0l-1.5-1.5a.5.5 0 1 1 .708-.708L7 8.793l2.646-2.647a.5.5 0 0 1 .708 0z"/>
<path d="M4.406 3.342A5.53 5.53 0 0 1 8 2c2.69 0 4.923 2 5.166 4.579C14.758 6.804 16 8.137 16 9.773 16 11.569 14.502 13 12.687 13H3.781C1.708 13 0 11.366 0 9.318c0-1.763 1.266-3.223 2.942-3.593.143-.863.698-1.723 1.464-2.383z"/>
{% endif %}
</svg>
</div>
<div class="mb-3">
<span class="badge bg-white text-primary fw-semibold px-3 py-2">
<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="bi bi-check-circle-fill me-1" viewBox="0 0 16 16">
<span class="badge bg-white {% if has_issues %}text-danger{% else %}text-primary{% endif %} fw-semibold px-3 py-2">
<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="bi bi-{% if has_issues %}exclamation-circle-fill{% else %}check-circle-fill{% endif %} me-1" viewBox="0 0 16 16">
{% if has_issues %}
<path d="M16 8A8 8 0 1 1 0 8a8 8 0 0 1 16 0zM8 4a.905.905 0 0 0-.9.995l.35 3.507a.552.552 0 0 0 1.1 0l.35-3.507A.905.905 0 0 0 8 4zm.002 6a1 1 0 1 0 0 2 1 1 0 0 0 0-2z"/>
{% else %}
<path d="M16 8A8 8 0 1 1 0 8a8 8 0 0 1 16 0zm-3.97-3.03a.75.75 0 0 0-1.08.022L7.477 9.417 5.384 7.323a.75.75 0 0 0-1.06 1.06L6.97 11.03a.75.75 0 0 0 1.079-.02l3.992-4.99a.75.75 0 0 0-.01-1.05z"/>
{% endif %}
</svg>
v{{ app.version }}
</span>
</div>
<h4 class="card-title fw-bold mb-3">System Status</h4>
<p class="card-text opacity-90 mb-4">All systems operational. Your storage infrastructure is running smoothly with no detected issues.</p>
<h4 class="card-title fw-bold mb-3">System Health</h4>
{% if has_issues %}
<ul class="list-unstyled small mb-4 opacity-90">
{% if cpu_percent > 80 %}<li class="mb-1">CPU usage is high ({{ cpu_percent }}%)</li>{% endif %}
{% if memory.percent > 85 %}<li class="mb-1">Memory usage is high ({{ memory.percent }}%)</li>{% endif %}
{% if disk.percent > 90 %}<li class="mb-1">Disk space is critically low ({{ disk.percent }}% used)</li>{% endif %}
</ul>
{% else %}
<p class="card-text opacity-90 mb-4 small">All resources are within normal operating parameters.</p>
{% endif %}
<div class="d-flex gap-4">
<div>
<div class="h3 fw-bold mb-0">{{ app.uptime_days }}d</div>

View File

@@ -8,8 +8,6 @@ def client(app):
@pytest.fixture
def auth_headers(app):
# Create a test user and return headers
# Using the user defined in conftest.py
return {
"X-Access-Key": "test",
"X-Secret-Key": "secret"
@@ -76,18 +74,15 @@ def test_multipart_upload_flow(client, auth_headers):
def test_abort_multipart_upload(client, auth_headers):
client.put("/abort-bucket", headers=auth_headers)
# Initiate
resp = client.post("/abort-bucket/file.txt?uploads", headers=auth_headers)
upload_id = fromstring(resp.data).find("UploadId").text
# Abort
resp = client.delete(f"/abort-bucket/file.txt?uploadId={upload_id}", headers=auth_headers)
assert resp.status_code == 204
# Try to upload part (should fail)
resp = client.put(
f"/abort-bucket/file.txt?partNumber=1&uploadId={upload_id}",
headers=auth_headers,
data=b"data"
)
assert resp.status_code == 404 # NoSuchUpload
assert resp.status_code == 404

View File

@@ -22,11 +22,10 @@ class TestLocalKeyEncryption:
key_path = tmp_path / "keys" / "master.key"
provider = LocalKeyEncryption(key_path)
# Access master key to trigger creation
key = provider.master_key
assert key_path.exists()
assert len(key) == 32 # 256-bit key
assert len(key) == 32
def test_load_existing_master_key(self, tmp_path):
"""Test loading an existing master key."""
@@ -50,7 +49,6 @@ class TestLocalKeyEncryption:
plaintext = b"Hello, World! This is a test message."
# Encrypt
result = provider.encrypt(plaintext)
assert result.ciphertext != plaintext
@@ -58,7 +56,6 @@ class TestLocalKeyEncryption:
assert len(result.nonce) == 12
assert len(result.encrypted_data_key) > 0
# Decrypt
decrypted = provider.decrypt(
result.ciphertext,
result.nonce,
@@ -80,11 +77,8 @@ class TestLocalKeyEncryption:
result1 = provider.encrypt(plaintext)
result2 = provider.encrypt(plaintext)
# Different encrypted data keys
assert result1.encrypted_data_key != result2.encrypted_data_key
# Different nonces
assert result1.nonce != result2.nonce
# Different ciphertexts
assert result1.ciphertext != result2.ciphertext
def test_generate_data_key(self, tmp_path):
@@ -97,9 +91,8 @@ class TestLocalKeyEncryption:
plaintext_key, encrypted_key = provider.generate_data_key()
assert len(plaintext_key) == 32
assert len(encrypted_key) > 32 # nonce + ciphertext + tag
assert len(encrypted_key) > 32
# Verify we can decrypt the key
decrypted_key = provider._decrypt_data_key(encrypted_key)
assert decrypted_key == plaintext_key
@@ -107,18 +100,15 @@ class TestLocalKeyEncryption:
"""Test that decryption fails with wrong master key."""
from app.encryption import LocalKeyEncryption, EncryptionError
# Create two providers with different keys
key_path1 = tmp_path / "master1.key"
key_path2 = tmp_path / "master2.key"
provider1 = LocalKeyEncryption(key_path1)
provider2 = LocalKeyEncryption(key_path2)
# Encrypt with provider1
plaintext = b"Secret message"
result = provider1.encrypt(plaintext)
# Try to decrypt with provider2
with pytest.raises(EncryptionError):
provider2.decrypt(
result.ciphertext,
@@ -196,18 +186,15 @@ class TestStreamingEncryptor:
provider = LocalKeyEncryption(key_path)
encryptor = StreamingEncryptor(provider, chunk_size=1024)
# Create test data
original_data = b"A" * 5000 + b"B" * 5000 + b"C" * 5000 # 15KB
original_data = b"A" * 5000 + b"B" * 5000 + b"C" * 5000
stream = io.BytesIO(original_data)
# Encrypt
encrypted_stream, metadata = encryptor.encrypt_stream(stream)
encrypted_data = encrypted_stream.read()
assert encrypted_data != original_data
assert metadata.algorithm == "AES256"
# Decrypt
encrypted_stream = io.BytesIO(encrypted_data)
decrypted_stream = encryptor.decrypt_stream(encrypted_stream, metadata)
decrypted_data = decrypted_stream.read()
@@ -319,7 +306,6 @@ class TestClientEncryptionHelper:
assert key_info["algorithm"] == "AES-256-GCM"
assert "created_at" in key_info
# Verify key is 256 bits
key = base64.b64decode(key_info["key"])
assert len(key) == 32
@@ -425,7 +411,6 @@ class TestKMSManager:
assert key is not None
assert key.key_id == "test-key"
# Non-existent key
assert kms.get_key("non-existent") is None
def test_enable_disable_key(self, tmp_path):
@@ -439,14 +424,11 @@ class TestKMSManager:
kms.create_key("Test key", key_id="test-key")
# Initially enabled
assert kms.get_key("test-key").enabled
# Disable
kms.disable_key("test-key")
assert not kms.get_key("test-key").enabled
# Enable
kms.enable_key("test-key")
assert kms.get_key("test-key").enabled
@@ -503,11 +485,9 @@ class TestKMSManager:
ciphertext = kms.encrypt("test-key", plaintext, context)
# Decrypt with same context succeeds
decrypted, _ = kms.decrypt(ciphertext, context)
assert decrypted == plaintext
# Decrypt with different context fails
with pytest.raises(EncryptionError):
kms.decrypt(ciphertext, {"different": "context"})
@@ -527,7 +507,6 @@ class TestKMSManager:
assert len(plaintext_key) == 32
assert len(encrypted_key) > 0
# Decrypt the encrypted key
decrypted_key = kms.decrypt_data_key("test-key", encrypted_key)
assert decrypted_key == plaintext_key
@@ -561,13 +540,8 @@ class TestKMSManager:
plaintext = b"Data to re-encrypt"
# Encrypt with key-1
ciphertext1 = kms.encrypt("key-1", plaintext)
# Re-encrypt with key-2
ciphertext2 = kms.re_encrypt(ciphertext1, "key-2")
# Decrypt with key-2
decrypted, key_id = kms.decrypt(ciphertext2)
assert decrypted == plaintext
@@ -587,7 +561,7 @@ class TestKMSManager:
assert len(random1) == 32
assert len(random2) == 32
assert random1 != random2 # Very unlikely to be equal
assert random1 != random2
def test_keys_persist_across_instances(self, tmp_path):
"""Test that keys persist and can be loaded by new instances."""
@@ -596,14 +570,12 @@ class TestKMSManager:
keys_path = tmp_path / "kms_keys.json"
master_key_path = tmp_path / "master.key"
# Create key with first instance
kms1 = KMSManager(keys_path, master_key_path)
kms1.create_key("Test key", key_id="test-key")
plaintext = b"Persistent encryption test"
ciphertext = kms1.encrypt("test-key", plaintext)
# Create new instance and verify key works
kms2 = KMSManager(keys_path, master_key_path)
decrypted, key_id = kms2.decrypt(ciphertext)
@@ -665,13 +637,11 @@ class TestEncryptedStorage:
encrypted_storage = EncryptedObjectStorage(storage, encryption)
# Create bucket with encryption config
storage.create_bucket("test-bucket")
storage.set_bucket_encryption("test-bucket", {
"Rules": [{"SSEAlgorithm": "AES256"}]
})
# Put object
original_data = b"This is secret data that should be encrypted"
stream = io.BytesIO(original_data)
@@ -683,12 +653,10 @@ class TestEncryptedStorage:
assert meta is not None
# Verify file on disk is encrypted (not plaintext)
file_path = storage_root / "test-bucket" / "secret.txt"
stored_data = file_path.read_bytes()
assert stored_data != original_data
# Get object - should be decrypted
data, metadata = encrypted_storage.get_object_data("test-bucket", "secret.txt")
assert data == original_data
@@ -711,14 +679,12 @@ class TestEncryptedStorage:
encrypted_storage = EncryptedObjectStorage(storage, encryption)
storage.create_bucket("test-bucket")
# No encryption config
original_data = b"Unencrypted data"
stream = io.BytesIO(original_data)
encrypted_storage.put_object("test-bucket", "plain.txt", stream)
# Verify file on disk is NOT encrypted
file_path = storage_root / "test-bucket" / "plain.txt"
stored_data = file_path.read_bytes()
assert stored_data == original_data
@@ -745,7 +711,6 @@ class TestEncryptedStorage:
original_data = b"Explicitly encrypted data"
stream = io.BytesIO(original_data)
# Request encryption explicitly
encrypted_storage.put_object(
"test-bucket",
"encrypted.txt",
@@ -753,11 +718,9 @@ class TestEncryptedStorage:
server_side_encryption="AES256",
)
# Verify file is encrypted
file_path = storage_root / "test-bucket" / "encrypted.txt"
stored_data = file_path.read_bytes()
assert stored_data != original_data
# Get object - should be decrypted
data, _ = encrypted_storage.get_object_data("test-bucket", "encrypted.txt")
assert data == original_data

View File

@@ -24,7 +24,6 @@ def kms_client(tmp_path):
"KMS_KEYS_PATH": str(tmp_path / "kms_keys.json"),
})
# Create default IAM config with admin user
iam_config = {
"users": [
{
@@ -83,7 +82,6 @@ class TestKMSKeyManagement:
def test_list_keys(self, kms_client, auth_headers):
"""Test listing KMS keys."""
# Create some keys
kms_client.post("/kms/keys", json={"Description": "Key 1"}, headers=auth_headers)
kms_client.post("/kms/keys", json={"Description": "Key 2"}, headers=auth_headers)
@@ -97,7 +95,6 @@ class TestKMSKeyManagement:
def test_get_key(self, kms_client, auth_headers):
"""Test getting a specific key."""
# Create a key
create_response = kms_client.post(
"/kms/keys",
json={"KeyId": "test-key", "Description": "Test key"},
@@ -120,36 +117,28 @@ class TestKMSKeyManagement:
def test_delete_key(self, kms_client, auth_headers):
"""Test deleting a key."""
# Create a key
kms_client.post("/kms/keys", json={"KeyId": "test-key"}, headers=auth_headers)
# Delete it
response = kms_client.delete("/kms/keys/test-key", headers=auth_headers)
assert response.status_code == 204
# Verify it's gone
get_response = kms_client.get("/kms/keys/test-key", headers=auth_headers)
assert get_response.status_code == 404
def test_enable_disable_key(self, kms_client, auth_headers):
"""Test enabling and disabling a key."""
# Create a key
kms_client.post("/kms/keys", json={"KeyId": "test-key"}, headers=auth_headers)
# Disable
response = kms_client.post("/kms/keys/test-key/disable", headers=auth_headers)
assert response.status_code == 200
# Verify disabled
get_response = kms_client.get("/kms/keys/test-key", headers=auth_headers)
assert get_response.get_json()["KeyMetadata"]["Enabled"] is False
# Enable
response = kms_client.post("/kms/keys/test-key/enable", headers=auth_headers)
assert response.status_code == 200
# Verify enabled
get_response = kms_client.get("/kms/keys/test-key", headers=auth_headers)
assert get_response.get_json()["KeyMetadata"]["Enabled"] is True
@@ -159,13 +148,11 @@ class TestKMSEncryption:
def test_encrypt_decrypt(self, kms_client, auth_headers):
"""Test encrypting and decrypting data."""
# Create a key
kms_client.post("/kms/keys", json={"KeyId": "test-key"}, headers=auth_headers)
plaintext = b"Hello, World!"
plaintext_b64 = base64.b64encode(plaintext).decode()
# Encrypt
encrypt_response = kms_client.post(
"/kms/encrypt",
json={"KeyId": "test-key", "Plaintext": plaintext_b64},
@@ -178,7 +165,6 @@ class TestKMSEncryption:
assert "CiphertextBlob" in encrypt_data
assert encrypt_data["KeyId"] == "test-key"
# Decrypt
decrypt_response = kms_client.post(
"/kms/decrypt",
json={"CiphertextBlob": encrypt_data["CiphertextBlob"]},
@@ -199,7 +185,6 @@ class TestKMSEncryption:
plaintext_b64 = base64.b64encode(plaintext).decode()
context = {"purpose": "testing", "bucket": "my-bucket"}
# Encrypt with context
encrypt_response = kms_client.post(
"/kms/encrypt",
json={
@@ -213,7 +198,6 @@ class TestKMSEncryption:
assert encrypt_response.status_code == 200
ciphertext = encrypt_response.get_json()["CiphertextBlob"]
# Decrypt with same context succeeds
decrypt_response = kms_client.post(
"/kms/decrypt",
json={
@@ -225,7 +209,6 @@ class TestKMSEncryption:
assert decrypt_response.status_code == 200
# Decrypt with wrong context fails
wrong_context_response = kms_client.post(
"/kms/decrypt",
json={
@@ -325,11 +308,9 @@ class TestKMSReEncrypt:
def test_re_encrypt(self, kms_client, auth_headers):
"""Test re-encrypting data with a different key."""
# Create two keys
kms_client.post("/kms/keys", json={"KeyId": "key-1"}, headers=auth_headers)
kms_client.post("/kms/keys", json={"KeyId": "key-2"}, headers=auth_headers)
# Encrypt with key-1
plaintext = b"Data to re-encrypt"
encrypt_response = kms_client.post(
"/kms/encrypt",
@@ -342,7 +323,6 @@ class TestKMSReEncrypt:
ciphertext = encrypt_response.get_json()["CiphertextBlob"]
# Re-encrypt with key-2
re_encrypt_response = kms_client.post(
"/kms/re-encrypt",
json={
@@ -358,7 +338,6 @@ class TestKMSReEncrypt:
assert data["SourceKeyId"] == "key-1"
assert data["KeyId"] == "key-2"
# Verify new ciphertext can be decrypted
decrypt_response = kms_client.post(
"/kms/decrypt",
json={"CiphertextBlob": data["CiphertextBlob"]},
@@ -398,7 +377,7 @@ class TestKMSRandom:
data = response.get_json()
random_bytes = base64.b64decode(data["Plaintext"])
assert len(random_bytes) == 32 # Default is 32 bytes
assert len(random_bytes) == 32
class TestClientSideEncryption:
@@ -422,11 +401,9 @@ class TestClientSideEncryption:
def test_client_encrypt_decrypt(self, kms_client, auth_headers):
"""Test client-side encryption and decryption."""
# Generate a key
key_response = kms_client.post("/kms/client/generate-key", headers=auth_headers)
key = key_response.get_json()["key"]
# Encrypt
plaintext = b"Client-side encrypted data"
encrypt_response = kms_client.post(
"/kms/client/encrypt",
@@ -440,7 +417,6 @@ class TestClientSideEncryption:
assert encrypt_response.status_code == 200
encrypted = encrypt_response.get_json()
# Decrypt
decrypt_response = kms_client.post(
"/kms/client/decrypt",
json={
@@ -461,7 +437,6 @@ class TestEncryptionMaterials:
def test_get_encryption_materials(self, kms_client, auth_headers):
"""Test getting encryption materials for client-side S3 encryption."""
# Create a key
kms_client.post("/kms/keys", json={"KeyId": "s3-key"}, headers=auth_headers)
response = kms_client.post(
@@ -478,7 +453,6 @@ class TestEncryptionMaterials:
assert data["KeyId"] == "s3-key"
assert data["Algorithm"] == "AES-256-GCM"
# Verify key is 256 bits
key = base64.b64decode(data["PlaintextKey"])
assert len(key) == 32
@@ -490,7 +464,6 @@ class TestKMSAuthentication:
"""Test that unauthenticated requests are rejected."""
response = kms_client.get("/kms/keys")
# Should fail with 403 (no credentials)
assert response.status_code == 403
def test_invalid_credentials_fail(self, kms_client):

View File

@@ -4,7 +4,6 @@ import pytest
from xml.etree.ElementTree import fromstring
# Helper to create file-like stream
def _stream(data: bytes):
return io.BytesIO(data)
@@ -19,13 +18,11 @@ class TestListObjectsV2:
"""Tests for ListObjectsV2 endpoint."""
def test_list_objects_v2_basic(self, client, signer, storage):
# Create bucket and objects
storage.create_bucket("v2-test")
storage.put_object("v2-test", "file1.txt", _stream(b"hello"))
storage.put_object("v2-test", "file2.txt", _stream(b"world"))
storage.put_object("v2-test", "folder/file3.txt", _stream(b"nested"))
# ListObjectsV2 request
headers = signer("GET", "/v2-test?list-type=2")
resp = client.get("/v2-test", query_string={"list-type": "2"}, headers=headers)
assert resp.status_code == 200
@@ -46,7 +43,6 @@ class TestListObjectsV2:
storage.put_object("prefix-test", "photos/2024/mar.jpg", _stream(b"mar"))
storage.put_object("prefix-test", "docs/readme.md", _stream(b"readme"))
# List with prefix and delimiter
headers = signer("GET", "/prefix-test?list-type=2&prefix=photos/&delimiter=/")
resp = client.get(
"/prefix-test",
@@ -56,11 +52,10 @@ class TestListObjectsV2:
assert resp.status_code == 200
root = fromstring(resp.data)
# Should show common prefixes for 2023/ and 2024/
prefixes = [el.find("Prefix").text for el in root.findall("CommonPrefixes")]
assert "photos/2023/" in prefixes
assert "photos/2024/" in prefixes
assert len(root.findall("Contents")) == 0 # No direct files under photos/
assert len(root.findall("Contents")) == 0
class TestPutBucketVersioning:
@@ -78,7 +73,6 @@ class TestPutBucketVersioning:
resp = client.put("/version-test", query_string={"versioning": ""}, data=payload, headers=headers)
assert resp.status_code == 200
# Verify via GET
headers = signer("GET", "/version-test?versioning")
resp = client.get("/version-test", query_string={"versioning": ""}, headers=headers)
root = fromstring(resp.data)
@@ -110,15 +104,13 @@ class TestDeleteBucketTagging:
storage.create_bucket("tag-delete-test")
storage.set_bucket_tags("tag-delete-test", [{"Key": "env", "Value": "test"}])
# Delete tags
headers = signer("DELETE", "/tag-delete-test?tagging")
resp = client.delete("/tag-delete-test", query_string={"tagging": ""}, headers=headers)
assert resp.status_code == 204
# Verify tags are gone
headers = signer("GET", "/tag-delete-test?tagging")
resp = client.get("/tag-delete-test", query_string={"tagging": ""}, headers=headers)
assert resp.status_code == 404 # NoSuchTagSet
assert resp.status_code == 404
class TestDeleteBucketCors:
@@ -130,15 +122,13 @@ class TestDeleteBucketCors:
{"AllowedOrigins": ["*"], "AllowedMethods": ["GET"]}
])
# Delete CORS
headers = signer("DELETE", "/cors-delete-test?cors")
resp = client.delete("/cors-delete-test", query_string={"cors": ""}, headers=headers)
assert resp.status_code == 204
# Verify CORS is gone
headers = signer("GET", "/cors-delete-test?cors")
resp = client.get("/cors-delete-test", query_string={"cors": ""}, headers=headers)
assert resp.status_code == 404 # NoSuchCORSConfiguration
assert resp.status_code == 404
class TestGetBucketLocation:
@@ -173,7 +163,6 @@ class TestBucketAcl:
def test_put_bucket_acl(self, client, signer, storage):
storage.create_bucket("acl-put-test")
# PUT with canned ACL header
headers = signer("PUT", "/acl-put-test?acl")
headers["x-amz-acl"] = "public-read"
resp = client.put("/acl-put-test", query_string={"acl": ""}, headers=headers)
@@ -188,7 +177,6 @@ class TestCopyObject:
storage.create_bucket("copy-dst")
storage.put_object("copy-src", "original.txt", _stream(b"original content"))
# Copy object
headers = signer("PUT", "/copy-dst/copied.txt")
headers["x-amz-copy-source"] = "/copy-src/original.txt"
resp = client.put("/copy-dst/copied.txt", headers=headers)
@@ -199,7 +187,6 @@ class TestCopyObject:
assert root.find("ETag") is not None
assert root.find("LastModified") is not None
# Verify copy exists
path = storage.get_object_path("copy-dst", "copied.txt")
assert path.read_bytes() == b"original content"
@@ -208,7 +195,6 @@ class TestCopyObject:
storage.create_bucket("meta-dst")
storage.put_object("meta-src", "source.txt", _stream(b"data"), metadata={"old": "value"})
# Copy with REPLACE directive
headers = signer("PUT", "/meta-dst/target.txt")
headers["x-amz-copy-source"] = "/meta-src/source.txt"
headers["x-amz-metadata-directive"] = "REPLACE"
@@ -216,7 +202,6 @@ class TestCopyObject:
resp = client.put("/meta-dst/target.txt", headers=headers)
assert resp.status_code == 200
# Verify new metadata (note: header keys are Title-Cased)
meta = storage.get_object_metadata("meta-dst", "target.txt")
assert "New" in meta or "new" in meta
assert "old" not in meta and "Old" not in meta
@@ -229,7 +214,6 @@ class TestObjectTagging:
storage.create_bucket("obj-tag-test")
storage.put_object("obj-tag-test", "tagged.txt", _stream(b"content"))
# PUT tags
payload = b"""<?xml version="1.0" encoding="UTF-8"?>
<Tagging>
<TagSet>
@@ -247,7 +231,6 @@ class TestObjectTagging:
)
assert resp.status_code == 204
# GET tags
headers = signer("GET", "/obj-tag-test/tagged.txt?tagging")
resp = client.get("/obj-tag-test/tagged.txt", query_string={"tagging": ""}, headers=headers)
assert resp.status_code == 200
@@ -257,12 +240,10 @@ class TestObjectTagging:
assert tags["project"] == "demo"
assert tags["env"] == "test"
# DELETE tags
headers = signer("DELETE", "/obj-tag-test/tagged.txt?tagging")
resp = client.delete("/obj-tag-test/tagged.txt", query_string={"tagging": ""}, headers=headers)
assert resp.status_code == 204
# Verify empty
headers = signer("GET", "/obj-tag-test/tagged.txt?tagging")
resp = client.get("/obj-tag-test/tagged.txt", query_string={"tagging": ""}, headers=headers)
root = fromstring(resp.data)
@@ -272,7 +253,6 @@ class TestObjectTagging:
storage.create_bucket("tag-limit")
storage.put_object("tag-limit", "file.txt", _stream(b"x"))
# Try to set 11 tags (limit is 10)
tags = "".join(f"<Tag><Key>key{i}</Key><Value>val{i}</Value></Tag>" for i in range(11))
payload = f"<Tagging><TagSet>{tags}</TagSet></Tagging>".encode()

View File

@@ -67,7 +67,6 @@ class TestUIBucketEncryption:
app = _make_encryption_app(tmp_path)
client = app.test_client()
# Login first
client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)
response = client.get("/ui/buckets/test-bucket?tab=properties")
@@ -82,14 +81,11 @@ class TestUIBucketEncryption:
app = _make_encryption_app(tmp_path)
client = app.test_client()
# Login
client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)
# Get CSRF token
response = client.get("/ui/buckets/test-bucket?tab=properties")
csrf_token = get_csrf_token(response)
# Enable AES-256 encryption
response = client.post(
"/ui/buckets/test-bucket/encryption",
data={
@@ -102,7 +98,6 @@ class TestUIBucketEncryption:
assert response.status_code == 200
html = response.data.decode("utf-8")
# Should see success message or enabled state
assert "AES-256" in html or "encryption enabled" in html.lower()
def test_enable_kms_encryption(self, tmp_path):
@@ -110,7 +105,6 @@ class TestUIBucketEncryption:
app = _make_encryption_app(tmp_path, kms_enabled=True)
client = app.test_client()
# Create a KMS key first
with app.app_context():
kms = app.extensions.get("kms")
if kms:
@@ -119,14 +113,11 @@ class TestUIBucketEncryption:
else:
pytest.skip("KMS not available")
# Login
client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)
# Get CSRF token
response = client.get("/ui/buckets/test-bucket?tab=properties")
csrf_token = get_csrf_token(response)
# Enable KMS encryption
response = client.post(
"/ui/buckets/test-bucket/encryption",
data={
@@ -147,10 +138,8 @@ class TestUIBucketEncryption:
app = _make_encryption_app(tmp_path)
client = app.test_client()
# Login
client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)
# First enable encryption
response = client.get("/ui/buckets/test-bucket?tab=properties")
csrf_token = get_csrf_token(response)
@@ -163,7 +152,6 @@ class TestUIBucketEncryption:
},
)
# Now disable it
response = client.get("/ui/buckets/test-bucket?tab=properties")
csrf_token = get_csrf_token(response)
@@ -185,7 +173,6 @@ class TestUIBucketEncryption:
app = _make_encryption_app(tmp_path)
client = app.test_client()
# Login
client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)
response = client.get("/ui/buckets/test-bucket?tab=properties")
@@ -210,10 +197,8 @@ class TestUIBucketEncryption:
app = _make_encryption_app(tmp_path)
client = app.test_client()
# Login
client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)
# Enable encryption
response = client.get("/ui/buckets/test-bucket?tab=properties")
csrf_token = get_csrf_token(response)
@@ -226,7 +211,6 @@ class TestUIBucketEncryption:
},
)
# Verify it's stored
with app.app_context():
storage = app.extensions["object_storage"]
config = storage.get_bucket_encryption("test-bucket")
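For quick inspection, the same storage-level lookup used by this test can be run directly. This is a sketch that reuses the module's `_make_encryption_app` helper; the returned config shape shown in the comment is an assumption modeled on AWS's ServerSideEncryptionConfiguration document, not confirmed from the implementation.

```python
# Sketch only: relies on the test module's _make_encryption_app helper;
# the config shape in the comment is an assumption.
def show_encryption_config(tmp_path):
    app = _make_encryption_app(tmp_path)
    with app.app_context():
        storage = app.extensions["object_storage"]
        config = storage.get_bucket_encryption("test-bucket")
        # e.g. {"Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]}
        return config
```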
@@ -244,10 +228,8 @@ class TestUIEncryptionWithoutPermission:
app = _make_encryption_app(tmp_path)
client = app.test_client()
# Login as readonly user
client.post("/ui/login", data={"access_key": "readonly", "secret_key": "secret"}, follow_redirects=True)
# This should fail or be rejected
response = client.get("/ui/buckets/test-bucket?tab=properties")
csrf_token = get_csrf_token(response)
@@ -261,8 +243,6 @@ class TestUIEncryptionWithoutPermission:
follow_redirects=True,
)
# Should either redirect with error or show permission denied
assert response.status_code == 200
html = response.data.decode("utf-8")
# Should contain error about permission denied
assert "Access denied" in html or "permission" in html.lower() or "not authorized" in html.lower()

View File

@@ -157,9 +157,14 @@ class TestPaginatedObjectListing:
assert "last_modified" in obj
assert "last_modified_display" in obj
assert "etag" in obj
assert "preview_url" in obj
assert "download_url" in obj
assert "delete_endpoint" in obj
# URLs are now returned as templates (not per-object) for performance
assert "url_templates" in data
templates = data["url_templates"]
assert "preview" in templates
assert "download" in templates
assert "delete" in templates
assert "KEY_PLACEHOLDER" in templates["preview"]
def test_bucket_detail_page_loads_without_objects(self, tmp_path):
"""Bucket detail page should load even with many objects."""