10 Commits

27 changed files with 635 additions and 1118 deletions

View File

@@ -1,5 +1,5 @@
# syntax=docker/dockerfile:1.7 # syntax=docker/dockerfile:1.7
FROM python:3.12.12-slim FROM python:3.11-slim
ENV PYTHONDONTWRITEBYTECODE=1 \ ENV PYTHONDONTWRITEBYTECODE=1 \
PYTHONUNBUFFERED=1 PYTHONUNBUFFERED=1

README.md
View File

@@ -1,251 +1,117 @@
# MyFSIO
Before:

# MyFSIO

A lightweight, S3-compatible object storage system built with Flask. MyFSIO implements core AWS S3 REST API operations with filesystem-backed storage, making it ideal for local development, testing, and self-hosted storage scenarios.

## Features

**Core Storage**
- S3-compatible REST API with AWS Signature Version 4 authentication
- Bucket and object CRUD operations
- Object versioning with version history
- Multipart uploads for large files
- Presigned URLs (1 second to 7 days validity)

**Security & Access Control**
- IAM users with access key management and rotation
- Bucket policies (AWS Policy Version 2012-10-17)
- Server-side encryption (SSE-S3 and SSE-KMS)
- Built-in Key Management Service (KMS)
- Rate limiting per endpoint

**Advanced Features**
- Cross-bucket replication to remote S3-compatible endpoints
- Hot-reload for bucket policies (no restart required)
- CORS configuration per bucket

**Management UI**
- Web console for bucket and object management
- IAM dashboard for user administration
- Inline JSON policy editor with presets
- Object browser with folder navigation and bulk operations
- Dark mode support

## Architecture

```
+------------------+         +------------------+
|   API Server     |         |   UI Server      |
|   (port 5000)    |         |   (port 5100)    |
|                  |         |                  |
|  - S3 REST API   |<------->|  - Web Console   |
|  - SigV4 Auth    |         |  - IAM Dashboard |
|  - Presign URLs  |         |  - Bucket Editor |
+--------+---------+         +------------------+
         |
         v
+------------------+         +-------------------+
|  Object Storage  |         |  System Metadata  |
|  (filesystem)    |         |  (.myfsio.sys/)   |
|                  |         |                   |
|  data/<bucket>/  |         |  - IAM config     |
|    <objects>     |         |  - Bucket policies|
|                  |         |  - Encryption keys|
+------------------+         +-------------------+
```

After:

# MyFSIO (Flask S3 + IAM)

MyFSIO is a batteries-included, Flask-based recreation of Amazon S3 and IAM workflows built for local development. The design mirrors the [AWS S3 documentation](https://docs.aws.amazon.com/s3/) wherever practical: bucket naming, Signature Version 4 presigning, Version 2012-10-17 bucket policies, IAM-style users, and familiar REST endpoints.

## Why MyFSIO?

- **Dual servers:** Run both the API (port 5000) and UI (port 5100) with a single command: `python run.py`.
- **IAM + access keys:** Users, access keys, key rotation, and bucket-scoped actions (`list/read/write/delete/policy`) now live in `data/.myfsio.sys/config/iam.json` and are editable from the IAM dashboard.
- **Bucket policies + hot reload:** `data/.myfsio.sys/config/bucket_policies.json` uses AWS' policy grammar (Version `2012-10-17`) with a built-in watcher, so editing the JSON file applies immediately. The UI also ships Public/Private/Custom presets for faster edits.
- **Presigned URLs everywhere:** Signature Version 4 presigned URLs respect IAM + bucket policies and replace the now-removed "share link" feature for public access scenarios.
- **Modern UI:** Responsive tables, quick filters, preview sidebar, object-level delete buttons, a presign modal, and an inline JSON policy editor that respects dark mode keep bucket management friendly. The object browser supports folder navigation, infinite scroll pagination, bulk operations, and automatic retry on load failures.
- **Tests & health:** `/healthz` for smoke checks and `pytest` coverage for IAM, CRUD, presign, and policy flows.

## Architecture at a Glance

```
+------------------+       +----------------+
| API Server       |<----->| Object storage |
| (port 5000)      |       | (filesystem)   |
| - S3 routes      |       +----------------+
| - Presigned URLs |
| - Bucket policy  |
+------------------+
         ^
         |
+------------------+
| UI Server        |
| (port 5100)      |
| - Auth console   |
| - IAM dashboard  |
| - Bucket editor  |
+------------------+
```
Before:

## Quick Start

```bash
# Clone and setup
git clone https://gitea.jzwsite.com/kqjy/MyFSIO
cd s3
python -m venv .venv

# Activate virtual environment
# Windows PowerShell:
.\.venv\Scripts\Activate.ps1
# Windows CMD:
.venv\Scripts\activate.bat
# Linux/macOS:
source .venv/bin/activate

# Install dependencies
pip install -r requirements.txt

# Start both servers
python run.py

# Or start individually
python run.py --mode api   # API only (port 5000)
python run.py --mode ui    # UI only (port 5100)
```

**Default Credentials:** `localadmin` / `localadmin`
- **Web Console:** http://127.0.0.1:5100/ui
- **API Endpoint:** http://127.0.0.1:5000

After:

Both apps load the same configuration via `AppConfig` so IAM data and bucket policies stay consistent no matter which process you run.

Bucket policies are automatically reloaded whenever `bucket_policies.json` changes—no restarts required.

## Getting Started

```bash
python -m venv .venv
. .venv/Scripts/activate  # PowerShell: .\.venv\Scripts\Activate.ps1
pip install -r requirements.txt

# Run both API and UI (default)
python run.py

# Or run individually:
# python run.py --mode api
# python run.py --mode ui
```

Visit `http://127.0.0.1:5100/ui` for the console and `http://127.0.0.1:5000/` for the raw API. Override ports/hosts with the environment variables listed below.

## IAM, Access Keys, and Bucket Policies

- First run creates `data/.myfsio.sys/config/iam.json` with `localadmin / localadmin` (full control). Sign in via the UI, then use the **IAM** tab to create users, rotate secrets, or edit inline policies without touching JSON by hand.
- Bucket policies live in `data/.myfsio.sys/config/bucket_policies.json` and follow the AWS `arn:aws:s3:::bucket/key` resource syntax with Version `2012-10-17`. Attach/replace/remove policies from the bucket detail page or edit the JSON by hand—changes hot reload automatically (a scripted example follows below).
- IAM actions include extended verbs (`iam:list_users`, `iam:create_user`, `iam:update_policy`, etc.) so you can control who is allowed to manage other users and policies.

### Bucket Policy Presets & Hot Reload

- **Presets:** Every bucket detail view includes Public (read-only), Private (detach policy), and Custom presets. Public auto-populates a policy that grants anonymous `s3:ListBucket` + `s3:GetObject` access to the entire bucket.
- **Custom drafts:** Switching back to Custom restores your last manual edit so you can toggle between presets without losing work.
- **Hot reload:** The server watches `bucket_policies.json` and reloads statements on-the-fly—ideal for editing policies in your favorite editor while testing via curl or the UI.
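For instance, attaching the public read-only preset from a script is a single call. A minimal sketch, assuming a hypothetical bucket `demo-bucket` and eliding authentication (policy writes normally require an authorized IAM identity):

```python
# Sketch: attach the public read-only preset via the HTTP API.
# `demo-bucket` is a placeholder; auth headers are elided for brevity.
import requests

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": ["s3:ListBucket", "s3:GetObject"],
        "Resource": [
            "arn:aws:s3:::demo-bucket",
            "arn:aws:s3:::demo-bucket/*",
        ],
    }],
}

resp = requests.put("http://127.0.0.1:5000/bucket-policy/demo-bucket", json=policy)
resp.raise_for_status()
```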
## Presigned URLs
Presigned URLs follow the AWS CLI playbook:
- Call `POST /presign/<bucket>/<key>` (or use the "Presign" button in the UI) to request a Signature Version 4 URL valid for 1 second to 7 days.
- The generated URL honors IAM permissions and bucket-policy decisions at generation time and again when someone fetches it.
- Because presigned URLs cover both authenticated and public sharing scenarios, the legacy "share link" feature has been removed.
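A client-side sketch of that flow; the JSON field names (`expires_in` in the request, `url` in the response) are illustrative assumptions rather than the confirmed handler contract:

```python
# Sketch of the presign flow. Request/response field names are assumed.
import requests

API = "http://127.0.0.1:5000"

resp = requests.post(f"{API}/presign/demo-bucket/report.pdf",
                     json={"expires_in": 3600})  # 1 second .. 7 days
resp.raise_for_status()
presigned_url = resp.json()["url"]  # assumed response shape

# The holder of the URL can fetch the object until expiry, subject to
# IAM and bucket-policy checks re-evaluated at fetch time.
print(requests.get(presigned_url).status_code)
```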
Before:

## Configuration

| Variable | Default | Description |
|----------|---------|-------------|
| `STORAGE_ROOT` | `./data` | Filesystem root for bucket storage |
| `IAM_CONFIG` | `.myfsio.sys/config/iam.json` | IAM user and policy store |
| `BUCKET_POLICY_PATH` | `.myfsio.sys/config/bucket_policies.json` | Bucket policy store |
| `API_BASE_URL` | `http://127.0.0.1:5000` | API endpoint for UI calls |
| `MAX_UPLOAD_SIZE` | `1073741824` | Maximum upload size in bytes (1 GB) |
| `MULTIPART_MIN_PART_SIZE` | `5242880` | Minimum multipart part size (5 MB) |
| `UI_PAGE_SIZE` | `100` | Default page size for listings |
| `SECRET_KEY` | `dev-secret-key` | Flask session secret |
| `AWS_REGION` | `us-east-1` | Region for SigV4 signing |
| `AWS_SERVICE` | `s3` | Service name for SigV4 signing |
| `ENCRYPTION_ENABLED` | `false` | Enable server-side encryption |
| `KMS_ENABLED` | `false` | Enable Key Management Service |
| `LOG_LEVEL` | `INFO` | Logging verbosity |

## Data Layout

```
data/
├── <bucket>/                        # User buckets with objects
└── .myfsio.sys/                     # System metadata
    ├── config/
    │   ├── iam.json                 # IAM users and policies
    │   ├── bucket_policies.json     # Bucket policies
    │   ├── replication_rules.json
    │   └── connections.json         # Remote S3 connections
    ├── buckets/<bucket>/
    │   ├── meta/                    # Object metadata (.meta.json)
    │   ├── versions/                # Archived object versions
    │   └── .bucket.json             # Bucket config (versioning, CORS)
    ├── multipart/                   # Active multipart uploads
    └── keys/                        # Encryption keys (SSE-S3/KMS)
```

After:

## Configuration

| Variable | Default | Description |
| --- | --- | --- |
| `STORAGE_ROOT` | `<project>/data` | Filesystem root for bucket directories |
| `MAX_UPLOAD_SIZE` | `1073741824` | Maximum upload size (bytes) |
| `UI_PAGE_SIZE` | `100` | `MaxKeys` hint for listings |
| `SECRET_KEY` | `dev-secret-key` | Flask session secret for the UI |
| `IAM_CONFIG` | `<project>/data/.myfsio.sys/config/iam.json` | IAM user + policy store |
| `BUCKET_POLICY_PATH` | `<project>/data/.myfsio.sys/config/bucket_policies.json` | Bucket policy store |
| `API_BASE_URL` | `http://127.0.0.1:5000` | Used by the UI when calling API endpoints (presign, bucket policy) |
| `AWS_REGION` | `us-east-1` | Region used in Signature V4 scope |
| `AWS_SERVICE` | `s3` | Service used in Signature V4 scope |

> Buckets now live directly under `data/` while system metadata (versions, IAM, bucket policies, multipart uploads, etc.) lives in `data/.myfsio.sys`.

## API Cheatsheet (IAM headers required)

```
GET    /                           -> List buckets (XML)
PUT    /<bucket>                   -> Create bucket
DELETE /<bucket>                   -> Delete bucket (must be empty)
GET    /<bucket>                   -> List objects (XML)
PUT    /<bucket>/<key>             -> Upload object (binary stream)
GET    /<bucket>/<key>             -> Download object
DELETE /<bucket>/<key>             -> Delete object
POST   /presign/<bucket>/<key>     -> Generate AWS SigV4 presigned URL (JSON)
GET    /bucket-policy/<bucket>     -> Fetch bucket policy (JSON)
PUT    /bucket-policy/<bucket>     -> Attach/replace bucket policy (JSON)
DELETE /bucket-policy/<bucket>     -> Remove bucket policy
```
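Because the API speaks Signature Version 4, a stock S3 client pointed at the local endpoint should cover most of this cheatsheet. A sketch with boto3; the bucket name and credentials are placeholders (create a real access key in the IAM dashboard first):

```python
# Sketch: drive the endpoints above with a standard S3 client.
# Assumes the SigV4 implementation is close enough to real S3 for boto3.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://127.0.0.1:5000",
    aws_access_key_id="YOUR_ACCESS_KEY",      # placeholder
    aws_secret_access_key="YOUR_SECRET_KEY",  # placeholder
    region_name="us-east-1",                  # must match AWS_REGION
)

s3.create_bucket(Bucket="demo-bucket")
s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello world")
print(s3.list_objects_v2(Bucket="demo-bucket")["KeyCount"])
```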
## API Reference
All endpoints require AWS Signature Version 4 authentication unless using presigned URLs or public bucket policies.
### Bucket Operations
| Method | Endpoint | Description |
|--------|----------|-------------|
| `GET` | `/` | List all buckets |
| `PUT` | `/<bucket>` | Create bucket |
| `DELETE` | `/<bucket>` | Delete bucket (must be empty) |
| `HEAD` | `/<bucket>` | Check bucket exists |
### Object Operations
| Method | Endpoint | Description |
|--------|----------|-------------|
| `GET` | `/<bucket>` | List objects (supports `list-type=2`) |
| `PUT` | `/<bucket>/<key>` | Upload object |
| `GET` | `/<bucket>/<key>` | Download object |
| `DELETE` | `/<bucket>/<key>` | Delete object |
| `HEAD` | `/<bucket>/<key>` | Get object metadata |
| `POST` | `/<bucket>/<key>?uploads` | Initiate multipart upload |
| `PUT` | `/<bucket>/<key>?partNumber=N&uploadId=X` | Upload part |
| `POST` | `/<bucket>/<key>?uploadId=X` | Complete multipart upload |
| `DELETE` | `/<bucket>/<key>?uploadId=X` | Abort multipart upload |
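The four multipart endpoints mirror the standard S3 flow, so the usual low-level calls should apply. A sketch using the boto3 client configured above; parts other than the last must respect `MULTIPART_MIN_PART_SIZE` (5 MB):

```python
# Sketch of the multipart flow against the endpoints above.
part_size = 5 * 1024 * 1024
body = b"x" * (part_size + 1024)  # two parts: 5 MB + 1 KB

mpu = s3.create_multipart_upload(Bucket="demo-bucket", Key="big.bin")
upload_id = mpu["UploadId"]

parts = []
for number, offset in enumerate(range(0, len(body), part_size), start=1):
    part = s3.upload_part(Bucket="demo-bucket", Key="big.bin",
                          UploadId=upload_id, PartNumber=number,
                          Body=body[offset:offset + part_size])
    parts.append({"PartNumber": number, "ETag": part["ETag"]})

s3.complete_multipart_upload(Bucket="demo-bucket", Key="big.bin",
                             UploadId=upload_id,
                             MultipartUpload={"Parts": parts})
```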
### Presigned URLs
| Method | Endpoint | Description |
|--------|----------|-------------|
| `POST` | `/presign/<bucket>/<key>` | Generate presigned URL |
### Bucket Policies
| Method | Endpoint | Description |
|--------|----------|-------------|
| `GET` | `/bucket-policy/<bucket>` | Get bucket policy |
| `PUT` | `/bucket-policy/<bucket>` | Set bucket policy |
| `DELETE` | `/bucket-policy/<bucket>` | Delete bucket policy |
### Versioning
| Method | Endpoint | Description |
|--------|----------|-------------|
| `GET` | `/<bucket>/<key>?versionId=X` | Get specific version |
| `DELETE` | `/<bucket>/<key>?versionId=X` | Delete specific version |
| `GET` | `/<bucket>?versions` | List object versions |
### Health Check
| Method | Endpoint | Description |
|--------|----------|-------------|
| `GET` | `/healthz` | Health check endpoint |
## IAM & Access Control
### Users and Access Keys
On first run, MyFSIO creates a default admin user (`localadmin`/`localadmin`). Use the IAM dashboard to:
- Create and delete users
- Generate and rotate access keys
- Attach inline policies to users
- Control IAM management permissions
### Bucket Policies
Bucket policies follow AWS policy grammar (Version `2012-10-17`) with support for:
- Principal-based access (`*` for anonymous, specific users)
- Action-based permissions (`s3:GetObject`, `s3:PutObject`, etc.)
- Resource patterns (`arn:aws:s3:::bucket/*`)
- Condition keys
**Policy Presets:**
- **Public:** Grants anonymous read access (`s3:GetObject`, `s3:ListBucket`)
- **Private:** Removes bucket policy (IAM-only access)
- **Custom:** Manual policy editing with draft preservation
Policies hot-reload when the JSON file changes.
## Server-Side Encryption
MyFSIO supports two encryption modes:
- **SSE-S3:** Server-managed keys with automatic key rotation
- **SSE-KMS:** Customer-managed keys via built-in KMS
Enable encryption with:
```bash
ENCRYPTION_ENABLED=true python run.py
```
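Per-object opt-in should then work through the standard S3 request parameters. A sketch reusing the boto3 client from the cheatsheet section; whether MyFSIO honors these exact parameters end to end is an assumption based on the modes listed above:

```python
# Sketch: per-object SSE via standard S3 parameters.
s3.put_object(Bucket="demo-bucket", Key="secret.txt",
              Body=b"top secret",
              ServerSideEncryption="AES256")      # SSE-S3

s3.put_object(Bucket="demo-bucket", Key="secret-kms.txt",
              Body=b"top secret",
              ServerSideEncryption="aws:kms",     # SSE-KMS (built-in KMS)
              SSEKMSKeyId="my-key-id")            # hypothetical key id
```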
## Cross-Bucket Replication
Replicate objects to remote S3-compatible endpoints:
1. Configure remote connections in the UI
2. Create replication rules specifying source/destination
3. Objects are automatically replicated on upload
## Docker
```bash
docker build -t myfsio .
docker run -p 5000:5000 -p 5100:5100 -v ./data:/app/data myfsio
```
## Testing
Before:
```bash
# Run all tests
pytest tests/ -v

# Run specific test file
pytest tests/test_api.py -v

# Run with coverage
pytest tests/ --cov=app --cov-report=html
```
After:
```bash
pytest -q
```
## References
- [Amazon Simple Storage Service Documentation](https://docs.aws.amazon.com/s3/)
- [Signature Version 4 Signing Process](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html)
- [Amazon S3 Bucket Policy Examples](https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-bucket-policies.html)

View File

@@ -2,12 +2,10 @@
from __future__ import annotations from __future__ import annotations
import json import json
import re
import time
from dataclasses import dataclass from dataclasses import dataclass
from fnmatch import fnmatch, translate from fnmatch import fnmatch
from pathlib import Path from pathlib import Path
from typing import Any, Dict, Iterable, List, Optional, Pattern, Sequence, Tuple from typing import Any, Dict, Iterable, List, Optional, Sequence
RESOURCE_PREFIX = "arn:aws:s3:::" RESOURCE_PREFIX = "arn:aws:s3:::"
@@ -135,22 +133,7 @@ class BucketPolicyStatement:
effect: str effect: str
principals: List[str] | str principals: List[str] | str
actions: List[str] actions: List[str]
resources: List[Tuple[str | None, str | None]] resources: List[tuple[str | None, str | None]]
# Performance: Pre-compiled regex patterns for resource matching
_compiled_patterns: List[Tuple[str | None, Optional[Pattern[str]]]] | None = None
def _get_compiled_patterns(self) -> List[Tuple[str | None, Optional[Pattern[str]]]]:
"""Lazily compile fnmatch patterns to regex for faster matching."""
if self._compiled_patterns is None:
self._compiled_patterns = []
for resource_bucket, key_pattern in self.resources:
if key_pattern is None:
self._compiled_patterns.append((resource_bucket, None))
else:
# Convert fnmatch pattern to regex
regex_pattern = translate(key_pattern)
self._compiled_patterns.append((resource_bucket, re.compile(regex_pattern)))
return self._compiled_patterns
def matches_principal(self, access_key: Optional[str]) -> bool: def matches_principal(self, access_key: Optional[str]) -> bool:
if self.principals == "*": if self.principals == "*":
@@ -166,16 +149,15 @@ class BucketPolicyStatement:
def matches_resource(self, bucket: Optional[str], object_key: Optional[str]) -> bool: def matches_resource(self, bucket: Optional[str], object_key: Optional[str]) -> bool:
bucket = (bucket or "*").lower() bucket = (bucket or "*").lower()
key = object_key or "" key = object_key or ""
for resource_bucket, compiled_pattern in self._get_compiled_patterns(): for resource_bucket, key_pattern in self.resources:
resource_bucket = (resource_bucket or "*").lower() resource_bucket = (resource_bucket or "*").lower()
if resource_bucket not in {"*", bucket}: if resource_bucket not in {"*", bucket}:
continue continue
if compiled_pattern is None: if key_pattern is None:
if not key: if not key:
return True return True
continue continue
# Performance: Use pre-compiled regex instead of fnmatch if fnmatch(key, key_pattern):
if compiled_pattern.match(key):
return True return True
return False return False
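The restored matcher leans on `fnmatch`'s shell-style wildcards for the key portion of the ARN; a quick illustration of those semantics:

```python
# Illustration of the fnmatch semantics the simplified matcher relies on.
from fnmatch import fnmatch

assert fnmatch("docs/readme.md", "*")          # "*" matches any key, "/" included
assert fnmatch("docs/readme.md", "docs/*")     # prefix wildcard
assert not fnmatch("img/logo.png", "docs/*")   # different prefix
# fnmatch additionally treats "?" and "[seq]" as wildcards.
```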
@@ -192,16 +174,8 @@ class BucketPolicyStore:
self._policies: Dict[str, List[BucketPolicyStatement]] = {} self._policies: Dict[str, List[BucketPolicyStatement]] = {}
self._load() self._load()
self._last_mtime = self._current_mtime() self._last_mtime = self._current_mtime()
# Performance: Avoid stat() on every request
self._last_stat_check = 0.0
self._stat_check_interval = 1.0 # Only check mtime every 1 second
def maybe_reload(self) -> None: def maybe_reload(self) -> None:
# Performance: Skip stat check if we checked recently
now = time.time()
if now - self._last_stat_check < self._stat_check_interval:
return
self._last_stat_check = now
current = self._current_mtime() current = self._current_mtime()
if current is None or current == self._last_mtime: if current is None or current == self._last_mtime:
return return
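The reload path is a plain mtime poll rather than an OS-level file watcher. The same pattern in a standalone sketch (`HotReloadingConfig` is hypothetical, not the project's class):

```python
# Standalone sketch of the mtime-polling hot-reload pattern used above.
import json
from pathlib import Path


class HotReloadingConfig:
    def __init__(self, path: Path) -> None:
        self.path = path
        self.data: dict = {}
        self._last_mtime: float | None = None
        self._reload()

    def _current_mtime(self) -> float | None:
        try:
            return self.path.stat().st_mtime
        except OSError:
            return None

    def _reload(self) -> None:
        self.data = json.loads(self.path.read_text(encoding="utf-8"))
        self._last_mtime = self._current_mtime()

    def maybe_reload(self) -> None:
        # Cheap check on every call: reload only when the mtime moved.
        current = self._current_mtime()
        if current is None or current == self._last_mtime:
            return
        self._reload()
```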

View File

@@ -79,7 +79,7 @@ class EncryptedObjectStorage:
kms_key_id: Optional[str] = None, kms_key_id: Optional[str] = None,
) -> ObjectMeta: ) -> ObjectMeta:
"""Store an object, optionally with encryption. """Store an object, optionally with encryption.
Args: Args:
bucket_name: Name of the bucket bucket_name: Name of the bucket
object_key: Key for the object object_key: Key for the object
@@ -87,41 +87,42 @@ class EncryptedObjectStorage:
metadata: Optional user metadata metadata: Optional user metadata
server_side_encryption: Encryption algorithm ("AES256" or "aws:kms") server_side_encryption: Encryption algorithm ("AES256" or "aws:kms")
kms_key_id: KMS key ID (for aws:kms encryption) kms_key_id: KMS key ID (for aws:kms encryption)
Returns: Returns:
ObjectMeta with object information ObjectMeta with object information
Performance: Uses streaming encryption for large files to reduce memory usage.
""" """
should_encrypt, algorithm, detected_kms_key = self._should_encrypt( should_encrypt, algorithm, detected_kms_key = self._should_encrypt(
bucket_name, server_side_encryption bucket_name, server_side_encryption
) )
if kms_key_id is None: if kms_key_id is None:
kms_key_id = detected_kms_key kms_key_id = detected_kms_key
if should_encrypt: if should_encrypt:
data = stream.read()
try: try:
# Performance: Use streaming encryption to avoid loading entire file into memory ciphertext, enc_metadata = self.encryption.encrypt_object(
encrypted_stream, enc_metadata = self.encryption.encrypt_stream( data,
stream,
algorithm=algorithm, algorithm=algorithm,
kms_key_id=kms_key_id,
context={"bucket": bucket_name, "key": object_key}, context={"bucket": bucket_name, "key": object_key},
) )
combined_metadata = metadata.copy() if metadata else {} combined_metadata = metadata.copy() if metadata else {}
combined_metadata.update(enc_metadata.to_dict()) combined_metadata.update(enc_metadata.to_dict())
encrypted_stream = io.BytesIO(ciphertext)
result = self.storage.put_object( result = self.storage.put_object(
bucket_name, bucket_name,
object_key, object_key,
encrypted_stream, encrypted_stream,
metadata=combined_metadata, metadata=combined_metadata,
) )
result.metadata = combined_metadata result.metadata = combined_metadata
return result return result
except EncryptionError as exc: except EncryptionError as exc:
raise StorageError(f"Encryption failed: {exc}") from exc raise StorageError(f"Encryption failed: {exc}") from exc
else: else:
@@ -134,34 +135,33 @@ class EncryptedObjectStorage:
def get_object_data(self, bucket_name: str, object_key: str) -> tuple[bytes, Dict[str, str]]: def get_object_data(self, bucket_name: str, object_key: str) -> tuple[bytes, Dict[str, str]]:
"""Get object data, decrypting if necessary. """Get object data, decrypting if necessary.
Returns: Returns:
Tuple of (data, metadata) Tuple of (data, metadata)
Performance: Uses streaming decryption to reduce memory usage.
""" """
path = self.storage.get_object_path(bucket_name, object_key) path = self.storage.get_object_path(bucket_name, object_key)
metadata = self.storage.get_object_metadata(bucket_name, object_key) metadata = self.storage.get_object_metadata(bucket_name, object_key)
with path.open("rb") as f:
data = f.read()
enc_metadata = EncryptionMetadata.from_dict(metadata) enc_metadata = EncryptionMetadata.from_dict(metadata)
if enc_metadata: if enc_metadata:
try: try:
# Performance: Use streaming decryption to avoid loading entire file into memory data = self.encryption.decrypt_object(
with path.open("rb") as f: data,
decrypted_stream = self.encryption.decrypt_stream(f, enc_metadata) enc_metadata,
data = decrypted_stream.read() context={"bucket": bucket_name, "key": object_key},
)
except EncryptionError as exc: except EncryptionError as exc:
raise StorageError(f"Decryption failed: {exc}") from exc raise StorageError(f"Decryption failed: {exc}") from exc
else:
with path.open("rb") as f:
data = f.read()
clean_metadata = { clean_metadata = {
k: v for k, v in metadata.items() k: v for k, v in metadata.items()
if not k.startswith("x-amz-encryption") if not k.startswith("x-amz-encryption")
and k != "x-amz-encrypted-data-key" and k != "x-amz-encrypted-data-key"
} }
return data, clean_metadata return data, clean_metadata
def get_object_stream(self, bucket_name: str, object_key: str) -> tuple[BinaryIO, Dict[str, str], int]: def get_object_stream(self, bucket_name: str, object_key: str) -> tuple[BinaryIO, Dict[str, str], int]:
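Both the buffered and streaming paths bottom out in AES-GCM envelope encryption (see the provider and `StreamingEncryptor` below). For reference, a minimal roundtrip through the bare primitive from the `cryptography` package:

```python
# Minimal AES-GCM roundtrip: the primitive the envelope encryption wraps.
import secrets
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # stand-in for a data key
nonce = secrets.token_bytes(12)            # 96-bit nonce, as in the code
ciphertext = AESGCM(key).encrypt(nonce, b"object bytes", None)
assert AESGCM(key).decrypt(nonce, ciphertext, None) == b"object bytes"
```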

View File

@@ -157,7 +157,10 @@ class LocalKeyEncryption(EncryptionProvider):
def decrypt(self, ciphertext: bytes, nonce: bytes, encrypted_data_key: bytes, def decrypt(self, ciphertext: bytes, nonce: bytes, encrypted_data_key: bytes,
key_id: str, context: Dict[str, str] | None = None) -> bytes: key_id: str, context: Dict[str, str] | None = None) -> bytes:
"""Decrypt data using envelope encryption.""" """Decrypt data using envelope encryption."""
# Decrypt the data key
data_key = self._decrypt_data_key(encrypted_data_key) data_key = self._decrypt_data_key(encrypted_data_key)
# Decrypt the data
aesgcm = AESGCM(data_key) aesgcm = AESGCM(data_key)
try: try:
return aesgcm.decrypt(nonce, ciphertext, None) return aesgcm.decrypt(nonce, ciphertext, None)
@@ -180,94 +183,81 @@ class StreamingEncryptor:
self.chunk_size = chunk_size self.chunk_size = chunk_size
def _derive_chunk_nonce(self, base_nonce: bytes, chunk_index: int) -> bytes: def _derive_chunk_nonce(self, base_nonce: bytes, chunk_index: int) -> bytes:
"""Derive a unique nonce for each chunk. """Derive a unique nonce for each chunk."""
# XOR the base nonce with the chunk index
Performance: Use direct byte manipulation instead of full int conversion. nonce_int = int.from_bytes(base_nonce, "big")
""" derived = nonce_int ^ chunk_index
# Performance: Only modify last 4 bytes instead of full 12-byte conversion return derived.to_bytes(12, "big")
return base_nonce[:8] + (chunk_index ^ int.from_bytes(base_nonce[8:], "big")).to_bytes(4, "big")
def encrypt_stream(self, stream: BinaryIO,
def encrypt_stream(self, stream: BinaryIO,
context: Dict[str, str] | None = None) -> tuple[BinaryIO, EncryptionMetadata]: context: Dict[str, str] | None = None) -> tuple[BinaryIO, EncryptionMetadata]:
"""Encrypt a stream and return encrypted stream + metadata. """Encrypt a stream and return encrypted stream + metadata."""
Performance: Writes chunks directly to output buffer instead of accumulating in list.
"""
data_key, encrypted_data_key = self.provider.generate_data_key() data_key, encrypted_data_key = self.provider.generate_data_key()
base_nonce = secrets.token_bytes(12) base_nonce = secrets.token_bytes(12)
aesgcm = AESGCM(data_key) aesgcm = AESGCM(data_key)
# Performance: Write directly to BytesIO instead of accumulating chunks encrypted_chunks = []
output = io.BytesIO()
output.write(b"\x00\x00\x00\x00") # Placeholder for chunk count
chunk_index = 0 chunk_index = 0
while True: while True:
chunk = stream.read(self.chunk_size) chunk = stream.read(self.chunk_size)
if not chunk: if not chunk:
break break
chunk_nonce = self._derive_chunk_nonce(base_nonce, chunk_index) chunk_nonce = self._derive_chunk_nonce(base_nonce, chunk_index)
encrypted_chunk = aesgcm.encrypt(chunk_nonce, chunk, None) encrypted_chunk = aesgcm.encrypt(chunk_nonce, chunk, None)
# Write size prefix + encrypted chunk directly size_prefix = len(encrypted_chunk).to_bytes(self.HEADER_SIZE, "big")
output.write(len(encrypted_chunk).to_bytes(self.HEADER_SIZE, "big")) encrypted_chunks.append(size_prefix + encrypted_chunk)
output.write(encrypted_chunk)
chunk_index += 1 chunk_index += 1
# Write actual chunk count to header header = chunk_index.to_bytes(4, "big")
output.seek(0) encrypted_data = header + b"".join(encrypted_chunks)
output.write(chunk_index.to_bytes(4, "big"))
output.seek(0)
metadata = EncryptionMetadata( metadata = EncryptionMetadata(
algorithm="AES256", algorithm="AES256",
key_id=self.provider.KEY_ID if hasattr(self.provider, "KEY_ID") else "local", key_id=self.provider.KEY_ID if hasattr(self.provider, "KEY_ID") else "local",
nonce=base_nonce, nonce=base_nonce,
encrypted_data_key=encrypted_data_key, encrypted_data_key=encrypted_data_key,
) )
return output, metadata return io.BytesIO(encrypted_data), metadata
def decrypt_stream(self, stream: BinaryIO, metadata: EncryptionMetadata) -> BinaryIO: def decrypt_stream(self, stream: BinaryIO, metadata: EncryptionMetadata) -> BinaryIO:
"""Decrypt a stream using the provided metadata. """Decrypt a stream using the provided metadata."""
Performance: Writes chunks directly to output buffer instead of accumulating in list.
"""
if isinstance(self.provider, LocalKeyEncryption): if isinstance(self.provider, LocalKeyEncryption):
data_key = self.provider._decrypt_data_key(metadata.encrypted_data_key) data_key = self.provider._decrypt_data_key(metadata.encrypted_data_key)
else: else:
raise EncryptionError("Unsupported provider for streaming decryption") raise EncryptionError("Unsupported provider for streaming decryption")
aesgcm = AESGCM(data_key) aesgcm = AESGCM(data_key)
base_nonce = metadata.nonce base_nonce = metadata.nonce
chunk_count_bytes = stream.read(4) chunk_count_bytes = stream.read(4)
if len(chunk_count_bytes) < 4: if len(chunk_count_bytes) < 4:
raise EncryptionError("Invalid encrypted stream: missing header") raise EncryptionError("Invalid encrypted stream: missing header")
chunk_count = int.from_bytes(chunk_count_bytes, "big") chunk_count = int.from_bytes(chunk_count_bytes, "big")
# Performance: Write directly to BytesIO instead of accumulating chunks decrypted_chunks = []
output = io.BytesIO()
for chunk_index in range(chunk_count): for chunk_index in range(chunk_count):
size_bytes = stream.read(self.HEADER_SIZE) size_bytes = stream.read(self.HEADER_SIZE)
if len(size_bytes) < self.HEADER_SIZE: if len(size_bytes) < self.HEADER_SIZE:
raise EncryptionError(f"Invalid encrypted stream: truncated at chunk {chunk_index}") raise EncryptionError(f"Invalid encrypted stream: truncated at chunk {chunk_index}")
chunk_size = int.from_bytes(size_bytes, "big") chunk_size = int.from_bytes(size_bytes, "big")
encrypted_chunk = stream.read(chunk_size) encrypted_chunk = stream.read(chunk_size)
if len(encrypted_chunk) < chunk_size: if len(encrypted_chunk) < chunk_size:
raise EncryptionError(f"Invalid encrypted stream: incomplete chunk {chunk_index}") raise EncryptionError(f"Invalid encrypted stream: incomplete chunk {chunk_index}")
chunk_nonce = self._derive_chunk_nonce(base_nonce, chunk_index) chunk_nonce = self._derive_chunk_nonce(base_nonce, chunk_index)
try: try:
decrypted_chunk = aesgcm.decrypt(chunk_nonce, encrypted_chunk, None) decrypted_chunk = aesgcm.decrypt(chunk_nonce, encrypted_chunk, None)
output.write(decrypted_chunk) # Write directly instead of appending to list decrypted_chunks.append(decrypted_chunk)
except Exception as exc: except Exception as exc:
raise EncryptionError(f"Failed to decrypt chunk {chunk_index}: {exc}") from exc raise EncryptionError(f"Failed to decrypt chunk {chunk_index}: {exc}") from exc
output.seek(0) return io.BytesIO(b"".join(decrypted_chunks))
return output
class EncryptionManager: class EncryptionManager:
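Both sides of the `_derive_chunk_nonce` diff XOR the chunk index into a 96-bit base nonce, and for indexes under 2^32 they should agree byte for byte; a quick self-check:

```python
# Self-check: the two nonce derivations shown above are equivalent for
# chunk indexes that fit in the last 4 bytes (i < 2**32).
import secrets

def derive_full_int(base_nonce: bytes, i: int) -> bytes:
    return (int.from_bytes(base_nonce, "big") ^ i).to_bytes(12, "big")

def derive_tail_bytes(base_nonce: bytes, i: int) -> bytes:
    tail = i ^ int.from_bytes(base_nonce[8:], "big")
    return base_nonce[:8] + tail.to_bytes(4, "big")

base = secrets.token_bytes(12)
for i in (0, 1, 255, 2**31):
    assert derive_full_int(base, i) == derive_tail_bytes(base, i)
```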

View File

@@ -4,12 +4,11 @@ from __future__ import annotations
import json import json
import math import math
import secrets import secrets
import time
from collections import deque from collections import deque
from dataclasses import dataclass from dataclasses import dataclass
from datetime import datetime, timedelta, timezone from datetime import datetime, timedelta, timezone
from pathlib import Path from pathlib import Path
from typing import Any, Deque, Dict, Iterable, List, Optional, Sequence, Set, Tuple from typing import Any, Deque, Dict, Iterable, List, Optional, Sequence, Set
class IamError(RuntimeError): class IamError(RuntimeError):
@@ -116,24 +115,13 @@ class IamService:
self._raw_config: Dict[str, Any] = {} self._raw_config: Dict[str, Any] = {}
self._failed_attempts: Dict[str, Deque[datetime]] = {} self._failed_attempts: Dict[str, Deque[datetime]] = {}
self._last_load_time = 0.0 self._last_load_time = 0.0
# Performance: credential cache with TTL
self._credential_cache: Dict[str, Tuple[str, Principal, float]] = {}
self._cache_ttl = 60.0 # Cache credentials for 60 seconds
self._last_stat_check = 0.0
self._stat_check_interval = 1.0 # Only stat() file every 1 second
self._load() self._load()
def _maybe_reload(self) -> None: def _maybe_reload(self) -> None:
"""Reload configuration if the file has changed on disk.""" """Reload configuration if the file has changed on disk."""
# Performance: Skip stat check if we checked recently
now = time.time()
if now - self._last_stat_check < self._stat_check_interval:
return
self._last_stat_check = now
try: try:
if self.config_path.stat().st_mtime > self._last_load_time: if self.config_path.stat().st_mtime > self._last_load_time:
self._load() self._load()
self._credential_cache.clear() # Invalidate cache on reload
except OSError: except OSError:
pass pass
@@ -193,37 +181,17 @@ class IamService:
return int(max(0, self.auth_lockout_window.total_seconds() - elapsed)) return int(max(0, self.auth_lockout_window.total_seconds() - elapsed))
def principal_for_key(self, access_key: str) -> Principal: def principal_for_key(self, access_key: str) -> Principal:
# Performance: Check cache first
now = time.time()
cached = self._credential_cache.get(access_key)
if cached:
secret, principal, cached_time = cached
if now - cached_time < self._cache_ttl:
return principal
self._maybe_reload() self._maybe_reload()
record = self._users.get(access_key) record = self._users.get(access_key)
if not record: if not record:
raise IamError("Unknown access key") raise IamError("Unknown access key")
principal = self._build_principal(access_key, record) return self._build_principal(access_key, record)
self._credential_cache[access_key] = (record["secret_key"], principal, now)
return principal
def secret_for_key(self, access_key: str) -> str: def secret_for_key(self, access_key: str) -> str:
# Performance: Check cache first
now = time.time()
cached = self._credential_cache.get(access_key)
if cached:
secret, principal, cached_time = cached
if now - cached_time < self._cache_ttl:
return secret
self._maybe_reload() self._maybe_reload()
record = self._users.get(access_key) record = self._users.get(access_key)
if not record: if not record:
raise IamError("Unknown access key") raise IamError("Unknown access key")
principal = self._build_principal(access_key, record)
self._credential_cache[access_key] = (record["secret_key"], principal, now)
return record["secret_key"] return record["secret_key"]
def authorize(self, principal: Principal, bucket_name: str | None, action: str) -> None: def authorize(self, principal: Principal, bucket_name: str | None, action: str) -> None:
@@ -474,36 +442,11 @@ class IamService:
raise IamError("User not found") raise IamError("User not found")
def get_secret_key(self, access_key: str) -> str | None: def get_secret_key(self, access_key: str) -> str | None:
# Performance: Check cache first
now = time.time()
cached = self._credential_cache.get(access_key)
if cached:
secret, principal, cached_time = cached
if now - cached_time < self._cache_ttl:
return secret
self._maybe_reload() self._maybe_reload()
record = self._users.get(access_key) record = self._users.get(access_key)
if record: return record["secret_key"] if record else None
# Cache the result
principal = self._build_principal(access_key, record)
self._credential_cache[access_key] = (record["secret_key"], principal, now)
return record["secret_key"]
return None
def get_principal(self, access_key: str) -> Principal | None: def get_principal(self, access_key: str) -> Principal | None:
# Performance: Check cache first
now = time.time()
cached = self._credential_cache.get(access_key)
if cached:
secret, principal, cached_time = cached
if now - cached_time < self._cache_ttl:
return principal
self._maybe_reload() self._maybe_reload()
record = self._users.get(access_key) record = self._users.get(access_key)
if record: return self._build_principal(access_key, record) if record else None
principal = self._build_principal(access_key, record)
self._credential_cache[access_key] = (record["secret_key"], principal, now)
return principal
return None

View File

@@ -975,7 +975,8 @@ def _object_tagging_handler(bucket_name: str, object_key: str) -> Response:
return _error_response("NoSuchKey", message, 404) return _error_response("NoSuchKey", message, 404)
current_app.logger.info("Object tags deleted", extra={"bucket": bucket_name, "key": object_key}) current_app.logger.info("Object tags deleted", extra={"bucket": bucket_name, "key": object_key})
return Response(status=204) return Response(status=204)
# PUT
payload = request.get_data(cache=False) or b"" payload = request.get_data(cache=False) or b""
try: try:
tags = _parse_tagging_document(payload) tags = _parse_tagging_document(payload)
@@ -1043,7 +1044,7 @@ def _bucket_cors_handler(bucket_name: str) -> Response:
return _error_response("NoSuchBucket", str(exc), 404) return _error_response("NoSuchBucket", str(exc), 404)
current_app.logger.info("Bucket CORS deleted", extra={"bucket": bucket_name}) current_app.logger.info("Bucket CORS deleted", extra={"bucket": bucket_name})
return Response(status=204) return Response(status=204)
# PUT
payload = request.get_data(cache=False) or b"" payload = request.get_data(cache=False) or b""
if not payload.strip(): if not payload.strip():
try: try:
@@ -1289,7 +1290,8 @@ def _bucket_lifecycle_handler(bucket_name: str) -> Response:
storage.set_bucket_lifecycle(bucket_name, None) storage.set_bucket_lifecycle(bucket_name, None)
current_app.logger.info("Bucket lifecycle deleted", extra={"bucket": bucket_name}) current_app.logger.info("Bucket lifecycle deleted", extra={"bucket": bucket_name})
return Response(status=204) return Response(status=204)
# PUT
payload = request.get_data(cache=False) or b"" payload = request.get_data(cache=False) or b""
if not payload.strip(): if not payload.strip():
return _error_response("MalformedXML", "Request body is required", 400) return _error_response("MalformedXML", "Request body is required", 400)
@@ -1452,7 +1454,8 @@ def _bucket_quota_handler(bucket_name: str) -> Response:
return _error_response("NoSuchBucket", str(exc), 404) return _error_response("NoSuchBucket", str(exc), 404)
current_app.logger.info("Bucket quota deleted", extra={"bucket": bucket_name}) current_app.logger.info("Bucket quota deleted", extra={"bucket": bucket_name})
return Response(status=204) return Response(status=204)
# PUT
payload = request.get_json(silent=True) payload = request.get_json(silent=True)
if not payload: if not payload:
return _error_response("MalformedRequest", "Request body must be JSON with quota limits", 400) return _error_response("MalformedRequest", "Request body must be JSON with quota limits", 400)
@@ -2168,89 +2171,48 @@ def _copy_object(dest_bucket: str, dest_key: str, copy_source: str) -> Response:
class AwsChunkedDecoder: class AwsChunkedDecoder:
"""Decodes aws-chunked encoded streams. """Decodes aws-chunked encoded streams."""
Performance optimized with buffered line reading instead of byte-by-byte.
"""
def __init__(self, stream): def __init__(self, stream):
self.stream = stream self.stream = stream
self._read_buffer = bytearray() # Performance: Pre-allocated buffer self.buffer = b""
self.chunk_remaining = 0 self.chunk_remaining = 0
self.finished = False self.finished = False
def _read_line(self) -> bytes:
"""Read until CRLF using buffered reads instead of byte-by-byte.
Performance: Reads in batches of 64-256 bytes instead of 1 byte at a time.
"""
line = bytearray()
while True:
# Check if we have data in buffer
if self._read_buffer:
# Look for CRLF in buffer
idx = self._read_buffer.find(b"\r\n")
if idx != -1:
# Found CRLF - extract line and update buffer
line.extend(self._read_buffer[: idx + 2])
del self._read_buffer[: idx + 2]
return bytes(line)
# No CRLF yet - consume entire buffer
line.extend(self._read_buffer)
self._read_buffer.clear()
# Read more data in larger chunks (64 bytes is enough for chunk headers)
chunk = self.stream.read(64)
if not chunk:
return bytes(line) if line else b""
self._read_buffer.extend(chunk)
def _read_exact(self, n: int) -> bytes:
"""Read exactly n bytes, using buffer first."""
result = bytearray()
# Use buffered data first
if self._read_buffer:
take = min(len(self._read_buffer), n)
result.extend(self._read_buffer[:take])
del self._read_buffer[:take]
n -= take
# Read remaining directly from stream
if n > 0:
data = self.stream.read(n)
if data:
result.extend(data)
return bytes(result)
def read(self, size=-1): def read(self, size=-1):
if self.finished: if self.finished:
return b"" return b""
result = bytearray() # Performance: Use bytearray for building result result = b""
while size == -1 or len(result) < size: while size == -1 or len(result) < size:
if self.chunk_remaining > 0: if self.chunk_remaining > 0:
to_read = self.chunk_remaining to_read = self.chunk_remaining
if size != -1: if size != -1:
to_read = min(to_read, size - len(result)) to_read = min(to_read, size - len(result))
chunk = self._read_exact(to_read) chunk = self.stream.read(to_read)
if not chunk: if not chunk:
raise IOError("Unexpected EOF in chunk data") raise IOError("Unexpected EOF in chunk data")
result.extend(chunk) result += chunk
self.chunk_remaining -= len(chunk) self.chunk_remaining -= len(chunk)
if self.chunk_remaining == 0: if self.chunk_remaining == 0:
crlf = self._read_exact(2) crlf = self.stream.read(2)
if crlf != b"\r\n": if crlf != b"\r\n":
raise IOError("Malformed chunk: missing CRLF") raise IOError("Malformed chunk: missing CRLF")
else: else:
line = self._read_line() line = b""
if not line: while True:
self.finished = True char = self.stream.read(1)
return bytes(result) if not char:
if not line:
self.finished = True
return result
raise IOError("Unexpected EOF in chunk size")
line += char
if line.endswith(b"\r\n"):
break
try: try:
line_str = line.decode("ascii").strip() line_str = line.decode("ascii").strip()
if ";" in line_str: if ";" in line_str:
@@ -2261,16 +2223,22 @@ class AwsChunkedDecoder:
if chunk_size == 0: if chunk_size == 0:
self.finished = True self.finished = True
# Skip trailing headers
while True: while True:
trailer = self._read_line() line = b""
if trailer == b"\r\n" or not trailer: while True:
char = self.stream.read(1)
if not char:
break
line += char
if line.endswith(b"\r\n"):
break
if line == b"\r\n" or not line:
break break
return bytes(result) return result
self.chunk_remaining = chunk_size self.chunk_remaining = chunk_size
return bytes(result) return result
def _initiate_multipart_upload(bucket_name: str, object_key: str) -> Response: def _initiate_multipart_upload(bucket_name: str, object_key: str) -> Response:
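For context, the wire format this decoder consumes is the `aws-chunked` content encoding: an ASCII hex chunk size with an optional `;chunk-signature=...` extension, CRLF, the chunk bytes, CRLF, terminated by a zero-length chunk and optional trailing headers. A tiny encoder sketch (dummy signatures, not SigV4-valid) makes the framing concrete:

```python
# Sketch: produce an aws-chunked body in the framing the decoder expects.
import io

def aws_chunked(payload: bytes, chunk_size: int = 8) -> bytes:
    out = io.BytesIO()
    for offset in range(0, len(payload), chunk_size):
        chunk = payload[offset:offset + chunk_size]
        out.write(f"{len(chunk):x};chunk-signature=0000\r\n".encode("ascii"))
        out.write(chunk + b"\r\n")
    out.write(b"0;chunk-signature=0000\r\n")  # zero-length final chunk
    out.write(b"\r\n")                        # end of (empty) trailers
    return out.getvalue()

# Feeding io.BytesIO(aws_chunked(b"hello world")) to the decoder above
# should yield b"hello world".
```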

View File

@@ -139,21 +139,9 @@ class ObjectStorage:
self._ensure_system_roots() self._ensure_system_roots()
# LRU cache for object metadata with thread-safe access # LRU cache for object metadata with thread-safe access
self._object_cache: OrderedDict[str, tuple[Dict[str, ObjectMeta], float]] = OrderedDict() self._object_cache: OrderedDict[str, tuple[Dict[str, ObjectMeta], float]] = OrderedDict()
self._cache_lock = threading.Lock() # Global lock for cache structure self._cache_lock = threading.Lock()
# Performance: Per-bucket locks to reduce contention
self._bucket_locks: Dict[str, threading.Lock] = {}
# Cache version counter for detecting stale reads # Cache version counter for detecting stale reads
self._cache_version: Dict[str, int] = {} self._cache_version: Dict[str, int] = {}
# Performance: Bucket config cache with TTL
self._bucket_config_cache: Dict[str, tuple[dict[str, Any], float]] = {}
self._bucket_config_cache_ttl = 30.0 # 30 second TTL
def _get_bucket_lock(self, bucket_id: str) -> threading.Lock:
"""Get or create a lock for a specific bucket. Reduces global lock contention."""
with self._cache_lock:
if bucket_id not in self._bucket_locks:
self._bucket_locks[bucket_id] = threading.Lock()
return self._bucket_locks[bucket_id]
def list_buckets(self) -> List[BucketMeta]: def list_buckets(self) -> List[BucketMeta]:
buckets: List[BucketMeta] = [] buckets: List[BucketMeta] = []
@@ -259,13 +247,11 @@ class ObjectStorage:
bucket_path = self._bucket_path(bucket_name) bucket_path = self._bucket_path(bucket_name)
if not bucket_path.exists(): if not bucket_path.exists():
raise StorageError("Bucket does not exist") raise StorageError("Bucket does not exist")
# Performance: Single check instead of three separate traversals if self._has_visible_objects(bucket_path):
has_objects, has_versions, has_multipart = self._check_bucket_contents(bucket_path)
if has_objects:
raise StorageError("Bucket not empty") raise StorageError("Bucket not empty")
if has_versions: if self._has_archived_versions(bucket_path):
raise StorageError("Bucket contains archived object versions") raise StorageError("Bucket contains archived object versions")
if has_multipart: if self._has_active_multipart_uploads(bucket_path):
raise StorageError("Bucket has active multipart uploads") raise StorageError("Bucket has active multipart uploads")
self._remove_tree(bucket_path) self._remove_tree(bucket_path)
self._remove_tree(self._system_bucket_root(bucket_path.name)) self._remove_tree(self._system_bucket_root(bucket_path.name))
@@ -407,20 +393,17 @@ class ObjectStorage:
internal_meta = {"__etag__": etag, "__size__": str(stat.st_size)} internal_meta = {"__etag__": etag, "__size__": str(stat.st_size)}
combined_meta = {**internal_meta, **(metadata or {})} combined_meta = {**internal_meta, **(metadata or {})}
self._write_metadata(bucket_id, safe_key, combined_meta) self._write_metadata(bucket_id, safe_key, combined_meta)
self._invalidate_bucket_stats_cache(bucket_id) self._invalidate_bucket_stats_cache(bucket_id)
self._invalidate_object_cache(bucket_id)
# Performance: Lazy update - only update the affected key instead of invalidating whole cache
obj_meta = ObjectMeta( return ObjectMeta(
key=safe_key.as_posix(), key=safe_key.as_posix(),
size=stat.st_size, size=stat.st_size,
last_modified=datetime.fromtimestamp(stat.st_mtime, timezone.utc), last_modified=datetime.fromtimestamp(stat.st_mtime, timezone.utc),
etag=etag, etag=etag,
metadata=metadata, metadata=metadata,
) )
self._update_object_cache_entry(bucket_id, safe_key.as_posix(), obj_meta)
return obj_meta
def get_object_path(self, bucket_name: str, object_key: str) -> Path: def get_object_path(self, bucket_name: str, object_key: str) -> Path:
path = self._object_path(bucket_name, object_key) path = self._object_path(bucket_name, object_key)
@@ -466,10 +449,9 @@ class ObjectStorage:
rel = path.relative_to(bucket_path) rel = path.relative_to(bucket_path)
self._safe_unlink(path) self._safe_unlink(path)
self._delete_metadata(bucket_id, rel) self._delete_metadata(bucket_id, rel)
self._invalidate_bucket_stats_cache(bucket_id) self._invalidate_bucket_stats_cache(bucket_id)
# Performance: Lazy update - only remove the affected key instead of invalidating whole cache self._invalidate_object_cache(bucket_id)
self._update_object_cache_entry(bucket_id, safe_key.as_posix(), None)
self._cleanup_empty_parents(path, bucket_path) self._cleanup_empty_parents(path, bucket_path)
def purge_object(self, bucket_name: str, object_key: str) -> None: def purge_object(self, bucket_name: str, object_key: str) -> None:
@@ -489,10 +471,9 @@ class ObjectStorage:
legacy_version_dir = self._legacy_version_dir(bucket_id, rel) legacy_version_dir = self._legacy_version_dir(bucket_id, rel)
if legacy_version_dir.exists(): if legacy_version_dir.exists():
shutil.rmtree(legacy_version_dir, ignore_errors=True) shutil.rmtree(legacy_version_dir, ignore_errors=True)
self._invalidate_bucket_stats_cache(bucket_id) self._invalidate_bucket_stats_cache(bucket_id)
# Performance: Lazy update - only remove the affected key instead of invalidating whole cache self._invalidate_object_cache(bucket_id)
self._update_object_cache_entry(bucket_id, rel.as_posix(), None)
self._cleanup_empty_parents(target, bucket_path) self._cleanup_empty_parents(target, bucket_path)
def is_versioning_enabled(self, bucket_name: str) -> bool: def is_versioning_enabled(self, bucket_name: str) -> bool:
@@ -1073,19 +1054,16 @@ class ObjectStorage:
shutil.rmtree(upload_root, ignore_errors=True) shutil.rmtree(upload_root, ignore_errors=True)
self._invalidate_bucket_stats_cache(bucket_id) self._invalidate_bucket_stats_cache(bucket_id)
self._invalidate_object_cache(bucket_id)
stat = destination.stat() stat = destination.stat()
# Performance: Lazy update - only update the affected key instead of invalidating whole cache return ObjectMeta(
obj_meta = ObjectMeta(
key=safe_key.as_posix(), key=safe_key.as_posix(),
size=stat.st_size, size=stat.st_size,
last_modified=datetime.fromtimestamp(stat.st_mtime, timezone.utc), last_modified=datetime.fromtimestamp(stat.st_mtime, timezone.utc),
etag=checksum.hexdigest(), etag=checksum.hexdigest(),
metadata=metadata, metadata=metadata,
) )
self._update_object_cache_entry(bucket_id, safe_key.as_posix(), obj_meta)
return obj_meta
def abort_multipart_upload(self, bucket_name: str, upload_id: str) -> None: def abort_multipart_upload(self, bucket_name: str, upload_id: str) -> None:
bucket_path = self._bucket_path(bucket_name) bucket_path = self._bucket_path(bucket_name)
@@ -1327,47 +1305,37 @@ class ObjectStorage:
"""Get cached object metadata for a bucket, refreshing if stale. """Get cached object metadata for a bucket, refreshing if stale.
Uses LRU eviction to prevent unbounded cache growth. Uses LRU eviction to prevent unbounded cache growth.
Thread-safe with per-bucket locks to reduce contention. Thread-safe with version tracking to detect concurrent invalidations.
""" """
now = time.time() now = time.time()
# Quick check with global lock (brief)
with self._cache_lock: with self._cache_lock:
cached = self._object_cache.get(bucket_id) cached = self._object_cache.get(bucket_id)
cache_version = self._cache_version.get(bucket_id, 0)
if cached: if cached:
objects, timestamp = cached objects, timestamp = cached
if now - timestamp < self.KEY_INDEX_CACHE_TTL: if now - timestamp < self.KEY_INDEX_CACHE_TTL:
# Move to end (most recently used)
self._object_cache.move_to_end(bucket_id) self._object_cache.move_to_end(bucket_id)
return objects return objects
cache_version = self._cache_version.get(bucket_id, 0)
# Use per-bucket lock for cache building (allows parallel builds for different buckets) # Build cache outside lock to avoid holding lock during I/O
bucket_lock = self._get_bucket_lock(bucket_id) objects = self._build_object_cache(bucket_path)
with bucket_lock:
# Double-check cache after acquiring per-bucket lock
with self._cache_lock:
cached = self._object_cache.get(bucket_id)
if cached:
objects, timestamp = cached
if now - timestamp < self.KEY_INDEX_CACHE_TTL:
self._object_cache.move_to_end(bucket_id)
return objects
# Build cache with per-bucket lock held (prevents duplicate work) with self._cache_lock:
objects = self._build_object_cache(bucket_path) # Check if cache was invalidated while we were building
current_version = self._cache_version.get(bucket_id, 0)
if current_version != cache_version:
# Cache was invalidated, rebuild
objects = self._build_object_cache(bucket_path)
with self._cache_lock: # Evict oldest entries if cache is full
# Check if cache was invalidated while we were building while len(self._object_cache) >= self.OBJECT_CACHE_MAX_SIZE:
current_version = self._cache_version.get(bucket_id, 0) self._object_cache.popitem(last=False)
if current_version != cache_version:
objects = self._build_object_cache(bucket_path)
# Evict oldest entries if cache is full self._object_cache[bucket_id] = (objects, time.time())
while len(self._object_cache) >= self.OBJECT_CACHE_MAX_SIZE: self._object_cache.move_to_end(bucket_id)
self._object_cache.popitem(last=False)
self._object_cache[bucket_id] = (objects, time.time())
self._object_cache.move_to_end(bucket_id)
return objects return objects
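The version counter is what keeps the build-outside-the-lock step safe: an invalidation during the slow filesystem walk bumps the counter, and the builder notices it on re-acquire. The same pattern in miniature (hypothetical helper, not the project's API):

```python
# Miniature of the version-checked cache rebuild used above.
import threading

_lock = threading.Lock()
_version: dict[str, int] = {}
_cache: dict[str, dict] = {}

def invalidate(bucket: str) -> None:
    with _lock:
        _version[bucket] = _version.get(bucket, 0) + 1
        _cache.pop(bucket, None)

def get_objects(bucket: str, build) -> dict:
    with _lock:
        if bucket in _cache:
            return _cache[bucket]
        seen = _version.get(bucket, 0)
    objects = build()                        # slow I/O outside the lock
    with _lock:
        if _version.get(bucket, 0) != seen:  # invalidated while building
            objects = build()                # rebuild before caching
        _cache[bucket] = objects
        return objects
```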
@@ -1386,23 +1354,6 @@ class ObjectStorage:
except OSError: except OSError:
pass pass
def _update_object_cache_entry(self, bucket_id: str, key: str, meta: Optional[ObjectMeta]) -> None:
"""Update a single entry in the object cache instead of invalidating the whole cache.
This is a performance optimization - lazy update instead of full invalidation.
"""
with self._cache_lock:
cached = self._object_cache.get(bucket_id)
if cached:
objects, timestamp = cached
if meta is None:
# Delete operation - remove key from cache
objects.pop(key, None)
else:
# Put operation - update/add key in cache
objects[key] = meta
# Keep same timestamp - don't reset TTL for single key updates
def _ensure_system_roots(self) -> None: def _ensure_system_roots(self) -> None:
for path in ( for path in (
self._system_root_path(), self._system_root_path(),
@@ -1422,33 +1373,19 @@ class ObjectStorage:
return self._system_bucket_root(bucket_name) / self.BUCKET_CONFIG_FILE return self._system_bucket_root(bucket_name) / self.BUCKET_CONFIG_FILE
def _read_bucket_config(self, bucket_name: str) -> dict[str, Any]: def _read_bucket_config(self, bucket_name: str) -> dict[str, Any]:
# Performance: Check cache first
now = time.time()
cached = self._bucket_config_cache.get(bucket_name)
if cached:
config, cached_time = cached
if now - cached_time < self._bucket_config_cache_ttl:
return config.copy() # Return copy to prevent mutation
config_path = self._bucket_config_path(bucket_name) config_path = self._bucket_config_path(bucket_name)
if not config_path.exists(): if not config_path.exists():
self._bucket_config_cache[bucket_name] = ({}, now)
return {} return {}
try: try:
data = json.loads(config_path.read_text(encoding="utf-8")) data = json.loads(config_path.read_text(encoding="utf-8"))
config = data if isinstance(data, dict) else {} return data if isinstance(data, dict) else {}
self._bucket_config_cache[bucket_name] = (config, now)
return config.copy()
except (OSError, json.JSONDecodeError): except (OSError, json.JSONDecodeError):
self._bucket_config_cache[bucket_name] = ({}, now)
return {} return {}
def _write_bucket_config(self, bucket_name: str, payload: dict[str, Any]) -> None: def _write_bucket_config(self, bucket_name: str, payload: dict[str, Any]) -> None:
config_path = self._bucket_config_path(bucket_name) config_path = self._bucket_config_path(bucket_name)
config_path.parent.mkdir(parents=True, exist_ok=True) config_path.parent.mkdir(parents=True, exist_ok=True)
config_path.write_text(json.dumps(payload), encoding="utf-8") config_path.write_text(json.dumps(payload), encoding="utf-8")
# Performance: Update cache immediately after write
self._bucket_config_cache[bucket_name] = (payload.copy(), time.time())
def _set_bucket_config_entry(self, bucket_name: str, key: str, value: Any | None) -> None: def _set_bucket_config_entry(self, bucket_name: str, key: str, value: Any | None) -> None:
config = self._read_bucket_config(bucket_name) config = self._read_bucket_config(bucket_name)
@@ -1570,68 +1507,33 @@ class ObjectStorage:
except OSError: except OSError:
continue continue
def _check_bucket_contents(self, bucket_path: Path) -> tuple[bool, bool, bool]: def _has_visible_objects(self, bucket_path: Path) -> bool:
"""Check bucket for objects, versions, and multipart uploads in a single pass.
Performance optimization: Combines three separate rglob traversals into one.
Returns (has_visible_objects, has_archived_versions, has_active_multipart_uploads).
Uses early exit when all three are found.
"""
has_objects = False
has_versions = False
has_multipart = False
bucket_name = bucket_path.name
# Check visible objects in bucket
for path in bucket_path.rglob("*"): for path in bucket_path.rglob("*"):
if has_objects:
break
if not path.is_file(): if not path.is_file():
continue continue
rel = path.relative_to(bucket_path) rel = path.relative_to(bucket_path)
if rel.parts and rel.parts[0] in self.INTERNAL_FOLDERS: if rel.parts and rel.parts[0] in self.INTERNAL_FOLDERS:
continue continue
has_objects = True return True
return False
# Check archived versions (only if needed)
for version_root in (
self._bucket_versions_root(bucket_name),
self._legacy_versions_root(bucket_name),
):
if has_versions:
break
if version_root.exists():
for path in version_root.rglob("*"):
if path.is_file():
has_versions = True
break
# Check multipart uploads (only if needed)
for uploads_root in (
self._multipart_bucket_root(bucket_name),
self._legacy_multipart_bucket_root(bucket_name),
):
if has_multipart:
break
if uploads_root.exists():
for path in uploads_root.rglob("*"):
if path.is_file():
has_multipart = True
break
return has_objects, has_versions, has_multipart
def _has_visible_objects(self, bucket_path: Path) -> bool:
has_objects, _, _ = self._check_bucket_contents(bucket_path)
return has_objects
def _has_archived_versions(self, bucket_path: Path) -> bool: def _has_archived_versions(self, bucket_path: Path) -> bool:
_, has_versions, _ = self._check_bucket_contents(bucket_path) for version_root in (
return has_versions self._bucket_versions_root(bucket_path.name),
self._legacy_versions_root(bucket_path.name),
):
if version_root.exists() and any(path.is_file() for path in version_root.rglob("*")):
return True
return False
def _has_active_multipart_uploads(self, bucket_path: Path) -> bool: def _has_active_multipart_uploads(self, bucket_path: Path) -> bool:
_, _, has_multipart = self._check_bucket_contents(bucket_path) for uploads_root in (
return has_multipart self._multipart_bucket_root(bucket_path.name),
self._legacy_multipart_bucket_root(bucket_path.name),
):
if uploads_root.exists() and any(path.is_file() for path in uploads_root.rglob("*")):
return True
return False
def _remove_tree(self, path: Path) -> None: def _remove_tree(self, path: Path) -> None:
if not path.exists(): if not path.exists():

View File

@@ -1,7 +1,7 @@
"""Central location for the application version string.""" """Central location for the application version string."""
from __future__ import annotations from __future__ import annotations
APP_VERSION = "0.2.0" APP_VERSION = "0.1.9"
def get_version() -> str: def get_version() -> str:

View File

@@ -362,68 +362,6 @@ code {
   color: #2563eb;
 }

-.docs-sidebar-mobile {
-  border-radius: 0.75rem;
-  border: 1px solid var(--myfsio-card-border);
-}
-
-.docs-sidebar-mobile .docs-toc {
-  display: flex;
-  flex-wrap: wrap;
-  gap: 0.5rem 1rem;
-  padding-top: 0.5rem;
-}
-
-.docs-sidebar-mobile .docs-toc li {
-  flex: 1 0 45%;
-}
-
-.min-width-0 {
-  min-width: 0;
-}
-
-/* Ensure pre blocks don't overflow on mobile */
-.alert pre {
-  max-width: 100%;
-  overflow-x: auto;
-  -webkit-overflow-scrolling: touch;
-}
-
-/* IAM User Cards */
-.iam-user-card {
-  border: 1px solid var(--myfsio-card-border);
-  border-radius: 0.75rem;
-  transition: box-shadow 0.2s ease, transform 0.2s ease;
-}
-
-.iam-user-card:hover {
-  box-shadow: 0 4px 12px rgba(0, 0, 0, 0.1);
-}
-
-[data-theme='dark'] .iam-user-card:hover {
-  box-shadow: 0 4px 12px rgba(0, 0, 0, 0.3);
-}
-
-.user-avatar-lg {
-  width: 48px;
-  height: 48px;
-  border-radius: 12px;
-}
-
-.btn-icon {
-  padding: 0.25rem;
-  line-height: 1;
-  border: none;
-  background: transparent;
-  color: var(--myfsio-muted);
-  border-radius: 0.375rem;
-}
-
-.btn-icon:hover {
-  background: var(--myfsio-hover-bg);
-  color: var(--myfsio-text);
-}
-
 .badge {
   font-weight: 500;
   padding: 0.35em 0.65em;

BIN  static/images/MyFISO.ico (new file, binary not shown, 200 KiB)
BIN  static/images/MyFISO.png (new file, binary not shown, 628 KiB)
BIN  (binary file removed, not shown, previously 200 KiB)
BIN  (binary file removed, not shown, previously 872 KiB)

@@ -5,8 +5,8 @@
<meta name="viewport" content="width=device-width, initial-scale=1" /> <meta name="viewport" content="width=device-width, initial-scale=1" />
{% if principal %}<meta name="csrf-token" content="{{ csrf_token() }}" />{% endif %} {% if principal %}<meta name="csrf-token" content="{{ csrf_token() }}" />{% endif %}
<title>MyFSIO Console</title> <title>MyFSIO Console</title>
<link rel="icon" type="image/png" href="{{ url_for('static', filename='images/MyFSIO.png') }}" /> <link rel="icon" type="image/png" href="{{ url_for('static', filename='images/MyFISO.png') }}" />
<link rel="icon" type="image/x-icon" href="{{ url_for('static', filename='images/MyFSIO.ico') }}" /> <link rel="icon" type="image/x-icon" href="{{ url_for('static', filename='images/MyFISO.ico') }}" />
<link <link
href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.2/dist/css/bootstrap.min.css" href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.2/dist/css/bootstrap.min.css"
rel="stylesheet" rel="stylesheet"
@@ -33,7 +33,7 @@
<div class="container-fluid"> <div class="container-fluid">
<a class="navbar-brand fw-semibold" href="{{ url_for('ui.buckets_overview') }}"> <a class="navbar-brand fw-semibold" href="{{ url_for('ui.buckets_overview') }}">
<img <img
src="{{ url_for('static', filename='images/MyFSIO.png') }}" src="{{ url_for('static', filename='images/MyFISO.png') }}"
alt="MyFSIO logo" alt="MyFSIO logo"
class="myfsio-logo" class="myfsio-logo"
width="32" width="32"

@@ -13,7 +13,8 @@
<div class="d-flex align-items-center gap-3"> <div class="d-flex align-items-center gap-3">
<div class="bucket-icon" style="width: 48px; height: 48px; border-radius: 12px;"> <div class="bucket-icon" style="width: 48px; height: 48px; border-radius: 12px;">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" fill="currentColor" viewBox="0 0 16 16"> <svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" fill="currentColor" viewBox="0 0 16 16">
<path d="M2.522 5H2a.5.5 0 0 0-.494.574l1.372 9.149A1.5 1.5 0 0 0 4.36 16h7.278a1.5 1.5 0 0 0 1.483-1.277l1.373-9.149A.5.5 0 0 0 14 5h-.522A5.5 5.5 0 0 0 2.522 5zm1.005 0a4.5 4.5 0 0 1 8.945 0H3.527z"/> <path d="M4.5 5a.5.5 0 1 0 0-1 .5.5 0 0 0 0 1zM3 4.5a.5.5 0 1 1-1 0 .5.5 0 0 1 1 0z"/>
<path d="M0 4a2 2 0 0 1 2-2h12a2 2 0 0 1 2 2v1a2 2 0 0 1-2 2H8.5v3a1.5 1.5 0 0 1 1.5 1.5H11a.5.5 0 0 1 0 1h-1v1h1a.5.5 0 0 1 0 1h-1v1a.5.5 0 0 1-1 0v-1H6v1a.5.5 0 0 1-1 0v-1H4a.5.5 0 0 1 0-1h1v-1H4a.5.5 0 0 1 0-1h1.5A1.5 1.5 0 0 1 7 10.5V7H2a2 2 0 0 1-2-2V4zm1 0v1a1 1 0 0 0 1 1h12a1 1 0 0 0 1-1V4a1 1 0 0 0-1-1H2a1 1 0 0 0-1 1zm5 7.5v1h3v-1a.5.5 0 0 0-.5-.5h-2a.5.5 0 0 0-.5.5z"/>
</svg> </svg>
</div> </div>
<div> <div>
@@ -968,7 +969,8 @@
{% endif %} {% endif %}
</div> </div>
</div> </div>
<!-- Warning alert for unreachable endpoint (shown by JS if endpoint is down) -->
<div id="replication-endpoint-warning" class="alert alert-danger d-none mb-4" role="alert"> <div id="replication-endpoint-warning" class="alert alert-danger d-none mb-4" role="alert">
<div class="d-flex align-items-start"> <div class="d-flex align-items-start">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" fill="currentColor" class="flex-shrink-0 me-2" viewBox="0 0 16 16"> <svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" fill="currentColor" class="flex-shrink-0 me-2" viewBox="0 0 16 16">
@@ -1782,77 +1784,6 @@
 {% block extra_scripts %}
 <script>
-  function setupJsonAutoIndent(textarea) {
-    if (!textarea) return;
-    textarea.addEventListener('keydown', function(e) {
-      if (e.key === 'Enter') {
-        e.preventDefault();
-        const start = this.selectionStart;
-        const end = this.selectionEnd;
-        const value = this.value;
-        const lineStart = value.lastIndexOf('\n', start - 1) + 1;
-        const currentLine = value.substring(lineStart, start);
-        const indentMatch = currentLine.match(/^(\s*)/);
-        let indent = indentMatch ? indentMatch[1] : '';
-        const trimmedLine = currentLine.trim();
-        const lastChar = trimmedLine.slice(-1);
-        let newIndent = indent;
-        let insertAfter = '';
-        if (lastChar === '{' || lastChar === '[') {
-          newIndent = indent + '  ';
-          const charAfterCursor = value.substring(start, start + 1).trim();
-          if ((lastChar === '{' && charAfterCursor === '}') ||
-              (lastChar === '[' && charAfterCursor === ']')) {
-            insertAfter = '\n' + indent;
-          }
-        } else if (lastChar === ',' || lastChar === ':') {
-          newIndent = indent;
-        }
-        const insertion = '\n' + newIndent + insertAfter;
-        const newValue = value.substring(0, start) + insertion + value.substring(end);
-        this.value = newValue;
-        const newCursorPos = start + 1 + newIndent.length;
-        this.selectionStart = this.selectionEnd = newCursorPos;
-        this.dispatchEvent(new Event('input', { bubbles: true }));
-      }
-      if (e.key === 'Tab') {
-        e.preventDefault();
-        const start = this.selectionStart;
-        const end = this.selectionEnd;
-        if (e.shiftKey) {
-          const lineStart = this.value.lastIndexOf('\n', start - 1) + 1;
-          const lineContent = this.value.substring(lineStart, start);
-          if (lineContent.startsWith('  ')) {
-            this.value = this.value.substring(0, lineStart) +
-              this.value.substring(lineStart + 2);
-            this.selectionStart = this.selectionEnd = Math.max(lineStart, start - 2);
-          }
-        } else {
-          this.value = this.value.substring(0, start) + '  ' + this.value.substring(end);
-          this.selectionStart = this.selectionEnd = start + 2;
-        }
-        this.dispatchEvent(new Event('input', { bubbles: true }));
-      }
-    });
-  }
-  setupJsonAutoIndent(document.getElementById('policyDocument'));

 const formatBytes = (bytes) => {
   if (!Number.isFinite(bytes)) return `${bytes} bytes`;
   const units = ['bytes', 'KB', 'MB', 'GB', 'TB'];
@@ -1955,21 +1886,24 @@
 let isLoadingObjects = false;
 let hasMoreObjects = false;
 let currentFilterTerm = '';
-let pageSize = 5000;
+let pageSize = 5000; // Load large batches for virtual scrolling
-let currentPrefix = '';
+let currentPrefix = ''; // Current folder prefix for navigation
-let allObjects = [];
+let allObjects = []; // All loaded object metadata (lightweight)
-let urlTemplates = null;
+let urlTemplates = null; // URL templates from API for constructing object URLs

+// Helper to build URL from template by replacing KEY_PLACEHOLDER with encoded key
 const buildUrlFromTemplate = (template, key) => {
   if (!template) return '';
   return template.replace('KEY_PLACEHOLDER', encodeURIComponent(key).replace(/%2F/g, '/'));
 };

+// Virtual scrolling state
-const ROW_HEIGHT = 53;
+const ROW_HEIGHT = 53; // Height of each table row in pixels
-const BUFFER_ROWS = 10;
+const BUFFER_ROWS = 10; // Extra rows to render above/below viewport
-let visibleItems = [];
+let visibleItems = []; // Current items to display (filtered by folder/search)
-let renderedRange = { start: 0, end: 0 };
+let renderedRange = { start: 0, end: 0 }; // Currently rendered row indices

+// Create a row element from object data (for virtual scrolling)
 const createObjectRow = (obj, displayKey = null) => {
   const tr = document.createElement('tr');
   tr.dataset.objectRow = '';
@@ -2008,7 +1942,7 @@
   title="Download"
   aria-label="Download"
 >
-  <svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="#0d6efd" class="bi bi-download" viewBox="0 0 16 16" aria-hidden="true">
+  <svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="bi bi-download" viewBox="0 0 16 16" aria-hidden="true">
     <path d="M.5 9.9a.5.5 0 0 1 .5.5v2.5a1 1 0 0 0 1 1h12a1 1 0 0 0 1-1v-2.5a.5.5 0 0 1 1 0v2.5a2 2 0 0 1-2 2H2a2 2 0 0 1-2-2v-2.5a.5.5 0 0 1 .5-.5z" />
     <path d="M7.646 11.854a.5.5 0 0 0 .708 0l3-3a.5.5 0 0 0-.708-.708L8.5 10.293V1.5a.5.5 0 0 0-1 0v8.793L5.354 8.146a.5.5 0 1 0-.708.708l3 3z" />
   </svg>
@@ -2020,7 +1954,7 @@
   title="Delete"
   aria-label="Delete"
 >
-  <svg xmlns="http://www.w3.org/2000/svg" width="13" height="13" fill="#dc3545" class="bi bi-trash" viewBox="0 0 16 16" aria-hidden="true">
+  <svg xmlns="http://www.w3.org/2000/svg" width="13" height="13" fill="currentColor" class="bi bi-trash" viewBox="0 0 16 16" aria-hidden="true">
     <path d="M5.5 5.5a.5.5 0 0 1 .5.5v6a.5.5 0 0 1-1 0v-6a.5.5 0 0 1 .5-.5zm2.5 0a.5.5 0 0 1 .5.5v6a.5.5 0 0 1-1 0v-6a.5.5 0 0 1 .5-.5zm3 .5v6a.5.5 0 0 1-1 0v-6a.5.5 0 0 1 1 0z" />
     <path fill-rule="evenodd" d="M14.5 3a1 1 0 0 1-1 1H13v9a2 2 0 0 1-2 2H5a2 2 0 0 1-2-2V4h-.5a1 1 0 0 1-1-1V2a1 1 0 0 1 1-1H6a1 1 0 0 1 1-1h2a1 1 0 0 1 1 1h3.5a1 1 0 0 1 1 1v1zM4.118 4 4 4.059V13a1 1 0 0 0 1 1h6a1 1 0 0 0 1-1V4.059L11.882 4H4.118zM2.5 3V2h11v1h-11z" />
   </svg>
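The buildUrlFromTemplate helper percent-encodes the object key but restores / so folder-style keys remain readable path segments. A rough Python equivalent of that encoding rule, for illustration only (the template string is a stand-in):

from urllib.parse import quote

def build_url_from_template(template: str, key: str) -> str:
    # Encode the key but keep '/' literal, mirroring the JS
    # encodeURIComponent(...).replace(/%2F/g, '/') trick above.
    return template.replace("KEY_PLACEHOLDER", quote(key, safe="/"))

# build_url_from_template("/api/mybucket/KEY_PLACEHOLDER", "docs/a b.txt")
# -> "/api/mybucket/docs/a%20b.txt"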
@@ -2092,12 +2026,16 @@
   }
 };

+// ============== VIRTUAL SCROLLING SYSTEM ==============

+// Spacer elements for virtual scroll height
 let topSpacer = null;
 let bottomSpacer = null;

 const initVirtualScrollElements = () => {
   if (!objectsTableBody) return;
+  // Create spacer rows if they don't exist
   if (!topSpacer) {
     topSpacer = document.createElement('tr');
     topSpacer.id = 'virtual-top-spacer';
@@ -2109,33 +2047,38 @@
     bottomSpacer.innerHTML = '<td colspan="4" style="padding: 0; border: none;"></td>';
   }
 };

+// Compute which items should be visible based on current view
 const computeVisibleItems = () => {
   const items = [];
   const folders = new Set();

   allObjects.forEach(obj => {
     if (!obj.key.startsWith(currentPrefix)) return;
     const remainder = obj.key.slice(currentPrefix.length);
     const slashIndex = remainder.indexOf('/');
     if (slashIndex === -1) {
+      // File in current folder - filter on the displayed filename (remainder)
       if (!currentFilterTerm || remainder.toLowerCase().includes(currentFilterTerm)) {
         items.push({ type: 'file', data: obj, displayKey: remainder });
       }
     } else {
+      // Folder
       const folderName = remainder.slice(0, slashIndex);
       const folderPath = currentPrefix + folderName + '/';
       if (!folders.has(folderPath)) {
         folders.add(folderPath);
+        // Filter on the displayed folder name only
         if (!currentFilterTerm || folderName.toLowerCase().includes(currentFilterTerm)) {
           items.push({ type: 'folder', path: folderPath, displayKey: folderName });
         }
       }
     }
   });

+  // Sort: folders first, then files
   items.sort((a, b) => {
     if (a.type === 'folder' && b.type === 'file') return -1;
     if (a.type === 'file' && b.type === 'folder') return 1;
@@ -2146,30 +2089,36 @@
   return items;
 };

+// Render only the visible rows based on scroll position
 const renderVirtualRows = () => {
   if (!objectsTableBody || !scrollContainer) return;
   const containerHeight = scrollContainer.clientHeight;
   const scrollTop = scrollContainer.scrollTop;

+  // Calculate visible range
   const startIndex = Math.max(0, Math.floor(scrollTop / ROW_HEIGHT) - BUFFER_ROWS);
   const endIndex = Math.min(visibleItems.length, Math.ceil((scrollTop + containerHeight) / ROW_HEIGHT) + BUFFER_ROWS);

+  // Skip if range hasn't changed significantly
   if (startIndex === renderedRange.start && endIndex === renderedRange.end) return;
   renderedRange = { start: startIndex, end: endIndex };

+  // Clear and rebuild
   objectsTableBody.innerHTML = '';

+  // Add top spacer
   initVirtualScrollElements();
   topSpacer.querySelector('td').style.height = `${startIndex * ROW_HEIGHT}px`;
   objectsTableBody.appendChild(topSpacer);

+  // Render visible rows
   for (let i = startIndex; i < endIndex; i++) {
     const item = visibleItems[i];
     if (!item) continue;
     let row;
     if (item.type === 'folder') {
       row = createFolderRow(item.path, item.displayKey);
@@ -2179,28 +2128,33 @@
     row.dataset.virtualIndex = i;
     objectsTableBody.appendChild(row);
   }

+  // Add bottom spacer
   const remainingRows = visibleItems.length - endIndex;
   bottomSpacer.querySelector('td').style.height = `${remainingRows * ROW_HEIGHT}px`;
   objectsTableBody.appendChild(bottomSpacer);

+  // Re-attach handlers to new rows
   attachRowHandlers();
 };

+// Debounced scroll handler for virtual scrolling
 let scrollTimeout = null;
 const handleVirtualScroll = () => {
   if (scrollTimeout) cancelAnimationFrame(scrollTimeout);
   scrollTimeout = requestAnimationFrame(renderVirtualRows);
 };

+// Refresh the virtual list (after data changes or navigation)
 const refreshVirtualList = () => {
   visibleItems = computeVisibleItems();
-  renderedRange = { start: -1, end: -1 };
+  renderedRange = { start: -1, end: -1 }; // Force re-render
   if (visibleItems.length === 0) {
     if (allObjects.length === 0 && !hasMoreObjects) {
       showEmptyState();
     } else {
+      // Empty folder
       objectsTableBody.innerHTML = `
         <tr>
           <td colspan="4" class="py-5">
@@ -2224,6 +2178,7 @@
   updateFolderViewStatus();
 };

+// Update status bar
 const updateFolderViewStatus = () => {
   const folderViewStatusEl = document.getElementById('folder-view-status');
   if (!folderViewStatusEl) return;
@@ -2238,6 +2193,8 @@
   }
 };

+// ============== DATA LOADING ==============

 const loadObjects = async (append = false) => {
   if (isLoadingObjects) return;
   isLoadingObjects = true;
@@ -2249,6 +2206,7 @@
     allObjects = [];
   }

+  // Show loading spinner when loading more
   if (append && loadMoreSpinner) {
     loadMoreSpinner.classList.remove('d-none');
   }
@@ -2317,6 +2275,7 @@
     updateLoadMoreButton();
   }

+  // Refresh virtual scroll view
   refreshVirtualList();
   renderBreadcrumb(currentPrefix);
@@ -2336,6 +2295,7 @@
 };

 const attachRowHandlers = () => {
+  // Attach handlers to object rows
   const objectRows = document.querySelectorAll('[data-object-row]');
   objectRows.forEach(row => {
     if (row.dataset.handlersAttached) return;
@@ -2361,12 +2321,14 @@
     toggleRowSelection(row, selectCheckbox.checked);
   });

+  // Restore selection state
   if (selectedRows.has(row.dataset.key)) {
     selectCheckbox.checked = true;
     row.classList.add('table-active');
   }
 });

+// Attach handlers to folder rows
 const folderRows = document.querySelectorAll('.folder-row');
 folderRows.forEach(row => {
   if (row.dataset.handlersAttached) return;
@@ -2377,6 +2339,7 @@
   const checkbox = row.querySelector('[data-folder-select]');
   checkbox?.addEventListener('change', (e) => {
     e.stopPropagation();
+    // Select all objects in this folder
     const folderObjects = allObjects.filter(obj => obj.key.startsWith(folderPath));
     folderObjects.forEach(obj => {
       if (checkbox.checked) {
@@ -2403,26 +2366,31 @@
   updateBulkDeleteState();
 };

+// Scroll container reference (needed for virtual scrolling)
 const scrollSentinel = document.getElementById('scroll-sentinel');
 const scrollContainer = document.querySelector('.objects-table-container');
 const loadMoreBtn = document.getElementById('load-more-btn');

+// Virtual scroll: listen to scroll events
 if (scrollContainer) {
   scrollContainer.addEventListener('scroll', handleVirtualScroll, { passive: true });
 }

+// Load More button click handler (fallback)
 loadMoreBtn?.addEventListener('click', () => {
   if (hasMoreObjects && !isLoadingObjects) {
     loadObjects(true);
   }
 });

+// Show/hide Load More button based on hasMoreObjects
 function updateLoadMoreButton() {
   if (loadMoreBtn) {
     loadMoreBtn.classList.toggle('d-none', !hasMoreObjects);
   }
 }

+// Auto-load more when near bottom (for loading all data)
 if (scrollSentinel && scrollContainer) {
   const containerObserver = new IntersectionObserver((entries) => {
     entries.forEach(entry => {
@@ -2432,7 +2400,7 @@
     });
   }, {
     root: scrollContainer,
-    rootMargin: '500px',
+    rootMargin: '500px', // Load more earlier for smoother experience
     threshold: 0
   });
   containerObserver.observe(scrollSentinel);
@@ -2451,6 +2419,7 @@
   viewportObserver.observe(scrollSentinel);
 }

+// Page size selector (now controls batch size)
 const pageSizeSelect = document.getElementById('page-size-select');
 pageSizeSelect?.addEventListener('change', (e) => {
   pageSize = parseInt(e.target.value, 10);
@@ -2616,11 +2585,14 @@
   return tr;
 };

+// Instant client-side folder navigation (no server round-trip!)
 const navigateToFolder = (prefix) => {
   currentPrefix = prefix;

+  // Scroll to top when navigating
   if (scrollContainer) scrollContainer.scrollTop = 0;

+  // Instant re-render from already-loaded data
   refreshVirtualList();
   renderBreadcrumb(prefix);
@@ -2654,9 +2626,9 @@
   if (keyCell && currentPrefix) {
     const displayName = obj.key.slice(currentPrefix.length);
     keyCell.textContent = displayName;
-    keyCell.closest('.object-key').title = obj.key;
+    keyCell.closest('.object-key').title = obj.key; // Full path in tooltip
   } else if (keyCell) {
-    keyCell.textContent = obj.key;
+    keyCell.textContent = obj.key; // Reset to full key at root
   }
 });
@@ -2831,6 +2803,7 @@
   bulkDeleteConfirm.disabled = selectedCount === 0 || bulkDeleting;
 }
 if (selectAllCheckbox) {
+  // With virtual scrolling, count files in current folder from visibleItems
   const filesInView = visibleItems.filter(item => item.type === 'file');
   const total = filesInView.length;
   const visibleSelectedCount = filesInView.filter(item => selectedRows.has(item.data.key)).length;
@@ -3467,6 +3440,9 @@
 document.getElementById('object-search')?.addEventListener('input', (event) => {
   currentFilterTerm = event.target.value.toLowerCase();
   updateFilterWarning();
+  // Use the virtual scrolling system for filtering - it properly handles
+  // both folder view and flat view, and works with large object counts
   refreshVirtualList();
 });
@@ -3826,8 +3802,10 @@
 selectAllCheckbox?.addEventListener('change', (event) => {
   const shouldSelect = Boolean(event.target?.checked);
+  // Get all file items in the current view (works with virtual scrolling)
   const filesInView = visibleItems.filter(item => item.type === 'file');
+  // Update selectedRows directly using object keys (not DOM elements)
   filesInView.forEach(item => {
     if (shouldSelect) {
       selectedRows.set(item.data.key, item.data);
@@ -3836,10 +3814,12 @@
     }
   });

+  // Update folder checkboxes in DOM (folders are always rendered)
   document.querySelectorAll('[data-folder-select]').forEach(cb => {
     cb.checked = shouldSelect;
   });

+  // Update any currently rendered object checkboxes
   document.querySelectorAll('[data-object-row]').forEach((row) => {
     const checkbox = row.querySelector('[data-object-select]');
     if (checkbox) {
@@ -3853,6 +3833,7 @@
 bulkDownloadButton?.addEventListener('click', async () => {
   if (!bulkDownloadEndpoint) return;
+  // Use selectedRows which tracks all selected objects (not just rendered ones)
   const selected = Array.from(selectedRows.keys());
   if (selected.length === 0) return;
@@ -4020,6 +4001,7 @@
   }
 });

+// Bucket name validation for replication setup
 const targetBucketInput = document.getElementById('target_bucket');
 const targetBucketFeedback = document.getElementById('target_bucket_feedback');
@@ -4054,6 +4036,7 @@
 targetBucketInput?.addEventListener('input', updateBucketNameValidation);
 targetBucketInput?.addEventListener('blur', updateBucketNameValidation);

+// Prevent form submission if bucket name is invalid
 const replicationForm = targetBucketInput?.closest('form');
 replicationForm?.addEventListener('submit', (e) => {
   const name = targetBucketInput.value.trim();
@@ -4066,6 +4049,7 @@
   }
 });

+// Policy JSON validation and formatting
 const formatPolicyBtn = document.getElementById('formatPolicyBtn');
 const policyValidationStatus = document.getElementById('policyValidationStatus');
 const policyValidBadge = document.getElementById('policyValidBadge');
@@ -4108,10 +4092,12 @@
     policyTextarea.value = JSON.stringify(parsed, null, 2);
     validatePolicyJson();
   } catch (err) {
+    // Show error in validation
     validatePolicyJson();
   }
 });

+// Initialize policy validation on page load
 if (policyTextarea && policyPreset?.value === 'custom') {
   validatePolicyJson();
 }
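Server-side, the same parse-validate-pretty-print cycle that the formatPolicyBtn handler performs takes a few lines of Python. A hedged sketch: the Version 2012-10-17 / Statement shape follows the AWS policy grammar the project mirrors, while format_policy itself is an illustrative name, not a project function:

import json

def format_policy(raw: str) -> str:
    """Parse, sanity-check, and re-indent a bucket policy document."""
    doc = json.loads(raw)  # raises json.JSONDecodeError on malformed JSON
    if doc.get("Version") != "2012-10-17":
        raise ValueError("unsupported policy Version")
    if not isinstance(doc.get("Statement"), list):
        raise ValueError("Statement must be a list")
    return json.dumps(doc, indent=2)

print(format_policy(
    '{"Version":"2012-10-17","Statement":[{"Effect":"Allow",'
    '"Principal":"*","Action":"s3:GetObject","Resource":"arn:aws:s3:::mybucket/*"}]}'
))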

@@ -46,7 +46,8 @@
<div class="d-flex align-items-center gap-3"> <div class="d-flex align-items-center gap-3">
<div class="bucket-icon"> <div class="bucket-icon">
<svg xmlns="http://www.w3.org/2000/svg" width="22" height="22" fill="currentColor" viewBox="0 0 16 16"> <svg xmlns="http://www.w3.org/2000/svg" width="22" height="22" fill="currentColor" viewBox="0 0 16 16">
<path d="M2.522 5H2a.5.5 0 0 0-.494.574l1.372 9.149A1.5 1.5 0 0 0 4.36 16h7.278a1.5 1.5 0 0 0 1.483-1.277l1.373-9.149A.5.5 0 0 0 14 5h-.522A5.5 5.5 0 0 0 2.522 5zm1.005 0a4.5 4.5 0 0 1 8.945 0H3.527z"/> <path d="M4.5 5a.5.5 0 1 0 0-1 .5.5 0 0 0 0 1zM3 4.5a.5.5 0 1 1-1 0 .5.5 0 0 1 1 0z"/>
<path d="M0 4a2 2 0 0 1 2-2h12a2 2 0 0 1 2 2v1a2 2 0 0 1-2 2H8.5v3a1.5 1.5 0 0 1 1.5 1.5H11a.5.5 0 0 1 0 1h-1v1h1a.5.5 0 0 1 0 1h-1v1a.5.5 0 0 1-1 0v-1H6v1a.5.5 0 0 1-1 0v-1H4a.5.5 0 0 1 0-1h1v-1H4a.5.5 0 0 1 0-1h1.5A1.5 1.5 0 0 1 7 10.5V7H2a2 2 0 0 1-2-2V4zm1 0v1a1 1 0 0 0 1 1h12a1 1 0 0 0 1-1V4a1 1 0 0 0-1-1H2a1 1 0 0 0-1 1zm5 7.5v1h3v-1a.5.5 0 0 0-.5-.5h-2a.5.5 0 0 0-.5.5z"/>
</svg> </svg>
</div> </div>
<div> <div>
@@ -133,7 +134,7 @@
const searchInput = document.getElementById('bucket-search'); const searchInput = document.getElementById('bucket-search');
const bucketItems = document.querySelectorAll('.bucket-item'); const bucketItems = document.querySelectorAll('.bucket-item');
const noBucketsMsg = document.querySelector('.text-center.py-5'); const noBucketsMsg = document.querySelector('.text-center.py-5'); // The "No buckets found" empty state
if (searchInput) { if (searchInput) {
searchInput.addEventListener('input', (e) => { searchInput.addEventListener('input', (e) => {

@@ -8,8 +8,8 @@
<p class="text-uppercase text-muted small mb-1">Replication</p> <p class="text-uppercase text-muted small mb-1">Replication</p>
<h1 class="h3 mb-1 d-flex align-items-center gap-2"> <h1 class="h3 mb-1 d-flex align-items-center gap-2">
<svg xmlns="http://www.w3.org/2000/svg" width="28" height="28" fill="currentColor" class="text-primary" viewBox="0 0 16 16"> <svg xmlns="http://www.w3.org/2000/svg" width="28" height="28" fill="currentColor" class="text-primary" viewBox="0 0 16 16">
<path d="M4.406 3.342A5.53 5.53 0 0 1 8 2c2.69 0 4.923 2 5.166 4.579C14.758 6.804 16 8.137 16 9.773 16 11.569 14.502 13 12.687 13H3.781C1.708 13 0 11.366 0 9.318c0-1.763 1.266-3.223 2.942-3.593.143-.863.698-1.723 1.464-2.383z"/> <path d="M4.5 5a.5.5 0 1 0 0-1 .5.5 0 0 0 0 1zM3 4.5a.5.5 0 1 1-1 0 .5.5 0 0 1 1 0z"/>
<path d="M10.232 8.768l.546-.353a.25.25 0 0 0 0-.418l-.546-.354a.25.25 0 0 1-.116-.21V6.25a.25.25 0 0 0-.25-.25h-.5a.25.25 0 0 0-.25.25v1.183a.25.25 0 0 1-.116.21l-.546.354a.25.25 0 0 0 0 .418l.546.353a.25.25 0 0 1 .116.21v1.183a.25.25 0 0 0 .25.25h.5a.25.25 0 0 0 .25-.25V8.978a.25.25 0 0 1 .116-.21z"/> <path d="M0 4a2 2 0 0 1 2-2h12a2 2 0 0 1 2 2v1a2 2 0 0 1-2 2H8.5v3a1.5 1.5 0 0 1 1.5 1.5H12a.5.5 0 0 1 0 1H4a.5.5 0 0 1 0-1h2A1.5 1.5 0 0 1 7.5 10V7H2a2 2 0 0 1-2-2V4zm1 0v1a1 1 0 0 0 1 1h12a1 1 0 0 0 1-1V4a1 1 0 0 0-1-1H2a1 1 0 0 0-1 1z"/>
</svg> </svg>
Remote Connections Remote Connections
</h1> </h1>
@@ -124,7 +124,8 @@
<div class="d-flex align-items-center gap-2"> <div class="d-flex align-items-center gap-2">
<div class="connection-icon"> <div class="connection-icon">
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" viewBox="0 0 16 16"> <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" viewBox="0 0 16 16">
<path d="M4.406 3.342A5.53 5.53 0 0 1 8 2c2.69 0 4.923 2 5.166 4.579C14.758 6.804 16 8.137 16 9.773 16 11.569 14.502 13 12.687 13H3.781C1.708 13 0 11.366 0 9.318c0-1.763 1.266-3.223 2.942-3.593.143-.863.698-1.723 1.464-2.383z"/> <path d="M4.5 5a.5.5 0 1 0 0-1 .5.5 0 0 0 0 1zM3 4.5a.5.5 0 1 1-1 0 .5.5 0 0 1 1 0z"/>
<path d="M0 4a2 2 0 0 1 2-2h12a2 2 0 0 1 2 2v1a2 2 0 0 1-2 2H8.5v3a1.5 1.5 0 0 1 1.5 1.5H12a.5.5 0 0 1 0 1H4a.5.5 0 0 1 0-1h2A1.5 1.5 0 0 1 7.5 10V7H2a2 2 0 0 1-2-2V4zm1 0v1a1 1 0 0 0 1 1h12a1 1 0 0 0 1-1V4a1 1 0 0 0-1-1H2a1 1 0 0 0-1 1z"/>
</svg> </svg>
</div> </div>
<span class="fw-medium">{{ conn.name }}</span> <span class="fw-medium">{{ conn.name }}</span>
@@ -173,7 +174,8 @@
<div class="empty-state text-center py-5"> <div class="empty-state text-center py-5">
<div class="empty-state-icon mx-auto mb-3"> <div class="empty-state-icon mx-auto mb-3">
<svg xmlns="http://www.w3.org/2000/svg" width="48" height="48" fill="currentColor" viewBox="0 0 16 16"> <svg xmlns="http://www.w3.org/2000/svg" width="48" height="48" fill="currentColor" viewBox="0 0 16 16">
<path d="M4.406 3.342A5.53 5.53 0 0 1 8 2c2.69 0 4.923 2 5.166 4.579C14.758 6.804 16 8.137 16 9.773 16 11.569 14.502 13 12.687 13H3.781C1.708 13 0 11.366 0 9.318c0-1.763 1.266-3.223 2.942-3.593.143-.863.698-1.723 1.464-2.383z"/> <path d="M4.5 5a.5.5 0 1 0 0-1 .5.5 0 0 0 0 1zM3 4.5a.5.5 0 1 1-1 0 .5.5 0 0 1 1 0z"/>
<path d="M0 4a2 2 0 0 1 2-2h12a2 2 0 0 1 2 2v1a2 2 0 0 1-2 2H8.5v3a1.5 1.5 0 0 1 1.5 1.5H12a.5.5 0 0 1 0 1H4a.5.5 0 0 1 0-1h2A1.5 1.5 0 0 1 7.5 10V7H2a2 2 0 0 1-2-2V4zm1 0v1a1 1 0 0 0 1 1h12a1 1 0 0 0 1-1V4a1 1 0 0 0-1-1H2a1 1 0 0 0-1 1z"/>
</svg> </svg>
</div> </div>
<h5 class="fw-semibold mb-2">No connections yet</h5> <h5 class="fw-semibold mb-2">No connections yet</h5>
@@ -306,7 +308,8 @@
const data = Object.fromEntries(formData.entries()); const data = Object.fromEntries(formData.entries());
resultDiv.innerHTML = '<div class="text-info"><span class="spinner-border spinner-border-sm" role="status" aria-hidden="true"></span> Testing connection...</div>'; resultDiv.innerHTML = '<div class="text-info"><span class="spinner-border spinner-border-sm" role="status" aria-hidden="true"></span> Testing connection...</div>';
// Use AbortController to timeout client-side after 20 seconds
const controller = new AbortController(); const controller = new AbortController();
const timeoutId = setTimeout(() => controller.abort(), 20000); const timeoutId = setTimeout(() => controller.abort(), 20000);
@@ -393,6 +396,8 @@
form.action = "{{ url_for('ui.delete_connection', connection_id='CONN_ID') }}".replace('CONN_ID', id); form.action = "{{ url_for('ui.delete_connection', connection_id='CONN_ID') }}".replace('CONN_ID', id);
}); });
// Check connection health for each connection in the table
// Uses staggered requests to avoid overwhelming the server
async function checkConnectionHealth(connectionId, statusEl) { async function checkConnectionHealth(connectionId, statusEl) {
try { try {
const controller = new AbortController(); const controller = new AbortController();
@@ -429,11 +434,13 @@
} }
} }
// Stagger health checks to avoid all requests at once
const connectionRows = document.querySelectorAll('tr[data-connection-id]'); const connectionRows = document.querySelectorAll('tr[data-connection-id]');
connectionRows.forEach((row, index) => { connectionRows.forEach((row, index) => {
const connectionId = row.getAttribute('data-connection-id'); const connectionId = row.getAttribute('data-connection-id');
const statusEl = row.querySelector('.connection-status'); const statusEl = row.querySelector('.connection-status');
if (statusEl) { if (statusEl) {
// Stagger requests by 200ms each
setTimeout(() => checkConnectionHealth(connectionId, statusEl), index * 200); setTimeout(() => checkConnectionHealth(connectionId, statusEl), index * 200);
} }
}); });
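The stagger-plus-timeout pattern also translates directly to server-side probing of replication targets. A sketch using requests; the /healthz path matches the project's documented health endpoint, while everything else here is illustrative:

import time
import requests

def check_connections(endpoints, stagger=0.2, timeout=20):
    """Probe each endpoint's health URL, spacing requests 200 ms apart."""
    results = {}
    for i, url in enumerate(endpoints):
        if i:
            time.sleep(stagger)  # stagger to avoid a request burst
        try:
            resp = requests.get(f"{url.rstrip('/')}/healthz", timeout=timeout)
            results[url] = resp.ok
        except requests.RequestException:
            results[url] = False
    return results

# check_connections(["http://127.0.0.1:5000"]) -> {"http://127.0.0.1:5000": True}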

@@ -14,36 +14,6 @@
   </div>
 </section>

 <div class="row g-4">
-  <div class="col-12 d-xl-none">
-    <div class="card shadow-sm docs-sidebar-mobile mb-0">
-      <div class="card-body py-3">
-        <div class="d-flex align-items-center justify-content-between mb-2">
-          <h3 class="h6 text-uppercase text-muted mb-0">On this page</h3>
-          <button class="btn btn-sm btn-outline-secondary" type="button" data-bs-toggle="collapse" data-bs-target="#mobileDocsToc" aria-expanded="false" aria-controls="mobileDocsToc">
-            <svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" viewBox="0 0 16 16">
-              <path fill-rule="evenodd" d="M1.646 4.646a.5.5 0 0 1 .708 0L8 10.293l5.646-5.647a.5.5 0 0 1 .708.708l-6 6a.5.5 0 0 1-.708 0l-6-6a.5.5 0 0 1 0-.708z"/>
-            </svg>
-          </button>
-        </div>
-        <div class="collapse" id="mobileDocsToc">
-          <ul class="list-unstyled docs-toc mb-0 small">
-            <li><a href="#setup">Set up &amp; run</a></li>
-            <li><a href="#background">Running in background</a></li>
-            <li><a href="#auth">Authentication &amp; IAM</a></li>
-            <li><a href="#console">Console tour</a></li>
-            <li><a href="#automation">Automation / CLI</a></li>
-            <li><a href="#api">REST endpoints</a></li>
-            <li><a href="#examples">API Examples</a></li>
-            <li><a href="#replication">Site Replication</a></li>
-            <li><a href="#versioning">Object Versioning</a></li>
-            <li><a href="#quotas">Bucket Quotas</a></li>
-            <li><a href="#encryption">Encryption</a></li>
-            <li><a href="#troubleshooting">Troubleshooting</a></li>
-          </ul>
-        </div>
-      </div>
-    </div>
-  </div>
 <div class="col-xl-8">
   <article id="setup" class="card shadow-sm docs-section">
     <div class="card-body">
@@ -556,46 +526,15 @@ curl -X POST "{{ api_base }}/presign/mybucket/upload.bin" \
 </li>
 </ol>
-<div class="alert alert-light border mb-3 overflow-hidden">
+<div class="alert alert-light border mb-0">
-  <div class="d-flex flex-column flex-sm-row gap-2 mb-2">
+  <div class="d-flex gap-2">
-    <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-terminal text-muted mt-1 flex-shrink-0 d-none d-sm-block" viewBox="0 0 16 16">
+    <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-terminal text-muted mt-1" viewBox="0 0 16 16">
       <path d="M6 9a.5.5 0 0 1 .5-.5h3a.5.5 0 0 1 0 1h-3A.5.5 0 0 1 6 9zM3.854 4.146a.5.5 0 1 0-.708.708L4.793 6.5 3.146 8.146a.5.5 0 1 0 .708.708l2-2a.5.5 0 0 0 0-.708l-2-2z"/>
       <path d="M2 1a2 2 0 0 0-2 2v10a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V3a2 2 0 0 0-2-2H2zm12 1a1 1 0 0 1 1 1v10a1 1 0 0 1-1 1H2a1 1 0 0 1-1-1V3a1 1 0 0 1 1-1h12z"/>
     </svg>
-    <div class="flex-grow-1 min-width-0">
+    <div>
-      <strong>Headless Target Setup</strong>
+      <strong>Headless Target Setup?</strong>
-      <p class="small text-muted mb-2">If your target server has no UI, create a <code>setup_target.py</code> script to bootstrap credentials:</p>
+      <p class="small text-muted mb-0">If your target server has no UI, use the Python API directly to bootstrap credentials. See <code>docs.md</code> in the project root for the <code>setup_target.py</code> script.</p>
-      <pre class="mb-0 overflow-auto" style="max-width: 100%;"><code class="language-python"># setup_target.py
-from pathlib import Path
-from app.iam import IamService
-from app.storage import ObjectStorage
-
-# Initialize services (paths match default config)
-data_dir = Path("data")
-iam = IamService(data_dir / ".myfsio.sys" / "config" / "iam.json")
-storage = ObjectStorage(data_dir)
-
-# 1. Create the bucket
-bucket_name = "backup-bucket"
-try:
-    storage.create_bucket(bucket_name)
-    print(f"Bucket '{bucket_name}' created.")
-except Exception as e:
-    print(f"Bucket creation skipped: {e}")
-
-# 2. Create the user
-try:
-    creds = iam.create_user(
-        display_name="Replication User",
-        policies=[{"bucket": bucket_name, "actions": ["write", "read", "list"]}]
-    )
-    print("\n--- CREDENTIALS GENERATED ---")
-    print(f"Access Key: {creds['access_key']}")
-    print(f"Secret Key: {creds['secret_key']}")
-    print("-----------------------------")
-except Exception as e:
-    print(f"User creation failed: {e}")</code></pre>
-      <p class="small text-muted mt-2 mb-0">Save and run: <code>python setup_target.py</code></p>
     </div>
   </div>
 </div>
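Whichever way the target is bootstrapped, the resulting keys work with any SigV4 client. A minimal, unverified boto3 sketch against the local API; the endpoint, bucket, and credential values are placeholders:

import boto3
from botocore.config import Config

s3 = boto3.client(
    "s3",
    endpoint_url="http://127.0.0.1:5000",           # the MyFSIO API server
    aws_access_key_id="ACCESS_KEY_FROM_SETUP",      # placeholder
    aws_secret_access_key="SECRET_KEY_FROM_SETUP",  # placeholder
    region_name="us-east-1",
    config=Config(s3={"addressing_style": "path"}), # path-style suits localhost
)
s3.put_object(Bucket="backup-bucket", Key="hello.txt", Body=b"hi")
print(s3.list_objects_v2(Bucket="backup-bucket").get("KeyCount"))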

@@ -10,7 +10,6 @@
   </svg>
   IAM Configuration
 </h1>
-<p class="text-muted mb-0 mt-1">Create and manage users with fine-grained bucket permissions.</p>
 </div>
 <div class="d-flex gap-2">
 {% if not iam_locked %}
@@ -110,68 +109,35 @@
 {% else %}
 <div class="card-body px-4 pb-4">
 {% if users %}
-  <div class="row g-3">
-    {% for user in users %}
-    <div class="col-md-6 col-xl-4">
-      <div class="card h-100 iam-user-card">
-        <div class="card-body">
-          <div class="d-flex align-items-start justify-content-between mb-3">
+  <div class="table-responsive">
+    <table class="table table-hover align-middle mb-0">
+      <thead class="table-light">
+        <tr>
+          <th scope="col">User</th>
+          <th scope="col">Policies</th>
+          <th scope="col" class="text-end">Actions</th>
+        </tr>
+      </thead>
+      <tbody>
+        {% for user in users %}
+        <tr>
+          <td>
             <div class="d-flex align-items-center gap-3">
-              <div class="user-avatar user-avatar-lg">
-                <svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" fill="currentColor" viewBox="0 0 16 16">
+              <div class="user-avatar">
+                <svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" fill="currentColor" viewBox="0 0 16 16">
                   <path d="M8 8a3 3 0 1 0 0-6 3 3 0 0 0 0 6zm2-3a2 2 0 1 1-4 0 2 2 0 0 1 4 0zm4 8c0 1-1 1-1 1H3s-1 0-1-1 1-4 6-4 6 3 6 4zm-1-.004c-.001-.246-.154-.986-.832-1.664C11.516 10.68 10.289 10 8 10c-2.29 0-3.516.68-4.168 1.332-.678.678-.83 1.418-.832 1.664h10z"/>
                 </svg>
               </div>
-              <div class="min-width-0">
-                <h6 class="fw-semibold mb-0 text-truncate" title="{{ user.display_name }}">{{ user.display_name }}</h6>
-                <code class="small text-muted d-block text-truncate" title="{{ user.access_key }}">{{ user.access_key }}</code>
+              <div>
+                <div class="fw-medium">{{ user.display_name }}</div>
+                <code class="small text-muted">{{ user.access_key }}</code>
               </div>
             </div>
-            <div class="dropdown">
-              <button class="btn btn-sm btn-icon" type="button" data-bs-toggle="dropdown" aria-expanded="false">
-                <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" viewBox="0 0 16 16">
-                  <path d="M9.5 13a1.5 1.5 0 1 1-3 0 1.5 1.5 0 0 1 3 0zm0-5a1.5 1.5 0 1 1-3 0 1.5 1.5 0 0 1 3 0zm0-5a1.5 1.5 0 1 1-3 0 1.5 1.5 0 0 1 3 0z"/>
-                </svg>
-              </button>
-              <ul class="dropdown-menu dropdown-menu-end">
-                <li>
-                  <button class="dropdown-item" type="button" data-edit-user="{{ user.access_key }}" data-display-name="{{ user.display_name }}">
-                    <svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-2" viewBox="0 0 16 16">
-                      <path d="M12.146.146a.5.5 0 0 1 .708 0l3 3a.5.5 0 0 1 0 .708l-10 10a.5.5 0 0 1-.168.11l-5 2a.5.5 0 0 1-.65-.65l2-5a.5.5 0 0 1 .11-.168l10-10zM11.207 2.5 13.5 4.793 14.793 3.5 12.5 1.207 11.207 2.5zm1.586 3L10.5 3.207 4 9.707V10h.5a.5.5 0 0 1 .5.5v.5h.5a.5.5 0 0 1 .5.5v.5h.293l6.5-6.5z"/>
-                    </svg>
-                    Edit Name
-                  </button>
-                </li>
-                <li>
-                  <button class="dropdown-item" type="button" data-rotate-user="{{ user.access_key }}">
-                    <svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-2" viewBox="0 0 16 16">
-                      <path d="M11.534 7h3.932a.25.25 0 0 1 .192.41l-1.966 2.36a.25.25 0 0 1-.384 0l-1.966-2.36a.25.25 0 0 1 .192-.41zm-11 2h3.932a.25.25 0 0 0 .192-.41L2.692 6.23a.25.25 0 0 0-.384 0L.342 8.59A.25.25 0 0 0 .534 9z"/>
-                      <path fill-rule="evenodd" d="M8 3c-1.552 0-2.94.707-3.857 1.818a.5.5 0 1 1-.771-.636A6.002 6.002 0 0 1 13.917 7H12.9A5.002 5.002 0 0 0 8 3zM3.1 9a5.002 5.002 0 0 0 8.757 2.182.5.5 0 1 1 .771.636A6.002 6.002 0 0 1 2.083 9H3.1z"/>
-                    </svg>
-                    Rotate Secret
-                  </button>
-                </li>
-                <li><hr class="dropdown-divider"></li>
-                <li>
-                  <button class="dropdown-item text-danger" type="button" data-delete-user="{{ user.access_key }}">
-                    <svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-2" viewBox="0 0 16 16">
-                      <path d="M5.5 5.5a.5.5 0 0 1 .5.5v6a.5.5 0 0 1-1 0v-6a.5.5 0 0 1 .5-.5zm2.5 0a.5.5 0 0 1 .5.5v6a.5.5 0 0 1-1 0v-6a.5.5 0 0 1 .5-.5zm3 .5v6a.5.5 0 0 1-1 0v-6a.5.5 0 0 1 1 0z"/>
-                      <path fill-rule="evenodd" d="M14.5 3a1 1 0 0 1-1 1H13v9a2 2 0 0 1-2 2H5a2 2 0 0 1-2-2V4h-.5a1 1 0 0 1-1-1V2a1 1 0 0 1 1-1H6a1 1 0 0 1 1-1h2a1 1 0 0 1 1 1h3.5a1 1 0 0 1 1 1v1zM4.118 4 4 4.059V13a1 1 0 0 0 1 1h6a1 1 0 0 0 1-1V4.059L11.882 4H4.118zM2.5 3V2h11v1h-11z"/>
-                    </svg>
-                    Delete User
-                  </button>
-                </li>
-              </ul>
-            </div>
-          </div>
-          <div class="mb-3">
-            <div class="small text-muted mb-2">Bucket Permissions</div>
+          </td>
+          <td>
             <div class="d-flex flex-wrap gap-1">
               {% for policy in user.policies %}
               <span class="badge bg-primary bg-opacity-10 text-primary">
-                <svg xmlns="http://www.w3.org/2000/svg" width="10" height="10" fill="currentColor" class="me-1" viewBox="0 0 16 16">
-                  <path d="M2.522 5H2a.5.5 0 0 0-.494.574l1.372 9.149A1.5 1.5 0 0 0 4.36 16h7.278a1.5 1.5 0 0 0 1.483-1.277l1.373-9.149A.5.5 0 0 0 14 5h-.522A5.5 5.5 0 0 0 2.522 5zm1.005 0a4.5 4.5 0 0 1 8.945 0H3.527z"/>
-                </svg>
                 {{ policy.bucket }}
                 {% if '*' in policy.actions %}
                 <span class="opacity-75">(full)</span>
@@ -183,18 +149,38 @@
               <span class="badge bg-secondary bg-opacity-10 text-secondary">No policies</span>
               {% endfor %}
             </div>
-          </div>
-          <button class="btn btn-outline-primary btn-sm w-100" type="button" data-policy-editor data-access-key="{{ user.access_key }}">
-            <svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-1" viewBox="0 0 16 16">
-              <path d="M8 4.754a3.246 3.246 0 1 0 0 6.492 3.246 3.246 0 0 0 0-6.492zM5.754 8a2.246 2.246 0 1 1 4.492 0 2.246 2.246 0 0 1-4.492 0z"/>
-              <path d="M9.796 1.343c-.527-1.79-3.065-1.79-3.592 0l-.094.319a.873.873 0 0 1-1.255.52l-.292-.16c-1.64-.892-3.433.902-2.54 2.541l.159.292a.873.873 0 0 1-.52 1.255l-.319.094c-1.79.527-1.79 3.065 0 3.592l.319.094a.873.873 0 0 1 .52 1.255l-.16.292c-.892 1.64.901 3.434 2.541 2.54l.292-.159a.873.873 0 0 1 1.255.52l.094.319c.527 1.79 3.065 1.79 3.592 0l.094-.319a.873.873 0 0 1 1.255-.52l.292.16c1.64.893 3.434-.902 2.54-2.541l-.159-.292a.873.873 0 0 1 .52-1.255l.319-.094c1.79-.527 1.79-3.065 0-3.592l-.319-.094a.873.873 0 0 1-.52-1.255l.16-.292c.893-1.64-.902-3.433-2.541-2.54l-.292.159a.873.873 0 0 1-1.255-.52l-.094-.319z"/>
-            </svg>
-            Manage Policies
-          </button>
-        </div>
-      </div>
-    </div>
-    {% endfor %}
+          </td>
+          <td class="text-end">
+            <div class="btn-group btn-group-sm" role="group">
+              <button class="btn btn-outline-primary" type="button" data-rotate-user="{{ user.access_key }}" title="Rotate Secret">
+                <svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" viewBox="0 0 16 16">
+                  <path d="M11.534 7h3.932a.25.25 0 0 1 .192.41l-1.966 2.36a.25.25 0 0 1-.384 0l-1.966-2.36a.25.25 0 0 1 .192-.41zm-11 2h3.932a.25.25 0 0 0 .192-.41L2.692 6.23a.25.25 0 0 0-.384 0L.342 8.59A.25.25 0 0 0 .534 9z"/>
+                  <path fill-rule="evenodd" d="M8 3c-1.552 0-2.94.707-3.857 1.818a.5.5 0 1 1-.771-.636A6.002 6.002 0 0 1 13.917 7H12.9A5.002 5.002 0 0 0 8 3zM3.1 9a5.002 5.002 0 0 0 8.757 2.182.5.5 0 1 1 .771.636A6.002 6.002 0 0 1 2.083 9H3.1z"/>
+                </svg>
+              </button>
+              <button class="btn btn-outline-secondary" type="button" data-edit-user="{{ user.access_key }}" data-display-name="{{ user.display_name }}" title="Edit User">
+                <svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" viewBox="0 0 16 16">
+                  <path d="M12.146.146a.5.5 0 0 1 .708 0l3 3a.5.5 0 0 1 0 .708l-10 10a.5.5 0 0 1-.168.11l-5 2a.5.5 0 0 1-.65-.65l2-5a.5.5 0 0 1 .11-.168l10-10zM11.207 2.5 13.5 4.793 14.793 3.5 12.5 1.207 11.207 2.5zm1.586 3L10.5 3.207 4 9.707V10h.5a.5.5 0 0 1 .5.5v.5h.5a.5.5 0 0 1 .5.5v.5h.293l6.5-6.5z"/>
+                </svg>
+              </button>
+              <button class="btn btn-outline-secondary" type="button" data-policy-editor data-access-key="{{ user.access_key }}" title="Edit Policies">
+                <svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" viewBox="0 0 16 16">
+                  <path d="M8 4.754a3.246 3.246 0 1 0 0 6.492 3.246 3.246 0 0 0 0-6.492zM5.754 8a2.246 2.246 0 1 1 4.492 0 2.246 2.246 0 0 1-4.492 0z"/>
+                  <path d="M9.796 1.343c-.527-1.79-3.065-1.79-3.592 0l-.094.319a.873.873 0 0 1-1.255.52l-.292-.16c-1.64-.892-3.433.902-2.54 2.541l.159.292a.873.873 0 0 1-.52 1.255l-.319.094c-1.79.527-1.79 3.065 0 3.592l.319.094a.873.873 0 0 1 .52 1.255l-.16.292c-.892 1.64.901 3.434 2.541 2.54l.292-.159a.873.873 0 0 1 1.255.52l.094.319c.527 1.79 3.065 1.79 3.592 0l.094-.319a.873.873 0 0 1 1.255-.52l.292.16c1.64.893 3.434-.902 2.54-2.541l-.159-.292a.873.873 0 0 1 .52-1.255l.319-.094c1.79-.527 1.79-3.065 0-3.592l-.319-.094a.873.873 0 0 1-.52-1.255l.16-.292c.893-1.64-.902-3.433-2.541-2.54l-.292.159a.873.873 0 0 1-1.255-.52l-.094-.319z"/>
+                </svg>
+              </button>
+              <button class="btn btn-outline-danger" type="button" data-delete-user="{{ user.access_key }}" title="Delete User">
+                <svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" viewBox="0 0 16 16">
+                  <path d="M5.5 5.5a.5.5 0 0 1 .5.5v6a.5.5 0 0 1-1 0v-6a.5.5 0 0 1 .5-.5zm2.5 0a.5.5 0 0 1 .5.5v6a.5.5 0 0 1-1 0v-6a.5.5 0 0 1 .5-.5zm3 .5v6a.5.5 0 0 1-1 0v-6a.5.5 0 0 1 1 0z"/>
+                  <path fill-rule="evenodd" d="M14.5 3a1 1 0 0 1-1 1H13v9a2 2 0 0 1-2 2H5a2 2 0 0 1-2-2V4h-.5a1 1 0 0 1-1-1V2a1 1 0 0 1 1-1H6a1 1 0 0 1 1-1h2a1 1 0 0 1 1 1h3.5a1 1 0 0 1 1 1v1zM4.118 4 4 4.059V13a1 1 0 0 0 1 1h6a1 1 0 0 0 1-1V4.059L11.882 4H4.118zM2.5 3V2h11v1h-11z"/>
+                </svg>
+              </button>
+            </div>
+          </td>
+        </tr>
+        {% endfor %}
+      </tbody>
+    </table>
   </div>
 {% else %}
 <div class="empty-state text-center py-5">
@@ -456,80 +442,6 @@
 {{ super() }}
 <script>
 (function () {
-  function setupJsonAutoIndent(textarea) {
-    if (!textarea) return;
-    textarea.addEventListener('keydown', function(e) {
-      if (e.key === 'Enter') {
-        e.preventDefault();
-        const start = this.selectionStart;
-        const end = this.selectionEnd;
-        const value = this.value;
-        const lineStart = value.lastIndexOf('\n', start - 1) + 1;
-        const currentLine = value.substring(lineStart, start);
-        const indentMatch = currentLine.match(/^(\s*)/);
-        let indent = indentMatch ? indentMatch[1] : '';
-        const trimmedLine = currentLine.trim();
-        const lastChar = trimmedLine.slice(-1);
-        const charBeforeCursor = value.substring(start - 1, start).trim();
-        let newIndent = indent;
-        let insertAfter = '';
-        if (lastChar === '{' || lastChar === '[') {
-          newIndent = indent + '  ';
-          const charAfterCursor = value.substring(start, start + 1).trim();
-          if ((lastChar === '{' && charAfterCursor === '}') ||
-              (lastChar === '[' && charAfterCursor === ']')) {
-            insertAfter = '\n' + indent;
-          }
-        } else if (lastChar === ',' || lastChar === ':') {
-          newIndent = indent;
-        }
-        const insertion = '\n' + newIndent + insertAfter;
-        const newValue = value.substring(0, start) + insertion + value.substring(end);
-        this.value = newValue;
-        const newCursorPos = start + 1 + newIndent.length;
-        this.selectionStart = this.selectionEnd = newCursorPos;
-        this.dispatchEvent(new Event('input', { bubbles: true }));
-      }
-      if (e.key === 'Tab') {
-        e.preventDefault();
-        const start = this.selectionStart;
-        const end = this.selectionEnd;
-        if (e.shiftKey) {
-          const lineStart = this.value.lastIndexOf('\n', start - 1) + 1;
-          const lineContent = this.value.substring(lineStart, start);
-          if (lineContent.startsWith('  ')) {
-            this.value = this.value.substring(0, lineStart) +
-              this.value.substring(lineStart + 2);
-            this.selectionStart = this.selectionEnd = Math.max(lineStart, start - 2);
-          }
-        } else {
-          this.value = this.value.substring(0, start) + '  ' + this.value.substring(end);
-          this.selectionStart = this.selectionEnd = start + 2;
-        }
-        this.dispatchEvent(new Event('input', { bubbles: true }));
-      }
-    });
-  }
-  setupJsonAutoIndent(document.getElementById('policyEditorDocument'));
-  setupJsonAutoIndent(document.getElementById('createUserPolicies'));
 const currentUserKey = {{ principal.access_key | tojson }};
 const configCopyButtons = document.querySelectorAll('.config-copy');
 configCopyButtons.forEach((button) => {

@@ -35,7 +35,7 @@
<div class="card shadow-lg login-card position-relative"> <div class="card shadow-lg login-card position-relative">
<div class="card-body p-4 p-md-5"> <div class="card-body p-4 p-md-5">
<div class="text-center mb-4 d-lg-none"> <div class="text-center mb-4 d-lg-none">
<img src="{{ url_for('static', filename='images/MyFSIO.png') }}" alt="MyFSIO" width="48" height="48" class="mb-3 rounded-3"> <img src="{{ url_for('static', filename='images/MyFISO.png') }}" alt="MyFSIO" width="48" height="48" class="mb-3 rounded-3">
<h2 class="h4 fw-bold">MyFSIO</h2> <h2 class="h4 fw-bold">MyFSIO</h2>
</div> </div>
<h2 class="h4 mb-1 d-none d-lg-block">Sign in</h2> <h2 class="h4 mb-1 d-none d-lg-block">Sign in</h2>

@@ -219,42 +219,24 @@
</div> </div>
<div class="col-lg-4"> <div class="col-lg-4">
-{% set has_issues = (cpu_percent > 80) or (memory.percent > 85) or (disk.percent > 90) %}
-<div class="card shadow-sm border-0 h-100 overflow-hidden" style="background: linear-gradient(135deg, {% if has_issues %}#ef4444 0%, #f97316{% else %}#3b82f6 0%, #8b5cf6{% endif %} 100%);">
+<div class="card shadow-sm border-0 h-100 overflow-hidden" style="background: linear-gradient(135deg, #3b82f6 0%, #8b5cf6 100%);">
   <div class="card-body p-4 d-flex flex-column justify-content-center text-white position-relative">
     <div class="position-absolute top-0 end-0 opacity-25" style="transform: translate(20%, -20%);">
-      <svg xmlns="http://www.w3.org/2000/svg" width="160" height="160" fill="currentColor" class="bi bi-{% if has_issues %}exclamation-triangle{% else %}cloud-check{% endif %}" viewBox="0 0 16 16">
-        {% if has_issues %}
-        <path d="M7.938 2.016A.13.13 0 0 1 8.002 2a.13.13 0 0 1 .063.016.146.146 0 0 1 .054.057l6.857 11.667c.036.06.035.124.002.183a.163.163 0 0 1-.054.06.116.116 0 0 1-.066.017H1.146a.115.115 0 0 1-.066-.017.163.163 0 0 1-.054-.06.176.176 0 0 1 .002-.183L7.884 2.073a.147.147 0 0 1 .054-.057zm1.044-.45a1.13 1.13 0 0 0-1.96 0L.165 13.233c-.457.778.091 1.767.98 1.767h13.713c.889 0 1.438-.99.98-1.767L8.982 1.566z"/>
-        <path d="M7.002 12a1 1 0 1 1 2 0 1 1 0 0 1-2 0zM7.1 5.995a.905.905 0 1 1 1.8 0l-.35 3.507a.552.552 0 0 1-1.1 0L7.1 5.995z"/>
-        {% else %}
+      <svg xmlns="http://www.w3.org/2000/svg" width="160" height="160" fill="currentColor" class="bi bi-cloud-check" viewBox="0 0 16 16">
         <path fill-rule="evenodd" d="M10.354 6.146a.5.5 0 0 1 0 .708l-3 3a.5.5 0 0 1-.708 0l-1.5-1.5a.5.5 0 1 1 .708-.708L7 8.793l2.646-2.647a.5.5 0 0 1 .708 0z"/>
         <path d="M4.406 3.342A5.53 5.53 0 0 1 8 2c2.69 0 4.923 2 5.166 4.579C14.758 6.804 16 8.137 16 9.773 16 11.569 14.502 13 12.687 13H3.781C1.708 13 0 11.366 0 9.318c0-1.763 1.266-3.223 2.942-3.593.143-.863.698-1.723 1.464-2.383z"/>
-        {% endif %}
       </svg>
     </div>
     <div class="mb-3">
-      <span class="badge bg-white {% if has_issues %}text-danger{% else %}text-primary{% endif %} fw-semibold px-3 py-2">
-        <svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="bi bi-{% if has_issues %}exclamation-circle-fill{% else %}check-circle-fill{% endif %} me-1" viewBox="0 0 16 16">
-          {% if has_issues %}
-          <path d="M16 8A8 8 0 1 1 0 8a8 8 0 0 1 16 0zM8 4a.905.905 0 0 0-.9.995l.35 3.507a.552.552 0 0 0 1.1 0l.35-3.507A.905.905 0 0 0 8 4zm.002 6a1 1 0 1 0 0 2 1 1 0 0 0 0-2z"/>
-          {% else %}
+      <span class="badge bg-white text-primary fw-semibold px-3 py-2">
+        <svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="bi bi-check-circle-fill me-1" viewBox="0 0 16 16">
           <path d="M16 8A8 8 0 1 1 0 8a8 8 0 0 1 16 0zm-3.97-3.03a.75.75 0 0 0-1.08.022L7.477 9.417 5.384 7.323a.75.75 0 0 0-1.06 1.06L6.97 11.03a.75.75 0 0 0 1.079-.02l3.992-4.99a.75.75 0 0 0-.01-1.05z"/>
-          {% endif %}
         </svg>
         v{{ app.version }}
       </span>
     </div>
-    <h4 class="card-title fw-bold mb-3">System Health</h4>
-    {% if has_issues %}
-    <ul class="list-unstyled small mb-4 opacity-90">
-      {% if cpu_percent > 80 %}<li class="mb-1">CPU usage is high ({{ cpu_percent }}%)</li>{% endif %}
-      {% if memory.percent > 85 %}<li class="mb-1">Memory usage is high ({{ memory.percent }}%)</li>{% endif %}
-      {% if disk.percent > 90 %}<li class="mb-1">Disk space is critically low ({{ disk.percent }}% used)</li>{% endif %}
-    </ul>
-    {% else %}
-    <p class="card-text opacity-90 mb-4 small">All resources are within normal operating parameters.</p>
-    {% endif %}
+    <h4 class="card-title fw-bold mb-3">System Status</h4>
+    <p class="card-text opacity-90 mb-4">All systems operational. Your storage infrastructure is running smoothly with no detected issues.</p>
     <div class="d-flex gap-4">
       <div>
         <div class="h3 fw-bold mb-0">{{ app.uptime_days }}d</div>


@@ -8,6 +8,8 @@ def client(app):
 @pytest.fixture
 def auth_headers(app):
+    # Create a test user and return headers
+    # Using the user defined in conftest.py
     return {
         "X-Access-Key": "test",
         "X-Secret-Key": "secret"
@@ -73,16 +75,19 @@ def test_multipart_upload_flow(client, auth_headers):
 def test_abort_multipart_upload(client, auth_headers):
     client.put("/abort-bucket", headers=auth_headers)

+    # Initiate
     resp = client.post("/abort-bucket/file.txt?uploads", headers=auth_headers)
     upload_id = fromstring(resp.data).find("UploadId").text

+    # Abort
     resp = client.delete(f"/abort-bucket/file.txt?uploadId={upload_id}", headers=auth_headers)
     assert resp.status_code == 204

+    # Try to upload part (should fail)
     resp = client.put(
         f"/abort-bucket/file.txt?partNumber=1&uploadId={upload_id}",
         headers=auth_headers,
         data=b"data"
     )
-    assert resp.status_code == 404
+    assert resp.status_code == 404  # NoSuchUpload
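The abort flow this test drives is the standard S3 multipart sequence, so it can also be exercised with an ordinary client. A rough sketch against a local instance; the endpoint URL, bucket name, and the `test`/`secret` key pair are assumptions borrowed from the fixtures:

```python
# Sketch only: endpoint, bucket name, and credentials are assumptions.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:5000",
    aws_access_key_id="test",
    aws_secret_access_key="secret",
)

s3.create_bucket(Bucket="abort-bucket")
mpu = s3.create_multipart_upload(Bucket="abort-bucket", Key="file.txt")
s3.abort_multipart_upload(
    Bucket="abort-bucket", Key="file.txt", UploadId=mpu["UploadId"]
)

# Uploading a part after the abort should fail, e.g. with NoSuchUpload.
try:
    s3.upload_part(
        Bucket="abort-bucket", Key="file.txt",
        PartNumber=1, UploadId=mpu["UploadId"], Body=b"data",
    )
except ClientError as exc:
    print(exc.response["Error"]["Code"])
```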


@@ -21,11 +21,12 @@ class TestLocalKeyEncryption:
key_path = tmp_path / "keys" / "master.key" key_path = tmp_path / "keys" / "master.key"
provider = LocalKeyEncryption(key_path) provider = LocalKeyEncryption(key_path)
# Access master key to trigger creation
key = provider.master_key key = provider.master_key
assert key_path.exists() assert key_path.exists()
assert len(key) == 32 assert len(key) == 32 # 256-bit key
def test_load_existing_master_key(self, tmp_path): def test_load_existing_master_key(self, tmp_path):
"""Test loading an existing master key.""" """Test loading an existing master key."""
@@ -48,14 +49,16 @@ class TestLocalKeyEncryption:
         provider = LocalKeyEncryption(key_path)
         plaintext = b"Hello, World! This is a test message."

+        # Encrypt
         result = provider.encrypt(plaintext)
         assert result.ciphertext != plaintext
         assert result.key_id == "local"
         assert len(result.nonce) == 12
         assert len(result.encrypted_data_key) > 0

+        # Decrypt
         decrypted = provider.decrypt(
             result.ciphertext,
             result.nonce,
@@ -76,9 +79,12 @@ class TestLocalKeyEncryption:
         result1 = provider.encrypt(plaintext)
         result2 = provider.encrypt(plaintext)

+        # Different encrypted data keys
         assert result1.encrypted_data_key != result2.encrypted_data_key
+        # Different nonces
         assert result1.nonce != result2.nonce
+        # Different ciphertexts
         assert result1.ciphertext != result2.ciphertext

     def test_generate_data_key(self, tmp_path):
@@ -89,26 +95,30 @@ class TestLocalKeyEncryption:
         provider = LocalKeyEncryption(key_path)
         plaintext_key, encrypted_key = provider.generate_data_key()

         assert len(plaintext_key) == 32
-        assert len(encrypted_key) > 32
+        assert len(encrypted_key) > 32  # nonce + ciphertext + tag

+        # Verify we can decrypt the key
         decrypted_key = provider._decrypt_data_key(encrypted_key)
         assert decrypted_key == plaintext_key

     def test_decrypt_with_wrong_key_fails(self, tmp_path):
         """Test that decryption fails with wrong master key."""
         from app.encryption import LocalKeyEncryption, EncryptionError

+        # Create two providers with different keys
         key_path1 = tmp_path / "master1.key"
         key_path2 = tmp_path / "master2.key"
         provider1 = LocalKeyEncryption(key_path1)
         provider2 = LocalKeyEncryption(key_path2)

+        # Encrypt with provider1
         plaintext = b"Secret message"
         result = provider1.encrypt(plaintext)

+        # Try to decrypt with provider2
         with pytest.raises(EncryptionError):
             provider2.decrypt(
                 result.ciphertext,
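What these assertions pin down is a classic envelope-encryption scheme: each object gets a fresh data key, the data key is wrapped under the master key, and AES-GCM with a 12-byte nonce protects the payload. A self-contained illustration of that pattern using the `cryptography` package; this is a sketch of the concept, not `LocalKeyEncryption`'s internals:

```python
# Illustrative envelope-encryption round trip, not MyFSIO's actual implementation.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

master_key = AESGCM.generate_key(bit_length=256)

# Wrap a fresh 256-bit data key under the master key.
data_key = AESGCM.generate_key(bit_length=256)
wrap_nonce = os.urandom(12)
encrypted_data_key = wrap_nonce + AESGCM(master_key).encrypt(wrap_nonce, data_key, None)

# Encrypt the payload under the data key (12-byte nonce, as the tests assert).
nonce = os.urandom(12)
ciphertext = AESGCM(data_key).encrypt(nonce, b"Hello, World!", None)

# Decrypt: unwrap the data key with the master key, then open the payload.
unwrapped = AESGCM(master_key).decrypt(
    encrypted_data_key[:12], encrypted_data_key[12:], None
)
assert AESGCM(unwrapped).decrypt(nonce, ciphertext, None) == b"Hello, World!"
```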
@@ -185,16 +195,19 @@ class TestStreamingEncryptor:
key_path = tmp_path / "master.key" key_path = tmp_path / "master.key"
provider = LocalKeyEncryption(key_path) provider = LocalKeyEncryption(key_path)
encryptor = StreamingEncryptor(provider, chunk_size=1024) encryptor = StreamingEncryptor(provider, chunk_size=1024)
original_data = b"A" * 5000 + b"B" * 5000 + b"C" * 5000 # Create test data
original_data = b"A" * 5000 + b"B" * 5000 + b"C" * 5000 # 15KB
stream = io.BytesIO(original_data) stream = io.BytesIO(original_data)
# Encrypt
encrypted_stream, metadata = encryptor.encrypt_stream(stream) encrypted_stream, metadata = encryptor.encrypt_stream(stream)
encrypted_data = encrypted_stream.read() encrypted_data = encrypted_stream.read()
assert encrypted_data != original_data assert encrypted_data != original_data
assert metadata.algorithm == "AES256" assert metadata.algorithm == "AES256"
# Decrypt
encrypted_stream = io.BytesIO(encrypted_data) encrypted_stream = io.BytesIO(encrypted_data)
decrypted_stream = encryptor.decrypt_stream(encrypted_stream, metadata) decrypted_stream = encryptor.decrypt_stream(encrypted_stream, metadata)
decrypted_data = decrypted_stream.read() decrypted_data = decrypted_stream.read()
@@ -305,7 +318,8 @@ class TestClientEncryptionHelper:
assert "key" in key_info assert "key" in key_info
assert key_info["algorithm"] == "AES-256-GCM" assert key_info["algorithm"] == "AES-256-GCM"
assert "created_at" in key_info assert "created_at" in key_info
# Verify key is 256 bits
key = base64.b64decode(key_info["key"]) key = base64.b64decode(key_info["key"])
assert len(key) == 32 assert len(key) == 32
@@ -410,7 +424,8 @@ class TestKMSManager:
         assert key is not None
         assert key.key_id == "test-key"

+        # Non-existent key
         assert kms.get_key("non-existent") is None

     def test_enable_disable_key(self, tmp_path):
@@ -423,12 +438,15 @@ class TestKMSManager:
         kms = KMSManager(keys_path, master_key_path)
         kms.create_key("Test key", key_id="test-key")

+        # Initially enabled
         assert kms.get_key("test-key").enabled

+        # Disable
         kms.disable_key("test-key")
         assert not kms.get_key("test-key").enabled

+        # Enable
         kms.enable_key("test-key")
         assert kms.get_key("test-key").enabled
@@ -484,10 +502,12 @@ class TestKMSManager:
context = {"bucket": "test-bucket", "key": "test-key"} context = {"bucket": "test-bucket", "key": "test-key"}
ciphertext = kms.encrypt("test-key", plaintext, context) ciphertext = kms.encrypt("test-key", plaintext, context)
# Decrypt with same context succeeds
decrypted, _ = kms.decrypt(ciphertext, context) decrypted, _ = kms.decrypt(ciphertext, context)
assert decrypted == plaintext assert decrypted == plaintext
# Decrypt with different context fails
with pytest.raises(EncryptionError): with pytest.raises(EncryptionError):
kms.decrypt(ciphertext, {"different": "context"}) kms.decrypt(ciphertext, {"different": "context"})
@@ -506,7 +526,8 @@ class TestKMSManager:
         assert len(plaintext_key) == 32
         assert len(encrypted_key) > 0

+        # Decrypt the encrypted key
         decrypted_key = kms.decrypt_data_key("test-key", encrypted_key)
         assert decrypted_key == plaintext_key
@@ -539,9 +560,14 @@ class TestKMSManager:
kms.create_key("Key 2", key_id="key-2") kms.create_key("Key 2", key_id="key-2")
plaintext = b"Data to re-encrypt" plaintext = b"Data to re-encrypt"
# Encrypt with key-1
ciphertext1 = kms.encrypt("key-1", plaintext) ciphertext1 = kms.encrypt("key-1", plaintext)
# Re-encrypt with key-2
ciphertext2 = kms.re_encrypt(ciphertext1, "key-2") ciphertext2 = kms.re_encrypt(ciphertext1, "key-2")
# Decrypt with key-2
decrypted, key_id = kms.decrypt(ciphertext2) decrypted, key_id = kms.decrypt(ciphertext2)
assert decrypted == plaintext assert decrypted == plaintext
@@ -561,7 +587,7 @@ class TestKMSManager:
         assert len(random1) == 32
         assert len(random2) == 32
-        assert random1 != random2
+        assert random1 != random2  # Very unlikely to be equal

     def test_keys_persist_across_instances(self, tmp_path):
         """Test that keys persist and can be loaded by new instances."""
@@ -569,13 +595,15 @@ class TestKMSManager:
         keys_path = tmp_path / "kms_keys.json"
         master_key_path = tmp_path / "master.key"

+        # Create key with first instance
         kms1 = KMSManager(keys_path, master_key_path)
         kms1.create_key("Test key", key_id="test-key")
         plaintext = b"Persistent encryption test"
         ciphertext = kms1.encrypt("test-key", plaintext)

+        # Create new instance and verify key works
         kms2 = KMSManager(keys_path, master_key_path)
         decrypted, key_id = kms2.decrypt(ciphertext)
@@ -636,27 +664,31 @@ class TestEncryptedStorage:
         encryption = EncryptionManager(config)
         encrypted_storage = EncryptedObjectStorage(storage, encryption)

+        # Create bucket with encryption config
         storage.create_bucket("test-bucket")
         storage.set_bucket_encryption("test-bucket", {
             "Rules": [{"SSEAlgorithm": "AES256"}]
         })

+        # Put object
         original_data = b"This is secret data that should be encrypted"
         stream = io.BytesIO(original_data)
         meta = encrypted_storage.put_object(
             "test-bucket",
             "secret.txt",
             stream,
         )
         assert meta is not None

+        # Verify file on disk is encrypted (not plaintext)
         file_path = storage_root / "test-bucket" / "secret.txt"
         stored_data = file_path.read_bytes()
         assert stored_data != original_data

+        # Get object - should be decrypted
         data, metadata = encrypted_storage.get_object_data("test-bucket", "secret.txt")
         assert data == original_data
@@ -679,12 +711,14 @@ class TestEncryptedStorage:
         encrypted_storage = EncryptedObjectStorage(storage, encryption)
         storage.create_bucket("test-bucket")

+        # No encryption config
         original_data = b"Unencrypted data"
         stream = io.BytesIO(original_data)
         encrypted_storage.put_object("test-bucket", "plain.txt", stream)

+        # Verify file on disk is NOT encrypted
         file_path = storage_root / "test-bucket" / "plain.txt"
         stored_data = file_path.read_bytes()
         assert stored_data == original_data
@@ -710,17 +744,20 @@ class TestEncryptedStorage:
original_data = b"Explicitly encrypted data" original_data = b"Explicitly encrypted data"
stream = io.BytesIO(original_data) stream = io.BytesIO(original_data)
# Request encryption explicitly
encrypted_storage.put_object( encrypted_storage.put_object(
"test-bucket", "test-bucket",
"encrypted.txt", "encrypted.txt",
stream, stream,
server_side_encryption="AES256", server_side_encryption="AES256",
) )
# Verify file is encrypted
file_path = storage_root / "test-bucket" / "encrypted.txt" file_path = storage_root / "test-bucket" / "encrypted.txt"
stored_data = file_path.read_bytes() stored_data = file_path.read_bytes()
assert stored_data != original_data assert stored_data != original_data
# Get object - should be decrypted
data, _ = encrypted_storage.get_object_data("test-bucket", "encrypted.txt") data, _ = encrypted_storage.get_object_data("test-bucket", "encrypted.txt")
assert data == original_data assert data == original_data
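From a client's point of view, default bucket encryption is normally driven through the standard S3 call rather than the storage layer these unit tests poke directly. A sketch, assuming MyFSIO exposes the usual PutBucketEncryption route and the endpoint/credentials shown:

```python
# Sketch only: endpoint, credentials, and the availability of the
# PutBucketEncryption route are assumptions.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:5000",
    aws_access_key_id="test",
    aws_secret_access_key="secret",
)

s3.create_bucket(Bucket="test-bucket")
s3.put_bucket_encryption(
    Bucket="test-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)

# Objects written from now on should be encrypted at rest transparently.
s3.put_object(Bucket="test-bucket", Key="secret.txt", Body=b"secret data")
print(s3.get_object(Bucket="test-bucket", Key="secret.txt")["Body"].read())
```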


@@ -23,7 +23,8 @@ def kms_client(tmp_path):
"ENCRYPTION_MASTER_KEY_PATH": str(tmp_path / "master.key"), "ENCRYPTION_MASTER_KEY_PATH": str(tmp_path / "master.key"),
"KMS_KEYS_PATH": str(tmp_path / "kms_keys.json"), "KMS_KEYS_PATH": str(tmp_path / "kms_keys.json"),
}) })
# Create default IAM config with admin user
iam_config = { iam_config = {
"users": [ "users": [
{ {
@@ -82,6 +83,7 @@ class TestKMSKeyManagement:
     def test_list_keys(self, kms_client, auth_headers):
         """Test listing KMS keys."""
+        # Create some keys
         kms_client.post("/kms/keys", json={"Description": "Key 1"}, headers=auth_headers)
         kms_client.post("/kms/keys", json={"Description": "Key 2"}, headers=auth_headers)
@@ -95,6 +97,7 @@ class TestKMSKeyManagement:
     def test_get_key(self, kms_client, auth_headers):
         """Test getting a specific key."""
+        # Create a key
         create_response = kms_client.post(
             "/kms/keys",
             json={"KeyId": "test-key", "Description": "Test key"},
@@ -117,28 +120,36 @@ class TestKMSKeyManagement:
     def test_delete_key(self, kms_client, auth_headers):
         """Test deleting a key."""
+        # Create a key
         kms_client.post("/kms/keys", json={"KeyId": "test-key"}, headers=auth_headers)

+        # Delete it
         response = kms_client.delete("/kms/keys/test-key", headers=auth_headers)
         assert response.status_code == 204

+        # Verify it's gone
         get_response = kms_client.get("/kms/keys/test-key", headers=auth_headers)
         assert get_response.status_code == 404

     def test_enable_disable_key(self, kms_client, auth_headers):
         """Test enabling and disabling a key."""
+        # Create a key
         kms_client.post("/kms/keys", json={"KeyId": "test-key"}, headers=auth_headers)

+        # Disable
         response = kms_client.post("/kms/keys/test-key/disable", headers=auth_headers)
         assert response.status_code == 200

+        # Verify disabled
         get_response = kms_client.get("/kms/keys/test-key", headers=auth_headers)
         assert get_response.get_json()["KeyMetadata"]["Enabled"] is False

+        # Enable
         response = kms_client.post("/kms/keys/test-key/enable", headers=auth_headers)
         assert response.status_code == 200

+        # Verify enabled
         get_response = kms_client.get("/kms/keys/test-key", headers=auth_headers)
         assert get_response.get_json()["KeyMetadata"]["Enabled"] is True
@@ -148,11 +159,13 @@ class TestKMSEncryption:
     def test_encrypt_decrypt(self, kms_client, auth_headers):
         """Test encrypting and decrypting data."""
+        # Create a key
         kms_client.post("/kms/keys", json={"KeyId": "test-key"}, headers=auth_headers)

         plaintext = b"Hello, World!"
         plaintext_b64 = base64.b64encode(plaintext).decode()

+        # Encrypt
         encrypt_response = kms_client.post(
             "/kms/encrypt",
             json={"KeyId": "test-key", "Plaintext": plaintext_b64},
@@ -164,7 +177,8 @@ class TestKMSEncryption:
assert "CiphertextBlob" in encrypt_data assert "CiphertextBlob" in encrypt_data
assert encrypt_data["KeyId"] == "test-key" assert encrypt_data["KeyId"] == "test-key"
# Decrypt
decrypt_response = kms_client.post( decrypt_response = kms_client.post(
"/kms/decrypt", "/kms/decrypt",
json={"CiphertextBlob": encrypt_data["CiphertextBlob"]}, json={"CiphertextBlob": encrypt_data["CiphertextBlob"]},
@@ -184,7 +198,8 @@ class TestKMSEncryption:
plaintext = b"Contextualized data" plaintext = b"Contextualized data"
plaintext_b64 = base64.b64encode(plaintext).decode() plaintext_b64 = base64.b64encode(plaintext).decode()
context = {"purpose": "testing", "bucket": "my-bucket"} context = {"purpose": "testing", "bucket": "my-bucket"}
# Encrypt with context
encrypt_response = kms_client.post( encrypt_response = kms_client.post(
"/kms/encrypt", "/kms/encrypt",
json={ json={
@@ -197,7 +212,8 @@ class TestKMSEncryption:
         assert encrypt_response.status_code == 200
         ciphertext = encrypt_response.get_json()["CiphertextBlob"]

+        # Decrypt with same context succeeds
         decrypt_response = kms_client.post(
             "/kms/decrypt",
             json={
@@ -208,7 +224,8 @@ class TestKMSEncryption:
         )
         assert decrypt_response.status_code == 200

+        # Decrypt with wrong context fails
         wrong_context_response = kms_client.post(
             "/kms/decrypt",
             json={
@@ -308,9 +325,11 @@ class TestKMSReEncrypt:
     def test_re_encrypt(self, kms_client, auth_headers):
         """Test re-encrypting data with a different key."""
+        # Create two keys
         kms_client.post("/kms/keys", json={"KeyId": "key-1"}, headers=auth_headers)
         kms_client.post("/kms/keys", json={"KeyId": "key-2"}, headers=auth_headers)

+        # Encrypt with key-1
         plaintext = b"Data to re-encrypt"
         encrypt_response = kms_client.post(
             "/kms/encrypt",
@@ -322,7 +341,8 @@ class TestKMSReEncrypt:
         )
         ciphertext = encrypt_response.get_json()["CiphertextBlob"]

+        # Re-encrypt with key-2
         re_encrypt_response = kms_client.post(
             "/kms/re-encrypt",
             json={
@@ -337,7 +357,8 @@ class TestKMSReEncrypt:
assert data["SourceKeyId"] == "key-1" assert data["SourceKeyId"] == "key-1"
assert data["KeyId"] == "key-2" assert data["KeyId"] == "key-2"
# Verify new ciphertext can be decrypted
decrypt_response = kms_client.post( decrypt_response = kms_client.post(
"/kms/decrypt", "/kms/decrypt",
json={"CiphertextBlob": data["CiphertextBlob"]}, json={"CiphertextBlob": data["CiphertextBlob"]},
@@ -377,7 +398,7 @@ class TestKMSRandom:
         data = response.get_json()
         random_bytes = base64.b64decode(data["Plaintext"])
-        assert len(random_bytes) == 32
+        assert len(random_bytes) == 32  # Default is 32 bytes

 class TestClientSideEncryption:
@@ -401,9 +422,11 @@ class TestClientSideEncryption:
     def test_client_encrypt_decrypt(self, kms_client, auth_headers):
         """Test client-side encryption and decryption."""
+        # Generate a key
         key_response = kms_client.post("/kms/client/generate-key", headers=auth_headers)
         key = key_response.get_json()["key"]

+        # Encrypt
         plaintext = b"Client-side encrypted data"
         encrypt_response = kms_client.post(
             "/kms/client/encrypt",
@@ -416,7 +439,8 @@ class TestClientSideEncryption:
         assert encrypt_response.status_code == 200
         encrypted = encrypt_response.get_json()

+        # Decrypt
         decrypt_response = kms_client.post(
             "/kms/client/decrypt",
             json={
@@ -437,6 +461,7 @@ class TestEncryptionMaterials:
     def test_get_encryption_materials(self, kms_client, auth_headers):
         """Test getting encryption materials for client-side S3 encryption."""
+        # Create a key
         kms_client.post("/kms/keys", json={"KeyId": "s3-key"}, headers=auth_headers)

         response = kms_client.post(
@@ -452,7 +477,8 @@ class TestEncryptionMaterials:
assert "EncryptedKey" in data assert "EncryptedKey" in data
assert data["KeyId"] == "s3-key" assert data["KeyId"] == "s3-key"
assert data["Algorithm"] == "AES-256-GCM" assert data["Algorithm"] == "AES-256-GCM"
# Verify key is 256 bits
key = base64.b64decode(data["PlaintextKey"]) key = base64.b64decode(data["PlaintextKey"])
assert len(key) == 32 assert len(key) == 32
@@ -463,7 +489,8 @@ class TestKMSAuthentication:
     def test_unauthenticated_request_fails(self, kms_client):
         """Test that unauthenticated requests are rejected."""
         response = kms_client.get("/kms/keys")
+        # Should fail with 403 (no credentials)
         assert response.status_code == 403

     def test_invalid_credentials_fail(self, kms_client):


@@ -4,6 +4,7 @@ import pytest
 from xml.etree.ElementTree import fromstring

+# Helper to create file-like stream
 def _stream(data: bytes):
     return io.BytesIO(data)
@@ -18,11 +19,13 @@ class TestListObjectsV2:
"""Tests for ListObjectsV2 endpoint.""" """Tests for ListObjectsV2 endpoint."""
def test_list_objects_v2_basic(self, client, signer, storage): def test_list_objects_v2_basic(self, client, signer, storage):
# Create bucket and objects
storage.create_bucket("v2-test") storage.create_bucket("v2-test")
storage.put_object("v2-test", "file1.txt", _stream(b"hello")) storage.put_object("v2-test", "file1.txt", _stream(b"hello"))
storage.put_object("v2-test", "file2.txt", _stream(b"world")) storage.put_object("v2-test", "file2.txt", _stream(b"world"))
storage.put_object("v2-test", "folder/file3.txt", _stream(b"nested")) storage.put_object("v2-test", "folder/file3.txt", _stream(b"nested"))
# ListObjectsV2 request
headers = signer("GET", "/v2-test?list-type=2") headers = signer("GET", "/v2-test?list-type=2")
resp = client.get("/v2-test", query_string={"list-type": "2"}, headers=headers) resp = client.get("/v2-test", query_string={"list-type": "2"}, headers=headers)
assert resp.status_code == 200 assert resp.status_code == 200
@@ -43,6 +46,7 @@ class TestListObjectsV2:
storage.put_object("prefix-test", "photos/2024/mar.jpg", _stream(b"mar")) storage.put_object("prefix-test", "photos/2024/mar.jpg", _stream(b"mar"))
storage.put_object("prefix-test", "docs/readme.md", _stream(b"readme")) storage.put_object("prefix-test", "docs/readme.md", _stream(b"readme"))
# List with prefix and delimiter
headers = signer("GET", "/prefix-test?list-type=2&prefix=photos/&delimiter=/") headers = signer("GET", "/prefix-test?list-type=2&prefix=photos/&delimiter=/")
resp = client.get( resp = client.get(
"/prefix-test", "/prefix-test",
@@ -52,10 +56,11 @@ class TestListObjectsV2:
         assert resp.status_code == 200
         root = fromstring(resp.data)

+        # Should show common prefixes for 2023/ and 2024/
         prefixes = [el.find("Prefix").text for el in root.findall("CommonPrefixes")]
         assert "photos/2023/" in prefixes
         assert "photos/2024/" in prefixes
-        assert len(root.findall("Contents")) == 0
+        assert len(root.findall("Contents")) == 0  # No direct files under photos/

 class TestPutBucketVersioning:
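These prefix/delimiter semantics are exactly what S3 clients rely on for folder-style listings. A quick sketch with boto3; endpoint and credentials are assumptions:

```python
# Sketch only: endpoint and credentials are assumptions.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:5000",
    aws_access_key_id="test",
    aws_secret_access_key="secret",
)

resp = s3.list_objects_v2(Bucket="prefix-test", Prefix="photos/", Delimiter="/")

# "Folders" come back as CommonPrefixes; direct children as Contents.
for cp in resp.get("CommonPrefixes", []):
    print("dir:", cp["Prefix"])   # e.g. photos/2023/, photos/2024/
for obj in resp.get("Contents", []):
    print("obj:", obj["Key"])
```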
@@ -73,6 +78,7 @@ class TestPutBucketVersioning:
resp = client.put("/version-test", query_string={"versioning": ""}, data=payload, headers=headers) resp = client.put("/version-test", query_string={"versioning": ""}, data=payload, headers=headers)
assert resp.status_code == 200 assert resp.status_code == 200
# Verify via GET
headers = signer("GET", "/version-test?versioning") headers = signer("GET", "/version-test?versioning")
resp = client.get("/version-test", query_string={"versioning": ""}, headers=headers) resp = client.get("/version-test", query_string={"versioning": ""}, headers=headers)
root = fromstring(resp.data) root = fromstring(resp.data)
@@ -104,13 +110,15 @@ class TestDeleteBucketTagging:
storage.create_bucket("tag-delete-test") storage.create_bucket("tag-delete-test")
storage.set_bucket_tags("tag-delete-test", [{"Key": "env", "Value": "test"}]) storage.set_bucket_tags("tag-delete-test", [{"Key": "env", "Value": "test"}])
# Delete tags
headers = signer("DELETE", "/tag-delete-test?tagging") headers = signer("DELETE", "/tag-delete-test?tagging")
resp = client.delete("/tag-delete-test", query_string={"tagging": ""}, headers=headers) resp = client.delete("/tag-delete-test", query_string={"tagging": ""}, headers=headers)
assert resp.status_code == 204 assert resp.status_code == 204
# Verify tags are gone
headers = signer("GET", "/tag-delete-test?tagging") headers = signer("GET", "/tag-delete-test?tagging")
resp = client.get("/tag-delete-test", query_string={"tagging": ""}, headers=headers) resp = client.get("/tag-delete-test", query_string={"tagging": ""}, headers=headers)
assert resp.status_code == 404 assert resp.status_code == 404 # NoSuchTagSet
class TestDeleteBucketCors: class TestDeleteBucketCors:
@@ -122,13 +130,15 @@ class TestDeleteBucketCors:
{"AllowedOrigins": ["*"], "AllowedMethods": ["GET"]} {"AllowedOrigins": ["*"], "AllowedMethods": ["GET"]}
]) ])
# Delete CORS
headers = signer("DELETE", "/cors-delete-test?cors") headers = signer("DELETE", "/cors-delete-test?cors")
resp = client.delete("/cors-delete-test", query_string={"cors": ""}, headers=headers) resp = client.delete("/cors-delete-test", query_string={"cors": ""}, headers=headers)
assert resp.status_code == 204 assert resp.status_code == 204
# Verify CORS is gone
headers = signer("GET", "/cors-delete-test?cors") headers = signer("GET", "/cors-delete-test?cors")
resp = client.get("/cors-delete-test", query_string={"cors": ""}, headers=headers) resp = client.get("/cors-delete-test", query_string={"cors": ""}, headers=headers)
assert resp.status_code == 404 assert resp.status_code == 404 # NoSuchCORSConfiguration
class TestGetBucketLocation: class TestGetBucketLocation:
@@ -163,6 +173,7 @@ class TestBucketAcl:
     def test_put_bucket_acl(self, client, signer, storage):
         storage.create_bucket("acl-put-test")

+        # PUT with canned ACL header
         headers = signer("PUT", "/acl-put-test?acl")
         headers["x-amz-acl"] = "public-read"
         resp = client.put("/acl-put-test", query_string={"acl": ""}, headers=headers)
@@ -177,6 +188,7 @@ class TestCopyObject:
storage.create_bucket("copy-dst") storage.create_bucket("copy-dst")
storage.put_object("copy-src", "original.txt", _stream(b"original content")) storage.put_object("copy-src", "original.txt", _stream(b"original content"))
# Copy object
headers = signer("PUT", "/copy-dst/copied.txt") headers = signer("PUT", "/copy-dst/copied.txt")
headers["x-amz-copy-source"] = "/copy-src/original.txt" headers["x-amz-copy-source"] = "/copy-src/original.txt"
resp = client.put("/copy-dst/copied.txt", headers=headers) resp = client.put("/copy-dst/copied.txt", headers=headers)
@@ -187,6 +199,7 @@ class TestCopyObject:
assert root.find("ETag") is not None assert root.find("ETag") is not None
assert root.find("LastModified") is not None assert root.find("LastModified") is not None
# Verify copy exists
path = storage.get_object_path("copy-dst", "copied.txt") path = storage.get_object_path("copy-dst", "copied.txt")
assert path.read_bytes() == b"original content" assert path.read_bytes() == b"original content"
@@ -195,6 +208,7 @@ class TestCopyObject:
storage.create_bucket("meta-dst") storage.create_bucket("meta-dst")
storage.put_object("meta-src", "source.txt", _stream(b"data"), metadata={"old": "value"}) storage.put_object("meta-src", "source.txt", _stream(b"data"), metadata={"old": "value"})
# Copy with REPLACE directive
headers = signer("PUT", "/meta-dst/target.txt") headers = signer("PUT", "/meta-dst/target.txt")
headers["x-amz-copy-source"] = "/meta-src/source.txt" headers["x-amz-copy-source"] = "/meta-src/source.txt"
headers["x-amz-metadata-directive"] = "REPLACE" headers["x-amz-metadata-directive"] = "REPLACE"
@@ -202,6 +216,7 @@ class TestCopyObject:
resp = client.put("/meta-dst/target.txt", headers=headers) resp = client.put("/meta-dst/target.txt", headers=headers)
assert resp.status_code == 200 assert resp.status_code == 200
# Verify new metadata (note: header keys are Title-Cased)
meta = storage.get_object_metadata("meta-dst", "target.txt") meta = storage.get_object_metadata("meta-dst", "target.txt")
assert "New" in meta or "new" in meta assert "New" in meta or "new" in meta
assert "old" not in meta and "Old" not in meta assert "old" not in meta and "Old" not in meta
@@ -214,6 +229,7 @@ class TestObjectTagging:
storage.create_bucket("obj-tag-test") storage.create_bucket("obj-tag-test")
storage.put_object("obj-tag-test", "tagged.txt", _stream(b"content")) storage.put_object("obj-tag-test", "tagged.txt", _stream(b"content"))
# PUT tags
payload = b"""<?xml version="1.0" encoding="UTF-8"?> payload = b"""<?xml version="1.0" encoding="UTF-8"?>
<Tagging> <Tagging>
<TagSet> <TagSet>
@@ -231,6 +247,7 @@ class TestObjectTagging:
         )
         assert resp.status_code == 204

+        # GET tags
         headers = signer("GET", "/obj-tag-test/tagged.txt?tagging")
         resp = client.get("/obj-tag-test/tagged.txt", query_string={"tagging": ""}, headers=headers)
         assert resp.status_code == 200
@@ -240,10 +257,12 @@ class TestObjectTagging:
assert tags["project"] == "demo" assert tags["project"] == "demo"
assert tags["env"] == "test" assert tags["env"] == "test"
# DELETE tags
headers = signer("DELETE", "/obj-tag-test/tagged.txt?tagging") headers = signer("DELETE", "/obj-tag-test/tagged.txt?tagging")
resp = client.delete("/obj-tag-test/tagged.txt", query_string={"tagging": ""}, headers=headers) resp = client.delete("/obj-tag-test/tagged.txt", query_string={"tagging": ""}, headers=headers)
assert resp.status_code == 204 assert resp.status_code == 204
# Verify empty
headers = signer("GET", "/obj-tag-test/tagged.txt?tagging") headers = signer("GET", "/obj-tag-test/tagged.txt?tagging")
resp = client.get("/obj-tag-test/tagged.txt", query_string={"tagging": ""}, headers=headers) resp = client.get("/obj-tag-test/tagged.txt", query_string={"tagging": ""}, headers=headers)
root = fromstring(resp.data) root = fromstring(resp.data)
@@ -253,6 +272,7 @@ class TestObjectTagging:
storage.create_bucket("tag-limit") storage.create_bucket("tag-limit")
storage.put_object("tag-limit", "file.txt", _stream(b"x")) storage.put_object("tag-limit", "file.txt", _stream(b"x"))
# Try to set 11 tags (limit is 10)
tags = "".join(f"<Tag><Key>key{i}</Key><Value>val{i}</Value></Tag>" for i in range(11)) tags = "".join(f"<Tag><Key>key{i}</Key><Value>val{i}</Value></Tag>" for i in range(11))
payload = f"<Tagging><TagSet>{tags}</TagSet></Tagging>".encode() payload = f"<Tagging><TagSet>{tags}</TagSet></Tagging>".encode()


@@ -66,9 +66,10 @@ class TestUIBucketEncryption:
"""Encryption card should be visible on bucket detail page.""" """Encryption card should be visible on bucket detail page."""
app = _make_encryption_app(tmp_path) app = _make_encryption_app(tmp_path)
client = app.test_client() client = app.test_client()
# Login first
client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True) client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)
response = client.get("/ui/buckets/test-bucket?tab=properties") response = client.get("/ui/buckets/test-bucket?tab=properties")
assert response.status_code == 200 assert response.status_code == 200
@@ -80,12 +81,15 @@ class TestUIBucketEncryption:
"""Should be able to enable AES-256 encryption.""" """Should be able to enable AES-256 encryption."""
app = _make_encryption_app(tmp_path) app = _make_encryption_app(tmp_path)
client = app.test_client() client = app.test_client()
# Login
client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True) client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)
# Get CSRF token
response = client.get("/ui/buckets/test-bucket?tab=properties") response = client.get("/ui/buckets/test-bucket?tab=properties")
csrf_token = get_csrf_token(response) csrf_token = get_csrf_token(response)
# Enable AES-256 encryption
response = client.post( response = client.post(
"/ui/buckets/test-bucket/encryption", "/ui/buckets/test-bucket/encryption",
data={ data={
@@ -98,13 +102,15 @@ class TestUIBucketEncryption:
         assert response.status_code == 200
         html = response.data.decode("utf-8")

+        # Should see success message or enabled state
         assert "AES-256" in html or "encryption enabled" in html.lower()

     def test_enable_kms_encryption(self, tmp_path):
         """Should be able to enable KMS encryption."""
         app = _make_encryption_app(tmp_path, kms_enabled=True)
         client = app.test_client()

+        # Create a KMS key first
         with app.app_context():
             kms = app.extensions.get("kms")
             if kms:
@@ -112,12 +118,15 @@ class TestUIBucketEncryption:
                 key_id = key.key_id
             else:
                 pytest.skip("KMS not available")

+        # Login
         client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)

+        # Get CSRF token
         response = client.get("/ui/buckets/test-bucket?tab=properties")
         csrf_token = get_csrf_token(response)

+        # Enable KMS encryption
         response = client.post(
             "/ui/buckets/test-bucket/encryption",
             data={
@@ -137,9 +146,11 @@ class TestUIBucketEncryption:
"""Should be able to disable encryption.""" """Should be able to disable encryption."""
app = _make_encryption_app(tmp_path) app = _make_encryption_app(tmp_path)
client = app.test_client() client = app.test_client()
# Login
client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True) client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)
# First enable encryption
response = client.get("/ui/buckets/test-bucket?tab=properties") response = client.get("/ui/buckets/test-bucket?tab=properties")
csrf_token = get_csrf_token(response) csrf_token = get_csrf_token(response)
@@ -151,7 +162,8 @@ class TestUIBucketEncryption:
"algorithm": "AES256", "algorithm": "AES256",
}, },
) )
# Now disable it
response = client.get("/ui/buckets/test-bucket?tab=properties") response = client.get("/ui/buckets/test-bucket?tab=properties")
csrf_token = get_csrf_token(response) csrf_token = get_csrf_token(response)
@@ -172,12 +184,13 @@ class TestUIBucketEncryption:
"""Invalid encryption algorithm should be rejected.""" """Invalid encryption algorithm should be rejected."""
app = _make_encryption_app(tmp_path) app = _make_encryption_app(tmp_path)
client = app.test_client() client = app.test_client()
# Login
client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True) client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)
response = client.get("/ui/buckets/test-bucket?tab=properties") response = client.get("/ui/buckets/test-bucket?tab=properties")
csrf_token = get_csrf_token(response) csrf_token = get_csrf_token(response)
response = client.post( response = client.post(
"/ui/buckets/test-bucket/encryption", "/ui/buckets/test-bucket/encryption",
data={ data={
@@ -187,21 +200,23 @@ class TestUIBucketEncryption:
             },
             follow_redirects=True,
         )

         assert response.status_code == 200
         html = response.data.decode("utf-8")
         assert "Invalid" in html or "danger" in html

     def test_encryption_persists_in_config(self, tmp_path):
         """Encryption config should persist in bucket config."""
         app = _make_encryption_app(tmp_path)
         client = app.test_client()

+        # Login
         client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)

+        # Enable encryption
         response = client.get("/ui/buckets/test-bucket?tab=properties")
         csrf_token = get_csrf_token(response)

         client.post(
             "/ui/buckets/test-bucket/encryption",
             data={
@@ -210,7 +225,8 @@ class TestUIBucketEncryption:
"algorithm": "AES256", "algorithm": "AES256",
}, },
) )
# Verify it's stored
with app.app_context(): with app.app_context():
storage = app.extensions["object_storage"] storage = app.extensions["object_storage"]
config = storage.get_bucket_encryption("test-bucket") config = storage.get_bucket_encryption("test-bucket")
@@ -227,12 +243,14 @@ class TestUIEncryptionWithoutPermission:
"""Read-only user should not be able to change encryption settings.""" """Read-only user should not be able to change encryption settings."""
app = _make_encryption_app(tmp_path) app = _make_encryption_app(tmp_path)
client = app.test_client() client = app.test_client()
# Login as readonly user
client.post("/ui/login", data={"access_key": "readonly", "secret_key": "secret"}, follow_redirects=True) client.post("/ui/login", data={"access_key": "readonly", "secret_key": "secret"}, follow_redirects=True)
# This should fail or be rejected
response = client.get("/ui/buckets/test-bucket?tab=properties") response = client.get("/ui/buckets/test-bucket?tab=properties")
csrf_token = get_csrf_token(response) csrf_token = get_csrf_token(response)
response = client.post( response = client.post(
"/ui/buckets/test-bucket/encryption", "/ui/buckets/test-bucket/encryption",
data={ data={
@@ -242,7 +260,9 @@ class TestUIEncryptionWithoutPermission:
             },
             follow_redirects=True,
         )
+        # Should either redirect with error or show permission denied
         assert response.status_code == 200
         html = response.data.decode("utf-8")
+        # Should contain error about permission denied
         assert "Access denied" in html or "permission" in html.lower() or "not authorized" in html.lower()