Compare commits

22 Commits

- caf01d6ada
- a5d19e2982
- 692e7e3a6e
- 78dba93ee0
- 93a5aa6618
- 9ab750650c
- 609e9db2f7
- 94a55cf2b7
- b9cfc45aa2
- 2d60e36fbf
- c78f7fa6b0
- b3dce8d13e
- e792b86485
- cdb86aeea7
- cdbc156b5b
- 1df8ff9d25
- 05f1b00473
- 5ebc97300e
- d2f9c3bded
- 9f347f2caa
- 4ab58e59c2
- 32232211a1
Dockerfile

```diff
@@ -1,5 +1,5 @@
 # syntax=docker/dockerfile:1.7
-FROM python:3.11-slim
+FROM python:3.12.12-slim
 
 ENV PYTHONDONTWRITEBYTECODE=1 \
     PYTHONUNBUFFERED=1
```
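Pinning the exact patch release (`3.12.12-slim`) rather than the floating `3.11-slim` tag makes image builds reproducible: a later rebuild pulls the same interpreter instead of whatever the minor-version tag happens to point at that day.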
README.md

````diff
@@ -1,117 +1,251 @@
-# MyFSIO (Flask S3 + IAM)
+# MyFSIO
 
-MyFSIO is a batteries-included, Flask-based recreation of Amazon S3 and IAM workflows built for local development. The design mirrors the [AWS S3 documentation](https://docs.aws.amazon.com/s3/) wherever practical: bucket naming, Signature Version 4 presigning, Version 2012-10-17 bucket policies, IAM-style users, and familiar REST endpoints.
+A lightweight, S3-compatible object storage system built with Flask. MyFSIO implements core AWS S3 REST API operations with filesystem-backed storage, making it ideal for local development, testing, and self-hosted storage scenarios.
 
-## Why MyFSIO?
+## Features
 
-- **Dual servers:** Run both the API (port 5000) and UI (port 5100) with a single command: `python run.py`.
-- **IAM + access keys:** Users, access keys, key rotation, and bucket-scoped actions (`list/read/write/delete/policy`) now live in `data/.myfsio.sys/config/iam.json` and are editable from the IAM dashboard.
-- **Bucket policies + hot reload:** `data/.myfsio.sys/config/bucket_policies.json` uses AWS' policy grammar (Version `2012-10-17`) with a built-in watcher, so editing the JSON file applies immediately. The UI also ships Public/Private/Custom presets for faster edits.
-- **Presigned URLs everywhere:** Signature Version 4 presigned URLs respect IAM + bucket policies and replace the now-removed "share link" feature for public access scenarios.
-- **Modern UI:** Responsive tables, quick filters, preview sidebar, object-level delete buttons, a presign modal, and an inline JSON policy editor that respects dark mode keep bucket management friendly. The object browser supports folder navigation, infinite scroll pagination, bulk operations, and automatic retry on load failures.
-- **Tests & health:** `/healthz` for smoke checks and `pytest` coverage for IAM, CRUD, presign, and policy flows.
+**Core Storage**
+
+- S3-compatible REST API with AWS Signature Version 4 authentication
+- Bucket and object CRUD operations
+- Object versioning with version history
+- Multipart uploads for large files
+- Presigned URLs (1 second to 7 days validity)
+
+**Security & Access Control**
+
+- IAM users with access key management and rotation
+- Bucket policies (AWS Policy Version 2012-10-17)
+- Server-side encryption (SSE-S3 and SSE-KMS)
+- Built-in Key Management Service (KMS)
+- Rate limiting per endpoint
+
+**Advanced Features**
+
+- Cross-bucket replication to remote S3-compatible endpoints
+- Hot-reload for bucket policies (no restart required)
+- CORS configuration per bucket
+
+**Management UI**
+
+- Web console for bucket and object management
+- IAM dashboard for user administration
+- Inline JSON policy editor with presets
+- Object browser with folder navigation and bulk operations
+- Dark mode support
 
-## Architecture at a Glance
+## Architecture
 
 ```
-+-------------------+       +----------------+
-|  API Server       |<----->| Object storage |
-|  (port 5000)      |       |  (filesystem)  |
-|  - S3 routes      |       +----------------+
-|  - Presigned URLs |
-|  - Bucket policy  |
-+-------------------+
-         ^
-         |
-+-------------------+
-|  UI Server        |
-|  (port 5100)      |
-|  - Auth console   |
-|  - IAM dashboard  |
-|  - Bucket editor  |
-+-------------------+
++------------------+         +------------------+
+|   API Server     |         |   UI Server      |
+|   (port 5000)    |         |   (port 5100)    |
+|                  |         |                  |
+| - S3 REST API    |<------->| - Web Console    |
+| - SigV4 Auth     |         | - IAM Dashboard  |
+| - Presign URLs   |         | - Bucket Editor  |
++--------+---------+         +------------------+
+         |
+         v
++------------------+         +------------------+
+|  Object Storage  |         | System Metadata  |
+|  (filesystem)    |         | (.myfsio.sys/)   |
+|                  |         |                  |
+|  data/<bucket>/  |         | - IAM config     |
+|    <objects>     |         | - Bucket policies|
+|                  |         | - Encryption keys|
++------------------+         +------------------+
 ```
 
-Both apps load the same configuration via `AppConfig` so IAM data and bucket policies stay consistent no matter which process you run.
-Bucket policies are automatically reloaded whenever `bucket_policies.json` changes—no restarts required.
-
-## Getting Started
+## Quick Start
 
 ```bash
+# Clone and setup
+git clone https://gitea.jzwsite.com/kqjy/MyFSIO
+cd s3
 python -m venv .venv
-. .venv/Scripts/activate  # PowerShell: .\.venv\Scripts\Activate.ps1
+
+# Activate virtual environment
+# Windows PowerShell:
+.\.venv\Scripts\Activate.ps1
+# Windows CMD:
+.venv\Scripts\activate.bat
+# Linux/macOS:
+source .venv/bin/activate
+
+# Install dependencies
 pip install -r requirements.txt
 
-# Run both API and UI (default)
+# Start both servers
 python run.py
 
-# Or run individually:
-# python run.py --mode api
-# python run.py --mode ui
+# Or start individually
+python run.py --mode api   # API only (port 5000)
+python run.py --mode ui    # UI only (port 5100)
 ```
 
-Visit `http://127.0.0.1:5100/ui` for the console and `http://127.0.0.1:5000/` for the raw API. Override ports/hosts with the environment variables listed below.
+**Default Credentials:** `localadmin` / `localadmin`
 
-## IAM, Access Keys, and Bucket Policies
+- **Web Console:** http://127.0.0.1:5100/ui
+- **API Endpoint:** http://127.0.0.1:5000
 
-- First run creates `data/.myfsio.sys/config/iam.json` with `localadmin / localadmin` (full control). Sign in via the UI, then use the **IAM** tab to create users, rotate secrets, or edit inline policies without touching JSON by hand.
-- Bucket policies live in `data/.myfsio.sys/config/bucket_policies.json` and follow the AWS `arn:aws:s3:::bucket/key` resource syntax with Version `2012-10-17`. Attach/replace/remove policies from the bucket detail page or edit the JSON by hand—changes hot reload automatically.
-- IAM actions include extended verbs (`iam:list_users`, `iam:create_user`, `iam:update_policy`, etc.) so you can control who is allowed to manage other users and policies.
-
-### Bucket Policy Presets & Hot Reload
-
-- **Presets:** Every bucket detail view includes Public (read-only), Private (detach policy), and Custom presets. Public auto-populates a policy that grants anonymous `s3:ListBucket` + `s3:GetObject` access to the entire bucket.
-- **Custom drafts:** Switching back to Custom restores your last manual edit so you can toggle between presets without losing work.
-- **Hot reload:** The server watches `bucket_policies.json` and reloads statements on-the-fly—ideal for editing policies in your favorite editor while testing via curl or the UI.
-
-## Presigned URLs
-
-Presigned URLs follow the AWS CLI playbook:
-
-- Call `POST /presign/<bucket>/<key>` (or use the "Presign" button in the UI) to request a Signature Version 4 URL valid for 1 second to 7 days.
-- The generated URL honors IAM permissions and bucket-policy decisions at generation-time and again when somebody fetches it.
-- Because presigned URLs cover both authenticated and public sharing scenarios, the legacy "share link" feature has been removed.
-
 ## Configuration
 
 | Variable | Default | Description |
-| --- | --- | --- |
-| `STORAGE_ROOT` | `<project>/data` | Filesystem root for bucket directories |
-| `MAX_UPLOAD_SIZE` | `1073741824` | Maximum upload size (bytes) |
-| `UI_PAGE_SIZE` | `100` | `MaxKeys` hint for listings |
-| `SECRET_KEY` | `dev-secret-key` | Flask session secret for the UI |
-| `IAM_CONFIG` | `<project>/data/.myfsio.sys/config/iam.json` | IAM user + policy store |
-| `BUCKET_POLICY_PATH` | `<project>/data/.myfsio.sys/config/bucket_policies.json` | Bucket policy store |
-| `API_BASE_URL` | `http://127.0.0.1:5000` | Used by the UI when calling API endpoints (presign, bucket policy) |
-| `AWS_REGION` | `us-east-1` | Region used in Signature V4 scope |
-| `AWS_SERVICE` | `s3` | Service used in Signature V4 scope |
-
-> Buckets now live directly under `data/` while system metadata (versions, IAM, bucket policies, multipart uploads, etc.) lives in `data/.myfsio.sys`.
-
-## API Cheatsheet (IAM headers required)
-
-```
-GET    /                       -> List buckets (XML)
-PUT    /<bucket>               -> Create bucket
-DELETE /<bucket>               -> Delete bucket (must be empty)
-GET    /<bucket>               -> List objects (XML)
-PUT    /<bucket>/<key>         -> Upload object (binary stream)
-GET    /<bucket>/<key>         -> Download object
-DELETE /<bucket>/<key>         -> Delete object
-POST   /presign/<bucket>/<key> -> Generate AWS SigV4 presigned URL (JSON)
-GET    /bucket-policy/<bucket> -> Fetch bucket policy (JSON)
-PUT    /bucket-policy/<bucket> -> Attach/replace bucket policy (JSON)
-DELETE /bucket-policy/<bucket> -> Remove bucket policy
-```
+|----------|---------|-------------|
+| `STORAGE_ROOT` | `./data` | Filesystem root for bucket storage |
+| `IAM_CONFIG` | `.myfsio.sys/config/iam.json` | IAM user and policy store |
+| `BUCKET_POLICY_PATH` | `.myfsio.sys/config/bucket_policies.json` | Bucket policy store |
+| `API_BASE_URL` | `http://127.0.0.1:5000` | API endpoint for UI calls |
+| `MAX_UPLOAD_SIZE` | `1073741824` | Maximum upload size in bytes (1 GB) |
+| `MULTIPART_MIN_PART_SIZE` | `5242880` | Minimum multipart part size (5 MB) |
+| `UI_PAGE_SIZE` | `100` | Default page size for listings |
+| `SECRET_KEY` | `dev-secret-key` | Flask session secret |
+| `AWS_REGION` | `us-east-1` | Region for SigV4 signing |
+| `AWS_SERVICE` | `s3` | Service name for SigV4 signing |
+| `ENCRYPTION_ENABLED` | `false` | Enable server-side encryption |
+| `KMS_ENABLED` | `false` | Enable Key Management Service |
+| `LOG_LEVEL` | `INFO` | Logging verbosity |
+
+## Data Layout
+
+```
+data/
+├── <bucket>/                        # User buckets with objects
+└── .myfsio.sys/                     # System metadata
+    ├── config/
+    │   ├── iam.json                 # IAM users and policies
+    │   ├── bucket_policies.json     # Bucket policies
+    │   ├── replication_rules.json
+    │   └── connections.json         # Remote S3 connections
+    ├── buckets/<bucket>/
+    │   ├── meta/                    # Object metadata (.meta.json)
+    │   ├── versions/                # Archived object versions
+    │   └── .bucket.json             # Bucket config (versioning, CORS)
+    ├── multipart/                   # Active multipart uploads
+    └── keys/                        # Encryption keys (SSE-S3/KMS)
+```
+
+## API Reference
+
+All endpoints require AWS Signature Version 4 authentication unless using presigned URLs or public bucket policies.
+
+### Bucket Operations
+
+| Method | Endpoint | Description |
+|--------|----------|-------------|
+| `GET` | `/` | List all buckets |
+| `PUT` | `/<bucket>` | Create bucket |
+| `DELETE` | `/<bucket>` | Delete bucket (must be empty) |
+| `HEAD` | `/<bucket>` | Check bucket exists |
+
+### Object Operations
+
+| Method | Endpoint | Description |
+|--------|----------|-------------|
+| `GET` | `/<bucket>` | List objects (supports `list-type=2`) |
+| `PUT` | `/<bucket>/<key>` | Upload object |
+| `GET` | `/<bucket>/<key>` | Download object |
+| `DELETE` | `/<bucket>/<key>` | Delete object |
+| `HEAD` | `/<bucket>/<key>` | Get object metadata |
+| `POST` | `/<bucket>/<key>?uploads` | Initiate multipart upload |
+| `PUT` | `/<bucket>/<key>?partNumber=N&uploadId=X` | Upload part |
+| `POST` | `/<bucket>/<key>?uploadId=X` | Complete multipart upload |
+| `DELETE` | `/<bucket>/<key>?uploadId=X` | Abort multipart upload |
+
+### Presigned URLs
+
+| Method | Endpoint | Description |
+|--------|----------|-------------|
+| `POST` | `/presign/<bucket>/<key>` | Generate presigned URL |
+
+### Bucket Policies
+
+| Method | Endpoint | Description |
+|--------|----------|-------------|
+| `GET` | `/bucket-policy/<bucket>` | Get bucket policy |
+| `PUT` | `/bucket-policy/<bucket>` | Set bucket policy |
+| `DELETE` | `/bucket-policy/<bucket>` | Delete bucket policy |
+
+### Versioning
+
+| Method | Endpoint | Description |
+|--------|----------|-------------|
+| `GET` | `/<bucket>/<key>?versionId=X` | Get specific version |
+| `DELETE` | `/<bucket>/<key>?versionId=X` | Delete specific version |
+| `GET` | `/<bucket>?versions` | List object versions |
+
+### Health Check
+
+| Method | Endpoint | Description |
+|--------|----------|-------------|
+| `GET` | `/healthz` | Health check endpoint |
+
+## IAM & Access Control
+
+### Users and Access Keys
+
+On first run, MyFSIO creates a default admin user (`localadmin`/`localadmin`). Use the IAM dashboard to:
+
+- Create and delete users
+- Generate and rotate access keys
+- Attach inline policies to users
+- Control IAM management permissions
+
+### Bucket Policies
+
+Bucket policies follow AWS policy grammar (Version `2012-10-17`) with support for:
+
+- Principal-based access (`*` for anonymous, specific users)
+- Action-based permissions (`s3:GetObject`, `s3:PutObject`, etc.)
+- Resource patterns (`arn:aws:s3:::bucket/*`)
+- Condition keys
+
+**Policy Presets:**
+
+- **Public:** Grants anonymous read access (`s3:GetObject`, `s3:ListBucket`)
+- **Private:** Removes bucket policy (IAM-only access)
+- **Custom:** Manual policy editing with draft preservation
+
+Policies hot-reload when the JSON file changes.
+
+## Server-Side Encryption
+
+MyFSIO supports two encryption modes:
+
+- **SSE-S3:** Server-managed keys with automatic key rotation
+- **SSE-KMS:** Customer-managed keys via built-in KMS
+
+Enable encryption with:
+
+```bash
+ENCRYPTION_ENABLED=true python run.py
+```
+
+## Cross-Bucket Replication
+
+Replicate objects to remote S3-compatible endpoints:
+
+1. Configure remote connections in the UI
+2. Create replication rules specifying source/destination
+3. Objects are automatically replicated on upload
+
+## Docker
+
+```bash
+docker build -t myfsio .
+docker run -p 5000:5000 -p 5100:5100 -v ./data:/app/data myfsio
+```
 
 ## Testing
 
 ```bash
-pytest -q
+# Run all tests
+pytest tests/ -v
+
+# Run specific test file
+pytest tests/test_api.py -v
+
+# Run with coverage
+pytest tests/ --cov=app --cov-report=html
 ```
 
 ## References
 
-- [Amazon Simple Storage Service Documentation](https://docs.aws.amazon.com/s3/)
-- [Signature Version 4 Signing Process](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html)
-- [Amazon S3 Bucket Policy Examples](https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-bucket-policies.html)
+- [Amazon S3 Documentation](https://docs.aws.amazon.com/s3/)
+- [AWS Signature Version 4](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html)
+- [S3 Bucket Policy Examples](https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-bucket-policies.html)
````
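The new README documents the presign endpoint but not the request shape. A hypothetical client sketch with Python's `requests`; the `expires_in` body field and the `url` response key are assumptions (this diff does not show them), and SigV4 request signing is omitted:

```python
# Hypothetical sketch; field names are assumptions, not MyFSIO's documented API.
import requests

resp = requests.post(
    "http://127.0.0.1:5000/presign/my-bucket/report.pdf",
    json={"expires_in": 3600},  # assumed field; the README allows 1 second to 7 days
    timeout=10,
)
resp.raise_for_status()
presigned_url = resp.json()["url"]  # assumed response key

# The URL embeds the SigV4 signature, so a plain GET works until expiry.
print(requests.get(presigned_url, timeout=10).status_code)
```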
app/__init__.py

```diff
@@ -1,4 +1,3 @@
-"""Application factory for the mini S3-compatible object store."""
 from __future__ import annotations
 
 import logging
@@ -16,6 +15,8 @@ from flask_cors import CORS
 from flask_wtf.csrf import CSRFError
 from werkzeug.middleware.proxy_fix import ProxyFix
 
+from .access_logging import AccessLoggingService
+from .acl import AclService
 from .bucket_policies import BucketPolicyStore
 from .config import AppConfig
 from .connections import ConnectionStore
@@ -23,6 +24,9 @@ from .encryption import EncryptionManager
 from .extensions import limiter, csrf
 from .iam import IamService
 from .kms import KMSManager
+from .lifecycle import LifecycleManager
+from .notifications import NotificationService
+from .object_lock import ObjectLockService
 from .replication import ReplicationManager
 from .secret_store import EphemeralSecretStore
 from .storage import ObjectStorage
@@ -120,7 +124,7 @@ def create_app(
     )
 
     connections = ConnectionStore(connections_path)
-    replication = ReplicationManager(storage, connections, replication_rules_path)
+    replication = ReplicationManager(storage, connections, replication_rules_path, storage_root)
 
     encryption_config = {
         "encryption_enabled": app.config.get("ENCRYPTION_ENABLED", False),
@@ -140,6 +144,22 @@ def create_app(
         from .encrypted_storage import EncryptedObjectStorage
         storage = EncryptedObjectStorage(storage, encryption_manager)
 
+    acl_service = AclService(storage_root)
+    object_lock_service = ObjectLockService(storage_root)
+    notification_service = NotificationService(storage_root)
+    access_logging_service = AccessLoggingService(storage_root)
+    access_logging_service.set_storage(storage)
+
+    lifecycle_manager = None
+    if app.config.get("LIFECYCLE_ENABLED", False):
+        base_storage = storage.storage if hasattr(storage, 'storage') else storage
+        lifecycle_manager = LifecycleManager(
+            base_storage,
+            interval_seconds=app.config.get("LIFECYCLE_INTERVAL_SECONDS", 3600),
+            storage_root=storage_root,
+        )
+        lifecycle_manager.start()
+
     app.extensions["object_storage"] = storage
     app.extensions["iam"] = iam
     app.extensions["bucket_policies"] = bucket_policies
@@ -149,6 +169,11 @@ def create_app(
     app.extensions["replication"] = replication
     app.extensions["encryption"] = encryption_manager
     app.extensions["kms"] = kms_manager
+    app.extensions["acl"] = acl_service
+    app.extensions["lifecycle"] = lifecycle_manager
+    app.extensions["object_lock"] = object_lock_service
+    app.extensions["notifications"] = notification_service
+    app.extensions["access_logging"] = access_logging_service
 
     @app.errorhandler(500)
     def internal_error(error):
@@ -266,16 +291,16 @@ def _configure_logging(app: Flask) -> None:
         "%(asctime)s | %(levelname)s | %(request_id)s | %(method)s %(path)s | %(message)s"
     )
 
-    # Stream Handler (stdout) - Primary for Docker
     stream_handler = logging.StreamHandler(sys.stdout)
     stream_handler.setFormatter(formatter)
     stream_handler.addFilter(_RequestContextFilter())
 
     logger = app.logger
+    for handler in logger.handlers[:]:
+        handler.close()
     logger.handlers.clear()
     logger.addHandler(stream_handler)
 
-    # File Handler (optional, if configured)
     if app.config.get("LOG_TO_FILE"):
         log_file = Path(app.config["LOG_FILE"])
         log_file.parent.mkdir(parents=True, exist_ok=True)
```
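Each new service is registered under `app.extensions`, which is how Flask views typically reach them without circular imports. A minimal sketch using the keys registered in the hunk above; the route itself is hypothetical, and `get_stats()` comes from the new `app/access_logging.py` module below:

```python
from flask import Blueprint, current_app, jsonify

bp = Blueprint("admin", __name__)

@bp.get("/admin/access-log-stats")  # hypothetical endpoint, not part of this change
def access_log_stats():
    svc = current_app.extensions["access_logging"]  # registered in create_app()
    return jsonify(svc.get_stats())
```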
app/access_logging.py (new file, +265 lines)

```python
from __future__ import annotations

import io
import json
import logging
import queue
import threading
import time
import uuid
from dataclasses import dataclass, field
from datetime import datetime, timezone
from pathlib import Path
from typing import Any, Dict, List, Optional

logger = logging.getLogger(__name__)


@dataclass
class AccessLogEntry:
    bucket_owner: str = "-"
    bucket: str = "-"
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    remote_ip: str = "-"
    requester: str = "-"
    request_id: str = field(default_factory=lambda: uuid.uuid4().hex[:16].upper())
    operation: str = "-"
    key: str = "-"
    request_uri: str = "-"
    http_status: int = 200
    error_code: str = "-"
    bytes_sent: int = 0
    object_size: int = 0
    total_time_ms: int = 0
    turn_around_time_ms: int = 0
    referrer: str = "-"
    user_agent: str = "-"
    version_id: str = "-"
    host_id: str = "-"
    signature_version: str = "SigV4"
    cipher_suite: str = "-"
    authentication_type: str = "AuthHeader"
    host_header: str = "-"
    tls_version: str = "-"

    def to_log_line(self) -> str:
        time_str = self.timestamp.strftime("[%d/%b/%Y:%H:%M:%S %z]")
        return (
            f'{self.bucket_owner} {self.bucket} {time_str} {self.remote_ip} '
            f'{self.requester} {self.request_id} {self.operation} {self.key} '
            f'"{self.request_uri}" {self.http_status} {self.error_code or "-"} '
            f'{self.bytes_sent or "-"} {self.object_size or "-"} {self.total_time_ms or "-"} '
            f'{self.turn_around_time_ms or "-"} "{self.referrer}" "{self.user_agent}" {self.version_id}'
        )

    def to_dict(self) -> Dict[str, Any]:
        return {
            "bucket_owner": self.bucket_owner,
            "bucket": self.bucket,
            "timestamp": self.timestamp.isoformat(),
            "remote_ip": self.remote_ip,
            "requester": self.requester,
            "request_id": self.request_id,
            "operation": self.operation,
            "key": self.key,
            "request_uri": self.request_uri,
            "http_status": self.http_status,
            "error_code": self.error_code,
            "bytes_sent": self.bytes_sent,
            "object_size": self.object_size,
            "total_time_ms": self.total_time_ms,
            "referrer": self.referrer,
            "user_agent": self.user_agent,
            "version_id": self.version_id,
        }


@dataclass
class LoggingConfiguration:
    target_bucket: str
    target_prefix: str = ""
    enabled: bool = True

    def to_dict(self) -> Dict[str, Any]:
        return {
            "LoggingEnabled": {
                "TargetBucket": self.target_bucket,
                "TargetPrefix": self.target_prefix,
            }
        }

    @classmethod
    def from_dict(cls, data: Dict[str, Any]) -> Optional["LoggingConfiguration"]:
        logging_enabled = data.get("LoggingEnabled")
        if not logging_enabled:
            return None
        return cls(
            target_bucket=logging_enabled.get("TargetBucket", ""),
            target_prefix=logging_enabled.get("TargetPrefix", ""),
            enabled=True,
        )


class AccessLoggingService:
    def __init__(self, storage_root: Path, flush_interval: int = 60, max_buffer_size: int = 1000):
        self.storage_root = storage_root
        self.flush_interval = flush_interval
        self.max_buffer_size = max_buffer_size
        self._configs: Dict[str, LoggingConfiguration] = {}
        self._buffer: Dict[str, List[AccessLogEntry]] = {}
        self._buffer_lock = threading.Lock()
        self._shutdown = threading.Event()
        self._storage = None

        self._flush_thread = threading.Thread(target=self._flush_loop, name="access-log-flush", daemon=True)
        self._flush_thread.start()

    def set_storage(self, storage: Any) -> None:
        self._storage = storage

    def _config_path(self, bucket_name: str) -> Path:
        return self.storage_root / ".myfsio.sys" / "buckets" / bucket_name / "logging.json"

    def get_bucket_logging(self, bucket_name: str) -> Optional[LoggingConfiguration]:
        if bucket_name in self._configs:
            return self._configs[bucket_name]

        config_path = self._config_path(bucket_name)
        if not config_path.exists():
            return None

        try:
            data = json.loads(config_path.read_text(encoding="utf-8"))
            config = LoggingConfiguration.from_dict(data)
            if config:
                self._configs[bucket_name] = config
            return config
        except (json.JSONDecodeError, OSError) as e:
            logger.warning(f"Failed to load logging config for {bucket_name}: {e}")
            return None

    def set_bucket_logging(self, bucket_name: str, config: LoggingConfiguration) -> None:
        config_path = self._config_path(bucket_name)
        config_path.parent.mkdir(parents=True, exist_ok=True)
        config_path.write_text(json.dumps(config.to_dict(), indent=2), encoding="utf-8")
        self._configs[bucket_name] = config

    def delete_bucket_logging(self, bucket_name: str) -> None:
        config_path = self._config_path(bucket_name)
        try:
            if config_path.exists():
                config_path.unlink()
        except OSError:
            pass
        self._configs.pop(bucket_name, None)

    def log_request(
        self,
        bucket_name: str,
        *,
        operation: str,
        key: str = "-",
        remote_ip: str = "-",
        requester: str = "-",
        request_uri: str = "-",
        http_status: int = 200,
        error_code: str = "",
        bytes_sent: int = 0,
        object_size: int = 0,
        total_time_ms: int = 0,
        referrer: str = "-",
        user_agent: str = "-",
        version_id: str = "-",
        request_id: str = "",
    ) -> None:
        config = self.get_bucket_logging(bucket_name)
        if not config or not config.enabled:
            return

        entry = AccessLogEntry(
            bucket_owner="local-owner",
            bucket=bucket_name,
            remote_ip=remote_ip,
            requester=requester,
            request_id=request_id or uuid.uuid4().hex[:16].upper(),
            operation=operation,
            key=key,
            request_uri=request_uri,
            http_status=http_status,
            error_code=error_code,
            bytes_sent=bytes_sent,
            object_size=object_size,
            total_time_ms=total_time_ms,
            referrer=referrer,
            user_agent=user_agent,
            version_id=version_id,
        )

        target_key = f"{config.target_bucket}:{config.target_prefix}"
        should_flush = False
        with self._buffer_lock:
            if target_key not in self._buffer:
                self._buffer[target_key] = []
            self._buffer[target_key].append(entry)
            should_flush = len(self._buffer[target_key]) >= self.max_buffer_size

        if should_flush:
            self._flush_buffer(target_key)

    def _flush_loop(self) -> None:
        while not self._shutdown.is_set():
            self._shutdown.wait(timeout=self.flush_interval)
            if not self._shutdown.is_set():
                self._flush_all()

    def _flush_all(self) -> None:
        with self._buffer_lock:
            targets = list(self._buffer.keys())

        for target_key in targets:
            self._flush_buffer(target_key)

    def _flush_buffer(self, target_key: str) -> None:
        with self._buffer_lock:
            entries = self._buffer.pop(target_key, [])

        if not entries or not self._storage:
            return

        try:
            bucket_name, prefix = target_key.split(":", 1)
        except ValueError:
            logger.error(f"Invalid target key: {target_key}")
            return

        now = datetime.now(timezone.utc)
        log_key = f"{prefix}{now.strftime('%Y-%m-%d-%H-%M-%S')}-{uuid.uuid4().hex[:8]}"

        log_content = "\n".join(entry.to_log_line() for entry in entries) + "\n"

        try:
            stream = io.BytesIO(log_content.encode("utf-8"))
            self._storage.put_object(bucket_name, log_key, stream, enforce_quota=False)
            logger.info(f"Flushed {len(entries)} access log entries to {bucket_name}/{log_key}")
        except Exception as e:
            logger.error(f"Failed to write access log to {bucket_name}/{log_key}: {e}")
            with self._buffer_lock:
                if target_key not in self._buffer:
                    self._buffer[target_key] = []
                self._buffer[target_key] = entries + self._buffer[target_key]

    def flush(self) -> None:
        self._flush_all()

    def shutdown(self) -> None:
        self._shutdown.set()
        self._flush_all()
        self._flush_thread.join(timeout=5.0)

    def get_stats(self) -> Dict[str, Any]:
        with self._buffer_lock:
            buffered = sum(len(entries) for entries in self._buffer.values())
        return {
            "buffered_entries": buffered,
            "target_buckets": len(self._buffer),
        }
```
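A minimal usage sketch of the buffering behaviour; the storage object is a stand-in stub exposing only the `put_object(bucket, key, stream, enforce_quota=False)` call the service makes, and it writes a real `logging.json` under `data/`:

```python
from pathlib import Path

from app.access_logging import AccessLoggingService, LoggingConfiguration

class PrintStorage:
    """Stand-in for the real ObjectStorage; only the method the service calls."""
    def put_object(self, bucket, key, stream, enforce_quota=False):
        print(f"-> {bucket}/{key}: {len(stream.read())} bytes")

svc = AccessLoggingService(Path("data"), flush_interval=5, max_buffer_size=100)
svc.set_storage(PrintStorage())

# Route access logs for "photos" into the "logs" bucket under a "photos/" prefix.
svc.set_bucket_logging("photos", LoggingConfiguration(target_bucket="logs", target_prefix="photos/"))

# Entries buffer in memory; the daemon thread flushes every flush_interval
# seconds, or immediately once max_buffer_size entries accumulate.
svc.log_request("photos", operation="REST.GET.OBJECT", key="cat.jpg",
                remote_ip="127.0.0.1", requester="localadmin", http_status=200)

svc.shutdown()  # sets the event, drains the buffer, joins the thread
```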
app/acl.py (new file, +204 lines)

```python
from __future__ import annotations

import json
from dataclasses import dataclass, field
from pathlib import Path
from typing import Any, Dict, List, Optional, Set


ACL_PERMISSION_FULL_CONTROL = "FULL_CONTROL"
ACL_PERMISSION_WRITE = "WRITE"
ACL_PERMISSION_WRITE_ACP = "WRITE_ACP"
ACL_PERMISSION_READ = "READ"
ACL_PERMISSION_READ_ACP = "READ_ACP"

ALL_PERMISSIONS = {
    ACL_PERMISSION_FULL_CONTROL,
    ACL_PERMISSION_WRITE,
    ACL_PERMISSION_WRITE_ACP,
    ACL_PERMISSION_READ,
    ACL_PERMISSION_READ_ACP,
}

PERMISSION_TO_ACTIONS = {
    ACL_PERMISSION_FULL_CONTROL: {"read", "write", "delete", "list", "share"},
    ACL_PERMISSION_WRITE: {"write", "delete"},
    ACL_PERMISSION_WRITE_ACP: {"share"},
    ACL_PERMISSION_READ: {"read", "list"},
    ACL_PERMISSION_READ_ACP: {"share"},
}

GRANTEE_ALL_USERS = "*"
GRANTEE_AUTHENTICATED_USERS = "authenticated"


@dataclass
class AclGrant:
    grantee: str
    permission: str

    def to_dict(self) -> Dict[str, str]:
        return {"grantee": self.grantee, "permission": self.permission}

    @classmethod
    def from_dict(cls, data: Dict[str, str]) -> "AclGrant":
        return cls(grantee=data["grantee"], permission=data["permission"])


@dataclass
class Acl:
    owner: str
    grants: List[AclGrant] = field(default_factory=list)

    def to_dict(self) -> Dict[str, Any]:
        return {
            "owner": self.owner,
            "grants": [g.to_dict() for g in self.grants],
        }

    @classmethod
    def from_dict(cls, data: Dict[str, Any]) -> "Acl":
        return cls(
            owner=data.get("owner", ""),
            grants=[AclGrant.from_dict(g) for g in data.get("grants", [])],
        )

    def get_allowed_actions(self, principal_id: Optional[str], is_authenticated: bool = True) -> Set[str]:
        actions: Set[str] = set()
        if principal_id and principal_id == self.owner:
            actions.update(PERMISSION_TO_ACTIONS[ACL_PERMISSION_FULL_CONTROL])
        for grant in self.grants:
            if grant.grantee == GRANTEE_ALL_USERS:
                actions.update(PERMISSION_TO_ACTIONS.get(grant.permission, set()))
            elif grant.grantee == GRANTEE_AUTHENTICATED_USERS and is_authenticated:
                actions.update(PERMISSION_TO_ACTIONS.get(grant.permission, set()))
            elif principal_id and grant.grantee == principal_id:
                actions.update(PERMISSION_TO_ACTIONS.get(grant.permission, set()))
        return actions


CANNED_ACLS = {
    "private": lambda owner: Acl(
        owner=owner,
        grants=[AclGrant(grantee=owner, permission=ACL_PERMISSION_FULL_CONTROL)],
    ),
    "public-read": lambda owner: Acl(
        owner=owner,
        grants=[
            AclGrant(grantee=owner, permission=ACL_PERMISSION_FULL_CONTROL),
            AclGrant(grantee=GRANTEE_ALL_USERS, permission=ACL_PERMISSION_READ),
        ],
    ),
    "public-read-write": lambda owner: Acl(
        owner=owner,
        grants=[
            AclGrant(grantee=owner, permission=ACL_PERMISSION_FULL_CONTROL),
            AclGrant(grantee=GRANTEE_ALL_USERS, permission=ACL_PERMISSION_READ),
            AclGrant(grantee=GRANTEE_ALL_USERS, permission=ACL_PERMISSION_WRITE),
        ],
    ),
    "authenticated-read": lambda owner: Acl(
        owner=owner,
        grants=[
            AclGrant(grantee=owner, permission=ACL_PERMISSION_FULL_CONTROL),
            AclGrant(grantee=GRANTEE_AUTHENTICATED_USERS, permission=ACL_PERMISSION_READ),
        ],
    ),
    "bucket-owner-read": lambda owner: Acl(
        owner=owner,
        grants=[
            AclGrant(grantee=owner, permission=ACL_PERMISSION_FULL_CONTROL),
        ],
    ),
    "bucket-owner-full-control": lambda owner: Acl(
        owner=owner,
        grants=[
            AclGrant(grantee=owner, permission=ACL_PERMISSION_FULL_CONTROL),
        ],
    ),
}


def create_canned_acl(canned_acl: str, owner: str) -> Acl:
    factory = CANNED_ACLS.get(canned_acl)
    if not factory:
        return CANNED_ACLS["private"](owner)
    return factory(owner)


class AclService:
    def __init__(self, storage_root: Path):
        self.storage_root = storage_root
        self._bucket_acl_cache: Dict[str, Acl] = {}

    def _bucket_acl_path(self, bucket_name: str) -> Path:
        return self.storage_root / ".myfsio.sys" / "buckets" / bucket_name / ".acl.json"

    def get_bucket_acl(self, bucket_name: str) -> Optional[Acl]:
        if bucket_name in self._bucket_acl_cache:
            return self._bucket_acl_cache[bucket_name]
        acl_path = self._bucket_acl_path(bucket_name)
        if not acl_path.exists():
            return None
        try:
            data = json.loads(acl_path.read_text(encoding="utf-8"))
            acl = Acl.from_dict(data)
            self._bucket_acl_cache[bucket_name] = acl
            return acl
        except (OSError, json.JSONDecodeError):
            return None

    def set_bucket_acl(self, bucket_name: str, acl: Acl) -> None:
        acl_path = self._bucket_acl_path(bucket_name)
        acl_path.parent.mkdir(parents=True, exist_ok=True)
        acl_path.write_text(json.dumps(acl.to_dict(), indent=2), encoding="utf-8")
        self._bucket_acl_cache[bucket_name] = acl

    def set_bucket_canned_acl(self, bucket_name: str, canned_acl: str, owner: str) -> Acl:
        acl = create_canned_acl(canned_acl, owner)
        self.set_bucket_acl(bucket_name, acl)
        return acl

    def delete_bucket_acl(self, bucket_name: str) -> None:
        acl_path = self._bucket_acl_path(bucket_name)
        if acl_path.exists():
            acl_path.unlink()
        self._bucket_acl_cache.pop(bucket_name, None)

    def evaluate_bucket_acl(
        self,
        bucket_name: str,
        principal_id: Optional[str],
        action: str,
        is_authenticated: bool = True,
    ) -> bool:
        acl = self.get_bucket_acl(bucket_name)
        if not acl:
            return False
        allowed_actions = acl.get_allowed_actions(principal_id, is_authenticated)
        return action in allowed_actions

    def get_object_acl(self, bucket_name: str, object_key: str, object_metadata: Dict[str, Any]) -> Optional[Acl]:
        acl_data = object_metadata.get("__acl__")
        if not acl_data:
            return None
        try:
            return Acl.from_dict(acl_data)
        except (TypeError, KeyError):
            return None

    def create_object_acl_metadata(self, acl: Acl) -> Dict[str, Any]:
        return {"__acl__": acl.to_dict()}

    def evaluate_object_acl(
        self,
        object_metadata: Dict[str, Any],
        principal_id: Optional[str],
        action: str,
        is_authenticated: bool = True,
    ) -> bool:
        acl = self.get_object_acl("", "", object_metadata)
        if not acl:
            return False
        allowed_actions = acl.get_allowed_actions(principal_id, is_authenticated)
        return action in allowed_actions
```
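Since `AclService` ships as a self-contained module, its pieces are easy to exercise directly: canned ACLs expand into grant lists, and evaluation maps each grant to the coarse verbs in `PERMISSION_TO_ACTIONS`. A minimal sketch (it writes a real `.acl.json` under `data/`):

```python
from pathlib import Path

from app.acl import AclService, create_canned_acl

svc = AclService(Path("data"))

# "public-read": the owner gets FULL_CONTROL and the "*" grantee gets READ,
# which PERMISSION_TO_ACTIONS expands to {"read", "list"}.
svc.set_bucket_acl("photos", create_canned_acl("public-read", owner="localadmin"))

assert svc.evaluate_bucket_acl("photos", None, "read", is_authenticated=False)       # anonymous read
assert not svc.evaluate_bucket_acl("photos", None, "write", is_authenticated=False)  # but not write
assert svc.evaluate_bucket_acl("photos", "localadmin", "delete")                     # owner full control
```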
app/bucket_policies.py

```diff
@@ -1,11 +1,12 @@
-"""Bucket policy loader/enforcer with a subset of AWS semantics."""
 from __future__ import annotations
 
 import json
+import re
+import time
 from dataclasses import dataclass
-from fnmatch import fnmatch
+from fnmatch import fnmatch, translate
 from pathlib import Path
-from typing import Any, Dict, Iterable, List, Optional, Sequence
+from typing import Any, Dict, Iterable, List, Optional, Pattern, Sequence, Tuple
 
 
 RESOURCE_PREFIX = "arn:aws:s3:::"
@@ -133,7 +134,22 @@ class BucketPolicyStatement:
     effect: str
     principals: List[str] | str
     actions: List[str]
-    resources: List[tuple[str | None, str | None]]
+    resources: List[Tuple[str | None, str | None]]
+    # Performance: Pre-compiled regex patterns for resource matching
+    _compiled_patterns: List[Tuple[str | None, Optional[Pattern[str]]]] | None = None
+
+    def _get_compiled_patterns(self) -> List[Tuple[str | None, Optional[Pattern[str]]]]:
+        """Lazily compile fnmatch patterns to regex for faster matching."""
+        if self._compiled_patterns is None:
+            self._compiled_patterns = []
+            for resource_bucket, key_pattern in self.resources:
+                if key_pattern is None:
+                    self._compiled_patterns.append((resource_bucket, None))
+                else:
+                    # Convert fnmatch pattern to regex
+                    regex_pattern = translate(key_pattern)
+                    self._compiled_patterns.append((resource_bucket, re.compile(regex_pattern)))
+        return self._compiled_patterns
 
     def matches_principal(self, access_key: Optional[str]) -> bool:
         if self.principals == "*":
@@ -149,15 +165,16 @@ class BucketPolicyStatement:
     def matches_resource(self, bucket: Optional[str], object_key: Optional[str]) -> bool:
         bucket = (bucket or "*").lower()
         key = object_key or ""
-        for resource_bucket, key_pattern in self.resources:
+        for resource_bucket, compiled_pattern in self._get_compiled_patterns():
             resource_bucket = (resource_bucket or "*").lower()
             if resource_bucket not in {"*", bucket}:
                 continue
-            if key_pattern is None:
+            if compiled_pattern is None:
                 if not key:
                     return True
                 continue
-            if fnmatch(key, key_pattern):
+            # Performance: Use pre-compiled regex instead of fnmatch
+            if compiled_pattern.match(key):
                 return True
         return False
 
@@ -174,8 +191,16 @@ class BucketPolicyStore:
         self._policies: Dict[str, List[BucketPolicyStatement]] = {}
         self._load()
         self._last_mtime = self._current_mtime()
+        # Performance: Avoid stat() on every request
+        self._last_stat_check = 0.0
+        self._stat_check_interval = 1.0  # Only check mtime every 1 second
 
     def maybe_reload(self) -> None:
+        # Performance: Skip stat check if we checked recently
+        now = time.time()
+        if now - self._last_stat_check < self._stat_check_interval:
+            return
+        self._last_stat_check = now
        current = self._current_mtime()
        if current is None or current == self._last_mtime:
            return
```
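The optimization swaps per-request `fnmatch` calls for patterns compiled once via `fnmatch.translate`; the two are interchangeable for these key patterns, which a standalone check (independent of MyFSIO) confirms:

```python
import re
from fnmatch import fnmatch, translate

pattern = "uploads/*.png"
compiled = re.compile(translate(pattern))  # compiled once, reused per request

for key in ["uploads/cat.png", "uploads/dog.jpg", "cat.png"]:
    # compiled.match() and fnmatch() agree on these keys; the compiled form
    # just skips re-translating the pattern on every call, which is the point.
    assert bool(compiled.match(key)) == fnmatch(key, pattern)
```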
app/config.py

```diff
@@ -1,4 +1,3 @@
-"""Configuration helpers for the S3 clone application."""
 from __future__ import annotations
 
 import os
@@ -74,6 +73,8 @@ class AppConfig:
     kms_keys_path: Path
     default_encryption_algorithm: str
     display_timezone: str
+    lifecycle_enabled: bool
+    lifecycle_interval_seconds: int
 
     @classmethod
     def from_env(cls, overrides: Optional[Dict[str, Any]] = None) -> "AppConfig":
@@ -83,7 +84,7 @@ class AppConfig:
             return overrides.get(name, os.getenv(name, default))
 
         storage_root = Path(_get("STORAGE_ROOT", PROJECT_ROOT / "data")).resolve()
-        max_upload_size = int(_get("MAX_UPLOAD_SIZE", 1024 * 1024 * 1024))  # 1 GiB default
+        max_upload_size = int(_get("MAX_UPLOAD_SIZE", 1024 * 1024 * 1024))
         ui_page_size = int(_get("UI_PAGE_SIZE", 100))
         auth_max_attempts = int(_get("AUTH_MAX_ATTEMPTS", 5))
         auth_lockout_minutes = int(_get("AUTH_LOCKOUT_MINUTES", 15))
@@ -91,6 +92,8 @@ class AppConfig:
         secret_ttl_seconds = int(_get("SECRET_TTL_SECONDS", 300))
         stream_chunk_size = int(_get("STREAM_CHUNK_SIZE", 64 * 1024))
         multipart_min_part_size = int(_get("MULTIPART_MIN_PART_SIZE", 5 * 1024 * 1024))
+        lifecycle_enabled = _get("LIFECYCLE_ENABLED", "false").lower() in ("true", "1", "yes")
+        lifecycle_interval_seconds = int(_get("LIFECYCLE_INTERVAL_SECONDS", 3600))
         default_secret = "dev-secret-key"
         secret_key = str(_get("SECRET_KEY", default_secret))
 
@@ -105,6 +108,10 @@ class AppConfig:
             try:
                 secret_file.parent.mkdir(parents=True, exist_ok=True)
                 secret_file.write_text(generated)
+                try:
+                    os.chmod(secret_file, 0o600)
+                except OSError:
+                    pass
                 secret_key = generated
             except OSError:
                 secret_key = generated
@@ -198,7 +205,9 @@ class AppConfig:
             kms_enabled=kms_enabled,
             kms_keys_path=kms_keys_path,
             default_encryption_algorithm=default_encryption_algorithm,
-            display_timezone=display_timezone)
+            display_timezone=display_timezone,
+            lifecycle_enabled=lifecycle_enabled,
+            lifecycle_interval_seconds=lifecycle_interval_seconds)
 
     def validate_and_report(self) -> list[str]:
         """Validate configuration and return a list of warnings/issues.
```
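The new lifecycle settings flow through the same `_get` helper as every other option, so they can be supplied either as environment variables or via the `overrides` dict. A minimal sketch, assuming `from_env` can be invoked standalone (which this diff does not show):

```python
from app.config import AppConfig

# Overrides shadow os.environ inside _get(); the same values could come from
# LIFECYCLE_ENABLED / LIFECYCLE_INTERVAL_SECONDS environment variables.
cfg = AppConfig.from_env({
    "LIFECYCLE_ENABLED": "yes",          # parsed via .lower() in ("true", "1", "yes")
    "LIFECYCLE_INTERVAL_SECONDS": "600",
})
assert cfg.lifecycle_enabled
assert cfg.lifecycle_interval_seconds == 600
```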
app/connections.py

```diff
@@ -1,4 +1,3 @@
-"""Manage remote S3 connections."""
 from __future__ import annotations
 
 import json
```
app/encrypted_storage.py

```diff
@@ -1,4 +1,3 @@
-"""Encrypted storage layer that wraps ObjectStorage with encryption support."""
 from __future__ import annotations
 
 import io
@@ -90,6 +89,8 @@ class EncryptedObjectStorage:
 
         Returns:
             ObjectMeta with object information
+
+        Performance: Uses streaming encryption for large files to reduce memory usage.
         """
         should_encrypt, algorithm, detected_kms_key = self._should_encrypt(
             bucket_name, server_side_encryption
@@ -99,20 +100,17 @@ class EncryptedObjectStorage:
            kms_key_id = detected_kms_key
 
        if should_encrypt:
-            data = stream.read()
-
            try:
-                ciphertext, enc_metadata = self.encryption.encrypt_object(
-                    data,
+                # Performance: Use streaming encryption to avoid loading entire file into memory
+                encrypted_stream, enc_metadata = self.encryption.encrypt_stream(
+                    stream,
                     algorithm=algorithm,
-                    kms_key_id=kms_key_id,
                     context={"bucket": bucket_name, "key": object_key},
                 )
 
                 combined_metadata = metadata.copy() if metadata else {}
                 combined_metadata.update(enc_metadata.to_dict())
 
-                encrypted_stream = io.BytesIO(ciphertext)
                 result = self.storage.put_object(
                     bucket_name,
                     object_key,
@@ -138,23 +136,24 @@ class EncryptedObjectStorage:
 
         Returns:
             Tuple of (data, metadata)
+
+        Performance: Uses streaming decryption to reduce memory usage.
         """
         path = self.storage.get_object_path(bucket_name, object_key)
         metadata = self.storage.get_object_metadata(bucket_name, object_key)
 
-        with path.open("rb") as f:
-            data = f.read()
-
         enc_metadata = EncryptionMetadata.from_dict(metadata)
         if enc_metadata:
             try:
-                data = self.encryption.decrypt_object(
-                    data,
-                    enc_metadata,
-                    context={"bucket": bucket_name, "key": object_key},
-                )
+                # Performance: Use streaming decryption to avoid loading entire file into memory
+                with path.open("rb") as f:
+                    decrypted_stream = self.encryption.decrypt_stream(f, enc_metadata)
+                    data = decrypted_stream.read()
             except EncryptionError as exc:
                 raise StorageError(f"Decryption failed: {exc}") from exc
+        else:
+            with path.open("rb") as f:
+                data = f.read()
 
         clean_metadata = {
             k: v for k, v in metadata.items()
```
@@ -157,10 +157,7 @@ class LocalKeyEncryption(EncryptionProvider):
|
|||||||
def decrypt(self, ciphertext: bytes, nonce: bytes, encrypted_data_key: bytes,
|
def decrypt(self, ciphertext: bytes, nonce: bytes, encrypted_data_key: bytes,
|
||||||
key_id: str, context: Dict[str, str] | None = None) -> bytes:
|
key_id: str, context: Dict[str, str] | None = None) -> bytes:
|
||||||
"""Decrypt data using envelope encryption."""
|
"""Decrypt data using envelope encryption."""
|
||||||
# Decrypt the data key
|
|
||||||
data_key = self._decrypt_data_key(encrypted_data_key)
|
data_key = self._decrypt_data_key(encrypted_data_key)
|
||||||
|
|
||||||
# Decrypt the data
|
|
||||||
aesgcm = AESGCM(data_key)
|
aesgcm = AESGCM(data_key)
|
||||||
try:
|
try:
|
||||||
return aesgcm.decrypt(nonce, ciphertext, None)
|
return aesgcm.decrypt(nonce, ciphertext, None)
|
||||||
@@ -183,21 +180,26 @@ class StreamingEncryptor:
|
|||||||
self.chunk_size = chunk_size
|
self.chunk_size = chunk_size
|
||||||
|
|
||||||
def _derive_chunk_nonce(self, base_nonce: bytes, chunk_index: int) -> bytes:
|
def _derive_chunk_nonce(self, base_nonce: bytes, chunk_index: int) -> bytes:
|
||||||
"""Derive a unique nonce for each chunk."""
|
"""Derive a unique nonce for each chunk.
|
||||||
# XOR the base nonce with the chunk index
|
|
||||||
nonce_int = int.from_bytes(base_nonce, "big")
|
Performance: Use direct byte manipulation instead of full int conversion.
|
||||||
derived = nonce_int ^ chunk_index
|
"""
|
||||||
return derived.to_bytes(12, "big")
|
# Performance: Only modify last 4 bytes instead of full 12-byte conversion
|
||||||
|
return base_nonce[:8] + (chunk_index ^ int.from_bytes(base_nonce[8:], "big")).to_bytes(4, "big")
|
||||||
|
|
||||||
def encrypt_stream(self, stream: BinaryIO,
|
def encrypt_stream(self, stream: BinaryIO,
|
||||||
context: Dict[str, str] | None = None) -> tuple[BinaryIO, EncryptionMetadata]:
|
context: Dict[str, str] | None = None) -> tuple[BinaryIO, EncryptionMetadata]:
|
||||||
"""Encrypt a stream and return encrypted stream + metadata."""
|
"""Encrypt a stream and return encrypted stream + metadata.
|
||||||
|
|
||||||
|
Performance: Writes chunks directly to output buffer instead of accumulating in list.
|
||||||
|
"""
|
||||||
data_key, encrypted_data_key = self.provider.generate_data_key()
|
data_key, encrypted_data_key = self.provider.generate_data_key()
|
||||||
base_nonce = secrets.token_bytes(12)
|
base_nonce = secrets.token_bytes(12)
|
||||||
|
|
||||||
aesgcm = AESGCM(data_key)
|
aesgcm = AESGCM(data_key)
|
||||||
encrypted_chunks = []
|
# Performance: Write directly to BytesIO instead of accumulating chunks
|
||||||
|
output = io.BytesIO()
|
||||||
|
output.write(b"\x00\x00\x00\x00") # Placeholder for chunk count
|
||||||
chunk_index = 0
|
chunk_index = 0
|
||||||
|
|
||||||
while True:
|
while True:
|
||||||
@@ -208,12 +210,15 @@ class StreamingEncryptor:
             chunk_nonce = self._derive_chunk_nonce(base_nonce, chunk_index)
             encrypted_chunk = aesgcm.encrypt(chunk_nonce, chunk, None)

-            size_prefix = len(encrypted_chunk).to_bytes(self.HEADER_SIZE, "big")
-            encrypted_chunks.append(size_prefix + encrypted_chunk)
+            # Write size prefix + encrypted chunk directly
+            output.write(len(encrypted_chunk).to_bytes(self.HEADER_SIZE, "big"))
+            output.write(encrypted_chunk)
             chunk_index += 1

-        header = chunk_index.to_bytes(4, "big")
-        encrypted_data = header + b"".join(encrypted_chunks)
+        # Write actual chunk count to header
+        output.seek(0)
+        output.write(chunk_index.to_bytes(4, "big"))
+        output.seek(0)

         metadata = EncryptionMetadata(
             algorithm="AES256",
@@ -222,10 +227,13 @@ class StreamingEncryptor:
             encrypted_data_key=encrypted_data_key,
         )

-        return io.BytesIO(encrypted_data), metadata
+        return output, metadata

     def decrypt_stream(self, stream: BinaryIO, metadata: EncryptionMetadata) -> BinaryIO:
-        """Decrypt a stream using the provided metadata."""
+        """Decrypt a stream using the provided metadata.
+
+        Performance: Writes chunks directly to output buffer instead of accumulating in list.
+        """
         if isinstance(self.provider, LocalKeyEncryption):
             data_key = self.provider._decrypt_data_key(metadata.encrypted_data_key)
         else:
@@ -239,7 +247,8 @@ class StreamingEncryptor:
             raise EncryptionError("Invalid encrypted stream: missing header")
         chunk_count = int.from_bytes(chunk_count_bytes, "big")

-        decrypted_chunks = []
+        # Performance: Write directly to BytesIO instead of accumulating chunks
+        output = io.BytesIO()
         for chunk_index in range(chunk_count):
             size_bytes = stream.read(self.HEADER_SIZE)
             if len(size_bytes) < self.HEADER_SIZE:
@@ -253,11 +262,12 @@ class StreamingEncryptor:
             chunk_nonce = self._derive_chunk_nonce(base_nonce, chunk_index)
             try:
                 decrypted_chunk = aesgcm.decrypt(chunk_nonce, encrypted_chunk, None)
-                decrypted_chunks.append(decrypted_chunk)
+                output.write(decrypted_chunk)  # Write directly instead of appending to list
             except Exception as exc:
                 raise EncryptionError(f"Failed to decrypt chunk {chunk_index}: {exc}") from exc

-        return io.BytesIO(b"".join(decrypted_chunks))
+        output.seek(0)
+        return output


 class EncryptionManager:
@@ -343,6 +353,106 @@ class EncryptionManager:
         return encryptor.decrypt_stream(stream, metadata)


+class SSECEncryption(EncryptionProvider):
+    """SSE-C: Server-Side Encryption with Customer-Provided Keys.
+
+    The client provides the encryption key with each request.
+    Server encrypts/decrypts but never stores the key.
+
+    Required headers for PUT:
+    - x-amz-server-side-encryption-customer-algorithm: AES256
+    - x-amz-server-side-encryption-customer-key: Base64-encoded 256-bit key
+    - x-amz-server-side-encryption-customer-key-MD5: Base64-encoded MD5 of key
+    """
+
+    KEY_ID = "customer-provided"
+
+    def __init__(self, customer_key: bytes):
+        if len(customer_key) != 32:
+            raise EncryptionError("Customer key must be exactly 256 bits (32 bytes)")
+        self.customer_key = customer_key
+
+    @classmethod
+    def from_headers(cls, headers: Dict[str, str]) -> "SSECEncryption":
+        algorithm = headers.get("x-amz-server-side-encryption-customer-algorithm", "")
+        if algorithm.upper() != "AES256":
+            raise EncryptionError(f"Unsupported SSE-C algorithm: {algorithm}. Only AES256 is supported.")
+
+        key_b64 = headers.get("x-amz-server-side-encryption-customer-key", "")
+        if not key_b64:
+            raise EncryptionError("Missing x-amz-server-side-encryption-customer-key header")
+
+        key_md5_b64 = headers.get("x-amz-server-side-encryption-customer-key-md5", "")
+
+        try:
+            customer_key = base64.b64decode(key_b64)
+        except Exception as e:
+            raise EncryptionError(f"Invalid base64 in customer key: {e}") from e
+
+        if len(customer_key) != 32:
+            raise EncryptionError(f"Customer key must be 256 bits, got {len(customer_key) * 8} bits")
+
+        if key_md5_b64:
+            import hashlib
+            expected_md5 = base64.b64encode(hashlib.md5(customer_key).digest()).decode()
+            if key_md5_b64 != expected_md5:
+                raise EncryptionError("Customer key MD5 mismatch")
+
+        return cls(customer_key)
+
+    def encrypt(self, plaintext: bytes, context: Dict[str, str] | None = None) -> EncryptionResult:
+        aesgcm = AESGCM(self.customer_key)
+        nonce = secrets.token_bytes(12)
+        ciphertext = aesgcm.encrypt(nonce, plaintext, None)
+
+        return EncryptionResult(
+            ciphertext=ciphertext,
+            nonce=nonce,
+            key_id=self.KEY_ID,
+            encrypted_data_key=b"",
+        )
+
+    def decrypt(self, ciphertext: bytes, nonce: bytes, encrypted_data_key: bytes,
+                key_id: str, context: Dict[str, str] | None = None) -> bytes:
+        aesgcm = AESGCM(self.customer_key)
+        try:
+            return aesgcm.decrypt(nonce, ciphertext, None)
+        except Exception as exc:
+            raise EncryptionError(f"SSE-C decryption failed: {exc}") from exc
+
+    def generate_data_key(self) -> tuple[bytes, bytes]:
+        return self.customer_key, b""
+
+
+@dataclass
+class SSECMetadata:
+    algorithm: str = "AES256"
+    nonce: bytes = b""
+    key_md5: str = ""
+
+    def to_dict(self) -> Dict[str, str]:
+        return {
+            "x-amz-server-side-encryption-customer-algorithm": self.algorithm,
+            "x-amz-encryption-nonce": base64.b64encode(self.nonce).decode(),
+            "x-amz-server-side-encryption-customer-key-MD5": self.key_md5,
+        }
+
+    @classmethod
+    def from_dict(cls, data: Dict[str, str]) -> Optional["SSECMetadata"]:
+        algorithm = data.get("x-amz-server-side-encryption-customer-algorithm")
+        if not algorithm:
+            return None
+        try:
+            nonce = base64.b64decode(data.get("x-amz-encryption-nonce", ""))
+            return cls(
+                algorithm=algorithm,
+                nonce=nonce,
+                key_md5=data.get("x-amz-server-side-encryption-customer-key-MD5", ""),
+            )
+        except Exception:
+            return None
+
+
 class ClientEncryptionHelper:
     """Helpers for client-side encryption.

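
For reference, a minimal client-side sketch of producing the three SSE-C headers that `SSECEncryption.from_headers` validates above. The helper name is illustrative, not part of the codebase; only the header names and the 32-byte key requirement come from the diff:

```python
import base64
import hashlib
import secrets

def ssec_headers(customer_key: bytes) -> dict:
    # from_headers() checks the algorithm, base64-decodes the key, and verifies the MD5
    return {
        "x-amz-server-side-encryption-customer-algorithm": "AES256",
        "x-amz-server-side-encryption-customer-key": base64.b64encode(customer_key).decode(),
        "x-amz-server-side-encryption-customer-key-MD5": base64.b64encode(
            hashlib.md5(customer_key).digest()
        ).decode(),
    }

key = secrets.token_bytes(32)  # must be exactly 256 bits
headers = ssec_headers(key)
```
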
@@ -1,4 +1,3 @@
-"""Standardized error handling for API and UI responses."""
 from __future__ import annotations

 import logging
@@ -1,4 +1,3 @@
-"""Application-wide extension instances."""
 from flask import g
 from flask_limiter import Limiter
 from flask_limiter.util import get_remote_address
app/iam.py (104 changed lines)
@@ -1,14 +1,14 @@
-"""Lightweight IAM-style user and policy management."""
 from __future__ import annotations

 import json
 import math
 import secrets
+import time
 from collections import deque
 from dataclasses import dataclass
 from datetime import datetime, timedelta, timezone
 from pathlib import Path
-from typing import Any, Deque, Dict, Iterable, List, Optional, Sequence, Set
+from typing import Any, Deque, Dict, Iterable, List, Optional, Sequence, Set, Tuple


 class IamError(RuntimeError):
@@ -26,14 +26,12 @@ IAM_ACTIONS = {
 ALLOWED_ACTIONS = (S3_ACTIONS | IAM_ACTIONS) | {"iam:*"}

 ACTION_ALIASES = {
-    # List actions
     "list": "list",
     "s3:listbucket": "list",
     "s3:listallmybuckets": "list",
     "s3:listbucketversions": "list",
     "s3:listmultipartuploads": "list",
     "s3:listparts": "list",
-    # Read actions
     "read": "read",
     "s3:getobject": "read",
     "s3:getobjectversion": "read",
@@ -43,7 +41,6 @@ ACTION_ALIASES = {
     "s3:getbucketversioning": "read",
     "s3:headobject": "read",
     "s3:headbucket": "read",
-    # Write actions
     "write": "write",
     "s3:putobject": "write",
     "s3:createbucket": "write",
@@ -54,23 +51,19 @@ ACTION_ALIASES = {
     "s3:completemultipartupload": "write",
     "s3:abortmultipartupload": "write",
     "s3:copyobject": "write",
-    # Delete actions
     "delete": "delete",
     "s3:deleteobject": "delete",
     "s3:deleteobjectversion": "delete",
     "s3:deletebucket": "delete",
     "s3:deleteobjecttagging": "delete",
-    # Share actions (ACL)
     "share": "share",
     "s3:putobjectacl": "share",
     "s3:putbucketacl": "share",
     "s3:getbucketacl": "share",
-    # Policy actions
     "policy": "policy",
     "s3:putbucketpolicy": "policy",
     "s3:getbucketpolicy": "policy",
     "s3:deletebucketpolicy": "policy",
-    # Replication actions
     "replication": "replication",
     "s3:getreplicationconfiguration": "replication",
     "s3:putreplicationconfiguration": "replication",
@@ -78,7 +71,6 @@ ACTION_ALIASES = {
     "s3:replicateobject": "replication",
     "s3:replicatetags": "replication",
     "s3:replicatedelete": "replication",
-    # IAM actions
     "iam:listusers": "iam:list_users",
     "iam:createuser": "iam:create_user",
     "iam:deleteuser": "iam:delete_user",
@@ -115,13 +107,23 @@ class IamService:
         self._raw_config: Dict[str, Any] = {}
         self._failed_attempts: Dict[str, Deque[datetime]] = {}
         self._last_load_time = 0.0
+        self._credential_cache: Dict[str, Tuple[str, Principal, float]] = {}
+        self._cache_ttl = 60.0
+        self._last_stat_check = 0.0
+        self._stat_check_interval = 1.0
+        self._sessions: Dict[str, Dict[str, Any]] = {}
         self._load()

     def _maybe_reload(self) -> None:
         """Reload configuration if the file has changed on disk."""
+        now = time.time()
+        if now - self._last_stat_check < self._stat_check_interval:
+            return
+        self._last_stat_check = now
         try:
             if self.config_path.stat().st_mtime > self._last_load_time:
                 self._load()
+                self._credential_cache.clear()
         except OSError:
             pass

@@ -180,18 +182,70 @@ class IamService:
         elapsed = (datetime.now(timezone.utc) - oldest).total_seconds()
         return int(max(0, self.auth_lockout_window.total_seconds() - elapsed))

-    def principal_for_key(self, access_key: str) -> Principal:
+    def create_session_token(self, access_key: str, duration_seconds: int = 3600) -> str:
+        """Create a temporary session token for an access key."""
         self._maybe_reload()
         record = self._users.get(access_key)
         if not record:
             raise IamError("Unknown access key")
-        return self._build_principal(access_key, record)
+        self._cleanup_expired_sessions()
+        token = secrets.token_urlsafe(32)
+        expires_at = time.time() + duration_seconds
+        self._sessions[token] = {
+            "access_key": access_key,
+            "expires_at": expires_at,
+        }
+        return token
+
+    def validate_session_token(self, access_key: str, session_token: str) -> bool:
+        """Validate a session token for an access key."""
+        session = self._sessions.get(session_token)
+        if not session:
+            return False
+        if session["access_key"] != access_key:
+            return False
+        if time.time() > session["expires_at"]:
+            del self._sessions[session_token]
+            return False
+        return True
+
+    def _cleanup_expired_sessions(self) -> None:
+        """Remove expired session tokens."""
+        now = time.time()
+        expired = [token for token, data in self._sessions.items() if now > data["expires_at"]]
+        for token in expired:
+            del self._sessions[token]
+
+    def principal_for_key(self, access_key: str) -> Principal:
+        now = time.time()
+        cached = self._credential_cache.get(access_key)
+        if cached:
+            secret, principal, cached_time = cached
+            if now - cached_time < self._cache_ttl:
+                return principal
+
+        self._maybe_reload()
+        record = self._users.get(access_key)
+        if not record:
+            raise IamError("Unknown access key")
+        principal = self._build_principal(access_key, record)
+        self._credential_cache[access_key] = (record["secret_key"], principal, now)
+        return principal

     def secret_for_key(self, access_key: str) -> str:
+        now = time.time()
+        cached = self._credential_cache.get(access_key)
+        if cached:
+            secret, principal, cached_time = cached
+            if now - cached_time < self._cache_ttl:
+                return secret
+
         self._maybe_reload()
         record = self._users.get(access_key)
         if not record:
             raise IamError("Unknown access key")
+        principal = self._build_principal(access_key, record)
+        self._credential_cache[access_key] = (record["secret_key"], principal, now)
         return record["secret_key"]

     def authorize(self, principal: Principal, bucket_name: str | None, action: str) -> None:
@@ -442,11 +496,33 @@ class IamService:
         raise IamError("User not found")

     def get_secret_key(self, access_key: str) -> str | None:
+        now = time.time()
+        cached = self._credential_cache.get(access_key)
+        if cached:
+            secret, principal, cached_time = cached
+            if now - cached_time < self._cache_ttl:
+                return secret
+
         self._maybe_reload()
         record = self._users.get(access_key)
-        return record["secret_key"] if record else None
+        if record:
+            principal = self._build_principal(access_key, record)
+            self._credential_cache[access_key] = (record["secret_key"], principal, now)
+            return record["secret_key"]
+        return None

     def get_principal(self, access_key: str) -> Principal | None:
+        now = time.time()
+        cached = self._credential_cache.get(access_key)
+        if cached:
+            secret, principal, cached_time = cached
+            if now - cached_time < self._cache_ttl:
+                return principal
+
         self._maybe_reload()
         record = self._users.get(access_key)
-        return self._build_principal(access_key, record) if record else None
+        if record:
+            principal = self._build_principal(access_key, record)
+            self._credential_cache[access_key] = (record["secret_key"], principal, now)
+            return principal
+        return None

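A short usage sketch of the session-token API added to `IamService` above. The service instance and the access-key values are assumed, not taken from the repo:

```python
# `iam` is an IamService wired up as elsewhere in the app; both keys are hypothetical.
token = iam.create_session_token("AKIA_EXAMPLE_KEY", duration_seconds=900)

assert iam.validate_session_token("AKIA_EXAMPLE_KEY", token) is True
assert iam.validate_session_token("AKIA_OTHER_KEY", token) is False  # token is bound to one key
# After 900 seconds the token expires and is dropped on the next validation attempt.
```

Note that the new credential cache means IAM config edits can take up to `_cache_ttl` (60 s) to affect cached keys, on top of the 1 s stat-check interval in `_maybe_reload`.
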
app/kms.py (21 changed lines)
@@ -1,4 +1,3 @@
-"""Key Management Service (KMS) for encryption key management."""
 from __future__ import annotations

 import base64
@@ -212,6 +211,26 @@ class KMSManager:
         self._load_keys()
         return list(self._keys.values())

+    def get_default_key_id(self) -> str:
+        """Get the default KMS key ID, creating one if none exist."""
+        self._load_keys()
+        for key in self._keys.values():
+            if key.enabled:
+                return key.key_id
+        default_key = self.create_key(description="Default KMS Key")
+        return default_key.key_id
+
+    def get_provider(self, key_id: str | None = None) -> "KMSEncryptionProvider":
+        """Get a KMS encryption provider for the specified key."""
+        if key_id is None:
+            key_id = self.get_default_key_id()
+        key = self.get_key(key_id)
+        if not key:
+            raise EncryptionError(f"Key not found: {key_id}")
+        if not key.enabled:
+            raise EncryptionError(f"Key is disabled: {key_id}")
+        return KMSEncryptionProvider(self, key_id)
+
     def enable_key(self, key_id: str) -> None:
         """Enable a key."""
         self._load_keys()
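
The new `get_default_key_id`/`get_provider` pair lets callers encrypt without pre-creating a key. A hedged sketch: `kms` is an assumed `KMSManager` instance, and `KMSEncryptionProvider` is assumed to implement the same `encrypt`/`decrypt` signatures as the other `EncryptionProvider` implementations in this changeset:

```python
# `kms` is a KMSManager instance as constructed elsewhere in the app.
provider = kms.get_provider()  # no key_id: falls back to get_default_key_id(),
                               # which creates "Default KMS Key" on first use
result = provider.encrypt(b"secret bytes")
plaintext = provider.decrypt(
    result.ciphertext, result.nonce, result.encrypted_data_key, result.key_id
)
assert plaintext == b"secret bytes"
```
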
@@ -1,4 +1,3 @@
-"""KMS and encryption API endpoints."""
 from __future__ import annotations

 import base64
app/lifecycle.py (new file, 335 lines)
@@ -0,0 +1,335 @@
+from __future__ import annotations
+
+import json
+import logging
+import threading
+import time
+from dataclasses import dataclass, field
+from datetime import datetime, timedelta, timezone
+from pathlib import Path
+from typing import Any, Dict, List, Optional
+
+from .storage import ObjectStorage, StorageError
+
+logger = logging.getLogger(__name__)
+
+
+@dataclass
+class LifecycleResult:
+    bucket_name: str
+    objects_deleted: int = 0
+    versions_deleted: int = 0
+    uploads_aborted: int = 0
+    errors: List[str] = field(default_factory=list)
+    execution_time_seconds: float = 0.0
+
+
+@dataclass
+class LifecycleExecutionRecord:
+    timestamp: float
+    bucket_name: str
+    objects_deleted: int
+    versions_deleted: int
+    uploads_aborted: int
+    errors: List[str]
+    execution_time_seconds: float
+
+    def to_dict(self) -> dict:
+        return {
+            "timestamp": self.timestamp,
+            "bucket_name": self.bucket_name,
+            "objects_deleted": self.objects_deleted,
+            "versions_deleted": self.versions_deleted,
+            "uploads_aborted": self.uploads_aborted,
+            "errors": self.errors,
+            "execution_time_seconds": self.execution_time_seconds,
+        }
+
+    @classmethod
+    def from_dict(cls, data: dict) -> "LifecycleExecutionRecord":
+        return cls(
+            timestamp=data["timestamp"],
+            bucket_name=data["bucket_name"],
+            objects_deleted=data["objects_deleted"],
+            versions_deleted=data["versions_deleted"],
+            uploads_aborted=data["uploads_aborted"],
+            errors=data.get("errors", []),
+            execution_time_seconds=data["execution_time_seconds"],
+        )
+
+    @classmethod
+    def from_result(cls, result: LifecycleResult) -> "LifecycleExecutionRecord":
+        return cls(
+            timestamp=time.time(),
+            bucket_name=result.bucket_name,
+            objects_deleted=result.objects_deleted,
+            versions_deleted=result.versions_deleted,
+            uploads_aborted=result.uploads_aborted,
+            errors=result.errors.copy(),
+            execution_time_seconds=result.execution_time_seconds,
+        )
+
+
+class LifecycleHistoryStore:
+    MAX_HISTORY_PER_BUCKET = 50
+
+    def __init__(self, storage_root: Path) -> None:
+        self.storage_root = storage_root
+        self._lock = threading.Lock()
+
+    def _get_history_path(self, bucket_name: str) -> Path:
+        return self.storage_root / ".myfsio.sys" / "buckets" / bucket_name / "lifecycle_history.json"
+
+    def load_history(self, bucket_name: str) -> List[LifecycleExecutionRecord]:
+        path = self._get_history_path(bucket_name)
+        if not path.exists():
+            return []
+        try:
+            with open(path, "r") as f:
+                data = json.load(f)
+            return [LifecycleExecutionRecord.from_dict(d) for d in data.get("executions", [])]
+        except (OSError, ValueError, KeyError) as e:
+            logger.error(f"Failed to load lifecycle history for {bucket_name}: {e}")
+            return []
+
+    def save_history(self, bucket_name: str, records: List[LifecycleExecutionRecord]) -> None:
+        path = self._get_history_path(bucket_name)
+        path.parent.mkdir(parents=True, exist_ok=True)
+        data = {"executions": [r.to_dict() for r in records[:self.MAX_HISTORY_PER_BUCKET]]}
+        try:
+            with open(path, "w") as f:
+                json.dump(data, f, indent=2)
+        except OSError as e:
+            logger.error(f"Failed to save lifecycle history for {bucket_name}: {e}")
+
+    def add_record(self, bucket_name: str, record: LifecycleExecutionRecord) -> None:
+        with self._lock:
+            records = self.load_history(bucket_name)
+            records.insert(0, record)
+            self.save_history(bucket_name, records)
+
+    def get_history(self, bucket_name: str, limit: int = 50, offset: int = 0) -> List[LifecycleExecutionRecord]:
+        records = self.load_history(bucket_name)
+        return records[offset:offset + limit]
+
+
+class LifecycleManager:
+    def __init__(self, storage: ObjectStorage, interval_seconds: int = 3600, storage_root: Optional[Path] = None):
+        self.storage = storage
+        self.interval_seconds = interval_seconds
+        self.storage_root = storage_root
+        self._timer: Optional[threading.Timer] = None
+        self._shutdown = False
+        self._lock = threading.Lock()
+        self.history_store = LifecycleHistoryStore(storage_root) if storage_root else None
+
+    def start(self) -> None:
+        if self._timer is not None:
+            return
+        self._shutdown = False
+        self._schedule_next()
+        logger.info(f"Lifecycle manager started with interval {self.interval_seconds}s")
+
+    def stop(self) -> None:
+        self._shutdown = True
+        if self._timer:
+            self._timer.cancel()
+            self._timer = None
+        logger.info("Lifecycle manager stopped")
+
+    def _schedule_next(self) -> None:
+        if self._shutdown:
+            return
+        self._timer = threading.Timer(self.interval_seconds, self._run_enforcement)
+        self._timer.daemon = True
+        self._timer.start()
+
+    def _run_enforcement(self) -> None:
+        if self._shutdown:
+            return
+        try:
+            self.enforce_all_buckets()
+        except Exception as e:
+            logger.error(f"Lifecycle enforcement failed: {e}")
+        finally:
+            self._schedule_next()
+
+    def enforce_all_buckets(self) -> Dict[str, LifecycleResult]:
+        results = {}
+        try:
+            buckets = self.storage.list_buckets()
+            for bucket in buckets:
+                result = self.enforce_rules(bucket.name)
+                if result.objects_deleted > 0 or result.versions_deleted > 0 or result.uploads_aborted > 0:
+                    results[bucket.name] = result
+        except StorageError as e:
+            logger.error(f"Failed to list buckets for lifecycle: {e}")
+        return results
+
+    def enforce_rules(self, bucket_name: str) -> LifecycleResult:
+        start_time = time.time()
+        result = LifecycleResult(bucket_name=bucket_name)
+
+        try:
+            lifecycle = self.storage.get_bucket_lifecycle(bucket_name)
+            if not lifecycle:
+                return result
+
+            for rule in lifecycle:
+                if rule.get("Status") != "Enabled":
+                    continue
+                rule_id = rule.get("ID", "unknown")
+                prefix = rule.get("Prefix", rule.get("Filter", {}).get("Prefix", ""))
+
+                self._enforce_expiration(bucket_name, rule, prefix, result)
+                self._enforce_noncurrent_expiration(bucket_name, rule, prefix, result)
+                self._enforce_abort_multipart(bucket_name, rule, result)
+
+        except StorageError as e:
+            result.errors.append(str(e))
+            logger.error(f"Lifecycle enforcement error for {bucket_name}: {e}")
+
+        result.execution_time_seconds = time.time() - start_time
+        if result.objects_deleted > 0 or result.versions_deleted > 0 or result.uploads_aborted > 0 or result.errors:
+            logger.info(
+                f"Lifecycle enforcement for {bucket_name}: "
+                f"deleted={result.objects_deleted}, versions={result.versions_deleted}, "
+                f"aborted={result.uploads_aborted}, time={result.execution_time_seconds:.2f}s"
+            )
+        if self.history_store:
+            record = LifecycleExecutionRecord.from_result(result)
+            self.history_store.add_record(bucket_name, record)
+        return result
+
+    def _enforce_expiration(
+        self, bucket_name: str, rule: Dict[str, Any], prefix: str, result: LifecycleResult
+    ) -> None:
+        expiration = rule.get("Expiration", {})
+        if not expiration:
+            return
+
+        days = expiration.get("Days")
+        date_str = expiration.get("Date")
+
+        if days:
+            cutoff = datetime.now(timezone.utc) - timedelta(days=days)
+        elif date_str:
+            try:
+                cutoff = datetime.fromisoformat(date_str.replace("Z", "+00:00"))
+            except ValueError:
+                return
+        else:
+            return
+
+        try:
+            objects = self.storage.list_objects_all(bucket_name)
+            for obj in objects:
+                if prefix and not obj.key.startswith(prefix):
+                    continue
+                if obj.last_modified < cutoff:
+                    try:
+                        self.storage.delete_object(bucket_name, obj.key)
+                        result.objects_deleted += 1
+                    except StorageError as e:
+                        result.errors.append(f"Failed to delete {obj.key}: {e}")
+        except StorageError as e:
+            result.errors.append(f"Failed to list objects: {e}")
+
+    def _enforce_noncurrent_expiration(
+        self, bucket_name: str, rule: Dict[str, Any], prefix: str, result: LifecycleResult
+    ) -> None:
+        noncurrent = rule.get("NoncurrentVersionExpiration", {})
+        noncurrent_days = noncurrent.get("NoncurrentDays")
+        if not noncurrent_days:
+            return
+
+        cutoff = datetime.now(timezone.utc) - timedelta(days=noncurrent_days)
+
+        try:
+            objects = self.storage.list_objects_all(bucket_name)
+            for obj in objects:
+                if prefix and not obj.key.startswith(prefix):
+                    continue
+                try:
+                    versions = self.storage.list_object_versions(bucket_name, obj.key)
+                    for version in versions:
+                        archived_at_str = version.get("archived_at", "")
+                        if not archived_at_str:
+                            continue
+                        try:
+                            archived_at = datetime.fromisoformat(archived_at_str.replace("Z", "+00:00"))
+                            if archived_at < cutoff:
+                                version_id = version.get("version_id")
+                                if version_id:
+                                    self.storage.delete_object_version(bucket_name, obj.key, version_id)
+                                    result.versions_deleted += 1
+                        except (ValueError, StorageError) as e:
+                            result.errors.append(f"Failed to process version: {e}")
+                except StorageError:
+                    pass
+        except StorageError as e:
+            result.errors.append(f"Failed to list objects: {e}")
+
+        try:
+            orphaned = self.storage.list_orphaned_objects(bucket_name)
+            for item in orphaned:
+                obj_key = item.get("key", "")
+                if prefix and not obj_key.startswith(prefix):
+                    continue
+                try:
+                    versions = self.storage.list_object_versions(bucket_name, obj_key)
+                    for version in versions:
+                        archived_at_str = version.get("archived_at", "")
+                        if not archived_at_str:
+                            continue
+                        try:
+                            archived_at = datetime.fromisoformat(archived_at_str.replace("Z", "+00:00"))
+                            if archived_at < cutoff:
+                                version_id = version.get("version_id")
+                                if version_id:
+                                    self.storage.delete_object_version(bucket_name, obj_key, version_id)
+                                    result.versions_deleted += 1
+                        except (ValueError, StorageError) as e:
+                            result.errors.append(f"Failed to process orphaned version: {e}")
+                except StorageError:
+                    pass
+        except StorageError as e:
+            result.errors.append(f"Failed to list orphaned objects: {e}")
+
+    def _enforce_abort_multipart(
+        self, bucket_name: str, rule: Dict[str, Any], result: LifecycleResult
+    ) -> None:
+        abort_config = rule.get("AbortIncompleteMultipartUpload", {})
+        days_after = abort_config.get("DaysAfterInitiation")
+        if not days_after:
+            return
+
+        cutoff = datetime.now(timezone.utc) - timedelta(days=days_after)
+
+        try:
+            uploads = self.storage.list_multipart_uploads(bucket_name)
+            for upload in uploads:
+                created_at_str = upload.get("created_at", "")
+                if not created_at_str:
+                    continue
+                try:
+                    created_at = datetime.fromisoformat(created_at_str.replace("Z", "+00:00"))
+                    if created_at < cutoff:
+                        upload_id = upload.get("upload_id")
+                        if upload_id:
+                            self.storage.abort_multipart_upload(bucket_name, upload_id)
+                            result.uploads_aborted += 1
+                except (ValueError, StorageError) as e:
+                    result.errors.append(f"Failed to abort upload: {e}")
+        except StorageError as e:
+            result.errors.append(f"Failed to list multipart uploads: {e}")
+
+    def run_now(self, bucket_name: Optional[str] = None) -> Dict[str, LifecycleResult]:
+        if bucket_name:
+            return {bucket_name: self.enforce_rules(bucket_name)}
+        return self.enforce_all_buckets()
+
+    def get_execution_history(self, bucket_name: str, limit: int = 50, offset: int = 0) -> List[LifecycleExecutionRecord]:
+        if not self.history_store:
+            return []
+        return self.history_store.get_history(bucket_name, limit, offset)
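
The rule dictionaries consumed by `enforce_rules` above follow the S3 lifecycle grammar. A minimal rule shape each enforcement pass would act on (the field names are exactly the ones the code reads; the values are illustrative):

```python
rule = {
    "ID": "expire-logs",
    "Status": "Enabled",  # rules with any other status are skipped
    "Prefix": "logs/",    # enforce_rules also accepts {"Filter": {"Prefix": ...}}
    "Expiration": {"Days": 30},                            # delete current objects older than 30 days
    "NoncurrentVersionExpiration": {"NoncurrentDays": 7},  # purge archived versions after 7 days
    "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 2},
}
```

Calling `manager.run_now("my-bucket")` applies such rules immediately instead of waiting for the hourly timer.
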
app/notifications.py (new file, 334 lines)
@@ -0,0 +1,334 @@
+from __future__ import annotations
+
+import json
+import logging
+import queue
+import threading
+import time
+import uuid
+from dataclasses import dataclass, field
+from datetime import datetime, timezone
+from pathlib import Path
+from typing import Any, Dict, List, Optional
+from urllib.parse import urlparse
+
+import requests
+
+logger = logging.getLogger(__name__)
+
+
+@dataclass
+class NotificationEvent:
+    event_name: str
+    bucket_name: str
+    object_key: str
+    object_size: int = 0
+    etag: str = ""
+    version_id: Optional[str] = None
+    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
+    request_id: str = field(default_factory=lambda: uuid.uuid4().hex)
+    source_ip: str = ""
+    user_identity: str = ""
+
+    def to_s3_event(self) -> Dict[str, Any]:
+        return {
+            "Records": [
+                {
+                    "eventVersion": "2.1",
+                    "eventSource": "myfsio:s3",
+                    "awsRegion": "local",
+                    "eventTime": self.timestamp.strftime("%Y-%m-%dT%H:%M:%S.000Z"),
+                    "eventName": self.event_name,
+                    "userIdentity": {
+                        "principalId": self.user_identity or "ANONYMOUS",
+                    },
+                    "requestParameters": {
+                        "sourceIPAddress": self.source_ip or "127.0.0.1",
+                    },
+                    "responseElements": {
+                        "x-amz-request-id": self.request_id,
+                        "x-amz-id-2": self.request_id,
+                    },
+                    "s3": {
+                        "s3SchemaVersion": "1.0",
+                        "configurationId": "notification",
+                        "bucket": {
+                            "name": self.bucket_name,
+                            "ownerIdentity": {"principalId": "local"},
+                            "arn": f"arn:aws:s3:::{self.bucket_name}",
+                        },
+                        "object": {
+                            "key": self.object_key,
+                            "size": self.object_size,
+                            "eTag": self.etag,
+                            "versionId": self.version_id or "null",
+                            "sequencer": f"{int(time.time() * 1000):016X}",
+                        },
+                    },
+                }
+            ]
+        }
+
+
+@dataclass
+class WebhookDestination:
+    url: str
+    headers: Dict[str, str] = field(default_factory=dict)
+    timeout_seconds: int = 30
+    retry_count: int = 3
+    retry_delay_seconds: int = 1
+
+    def to_dict(self) -> Dict[str, Any]:
+        return {
+            "url": self.url,
+            "headers": self.headers,
+            "timeout_seconds": self.timeout_seconds,
+            "retry_count": self.retry_count,
+            "retry_delay_seconds": self.retry_delay_seconds,
+        }
+
+    @classmethod
+    def from_dict(cls, data: Dict[str, Any]) -> "WebhookDestination":
+        return cls(
+            url=data.get("url", ""),
+            headers=data.get("headers", {}),
+            timeout_seconds=data.get("timeout_seconds", 30),
+            retry_count=data.get("retry_count", 3),
+            retry_delay_seconds=data.get("retry_delay_seconds", 1),
+        )
+
+
+@dataclass
+class NotificationConfiguration:
+    id: str
+    events: List[str]
+    destination: WebhookDestination
+    prefix_filter: str = ""
+    suffix_filter: str = ""
+
+    def matches_event(self, event_name: str, object_key: str) -> bool:
+        event_match = False
+        for pattern in self.events:
+            if pattern.endswith("*"):
+                base = pattern[:-1]
+                if event_name.startswith(base):
+                    event_match = True
+                    break
+            elif pattern == event_name:
+                event_match = True
+                break
+
+        if not event_match:
+            return False
+
+        if self.prefix_filter and not object_key.startswith(self.prefix_filter):
+            return False
+        if self.suffix_filter and not object_key.endswith(self.suffix_filter):
+            return False
+
+        return True
+
+    def to_dict(self) -> Dict[str, Any]:
+        return {
+            "Id": self.id,
+            "Events": self.events,
+            "Destination": self.destination.to_dict(),
+            "Filter": {
+                "Key": {
+                    "FilterRules": [
+                        {"Name": "prefix", "Value": self.prefix_filter},
+                        {"Name": "suffix", "Value": self.suffix_filter},
+                    ]
+                }
+            },
+        }
+
+    @classmethod
+    def from_dict(cls, data: Dict[str, Any]) -> "NotificationConfiguration":
+        prefix = ""
+        suffix = ""
+        filter_data = data.get("Filter", {})
+        key_filter = filter_data.get("Key", {})
+        for rule in key_filter.get("FilterRules", []):
+            if rule.get("Name") == "prefix":
+                prefix = rule.get("Value", "")
+            elif rule.get("Name") == "suffix":
+                suffix = rule.get("Value", "")
+
+        return cls(
+            id=data.get("Id", uuid.uuid4().hex),
+            events=data.get("Events", []),
+            destination=WebhookDestination.from_dict(data.get("Destination", {})),
+            prefix_filter=prefix,
+            suffix_filter=suffix,
+        )
+
+
+class NotificationService:
+    def __init__(self, storage_root: Path, worker_count: int = 2):
+        self.storage_root = storage_root
+        self._configs: Dict[str, List[NotificationConfiguration]] = {}
+        self._queue: queue.Queue[tuple[NotificationEvent, WebhookDestination]] = queue.Queue()
+        self._workers: List[threading.Thread] = []
+        self._shutdown = threading.Event()
+        self._stats = {
+            "events_queued": 0,
+            "events_sent": 0,
+            "events_failed": 0,
+        }
+
+        for i in range(worker_count):
+            worker = threading.Thread(target=self._worker_loop, name=f"notification-worker-{i}", daemon=True)
+            worker.start()
+            self._workers.append(worker)
+
+    def _config_path(self, bucket_name: str) -> Path:
+        return self.storage_root / ".myfsio.sys" / "buckets" / bucket_name / "notifications.json"
+
+    def get_bucket_notifications(self, bucket_name: str) -> List[NotificationConfiguration]:
+        if bucket_name in self._configs:
+            return self._configs[bucket_name]
+
+        config_path = self._config_path(bucket_name)
+        if not config_path.exists():
+            return []
+
+        try:
+            data = json.loads(config_path.read_text(encoding="utf-8"))
+            configs = [NotificationConfiguration.from_dict(c) for c in data.get("configurations", [])]
+            self._configs[bucket_name] = configs
+            return configs
+        except (json.JSONDecodeError, OSError) as e:
+            logger.warning(f"Failed to load notification config for {bucket_name}: {e}")
+            return []
+
+    def set_bucket_notifications(
+        self, bucket_name: str, configurations: List[NotificationConfiguration]
+    ) -> None:
+        config_path = self._config_path(bucket_name)
+        config_path.parent.mkdir(parents=True, exist_ok=True)
+
+        data = {"configurations": [c.to_dict() for c in configurations]}
+        config_path.write_text(json.dumps(data, indent=2), encoding="utf-8")
+        self._configs[bucket_name] = configurations
+
+    def delete_bucket_notifications(self, bucket_name: str) -> None:
+        config_path = self._config_path(bucket_name)
+        try:
+            if config_path.exists():
+                config_path.unlink()
+        except OSError:
+            pass
+        self._configs.pop(bucket_name, None)
+
+    def emit_event(self, event: NotificationEvent) -> None:
+        configurations = self.get_bucket_notifications(event.bucket_name)
+        if not configurations:
+            return
+
+        for config in configurations:
+            if config.matches_event(event.event_name, event.object_key):
+                self._queue.put((event, config.destination))
+                self._stats["events_queued"] += 1
+                logger.debug(
+                    f"Queued notification for {event.event_name} on {event.bucket_name}/{event.object_key}"
+                )
+
+    def emit_object_created(
+        self,
+        bucket_name: str,
+        object_key: str,
+        *,
+        size: int = 0,
+        etag: str = "",
+        version_id: Optional[str] = None,
+        request_id: str = "",
+        source_ip: str = "",
+        user_identity: str = "",
+        operation: str = "Put",
+    ) -> None:
+        event = NotificationEvent(
+            event_name=f"s3:ObjectCreated:{operation}",
+            bucket_name=bucket_name,
+            object_key=object_key,
+            object_size=size,
+            etag=etag,
+            version_id=version_id,
+            request_id=request_id or uuid.uuid4().hex,
+            source_ip=source_ip,
+            user_identity=user_identity,
+        )
+        self.emit_event(event)
+
+    def emit_object_removed(
+        self,
+        bucket_name: str,
+        object_key: str,
+        *,
+        version_id: Optional[str] = None,
+        request_id: str = "",
+        source_ip: str = "",
+        user_identity: str = "",
+        operation: str = "Delete",
+    ) -> None:
+        event = NotificationEvent(
+            event_name=f"s3:ObjectRemoved:{operation}",
+            bucket_name=bucket_name,
+            object_key=object_key,
+            version_id=version_id,
+            request_id=request_id or uuid.uuid4().hex,
+            source_ip=source_ip,
+            user_identity=user_identity,
+        )
+        self.emit_event(event)
+
+    def _worker_loop(self) -> None:
+        while not self._shutdown.is_set():
+            try:
+                event, destination = self._queue.get(timeout=1.0)
+            except queue.Empty:
+                continue
+
+            try:
+                self._send_notification(event, destination)
+                self._stats["events_sent"] += 1
+            except Exception as e:
+                self._stats["events_failed"] += 1
+                logger.error(f"Failed to send notification: {e}")
+            finally:
+                self._queue.task_done()
+
+    def _send_notification(self, event: NotificationEvent, destination: WebhookDestination) -> None:
+        payload = event.to_s3_event()
+        headers = {"Content-Type": "application/json", **destination.headers}
+
+        last_error = None
+        for attempt in range(destination.retry_count):
+            try:
+                response = requests.post(
+                    destination.url,
+                    json=payload,
+                    headers=headers,
+                    timeout=destination.timeout_seconds,
+                )
+                if response.status_code < 400:
+                    logger.info(
+                        f"Notification sent: {event.event_name} -> {destination.url} (status={response.status_code})"
+                    )
+                    return
+                last_error = f"HTTP {response.status_code}: {response.text[:200]}"
+            except requests.RequestException as e:
+                last_error = str(e)
+
+            if attempt < destination.retry_count - 1:
+                time.sleep(destination.retry_delay_seconds * (attempt + 1))
+
+        raise RuntimeError(f"Failed after {destination.retry_count} attempts: {last_error}")
+
+    def get_stats(self) -> Dict[str, int]:
+        return dict(self._stats)
+
+    def shutdown(self) -> None:
+        self._shutdown.set()
+        for worker in self._workers:
+            worker.join(timeout=5.0)
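
On the receiving end, the webhook body is the standard S3 event envelope built by `to_s3_event` above. A minimal Flask receiver sketch (the route path and port are arbitrary choices, not part of the codebase):

```python
from flask import Flask, request

app = Flask(__name__)

@app.post("/s3-events")
def s3_events():
    # Walk the "Records" list that to_s3_event() produces
    body = request.get_json(force=True)
    for record in body.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f'{record["eventName"]}: {bucket}/{key}')
    return "", 204  # any status below 400 counts as delivered

if __name__ == "__main__":
    app.run(port=9000)
```

Responding with any status below 400 stops the sender's retry loop; errors are retried up to `retry_count` times with linear backoff.
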
234
app/object_lock.py
Normal file
234
app/object_lock.py
Normal file
@@ -0,0 +1,234 @@
|
|||||||
|
from __future__ import annotations
|
||||||
|
|
||||||
|
import json
|
||||||
|
from dataclasses import dataclass
|
||||||
|
from datetime import datetime, timezone
|
||||||
|
from enum import Enum
|
||||||
|
from pathlib import Path
|
||||||
|
from typing import Any, Dict, Optional
|
||||||
|
|
||||||
|
|
||||||
|
class RetentionMode(Enum):
|
||||||
|
GOVERNANCE = "GOVERNANCE"
|
||||||
|
COMPLIANCE = "COMPLIANCE"
|
||||||
|
|
||||||
|
|
||||||
|
class ObjectLockError(Exception):
|
||||||
|
pass
|
||||||
|
|
||||||
|
|
||||||
|
@dataclass
|
||||||
|
class ObjectLockRetention:
|
||||||
|
mode: RetentionMode
|
||||||
|
retain_until_date: datetime
|
||||||
|
|
||||||
|
def to_dict(self) -> Dict[str, str]:
|
||||||
|
return {
|
||||||
|
"Mode": self.mode.value,
|
||||||
|
"RetainUntilDate": self.retain_until_date.isoformat(),
|
||||||
|
}
|
||||||
|
|
||||||
|
@classmethod
|
||||||
|
def from_dict(cls, data: Dict[str, Any]) -> Optional["ObjectLockRetention"]:
|
||||||
|
if not data:
|
||||||
|
return None
|
||||||
|
mode_str = data.get("Mode")
|
||||||
|
date_str = data.get("RetainUntilDate")
|
||||||
|
if not mode_str or not date_str:
|
||||||
|
return None
|
||||||
|
try:
|
||||||
|
mode = RetentionMode(mode_str)
|
||||||
|
retain_until = datetime.fromisoformat(date_str.replace("Z", "+00:00"))
|
||||||
|
return cls(mode=mode, retain_until_date=retain_until)
|
||||||
|
except (ValueError, KeyError):
|
||||||
|
return None
|
||||||
|
|
||||||
|
def is_expired(self) -> bool:
|
||||||
|
return datetime.now(timezone.utc) > self.retain_until_date
|
||||||
|
|
||||||
|
|
||||||
|
@dataclass
|
||||||
|
class ObjectLockConfig:
|
||||||
|
enabled: bool = False
|
||||||
|
default_retention: Optional[ObjectLockRetention] = None
|
||||||
|
|
||||||
|
def to_dict(self) -> Dict[str, Any]:
|
||||||
|
result: Dict[str, Any] = {"ObjectLockEnabled": "Enabled" if self.enabled else "Disabled"}
|
||||||
|
if self.default_retention:
|
||||||
|
result["Rule"] = {
|
||||||
|
"DefaultRetention": {
|
||||||
|
"Mode": self.default_retention.mode.value,
|
||||||
|
"Days": None,
|
||||||
|
"Years": None,
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return result
|
||||||
|
|
||||||
|
@classmethod
|
||||||
|
def from_dict(cls, data: Dict[str, Any]) -> "ObjectLockConfig":
|
||||||
|
enabled = data.get("ObjectLockEnabled") == "Enabled"
|
||||||
|
default_retention = None
|
||||||
|
rule = data.get("Rule")
|
||||||
|
if rule and "DefaultRetention" in rule:
|
||||||
|
dr = rule["DefaultRetention"]
|
||||||
|
mode_str = dr.get("Mode", "GOVERNANCE")
|
||||||
|
days = dr.get("Days")
|
||||||
|
years = dr.get("Years")
|
||||||
|
if days or years:
|
||||||
|
from datetime import timedelta
|
||||||
|
now = datetime.now(timezone.utc)
|
||||||
|
if years:
|
||||||
|
delta = timedelta(days=int(years) * 365)
|
||||||
|
else:
|
||||||
|
delta = timedelta(days=int(days))
|
||||||
|
default_retention = ObjectLockRetention(
|
||||||
|
mode=RetentionMode(mode_str),
|
||||||
|
retain_until_date=now + delta,
|
||||||
|
)
|
||||||
|
return cls(enabled=enabled, default_retention=default_retention)
|
||||||
|
|
||||||
|
|
||||||
|
class ObjectLockService:
|
||||||
|
def __init__(self, storage_root: Path):
|
||||||
|
self.storage_root = storage_root
|
||||||
|
self._config_cache: Dict[str, ObjectLockConfig] = {}
|
||||||
|
|
||||||
|
def _bucket_lock_config_path(self, bucket_name: str) -> Path:
|
||||||
|
return self.storage_root / ".myfsio.sys" / "buckets" / bucket_name / "object_lock.json"
|
||||||
|
|
||||||
|
def _object_lock_meta_path(self, bucket_name: str, object_key: str) -> Path:
|
||||||
|
safe_key = object_key.replace("/", "_").replace("\\", "_")
|
||||||
|
return (
|
||||||
|
self.storage_root / ".myfsio.sys" / "buckets" / bucket_name /
|
||||||
|
"locks" / f"{safe_key}.lock.json"
|
||||||
|
)
|
||||||
|
|
||||||
|
def get_bucket_lock_config(self, bucket_name: str) -> ObjectLockConfig:
|
||||||
|
if bucket_name in self._config_cache:
|
||||||
|
return self._config_cache[bucket_name]
|
||||||
|
|
||||||
|
config_path = self._bucket_lock_config_path(bucket_name)
|
||||||
|
if not config_path.exists():
|
||||||
|
return ObjectLockConfig(enabled=False)
|
||||||
|
|
||||||
|
try:
|
||||||
|
data = json.loads(config_path.read_text(encoding="utf-8"))
|
||||||
|
config = ObjectLockConfig.from_dict(data)
|
||||||
|
self._config_cache[bucket_name] = config
|
||||||
|
return config
|
||||||
|
except (json.JSONDecodeError, OSError):
|
||||||
|
return ObjectLockConfig(enabled=False)
|
||||||
|
|
||||||
|
def set_bucket_lock_config(self, bucket_name: str, config: ObjectLockConfig) -> None:
|
||||||
|
config_path = self._bucket_lock_config_path(bucket_name)
|
||||||
|
config_path.parent.mkdir(parents=True, exist_ok=True)
|
||||||
|
config_path.write_text(json.dumps(config.to_dict()), encoding="utf-8")
|
||||||
|
self._config_cache[bucket_name] = config
|
||||||
|
|
||||||
|
def enable_bucket_lock(self, bucket_name: str) -> None:
|
||||||
|
config = self.get_bucket_lock_config(bucket_name)
|
||||||
|
config.enabled = True
|
||||||
|
self.set_bucket_lock_config(bucket_name, config)
|
||||||
|
|
||||||
|
def is_bucket_lock_enabled(self, bucket_name: str) -> bool:
|
||||||
|
return self.get_bucket_lock_config(bucket_name).enabled
|
||||||
|
|
||||||
|
def get_object_retention(self, bucket_name: str, object_key: str) -> Optional[ObjectLockRetention]:
|
||||||
|
meta_path = self._object_lock_meta_path(bucket_name, object_key)
|
||||||
|
if not meta_path.exists():
|
||||||
|
return None
|
||||||
|
try:
|
||||||
|
data = json.loads(meta_path.read_text(encoding="utf-8"))
|
||||||
|
return ObjectLockRetention.from_dict(data.get("retention", {}))
|
||||||
|
except (json.JSONDecodeError, OSError):
|
||||||
|
return None
|
||||||
|
|
||||||
|
def set_object_retention(
|
||||||
|
self,
|
||||||
|
bucket_name: str,
|
||||||
|
object_key: str,
|
||||||
|
+        retention: ObjectLockRetention,
+        bypass_governance: bool = False,
+    ) -> None:
+        existing = self.get_object_retention(bucket_name, object_key)
+        if existing and not existing.is_expired():
+            if existing.mode == RetentionMode.COMPLIANCE:
+                raise ObjectLockError(
+                    "Cannot modify retention on object with COMPLIANCE mode until retention expires"
+                )
+            if existing.mode == RetentionMode.GOVERNANCE and not bypass_governance:
+                raise ObjectLockError(
+                    "Cannot modify GOVERNANCE retention without bypass-governance permission"
+                )
+
+        meta_path = self._object_lock_meta_path(bucket_name, object_key)
+        meta_path.parent.mkdir(parents=True, exist_ok=True)
+
+        existing_data: Dict[str, Any] = {}
+        if meta_path.exists():
+            try:
+                existing_data = json.loads(meta_path.read_text(encoding="utf-8"))
+            except (json.JSONDecodeError, OSError):
+                pass
+
+        existing_data["retention"] = retention.to_dict()
+        meta_path.write_text(json.dumps(existing_data), encoding="utf-8")
+
+    def get_legal_hold(self, bucket_name: str, object_key: str) -> bool:
+        meta_path = self._object_lock_meta_path(bucket_name, object_key)
+        if not meta_path.exists():
+            return False
+        try:
+            data = json.loads(meta_path.read_text(encoding="utf-8"))
+            return data.get("legal_hold", False)
+        except (json.JSONDecodeError, OSError):
+            return False
+
+    def set_legal_hold(self, bucket_name: str, object_key: str, enabled: bool) -> None:
+        meta_path = self._object_lock_meta_path(bucket_name, object_key)
+        meta_path.parent.mkdir(parents=True, exist_ok=True)
+
+        existing_data: Dict[str, Any] = {}
+        if meta_path.exists():
+            try:
+                existing_data = json.loads(meta_path.read_text(encoding="utf-8"))
+            except (json.JSONDecodeError, OSError):
+                pass
+
+        existing_data["legal_hold"] = enabled
+        meta_path.write_text(json.dumps(existing_data), encoding="utf-8")
+
+    def can_delete_object(
+        self,
+        bucket_name: str,
+        object_key: str,
+        bypass_governance: bool = False,
+    ) -> tuple[bool, str]:
+        if self.get_legal_hold(bucket_name, object_key):
+            return False, "Object is under legal hold"
+
+        retention = self.get_object_retention(bucket_name, object_key)
+        if retention and not retention.is_expired():
+            if retention.mode == RetentionMode.COMPLIANCE:
+                return False, f"Object is locked in COMPLIANCE mode until {retention.retain_until_date.isoformat()}"
+            if retention.mode == RetentionMode.GOVERNANCE:
+                if not bypass_governance:
+                    return False, f"Object is locked in GOVERNANCE mode until {retention.retain_until_date.isoformat()}"
+
+        return True, ""
+
+    def can_overwrite_object(
+        self,
+        bucket_name: str,
+        object_key: str,
+        bypass_governance: bool = False,
+    ) -> tuple[bool, str]:
+        return self.can_delete_object(bucket_name, object_key, bypass_governance)
+
+    def delete_object_lock_metadata(self, bucket_name: str, object_key: str) -> None:
+        meta_path = self._object_lock_meta_path(bucket_name, object_key)
+        try:
+            if meta_path.exists():
+                meta_path.unlink()
+        except OSError:
+            pass
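Editor's note: the guard above follows the standard S3 object-lock split, where COMPLIANCE retention is immutable until it expires while GOVERNANCE retention yields to an explicit bypass. A minimal usage sketch follows; `lock_mgr`, the `ObjectLockRetention` constructor arguments, and the setter name `set_object_retention` are stand-ins for illustration, since the method signature is cut off at the top of this hunk.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical wiring: lock_mgr is an instance of the class patched above;
# only the guard behavior (COMPLIANCE vs. GOVERNANCE) comes from the diff.
retention = ObjectLockRetention(
    mode=RetentionMode.GOVERNANCE,
    retain_until_date=datetime.now(timezone.utc) + timedelta(days=30),
)
lock_mgr.set_object_retention("invoices", "2024/q1.pdf", retention)

ok, reason = lock_mgr.can_delete_object("invoices", "2024/q1.pdf")
# ok is False: "Object is locked in GOVERNANCE mode until ..."

ok, _ = lock_mgr.can_delete_object("invoices", "2024/q1.pdf", bypass_governance=True)
# ok is True: GOVERNANCE honors the bypass flag; COMPLIANCE would still refuse
```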
@@ -1,4 +1,3 @@
-"""Background replication worker."""
 from __future__ import annotations

 import json
@@ -9,7 +8,7 @@ import time
 from concurrent.futures import ThreadPoolExecutor
 from dataclasses import dataclass, field
 from pathlib import Path
-from typing import Any, Dict, Optional
+from typing import Any, Dict, List, Optional

 import boto3
 from botocore.config import Config
@@ -24,7 +23,7 @@ logger = logging.getLogger(__name__)
 REPLICATION_USER_AGENT = "S3ReplicationAgent/1.0"
 REPLICATION_CONNECT_TIMEOUT = 5
 REPLICATION_READ_TIMEOUT = 30
-STREAMING_THRESHOLD_BYTES = 10 * 1024 * 1024  # 10 MiB - use streaming for larger files
+STREAMING_THRESHOLD_BYTES = 10 * 1024 * 1024

 REPLICATION_MODE_NEW_ONLY = "new_only"
 REPLICATION_MODE_ALL = "all"
@@ -32,13 +31,9 @@ REPLICATION_MODE_ALL = "all"

 def _create_s3_client(connection: RemoteConnection, *, health_check: bool = False) -> Any:
     """Create a boto3 S3 client for the given connection.

     Args:
         connection: Remote S3 connection configuration
         health_check: If True, use minimal retries for quick health checks
-
-    Returns:
-        Configured boto3 S3 client
     """
     config = Config(
         user_agent_extra=REPLICATION_USER_AGENT,
@@ -92,6 +87,40 @@ class ReplicationStats:
         )


+@dataclass
+class ReplicationFailure:
+    object_key: str
+    error_message: str
+    timestamp: float
+    failure_count: int
+    bucket_name: str
+    action: str
+    last_error_code: Optional[str] = None
+
+    def to_dict(self) -> dict:
+        return {
+            "object_key": self.object_key,
+            "error_message": self.error_message,
+            "timestamp": self.timestamp,
+            "failure_count": self.failure_count,
+            "bucket_name": self.bucket_name,
+            "action": self.action,
+            "last_error_code": self.last_error_code,
+        }
+
+    @classmethod
+    def from_dict(cls, data: dict) -> "ReplicationFailure":
+        return cls(
+            object_key=data["object_key"],
+            error_message=data["error_message"],
+            timestamp=data["timestamp"],
+            failure_count=data["failure_count"],
+            bucket_name=data["bucket_name"],
+            action=data["action"],
+            last_error_code=data.get("last_error_code"),
+        )
+
+
 @dataclass
 class ReplicationRule:
     bucket_name: str
@@ -125,15 +154,86 @@ class ReplicationRule:
         return rule


+class ReplicationFailureStore:
+    MAX_FAILURES_PER_BUCKET = 50
+
+    def __init__(self, storage_root: Path) -> None:
+        self.storage_root = storage_root
+        self._lock = threading.Lock()
+
+    def _get_failures_path(self, bucket_name: str) -> Path:
+        return self.storage_root / ".myfsio.sys" / "buckets" / bucket_name / "replication_failures.json"
+
+    def load_failures(self, bucket_name: str) -> List[ReplicationFailure]:
+        path = self._get_failures_path(bucket_name)
+        if not path.exists():
+            return []
+        try:
+            with open(path, "r") as f:
+                data = json.load(f)
+            return [ReplicationFailure.from_dict(d) for d in data.get("failures", [])]
+        except (OSError, ValueError, KeyError) as e:
+            logger.error(f"Failed to load replication failures for {bucket_name}: {e}")
+            return []
+
+    def save_failures(self, bucket_name: str, failures: List[ReplicationFailure]) -> None:
+        path = self._get_failures_path(bucket_name)
+        path.parent.mkdir(parents=True, exist_ok=True)
+        data = {"failures": [f.to_dict() for f in failures[:self.MAX_FAILURES_PER_BUCKET]]}
+        try:
+            with open(path, "w") as f:
+                json.dump(data, f, indent=2)
+        except OSError as e:
+            logger.error(f"Failed to save replication failures for {bucket_name}: {e}")
+
+    def add_failure(self, bucket_name: str, failure: ReplicationFailure) -> None:
+        with self._lock:
+            failures = self.load_failures(bucket_name)
+            existing = next((f for f in failures if f.object_key == failure.object_key), None)
+            if existing:
+                existing.failure_count += 1
+                existing.timestamp = failure.timestamp
+                existing.error_message = failure.error_message
+                existing.last_error_code = failure.last_error_code
+            else:
+                failures.insert(0, failure)
+            self.save_failures(bucket_name, failures)
+
+    def remove_failure(self, bucket_name: str, object_key: str) -> bool:
+        with self._lock:
+            failures = self.load_failures(bucket_name)
+            original_len = len(failures)
+            failures = [f for f in failures if f.object_key != object_key]
+            if len(failures) < original_len:
+                self.save_failures(bucket_name, failures)
+                return True
+            return False
+
+    def clear_failures(self, bucket_name: str) -> None:
+        with self._lock:
+            path = self._get_failures_path(bucket_name)
+            if path.exists():
+                path.unlink()
+
+    def get_failure(self, bucket_name: str, object_key: str) -> Optional[ReplicationFailure]:
+        failures = self.load_failures(bucket_name)
+        return next((f for f in failures if f.object_key == object_key), None)
+
+    def get_failure_count(self, bucket_name: str) -> int:
+        return len(self.load_failures(bucket_name))
+
+
 class ReplicationManager:
-    def __init__(self, storage: ObjectStorage, connections: ConnectionStore, rules_path: Path) -> None:
+    def __init__(self, storage: ObjectStorage, connections: ConnectionStore, rules_path: Path, storage_root: Path) -> None:
         self.storage = storage
         self.connections = connections
         self.rules_path = rules_path
+        self.storage_root = storage_root
         self._rules: Dict[str, ReplicationRule] = {}
         self._stats_lock = threading.Lock()
         self._executor = ThreadPoolExecutor(max_workers=4, thread_name_prefix="ReplicationWorker")
         self._shutdown = False
+        self.failure_store = ReplicationFailureStore(storage_root)
         self.reload_rules()

     def shutdown(self, wait: bool = True) -> None:
@@ -182,9 +282,15 @@ class ReplicationManager:
         return self._rules.get(bucket_name)

     def set_rule(self, rule: ReplicationRule) -> None:
+        old_rule = self._rules.get(rule.bucket_name)
+        was_all_mode = old_rule and old_rule.mode == REPLICATION_MODE_ALL if old_rule else False
         self._rules[rule.bucket_name] = rule
         self.save_rules()

+        if rule.mode == REPLICATION_MODE_ALL and rule.enabled and not was_all_mode:
+            logger.info(f"Replication mode ALL enabled for {rule.bucket_name}, triggering sync of existing objects")
+            self._executor.submit(self.replicate_existing_objects, rule.bucket_name)
+
     def delete_rule(self, bucket_name: str) -> None:
         if bucket_name in self._rules:
             del self._rules[bucket_name]
@@ -306,7 +412,6 @@ class ReplicationManager:
         if self._shutdown:
             return

-        # Re-check if rule is still enabled (may have been paused after task was submitted)
         current_rule = self.get_rule(bucket_name)
         if not current_rule or not current_rule.enabled:
             logger.debug(f"Replication skipped for {bucket_name}/{object_key}: rule disabled or removed")
@@ -331,8 +436,19 @@ class ReplicationManager:
                 s3.delete_object(Bucket=rule.target_bucket, Key=object_key)
                 logger.info(f"Replicated DELETE {bucket_name}/{object_key} to {conn.name} ({rule.target_bucket})")
                 self._update_last_sync(bucket_name, object_key)
+                self.failure_store.remove_failure(bucket_name, object_key)
             except ClientError as e:
+                error_code = e.response.get('Error', {}).get('Code')
                 logger.error(f"Replication DELETE failed for {bucket_name}/{object_key}: {e}")
+                self.failure_store.add_failure(bucket_name, ReplicationFailure(
+                    object_key=object_key,
+                    error_message=str(e),
+                    timestamp=time.time(),
+                    failure_count=1,
+                    bucket_name=bucket_name,
+                    action="delete",
+                    last_error_code=error_code,
+                ))
             return

         try:
@@ -357,7 +473,6 @@ class ReplicationManager:
                 extra_args["ContentType"] = content_type

             if file_size >= STREAMING_THRESHOLD_BYTES:
-                # Use multipart upload for large files
                 s3.upload_file(
                     str(path),
                     rule.target_bucket,
@@ -365,7 +480,6 @@ class ReplicationManager:
                     ExtraArgs=extra_args if extra_args else None,
                 )
             else:
-                # Read small files into memory
                 file_content = path.read_bytes()
                 put_kwargs = {
                     "Bucket": rule.target_bucket,
@@ -407,9 +521,89 @@ class ReplicationManager:

             logger.info(f"Replicated {bucket_name}/{object_key} to {conn.name} ({rule.target_bucket})")
             self._update_last_sync(bucket_name, object_key)
+            self.failure_store.remove_failure(bucket_name, object_key)
+
         except (ClientError, OSError, ValueError) as e:
+            error_code = None
+            if isinstance(e, ClientError):
+                error_code = e.response.get('Error', {}).get('Code')
             logger.error(f"Replication failed for {bucket_name}/{object_key}: {e}")
-        except Exception:
+            self.failure_store.add_failure(bucket_name, ReplicationFailure(
+                object_key=object_key,
+                error_message=str(e),
+                timestamp=time.time(),
+                failure_count=1,
+                bucket_name=bucket_name,
+                action=action,
+                last_error_code=error_code,
+            ))
+        except Exception as e:
             logger.exception(f"Unexpected error during replication for {bucket_name}/{object_key}")
+            self.failure_store.add_failure(bucket_name, ReplicationFailure(
+                object_key=object_key,
+                error_message=str(e),
+                timestamp=time.time(),
+                failure_count=1,
+                bucket_name=bucket_name,
+                action=action,
+                last_error_code=None,
+            ))
+
+    def get_failed_items(self, bucket_name: str, limit: int = 50, offset: int = 0) -> List[ReplicationFailure]:
+        failures = self.failure_store.load_failures(bucket_name)
+        return failures[offset:offset + limit]
+
+    def get_failure_count(self, bucket_name: str) -> int:
+        return self.failure_store.get_failure_count(bucket_name)
+
+    def retry_failed_item(self, bucket_name: str, object_key: str) -> bool:
+        failure = self.failure_store.get_failure(bucket_name, object_key)
+        if not failure:
+            return False
+
+        rule = self.get_rule(bucket_name)
+        if not rule or not rule.enabled:
+            return False
+
+        connection = self.connections.get(rule.target_connection_id)
+        if not connection:
+            logger.warning(f"Cannot retry: Connection {rule.target_connection_id} not found")
+            return False
+
+        if not self.check_endpoint_health(connection):
+            logger.warning(f"Cannot retry: Endpoint {connection.name} is not reachable")
+            return False
+
+        self._executor.submit(self._replicate_task, bucket_name, object_key, rule, connection, failure.action)
+        return True
+
+    def retry_all_failed(self, bucket_name: str) -> Dict[str, int]:
+        failures = self.failure_store.load_failures(bucket_name)
+        if not failures:
+            return {"submitted": 0, "skipped": 0}
+
+        rule = self.get_rule(bucket_name)
+        if not rule or not rule.enabled:
+            return {"submitted": 0, "skipped": len(failures)}
+
+        connection = self.connections.get(rule.target_connection_id)
+        if not connection:
+            logger.warning(f"Cannot retry: Connection {rule.target_connection_id} not found")
+            return {"submitted": 0, "skipped": len(failures)}
+
+        if not self.check_endpoint_health(connection):
+            logger.warning(f"Cannot retry: Endpoint {connection.name} is not reachable")
+            return {"submitted": 0, "skipped": len(failures)}
+
+        submitted = 0
+        for failure in failures:
+            self._executor.submit(self._replicate_task, bucket_name, failure.object_key, rule, connection, failure.action)
+            submitted += 1
+
+        return {"submitted": submitted, "skipped": 0}
+
+    def dismiss_failure(self, bucket_name: str, object_key: str) -> bool:
+        return self.failure_store.remove_failure(bucket_name, object_key)
+
+    def clear_failures(self, bucket_name: str) -> None:
+        self.failure_store.clear_failures(bucket_name)
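Editor's note: taken together, the hunks above give `ReplicationManager` a persistent failure ledger plus a retry surface. A short sketch of how that API is driven, assuming `manager` is an already-configured `ReplicationManager`; every method and field name is as introduced in this diff.

```python
# Page through recorded failures (add_failure() keeps newest first,
# capped at MAX_FAILURES_PER_BUCKET entries per bucket).
for failure in manager.get_failed_items("photos", limit=10):
    print(failure.object_key, failure.failure_count, failure.last_error_code)

# Re-submit everything once the remote endpoint is healthy again;
# retry_all_failed() skips the whole batch if the rule or endpoint is down.
result = manager.retry_all_failed("photos")
print(f"submitted={result['submitted']} skipped={result['skipped']}")

# Drop a single entry that should not be retried.
manager.dismiss_failure("photos", "broken/object.bin")
```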

app/s3_api.py: 789 changes (diff suppressed because it is too large)
@@ -1,4 +1,3 @@
-"""Ephemeral store for one-time secrets communicated to the UI."""
 from __future__ import annotations

 import secrets
app/storage.py: 290 changes
@@ -1,4 +1,3 @@
-"""Filesystem-backed object storage helpers."""
 from __future__ import annotations

 import hashlib
@@ -77,6 +76,14 @@ class StorageError(RuntimeError):
     """Raised when the storage layer encounters an unrecoverable problem."""


+class BucketNotFoundError(StorageError):
+    """Raised when the bucket does not exist."""
+
+
+class ObjectNotFoundError(StorageError):
+    """Raised when the object does not exist."""
+
+
 class QuotaExceededError(StorageError):
     """Raised when an operation would exceed bucket quota limits."""
@@ -91,7 +98,7 @@ class ObjectMeta:
     key: str
     size: int
     last_modified: datetime
-    etag: str
+    etag: Optional[str] = None
     metadata: Optional[Dict[str, str]] = None
@@ -107,7 +114,7 @@ class ListObjectsResult:
     objects: List[ObjectMeta]
     is_truncated: bool
     next_continuation_token: Optional[str]
-    total_count: Optional[int] = None  # Total objects in bucket (from stats cache)
+    total_count: Optional[int] = None


 def _utcnow() -> datetime:
@@ -131,17 +138,25 @@ class ObjectStorage:
     MULTIPART_MANIFEST = "manifest.json"
     BUCKET_CONFIG_FILE = ".bucket.json"
     KEY_INDEX_CACHE_TTL = 30
-    OBJECT_CACHE_MAX_SIZE = 100  # Maximum number of buckets to cache
+    OBJECT_CACHE_MAX_SIZE = 100

     def __init__(self, root: Path) -> None:
         self.root = Path(root)
         self.root.mkdir(parents=True, exist_ok=True)
         self._ensure_system_roots()
-        # LRU cache for object metadata with thread-safe access
         self._object_cache: OrderedDict[str, tuple[Dict[str, ObjectMeta], float]] = OrderedDict()
         self._cache_lock = threading.Lock()
-        # Cache version counter for detecting stale reads
+        self._bucket_locks: Dict[str, threading.Lock] = {}
         self._cache_version: Dict[str, int] = {}
+        self._bucket_config_cache: Dict[str, tuple[dict[str, Any], float]] = {}
+        self._bucket_config_cache_ttl = 30.0
+
+    def _get_bucket_lock(self, bucket_id: str) -> threading.Lock:
+        """Get or create a lock for a specific bucket. Reduces global lock contention."""
+        with self._cache_lock:
+            if bucket_id not in self._bucket_locks:
+                self._bucket_locks[bucket_id] = threading.Lock()
+            return self._bucket_locks[bucket_id]

     def list_buckets(self) -> List[BucketMeta]:
         buckets: List[BucketMeta] = []
@@ -159,6 +174,11 @@ class ObjectStorage:
     def bucket_exists(self, bucket_name: str) -> bool:
         return self._bucket_path(bucket_name).exists()

+    def _require_bucket_exists(self, bucket_path: Path) -> None:
+        """Raise BucketNotFoundError if bucket does not exist."""
+        if not bucket_path.exists():
+            raise BucketNotFoundError("Bucket does not exist")
+
     def _validate_bucket_name(self, bucket_name: str) -> None:
         if len(bucket_name) < 3 or len(bucket_name) > 63:
             raise StorageError("Bucket name must be between 3 and 63 characters")
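Editor's note: `_get_bucket_lock` is plain lock striping. The global `_cache_lock` is held only for the dictionary lookup, so a slow cache rebuild for one bucket no longer blocks every other bucket. The pattern in isolation:

```python
import threading
from typing import Dict


class StripedLocks:
    """Per-key locks, created lazily under a short-lived global lock."""

    def __init__(self) -> None:
        self._global = threading.Lock()
        self._locks: Dict[str, threading.Lock] = {}

    def for_key(self, key: str) -> threading.Lock:
        with self._global:  # held only long enough to fetch or create the lock
            lock = self._locks.get(key)
            if lock is None:
                lock = self._locks[key] = threading.Lock()
            return lock


locks = StripedLocks()
with locks.for_key("bucket-a"):  # long work here blocks only "bucket-a"
    pass
```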
@@ -184,7 +204,7 @@ class ObjectStorage:
         """
         bucket_path = self._bucket_path(bucket_name)
         if not bucket_path.exists():
-            raise StorageError("Bucket does not exist")
+            raise BucketNotFoundError("Bucket does not exist")

         cache_path = self._system_bucket_root(bucket_name) / "stats.json"
         if cache_path.exists():
@@ -246,12 +266,13 @@ class ObjectStorage:
     def delete_bucket(self, bucket_name: str) -> None:
         bucket_path = self._bucket_path(bucket_name)
         if not bucket_path.exists():
-            raise StorageError("Bucket does not exist")
-        if self._has_visible_objects(bucket_path):
+            raise BucketNotFoundError("Bucket does not exist")
+        has_objects, has_versions, has_multipart = self._check_bucket_contents(bucket_path)
+        if has_objects:
             raise StorageError("Bucket not empty")
-        if self._has_archived_versions(bucket_path):
+        if has_versions:
             raise StorageError("Bucket contains archived object versions")
-        if self._has_active_multipart_uploads(bucket_path):
+        if has_multipart:
             raise StorageError("Bucket has active multipart uploads")
         self._remove_tree(bucket_path)
         self._remove_tree(self._system_bucket_root(bucket_path.name))
@@ -278,7 +299,7 @@ class ObjectStorage:
         """
         bucket_path = self._bucket_path(bucket_name)
         if not bucket_path.exists():
-            raise StorageError("Bucket does not exist")
+            raise BucketNotFoundError("Bucket does not exist")
         bucket_id = bucket_path.name

         object_cache = self._get_object_cache(bucket_id, bucket_path)
@@ -339,7 +360,7 @@ class ObjectStorage:
     ) -> ObjectMeta:
         bucket_path = self._bucket_path(bucket_name)
         if not bucket_path.exists():
-            raise StorageError("Bucket does not exist")
+            raise BucketNotFoundError("Bucket does not exist")
         bucket_id = bucket_path.name

         safe_key = self._sanitize_object_key(object_key)
@@ -395,20 +416,22 @@ class ObjectStorage:
         self._write_metadata(bucket_id, safe_key, combined_meta)

         self._invalidate_bucket_stats_cache(bucket_id)
-        self._invalidate_object_cache(bucket_id)

-        return ObjectMeta(
+        obj_meta = ObjectMeta(
             key=safe_key.as_posix(),
             size=stat.st_size,
             last_modified=datetime.fromtimestamp(stat.st_mtime, timezone.utc),
             etag=etag,
             metadata=metadata,
         )
+        self._update_object_cache_entry(bucket_id, safe_key.as_posix(), obj_meta)
+
+        return obj_meta

     def get_object_path(self, bucket_name: str, object_key: str) -> Path:
         path = self._object_path(bucket_name, object_key)
         if not path.exists():
-            raise StorageError("Object not found")
+            raise ObjectNotFoundError("Object not found")
         return path

     def get_object_metadata(self, bucket_name: str, object_key: str) -> Dict[str, str]:
@@ -451,7 +474,7 @@ class ObjectStorage:
         self._delete_metadata(bucket_id, rel)

         self._invalidate_bucket_stats_cache(bucket_id)
-        self._invalidate_object_cache(bucket_id)
+        self._update_object_cache_entry(bucket_id, safe_key.as_posix(), None)
         self._cleanup_empty_parents(path, bucket_path)

     def purge_object(self, bucket_name: str, object_key: str) -> None:
@@ -473,13 +496,13 @@ class ObjectStorage:
         shutil.rmtree(legacy_version_dir, ignore_errors=True)

         self._invalidate_bucket_stats_cache(bucket_id)
-        self._invalidate_object_cache(bucket_id)
+        self._update_object_cache_entry(bucket_id, rel.as_posix(), None)
         self._cleanup_empty_parents(target, bucket_path)

     def is_versioning_enabled(self, bucket_name: str) -> bool:
         bucket_path = self._bucket_path(bucket_name)
         if not bucket_path.exists():
-            raise StorageError("Bucket does not exist")
+            raise BucketNotFoundError("Bucket does not exist")
         return self._is_versioning_enabled(bucket_path)

     def set_bucket_versioning(self, bucket_name: str, enabled: bool) -> None:
@@ -671,11 +694,11 @@ class ObjectStorage:
         """Get tags for an object."""
         bucket_path = self._bucket_path(bucket_name)
         if not bucket_path.exists():
-            raise StorageError("Bucket does not exist")
+            raise BucketNotFoundError("Bucket does not exist")
         safe_key = self._sanitize_object_key(object_key)
         object_path = bucket_path / safe_key
         if not object_path.exists():
-            raise StorageError("Object does not exist")
+            raise ObjectNotFoundError("Object does not exist")

         for meta_file in (self._metadata_file(bucket_path.name, safe_key), self._legacy_metadata_file(bucket_path.name, safe_key)):
             if not meta_file.exists():
@@ -694,11 +717,11 @@ class ObjectStorage:
         """Set tags for an object."""
         bucket_path = self._bucket_path(bucket_name)
         if not bucket_path.exists():
-            raise StorageError("Bucket does not exist")
+            raise BucketNotFoundError("Bucket does not exist")
         safe_key = self._sanitize_object_key(object_key)
         object_path = bucket_path / safe_key
         if not object_path.exists():
-            raise StorageError("Object does not exist")
+            raise ObjectNotFoundError("Object does not exist")

         meta_file = self._metadata_file(bucket_path.name, safe_key)

@@ -732,7 +755,7 @@ class ObjectStorage:
     def list_object_versions(self, bucket_name: str, object_key: str) -> List[Dict[str, Any]]:
         bucket_path = self._bucket_path(bucket_name)
         if not bucket_path.exists():
-            raise StorageError("Bucket does not exist")
+            raise BucketNotFoundError("Bucket does not exist")
         bucket_id = bucket_path.name
         safe_key = self._sanitize_object_key(object_key)
         version_dir = self._version_dir(bucket_id, safe_key)
@@ -756,7 +779,7 @@ class ObjectStorage:
     def restore_object_version(self, bucket_name: str, object_key: str, version_id: str) -> ObjectMeta:
         bucket_path = self._bucket_path(bucket_name)
         if not bucket_path.exists():
-            raise StorageError("Bucket does not exist")
+            raise BucketNotFoundError("Bucket does not exist")
         bucket_id = bucket_path.name
         safe_key = self._sanitize_object_key(object_key)
         version_dir = self._version_dir(bucket_id, safe_key)
@@ -790,10 +813,33 @@ class ObjectStorage:
             metadata=metadata or None,
         )

+    def delete_object_version(self, bucket_name: str, object_key: str, version_id: str) -> None:
+        bucket_path = self._bucket_path(bucket_name)
+        if not bucket_path.exists():
+            raise BucketNotFoundError("Bucket does not exist")
+        bucket_id = bucket_path.name
+        safe_key = self._sanitize_object_key(object_key)
+        version_dir = self._version_dir(bucket_id, safe_key)
+        data_path = version_dir / f"{version_id}.bin"
+        meta_path = version_dir / f"{version_id}.json"
+        if not data_path.exists() and not meta_path.exists():
+            legacy_version_dir = self._legacy_version_dir(bucket_id, safe_key)
+            data_path = legacy_version_dir / f"{version_id}.bin"
+            meta_path = legacy_version_dir / f"{version_id}.json"
+        if not data_path.exists() and not meta_path.exists():
+            raise StorageError(f"Version {version_id} not found")
+        if data_path.exists():
+            data_path.unlink()
+        if meta_path.exists():
+            meta_path.unlink()
+        parent = data_path.parent
+        if parent.exists() and not any(parent.iterdir()):
+            parent.rmdir()
+
     def list_orphaned_objects(self, bucket_name: str) -> List[Dict[str, Any]]:
         bucket_path = self._bucket_path(bucket_name)
         if not bucket_path.exists():
-            raise StorageError("Bucket does not exist")
+            raise BucketNotFoundError("Bucket does not exist")
         bucket_id = bucket_path.name
         version_roots = [self._bucket_versions_root(bucket_id), self._legacy_versions_root(bucket_id)]
         if not any(root.exists() for root in version_roots):
@@ -861,7 +907,7 @@ class ObjectStorage:
     ) -> str:
         bucket_path = self._bucket_path(bucket_name)
         if not bucket_path.exists():
-            raise StorageError("Bucket does not exist")
+            raise BucketNotFoundError("Bucket does not exist")
         bucket_id = bucket_path.name
         safe_key = self._sanitize_object_key(object_key)
         upload_id = uuid.uuid4().hex
@@ -888,8 +934,8 @@ class ObjectStorage:

         Uses file locking to safely update the manifest and handle concurrent uploads.
         """
-        if part_number < 1:
-            raise StorageError("part_number must be >= 1")
+        if part_number < 1 or part_number > 10000:
+            raise StorageError("part_number must be between 1 and 10000")
         bucket_path = self._bucket_path(bucket_name)

         upload_root = self._multipart_dir(bucket_path.name, upload_id)
@@ -898,7 +944,6 @@ class ObjectStorage:
         if not upload_root.exists():
             raise StorageError("Multipart upload not found")

-        # Write part to temporary file first, then rename atomically
         checksum = hashlib.md5()
         part_filename = f"part-{part_number:05d}.part"
         part_path = upload_root / part_filename
@@ -907,11 +952,8 @@ class ObjectStorage:
         try:
             with temp_path.open("wb") as target:
                 shutil.copyfileobj(_HashingReader(stream, checksum), target)
-
-            # Atomic rename (or replace on Windows)
             temp_path.replace(part_path)
         except OSError:
-            # Clean up temp file on failure
             try:
                 temp_path.unlink(missing_ok=True)
             except OSError:
@@ -927,7 +969,6 @@ class ObjectStorage:
         manifest_path = upload_root / self.MULTIPART_MANIFEST
         lock_path = upload_root / ".manifest.lock"

-        # Retry loop for handling transient lock/read failures
         max_retries = 3
         for attempt in range(max_retries):
             try:
@@ -1038,11 +1079,6 @@ class ObjectStorage:
                     checksum.update(data)
                     target.write(data)

-            metadata = manifest.get("metadata")
-            if metadata:
-                self._write_metadata(bucket_id, safe_key, metadata)
-            else:
-                self._delete_metadata(bucket_id, safe_key)
         except BlockingIOError:
             raise StorageError("Another upload to this key is in progress")
         finally:
@@ -1054,16 +1090,25 @@ class ObjectStorage:
         shutil.rmtree(upload_root, ignore_errors=True)

         self._invalidate_bucket_stats_cache(bucket_id)
-        self._invalidate_object_cache(bucket_id)

         stat = destination.stat()
-        return ObjectMeta(
+        etag = checksum.hexdigest()
+        metadata = manifest.get("metadata")
+
+        internal_meta = {"__etag__": etag, "__size__": str(stat.st_size)}
+        combined_meta = {**internal_meta, **(metadata or {})}
+        self._write_metadata(bucket_id, safe_key, combined_meta)
+
+        obj_meta = ObjectMeta(
             key=safe_key.as_posix(),
             size=stat.st_size,
             last_modified=datetime.fromtimestamp(stat.st_mtime, timezone.utc),
-            etag=checksum.hexdigest(),
+            etag=etag,
             metadata=metadata,
         )
+        self._update_object_cache_entry(bucket_id, safe_key.as_posix(), obj_meta)
+
+        return obj_meta

     def abort_multipart_upload(self, bucket_name: str, upload_id: str) -> None:
         bucket_path = self._bucket_path(bucket_name)
@@ -1102,6 +1147,49 @@ class ObjectStorage:
         parts.sort(key=lambda x: x["PartNumber"])
         return parts

+    def list_multipart_uploads(self, bucket_name: str) -> List[Dict[str, Any]]:
+        """List all active multipart uploads for a bucket."""
+        bucket_path = self._bucket_path(bucket_name)
+        if not bucket_path.exists():
+            raise BucketNotFoundError("Bucket does not exist")
+        bucket_id = bucket_path.name
+        uploads = []
+        multipart_root = self._multipart_bucket_root(bucket_id)
+        if multipart_root.exists():
+            for upload_dir in multipart_root.iterdir():
+                if not upload_dir.is_dir():
+                    continue
+                manifest_path = upload_dir / "manifest.json"
+                if not manifest_path.exists():
+                    continue
+                try:
+                    manifest = json.loads(manifest_path.read_text(encoding="utf-8"))
+                    uploads.append({
+                        "upload_id": manifest.get("upload_id", upload_dir.name),
+                        "object_key": manifest.get("object_key", ""),
+                        "created_at": manifest.get("created_at", ""),
+                    })
+                except (OSError, json.JSONDecodeError):
+                    continue
+        legacy_root = self._legacy_multipart_bucket_root(bucket_id)
+        if legacy_root.exists():
+            for upload_dir in legacy_root.iterdir():
+                if not upload_dir.is_dir():
+                    continue
+                manifest_path = upload_dir / "manifest.json"
+                if not manifest_path.exists():
+                    continue
+                try:
+                    manifest = json.loads(manifest_path.read_text(encoding="utf-8"))
+                    uploads.append({
+                        "upload_id": manifest.get("upload_id", upload_dir.name),
+                        "object_key": manifest.get("object_key", ""),
+                        "created_at": manifest.get("created_at", ""),
+                    })
+                except (OSError, json.JSONDecodeError):
+                    continue
+        return uploads
+
     def _bucket_path(self, bucket_name: str) -> Path:
         safe_name = self._sanitize_bucket_name(bucket_name)
         return self.root / safe_name
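Editor's note: completing a multipart upload now persists the assembled file's ETag and size in the same metadata sidecar used by `put_object`, so later listings can serve the ETag without re-hashing the file. Based on the keys visible in the hunk, the sidecar payload plausibly looks like the dict below; the user-metadata entry is a hypothetical example.

```python
# Assumed sidecar shape: "__etag__" and "__size__" are the internal keys
# written by complete_multipart_upload above; the rest is user metadata.
sidecar = {
    "__etag__": "9b2cf535f27731c974343645a3985328",  # md5 of the assembled object
    "__size__": "1048576",
    "owner": "alice",  # hypothetical user-supplied metadata entry
}
```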
@@ -1283,9 +1371,6 @@ class ObjectStorage:

             etag = meta_cache.get(key)

-            if not etag:
-                etag = f'"{stat.st_size}-{int(stat.st_mtime)}"'
-
             objects[key] = ObjectMeta(
                 key=key,
                 size=stat.st_size,
@@ -1305,32 +1390,34 @@ class ObjectStorage:
         """Get cached object metadata for a bucket, refreshing if stale.

         Uses LRU eviction to prevent unbounded cache growth.
-        Thread-safe with version tracking to detect concurrent invalidations.
+        Thread-safe with per-bucket locks to reduce contention.
         """
         now = time.time()

         with self._cache_lock:
             cached = self._object_cache.get(bucket_id)
-            cache_version = self._cache_version.get(bucket_id, 0)
-
             if cached:
                 objects, timestamp = cached
                 if now - timestamp < self.KEY_INDEX_CACHE_TTL:
-                    # Move to end (most recently used)
                     self._object_cache.move_to_end(bucket_id)
                     return objects
+            cache_version = self._cache_version.get(bucket_id, 0)

-        # Build cache outside lock to avoid holding lock during I/O
+        bucket_lock = self._get_bucket_lock(bucket_id)
+        with bucket_lock:
+            with self._cache_lock:
+                cached = self._object_cache.get(bucket_id)
+                if cached:
+                    objects, timestamp = cached
+                    if now - timestamp < self.KEY_INDEX_CACHE_TTL:
+                        self._object_cache.move_to_end(bucket_id)
+                        return objects
             objects = self._build_object_cache(bucket_path)

             with self._cache_lock:
-                # Check if cache was invalidated while we were building
                 current_version = self._cache_version.get(bucket_id, 0)
                 if current_version != cache_version:
-                    # Cache was invalidated, rebuild
                     objects = self._build_object_cache(bucket_path)

-                # Evict oldest entries if cache is full
                 while len(self._object_cache) >= self.OBJECT_CACHE_MAX_SIZE:
                     self._object_cache.popitem(last=False)
@@ -1354,6 +1441,20 @@ class ObjectStorage:
         except OSError:
             pass

+    def _update_object_cache_entry(self, bucket_id: str, key: str, meta: Optional[ObjectMeta]) -> None:
+        """Update a single entry in the object cache instead of invalidating the whole cache.
+
+        This is a performance optimization - lazy update instead of full invalidation.
+        """
+        with self._cache_lock:
+            cached = self._object_cache.get(bucket_id)
+            if cached:
+                objects, timestamp = cached
+                if meta is None:
+                    objects.pop(key, None)
+                else:
+                    objects[key] = meta
+
     def _ensure_system_roots(self) -> None:
         for path in (
             self._system_root_path(),
@@ -1373,19 +1474,31 @@ class ObjectStorage:
         return self._system_bucket_root(bucket_name) / self.BUCKET_CONFIG_FILE

     def _read_bucket_config(self, bucket_name: str) -> dict[str, Any]:
+        now = time.time()
+        cached = self._bucket_config_cache.get(bucket_name)
+        if cached:
+            config, cached_time = cached
+            if now - cached_time < self._bucket_config_cache_ttl:
+                return config.copy()
+
         config_path = self._bucket_config_path(bucket_name)
         if not config_path.exists():
+            self._bucket_config_cache[bucket_name] = ({}, now)
             return {}
         try:
             data = json.loads(config_path.read_text(encoding="utf-8"))
-            return data if isinstance(data, dict) else {}
+            config = data if isinstance(data, dict) else {}
+            self._bucket_config_cache[bucket_name] = (config, now)
+            return config.copy()
         except (OSError, json.JSONDecodeError):
+            self._bucket_config_cache[bucket_name] = ({}, now)
             return {}

     def _write_bucket_config(self, bucket_name: str, payload: dict[str, Any]) -> None:
         config_path = self._bucket_config_path(bucket_name)
         config_path.parent.mkdir(parents=True, exist_ok=True)
         config_path.write_text(json.dumps(payload), encoding="utf-8")
+        self._bucket_config_cache[bucket_name] = (payload.copy(), time.time())

     def _set_bucket_config_entry(self, bucket_name: str, key: str, value: Any | None) -> None:
         config = self._read_bucket_config(bucket_name)
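Editor's note: `_read_bucket_config` becomes a read-through cache with a 30-second TTL, and it returns `config.copy()` rather than the cached dict; without the copy, a caller mutating the returned mapping would silently poison the cache. The same pattern, self-contained:

```python
import time
from typing import Callable, Dict, Tuple


class TtlConfigCache:
    """Read-through dict cache with per-entry TTL and defensive copies."""

    def __init__(self, ttl: float = 30.0) -> None:
        self._ttl = ttl
        self._entries: Dict[str, Tuple[dict, float]] = {}

    def get(self, key: str, loader: Callable[[str], dict]) -> dict:
        now = time.time()
        hit = self._entries.get(key)
        if hit is not None and now - hit[1] < self._ttl:
            return hit[0].copy()  # copy so callers cannot mutate the cached dict
        value = loader(key)
        self._entries[key] = (value, now)
        return value.copy()


cache = TtlConfigCache()
config = cache.get("my-bucket", lambda k: {"versioning": True})
config["versioning"] = False  # safe: only the returned copy changes
```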
@@ -1507,33 +1620,64 @@ class ObjectStorage:
             except OSError:
                 continue

-    def _has_visible_objects(self, bucket_path: Path) -> bool:
+    def _check_bucket_contents(self, bucket_path: Path) -> tuple[bool, bool, bool]:
+        """Check bucket for objects, versions, and multipart uploads in a single pass.
+
+        Returns (has_visible_objects, has_archived_versions, has_active_multipart_uploads).
+        Uses early exit when all three are found.
+        """
+        has_objects = False
+        has_versions = False
+        has_multipart = False
+        bucket_name = bucket_path.name
+
         for path in bucket_path.rglob("*"):
+            if has_objects:
+                break
             if not path.is_file():
                 continue
             rel = path.relative_to(bucket_path)
             if rel.parts and rel.parts[0] in self.INTERNAL_FOLDERS:
                 continue
-            return True
-        return False
+            has_objects = True
+
+        for version_root in (
+            self._bucket_versions_root(bucket_name),
+            self._legacy_versions_root(bucket_name),
+        ):
+            if has_versions:
+                break
+            if version_root.exists():
+                for path in version_root.rglob("*"):
+                    if path.is_file():
+                        has_versions = True
+                        break
+
+        for uploads_root in (
+            self._multipart_bucket_root(bucket_name),
+            self._legacy_multipart_bucket_root(bucket_name),
+        ):
+            if has_multipart:
+                break
+            if uploads_root.exists():
+                for path in uploads_root.rglob("*"):
+                    if path.is_file():
+                        has_multipart = True
+                        break
+
+        return has_objects, has_versions, has_multipart
+
+    def _has_visible_objects(self, bucket_path: Path) -> bool:
+        has_objects, _, _ = self._check_bucket_contents(bucket_path)
+        return has_objects

     def _has_archived_versions(self, bucket_path: Path) -> bool:
-        for version_root in (
-            self._bucket_versions_root(bucket_path.name),
-            self._legacy_versions_root(bucket_path.name),
-        ):
-            if version_root.exists() and any(path.is_file() for path in version_root.rglob("*")):
-                return True
-        return False
+        _, has_versions, _ = self._check_bucket_contents(bucket_path)
+        return has_versions

     def _has_active_multipart_uploads(self, bucket_path: Path) -> bool:
-        for uploads_root in (
-            self._multipart_bucket_root(bucket_path.name),
-            self._legacy_multipart_bucket_root(bucket_path.name),
-        ):
-            if uploads_root.exists() and any(path.is_file() for path in uploads_root.rglob("*")):
-                return True
-        return False
+        _, _, has_multipart = self._check_bucket_contents(bucket_path)
+        return has_multipart

     def _remove_tree(self, path: Path) -> None:
         if not path.exists():
@@ -1542,7 +1686,7 @@ class ObjectStorage:
         try:
             os.chmod(target_path, stat.S_IRWXU)
             func(target_path)
-        except Exception as exc:  # pragma: no cover - fallback failure
+        except Exception as exc:
             raise StorageError(f"Unable to delete bucket contents: {exc}") from exc

         try:
app/ui.py: 495 changes
@@ -1,6 +1,6 @@
-"""Authenticated HTML UI for browsing buckets and objects."""
 from __future__ import annotations

+import io
 import json
 import uuid
 import psutil
@@ -26,9 +26,10 @@ from flask import (
 )
 from flask_wtf.csrf import generate_csrf

+from .acl import AclService, create_canned_acl, CANNED_ACLS
 from .bucket_policies import BucketPolicyStore
 from .connections import ConnectionStore, RemoteConnection
-from .extensions import limiter
+from .extensions import limiter, csrf
 from .iam import IamError
 from .kms import KMSManager
 from .replication import ReplicationManager, ReplicationRule
@@ -75,6 +76,10 @@ def _secret_store() -> EphemeralSecretStore:
     return store


+def _acl() -> AclService:
+    return current_app.extensions["acl"]
+
+
 def _format_bytes(num: int) -> str:
     step = 1024
     units = ["B", "KB", "MB", "GB", "TB", "PB"]
@@ -271,6 +276,9 @@ def buckets_overview():
         })
     return render_template("buckets.html", buckets=visible_buckets, principal=principal)

+@ui_bp.get("/buckets")
+def buckets_redirect():
+    return redirect(url_for("ui.buckets_overview"))

 @ui_bp.post("/buckets")
 def create_bucket():
@@ -366,7 +374,7 @@ def bucket_detail(bucket_name: str):
     kms_keys = kms_manager.list_keys() if kms_manager else []
     kms_enabled = current_app.config.get("KMS_ENABLED", False)
     encryption_enabled = current_app.config.get("ENCRYPTION_ENABLED", False)
-    can_manage_encryption = can_manage_versioning  # Same as other bucket properties
+    can_manage_encryption = can_manage_versioning

     bucket_quota = storage.get_bucket_quota(bucket_name)
     bucket_stats = storage.bucket_stats(bucket_name)
@@ -379,10 +387,21 @@ def bucket_detail(bucket_name: str):

     objects_api_url = url_for("ui.list_bucket_objects", bucket_name=bucket_name)

+    lifecycle_url = url_for("ui.bucket_lifecycle", bucket_name=bucket_name)
+    cors_url = url_for("ui.bucket_cors", bucket_name=bucket_name)
+    acl_url = url_for("ui.bucket_acl", bucket_name=bucket_name)
+    folders_url = url_for("ui.create_folder", bucket_name=bucket_name)
+    buckets_for_copy_url = url_for("ui.list_buckets_for_copy", bucket_name=bucket_name)
+
     return render_template(
         "bucket_detail.html",
         bucket_name=bucket_name,
         objects_api_url=objects_api_url,
+        lifecycle_url=lifecycle_url,
+        cors_url=cors_url,
+        acl_url=acl_url,
+        folders_url=folders_url,
+        buckets_for_copy_url=buckets_for_copy_url,
         principal=principal,
         bucket_policy_text=policy_text,
         bucket_policy=bucket_policy,
@@ -434,13 +453,14 @@ def list_bucket_objects(bucket_name: str):
     except StorageError:
         versioning_enabled = False

-    # Pre-compute URL templates once (not per-object) for performance
-    # Frontend will construct actual URLs by replacing KEY_PLACEHOLDER
     preview_template = url_for("ui.object_preview", bucket_name=bucket_name, object_key="KEY_PLACEHOLDER")
     delete_template = url_for("ui.delete_object", bucket_name=bucket_name, object_key="KEY_PLACEHOLDER")
     presign_template = url_for("ui.object_presign", bucket_name=bucket_name, object_key="KEY_PLACEHOLDER")
     versions_template = url_for("ui.object_versions", bucket_name=bucket_name, object_key="KEY_PLACEHOLDER")
     restore_template = url_for("ui.restore_object_version", bucket_name=bucket_name, object_key="KEY_PLACEHOLDER", version_id="VERSION_ID_PLACEHOLDER")
+    tags_template = url_for("ui.object_tags", bucket_name=bucket_name, object_key="KEY_PLACEHOLDER")
+    copy_template = url_for("ui.copy_object", bucket_name=bucket_name, object_key="KEY_PLACEHOLDER")
+    move_template = url_for("ui.move_object", bucket_name=bucket_name, object_key="KEY_PLACEHOLDER")

     objects_data = []
     for obj in result.objects:
@@ -465,6 +485,9 @@ def list_bucket_objects(bucket_name: str):
                 "delete": delete_template,
                 "versions": versions_template,
                 "restore": restore_template,
+                "tags": tags_template,
+                "copy": copy_template,
+                "move": move_template,
             },
         })

@@ -505,8 +528,6 @@ def upload_object(bucket_name: str):
     try:
         _authorize_ui(principal, bucket_name, "write")
         _storage().put_object(bucket_name, object_key, file.stream, metadata=metadata)
-
-        # Trigger replication
         _replication().trigger_replication(bucket_name, object_key)

         message = f"Uploaded '{object_key}'"
@@ -542,6 +563,8 @@ def initiate_multipart_upload(bucket_name: str):


 @ui_bp.put("/buckets/<bucket_name>/multipart/<upload_id>/parts")
+@limiter.exempt
+@csrf.exempt
 def upload_multipart_part(bucket_name: str, upload_id: str):
     principal = _current_principal()
     try:
@@ -555,7 +578,11 @@ def upload_multipart_part(bucket_name: str, upload_id: str):
     if part_number < 1:
         return jsonify({"error": "partNumber must be >= 1"}), 400
     try:
-        etag = _storage().upload_multipart_part(bucket_name, upload_id, part_number, request.stream)
+        data = request.get_data()
+        if not data:
+            return jsonify({"error": "Empty request body"}), 400
+        stream = io.BytesIO(data)
+        etag = _storage().upload_multipart_part(bucket_name, upload_id, part_number, stream)
     except StorageError as exc:
         return jsonify({"error": str(exc)}), 400
     return jsonify({"etag": etag, "part_number": part_number})
@@ -585,9 +612,14 @@ def complete_multipart_upload(bucket_name: str, upload_id: str):
         normalized.append({"part_number": number, "etag": etag})
     try:
         result = _storage().complete_multipart_upload(bucket_name, upload_id, normalized)
-        _replication().trigger_replication(bucket_name, result["key"])
-        return jsonify(result)
+        _replication().trigger_replication(bucket_name, result.key)
+        return jsonify({
+            "key": result.key,
+            "size": result.size,
+            "etag": result.etag,
+            "last_modified": result.last_modified.isoformat() if result.last_modified else None,
+        })
     except StorageError as exc:
         return jsonify({"error": str(exc)}), 400

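Note: taken together, the three multipart hunks above tighten the UI contract. Part bodies are now buffered with request.get_data() and rejected when empty, the parts route is exempt from rate limiting and CSRF, and completion returns a flat JSON document built from the storage result instead of the raw result object. A minimal client sketch of that flow, under stated assumptions: the UI listens on 127.0.0.1:5100, the session is already authenticated, and the initiate/complete paths are guesses, since only the parts route appears verbatim in this diff.

# Hypothetical client for the UI multipart endpoints above.
# Assumed: base URL, bucket name "demo", and the initiate/complete paths.
import requests

BASE = "http://127.0.0.1:5100"
PART_SIZE = 8 * 1024 * 1024  # matches CHUNK_SIZE in bucket-detail-upload.js

session = requests.Session()  # assumed to already carry a logged-in session

# 1. Initiate (assumed path): the JS sends object_key + metadata and reads upload_id.
init = session.post(f"{BASE}/buckets/demo/multipart",
                    json={"object_key": "big.bin", "metadata": None}).json()
upload_id = init["upload_id"]

# 2. Upload parts: this route is shown verbatim in the diff. The body is
#    buffered server-side, so an empty chunk fails with 400 before storage.
parts = []
with open("big.bin", "rb") as fh:
    part_number = 1
    while chunk := fh.read(PART_SIZE):
        resp = session.put(
            f"{BASE}/buckets/demo/multipart/{upload_id}/parts",
            params={"partNumber": part_number}, data=chunk).json()
        parts.append({"part_number": resp["part_number"], "etag": resp["etag"]})
        part_number += 1

# 3. Complete (assumed path): response now carries key, size, etag, and an
#    ISO-8601 last_modified string.
done = session.post(f"{BASE}/buckets/demo/multipart/{upload_id}/complete",
                    json={"parts": parts}).json()
print(done["key"], done["size"], done["etag"])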
@@ -732,20 +764,18 @@ def bulk_download_objects(bucket_name: str):
     if not cleaned:
         return jsonify({"error": "Select at least one object to download"}), 400

-    MAX_KEYS = current_app.config.get("BULK_DELETE_MAX_KEYS", 500)  # Reuse same limit for now
+    MAX_KEYS = current_app.config.get("BULK_DELETE_MAX_KEYS", 500)
     if len(cleaned) > MAX_KEYS:
         return jsonify({"error": f"A maximum of {MAX_KEYS} objects can be downloaded per request"}), 400

     unique_keys = list(dict.fromkeys(cleaned))
     storage = _storage()

-    # Verify permission to read bucket contents
     try:
         _authorize_ui(principal, bucket_name, "read")
     except IamError as exc:
         return jsonify({"error": str(exc)}), 403

-    # Create ZIP archive of selected objects
     buffer = io.BytesIO()
     with zipfile.ZipFile(buffer, "w", zipfile.ZIP_DEFLATED) as zf:
         for key in unique_keys:
@@ -762,7 +792,6 @@ def bulk_download_objects(bucket_name: str):
                 path = storage.get_object_path(bucket_name, key)
                 zf.write(path, arcname=key)
             except (StorageError, IamError):
-                # Skip objects that can't be accessed
                 continue

     buffer.seek(0)
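Note: one idiom in the bulk-download hunks worth calling out is list(dict.fromkeys(...)), which deduplicates the selected keys while preserving their first-seen order, unlike a plain set. A quick illustration:

# dict.fromkeys keeps insertion order (guaranteed since Python 3.7).
keys = ["a.txt", "b.txt", "a.txt", "c.txt"]
print(list(dict.fromkeys(keys)))  # ['a.txt', 'b.txt', 'c.txt']
print(set(keys))                  # unordered: iteration order is not guaranteed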
@@ -813,7 +842,6 @@ def object_preview(bucket_name: str, object_key: str) -> Response:

     download = request.args.get("download") == "1"

-    # Check if object is encrypted and needs decryption
     is_encrypted = "x-amz-server-side-encryption" in metadata
     if is_encrypted and hasattr(storage, 'get_object_data'):
         try:
@@ -849,7 +877,6 @@ def object_presign(bucket_name: str, object_key: str):
     encoded_key = quote(object_key, safe="/")
     url = f"{api_base}/presign/{bucket_name}/{encoded_key}"

-    # Use API base URL for forwarded headers so presigned URLs point to API, not UI
     parsed_api = urlparse(api_base)
     headers = _api_headers()
     headers["X-Forwarded-Host"] = parsed_api.netloc or "127.0.0.1:5000"
@@ -994,7 +1021,6 @@ def update_bucket_quota(bucket_name: str):
     """Update bucket quota configuration (admin only)."""
     principal = _current_principal()

-    # Quota management is admin-only
     is_admin = False
     try:
         _iam().authorize(principal, None, "iam:list_users")
@@ -1016,7 +1042,6 @@ def update_bucket_quota(bucket_name: str):
         flash(_friendly_error_message(exc), "danger")
         return redirect(url_for("ui.bucket_detail", bucket_name=bucket_name, tab="properties"))

-    # Parse quota values
     max_mb_str = request.form.get("max_mb", "").strip()
     max_objects_str = request.form.get("max_objects", "").strip()

@@ -1028,7 +1053,7 @@ def update_bucket_quota(bucket_name: str):
         max_mb = int(max_mb_str)
         if max_mb < 1:
             raise ValueError("Size must be at least 1 MB")
-        max_bytes = max_mb * 1024 * 1024  # Convert MB to bytes
+        max_bytes = max_mb * 1024 * 1024
     except ValueError as exc:
         flash(f"Invalid size value: {exc}", "danger")
         return redirect(url_for("ui.bucket_detail", bucket_name=bucket_name, tab="properties"))
@@ -1081,7 +1106,6 @@ def update_bucket_encryption(bucket_name: str):
         flash("Invalid encryption algorithm", "danger")
         return redirect(url_for("ui.bucket_detail", bucket_name=bucket_name, tab="properties"))

-    # Build encryption configuration in AWS S3 format
     encryption_config: dict[str, Any] = {
         "Rules": [
             {
@@ -1472,7 +1496,6 @@ def update_bucket_replication(bucket_name: str):
         if rule:
             rule.enabled = True
             _replication().set_rule(rule)
-            # When resuming, sync any pending objects that accumulated while paused
             if rule.mode == REPLICATION_MODE_ALL:
                 _replication().replicate_existing_objects(bucket_name)
             flash("Replication resumed. Syncing pending objects in background.", "success")
@@ -1567,6 +1590,84 @@ def get_replication_status(bucket_name: str):
     })


+@ui_bp.get("/buckets/<bucket_name>/replication/failures")
+def get_replication_failures(bucket_name: str):
+    principal = _current_principal()
+    try:
+        _authorize_ui(principal, bucket_name, "replication")
+    except IamError:
+        return jsonify({"error": "Access denied"}), 403
+
+    limit = request.args.get("limit", 50, type=int)
+    offset = request.args.get("offset", 0, type=int)
+
+    failures = _replication().get_failed_items(bucket_name, limit, offset)
+    total = _replication().get_failure_count(bucket_name)
+
+    return jsonify({
+        "failures": [f.to_dict() for f in failures],
+        "total": total,
+        "limit": limit,
+        "offset": offset,
+    })
+
+
+@ui_bp.post("/buckets/<bucket_name>/replication/failures/<path:object_key>/retry")
+def retry_replication_failure(bucket_name: str, object_key: str):
+    principal = _current_principal()
+    try:
+        _authorize_ui(principal, bucket_name, "replication")
+    except IamError:
+        return jsonify({"error": "Access denied"}), 403
+
+    success = _replication().retry_failed_item(bucket_name, object_key)
+    if success:
+        return jsonify({"status": "submitted", "object_key": object_key})
+    return jsonify({"error": "Failed to submit retry"}), 400
+
+
+@ui_bp.post("/buckets/<bucket_name>/replication/failures/retry-all")
+def retry_all_replication_failures(bucket_name: str):
+    principal = _current_principal()
+    try:
+        _authorize_ui(principal, bucket_name, "replication")
+    except IamError:
+        return jsonify({"error": "Access denied"}), 403
+
+    result = _replication().retry_all_failed(bucket_name)
+    return jsonify({
+        "status": "submitted",
+        "submitted": result["submitted"],
+        "skipped": result["skipped"],
+    })
+
+
+@ui_bp.delete("/buckets/<bucket_name>/replication/failures/<path:object_key>")
+def dismiss_replication_failure(bucket_name: str, object_key: str):
+    principal = _current_principal()
+    try:
+        _authorize_ui(principal, bucket_name, "replication")
+    except IamError:
+        return jsonify({"error": "Access denied"}), 403
+
+    success = _replication().dismiss_failure(bucket_name, object_key)
+    if success:
+        return jsonify({"status": "dismissed", "object_key": object_key})
+    return jsonify({"error": "Failure not found"}), 404
+
+
+@ui_bp.delete("/buckets/<bucket_name>/replication/failures")
+def clear_replication_failures(bucket_name: str):
+    principal = _current_principal()
+    try:
+        _authorize_ui(principal, bucket_name, "replication")
+    except IamError:
+        return jsonify({"error": "Access denied"}), 403
+
+    _replication().clear_failures(bucket_name)
+    return jsonify({"status": "cleared"})
+
+
 @ui_bp.get("/connections/<connection_id>/health")
 def check_connection_health(connection_id: str):
     """Check if a connection endpoint is reachable."""
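Note: the new failure-management endpoints form a small REST surface over the replication queue. A sketch of how a script with a valid session might drive them; the host, bucket, and keys are illustrative, CSRF handling is omitted, and the item field names are assumed from FailedItem.to_dict(), which is not shown in this diff.

# Illustrative client for the replication-failure endpoints added above.
import requests

BASE = "http://127.0.0.1:5100/buckets/demo/replication/failures"  # assumed host/bucket
s = requests.Session()  # assumed to already carry a logged-in session

page = s.get(BASE, params={"limit": 50, "offset": 0}).json()
print(page["total"], "recorded failures")

for item in page["failures"]:
    key = item["object_key"]          # assumed field name
    s.post(f"{BASE}/{key}/retry")     # re-queue a single object

s.post(f"{BASE}/retry-all")           # bulk retry: {"status": "submitted", ...}
s.delete(f"{BASE}/stale/report.csv")  # dismiss one recorded failure (key assumed)
s.delete(BASE)                        # clear the whole failure list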
@@ -1666,6 +1767,358 @@ def metrics_dashboard():
     )


+@ui_bp.route("/buckets/<bucket_name>/lifecycle", methods=["GET", "POST", "DELETE"])
+def bucket_lifecycle(bucket_name: str):
+    principal = _current_principal()
+    try:
+        _authorize_ui(principal, bucket_name, "policy")
+    except IamError as exc:
+        return jsonify({"error": str(exc)}), 403
+
+    storage = _storage()
+    if not storage.bucket_exists(bucket_name):
+        return jsonify({"error": "Bucket does not exist"}), 404
+
+    if request.method == "GET":
+        rules = storage.get_bucket_lifecycle(bucket_name) or []
+        return jsonify({"rules": rules})
+
+    if request.method == "DELETE":
+        storage.set_bucket_lifecycle(bucket_name, None)
+        return jsonify({"status": "ok", "message": "Lifecycle configuration deleted"})
+
+    payload = request.get_json(silent=True) or {}
+    rules = payload.get("rules", [])
+    if not isinstance(rules, list):
+        return jsonify({"error": "rules must be a list"}), 400
+
+    validated_rules = []
+    for i, rule in enumerate(rules):
+        if not isinstance(rule, dict):
+            return jsonify({"error": f"Rule {i} must be an object"}), 400
+        validated = {
+            "ID": str(rule.get("ID", f"rule-{i+1}")),
+            "Status": "Enabled" if rule.get("Status", "Enabled") == "Enabled" else "Disabled",
+        }
+        if rule.get("Prefix"):
+            validated["Prefix"] = str(rule["Prefix"])
+        if rule.get("Expiration"):
+            exp = rule["Expiration"]
+            if isinstance(exp, dict) and exp.get("Days"):
+                validated["Expiration"] = {"Days": int(exp["Days"])}
+        if rule.get("NoncurrentVersionExpiration"):
+            nve = rule["NoncurrentVersionExpiration"]
+            if isinstance(nve, dict) and nve.get("NoncurrentDays"):
+                validated["NoncurrentVersionExpiration"] = {"NoncurrentDays": int(nve["NoncurrentDays"])}
+        if rule.get("AbortIncompleteMultipartUpload"):
+            aimu = rule["AbortIncompleteMultipartUpload"]
+            if isinstance(aimu, dict) and aimu.get("DaysAfterInitiation"):
+                validated["AbortIncompleteMultipartUpload"] = {"DaysAfterInitiation": int(aimu["DaysAfterInitiation"])}
+        validated_rules.append(validated)
+
+    storage.set_bucket_lifecycle(bucket_name, validated_rules if validated_rules else None)
+    return jsonify({"status": "ok", "message": "Lifecycle configuration saved", "rules": validated_rules})
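Note: a sketch of a payload the new lifecycle endpoint accepts, mirroring the validation above. The host and bucket are illustrative, and session auth plus the UI's CSRF token are omitted for brevity.

# Illustrative lifecycle payload; rule fields match the validator above.
import requests

payload = {
    "rules": [
        {
            "ID": "expire-logs",
            "Status": "Enabled",
            "Prefix": "logs/",
            "Expiration": {"Days": 30},
            "NoncurrentVersionExpiration": {"NoncurrentDays": 7},
            "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 2},
        }
    ]
}
resp = requests.post("http://127.0.0.1:5100/buckets/demo/lifecycle", json=payload)
print(resp.json())  # {"status": "ok", "message": "Lifecycle configuration saved", ...}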
@ui_bp.get("/buckets/<bucket_name>/lifecycle/history")
|
||||||
|
def get_lifecycle_history(bucket_name: str):
|
||||||
|
principal = _current_principal()
|
||||||
|
try:
|
||||||
|
_authorize_ui(principal, bucket_name, "policy")
|
||||||
|
except IamError:
|
||||||
|
return jsonify({"error": "Access denied"}), 403
|
||||||
|
|
||||||
|
limit = request.args.get("limit", 50, type=int)
|
||||||
|
offset = request.args.get("offset", 0, type=int)
|
||||||
|
|
||||||
|
lifecycle_manager = current_app.extensions.get("lifecycle")
|
||||||
|
if not lifecycle_manager:
|
||||||
|
return jsonify({
|
||||||
|
"executions": [],
|
||||||
|
"total": 0,
|
||||||
|
"limit": limit,
|
||||||
|
"offset": offset,
|
||||||
|
"enabled": False,
|
||||||
|
})
|
||||||
|
|
||||||
|
records = lifecycle_manager.get_execution_history(bucket_name, limit, offset)
|
||||||
|
return jsonify({
|
||||||
|
"executions": [r.to_dict() for r in records],
|
||||||
|
"total": len(lifecycle_manager.get_execution_history(bucket_name, 1000, 0)),
|
||||||
|
"limit": limit,
|
||||||
|
"offset": offset,
|
||||||
|
"enabled": True,
|
||||||
|
})
|
||||||
|
|
||||||
|
|
||||||
|
@ui_bp.route("/buckets/<bucket_name>/cors", methods=["GET", "POST", "DELETE"])
|
||||||
|
def bucket_cors(bucket_name: str):
|
||||||
|
principal = _current_principal()
|
||||||
|
try:
|
||||||
|
_authorize_ui(principal, bucket_name, "policy")
|
||||||
|
except IamError as exc:
|
||||||
|
return jsonify({"error": str(exc)}), 403
|
||||||
|
|
||||||
|
storage = _storage()
|
||||||
|
if not storage.bucket_exists(bucket_name):
|
||||||
|
return jsonify({"error": "Bucket does not exist"}), 404
|
||||||
|
|
||||||
|
if request.method == "GET":
|
||||||
|
rules = storage.get_bucket_cors(bucket_name) or []
|
||||||
|
return jsonify({"rules": rules})
|
||||||
|
|
||||||
|
if request.method == "DELETE":
|
||||||
|
storage.set_bucket_cors(bucket_name, None)
|
||||||
|
return jsonify({"status": "ok", "message": "CORS configuration deleted"})
|
||||||
|
|
||||||
|
payload = request.get_json(silent=True) or {}
|
||||||
|
rules = payload.get("rules", [])
|
||||||
|
if not isinstance(rules, list):
|
||||||
|
return jsonify({"error": "rules must be a list"}), 400
|
||||||
|
|
||||||
|
validated_rules = []
|
||||||
|
for i, rule in enumerate(rules):
|
||||||
|
if not isinstance(rule, dict):
|
||||||
|
return jsonify({"error": f"Rule {i} must be an object"}), 400
|
||||||
|
origins = rule.get("AllowedOrigins", [])
|
||||||
|
methods = rule.get("AllowedMethods", [])
|
||||||
|
if not origins or not methods:
|
||||||
|
return jsonify({"error": f"Rule {i} must have AllowedOrigins and AllowedMethods"}), 400
|
||||||
|
validated = {
|
||||||
|
"AllowedOrigins": [str(o) for o in origins if o],
|
||||||
|
"AllowedMethods": [str(m).upper() for m in methods if m],
|
||||||
|
}
|
||||||
|
if rule.get("AllowedHeaders"):
|
||||||
|
validated["AllowedHeaders"] = [str(h) for h in rule["AllowedHeaders"] if h]
|
||||||
|
if rule.get("ExposeHeaders"):
|
||||||
|
validated["ExposeHeaders"] = [str(h) for h in rule["ExposeHeaders"] if h]
|
||||||
|
if rule.get("MaxAgeSeconds") is not None:
|
||||||
|
try:
|
||||||
|
validated["MaxAgeSeconds"] = int(rule["MaxAgeSeconds"])
|
||||||
|
except (ValueError, TypeError):
|
||||||
|
pass
|
||||||
|
validated_rules.append(validated)
|
||||||
|
|
||||||
|
storage.set_bucket_cors(bucket_name, validated_rules if validated_rules else None)
|
||||||
|
return jsonify({"status": "ok", "message": "CORS configuration saved", "rules": validated_rules})
|
||||||
|
|
||||||
|
|
||||||
|
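Note: an illustrative CORS payload for the endpoint above. Host and bucket are assumed, CSRF/session handling is omitted, and note that methods are uppercased server-side while a non-integer MaxAgeSeconds is silently dropped.

# Illustrative CORS payload; field names match the validator above.
import requests

payload = {
    "rules": [
        {
            "AllowedOrigins": ["https://app.example.com"],
            "AllowedMethods": ["get", "put"],  # normalized to GET / PUT
            "AllowedHeaders": ["*"],
            "ExposeHeaders": ["ETag"],
            "MaxAgeSeconds": 3600,
        }
    ]
}
print(requests.post("http://127.0.0.1:5100/buckets/demo/cors", json=payload).json())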
@ui_bp.route("/buckets/<bucket_name>/acl", methods=["GET", "POST"])
|
||||||
|
def bucket_acl(bucket_name: str):
|
||||||
|
principal = _current_principal()
|
||||||
|
action = "read" if request.method == "GET" else "write"
|
||||||
|
try:
|
||||||
|
_authorize_ui(principal, bucket_name, action)
|
||||||
|
except IamError as exc:
|
||||||
|
return jsonify({"error": str(exc)}), 403
|
||||||
|
|
||||||
|
storage = _storage()
|
||||||
|
if not storage.bucket_exists(bucket_name):
|
||||||
|
return jsonify({"error": "Bucket does not exist"}), 404
|
||||||
|
|
||||||
|
acl_service = _acl()
|
||||||
|
owner_id = principal.access_key if principal else "anonymous"
|
||||||
|
|
||||||
|
if request.method == "GET":
|
||||||
|
try:
|
||||||
|
acl = acl_service.get_bucket_acl(bucket_name)
|
||||||
|
if not acl:
|
||||||
|
acl = create_canned_acl("private", owner_id)
|
||||||
|
return jsonify({
|
||||||
|
"owner": acl.owner,
|
||||||
|
"grants": [g.to_dict() for g in acl.grants],
|
||||||
|
"canned_acls": list(CANNED_ACLS.keys()),
|
||||||
|
})
|
||||||
|
except Exception as exc:
|
||||||
|
return jsonify({"error": str(exc)}), 500
|
||||||
|
|
||||||
|
payload = request.get_json(silent=True) or {}
|
||||||
|
canned_acl = payload.get("canned_acl")
|
||||||
|
if canned_acl:
|
||||||
|
if canned_acl not in CANNED_ACLS:
|
||||||
|
return jsonify({"error": f"Invalid canned ACL: {canned_acl}"}), 400
|
||||||
|
acl_service.set_bucket_canned_acl(bucket_name, canned_acl, owner_id)
|
||||||
|
return jsonify({"status": "ok", "message": f"ACL set to {canned_acl}"})
|
||||||
|
|
||||||
|
return jsonify({"error": "canned_acl is required"}), 400
|
||||||
|
|
||||||
|
|
||||||
|
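Note: the ACL endpoint only accepts canned ACLs by name; the available names come from CANNED_ACLS, which is not shown in this diff, so the ones below are examples only. Host, bucket, and session/CSRF handling are likewise assumed.

# Illustrative canned-ACL calls against the endpoint above.
import requests

s = requests.Session()  # assumed authenticated
base = "http://127.0.0.1:5100/buckets/demo/acl"
print(s.get(base).json()["canned_acls"])               # e.g. ["private", "public-read", ...]
print(s.post(base, json={"canned_acl": "public-read"}).json())
print(s.post(base, json={"canned_acl": "bogus"}).status_code)  # 400: Invalid canned ACL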
@ui_bp.route("/buckets/<bucket_name>/objects/<path:object_key>/tags", methods=["GET", "POST"])
|
||||||
|
def object_tags(bucket_name: str, object_key: str):
|
||||||
|
principal = _current_principal()
|
||||||
|
try:
|
||||||
|
_authorize_ui(principal, bucket_name, "read", object_key=object_key)
|
||||||
|
except IamError as exc:
|
||||||
|
return jsonify({"error": str(exc)}), 403
|
||||||
|
|
||||||
|
storage = _storage()
|
||||||
|
|
||||||
|
if request.method == "GET":
|
||||||
|
try:
|
||||||
|
tags = storage.get_object_tags(bucket_name, object_key)
|
||||||
|
return jsonify({"tags": tags})
|
||||||
|
except StorageError as exc:
|
||||||
|
return jsonify({"error": str(exc)}), 404
|
||||||
|
|
||||||
|
try:
|
||||||
|
_authorize_ui(principal, bucket_name, "write", object_key=object_key)
|
||||||
|
except IamError as exc:
|
||||||
|
return jsonify({"error": str(exc)}), 403
|
||||||
|
|
||||||
|
payload = request.get_json(silent=True) or {}
|
||||||
|
tags = payload.get("tags", [])
|
||||||
|
if not isinstance(tags, list):
|
||||||
|
return jsonify({"error": "tags must be a list"}), 400
|
||||||
|
if len(tags) > 10:
|
||||||
|
return jsonify({"error": "Maximum 10 tags allowed"}), 400
|
||||||
|
|
||||||
|
validated_tags = []
|
||||||
|
for tag in tags:
|
||||||
|
if isinstance(tag, dict) and tag.get("Key"):
|
||||||
|
validated_tags.append({
|
||||||
|
"Key": str(tag["Key"]),
|
||||||
|
"Value": str(tag.get("Value", ""))
|
||||||
|
})
|
||||||
|
|
||||||
|
try:
|
||||||
|
storage.set_object_tags(bucket_name, object_key, validated_tags if validated_tags else None)
|
||||||
|
return jsonify({"status": "ok", "message": "Tags saved", "tags": validated_tags})
|
||||||
|
except StorageError as exc:
|
||||||
|
return jsonify({"error": str(exc)}), 400
|
||||||
|
|
||||||
|
|
||||||
|
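Note: a sketch of the tagging calls. The handler caps tags at 10 entries of {"Key": ..., "Value": ...}, and posting an empty list clears them (it stores None). Host and object key are illustrative; session/CSRF omitted.

# Illustrative tagging calls against the endpoint above.
import requests

url = "http://127.0.0.1:5100/buckets/demo/objects/reports/q3.csv/tags"  # assumed
tags = {"tags": [{"Key": "env", "Value": "dev"}, {"Key": "team", "Value": "storage"}]}
print(requests.post(url, json=tags).json())  # {"status": "ok", "message": "Tags saved", ...}
print(requests.get(url).json())              # {"tags": [...]}
print(requests.post(url, json={"tags": []}).json())  # clears the tag set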
@ui_bp.post("/buckets/<bucket_name>/folders")
|
||||||
|
def create_folder(bucket_name: str):
|
||||||
|
principal = _current_principal()
|
||||||
|
try:
|
||||||
|
_authorize_ui(principal, bucket_name, "write")
|
||||||
|
except IamError as exc:
|
||||||
|
return jsonify({"error": str(exc)}), 403
|
||||||
|
|
||||||
|
payload = request.get_json(silent=True) or {}
|
||||||
|
folder_name = str(payload.get("folder_name", "")).strip()
|
||||||
|
prefix = str(payload.get("prefix", "")).strip()
|
||||||
|
|
||||||
|
if not folder_name:
|
||||||
|
return jsonify({"error": "folder_name is required"}), 400
|
||||||
|
|
||||||
|
folder_name = folder_name.rstrip("/")
|
||||||
|
if "/" in folder_name:
|
||||||
|
return jsonify({"error": "Folder name cannot contain /"}), 400
|
||||||
|
|
||||||
|
folder_key = f"{prefix}{folder_name}/" if prefix else f"{folder_name}/"
|
||||||
|
|
||||||
|
import io
|
||||||
|
try:
|
||||||
|
_storage().put_object(bucket_name, folder_key, io.BytesIO(b""))
|
||||||
|
return jsonify({"status": "ok", "message": f"Folder '{folder_name}' created", "key": folder_key})
|
||||||
|
except StorageError as exc:
|
||||||
|
return jsonify({"error": str(exc)}), 400
|
||||||
|
|
||||||
|
|
||||||
|
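Note: the folder route follows the common S3 convention of a zero-byte object whose key ends in "/"; nothing exists beyond that marker. A quick illustration of the key it produces, mirroring the construction in create_folder:

# Mirrors the folder_key expression above; names are illustrative.
prefix, folder_name = "photos/2024/", "vacation"
folder_key = f"{prefix}{folder_name}/" if prefix else f"{folder_name}/"
print(folder_key)  # photos/2024/vacation/  (a zero-byte marker object)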
@ui_bp.post("/buckets/<bucket_name>/objects/<path:object_key>/copy")
|
||||||
|
def copy_object(bucket_name: str, object_key: str):
|
||||||
|
principal = _current_principal()
|
||||||
|
try:
|
||||||
|
_authorize_ui(principal, bucket_name, "read", object_key=object_key)
|
||||||
|
except IamError as exc:
|
||||||
|
return jsonify({"error": str(exc)}), 403
|
||||||
|
|
||||||
|
payload = request.get_json(silent=True) or {}
|
||||||
|
dest_bucket = str(payload.get("dest_bucket", bucket_name)).strip()
|
||||||
|
dest_key = str(payload.get("dest_key", "")).strip()
|
||||||
|
|
||||||
|
if not dest_key:
|
||||||
|
return jsonify({"error": "dest_key is required"}), 400
|
||||||
|
|
||||||
|
try:
|
||||||
|
_authorize_ui(principal, dest_bucket, "write", object_key=dest_key)
|
||||||
|
except IamError as exc:
|
||||||
|
return jsonify({"error": str(exc)}), 403
|
||||||
|
|
||||||
|
storage = _storage()
|
||||||
|
|
||||||
|
try:
|
||||||
|
source_path = storage.get_object_path(bucket_name, object_key)
|
||||||
|
source_metadata = storage.get_object_metadata(bucket_name, object_key)
|
||||||
|
except StorageError as exc:
|
||||||
|
return jsonify({"error": str(exc)}), 404
|
||||||
|
|
||||||
|
try:
|
||||||
|
with source_path.open("rb") as stream:
|
||||||
|
storage.put_object(dest_bucket, dest_key, stream, metadata=source_metadata or None)
|
||||||
|
return jsonify({
|
||||||
|
"status": "ok",
|
||||||
|
"message": f"Copied to {dest_bucket}/{dest_key}",
|
||||||
|
"dest_bucket": dest_bucket,
|
||||||
|
"dest_key": dest_key,
|
||||||
|
})
|
||||||
|
except StorageError as exc:
|
||||||
|
return jsonify({"error": str(exc)}), 400
|
||||||
|
|
||||||
|
|
||||||
|
@ui_bp.post("/buckets/<bucket_name>/objects/<path:object_key>/move")
|
||||||
|
def move_object(bucket_name: str, object_key: str):
|
||||||
|
principal = _current_principal()
|
||||||
|
try:
|
||||||
|
_authorize_ui(principal, bucket_name, "read", object_key=object_key)
|
||||||
|
_authorize_ui(principal, bucket_name, "delete", object_key=object_key)
|
||||||
|
except IamError as exc:
|
||||||
|
return jsonify({"error": str(exc)}), 403
|
||||||
|
|
||||||
|
payload = request.get_json(silent=True) or {}
|
||||||
|
dest_bucket = str(payload.get("dest_bucket", bucket_name)).strip()
|
||||||
|
dest_key = str(payload.get("dest_key", "")).strip()
|
||||||
|
|
||||||
|
if not dest_key:
|
||||||
|
return jsonify({"error": "dest_key is required"}), 400
|
||||||
|
|
||||||
|
if dest_bucket == bucket_name and dest_key == object_key:
|
||||||
|
return jsonify({"error": "Cannot move object to the same location"}), 400
|
||||||
|
|
||||||
|
try:
|
||||||
|
_authorize_ui(principal, dest_bucket, "write", object_key=dest_key)
|
||||||
|
except IamError as exc:
|
||||||
|
return jsonify({"error": str(exc)}), 403
|
||||||
|
|
||||||
|
storage = _storage()
|
||||||
|
|
||||||
|
try:
|
||||||
|
source_path = storage.get_object_path(bucket_name, object_key)
|
||||||
|
source_metadata = storage.get_object_metadata(bucket_name, object_key)
|
||||||
|
except StorageError as exc:
|
||||||
|
return jsonify({"error": str(exc)}), 404
|
||||||
|
|
||||||
|
try:
|
||||||
|
import io
|
||||||
|
with source_path.open("rb") as f:
|
||||||
|
data = f.read()
|
||||||
|
storage.put_object(dest_bucket, dest_key, io.BytesIO(data), metadata=source_metadata or None)
|
||||||
|
storage.delete_object(bucket_name, object_key)
|
||||||
|
return jsonify({
|
||||||
|
"status": "ok",
|
||||||
|
"message": f"Moved to {dest_bucket}/{dest_key}",
|
||||||
|
"dest_bucket": dest_bucket,
|
||||||
|
"dest_key": dest_key,
|
||||||
|
})
|
||||||
|
except StorageError as exc:
|
||||||
|
return jsonify({"error": str(exc)}), 400
|
||||||
|
|
||||||
|
|
||||||
|
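Note: copy and move share the same request shape; move additionally requires delete on the source and refuses a no-op same-location move. A sketch, with host, buckets, and keys assumed and session/CSRF omitted:

# Illustrative copy/move calls against the endpoints above.
import requests

s = requests.Session()  # assumed authenticated
body = {"dest_bucket": "archive", "dest_key": "2024/report.csv"}
print(s.post("http://127.0.0.1:5100/buckets/demo/objects/report.csv/copy",
             json=body).json())
# The same payload against .../move copies then deletes the source; moving an
# object onto itself returns 400 ("Cannot move object to the same location").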
@ui_bp.get("/buckets/<bucket_name>/list-for-copy")
|
||||||
|
def list_buckets_for_copy(bucket_name: str):
|
||||||
|
principal = _current_principal()
|
||||||
|
buckets = _storage().list_buckets()
|
||||||
|
allowed = []
|
||||||
|
for bucket in buckets:
|
||||||
|
try:
|
||||||
|
_authorize_ui(principal, bucket.name, "write")
|
||||||
|
allowed.append(bucket.name)
|
||||||
|
except IamError:
|
||||||
|
pass
|
||||||
|
return jsonify({"buckets": allowed})
|
||||||
|
|
||||||
|
|
||||||
@ui_bp.app_errorhandler(404)
|
@ui_bp.app_errorhandler(404)
|
||||||
def ui_not_found(error): # type: ignore[override]
|
def ui_not_found(error): # type: ignore[override]
|
||||||
prefix = ui_bp.url_prefix or ""
|
prefix = ui_bp.url_prefix or ""
|
||||||
|
|||||||
@@ -1,7 +1,6 @@
-"""Central location for the application version string."""
 from __future__ import annotations

-APP_VERSION = "0.1.9"
+APP_VERSION = "0.2.0"


 def get_version() -> str:
@@ -1,3 +1,5 @@
 [pytest]
 testpaths = tests
 norecursedirs = data .git __pycache__ .venv
+markers =
+    integration: marks tests as integration tests (may require external services)
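Note: with the marker registered, integration tests can be tagged and excluded from the default run. A minimal sketch (the test name is illustrative, not from this diff):

# test_replication_integration.py (illustrative)
import pytest

@pytest.mark.integration
def test_replicates_to_remote_endpoint():
    ...  # would talk to a live S3-compatible endpoint

# Skip integration tests locally:    pytest -m "not integration"
# Run only integration tests in CI:  pytest -m integration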
Other changed files:

static/css/main.css: 1162 lines changed (diff suppressed because it is too large)
Two binary images removed (not shown): 200 KiB and 628 KiB
static/images/MyFSIO.ico: new binary file, 200 KiB (not shown)
static/images/MyFSIO.png: new binary file, 872 KiB (not shown)

static/js/bucket-detail-operations.js: new file, 192 lines
@@ -0,0 +1,192 @@
window.BucketDetailOperations = (function() {
    'use strict';

    let showMessage = function() {};
    let escapeHtml = function(s) { return s; };

    function init(config) {
        showMessage = config.showMessage || showMessage;
        escapeHtml = config.escapeHtml || escapeHtml;
    }

    async function loadLifecycleRules(card, endpoint) {
        if (!card || !endpoint) return;
        const body = card.querySelector('[data-lifecycle-body]');
        if (!body) return;

        try {
            const response = await fetch(endpoint);
            const data = await response.json();

            if (!response.ok) {
                body.innerHTML = `<tr><td colspan="5" class="text-center text-danger py-3">${escapeHtml(data.error || 'Failed to load')}</td></tr>`;
                return;
            }

            const rules = data.rules || [];
            if (rules.length === 0) {
                body.innerHTML = '<tr><td colspan="5" class="text-center text-muted py-3">No lifecycle rules configured</td></tr>';
                return;
            }

            body.innerHTML = rules.map(rule => {
                const actions = [];
                if (rule.expiration_days) actions.push(`Delete after ${rule.expiration_days} days`);
                if (rule.noncurrent_days) actions.push(`Delete old versions after ${rule.noncurrent_days} days`);
                if (rule.abort_mpu_days) actions.push(`Abort incomplete MPU after ${rule.abort_mpu_days} days`);

                return `
                    <tr>
                        <td class="fw-medium">${escapeHtml(rule.id)}</td>
                        <td><code>${escapeHtml(rule.prefix || '(all)')}</code></td>
                        <td>${actions.map(a => `<div class="small">${escapeHtml(a)}</div>`).join('')}</td>
                        <td>
                            <span class="badge ${rule.status === 'Enabled' ? 'text-bg-success' : 'text-bg-secondary'}">${escapeHtml(rule.status)}</span>
                        </td>
                        <td class="text-end">
                            <button class="btn btn-sm btn-outline-danger" onclick="BucketDetailOperations.deleteLifecycleRule('${escapeHtml(rule.id)}')">
                                <svg xmlns="http://www.w3.org/2000/svg" width="12" height="12" fill="currentColor" viewBox="0 0 16 16">
                                    <path d="M5.5 5.5A.5.5 0 0 1 6 6v6a.5.5 0 0 1-1 0V6a.5.5 0 0 1 .5-.5zm2.5 0a.5.5 0 0 1 .5.5v6a.5.5 0 0 1-1 0V6a.5.5 0 0 1 .5-.5zm3 .5a.5.5 0 0 0-1 0v6a.5.5 0 0 0 1 0V6z"/>
                                    <path fill-rule="evenodd" d="M14.5 3a1 1 0 0 1-1 1H13v9a2 2 0 0 1-2 2H5a2 2 0 0 1-2-2V4h-.5a1 1 0 0 1-1-1V2a1 1 0 0 1 1-1H6a1 1 0 0 1 1-1h2a1 1 0 0 1 1 1h3.5a1 1 0 0 1 1 1v1zM4.118 4 4 4.059V13a1 1 0 0 0 1 1h6a1 1 0 0 0 1-1V4.059L11.882 4H4.118zM2.5 3V2h11v1h-11z"/>
                                </svg>
                            </button>
                        </td>
                    </tr>
                `;
            }).join('');
        } catch (err) {
            body.innerHTML = `<tr><td colspan="5" class="text-center text-danger py-3">${escapeHtml(err.message)}</td></tr>`;
        }
    }

    async function loadCorsRules(card, endpoint) {
        if (!card || !endpoint) return;
        const body = document.getElementById('cors-rules-body');
        if (!body) return;

        try {
            const response = await fetch(endpoint);
            const data = await response.json();

            if (!response.ok) {
                body.innerHTML = `<tr><td colspan="5" class="text-center text-danger py-3">${escapeHtml(data.error || 'Failed to load')}</td></tr>`;
                return;
            }

            const rules = data.rules || [];
            if (rules.length === 0) {
                body.innerHTML = '<tr><td colspan="5" class="text-center text-muted py-3">No CORS rules configured</td></tr>';
                return;
            }

            body.innerHTML = rules.map((rule, idx) => `
                <tr>
                    <td>${(rule.allowed_origins || []).map(o => `<code class="d-block">${escapeHtml(o)}</code>`).join('')}</td>
                    <td>${(rule.allowed_methods || []).map(m => `<span class="badge text-bg-secondary me-1">${escapeHtml(m)}</span>`).join('')}</td>
                    <td class="small text-muted">${(rule.allowed_headers || []).slice(0, 3).join(', ')}${(rule.allowed_headers || []).length > 3 ? '...' : ''}</td>
                    <td class="text-muted">${rule.max_age_seconds || 0}s</td>
                    <td class="text-end">
                        <button class="btn btn-sm btn-outline-danger" onclick="BucketDetailOperations.deleteCorsRule(${idx})">
                            <svg xmlns="http://www.w3.org/2000/svg" width="12" height="12" fill="currentColor" viewBox="0 0 16 16">
                                <path d="M5.5 5.5A.5.5 0 0 1 6 6v6a.5.5 0 0 1-1 0V6a.5.5 0 0 1 .5-.5zm2.5 0a.5.5 0 0 1 .5.5v6a.5.5 0 0 1-1 0V6a.5.5 0 0 1 .5-.5zm3 .5a.5.5 0 0 0-1 0v6a.5.5 0 0 0 1 0V6z"/>
                                <path fill-rule="evenodd" d="M14.5 3a1 1 0 0 1-1 1H13v9a2 2 0 0 1-2 2H5a2 2 0 0 1-2-2V4h-.5a1 1 0 0 1-1-1V2a1 1 0 0 1 1-1H6a1 1 0 0 1 1-1h2a1 1 0 0 1 1 1h3.5a1 1 0 0 1 1 1v1zM4.118 4 4 4.059V13a1 1 0 0 0 1 1h6a1 1 0 0 0 1-1V4.059L11.882 4H4.118zM2.5 3V2h11v1h-11z"/>
                            </svg>
                        </button>
                    </td>
                </tr>
            `).join('');
        } catch (err) {
            body.innerHTML = `<tr><td colspan="5" class="text-center text-danger py-3">${escapeHtml(err.message)}</td></tr>`;
        }
    }

    async function loadAcl(card, endpoint) {
        if (!card || !endpoint) return;
        const body = card.querySelector('[data-acl-body]');
        if (!body) return;

        try {
            const response = await fetch(endpoint);
            const data = await response.json();

            if (!response.ok) {
                body.innerHTML = `<tr><td colspan="3" class="text-center text-danger py-3">${escapeHtml(data.error || 'Failed to load')}</td></tr>`;
                return;
            }

            const grants = data.grants || [];
            if (grants.length === 0) {
                body.innerHTML = '<tr><td colspan="3" class="text-center text-muted py-3">No ACL grants configured</td></tr>';
                return;
            }

            body.innerHTML = grants.map(grant => {
                const grantee = grant.grantee_type === 'CanonicalUser'
                    ? grant.display_name || grant.grantee_id
                    : grant.grantee_uri || grant.grantee_type;
                return `
                    <tr>
                        <td class="fw-medium">${escapeHtml(grantee)}</td>
                        <td><span class="badge text-bg-info">${escapeHtml(grant.permission)}</span></td>
                        <td class="text-muted small">${escapeHtml(grant.grantee_type)}</td>
                    </tr>
                `;
            }).join('');
        } catch (err) {
            body.innerHTML = `<tr><td colspan="3" class="text-center text-danger py-3">${escapeHtml(err.message)}</td></tr>`;
        }
    }

    async function deleteLifecycleRule(ruleId) {
        if (!confirm(`Delete lifecycle rule "${ruleId}"?`)) return;
        const card = document.getElementById('lifecycle-rules-card');
        if (!card) return;
        const endpoint = card.dataset.lifecycleUrl;
        const csrfToken = window.getCsrfToken ? window.getCsrfToken() : '';

        try {
            const resp = await fetch(endpoint, {
                method: 'DELETE',
                headers: { 'Content-Type': 'application/json', 'X-CSRFToken': csrfToken },
                body: JSON.stringify({ rule_id: ruleId })
            });
            const data = await resp.json();
            if (!resp.ok) throw new Error(data.error || 'Failed to delete');
            showMessage({ title: 'Rule deleted', body: `Lifecycle rule "${ruleId}" has been deleted.`, variant: 'success' });
            loadLifecycleRules(card, endpoint);
        } catch (err) {
            showMessage({ title: 'Delete failed', body: err.message, variant: 'danger' });
        }
    }

    async function deleteCorsRule(index) {
        if (!confirm('Delete this CORS rule?')) return;
        const card = document.getElementById('cors-rules-card');
        if (!card) return;
        const endpoint = card.dataset.corsUrl;
        const csrfToken = window.getCsrfToken ? window.getCsrfToken() : '';

        try {
            const resp = await fetch(endpoint, {
                method: 'DELETE',
                headers: { 'Content-Type': 'application/json', 'X-CSRFToken': csrfToken },
                body: JSON.stringify({ rule_index: index })
            });
            const data = await resp.json();
            if (!resp.ok) throw new Error(data.error || 'Failed to delete');
            showMessage({ title: 'Rule deleted', body: 'CORS rule has been deleted.', variant: 'success' });
            loadCorsRules(card, endpoint);
        } catch (err) {
            showMessage({ title: 'Delete failed', body: err.message, variant: 'danger' });
        }
    }

    return {
        init: init,
        loadLifecycleRules: loadLifecycleRules,
        loadCorsRules: loadCorsRules,
        loadAcl: loadAcl,
        deleteLifecycleRule: deleteLifecycleRule,
        deleteCorsRule: deleteCorsRule
    };
})();
static/js/bucket-detail-upload.js: new file, 548 lines

@@ -0,0 +1,548 @@
window.BucketDetailUpload = (function() {
    'use strict';

    const MULTIPART_THRESHOLD = 8 * 1024 * 1024;
    const CHUNK_SIZE = 8 * 1024 * 1024;

    let state = {
        isUploading: false,
        uploadProgress: { current: 0, total: 0, currentFile: '' }
    };

    let elements = {};
    let callbacks = {};

    function init(config) {
        elements = {
            uploadForm: config.uploadForm,
            uploadFileInput: config.uploadFileInput,
            uploadModal: config.uploadModal,
            uploadModalEl: config.uploadModalEl,
            uploadSubmitBtn: config.uploadSubmitBtn,
            uploadCancelBtn: config.uploadCancelBtn,
            uploadBtnText: config.uploadBtnText,
            uploadDropZone: config.uploadDropZone,
            uploadDropZoneLabel: config.uploadDropZoneLabel,
            uploadProgressStack: config.uploadProgressStack,
            uploadKeyPrefix: config.uploadKeyPrefix,
            singleFileOptions: config.singleFileOptions,
            bulkUploadProgress: config.bulkUploadProgress,
            bulkUploadStatus: config.bulkUploadStatus,
            bulkUploadCounter: config.bulkUploadCounter,
            bulkUploadProgressBar: config.bulkUploadProgressBar,
            bulkUploadCurrentFile: config.bulkUploadCurrentFile,
            bulkUploadResults: config.bulkUploadResults,
            bulkUploadSuccessAlert: config.bulkUploadSuccessAlert,
            bulkUploadErrorAlert: config.bulkUploadErrorAlert,
            bulkUploadSuccessCount: config.bulkUploadSuccessCount,
            bulkUploadErrorCount: config.bulkUploadErrorCount,
            bulkUploadErrorList: config.bulkUploadErrorList,
            floatingProgress: config.floatingProgress,
            floatingProgressBar: config.floatingProgressBar,
            floatingProgressStatus: config.floatingProgressStatus,
            floatingProgressTitle: config.floatingProgressTitle,
            floatingProgressExpand: config.floatingProgressExpand
        };

        callbacks = {
            showMessage: config.showMessage || function() {},
            formatBytes: config.formatBytes || function(b) { return b + ' bytes'; },
            escapeHtml: config.escapeHtml || function(s) { return s; },
            onUploadComplete: config.onUploadComplete || function() {},
            hasFolders: config.hasFolders || function() { return false; },
            getCurrentPrefix: config.getCurrentPrefix || function() { return ''; }
        };

        setupEventListeners();
        setupBeforeUnload();
    }

    function isUploading() {
        return state.isUploading;
    }

    function setupBeforeUnload() {
        window.addEventListener('beforeunload', (e) => {
            if (state.isUploading) {
                e.preventDefault();
                e.returnValue = 'Upload in progress. Are you sure you want to leave?';
                return e.returnValue;
            }
        });
    }

    function showFloatingProgress() {
        if (elements.floatingProgress) {
            elements.floatingProgress.classList.remove('d-none');
        }
    }

    function hideFloatingProgress() {
        if (elements.floatingProgress) {
            elements.floatingProgress.classList.add('d-none');
        }
    }

    function updateFloatingProgress(current, total, currentFile) {
        state.uploadProgress = { current, total, currentFile: currentFile || '' };
        if (elements.floatingProgressBar && total > 0) {
            const percent = Math.round((current / total) * 100);
            elements.floatingProgressBar.style.width = `${percent}%`;
        }
        if (elements.floatingProgressStatus) {
            if (currentFile) {
                elements.floatingProgressStatus.textContent = `${current}/${total} files - ${currentFile}`;
            } else {
                elements.floatingProgressStatus.textContent = `${current}/${total} files completed`;
            }
        }
        if (elements.floatingProgressTitle) {
            elements.floatingProgressTitle.textContent = `Uploading ${total} file${total !== 1 ? 's' : ''}...`;
        }
    }

    function refreshUploadDropLabel() {
        if (!elements.uploadDropZoneLabel || !elements.uploadFileInput) return;
        const files = elements.uploadFileInput.files;
        if (!files || files.length === 0) {
            elements.uploadDropZoneLabel.textContent = 'No file selected';
            if (elements.singleFileOptions) elements.singleFileOptions.classList.remove('d-none');
            return;
        }
        elements.uploadDropZoneLabel.textContent = files.length === 1 ? files[0].name : `${files.length} files selected`;
        if (elements.singleFileOptions) {
            elements.singleFileOptions.classList.toggle('d-none', files.length > 1);
        }
    }

    function updateUploadBtnText() {
        if (!elements.uploadBtnText || !elements.uploadFileInput) return;
        const files = elements.uploadFileInput.files;
        if (!files || files.length <= 1) {
            elements.uploadBtnText.textContent = 'Upload';
        } else {
            elements.uploadBtnText.textContent = `Upload ${files.length} files`;
        }
    }

    function resetUploadUI() {
        if (elements.bulkUploadProgress) elements.bulkUploadProgress.classList.add('d-none');
        if (elements.bulkUploadResults) elements.bulkUploadResults.classList.add('d-none');
        if (elements.bulkUploadSuccessAlert) elements.bulkUploadSuccessAlert.classList.remove('d-none');
        if (elements.bulkUploadErrorAlert) elements.bulkUploadErrorAlert.classList.add('d-none');
        if (elements.bulkUploadErrorList) elements.bulkUploadErrorList.innerHTML = '';
        if (elements.uploadSubmitBtn) elements.uploadSubmitBtn.disabled = false;
        if (elements.uploadFileInput) elements.uploadFileInput.disabled = false;
        if (elements.uploadProgressStack) elements.uploadProgressStack.innerHTML = '';
        if (elements.uploadDropZone) {
            elements.uploadDropZone.classList.remove('upload-locked');
            elements.uploadDropZone.style.pointerEvents = '';
        }
        state.isUploading = false;
        hideFloatingProgress();
    }

    function setUploadLockState(locked) {
        if (elements.uploadDropZone) {
            elements.uploadDropZone.classList.toggle('upload-locked', locked);
            elements.uploadDropZone.style.pointerEvents = locked ? 'none' : '';
        }
        if (elements.uploadFileInput) {
            elements.uploadFileInput.disabled = locked;
        }
    }

    function createProgressItem(file) {
        const item = document.createElement('div');
        item.className = 'upload-progress-item';
        item.dataset.state = 'uploading';
        item.innerHTML = `
            <div class="d-flex justify-content-between align-items-start">
                <div class="min-width-0 flex-grow-1">
                    <div class="file-name">${callbacks.escapeHtml(file.name)}</div>
                    <div class="file-size">${callbacks.formatBytes(file.size)}</div>
                </div>
                <div class="upload-status text-end ms-2">Preparing...</div>
            </div>
            <div class="progress-container">
                <div class="progress">
                    <div class="progress-bar bg-primary" role="progressbar" style="width: 0%"></div>
                </div>
                <div class="progress-text">
                    <span class="progress-loaded">0 B</span>
                    <span class="progress-percent">0%</span>
                </div>
            </div>
        `;
        return item;
    }

    function updateProgressItem(item, { loaded, total, status, progressState, error }) {
        if (progressState) item.dataset.state = progressState;
        const statusEl = item.querySelector('.upload-status');
        const progressBar = item.querySelector('.progress-bar');
        const progressLoaded = item.querySelector('.progress-loaded');
        const progressPercent = item.querySelector('.progress-percent');

        if (status) {
            statusEl.textContent = status;
            statusEl.className = 'upload-status text-end ms-2';
            if (progressState === 'success') statusEl.classList.add('success');
            if (progressState === 'error') statusEl.classList.add('error');
        }
        if (typeof loaded === 'number' && typeof total === 'number' && total > 0) {
            const percent = Math.round((loaded / total) * 100);
            progressBar.style.width = `${percent}%`;
            progressLoaded.textContent = `${callbacks.formatBytes(loaded)} / ${callbacks.formatBytes(total)}`;
            progressPercent.textContent = `${percent}%`;
        }
        if (error) {
            const progressContainer = item.querySelector('.progress-container');
            if (progressContainer) {
                progressContainer.innerHTML = `<div class="text-danger small mt-1">${callbacks.escapeHtml(error)}</div>`;
            }
        }
    }

    async function uploadMultipart(file, objectKey, metadata, progressItem, urls) {
        const csrfToken = document.querySelector('input[name="csrf_token"]')?.value;

        updateProgressItem(progressItem, { status: 'Initiating...', loaded: 0, total: file.size });
        const initResp = await fetch(urls.initUrl, {
            method: 'POST',
            headers: { 'Content-Type': 'application/json', 'X-CSRFToken': csrfToken || '' },
            body: JSON.stringify({ object_key: objectKey, metadata })
        });
        if (!initResp.ok) {
            const err = await initResp.json().catch(() => ({}));
            throw new Error(err.error || 'Failed to initiate upload');
        }
        const { upload_id } = await initResp.json();

        const partUrl = urls.partTemplate.replace('UPLOAD_ID_PLACEHOLDER', upload_id);
        const completeUrl = urls.completeTemplate.replace('UPLOAD_ID_PLACEHOLDER', upload_id);
        const abortUrl = urls.abortTemplate.replace('UPLOAD_ID_PLACEHOLDER', upload_id);

        const parts = [];
        const totalParts = Math.ceil(file.size / CHUNK_SIZE);
        let uploadedBytes = 0;

        try {
            for (let partNumber = 1; partNumber <= totalParts; partNumber++) {
                const start = (partNumber - 1) * CHUNK_SIZE;
                const end = Math.min(start + CHUNK_SIZE, file.size);
                const chunk = file.slice(start, end);

                updateProgressItem(progressItem, {
                    status: `Part ${partNumber}/${totalParts}`,
                    loaded: uploadedBytes,
                    total: file.size
                });

                const partResp = await fetch(`${partUrl}?partNumber=${partNumber}`, {
                    method: 'PUT',
                    headers: { 'X-CSRFToken': csrfToken || '' },
                    body: chunk
                });

                if (!partResp.ok) {
                    const err = await partResp.json().catch(() => ({}));
                    throw new Error(err.error || `Part ${partNumber} failed`);
                }

                const partData = await partResp.json();
                parts.push({ part_number: partNumber, etag: partData.etag });
                uploadedBytes += chunk.size;

                updateProgressItem(progressItem, {
                    loaded: uploadedBytes,
                    total: file.size
                });
            }

            updateProgressItem(progressItem, { status: 'Completing...', loaded: file.size, total: file.size });
            const completeResp = await fetch(completeUrl, {
                method: 'POST',
                headers: { 'Content-Type': 'application/json', 'X-CSRFToken': csrfToken || '' },
                body: JSON.stringify({ parts })
            });

            if (!completeResp.ok) {
                const err = await completeResp.json().catch(() => ({}));
                throw new Error(err.error || 'Failed to complete upload');
            }

            return await completeResp.json();
        } catch (err) {
            try {
                await fetch(abortUrl, { method: 'DELETE', headers: { 'X-CSRFToken': csrfToken || '' } });
            } catch {}
            throw err;
        }
    }

    async function uploadRegular(file, objectKey, metadata, progressItem, formAction) {
        return new Promise((resolve, reject) => {
            const formData = new FormData();
            formData.append('object', file);
            formData.append('object_key', objectKey);
            if (metadata) formData.append('metadata', JSON.stringify(metadata));
            const csrfToken = document.querySelector('input[name="csrf_token"]')?.value;
            if (csrfToken) formData.append('csrf_token', csrfToken);

            const xhr = new XMLHttpRequest();
            xhr.open('POST', formAction, true);
            xhr.setRequestHeader('X-Requested-With', 'XMLHttpRequest');

            xhr.upload.addEventListener('progress', (e) => {
                if (e.lengthComputable) {
                    updateProgressItem(progressItem, {
                        status: 'Uploading...',
                        loaded: e.loaded,
                        total: e.total
                    });
                }
            });

            xhr.addEventListener('load', () => {
                if (xhr.status >= 200 && xhr.status < 300) {
                    try {
                        const data = JSON.parse(xhr.responseText);
                        if (data.status === 'error') {
                            reject(new Error(data.message || 'Upload failed'));
                        } else {
                            resolve(data);
                        }
                    } catch {
                        resolve({});
                    }
                } else {
                    try {
                        const data = JSON.parse(xhr.responseText);
                        reject(new Error(data.message || `Upload failed (${xhr.status})`));
                    } catch {
                        reject(new Error(`Upload failed (${xhr.status})`));
                    }
                }
            });

            xhr.addEventListener('error', () => reject(new Error('Network error')));
            xhr.addEventListener('abort', () => reject(new Error('Upload aborted')));

            xhr.send(formData);
        });
    }

    async function uploadSingleFile(file, keyPrefix, metadata, progressItem, urls) {
        const objectKey = keyPrefix ? `${keyPrefix}${file.name}` : file.name;
        const shouldUseMultipart = file.size >= MULTIPART_THRESHOLD && urls.initUrl;

        if (!progressItem && elements.uploadProgressStack) {
            progressItem = createProgressItem(file);
            elements.uploadProgressStack.appendChild(progressItem);
        }

        try {
            let result;
            if (shouldUseMultipart) {
                updateProgressItem(progressItem, { status: 'Multipart upload...', loaded: 0, total: file.size });
                result = await uploadMultipart(file, objectKey, metadata, progressItem, urls);
            } else {
                updateProgressItem(progressItem, { status: 'Uploading...', loaded: 0, total: file.size });
                result = await uploadRegular(file, objectKey, metadata, progressItem, urls.formAction);
            }
            updateProgressItem(progressItem, { progressState: 'success', status: 'Complete', loaded: file.size, total: file.size });
            return result;
        } catch (err) {
            updateProgressItem(progressItem, { progressState: 'error', status: 'Failed', error: err.message });
            throw err;
        }
    }

    async function performBulkUpload(files, urls) {
        if (state.isUploading || !files || files.length === 0) return;

        state.isUploading = true;
        setUploadLockState(true);
        const keyPrefix = (elements.uploadKeyPrefix?.value || '').trim();
        const metadataRaw = elements.uploadForm?.querySelector('textarea[name="metadata"]')?.value?.trim();
        let metadata = null;
        if (metadataRaw) {
            try {
                metadata = JSON.parse(metadataRaw);
            } catch {
                callbacks.showMessage({ title: 'Invalid metadata', body: 'Metadata must be valid JSON.', variant: 'danger' });
                resetUploadUI();
                return;
            }
        }

        if (elements.bulkUploadProgress) elements.bulkUploadProgress.classList.remove('d-none');
        if (elements.bulkUploadResults) elements.bulkUploadResults.classList.add('d-none');
        if (elements.uploadSubmitBtn) elements.uploadSubmitBtn.disabled = true;
        if (elements.uploadFileInput) elements.uploadFileInput.disabled = true;

        const successFiles = [];
        const errorFiles = [];
        const total = files.length;

        updateFloatingProgress(0, total, files[0]?.name || '');

        for (let i = 0; i < total; i++) {
            const file = files[i];
            const current = i + 1;

            if (elements.bulkUploadCounter) elements.bulkUploadCounter.textContent = `${current}/${total}`;
            if (elements.bulkUploadCurrentFile) elements.bulkUploadCurrentFile.textContent = `Uploading: ${file.name}`;
            if (elements.bulkUploadProgressBar) {
                const percent = Math.round((current / total) * 100);
                elements.bulkUploadProgressBar.style.width = `${percent}%`;
            }
            updateFloatingProgress(i, total, file.name);

            try {
                await uploadSingleFile(file, keyPrefix, metadata, null, urls);
                successFiles.push(file.name);
            } catch (error) {
                errorFiles.push({ name: file.name, error: error.message || 'Unknown error' });
            }
        }
        updateFloatingProgress(total, total);

        if (elements.bulkUploadProgress) elements.bulkUploadProgress.classList.add('d-none');
        if (elements.bulkUploadResults) elements.bulkUploadResults.classList.remove('d-none');

        if (elements.bulkUploadSuccessCount) elements.bulkUploadSuccessCount.textContent = successFiles.length;
        if (successFiles.length === 0 && elements.bulkUploadSuccessAlert) {
            elements.bulkUploadSuccessAlert.classList.add('d-none');
        }

        if (errorFiles.length > 0) {
            if (elements.bulkUploadErrorCount) elements.bulkUploadErrorCount.textContent = errorFiles.length;
            if (elements.bulkUploadErrorAlert) elements.bulkUploadErrorAlert.classList.remove('d-none');
            if (elements.bulkUploadErrorList) {
                elements.bulkUploadErrorList.innerHTML = errorFiles
                    .map(f => `<li><strong>${callbacks.escapeHtml(f.name)}</strong>: ${callbacks.escapeHtml(f.error)}</li>`)
                    .join('');
            }
        }

        state.isUploading = false;
        setUploadLockState(false);

        if (successFiles.length > 0) {
            if (elements.uploadBtnText) elements.uploadBtnText.textContent = 'Refreshing...';
            callbacks.onUploadComplete(successFiles, errorFiles);
        } else {
            if (elements.uploadSubmitBtn) elements.uploadSubmitBtn.disabled = false;
|
||||||
|
if (elements.uploadFileInput) elements.uploadFileInput.disabled = false;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
function setupEventListeners() {
|
||||||
|
if (elements.uploadFileInput) {
|
||||||
|
elements.uploadFileInput.addEventListener('change', () => {
|
||||||
|
if (state.isUploading) return;
|
||||||
|
refreshUploadDropLabel();
|
||||||
|
updateUploadBtnText();
|
||||||
|
resetUploadUI();
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
if (elements.uploadDropZone) {
|
||||||
|
elements.uploadDropZone.addEventListener('click', () => {
|
||||||
|
if (state.isUploading) return;
|
||||||
|
elements.uploadFileInput?.click();
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
if (elements.floatingProgressExpand) {
|
||||||
|
elements.floatingProgressExpand.addEventListener('click', () => {
|
||||||
|
if (elements.uploadModal) {
|
||||||
|
elements.uploadModal.show();
|
||||||
|
}
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
if (elements.uploadModalEl) {
|
||||||
|
elements.uploadModalEl.addEventListener('hide.bs.modal', () => {
|
||||||
|
if (state.isUploading) {
|
||||||
|
showFloatingProgress();
|
||||||
|
}
|
||||||
|
});
|
||||||
|
|
||||||
|
elements.uploadModalEl.addEventListener('hidden.bs.modal', () => {
|
||||||
|
if (!state.isUploading) {
|
||||||
|
resetUploadUI();
|
||||||
|
if (elements.uploadFileInput) elements.uploadFileInput.value = '';
|
||||||
|
refreshUploadDropLabel();
|
||||||
|
updateUploadBtnText();
|
||||||
|
}
|
||||||
|
});
|
||||||
|
|
||||||
|
elements.uploadModalEl.addEventListener('show.bs.modal', () => {
|
||||||
|
if (state.isUploading) {
|
||||||
|
hideFloatingProgress();
|
||||||
|
}
|
||||||
|
if (callbacks.hasFolders() && callbacks.getCurrentPrefix()) {
|
||||||
|
if (elements.uploadKeyPrefix) {
|
||||||
|
elements.uploadKeyPrefix.value = callbacks.getCurrentPrefix();
|
||||||
|
}
|
||||||
|
} else if (elements.uploadKeyPrefix) {
|
||||||
|
elements.uploadKeyPrefix.value = '';
|
||||||
|
}
|
||||||
|
});
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
function wireDropTarget(target, options) {
|
||||||
|
const { highlightClass = '', autoOpenModal = false } = options || {};
|
||||||
|
if (!target) return;
|
||||||
|
|
||||||
|
const preventDefaults = (event) => {
|
||||||
|
event.preventDefault();
|
||||||
|
event.stopPropagation();
|
||||||
|
};
|
||||||
|
|
||||||
|
['dragenter', 'dragover'].forEach((eventName) => {
|
||||||
|
target.addEventListener(eventName, (event) => {
|
||||||
|
preventDefaults(event);
|
||||||
|
if (state.isUploading) return;
|
||||||
|
if (highlightClass) {
|
||||||
|
target.classList.add(highlightClass);
|
||||||
|
}
|
||||||
|
});
|
||||||
|
});
|
||||||
|
|
||||||
|
['dragleave', 'drop'].forEach((eventName) => {
|
||||||
|
target.addEventListener(eventName, (event) => {
|
||||||
|
preventDefaults(event);
|
||||||
|
if (highlightClass) {
|
||||||
|
target.classList.remove(highlightClass);
|
||||||
|
}
|
||||||
|
});
|
||||||
|
});
|
||||||
|
|
||||||
|
target.addEventListener('drop', (event) => {
|
||||||
|
if (state.isUploading) return;
|
||||||
|
if (!event.dataTransfer?.files?.length || !elements.uploadFileInput) {
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
elements.uploadFileInput.files = event.dataTransfer.files;
|
||||||
|
elements.uploadFileInput.dispatchEvent(new Event('change', { bubbles: true }));
|
||||||
|
if (autoOpenModal && elements.uploadModal) {
|
||||||
|
elements.uploadModal.show();
|
||||||
|
}
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
return {
|
||||||
|
init: init,
|
||||||
|
isUploading: isUploading,
|
||||||
|
performBulkUpload: performBulkUpload,
|
||||||
|
wireDropTarget: wireDropTarget,
|
||||||
|
resetUploadUI: resetUploadUI,
|
||||||
|
refreshUploadDropLabel: refreshUploadDropLabel,
|
||||||
|
updateUploadBtnText: updateUploadBtnText
|
||||||
|
};
|
||||||
|
})();
|
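For orientation, a minimal sketch of how a bucket page might wire this module up. The `window.BucketUpload` global, the element IDs, and the URL values below are assumptions for illustration; only the returned method names and the `urls.formAction` / `urls.initUrl` fields come from the module above.

```js
// Hypothetical wiring: the module name, element IDs, and URL paths are
// assumed; the method names come from the module's return block above.
document.addEventListener('DOMContentLoaded', () => {
  const uploader = window.BucketUpload; // assumed export name
  const urls = {
    formAction: '/ui/buckets/mybucket/upload',     // plain form upload endpoint (assumed path)
    initUrl: '/ui/buckets/mybucket/multipart/init' // presence of initUrl enables multipart for large files
  };

  // Highlight the drop zone on drag-over and open the upload modal on drop.
  uploader.wireDropTarget(document.getElementById('objects-drop-zone'), {
    highlightClass: 'drag-over',
    autoOpenModal: true
  });

  // Kick off a bulk upload of whatever is in the file input.
  document.getElementById('upload-submit')?.addEventListener('click', () => {
    const files = document.getElementById('upload-file-input')?.files;
    if (files && !uploader.isUploading()) {
      uploader.performBulkUpload(Array.from(files), urls);
    }
  });
});
```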
120
static/js/bucket-detail-utils.js
Normal file
@@ -0,0 +1,120 @@
window.BucketDetailUtils = (function() {
  'use strict';

  function setupJsonAutoIndent(textarea) {
    if (!textarea) return;

    textarea.addEventListener('keydown', function(e) {
      if (e.key === 'Enter') {
        e.preventDefault();

        const start = this.selectionStart;
        const end = this.selectionEnd;
        const value = this.value;

        const lineStart = value.lastIndexOf('\n', start - 1) + 1;
        const currentLine = value.substring(lineStart, start);

        const indentMatch = currentLine.match(/^(\s*)/);
        let indent = indentMatch ? indentMatch[1] : '';

        const trimmedLine = currentLine.trim();
        const lastChar = trimmedLine.slice(-1);

        let newIndent = indent;
        let insertAfter = '';

        if (lastChar === '{' || lastChar === '[') {
          newIndent = indent + '  ';

          const charAfterCursor = value.substring(start, start + 1).trim();
          if ((lastChar === '{' && charAfterCursor === '}') ||
              (lastChar === '[' && charAfterCursor === ']')) {
            insertAfter = '\n' + indent;
          }
        } else if (lastChar === ',' || lastChar === ':') {
          newIndent = indent;
        }

        const insertion = '\n' + newIndent + insertAfter;
        const newValue = value.substring(0, start) + insertion + value.substring(end);

        this.value = newValue;

        const newCursorPos = start + 1 + newIndent.length;
        this.selectionStart = this.selectionEnd = newCursorPos;

        this.dispatchEvent(new Event('input', { bubbles: true }));
      }

      if (e.key === 'Tab') {
        e.preventDefault();
        const start = this.selectionStart;
        const end = this.selectionEnd;

        if (e.shiftKey) {
          const lineStart = this.value.lastIndexOf('\n', start - 1) + 1;
          const lineContent = this.value.substring(lineStart, start);
          if (lineContent.startsWith('  ')) {
            this.value = this.value.substring(0, lineStart) +
              this.value.substring(lineStart + 2);
            this.selectionStart = this.selectionEnd = Math.max(lineStart, start - 2);
          }
        } else {
          this.value = this.value.substring(0, start) + '  ' + this.value.substring(end);
          this.selectionStart = this.selectionEnd = start + 2;
        }

        this.dispatchEvent(new Event('input', { bubbles: true }));
      }
    });
  }

  function formatBytes(bytes) {
    if (!Number.isFinite(bytes)) return `${bytes} bytes`;
    const units = ['bytes', 'KB', 'MB', 'GB', 'TB'];
    let i = 0;
    let size = bytes;
    while (size >= 1024 && i < units.length - 1) {
      size /= 1024;
      i++;
    }
    return `${size.toFixed(i === 0 ? 0 : 1)} ${units[i]}`;
  }

  function escapeHtml(value) {
    if (value === null || value === undefined) return '';
    return String(value)
      .replace(/&/g, '&amp;')
      .replace(/</g, '&lt;')
      .replace(/>/g, '&gt;')
      .replace(/"/g, '&quot;')
      .replace(/'/g, '&#39;');
  }

  function fallbackCopy(text) {
    const textArea = document.createElement('textarea');
    textArea.value = text;
    textArea.style.position = 'fixed';
    textArea.style.left = '-9999px';
    textArea.style.top = '-9999px';
    document.body.appendChild(textArea);
    textArea.focus();
    textArea.select();
    let success = false;
    try {
      success = document.execCommand('copy');
    } catch {
      success = false;
    }
    document.body.removeChild(textArea);
    return success;
  }

  return {
    setupJsonAutoIndent: setupJsonAutoIndent,
    formatBytes: formatBytes,
    escapeHtml: escapeHtml,
    fallbackCopy: fallbackCopy
  };
})();
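The helpers above hang off `window.BucketDetailUtils`; a short usage sketch (the textarea selector and sample values are illustrative, the method names and behaviour come from the file itself):

```js
const utils = window.BucketDetailUtils;

// Two-space auto-indent plus Tab / Shift+Tab handling for a JSON editor,
// e.g. the bucket policy textarea (selector is illustrative).
utils.setupJsonAutoIndent(document.querySelector('textarea[name="policy"]'));

utils.formatBytes(0);               // "0 bytes"
utils.formatBytes(1536);            // "1.5 KB"
utils.formatBytes(5 * 1024 * 1024); // "5.0 MB"

// escapeHtml guards innerHTML interpolation, as in the bulk-upload error list.
const safe = utils.escapeHtml('<img onerror=alert(1)>');

// fallbackCopy is the synchronous execCommand('copy') path for contexts
// where navigator.clipboard is unavailable (e.g. plain-HTTP deployments).
if (!navigator.clipboard) {
  utils.fallbackCopy('s3://mybucket/key');
}
```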
@@ -5,8 +5,8 @@
 <meta name="viewport" content="width=device-width, initial-scale=1" />
 {% if principal %}<meta name="csrf-token" content="{{ csrf_token() }}" />{% endif %}
 <title>MyFSIO Console</title>
-<link rel="icon" type="image/png" href="{{ url_for('static', filename='images/MyFISO.png') }}" />
-<link rel="icon" type="image/x-icon" href="{{ url_for('static', filename='images/MyFISO.ico') }}" />
+<link rel="icon" type="image/png" href="{{ url_for('static', filename='images/MyFSIO.png') }}" />
+<link rel="icon" type="image/x-icon" href="{{ url_for('static', filename='images/MyFSIO.ico') }}" />
 <link
 href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.2/dist/css/bootstrap.min.css"
 rel="stylesheet"
@@ -24,105 +24,218 @@
 document.documentElement.dataset.bsTheme = 'light';
 document.documentElement.dataset.theme = 'light';
 }
+try {
+if (localStorage.getItem('myfsio-sidebar-collapsed') === 'true') {
+document.documentElement.classList.add('sidebar-will-collapse');
+}
+} catch (err) {}
 })();
 </script>
 <link rel="stylesheet" href="{{ url_for('static', filename='css/main.css') }}" />
 </head>
 <body>
-<nav class="navbar navbar-expand-lg myfsio-nav shadow-sm">
-<div class="container-fluid">
-<a class="navbar-brand fw-semibold" href="{{ url_for('ui.buckets_overview') }}">
-<img
-src="{{ url_for('static', filename='images/MyFISO.png') }}"
-alt="MyFSIO logo"
-class="myfsio-logo"
-width="32"
-height="32"
-decoding="async"
-/>
-<span class="myfsio-title">MyFSIO</span>
-</a>
-<button class="navbar-toggler" type="button" data-bs-toggle="collapse" data-bs-target="#navContent" aria-controls="navContent" aria-expanded="false" aria-label="Toggle navigation">
-<span class="navbar-toggler-icon"></span>
-</button>
-<div class="collapse navbar-collapse" id="navContent">
-<ul class="navbar-nav me-auto mb-2 mb-lg-0">
-{% if principal %}
-<li class="nav-item">
-<a class="nav-link" href="{{ url_for('ui.buckets_overview') }}">Buckets</a>
-</li>
-{% if can_manage_iam %}
-<li class="nav-item">
-<a class="nav-link" href="{{ url_for('ui.iam_dashboard') }}">IAM</a>
-</li>
-<li class="nav-item">
-<a class="nav-link" href="{{ url_for('ui.connections_dashboard') }}">Connections</a>
-</li>
-<li class="nav-item">
-<a class="nav-link" href="{{ url_for('ui.metrics_dashboard') }}">Metrics</a>
-</li>
-{% endif %}
-{% endif %}
-{% if principal %}
-<li class="nav-item">
-<a class="nav-link" href="{{ url_for('ui.docs_page') }}">Docs</a>
-</li>
-{% endif %}
-</ul>
-<div class="ms-lg-auto d-flex align-items-center gap-3 text-light flex-wrap">
-<button
-class="btn btn-outline-light btn-sm theme-toggle"
-type="button"
-id="themeToggle"
-aria-pressed="false"
-aria-label="Toggle dark mode"
->
-<span id="themeToggleLabel" class="visually-hidden">Toggle dark mode</span>
-<svg
-xmlns="http://www.w3.org/2000/svg"
-width="16"
-height="16"
-fill="currentColor"
-class="theme-icon"
-id="themeToggleSun"
-viewBox="0 0 16 16"
-aria-hidden="true"
->
-<path
-d="M8 11.5a3.5 3.5 0 1 1 0-7 3.5 3.5 0 0 1 0 7zm0 1.5a5 5 0 1 0 0-10 5 5 0 0 0 0 10zM8 0a.5.5 0 0 1 .5.5v1.555a.5.5 0 0 1-1 0V.5A.5.5 0 0 1 8 0zm0 12.945a.5.5 0 0 1 .5.5v2.055a.5.5 0 0 1-1 0v-2.055a.5.5 0 0 1 .5-.5zM2.343 2.343a.5.5 0 0 1 .707 0l1.1 1.1a.5.5 0 1 1-.708.707l-1.1-1.1a.5.5 0 0 1 0-.707zm9.507 9.507a.5.5 0 0 1 .707 0l1.1 1.1a.5.5 0 1 1-.707.708l-1.1-1.1a.5.5 0 0 1 0-.708zM0 8a.5.5 0 0 1 .5-.5h1.555a.5.5 0 0 1 0 1H.5A.5.5 0 0 1 0 8zm12.945 0a.5.5 0 0 1 .5-.5H15.5a.5.5 0 0 1 0 1h-2.055a.5.5 0 0 1-.5-.5zM2.343 13.657a.5.5 0 0 1 0-.707l1.1-1.1a.5.5 0 1 1 .708.707l-1.1 1.1a.5.5 0 0 1-.708 0zm9.507-9.507a.5.5 0 0 1 0-.707l1.1-1.1a.5.5 0 0 1 .707.708l-1.1 1.1a.5.5 0 0 1-.707 0z"
-/>
+<header class="mobile-header d-lg-none">
+<button class="sidebar-toggle-btn" type="button" data-bs-toggle="offcanvas" data-bs-target="#mobileSidebar" aria-controls="mobileSidebar" aria-label="Toggle navigation">
+<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" fill="currentColor" viewBox="0 0 16 16">
+<path fill-rule="evenodd" d="M2.5 12a.5.5 0 0 1 .5-.5h10a.5.5 0 0 1 0 1H3a.5.5 0 0 1-.5-.5zm0-4a.5.5 0 0 1 .5-.5h10a.5.5 0 0 1 0 1H3a.5.5 0 0 1-.5-.5zm0-4a.5.5 0 0 1 .5-.5h10a.5.5 0 0 1 0 1H3a.5.5 0 0 1-.5-.5z"/>
 </svg>
-<svg
-xmlns="http://www.w3.org/2000/svg"
-width="16"
-height="16"
-fill="currentColor"
-class="theme-icon d-none"
-id="themeToggleMoon"
-viewBox="0 0 16 16"
-aria-hidden="true"
->
+</button>
+<a class="mobile-brand" href="{{ url_for('ui.buckets_overview') }}">
+<img src="{{ url_for('static', filename='images/MyFSIO.png') }}" alt="MyFSIO logo" width="28" height="28" />
+<span>MyFSIO</span>
+</a>
+<button class="theme-toggle-mobile" type="button" id="themeToggleMobile" aria-label="Toggle dark mode">
+<svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" fill="currentColor" class="theme-icon-mobile" id="themeToggleSunMobile" viewBox="0 0 16 16">
+<path d="M8 11.5a3.5 3.5 0 1 1 0-7 3.5 3.5 0 0 1 0 7zm0 1.5a5 5 0 1 0 0-10 5 5 0 0 0 0 10zM8 0a.5.5 0 0 1 .5.5v1.555a.5.5 0 0 1-1 0V.5A.5.5 0 0 1 8 0zm0 12.945a.5.5 0 0 1 .5.5v2.055a.5.5 0 0 1-1 0v-2.055a.5.5 0 0 1 .5-.5zM2.343 2.343a.5.5 0 0 1 .707 0l1.1 1.1a.5.5 0 1 1-.708.707l-1.1-1.1a.5.5 0 0 1 0-.707zm9.507 9.507a.5.5 0 0 1 .707 0l1.1 1.1a.5.5 0 1 1-.707.708l-1.1-1.1a.5.5 0 0 1 0-.708zM0 8a.5.5 0 0 1 .5-.5h1.555a.5.5 0 0 1 0 1H.5A.5.5 0 0 1 0 8zm12.945 0a.5.5 0 0 1 .5-.5H15.5a.5.5 0 0 1 0 1h-2.055a.5.5 0 0 1-.5-.5zM2.343 13.657a.5.5 0 0 1 0-.707l1.1-1.1a.5.5 0 1 1 .708.707l-1.1 1.1a.5.5 0 0 1-.708 0zm9.507-9.507a.5.5 0 0 1 0-.707l1.1-1.1a.5.5 0 0 1 .707.708l-1.1 1.1a.5.5 0 0 1-.707 0z"/>
+</svg>
+<svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" fill="currentColor" class="theme-icon-mobile" id="themeToggleMoonMobile" viewBox="0 0 16 16">
 <path d="M6 .278a.768.768 0 0 1 .08.858 7.208 7.208 0 0 0-.878 3.46c0 4.021 3.278 7.277 7.318 7.277.527 0 1.04-.055 1.533-.16a.787.787 0 0 1 .81.316.733.733 0 0 1-.031.893A8.349 8.349 0 0 1 8.344 16C3.734 16 0 12.286 0 7.71 0 4.266 2.114 1.312 5.124.06A.752.752 0 0 1 6 .278z"/>
 <path d="M10.794 3.148a.217.217 0 0 1 .412 0l.387 1.162c.173.518.579.924 1.097 1.097l1.162.387a.217.217 0 0 1 0 .412l-1.162.387a1.734 1.734 0 0 0-1.097 1.097l-.387 1.162a.217.217 0 0 1-.412 0l-.387-1.162A1.734 1.734 0 0 0 9.31 6.593l-1.162-.387a.217.217 0 0 1 0-.412l1.162-.387a1.734 1.734 0 0 0 1.097-1.097l.387-1.162zM13.863.099a.145.145 0 0 1 .274 0l.258.774c.115.346.386.617.732.732l.774.258a.145.145 0 0 1 0 .274l-.774.258a1.156 1.156 0 0 0-.732.732l-.258.774a.145.145 0 0 1-.274 0l-.258-.774a1.156 1.156 0 0 0-.732-.732l-.774-.258a.145.145 0 0 1 0-.274l.774-.258c.346-.115.617-.386.732-.732L13.863.1z"/>
 </svg>
 </button>
-{% if principal %}
-<div class="text-end small">
-<div class="fw-semibold" title="{{ principal.display_name }}">{{ principal.display_name | truncate(20, true) }}</div>
-<div class="opacity-75">{{ principal.access_key }}</div>
+</header>
+<div class="offcanvas offcanvas-start sidebar-offcanvas" tabindex="-1" id="mobileSidebar" aria-labelledby="mobileSidebarLabel">
+<div class="offcanvas-header sidebar-header">
+<a class="sidebar-brand" href="{{ url_for('ui.buckets_overview') }}">
+<img src="{{ url_for('static', filename='images/MyFSIO.png') }}" alt="MyFSIO logo" class="sidebar-logo" width="36" height="36" />
+<span class="sidebar-title">MyFSIO</span>
+</a>
+<button type="button" class="btn-close btn-close-white" data-bs-dismiss="offcanvas" aria-label="Close"></button>
 </div>
-<form method="post" action="{{ url_for('ui.logout') }}">
+<div class="offcanvas-body sidebar-body">
+<nav class="sidebar-nav">
+{% if principal %}
+<div class="nav-section">
+<span class="nav-section-title">Navigation</span>
+<a href="{{ url_for('ui.buckets_overview') }}" class="sidebar-link {% if request.endpoint == 'ui.buckets_overview' or request.endpoint == 'ui.bucket_detail' %}active{% endif %}">
+<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" viewBox="0 0 16 16">
+<path d="M2.522 5H2a.5.5 0 0 0-.494.574l1.372 9.149A1.5 1.5 0 0 0 4.36 16h7.278a1.5 1.5 0 0 0 1.483-1.277l1.373-9.149A.5.5 0 0 0 14 5h-.522A5.5 5.5 0 0 0 2.522 5zm1.005 0a4.5 4.5 0 0 1 8.945 0H3.527z"/>
+</svg>
+<span>Buckets</span>
+</a>
+{% if can_manage_iam %}
+<a href="{{ url_for('ui.iam_dashboard') }}" class="sidebar-link {% if request.endpoint == 'ui.iam_dashboard' %}active{% endif %}">
+<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" viewBox="0 0 16 16">
+<path d="M15 14s1 0 1-1-1-4-5-4-5 3-5 4 1 1 1 1h8zm-7.978-1A.261.261 0 0 1 7 12.996c.001-.264.167-1.03.76-1.72C8.312 10.629 9.282 10 11 10c1.717 0 2.687.63 3.24 1.276.593.69.758 1.457.76 1.72l-.008.002a.274.274 0 0 1-.014.002H7.022zM11 7a2 2 0 1 0 0-4 2 2 0 0 0 0 4zm3-2a3 3 0 1 1-6 0 3 3 0 0 1 6 0zM6.936 9.28a5.88 5.88 0 0 0-1.23-.247A7.35 7.35 0 0 0 5 9c-4 0-5 3-5 4 0 .667.333 1 1 1h4.216A2.238 2.238 0 0 1 5 13c0-1.01.377-2.042 1.09-2.904.243-.294.526-.569.846-.816zM4.92 10A5.493 5.493 0 0 0 4 13H1c0-.26.164-1.03.76-1.724.545-.636 1.492-1.256 3.16-1.275zM1.5 5.5a3 3 0 1 1 6 0 3 3 0 0 1-6 0zm3-2a2 2 0 1 0 0 4 2 2 0 0 0 0-4z"/>
+</svg>
+<span>IAM</span>
+</a>
+<a href="{{ url_for('ui.connections_dashboard') }}" class="sidebar-link {% if request.endpoint == 'ui.connections_dashboard' %}active{% endif %}">
+<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" viewBox="0 0 16 16">
+<path fill-rule="evenodd" d="M6 3.5A1.5 1.5 0 0 1 7.5 2h1A1.5 1.5 0 0 1 10 3.5v1A1.5 1.5 0 0 1 8.5 6v1H14a.5.5 0 0 1 .5.5v1a.5.5 0 0 1-1 0V8h-5v.5a.5.5 0 0 1-1 0V8h-5v.5a.5.5 0 0 1-1 0v-1A.5.5 0 0 1 2 7h5.5V6A1.5 1.5 0 0 1 6 4.5v-1zM8.5 5a.5.5 0 0 0 .5-.5v-1a.5.5 0 0 0-.5-.5h-1a.5.5 0 0 0-.5.5v1a.5.5 0 0 0 .5.5h1zM0 11.5A1.5 1.5 0 0 1 1.5 10h1A1.5 1.5 0 0 1 4 11.5v1A1.5 1.5 0 0 1 2.5 14h-1A1.5 1.5 0 0 1 0 12.5v-1zm1.5-.5a.5.5 0 0 0-.5.5v1a.5.5 0 0 0 .5.5h1a.5.5 0 0 0 .5-.5v-1a.5.5 0 0 0-.5-.5h-1zm4.5.5A1.5 1.5 0 0 1 7.5 10h1a1.5 1.5 0 0 1 1.5 1.5v1A1.5 1.5 0 0 1 8.5 14h-1A1.5 1.5 0 0 1 6 12.5v-1zm1.5-.5a.5.5 0 0 0-.5.5v1a.5.5 0 0 0 .5.5h1a.5.5 0 0 0 .5-.5v-1a.5.5 0 0 0-.5-.5h-1zm4.5.5a1.5 1.5 0 0 1 1.5-1.5h1a1.5 1.5 0 0 1 1.5 1.5v1a1.5 1.5 0 0 1-1.5 1.5h-1a1.5 1.5 0 0 1-1.5-1.5v-1zm1.5-.5a.5.5 0 0 0-.5.5v1a.5.5 0 0 0 .5.5h1a.5.5 0 0 0 .5-.5v-1a.5.5 0 0 0-.5-.5h-1z"/>
+</svg>
+<span>Connections</span>
+</a>
+<a href="{{ url_for('ui.metrics_dashboard') }}" class="sidebar-link {% if request.endpoint == 'ui.metrics_dashboard' %}active{% endif %}">
+<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" viewBox="0 0 16 16">
+<path d="M8 4a.5.5 0 0 1 .5.5V6a.5.5 0 0 1-1 0V4.5A.5.5 0 0 1 8 4zM3.732 5.732a.5.5 0 0 1 .707 0l.915.914a.5.5 0 1 1-.708.708l-.914-.915a.5.5 0 0 1 0-.707zM2 10a.5.5 0 0 1 .5-.5h1.586a.5.5 0 0 1 0 1H2.5A.5.5 0 0 1 2 10zm9.5 0a.5.5 0 0 1 .5-.5h1.5a.5.5 0 0 1 0 1H12a.5.5 0 0 1-.5-.5zm.754-4.246a.389.389 0 0 0-.527-.02L7.547 9.31a.91.91 0 1 0 1.302 1.258l3.434-4.297a.389.389 0 0 0-.029-.518z"/>
+<path fill-rule="evenodd" d="M0 10a8 8 0 1 1 15.547 2.661c-.442 1.253-1.845 1.602-2.932 1.25C11.309 13.488 9.475 13 8 13c-1.474 0-3.31.488-4.615.911-1.087.352-2.49.003-2.932-1.25A7.988 7.988 0 0 1 0 10zm8-7a7 7 0 0 0-6.603 9.329c.203.575.923.876 1.68.63C4.397 12.533 6.358 12 8 12s3.604.532 4.923.96c.757.245 1.477-.056 1.68-.631A7 7 0 0 0 8 3z"/>
+</svg>
+<span>Metrics</span>
+</a>
+{% endif %}
+</div>
+<div class="nav-section">
+<span class="nav-section-title">Resources</span>
+<a href="{{ url_for('ui.docs_page') }}" class="sidebar-link {% if request.endpoint == 'ui.docs_page' %}active{% endif %}">
+<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" viewBox="0 0 16 16">
+<path d="M1 2.828c.885-.37 2.154-.769 3.388-.893 1.33-.134 2.458.063 3.112.752v9.746c-.935-.53-2.12-.603-3.213-.493-1.18.12-2.37.461-3.287.811V2.828zm7.5-.141c.654-.689 1.782-.886 3.112-.752 1.234.124 2.503.523 3.388.893v9.923c-.918-.35-2.107-.692-3.287-.81-1.094-.111-2.278-.039-3.213.492V2.687zM8 1.783C7.015.936 5.587.81 4.287.94c-1.514.153-3.042.672-3.994 1.105A.5.5 0 0 0 0 2.5v11a.5.5 0 0 0 .707.455c.882-.4 2.303-.881 3.68-1.02 1.409-.142 2.59.087 3.223.877a.5.5 0 0 0 .78 0c.633-.79 1.814-1.019 3.222-.877 1.378.139 2.8.62 3.681 1.02A.5.5 0 0 0 16 13.5v-11a.5.5 0 0 0-.293-.455c-.952-.433-2.48-.952-3.994-1.105C10.413.809 8.985.936 8 1.783z"/>
+</svg>
+<span>Documentation</span>
+</a>
+</div>
+{% endif %}
+</nav>
+{% if principal %}
+<div class="sidebar-footer">
+<div class="sidebar-user">
+<div class="user-avatar">
+<svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" fill="currentColor" viewBox="0 0 16 16">
+<path d="M11 6a3 3 0 1 1-6 0 3 3 0 0 1 6 0z"/>
+<path fill-rule="evenodd" d="M0 8a8 8 0 1 1 16 0A8 8 0 0 1 0 8zm8-7a7 7 0 0 0-5.468 11.37C3.242 11.226 4.805 10 8 10s4.757 1.225 5.468 2.37A7 7 0 0 0 8 1z"/>
+</svg>
+</div>
+<div class="user-info">
+<div class="user-name" title="{{ principal.display_name }}">{{ principal.display_name | truncate(16, true) }}</div>
+<div class="user-key">{{ principal.access_key | truncate(12, true) }}</div>
+</div>
+</div>
+<form method="post" action="{{ url_for('ui.logout') }}" class="w-100">
 <input type="hidden" name="csrf_token" value="{{ csrf_token() }}" />
-<button class="btn btn-outline-light btn-sm" type="submit">Sign out</button>
+<button class="sidebar-logout-btn" type="submit">
+<svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" fill="currentColor" viewBox="0 0 16 16">
+<path fill-rule="evenodd" d="M10 12.5a.5.5 0 0 1-.5.5h-8a.5.5 0 0 1-.5-.5v-9a.5.5 0 0 1 .5-.5h8a.5.5 0 0 1 .5.5v2a.5.5 0 0 0 1 0v-2A1.5 1.5 0 0 0 9.5 2h-8A1.5 1.5 0 0 0 0 3.5v9A1.5 1.5 0 0 0 1.5 14h8a1.5 1.5 0 0 0 1.5-1.5v-2a.5.5 0 0 0-1 0v2z"/>
+<path fill-rule="evenodd" d="M15.854 8.354a.5.5 0 0 0 0-.708l-3-3a.5.5 0 0 0-.708.708L14.293 7.5H5.5a.5.5 0 0 0 0 1h8.793l-2.147 2.146a.5.5 0 0 0 .708.708l3-3z"/>
+</svg>
+<span>Sign out</span>
+</button>
 </form>
+</div>
 {% endif %}
 </div>
 </div>

+<aside class="sidebar d-none d-lg-flex" id="desktopSidebar">
+<div class="sidebar-header">
+<div class="sidebar-brand" id="sidebarBrand">
+<img src="{{ url_for('static', filename='images/MyFSIO.png') }}" alt="MyFSIO logo" class="sidebar-logo" width="36" height="36" />
+<span class="sidebar-title">MyFSIO</span>
 </div>
+<button class="sidebar-collapse-btn" type="button" id="sidebarCollapseBtn" aria-label="Collapse sidebar">
+<svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" fill="currentColor" viewBox="0 0 16 16">
+<path fill-rule="evenodd" d="M11.354 1.646a.5.5 0 0 1 0 .708L5.707 8l5.647 5.646a.5.5 0 0 1-.708.708l-6-6a.5.5 0 0 1 0-.708l6-6a.5.5 0 0 1 .708 0z"/>
+</svg>
+</button>
+</div>
+<div class="sidebar-body">
+<nav class="sidebar-nav">
+{% if principal %}
+<div class="nav-section">
+<span class="nav-section-title">Navigation</span>
+<a href="{{ url_for('ui.buckets_overview') }}" class="sidebar-link {% if request.endpoint == 'ui.buckets_overview' or request.endpoint == 'ui.bucket_detail' %}active{% endif %}" data-tooltip="Buckets">
+<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" viewBox="0 0 16 16">
+<path d="M2.522 5H2a.5.5 0 0 0-.494.574l1.372 9.149A1.5 1.5 0 0 0 4.36 16h7.278a1.5 1.5 0 0 0 1.483-1.277l1.373-9.149A.5.5 0 0 0 14 5h-.522A5.5 5.5 0 0 0 2.522 5zm1.005 0a4.5 4.5 0 0 1 8.945 0H3.527z"/>
+</svg>
+<span class="sidebar-link-text">Buckets</span>
+</a>
+{% if can_manage_iam %}
+<a href="{{ url_for('ui.iam_dashboard') }}" class="sidebar-link {% if request.endpoint == 'ui.iam_dashboard' %}active{% endif %}" data-tooltip="IAM">
+<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" viewBox="0 0 16 16">
+<path d="M15 14s1 0 1-1-1-4-5-4-5 3-5 4 1 1 1 1h8zm-7.978-1A.261.261 0 0 1 7 12.996c.001-.264.167-1.03.76-1.72C8.312 10.629 9.282 10 11 10c1.717 0 2.687.63 3.24 1.276.593.69.758 1.457.76 1.72l-.008.002a.274.274 0 0 1-.014.002H7.022zM11 7a2 2 0 1 0 0-4 2 2 0 0 0 0 4zm3-2a3 3 0 1 1-6 0 3 3 0 0 1 6 0zM6.936 9.28a5.88 5.88 0 0 0-1.23-.247A7.35 7.35 0 0 0 5 9c-4 0-5 3-5 4 0 .667.333 1 1 1h4.216A2.238 2.238 0 0 1 5 13c0-1.01.377-2.042 1.09-2.904.243-.294.526-.569.846-.816zM4.92 10A5.493 5.493 0 0 0 4 13H1c0-.26.164-1.03.76-1.724.545-.636 1.492-1.256 3.16-1.275zM1.5 5.5a3 3 0 1 1 6 0 3 3 0 0 1-6 0zm3-2a2 2 0 1 0 0 4 2 2 0 0 0 0-4z"/>
+</svg>
+<span class="sidebar-link-text">IAM</span>
+</a>
+<a href="{{ url_for('ui.connections_dashboard') }}" class="sidebar-link {% if request.endpoint == 'ui.connections_dashboard' %}active{% endif %}" data-tooltip="Connections">
+<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" viewBox="0 0 16 16">
+<path fill-rule="evenodd" d="M6 3.5A1.5 1.5 0 0 1 7.5 2h1A1.5 1.5 0 0 1 10 3.5v1A1.5 1.5 0 0 1 8.5 6v1H14a.5.5 0 0 1 .5.5v1a.5.5 0 0 1-1 0V8h-5v.5a.5.5 0 0 1-1 0V8h-5v.5a.5.5 0 0 1-1 0v-1A.5.5 0 0 1 2 7h5.5V6A1.5 1.5 0 0 1 6 4.5v-1zM8.5 5a.5.5 0 0 0 .5-.5v-1a.5.5 0 0 0-.5-.5h-1a.5.5 0 0 0-.5.5v1a.5.5 0 0 0 .5.5h1zM0 11.5A1.5 1.5 0 0 1 1.5 10h1A1.5 1.5 0 0 1 4 11.5v1A1.5 1.5 0 0 1 2.5 14h-1A1.5 1.5 0 0 1 0 12.5v-1zm1.5-.5a.5.5 0 0 0-.5.5v1a.5.5 0 0 0 .5.5h1a.5.5 0 0 0 .5-.5v-1a.5.5 0 0 0-.5-.5h-1zm4.5.5A1.5 1.5 0 0 1 7.5 10h1a1.5 1.5 0 0 1 1.5 1.5v1A1.5 1.5 0 0 1 8.5 14h-1A1.5 1.5 0 0 1 6 12.5v-1zm1.5-.5a.5.5 0 0 0-.5.5v1a.5.5 0 0 0 .5.5h1a.5.5 0 0 0 .5-.5v-1a.5.5 0 0 0-.5-.5h-1zm4.5.5a1.5 1.5 0 0 1 1.5-1.5h1a1.5 1.5 0 0 1 1.5 1.5v1a1.5 1.5 0 0 1-1.5 1.5h-1a1.5 1.5 0 0 1-1.5-1.5v-1zm1.5-.5a.5.5 0 0 0-.5.5v1a.5.5 0 0 0 .5.5h1a.5.5 0 0 0 .5-.5v-1a.5.5 0 0 0-.5-.5h-1z"/>
+</svg>
+<span class="sidebar-link-text">Connections</span>
+</a>
+<a href="{{ url_for('ui.metrics_dashboard') }}" class="sidebar-link {% if request.endpoint == 'ui.metrics_dashboard' %}active{% endif %}" data-tooltip="Metrics">
+<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" viewBox="0 0 16 16">
+<path d="M8 4a.5.5 0 0 1 .5.5V6a.5.5 0 0 1-1 0V4.5A.5.5 0 0 1 8 4zM3.732 5.732a.5.5 0 0 1 .707 0l.915.914a.5.5 0 1 1-.708.708l-.914-.915a.5.5 0 0 1 0-.707zM2 10a.5.5 0 0 1 .5-.5h1.586a.5.5 0 0 1 0 1H2.5A.5.5 0 0 1 2 10zm9.5 0a.5.5 0 0 1 .5-.5h1.5a.5.5 0 0 1 0 1H12a.5.5 0 0 1-.5-.5zm.754-4.246a.389.389 0 0 0-.527-.02L7.547 9.31a.91.91 0 1 0 1.302 1.258l3.434-4.297a.389.389 0 0 0-.029-.518z"/>
+<path fill-rule="evenodd" d="M0 10a8 8 0 1 1 15.547 2.661c-.442 1.253-1.845 1.602-2.932 1.25C11.309 13.488 9.475 13 8 13c-1.474 0-3.31.488-4.615.911-1.087.352-2.49.003-2.932-1.25A7.988 7.988 0 0 1 0 10zm8-7a7 7 0 0 0-6.603 9.329c.203.575.923.876 1.68.63C4.397 12.533 6.358 12 8 12s3.604.532 4.923.96c.757.245 1.477-.056 1.68-.631A7 7 0 0 0 8 3z"/>
+</svg>
+<span class="sidebar-link-text">Metrics</span>
+</a>
+{% endif %}
+</div>
+<div class="nav-section">
+<span class="nav-section-title">Resources</span>
+<a href="{{ url_for('ui.docs_page') }}" class="sidebar-link {% if request.endpoint == 'ui.docs_page' %}active{% endif %}" data-tooltip="Documentation">
+<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" viewBox="0 0 16 16">
+<path d="M1 2.828c.885-.37 2.154-.769 3.388-.893 1.33-.134 2.458.063 3.112.752v9.746c-.935-.53-2.12-.603-3.213-.493-1.18.12-2.37.461-3.287.811V2.828zm7.5-.141c.654-.689 1.782-.886 3.112-.752 1.234.124 2.503.523 3.388.893v9.923c-.918-.35-2.107-.692-3.287-.81-1.094-.111-2.278-.039-3.213.492V2.687zM8 1.783C7.015.936 5.587.81 4.287.94c-1.514.153-3.042.672-3.994 1.105A.5.5 0 0 0 0 2.5v11a.5.5 0 0 0 .707.455c.882-.4 2.303-.881 3.68-1.02 1.409-.142 2.59.087 3.223.877a.5.5 0 0 0 .78 0c.633-.79 1.814-1.019 3.222-.877 1.378.139 2.8.62 3.681 1.02A.5.5 0 0 0 16 13.5v-11a.5.5 0 0 0-.293-.455c-.952-.433-2.48-.952-3.994-1.105C10.413.809 8.985.936 8 1.783z"/>
+</svg>
+<span class="sidebar-link-text">Documentation</span>
+</a>
+</div>
+{% endif %}
 </nav>
-<main class="container py-4">
+</div>
+<div class="sidebar-footer">
+<button class="theme-toggle-sidebar" type="button" id="themeToggle" aria-label="Toggle dark mode">
+<svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" fill="currentColor" class="theme-icon" id="themeToggleSun" viewBox="0 0 16 16">
+<path d="M8 11.5a3.5 3.5 0 1 1 0-7 3.5 3.5 0 0 1 0 7zm0 1.5a5 5 0 1 0 0-10 5 5 0 0 0 0 10zM8 0a.5.5 0 0 1 .5.5v1.555a.5.5 0 0 1-1 0V.5A.5.5 0 0 1 8 0zm0 12.945a.5.5 0 0 1 .5.5v2.055a.5.5 0 0 1-1 0v-2.055a.5.5 0 0 1 .5-.5zM2.343 2.343a.5.5 0 0 1 .707 0l1.1 1.1a.5.5 0 1 1-.708.707l-1.1-1.1a.5.5 0 0 1 0-.707zm9.507 9.507a.5.5 0 0 1 .707 0l1.1 1.1a.5.5 0 1 1-.707.708l-1.1-1.1a.5.5 0 0 1 0-.708zM0 8a.5.5 0 0 1 .5-.5h1.555a.5.5 0 0 1 0 1H.5A.5.5 0 0 1 0 8zm12.945 0a.5.5 0 0 1 .5-.5H15.5a.5.5 0 0 1 0 1h-2.055a.5.5 0 0 1-.5-.5zM2.343 13.657a.5.5 0 0 1 0-.707l1.1-1.1a.5.5 0 1 1 .708.707l-1.1 1.1a.5.5 0 0 1-.708 0zm9.507-9.507a.5.5 0 0 1 0-.707l1.1-1.1a.5.5 0 0 1 .707.708l-1.1 1.1a.5.5 0 0 1-.707 0z"/>
+</svg>
+<svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" fill="currentColor" class="theme-icon" id="themeToggleMoon" viewBox="0 0 16 16">
+<path d="M6 .278a.768.768 0 0 1 .08.858 7.208 7.208 0 0 0-.878 3.46c0 4.021 3.278 7.277 7.318 7.277.527 0 1.04-.055 1.533-.16a.787.787 0 0 1 .81.316.733.733 0 0 1-.031.893A8.349 8.349 0 0 1 8.344 16C3.734 16 0 12.286 0 7.71 0 4.266 2.114 1.312 5.124.06A.752.752 0 0 1 6 .278z"/>
+<path d="M10.794 3.148a.217.217 0 0 1 .412 0l.387 1.162c.173.518.579.924 1.097 1.097l1.162.387a.217.217 0 0 1 0 .412l-1.162.387a1.734 1.734 0 0 0-1.097 1.097l-.387 1.162a.217.217 0 0 1-.412 0l-.387-1.162A1.734 1.734 0 0 0 9.31 6.593l-1.162-.387a.217.217 0 0 1 0-.412l1.162-.387a1.734 1.734 0 0 0 1.097-1.097l.387-1.162zM13.863.099a.145.145 0 0 1 .274 0l.258.774c.115.346.386.617.732.732l.774.258a.145.145 0 0 1 0 .274l-.774.258a1.156 1.156 0 0 0-.732.732l-.258.774a.145.145 0 0 1-.274 0l-.258-.774a1.156 1.156 0 0 0-.732-.732l-.774-.258a.145.145 0 0 1 0-.274l.774-.258c.346-.115.617-.386.732-.732L13.863.1z"/>
+</svg>
+<span class="theme-toggle-text">Toggle theme</span>
+</button>
+{% if principal %}
+<div class="sidebar-user" data-username="{{ principal.display_name }}">
+<div class="user-avatar">
+<svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" fill="currentColor" viewBox="0 0 16 16">
+<path d="M11 6a3 3 0 1 1-6 0 3 3 0 0 1 6 0z"/>
+<path fill-rule="evenodd" d="M0 8a8 8 0 1 1 16 0A8 8 0 0 1 0 8zm8-7a7 7 0 0 0-5.468 11.37C3.242 11.226 4.805 10 8 10s4.757 1.225 5.468 2.37A7 7 0 0 0 8 1z"/>
+</svg>
+</div>
+<div class="user-info">
+<div class="user-name" title="{{ principal.display_name }}">{{ principal.display_name | truncate(16, true) }}</div>
+<div class="user-key">{{ principal.access_key | truncate(12, true) }}</div>
+</div>
+</div>
+<form method="post" action="{{ url_for('ui.logout') }}" class="w-100">
+<input type="hidden" name="csrf_token" value="{{ csrf_token() }}" />
+<button class="sidebar-logout-btn" type="submit">
+<svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" fill="currentColor" viewBox="0 0 16 16">
+<path fill-rule="evenodd" d="M10 12.5a.5.5 0 0 1-.5.5h-8a.5.5 0 0 1-.5-.5v-9a.5.5 0 0 1 .5-.5h8a.5.5 0 0 1 .5.5v2a.5.5 0 0 0 1 0v-2A1.5 1.5 0 0 0 9.5 2h-8A1.5 1.5 0 0 0 0 3.5v9A1.5 1.5 0 0 0 1.5 14h8a1.5 1.5 0 0 0 1.5-1.5v-2a.5.5 0 0 0-1 0v2z"/>
+<path fill-rule="evenodd" d="M15.854 8.354a.5.5 0 0 0 0-.708l-3-3a.5.5 0 0 0-.708.708L14.293 7.5H5.5a.5.5 0 0 0 0 1h8.793l-2.147 2.146a.5.5 0 0 0 .708.708l3-3z"/>
+</svg>
+<span class="logout-text">Sign out</span>
+</button>
+</form>
+{% endif %}
+</div>
+</aside>
+
+<div class="main-wrapper">
+<main class="main-content">
 {% block content %}{% endblock %}
 </main>
+</div>
 <div class="toast-container position-fixed bottom-0 end-0 p-3">
 <div id="liveToast" class="toast" role="alert" aria-live="assertive" aria-atomic="true">
 <div class="toast-header">
@@ -162,9 +275,11 @@
 (function () {
 const storageKey = 'myfsio-theme';
 const toggle = document.getElementById('themeToggle');
-const label = document.getElementById('themeToggleLabel');
+const toggleMobile = document.getElementById('themeToggleMobile');
 const sunIcon = document.getElementById('themeToggleSun');
 const moonIcon = document.getElementById('themeToggleMoon');
+const sunIconMobile = document.getElementById('themeToggleSunMobile');
+const moonIconMobile = document.getElementById('themeToggleMoonMobile');
+
 const applyTheme = (theme) => {
 document.documentElement.dataset.bsTheme = theme;
@@ -172,29 +287,74 @@
 try {
 localStorage.setItem(storageKey, theme);
 } catch (err) {
-/* localStorage unavailable */
+console.log("Error: local storage not available, cannot save theme preference.");
 }
-if (label) {
-label.textContent = theme === 'dark' ? 'Switch to light mode' : 'Switch to dark mode';
-}
-if (toggle) {
-toggle.setAttribute('aria-pressed', theme === 'dark' ? 'true' : 'false');
-toggle.setAttribute('title', theme === 'dark' ? 'Switch to light mode' : 'Switch to dark mode');
-toggle.setAttribute('aria-label', theme === 'dark' ? 'Switch to light mode' : 'Switch to dark mode');
-}
-if (sunIcon && moonIcon) {
 const isDark = theme === 'dark';
+if (sunIcon && moonIcon) {
 sunIcon.classList.toggle('d-none', !isDark);
 moonIcon.classList.toggle('d-none', isDark);
 }
+if (sunIconMobile && moonIconMobile) {
+sunIconMobile.classList.toggle('d-none', !isDark);
+moonIconMobile.classList.toggle('d-none', isDark);
+}
+[toggle, toggleMobile].forEach(btn => {
+if (btn) {
+btn.setAttribute('aria-pressed', isDark ? 'true' : 'false');
+btn.setAttribute('title', isDark ? 'Switch to light mode' : 'Switch to dark mode');
+btn.setAttribute('aria-label', isDark ? 'Switch to light mode' : 'Switch to dark mode');
+}
+});
 };
+
 const current = document.documentElement.dataset.bsTheme || 'light';
 applyTheme(current);
+
-toggle?.addEventListener('click', () => {
+const handleToggle = () => {
 const next = document.documentElement.dataset.bsTheme === 'dark' ? 'light' : 'dark';
 applyTheme(next);
+};
+
+toggle?.addEventListener('click', handleToggle);
+toggleMobile?.addEventListener('click', handleToggle);
+})();
+</script>
+<script>
+(function () {
+const sidebar = document.getElementById('desktopSidebar');
+const collapseBtn = document.getElementById('sidebarCollapseBtn');
+const sidebarBrand = document.getElementById('sidebarBrand');
+const storageKey = 'myfsio-sidebar-collapsed';
+
+if (!sidebar || !collapseBtn) return;
+
+const applyCollapsed = (collapsed) => {
+sidebar.classList.toggle('sidebar-collapsed', collapsed);
+document.body.classList.toggle('sidebar-is-collapsed', collapsed);
+document.documentElement.classList.remove('sidebar-will-collapse');
+try {
+localStorage.setItem(storageKey, collapsed ? 'true' : 'false');
+} catch (err) {}
+};
+
+try {
+const stored = localStorage.getItem(storageKey);
+applyCollapsed(stored === 'true');
+} catch (err) {
+document.documentElement.classList.remove('sidebar-will-collapse');
+}
+
+collapseBtn.addEventListener('click', () => {
+const isCollapsed = sidebar.classList.contains('sidebar-collapsed');
+applyCollapsed(!isCollapsed);
+});
+
+sidebarBrand?.addEventListener('click', (e) => {
+const isCollapsed = sidebar.classList.contains('sidebar-collapsed');
+if (isCollapsed) {
+e.preventDefault();
+applyCollapsed(false);
+}
 });
 })();
 </script>
File diff suppressed because it is too large
@@ -46,8 +46,7 @@
 <div class="d-flex align-items-center gap-3">
 <div class="bucket-icon">
 <svg xmlns="http://www.w3.org/2000/svg" width="22" height="22" fill="currentColor" viewBox="0 0 16 16">
-<path d="M4.5 5a.5.5 0 1 0 0-1 .5.5 0 0 0 0 1zM3 4.5a.5.5 0 1 1-1 0 .5.5 0 0 1 1 0z"/>
-<path d="M0 4a2 2 0 0 1 2-2h12a2 2 0 0 1 2 2v1a2 2 0 0 1-2 2H8.5v3a1.5 1.5 0 0 1 1.5 1.5H11a.5.5 0 0 1 0 1h-1v1h1a.5.5 0 0 1 0 1h-1v1a.5.5 0 0 1-1 0v-1H6v1a.5.5 0 0 1-1 0v-1H4a.5.5 0 0 1 0-1h1v-1H4a.5.5 0 0 1 0-1h1.5A1.5 1.5 0 0 1 7 10.5V7H2a2 2 0 0 1-2-2V4zm1 0v1a1 1 0 0 0 1 1h12a1 1 0 0 0 1-1V4a1 1 0 0 0-1-1H2a1 1 0 0 0-1 1zm5 7.5v1h3v-1a.5.5 0 0 0-.5-.5h-2a.5.5 0 0 0-.5.5z"/>
+<path d="M2.522 5H2a.5.5 0 0 0-.494.574l1.372 9.149A1.5 1.5 0 0 0 4.36 16h7.278a1.5 1.5 0 0 0 1.483-1.277l1.373-9.149A.5.5 0 0 0 14 5h-.522A5.5 5.5 0 0 0 2.522 5zm1.005 0a4.5 4.5 0 0 1 8.945 0H3.527z"/>
 </svg>
 </div>
 <div>
@@ -134,7 +133,7 @@

 const searchInput = document.getElementById('bucket-search');
 const bucketItems = document.querySelectorAll('.bucket-item');
-const noBucketsMsg = document.querySelector('.text-center.py-5'); // The "No buckets found" empty state
+const noBucketsMsg = document.querySelector('.text-center.py-5');

 if (searchInput) {
 searchInput.addEventListener('input', (e) => {
@@ -8,8 +8,8 @@
 <p class="text-uppercase text-muted small mb-1">Replication</p>
 <h1 class="h3 mb-1 d-flex align-items-center gap-2">
 <svg xmlns="http://www.w3.org/2000/svg" width="28" height="28" fill="currentColor" class="text-primary" viewBox="0 0 16 16">
-<path d="M4.5 5a.5.5 0 1 0 0-1 .5.5 0 0 0 0 1zM3 4.5a.5.5 0 1 1-1 0 .5.5 0 0 1 1 0z"/>
-<path d="M0 4a2 2 0 0 1 2-2h12a2 2 0 0 1 2 2v1a2 2 0 0 1-2 2H8.5v3a1.5 1.5 0 0 1 1.5 1.5H12a.5.5 0 0 1 0 1H4a.5.5 0 0 1 0-1h2A1.5 1.5 0 0 1 7.5 10V7H2a2 2 0 0 1-2-2V4zm1 0v1a1 1 0 0 0 1 1h12a1 1 0 0 0 1-1V4a1 1 0 0 0-1-1H2a1 1 0 0 0-1 1z"/>
+<path d="M4.406 3.342A5.53 5.53 0 0 1 8 2c2.69 0 4.923 2 5.166 4.579C14.758 6.804 16 8.137 16 9.773 16 11.569 14.502 13 12.687 13H3.781C1.708 13 0 11.366 0 9.318c0-1.763 1.266-3.223 2.942-3.593.143-.863.698-1.723 1.464-2.383z"/>
+<path d="M10.232 8.768l.546-.353a.25.25 0 0 0 0-.418l-.546-.354a.25.25 0 0 1-.116-.21V6.25a.25.25 0 0 0-.25-.25h-.5a.25.25 0 0 0-.25.25v1.183a.25.25 0 0 1-.116.21l-.546.354a.25.25 0 0 0 0 .418l.546.353a.25.25 0 0 1 .116.21v1.183a.25.25 0 0 0 .25.25h.5a.25.25 0 0 0 .25-.25V8.978a.25.25 0 0 1 .116-.21z"/>
 </svg>
 Remote Connections
 </h1>
@@ -124,8 +124,7 @@
 <div class="d-flex align-items-center gap-2">
 <div class="connection-icon">
 <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" viewBox="0 0 16 16">
-<path d="M4.5 5a.5.5 0 1 0 0-1 .5.5 0 0 0 0 1zM3 4.5a.5.5 0 1 1-1 0 .5.5 0 0 1 1 0z"/>
-<path d="M0 4a2 2 0 0 1 2-2h12a2 2 0 0 1 2 2v1a2 2 0 0 1-2 2H8.5v3a1.5 1.5 0 0 1 1.5 1.5H12a.5.5 0 0 1 0 1H4a.5.5 0 0 1 0-1h2A1.5 1.5 0 0 1 7.5 10V7H2a2 2 0 0 1-2-2V4zm1 0v1a1 1 0 0 0 1 1h12a1 1 0 0 0 1-1V4a1 1 0 0 0-1-1H2a1 1 0 0 0-1 1z"/>
+<path d="M4.406 3.342A5.53 5.53 0 0 1 8 2c2.69 0 4.923 2 5.166 4.579C14.758 6.804 16 8.137 16 9.773 16 11.569 14.502 13 12.687 13H3.781C1.708 13 0 11.366 0 9.318c0-1.763 1.266-3.223 2.942-3.593.143-.863.698-1.723 1.464-2.383z"/>
 </svg>
 </div>
 <span class="fw-medium">{{ conn.name }}</span>
@@ -174,8 +173,7 @@
 <div class="empty-state text-center py-5">
 <div class="empty-state-icon mx-auto mb-3">
 <svg xmlns="http://www.w3.org/2000/svg" width="48" height="48" fill="currentColor" viewBox="0 0 16 16">
-<path d="M4.5 5a.5.5 0 1 0 0-1 .5.5 0 0 0 0 1zM3 4.5a.5.5 0 1 1-1 0 .5.5 0 0 1 1 0z"/>
-<path d="M0 4a2 2 0 0 1 2-2h12a2 2 0 0 1 2 2v1a2 2 0 0 1-2 2H8.5v3a1.5 1.5 0 0 1 1.5 1.5H12a.5.5 0 0 1 0 1H4a.5.5 0 0 1 0-1h2A1.5 1.5 0 0 1 7.5 10V7H2a2 2 0 0 1-2-2V4zm1 0v1a1 1 0 0 0 1 1h12a1 1 0 0 0 1-1V4a1 1 0 0 0-1-1H2a1 1 0 0 0-1 1z"/>
+<path d="M4.406 3.342A5.53 5.53 0 0 1 8 2c2.69 0 4.923 2 5.166 4.579C14.758 6.804 16 8.137 16 9.773 16 11.569 14.502 13 12.687 13H3.781C1.708 13 0 11.366 0 9.318c0-1.763 1.266-3.223 2.942-3.593.143-.863.698-1.723 1.464-2.383z"/>
 </svg>
 </div>
 <h5 class="fw-semibold mb-2">No connections yet</h5>
@@ -309,7 +307,6 @@

 resultDiv.innerHTML = '<div class="text-info"><span class="spinner-border spinner-border-sm" role="status" aria-hidden="true"></span> Testing connection...</div>';

-// Use AbortController to timeout client-side after 20 seconds
 const controller = new AbortController();
 const timeoutId = setTimeout(() => controller.abort(), 20000);

@@ -396,8 +393,6 @@
 form.action = "{{ url_for('ui.delete_connection', connection_id='CONN_ID') }}".replace('CONN_ID', id);
 });

-// Check connection health for each connection in the table
-// Uses staggered requests to avoid overwhelming the server
 async function checkConnectionHealth(connectionId, statusEl) {
 try {
 const controller = new AbortController();
@@ -434,13 +429,11 @@
 }
 }

-// Stagger health checks to avoid all requests at once
 const connectionRows = document.querySelectorAll('tr[data-connection-id]');
 connectionRows.forEach((row, index) => {
 const connectionId = row.getAttribute('data-connection-id');
 const statusEl = row.querySelector('.connection-status');
 if (statusEl) {
-// Stagger requests by 200ms each
 setTimeout(() => checkConnectionHealth(connectionId, statusEl), index * 200);
 }
 });
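The hunks above drop the inline comments that documented the health-check behaviour. For reference, a sketch of the pattern the remaining context lines imply — the `/ui/connections/<id>/health` URL and the 5-second per-check timeout are assumptions; the `tr[data-connection-id]` rows, the `.connection-status` element, the AbortController, and the 200 ms stagger are all visible in the diff itself:

```js
// Sketch only: the endpoint path and timeout value are assumed.
async function checkConnectionHealth(connectionId, statusEl) {
  const controller = new AbortController();
  const timeoutId = setTimeout(() => controller.abort(), 5000); // assumed timeout
  try {
    const resp = await fetch(`/ui/connections/${connectionId}/health`, {
      signal: controller.signal
    });
    statusEl.textContent = resp.ok ? 'Healthy' : 'Unreachable';
  } catch {
    statusEl.textContent = 'Unreachable'; // network error or aborted by the timeout
  } finally {
    clearTimeout(timeoutId);
  }
}

// Stagger probes by 200 ms per row so a page with many connections
// does not fire every health request at once.
document.querySelectorAll('tr[data-connection-id]').forEach((row, index) => {
  const statusEl = row.querySelector('.connection-status');
  if (statusEl) {
    setTimeout(() => checkConnectionHealth(row.dataset.connectionId, statusEl), index * 200);
  }
});
```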
@@ -14,6 +14,37 @@
 </div>
 </section>
 <div class="row g-4">
+<div class="col-12 d-xl-none">
+<div class="card shadow-sm docs-sidebar-mobile mb-0">
+<div class="card-body py-3">
+<div class="d-flex align-items-center justify-content-between mb-2">
+<h3 class="h6 text-uppercase text-muted mb-0">On this page</h3>
+<button class="btn btn-sm btn-outline-secondary" type="button" data-bs-toggle="collapse" data-bs-target="#mobileDocsToc" aria-expanded="false" aria-controls="mobileDocsToc">
+<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" viewBox="0 0 16 16">
+<path fill-rule="evenodd" d="M1.646 4.646a.5.5 0 0 1 .708 0L8 10.293l5.646-5.647a.5.5 0 0 1 .708.708l-6 6a.5.5 0 0 1-.708 0l-6-6a.5.5 0 0 1 0-.708z"/>
+</svg>
+</button>
+</div>
+<div class="collapse" id="mobileDocsToc">
+<ul class="list-unstyled docs-toc mb-0 small">
+<li><a href="#setup">Set up & run</a></li>
+<li><a href="#background">Running in background</a></li>
+<li><a href="#auth">Authentication & IAM</a></li>
+<li><a href="#console">Console tour</a></li>
+<li><a href="#automation">Automation / CLI</a></li>
+<li><a href="#api">REST endpoints</a></li>
+<li><a href="#examples">API Examples</a></li>
+<li><a href="#replication">Site Replication</a></li>
+<li><a href="#versioning">Object Versioning</a></li>
+<li><a href="#quotas">Bucket Quotas</a></li>
+<li><a href="#encryption">Encryption</a></li>
+<li><a href="#lifecycle">Lifecycle Rules</a></li>
+<li><a href="#troubleshooting">Troubleshooting</a></li>
+</ul>
+</div>
+</div>
+</div>
+</div>
 <div class="col-xl-8">
 <article id="setup" class="card shadow-sm docs-section">
 <div class="card-body">
@@ -526,15 +557,46 @@ curl -X POST "{{ api_base }}/presign/mybucket/upload.bin" \
 </li>
 </ol>

-<div class="alert alert-light border mb-0">
-<div class="d-flex gap-2">
-<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-terminal text-muted mt-1" viewBox="0 0 16 16">
+<div class="alert alert-light border mb-3 overflow-hidden">
+<div class="d-flex flex-column flex-sm-row gap-2 mb-2">
+<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-terminal text-muted mt-1 flex-shrink-0 d-none d-sm-block" viewBox="0 0 16 16">
 <path d="M6 9a.5.5 0 0 1 .5-.5h3a.5.5 0 0 1 0 1h-3A.5.5 0 0 1 6 9zM3.854 4.146a.5.5 0 1 0-.708.708L4.793 6.5 3.146 8.146a.5.5 0 1 0 .708.708l2-2a.5.5 0 0 0 0-.708l-2-2z"/>
 <path d="M2 1a2 2 0 0 0-2 2v10a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V3a2 2 0 0 0-2-2H2zm12 1a1 1 0 0 1 1 1v10a1 1 0 0 1-1 1H2a1 1 0 0 1-1-1V3a1 1 0 0 1 1-1h12z"/>
 </svg>
-<div>
-<strong>Headless Target Setup?</strong>
-<p class="small text-muted mb-0">If your target server has no UI, use the Python API directly to bootstrap credentials. See <code>docs.md</code> in the project root for the <code>setup_target.py</code> script.</p>
+<div class="flex-grow-1 min-width-0">
+<strong>Headless Target Setup</strong>
+<p class="small text-muted mb-2">If your target server has no UI, create a <code>setup_target.py</code> script to bootstrap credentials:</p>
+<pre class="mb-0 overflow-auto" style="max-width: 100%;"><code class="language-python"># setup_target.py
+from pathlib import Path
+from app.iam import IamService
+from app.storage import ObjectStorage
+
+# Initialize services (paths match default config)
+data_dir = Path("data")
+iam = IamService(data_dir / ".myfsio.sys" / "config" / "iam.json")
+storage = ObjectStorage(data_dir)
+
+# 1. Create the bucket
+bucket_name = "backup-bucket"
+try:
+    storage.create_bucket(bucket_name)
+    print(f"Bucket '{bucket_name}' created.")
+except Exception as e:
+    print(f"Bucket creation skipped: {e}")
+
+# 2. Create the user
+try:
+    creds = iam.create_user(
+        display_name="Replication User",
+        policies=[{"bucket": bucket_name, "actions": ["write", "read", "list"]}]
+    )
+    print("\n--- CREDENTIALS GENERATED ---")
+    print(f"Access Key: {creds['access_key']}")
+    print(f"Secret Key: {creds['secret_key']}")
+    print("-----------------------------")
+except Exception as e:
+    print(f"User creation failed: {e}")</code></pre>
+<p class="small text-muted mt-2 mb-0">Save and run: <code>python setup_target.py</code></p>
 </div>
 </div>
 </div>
@@ -545,11 +607,49 @@ curl -X POST "{{ api_base }}/presign/mybucket/upload.bin" \
|
|||||||
<li>Follow the steps above to replicate <strong>A → B</strong>.</li>
|
<li>Follow the steps above to replicate <strong>A → B</strong>.</li>
|
||||||
<li>Repeat the process on Server B to replicate <strong>B → A</strong> (create a connection to A, enable rule).</li>
|
<li>Repeat the process on Server B to replicate <strong>B → A</strong> (create a connection to A, enable rule).</li>
|
||||||
</ol>
|
</ol>
|
||||||
<p class="small text-muted mb-0">
|
<p class="small text-muted mb-3">
|
||||||
<strong>Loop Prevention:</strong> The system automatically detects replication traffic using a custom User-Agent (<code>S3ReplicationAgent</code>). This prevents infinite loops where an object replicated from A to B is immediately replicated back to A.
|
<strong>Loop Prevention:</strong> The system automatically detects replication traffic using a custom User-Agent (<code>S3ReplicationAgent</code>). This prevents infinite loops where an object replicated from A to B is immediately replicated back to A.
|
||||||
<br>
|
<br>
|
||||||
<strong>Deletes:</strong> Deleting an object on one server will propagate the deletion to the other server.
|
<strong>Deletes:</strong> Deleting an object on one server will propagate the deletion to the other server.
|
||||||
</p>
|
</p>
|
||||||
|
|
||||||
|
<h3 class="h6 text-uppercase text-muted mt-4">Error Handling & Rate Limits</h3>
|
||||||
|
<p class="small text-muted mb-3">The replication system handles transient failures automatically:</p>
|
||||||
|
<div class="table-responsive mb-3">
|
||||||
|
<table class="table table-sm table-bordered small">
|
||||||
|
<thead class="table-light">
|
||||||
|
<tr>
|
||||||
|
<th>Behavior</th>
|
||||||
|
<th>Details</th>
|
||||||
|
</tr>
|
||||||
|
</thead>
|
||||||
|
<tbody>
|
||||||
|
<tr>
|
||||||
|
<td><strong>Retry Logic</strong></td>
|
||||||
|
<td>boto3 automatically handles 429 (rate limit) errors using exponential backoff with <code>max_attempts=2</code></td>
|
||||||
|
</tr>
|
||||||
|
<tr>
|
||||||
|
<td><strong>Concurrency</strong></td>
|
||||||
|
<td>Uses a ThreadPoolExecutor with 4 parallel workers for replication tasks</td>
|
||||||
|
</tr>
|
||||||
|
<tr>
|
||||||
|
<td><strong>Timeouts</strong></td>
|
||||||
|
<td>Connect: 5s, Read: 30s. Large files use streaming transfers</td>
|
||||||
|
</tr>
|
||||||
|
</tbody>
|
||||||
|
</table>
|
||||||
|
</div>
|
||||||
|
<div class="alert alert-warning border mb-0">
|
||||||
|
<div class="d-flex gap-2">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-exclamation-triangle text-warning mt-1 flex-shrink-0" viewBox="0 0 16 16">
|
||||||
|
<path d="M7.938 2.016A.13.13 0 0 1 8.002 2a.13.13 0 0 1 .063.016.146.146 0 0 1 .054.057l6.857 11.667c.036.06.035.124.002.183a.163.163 0 0 1-.054.06.116.116 0 0 1-.066.017H1.146a.115.115 0 0 1-.066-.017.163.163 0 0 1-.054-.06.176.176 0 0 1 .002-.183L7.884 2.073a.147.147 0 0 1 .054-.057zm1.044-.45a1.13 1.13 0 0 0-1.96 0L.165 13.233c-.457.778.091 1.767.98 1.767h13.713c.889 0 1.438-.99.98-1.767L8.982 1.566z"/>
|
||||||
|
<path d="M7.002 12a1 1 0 1 1 2 0 1 1 0 0 1-2 0zM7.1 5.995a.905.905 0 1 1 1.8 0l-.35 3.507a.552.552 0 0 1-1.1 0L7.1 5.995z"/>
|
||||||
|
</svg>
|
||||||
|
<div>
|
||||||
|
<strong>Large File Counts:</strong> When replicating buckets with many objects, the target server's rate limits may cause delays. There is no built-in pause mechanism. Consider increasing <code>RATE_LIMIT_DEFAULT</code> on the target server during bulk replication operations.
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
</div>
|
</div>
|
||||||
</article>
|
</article>
|
||||||
<article id="versioning" class="card shadow-sm docs-section">
|
<article id="versioning" class="card shadow-sm docs-section">
|
||||||
@@ -794,10 +894,92 @@ curl -X DELETE "{{ api_base }}/kms/keys/{key-id}?waiting_period_days=30" \
|
|||||||
</p>
|
</p>
|
||||||
</div>
|
</div>
|
||||||
</article>
|
</article>
|
||||||
<article id="troubleshooting" class="card shadow-sm docs-section">
|
<article id="lifecycle" class="card shadow-sm docs-section">
|
||||||
<div class="card-body">
|
<div class="card-body">
|
||||||
<div class="d-flex align-items-center gap-2 mb-3">
|
<div class="d-flex align-items-center gap-2 mb-3">
|
||||||
<span class="docs-section-kicker">12</span>
|
<span class="docs-section-kicker">12</span>
|
||||||
|
<h2 class="h4 mb-0">Lifecycle Rules</h2>
|
||||||
|
</div>
|
||||||
|
<p class="text-muted">Automatically delete expired objects, clean up old versions, and abort incomplete multipart uploads using time-based lifecycle rules.</p>
|
||||||
|
|
||||||
|
<h3 class="h6 text-uppercase text-muted mt-4">How It Works</h3>
|
||||||
|
<p class="small text-muted mb-3">
|
||||||
|
Lifecycle rules run on a background timer (Python <code>threading.Timer</code>), not a system cronjob. The enforcement cycle triggers every <strong>3600 seconds (1 hour)</strong> by default. Each cycle scans all buckets with lifecycle configurations and applies matching rules.
|
||||||
|
</p>
|
||||||
|
|
||||||
|
<h3 class="h6 text-uppercase text-muted mt-4">Expiration Types</h3>
|
||||||
|
<div class="table-responsive mb-3">
|
||||||
|
<table class="table table-sm table-bordered small">
|
||||||
|
<thead class="table-light">
|
||||||
|
<tr>
|
||||||
|
<th>Type</th>
|
||||||
|
<th>Description</th>
|
||||||
|
</tr>
|
||||||
|
</thead>
|
||||||
|
<tbody>
|
||||||
|
<tr>
|
||||||
|
<td><strong>Expiration (Days)</strong></td>
|
||||||
|
<td>Delete current objects older than N days from their last modification</td>
|
||||||
|
</tr>
|
||||||
|
<tr>
|
||||||
|
<td><strong>Expiration (Date)</strong></td>
|
||||||
|
<td>Delete current objects after a specific date (ISO 8601 format)</td>
|
||||||
|
</tr>
|
||||||
|
<tr>
|
||||||
|
<td><strong>NoncurrentVersionExpiration</strong></td>
|
||||||
|
<td>Delete non-current (archived) versions older than N days from when they became non-current</td>
|
||||||
|
</tr>
|
||||||
|
<tr>
|
||||||
|
<td><strong>AbortIncompleteMultipartUpload</strong></td>
|
||||||
|
<td>Abort multipart uploads that have been in progress longer than N days</td>
|
||||||
|
</tr>
|
||||||
|
</tbody>
|
||||||
|
</table>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
<h3 class="h6 text-uppercase text-muted mt-4">API Usage</h3>
|
||||||
|
<pre class="mb-3"><code class="language-bash"># Set lifecycle rule (delete objects older than 30 days)
|
||||||
|
curl -X PUT "{{ api_base }}/<bucket>?lifecycle" \
|
||||||
|
-H "Content-Type: application/json" \
|
||||||
|
-H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
|
||||||
|
-d '[{
|
||||||
|
"ID": "expire-old-objects",
|
||||||
|
"Status": "Enabled",
|
||||||
|
"Prefix": "",
|
||||||
|
"Expiration": {"Days": 30}
|
||||||
|
}]'
|
||||||
|
|
||||||
|
# Abort incomplete multipart uploads after 7 days
|
||||||
|
curl -X PUT "{{ api_base }}/<bucket>?lifecycle" \
|
||||||
|
-H "Content-Type: application/json" \
|
||||||
|
-H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
|
||||||
|
-d '[{
|
||||||
|
"ID": "cleanup-multipart",
|
||||||
|
"Status": "Enabled",
|
||||||
|
"AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7}
|
||||||
|
}]'
|
||||||
|
|
||||||
|
# Get current lifecycle configuration
|
||||||
|
curl "{{ api_base }}/<bucket>?lifecycle" \
|
||||||
|
-H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"</code></pre>
|
||||||
|
|
||||||
|
<div class="alert alert-light border mb-0">
|
||||||
|
<div class="d-flex gap-2">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-info-circle text-muted mt-1 flex-shrink-0" viewBox="0 0 16 16">
|
||||||
|
<path d="M8 15A7 7 0 1 1 8 1a7 7 0 0 1 0 14zm0 1A8 8 0 1 0 8 0a8 8 0 0 0 0 16z"/>
|
||||||
|
<path d="m8.93 6.588-2.29.287-.082.38.45.083c.294.07.352.176.288.469l-.738 3.468c-.194.897.105 1.319.808 1.319.545 0 1.178-.252 1.465-.598l.088-.416c-.2.176-.492.246-.686.246-.275 0-.375-.193-.304-.533L8.93 6.588zM9 4.5a1 1 0 1 1-2 0 1 1 0 0 1 2 0z"/>
|
||||||
|
</svg>
|
||||||
|
<div>
|
||||||
|
<strong>Prefix Filtering:</strong> Use the <code>Prefix</code> field to scope rules to specific paths (e.g., <code>"logs/"</code>). Leave empty to apply to all objects in the bucket.
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
</article>
|
||||||
|
<article id="troubleshooting" class="card shadow-sm docs-section">
|
||||||
|
<div class="card-body">
|
||||||
|
<div class="d-flex align-items-center gap-2 mb-3">
|
||||||
|
<span class="docs-section-kicker">13</span>
|
||||||
<h2 class="h4 mb-0">Troubleshooting & tips</h2>
|
<h2 class="h4 mb-0">Troubleshooting & tips</h2>
|
||||||
</div>
|
</div>
|
||||||
<div class="table-responsive">
|
<div class="table-responsive">
|
||||||
@@ -835,6 +1017,11 @@ curl -X DELETE "{{ api_base }}/kms/keys/{key-id}?waiting_period_days=30" \
|
|||||||
<td>Proxy headers missing or <code>API_BASE_URL</code> incorrect</td>
|
<td>Proxy headers missing or <code>API_BASE_URL</code> incorrect</td>
|
||||||
<td>Ensure your proxy sends <code>X-Forwarded-Host</code>/<code>Proto</code> headers, or explicitly set <code>API_BASE_URL</code> to your public domain.</td>
|
<td>Ensure your proxy sends <code>X-Forwarded-Host</code>/<code>Proto</code> headers, or explicitly set <code>API_BASE_URL</code> to your public domain.</td>
|
||||||
</tr>
|
</tr>
|
||||||
|
<tr>
|
||||||
|
<td>Large folder uploads hitting rate limits (429)</td>
|
||||||
|
<td><code>RATE_LIMIT_DEFAULT</code> exceeded (200/min)</td>
|
||||||
|
<td>Increase rate limit in env config, use Redis backend (<code>RATE_LIMIT_STORAGE_URI=redis://host:port</code>) for distributed setups, or upload in smaller batches.</td>
|
||||||
|
</tr>
|
||||||
</tbody>
|
</tbody>
|
||||||
</table>
|
</table>
|
||||||
</div>
|
</div>
|
||||||
@@ -857,6 +1044,7 @@ curl -X DELETE "{{ api_base }}/kms/keys/{key-id}?waiting_period_days=30" \
|
|||||||
<li><a href="#versioning">Object Versioning</a></li>
|
<li><a href="#versioning">Object Versioning</a></li>
|
||||||
<li><a href="#quotas">Bucket Quotas</a></li>
|
<li><a href="#quotas">Bucket Quotas</a></li>
|
||||||
<li><a href="#encryption">Encryption</a></li>
|
<li><a href="#encryption">Encryption</a></li>
|
||||||
|
<li><a href="#lifecycle">Lifecycle Rules</a></li>
|
||||||
<li><a href="#troubleshooting">Troubleshooting</a></li>
|
<li><a href="#troubleshooting">Troubleshooting</a></li>
|
||||||
</ul>
|
</ul>
|
||||||
<div class="docs-sidebar-callouts">
|
<div class="docs-sidebar-callouts">
|
||||||
|
|||||||
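> Aside: the lifecycle endpoint documented above can be driven from Python as well as curl. A minimal sketch, assuming the third-party `requests` package, an API served at `http://localhost:5000` (the default dev address), and placeholder credentials; the URL shape, headers, and rule JSON mirror the curl examples in the diff:

```python
import requests

API_BASE = "http://localhost:5000"  # assumption: default dev address
BUCKET = "mybucket"                 # placeholder bucket name
HEADERS = {
    "X-Access-Key": "AKIA-PLACEHOLDER",  # placeholder credentials
    "X-Secret-Key": "SECRET-PLACEHOLDER",
}

# Same two rules as the docs: 30-day expiration plus multipart cleanup.
rules = [
    {"ID": "expire-old-objects", "Status": "Enabled", "Prefix": "",
     "Expiration": {"Days": 30}},
    {"ID": "cleanup-multipart", "Status": "Enabled",
     "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7}},
]

# Set the configuration (requests sends Content-Type: application/json for us).
resp = requests.put(f"{API_BASE}/{BUCKET}?lifecycle", json=rules, headers=HEADERS)
resp.raise_for_status()

# Read the configuration back.
print(requests.get(f"{API_BASE}/{BUCKET}?lifecycle", headers=HEADERS).text)
```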
@@ -10,6 +10,7 @@
</svg>
IAM Configuration
</h1>
+<p class="text-muted mb-0 mt-1">Create and manage users with fine-grained bucket permissions.</p>
</div>
<div class="d-flex gap-2">
{% if not iam_locked %}

@@ -109,35 +110,68 @@
{% else %}
<div class="card-body px-4 pb-4">
{% if users %}
-<div class="table-responsive">
-<table class="table table-hover align-middle mb-0">
-<thead class="table-light">
-<tr>
-<th scope="col">User</th>
-<th scope="col">Policies</th>
-<th scope="col" class="text-end">Actions</th>
-</tr>
-</thead>
-<tbody>
+<div class="row g-3">
{% for user in users %}
-<tr>
-<td>
+<div class="col-md-6 col-xl-4">
+<div class="card h-100 iam-user-card">
+<div class="card-body">
+<div class="d-flex align-items-start justify-content-between mb-3">
<div class="d-flex align-items-center gap-3">
-<div class="user-avatar">
+<div class="user-avatar user-avatar-lg">
-<svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" fill="currentColor" viewBox="0 0 16 16">
+<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" fill="currentColor" viewBox="0 0 16 16">
<path d="M8 8a3 3 0 1 0 0-6 3 3 0 0 0 0 6zm2-3a2 2 0 1 1-4 0 2 2 0 0 1 4 0zm4 8c0 1-1 1-1 1H3s-1 0-1-1 1-4 6-4 6 3 6 4zm-1-.004c-.001-.246-.154-.986-.832-1.664C11.516 10.68 10.289 10 8 10c-2.29 0-3.516.68-4.168 1.332-.678.678-.83 1.418-.832 1.664h10z"/>
</svg>
</div>
-<div>
+<div class="min-width-0">
-<div class="fw-medium">{{ user.display_name }}</div>
+<h6 class="fw-semibold mb-0 text-truncate" title="{{ user.display_name }}">{{ user.display_name }}</h6>
-<code class="small text-muted">{{ user.access_key }}</code>
+<code class="small text-muted d-block text-truncate" title="{{ user.access_key }}">{{ user.access_key }}</code>
</div>
</div>
-</td>
-<td>
+<div class="dropdown">
+<button class="btn btn-sm btn-icon" type="button" data-bs-toggle="dropdown" aria-expanded="false">
+<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" viewBox="0 0 16 16">
+<path d="M9.5 13a1.5 1.5 0 1 1-3 0 1.5 1.5 0 0 1 3 0zm0-5a1.5 1.5 0 1 1-3 0 1.5 1.5 0 0 1 3 0zm0-5a1.5 1.5 0 1 1-3 0 1.5 1.5 0 0 1 3 0z"/>
+</svg>
+</button>
+<ul class="dropdown-menu dropdown-menu-end">
+<li>
+<button class="dropdown-item" type="button" data-edit-user="{{ user.access_key }}" data-display-name="{{ user.display_name }}">
+<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-2" viewBox="0 0 16 16">
+<path d="M12.146.146a.5.5 0 0 1 .708 0l3 3a.5.5 0 0 1 0 .708l-10 10a.5.5 0 0 1-.168.11l-5 2a.5.5 0 0 1-.65-.65l2-5a.5.5 0 0 1 .11-.168l10-10zM11.207 2.5 13.5 4.793 14.793 3.5 12.5 1.207 11.207 2.5zm1.586 3L10.5 3.207 4 9.707V10h.5a.5.5 0 0 1 .5.5v.5h.5a.5.5 0 0 1 .5.5v.5h.293l6.5-6.5z"/>
+</svg>
+Edit Name
+</button>
+</li>
+<li>
+<button class="dropdown-item" type="button" data-rotate-user="{{ user.access_key }}">
+<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-2" viewBox="0 0 16 16">
+<path d="M11.534 7h3.932a.25.25 0 0 1 .192.41l-1.966 2.36a.25.25 0 0 1-.384 0l-1.966-2.36a.25.25 0 0 1 .192-.41zm-11 2h3.932a.25.25 0 0 0 .192-.41L2.692 6.23a.25.25 0 0 0-.384 0L.342 8.59A.25.25 0 0 0 .534 9z"/>
+<path fill-rule="evenodd" d="M8 3c-1.552 0-2.94.707-3.857 1.818a.5.5 0 1 1-.771-.636A6.002 6.002 0 0 1 13.917 7H12.9A5.002 5.002 0 0 0 8 3zM3.1 9a5.002 5.002 0 0 0 8.757 2.182.5.5 0 1 1 .771.636A6.002 6.002 0 0 1 2.083 9H3.1z"/>
+</svg>
+Rotate Secret
+</button>
+</li>
+<li><hr class="dropdown-divider"></li>
+<li>
+<button class="dropdown-item text-danger" type="button" data-delete-user="{{ user.access_key }}">
+<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-2" viewBox="0 0 16 16">
+<path d="M5.5 5.5a.5.5 0 0 1 .5.5v6a.5.5 0 0 1-1 0v-6a.5.5 0 0 1 .5-.5zm2.5 0a.5.5 0 0 1 .5.5v6a.5.5 0 0 1-1 0v-6a.5.5 0 0 1 .5-.5zm3 .5v6a.5.5 0 0 1-1 0v-6a.5.5 0 0 1 1 0z"/>
+<path fill-rule="evenodd" d="M14.5 3a1 1 0 0 1-1 1H13v9a2 2 0 0 1-2 2H5a2 2 0 0 1-2-2V4h-.5a1 1 0 0 1-1-1V2a1 1 0 0 1 1-1H6a1 1 0 0 1 1-1h2a1 1 0 0 1 1 1h3.5a1 1 0 0 1 1 1v1zM4.118 4 4 4.059V13a1 1 0 0 0 1 1h6a1 1 0 0 0 1-1V4.059L11.882 4H4.118zM2.5 3V2h11v1h-11z"/>
+</svg>
+Delete User
+</button>
+</li>
+</ul>
+</div>
+</div>
+<div class="mb-3">
+<div class="small text-muted mb-2">Bucket Permissions</div>
<div class="d-flex flex-wrap gap-1">
{% for policy in user.policies %}
<span class="badge bg-primary bg-opacity-10 text-primary">
+<svg xmlns="http://www.w3.org/2000/svg" width="10" height="10" fill="currentColor" class="me-1" viewBox="0 0 16 16">
+<path d="M2.522 5H2a.5.5 0 0 0-.494.574l1.372 9.149A1.5 1.5 0 0 0 4.36 16h7.278a1.5 1.5 0 0 0 1.483-1.277l1.373-9.149A.5.5 0 0 0 14 5h-.522A5.5 5.5 0 0 0 2.522 5zm1.005 0a4.5 4.5 0 0 1 8.945 0H3.527z"/>
+</svg>
{{ policy.bucket }}
{% if '*' in policy.actions %}
<span class="opacity-75">(full)</span>

@@ -149,38 +183,18 @@
<span class="badge bg-secondary bg-opacity-10 text-secondary">No policies</span>
{% endfor %}
</div>
-</td>
+</div>
-<td class="text-end">
+<button class="btn btn-outline-primary btn-sm w-100" type="button" data-policy-editor data-access-key="{{ user.access_key }}">
-<div class="btn-group btn-group-sm" role="group">
+<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-1" viewBox="0 0 16 16">
-<button class="btn btn-outline-primary" type="button" data-rotate-user="{{ user.access_key }}" title="Rotate Secret">
-<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" viewBox="0 0 16 16">
-<path d="M11.534 7h3.932a.25.25 0 0 1 .192.41l-1.966 2.36a.25.25 0 0 1-.384 0l-1.966-2.36a.25.25 0 0 1 .192-.41zm-11 2h3.932a.25.25 0 0 0 .192-.41L2.692 6.23a.25.25 0 0 0-.384 0L.342 8.59A.25.25 0 0 0 .534 9z"/>
-<path fill-rule="evenodd" d="M8 3c-1.552 0-2.94.707-3.857 1.818a.5.5 0 1 1-.771-.636A6.002 6.002 0 0 1 13.917 7H12.9A5.002 5.002 0 0 0 8 3zM3.1 9a5.002 5.002 0 0 0 8.757 2.182.5.5 0 1 1 .771.636A6.002 6.002 0 0 1 2.083 9H3.1z"/>
-</svg>
-</button>
-<button class="btn btn-outline-secondary" type="button" data-edit-user="{{ user.access_key }}" data-display-name="{{ user.display_name }}" title="Edit User">
-<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" viewBox="0 0 16 16">
-<path d="M12.146.146a.5.5 0 0 1 .708 0l3 3a.5.5 0 0 1 0 .708l-10 10a.5.5 0 0 1-.168.11l-5 2a.5.5 0 0 1-.65-.65l2-5a.5.5 0 0 1 .11-.168l10-10zM11.207 2.5 13.5 4.793 14.793 3.5 12.5 1.207 11.207 2.5zm1.586 3L10.5 3.207 4 9.707V10h.5a.5.5 0 0 1 .5.5v.5h.5a.5.5 0 0 1 .5.5v.5h.293l6.5-6.5z"/>
-</svg>
-</button>
-<button class="btn btn-outline-secondary" type="button" data-policy-editor data-access-key="{{ user.access_key }}" title="Edit Policies">
-<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" viewBox="0 0 16 16">
<path d="M8 4.754a3.246 3.246 0 1 0 0 6.492 3.246 3.246 0 0 0 0-6.492zM5.754 8a2.246 2.246 0 1 1 4.492 0 2.246 2.246 0 0 1-4.492 0z"/>
<path d="M9.796 1.343c-.527-1.79-3.065-1.79-3.592 0l-.094.319a.873.873 0 0 1-1.255.52l-.292-.16c-1.64-.892-3.433.902-2.54 2.541l.159.292a.873.873 0 0 1-.52 1.255l-.319.094c-1.79.527-1.79 3.065 0 3.592l.319.094a.873.873 0 0 1 .52 1.255l-.16.292c-.892 1.64.901 3.434 2.541 2.54l.292-.159a.873.873 0 0 1 1.255.52l.094.319c.527 1.79 3.065 1.79 3.592 0l.094-.319a.873.873 0 0 1 1.255-.52l.292.16c1.64.893 3.434-.902 2.54-2.541l-.159-.292a.873.873 0 0 1 .52-1.255l.319-.094c1.79-.527 1.79-3.065 0-3.592l-.319-.094a.873.873 0 0 1-.52-1.255l.16-.292c.893-1.64-.902-3.433-2.541-2.54l-.292.159a.873.873 0 0 1-1.255-.52l-.094-.319z"/>
</svg>
-</button>
+Manage Policies
-<button class="btn btn-outline-danger" type="button" data-delete-user="{{ user.access_key }}" title="Delete User">
-<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" viewBox="0 0 16 16">
-<path d="M5.5 5.5a.5.5 0 0 1 .5.5v6a.5.5 0 0 1-1 0v-6a.5.5 0 0 1 .5-.5zm2.5 0a.5.5 0 0 1 .5.5v6a.5.5 0 0 1-1 0v-6a.5.5 0 0 1 .5-.5zm3 .5v6a.5.5 0 0 1-1 0v-6a.5.5 0 0 1 1 0z"/>
-<path fill-rule="evenodd" d="M14.5 3a1 1 0 0 1-1 1H13v9a2 2 0 0 1-2 2H5a2 2 0 0 1-2-2V4h-.5a1 1 0 0 1-1-1V2a1 1 0 0 1 1-1H6a1 1 0 0 1 1-1h2a1 1 0 0 1 1 1h3.5a1 1 0 0 1 1 1v1zM4.118 4 4 4.059V13a1 1 0 0 0 1 1h6a1 1 0 0 0 1-1V4.059L11.882 4H4.118zM2.5 3V2h11v1h-11z"/>
-</svg>
</button>
</div>
-</td>
+</div>
-</tr>
+</div>
{% endfor %}
-</tbody>
-</table>
</div>
{% else %}
<div class="empty-state text-center py-5">

@@ -341,8 +355,8 @@
<div class="modal-header border-0 pb-0">
<h1 class="modal-title fs-5 fw-semibold">
<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" class="text-danger" viewBox="0 0 16 16">
-<path d="M1 14s-1 0-1-1 1-4 6-4 6 3 6 4-1 1-1 1H1zm5-6a3 3 0 1 0 0-6 3 3 0 0 0 0 6z"/>
-<path fill-rule="evenodd" d="M11 1.5v1h5v1h-1v9a2 2 0 0 1-2 2H3a2 2 0 0 1-2-2v-9H0v-1h5v-1a1 1 0 0 1 1-1h4a1 1 0 0 1 1 1zM4.118 4 4 4.059V13a1 1 0 0 0 1 1h6a1 1 0 0 0 1-1V4.059L11.882 4H4.118z"/>
+<path d="M11 5a3 3 0 1 1-6 0 3 3 0 0 1 6 0M8 7a2 2 0 1 0 0-4 2 2 0 0 0 0 4m.256 7a4.5 4.5 0 0 1-.229-1.004H3c.001-.246.154-.986.832-1.664C4.484 10.68 5.711 10 8 10q.39 0 .74.025c.226-.341.496-.65.804-.918Q9.077 9.014 8 9c-5 0-6 3-6 4s1 1 1 1h5.256Z"/>
+<path d="M12.5 16a3.5 3.5 0 1 0 0-7 3.5 3.5 0 0 0 0 7m-.646-4.854.646.647.646-.647a.5.5 0 0 1 .708.708l-.647.646.647.646a.5.5 0 0 1-.708.708l-.646-.647-.646.647a.5.5 0 0 1-.708-.708l.647-.646-.647-.646a.5.5 0 0 1 .708-.708"/>
</svg>
Delete User
</h1>

@@ -442,6 +456,80 @@
{{ super() }}
<script>
(function () {
+  function setupJsonAutoIndent(textarea) {
+    if (!textarea) return;
+
+    textarea.addEventListener('keydown', function(e) {
+      if (e.key === 'Enter') {
+        e.preventDefault();
+
+        const start = this.selectionStart;
+        const end = this.selectionEnd;
+        const value = this.value;
+
+        const lineStart = value.lastIndexOf('\n', start - 1) + 1;
+        const currentLine = value.substring(lineStart, start);
+
+        const indentMatch = currentLine.match(/^(\s*)/);
+        let indent = indentMatch ? indentMatch[1] : '';
+
+        const trimmedLine = currentLine.trim();
+        const lastChar = trimmedLine.slice(-1);
+
+        const charBeforeCursor = value.substring(start - 1, start).trim();
+
+        let newIndent = indent;
+        let insertAfter = '';
+
+        if (lastChar === '{' || lastChar === '[') {
+          newIndent = indent + '  ';
+
+          const charAfterCursor = value.substring(start, start + 1).trim();
+          if ((lastChar === '{' && charAfterCursor === '}') ||
+              (lastChar === '[' && charAfterCursor === ']')) {
+            insertAfter = '\n' + indent;
+          }
+        } else if (lastChar === ',' || lastChar === ':') {
+          newIndent = indent;
+        }
+
+        const insertion = '\n' + newIndent + insertAfter;
+        const newValue = value.substring(0, start) + insertion + value.substring(end);
+
+        this.value = newValue;
+
+        const newCursorPos = start + 1 + newIndent.length;
+        this.selectionStart = this.selectionEnd = newCursorPos;
+
+        this.dispatchEvent(new Event('input', { bubbles: true }));
+      }
+
+      if (e.key === 'Tab') {
+        e.preventDefault();
+        const start = this.selectionStart;
+        const end = this.selectionEnd;
+
+        if (e.shiftKey) {
+          const lineStart = this.value.lastIndexOf('\n', start - 1) + 1;
+          const lineContent = this.value.substring(lineStart, start);
+          if (lineContent.startsWith('  ')) {
+            this.value = this.value.substring(0, lineStart) +
+              this.value.substring(lineStart + 2);
+            this.selectionStart = this.selectionEnd = Math.max(lineStart, start - 2);
+          }
+        } else {
+          this.value = this.value.substring(0, start) + '  ' + this.value.substring(end);
+          this.selectionStart = this.selectionEnd = start + 2;
+        }
+
+        this.dispatchEvent(new Event('input', { bubbles: true }));
+      }
+    });
+  }
+
+  setupJsonAutoIndent(document.getElementById('policyEditorDocument'));
+  setupJsonAutoIndent(document.getElementById('createUserPolicies'));
+
const currentUserKey = {{ principal.access_key | tojson }};
const configCopyButtons = document.querySelectorAll('.config-copy');
configCopyButtons.forEach((button) => {
@@ -35,7 +35,7 @@
<div class="card shadow-lg login-card position-relative">
<div class="card-body p-4 p-md-5">
<div class="text-center mb-4 d-lg-none">
-<img src="{{ url_for('static', filename='images/MyFISO.png') }}" alt="MyFSIO" width="48" height="48" class="mb-3 rounded-3">
+<img src="{{ url_for('static', filename='images/MyFSIO.png') }}" alt="MyFSIO" width="48" height="48" class="mb-3 rounded-3">
<h2 class="h4 fw-bold">MyFSIO</h2>
</div>
<h2 class="h4 mb-1 d-none d-lg-block">Sign in</h2>
@@ -219,24 +219,42 @@
</div>

<div class="col-lg-4">
-<div class="card shadow-sm border-0 h-100 overflow-hidden" style="background: linear-gradient(135deg, #3b82f6 0%, #8b5cf6 100%);">
+{% set has_issues = (cpu_percent > 80) or (memory.percent > 85) or (disk.percent > 90) %}
+<div class="card shadow-sm border-0 h-100 overflow-hidden" style="background: linear-gradient(135deg, {% if has_issues %}#ef4444 0%, #f97316{% else %}#3b82f6 0%, #8b5cf6{% endif %} 100%);">
<div class="card-body p-4 d-flex flex-column justify-content-center text-white position-relative">
<div class="position-absolute top-0 end-0 opacity-25" style="transform: translate(20%, -20%);">
-<svg xmlns="http://www.w3.org/2000/svg" width="160" height="160" fill="currentColor" class="bi bi-cloud-check" viewBox="0 0 16 16">
+<svg xmlns="http://www.w3.org/2000/svg" width="160" height="160" fill="currentColor" class="bi bi-{% if has_issues %}exclamation-triangle{% else %}cloud-check{% endif %}" viewBox="0 0 16 16">
+{% if has_issues %}
+<path d="M7.938 2.016A.13.13 0 0 1 8.002 2a.13.13 0 0 1 .063.016.146.146 0 0 1 .054.057l6.857 11.667c.036.06.035.124.002.183a.163.163 0 0 1-.054.06.116.116 0 0 1-.066.017H1.146a.115.115 0 0 1-.066-.017.163.163 0 0 1-.054-.06.176.176 0 0 1 .002-.183L7.884 2.073a.147.147 0 0 1 .054-.057zm1.044-.45a1.13 1.13 0 0 0-1.96 0L.165 13.233c-.457.778.091 1.767.98 1.767h13.713c.889 0 1.438-.99.98-1.767L8.982 1.566z"/>
+<path d="M7.002 12a1 1 0 1 1 2 0 1 1 0 0 1-2 0zM7.1 5.995a.905.905 0 1 1 1.8 0l-.35 3.507a.552.552 0 0 1-1.1 0L7.1 5.995z"/>
+{% else %}
<path fill-rule="evenodd" d="M10.354 6.146a.5.5 0 0 1 0 .708l-3 3a.5.5 0 0 1-.708 0l-1.5-1.5a.5.5 0 1 1 .708-.708L7 8.793l2.646-2.647a.5.5 0 0 1 .708 0z"/>
<path d="M4.406 3.342A5.53 5.53 0 0 1 8 2c2.69 0 4.923 2 5.166 4.579C14.758 6.804 16 8.137 16 9.773 16 11.569 14.502 13 12.687 13H3.781C1.708 13 0 11.366 0 9.318c0-1.763 1.266-3.223 2.942-3.593.143-.863.698-1.723 1.464-2.383z"/>
+{% endif %}
</svg>
</div>
<div class="mb-3">
-<span class="badge bg-white text-primary fw-semibold px-3 py-2">
+<span class="badge bg-white {% if has_issues %}text-danger{% else %}text-primary{% endif %} fw-semibold px-3 py-2">
-<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="bi bi-check-circle-fill me-1" viewBox="0 0 16 16">
+<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="bi bi-{% if has_issues %}exclamation-circle-fill{% else %}check-circle-fill{% endif %} me-1" viewBox="0 0 16 16">
+{% if has_issues %}
+<path d="M16 8A8 8 0 1 1 0 8a8 8 0 0 1 16 0zM8 4a.905.905 0 0 0-.9.995l.35 3.507a.552.552 0 0 0 1.1 0l.35-3.507A.905.905 0 0 0 8 4zm.002 6a1 1 0 1 0 0 2 1 1 0 0 0 0-2z"/>
+{% else %}
<path d="M16 8A8 8 0 1 1 0 8a8 8 0 0 1 16 0zm-3.97-3.03a.75.75 0 0 0-1.08.022L7.477 9.417 5.384 7.323a.75.75 0 0 0-1.06 1.06L6.97 11.03a.75.75 0 0 0 1.079-.02l3.992-4.99a.75.75 0 0 0-.01-1.05z"/>
+{% endif %}
</svg>
v{{ app.version }}
</span>
</div>
-<h4 class="card-title fw-bold mb-3">System Status</h4>
+<h4 class="card-title fw-bold mb-3">System Health</h4>
-<p class="card-text opacity-90 mb-4">All systems operational. Your storage infrastructure is running smoothly with no detected issues.</p>
+{% if has_issues %}
+<ul class="list-unstyled small mb-4 opacity-90">
+{% if cpu_percent > 80 %}<li class="mb-1">CPU usage is high ({{ cpu_percent }}%)</li>{% endif %}
+{% if memory.percent > 85 %}<li class="mb-1">Memory usage is high ({{ memory.percent }}%)</li>{% endif %}
+{% if disk.percent > 90 %}<li class="mb-1">Disk space is critically low ({{ disk.percent }}% used)</li>{% endif %}
+</ul>
+{% else %}
+<p class="card-text opacity-90 mb-4 small">All resources are within normal operating parameters.</p>
+{% endif %}
<div class="d-flex gap-4">
<div>
<div class="h3 fw-bold mb-0">{{ app.uptime_days }}d</div>
tests/test_access_logging.py (new file, 339 lines)
@@ -0,0 +1,339 @@
import io
import json
import time
from datetime import datetime, timezone
from pathlib import Path
from unittest.mock import MagicMock, patch

import pytest

from app.access_logging import (
    AccessLogEntry,
    AccessLoggingService,
    LoggingConfiguration,
)
from app.storage import ObjectStorage


class TestAccessLogEntry:
    def test_default_values(self):
        entry = AccessLogEntry()
        assert entry.bucket_owner == "-"
        assert entry.bucket == "-"
        assert entry.remote_ip == "-"
        assert entry.requester == "-"
        assert entry.operation == "-"
        assert entry.http_status == 200
        assert len(entry.request_id) == 16

    def test_to_log_line(self):
        entry = AccessLogEntry(
            bucket_owner="owner123",
            bucket="my-bucket",
            remote_ip="192.168.1.1",
            requester="user456",
            request_id="REQ123456789012",
            operation="REST.PUT.OBJECT",
            key="test/key.txt",
            request_uri="PUT /my-bucket/test/key.txt HTTP/1.1",
            http_status=200,
            bytes_sent=1024,
            object_size=2048,
            total_time_ms=150,
            referrer="http://example.com",
            user_agent="aws-cli/2.0",
            version_id="v1",
        )
        log_line = entry.to_log_line()

        assert "owner123" in log_line
        assert "my-bucket" in log_line
        assert "192.168.1.1" in log_line
        assert "user456" in log_line
        assert "REST.PUT.OBJECT" in log_line
        assert "test/key.txt" in log_line
        assert "200" in log_line

    def test_to_dict(self):
        entry = AccessLogEntry(
            bucket_owner="owner",
            bucket="bucket",
            remote_ip="10.0.0.1",
            requester="admin",
            request_id="ABC123",
            operation="REST.GET.OBJECT",
            key="file.txt",
            request_uri="GET /bucket/file.txt HTTP/1.1",
            http_status=200,
            bytes_sent=512,
            object_size=512,
            total_time_ms=50,
        )
        result = entry.to_dict()

        assert result["bucket_owner"] == "owner"
        assert result["bucket"] == "bucket"
        assert result["remote_ip"] == "10.0.0.1"
        assert result["requester"] == "admin"
        assert result["operation"] == "REST.GET.OBJECT"
        assert result["key"] == "file.txt"
        assert result["http_status"] == 200
        assert result["bytes_sent"] == 512


class TestLoggingConfiguration:
    def test_default_values(self):
        config = LoggingConfiguration(target_bucket="log-bucket")
        assert config.target_bucket == "log-bucket"
        assert config.target_prefix == ""
        assert config.enabled is True

    def test_to_dict(self):
        config = LoggingConfiguration(
            target_bucket="logs",
            target_prefix="access-logs/",
            enabled=True,
        )
        result = config.to_dict()

        assert "LoggingEnabled" in result
        assert result["LoggingEnabled"]["TargetBucket"] == "logs"
        assert result["LoggingEnabled"]["TargetPrefix"] == "access-logs/"

    def test_from_dict(self):
        data = {
            "LoggingEnabled": {
                "TargetBucket": "my-logs",
                "TargetPrefix": "bucket-logs/",
            }
        }
        config = LoggingConfiguration.from_dict(data)

        assert config is not None
        assert config.target_bucket == "my-logs"
        assert config.target_prefix == "bucket-logs/"
        assert config.enabled is True

    def test_from_dict_no_logging(self):
        data = {}
        config = LoggingConfiguration.from_dict(data)
        assert config is None


@pytest.fixture
def storage(tmp_path: Path):
    storage_root = tmp_path / "data"
    storage_root.mkdir(parents=True)
    return ObjectStorage(storage_root)


@pytest.fixture
def logging_service(tmp_path: Path, storage):
    service = AccessLoggingService(
        tmp_path,
        flush_interval=3600,
        max_buffer_size=10,
    )
    service.set_storage(storage)
    yield service
    service.shutdown()


class TestAccessLoggingService:
    def test_get_bucket_logging_not_configured(self, logging_service):
        result = logging_service.get_bucket_logging("unconfigured-bucket")
        assert result is None

    def test_set_and_get_bucket_logging(self, logging_service):
        config = LoggingConfiguration(
            target_bucket="log-bucket",
            target_prefix="logs/",
        )
        logging_service.set_bucket_logging("source-bucket", config)

        retrieved = logging_service.get_bucket_logging("source-bucket")
        assert retrieved is not None
        assert retrieved.target_bucket == "log-bucket"
        assert retrieved.target_prefix == "logs/"

    def test_delete_bucket_logging(self, logging_service):
        config = LoggingConfiguration(target_bucket="logs")
        logging_service.set_bucket_logging("to-delete", config)
        assert logging_service.get_bucket_logging("to-delete") is not None

        logging_service.delete_bucket_logging("to-delete")
        logging_service._configs.clear()
        assert logging_service.get_bucket_logging("to-delete") is None

    def test_log_request_no_config(self, logging_service):
        logging_service.log_request(
            "no-config-bucket",
            operation="REST.GET.OBJECT",
            key="test.txt",
        )
        stats = logging_service.get_stats()
        assert stats["buffered_entries"] == 0

    def test_log_request_with_config(self, logging_service, storage):
        storage.create_bucket("log-target")

        config = LoggingConfiguration(
            target_bucket="log-target",
            target_prefix="access/",
        )
        logging_service.set_bucket_logging("source-bucket", config)

        logging_service.log_request(
            "source-bucket",
            operation="REST.PUT.OBJECT",
            key="uploaded.txt",
            remote_ip="192.168.1.100",
            requester="test-user",
            http_status=200,
            bytes_sent=1024,
        )

        stats = logging_service.get_stats()
        assert stats["buffered_entries"] == 1

    def test_log_request_disabled_config(self, logging_service):
        config = LoggingConfiguration(
            target_bucket="logs",
            enabled=False,
        )
        logging_service.set_bucket_logging("disabled-bucket", config)

        logging_service.log_request(
            "disabled-bucket",
            operation="REST.GET.OBJECT",
            key="test.txt",
        )

        stats = logging_service.get_stats()
        assert stats["buffered_entries"] == 0

    def test_flush_buffer(self, logging_service, storage):
        storage.create_bucket("flush-target")

        config = LoggingConfiguration(
            target_bucket="flush-target",
            target_prefix="logs/",
        )
        logging_service.set_bucket_logging("flush-source", config)

        for i in range(3):
            logging_service.log_request(
                "flush-source",
                operation="REST.GET.OBJECT",
                key=f"file{i}.txt",
            )

        logging_service.flush()

        objects = storage.list_objects_all("flush-target")
        assert len(objects) >= 1

    def test_auto_flush_on_buffer_size(self, logging_service, storage):
        storage.create_bucket("auto-flush-target")

        config = LoggingConfiguration(
            target_bucket="auto-flush-target",
            target_prefix="",
        )
        logging_service.set_bucket_logging("auto-source", config)

        for i in range(15):
            logging_service.log_request(
                "auto-source",
                operation="REST.GET.OBJECT",
                key=f"file{i}.txt",
            )

        objects = storage.list_objects_all("auto-flush-target")
        assert len(objects) >= 1

    def test_get_stats(self, logging_service, storage):
        storage.create_bucket("stats-target")
        config = LoggingConfiguration(target_bucket="stats-target")
        logging_service.set_bucket_logging("stats-bucket", config)

        logging_service.log_request(
            "stats-bucket",
            operation="REST.GET.OBJECT",
            key="test.txt",
        )

        stats = logging_service.get_stats()
        assert "buffered_entries" in stats
        assert "target_buckets" in stats
        assert stats["buffered_entries"] >= 1

    def test_shutdown_flushes_buffer(self, tmp_path, storage):
        storage.create_bucket("shutdown-target")

        service = AccessLoggingService(tmp_path, flush_interval=3600, max_buffer_size=100)
        service.set_storage(storage)

        config = LoggingConfiguration(target_bucket="shutdown-target")
        service.set_bucket_logging("shutdown-source", config)

        service.log_request(
            "shutdown-source",
            operation="REST.PUT.OBJECT",
            key="final.txt",
        )

        service.shutdown()

        objects = storage.list_objects_all("shutdown-target")
        assert len(objects) >= 1

    def test_logging_caching(self, logging_service):
        config = LoggingConfiguration(target_bucket="cached-logs")
        logging_service.set_bucket_logging("cached-bucket", config)

        logging_service.get_bucket_logging("cached-bucket")
        assert "cached-bucket" in logging_service._configs

    def test_log_request_all_fields(self, logging_service, storage):
        storage.create_bucket("detailed-target")

        config = LoggingConfiguration(target_bucket="detailed-target", target_prefix="detailed/")
        logging_service.set_bucket_logging("detailed-source", config)

        logging_service.log_request(
            "detailed-source",
            operation="REST.PUT.OBJECT",
            key="detailed/file.txt",
            remote_ip="10.0.0.1",
            requester="admin-user",
            request_uri="PUT /detailed-source/detailed/file.txt HTTP/1.1",
            http_status=201,
            error_code="",
            bytes_sent=2048,
            object_size=2048,
            total_time_ms=100,
            referrer="http://admin.example.com",
            user_agent="curl/7.68.0",
            version_id="v1.0",
            request_id="CUSTOM_REQ_ID",
        )

        stats = logging_service.get_stats()
        assert stats["buffered_entries"] == 1

    def test_failed_flush_returns_to_buffer(self, logging_service):
        config = LoggingConfiguration(target_bucket="nonexistent-target")
        logging_service.set_bucket_logging("fail-source", config)

        logging_service.log_request(
            "fail-source",
            operation="REST.GET.OBJECT",
            key="test.txt",
        )

        initial_count = logging_service.get_stats()["buffered_entries"]
        logging_service.flush()

        final_count = logging_service.get_stats()["buffered_entries"]
        assert final_count >= initial_count
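> Aside: outside of pytest, the same calls exercised above compose into a working log pipeline. A minimal sketch using only methods that appear in these tests; the temp directory and flush settings are illustrative, not defaults:

```python
import tempfile
from pathlib import Path

from app.access_logging import AccessLoggingService, LoggingConfiguration
from app.storage import ObjectStorage

# Fresh scratch directory so bucket creation cannot collide.
data_dir = Path(tempfile.mkdtemp())
storage = ObjectStorage(data_dir)

service = AccessLoggingService(data_dir, flush_interval=60, max_buffer_size=100)
service.set_storage(storage)

# Route logs for "my-bucket" into "log-bucket" under the "access/" prefix.
storage.create_bucket("log-bucket")
service.set_bucket_logging(
    "my-bucket",
    LoggingConfiguration(target_bucket="log-bucket", target_prefix="access/"),
)

# Record one request, then force the buffered entry out to the target bucket.
service.log_request("my-bucket", operation="REST.GET.OBJECT", key="hello.txt", http_status=200)
service.flush()
print(storage.list_objects_all("log-bucket"))

service.shutdown()  # also flushes any remaining entries, per the tests
```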
284
tests/test_acl.py
Normal file
284
tests/test_acl.py
Normal file
@@ -0,0 +1,284 @@
|
|||||||
|
import json
|
||||||
|
from pathlib import Path
|
||||||
|
|
||||||
|
import pytest
|
||||||
|
|
||||||
|
from app.acl import (
|
||||||
|
Acl,
|
||||||
|
AclGrant,
|
||||||
|
AclService,
|
||||||
|
ACL_PERMISSION_FULL_CONTROL,
|
||||||
|
ACL_PERMISSION_READ,
|
||||||
|
ACL_PERMISSION_WRITE,
|
||||||
|
ACL_PERMISSION_READ_ACP,
|
||||||
|
ACL_PERMISSION_WRITE_ACP,
|
||||||
|
GRANTEE_ALL_USERS,
|
||||||
|
GRANTEE_AUTHENTICATED_USERS,
|
||||||
|
PERMISSION_TO_ACTIONS,
|
||||||
|
create_canned_acl,
|
||||||
|
CANNED_ACLS,
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
class TestAclGrant:
|
||||||
|
def test_to_dict(self):
|
||||||
|
grant = AclGrant(grantee="user123", permission=ACL_PERMISSION_READ)
|
||||||
|
result = grant.to_dict()
|
||||||
|
assert result == {"grantee": "user123", "permission": "READ"}
|
||||||
|
|
||||||
|
def test_from_dict(self):
|
||||||
|
data = {"grantee": "admin", "permission": "FULL_CONTROL"}
|
||||||
|
grant = AclGrant.from_dict(data)
|
||||||
|
assert grant.grantee == "admin"
|
||||||
|
assert grant.permission == ACL_PERMISSION_FULL_CONTROL
|
||||||
|
|
||||||
|
|
||||||
|
class TestAcl:
|
||||||
|
def test_to_dict(self):
|
||||||
|
acl = Acl(
|
||||||
|
owner="owner-user",
|
||||||
|
grants=[
|
||||||
|
AclGrant(grantee="owner-user", permission=ACL_PERMISSION_FULL_CONTROL),
|
||||||
|
AclGrant(grantee=GRANTEE_ALL_USERS, permission=ACL_PERMISSION_READ),
|
||||||
|
],
|
||||||
|
)
|
||||||
|
result = acl.to_dict()
|
||||||
|
assert result["owner"] == "owner-user"
|
||||||
|
assert len(result["grants"]) == 2
|
||||||
|
assert result["grants"][0]["grantee"] == "owner-user"
|
||||||
|
assert result["grants"][1]["grantee"] == "*"
|
||||||
|
|
||||||
|
def test_from_dict(self):
|
||||||
|
data = {
|
||||||
|
"owner": "the-owner",
|
||||||
|
"grants": [
|
||||||
|
{"grantee": "the-owner", "permission": "FULL_CONTROL"},
|
||||||
|
{"grantee": "authenticated", "permission": "READ"},
|
||||||
|
],
|
||||||
|
}
|
||||||
|
acl = Acl.from_dict(data)
|
||||||
|
assert acl.owner == "the-owner"
|
||||||
|
assert len(acl.grants) == 2
|
||||||
|
assert acl.grants[0].grantee == "the-owner"
|
||||||
|
assert acl.grants[1].grantee == GRANTEE_AUTHENTICATED_USERS
|
||||||
|
|
||||||
|
def test_from_dict_empty_grants(self):
|
||||||
|
data = {"owner": "solo-owner"}
|
||||||
|
acl = Acl.from_dict(data)
|
||||||
|
assert acl.owner == "solo-owner"
|
||||||
|
assert len(acl.grants) == 0
|
||||||
|
|
||||||
|
def test_get_allowed_actions_owner(self):
|
||||||
|
acl = Acl(owner="owner123", grants=[])
|
||||||
|
actions = acl.get_allowed_actions("owner123", is_authenticated=True)
|
||||||
|
assert actions == PERMISSION_TO_ACTIONS[ACL_PERMISSION_FULL_CONTROL]
|
||||||
|
|
||||||
|
def test_get_allowed_actions_all_users(self):
|
||||||
|
acl = Acl(
|
||||||
|
owner="owner",
|
||||||
|
grants=[AclGrant(grantee=GRANTEE_ALL_USERS, permission=ACL_PERMISSION_READ)],
|
||||||
|
)
|
||||||
|
actions = acl.get_allowed_actions(None, is_authenticated=False)
|
||||||
|
assert "read" in actions
|
||||||
|
assert "list" in actions
|
||||||
|
assert "write" not in actions
|
||||||
|
|
||||||
|
def test_get_allowed_actions_authenticated_users(self):
|
||||||
|
acl = Acl(
|
||||||
|
owner="owner",
|
||||||
|
grants=[AclGrant(grantee=GRANTEE_AUTHENTICATED_USERS, permission=ACL_PERMISSION_WRITE)],
|
||||||
|
)
|
||||||
|
actions_authenticated = acl.get_allowed_actions("some-user", is_authenticated=True)
|
||||||
|
assert "write" in actions_authenticated
|
||||||
|
assert "delete" in actions_authenticated
|
||||||
|
|
||||||
|
actions_anonymous = acl.get_allowed_actions(None, is_authenticated=False)
|
||||||
|
assert "write" not in actions_anonymous
|
||||||
|
|
||||||
|
def test_get_allowed_actions_specific_grantee(self):
|
||||||
|
acl = Acl(
|
||||||
|
owner="owner",
|
||||||
|
grants=[
|
||||||
|
AclGrant(grantee="user-abc", permission=ACL_PERMISSION_READ),
|
||||||
|
AclGrant(grantee="user-xyz", permission=ACL_PERMISSION_WRITE),
|
||||||
|
],
|
||||||
|
)
|
||||||
|
abc_actions = acl.get_allowed_actions("user-abc", is_authenticated=True)
|
||||||
|
assert "read" in abc_actions
|
||||||
|
assert "list" in abc_actions
|
||||||
|
assert "write" not in abc_actions
|
||||||
|
|
||||||
|
xyz_actions = acl.get_allowed_actions("user-xyz", is_authenticated=True)
|
||||||
|
assert "write" in xyz_actions
|
||||||
|
assert "read" not in xyz_actions
|
||||||
|
|
||||||
|
def test_get_allowed_actions_combined(self):
|
||||||
|
acl = Acl(
|
||||||
|
owner="owner",
|
||||||
|
grants=[
|
||||||
|
AclGrant(grantee=GRANTEE_ALL_USERS, permission=ACL_PERMISSION_READ),
|
||||||
|
AclGrant(grantee="special-user", permission=ACL_PERMISSION_WRITE),
|
||||||
|
],
|
||||||
|
)
|
||||||
|
actions = acl.get_allowed_actions("special-user", is_authenticated=True)
|
||||||
|
assert "read" in actions
|
||||||
|
assert "list" in actions
|
||||||
|
assert "write" in actions
|
||||||
|
assert "delete" in actions
|
||||||
|
|
||||||
|
|
||||||
|
class TestCannedAcls:
|
||||||
|
def test_private_acl(self):
|
||||||
|
acl = create_canned_acl("private", "the-owner")
|
||||||
|
assert acl.owner == "the-owner"
|
||||||
|
assert len(acl.grants) == 1
|
||||||
|
assert acl.grants[0].grantee == "the-owner"
|
||||||
|
assert acl.grants[0].permission == ACL_PERMISSION_FULL_CONTROL
|
||||||
|
|
||||||
|
def test_public_read_acl(self):
|
||||||
|
acl = create_canned_acl("public-read", "owner")
|
||||||
|
assert acl.owner == "owner"
|
||||||
|
has_owner_full_control = any(
|
||||||
|
g.grantee == "owner" and g.permission == ACL_PERMISSION_FULL_CONTROL for g in acl.grants
|
||||||
|
)
|
||||||
|
has_public_read = any(
|
||||||
|
g.grantee == GRANTEE_ALL_USERS and g.permission == ACL_PERMISSION_READ for g in acl.grants
|
||||||
|
)
|
||||||
|
assert has_owner_full_control
|
||||||
|
assert has_public_read
|
||||||
|
|
||||||
|
    def test_public_read_write_acl(self):
        acl = create_canned_acl("public-read-write", "owner")
        assert acl.owner == "owner"
        has_public_read = any(
            g.grantee == GRANTEE_ALL_USERS and g.permission == ACL_PERMISSION_READ for g in acl.grants
        )
        has_public_write = any(
            g.grantee == GRANTEE_ALL_USERS and g.permission == ACL_PERMISSION_WRITE for g in acl.grants
        )
        assert has_public_read
        assert has_public_write

    def test_authenticated_read_acl(self):
        acl = create_canned_acl("authenticated-read", "owner")
        has_authenticated_read = any(
            g.grantee == GRANTEE_AUTHENTICATED_USERS and g.permission == ACL_PERMISSION_READ for g in acl.grants
        )
        assert has_authenticated_read

    def test_unknown_canned_acl_defaults_to_private(self):
        acl = create_canned_acl("unknown-acl", "owner")
        private_acl = create_canned_acl("private", "owner")
        assert acl.to_dict() == private_acl.to_dict()


@pytest.fixture
def acl_service(tmp_path: Path):
    return AclService(tmp_path)


class TestAclService:
    def test_get_bucket_acl_not_exists(self, acl_service):
        result = acl_service.get_bucket_acl("nonexistent-bucket")
        assert result is None

    def test_set_and_get_bucket_acl(self, acl_service):
        acl = Acl(
            owner="bucket-owner",
            grants=[AclGrant(grantee="bucket-owner", permission=ACL_PERMISSION_FULL_CONTROL)],
        )
        acl_service.set_bucket_acl("my-bucket", acl)

        retrieved = acl_service.get_bucket_acl("my-bucket")
        assert retrieved is not None
        assert retrieved.owner == "bucket-owner"
        assert len(retrieved.grants) == 1

    def test_bucket_acl_caching(self, acl_service):
        acl = Acl(owner="cached-owner", grants=[])
        acl_service.set_bucket_acl("cached-bucket", acl)

        acl_service.get_bucket_acl("cached-bucket")
        assert "cached-bucket" in acl_service._bucket_acl_cache

        retrieved = acl_service.get_bucket_acl("cached-bucket")
        assert retrieved.owner == "cached-owner"

    def test_set_bucket_canned_acl(self, acl_service):
        result = acl_service.set_bucket_canned_acl("new-bucket", "public-read", "the-owner")
        assert result.owner == "the-owner"

        retrieved = acl_service.get_bucket_acl("new-bucket")
        assert retrieved is not None
        has_public_read = any(
            g.grantee == GRANTEE_ALL_USERS and g.permission == ACL_PERMISSION_READ for g in retrieved.grants
        )
        assert has_public_read

    def test_delete_bucket_acl(self, acl_service):
        acl = Acl(owner="to-delete-owner", grants=[])
        acl_service.set_bucket_acl("delete-me", acl)
        assert acl_service.get_bucket_acl("delete-me") is not None

        acl_service.delete_bucket_acl("delete-me")
        acl_service._bucket_acl_cache.clear()
        assert acl_service.get_bucket_acl("delete-me") is None

    def test_evaluate_bucket_acl_allowed(self, acl_service):
        acl = Acl(
            owner="owner",
            grants=[AclGrant(grantee=GRANTEE_ALL_USERS, permission=ACL_PERMISSION_READ)],
        )
        acl_service.set_bucket_acl("public-bucket", acl)

        result = acl_service.evaluate_bucket_acl("public-bucket", None, "read", is_authenticated=False)
        assert result is True

    def test_evaluate_bucket_acl_denied(self, acl_service):
        acl = Acl(
            owner="owner",
            grants=[AclGrant(grantee="owner", permission=ACL_PERMISSION_FULL_CONTROL)],
        )
        acl_service.set_bucket_acl("private-bucket", acl)

        result = acl_service.evaluate_bucket_acl("private-bucket", "other-user", "write", is_authenticated=True)
        assert result is False

    def test_evaluate_bucket_acl_no_acl(self, acl_service):
        result = acl_service.evaluate_bucket_acl("no-acl-bucket", "anyone", "read")
        assert result is False

    def test_get_object_acl_from_metadata(self, acl_service):
        metadata = {
            "__acl__": {
                "owner": "object-owner",
                "grants": [{"grantee": "object-owner", "permission": "FULL_CONTROL"}],
            }
        }
        result = acl_service.get_object_acl("bucket", "key", metadata)
        assert result is not None
        assert result.owner == "object-owner"

    def test_get_object_acl_no_acl_in_metadata(self, acl_service):
        metadata = {"Content-Type": "text/plain"}
        result = acl_service.get_object_acl("bucket", "key", metadata)
        assert result is None

    def test_create_object_acl_metadata(self, acl_service):
        acl = Acl(owner="obj-owner", grants=[])
        result = acl_service.create_object_acl_metadata(acl)
        assert "__acl__" in result
        assert result["__acl__"]["owner"] == "obj-owner"

    def test_evaluate_object_acl(self, acl_service):
        metadata = {
            "__acl__": {
                "owner": "obj-owner",
                "grants": [{"grantee": "*", "permission": "READ"}],
            }
        }
        result = acl_service.evaluate_object_acl(metadata, None, "read", is_authenticated=False)
        assert result is True

        result = acl_service.evaluate_object_acl(metadata, None, "write", is_authenticated=False)
        assert result is False
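A note on what these ACL tests pin down: a canned ACL is a shorthand (`private`, `public-read`, `public-read-write`, `authenticated-read`) that expands into an owner grant plus zero or more group grants. The sketch below is a plausible shape for such an expansion, inferred only from the assertions above; the concrete constant values and the real `create_canned_acl` in `app` are assumptions.

```python
from dataclasses import dataclass, field

# Constant values here are illustrative; the tests only require that these
# names exist and compare consistently, not that they hold these exact strings.
GRANTEE_ALL_USERS = "*"
GRANTEE_AUTHENTICATED_USERS = "authenticated-users"
ACL_PERMISSION_READ = "READ"
ACL_PERMISSION_WRITE = "WRITE"
ACL_PERMISSION_FULL_CONTROL = "FULL_CONTROL"


@dataclass
class AclGrant:
    grantee: str
    permission: str


@dataclass
class Acl:
    owner: str
    grants: list = field(default_factory=list)


def create_canned_acl(name: str, owner: str) -> Acl:
    # The owner always keeps FULL_CONTROL; extra grants depend on the canned name.
    grants = [AclGrant(owner, ACL_PERMISSION_FULL_CONTROL)]
    if name == "public-read":
        grants.append(AclGrant(GRANTEE_ALL_USERS, ACL_PERMISSION_READ))
    elif name == "public-read-write":
        grants.append(AclGrant(GRANTEE_ALL_USERS, ACL_PERMISSION_READ))
        grants.append(AclGrant(GRANTEE_ALL_USERS, ACL_PERMISSION_WRITE))
    elif name == "authenticated-read":
        grants.append(AclGrant(GRANTEE_AUTHENTICATED_USERS, ACL_PERMISSION_READ))
    # Any unknown name falls through to "private": owner-only FULL_CONTROL.
    return Acl(owner=owner, grants=grants)
```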
@@ -8,8 +8,6 @@ def client(app):
 @pytest.fixture
 def auth_headers(app):
-    # Create a test user and return headers
-    # Using the user defined in conftest.py
     return {
         "X-Access-Key": "test",
         "X-Secret-Key": "secret"
@@ -76,18 +74,15 @@ def test_multipart_upload_flow(client, auth_headers):
 def test_abort_multipart_upload(client, auth_headers):
     client.put("/abort-bucket", headers=auth_headers)

-    # Initiate
     resp = client.post("/abort-bucket/file.txt?uploads", headers=auth_headers)
     upload_id = fromstring(resp.data).find("UploadId").text

-    # Abort
     resp = client.delete(f"/abort-bucket/file.txt?uploadId={upload_id}", headers=auth_headers)
     assert resp.status_code == 204

-    # Try to upload part (should fail)
     resp = client.put(
         f"/abort-bucket/file.txt?partNumber=1&uploadId={upload_id}",
         headers=auth_headers,
         data=b"data"
     )
-    assert resp.status_code == 404  # NoSuchUpload
+    assert resp.status_code == 404
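The aborted-upload test above follows the standard S3 multipart sequence: POST `?uploads` to initiate, PUT `?partNumber=N&uploadId=...` for each part, and DELETE `?uploadId=...` to abort, after which further part uploads fail with 404 (NoSuchUpload). A minimal sketch of the same flow against a locally running MyFSIO API, assuming the test-style header credentials and the default port; production clients would sign requests with SigV4 instead.

```python
import requests
from xml.etree.ElementTree import fromstring

BASE = "http://localhost:5000"  # assumed local API endpoint
AUTH = {"X-Access-Key": "test", "X-Secret-Key": "secret"}  # assumed test credentials

requests.put(f"{BASE}/demo-bucket", headers=AUTH)

# Initiate a multipart upload; the UploadId comes back in the XML body.
resp = requests.post(f"{BASE}/demo-bucket/big.bin?uploads", headers=AUTH)
upload_id = fromstring(resp.content).find("UploadId").text

# Abort it; any later part upload must be rejected with 404 (NoSuchUpload).
requests.delete(f"{BASE}/demo-bucket/big.bin?uploadId={upload_id}", headers=AUTH)
resp = requests.put(
    f"{BASE}/demo-bucket/big.bin?partNumber=1&uploadId={upload_id}",
    headers=AUTH,
    data=b"data",
)
assert resp.status_code == 404
```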
@@ -22,11 +22,10 @@ class TestLocalKeyEncryption:
         key_path = tmp_path / "keys" / "master.key"
         provider = LocalKeyEncryption(key_path)

-        # Access master key to trigger creation
         key = provider.master_key

         assert key_path.exists()
-        assert len(key) == 32  # 256-bit key
+        assert len(key) == 32

     def test_load_existing_master_key(self, tmp_path):
         """Test loading an existing master key."""
@@ -50,7 +49,6 @@ class TestLocalKeyEncryption:
         plaintext = b"Hello, World! This is a test message."

-        # Encrypt
         result = provider.encrypt(plaintext)

         assert result.ciphertext != plaintext
@@ -58,7 +56,6 @@ class TestLocalKeyEncryption:
         assert len(result.nonce) == 12
         assert len(result.encrypted_data_key) > 0

-        # Decrypt
         decrypted = provider.decrypt(
             result.ciphertext,
             result.nonce,
@@ -80,11 +77,8 @@ class TestLocalKeyEncryption:
         result1 = provider.encrypt(plaintext)
         result2 = provider.encrypt(plaintext)

-        # Different encrypted data keys
         assert result1.encrypted_data_key != result2.encrypted_data_key
-        # Different nonces
         assert result1.nonce != result2.nonce
-        # Different ciphertexts
         assert result1.ciphertext != result2.ciphertext

     def test_generate_data_key(self, tmp_path):
@@ -97,9 +91,8 @@ class TestLocalKeyEncryption:
         plaintext_key, encrypted_key = provider.generate_data_key()

         assert len(plaintext_key) == 32
-        assert len(encrypted_key) > 32  # nonce + ciphertext + tag
+        assert len(encrypted_key) > 32

-        # Verify we can decrypt the key
         decrypted_key = provider._decrypt_data_key(encrypted_key)
         assert decrypted_key == plaintext_key
@@ -107,18 +100,15 @@ class TestLocalKeyEncryption:
         """Test that decryption fails with wrong master key."""
         from app.encryption import LocalKeyEncryption, EncryptionError

-        # Create two providers with different keys
         key_path1 = tmp_path / "master1.key"
         key_path2 = tmp_path / "master2.key"

         provider1 = LocalKeyEncryption(key_path1)
         provider2 = LocalKeyEncryption(key_path2)

-        # Encrypt with provider1
         plaintext = b"Secret message"
         result = provider1.encrypt(plaintext)

-        # Try to decrypt with provider2
         with pytest.raises(EncryptionError):
             provider2.decrypt(
                 result.ciphertext,
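These tests describe envelope encryption: each payload is sealed with a fresh 256-bit data key under AES-GCM, and only the data key is wrapped by the master key, which is why a provider holding a different master key fails to decrypt. A minimal sketch of the pattern with the `cryptography` package; the names are illustrative and do not mirror the `app.encryption` internals.

```python
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

master_key = AESGCM.generate_key(bit_length=256)


def encrypt(plaintext: bytes) -> tuple[bytes, bytes, bytes]:
    data_key = AESGCM.generate_key(bit_length=256)  # fresh key per payload
    nonce = os.urandom(12)
    ciphertext = AESGCM(data_key).encrypt(nonce, plaintext, None)
    # Wrap the data key under the master key; prepend the wrap nonce.
    wrap_nonce = os.urandom(12)
    wrapped_key = wrap_nonce + AESGCM(master_key).encrypt(wrap_nonce, data_key, None)
    return ciphertext, nonce, wrapped_key


def decrypt(ciphertext: bytes, nonce: bytes, wrapped_key: bytes) -> bytes:
    # A wrong master key raises cryptography.exceptions.InvalidTag here,
    # which an application layer would surface as its own EncryptionError.
    data_key = AESGCM(master_key).decrypt(wrapped_key[:12], wrapped_key[12:], None)
    return AESGCM(data_key).decrypt(nonce, ciphertext, None)
```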
@@ -196,18 +186,15 @@ class TestStreamingEncryptor:
         provider = LocalKeyEncryption(key_path)
         encryptor = StreamingEncryptor(provider, chunk_size=1024)

-        # Create test data
-        original_data = b"A" * 5000 + b"B" * 5000 + b"C" * 5000  # 15KB
+        original_data = b"A" * 5000 + b"B" * 5000 + b"C" * 5000
         stream = io.BytesIO(original_data)

-        # Encrypt
         encrypted_stream, metadata = encryptor.encrypt_stream(stream)
         encrypted_data = encrypted_stream.read()

         assert encrypted_data != original_data
         assert metadata.algorithm == "AES256"

-        # Decrypt
         encrypted_stream = io.BytesIO(encrypted_data)
         decrypted_stream = encryptor.decrypt_stream(encrypted_stream, metadata)
         decrypted_data = decrypted_stream.read()
@@ -319,7 +306,6 @@ class TestClientEncryptionHelper:
         assert key_info["algorithm"] == "AES-256-GCM"
         assert "created_at" in key_info

-        # Verify key is 256 bits
         key = base64.b64decode(key_info["key"])
         assert len(key) == 32
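`StreamingEncryptor` is exercised with a 15 KB payload and a 1 KB chunk size, i.e. the stream is sealed chunk by chunk so large objects never have to be buffered whole. One common framing for chunked AES-GCM is a per-chunk length prefix, nonce, and tag, sketched below; the actual framing and metadata handling in `app.encryption` may differ.

```python
import io
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM


def encrypt_stream(src, key: bytes, chunk_size: int = 1024) -> io.BytesIO:
    aesgcm, out = AESGCM(key), io.BytesIO()
    while chunk := src.read(chunk_size):
        nonce = os.urandom(12)
        sealed = aesgcm.encrypt(nonce, chunk, None)  # ciphertext + 16-byte tag
        # Frame: 4-byte big-endian length, 12-byte nonce, sealed chunk.
        out.write(len(sealed).to_bytes(4, "big") + nonce + sealed)
    out.seek(0)
    return out


def decrypt_stream(src, key: bytes) -> io.BytesIO:
    aesgcm, out = AESGCM(key), io.BytesIO()
    while header := src.read(4):
        sealed_len = int.from_bytes(header, "big")
        nonce, sealed = src.read(12), src.read(sealed_len)
        out.write(aesgcm.decrypt(nonce, sealed, None))
    out.seek(0)
    return out
```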
@@ -425,7 +411,6 @@ class TestKMSManager:
         assert key is not None
         assert key.key_id == "test-key"

-        # Non-existent key
         assert kms.get_key("non-existent") is None

     def test_enable_disable_key(self, tmp_path):
@@ -439,14 +424,11 @@ class TestKMSManager:
         kms.create_key("Test key", key_id="test-key")

-        # Initially enabled
         assert kms.get_key("test-key").enabled

-        # Disable
         kms.disable_key("test-key")
         assert not kms.get_key("test-key").enabled

-        # Enable
         kms.enable_key("test-key")
         assert kms.get_key("test-key").enabled
@@ -503,11 +485,9 @@ class TestKMSManager:
         ciphertext = kms.encrypt("test-key", plaintext, context)

-        # Decrypt with same context succeeds
         decrypted, _ = kms.decrypt(ciphertext, context)
         assert decrypted == plaintext

-        # Decrypt with different context fails
         with pytest.raises(EncryptionError):
             kms.decrypt(ciphertext, {"different": "context"})
@@ -527,7 +507,6 @@ class TestKMSManager:
         assert len(plaintext_key) == 32
         assert len(encrypted_key) > 0

-        # Decrypt the encrypted key
         decrypted_key = kms.decrypt_data_key("test-key", encrypted_key)

         assert decrypted_key == plaintext_key
@@ -561,13 +540,8 @@ class TestKMSManager:
         plaintext = b"Data to re-encrypt"

-        # Encrypt with key-1
         ciphertext1 = kms.encrypt("key-1", plaintext)

-        # Re-encrypt with key-2
         ciphertext2 = kms.re_encrypt(ciphertext1, "key-2")

-        # Decrypt with key-2
         decrypted, key_id = kms.decrypt(ciphertext2)

         assert decrypted == plaintext
@@ -587,7 +561,7 @@ class TestKMSManager:
         assert len(random1) == 32
         assert len(random2) == 32
-        assert random1 != random2  # Very unlikely to be equal
+        assert random1 != random2

     def test_keys_persist_across_instances(self, tmp_path):
         """Test that keys persist and can be loaded by new instances."""
@@ -596,14 +570,12 @@ class TestKMSManager:
         keys_path = tmp_path / "kms_keys.json"
         master_key_path = tmp_path / "master.key"

-        # Create key with first instance
         kms1 = KMSManager(keys_path, master_key_path)
         kms1.create_key("Test key", key_id="test-key")

         plaintext = b"Persistent encryption test"
         ciphertext = kms1.encrypt("test-key", plaintext)

-        # Create new instance and verify key works
         kms2 = KMSManager(keys_path, master_key_path)

         decrypted, key_id = kms2.decrypt(ciphertext)
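The encryption-context tests mirror AWS KMS semantics: the context is authenticated but not secret, and decryption must present exactly the key/value pairs supplied at encryption time. With AES-GCM this maps directly onto associated data (AAD). A minimal sketch, assuming the context is canonicalized as sorted JSON:

```python
import json
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)


def _aad(context: dict | None) -> bytes | None:
    # Sort keys so logically equal contexts always serialize identically.
    return json.dumps(context, sort_keys=True).encode() if context else None


def encrypt(plaintext: bytes, context: dict | None = None) -> bytes:
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, plaintext, _aad(context))


def decrypt(blob: bytes, context: dict | None = None) -> bytes:
    # Raises InvalidTag if the context differs from the one used to encrypt.
    return AESGCM(key).decrypt(blob[:12], blob[12:], _aad(context))
```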
@@ -665,13 +637,11 @@ class TestEncryptedStorage:
         encrypted_storage = EncryptedObjectStorage(storage, encryption)

-        # Create bucket with encryption config
         storage.create_bucket("test-bucket")
         storage.set_bucket_encryption("test-bucket", {
             "Rules": [{"SSEAlgorithm": "AES256"}]
         })

-        # Put object
         original_data = b"This is secret data that should be encrypted"
         stream = io.BytesIO(original_data)
@@ -683,12 +653,10 @@ class TestEncryptedStorage:
         assert meta is not None

-        # Verify file on disk is encrypted (not plaintext)
         file_path = storage_root / "test-bucket" / "secret.txt"
         stored_data = file_path.read_bytes()
         assert stored_data != original_data

-        # Get object - should be decrypted
         data, metadata = encrypted_storage.get_object_data("test-bucket", "secret.txt")

         assert data == original_data
@@ -711,14 +679,12 @@ class TestEncryptedStorage:
         encrypted_storage = EncryptedObjectStorage(storage, encryption)

         storage.create_bucket("test-bucket")
-        # No encryption config

         original_data = b"Unencrypted data"
         stream = io.BytesIO(original_data)

         encrypted_storage.put_object("test-bucket", "plain.txt", stream)

-        # Verify file on disk is NOT encrypted
         file_path = storage_root / "test-bucket" / "plain.txt"
         stored_data = file_path.read_bytes()
         assert stored_data == original_data
@@ -745,7 +711,6 @@ class TestEncryptedStorage:
         original_data = b"Explicitly encrypted data"
         stream = io.BytesIO(original_data)

-        # Request encryption explicitly
         encrypted_storage.put_object(
             "test-bucket",
             "encrypted.txt",
@@ -753,11 +718,9 @@ class TestEncryptedStorage:
             server_side_encryption="AES256",
         )

-        # Verify file is encrypted
         file_path = storage_root / "test-bucket" / "encrypted.txt"
         stored_data = file_path.read_bytes()
         assert stored_data != original_data

-        # Get object - should be decrypted
         data, _ = encrypted_storage.get_object_data("test-bucket", "encrypted.txt")
         assert data == original_data
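Taken together, the `TestEncryptedStorage` cases define the decorator contract: `EncryptedObjectStorage` consults the bucket's encryption configuration (or an explicit `server_side_encryption` argument) on put, stores ciphertext on disk, and decrypts transparently on get. A usage sketch using only the names the tests themselves use; how `storage` and `encryption` are constructed follows the fixtures, which this excerpt does not show in full.

```python
import io

# Assumes `storage` is the ObjectStorage and `encryption` the SSE provider
# built by the test fixtures above.
encrypted_storage = EncryptedObjectStorage(storage, encryption)

storage.create_bucket("vault")
storage.set_bucket_encryption("vault", {"Rules": [{"SSEAlgorithm": "AES256"}]})

# Default-encrypted bucket: bytes on disk are ciphertext, reads are plaintext.
encrypted_storage.put_object("vault", "secret.txt", io.BytesIO(b"top secret"))
data, metadata = encrypted_storage.get_object_data("vault", "secret.txt")
assert data == b"top secret"

# Per-request SSE also works on buckets without a default configuration.
encrypted_storage.put_object(
    "vault",
    "explicit.txt",
    io.BytesIO(b"also secret"),
    server_side_encryption="AES256",
)
```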
@@ -24,7 +24,6 @@ def kms_client(tmp_path):
         "KMS_KEYS_PATH": str(tmp_path / "kms_keys.json"),
     })

-    # Create default IAM config with admin user
     iam_config = {
         "users": [
             {
@@ -83,7 +82,6 @@ class TestKMSKeyManagement:
     def test_list_keys(self, kms_client, auth_headers):
         """Test listing KMS keys."""
-        # Create some keys
         kms_client.post("/kms/keys", json={"Description": "Key 1"}, headers=auth_headers)
         kms_client.post("/kms/keys", json={"Description": "Key 2"}, headers=auth_headers)
@@ -97,7 +95,6 @@ class TestKMSKeyManagement:
     def test_get_key(self, kms_client, auth_headers):
         """Test getting a specific key."""
-        # Create a key
         create_response = kms_client.post(
             "/kms/keys",
             json={"KeyId": "test-key", "Description": "Test key"},
@@ -120,36 +117,28 @@ class TestKMSKeyManagement:
     def test_delete_key(self, kms_client, auth_headers):
         """Test deleting a key."""
-        # Create a key
         kms_client.post("/kms/keys", json={"KeyId": "test-key"}, headers=auth_headers)

-        # Delete it
         response = kms_client.delete("/kms/keys/test-key", headers=auth_headers)

         assert response.status_code == 204

-        # Verify it's gone
         get_response = kms_client.get("/kms/keys/test-key", headers=auth_headers)
         assert get_response.status_code == 404

     def test_enable_disable_key(self, kms_client, auth_headers):
         """Test enabling and disabling a key."""
-        # Create a key
         kms_client.post("/kms/keys", json={"KeyId": "test-key"}, headers=auth_headers)

-        # Disable
         response = kms_client.post("/kms/keys/test-key/disable", headers=auth_headers)
         assert response.status_code == 200

-        # Verify disabled
         get_response = kms_client.get("/kms/keys/test-key", headers=auth_headers)
         assert get_response.get_json()["KeyMetadata"]["Enabled"] is False

-        # Enable
         response = kms_client.post("/kms/keys/test-key/enable", headers=auth_headers)
         assert response.status_code == 200

-        # Verify enabled
         get_response = kms_client.get("/kms/keys/test-key", headers=auth_headers)
         assert get_response.get_json()["KeyMetadata"]["Enabled"] is True
@@ -159,13 +148,11 @@ class TestKMSEncryption:
     def test_encrypt_decrypt(self, kms_client, auth_headers):
         """Test encrypting and decrypting data."""
-        # Create a key
         kms_client.post("/kms/keys", json={"KeyId": "test-key"}, headers=auth_headers)

         plaintext = b"Hello, World!"
         plaintext_b64 = base64.b64encode(plaintext).decode()

-        # Encrypt
         encrypt_response = kms_client.post(
             "/kms/encrypt",
             json={"KeyId": "test-key", "Plaintext": plaintext_b64},
@@ -178,7 +165,6 @@ class TestKMSEncryption:
         assert "CiphertextBlob" in encrypt_data
         assert encrypt_data["KeyId"] == "test-key"

-        # Decrypt
         decrypt_response = kms_client.post(
             "/kms/decrypt",
             json={"CiphertextBlob": encrypt_data["CiphertextBlob"]},
@@ -199,7 +185,6 @@ class TestKMSEncryption:
         plaintext_b64 = base64.b64encode(plaintext).decode()
         context = {"purpose": "testing", "bucket": "my-bucket"}

-        # Encrypt with context
         encrypt_response = kms_client.post(
             "/kms/encrypt",
             json={
@@ -213,7 +198,6 @@ class TestKMSEncryption:
         assert encrypt_response.status_code == 200
         ciphertext = encrypt_response.get_json()["CiphertextBlob"]

-        # Decrypt with same context succeeds
         decrypt_response = kms_client.post(
             "/kms/decrypt",
             json={
@@ -225,7 +209,6 @@ class TestKMSEncryption:
         assert decrypt_response.status_code == 200

-        # Decrypt with wrong context fails
         wrong_context_response = kms_client.post(
             "/kms/decrypt",
             json={
@@ -325,11 +308,9 @@ class TestKMSReEncrypt:
     def test_re_encrypt(self, kms_client, auth_headers):
         """Test re-encrypting data with a different key."""
-        # Create two keys
         kms_client.post("/kms/keys", json={"KeyId": "key-1"}, headers=auth_headers)
         kms_client.post("/kms/keys", json={"KeyId": "key-2"}, headers=auth_headers)

-        # Encrypt with key-1
         plaintext = b"Data to re-encrypt"
         encrypt_response = kms_client.post(
             "/kms/encrypt",
@@ -342,7 +323,6 @@ class TestKMSReEncrypt:
         ciphertext = encrypt_response.get_json()["CiphertextBlob"]

-        # Re-encrypt with key-2
         re_encrypt_response = kms_client.post(
             "/kms/re-encrypt",
             json={
@@ -358,7 +338,6 @@ class TestKMSReEncrypt:
         assert data["SourceKeyId"] == "key-1"
         assert data["KeyId"] == "key-2"

-        # Verify new ciphertext can be decrypted
         decrypt_response = kms_client.post(
             "/kms/decrypt",
             json={"CiphertextBlob": data["CiphertextBlob"]},
@@ -398,7 +377,7 @@ class TestKMSRandom:
         data = response.get_json()

         random_bytes = base64.b64decode(data["Plaintext"])
-        assert len(random_bytes) == 32  # Default is 32 bytes
+        assert len(random_bytes) == 32


 class TestClientSideEncryption:
@@ -422,11 +401,9 @@ class TestClientSideEncryption:
     def test_client_encrypt_decrypt(self, kms_client, auth_headers):
         """Test client-side encryption and decryption."""
-        # Generate a key
         key_response = kms_client.post("/kms/client/generate-key", headers=auth_headers)
         key = key_response.get_json()["key"]

-        # Encrypt
         plaintext = b"Client-side encrypted data"
         encrypt_response = kms_client.post(
             "/kms/client/encrypt",
@@ -440,7 +417,6 @@ class TestClientSideEncryption:
         assert encrypt_response.status_code == 200
         encrypted = encrypt_response.get_json()

-        # Decrypt
         decrypt_response = kms_client.post(
             "/kms/client/decrypt",
             json={
@@ -461,7 +437,6 @@ class TestEncryptionMaterials:
     def test_get_encryption_materials(self, kms_client, auth_headers):
         """Test getting encryption materials for client-side S3 encryption."""
-        # Create a key
         kms_client.post("/kms/keys", json={"KeyId": "s3-key"}, headers=auth_headers)

         response = kms_client.post(
@@ -478,7 +453,6 @@ class TestEncryptionMaterials:
         assert data["KeyId"] == "s3-key"
         assert data["Algorithm"] == "AES-256-GCM"

-        # Verify key is 256 bits
         key = base64.b64decode(data["PlaintextKey"])
         assert len(key) == 32
@@ -490,7 +464,6 @@ class TestKMSAuthentication:
         """Test that unauthenticated requests are rejected."""
         response = kms_client.get("/kms/keys")

-        # Should fail with 403 (no credentials)
         assert response.status_code == 403

     def test_invalid_credentials_fail(self, kms_client):
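The HTTP surface under test is a small KMS-style REST API (`/kms/keys`, `/kms/encrypt`, `/kms/decrypt`, `/kms/re-encrypt`, `/kms/random`, `/kms/client/*`) with binary payloads base64-encoded, as in AWS KMS. A round-trip sketch with `requests`, assuming a local server and the header-based test credentials; the `Plaintext` field in the decrypt response follows the AWS convention and is an assumption here.

```python
import base64

import requests

BASE = "http://localhost:5000"  # assumed local API endpoint
AUTH = {"X-Access-Key": "test", "X-Secret-Key": "secret"}  # assumed test credentials

requests.post(f"{BASE}/kms/keys", json={"KeyId": "demo-key"}, headers=AUTH)

plaintext_b64 = base64.b64encode(b"Hello, World!").decode()
enc = requests.post(
    f"{BASE}/kms/encrypt",
    json={"KeyId": "demo-key", "Plaintext": plaintext_b64},
    headers=AUTH,
).json()

dec = requests.post(
    f"{BASE}/kms/decrypt",
    json={"CiphertextBlob": enc["CiphertextBlob"]},
    headers=AUTH,
).json()
assert base64.b64decode(dec["Plaintext"]) == b"Hello, World!"
```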
tests/test_lifecycle.py (new file, 238 lines)
@@ -0,0 +1,238 @@
import io
import time
from datetime import datetime, timedelta, timezone
from pathlib import Path
from unittest.mock import MagicMock, patch

import pytest

from app.lifecycle import LifecycleManager, LifecycleResult
from app.storage import ObjectStorage


@pytest.fixture
def storage(tmp_path: Path):
    storage_root = tmp_path / "data"
    storage_root.mkdir(parents=True)
    return ObjectStorage(storage_root)


@pytest.fixture
def lifecycle_manager(storage):
    manager = LifecycleManager(storage, interval_seconds=3600)
    yield manager
    manager.stop()


class TestLifecycleResult:
    def test_default_values(self):
        result = LifecycleResult(bucket_name="test-bucket")
        assert result.bucket_name == "test-bucket"
        assert result.objects_deleted == 0
        assert result.versions_deleted == 0
        assert result.uploads_aborted == 0
        assert result.errors == []
        assert result.execution_time_seconds == 0.0


class TestLifecycleManager:
    def test_start_and_stop(self, lifecycle_manager):
        lifecycle_manager.start()
        assert lifecycle_manager._timer is not None
        assert lifecycle_manager._shutdown is False

        lifecycle_manager.stop()
        assert lifecycle_manager._shutdown is True
        assert lifecycle_manager._timer is None

    def test_start_only_once(self, lifecycle_manager):
        lifecycle_manager.start()
        first_timer = lifecycle_manager._timer

        lifecycle_manager.start()
        assert lifecycle_manager._timer is first_timer

    def test_enforce_rules_no_lifecycle(self, lifecycle_manager, storage):
        storage.create_bucket("no-lifecycle-bucket")

        result = lifecycle_manager.enforce_rules("no-lifecycle-bucket")
        assert result.bucket_name == "no-lifecycle-bucket"
        assert result.objects_deleted == 0

    def test_enforce_rules_disabled_rule(self, lifecycle_manager, storage):
        storage.create_bucket("disabled-bucket")
        storage.set_bucket_lifecycle("disabled-bucket", [
            {
                "ID": "disabled-rule",
                "Status": "Disabled",
                "Prefix": "",
                "Expiration": {"Days": 1},
            }
        ])

        old_object = storage.put_object(
            "disabled-bucket",
            "old-file.txt",
            io.BytesIO(b"old content"),
        )

        result = lifecycle_manager.enforce_rules("disabled-bucket")
        assert result.objects_deleted == 0

    def test_enforce_expiration_by_days(self, lifecycle_manager, storage):
        storage.create_bucket("expire-bucket")
        storage.set_bucket_lifecycle("expire-bucket", [
            {
                "ID": "expire-30-days",
                "Status": "Enabled",
                "Prefix": "",
                "Expiration": {"Days": 30},
            }
        ])

        storage.put_object(
            "expire-bucket",
            "recent-file.txt",
            io.BytesIO(b"recent content"),
        )

        result = lifecycle_manager.enforce_rules("expire-bucket")
        assert result.objects_deleted == 0

    def test_enforce_expiration_with_prefix(self, lifecycle_manager, storage):
        storage.create_bucket("prefix-bucket")
        storage.set_bucket_lifecycle("prefix-bucket", [
            {
                "ID": "expire-logs",
                "Status": "Enabled",
                "Prefix": "logs/",
                "Expiration": {"Days": 1},
            }
        ])

        storage.put_object("prefix-bucket", "logs/old.log", io.BytesIO(b"log data"))
        storage.put_object("prefix-bucket", "data/keep.txt", io.BytesIO(b"keep this"))

        result = lifecycle_manager.enforce_rules("prefix-bucket")

    def test_enforce_all_buckets(self, lifecycle_manager, storage):
        storage.create_bucket("bucket1")
        storage.create_bucket("bucket2")

        results = lifecycle_manager.enforce_all_buckets()
        assert isinstance(results, dict)

    def test_run_now_single_bucket(self, lifecycle_manager, storage):
        storage.create_bucket("run-now-bucket")

        results = lifecycle_manager.run_now("run-now-bucket")
        assert "run-now-bucket" in results

    def test_run_now_all_buckets(self, lifecycle_manager, storage):
        storage.create_bucket("all-bucket-1")
        storage.create_bucket("all-bucket-2")

        results = lifecycle_manager.run_now()
        assert isinstance(results, dict)

    def test_enforce_abort_multipart(self, lifecycle_manager, storage):
        storage.create_bucket("multipart-bucket")
        storage.set_bucket_lifecycle("multipart-bucket", [
            {
                "ID": "abort-old-uploads",
                "Status": "Enabled",
                "Prefix": "",
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
            }
        ])

        upload_id = storage.initiate_multipart_upload("multipart-bucket", "large-file.bin")

        result = lifecycle_manager.enforce_rules("multipart-bucket")
        assert result.uploads_aborted == 0

    def test_enforce_noncurrent_version_expiration(self, lifecycle_manager, storage):
        storage.create_bucket("versioned-bucket")
        storage.set_bucket_versioning("versioned-bucket", True)
        storage.set_bucket_lifecycle("versioned-bucket", [
            {
                "ID": "expire-old-versions",
                "Status": "Enabled",
                "Prefix": "",
                "NoncurrentVersionExpiration": {"NoncurrentDays": 30},
            }
        ])

        storage.put_object("versioned-bucket", "file.txt", io.BytesIO(b"v1"))
        storage.put_object("versioned-bucket", "file.txt", io.BytesIO(b"v2"))

        result = lifecycle_manager.enforce_rules("versioned-bucket")
        assert result.bucket_name == "versioned-bucket"

    def test_execution_time_tracking(self, lifecycle_manager, storage):
        storage.create_bucket("timed-bucket")
        storage.set_bucket_lifecycle("timed-bucket", [
            {
                "ID": "timer-test",
                "Status": "Enabled",
                "Expiration": {"Days": 1},
            }
        ])

        result = lifecycle_manager.enforce_rules("timed-bucket")
        assert result.execution_time_seconds >= 0

    def test_enforce_rules_with_error(self, lifecycle_manager, storage):
        result = lifecycle_manager.enforce_rules("nonexistent-bucket")
        assert len(result.errors) > 0 or result.objects_deleted == 0

    def test_lifecycle_with_date_expiration(self, lifecycle_manager, storage):
        storage.create_bucket("date-bucket")
        past_date = (datetime.now(timezone.utc) - timedelta(days=1)).strftime("%Y-%m-%dT00:00:00Z")
        storage.set_bucket_lifecycle("date-bucket", [
            {
                "ID": "expire-by-date",
                "Status": "Enabled",
                "Prefix": "",
                "Expiration": {"Date": past_date},
            }
        ])

        storage.put_object("date-bucket", "should-expire.txt", io.BytesIO(b"content"))

        result = lifecycle_manager.enforce_rules("date-bucket")

    def test_enforce_with_filter_prefix(self, lifecycle_manager, storage):
        storage.create_bucket("filter-bucket")
        storage.set_bucket_lifecycle("filter-bucket", [
            {
                "ID": "filter-prefix-rule",
                "Status": "Enabled",
                "Filter": {"Prefix": "archive/"},
                "Expiration": {"Days": 1},
            }
        ])

        result = lifecycle_manager.enforce_rules("filter-bucket")
        assert result.bucket_name == "filter-bucket"


class TestLifecycleManagerScheduling:
    def test_schedule_next_respects_shutdown(self, storage):
        manager = LifecycleManager(storage, interval_seconds=1)
        manager._shutdown = True
        manager._schedule_next()
        assert manager._timer is None

    @patch.object(LifecycleManager, "enforce_all_buckets")
    def test_run_enforcement_catches_exceptions(self, mock_enforce, storage):
        mock_enforce.side_effect = Exception("Test error")
        manager = LifecycleManager(storage, interval_seconds=3600)
        manager._shutdown = True
        manager._run_enforcement()

    def test_shutdown_flag_prevents_scheduling(self, storage):
        manager = LifecycleManager(storage, interval_seconds=1)
        manager.start()
        manager.stop()
        assert manager._shutdown is True
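The rule dictionaries these tests feed to `set_bucket_lifecycle` follow the S3 lifecycle grammar: expiration by relative `Days` or absolute `Date`, scoping by `Prefix` or `Filter`, plus `NoncurrentVersionExpiration` and `AbortIncompleteMultipartUpload`. A representative configuration combining the shapes exercised above (values are illustrative):

```python
lifecycle_rules = [
    {
        "ID": "expire-logs-after-30-days",
        "Status": "Enabled",
        "Prefix": "logs/",
        "Expiration": {"Days": 30},
    },
    {
        "ID": "trim-old-versions",
        "Status": "Enabled",
        "Prefix": "",
        "NoncurrentVersionExpiration": {"NoncurrentDays": 30},
    },
    {
        "ID": "abort-stale-uploads",
        "Status": "Enabled",
        "Prefix": "",
        "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
    },
]
storage.set_bucket_lifecycle("my-bucket", lifecycle_rules)
```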
@@ -4,7 +4,6 @@ import pytest
 from xml.etree.ElementTree import fromstring


-# Helper to create file-like stream
 def _stream(data: bytes):
     return io.BytesIO(data)
@@ -19,13 +18,11 @@ class TestListObjectsV2:
     """Tests for ListObjectsV2 endpoint."""

     def test_list_objects_v2_basic(self, client, signer, storage):
-        # Create bucket and objects
         storage.create_bucket("v2-test")
         storage.put_object("v2-test", "file1.txt", _stream(b"hello"))
         storage.put_object("v2-test", "file2.txt", _stream(b"world"))
         storage.put_object("v2-test", "folder/file3.txt", _stream(b"nested"))

-        # ListObjectsV2 request
         headers = signer("GET", "/v2-test?list-type=2")
         resp = client.get("/v2-test", query_string={"list-type": "2"}, headers=headers)
         assert resp.status_code == 200
@@ -46,7 +43,6 @@ class TestListObjectsV2:
         storage.put_object("prefix-test", "photos/2024/mar.jpg", _stream(b"mar"))
         storage.put_object("prefix-test", "docs/readme.md", _stream(b"readme"))

-        # List with prefix and delimiter
         headers = signer("GET", "/prefix-test?list-type=2&prefix=photos/&delimiter=/")
         resp = client.get(
             "/prefix-test",
@@ -56,11 +52,10 @@ class TestListObjectsV2:
         assert resp.status_code == 200

         root = fromstring(resp.data)
-        # Should show common prefixes for 2023/ and 2024/
         prefixes = [el.find("Prefix").text for el in root.findall("CommonPrefixes")]
         assert "photos/2023/" in prefixes
         assert "photos/2024/" in prefixes
-        assert len(root.findall("Contents")) == 0  # No direct files under photos/
+        assert len(root.findall("Contents")) == 0


 class TestPutBucketVersioning:
@@ -78,7 +73,6 @@ class TestPutBucketVersioning:
         resp = client.put("/version-test", query_string={"versioning": ""}, data=payload, headers=headers)
         assert resp.status_code == 200

-        # Verify via GET
         headers = signer("GET", "/version-test?versioning")
         resp = client.get("/version-test", query_string={"versioning": ""}, headers=headers)
         root = fromstring(resp.data)
@@ -110,15 +104,13 @@ class TestDeleteBucketTagging:
         storage.create_bucket("tag-delete-test")
         storage.set_bucket_tags("tag-delete-test", [{"Key": "env", "Value": "test"}])

-        # Delete tags
         headers = signer("DELETE", "/tag-delete-test?tagging")
         resp = client.delete("/tag-delete-test", query_string={"tagging": ""}, headers=headers)
         assert resp.status_code == 204

-        # Verify tags are gone
         headers = signer("GET", "/tag-delete-test?tagging")
         resp = client.get("/tag-delete-test", query_string={"tagging": ""}, headers=headers)
-        assert resp.status_code == 404  # NoSuchTagSet
+        assert resp.status_code == 404


 class TestDeleteBucketCors:
@@ -130,15 +122,13 @@ class TestDeleteBucketCors:
             {"AllowedOrigins": ["*"], "AllowedMethods": ["GET"]}
         ])

-        # Delete CORS
         headers = signer("DELETE", "/cors-delete-test?cors")
         resp = client.delete("/cors-delete-test", query_string={"cors": ""}, headers=headers)
         assert resp.status_code == 204

-        # Verify CORS is gone
         headers = signer("GET", "/cors-delete-test?cors")
         resp = client.get("/cors-delete-test", query_string={"cors": ""}, headers=headers)
-        assert resp.status_code == 404  # NoSuchCORSConfiguration
+        assert resp.status_code == 404


 class TestGetBucketLocation:
@@ -173,7 +163,6 @@ class TestBucketAcl:
     def test_put_bucket_acl(self, client, signer, storage):
         storage.create_bucket("acl-put-test")

-        # PUT with canned ACL header
         headers = signer("PUT", "/acl-put-test?acl")
         headers["x-amz-acl"] = "public-read"
         resp = client.put("/acl-put-test", query_string={"acl": ""}, headers=headers)
@@ -188,7 +177,6 @@ class TestCopyObject:
         storage.create_bucket("copy-dst")
         storage.put_object("copy-src", "original.txt", _stream(b"original content"))

-        # Copy object
         headers = signer("PUT", "/copy-dst/copied.txt")
         headers["x-amz-copy-source"] = "/copy-src/original.txt"
         resp = client.put("/copy-dst/copied.txt", headers=headers)
@@ -199,7 +187,6 @@ class TestCopyObject:
         assert root.find("ETag") is not None
         assert root.find("LastModified") is not None

-        # Verify copy exists
         path = storage.get_object_path("copy-dst", "copied.txt")
         assert path.read_bytes() == b"original content"
@@ -208,7 +195,6 @@ class TestCopyObject:
         storage.create_bucket("meta-dst")
         storage.put_object("meta-src", "source.txt", _stream(b"data"), metadata={"old": "value"})

-        # Copy with REPLACE directive
         headers = signer("PUT", "/meta-dst/target.txt")
         headers["x-amz-copy-source"] = "/meta-src/source.txt"
         headers["x-amz-metadata-directive"] = "REPLACE"
@@ -216,7 +202,6 @@ class TestCopyObject:
         resp = client.put("/meta-dst/target.txt", headers=headers)
         assert resp.status_code == 200

-        # Verify new metadata (note: header keys are Title-Cased)
         meta = storage.get_object_metadata("meta-dst", "target.txt")
         assert "New" in meta or "new" in meta
         assert "old" not in meta and "Old" not in meta
@@ -229,7 +214,6 @@ class TestObjectTagging:
         storage.create_bucket("obj-tag-test")
         storage.put_object("obj-tag-test", "tagged.txt", _stream(b"content"))

-        # PUT tags
         payload = b"""<?xml version="1.0" encoding="UTF-8"?>
 <Tagging>
     <TagSet>
@@ -247,7 +231,6 @@ class TestObjectTagging:
         )
         assert resp.status_code == 204

-        # GET tags
         headers = signer("GET", "/obj-tag-test/tagged.txt?tagging")
         resp = client.get("/obj-tag-test/tagged.txt", query_string={"tagging": ""}, headers=headers)
         assert resp.status_code == 200
@@ -257,12 +240,10 @@ class TestObjectTagging:
         assert tags["project"] == "demo"
         assert tags["env"] == "test"

-        # DELETE tags
         headers = signer("DELETE", "/obj-tag-test/tagged.txt?tagging")
         resp = client.delete("/obj-tag-test/tagged.txt", query_string={"tagging": ""}, headers=headers)
         assert resp.status_code == 204

-        # Verify empty
         headers = signer("GET", "/obj-tag-test/tagged.txt?tagging")
         resp = client.get("/obj-tag-test/tagged.txt", query_string={"tagging": ""}, headers=headers)
         root = fromstring(resp.data)
@@ -272,7 +253,6 @@ class TestObjectTagging:
         storage.create_bucket("tag-limit")
         storage.put_object("tag-limit", "file.txt", _stream(b"x"))

-        # Try to set 11 tags (limit is 10)
         tags = "".join(f"<Tag><Key>key{i}</Key><Value>val{i}</Value></Tag>" for i in range(11))
         payload = f"<Tagging><TagSet>{tags}</TagSet></Tagging>".encode()
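The prefix/delimiter test above relies on the core ListObjectsV2 grouping rule: keys sharing a prefix up to the first delimiter past the requested prefix collapse into a single `CommonPrefixes` entry instead of appearing in `Contents`. A standalone sketch of that grouping (pure illustration, not the server's implementation):

```python
def group_keys(keys: list[str], prefix: str = "", delimiter: str = "/"):
    contents, common_prefixes = [], set()
    for key in keys:
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        if delimiter and delimiter in rest:
            # Everything below the first delimiter rolls up into one CommonPrefix.
            common_prefixes.add(prefix + rest.split(delimiter, 1)[0] + delimiter)
        else:
            contents.append(key)
    return contents, sorted(common_prefixes)


keys = ["photos/2023/jan.jpg", "photos/2024/feb.jpg", "photos/2024/mar.jpg", "docs/readme.md"]
assert group_keys(keys, "photos/", "/") == ([], ["photos/2023/", "photos/2024/"])
```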
tests/test_notifications.py (new file, 374 lines)
@@ -0,0 +1,374 @@
import json
import time
from datetime import datetime, timezone
from pathlib import Path
from unittest.mock import MagicMock, patch

import pytest

from app.notifications import (
    NotificationConfiguration,
    NotificationEvent,
    NotificationService,
    WebhookDestination,
)


class TestNotificationEvent:
    def test_default_values(self):
        event = NotificationEvent(
            event_name="s3:ObjectCreated:Put",
            bucket_name="test-bucket",
            object_key="test/key.txt",
        )
        assert event.event_name == "s3:ObjectCreated:Put"
        assert event.bucket_name == "test-bucket"
        assert event.object_key == "test/key.txt"
        assert event.object_size == 0
        assert event.etag == ""
        assert event.version_id is None
        assert event.request_id != ""

    def test_to_s3_event(self):
        event = NotificationEvent(
            event_name="s3:ObjectCreated:Put",
            bucket_name="my-bucket",
            object_key="my/object.txt",
            object_size=1024,
            etag="abc123",
            version_id="v1",
            source_ip="192.168.1.1",
            user_identity="user123",
        )
        result = event.to_s3_event()

        assert "Records" in result
        assert len(result["Records"]) == 1

        record = result["Records"][0]
        assert record["eventVersion"] == "2.1"
        assert record["eventSource"] == "myfsio:s3"
        assert record["eventName"] == "s3:ObjectCreated:Put"
        assert record["s3"]["bucket"]["name"] == "my-bucket"
        assert record["s3"]["object"]["key"] == "my/object.txt"
        assert record["s3"]["object"]["size"] == 1024
        assert record["s3"]["object"]["eTag"] == "abc123"
        assert record["s3"]["object"]["versionId"] == "v1"
        assert record["userIdentity"]["principalId"] == "user123"
        assert record["requestParameters"]["sourceIPAddress"] == "192.168.1.1"


class TestWebhookDestination:
    def test_default_values(self):
        dest = WebhookDestination(url="http://example.com/webhook")
        assert dest.url == "http://example.com/webhook"
        assert dest.headers == {}
        assert dest.timeout_seconds == 30
        assert dest.retry_count == 3
        assert dest.retry_delay_seconds == 1

    def test_to_dict(self):
        dest = WebhookDestination(
            url="http://example.com/webhook",
            headers={"X-Custom": "value"},
            timeout_seconds=60,
            retry_count=5,
            retry_delay_seconds=2,
        )
        result = dest.to_dict()
        assert result["url"] == "http://example.com/webhook"
        assert result["headers"] == {"X-Custom": "value"}
        assert result["timeout_seconds"] == 60
        assert result["retry_count"] == 5
        assert result["retry_delay_seconds"] == 2

    def test_from_dict(self):
        data = {
            "url": "http://hook.example.com",
            "headers": {"Authorization": "Bearer token"},
            "timeout_seconds": 45,
            "retry_count": 2,
            "retry_delay_seconds": 5,
        }
        dest = WebhookDestination.from_dict(data)
        assert dest.url == "http://hook.example.com"
        assert dest.headers == {"Authorization": "Bearer token"}
        assert dest.timeout_seconds == 45
        assert dest.retry_count == 2
        assert dest.retry_delay_seconds == 5


class TestNotificationConfiguration:
    def test_matches_event_exact_match(self):
        config = NotificationConfiguration(
            id="config1",
            events=["s3:ObjectCreated:Put"],
            destination=WebhookDestination(url="http://example.com"),
        )
        assert config.matches_event("s3:ObjectCreated:Put", "any/key.txt") is True
        assert config.matches_event("s3:ObjectCreated:Post", "any/key.txt") is False

    def test_matches_event_wildcard(self):
        config = NotificationConfiguration(
            id="config1",
            events=["s3:ObjectCreated:*"],
            destination=WebhookDestination(url="http://example.com"),
        )
        assert config.matches_event("s3:ObjectCreated:Put", "key.txt") is True
        assert config.matches_event("s3:ObjectCreated:Copy", "key.txt") is True
        assert config.matches_event("s3:ObjectRemoved:Delete", "key.txt") is False

    def test_matches_event_with_prefix_filter(self):
        config = NotificationConfiguration(
            id="config1",
            events=["s3:ObjectCreated:*"],
            destination=WebhookDestination(url="http://example.com"),
            prefix_filter="logs/",
        )
        assert config.matches_event("s3:ObjectCreated:Put", "logs/app.log") is True
        assert config.matches_event("s3:ObjectCreated:Put", "data/file.txt") is False

    def test_matches_event_with_suffix_filter(self):
        config = NotificationConfiguration(
            id="config1",
            events=["s3:ObjectCreated:*"],
            destination=WebhookDestination(url="http://example.com"),
            suffix_filter=".jpg",
        )
        assert config.matches_event("s3:ObjectCreated:Put", "photos/image.jpg") is True
        assert config.matches_event("s3:ObjectCreated:Put", "photos/image.png") is False

    def test_matches_event_with_both_filters(self):
        config = NotificationConfiguration(
            id="config1",
            events=["s3:ObjectCreated:*"],
            destination=WebhookDestination(url="http://example.com"),
            prefix_filter="images/",
            suffix_filter=".png",
        )
        assert config.matches_event("s3:ObjectCreated:Put", "images/photo.png") is True
        assert config.matches_event("s3:ObjectCreated:Put", "images/photo.jpg") is False
        assert config.matches_event("s3:ObjectCreated:Put", "documents/file.png") is False

    def test_to_dict(self):
        config = NotificationConfiguration(
            id="my-config",
            events=["s3:ObjectCreated:Put", "s3:ObjectRemoved:Delete"],
            destination=WebhookDestination(url="http://example.com"),
            prefix_filter="logs/",
            suffix_filter=".log",
        )
        result = config.to_dict()
        assert result["Id"] == "my-config"
        assert result["Events"] == ["s3:ObjectCreated:Put", "s3:ObjectRemoved:Delete"]
        assert "Destination" in result
        assert result["Filter"]["Key"]["FilterRules"][0]["Value"] == "logs/"
        assert result["Filter"]["Key"]["FilterRules"][1]["Value"] == ".log"

    def test_from_dict(self):
        data = {
            "Id": "parsed-config",
            "Events": ["s3:ObjectCreated:*"],
            "Destination": {"url": "http://hook.example.com"},
            "Filter": {
                "Key": {
                    "FilterRules": [
                        {"Name": "prefix", "Value": "data/"},
                        {"Name": "suffix", "Value": ".csv"},
                    ]
                }
            },
        }
        config = NotificationConfiguration.from_dict(data)
        assert config.id == "parsed-config"
        assert config.events == ["s3:ObjectCreated:*"]
        assert config.destination.url == "http://hook.example.com"
|
assert config.prefix_filter == "data/"
|
||||||
|
assert config.suffix_filter == ".csv"
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.fixture
|
||||||
|
def notification_service(tmp_path: Path):
|
||||||
|
service = NotificationService(tmp_path, worker_count=1)
|
||||||
|
yield service
|
||||||
|
service.shutdown()
|
||||||
|
|
||||||
|
|
||||||
|
class TestNotificationService:
|
||||||
|
def test_get_bucket_notifications_empty(self, notification_service):
|
||||||
|
result = notification_service.get_bucket_notifications("nonexistent-bucket")
|
||||||
|
assert result == []
|
||||||
|
|
||||||
|
def test_set_and_get_bucket_notifications(self, notification_service):
|
||||||
|
configs = [
|
||||||
|
NotificationConfiguration(
|
||||||
|
id="config1",
|
||||||
|
events=["s3:ObjectCreated:*"],
|
||||||
|
destination=WebhookDestination(url="http://example.com/webhook1"),
|
||||||
|
),
|
||||||
|
NotificationConfiguration(
|
||||||
|
id="config2",
|
||||||
|
events=["s3:ObjectRemoved:*"],
|
||||||
|
destination=WebhookDestination(url="http://example.com/webhook2"),
|
||||||
|
),
|
||||||
|
]
|
||||||
|
notification_service.set_bucket_notifications("my-bucket", configs)
|
||||||
|
|
||||||
|
retrieved = notification_service.get_bucket_notifications("my-bucket")
|
||||||
|
assert len(retrieved) == 2
|
||||||
|
assert retrieved[0].id == "config1"
|
||||||
|
assert retrieved[1].id == "config2"
|
||||||
|
|
||||||
|
def test_delete_bucket_notifications(self, notification_service):
|
||||||
|
configs = [
|
||||||
|
NotificationConfiguration(
|
||||||
|
id="to-delete",
|
||||||
|
events=["s3:ObjectCreated:*"],
|
||||||
|
destination=WebhookDestination(url="http://example.com"),
|
||||||
|
),
|
||||||
|
]
|
||||||
|
notification_service.set_bucket_notifications("delete-bucket", configs)
|
||||||
|
assert len(notification_service.get_bucket_notifications("delete-bucket")) == 1
|
||||||
|
|
||||||
|
notification_service.delete_bucket_notifications("delete-bucket")
|
||||||
|
notification_service._configs.clear()
|
||||||
|
assert len(notification_service.get_bucket_notifications("delete-bucket")) == 0
|
||||||
|
|
||||||
|
def test_emit_event_no_config(self, notification_service):
|
||||||
|
event = NotificationEvent(
|
||||||
|
event_name="s3:ObjectCreated:Put",
|
||||||
|
bucket_name="no-config-bucket",
|
||||||
|
object_key="test.txt",
|
||||||
|
)
|
||||||
|
notification_service.emit_event(event)
|
||||||
|
assert notification_service._stats["events_queued"] == 0
|
||||||
|
|
||||||
|
def test_emit_event_matching_config(self, notification_service):
|
||||||
|
configs = [
|
||||||
|
NotificationConfiguration(
|
||||||
|
id="match-config",
|
||||||
|
events=["s3:ObjectCreated:*"],
|
||||||
|
destination=WebhookDestination(url="http://example.com/webhook"),
|
||||||
|
),
|
||||||
|
]
|
||||||
|
notification_service.set_bucket_notifications("event-bucket", configs)
|
||||||
|
|
||||||
|
event = NotificationEvent(
|
||||||
|
event_name="s3:ObjectCreated:Put",
|
||||||
|
bucket_name="event-bucket",
|
||||||
|
object_key="test.txt",
|
||||||
|
)
|
||||||
|
notification_service.emit_event(event)
|
||||||
|
assert notification_service._stats["events_queued"] == 1
|
||||||
|
|
||||||
|
def test_emit_event_non_matching_config(self, notification_service):
|
||||||
|
configs = [
|
||||||
|
NotificationConfiguration(
|
||||||
|
id="delete-only",
|
||||||
|
events=["s3:ObjectRemoved:*"],
|
||||||
|
destination=WebhookDestination(url="http://example.com/webhook"),
|
||||||
|
),
|
||||||
|
]
|
||||||
|
notification_service.set_bucket_notifications("delete-bucket", configs)
|
||||||
|
|
||||||
|
event = NotificationEvent(
|
||||||
|
event_name="s3:ObjectCreated:Put",
|
||||||
|
bucket_name="delete-bucket",
|
||||||
|
object_key="test.txt",
|
||||||
|
)
|
||||||
|
notification_service.emit_event(event)
|
||||||
|
assert notification_service._stats["events_queued"] == 0
|
||||||
|
|
||||||
|
def test_emit_object_created(self, notification_service):
|
||||||
|
configs = [
|
||||||
|
NotificationConfiguration(
|
||||||
|
id="create-config",
|
||||||
|
events=["s3:ObjectCreated:Put"],
|
||||||
|
destination=WebhookDestination(url="http://example.com/webhook"),
|
||||||
|
),
|
||||||
|
]
|
||||||
|
notification_service.set_bucket_notifications("create-bucket", configs)
|
||||||
|
|
||||||
|
notification_service.emit_object_created(
|
||||||
|
"create-bucket",
|
||||||
|
"new-file.txt",
|
||||||
|
size=1024,
|
||||||
|
etag="abc123",
|
||||||
|
operation="Put",
|
||||||
|
)
|
||||||
|
assert notification_service._stats["events_queued"] == 1
|
||||||
|
|
||||||
|
def test_emit_object_removed(self, notification_service):
|
||||||
|
configs = [
|
||||||
|
NotificationConfiguration(
|
||||||
|
id="remove-config",
|
||||||
|
events=["s3:ObjectRemoved:Delete"],
|
||||||
|
destination=WebhookDestination(url="http://example.com/webhook"),
|
||||||
|
),
|
||||||
|
]
|
||||||
|
notification_service.set_bucket_notifications("remove-bucket", configs)
|
||||||
|
|
||||||
|
notification_service.emit_object_removed(
|
||||||
|
"remove-bucket",
|
||||||
|
"deleted-file.txt",
|
||||||
|
operation="Delete",
|
||||||
|
)
|
||||||
|
assert notification_service._stats["events_queued"] == 1
|
||||||
|
|
||||||
|
def test_get_stats(self, notification_service):
|
||||||
|
stats = notification_service.get_stats()
|
||||||
|
assert "events_queued" in stats
|
||||||
|
assert "events_sent" in stats
|
||||||
|
assert "events_failed" in stats
|
||||||
|
|
||||||
|
@patch("app.notifications.requests.post")
|
||||||
|
def test_send_notification_success(self, mock_post, notification_service):
|
||||||
|
mock_response = MagicMock()
|
||||||
|
mock_response.status_code = 200
|
||||||
|
mock_post.return_value = mock_response
|
||||||
|
|
||||||
|
event = NotificationEvent(
|
||||||
|
event_name="s3:ObjectCreated:Put",
|
||||||
|
bucket_name="test-bucket",
|
||||||
|
object_key="test.txt",
|
||||||
|
)
|
||||||
|
destination = WebhookDestination(url="http://example.com/webhook")
|
||||||
|
|
||||||
|
notification_service._send_notification(event, destination)
|
||||||
|
mock_post.assert_called_once()
|
||||||
|
|
||||||
|
@patch("app.notifications.requests.post")
|
||||||
|
def test_send_notification_retry_on_failure(self, mock_post, notification_service):
|
||||||
|
mock_response = MagicMock()
|
||||||
|
mock_response.status_code = 500
|
||||||
|
mock_response.text = "Internal Server Error"
|
||||||
|
mock_post.return_value = mock_response
|
||||||
|
|
||||||
|
event = NotificationEvent(
|
||||||
|
event_name="s3:ObjectCreated:Put",
|
||||||
|
bucket_name="test-bucket",
|
||||||
|
object_key="test.txt",
|
||||||
|
)
|
||||||
|
destination = WebhookDestination(
|
||||||
|
url="http://example.com/webhook",
|
||||||
|
retry_count=2,
|
||||||
|
retry_delay_seconds=0,
|
||||||
|
)
|
||||||
|
|
||||||
|
with pytest.raises(RuntimeError) as exc_info:
|
||||||
|
notification_service._send_notification(event, destination)
|
||||||
|
assert "Failed after 2 attempts" in str(exc_info.value)
|
||||||
|
assert mock_post.call_count == 2
|
||||||
|
|
||||||
|
def test_notification_caching(self, notification_service):
|
||||||
|
configs = [
|
||||||
|
NotificationConfiguration(
|
||||||
|
id="cached-config",
|
||||||
|
events=["s3:ObjectCreated:*"],
|
||||||
|
destination=WebhookDestination(url="http://example.com"),
|
||||||
|
),
|
||||||
|
]
|
||||||
|
notification_service.set_bucket_notifications("cached-bucket", configs)
|
||||||
|
|
||||||
|
notification_service.get_bucket_notifications("cached-bucket")
|
||||||
|
assert "cached-bucket" in notification_service._configs
|
||||||
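The `to_s3_event()` assertions above fully pin down the webhook payload shape. As a minimal consumer sketch (assumptions: a Flask app on the subscriber side; the `/hook` route and port are illustrative and not part of MyFSIO):

```python
# Minimal webhook consumer sketch. The route, port, and handler are
# illustrative assumptions; only the payload shape comes from the tests above.
from flask import Flask, request

app = Flask(__name__)


@app.post("/hook")
def hook():
    payload = request.get_json(force=True)
    for record in payload.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"{record['eventName']} -> s3://{bucket}/{key}")
    return "", 204


if __name__ == "__main__":
    app.run(port=8080)
```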
tests/test_object_lock.py (new file, 332 lines)
@@ -0,0 +1,332 @@
import json
from datetime import datetime, timedelta, timezone
from pathlib import Path

import pytest

from app.object_lock import (
    ObjectLockConfig,
    ObjectLockError,
    ObjectLockRetention,
    ObjectLockService,
    RetentionMode,
)


class TestRetentionMode:
    def test_governance_mode(self):
        assert RetentionMode.GOVERNANCE.value == "GOVERNANCE"

    def test_compliance_mode(self):
        assert RetentionMode.COMPLIANCE.value == "COMPLIANCE"


class TestObjectLockRetention:
    def test_to_dict(self):
        retain_until = datetime(2025, 12, 31, 23, 59, 59, tzinfo=timezone.utc)
        retention = ObjectLockRetention(
            mode=RetentionMode.GOVERNANCE,
            retain_until_date=retain_until,
        )
        result = retention.to_dict()
        assert result["Mode"] == "GOVERNANCE"
        assert "2025-12-31" in result["RetainUntilDate"]

    def test_from_dict(self):
        data = {
            "Mode": "COMPLIANCE",
            "RetainUntilDate": "2030-06-15T12:00:00+00:00",
        }
        retention = ObjectLockRetention.from_dict(data)
        assert retention is not None
        assert retention.mode == RetentionMode.COMPLIANCE
        assert retention.retain_until_date.year == 2030

    def test_from_dict_empty(self):
        result = ObjectLockRetention.from_dict({})
        assert result is None

    def test_from_dict_missing_mode(self):
        data = {"RetainUntilDate": "2030-06-15T12:00:00+00:00"}
        result = ObjectLockRetention.from_dict(data)
        assert result is None

    def test_from_dict_missing_date(self):
        data = {"Mode": "GOVERNANCE"}
        result = ObjectLockRetention.from_dict(data)
        assert result is None

    def test_is_expired_future_date(self):
        future = datetime.now(timezone.utc) + timedelta(days=30)
        retention = ObjectLockRetention(
            mode=RetentionMode.GOVERNANCE,
            retain_until_date=future,
        )
        assert retention.is_expired() is False

    def test_is_expired_past_date(self):
        past = datetime.now(timezone.utc) - timedelta(days=30)
        retention = ObjectLockRetention(
            mode=RetentionMode.GOVERNANCE,
            retain_until_date=past,
        )
        assert retention.is_expired() is True


class TestObjectLockConfig:
    def test_to_dict_enabled(self):
        config = ObjectLockConfig(enabled=True)
        result = config.to_dict()
        assert result["ObjectLockEnabled"] == "Enabled"

    def test_to_dict_disabled(self):
        config = ObjectLockConfig(enabled=False)
        result = config.to_dict()
        assert result["ObjectLockEnabled"] == "Disabled"

    def test_from_dict_enabled(self):
        data = {"ObjectLockEnabled": "Enabled"}
        config = ObjectLockConfig.from_dict(data)
        assert config.enabled is True

    def test_from_dict_disabled(self):
        data = {"ObjectLockEnabled": "Disabled"}
        config = ObjectLockConfig.from_dict(data)
        assert config.enabled is False

    def test_from_dict_with_default_retention_days(self):
        data = {
            "ObjectLockEnabled": "Enabled",
            "Rule": {
                "DefaultRetention": {
                    "Mode": "GOVERNANCE",
                    "Days": 30,
                }
            },
        }
        config = ObjectLockConfig.from_dict(data)
        assert config.enabled is True
        assert config.default_retention is not None
        assert config.default_retention.mode == RetentionMode.GOVERNANCE

    def test_from_dict_with_default_retention_years(self):
        data = {
            "ObjectLockEnabled": "Enabled",
            "Rule": {
                "DefaultRetention": {
                    "Mode": "COMPLIANCE",
                    "Years": 1,
                }
            },
        }
        config = ObjectLockConfig.from_dict(data)
        assert config.enabled is True
        assert config.default_retention is not None
        assert config.default_retention.mode == RetentionMode.COMPLIANCE


@pytest.fixture
def lock_service(tmp_path: Path):
    return ObjectLockService(tmp_path)


class TestObjectLockService:
    def test_get_bucket_lock_config_default(self, lock_service):
        config = lock_service.get_bucket_lock_config("nonexistent-bucket")
        assert config.enabled is False
        assert config.default_retention is None

    def test_set_and_get_bucket_lock_config(self, lock_service):
        config = ObjectLockConfig(enabled=True)
        lock_service.set_bucket_lock_config("my-bucket", config)

        retrieved = lock_service.get_bucket_lock_config("my-bucket")
        assert retrieved.enabled is True

    def test_enable_bucket_lock(self, lock_service):
        lock_service.enable_bucket_lock("lock-bucket")

        config = lock_service.get_bucket_lock_config("lock-bucket")
        assert config.enabled is True

    def test_is_bucket_lock_enabled(self, lock_service):
        assert lock_service.is_bucket_lock_enabled("new-bucket") is False

        lock_service.enable_bucket_lock("new-bucket")
        assert lock_service.is_bucket_lock_enabled("new-bucket") is True

    def test_get_object_retention_not_set(self, lock_service):
        result = lock_service.get_object_retention("bucket", "key.txt")
        assert result is None

    def test_set_and_get_object_retention(self, lock_service):
        future = datetime.now(timezone.utc) + timedelta(days=30)
        retention = ObjectLockRetention(
            mode=RetentionMode.GOVERNANCE,
            retain_until_date=future,
        )
        lock_service.set_object_retention("bucket", "key.txt", retention)

        retrieved = lock_service.get_object_retention("bucket", "key.txt")
        assert retrieved is not None
        assert retrieved.mode == RetentionMode.GOVERNANCE

    def test_cannot_modify_compliance_retention(self, lock_service):
        future = datetime.now(timezone.utc) + timedelta(days=30)
        retention = ObjectLockRetention(
            mode=RetentionMode.COMPLIANCE,
            retain_until_date=future,
        )
        lock_service.set_object_retention("bucket", "locked.txt", retention)

        new_retention = ObjectLockRetention(
            mode=RetentionMode.GOVERNANCE,
            retain_until_date=future + timedelta(days=10),
        )
        with pytest.raises(ObjectLockError) as exc_info:
            lock_service.set_object_retention("bucket", "locked.txt", new_retention)
        assert "COMPLIANCE" in str(exc_info.value)

    def test_cannot_modify_governance_without_bypass(self, lock_service):
        future = datetime.now(timezone.utc) + timedelta(days=30)
        retention = ObjectLockRetention(
            mode=RetentionMode.GOVERNANCE,
            retain_until_date=future,
        )
        lock_service.set_object_retention("bucket", "gov.txt", retention)

        new_retention = ObjectLockRetention(
            mode=RetentionMode.GOVERNANCE,
            retain_until_date=future + timedelta(days=10),
        )
        with pytest.raises(ObjectLockError) as exc_info:
            lock_service.set_object_retention("bucket", "gov.txt", new_retention)
        assert "GOVERNANCE" in str(exc_info.value)

    def test_can_modify_governance_with_bypass(self, lock_service):
        future = datetime.now(timezone.utc) + timedelta(days=30)
        retention = ObjectLockRetention(
            mode=RetentionMode.GOVERNANCE,
            retain_until_date=future,
        )
        lock_service.set_object_retention("bucket", "bypassable.txt", retention)

        new_retention = ObjectLockRetention(
            mode=RetentionMode.GOVERNANCE,
            retain_until_date=future + timedelta(days=10),
        )
        lock_service.set_object_retention("bucket", "bypassable.txt", new_retention, bypass_governance=True)
        retrieved = lock_service.get_object_retention("bucket", "bypassable.txt")
        assert retrieved.retain_until_date > future

    def test_can_modify_expired_retention(self, lock_service):
        past = datetime.now(timezone.utc) - timedelta(days=30)
        retention = ObjectLockRetention(
            mode=RetentionMode.COMPLIANCE,
            retain_until_date=past,
        )
        lock_service.set_object_retention("bucket", "expired.txt", retention)

        future = datetime.now(timezone.utc) + timedelta(days=30)
        new_retention = ObjectLockRetention(
            mode=RetentionMode.GOVERNANCE,
            retain_until_date=future,
        )
        lock_service.set_object_retention("bucket", "expired.txt", new_retention)
        retrieved = lock_service.get_object_retention("bucket", "expired.txt")
        assert retrieved.mode == RetentionMode.GOVERNANCE

    def test_get_legal_hold_not_set(self, lock_service):
        result = lock_service.get_legal_hold("bucket", "key.txt")
        assert result is False

    def test_set_and_get_legal_hold(self, lock_service):
        lock_service.set_legal_hold("bucket", "held.txt", True)
        assert lock_service.get_legal_hold("bucket", "held.txt") is True

        lock_service.set_legal_hold("bucket", "held.txt", False)
        assert lock_service.get_legal_hold("bucket", "held.txt") is False

    def test_can_delete_object_no_lock(self, lock_service):
        can_delete, reason = lock_service.can_delete_object("bucket", "unlocked.txt")
        assert can_delete is True
        assert reason == ""

    def test_cannot_delete_object_with_legal_hold(self, lock_service):
        lock_service.set_legal_hold("bucket", "held.txt", True)

        can_delete, reason = lock_service.can_delete_object("bucket", "held.txt")
        assert can_delete is False
        assert "legal hold" in reason.lower()

    def test_cannot_delete_object_with_compliance_retention(self, lock_service):
        future = datetime.now(timezone.utc) + timedelta(days=30)
        retention = ObjectLockRetention(
            mode=RetentionMode.COMPLIANCE,
            retain_until_date=future,
        )
        lock_service.set_object_retention("bucket", "compliant.txt", retention)

        can_delete, reason = lock_service.can_delete_object("bucket", "compliant.txt")
        assert can_delete is False
        assert "COMPLIANCE" in reason

    def test_cannot_delete_governance_without_bypass(self, lock_service):
        future = datetime.now(timezone.utc) + timedelta(days=30)
        retention = ObjectLockRetention(
            mode=RetentionMode.GOVERNANCE,
            retain_until_date=future,
        )
        lock_service.set_object_retention("bucket", "governed.txt", retention)

        can_delete, reason = lock_service.can_delete_object("bucket", "governed.txt")
        assert can_delete is False
        assert "GOVERNANCE" in reason

    def test_can_delete_governance_with_bypass(self, lock_service):
        future = datetime.now(timezone.utc) + timedelta(days=30)
        retention = ObjectLockRetention(
            mode=RetentionMode.GOVERNANCE,
            retain_until_date=future,
        )
        lock_service.set_object_retention("bucket", "governed.txt", retention)

        can_delete, reason = lock_service.can_delete_object("bucket", "governed.txt", bypass_governance=True)
        assert can_delete is True
        assert reason == ""

    def test_can_delete_expired_retention(self, lock_service):
        past = datetime.now(timezone.utc) - timedelta(days=30)
        retention = ObjectLockRetention(
            mode=RetentionMode.COMPLIANCE,
            retain_until_date=past,
        )
        lock_service.set_object_retention("bucket", "expired.txt", retention)

        can_delete, reason = lock_service.can_delete_object("bucket", "expired.txt")
        assert can_delete is True

    def test_can_overwrite_is_same_as_delete(self, lock_service):
        future = datetime.now(timezone.utc) + timedelta(days=30)
        retention = ObjectLockRetention(
            mode=RetentionMode.GOVERNANCE,
            retain_until_date=future,
        )
        lock_service.set_object_retention("bucket", "overwrite.txt", retention)

        can_overwrite, _ = lock_service.can_overwrite_object("bucket", "overwrite.txt")
        can_delete, _ = lock_service.can_delete_object("bucket", "overwrite.txt")
        assert can_overwrite == can_delete

    def test_delete_object_lock_metadata(self, lock_service):
        lock_service.set_legal_hold("bucket", "cleanup.txt", True)
        lock_service.delete_object_lock_metadata("bucket", "cleanup.txt")

        assert lock_service.get_legal_hold("bucket", "cleanup.txt") is False

    def test_config_caching(self, lock_service):
        config = ObjectLockConfig(enabled=True)
        lock_service.set_bucket_lock_config("cached-bucket", config)

        lock_service.get_bucket_lock_config("cached-bucket")
        assert "cached-bucket" in lock_service._config_cache
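Taken together, these cases define the delete gate: a legal hold always blocks, COMPLIANCE blocks until the retention date passes, and GOVERNANCE blocks unless explicitly bypassed. A minimal sketch of a guarded delete built on the same calls (the helper itself and the choice of `PermissionError` are illustrative assumptions):

```python
# Guarded delete sketch: only the ObjectLockService calls appear in the tests
# above; the helper function and PermissionError choice are assumptions.
from pathlib import Path

from app.object_lock import ObjectLockService

lock_service = ObjectLockService(Path("data"))


def delete_object_guarded(bucket: str, key: str, *, bypass_governance: bool = False) -> None:
    can_delete, reason = lock_service.can_delete_object(
        bucket, key, bypass_governance=bypass_governance
    )
    if not can_delete:
        # reason names the blocker: legal hold, COMPLIANCE, or GOVERNANCE.
        raise PermissionError(f"cannot delete s3://{bucket}/{key}: {reason}")
    # ... delete the object bytes here, then drop its lock metadata ...
    lock_service.delete_object_lock_metadata(bucket, key)
```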
tests/test_replication.py (new file, 287 lines)
@@ -0,0 +1,287 @@
import json
import time
from pathlib import Path
from unittest.mock import MagicMock, patch

import pytest

from app.connections import ConnectionStore, RemoteConnection
from app.replication import (
    ReplicationManager,
    ReplicationRule,
    ReplicationStats,
    REPLICATION_MODE_ALL,
    REPLICATION_MODE_NEW_ONLY,
    _create_s3_client,
)
from app.storage import ObjectStorage


@pytest.fixture
def storage(tmp_path: Path):
    storage_root = tmp_path / "data"
    storage_root.mkdir(parents=True)
    return ObjectStorage(storage_root)


@pytest.fixture
def connections(tmp_path: Path):
    connections_path = tmp_path / "connections.json"
    store = ConnectionStore(connections_path)
    conn = RemoteConnection(
        id="test-conn",
        name="Test Remote",
        endpoint_url="http://localhost:9000",
        access_key="remote-access",
        secret_key="remote-secret",
        region="us-east-1",
    )
    store.add(conn)
    return store


@pytest.fixture
def replication_manager(storage, connections, tmp_path):
    rules_path = tmp_path / "replication_rules.json"
    storage_root = tmp_path / "data"
    storage_root.mkdir(exist_ok=True)
    manager = ReplicationManager(storage, connections, rules_path, storage_root)
    yield manager
    manager.shutdown(wait=False)


class TestReplicationStats:
    def test_to_dict(self):
        stats = ReplicationStats(
            objects_synced=10,
            objects_pending=5,
            objects_orphaned=2,
            bytes_synced=1024,
            last_sync_at=1234567890.0,
            last_sync_key="test/key.txt",
        )
        result = stats.to_dict()
        assert result["objects_synced"] == 10
        assert result["objects_pending"] == 5
        assert result["objects_orphaned"] == 2
        assert result["bytes_synced"] == 1024
        assert result["last_sync_at"] == 1234567890.0
        assert result["last_sync_key"] == "test/key.txt"

    def test_from_dict(self):
        data = {
            "objects_synced": 15,
            "objects_pending": 3,
            "objects_orphaned": 1,
            "bytes_synced": 2048,
            "last_sync_at": 9876543210.0,
            "last_sync_key": "another/key.txt",
        }
        stats = ReplicationStats.from_dict(data)
        assert stats.objects_synced == 15
        assert stats.objects_pending == 3
        assert stats.objects_orphaned == 1
        assert stats.bytes_synced == 2048
        assert stats.last_sync_at == 9876543210.0
        assert stats.last_sync_key == "another/key.txt"

    def test_from_dict_with_defaults(self):
        stats = ReplicationStats.from_dict({})
        assert stats.objects_synced == 0
        assert stats.objects_pending == 0
        assert stats.objects_orphaned == 0
        assert stats.bytes_synced == 0
        assert stats.last_sync_at is None
        assert stats.last_sync_key is None


class TestReplicationRule:
    def test_to_dict(self):
        rule = ReplicationRule(
            bucket_name="source-bucket",
            target_connection_id="test-conn",
            target_bucket="dest-bucket",
            enabled=True,
            mode=REPLICATION_MODE_ALL,
            created_at=1234567890.0,
        )
        result = rule.to_dict()
        assert result["bucket_name"] == "source-bucket"
        assert result["target_connection_id"] == "test-conn"
        assert result["target_bucket"] == "dest-bucket"
        assert result["enabled"] is True
        assert result["mode"] == REPLICATION_MODE_ALL
        assert result["created_at"] == 1234567890.0
        assert "stats" in result

    def test_from_dict(self):
        data = {
            "bucket_name": "my-bucket",
            "target_connection_id": "conn-123",
            "target_bucket": "remote-bucket",
            "enabled": False,
            "mode": REPLICATION_MODE_NEW_ONLY,
            "created_at": 1111111111.0,
            "stats": {"objects_synced": 5},
        }
        rule = ReplicationRule.from_dict(data)
        assert rule.bucket_name == "my-bucket"
        assert rule.target_connection_id == "conn-123"
        assert rule.target_bucket == "remote-bucket"
        assert rule.enabled is False
        assert rule.mode == REPLICATION_MODE_NEW_ONLY
        assert rule.created_at == 1111111111.0
        assert rule.stats.objects_synced == 5

    def test_from_dict_defaults_mode(self):
        data = {
            "bucket_name": "my-bucket",
            "target_connection_id": "conn-123",
            "target_bucket": "remote-bucket",
        }
        rule = ReplicationRule.from_dict(data)
        assert rule.mode == REPLICATION_MODE_NEW_ONLY
        assert rule.created_at is None


class TestReplicationManager:
    def test_get_rule_not_exists(self, replication_manager):
        rule = replication_manager.get_rule("nonexistent-bucket")
        assert rule is None

    def test_set_and_get_rule(self, replication_manager):
        rule = ReplicationRule(
            bucket_name="my-bucket",
            target_connection_id="test-conn",
            target_bucket="remote-bucket",
            enabled=True,
            mode=REPLICATION_MODE_NEW_ONLY,
            created_at=time.time(),
        )
        replication_manager.set_rule(rule)

        retrieved = replication_manager.get_rule("my-bucket")
        assert retrieved is not None
        assert retrieved.bucket_name == "my-bucket"
        assert retrieved.target_connection_id == "test-conn"
        assert retrieved.target_bucket == "remote-bucket"

    def test_delete_rule(self, replication_manager):
        rule = ReplicationRule(
            bucket_name="to-delete",
            target_connection_id="test-conn",
            target_bucket="remote-bucket",
        )
        replication_manager.set_rule(rule)
        assert replication_manager.get_rule("to-delete") is not None

        replication_manager.delete_rule("to-delete")
        assert replication_manager.get_rule("to-delete") is None

    def test_save_and_reload_rules(self, replication_manager, tmp_path):
        rule = ReplicationRule(
            bucket_name="persistent-bucket",
            target_connection_id="test-conn",
            target_bucket="remote-bucket",
            enabled=True,
        )
        replication_manager.set_rule(rule)

        rules_path = tmp_path / "replication_rules.json"
        assert rules_path.exists()
        data = json.loads(rules_path.read_text())
        assert "persistent-bucket" in data

    @patch("app.replication._create_s3_client")
    def test_check_endpoint_health_success(self, mock_create_client, replication_manager, connections):
        mock_client = MagicMock()
        mock_client.list_buckets.return_value = {"Buckets": []}
        mock_create_client.return_value = mock_client

        conn = connections.get("test-conn")
        result = replication_manager.check_endpoint_health(conn)
        assert result is True
        mock_client.list_buckets.assert_called_once()

    @patch("app.replication._create_s3_client")
    def test_check_endpoint_health_failure(self, mock_create_client, replication_manager, connections):
        mock_client = MagicMock()
        mock_client.list_buckets.side_effect = Exception("Connection refused")
        mock_create_client.return_value = mock_client

        conn = connections.get("test-conn")
        result = replication_manager.check_endpoint_health(conn)
        assert result is False

    def test_trigger_replication_no_rule(self, replication_manager):
        replication_manager.trigger_replication("no-such-bucket", "test.txt", "write")

    def test_trigger_replication_disabled_rule(self, replication_manager):
        rule = ReplicationRule(
            bucket_name="disabled-bucket",
            target_connection_id="test-conn",
            target_bucket="remote-bucket",
            enabled=False,
        )
        replication_manager.set_rule(rule)
        replication_manager.trigger_replication("disabled-bucket", "test.txt", "write")

    def test_trigger_replication_missing_connection(self, replication_manager):
        rule = ReplicationRule(
            bucket_name="orphan-bucket",
            target_connection_id="missing-conn",
            target_bucket="remote-bucket",
            enabled=True,
        )
        replication_manager.set_rule(rule)
        replication_manager.trigger_replication("orphan-bucket", "test.txt", "write")

    def test_replicate_task_path_traversal_blocked(self, replication_manager, connections):
        rule = ReplicationRule(
            bucket_name="secure-bucket",
            target_connection_id="test-conn",
            target_bucket="remote-bucket",
            enabled=True,
        )
        replication_manager.set_rule(rule)
        conn = connections.get("test-conn")

        replication_manager._replicate_task("secure-bucket", "../../../etc/passwd", rule, conn, "write")
        replication_manager._replicate_task("secure-bucket", "/root/secret", rule, conn, "write")
        replication_manager._replicate_task("secure-bucket", "..\\..\\windows\\system32", rule, conn, "write")


class TestCreateS3Client:
    @patch("app.replication.boto3.client")
    def test_creates_client_with_correct_config(self, mock_boto_client):
        conn = RemoteConnection(
            id="test",
            name="Test",
            endpoint_url="http://localhost:9000",
            access_key="access",
            secret_key="secret",
            region="eu-west-1",
        )
        _create_s3_client(conn)

        mock_boto_client.assert_called_once()
        call_kwargs = mock_boto_client.call_args[1]
        assert call_kwargs["endpoint_url"] == "http://localhost:9000"
        assert call_kwargs["aws_access_key_id"] == "access"
        assert call_kwargs["aws_secret_access_key"] == "secret"
        assert call_kwargs["region_name"] == "eu-west-1"

    @patch("app.replication.boto3.client")
    def test_health_check_mode_minimal_retries(self, mock_boto_client):
        conn = RemoteConnection(
            id="test",
            name="Test",
            endpoint_url="http://localhost:9000",
            access_key="access",
            secret_key="secret",
        )
        _create_s3_client(conn, health_check=True)

        call_kwargs = mock_boto_client.call_args[1]
        config = call_kwargs["config"]
        assert config.retries["max_attempts"] == 1
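End to end, the pieces above compose as follows; a minimal sketch using the calls these tests exercise (paths, connection details, and bucket names are placeholders, and `wait=True` on shutdown is an assumption):

```python
# Replication wiring sketch: the calls mirror the tests above; the concrete
# paths, endpoint, credentials, and bucket names are placeholders.
import time
from pathlib import Path

from app.connections import ConnectionStore, RemoteConnection
from app.replication import REPLICATION_MODE_ALL, ReplicationManager, ReplicationRule
from app.storage import ObjectStorage

root = Path("data")
root.mkdir(exist_ok=True)
storage = ObjectStorage(root)

connections = ConnectionStore(Path("connections.json"))
connections.add(
    RemoteConnection(
        id="offsite",
        name="Offsite MinIO",
        endpoint_url="http://localhost:9000",
        access_key="remote-access",
        secret_key="remote-secret",
        region="us-east-1",
    )
)

manager = ReplicationManager(storage, connections, Path("replication_rules.json"), root)
manager.set_rule(
    ReplicationRule(
        bucket_name="my-bucket",
        target_connection_id="offsite",
        target_bucket="my-bucket-replica",
        enabled=True,
        mode=REPLICATION_MODE_ALL,
        created_at=time.time(),
    )
)

# Each write can then be fanned out asynchronously:
manager.trigger_replication("my-bucket", "hello.txt", "write")
manager.shutdown(wait=True)
```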
@@ -67,7 +67,6 @@ class TestUIBucketEncryption:
         app = _make_encryption_app(tmp_path)
         client = app.test_client()
 
-        # Login first
         client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)
 
         response = client.get("/ui/buckets/test-bucket?tab=properties")
@@ -82,14 +81,11 @@ class TestUIBucketEncryption:
         app = _make_encryption_app(tmp_path)
         client = app.test_client()
 
-        # Login
         client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)
 
-        # Get CSRF token
         response = client.get("/ui/buckets/test-bucket?tab=properties")
         csrf_token = get_csrf_token(response)
 
-        # Enable AES-256 encryption
         response = client.post(
             "/ui/buckets/test-bucket/encryption",
             data={
@@ -102,7 +98,6 @@ class TestUIBucketEncryption:
 
         assert response.status_code == 200
         html = response.data.decode("utf-8")
-        # Should see success message or enabled state
         assert "AES-256" in html or "encryption enabled" in html.lower()
 
     def test_enable_kms_encryption(self, tmp_path):
@@ -110,7 +105,6 @@ class TestUIBucketEncryption:
         app = _make_encryption_app(tmp_path, kms_enabled=True)
         client = app.test_client()
 
-        # Create a KMS key first
         with app.app_context():
             kms = app.extensions.get("kms")
             if kms:
@@ -119,14 +113,11 @@ class TestUIBucketEncryption:
             else:
                 pytest.skip("KMS not available")
 
-        # Login
         client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)
 
-        # Get CSRF token
         response = client.get("/ui/buckets/test-bucket?tab=properties")
         csrf_token = get_csrf_token(response)
 
-        # Enable KMS encryption
         response = client.post(
             "/ui/buckets/test-bucket/encryption",
             data={
@@ -147,10 +138,8 @@ class TestUIBucketEncryption:
         app = _make_encryption_app(tmp_path)
         client = app.test_client()
 
-        # Login
         client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)
 
-        # First enable encryption
         response = client.get("/ui/buckets/test-bucket?tab=properties")
         csrf_token = get_csrf_token(response)
 
@@ -163,7 +152,6 @@ class TestUIBucketEncryption:
             },
         )
 
-        # Now disable it
         response = client.get("/ui/buckets/test-bucket?tab=properties")
         csrf_token = get_csrf_token(response)
 
@@ -185,7 +173,6 @@ class TestUIBucketEncryption:
         app = _make_encryption_app(tmp_path)
         client = app.test_client()
 
-        # Login
         client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)
 
         response = client.get("/ui/buckets/test-bucket?tab=properties")
@@ -210,10 +197,8 @@ class TestUIBucketEncryption:
         app = _make_encryption_app(tmp_path)
         client = app.test_client()
 
-        # Login
         client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)
 
-        # Enable encryption
         response = client.get("/ui/buckets/test-bucket?tab=properties")
         csrf_token = get_csrf_token(response)
 
@@ -226,7 +211,6 @@ class TestUIBucketEncryption:
             },
         )
 
-        # Verify it's stored
         with app.app_context():
             storage = app.extensions["object_storage"]
             config = storage.get_bucket_encryption("test-bucket")
@@ -244,10 +228,8 @@ class TestUIEncryptionWithoutPermission:
         app = _make_encryption_app(tmp_path)
         client = app.test_client()
 
-        # Login as readonly user
         client.post("/ui/login", data={"access_key": "readonly", "secret_key": "secret"}, follow_redirects=True)
 
-        # This should fail or be rejected
         response = client.get("/ui/buckets/test-bucket?tab=properties")
         csrf_token = get_csrf_token(response)
 
@@ -261,8 +243,6 @@ class TestUIEncryptionWithoutPermission:
             follow_redirects=True,
         )
 
-        # Should either redirect with error or show permission denied
         assert response.status_code == 200
         html = response.data.decode("utf-8")
-        # Should contain error about permission denied
         assert "Access denied" in html or "permission" in html.lower() or "not authorized" in html.lower()