Compare commits
99 Commits
| SHA1 | Author | Date |
|---|---|---|
| caf01d6ada | | |
| a5d19e2982 | | |
| 692e7e3a6e | | |
| 78dba93ee0 | | |
| 93a5aa6618 | | |
| 9ab750650c | | |
| 609e9db2f7 | | |
| 94a55cf2b7 | | |
| b9cfc45aa2 | | |
| 2d60e36fbf | | |
| c78f7fa6b0 | | |
| b3dce8d13e | | |
| e792b86485 | | |
| cdb86aeea7 | | |
| cdbc156b5b | | |
| 1df8ff9d25 | | |
| 05f1b00473 | | |
| 5ebc97300e | | |
| d2f9c3bded | | |
| 9f347f2caa | | |
| 4ab58e59c2 | | |
| 32232211a1 | | |
| bb366cb4cd | | |
| 1cacb80dd6 | | |
| e89bbb62dc | | |
| c8eb3de629 | | |
| a2745ff2ee | | |
| 9165e365e6 | | |
| 01e26754e8 | | |
| b592fa9fdb | | |
| cd9734b398 | | |
| 90893cac27 | | |
| 6e659902bd | | |
| 39a707ecbc | | |
| 4199f8e6c7 | | |
| adc6770273 | | |
| f5451c162b | | |
| aab9ef696a | | |
| be48f59452 | | |
| 86c04f85f6 | | |
| 28cb656d94 | | |
| 992d9eccd9 | | |
| 40f3192c5c | | |
| 2498b950f6 | | |
| 97435f15e5 | | |
| 3c44152fc6 | | |
| 97860669ec | | |
| 4a5dd76286 | | |
| d2dc293722 | | |
| 397515edce | | |
| 563bb8fa6a | | |
| 980fced7e4 | | |
| 5ccf53b688 | | |
| 4d4256830a | | |
| 137e3b7b68 | | |
| bae5009ec4 | | |
| 114e684cb8 | | |
| 5d161c1d92 | | |
| f160827b41 | | |
| 9368715b16 | | |
| 453ac6ea30 | | |
| 804f46d11e | | |
| 766dbb18be | | |
| 590a39ca80 | | |
| 53326f4e41 | | |
| 233780617f | | |
| 6a31a9082e | | |
| aaa230b19b | | |
| 86138636db | | |
| b2f4d1b5db | | |
| cee28c9f81 | | |
| 85ee5b9388 | | |
| e6ee341b93 | | |
| 92cf8825cf | | |
| ef781ae0b1 | | |
| 37d372c617 | | |
| fd8fb21517 | | |
| a095616569 | | |
| c6cbe822e1 | | |
| dddab6dbbc | | |
| 015c9cb52d | | |
| c8b1c33118 | | |
| ebef3dfa57 | | |
| 1116353d0f | | |
| e4b92a32a1 | | |
| 57c40dcdcc | | |
| 7d1735a59f | | |
| 9064f9d60e | | |
| 36c08b0ac1 | | |
| ec5d52f208 | | |
| 96de6164d1 | | |
| 8c00d7bd4b | | |
| a32d9dbd77 | | |
| fe3eacd2be | | |
| 471cf5a305 | | |
| 840fd176d3 | | |
| 5350d04ba5 | | |
| f2daa8a8a3 | | |
| e287b59645 | | |
**.dockerignore** — new file (13)

```diff
@@ -0,0 +1,13 @@
+.git
+.gitignore
+.venv
+__pycache__
+*.pyc
+*.pyo
+*.pyd
+.pytest_cache
+.coverage
+htmlcov
+logs
+data
+tmp
```
**Dockerfile** (13)

```diff
@@ -1,5 +1,5 @@
 # syntax=docker/dockerfile:1.7
-FROM python:3.11-slim
+FROM python:3.12.12-slim
 
 ENV PYTHONDONTWRITEBYTECODE=1 \
     PYTHONUNBUFFERED=1
@@ -16,9 +16,14 @@ RUN pip install --no-cache-dir -r requirements.txt
 
 COPY . .
 
-# Drop privileges
-RUN useradd -m -u 1000 myfsio \
+# Make entrypoint executable
+RUN chmod +x docker-entrypoint.sh
+
+# Create data directory and set permissions
+RUN mkdir -p /app/data \
+    && useradd -m -u 1000 myfsio \
     && chown -R myfsio:myfsio /app
 
 USER myfsio
 
 EXPOSE 5000 5100
@@ -29,4 +34,4 @@ ENV APP_HOST=0.0.0.0 \
 HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
     CMD python -c "import requests; requests.get('http://localhost:5000/healthz', timeout=2)"
 
-CMD ["python", "run.py", "--mode", "both"]
+CMD ["./docker-entrypoint.sh"]
```
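The updated HEALTHCHECK probes `/healthz` by shelling out to `python -c "import requests; …"`, which assumes `requests` is importable inside the image. As an aside, a standard-library-only probe with the same behaviour could look like this (an illustrative sketch, not a file from this repository; only the `/healthz` path and port come from the diff above):

```python
import urllib.error
import urllib.request


def probe(url: str, timeout: float = 2.0) -> bool:
    """Return True if the endpoint answers with an HTTP 2xx within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (urllib.error.URLError, OSError):
        return False


# A container healthcheck would translate the boolean into an exit code, e.g.:
# sys.exit(0 if probe("http://localhost:5000/healthz") else 1)
```

This avoids the check itself failing when a third-party dependency is missing from the runtime image.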
**README.md** (300)

````diff
@@ -1,117 +1,251 @@
-# MyFSIO (Flask S3 + IAM)
+# MyFSIO
 
-MyFSIO is a batteries-included, Flask-based recreation of Amazon S3 and IAM workflows built for local development. The design mirrors the [AWS S3 documentation](https://docs.aws.amazon.com/s3/) wherever practical: bucket naming, Signature Version 4 presigning, Version 2012-10-17 bucket policies, IAM-style users, and familiar REST endpoints.
+A lightweight, S3-compatible object storage system built with Flask. MyFSIO implements core AWS S3 REST API operations with filesystem-backed storage, making it ideal for local development, testing, and self-hosted storage scenarios.
 
-## Why MyFSIO?
+## Features
 
-- **Dual servers:** Run both the API (port 5000) and UI (port 5100) with a single command: `python run.py`.
-- **IAM + access keys:** Users, access keys, key rotation, and bucket-scoped actions (`list/read/write/delete/policy`) now live in `data/.myfsio.sys/config/iam.json` and are editable from the IAM dashboard.
-- **Bucket policies + hot reload:** `data/.myfsio.sys/config/bucket_policies.json` uses AWS' policy grammar (Version `2012-10-17`) with a built-in watcher, so editing the JSON file applies immediately. The UI also ships Public/Private/Custom presets for faster edits.
-- **Presigned URLs everywhere:** Signature Version 4 presigned URLs respect IAM + bucket policies and replace the now-removed "share link" feature for public access scenarios.
-- **Modern UI:** Responsive tables, quick filters, preview sidebar, object-level delete buttons, a presign modal, and an inline JSON policy editor that respects dark mode keep bucket management friendly.
-- **Tests & health:** `/healthz` for smoke checks and `pytest` coverage for IAM, CRUD, presign, and policy flows.
+**Core Storage**
+- S3-compatible REST API with AWS Signature Version 4 authentication
+- Bucket and object CRUD operations
+- Object versioning with version history
+- Multipart uploads for large files
+- Presigned URLs (1 second to 7 days validity)
 
-## Architecture at a Glance
+**Security & Access Control**
+- IAM users with access key management and rotation
+- Bucket policies (AWS Policy Version 2012-10-17)
+- Server-side encryption (SSE-S3 and SSE-KMS)
+- Built-in Key Management Service (KMS)
+- Rate limiting per endpoint
+
+**Advanced Features**
+- Cross-bucket replication to remote S3-compatible endpoints
+- Hot-reload for bucket policies (no restart required)
+- CORS configuration per bucket
+
+**Management UI**
+- Web console for bucket and object management
+- IAM dashboard for user administration
+- Inline JSON policy editor with presets
+- Object browser with folder navigation and bulk operations
+- Dark mode support
+
+## Architecture
 
 ```
-+-----------------+       +----------------+
-| API Server      |<----->| Object storage |
-| (port 5000)     |       | (filesystem)   |
-| - S3 routes     |       +----------------+
-| - Presigned URLs |
-| - Bucket policy |
-+-----------------+
-        ^
-        |
-+-----------------+
-| UI Server       |
-| (port 5100)     |
-| - Auth console  |
-| - IAM dashboard |
-| - Bucket editor |
-+-----------------+
++------------------+          +------------------+
+|   API Server     |          |   UI Server      |
+|   (port 5000)    |          |   (port 5100)    |
+|                  |          |                  |
+| - S3 REST API    |<-------->| - Web Console    |
+| - SigV4 Auth     |          | - IAM Dashboard  |
+| - Presign URLs   |          | - Bucket Editor  |
++--------+---------+          +------------------+
+         |
+         v
++------------------+          +------------------+
+|  Object Storage  |          | System Metadata  |
+|  (filesystem)    |          | (.myfsio.sys/)   |
+|                  |          |                  |
+|  data/<bucket>/  |          | - IAM config     |
+|    <objects>     |          | - Bucket policies|
+|                  |          | - Encryption keys|
++------------------+          +------------------+
 ```
 
-Both apps load the same configuration via `AppConfig` so IAM data and bucket policies stay consistent no matter which process you run.
-Bucket policies are automatically reloaded whenever `bucket_policies.json` changes—no restarts required.
-
-## Getting Started
+## Quick Start
 
 ```bash
+# Clone and setup
+git clone https://gitea.jzwsite.com/kqjy/MyFSIO
+cd s3
 python -m venv .venv
-. .venv/Scripts/activate  # PowerShell: .\.venv\Scripts\Activate.ps1
+
+# Activate virtual environment
+# Windows PowerShell:
+.\.venv\Scripts\Activate.ps1
+# Windows CMD:
+.venv\Scripts\activate.bat
+# Linux/macOS:
+source .venv/bin/activate
+
+# Install dependencies
 pip install -r requirements.txt
 
-# Run both API and UI (default)
+# Start both servers
 python run.py
 
-# Or run individually:
-# python run.py --mode api
-# python run.py --mode ui
+# Or start individually
+python run.py --mode api   # API only (port 5000)
+python run.py --mode ui    # UI only (port 5100)
 ```
 
-Visit `http://127.0.0.1:5100/ui` for the console and `http://127.0.0.1:5000/` for the raw API. Override ports/hosts with the environment variables listed below.
+**Default Credentials:** `localadmin` / `localadmin`
 
-## IAM, Access Keys, and Bucket Policies
+- **Web Console:** http://127.0.0.1:5100/ui
+- **API Endpoint:** http://127.0.0.1:5000
 
-- First run creates `data/.myfsio.sys/config/iam.json` with `localadmin / localadmin` (full control). Sign in via the UI, then use the **IAM** tab to create users, rotate secrets, or edit inline policies without touching JSON by hand.
-- Bucket policies live in `data/.myfsio.sys/config/bucket_policies.json` and follow the AWS `arn:aws:s3:::bucket/key` resource syntax with Version `2012-10-17`. Attach/replace/remove policies from the bucket detail page or edit the JSON by hand—changes hot reload automatically.
-- IAM actions include extended verbs (`iam:list_users`, `iam:create_user`, `iam:update_policy`, etc.) so you can control who is allowed to manage other users and policies.
-
-### Bucket Policy Presets & Hot Reload
-
-- **Presets:** Every bucket detail view includes Public (read-only), Private (detach policy), and Custom presets. Public auto-populates a policy that grants anonymous `s3:ListBucket` + `s3:GetObject` access to the entire bucket.
-- **Custom drafts:** Switching back to Custom restores your last manual edit so you can toggle between presets without losing work.
-- **Hot reload:** The server watches `bucket_policies.json` and reloads statements on-the-fly—ideal for editing policies in your favorite editor while testing via curl or the UI.
-
-## Presigned URLs
-
-Presigned URLs follow the AWS CLI playbook:
-
-- Call `POST /presign/<bucket>/<key>` (or use the "Presign" button in the UI) to request a Signature Version 4 URL valid for 1 second to 7 days.
-- The generated URL honors IAM permissions and bucket-policy decisions at generation-time and again when somebody fetches it.
-- Because presigned URLs cover both authenticated and public sharing scenarios, the legacy "share link" feature has been removed.
-
 ## Configuration
 
 | Variable | Default | Description |
-| --- | --- | --- |
-| `STORAGE_ROOT` | `<project>/data` | Filesystem root for bucket directories |
-| `MAX_UPLOAD_SIZE` | `1073741824` | Maximum upload size (bytes) |
-| `UI_PAGE_SIZE` | `100` | `MaxKeys` hint for listings |
-| `SECRET_KEY` | `dev-secret-key` | Flask session secret for the UI |
-| `IAM_CONFIG` | `<project>/data/.myfsio.sys/config/iam.json` | IAM user + policy store |
-| `BUCKET_POLICY_PATH` | `<project>/data/.myfsio.sys/config/bucket_policies.json` | Bucket policy store |
-| `API_BASE_URL` | `http://127.0.0.1:5000` | Used by the UI when calling API endpoints (presign, bucket policy) |
-| `AWS_REGION` | `us-east-1` | Region used in Signature V4 scope |
-| `AWS_SERVICE` | `s3` | Service used in Signature V4 scope |
+|----------|---------|-------------|
+| `STORAGE_ROOT` | `./data` | Filesystem root for bucket storage |
+| `IAM_CONFIG` | `.myfsio.sys/config/iam.json` | IAM user and policy store |
+| `BUCKET_POLICY_PATH` | `.myfsio.sys/config/bucket_policies.json` | Bucket policy store |
+| `API_BASE_URL` | `http://127.0.0.1:5000` | API endpoint for UI calls |
+| `MAX_UPLOAD_SIZE` | `1073741824` | Maximum upload size in bytes (1 GB) |
+| `MULTIPART_MIN_PART_SIZE` | `5242880` | Minimum multipart part size (5 MB) |
+| `UI_PAGE_SIZE` | `100` | Default page size for listings |
+| `SECRET_KEY` | `dev-secret-key` | Flask session secret |
+| `AWS_REGION` | `us-east-1` | Region for SigV4 signing |
+| `AWS_SERVICE` | `s3` | Service name for SigV4 signing |
+| `ENCRYPTION_ENABLED` | `false` | Enable server-side encryption |
+| `KMS_ENABLED` | `false` | Enable Key Management Service |
+| `LOG_LEVEL` | `INFO` | Logging verbosity |
 
-> Buckets now live directly under `data/` while system metadata (versions, IAM, bucket policies, multipart uploads, etc.) lives in `data/.myfsio.sys`. Existing installs can keep their environment variables, but the defaults now match MinIO's `data/.system` pattern for easier bind-mounting.
+## Data Layout
 
-## API Cheatsheet (IAM headers required)
-
 ```
-GET    /                       -> List buckets (XML)
-PUT    /<bucket>               -> Create bucket
-DELETE /<bucket>               -> Delete bucket (must be empty)
-GET    /<bucket>               -> List objects (XML)
-PUT    /<bucket>/<key>         -> Upload object (binary stream)
-GET    /<bucket>/<key>         -> Download object
-DELETE /<bucket>/<key>         -> Delete object
-POST   /presign/<bucket>/<key> -> Generate AWS SigV4 presigned URL (JSON)
-GET    /bucket-policy/<bucket> -> Fetch bucket policy (JSON)
-PUT    /bucket-policy/<bucket> -> Attach/replace bucket policy (JSON)
-DELETE /bucket-policy/<bucket> -> Remove bucket policy
+data/
+├── <bucket>/                      # User buckets with objects
+└── .myfsio.sys/                   # System metadata
+    ├── config/
+    │   ├── iam.json               # IAM users and policies
+    │   ├── bucket_policies.json   # Bucket policies
+    │   ├── replication_rules.json
+    │   └── connections.json       # Remote S3 connections
+    ├── buckets/<bucket>/
+    │   ├── meta/                  # Object metadata (.meta.json)
+    │   ├── versions/              # Archived object versions
+    │   └── .bucket.json           # Bucket config (versioning, CORS)
+    ├── multipart/                 # Active multipart uploads
+    └── keys/                      # Encryption keys (SSE-S3/KMS)
 ```
 
+## API Reference
+
+All endpoints require AWS Signature Version 4 authentication unless using presigned URLs or public bucket policies.
+
+### Bucket Operations
+
+| Method | Endpoint | Description |
+|--------|----------|-------------|
+| `GET` | `/` | List all buckets |
+| `PUT` | `/<bucket>` | Create bucket |
+| `DELETE` | `/<bucket>` | Delete bucket (must be empty) |
+| `HEAD` | `/<bucket>` | Check bucket exists |
+
+### Object Operations
+
+| Method | Endpoint | Description |
+|--------|----------|-------------|
+| `GET` | `/<bucket>` | List objects (supports `list-type=2`) |
+| `PUT` | `/<bucket>/<key>` | Upload object |
+| `GET` | `/<bucket>/<key>` | Download object |
+| `DELETE` | `/<bucket>/<key>` | Delete object |
+| `HEAD` | `/<bucket>/<key>` | Get object metadata |
+| `POST` | `/<bucket>/<key>?uploads` | Initiate multipart upload |
+| `PUT` | `/<bucket>/<key>?partNumber=N&uploadId=X` | Upload part |
+| `POST` | `/<bucket>/<key>?uploadId=X` | Complete multipart upload |
+| `DELETE` | `/<bucket>/<key>?uploadId=X` | Abort multipart upload |
+
+### Presigned URLs
+
+| Method | Endpoint | Description |
+|--------|----------|-------------|
+| `POST` | `/presign/<bucket>/<key>` | Generate presigned URL |
+
+### Bucket Policies
+
+| Method | Endpoint | Description |
+|--------|----------|-------------|
+| `GET` | `/bucket-policy/<bucket>` | Get bucket policy |
+| `PUT` | `/bucket-policy/<bucket>` | Set bucket policy |
+| `DELETE` | `/bucket-policy/<bucket>` | Delete bucket policy |
+
+### Versioning
+
+| Method | Endpoint | Description |
+|--------|----------|-------------|
+| `GET` | `/<bucket>/<key>?versionId=X` | Get specific version |
+| `DELETE` | `/<bucket>/<key>?versionId=X` | Delete specific version |
+| `GET` | `/<bucket>?versions` | List object versions |
+
+### Health Check
+
+| Method | Endpoint | Description |
+|--------|----------|-------------|
+| `GET` | `/healthz` | Health check endpoint |
+
+## IAM & Access Control
+
+### Users and Access Keys
+
+On first run, MyFSIO creates a default admin user (`localadmin`/`localadmin`). Use the IAM dashboard to:
+
+- Create and delete users
+- Generate and rotate access keys
+- Attach inline policies to users
+- Control IAM management permissions
+
+### Bucket Policies
+
+Bucket policies follow AWS policy grammar (Version `2012-10-17`) with support for:
+
+- Principal-based access (`*` for anonymous, specific users)
+- Action-based permissions (`s3:GetObject`, `s3:PutObject`, etc.)
+- Resource patterns (`arn:aws:s3:::bucket/*`)
+- Condition keys
+
+**Policy Presets:**
+- **Public:** Grants anonymous read access (`s3:GetObject`, `s3:ListBucket`)
+- **Private:** Removes bucket policy (IAM-only access)
+- **Custom:** Manual policy editing with draft preservation
+
+Policies hot-reload when the JSON file changes.
+
+## Server-Side Encryption
+
+MyFSIO supports two encryption modes:
+
+- **SSE-S3:** Server-managed keys with automatic key rotation
+- **SSE-KMS:** Customer-managed keys via built-in KMS
+
+Enable encryption with:
+
+```bash
+ENCRYPTION_ENABLED=true python run.py
+```
+
+## Cross-Bucket Replication
+
+Replicate objects to remote S3-compatible endpoints:
+
+1. Configure remote connections in the UI
+2. Create replication rules specifying source/destination
+3. Objects are automatically replicated on upload
+
+## Docker
+
+```bash
+docker build -t myfsio .
+docker run -p 5000:5000 -p 5100:5100 -v ./data:/app/data myfsio
+```
+
 ## Testing
 
 ```bash
-pytest -q
+# Run all tests
+pytest tests/ -v
+
+# Run specific test file
+pytest tests/test_api.py -v
+
+# Run with coverage
+pytest tests/ --cov=app --cov-report=html
 ```
 
 ## References
 
-- [Amazon Simple Storage Service Documentation](https://docs.aws.amazon.com/s3/)
-- [Signature Version 4 Signing Process](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html)
-- [Amazon S3 Bucket Policy Examples](https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-bucket-policies.html)
+- [Amazon S3 Documentation](https://docs.aws.amazon.com/s3/)
+- [AWS Signature Version 4](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html)
+- [S3 Bucket Policy Examples](https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-bucket-policies.html)
````
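The rewritten README leans on AWS Signature Version 4 for both authenticated requests and presigned URLs, with `AWS_REGION` and `AWS_SERVICE` feeding the signing scope. The signing-key derivation SigV4 prescribes is a short chained HMAC, sketched here from the AWS specification linked in the References section rather than from MyFSIO's own code; the secret key is the well-known example value from the AWS docs:

```python
import hashlib
import hmac


def derive_signing_key(secret_key: str, datestamp: str, region: str, service: str) -> bytes:
    """SigV4 signing key: HMAC-SHA256 chained over date, region, service, and a fixed suffix."""

    def sign(key: bytes, msg: str) -> bytes:
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

    k_date = sign(("AWS4" + secret_key).encode("utf-8"), datestamp)
    k_region = sign(k_date, region)
    k_service = sign(k_region, service)
    return sign(k_service, "aws4_request")


# Scope values mirror the README defaults: AWS_REGION=us-east-1, AWS_SERVICE=s3.
key = derive_signing_key("wJalrXUtnFEMI/K7MDENG+bPxRfiCYEXAMPLEKEY", "20130524", "us-east-1", "s3")
```

The derived key, not the raw secret, signs the canonical request, which is why a presigned URL can be handed out without exposing the secret itself.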
181
app/__init__.py
181
app/__init__.py
@@ -1,29 +1,64 @@
|
|||||||
"""Application factory for the mini S3-compatible object store."""
|
|
||||||
from __future__ import annotations
|
from __future__ import annotations
|
||||||
|
|
||||||
import logging
|
import logging
|
||||||
|
import shutil
|
||||||
|
import sys
|
||||||
import time
|
import time
|
||||||
import uuid
|
import uuid
|
||||||
from logging.handlers import RotatingFileHandler
|
from logging.handlers import RotatingFileHandler
|
||||||
from pathlib import Path
|
from pathlib import Path
|
||||||
from datetime import timedelta
|
from datetime import timedelta
|
||||||
from typing import Any, Dict, Optional
|
from typing import Any, Dict, List, Optional
|
||||||
|
|
||||||
from flask import Flask, g, has_request_context, redirect, render_template, request, url_for
|
from flask import Flask, g, has_request_context, redirect, render_template, request, url_for
|
||||||
from flask_cors import CORS
|
from flask_cors import CORS
|
||||||
from flask_wtf.csrf import CSRFError
|
from flask_wtf.csrf import CSRFError
|
||||||
|
from werkzeug.middleware.proxy_fix import ProxyFix
|
||||||
|
|
||||||
|
from .access_logging import AccessLoggingService
|
||||||
|
from .acl import AclService
|
||||||
from .bucket_policies import BucketPolicyStore
|
from .bucket_policies import BucketPolicyStore
|
||||||
from .config import AppConfig
|
from .config import AppConfig
|
||||||
from .connections import ConnectionStore
|
from .connections import ConnectionStore
|
||||||
|
from .encryption import EncryptionManager
|
||||||
from .extensions import limiter, csrf
|
from .extensions import limiter, csrf
|
||||||
from .iam import IamService
|
from .iam import IamService
|
||||||
|
from .kms import KMSManager
|
||||||
|
from .lifecycle import LifecycleManager
|
||||||
|
from .notifications import NotificationService
|
||||||
|
from .object_lock import ObjectLockService
|
||||||
from .replication import ReplicationManager
|
from .replication import ReplicationManager
|
||||||
from .secret_store import EphemeralSecretStore
|
from .secret_store import EphemeralSecretStore
|
||||||
from .storage import ObjectStorage
|
from .storage import ObjectStorage
|
||||||
from .version import get_version
|
from .version import get_version
|
||||||
|
|
||||||
|
|
||||||
|
def _migrate_config_file(active_path: Path, legacy_paths: List[Path]) -> Path:
|
||||||
|
"""Migrate config file from legacy locations to the active path.
|
||||||
|
|
||||||
|
Checks each legacy path in order and moves the first one found to the active path.
|
||||||
|
This ensures backward compatibility for users upgrading from older versions.
|
||||||
|
"""
|
||||||
|
active_path.parent.mkdir(parents=True, exist_ok=True)
|
||||||
|
|
||||||
|
if active_path.exists():
|
||||||
|
return active_path
|
||||||
|
|
||||||
|
for legacy_path in legacy_paths:
|
||||||
|
if legacy_path.exists():
|
||||||
|
try:
|
||||||
|
shutil.move(str(legacy_path), str(active_path))
|
||||||
|
except OSError:
|
||||||
|
shutil.copy2(legacy_path, active_path)
|
||||||
|
try:
|
||||||
|
legacy_path.unlink(missing_ok=True)
|
||||||
|
except OSError:
|
||||||
|
pass
|
||||||
|
break
|
||||||
|
|
||||||
|
return active_path
|
||||||
|
|
||||||
|
|
||||||
def create_app(
|
def create_app(
|
||||||
test_config: Optional[Dict[str, Any]] = None,
|
test_config: Optional[Dict[str, Any]] = None,
|
||||||
*,
|
*,
|
||||||
@@ -33,7 +68,11 @@ def create_app(
|
|||||||
"""Create and configure the Flask application."""
|
"""Create and configure the Flask application."""
|
||||||
config = AppConfig.from_env(test_config)
|
config = AppConfig.from_env(test_config)
|
||||||
|
|
||||||
project_root = Path(__file__).resolve().parent.parent
|
if getattr(sys, "frozen", False):
|
||||||
|
project_root = Path(sys._MEIPASS)
|
||||||
|
else:
|
||||||
|
project_root = Path(__file__).resolve().parent.parent
|
||||||
|
|
||||||
app = Flask(
|
app = Flask(
|
||||||
__name__,
|
__name__,
|
||||||
static_folder=str(project_root / "static"),
|
static_folder=str(project_root / "static"),
|
||||||
@@ -47,6 +86,9 @@ def create_app(
|
|||||||
if app.config.get("TESTING"):
|
if app.config.get("TESTING"):
|
||||||
app.config.setdefault("WTF_CSRF_ENABLED", False)
|
app.config.setdefault("WTF_CSRF_ENABLED", False)
|
||||||
|
|
||||||
|
# Trust X-Forwarded-* headers from proxies
|
||||||
|
app.wsgi_app = ProxyFix(app.wsgi_app, x_for=1, x_proto=1, x_host=1, x_prefix=1)
|
||||||
|
|
||||||
_configure_cors(app)
|
_configure_cors(app)
|
||||||
_configure_logging(app)
|
_configure_logging(app)
|
||||||
|
|
||||||
@@ -62,12 +104,61 @@ def create_app(
|
|||||||
bucket_policies = BucketPolicyStore(Path(app.config["BUCKET_POLICY_PATH"]))
|
bucket_policies = BucketPolicyStore(Path(app.config["BUCKET_POLICY_PATH"]))
|
||||||
secret_store = EphemeralSecretStore(default_ttl=app.config.get("SECRET_TTL_SECONDS", 300))
|
secret_store = EphemeralSecretStore(default_ttl=app.config.get("SECRET_TTL_SECONDS", 300))
|
||||||
|
|
||||||
# Initialize Replication components
|
storage_root = Path(app.config["STORAGE_ROOT"])
|
||||||
connections_path = Path(app.config["STORAGE_ROOT"]) / ".connections.json"
|
config_dir = storage_root / ".myfsio.sys" / "config"
|
||||||
replication_rules_path = Path(app.config["STORAGE_ROOT"]) / ".replication_rules.json"
|
config_dir.mkdir(parents=True, exist_ok=True)
|
||||||
|
|
||||||
|
connections_path = _migrate_config_file(
|
||||||
|
active_path=config_dir / "connections.json",
|
||||||
|
legacy_paths=[
|
||||||
|
storage_root / ".myfsio.sys" / "connections.json",
|
||||||
|
storage_root / ".connections.json",
|
||||||
|
],
|
||||||
|
)
|
||||||
|
replication_rules_path = _migrate_config_file(
|
||||||
|
active_path=config_dir / "replication_rules.json",
|
||||||
|
legacy_paths=[
|
||||||
|
storage_root / ".myfsio.sys" / "replication_rules.json",
|
||||||
|
storage_root / ".replication_rules.json",
|
||||||
|
],
|
||||||
|
)
|
||||||
|
|
||||||
connections = ConnectionStore(connections_path)
|
connections = ConnectionStore(connections_path)
|
||||||
replication = ReplicationManager(storage, connections, replication_rules_path)
|
replication = ReplicationManager(storage, connections, replication_rules_path, storage_root)
|
||||||
|
|
||||||
|
encryption_config = {
|
||||||
|
"encryption_enabled": app.config.get("ENCRYPTION_ENABLED", False),
|
||||||
|
"encryption_master_key_path": app.config.get("ENCRYPTION_MASTER_KEY_PATH"),
|
||||||
|
"default_encryption_algorithm": app.config.get("DEFAULT_ENCRYPTION_ALGORITHM", "AES256"),
|
||||||
|
}
|
||||||
|
encryption_manager = EncryptionManager(encryption_config)
|
||||||
|
|
||||||
|
kms_manager = None
|
||||||
|
if app.config.get("KMS_ENABLED", False):
|
||||||
|
kms_keys_path = Path(app.config.get("KMS_KEYS_PATH", ""))
|
||||||
|
kms_master_key_path = Path(app.config.get("ENCRYPTION_MASTER_KEY_PATH", ""))
|
||||||
|
kms_manager = KMSManager(kms_keys_path, kms_master_key_path)
|
||||||
|
encryption_manager.set_kms_provider(kms_manager)
|
||||||
|
|
||||||
|
if app.config.get("ENCRYPTION_ENABLED", False):
|
||||||
|
from .encrypted_storage import EncryptedObjectStorage
|
||||||
|
storage = EncryptedObjectStorage(storage, encryption_manager)
|
||||||
|
|
||||||
|
acl_service = AclService(storage_root)
|
||||||
|
object_lock_service = ObjectLockService(storage_root)
|
||||||
|
notification_service = NotificationService(storage_root)
|
||||||
|
access_logging_service = AccessLoggingService(storage_root)
|
||||||
|
access_logging_service.set_storage(storage)
|
||||||
|
|
||||||
|
lifecycle_manager = None
|
||||||
|
if app.config.get("LIFECYCLE_ENABLED", False):
|
||||||
|
base_storage = storage.storage if hasattr(storage, 'storage') else storage
|
||||||
|
lifecycle_manager = LifecycleManager(
|
||||||
|
base_storage,
|
||||||
|
interval_seconds=app.config.get("LIFECYCLE_INTERVAL_SECONDS", 3600),
|
||||||
|
storage_root=storage_root,
|
||||||
|
)
|
||||||
|
lifecycle_manager.start()
|
||||||
|
|
||||||
app.extensions["object_storage"] = storage
|
app.extensions["object_storage"] = storage
|
||||||
app.extensions["iam"] = iam
|
app.extensions["iam"] = iam
|
||||||
@@ -76,6 +167,13 @@ def create_app(
|
|||||||
app.extensions["limiter"] = limiter
|
app.extensions["limiter"] = limiter
|
||||||
app.extensions["connections"] = connections
|
app.extensions["connections"] = connections
|
||||||
app.extensions["replication"] = replication
|
app.extensions["replication"] = replication
|
||||||
|
app.extensions["encryption"] = encryption_manager
|
||||||
|
app.extensions["kms"] = kms_manager
|
||||||
|
app.extensions["acl"] = acl_service
|
||||||
|
app.extensions["lifecycle"] = lifecycle_manager
|
||||||
|
app.extensions["object_lock"] = object_lock_service
|
||||||
|
app.extensions["notifications"] = notification_service
|
||||||
|
app.extensions["access_logging"] = access_logging_service
|
||||||
|
|
||||||
@app.errorhandler(500)
|
@app.errorhandler(500)
|
||||||
def internal_error(error):
|
def internal_error(error):
|
||||||
@@ -96,11 +194,35 @@ def create_app(
|
|||||||
value /= 1024.0
|
value /= 1024.0
|
||||||
return f"{value:.1f} PB"
|
return f"{value:.1f} PB"
|
||||||
|
|
||||||
|
@app.template_filter("timestamp_to_datetime")
|
||||||
|
def timestamp_to_datetime(value: float) -> str:
|
||||||
|
"""Format Unix timestamp as human-readable datetime in configured timezone."""
|
||||||
|
from datetime import datetime, timezone as dt_timezone
|
||||||
|
from zoneinfo import ZoneInfo
|
||||||
|
if not value:
|
||||||
|
return "Never"
|
||||||
|
try:
|
||||||
|
dt_utc = datetime.fromtimestamp(value, dt_timezone.utc)
|
||||||
|
display_tz = app.config.get("DISPLAY_TIMEZONE", "UTC")
|
||||||
|
if display_tz and display_tz != "UTC":
|
||||||
|
try:
|
||||||
|
tz = ZoneInfo(display_tz)
|
||||||
|
dt_local = dt_utc.astimezone(tz)
|
||||||
|
return dt_local.strftime("%Y-%m-%d %H:%M:%S")
|
||||||
|
except (KeyError, ValueError):
|
||||||
|
pass
|
||||||
|
return dt_utc.strftime("%Y-%m-%d %H:%M:%S UTC")
|
||||||
|
except (ValueError, OSError):
|
||||||
|
return "Unknown"
|
||||||
|
|
     if include_api:
         from .s3_api import s3_api_bp
+        from .kms_api import kms_api_bp

         app.register_blueprint(s3_api_bp)
+        app.register_blueprint(kms_api_bp)
         csrf.exempt(s3_api_bp)
+        csrf.exempt(kms_api_bp)

     if include_ui:
         from .ui import ui_bp
@@ -137,14 +259,12 @@ def create_ui_app(test_config: Optional[Dict[str, Any]] = None) -> Flask:


 def _configure_cors(app: Flask) -> None:
     origins = app.config.get("CORS_ORIGINS", ["*"])
-    methods = app.config.get("CORS_METHODS", ["GET", "PUT", "POST", "DELETE", "OPTIONS"])
+    methods = app.config.get("CORS_METHODS", ["GET", "PUT", "POST", "DELETE", "OPTIONS", "HEAD"])
-    allow_headers = app.config.get(
-        "CORS_ALLOW_HEADERS",
-        ["Content-Type", "X-Access-Key", "X-Secret-Key", "X-Amz-Date", "X-Amz-SignedHeaders"],
-    )
+    allow_headers = app.config.get("CORS_ALLOW_HEADERS", ["*"])
+    expose_headers = app.config.get("CORS_EXPOSE_HEADERS", ["*"])
     CORS(
         app,
-        resources={r"/*": {"origins": origins, "methods": methods, "allow_headers": allow_headers}},
+        resources={r"/*": {"origins": origins, "methods": methods, "allow_headers": allow_headers, "expose_headers": expose_headers}},
         supports_credentials=True,
     )

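flask-cors translates a configuration like the one above into response headers. A hand-rolled, dependency-free sketch of roughly the header set a preflight response ends up with (the helper name and exact header assembly are illustrative, not flask-cors internals):

```python
def cors_headers(origin, methods, allow_headers, expose_headers):
    # Roughly the headers a CORS preflight response carries for this config
    return {
        "Access-Control-Allow-Origin": origin,
        "Access-Control-Allow-Methods": ", ".join(methods),
        "Access-Control-Allow-Headers": ", ".join(allow_headers),
        "Access-Control-Expose-Headers": ", ".join(expose_headers),
        "Access-Control-Allow-Credentials": "true",  # supports_credentials=True
    }

h = cors_headers("*", ["GET", "PUT", "POST", "DELETE", "OPTIONS", "HEAD"], ["*"], ["*"])
print(h["Access-Control-Allow-Methods"])  # GET, PUT, POST, DELETE, OPTIONS, HEAD
```

The new `expose_headers` entry is what lets browser JavaScript read non-safelisted response headers such as `ETag` on cross-origin requests.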
@@ -152,7 +272,7 @@ def _configure_cors(app: Flask) -> None:
 class _RequestContextFilter(logging.Filter):
     """Inject request-specific attributes into log records."""

-    def filter(self, record: logging.LogRecord) -> bool:  # pragma: no cover - simple boilerplate
+    def filter(self, record: logging.LogRecord) -> bool:
         if has_request_context():
             record.request_id = getattr(g, "request_id", "-")
             record.path = request.path
@@ -167,23 +287,33 @@ class _RequestContextFilter(logging.Filter):


 def _configure_logging(app: Flask) -> None:
-    log_file = Path(app.config["LOG_FILE"])
-    log_file.parent.mkdir(parents=True, exist_ok=True)
-    handler = RotatingFileHandler(
-        log_file,
-        maxBytes=int(app.config.get("LOG_MAX_BYTES", 5 * 1024 * 1024)),
-        backupCount=int(app.config.get("LOG_BACKUP_COUNT", 3)),
-        encoding="utf-8",
-    )
     formatter = logging.Formatter(
         "%(asctime)s | %(levelname)s | %(request_id)s | %(method)s %(path)s | %(message)s"
     )
-    handler.setFormatter(formatter)
-    handler.addFilter(_RequestContextFilter())
+    stream_handler = logging.StreamHandler(sys.stdout)
+    stream_handler.setFormatter(formatter)
+    stream_handler.addFilter(_RequestContextFilter())

     logger = app.logger
+    for handler in logger.handlers[:]:
+        handler.close()
     logger.handlers.clear()
-    logger.addHandler(handler)
+    logger.addHandler(stream_handler)

+    if app.config.get("LOG_TO_FILE"):
+        log_file = Path(app.config["LOG_FILE"])
+        log_file.parent.mkdir(parents=True, exist_ok=True)
+        file_handler = RotatingFileHandler(
+            log_file,
+            maxBytes=int(app.config.get("LOG_MAX_BYTES", 5 * 1024 * 1024)),
+            backupCount=int(app.config.get("LOG_BACKUP_COUNT", 3)),
+            encoding="utf-8",
+        )
+        file_handler.setFormatter(formatter)
+        file_handler.addFilter(_RequestContextFilter())
+        logger.addHandler(file_handler)

     logger.setLevel(getattr(logging, app.config.get("LOG_LEVEL", "INFO"), logging.INFO))

     @app.before_request
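The formatter above references `%(request_id)s`, an attribute standard `LogRecord`s do not carry, so the filter has to inject it (or a `-` placeholder) before the handler formats the record. A self-contained sketch of that pattern, without Flask, using an in-memory stream (names are illustrative):

```python
import io
import logging

class ContextFilter(logging.Filter):
    # Inject a default request_id so the formatter never hits a missing attribute
    def filter(self, record: logging.LogRecord) -> bool:
        if not hasattr(record, "request_id"):
            record.request_id = "-"
        return True

buf = io.StringIO()
handler = logging.StreamHandler(buf)
handler.setFormatter(logging.Formatter("%(levelname)s | %(request_id)s | %(message)s"))
handler.addFilter(ContextFilter())  # filters run before the handler formats

logger = logging.getLogger("demo")
logger.handlers.clear()
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("hello")
print(buf.getvalue().strip())  # INFO | - | hello
```

Handler-attached filters run inside `Handler.handle()` before `emit()`, which is why the attribute is always present by format time.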
@@ -211,5 +341,4 @@ def _configure_logging(app: Flask) -> None:
             },
         )
         response.headers["X-Request-Duration-ms"] = f"{duration_ms:.2f}"
-        response.headers["Server"] = "MyFISO"
         return response

265  app/access_logging.py  Normal file
@@ -0,0 +1,265 @@
+from __future__ import annotations
+
+import io
+import json
+import logging
+import queue
+import threading
+import time
+import uuid
+from dataclasses import dataclass, field
+from datetime import datetime, timezone
+from pathlib import Path
+from typing import Any, Dict, List, Optional
+
+logger = logging.getLogger(__name__)
+
+
+@dataclass
+class AccessLogEntry:
+    bucket_owner: str = "-"
+    bucket: str = "-"
+    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
+    remote_ip: str = "-"
+    requester: str = "-"
+    request_id: str = field(default_factory=lambda: uuid.uuid4().hex[:16].upper())
+    operation: str = "-"
+    key: str = "-"
+    request_uri: str = "-"
+    http_status: int = 200
+    error_code: str = "-"
+    bytes_sent: int = 0
+    object_size: int = 0
+    total_time_ms: int = 0
+    turn_around_time_ms: int = 0
+    referrer: str = "-"
+    user_agent: str = "-"
+    version_id: str = "-"
+    host_id: str = "-"
+    signature_version: str = "SigV4"
+    cipher_suite: str = "-"
+    authentication_type: str = "AuthHeader"
+    host_header: str = "-"
+    tls_version: str = "-"
+
+    def to_log_line(self) -> str:
+        time_str = self.timestamp.strftime("[%d/%b/%Y:%H:%M:%S %z]")
+        return (
+            f'{self.bucket_owner} {self.bucket} {time_str} {self.remote_ip} '
+            f'{self.requester} {self.request_id} {self.operation} {self.key} '
+            f'"{self.request_uri}" {self.http_status} {self.error_code or "-"} '
+            f'{self.bytes_sent or "-"} {self.object_size or "-"} {self.total_time_ms or "-"} '
+            f'{self.turn_around_time_ms or "-"} "{self.referrer}" "{self.user_agent}" {self.version_id}'
+        )
+
+    def to_dict(self) -> Dict[str, Any]:
+        return {
+            "bucket_owner": self.bucket_owner,
+            "bucket": self.bucket,
+            "timestamp": self.timestamp.isoformat(),
+            "remote_ip": self.remote_ip,
+            "requester": self.requester,
+            "request_id": self.request_id,
+            "operation": self.operation,
+            "key": self.key,
+            "request_uri": self.request_uri,
+            "http_status": self.http_status,
+            "error_code": self.error_code,
+            "bytes_sent": self.bytes_sent,
+            "object_size": self.object_size,
+            "total_time_ms": self.total_time_ms,
+            "referrer": self.referrer,
+            "user_agent": self.user_agent,
+            "version_id": self.version_id,
+        }
+
+
+@dataclass
+class LoggingConfiguration:
+    target_bucket: str
+    target_prefix: str = ""
+    enabled: bool = True
+
+    def to_dict(self) -> Dict[str, Any]:
+        return {
+            "LoggingEnabled": {
+                "TargetBucket": self.target_bucket,
+                "TargetPrefix": self.target_prefix,
+            }
+        }
+
+    @classmethod
+    def from_dict(cls, data: Dict[str, Any]) -> Optional["LoggingConfiguration"]:
+        logging_enabled = data.get("LoggingEnabled")
+        if not logging_enabled:
+            return None
+        return cls(
+            target_bucket=logging_enabled.get("TargetBucket", ""),
+            target_prefix=logging_enabled.get("TargetPrefix", ""),
+            enabled=True,
+        )
+
+
+class AccessLoggingService:
+    def __init__(self, storage_root: Path, flush_interval: int = 60, max_buffer_size: int = 1000):
+        self.storage_root = storage_root
+        self.flush_interval = flush_interval
+        self.max_buffer_size = max_buffer_size
+        self._configs: Dict[str, LoggingConfiguration] = {}
+        self._buffer: Dict[str, List[AccessLogEntry]] = {}
+        self._buffer_lock = threading.Lock()
+        self._shutdown = threading.Event()
+        self._storage = None
+
+        self._flush_thread = threading.Thread(target=self._flush_loop, name="access-log-flush", daemon=True)
+        self._flush_thread.start()
+
+    def set_storage(self, storage: Any) -> None:
+        self._storage = storage
+
+    def _config_path(self, bucket_name: str) -> Path:
+        return self.storage_root / ".myfsio.sys" / "buckets" / bucket_name / "logging.json"
+
+    def get_bucket_logging(self, bucket_name: str) -> Optional[LoggingConfiguration]:
+        if bucket_name in self._configs:
+            return self._configs[bucket_name]
+
+        config_path = self._config_path(bucket_name)
+        if not config_path.exists():
+            return None
+
+        try:
+            data = json.loads(config_path.read_text(encoding="utf-8"))
+            config = LoggingConfiguration.from_dict(data)
+            if config:
+                self._configs[bucket_name] = config
+            return config
+        except (json.JSONDecodeError, OSError) as e:
+            logger.warning(f"Failed to load logging config for {bucket_name}: {e}")
+            return None
+
+    def set_bucket_logging(self, bucket_name: str, config: LoggingConfiguration) -> None:
+        config_path = self._config_path(bucket_name)
+        config_path.parent.mkdir(parents=True, exist_ok=True)
+        config_path.write_text(json.dumps(config.to_dict(), indent=2), encoding="utf-8")
+        self._configs[bucket_name] = config
+
+    def delete_bucket_logging(self, bucket_name: str) -> None:
+        config_path = self._config_path(bucket_name)
+        try:
+            if config_path.exists():
+                config_path.unlink()
+        except OSError:
+            pass
+        self._configs.pop(bucket_name, None)
+
+    def log_request(
+        self,
+        bucket_name: str,
+        *,
+        operation: str,
+        key: str = "-",
+        remote_ip: str = "-",
+        requester: str = "-",
+        request_uri: str = "-",
+        http_status: int = 200,
+        error_code: str = "",
+        bytes_sent: int = 0,
+        object_size: int = 0,
+        total_time_ms: int = 0,
+        referrer: str = "-",
+        user_agent: str = "-",
+        version_id: str = "-",
+        request_id: str = "",
+    ) -> None:
+        config = self.get_bucket_logging(bucket_name)
+        if not config or not config.enabled:
+            return
+
+        entry = AccessLogEntry(
+            bucket_owner="local-owner",
+            bucket=bucket_name,
+            remote_ip=remote_ip,
+            requester=requester,
+            request_id=request_id or uuid.uuid4().hex[:16].upper(),
+            operation=operation,
+            key=key,
+            request_uri=request_uri,
+            http_status=http_status,
+            error_code=error_code,
+            bytes_sent=bytes_sent,
+            object_size=object_size,
+            total_time_ms=total_time_ms,
+            referrer=referrer,
+            user_agent=user_agent,
+            version_id=version_id,
+        )
+
+        target_key = f"{config.target_bucket}:{config.target_prefix}"
+        should_flush = False
+        with self._buffer_lock:
+            if target_key not in self._buffer:
+                self._buffer[target_key] = []
+            self._buffer[target_key].append(entry)
+            should_flush = len(self._buffer[target_key]) >= self.max_buffer_size
+
+        if should_flush:
+            self._flush_buffer(target_key)
+
+    def _flush_loop(self) -> None:
+        while not self._shutdown.is_set():
+            self._shutdown.wait(timeout=self.flush_interval)
+            if not self._shutdown.is_set():
+                self._flush_all()
+
+    def _flush_all(self) -> None:
+        with self._buffer_lock:
+            targets = list(self._buffer.keys())
+
+        for target_key in targets:
+            self._flush_buffer(target_key)
+
+    def _flush_buffer(self, target_key: str) -> None:
+        with self._buffer_lock:
+            entries = self._buffer.pop(target_key, [])
+
+        if not entries or not self._storage:
+            return
+
+        try:
+            bucket_name, prefix = target_key.split(":", 1)
+        except ValueError:
+            logger.error(f"Invalid target key: {target_key}")
+            return
+
+        now = datetime.now(timezone.utc)
+        log_key = f"{prefix}{now.strftime('%Y-%m-%d-%H-%M-%S')}-{uuid.uuid4().hex[:8]}"
+
+        log_content = "\n".join(entry.to_log_line() for entry in entries) + "\n"
+
+        try:
+            stream = io.BytesIO(log_content.encode("utf-8"))
+            self._storage.put_object(bucket_name, log_key, stream, enforce_quota=False)
+            logger.info(f"Flushed {len(entries)} access log entries to {bucket_name}/{log_key}")
+        except Exception as e:
+            logger.error(f"Failed to write access log to {bucket_name}/{log_key}: {e}")
+            with self._buffer_lock:
+                if target_key not in self._buffer:
+                    self._buffer[target_key] = []
+                self._buffer[target_key] = entries + self._buffer[target_key]
+
+    def flush(self) -> None:
+        self._flush_all()
+
+    def shutdown(self) -> None:
+        self._shutdown.set()
+        self._flush_all()
+        self._flush_thread.join(timeout=5.0)
+
+    def get_stats(self) -> Dict[str, Any]:
+        with self._buffer_lock:
+            buffered = sum(len(entries) for entries in self._buffer.values())
+            return {
+                "buffered_entries": buffered,
+                "target_buckets": len(self._buffer),
+            }
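`to_log_line` follows the S3 server-access-log convention of rendering zero-valued numeric fields as `-`. A trimmed-down stand-in for the dataclass above, showing the formatted output (the `Entry` class and its field subset are illustrative, not the full `AccessLogEntry`):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Entry:
    bucket: str = "-"
    operation: str = "-"
    key: str = "-"
    http_status: int = 200
    bytes_sent: int = 0

    def to_log_line(self, ts: datetime) -> str:
        # Same strftime layout as the real to_log_line
        time_str = ts.strftime("[%d/%b/%Y:%H:%M:%S %z]")
        # `or "-"` renders zero-valued counters as "-" like the S3 log format
        return f'{self.bucket} {time_str} {self.operation} {self.key} {self.http_status} {self.bytes_sent or "-"}'

e = Entry(bucket="logs-demo", operation="REST.GET.OBJECT", key="a.txt")
print(e.to_log_line(datetime(2024, 1, 2, tzinfo=timezone.utc)))
# logs-demo [02/Jan/2024:00:00:00 +0000] REST.GET.OBJECT a.txt 200 -
```

One consequence of the `or "-"` idiom is that a genuine `bytes_sent == 0` is indistinguishable from "not recorded" in the emitted line, which mirrors S3's own format.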
204  app/acl.py  Normal file
@@ -0,0 +1,204 @@
+from __future__ import annotations
+
+import json
+from dataclasses import dataclass, field
+from pathlib import Path
+from typing import Any, Dict, List, Optional, Set
+
+
+ACL_PERMISSION_FULL_CONTROL = "FULL_CONTROL"
+ACL_PERMISSION_WRITE = "WRITE"
+ACL_PERMISSION_WRITE_ACP = "WRITE_ACP"
+ACL_PERMISSION_READ = "READ"
+ACL_PERMISSION_READ_ACP = "READ_ACP"
+
+ALL_PERMISSIONS = {
+    ACL_PERMISSION_FULL_CONTROL,
+    ACL_PERMISSION_WRITE,
+    ACL_PERMISSION_WRITE_ACP,
+    ACL_PERMISSION_READ,
+    ACL_PERMISSION_READ_ACP,
+}
+
+PERMISSION_TO_ACTIONS = {
+    ACL_PERMISSION_FULL_CONTROL: {"read", "write", "delete", "list", "share"},
+    ACL_PERMISSION_WRITE: {"write", "delete"},
+    ACL_PERMISSION_WRITE_ACP: {"share"},
+    ACL_PERMISSION_READ: {"read", "list"},
+    ACL_PERMISSION_READ_ACP: {"share"},
+}
+
+GRANTEE_ALL_USERS = "*"
+GRANTEE_AUTHENTICATED_USERS = "authenticated"
+
+
+@dataclass
+class AclGrant:
+    grantee: str
+    permission: str
+
+    def to_dict(self) -> Dict[str, str]:
+        return {"grantee": self.grantee, "permission": self.permission}
+
+    @classmethod
+    def from_dict(cls, data: Dict[str, str]) -> "AclGrant":
+        return cls(grantee=data["grantee"], permission=data["permission"])
+
+
+@dataclass
+class Acl:
+    owner: str
+    grants: List[AclGrant] = field(default_factory=list)
+
+    def to_dict(self) -> Dict[str, Any]:
+        return {
+            "owner": self.owner,
+            "grants": [g.to_dict() for g in self.grants],
+        }
+
+    @classmethod
+    def from_dict(cls, data: Dict[str, Any]) -> "Acl":
+        return cls(
+            owner=data.get("owner", ""),
+            grants=[AclGrant.from_dict(g) for g in data.get("grants", [])],
+        )
+
+    def get_allowed_actions(self, principal_id: Optional[str], is_authenticated: bool = True) -> Set[str]:
+        actions: Set[str] = set()
+        if principal_id and principal_id == self.owner:
+            actions.update(PERMISSION_TO_ACTIONS[ACL_PERMISSION_FULL_CONTROL])
+        for grant in self.grants:
+            if grant.grantee == GRANTEE_ALL_USERS:
+                actions.update(PERMISSION_TO_ACTIONS.get(grant.permission, set()))
+            elif grant.grantee == GRANTEE_AUTHENTICATED_USERS and is_authenticated:
+                actions.update(PERMISSION_TO_ACTIONS.get(grant.permission, set()))
+            elif principal_id and grant.grantee == principal_id:
+                actions.update(PERMISSION_TO_ACTIONS.get(grant.permission, set()))
+        return actions
+
+
+CANNED_ACLS = {
+    "private": lambda owner: Acl(
+        owner=owner,
+        grants=[AclGrant(grantee=owner, permission=ACL_PERMISSION_FULL_CONTROL)],
+    ),
+    "public-read": lambda owner: Acl(
+        owner=owner,
+        grants=[
+            AclGrant(grantee=owner, permission=ACL_PERMISSION_FULL_CONTROL),
+            AclGrant(grantee=GRANTEE_ALL_USERS, permission=ACL_PERMISSION_READ),
+        ],
+    ),
+    "public-read-write": lambda owner: Acl(
+        owner=owner,
+        grants=[
+            AclGrant(grantee=owner, permission=ACL_PERMISSION_FULL_CONTROL),
+            AclGrant(grantee=GRANTEE_ALL_USERS, permission=ACL_PERMISSION_READ),
+            AclGrant(grantee=GRANTEE_ALL_USERS, permission=ACL_PERMISSION_WRITE),
+        ],
+    ),
+    "authenticated-read": lambda owner: Acl(
+        owner=owner,
+        grants=[
+            AclGrant(grantee=owner, permission=ACL_PERMISSION_FULL_CONTROL),
+            AclGrant(grantee=GRANTEE_AUTHENTICATED_USERS, permission=ACL_PERMISSION_READ),
+        ],
+    ),
+    "bucket-owner-read": lambda owner: Acl(
+        owner=owner,
+        grants=[
+            AclGrant(grantee=owner, permission=ACL_PERMISSION_FULL_CONTROL),
+        ],
+    ),
+    "bucket-owner-full-control": lambda owner: Acl(
+        owner=owner,
+        grants=[
+            AclGrant(grantee=owner, permission=ACL_PERMISSION_FULL_CONTROL),
+        ],
+    ),
+}
+
+
+def create_canned_acl(canned_acl: str, owner: str) -> Acl:
+    factory = CANNED_ACLS.get(canned_acl)
+    if not factory:
+        return CANNED_ACLS["private"](owner)
+    return factory(owner)
+
+
+class AclService:
+    def __init__(self, storage_root: Path):
+        self.storage_root = storage_root
+        self._bucket_acl_cache: Dict[str, Acl] = {}
+
+    def _bucket_acl_path(self, bucket_name: str) -> Path:
+        return self.storage_root / ".myfsio.sys" / "buckets" / bucket_name / ".acl.json"
+
+    def get_bucket_acl(self, bucket_name: str) -> Optional[Acl]:
+        if bucket_name in self._bucket_acl_cache:
+            return self._bucket_acl_cache[bucket_name]
+        acl_path = self._bucket_acl_path(bucket_name)
+        if not acl_path.exists():
+            return None
+        try:
+            data = json.loads(acl_path.read_text(encoding="utf-8"))
+            acl = Acl.from_dict(data)
+            self._bucket_acl_cache[bucket_name] = acl
+            return acl
+        except (OSError, json.JSONDecodeError):
+            return None
+
+    def set_bucket_acl(self, bucket_name: str, acl: Acl) -> None:
+        acl_path = self._bucket_acl_path(bucket_name)
+        acl_path.parent.mkdir(parents=True, exist_ok=True)
+        acl_path.write_text(json.dumps(acl.to_dict(), indent=2), encoding="utf-8")
+        self._bucket_acl_cache[bucket_name] = acl
+
+    def set_bucket_canned_acl(self, bucket_name: str, canned_acl: str, owner: str) -> Acl:
+        acl = create_canned_acl(canned_acl, owner)
+        self.set_bucket_acl(bucket_name, acl)
+        return acl
+
+    def delete_bucket_acl(self, bucket_name: str) -> None:
+        acl_path = self._bucket_acl_path(bucket_name)
+        if acl_path.exists():
+            acl_path.unlink()
+        self._bucket_acl_cache.pop(bucket_name, None)
+
+    def evaluate_bucket_acl(
+        self,
+        bucket_name: str,
+        principal_id: Optional[str],
+        action: str,
+        is_authenticated: bool = True,
+    ) -> bool:
+        acl = self.get_bucket_acl(bucket_name)
+        if not acl:
+            return False
+        allowed_actions = acl.get_allowed_actions(principal_id, is_authenticated)
+        return action in allowed_actions
+
+    def get_object_acl(self, bucket_name: str, object_key: str, object_metadata: Dict[str, Any]) -> Optional[Acl]:
+        acl_data = object_metadata.get("__acl__")
+        if not acl_data:
+            return None
+        try:
+            return Acl.from_dict(acl_data)
+        except (TypeError, KeyError):
+            return None
+
+    def create_object_acl_metadata(self, acl: Acl) -> Dict[str, Any]:
+        return {"__acl__": acl.to_dict()}
+
+    def evaluate_object_acl(
+        self,
+        object_metadata: Dict[str, Any],
+        principal_id: Optional[str],
+        action: str,
+        is_authenticated: bool = True,
+    ) -> bool:
+        acl = self.get_object_acl("", "", object_metadata)
+        if not acl:
+            return False
+        allowed_actions = acl.get_allowed_actions(principal_id, is_authenticated)
+        return action in allowed_actions
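The grant evaluation boils down to a union of action sets: the owner gets FULL_CONTROL implicitly, then each grant whose grantee matches (everyone, any authenticated caller, or a named principal) contributes its permission's actions. A minimal stand-in mirroring that logic for a `public-read` bucket (`allowed` and the trimmed permission table are illustrative):

```python
# Subset of the PERMISSION_TO_ACTIONS table from the new acl.py
PERMISSION_TO_ACTIONS = {
    "FULL_CONTROL": {"read", "write", "delete", "list", "share"},
    "READ": {"read", "list"},
}

def allowed(grants, principal, owner, is_authenticated=True):
    actions = set()
    if principal and principal == owner:
        # Owner always holds FULL_CONTROL
        actions |= PERMISSION_TO_ACTIONS["FULL_CONTROL"]
    for grantee, perm in grants:
        if grantee == "*" or (grantee == "authenticated" and is_authenticated) or grantee == principal:
            actions |= PERMISSION_TO_ACTIONS.get(perm, set())
    return actions

grants = [("owner1", "FULL_CONTROL"), ("*", "READ")]  # shape of the public-read canned ACL
print(sorted(allowed(grants, None, "owner1")))   # anonymous caller: ['list', 'read']
print(sorted(allowed(grants, "owner1", "owner1")))  # owner: all five actions
```

Unknown permission strings contribute nothing (`.get(perm, set())`), so a corrupted grant degrades to "no access" rather than raising.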
@@ -1,27 +1,62 @@
-"""Bucket policy loader/enforcer with a subset of AWS semantics."""
 from __future__ import annotations

 import json
+import re
+import time
 from dataclasses import dataclass
-from fnmatch import fnmatch
+from fnmatch import fnmatch, translate
 from pathlib import Path
-from typing import Any, Dict, Iterable, List, Optional, Sequence
+from typing import Any, Dict, Iterable, List, Optional, Pattern, Sequence, Tuple


 RESOURCE_PREFIX = "arn:aws:s3:::"

 ACTION_ALIASES = {
-    "s3:getobject": "read",
-    "s3:getobjectversion": "read",
+    # List actions
     "s3:listbucket": "list",
     "s3:listallmybuckets": "list",
+    "s3:listbucketversions": "list",
+    "s3:listmultipartuploads": "list",
+    "s3:listparts": "list",
+    # Read actions
+    "s3:getobject": "read",
+    "s3:getobjectversion": "read",
+    "s3:getobjecttagging": "read",
+    "s3:getobjectversiontagging": "read",
+    "s3:getobjectacl": "read",
+    "s3:getbucketversioning": "read",
+    "s3:headobject": "read",
+    "s3:headbucket": "read",
+    # Write actions
     "s3:putobject": "write",
     "s3:createbucket": "write",
+    "s3:putobjecttagging": "write",
+    "s3:putbucketversioning": "write",
+    "s3:createmultipartupload": "write",
+    "s3:uploadpart": "write",
+    "s3:completemultipartupload": "write",
+    "s3:abortmultipartupload": "write",
+    "s3:copyobject": "write",
+    # Delete actions
     "s3:deleteobject": "delete",
     "s3:deleteobjectversion": "delete",
     "s3:deletebucket": "delete",
+    "s3:deleteobjecttagging": "delete",
+    # Share actions (ACL)
     "s3:putobjectacl": "share",
+    "s3:putbucketacl": "share",
+    "s3:getbucketacl": "share",
+    # Policy actions
     "s3:putbucketpolicy": "policy",
+    "s3:getbucketpolicy": "policy",
+    "s3:deletebucketpolicy": "policy",
+    # Replication actions
+    "s3:getreplicationconfiguration": "replication",
+    "s3:putreplicationconfiguration": "replication",
+    "s3:deletereplicationconfiguration": "replication",
+    "s3:replicateobject": "replication",
+    "s3:replicatetags": "replication",
+    "s3:replicatedelete": "replication",
 }


@@ -99,7 +134,22 @@ class BucketPolicyStatement:
     effect: str
     principals: List[str] | str
     actions: List[str]
-    resources: List[tuple[str | None, str | None]]
+    resources: List[Tuple[str | None, str | None]]
+    # Performance: Pre-compiled regex patterns for resource matching
+    _compiled_patterns: List[Tuple[str | None, Optional[Pattern[str]]]] | None = None
+
+    def _get_compiled_patterns(self) -> List[Tuple[str | None, Optional[Pattern[str]]]]:
+        """Lazily compile fnmatch patterns to regex for faster matching."""
+        if self._compiled_patterns is None:
+            self._compiled_patterns = []
+            for resource_bucket, key_pattern in self.resources:
+                if key_pattern is None:
+                    self._compiled_patterns.append((resource_bucket, None))
+                else:
+                    # Convert fnmatch pattern to regex
+                    regex_pattern = translate(key_pattern)
+                    self._compiled_patterns.append((resource_bucket, re.compile(regex_pattern)))
+        return self._compiled_patterns

     def matches_principal(self, access_key: Optional[str]) -> bool:
         if self.principals == "*":
@@ -115,15 +165,16 @@ class BucketPolicyStatement:
     def matches_resource(self, bucket: Optional[str], object_key: Optional[str]) -> bool:
         bucket = (bucket or "*").lower()
         key = object_key or ""
-        for resource_bucket, key_pattern in self.resources:
+        for resource_bucket, compiled_pattern in self._get_compiled_patterns():
             resource_bucket = (resource_bucket or "*").lower()
             if resource_bucket not in {"*", bucket}:
                 continue
-            if key_pattern is None:
+            if compiled_pattern is None:
                 if not key:
                     return True
                 continue
-            if fnmatch(key, key_pattern):
+            # Performance: Use pre-compiled regex instead of fnmatch
+            if compiled_pattern.match(key):
                 return True
         return False

@@ -140,8 +191,16 @@ class BucketPolicyStore:
         self._policies: Dict[str, List[BucketPolicyStatement]] = {}
         self._load()
         self._last_mtime = self._current_mtime()
+        # Performance: Avoid stat() on every request
+        self._last_stat_check = 0.0
+        self._stat_check_interval = 1.0  # Only check mtime every 1 second

     def maybe_reload(self) -> None:
+        # Performance: Skip stat check if we checked recently
+        now = time.time()
+        if now - self._last_stat_check < self._stat_check_interval:
+            return
+        self._last_stat_check = now
         current = self._current_mtime()
         if current is None or current == self._last_mtime:
             return
@@ -154,7 +213,6 @@ class BucketPolicyStore:
         except FileNotFoundError:
             return None

-    # ------------------------------------------------------------------
     def evaluate(
         self,
         access_key: Optional[str],
@@ -195,7 +253,6 @@ class BucketPolicyStore:
         self._policies.pop(bucket, None)
         self._persist()

-    # ------------------------------------------------------------------
     def _load(self) -> None:
         try:
             content = self.policy_path.read_text(encoding='utf-8')
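The `fnmatch.translate` trick used above trades a per-call glob parse for a one-time regex compile while keeping identical match semantics. A self-contained demonstration (pattern and keys are made up for illustration):

```python
import re
from fnmatch import fnmatchcase, translate

pattern = "uploads/*.txt"
# translate() yields an anchored regex such as r'(?s:uploads/.*\.txt)\Z',
# so compiling once and calling .match() replaces repeated fnmatch parsing
compiled = re.compile(translate(pattern))

keys = ["uploads/a.txt", "uploads/b.png", "other/a.txt"]
matched = [k for k in keys if compiled.match(k)]
print(matched)  # ['uploads/a.txt']

# The compiled form agrees with fnmatchcase on every input
assert all(bool(compiled.match(k)) == fnmatchcase(k, pattern) for k in keys)
```

Note that, as in fnmatch, `*` also matches `/`, so a single `*` in an S3-style key pattern spans "directory" separators; that behavior is unchanged by the precompilation.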
198  app/config.py
@@ -1,15 +1,20 @@
"""Configuration helpers for the S3 clone application."""
|
|
||||||
from __future__ import annotations
|
from __future__ import annotations
|
||||||
|
|
||||||
import os
|
import os
|
||||||
import secrets
|
import secrets
|
||||||
import shutil
|
import shutil
|
||||||
|
import sys
|
||||||
import warnings
|
import warnings
|
||||||
from dataclasses import dataclass
|
from dataclasses import dataclass
|
||||||
from pathlib import Path
|
from pathlib import Path
|
||||||
from typing import Any, Dict, Optional
|
from typing import Any, Dict, Optional
|
||||||
|
|
||||||
PROJECT_ROOT = Path(__file__).resolve().parent.parent
|
if getattr(sys, "frozen", False):
|
||||||
|
# Running in a PyInstaller bundle
|
||||||
|
PROJECT_ROOT = Path(sys._MEIPASS)
|
||||||
|
else:
|
||||||
|
# Running in a normal Python environment
|
||||||
|
PROJECT_ROOT = Path(__file__).resolve().parent.parent
|
||||||
|
|
||||||
|
|
||||||
def _prepare_config_file(active_path: Path, legacy_path: Optional[Path] = None) -> Path:
|
def _prepare_config_file(active_path: Path, legacy_path: Optional[Path] = None) -> Path:
|
||||||
@@ -39,11 +44,12 @@ class AppConfig:
|
|||||||
secret_key: str
|
secret_key: str
|
||||||
iam_config_path: Path
|
iam_config_path: Path
|
||||||
bucket_policy_path: Path
|
bucket_policy_path: Path
|
||||||
api_base_url: str
|
api_base_url: Optional[str]
|
||||||
aws_region: str
|
aws_region: str
|
||||||
aws_service: str
|
aws_service: str
|
||||||
ui_enforce_bucket_policies: bool
|
ui_enforce_bucket_policies: bool
|
||||||
log_level: str
|
log_level: str
|
||||||
|
log_to_file: bool
|
||||||
log_path: Path
|
log_path: Path
|
||||||
log_max_bytes: int
|
log_max_bytes: int
|
||||||
log_backup_count: int
|
log_backup_count: int
|
||||||
@@ -52,6 +58,7 @@ class AppConfig:
|
|||||||
cors_origins: list[str]
|
cors_origins: list[str]
|
||||||
cors_methods: list[str]
|
cors_methods: list[str]
|
||||||
cors_allow_headers: list[str]
|
cors_allow_headers: list[str]
|
||||||
|
cors_expose_headers: list[str]
|
||||||
session_lifetime_days: int
|
session_lifetime_days: int
|
||||||
auth_max_attempts: int
|
auth_max_attempts: int
|
||||||
auth_lockout_minutes: int
|
auth_lockout_minutes: int
|
||||||
```diff
@@ -59,6 +66,15 @@ class AppConfig:
     secret_ttl_seconds: int
     stream_chunk_size: int
     multipart_min_part_size: int
+    bucket_stats_cache_ttl: int
+    encryption_enabled: bool
+    encryption_master_key_path: Path
+    kms_enabled: bool
+    kms_keys_path: Path
+    default_encryption_algorithm: str
+    display_timezone: str
+    lifecycle_enabled: bool
+    lifecycle_interval_seconds: int
 
     @classmethod
     def from_env(cls, overrides: Optional[Dict[str, Any]] = None) -> "AppConfig":
```
```diff
@@ -68,7 +84,7 @@ class AppConfig:
             return overrides.get(name, os.getenv(name, default))
 
         storage_root = Path(_get("STORAGE_ROOT", PROJECT_ROOT / "data")).resolve()
-        max_upload_size = int(_get("MAX_UPLOAD_SIZE", 1024 * 1024 * 1024))  # 1 GiB default
+        max_upload_size = int(_get("MAX_UPLOAD_SIZE", 1024 * 1024 * 1024))
         ui_page_size = int(_get("UI_PAGE_SIZE", 100))
         auth_max_attempts = int(_get("AUTH_MAX_ATTEMPTS", 5))
         auth_lockout_minutes = int(_get("AUTH_LOCKOUT_MINUTES", 15))
```
```diff
@@ -76,36 +92,57 @@ class AppConfig:
         secret_ttl_seconds = int(_get("SECRET_TTL_SECONDS", 300))
         stream_chunk_size = int(_get("STREAM_CHUNK_SIZE", 64 * 1024))
         multipart_min_part_size = int(_get("MULTIPART_MIN_PART_SIZE", 5 * 1024 * 1024))
+        lifecycle_enabled = _get("LIFECYCLE_ENABLED", "false").lower() in ("true", "1", "yes")
+        lifecycle_interval_seconds = int(_get("LIFECYCLE_INTERVAL_SECONDS", 3600))
         default_secret = "dev-secret-key"
         secret_key = str(_get("SECRET_KEY", default_secret))
 
         if not secret_key or secret_key == default_secret:
-            generated = secrets.token_urlsafe(32)
-            if secret_key == default_secret:
-                warnings.warn("Using insecure default SECRET_KEY. A random value has been generated; set SECRET_KEY for production", RuntimeWarning)
-            secret_key = generated
+            secret_file = storage_root / ".myfsio.sys" / "config" / ".secret"
+            if secret_file.exists():
+                secret_key = secret_file.read_text().strip()
+            else:
+                generated = secrets.token_urlsafe(32)
+                if secret_key == default_secret:
+                    warnings.warn("Using insecure default SECRET_KEY. A random value has been generated and persisted; set SECRET_KEY for production", RuntimeWarning)
+                try:
+                    secret_file.parent.mkdir(parents=True, exist_ok=True)
+                    secret_file.write_text(generated)
+                    try:
+                        os.chmod(secret_file, 0o600)
+                    except OSError:
+                        pass
+                    secret_key = generated
+                except OSError:
+                    secret_key = generated
 
         iam_env_override = "IAM_CONFIG" in overrides or "IAM_CONFIG" in os.environ
         bucket_policy_override = "BUCKET_POLICY_PATH" in overrides or "BUCKET_POLICY_PATH" in os.environ
 
-        default_iam_path = PROJECT_ROOT / "data" / ".myfsio.sys" / "config" / "iam.json"
-        default_bucket_policy_path = PROJECT_ROOT / "data" / ".myfsio.sys" / "config" / "bucket_policies.json"
+        default_iam_path = storage_root / ".myfsio.sys" / "config" / "iam.json"
+        default_bucket_policy_path = storage_root / ".myfsio.sys" / "config" / "bucket_policies.json"
 
         iam_config_path = Path(_get("IAM_CONFIG", default_iam_path)).resolve()
         bucket_policy_path = Path(_get("BUCKET_POLICY_PATH", default_bucket_policy_path)).resolve()
 
         iam_config_path = _prepare_config_file(
             iam_config_path,
-            legacy_path=None if iam_env_override else PROJECT_ROOT / "data" / "iam.json",
+            legacy_path=None if iam_env_override else storage_root / "iam.json",
         )
         bucket_policy_path = _prepare_config_file(
             bucket_policy_path,
-            legacy_path=None if bucket_policy_override else PROJECT_ROOT / "data" / "bucket_policies.json",
+            legacy_path=None if bucket_policy_override else storage_root / "bucket_policies.json",
         )
-        api_base_url = str(_get("API_BASE_URL", "http://127.0.0.1:5000"))
+        api_base_url = _get("API_BASE_URL", None)
+        if api_base_url:
+            api_base_url = str(api_base_url)
+
         aws_region = str(_get("AWS_REGION", "us-east-1"))
         aws_service = str(_get("AWS_SERVICE", "s3"))
         enforce_ui_policies = str(_get("UI_ENFORCE_BUCKET_POLICIES", "0")).lower() in {"1", "true", "yes", "on"}
         log_level = str(_get("LOG_LEVEL", "INFO")).upper()
-        log_dir = Path(_get("LOG_DIR", PROJECT_ROOT / "logs")).resolve()
+        log_to_file = str(_get("LOG_TO_FILE", "1")).lower() in {"1", "true", "yes", "on"}
+        log_dir = Path(_get("LOG_DIR", storage_root.parent / "logs")).resolve()
         log_dir.mkdir(parents=True, exist_ok=True)
         log_path = log_dir / str(_get("LOG_FILE", "app.log"))
         log_max_bytes = int(_get("LOG_MAX_BYTES", 5 * 1024 * 1024))
```
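The secret-key change above moves from regenerating a random key on every start to persisting one under the storage root, so sessions survive restarts. A self-contained sketch of that load-or-create pattern (the `load_or_create_secret` helper and paths are illustrative, not the project's API):

```python
import os
import secrets
import tempfile
from pathlib import Path


def load_or_create_secret(secret_file: Path) -> str:
    """Reuse a previously generated secret; otherwise generate and persist one."""
    if secret_file.exists():
        return secret_file.read_text().strip()
    generated = secrets.token_urlsafe(32)
    secret_file.parent.mkdir(parents=True, exist_ok=True)
    secret_file.write_text(generated)
    try:
        # Restrict the file to its owner; chmod can fail (e.g. on some
        # filesystems or Windows), in which case we keep the key anyway.
        os.chmod(secret_file, 0o600)
    except OSError:
        pass
    return generated


demo_dir = Path(tempfile.mkdtemp())
first = load_or_create_secret(demo_dir / "config" / ".secret")
second = load_or_create_secret(demo_dir / "config" / ".secret")
```

The second call reads the persisted file instead of generating a new key, which is exactly the restart behavior the diff is after.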
```diff
@@ -120,19 +157,19 @@ class AppConfig:
             return parts or default
 
         cors_origins = _csv(str(_get("CORS_ORIGINS", "*")), ["*"])
-        cors_methods = _csv(str(_get("CORS_METHODS", "GET,PUT,POST,DELETE,OPTIONS")), ["GET", "PUT", "POST", "DELETE", "OPTIONS"])
-        cors_allow_headers = _csv(str(_get("CORS_ALLOW_HEADERS", "Content-Type,X-Access-Key,X-Secret-Key,X-Amz-Algorithm,X-Amz-Credential,X-Amz-Date,X-Amz-Expires,X-Amz-SignedHeaders,X-Amz-Signature")), [
-            "Content-Type",
-            "X-Access-Key",
-            "X-Secret-Key",
-            "X-Amz-Algorithm",
-            "X-Amz-Credential",
-            "X-Amz-Date",
-            "X-Amz-Expires",
-            "X-Amz-SignedHeaders",
-            "X-Amz-Signature",
-        ])
+        cors_methods = _csv(str(_get("CORS_METHODS", "GET,PUT,POST,DELETE,OPTIONS,HEAD")), ["GET", "PUT", "POST", "DELETE", "OPTIONS", "HEAD"])
+        cors_allow_headers = _csv(str(_get("CORS_ALLOW_HEADERS", "*")), ["*"])
+        cors_expose_headers = _csv(str(_get("CORS_EXPOSE_HEADERS", "*")), ["*"])
         session_lifetime_days = int(_get("SESSION_LIFETIME_DAYS", 30))
+        bucket_stats_cache_ttl = int(_get("BUCKET_STATS_CACHE_TTL", 60))
+
+        encryption_enabled = str(_get("ENCRYPTION_ENABLED", "0")).lower() in {"1", "true", "yes", "on"}
+        encryption_keys_dir = storage_root / ".myfsio.sys" / "keys"
+        encryption_master_key_path = Path(_get("ENCRYPTION_MASTER_KEY_PATH", encryption_keys_dir / "master.key")).resolve()
+        kms_enabled = str(_get("KMS_ENABLED", "0")).lower() in {"1", "true", "yes", "on"}
+        kms_keys_path = Path(_get("KMS_KEYS_PATH", encryption_keys_dir / "kms_keys.json")).resolve()
+        default_encryption_algorithm = str(_get("DEFAULT_ENCRYPTION_ALGORITHM", "AES256"))
+        display_timezone = str(_get("DISPLAY_TIMEZONE", "UTC"))
 
         return cls(storage_root=storage_root,
                    max_upload_size=max_upload_size,
```
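This hunk leans on two small environment-parsing conventions used throughout `from_env`: truthy string sets for boolean flags and comma-separated lists with a fallback. A minimal sketch of both helpers (names `env_flag`/`env_csv` are hypothetical; the project uses an inner `_get`/`_csv` pair):

```python
import os

TRUTHY = {"1", "true", "yes", "on"}


def env_flag(name: str, default: str = "0") -> bool:
    """Interpret an environment variable as a boolean flag."""
    return os.getenv(name, default).lower() in TRUTHY


def env_csv(name: str, default: list[str]) -> list[str]:
    """Split a comma-separated environment variable, falling back to a default."""
    parts = [p.strip() for p in os.getenv(name, "").split(",") if p.strip()]
    return parts or default


os.environ["DEMO_FLAG"] = "Yes"
os.environ["DEMO_LIST"] = "GET, PUT ,"
```

Stripping empty segments means trailing commas and stray whitespace in the variable are harmless, and the `parts or default` idiom keeps an unset variable equivalent to the documented default.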
```diff
@@ -145,6 +182,7 @@ class AppConfig:
                    aws_service=aws_service,
                    ui_enforce_bucket_policies=enforce_ui_policies,
                    log_level=log_level,
+                   log_to_file=log_to_file,
                    log_path=log_path,
                    log_max_bytes=log_max_bytes,
                    log_backup_count=log_backup_count,
```
```diff
@@ -153,13 +191,110 @@ class AppConfig:
                    cors_origins=cors_origins,
                    cors_methods=cors_methods,
                    cors_allow_headers=cors_allow_headers,
+                   cors_expose_headers=cors_expose_headers,
                    session_lifetime_days=session_lifetime_days,
                    auth_max_attempts=auth_max_attempts,
                    auth_lockout_minutes=auth_lockout_minutes,
                    bulk_delete_max_keys=bulk_delete_max_keys,
                    secret_ttl_seconds=secret_ttl_seconds,
                    stream_chunk_size=stream_chunk_size,
-                   multipart_min_part_size=multipart_min_part_size)
+                   multipart_min_part_size=multipart_min_part_size,
+                   bucket_stats_cache_ttl=bucket_stats_cache_ttl,
+                   encryption_enabled=encryption_enabled,
+                   encryption_master_key_path=encryption_master_key_path,
+                   kms_enabled=kms_enabled,
+                   kms_keys_path=kms_keys_path,
+                   default_encryption_algorithm=default_encryption_algorithm,
+                   display_timezone=display_timezone,
+                   lifecycle_enabled=lifecycle_enabled,
+                   lifecycle_interval_seconds=lifecycle_interval_seconds)
+
+    def validate_and_report(self) -> list[str]:
+        """Validate configuration and return a list of warnings/issues.
+
+        Call this at startup to detect potential misconfigurations before
+        the application fully commits to running.
+        """
+        issues = []
+
+        try:
+            test_file = self.storage_root / ".write_test"
+            test_file.touch()
+            test_file.unlink()
+        except (OSError, PermissionError) as e:
+            issues.append(f"CRITICAL: STORAGE_ROOT '{self.storage_root}' is not writable: {e}")
+
+        storage_str = str(self.storage_root).lower()
+        if "/tmp" in storage_str or "\\temp" in storage_str or "appdata\\local\\temp" in storage_str:
+            issues.append(f"WARNING: STORAGE_ROOT '{self.storage_root}' appears to be a temporary directory. Data may be lost on reboot!")
+
+        try:
+            self.iam_config_path.relative_to(self.storage_root)
+        except ValueError:
+            issues.append(f"WARNING: IAM_CONFIG '{self.iam_config_path}' is outside STORAGE_ROOT '{self.storage_root}'. Consider setting IAM_CONFIG explicitly or ensuring paths are aligned.")
+
+        try:
+            self.bucket_policy_path.relative_to(self.storage_root)
+        except ValueError:
+            issues.append(f"WARNING: BUCKET_POLICY_PATH '{self.bucket_policy_path}' is outside STORAGE_ROOT '{self.storage_root}'. Consider setting BUCKET_POLICY_PATH explicitly.")
+
+        try:
+            self.log_path.parent.mkdir(parents=True, exist_ok=True)
+            test_log = self.log_path.parent / ".write_test"
+            test_log.touch()
+            test_log.unlink()
+        except (OSError, PermissionError) as e:
+            issues.append(f"WARNING: Log directory '{self.log_path.parent}' is not writable: {e}")
+
+        log_str = str(self.log_path).lower()
+        if "/tmp" in log_str or "\\temp" in log_str or "appdata\\local\\temp" in log_str:
+            issues.append(f"WARNING: LOG_DIR '{self.log_path.parent}' appears to be a temporary directory. Logs may be lost on reboot!")
+
+        if self.encryption_enabled:
+            try:
+                self.encryption_master_key_path.relative_to(self.storage_root)
+            except ValueError:
+                issues.append(f"WARNING: ENCRYPTION_MASTER_KEY_PATH '{self.encryption_master_key_path}' is outside STORAGE_ROOT. Ensure proper backup procedures.")
+
+        if self.kms_enabled:
+            try:
+                self.kms_keys_path.relative_to(self.storage_root)
+            except ValueError:
+                issues.append(f"WARNING: KMS_KEYS_PATH '{self.kms_keys_path}' is outside STORAGE_ROOT. Ensure proper backup procedures.")
+
+        if self.secret_key == "dev-secret-key":
+            issues.append("WARNING: Using default SECRET_KEY. Set SECRET_KEY environment variable for production.")
+
+        if "*" in self.cors_origins:
+            issues.append("INFO: CORS_ORIGINS is set to '*'. Consider restricting to specific domains in production.")
+
+        return issues
+
+    def print_startup_summary(self) -> None:
+        """Print a summary of the configuration at startup."""
+        print("\n" + "=" * 60)
+        print("MyFSIO Configuration Summary")
+        print("=" * 60)
+        print(f"  STORAGE_ROOT: {self.storage_root}")
+        print(f"  IAM_CONFIG: {self.iam_config_path}")
+        print(f"  BUCKET_POLICY: {self.bucket_policy_path}")
+        print(f"  LOG_PATH: {self.log_path}")
+        if self.api_base_url:
+            print(f"  API_BASE_URL: {self.api_base_url}")
+        if self.encryption_enabled:
+            print(f"  ENCRYPTION: Enabled (Master key: {self.encryption_master_key_path})")
+        if self.kms_enabled:
+            print(f"  KMS: Enabled (Keys: {self.kms_keys_path})")
+        print("=" * 60)
+
+        issues = self.validate_and_report()
+        if issues:
+            print("\nConfiguration Issues Detected:")
+            for issue in issues:
+                print(f"  • {issue}")
+            print()
+        else:
+            print("  ✓ Configuration validated successfully\n")
 
     def to_flask_config(self) -> Dict[str, Any]:
         return {
```
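Several checks in the new `validate_and_report` rely on the same idiom: `Path.relative_to` raises `ValueError` when one path is not contained in another. A small sketch of that containment test in isolation (`is_inside` is an illustrative name, not part of the diff):

```python
from pathlib import Path


def is_inside(child: Path, parent: Path) -> bool:
    """Return True if child lies under parent after resolving both paths."""
    try:
        # relative_to raises ValueError when child is not below parent.
        child.resolve().relative_to(parent.resolve())
        return True
    except ValueError:
        return False
```

Resolving both sides first keeps the comparison stable across `..` segments and relative paths, which matters here because the config paths may come straight from environment variables.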
```diff
@@ -179,7 +314,9 @@ class AppConfig:
             "SECRET_TTL_SECONDS": self.secret_ttl_seconds,
             "STREAM_CHUNK_SIZE": self.stream_chunk_size,
             "MULTIPART_MIN_PART_SIZE": self.multipart_min_part_size,
+            "BUCKET_STATS_CACHE_TTL": self.bucket_stats_cache_ttl,
             "LOG_LEVEL": self.log_level,
+            "LOG_TO_FILE": self.log_to_file,
             "LOG_FILE": str(self.log_path),
             "LOG_MAX_BYTES": self.log_max_bytes,
             "LOG_BACKUP_COUNT": self.log_backup_count,
```
```diff
@@ -188,5 +325,12 @@ class AppConfig:
             "CORS_ORIGINS": self.cors_origins,
             "CORS_METHODS": self.cors_methods,
             "CORS_ALLOW_HEADERS": self.cors_allow_headers,
+            "CORS_EXPOSE_HEADERS": self.cors_expose_headers,
             "SESSION_LIFETIME_DAYS": self.session_lifetime_days,
+            "ENCRYPTION_ENABLED": self.encryption_enabled,
+            "ENCRYPTION_MASTER_KEY_PATH": str(self.encryption_master_key_path),
+            "KMS_ENABLED": self.kms_enabled,
+            "KMS_KEYS_PATH": str(self.kms_keys_path),
+            "DEFAULT_ENCRYPTION_ALGORITHM": self.default_encryption_algorithm,
+            "DISPLAY_TIMEZONE": self.display_timezone,
         }
```
```diff
@@ -1,4 +1,3 @@
-"""Manage remote S3 connections."""
 from __future__ import annotations
 
 import json
```
**app/encrypted_storage.py** (new file, 278 lines)

```python
from __future__ import annotations

import io
from pathlib import Path
from typing import Any, BinaryIO, Dict, Optional

from .encryption import EncryptionManager, EncryptionMetadata, EncryptionError
from .storage import ObjectStorage, ObjectMeta, StorageError


class EncryptedObjectStorage:
    """Object storage with transparent server-side encryption.

    This class wraps ObjectStorage and provides transparent encryption/decryption
    of objects based on bucket encryption configuration.

    Encryption is applied when:
    1. Bucket has default encryption configured (SSE-S3 or SSE-KMS)
    2. Client explicitly requests encryption via headers

    The encryption metadata is stored alongside object metadata.
    """

    STREAMING_THRESHOLD = 64 * 1024

    def __init__(self, storage: ObjectStorage, encryption_manager: EncryptionManager):
        self.storage = storage
        self.encryption = encryption_manager

    @property
    def root(self) -> Path:
        return self.storage.root

    def _should_encrypt(self, bucket_name: str,
                        server_side_encryption: str | None = None) -> tuple[bool, str, str | None]:
        """Determine if object should be encrypted.

        Returns:
            Tuple of (should_encrypt, algorithm, kms_key_id)
        """
        if not self.encryption.enabled:
            return False, "", None

        if server_side_encryption:
            if server_side_encryption == "AES256":
                return True, "AES256", None
            elif server_side_encryption.startswith("aws:kms"):
                parts = server_side_encryption.split(":")
                kms_key_id = parts[2] if len(parts) > 2 else None
                return True, "aws:kms", kms_key_id

        try:
            encryption_config = self.storage.get_bucket_encryption(bucket_name)
            if encryption_config and encryption_config.get("Rules"):
                rule = encryption_config["Rules"][0]
                # AWS format: Rules[].ApplyServerSideEncryptionByDefault.SSEAlgorithm
                sse_default = rule.get("ApplyServerSideEncryptionByDefault", {})
                algorithm = sse_default.get("SSEAlgorithm", "AES256")
                kms_key_id = sse_default.get("KMSMasterKeyID")
                return True, algorithm, kms_key_id
        except StorageError:
            pass

        return False, "", None

    def _is_encrypted(self, metadata: Dict[str, str]) -> bool:
        """Check if object is encrypted based on its metadata."""
        return "x-amz-server-side-encryption" in metadata

    def put_object(
        self,
        bucket_name: str,
        object_key: str,
        stream: BinaryIO,
        *,
        metadata: Optional[Dict[str, str]] = None,
        server_side_encryption: Optional[str] = None,
        kms_key_id: Optional[str] = None,
    ) -> ObjectMeta:
        """Store an object, optionally with encryption.

        Args:
            bucket_name: Name of the bucket
            object_key: Key for the object
            stream: Binary stream of object data
            metadata: Optional user metadata
            server_side_encryption: Encryption algorithm ("AES256" or "aws:kms")
            kms_key_id: KMS key ID (for aws:kms encryption)

        Returns:
            ObjectMeta with object information

        Performance: Uses streaming encryption for large files to reduce memory usage.
        """
        should_encrypt, algorithm, detected_kms_key = self._should_encrypt(
            bucket_name, server_side_encryption
        )

        if kms_key_id is None:
            kms_key_id = detected_kms_key

        if should_encrypt:
            try:
                # Performance: Use streaming encryption to avoid loading entire file into memory
                encrypted_stream, enc_metadata = self.encryption.encrypt_stream(
                    stream,
                    algorithm=algorithm,
                    context={"bucket": bucket_name, "key": object_key},
                )

                combined_metadata = metadata.copy() if metadata else {}
                combined_metadata.update(enc_metadata.to_dict())

                result = self.storage.put_object(
                    bucket_name,
                    object_key,
                    encrypted_stream,
                    metadata=combined_metadata,
                )

                result.metadata = combined_metadata
                return result

            except EncryptionError as exc:
                raise StorageError(f"Encryption failed: {exc}") from exc
        else:
            return self.storage.put_object(
                bucket_name,
                object_key,
                stream,
                metadata=metadata,
            )

    def get_object_data(self, bucket_name: str, object_key: str) -> tuple[bytes, Dict[str, str]]:
        """Get object data, decrypting if necessary.

        Returns:
            Tuple of (data, metadata)

        Performance: Uses streaming decryption to reduce memory usage.
        """
        path = self.storage.get_object_path(bucket_name, object_key)
        metadata = self.storage.get_object_metadata(bucket_name, object_key)

        enc_metadata = EncryptionMetadata.from_dict(metadata)
        if enc_metadata:
            try:
                # Performance: Use streaming decryption to avoid loading entire file into memory
                with path.open("rb") as f:
                    decrypted_stream = self.encryption.decrypt_stream(f, enc_metadata)
                    data = decrypted_stream.read()
            except EncryptionError as exc:
                raise StorageError(f"Decryption failed: {exc}") from exc
        else:
            with path.open("rb") as f:
                data = f.read()

        clean_metadata = {
            k: v for k, v in metadata.items()
            if not k.startswith("x-amz-encryption")
            and k != "x-amz-encrypted-data-key"
        }

        return data, clean_metadata

    def get_object_stream(self, bucket_name: str, object_key: str) -> tuple[BinaryIO, Dict[str, str], int]:
        """Get object as a stream, decrypting if necessary.

        Returns:
            Tuple of (stream, metadata, original_size)
        """
        data, metadata = self.get_object_data(bucket_name, object_key)
        return io.BytesIO(data), metadata, len(data)

    def list_buckets(self):
        return self.storage.list_buckets()

    def bucket_exists(self, bucket_name: str) -> bool:
        return self.storage.bucket_exists(bucket_name)

    def create_bucket(self, bucket_name: str) -> None:
        return self.storage.create_bucket(bucket_name)

    def delete_bucket(self, bucket_name: str) -> None:
        return self.storage.delete_bucket(bucket_name)

    def bucket_stats(self, bucket_name: str, cache_ttl: int = 60):
        return self.storage.bucket_stats(bucket_name, cache_ttl)

    def list_objects(self, bucket_name: str, **kwargs):
        return self.storage.list_objects(bucket_name, **kwargs)

    def list_objects_all(self, bucket_name: str):
        return self.storage.list_objects_all(bucket_name)

    def get_object_path(self, bucket_name: str, object_key: str):
        return self.storage.get_object_path(bucket_name, object_key)

    def get_object_metadata(self, bucket_name: str, object_key: str):
        return self.storage.get_object_metadata(bucket_name, object_key)

    def delete_object(self, bucket_name: str, object_key: str) -> None:
        return self.storage.delete_object(bucket_name, object_key)

    def purge_object(self, bucket_name: str, object_key: str) -> None:
        return self.storage.purge_object(bucket_name, object_key)

    def is_versioning_enabled(self, bucket_name: str) -> bool:
        return self.storage.is_versioning_enabled(bucket_name)

    def set_bucket_versioning(self, bucket_name: str, enabled: bool) -> None:
        return self.storage.set_bucket_versioning(bucket_name, enabled)

    def get_bucket_tags(self, bucket_name: str):
        return self.storage.get_bucket_tags(bucket_name)

    def set_bucket_tags(self, bucket_name: str, tags):
        return self.storage.set_bucket_tags(bucket_name, tags)

    def get_bucket_cors(self, bucket_name: str):
        return self.storage.get_bucket_cors(bucket_name)

    def set_bucket_cors(self, bucket_name: str, rules):
        return self.storage.set_bucket_cors(bucket_name, rules)

    def get_bucket_encryption(self, bucket_name: str):
        return self.storage.get_bucket_encryption(bucket_name)

    def set_bucket_encryption(self, bucket_name: str, config_payload):
        return self.storage.set_bucket_encryption(bucket_name, config_payload)

    def get_bucket_lifecycle(self, bucket_name: str):
        return self.storage.get_bucket_lifecycle(bucket_name)

    def set_bucket_lifecycle(self, bucket_name: str, rules):
        return self.storage.set_bucket_lifecycle(bucket_name, rules)

    def get_object_tags(self, bucket_name: str, object_key: str):
        return self.storage.get_object_tags(bucket_name, object_key)

    def set_object_tags(self, bucket_name: str, object_key: str, tags):
        return self.storage.set_object_tags(bucket_name, object_key, tags)

    def delete_object_tags(self, bucket_name: str, object_key: str):
        return self.storage.delete_object_tags(bucket_name, object_key)

    def list_object_versions(self, bucket_name: str, object_key: str):
        return self.storage.list_object_versions(bucket_name, object_key)

    def restore_object_version(self, bucket_name: str, object_key: str, version_id: str):
        return self.storage.restore_object_version(bucket_name, object_key, version_id)

    def list_orphaned_objects(self, bucket_name: str):
        return self.storage.list_orphaned_objects(bucket_name)

    def initiate_multipart_upload(self, bucket_name: str, object_key: str, *, metadata=None) -> str:
        return self.storage.initiate_multipart_upload(bucket_name, object_key, metadata=metadata)

    def upload_multipart_part(self, bucket_name: str, upload_id: str, part_number: int, stream: BinaryIO) -> str:
        return self.storage.upload_multipart_part(bucket_name, upload_id, part_number, stream)

    def complete_multipart_upload(self, bucket_name: str, upload_id: str, ordered_parts):
        return self.storage.complete_multipart_upload(bucket_name, upload_id, ordered_parts)

    def abort_multipart_upload(self, bucket_name: str, upload_id: str) -> None:
        return self.storage.abort_multipart_upload(bucket_name, upload_id)

    def list_multipart_parts(self, bucket_name: str, upload_id: str):
        return self.storage.list_multipart_parts(bucket_name, upload_id)

    def get_bucket_quota(self, bucket_name: str):
        return self.storage.get_bucket_quota(bucket_name)

    def set_bucket_quota(self, bucket_name: str, *, max_bytes=None, max_objects=None):
        return self.storage.set_bucket_quota(bucket_name, max_bytes=max_bytes, max_objects=max_objects)

    def _compute_etag(self, path: Path) -> str:
        return self.storage._compute_etag(path)
```
505
app/encryption.py
Normal file
505
app/encryption.py
Normal file
@@ -0,0 +1,505 @@
|
|||||||
|
"""Encryption providers for server-side and client-side encryption."""
|
||||||
|
from __future__ import annotations
|
||||||
|
|
||||||
|
import base64
|
||||||
|
import io
|
||||||
|
import json
|
||||||
|
import secrets
|
||||||
|
from dataclasses import dataclass
|
||||||
|
from pathlib import Path
|
||||||
|
from typing import Any, BinaryIO, Dict, Generator, Optional
|
||||||
|
|
||||||
|
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
|
||||||
|
|
||||||
|
|
||||||
|
class EncryptionError(Exception):
|
||||||
|
"""Raised when encryption/decryption fails."""
|
||||||
|
|
||||||
|
|
||||||
|
@dataclass
|
||||||
|
class EncryptionResult:
|
||||||
|
"""Result of encrypting data."""
|
||||||
|
ciphertext: bytes
|
||||||
|
nonce: bytes
|
||||||
|
key_id: str
|
||||||
|
encrypted_data_key: bytes
|
||||||
|
|
||||||
|
|
||||||
|
@dataclass
|
||||||
|
class EncryptionMetadata:
|
||||||
|
"""Metadata stored with encrypted objects."""
|
||||||
|
algorithm: str
|
||||||
|
key_id: str
|
||||||
|
nonce: bytes
|
||||||
|
encrypted_data_key: bytes
|
||||||
|
|
||||||
|
def to_dict(self) -> Dict[str, str]:
|
||||||
|
return {
|
||||||
|
"x-amz-server-side-encryption": self.algorithm,
|
||||||
|
"x-amz-encryption-key-id": self.key_id,
|
||||||
|
"x-amz-encryption-nonce": base64.b64encode(self.nonce).decode(),
|
||||||
|
"x-amz-encrypted-data-key": base64.b64encode(self.encrypted_data_key).decode(),
|
||||||
|
}
|
||||||
|
|
||||||
|
@classmethod
|
||||||
|
def from_dict(cls, data: Dict[str, str]) -> Optional["EncryptionMetadata"]:
|
||||||
|
algorithm = data.get("x-amz-server-side-encryption")
|
||||||
|
if not algorithm:
|
||||||
|
return None
|
||||||
|
try:
|
||||||
|
return cls(
|
||||||
|
algorithm=algorithm,
|
||||||
|
key_id=data.get("x-amz-encryption-key-id", "local"),
|
||||||
|
nonce=base64.b64decode(data.get("x-amz-encryption-nonce", "")),
|
||||||
|
encrypted_data_key=base64.b64decode(data.get("x-amz-encrypted-data-key", "")),
|
||||||
|
)
|
||||||
|
except Exception:
|
||||||
|
return None
|
||||||
|
|
||||||
|
|
||||||
|
class EncryptionProvider:
    """Base class for encryption providers."""

    def encrypt(self, plaintext: bytes, context: Dict[str, str] | None = None) -> EncryptionResult:
        raise NotImplementedError

    def decrypt(self, ciphertext: bytes, nonce: bytes, encrypted_data_key: bytes,
                key_id: str, context: Dict[str, str] | None = None) -> bytes:
        raise NotImplementedError

    def generate_data_key(self) -> tuple[bytes, bytes]:
        """Generate a data key and its encrypted form.

        Returns:
            Tuple of (plaintext_key, encrypted_key)
        """
        raise NotImplementedError

class LocalKeyEncryption(EncryptionProvider):
    """SSE-S3 style encryption using a local master key.

    Uses envelope encryption:
    1. Generate a unique data key for each object
    2. Encrypt the data with the data key (AES-256-GCM)
    3. Encrypt the data key with the master key
    4. Store the encrypted data key alongside the ciphertext
    """

    KEY_ID = "local"

    def __init__(self, master_key_path: Path):
        self.master_key_path = master_key_path
        self._master_key: bytes | None = None

    @property
    def master_key(self) -> bytes:
        if self._master_key is None:
            self._master_key = self._load_or_create_master_key()
        return self._master_key

    def _load_or_create_master_key(self) -> bytes:
        """Load master key from file or generate a new one."""
        if self.master_key_path.exists():
            try:
                return base64.b64decode(self.master_key_path.read_text().strip())
            except Exception as exc:
                raise EncryptionError(f"Failed to load master key: {exc}") from exc

        key = secrets.token_bytes(32)
        try:
            self.master_key_path.parent.mkdir(parents=True, exist_ok=True)
            self.master_key_path.write_text(base64.b64encode(key).decode())
        except OSError as exc:
            raise EncryptionError(f"Failed to save master key: {exc}") from exc
        return key

    def _encrypt_data_key(self, data_key: bytes) -> bytes:
        """Encrypt the data key with the master key."""
        aesgcm = AESGCM(self.master_key)
        nonce = secrets.token_bytes(12)
        encrypted = aesgcm.encrypt(nonce, data_key, None)
        return nonce + encrypted

    def _decrypt_data_key(self, encrypted_data_key: bytes) -> bytes:
        """Decrypt the data key using the master key."""
        if len(encrypted_data_key) < 12 + 32 + 16:  # nonce + key + tag
            raise EncryptionError("Invalid encrypted data key")
        aesgcm = AESGCM(self.master_key)
        nonce = encrypted_data_key[:12]
        ciphertext = encrypted_data_key[12:]
        try:
            return aesgcm.decrypt(nonce, ciphertext, None)
        except Exception as exc:
            raise EncryptionError(f"Failed to decrypt data key: {exc}") from exc

    def generate_data_key(self) -> tuple[bytes, bytes]:
        """Generate a data key and its encrypted form."""
        plaintext_key = secrets.token_bytes(32)
        encrypted_key = self._encrypt_data_key(plaintext_key)
        return plaintext_key, encrypted_key

    def encrypt(self, plaintext: bytes, context: Dict[str, str] | None = None) -> EncryptionResult:
        """Encrypt data using envelope encryption."""
        data_key, encrypted_data_key = self.generate_data_key()

        aesgcm = AESGCM(data_key)
        nonce = secrets.token_bytes(12)
        ciphertext = aesgcm.encrypt(nonce, plaintext, None)

        return EncryptionResult(
            ciphertext=ciphertext,
            nonce=nonce,
            key_id=self.KEY_ID,
            encrypted_data_key=encrypted_data_key,
        )

    def decrypt(self, ciphertext: bytes, nonce: bytes, encrypted_data_key: bytes,
                key_id: str, context: Dict[str, str] | None = None) -> bytes:
        """Decrypt data using envelope encryption."""
        data_key = self._decrypt_data_key(encrypted_data_key)
        aesgcm = AESGCM(data_key)
        try:
            return aesgcm.decrypt(nonce, ciphertext, None)
        except Exception as exc:
            raise EncryptionError(f"Failed to decrypt data: {exc}") from exc

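The four envelope-encryption steps in the docstring above can be sketched end-to-end. This is a minimal stand-alone illustration of the *structure* only: a toy XOR keystream stands in for AES-256-GCM so it runs without the `cryptography` dependency, and `toy_cipher` is a hypothetical helper, not part of the module. It is deliberately insecure.

```python
import secrets

def toy_cipher(key: bytes, data: bytes) -> bytes:
    # Toy XOR "cipher" standing in for AES-256-GCM; NOT secure, illustration only.
    keystream = (key * (len(data) // len(key) + 1))[:len(data)]
    return bytes(a ^ b for a, b in zip(data, keystream))

# 1. Generate a unique data key for this object.
master_key = secrets.token_bytes(32)
data_key = secrets.token_bytes(32)

# 2. Encrypt the payload with the data key.
plaintext = b"hello object"
ciphertext = toy_cipher(data_key, plaintext)

# 3. Encrypt (wrap) the data key with the master key.
encrypted_data_key = toy_cipher(master_key, data_key)

# 4. Store ciphertext + encrypted_data_key together; decryption unwraps the key first.
recovered_key = toy_cipher(master_key, encrypted_data_key)
assert recovered_key == data_key
assert toy_cipher(recovered_key, ciphertext) == plaintext
```

The point of the shape: the master key only ever encrypts 32-byte data keys, so rotating it means rewrapping keys, not re-encrypting every object.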
class StreamingEncryptor:
    """Encrypts/decrypts data in streaming fashion for large files.

    For large files, we encrypt in chunks. Each chunk is encrypted with the
    same data key but a unique nonce derived from the base nonce + chunk index.
    """

    CHUNK_SIZE = 64 * 1024
    HEADER_SIZE = 4

    def __init__(self, provider: EncryptionProvider, chunk_size: int = CHUNK_SIZE):
        self.provider = provider
        self.chunk_size = chunk_size

    def _derive_chunk_nonce(self, base_nonce: bytes, chunk_index: int) -> bytes:
        """Derive a unique nonce for each chunk.

        Performance: Use direct byte manipulation instead of full int conversion.
        """
        # Performance: Only modify last 4 bytes instead of full 12-byte conversion
        return base_nonce[:8] + (chunk_index ^ int.from_bytes(base_nonce[8:], "big")).to_bytes(4, "big")

    def encrypt_stream(self, stream: BinaryIO,
                       context: Dict[str, str] | None = None) -> tuple[BinaryIO, EncryptionMetadata]:
        """Encrypt a stream and return encrypted stream + metadata.

        Performance: Writes chunks directly to output buffer instead of accumulating in list.
        """
        data_key, encrypted_data_key = self.provider.generate_data_key()
        base_nonce = secrets.token_bytes(12)

        aesgcm = AESGCM(data_key)
        # Performance: Write directly to BytesIO instead of accumulating chunks
        output = io.BytesIO()
        output.write(b"\x00\x00\x00\x00")  # Placeholder for chunk count
        chunk_index = 0

        while True:
            chunk = stream.read(self.chunk_size)
            if not chunk:
                break

            chunk_nonce = self._derive_chunk_nonce(base_nonce, chunk_index)
            encrypted_chunk = aesgcm.encrypt(chunk_nonce, chunk, None)

            # Write size prefix + encrypted chunk directly
            output.write(len(encrypted_chunk).to_bytes(self.HEADER_SIZE, "big"))
            output.write(encrypted_chunk)
            chunk_index += 1

        # Write actual chunk count to header
        output.seek(0)
        output.write(chunk_index.to_bytes(4, "big"))
        output.seek(0)

        metadata = EncryptionMetadata(
            algorithm="AES256",
            key_id=self.provider.KEY_ID if hasattr(self.provider, "KEY_ID") else "local",
            nonce=base_nonce,
            encrypted_data_key=encrypted_data_key,
        )

        return output, metadata

    def decrypt_stream(self, stream: BinaryIO, metadata: EncryptionMetadata) -> BinaryIO:
        """Decrypt a stream using the provided metadata.

        Performance: Writes chunks directly to output buffer instead of accumulating in list.
        """
        if isinstance(self.provider, LocalKeyEncryption):
            data_key = self.provider._decrypt_data_key(metadata.encrypted_data_key)
        else:
            raise EncryptionError("Unsupported provider for streaming decryption")

        aesgcm = AESGCM(data_key)
        base_nonce = metadata.nonce

        chunk_count_bytes = stream.read(4)
        if len(chunk_count_bytes) < 4:
            raise EncryptionError("Invalid encrypted stream: missing header")
        chunk_count = int.from_bytes(chunk_count_bytes, "big")

        # Performance: Write directly to BytesIO instead of accumulating chunks
        output = io.BytesIO()
        for chunk_index in range(chunk_count):
            size_bytes = stream.read(self.HEADER_SIZE)
            if len(size_bytes) < self.HEADER_SIZE:
                raise EncryptionError(f"Invalid encrypted stream: truncated at chunk {chunk_index}")
            chunk_size = int.from_bytes(size_bytes, "big")

            encrypted_chunk = stream.read(chunk_size)
            if len(encrypted_chunk) < chunk_size:
                raise EncryptionError(f"Invalid encrypted stream: incomplete chunk {chunk_index}")

            chunk_nonce = self._derive_chunk_nonce(base_nonce, chunk_index)
            try:
                decrypted_chunk = aesgcm.decrypt(chunk_nonce, encrypted_chunk, None)
                output.write(decrypted_chunk)  # Write directly instead of appending to list
            except Exception as exc:
                raise EncryptionError(f"Failed to decrypt chunk {chunk_index}: {exc}") from exc

        output.seek(0)
        return output

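The stream format above is: a 4-byte big-endian chunk count, then for each chunk a 4-byte size prefix followed by the encrypted chunk, with each chunk's nonce derived by XORing the chunk index into the last 4 bytes of the 12-byte base nonce. A pure-stdlib sketch of that framing and derivation (`frame`/`unframe` are illustrative helpers, not the module's API; the derivation line is copied from `_derive_chunk_nonce`):

```python
import io
import secrets

def derive_chunk_nonce(base_nonce: bytes, chunk_index: int) -> bytes:
    # Same derivation as StreamingEncryptor: XOR the index into the last 4 bytes.
    return base_nonce[:8] + (chunk_index ^ int.from_bytes(base_nonce[8:], "big")).to_bytes(4, "big")

def frame(chunks: list) -> bytes:
    out = io.BytesIO()
    out.write(len(chunks).to_bytes(4, "big"))       # chunk-count header
    for chunk in chunks:
        out.write(len(chunk).to_bytes(4, "big"))    # per-chunk size prefix
        out.write(chunk)
    return out.getvalue()

def unframe(blob: bytes) -> list:
    stream = io.BytesIO(blob)
    count = int.from_bytes(stream.read(4), "big")
    return [stream.read(int.from_bytes(stream.read(4), "big")) for _ in range(count)]

base = secrets.token_bytes(12)
nonces = {derive_chunk_nonce(base, i) for i in range(1000)}
assert len(nonces) == 1000                          # unique nonce per chunk index
assert unframe(frame([b"aa", b"bbb"])) == [b"aa", b"bbb"]
```

Reusing one data key across chunks is safe with AES-GCM only because every (key, nonce) pair is unique, which is what the XOR-with-index derivation guarantees for up to 2^32 chunks.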
class EncryptionManager:
    """Manages encryption providers and operations."""

    def __init__(self, config: Dict[str, Any]):
        self.config = config
        self._local_provider: LocalKeyEncryption | None = None
        self._kms_provider: Any = None  # Set by KMS module
        self._streaming_encryptor: StreamingEncryptor | None = None

    @property
    def enabled(self) -> bool:
        return self.config.get("encryption_enabled", False)

    @property
    def default_algorithm(self) -> str:
        return self.config.get("default_encryption_algorithm", "AES256")

    def get_local_provider(self) -> LocalKeyEncryption:
        if self._local_provider is None:
            key_path = Path(self.config.get("encryption_master_key_path", "data/.myfsio.sys/keys/master.key"))
            self._local_provider = LocalKeyEncryption(key_path)
        return self._local_provider

    def set_kms_provider(self, kms_provider: Any) -> None:
        """Set the KMS provider (injected from kms module)."""
        self._kms_provider = kms_provider

    def get_provider(self, algorithm: str, kms_key_id: str | None = None) -> EncryptionProvider:
        """Get the appropriate encryption provider for the algorithm."""
        if algorithm == "AES256":
            return self.get_local_provider()
        elif algorithm == "aws:kms":
            if self._kms_provider is None:
                raise EncryptionError("KMS is not configured")
            return self._kms_provider.get_provider(kms_key_id)
        else:
            raise EncryptionError(f"Unsupported encryption algorithm: {algorithm}")

    def get_streaming_encryptor(self) -> StreamingEncryptor:
        if self._streaming_encryptor is None:
            self._streaming_encryptor = StreamingEncryptor(self.get_local_provider())
        return self._streaming_encryptor

    def encrypt_object(self, data: bytes, algorithm: str = "AES256",
                       kms_key_id: str | None = None,
                       context: Dict[str, str] | None = None) -> tuple[bytes, EncryptionMetadata]:
        """Encrypt object data."""
        provider = self.get_provider(algorithm, kms_key_id)
        result = provider.encrypt(data, context)

        metadata = EncryptionMetadata(
            algorithm=algorithm,
            key_id=result.key_id,
            nonce=result.nonce,
            encrypted_data_key=result.encrypted_data_key,
        )

        return result.ciphertext, metadata

    def decrypt_object(self, ciphertext: bytes, metadata: EncryptionMetadata,
                       context: Dict[str, str] | None = None) -> bytes:
        """Decrypt object data."""
        provider = self.get_provider(metadata.algorithm, metadata.key_id)
        return provider.decrypt(
            ciphertext,
            metadata.nonce,
            metadata.encrypted_data_key,
            metadata.key_id,
            context,
        )

    def encrypt_stream(self, stream: BinaryIO, algorithm: str = "AES256",
                       context: Dict[str, str] | None = None) -> tuple[BinaryIO, EncryptionMetadata]:
        """Encrypt a stream for large files."""
        encryptor = self.get_streaming_encryptor()
        return encryptor.encrypt_stream(stream, context)

    def decrypt_stream(self, stream: BinaryIO, metadata: EncryptionMetadata) -> BinaryIO:
        """Decrypt a stream."""
        encryptor = self.get_streaming_encryptor()
        return encryptor.decrypt_stream(stream, metadata)

class SSECEncryption(EncryptionProvider):
    """SSE-C: Server-Side Encryption with Customer-Provided Keys.

    The client provides the encryption key with each request.
    Server encrypts/decrypts but never stores the key.

    Required headers for PUT:
    - x-amz-server-side-encryption-customer-algorithm: AES256
    - x-amz-server-side-encryption-customer-key: Base64-encoded 256-bit key
    - x-amz-server-side-encryption-customer-key-MD5: Base64-encoded MD5 of key
    """

    KEY_ID = "customer-provided"

    def __init__(self, customer_key: bytes):
        if len(customer_key) != 32:
            raise EncryptionError("Customer key must be exactly 256 bits (32 bytes)")
        self.customer_key = customer_key

    @classmethod
    def from_headers(cls, headers: Dict[str, str]) -> "SSECEncryption":
        algorithm = headers.get("x-amz-server-side-encryption-customer-algorithm", "")
        if algorithm.upper() != "AES256":
            raise EncryptionError(f"Unsupported SSE-C algorithm: {algorithm}. Only AES256 is supported.")

        key_b64 = headers.get("x-amz-server-side-encryption-customer-key", "")
        if not key_b64:
            raise EncryptionError("Missing x-amz-server-side-encryption-customer-key header")

        key_md5_b64 = headers.get("x-amz-server-side-encryption-customer-key-md5", "")

        try:
            customer_key = base64.b64decode(key_b64)
        except Exception as e:
            raise EncryptionError(f"Invalid base64 in customer key: {e}") from e

        if len(customer_key) != 32:
            raise EncryptionError(f"Customer key must be 256 bits, got {len(customer_key) * 8} bits")

        if key_md5_b64:
            import hashlib
            expected_md5 = base64.b64encode(hashlib.md5(customer_key).digest()).decode()
            if key_md5_b64 != expected_md5:
                raise EncryptionError("Customer key MD5 mismatch")

        return cls(customer_key)

    def encrypt(self, plaintext: bytes, context: Dict[str, str] | None = None) -> EncryptionResult:
        aesgcm = AESGCM(self.customer_key)
        nonce = secrets.token_bytes(12)
        ciphertext = aesgcm.encrypt(nonce, plaintext, None)

        return EncryptionResult(
            ciphertext=ciphertext,
            nonce=nonce,
            key_id=self.KEY_ID,
            encrypted_data_key=b"",
        )

    def decrypt(self, ciphertext: bytes, nonce: bytes, encrypted_data_key: bytes,
                key_id: str, context: Dict[str, str] | None = None) -> bytes:
        aesgcm = AESGCM(self.customer_key)
        try:
            return aesgcm.decrypt(nonce, ciphertext, None)
        except Exception as exc:
            raise EncryptionError(f"SSE-C decryption failed: {exc}") from exc

    def generate_data_key(self) -> tuple[bytes, bytes]:
        return self.customer_key, b""

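A client driving the SSE-C path above must send the base64 key plus its base64-encoded MD5 digest, which is exactly what `from_headers` verifies. A minimal stdlib sketch of building those three headers for a fresh throwaway 256-bit key (the header names are the ones the class reads; the `headers` dict itself is illustrative):

```python
import base64
import hashlib
import secrets

key = secrets.token_bytes(32)  # exactly 256 bits, as SSECEncryption requires
headers = {
    "x-amz-server-side-encryption-customer-algorithm": "AES256",
    "x-amz-server-side-encryption-customer-key": base64.b64encode(key).decode(),
    "x-amz-server-side-encryption-customer-key-md5":
        base64.b64encode(hashlib.md5(key).digest()).decode(),
}

# The server-side check mirrors from_headers: decode, length-check, MD5-compare.
decoded = base64.b64decode(headers["x-amz-server-side-encryption-customer-key"])
assert len(decoded) == 32
expected_md5 = base64.b64encode(hashlib.md5(decoded).digest()).decode()
assert headers["x-amz-server-side-encryption-customer-key-md5"] == expected_md5
```

The MD5 here is an integrity check on the key in transit, not a security primitive; a transposed character in the base64 key fails the comparison instead of silently encrypting with the wrong key.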
@dataclass
class SSECMetadata:
    algorithm: str = "AES256"
    nonce: bytes = b""
    key_md5: str = ""

    def to_dict(self) -> Dict[str, str]:
        return {
            "x-amz-server-side-encryption-customer-algorithm": self.algorithm,
            "x-amz-encryption-nonce": base64.b64encode(self.nonce).decode(),
            "x-amz-server-side-encryption-customer-key-MD5": self.key_md5,
        }

    @classmethod
    def from_dict(cls, data: Dict[str, str]) -> Optional["SSECMetadata"]:
        algorithm = data.get("x-amz-server-side-encryption-customer-algorithm")
        if not algorithm:
            return None
        try:
            nonce = base64.b64decode(data.get("x-amz-encryption-nonce", ""))
            return cls(
                algorithm=algorithm,
                nonce=nonce,
                key_md5=data.get("x-amz-server-side-encryption-customer-key-MD5", ""),
            )
        except Exception:
            return None

class ClientEncryptionHelper:
    """Helpers for client-side encryption.

    Client-side encryption is performed by the client, but this helper
    provides key generation and materials for clients that need them.
    """

    @staticmethod
    def generate_client_key() -> Dict[str, str]:
        """Generate a new client encryption key."""
        from datetime import datetime, timezone
        key = secrets.token_bytes(32)
        return {
            "key": base64.b64encode(key).decode(),
            "algorithm": "AES-256-GCM",
            "created_at": datetime.now(timezone.utc).isoformat(),
        }

    @staticmethod
    def encrypt_with_key(plaintext: bytes, key_b64: str) -> Dict[str, str]:
        """Encrypt data with a client-provided key."""
        key = base64.b64decode(key_b64)
        if len(key) != 32:
            raise EncryptionError("Key must be 256 bits (32 bytes)")

        aesgcm = AESGCM(key)
        nonce = secrets.token_bytes(12)
        ciphertext = aesgcm.encrypt(nonce, plaintext, None)

        return {
            "ciphertext": base64.b64encode(ciphertext).decode(),
            "nonce": base64.b64encode(nonce).decode(),
            "algorithm": "AES-256-GCM",
        }

    @staticmethod
    def decrypt_with_key(ciphertext_b64: str, nonce_b64: str, key_b64: str) -> bytes:
        """Decrypt data with a client-provided key."""
        key = base64.b64decode(key_b64)
        nonce = base64.b64decode(nonce_b64)
        ciphertext = base64.b64decode(ciphertext_b64)

        if len(key) != 32:
            raise EncryptionError("Key must be 256 bits (32 bytes)")

        aesgcm = AESGCM(key)
        try:
            return aesgcm.decrypt(nonce, ciphertext, None)
        except Exception as exc:
            raise EncryptionError(f"Decryption failed: {exc}") from exc
186	app/errors.py	Normal file
@@ -0,0 +1,186 @@
from __future__ import annotations

import logging
from dataclasses import dataclass, field
from typing import Optional, Dict, Any
from xml.etree.ElementTree import Element, SubElement, tostring

from flask import Response, jsonify, request, flash, redirect, url_for, g

logger = logging.getLogger(__name__)


@dataclass
class AppError(Exception):
    """Base application error with multi-format response support."""
    code: str
    message: str
    status_code: int = 500
    details: Optional[Dict[str, Any]] = field(default=None)

    def __post_init__(self):
        super().__init__(self.message)

    def to_xml_response(self) -> Response:
        """Convert to S3 API XML error response."""
        error = Element("Error")
        SubElement(error, "Code").text = self.code
        SubElement(error, "Message").text = self.message
        request_id = getattr(g, 'request_id', None) if g else None
        SubElement(error, "RequestId").text = request_id or "unknown"
        xml_bytes = tostring(error, encoding="utf-8")
        return Response(xml_bytes, status=self.status_code, mimetype="application/xml")

    def to_json_response(self) -> tuple[Response, int]:
        """Convert to JSON error response for UI AJAX calls."""
        payload: Dict[str, Any] = {
            "success": False,
            "error": {
                "code": self.code,
                "message": self.message
            }
        }
        if self.details:
            payload["error"]["details"] = self.details
        return jsonify(payload), self.status_code

    def to_flash_message(self) -> str:
        """Convert to user-friendly flash message."""
        return self.message


@dataclass
class BucketNotFoundError(AppError):
    """Bucket does not exist."""
    code: str = "NoSuchBucket"
    message: str = "The specified bucket does not exist"
    status_code: int = 404


@dataclass
class BucketAlreadyExistsError(AppError):
    """Bucket already exists."""
    code: str = "BucketAlreadyExists"
    message: str = "The requested bucket name is not available"
    status_code: int = 409


@dataclass
class BucketNotEmptyError(AppError):
    """Bucket is not empty."""
    code: str = "BucketNotEmpty"
    message: str = "The bucket you tried to delete is not empty"
    status_code: int = 409


@dataclass
class ObjectNotFoundError(AppError):
    """Object does not exist."""
    code: str = "NoSuchKey"
    message: str = "The specified key does not exist"
    status_code: int = 404


@dataclass
class InvalidObjectKeyError(AppError):
    """Invalid object key."""
    code: str = "InvalidKey"
    message: str = "The specified key is not valid"
    status_code: int = 400


@dataclass
class AccessDeniedError(AppError):
    """Access denied."""
    code: str = "AccessDenied"
    message: str = "Access Denied"
    status_code: int = 403


@dataclass
class InvalidCredentialsError(AppError):
    """Invalid credentials."""
    code: str = "InvalidAccessKeyId"
    message: str = "The access key ID you provided does not exist"
    status_code: int = 403


@dataclass
class MalformedRequestError(AppError):
    """Malformed request."""
    code: str = "MalformedXML"
    message: str = "The XML you provided was not well-formed"
    status_code: int = 400


@dataclass
class InvalidArgumentError(AppError):
    """Invalid argument."""
    code: str = "InvalidArgument"
    message: str = "Invalid argument"
    status_code: int = 400


@dataclass
class EntityTooLargeError(AppError):
    """Entity too large."""
    code: str = "EntityTooLarge"
    message: str = "Your proposed upload exceeds the maximum allowed size"
    status_code: int = 413


@dataclass
class QuotaExceededAppError(AppError):
    """Bucket quota exceeded."""
    code: str = "QuotaExceeded"
    message: str = "The bucket quota has been exceeded"
    status_code: int = 403
    quota: Optional[Dict[str, Any]] = None
    usage: Optional[Dict[str, int]] = None

    def __post_init__(self):
        if self.quota or self.usage:
            self.details = {}
            if self.quota:
                self.details["quota"] = self.quota
            if self.usage:
                self.details["usage"] = self.usage
        super().__post_init__()


def handle_app_error(error: AppError) -> Response:
    """Handle application errors with appropriate response format."""
    log_extra = {"error_code": error.code}
    if error.details:
        log_extra["details"] = error.details

    logger.error(f"{error.code}: {error.message}", extra=log_extra)

    if request.path.startswith('/ui'):
        wants_json = (
            request.is_json or
            request.headers.get('X-Requested-With') == 'XMLHttpRequest' or
            'application/json' in request.accept_mimetypes.values()
        )
        if wants_json:
            return error.to_json_response()
        flash(error.to_flash_message(), 'danger')
        referrer = request.referrer
        if referrer and request.host in referrer:
            return redirect(referrer)
        return redirect(url_for('ui.buckets_overview'))
    else:
        return error.to_xml_response()


def register_error_handlers(app):
    """Register error handlers with a Flask app."""
    app.register_error_handler(AppError, handle_app_error)

    for error_class in [
        BucketNotFoundError, BucketAlreadyExistsError, BucketNotEmptyError,
        ObjectNotFoundError, InvalidObjectKeyError,
        AccessDeniedError, InvalidCredentialsError,
        MalformedRequestError, InvalidArgumentError, EntityTooLargeError,
        QuotaExceededAppError,
    ]:
        app.register_error_handler(error_class, handle_app_error)
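The XML shape produced by `to_xml_response` above can be exercised without Flask, since the serialization itself is plain ElementTree. A stand-alone sketch of the same `<Error>` document for a NoSuchKey case (`error_xml` is a hypothetical helper mirroring the method, not part of the diff):

```python
from xml.etree.ElementTree import Element, SubElement, tostring

def error_xml(code: str, message: str, request_id: str = "unknown") -> bytes:
    # Mirrors AppError.to_xml_response: <Error><Code/><Message/><RequestId/></Error>
    error = Element("Error")
    SubElement(error, "Code").text = code
    SubElement(error, "Message").text = message
    SubElement(error, "RequestId").text = request_id
    return tostring(error, encoding="utf-8")

body = error_xml("NoSuchKey", "The specified key does not exist")
assert b"<Code>NoSuchKey</Code>" in body
assert b"<RequestId>unknown</RequestId>" in body
```

This is the body an S3 client's error parser reads; the matching HTTP status (404 here) travels separately on the response.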
@@ -1,10 +1,16 @@
 """Application-wide extension instances."""
+from flask import g
 from flask_limiter import Limiter
 from flask_limiter.util import get_remote_address
 from flask_wtf import CSRFProtect

+def get_rate_limit_key():
+    """Generate rate limit key based on authenticated user."""
+    if hasattr(g, 'principal') and g.principal:
+        return g.principal.access_key
+    return get_remote_address()
+
 # Shared rate limiter instance; configured in app factory.
-limiter = Limiter(key_func=get_remote_address)
+limiter = Limiter(key_func=get_rate_limit_key)

 # Global CSRF protection for UI routes.
 csrf = CSRFProtect()
154	app/iam.py
@@ -1,21 +1,21 @@
 """Lightweight IAM-style user and policy management."""
 from __future__ import annotations

 import json
 import math
 import secrets
+import time
 from collections import deque
 from dataclasses import dataclass
-from datetime import datetime, timedelta
+from datetime import datetime, timedelta, timezone
 from pathlib import Path
-from typing import Any, Deque, Dict, Iterable, List, Optional, Sequence, Set
+from typing import Any, Deque, Dict, Iterable, List, Optional, Sequence, Set, Tuple


 class IamError(RuntimeError):
     """Raised when authentication or authorization fails."""


-S3_ACTIONS = {"list", "read", "write", "delete", "share", "policy"}
+S3_ACTIONS = {"list", "read", "write", "delete", "share", "policy", "replication"}
 IAM_ACTIONS = {
     "iam:list_users",
     "iam:create_user",
@@ -29,19 +29,48 @@ ACTION_ALIASES = {
     "list": "list",
     "s3:listbucket": "list",
     "s3:listallmybuckets": "list",
+    "s3:listbucketversions": "list",
+    "s3:listmultipartuploads": "list",
+    "s3:listparts": "list",
     "read": "read",
     "s3:getobject": "read",
     "s3:getobjectversion": "read",
+    "s3:getobjecttagging": "read",
+    "s3:getobjectversiontagging": "read",
+    "s3:getobjectacl": "read",
+    "s3:getbucketversioning": "read",
+    "s3:headobject": "read",
+    "s3:headbucket": "read",
     "write": "write",
     "s3:putobject": "write",
     "s3:createbucket": "write",
+    "s3:putobjecttagging": "write",
+    "s3:putbucketversioning": "write",
+    "s3:createmultipartupload": "write",
+    "s3:uploadpart": "write",
+    "s3:completemultipartupload": "write",
+    "s3:abortmultipartupload": "write",
+    "s3:copyobject": "write",
     "delete": "delete",
     "s3:deleteobject": "delete",
+    "s3:deleteobjectversion": "delete",
     "s3:deletebucket": "delete",
+    "s3:deleteobjecttagging": "delete",
     "share": "share",
     "s3:putobjectacl": "share",
+    "s3:putbucketacl": "share",
+    "s3:getbucketacl": "share",
     "policy": "policy",
     "s3:putbucketpolicy": "policy",
+    "s3:getbucketpolicy": "policy",
+    "s3:deletebucketpolicy": "policy",
+    "replication": "replication",
+    "s3:getreplicationconfiguration": "replication",
+    "s3:putreplicationconfiguration": "replication",
+    "s3:deletereplicationconfiguration": "replication",
+    "s3:replicateobject": "replication",
+    "s3:replicatetags": "replication",
+    "s3:replicatedelete": "replication",
     "iam:listusers": "iam:list_users",
     "iam:createuser": "iam:create_user",
     "iam:deleteuser": "iam:delete_user",
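The alias table above lets policies name either a short verb ("read") or a full S3 action ("s3:GetObject"); normalization is just a case-insensitive dict lookup. A small sketch with a few entries from the table (`normalize` is a hypothetical helper for illustration, not a function in the diff):

```python
from typing import Optional

# A few entries lifted from the ACTION_ALIASES table above.
ACTION_ALIASES = {
    "read": "read",
    "s3:getobject": "read",
    "s3:putreplicationconfiguration": "replication",
}

def normalize(action: str) -> Optional[str]:
    # Case-insensitive lookup into the alias table; None for unknown actions.
    return ACTION_ALIASES.get(action.lower())

assert normalize("s3:GetObject") == "read"
assert normalize("s3:PutReplicationConfiguration") == "replication"
assert normalize("s3:NotARealAction") is None
```

Collapsing many S3 action names onto seven internal verbs keeps policy evaluation a set-membership test against `S3_ACTIONS` rather than a per-action switch.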
@@ -77,10 +106,29 @@ class IamService:
         self._users: Dict[str, Dict[str, Any]] = {}
         self._raw_config: Dict[str, Any] = {}
         self._failed_attempts: Dict[str, Deque[datetime]] = {}
+        self._last_load_time = 0.0
+        self._credential_cache: Dict[str, Tuple[str, Principal, float]] = {}
+        self._cache_ttl = 60.0
+        self._last_stat_check = 0.0
+        self._stat_check_interval = 1.0
+        self._sessions: Dict[str, Dict[str, Any]] = {}
         self._load()

-    # ---------------------- authz helpers ----------------------
+    def _maybe_reload(self) -> None:
+        """Reload configuration if the file has changed on disk."""
+        now = time.time()
+        if now - self._last_stat_check < self._stat_check_interval:
+            return
+        self._last_stat_check = now
+        try:
+            if self.config_path.stat().st_mtime > self._last_load_time:
+                self._load()
+                self._credential_cache.clear()
+        except OSError:
+            pass
+
     def authenticate(self, access_key: str, secret_key: str) -> Principal:
+        self._maybe_reload()
         access_key = (access_key or "").strip()
         secret_key = (secret_key or "").strip()
         if not access_key or not secret_key:
@@ -102,7 +150,7 @@ class IamService:
             return
         attempts = self._failed_attempts.setdefault(access_key, deque())
         self._prune_attempts(attempts)
-        attempts.append(datetime.now())
+        attempts.append(datetime.now(timezone.utc))
 
     def _clear_failed_attempts(self, access_key: str) -> None:
         if not access_key:
@@ -110,7 +158,7 @@ class IamService:
         self._failed_attempts.pop(access_key, None)
 
     def _prune_attempts(self, attempts: Deque[datetime]) -> None:
-        cutoff = datetime.now() - self.auth_lockout_window
+        cutoff = datetime.now(timezone.utc) - self.auth_lockout_window
         while attempts and attempts[0] < cutoff:
             attempts.popleft()
 
@@ -131,19 +179,73 @@ class IamService:
         if len(attempts) < self.auth_max_attempts:
             return 0
         oldest = attempts[0]
-        elapsed = (datetime.now() - oldest).total_seconds()
+        elapsed = (datetime.now(timezone.utc) - oldest).total_seconds()
         return int(max(0, self.auth_lockout_window.total_seconds() - elapsed))
 
-    def principal_for_key(self, access_key: str) -> Principal:
+    def create_session_token(self, access_key: str, duration_seconds: int = 3600) -> str:
+        """Create a temporary session token for an access key."""
+        self._maybe_reload()
         record = self._users.get(access_key)
         if not record:
             raise IamError("Unknown access key")
-        return self._build_principal(access_key, record)
+        self._cleanup_expired_sessions()
+        token = secrets.token_urlsafe(32)
+        expires_at = time.time() + duration_seconds
+        self._sessions[token] = {
+            "access_key": access_key,
+            "expires_at": expires_at,
+        }
+        return token
+
+    def validate_session_token(self, access_key: str, session_token: str) -> bool:
+        """Validate a session token for an access key."""
+        session = self._sessions.get(session_token)
+        if not session:
+            return False
+        if session["access_key"] != access_key:
+            return False
+        if time.time() > session["expires_at"]:
+            del self._sessions[session_token]
+            return False
+        return True
+
+    def _cleanup_expired_sessions(self) -> None:
+        """Remove expired session tokens."""
+        now = time.time()
+        expired = [token for token, data in self._sessions.items() if now > data["expires_at"]]
+        for token in expired:
+            del self._sessions[token]
+
+    def principal_for_key(self, access_key: str) -> Principal:
+        now = time.time()
+        cached = self._credential_cache.get(access_key)
+        if cached:
+            secret, principal, cached_time = cached
+            if now - cached_time < self._cache_ttl:
+                return principal
+
+        self._maybe_reload()
+        record = self._users.get(access_key)
+        if not record:
+            raise IamError("Unknown access key")
+        principal = self._build_principal(access_key, record)
+        self._credential_cache[access_key] = (record["secret_key"], principal, now)
+        return principal
 
     def secret_for_key(self, access_key: str) -> str:
+        now = time.time()
+        cached = self._credential_cache.get(access_key)
+        if cached:
+            secret, principal, cached_time = cached
+            if now - cached_time < self._cache_ttl:
+                return secret
+
+        self._maybe_reload()
         record = self._users.get(access_key)
         if not record:
             raise IamError("Unknown access key")
+        principal = self._build_principal(access_key, record)
+        self._credential_cache[access_key] = (record["secret_key"], principal, now)
         return record["secret_key"]
 
     def authorize(self, principal: Principal, bucket_name: str | None, action: str) -> None:
@@ -169,7 +271,6 @@ class IamService:
                 return True
         return False
 
     # ---------------------- management helpers ----------------------
-
     def list_users(self) -> List[Dict[str, Any]]:
         listing: List[Dict[str, Any]] = []
         for access_key, record in self._users.items():
@@ -242,9 +343,9 @@ class IamService:
         self._save()
         self._load()
 
     # ---------------------- config helpers ----------------------
-
     def _load(self) -> None:
         try:
+            self._last_load_time = self.config_path.stat().st_mtime
            content = self.config_path.read_text(encoding='utf-8')
             raw = json.loads(content)
         except FileNotFoundError:
@@ -287,7 +388,6 @@ class IamService:
         except (OSError, PermissionError) as e:
             raise IamError(f"Cannot save IAM config: {e}")
 
     # ---------------------- insight helpers ----------------------
-
     def config_summary(self) -> Dict[str, Any]:
         return {
             "path": str(self.config_path),
@@ -396,9 +496,33 @@ class IamService:
             raise IamError("User not found")
 
     def get_secret_key(self, access_key: str) -> str | None:
+        now = time.time()
+        cached = self._credential_cache.get(access_key)
+        if cached:
+            secret, principal, cached_time = cached
+            if now - cached_time < self._cache_ttl:
+                return secret
+
+        self._maybe_reload()
         record = self._users.get(access_key)
-        return record["secret_key"] if record else None
+        if record:
+            principal = self._build_principal(access_key, record)
+            self._credential_cache[access_key] = (record["secret_key"], principal, now)
+            return record["secret_key"]
+        return None
 
     def get_principal(self, access_key: str) -> Principal | None:
+        now = time.time()
+        cached = self._credential_cache.get(access_key)
+        if cached:
+            secret, principal, cached_time = cached
+            if now - cached_time < self._cache_ttl:
+                return principal
+
+        self._maybe_reload()
         record = self._users.get(access_key)
-        return self._build_principal(access_key, record) if record else None
+        if record:
+            principal = self._build_principal(access_key, record)
+            self._credential_cache[access_key] = (record["secret_key"], principal, now)
+            return principal
+        return None
363	app/kms.py	Normal file
@@ -0,0 +1,363 @@
+from __future__ import annotations
+
+import base64
+import json
+import secrets
+import uuid
+from dataclasses import dataclass, field
+from datetime import datetime, timezone
+from pathlib import Path
+from typing import Any, Dict, List, Optional
+
+from cryptography.hazmat.primitives.ciphers.aead import AESGCM
+
+from .encryption import EncryptionError, EncryptionProvider, EncryptionResult
+
+
+@dataclass
+class KMSKey:
+    """Represents a KMS encryption key."""
+    key_id: str
+    description: str
+    created_at: str
+    enabled: bool = True
+    key_material: bytes = field(default_factory=lambda: b"", repr=False)
+
+    @property
+    def arn(self) -> str:
+        return f"arn:aws:kms:local:000000000000:key/{self.key_id}"
+
+    def to_dict(self, include_key: bool = False) -> Dict[str, Any]:
+        data = {
+            "KeyId": self.key_id,
+            "Arn": self.arn,
+            "Description": self.description,
+            "CreationDate": self.created_at,
+            "Enabled": self.enabled,
+            "KeyState": "Enabled" if self.enabled else "Disabled",
+            "KeyUsage": "ENCRYPT_DECRYPT",
+            "KeySpec": "SYMMETRIC_DEFAULT",
+        }
+        if include_key:
+            data["KeyMaterial"] = base64.b64encode(self.key_material).decode()
+        return data
+
+    @classmethod
+    def from_dict(cls, data: Dict[str, Any]) -> "KMSKey":
+        key_material = b""
+        if "KeyMaterial" in data:
+            key_material = base64.b64decode(data["KeyMaterial"])
+        return cls(
+            key_id=data["KeyId"],
+            description=data.get("Description", ""),
+            created_at=data.get("CreationDate", datetime.now(timezone.utc).isoformat()),
+            enabled=data.get("Enabled", True),
+            key_material=key_material,
+        )
+
+
+class KMSEncryptionProvider(EncryptionProvider):
+    """Encryption provider using a specific KMS key."""
+
+    def __init__(self, kms: "KMSManager", key_id: str):
+        self.kms = kms
+        self.key_id = key_id
+
+    @property
+    def KEY_ID(self) -> str:
+        return self.key_id
+
+    def generate_data_key(self) -> tuple[bytes, bytes]:
+        """Generate a data key encrypted with the KMS key."""
+        return self.kms.generate_data_key(self.key_id)
+
+    def encrypt(self, plaintext: bytes, context: Dict[str, str] | None = None) -> EncryptionResult:
+        """Encrypt data using envelope encryption with KMS."""
+        data_key, encrypted_data_key = self.generate_data_key()
+
+        aesgcm = AESGCM(data_key)
+        nonce = secrets.token_bytes(12)
+        ciphertext = aesgcm.encrypt(nonce, plaintext,
+                                    json.dumps(context).encode() if context else None)
+
+        return EncryptionResult(
+            ciphertext=ciphertext,
+            nonce=nonce,
+            key_id=self.key_id,
+            encrypted_data_key=encrypted_data_key,
+        )
+
+    def decrypt(self, ciphertext: bytes, nonce: bytes, encrypted_data_key: bytes,
+                key_id: str, context: Dict[str, str] | None = None) -> bytes:
+        """Decrypt data using envelope encryption with KMS."""
+        # Note: Data key is encrypted without context (AAD), so we decrypt without context
+        data_key = self.kms.decrypt_data_key(key_id, encrypted_data_key, context=None)
+
+        aesgcm = AESGCM(data_key)
+        try:
+            return aesgcm.decrypt(nonce, ciphertext,
+                                  json.dumps(context).encode() if context else None)
+        except Exception as exc:
+            raise EncryptionError(f"Failed to decrypt data: {exc}") from exc
+
+
+class KMSManager:
+    """Manages KMS keys and operations.
+
+    This is a local implementation that mimics AWS KMS functionality.
+    Keys are stored encrypted on disk.
+    """
+
+    def __init__(self, keys_path: Path, master_key_path: Path):
+        self.keys_path = keys_path
+        self.master_key_path = master_key_path
+        self._keys: Dict[str, KMSKey] = {}
+        self._master_key: bytes | None = None
+        self._loaded = False
+
+    @property
+    def master_key(self) -> bytes:
+        """Load or create the master key for encrypting KMS keys."""
+        if self._master_key is None:
+            if self.master_key_path.exists():
+                self._master_key = base64.b64decode(
+                    self.master_key_path.read_text().strip()
+                )
+            else:
+                self._master_key = secrets.token_bytes(32)
+                self.master_key_path.parent.mkdir(parents=True, exist_ok=True)
+                self.master_key_path.write_text(
+                    base64.b64encode(self._master_key).decode()
+                )
+        return self._master_key
+
+    def _load_keys(self) -> None:
+        """Load keys from disk."""
+        if self._loaded:
+            return
+
+        if self.keys_path.exists():
+            try:
+                data = json.loads(self.keys_path.read_text(encoding="utf-8"))
+                for key_data in data.get("keys", []):
+                    key = KMSKey.from_dict(key_data)
+                    if key_data.get("EncryptedKeyMaterial"):
+                        encrypted = base64.b64decode(key_data["EncryptedKeyMaterial"])
+                        key.key_material = self._decrypt_key_material(encrypted)
+                    self._keys[key.key_id] = key
+            except Exception:
+                pass
+
+        self._loaded = True
+
+    def _save_keys(self) -> None:
+        """Save keys to disk (with encrypted key material)."""
+        keys_data = []
+        for key in self._keys.values():
+            data = key.to_dict(include_key=False)
+            encrypted = self._encrypt_key_material(key.key_material)
+            data["EncryptedKeyMaterial"] = base64.b64encode(encrypted).decode()
+            keys_data.append(data)
+
+        self.keys_path.parent.mkdir(parents=True, exist_ok=True)
+        self.keys_path.write_text(
+            json.dumps({"keys": keys_data}, indent=2),
+            encoding="utf-8"
+        )
+
+    def _encrypt_key_material(self, key_material: bytes) -> bytes:
+        """Encrypt key material with the master key."""
+        aesgcm = AESGCM(self.master_key)
+        nonce = secrets.token_bytes(12)
+        ciphertext = aesgcm.encrypt(nonce, key_material, None)
+        return nonce + ciphertext
+
+    def _decrypt_key_material(self, encrypted: bytes) -> bytes:
+        """Decrypt key material with the master key."""
+        aesgcm = AESGCM(self.master_key)
+        nonce = encrypted[:12]
+        ciphertext = encrypted[12:]
+        return aesgcm.decrypt(nonce, ciphertext, None)
+
+    def create_key(self, description: str = "", key_id: str | None = None) -> KMSKey:
+        """Create a new KMS key."""
+        self._load_keys()
+
+        if key_id is None:
+            key_id = str(uuid.uuid4())
+
+        if key_id in self._keys:
+            raise EncryptionError(f"Key already exists: {key_id}")
+
+        key = KMSKey(
+            key_id=key_id,
+            description=description,
+            created_at=datetime.now(timezone.utc).isoformat(),
+            enabled=True,
+            key_material=secrets.token_bytes(32),
+        )
+
+        self._keys[key_id] = key
+        self._save_keys()
+        return key
+
+    def get_key(self, key_id: str) -> KMSKey | None:
+        """Get a key by ID."""
+        self._load_keys()
+        return self._keys.get(key_id)
+
+    def list_keys(self) -> List[KMSKey]:
+        """List all keys."""
+        self._load_keys()
+        return list(self._keys.values())
+
+    def get_default_key_id(self) -> str:
+        """Get the default KMS key ID, creating one if none exist."""
+        self._load_keys()
+        for key in self._keys.values():
+            if key.enabled:
+                return key.key_id
+        default_key = self.create_key(description="Default KMS Key")
+        return default_key.key_id
+
+    def get_provider(self, key_id: str | None = None) -> "KMSEncryptionProvider":
+        """Get a KMS encryption provider for the specified key."""
+        if key_id is None:
+            key_id = self.get_default_key_id()
+        key = self.get_key(key_id)
+        if not key:
+            raise EncryptionError(f"Key not found: {key_id}")
+        if not key.enabled:
+            raise EncryptionError(f"Key is disabled: {key_id}")
+        return KMSEncryptionProvider(self, key_id)
+
+    def enable_key(self, key_id: str) -> None:
+        """Enable a key."""
+        self._load_keys()
+        key = self._keys.get(key_id)
+        if not key:
+            raise EncryptionError(f"Key not found: {key_id}")
+        key.enabled = True
+        self._save_keys()
+
+    def disable_key(self, key_id: str) -> None:
+        """Disable a key."""
+        self._load_keys()
+        key = self._keys.get(key_id)
+        if not key:
+            raise EncryptionError(f"Key not found: {key_id}")
+        key.enabled = False
+        self._save_keys()
+
+    def delete_key(self, key_id: str) -> None:
+        """Delete a key (schedule for deletion in real KMS)."""
+        self._load_keys()
+        if key_id not in self._keys:
+            raise EncryptionError(f"Key not found: {key_id}")
+        del self._keys[key_id]
+        self._save_keys()
+
+    def encrypt(self, key_id: str, plaintext: bytes,
+                context: Dict[str, str] | None = None) -> bytes:
+        """Encrypt data directly with a KMS key."""
+        self._load_keys()
+        key = self._keys.get(key_id)
+        if not key:
+            raise EncryptionError(f"Key not found: {key_id}")
+        if not key.enabled:
+            raise EncryptionError(f"Key is disabled: {key_id}")
+
+        aesgcm = AESGCM(key.key_material)
+        nonce = secrets.token_bytes(12)
+        aad = json.dumps(context).encode() if context else None
+        ciphertext = aesgcm.encrypt(nonce, plaintext, aad)
+
+        key_id_bytes = key_id.encode("utf-8")
+        return len(key_id_bytes).to_bytes(2, "big") + key_id_bytes + nonce + ciphertext
+
+    def decrypt(self, ciphertext: bytes,
+                context: Dict[str, str] | None = None) -> tuple[bytes, str]:
+        """Decrypt data directly with a KMS key.
+
+        Returns:
+            Tuple of (plaintext, key_id)
+        """
+        self._load_keys()
+
+        key_id_len = int.from_bytes(ciphertext[:2], "big")
+        key_id = ciphertext[2:2 + key_id_len].decode("utf-8")
+        rest = ciphertext[2 + key_id_len:]
+
+        key = self._keys.get(key_id)
+        if not key:
+            raise EncryptionError(f"Key not found: {key_id}")
+        if not key.enabled:
+            raise EncryptionError(f"Key is disabled: {key_id}")
+
+        nonce = rest[:12]
+        encrypted = rest[12:]
+
+        aesgcm = AESGCM(key.key_material)
+        aad = json.dumps(context).encode() if context else None
+        try:
+            plaintext = aesgcm.decrypt(nonce, encrypted, aad)
+            return plaintext, key_id
+        except Exception as exc:
+            raise EncryptionError(f"Decryption failed: {exc}") from exc
+
+    def generate_data_key(self, key_id: str,
+                          context: Dict[str, str] | None = None) -> tuple[bytes, bytes]:
+        """Generate a data key and return both plaintext and encrypted versions.
+
+        Returns:
+            Tuple of (plaintext_key, encrypted_key)
+        """
+        self._load_keys()
+        key = self._keys.get(key_id)
+        if not key:
+            raise EncryptionError(f"Key not found: {key_id}")
+        if not key.enabled:
+            raise EncryptionError(f"Key is disabled: {key_id}")
+
+        plaintext_key = secrets.token_bytes(32)
+
+        encrypted_key = self.encrypt(key_id, plaintext_key, context)
+
+        return plaintext_key, encrypted_key
+
+    def decrypt_data_key(self, key_id: str, encrypted_key: bytes,
+                         context: Dict[str, str] | None = None) -> bytes:
+        """Decrypt a data key."""
+        plaintext, _ = self.decrypt(encrypted_key, context)
+        return plaintext
+
+    def get_provider(self, key_id: str | None = None) -> KMSEncryptionProvider:
+        """Get an encryption provider for a specific key."""
+        self._load_keys()
+
+        if key_id is None:
+            if not self._keys:
+                key = self.create_key("Default KMS Key")
+                key_id = key.key_id
+            else:
+                key_id = next(iter(self._keys.keys()))
+
+        if key_id not in self._keys:
+            raise EncryptionError(f"Key not found: {key_id}")
+
+        return KMSEncryptionProvider(self, key_id)
+
+    def re_encrypt(self, ciphertext: bytes, destination_key_id: str,
+                   source_context: Dict[str, str] | None = None,
+                   destination_context: Dict[str, str] | None = None) -> bytes:
+        """Re-encrypt data with a different key."""
+
+        plaintext, source_key_id = self.decrypt(ciphertext, source_context)
+
+        return self.encrypt(destination_key_id, plaintext, destination_context)
+
+    def generate_random(self, num_bytes: int = 32) -> bytes:
+        """Generate cryptographically secure random bytes."""
+        if num_bytes < 1 or num_bytes > 1024:
+            raise EncryptionError("Number of bytes must be between 1 and 1024")
+        return secrets.token_bytes(num_bytes)
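`KMSManager.encrypt` above frames its output as a 2-byte big-endian key-id length, the key-id bytes, a 12-byte AES-GCM nonce, then the ciphertext, and `decrypt` recovers the key id from that header. A small sketch of just the framing (crypto stubbed out so it runs without the `cryptography` package; function names are illustrative, not part of the file above):

```python
# Framing used by KMSManager.encrypt/decrypt: 2-byte big-endian key-id
# length, key-id bytes, 12-byte nonce, then the AES-GCM ciphertext.

def frame(key_id: str, nonce: bytes, ciphertext: bytes) -> bytes:
    key_id_bytes = key_id.encode("utf-8")
    return len(key_id_bytes).to_bytes(2, "big") + key_id_bytes + nonce + ciphertext

def unframe(blob: bytes) -> tuple[str, bytes, bytes]:
    key_id_len = int.from_bytes(blob[:2], "big")
    key_id = blob[2:2 + key_id_len].decode("utf-8")
    rest = blob[2 + key_id_len:]
    return key_id, rest[:12], rest[12:]

blob = frame("my-key", b"\x00" * 12, b"ciphertext-bytes")
key_id, nonce, ct = unframe(blob)
print(key_id, len(nonce), ct)  # my-key 12 b'ciphertext-bytes'
```

Embedding the key id in the blob is what lets `decrypt` take only the ciphertext, and lets `decrypt_data_key` route an encrypted data key back to the key that wrapped it.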
444	app/kms_api.py	Normal file
@@ -0,0 +1,444 @@
+from __future__ import annotations
+
+import base64
+import uuid
+from typing import Any, Dict
+
+from flask import Blueprint, Response, current_app, jsonify, request
+
+from .encryption import ClientEncryptionHelper, EncryptionError
+from .extensions import limiter
+from .iam import IamError
+
+kms_api_bp = Blueprint("kms_api", __name__, url_prefix="/kms")
+
+
+def _require_principal():
+    """Require authentication for KMS operations."""
+    from .s3_api import _require_principal as s3_require_principal
+    return s3_require_principal()
+
+
+def _kms():
+    """Get KMS manager from app extensions."""
+    return current_app.extensions.get("kms")
+
+
+def _encryption():
+    """Get encryption manager from app extensions."""
+    return current_app.extensions.get("encryption")
+
+
+def _error_response(code: str, message: str, status: int) -> tuple[Dict[str, Any], int]:
+    return {"__type": code, "message": message}, status
+
+
+@kms_api_bp.route("/keys", methods=["GET", "POST"])
+@limiter.limit("30 per minute")
+def list_or_create_keys():
+    """List all KMS keys or create a new key."""
+    principal, error = _require_principal()
+    if error:
+        return error
+
+    kms = _kms()
+    if not kms:
+        return _error_response("KMSNotEnabled", "KMS is not configured", 400)
+
+    if request.method == "POST":
+        payload = request.get_json(silent=True) or {}
+        key_id = payload.get("KeyId") or payload.get("key_id")
+        description = payload.get("Description") or payload.get("description", "")
+
+        try:
+            key = kms.create_key(description=description, key_id=key_id)
+            current_app.logger.info(
+                "KMS key created",
+                extra={"key_id": key.key_id, "principal": principal.access_key},
+            )
+            return jsonify({
+                "KeyMetadata": key.to_dict(),
+            })
+        except EncryptionError as exc:
+            return _error_response("KMSInternalException", str(exc), 400)
+
+    keys = kms.list_keys()
+    return jsonify({
+        "Keys": [{"KeyId": k.key_id, "KeyArn": k.arn} for k in keys],
+        "Truncated": False,
+    })
+
+
+@kms_api_bp.route("/keys/<key_id>", methods=["GET", "DELETE"])
+@limiter.limit("30 per minute")
+def get_or_delete_key(key_id: str):
+    """Get or delete a specific KMS key."""
+    principal, error = _require_principal()
+    if error:
+        return error
+
+    kms = _kms()
+    if not kms:
+        return _error_response("KMSNotEnabled", "KMS is not configured", 400)
+
+    if request.method == "DELETE":
+        try:
+            kms.delete_key(key_id)
+            current_app.logger.info(
+                "KMS key deleted",
+                extra={"key_id": key_id, "principal": principal.access_key},
+            )
+            return Response(status=204)
+        except EncryptionError as exc:
+            return _error_response("NotFoundException", str(exc), 404)
+
+    key = kms.get_key(key_id)
+    if not key:
+        return _error_response("NotFoundException", f"Key not found: {key_id}", 404)
+
+    return jsonify({"KeyMetadata": key.to_dict()})
+
+
+@kms_api_bp.route("/keys/<key_id>/enable", methods=["POST"])
+@limiter.limit("30 per minute")
+def enable_key(key_id: str):
+    """Enable a KMS key."""
+    principal, error = _require_principal()
+    if error:
+        return error
+
+    kms = _kms()
+    if not kms:
+        return _error_response("KMSNotEnabled", "KMS is not configured", 400)
+
+    try:
+        kms.enable_key(key_id)
+        current_app.logger.info(
+            "KMS key enabled",
+            extra={"key_id": key_id, "principal": principal.access_key},
+        )
+        return Response(status=200)
+    except EncryptionError as exc:
+        return _error_response("NotFoundException", str(exc), 404)
+
+
+@kms_api_bp.route("/keys/<key_id>/disable", methods=["POST"])
+@limiter.limit("30 per minute")
+def disable_key(key_id: str):
+    """Disable a KMS key."""
+    principal, error = _require_principal()
+    if error:
+        return error
+
+    kms = _kms()
+    if not kms:
+        return _error_response("KMSNotEnabled", "KMS is not configured", 400)
+
+    try:
+        kms.disable_key(key_id)
+        current_app.logger.info(
+            "KMS key disabled",
+            extra={"key_id": key_id, "principal": principal.access_key},
+        )
+        return Response(status=200)
+    except EncryptionError as exc:
+        return _error_response("NotFoundException", str(exc), 404)
+
+
+@kms_api_bp.route("/encrypt", methods=["POST"])
+@limiter.limit("60 per minute")
+def encrypt_data():
+    """Encrypt data using a KMS key."""
+    principal, error = _require_principal()
+    if error:
+        return error
+
+    kms = _kms()
+    if not kms:
+        return _error_response("KMSNotEnabled", "KMS is not configured", 400)
+
+    payload = request.get_json(silent=True) or {}
+    key_id = payload.get("KeyId")
+    plaintext_b64 = payload.get("Plaintext")
+    context = payload.get("EncryptionContext")
+
+    if not key_id:
+        return _error_response("ValidationException", "KeyId is required", 400)
+    if not plaintext_b64:
+        return _error_response("ValidationException", "Plaintext is required", 400)
+
+    try:
+        plaintext = base64.b64decode(plaintext_b64)
+    except Exception:
+        return _error_response("ValidationException", "Plaintext must be base64 encoded", 400)
+
+    try:
+        ciphertext = kms.encrypt(key_id, plaintext, context)
+        return jsonify({
+            "CiphertextBlob": base64.b64encode(ciphertext).decode(),
+            "KeyId": key_id,
+            "EncryptionAlgorithm": "SYMMETRIC_DEFAULT",
+        })
+    except EncryptionError as exc:
+        return _error_response("KMSInternalException", str(exc), 400)
+
+
+@kms_api_bp.route("/decrypt", methods=["POST"])
+@limiter.limit("60 per minute")
+def decrypt_data():
+    """Decrypt data using a KMS key."""
+    principal, error = _require_principal()
+    if error:
+        return error
+
+    kms = _kms()
+    if not kms:
+        return _error_response("KMSNotEnabled", "KMS is not configured", 400)
+
+    payload = request.get_json(silent=True) or {}
+    ciphertext_b64 = payload.get("CiphertextBlob")
+    context = payload.get("EncryptionContext")
+
+    if not ciphertext_b64:
+        return _error_response("ValidationException", "CiphertextBlob is required", 400)
+
+    try:
+        ciphertext = base64.b64decode(ciphertext_b64)
+    except Exception:
+        return _error_response("ValidationException", "CiphertextBlob must be base64 encoded", 400)
+
+    try:
+        plaintext, key_id = kms.decrypt(ciphertext, context)
+        return jsonify({
+            "Plaintext": base64.b64encode(plaintext).decode(),
+            "KeyId": key_id,
+            "EncryptionAlgorithm": "SYMMETRIC_DEFAULT",
+        })
+    except EncryptionError as exc:
+        return _error_response("InvalidCiphertextException", str(exc), 400)
+
+
+@kms_api_bp.route("/generate-data-key", methods=["POST"])
+@limiter.limit("60 per minute")
+def generate_data_key():
+    """Generate a data encryption key."""
+    principal, error = _require_principal()
+    if error:
+        return error
+
+    kms = _kms()
+    if not kms:
+        return _error_response("KMSNotEnabled", "KMS is not configured", 400)
+
+    payload = request.get_json(silent=True) or {}
+    key_id = payload.get("KeyId")
+    context = payload.get("EncryptionContext")
+    key_spec = payload.get("KeySpec", "AES_256")
+
+    if not key_id:
+        return _error_response("ValidationException", "KeyId is required", 400)
+
+    if key_spec not in {"AES_256", "AES_128"}:
+        return _error_response("ValidationException", "KeySpec must be AES_256 or AES_128", 400)
+
+    try:
+        plaintext_key, encrypted_key = kms.generate_data_key(key_id, context)
+
+        if key_spec == "AES_128":
+            plaintext_key = plaintext_key[:16]
+
+        return jsonify({
+            "Plaintext": base64.b64encode(plaintext_key).decode(),
+            "CiphertextBlob": base64.b64encode(encrypted_key).decode(),
+            "KeyId": key_id,
+        })
+    except EncryptionError as exc:
+        return _error_response("KMSInternalException", str(exc), 400)
+
+
+@kms_api_bp.route("/generate-data-key-without-plaintext", methods=["POST"])
+@limiter.limit("60 per minute")
+def generate_data_key_without_plaintext():
+    """Generate a data encryption key without returning the plaintext."""
+    principal, error = _require_principal()
+    if error:
+        return error
+
+    kms = _kms()
+    if not kms:
+        return _error_response("KMSNotEnabled", "KMS is not configured", 400)
+
+    payload = request.get_json(silent=True) or {}
+    key_id = payload.get("KeyId")
+    context = payload.get("EncryptionContext")
+
+    if not key_id:
+        return _error_response("ValidationException", "KeyId is required", 400)
+
+    try:
+        _, encrypted_key = kms.generate_data_key(key_id, context)
+        return jsonify({
+            "CiphertextBlob": base64.b64encode(encrypted_key).decode(),
+            "KeyId": key_id,
+        })
+    except EncryptionError as exc:
|
||||||
|
return _error_response("KMSInternalException", str(exc), 400)
|
||||||
|
|
||||||
|
|
||||||
|
@kms_api_bp.route("/re-encrypt", methods=["POST"])
|
||||||
|
@limiter.limit("30 per minute")
|
||||||
|
def re_encrypt():
|
||||||
|
"""Re-encrypt data with a different key."""
|
||||||
|
principal, error = _require_principal()
|
||||||
|
if error:
|
||||||
|
return error
|
||||||
|
|
||||||
|
kms = _kms()
|
||||||
|
if not kms:
|
||||||
|
return _error_response("KMSNotEnabled", "KMS is not configured", 400)
|
||||||
|
|
||||||
|
payload = request.get_json(silent=True) or {}
|
||||||
|
ciphertext_b64 = payload.get("CiphertextBlob")
|
||||||
|
destination_key_id = payload.get("DestinationKeyId")
|
||||||
|
source_context = payload.get("SourceEncryptionContext")
|
||||||
|
destination_context = payload.get("DestinationEncryptionContext")
|
||||||
|
|
||||||
|
if not ciphertext_b64:
|
||||||
|
return _error_response("ValidationException", "CiphertextBlob is required", 400)
|
||||||
|
if not destination_key_id:
|
||||||
|
return _error_response("ValidationException", "DestinationKeyId is required", 400)
|
||||||
|
|
||||||
|
try:
|
||||||
|
ciphertext = base64.b64decode(ciphertext_b64)
|
||||||
|
except Exception:
|
||||||
|
return _error_response("ValidationException", "CiphertextBlob must be base64 encoded", 400)
|
||||||
|
|
||||||
|
try:
|
||||||
|
plaintext, source_key_id = kms.decrypt(ciphertext, source_context)
|
||||||
|
new_ciphertext = kms.encrypt(destination_key_id, plaintext, destination_context)
|
||||||
|
|
||||||
|
return jsonify({
|
||||||
|
"CiphertextBlob": base64.b64encode(new_ciphertext).decode(),
|
||||||
|
"SourceKeyId": source_key_id,
|
||||||
|
"KeyId": destination_key_id,
|
||||||
|
})
|
||||||
|
except EncryptionError as exc:
|
||||||
|
return _error_response("KMSInternalException", str(exc), 400)
|
||||||
|
|
||||||
|
|
||||||
|
@kms_api_bp.route("/generate-random", methods=["POST"])
|
||||||
|
@limiter.limit("60 per minute")
|
||||||
|
def generate_random():
|
||||||
|
"""Generate random bytes."""
|
||||||
|
principal, error = _require_principal()
|
||||||
|
if error:
|
||||||
|
return error
|
||||||
|
|
||||||
|
kms = _kms()
|
||||||
|
if not kms:
|
||||||
|
return _error_response("KMSNotEnabled", "KMS is not configured", 400)
|
||||||
|
|
||||||
|
payload = request.get_json(silent=True) or {}
|
||||||
|
num_bytes = payload.get("NumberOfBytes", 32)
|
||||||
|
|
||||||
|
try:
|
||||||
|
num_bytes = int(num_bytes)
|
||||||
|
except (TypeError, ValueError):
|
||||||
|
return _error_response("ValidationException", "NumberOfBytes must be an integer", 400)
|
||||||
|
|
||||||
|
try:
|
||||||
|
random_bytes = kms.generate_random(num_bytes)
|
||||||
|
return jsonify({
|
||||||
|
"Plaintext": base64.b64encode(random_bytes).decode(),
|
||||||
|
})
|
||||||
|
except EncryptionError as exc:
|
||||||
|
return _error_response("ValidationException", str(exc), 400)
|
||||||
|
|
||||||
|
@kms_api_bp.route("/client/generate-key", methods=["POST"])
|
||||||
|
@limiter.limit("30 per minute")
|
||||||
|
def generate_client_key():
|
||||||
|
"""Generate a client-side encryption key."""
|
||||||
|
principal, error = _require_principal()
|
||||||
|
if error:
|
||||||
|
return error
|
||||||
|
|
||||||
|
key_info = ClientEncryptionHelper.generate_client_key()
|
||||||
|
return jsonify(key_info)
|
||||||
|
|
||||||
|
|
||||||
|
@kms_api_bp.route("/client/encrypt", methods=["POST"])
|
||||||
|
@limiter.limit("60 per minute")
|
||||||
|
def client_encrypt():
|
||||||
|
"""Encrypt data using client-side encryption."""
|
||||||
|
principal, error = _require_principal()
|
||||||
|
if error:
|
||||||
|
return error
|
||||||
|
|
||||||
|
payload = request.get_json(silent=True) or {}
|
||||||
|
plaintext_b64 = payload.get("Plaintext")
|
||||||
|
key_b64 = payload.get("Key")
|
||||||
|
|
||||||
|
if not plaintext_b64 or not key_b64:
|
||||||
|
return _error_response("ValidationException", "Plaintext and Key are required", 400)
|
||||||
|
|
||||||
|
try:
|
||||||
|
plaintext = base64.b64decode(plaintext_b64)
|
||||||
|
result = ClientEncryptionHelper.encrypt_with_key(plaintext, key_b64)
|
||||||
|
return jsonify(result)
|
||||||
|
except Exception as exc:
|
||||||
|
return _error_response("EncryptionError", str(exc), 400)
|
||||||
|
|
||||||
|
|
||||||
|
@kms_api_bp.route("/client/decrypt", methods=["POST"])
|
||||||
|
@limiter.limit("60 per minute")
|
||||||
|
def client_decrypt():
|
||||||
|
"""Decrypt data using client-side encryption."""
|
||||||
|
principal, error = _require_principal()
|
||||||
|
if error:
|
||||||
|
return error
|
||||||
|
|
||||||
|
payload = request.get_json(silent=True) or {}
|
||||||
|
ciphertext_b64 = payload.get("Ciphertext") or payload.get("ciphertext")
|
||||||
|
nonce_b64 = payload.get("Nonce") or payload.get("nonce")
|
||||||
|
key_b64 = payload.get("Key") or payload.get("key")
|
||||||
|
|
||||||
|
if not ciphertext_b64 or not nonce_b64 or not key_b64:
|
||||||
|
return _error_response("ValidationException", "Ciphertext, Nonce, and Key are required", 400)
|
||||||
|
|
||||||
|
try:
|
||||||
|
plaintext = ClientEncryptionHelper.decrypt_with_key(ciphertext_b64, nonce_b64, key_b64)
|
||||||
|
return jsonify({
|
||||||
|
"Plaintext": base64.b64encode(plaintext).decode(),
|
||||||
|
})
|
||||||
|
except Exception as exc:
|
||||||
|
return _error_response("DecryptionError", str(exc), 400)
|
||||||
|
|
||||||
|
@kms_api_bp.route("/materials/<key_id>", methods=["POST"])
|
||||||
|
@limiter.limit("60 per minute")
|
||||||
|
def get_encryption_materials(key_id: str):
|
||||||
|
"""Get encryption materials for client-side S3 encryption.
|
||||||
|
|
||||||
|
This is used by S3 encryption clients that want to use KMS for
|
||||||
|
key management but perform encryption client-side.
|
||||||
|
"""
|
||||||
|
principal, error = _require_principal()
|
||||||
|
if error:
|
||||||
|
return error
|
||||||
|
|
||||||
|
kms = _kms()
|
||||||
|
if not kms:
|
||||||
|
return _error_response("KMSNotEnabled", "KMS is not configured", 400)
|
||||||
|
|
||||||
|
payload = request.get_json(silent=True) or {}
|
||||||
|
context = payload.get("EncryptionContext")
|
||||||
|
|
||||||
|
try:
|
||||||
|
plaintext_key, encrypted_key = kms.generate_data_key(key_id, context)
|
||||||
|
|
||||||
|
return jsonify({
|
||||||
|
"PlaintextKey": base64.b64encode(plaintext_key).decode(),
|
||||||
|
"EncryptedKey": base64.b64encode(encrypted_key).decode(),
|
||||||
|
"KeyId": key_id,
|
||||||
|
"Algorithm": "AES-256-GCM",
|
||||||
|
"KeyWrapAlgorithm": "kms",
|
||||||
|
})
|
||||||
|
except EncryptionError as exc:
|
||||||
|
return _error_response("KMSInternalException", str(exc), 400)
|
||||||
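The handlers above all exchange binary data as base64 strings inside JSON bodies. As a minimal sketch of the client side of that contract (the helper name `build_decrypt_request` and the example encryption context are illustrative, not part of the codebase), the request the `/decrypt` route parses can be built like this:

```python
import base64
import json
from typing import Optional


def build_decrypt_request(ciphertext: bytes, context: Optional[dict] = None) -> str:
    """Build the JSON body that the /decrypt handler expects:
    CiphertextBlob must be base64, EncryptionContext is optional."""
    body = {"CiphertextBlob": base64.b64encode(ciphertext).decode()}
    if context:
        body["EncryptionContext"] = context
    return json.dumps(body)


# The handler base64-decodes CiphertextBlob before calling kms.decrypt(),
# so the round trip must recover the original bytes exactly.
body = build_decrypt_request(b"\x01\x02wrapped-key-bytes", {"purpose": "backup"})
decoded = json.loads(body)
assert base64.b64decode(decoded["CiphertextBlob"]) == b"\x01\x02wrapped-key-bytes"
```

Sending a raw (non-base64) blob would trip the `ValidationException` branch in the handler, since `base64.b64decode` is wrapped in a try/except there.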
app/lifecycle.py  (new file, +335 lines)
@@ -0,0 +1,335 @@
from __future__ import annotations

import json
import logging
import threading
import time
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
from pathlib import Path
from typing import Any, Dict, List, Optional

from .storage import ObjectStorage, StorageError

logger = logging.getLogger(__name__)


@dataclass
class LifecycleResult:
    bucket_name: str
    objects_deleted: int = 0
    versions_deleted: int = 0
    uploads_aborted: int = 0
    errors: List[str] = field(default_factory=list)
    execution_time_seconds: float = 0.0


@dataclass
class LifecycleExecutionRecord:
    timestamp: float
    bucket_name: str
    objects_deleted: int
    versions_deleted: int
    uploads_aborted: int
    errors: List[str]
    execution_time_seconds: float

    def to_dict(self) -> dict:
        return {
            "timestamp": self.timestamp,
            "bucket_name": self.bucket_name,
            "objects_deleted": self.objects_deleted,
            "versions_deleted": self.versions_deleted,
            "uploads_aborted": self.uploads_aborted,
            "errors": self.errors,
            "execution_time_seconds": self.execution_time_seconds,
        }

    @classmethod
    def from_dict(cls, data: dict) -> "LifecycleExecutionRecord":
        return cls(
            timestamp=data["timestamp"],
            bucket_name=data["bucket_name"],
            objects_deleted=data["objects_deleted"],
            versions_deleted=data["versions_deleted"],
            uploads_aborted=data["uploads_aborted"],
            errors=data.get("errors", []),
            execution_time_seconds=data["execution_time_seconds"],
        )

    @classmethod
    def from_result(cls, result: LifecycleResult) -> "LifecycleExecutionRecord":
        return cls(
            timestamp=time.time(),
            bucket_name=result.bucket_name,
            objects_deleted=result.objects_deleted,
            versions_deleted=result.versions_deleted,
            uploads_aborted=result.uploads_aborted,
            errors=result.errors.copy(),
            execution_time_seconds=result.execution_time_seconds,
        )


class LifecycleHistoryStore:
    MAX_HISTORY_PER_BUCKET = 50

    def __init__(self, storage_root: Path) -> None:
        self.storage_root = storage_root
        self._lock = threading.Lock()

    def _get_history_path(self, bucket_name: str) -> Path:
        return self.storage_root / ".myfsio.sys" / "buckets" / bucket_name / "lifecycle_history.json"

    def load_history(self, bucket_name: str) -> List[LifecycleExecutionRecord]:
        path = self._get_history_path(bucket_name)
        if not path.exists():
            return []
        try:
            with open(path, "r") as f:
                data = json.load(f)
            return [LifecycleExecutionRecord.from_dict(d) for d in data.get("executions", [])]
        except (OSError, ValueError, KeyError) as e:
            logger.error(f"Failed to load lifecycle history for {bucket_name}: {e}")
            return []

    def save_history(self, bucket_name: str, records: List[LifecycleExecutionRecord]) -> None:
        path = self._get_history_path(bucket_name)
        path.parent.mkdir(parents=True, exist_ok=True)
        data = {"executions": [r.to_dict() for r in records[:self.MAX_HISTORY_PER_BUCKET]]}
        try:
            with open(path, "w") as f:
                json.dump(data, f, indent=2)
        except OSError as e:
            logger.error(f"Failed to save lifecycle history for {bucket_name}: {e}")

    def add_record(self, bucket_name: str, record: LifecycleExecutionRecord) -> None:
        with self._lock:
            records = self.load_history(bucket_name)
            records.insert(0, record)
            self.save_history(bucket_name, records)

    def get_history(self, bucket_name: str, limit: int = 50, offset: int = 0) -> List[LifecycleExecutionRecord]:
        records = self.load_history(bucket_name)
        return records[offset:offset + limit]


class LifecycleManager:
    def __init__(self, storage: ObjectStorage, interval_seconds: int = 3600, storage_root: Optional[Path] = None):
        self.storage = storage
        self.interval_seconds = interval_seconds
        self.storage_root = storage_root
        self._timer: Optional[threading.Timer] = None
        self._shutdown = False
        self._lock = threading.Lock()
        self.history_store = LifecycleHistoryStore(storage_root) if storage_root else None

    def start(self) -> None:
        if self._timer is not None:
            return
        self._shutdown = False
        self._schedule_next()
        logger.info(f"Lifecycle manager started with interval {self.interval_seconds}s")

    def stop(self) -> None:
        self._shutdown = True
        if self._timer:
            self._timer.cancel()
            self._timer = None
        logger.info("Lifecycle manager stopped")

    def _schedule_next(self) -> None:
        if self._shutdown:
            return
        self._timer = threading.Timer(self.interval_seconds, self._run_enforcement)
        self._timer.daemon = True
        self._timer.start()

    def _run_enforcement(self) -> None:
        if self._shutdown:
            return
        try:
            self.enforce_all_buckets()
        except Exception as e:
            logger.error(f"Lifecycle enforcement failed: {e}")
        finally:
            self._schedule_next()

    def enforce_all_buckets(self) -> Dict[str, LifecycleResult]:
        results = {}
        try:
            buckets = self.storage.list_buckets()
            for bucket in buckets:
                result = self.enforce_rules(bucket.name)
                if result.objects_deleted > 0 or result.versions_deleted > 0 or result.uploads_aborted > 0:
                    results[bucket.name] = result
        except StorageError as e:
            logger.error(f"Failed to list buckets for lifecycle: {e}")
        return results

    def enforce_rules(self, bucket_name: str) -> LifecycleResult:
        start_time = time.time()
        result = LifecycleResult(bucket_name=bucket_name)

        try:
            lifecycle = self.storage.get_bucket_lifecycle(bucket_name)
            if not lifecycle:
                return result

            for rule in lifecycle:
                if rule.get("Status") != "Enabled":
                    continue
                rule_id = rule.get("ID", "unknown")
                prefix = rule.get("Prefix", rule.get("Filter", {}).get("Prefix", ""))

                self._enforce_expiration(bucket_name, rule, prefix, result)
                self._enforce_noncurrent_expiration(bucket_name, rule, prefix, result)
                self._enforce_abort_multipart(bucket_name, rule, result)

        except StorageError as e:
            result.errors.append(str(e))
            logger.error(f"Lifecycle enforcement error for {bucket_name}: {e}")

        result.execution_time_seconds = time.time() - start_time
        if result.objects_deleted > 0 or result.versions_deleted > 0 or result.uploads_aborted > 0 or result.errors:
            logger.info(
                f"Lifecycle enforcement for {bucket_name}: "
                f"deleted={result.objects_deleted}, versions={result.versions_deleted}, "
                f"aborted={result.uploads_aborted}, time={result.execution_time_seconds:.2f}s"
            )
            if self.history_store:
                record = LifecycleExecutionRecord.from_result(result)
                self.history_store.add_record(bucket_name, record)
        return result

    def _enforce_expiration(
        self, bucket_name: str, rule: Dict[str, Any], prefix: str, result: LifecycleResult
    ) -> None:
        expiration = rule.get("Expiration", {})
        if not expiration:
            return

        days = expiration.get("Days")
        date_str = expiration.get("Date")

        if days:
            cutoff = datetime.now(timezone.utc) - timedelta(days=days)
        elif date_str:
            try:
                cutoff = datetime.fromisoformat(date_str.replace("Z", "+00:00"))
            except ValueError:
                return
        else:
            return

        try:
            objects = self.storage.list_objects_all(bucket_name)
            for obj in objects:
                if prefix and not obj.key.startswith(prefix):
                    continue
                if obj.last_modified < cutoff:
                    try:
                        self.storage.delete_object(bucket_name, obj.key)
                        result.objects_deleted += 1
                    except StorageError as e:
                        result.errors.append(f"Failed to delete {obj.key}: {e}")
        except StorageError as e:
            result.errors.append(f"Failed to list objects: {e}")

    def _enforce_noncurrent_expiration(
        self, bucket_name: str, rule: Dict[str, Any], prefix: str, result: LifecycleResult
    ) -> None:
        noncurrent = rule.get("NoncurrentVersionExpiration", {})
        noncurrent_days = noncurrent.get("NoncurrentDays")
        if not noncurrent_days:
            return

        cutoff = datetime.now(timezone.utc) - timedelta(days=noncurrent_days)

        try:
            objects = self.storage.list_objects_all(bucket_name)
            for obj in objects:
                if prefix and not obj.key.startswith(prefix):
                    continue
                try:
                    versions = self.storage.list_object_versions(bucket_name, obj.key)
                    for version in versions:
                        archived_at_str = version.get("archived_at", "")
                        if not archived_at_str:
                            continue
                        try:
                            archived_at = datetime.fromisoformat(archived_at_str.replace("Z", "+00:00"))
                            if archived_at < cutoff:
                                version_id = version.get("version_id")
                                if version_id:
                                    self.storage.delete_object_version(bucket_name, obj.key, version_id)
                                    result.versions_deleted += 1
                        except (ValueError, StorageError) as e:
                            result.errors.append(f"Failed to process version: {e}")
                except StorageError:
                    pass
        except StorageError as e:
            result.errors.append(f"Failed to list objects: {e}")

        try:
            orphaned = self.storage.list_orphaned_objects(bucket_name)
            for item in orphaned:
                obj_key = item.get("key", "")
                if prefix and not obj_key.startswith(prefix):
                    continue
                try:
                    versions = self.storage.list_object_versions(bucket_name, obj_key)
                    for version in versions:
                        archived_at_str = version.get("archived_at", "")
                        if not archived_at_str:
                            continue
                        try:
                            archived_at = datetime.fromisoformat(archived_at_str.replace("Z", "+00:00"))
                            if archived_at < cutoff:
                                version_id = version.get("version_id")
                                if version_id:
                                    self.storage.delete_object_version(bucket_name, obj_key, version_id)
                                    result.versions_deleted += 1
                        except (ValueError, StorageError) as e:
                            result.errors.append(f"Failed to process orphaned version: {e}")
                except StorageError:
                    pass
        except StorageError as e:
            result.errors.append(f"Failed to list orphaned objects: {e}")

    def _enforce_abort_multipart(
        self, bucket_name: str, rule: Dict[str, Any], result: LifecycleResult
    ) -> None:
        abort_config = rule.get("AbortIncompleteMultipartUpload", {})
        days_after = abort_config.get("DaysAfterInitiation")
        if not days_after:
            return

        cutoff = datetime.now(timezone.utc) - timedelta(days=days_after)

        try:
            uploads = self.storage.list_multipart_uploads(bucket_name)
            for upload in uploads:
                created_at_str = upload.get("created_at", "")
                if not created_at_str:
                    continue
                try:
                    created_at = datetime.fromisoformat(created_at_str.replace("Z", "+00:00"))
                    if created_at < cutoff:
                        upload_id = upload.get("upload_id")
                        if upload_id:
                            self.storage.abort_multipart_upload(bucket_name, upload_id)
                            result.uploads_aborted += 1
                except (ValueError, StorageError) as e:
                    result.errors.append(f"Failed to abort upload: {e}")
        except StorageError as e:
            result.errors.append(f"Failed to list multipart uploads: {e}")

    def run_now(self, bucket_name: Optional[str] = None) -> Dict[str, LifecycleResult]:
        if bucket_name:
            return {bucket_name: self.enforce_rules(bucket_name)}
        return self.enforce_all_buckets()

    def get_execution_history(self, bucket_name: str, limit: int = 50, offset: int = 0) -> List[LifecycleExecutionRecord]:
        if not self.history_store:
            return []
        return self.history_store.get_history(bucket_name, limit, offset)
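Every expiration branch in `LifecycleManager` reduces to the same pattern: compute a cutoff datetime, then delete items older than it. As a standalone sketch of the cutoff selection in `_enforce_expiration` (the helper `expiration_cutoff` and its `now` parameter are illustrative additions, not part of the file):

```python
from datetime import datetime, timedelta, timezone
from typing import Optional


def expiration_cutoff(expiration: dict, *, now: Optional[datetime] = None) -> Optional[datetime]:
    """Mirror the cutoff logic of _enforce_expiration: Days takes priority
    over Date, and an unparseable Date disables the rule (returns None)."""
    now = now or datetime.now(timezone.utc)
    days = expiration.get("Days")
    date_str = expiration.get("Date")
    if days:
        return now - timedelta(days=days)
    if date_str:
        try:
            # Same normalization the file uses for ISO timestamps with a Z suffix.
            return datetime.fromisoformat(date_str.replace("Z", "+00:00"))
        except ValueError:
            return None
    return None


now = datetime(2024, 6, 10, tzinfo=timezone.utc)
assert expiration_cutoff({"Days": 30}, now=now) == datetime(2024, 5, 11, tzinfo=timezone.utc)
assert expiration_cutoff({"Date": "2024-01-01T00:00:00Z"}, now=now) == datetime(2024, 1, 1, tzinfo=timezone.utc)
assert expiration_cutoff({"Date": "not-a-date"}, now=now) is None
```

The `Z` → `+00:00` rewrite matters: `datetime.fromisoformat` only accepts the `Z` suffix directly on Python 3.11+, so the replacement keeps the parsing portable and timezone-aware.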
app/notifications.py  (new file, +334 lines)
@@ -0,0 +1,334 @@
from __future__ import annotations

import json
import logging
import queue
import threading
import time
import uuid
from dataclasses import dataclass, field
from datetime import datetime, timezone
from pathlib import Path
from typing import Any, Dict, List, Optional
from urllib.parse import urlparse

import requests

logger = logging.getLogger(__name__)


@dataclass
class NotificationEvent:
    event_name: str
    bucket_name: str
    object_key: str
    object_size: int = 0
    etag: str = ""
    version_id: Optional[str] = None
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    request_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    source_ip: str = ""
    user_identity: str = ""

    def to_s3_event(self) -> Dict[str, Any]:
        return {
            "Records": [
                {
                    "eventVersion": "2.1",
                    "eventSource": "myfsio:s3",
                    "awsRegion": "local",
                    "eventTime": self.timestamp.strftime("%Y-%m-%dT%H:%M:%S.000Z"),
                    "eventName": self.event_name,
                    "userIdentity": {
                        "principalId": self.user_identity or "ANONYMOUS",
                    },
                    "requestParameters": {
                        "sourceIPAddress": self.source_ip or "127.0.0.1",
                    },
                    "responseElements": {
                        "x-amz-request-id": self.request_id,
                        "x-amz-id-2": self.request_id,
                    },
                    "s3": {
                        "s3SchemaVersion": "1.0",
                        "configurationId": "notification",
                        "bucket": {
                            "name": self.bucket_name,
                            "ownerIdentity": {"principalId": "local"},
                            "arn": f"arn:aws:s3:::{self.bucket_name}",
                        },
                        "object": {
                            "key": self.object_key,
                            "size": self.object_size,
                            "eTag": self.etag,
                            "versionId": self.version_id or "null",
                            "sequencer": f"{int(time.time() * 1000):016X}",
                        },
                    },
                }
            ]
        }


@dataclass
class WebhookDestination:
    url: str
    headers: Dict[str, str] = field(default_factory=dict)
    timeout_seconds: int = 30
    retry_count: int = 3
    retry_delay_seconds: int = 1

    def to_dict(self) -> Dict[str, Any]:
        return {
            "url": self.url,
            "headers": self.headers,
            "timeout_seconds": self.timeout_seconds,
            "retry_count": self.retry_count,
            "retry_delay_seconds": self.retry_delay_seconds,
        }

    @classmethod
    def from_dict(cls, data: Dict[str, Any]) -> "WebhookDestination":
        return cls(
            url=data.get("url", ""),
            headers=data.get("headers", {}),
            timeout_seconds=data.get("timeout_seconds", 30),
            retry_count=data.get("retry_count", 3),
            retry_delay_seconds=data.get("retry_delay_seconds", 1),
        )


@dataclass
class NotificationConfiguration:
    id: str
    events: List[str]
    destination: WebhookDestination
    prefix_filter: str = ""
    suffix_filter: str = ""

    def matches_event(self, event_name: str, object_key: str) -> bool:
        event_match = False
        for pattern in self.events:
            if pattern.endswith("*"):
                base = pattern[:-1]
                if event_name.startswith(base):
                    event_match = True
                    break
            elif pattern == event_name:
                event_match = True
                break

        if not event_match:
            return False

        if self.prefix_filter and not object_key.startswith(self.prefix_filter):
            return False
        if self.suffix_filter and not object_key.endswith(self.suffix_filter):
            return False

        return True

    def to_dict(self) -> Dict[str, Any]:
        return {
            "Id": self.id,
            "Events": self.events,
            "Destination": self.destination.to_dict(),
            "Filter": {
                "Key": {
                    "FilterRules": [
                        {"Name": "prefix", "Value": self.prefix_filter},
                        {"Name": "suffix", "Value": self.suffix_filter},
                    ]
                }
            },
        }

    @classmethod
    def from_dict(cls, data: Dict[str, Any]) -> "NotificationConfiguration":
        prefix = ""
        suffix = ""
        filter_data = data.get("Filter", {})
        key_filter = filter_data.get("Key", {})
        for rule in key_filter.get("FilterRules", []):
            if rule.get("Name") == "prefix":
                prefix = rule.get("Value", "")
            elif rule.get("Name") == "suffix":
                suffix = rule.get("Value", "")

        return cls(
            id=data.get("Id", uuid.uuid4().hex),
            events=data.get("Events", []),
            destination=WebhookDestination.from_dict(data.get("Destination", {})),
            prefix_filter=prefix,
            suffix_filter=suffix,
        )


class NotificationService:
    def __init__(self, storage_root: Path, worker_count: int = 2):
        self.storage_root = storage_root
        self._configs: Dict[str, List[NotificationConfiguration]] = {}
        self._queue: queue.Queue[tuple[NotificationEvent, WebhookDestination]] = queue.Queue()
        self._workers: List[threading.Thread] = []
        self._shutdown = threading.Event()
        self._stats = {
            "events_queued": 0,
            "events_sent": 0,
            "events_failed": 0,
        }

        for i in range(worker_count):
            worker = threading.Thread(target=self._worker_loop, name=f"notification-worker-{i}", daemon=True)
            worker.start()
            self._workers.append(worker)

    def _config_path(self, bucket_name: str) -> Path:
        return self.storage_root / ".myfsio.sys" / "buckets" / bucket_name / "notifications.json"

    def get_bucket_notifications(self, bucket_name: str) -> List[NotificationConfiguration]:
        if bucket_name in self._configs:
            return self._configs[bucket_name]

        config_path = self._config_path(bucket_name)
        if not config_path.exists():
            return []

        try:
            data = json.loads(config_path.read_text(encoding="utf-8"))
            configs = [NotificationConfiguration.from_dict(c) for c in data.get("configurations", [])]
            self._configs[bucket_name] = configs
            return configs
        except (json.JSONDecodeError, OSError) as e:
            logger.warning(f"Failed to load notification config for {bucket_name}: {e}")
            return []

    def set_bucket_notifications(
        self, bucket_name: str, configurations: List[NotificationConfiguration]
    ) -> None:
        config_path = self._config_path(bucket_name)
        config_path.parent.mkdir(parents=True, exist_ok=True)

        data = {"configurations": [c.to_dict() for c in configurations]}
        config_path.write_text(json.dumps(data, indent=2), encoding="utf-8")
        self._configs[bucket_name] = configurations

    def delete_bucket_notifications(self, bucket_name: str) -> None:
        config_path = self._config_path(bucket_name)
        try:
            if config_path.exists():
                config_path.unlink()
        except OSError:
            pass
        self._configs.pop(bucket_name, None)

    def emit_event(self, event: NotificationEvent) -> None:
        configurations = self.get_bucket_notifications(event.bucket_name)
        if not configurations:
            return

        for config in configurations:
            if config.matches_event(event.event_name, event.object_key):
                self._queue.put((event, config.destination))
                self._stats["events_queued"] += 1
                logger.debug(
                    f"Queued notification for {event.event_name} on {event.bucket_name}/{event.object_key}"
                )

    def emit_object_created(
        self,
        bucket_name: str,
        object_key: str,
        *,
        size: int = 0,
        etag: str = "",
        version_id: Optional[str] = None,
        request_id: str = "",
        source_ip: str = "",
        user_identity: str = "",
        operation: str = "Put",
    ) -> None:
        event = NotificationEvent(
            event_name=f"s3:ObjectCreated:{operation}",
            bucket_name=bucket_name,
            object_key=object_key,
            object_size=size,
            etag=etag,
            version_id=version_id,
            request_id=request_id or uuid.uuid4().hex,
            source_ip=source_ip,
            user_identity=user_identity,
        )
        self.emit_event(event)

    def emit_object_removed(
        self,
        bucket_name: str,
        object_key: str,
        *,
        version_id: Optional[str] = None,
        request_id: str = "",
        source_ip: str = "",
        user_identity: str = "",
        operation: str = "Delete",
    ) -> None:
        event = NotificationEvent(
            event_name=f"s3:ObjectRemoved:{operation}",
            bucket_name=bucket_name,
            object_key=object_key,
            version_id=version_id,
            request_id=request_id or uuid.uuid4().hex,
            source_ip=source_ip,
            user_identity=user_identity,
        )
        self.emit_event(event)

    def _worker_loop(self) -> None:
        while not self._shutdown.is_set():
            try:
                event, destination = self._queue.get(timeout=1.0)
            except queue.Empty:
                continue

            try:
                self._send_notification(event, destination)
                self._stats["events_sent"] += 1
            except Exception as e:
                self._stats["events_failed"] += 1
                logger.error(f"Failed to send notification: {e}")
            finally:
                self._queue.task_done()

    def _send_notification(self, event: NotificationEvent, destination: WebhookDestination) -> None:
        payload = event.to_s3_event()
        headers = {"Content-Type": "application/json", **destination.headers}

        last_error = None
        for attempt in range(destination.retry_count):
            try:
                response = requests.post(
                    destination.url,
                    json=payload,
                    headers=headers,
                    timeout=destination.timeout_seconds,
||||||
|
)
|
||||||
|
if response.status_code < 400:
|
||||||
|
logger.info(
|
||||||
|
f"Notification sent: {event.event_name} -> {destination.url} (status={response.status_code})"
|
||||||
|
)
|
||||||
|
return
|
||||||
|
last_error = f"HTTP {response.status_code}: {response.text[:200]}"
|
||||||
|
except requests.RequestException as e:
|
||||||
|
last_error = str(e)
|
||||||
|
|
||||||
|
if attempt < destination.retry_count - 1:
|
||||||
|
time.sleep(destination.retry_delay_seconds * (attempt + 1))
|
||||||
|
|
||||||
|
raise RuntimeError(f"Failed after {destination.retry_count} attempts: {last_error}")
|
||||||
|
|
||||||
|
def get_stats(self) -> Dict[str, int]:
|
||||||
|
return dict(self._stats)
|
||||||
|
|
||||||
|
def shutdown(self) -> None:
|
||||||
|
self._shutdown.set()
|
||||||
|
for worker in self._workers:
|
||||||
|
worker.join(timeout=5.0)
|
||||||
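The webhook sender above retries with a linear backoff: after a failed attempt it sleeps `retry_delay_seconds * (attempt + 1)`, and raises only after all `retry_count` attempts are exhausted. A minimal sketch of the resulting delay schedule (the helper name is illustrative, not part of the diff):

```python
def backoff_delays(retry_count: int, retry_delay_seconds: float) -> list[float]:
    # Mirrors the sleep schedule in _send_notification: after 0-based attempt i
    # the worker sleeps retry_delay_seconds * (i + 1); there is no sleep after
    # the final attempt, hence range(retry_count - 1).
    return [retry_delay_seconds * (attempt + 1) for attempt in range(retry_count - 1)]
```

So with `retry_count=3` and `retry_delay_seconds=2.0` the worker waits 2s, then 4s, before giving up.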
234 app/object_lock.py Normal file
@@ -0,0 +1,234 @@
```python
from __future__ import annotations

import json
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum
from pathlib import Path
from typing import Any, Dict, Optional


class RetentionMode(Enum):
    GOVERNANCE = "GOVERNANCE"
    COMPLIANCE = "COMPLIANCE"


class ObjectLockError(Exception):
    pass


@dataclass
class ObjectLockRetention:
    mode: RetentionMode
    retain_until_date: datetime

    def to_dict(self) -> Dict[str, str]:
        return {
            "Mode": self.mode.value,
            "RetainUntilDate": self.retain_until_date.isoformat(),
        }

    @classmethod
    def from_dict(cls, data: Dict[str, Any]) -> Optional["ObjectLockRetention"]:
        if not data:
            return None
        mode_str = data.get("Mode")
        date_str = data.get("RetainUntilDate")
        if not mode_str or not date_str:
            return None
        try:
            mode = RetentionMode(mode_str)
            retain_until = datetime.fromisoformat(date_str.replace("Z", "+00:00"))
            return cls(mode=mode, retain_until_date=retain_until)
        except (ValueError, KeyError):
            return None

    def is_expired(self) -> bool:
        return datetime.now(timezone.utc) > self.retain_until_date


@dataclass
class ObjectLockConfig:
    enabled: bool = False
    default_retention: Optional[ObjectLockRetention] = None

    def to_dict(self) -> Dict[str, Any]:
        result: Dict[str, Any] = {"ObjectLockEnabled": "Enabled" if self.enabled else "Disabled"}
        if self.default_retention:
            result["Rule"] = {
                "DefaultRetention": {
                    "Mode": self.default_retention.mode.value,
                    "Days": None,
                    "Years": None,
                }
            }
        return result

    @classmethod
    def from_dict(cls, data: Dict[str, Any]) -> "ObjectLockConfig":
        enabled = data.get("ObjectLockEnabled") == "Enabled"
        default_retention = None
        rule = data.get("Rule")
        if rule and "DefaultRetention" in rule:
            dr = rule["DefaultRetention"]
            mode_str = dr.get("Mode", "GOVERNANCE")
            days = dr.get("Days")
            years = dr.get("Years")
            if days or years:
                from datetime import timedelta
                now = datetime.now(timezone.utc)
                if years:
                    delta = timedelta(days=int(years) * 365)
                else:
                    delta = timedelta(days=int(days))
                default_retention = ObjectLockRetention(
                    mode=RetentionMode(mode_str),
                    retain_until_date=now + delta,
                )
        return cls(enabled=enabled, default_retention=default_retention)


class ObjectLockService:
    def __init__(self, storage_root: Path):
        self.storage_root = storage_root
        self._config_cache: Dict[str, ObjectLockConfig] = {}

    def _bucket_lock_config_path(self, bucket_name: str) -> Path:
        return self.storage_root / ".myfsio.sys" / "buckets" / bucket_name / "object_lock.json"

    def _object_lock_meta_path(self, bucket_name: str, object_key: str) -> Path:
        safe_key = object_key.replace("/", "_").replace("\\", "_")
        return (
            self.storage_root / ".myfsio.sys" / "buckets" / bucket_name /
            "locks" / f"{safe_key}.lock.json"
        )

    def get_bucket_lock_config(self, bucket_name: str) -> ObjectLockConfig:
        if bucket_name in self._config_cache:
            return self._config_cache[bucket_name]

        config_path = self._bucket_lock_config_path(bucket_name)
        if not config_path.exists():
            return ObjectLockConfig(enabled=False)

        try:
            data = json.loads(config_path.read_text(encoding="utf-8"))
            config = ObjectLockConfig.from_dict(data)
            self._config_cache[bucket_name] = config
            return config
        except (json.JSONDecodeError, OSError):
            return ObjectLockConfig(enabled=False)

    def set_bucket_lock_config(self, bucket_name: str, config: ObjectLockConfig) -> None:
        config_path = self._bucket_lock_config_path(bucket_name)
        config_path.parent.mkdir(parents=True, exist_ok=True)
        config_path.write_text(json.dumps(config.to_dict()), encoding="utf-8")
        self._config_cache[bucket_name] = config

    def enable_bucket_lock(self, bucket_name: str) -> None:
        config = self.get_bucket_lock_config(bucket_name)
        config.enabled = True
        self.set_bucket_lock_config(bucket_name, config)

    def is_bucket_lock_enabled(self, bucket_name: str) -> bool:
        return self.get_bucket_lock_config(bucket_name).enabled

    def get_object_retention(self, bucket_name: str, object_key: str) -> Optional[ObjectLockRetention]:
        meta_path = self._object_lock_meta_path(bucket_name, object_key)
        if not meta_path.exists():
            return None
        try:
            data = json.loads(meta_path.read_text(encoding="utf-8"))
            return ObjectLockRetention.from_dict(data.get("retention", {}))
        except (json.JSONDecodeError, OSError):
            return None

    def set_object_retention(
        self,
        bucket_name: str,
        object_key: str,
        retention: ObjectLockRetention,
        bypass_governance: bool = False,
    ) -> None:
        existing = self.get_object_retention(bucket_name, object_key)
        if existing and not existing.is_expired():
            if existing.mode == RetentionMode.COMPLIANCE:
                raise ObjectLockError(
                    "Cannot modify retention on object with COMPLIANCE mode until retention expires"
                )
            if existing.mode == RetentionMode.GOVERNANCE and not bypass_governance:
                raise ObjectLockError(
                    "Cannot modify GOVERNANCE retention without bypass-governance permission"
                )

        meta_path = self._object_lock_meta_path(bucket_name, object_key)
        meta_path.parent.mkdir(parents=True, exist_ok=True)

        existing_data: Dict[str, Any] = {}
        if meta_path.exists():
            try:
                existing_data = json.loads(meta_path.read_text(encoding="utf-8"))
            except (json.JSONDecodeError, OSError):
                pass

        existing_data["retention"] = retention.to_dict()
        meta_path.write_text(json.dumps(existing_data), encoding="utf-8")

    def get_legal_hold(self, bucket_name: str, object_key: str) -> bool:
        meta_path = self._object_lock_meta_path(bucket_name, object_key)
        if not meta_path.exists():
            return False
        try:
            data = json.loads(meta_path.read_text(encoding="utf-8"))
            return data.get("legal_hold", False)
        except (json.JSONDecodeError, OSError):
            return False

    def set_legal_hold(self, bucket_name: str, object_key: str, enabled: bool) -> None:
        meta_path = self._object_lock_meta_path(bucket_name, object_key)
        meta_path.parent.mkdir(parents=True, exist_ok=True)

        existing_data: Dict[str, Any] = {}
        if meta_path.exists():
            try:
                existing_data = json.loads(meta_path.read_text(encoding="utf-8"))
            except (json.JSONDecodeError, OSError):
                pass

        existing_data["legal_hold"] = enabled
        meta_path.write_text(json.dumps(existing_data), encoding="utf-8")

    def can_delete_object(
        self,
        bucket_name: str,
        object_key: str,
        bypass_governance: bool = False,
    ) -> tuple[bool, str]:
        if self.get_legal_hold(bucket_name, object_key):
            return False, "Object is under legal hold"

        retention = self.get_object_retention(bucket_name, object_key)
        if retention and not retention.is_expired():
            if retention.mode == RetentionMode.COMPLIANCE:
                return False, f"Object is locked in COMPLIANCE mode until {retention.retain_until_date.isoformat()}"
            if retention.mode == RetentionMode.GOVERNANCE:
                if not bypass_governance:
                    return False, f"Object is locked in GOVERNANCE mode until {retention.retain_until_date.isoformat()}"

        return True, ""

    def can_overwrite_object(
        self,
        bucket_name: str,
        object_key: str,
        bypass_governance: bool = False,
    ) -> tuple[bool, str]:
        return self.can_delete_object(bucket_name, object_key, bypass_governance)

    def delete_object_lock_metadata(self, bucket_name: str, object_key: str) -> None:
        meta_path = self._object_lock_meta_path(bucket_name, object_key)
        try:
            if meta_path.exists():
                meta_path.unlink()
        except OSError:
            pass
```
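A detail worth noting in `ObjectLockRetention.from_dict` above: S3-style `RetainUntilDate` values often end in a literal `Z` suffix, which `datetime.fromisoformat` rejects on Python versions before 3.11, so the parser rewrites it to `+00:00` first. A standalone check of that normalization:

```python
from datetime import datetime, timezone

# Same normalization as in from_dict: map the "Z" (Zulu/UTC) suffix to an
# explicit "+00:00" offset before parsing, yielding a timezone-aware datetime.
date_str = "2030-01-01T00:00:00Z"
retain_until = datetime.fromisoformat(date_str.replace("Z", "+00:00"))
```

Because the result is timezone-aware, `is_expired()` can compare it directly against `datetime.now(timezone.utc)` without raising a naive/aware mismatch.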
```diff
@@ -1,21 +1,125 @@
+"""Background replication worker."""
 from __future__ import annotations
 
+import json
 import logging
+import mimetypes
 import threading
+import time
 from concurrent.futures import ThreadPoolExecutor
-from dataclasses import dataclass
+from dataclasses import dataclass, field
 from pathlib import Path
-from typing import Dict, Optional
+from typing import Any, Dict, List, Optional
 
 import boto3
+from botocore.config import Config
 from botocore.exceptions import ClientError
+from boto3.exceptions import S3UploadFailedError
 
 from .connections import ConnectionStore, RemoteConnection
-from .storage import ObjectStorage
+from .storage import ObjectStorage, StorageError
 
 logger = logging.getLogger(__name__)
 
+REPLICATION_USER_AGENT = "S3ReplicationAgent/1.0"
+REPLICATION_CONNECT_TIMEOUT = 5
+REPLICATION_READ_TIMEOUT = 30
+STREAMING_THRESHOLD_BYTES = 10 * 1024 * 1024
+
+REPLICATION_MODE_NEW_ONLY = "new_only"
+REPLICATION_MODE_ALL = "all"
+
+
+def _create_s3_client(connection: RemoteConnection, *, health_check: bool = False) -> Any:
+    """Create a boto3 S3 client for the given connection.
+
+    Args:
+        connection: Remote S3 connection configuration
+        health_check: If True, use minimal retries for quick health checks
+    """
+    config = Config(
+        user_agent_extra=REPLICATION_USER_AGENT,
+        connect_timeout=REPLICATION_CONNECT_TIMEOUT,
+        read_timeout=REPLICATION_READ_TIMEOUT,
+        retries={'max_attempts': 1 if health_check else 2},
+        signature_version='s3v4',
+        s3={'addressing_style': 'path'},
+        request_checksum_calculation='when_required',
+        response_checksum_validation='when_required',
+    )
+    return boto3.client(
+        "s3",
+        endpoint_url=connection.endpoint_url,
+        aws_access_key_id=connection.access_key,
+        aws_secret_access_key=connection.secret_key,
+        region_name=connection.region or 'us-east-1',
+        config=config,
+    )
+
+
+@dataclass
+class ReplicationStats:
+    """Statistics for replication operations - computed dynamically."""
+    objects_synced: int = 0
+    objects_pending: int = 0
+    objects_orphaned: int = 0
+    bytes_synced: int = 0
+    last_sync_at: Optional[float] = None
+    last_sync_key: Optional[str] = None
+
+    def to_dict(self) -> dict:
+        return {
+            "objects_synced": self.objects_synced,
+            "objects_pending": self.objects_pending,
+            "objects_orphaned": self.objects_orphaned,
+            "bytes_synced": self.bytes_synced,
+            "last_sync_at": self.last_sync_at,
+            "last_sync_key": self.last_sync_key,
+        }
+
+    @classmethod
+    def from_dict(cls, data: dict) -> "ReplicationStats":
+        return cls(
+            objects_synced=data.get("objects_synced", 0),
+            objects_pending=data.get("objects_pending", 0),
+            objects_orphaned=data.get("objects_orphaned", 0),
+            bytes_synced=data.get("bytes_synced", 0),
+            last_sync_at=data.get("last_sync_at"),
+            last_sync_key=data.get("last_sync_key"),
+        )
+
+
+@dataclass
+class ReplicationFailure:
+    object_key: str
+    error_message: str
+    timestamp: float
+    failure_count: int
+    bucket_name: str
+    action: str
+    last_error_code: Optional[str] = None
+
+    def to_dict(self) -> dict:
+        return {
+            "object_key": self.object_key,
+            "error_message": self.error_message,
+            "timestamp": self.timestamp,
+            "failure_count": self.failure_count,
+            "bucket_name": self.bucket_name,
+            "action": self.action,
+            "last_error_code": self.last_error_code,
+        }
+
+    @classmethod
+    def from_dict(cls, data: dict) -> "ReplicationFailure":
+        return cls(
+            object_key=data["object_key"],
+            error_message=data["error_message"],
+            timestamp=data["timestamp"],
+            failure_count=data["failure_count"],
+            bucket_name=data["bucket_name"],
+            action=data["action"],
+            last_error_code=data.get("last_error_code"),
+        )
+
+
 @dataclass
 class ReplicationRule:
```
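`ReplicationFailureStore.add_failure` (introduced in the next hunk) keeps one entry per object key, bumping `failure_count` on repeats and inserting new failures at the front so the list stays newest-first. A minimal stand-in for that dedup logic (the `Failure` class here is a hypothetical reduction of `ReplicationFailure`):

```python
from dataclasses import dataclass

@dataclass
class Failure:  # hypothetical, pared-down stand-in for ReplicationFailure
    object_key: str
    failure_count: int = 1

def add_failure(failures: list, new: Failure) -> None:
    # Same shape as ReplicationFailureStore.add_failure: dedup by key,
    # increment the count on a repeat, otherwise prepend (newest first).
    existing = next((f for f in failures if f.object_key == new.object_key), None)
    if existing:
        existing.failure_count += 1
    else:
        failures.insert(0, new)

failures: list = []
add_failure(failures, Failure("a.txt"))
add_failure(failures, Failure("a.txt"))   # deduplicated, count bumped to 2
add_failure(failures, Failure("b.txt"))   # prepended
```

The real store additionally updates the timestamp and error fields on a repeat and caps the persisted list at `MAX_FAILURES_PER_BUCKET` when saving.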
```diff
@@ -23,50 +127,272 @@ class ReplicationRule:
     target_connection_id: str
     target_bucket: str
     enabled: bool = True
+    mode: str = REPLICATION_MODE_NEW_ONLY
+    created_at: Optional[float] = None
+    stats: ReplicationStats = field(default_factory=ReplicationStats)
+
+    def to_dict(self) -> dict:
+        return {
+            "bucket_name": self.bucket_name,
+            "target_connection_id": self.target_connection_id,
+            "target_bucket": self.target_bucket,
+            "enabled": self.enabled,
+            "mode": self.mode,
+            "created_at": self.created_at,
+            "stats": self.stats.to_dict(),
+        }
+
+    @classmethod
+    def from_dict(cls, data: dict) -> "ReplicationRule":
+        stats_data = data.pop("stats", {})
+        if "mode" not in data:
+            data["mode"] = REPLICATION_MODE_NEW_ONLY
+        if "created_at" not in data:
+            data["created_at"] = None
+        rule = cls(**data)
+        rule.stats = ReplicationStats.from_dict(stats_data) if stats_data else ReplicationStats()
+        return rule
+
+
+class ReplicationFailureStore:
+    MAX_FAILURES_PER_BUCKET = 50
+
+    def __init__(self, storage_root: Path) -> None:
+        self.storage_root = storage_root
+        self._lock = threading.Lock()
+
+    def _get_failures_path(self, bucket_name: str) -> Path:
+        return self.storage_root / ".myfsio.sys" / "buckets" / bucket_name / "replication_failures.json"
+
+    def load_failures(self, bucket_name: str) -> List[ReplicationFailure]:
+        path = self._get_failures_path(bucket_name)
+        if not path.exists():
+            return []
+        try:
+            with open(path, "r") as f:
+                data = json.load(f)
+            return [ReplicationFailure.from_dict(d) for d in data.get("failures", [])]
+        except (OSError, ValueError, KeyError) as e:
+            logger.error(f"Failed to load replication failures for {bucket_name}: {e}")
+            return []
+
+    def save_failures(self, bucket_name: str, failures: List[ReplicationFailure]) -> None:
+        path = self._get_failures_path(bucket_name)
+        path.parent.mkdir(parents=True, exist_ok=True)
+        data = {"failures": [f.to_dict() for f in failures[:self.MAX_FAILURES_PER_BUCKET]]}
+        try:
+            with open(path, "w") as f:
+                json.dump(data, f, indent=2)
+        except OSError as e:
+            logger.error(f"Failed to save replication failures for {bucket_name}: {e}")
+
+    def add_failure(self, bucket_name: str, failure: ReplicationFailure) -> None:
+        with self._lock:
+            failures = self.load_failures(bucket_name)
+            existing = next((f for f in failures if f.object_key == failure.object_key), None)
+            if existing:
+                existing.failure_count += 1
+                existing.timestamp = failure.timestamp
+                existing.error_message = failure.error_message
+                existing.last_error_code = failure.last_error_code
+            else:
+                failures.insert(0, failure)
+            self.save_failures(bucket_name, failures)
+
+    def remove_failure(self, bucket_name: str, object_key: str) -> bool:
+        with self._lock:
+            failures = self.load_failures(bucket_name)
+            original_len = len(failures)
+            failures = [f for f in failures if f.object_key != object_key]
+            if len(failures) < original_len:
+                self.save_failures(bucket_name, failures)
+                return True
+            return False
+
+    def clear_failures(self, bucket_name: str) -> None:
+        with self._lock:
+            path = self._get_failures_path(bucket_name)
+            if path.exists():
+                path.unlink()
+
+    def get_failure(self, bucket_name: str, object_key: str) -> Optional[ReplicationFailure]:
+        failures = self.load_failures(bucket_name)
+        return next((f for f in failures if f.object_key == object_key), None)
+
+    def get_failure_count(self, bucket_name: str) -> int:
+        return len(self.load_failures(bucket_name))
+
+
 class ReplicationManager:
-    def __init__(self, storage: ObjectStorage, connections: ConnectionStore, rules_path: Path) -> None:
+    def __init__(self, storage: ObjectStorage, connections: ConnectionStore, rules_path: Path, storage_root: Path) -> None:
         self.storage = storage
         self.connections = connections
         self.rules_path = rules_path
+        self.storage_root = storage_root
         self._rules: Dict[str, ReplicationRule] = {}
+        self._stats_lock = threading.Lock()
         self._executor = ThreadPoolExecutor(max_workers=4, thread_name_prefix="ReplicationWorker")
+        self._shutdown = False
+        self.failure_store = ReplicationFailureStore(storage_root)
         self.reload_rules()
 
+    def shutdown(self, wait: bool = True) -> None:
+        """Shutdown the replication executor gracefully.
+
+        Args:
+            wait: If True, wait for pending tasks to complete
+        """
+        self._shutdown = True
+        self._executor.shutdown(wait=wait)
+        logger.info("Replication manager shut down")
+
     def reload_rules(self) -> None:
         if not self.rules_path.exists():
             self._rules = {}
             return
         try:
-            import json
             with open(self.rules_path, "r") as f:
                 data = json.load(f)
             for bucket, rule_data in data.items():
-                self._rules[bucket] = ReplicationRule(**rule_data)
+                self._rules[bucket] = ReplicationRule.from_dict(rule_data)
         except (OSError, ValueError) as e:
             logger.error(f"Failed to load replication rules: {e}")
 
     def save_rules(self) -> None:
-        import json
-        data = {b: rule.__dict__ for b, rule in self._rules.items()}
+        data = {b: rule.to_dict() for b, rule in self._rules.items()}
         self.rules_path.parent.mkdir(parents=True, exist_ok=True)
         with open(self.rules_path, "w") as f:
             json.dump(data, f, indent=2)
 
+    def check_endpoint_health(self, connection: RemoteConnection) -> bool:
+        """Check if a remote endpoint is reachable and responsive.
+
+        Returns True if endpoint is healthy, False otherwise.
+        Uses short timeouts to prevent blocking.
+        """
+        try:
+            s3 = _create_s3_client(connection, health_check=True)
+            s3.list_buckets()
+            return True
+        except Exception as e:
+            logger.warning(f"Endpoint health check failed for {connection.name} ({connection.endpoint_url}): {e}")
+            return False
+
     def get_rule(self, bucket_name: str) -> Optional[ReplicationRule]:
         return self._rules.get(bucket_name)
 
     def set_rule(self, rule: ReplicationRule) -> None:
+        old_rule = self._rules.get(rule.bucket_name)
+        was_all_mode = old_rule and old_rule.mode == REPLICATION_MODE_ALL if old_rule else False
         self._rules[rule.bucket_name] = rule
         self.save_rules()
+
+        if rule.mode == REPLICATION_MODE_ALL and rule.enabled and not was_all_mode:
+            logger.info(f"Replication mode ALL enabled for {rule.bucket_name}, triggering sync of existing objects")
+            self._executor.submit(self.replicate_existing_objects, rule.bucket_name)
 
     def delete_rule(self, bucket_name: str) -> None:
         if bucket_name in self._rules:
             del self._rules[bucket_name]
             self.save_rules()
 
-    def trigger_replication(self, bucket_name: str, object_key: str) -> None:
+    def _update_last_sync(self, bucket_name: str, object_key: str = "") -> None:
+        """Update last sync timestamp after a successful operation."""
+        with self._stats_lock:
+            rule = self._rules.get(bucket_name)
+            if not rule:
+                return
+            rule.stats.last_sync_at = time.time()
+            rule.stats.last_sync_key = object_key
+            self.save_rules()
+
+    def get_sync_status(self, bucket_name: str) -> Optional[ReplicationStats]:
+        """Dynamically compute replication status by comparing source and destination buckets."""
+        rule = self.get_rule(bucket_name)
+        if not rule:
+            return None
+
+        connection = self.connections.get(rule.target_connection_id)
+        if not connection:
+            return rule.stats
+
+        try:
+            source_objects = self.storage.list_objects_all(bucket_name)
+            source_keys = {obj.key: obj.size for obj in source_objects}
+
+            s3 = _create_s3_client(connection)
+
+            dest_keys = set()
+            bytes_synced = 0
+            paginator = s3.get_paginator('list_objects_v2')
+            try:
+                for page in paginator.paginate(Bucket=rule.target_bucket):
+                    for obj in page.get('Contents', []):
+                        dest_keys.add(obj['Key'])
+                        if obj['Key'] in source_keys:
+                            bytes_synced += obj.get('Size', 0)
+            except ClientError as e:
+                if e.response['Error']['Code'] == 'NoSuchBucket':
+                    dest_keys = set()
+                else:
+                    raise
+
+            synced = source_keys.keys() & dest_keys
+            orphaned = dest_keys - source_keys.keys()
+
+            if rule.mode == REPLICATION_MODE_ALL:
+                pending = source_keys.keys() - dest_keys
+            else:
+                pending = set()
+
+            rule.stats.objects_synced = len(synced)
+            rule.stats.objects_pending = len(pending)
+            rule.stats.objects_orphaned = len(orphaned)
+            rule.stats.bytes_synced = bytes_synced
+
+            return rule.stats
+
+        except (ClientError, StorageError) as e:
+            logger.error(f"Failed to compute sync status for {bucket_name}: {e}")
+            return rule.stats
+
+    def replicate_existing_objects(self, bucket_name: str) -> None:
+        """Trigger replication for all existing objects in a bucket."""
+        rule = self.get_rule(bucket_name)
+        if not rule or not rule.enabled:
+            return
+
+        connection = self.connections.get(rule.target_connection_id)
+        if not connection:
+            logger.warning(f"Cannot replicate existing objects: Connection {rule.target_connection_id} not found")
+            return
+
+        if not self.check_endpoint_health(connection):
+            logger.warning(f"Cannot replicate existing objects: Endpoint {connection.name} ({connection.endpoint_url}) is not reachable")
+            return
+
+        try:
+            objects = self.storage.list_objects_all(bucket_name)
+            logger.info(f"Starting replication of {len(objects)} existing objects from {bucket_name}")
+            for obj in objects:
+                self._executor.submit(self._replicate_task, bucket_name, obj.key, rule, connection, "write")
+        except StorageError as e:
+            logger.error(f"Failed to list objects for replication: {e}")
+
+    def create_remote_bucket(self, connection_id: str, bucket_name: str) -> None:
+        """Create a bucket on the remote connection."""
+        connection = self.connections.get(connection_id)
+        if not connection:
+            raise ValueError(f"Connection {connection_id} not found")
+
+        try:
+            s3 = _create_s3_client(connection)
+            s3.create_bucket(Bucket=bucket_name)
+        except ClientError as e:
+            logger.error(f"Failed to create remote bucket {bucket_name}: {e}")
+            raise
+
+    def trigger_replication(self, bucket_name: str, object_key: str, action: str = "write") -> None:
         rule = self.get_rule(bucket_name)
         if not rule or not rule.enabled:
             return
```
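`get_sync_status` classifies objects with plain set arithmetic over the source listing (a key-to-size mapping) and the destination listing (a key set): the intersection is synced, destination-only keys are orphaned, and source-only keys are pending when the rule is in "all" mode. The same operations in isolation, with illustrative sample keys:

```python
# Sample data standing in for the two listings in get_sync_status.
source_keys = {"a.txt": 10, "b.txt": 20, "c.txt": 30}   # local key -> size
dest_keys = {"b.txt", "c.txt", "stale.txt"}             # remote key set

synced = source_keys.keys() & dest_keys        # present on both sides
orphaned = dest_keys - source_keys.keys()      # remote-only leftovers
pending = source_keys.keys() - dest_keys       # only counted in "all" mode
```

`dict.keys()` returns a set-like view, so it composes directly with `&` and `-` against a plain `set` without an explicit conversion.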
@@ -76,46 +402,208 @@ class ReplicationManager:
|
|||||||
logger.warning(f"Replication skipped for {bucket_name}/{object_key}: Connection {rule.target_connection_id} not found")
|
logger.warning(f"Replication skipped for {bucket_name}/{object_key}: Connection {rule.target_connection_id} not found")
|
||||||
return
|
return
|
||||||
|
|
||||||
self._executor.submit(self._replicate_task, bucket_name, object_key, rule, connection)
|
if not self.check_endpoint_health(connection):
|
||||||
|
logger.warning(f"Replication skipped for {bucket_name}/{object_key}: Endpoint {connection.name} ({connection.endpoint_url}) is not reachable")
|
||||||
|
return
|
||||||
|
|
||||||
|
self._executor.submit(self._replicate_task, bucket_name, object_key, rule, connection, action)
|
||||||
|
|
||||||
|
    def _replicate_task(self, bucket_name: str, object_key: str, rule: ReplicationRule, conn: RemoteConnection, action: str) -> None:
        if self._shutdown:
            return

        current_rule = self.get_rule(bucket_name)
        if not current_rule or not current_rule.enabled:
            logger.debug(f"Replication skipped for {bucket_name}/{object_key}: rule disabled or removed")
            return

        if ".." in object_key or object_key.startswith("/") or object_key.startswith("\\"):
            logger.error(f"Invalid object key in replication (path traversal attempt): {object_key}")
            return
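The traversal guard above rejects suspicious keys before any filesystem access. As a standalone sketch (hypothetical helper name; same conditions as the code):

```python
def is_safe_object_key(key: str) -> bool:
    # Same conditions as the guard in _replicate_task: reject parent-dir
    # components and keys that begin with a path separator.
    return ".." not in key and not key.startswith(("/", "\\"))

print(is_safe_object_key("photos/2024/cat.jpg"))  # True
print(is_safe_object_key("../etc/passwd"))        # False
```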
        try:
            from .storage import ObjectStorage
            ObjectStorage._sanitize_object_key(object_key)
        except StorageError as e:
            logger.error(f"Object key validation failed in replication: {e}")
            return

        try:
            s3 = _create_s3_client(conn)

            if action == "delete":
                try:
                    s3.delete_object(Bucket=rule.target_bucket, Key=object_key)
                    logger.info(f"Replicated DELETE {bucket_name}/{object_key} to {conn.name} ({rule.target_bucket})")
                    self._update_last_sync(bucket_name, object_key)
                    self.failure_store.remove_failure(bucket_name, object_key)
                except ClientError as e:
                    error_code = e.response.get('Error', {}).get('Code')
                    logger.error(f"Replication DELETE failed for {bucket_name}/{object_key}: {e}")
                    self.failure_store.add_failure(bucket_name, ReplicationFailure(
                        object_key=object_key,
                        error_message=str(e),
                        timestamp=time.time(),
                        failure_count=1,
                        bucket_name=bucket_name,
                        action="delete",
                        last_error_code=error_code,
                    ))
                return

            try:
                path = self.storage.get_object_path(bucket_name, object_key)
            except StorageError:
                logger.error(f"Source object not found: {bucket_name}/{object_key}")
                return

            content_type, _ = mimetypes.guess_type(path)
            file_size = path.stat().st_size

            logger.info(f"Replicating {bucket_name}/{object_key}: Size={file_size}, ContentType={content_type}")
            def do_upload() -> None:
                """Upload object using the appropriate method based on file size.

                For small files (< 10 MiB): read into memory for simpler handling.
                For large files: use a streaming upload to avoid memory issues.
                """
                extra_args = {}
                if content_type:
                    extra_args["ContentType"] = content_type

                if file_size >= STREAMING_THRESHOLD_BYTES:
                    s3.upload_file(
                        str(path),
                        rule.target_bucket,
                        object_key,
                        ExtraArgs=extra_args if extra_args else None,
                    )
                else:
                    file_content = path.read_bytes()
                    put_kwargs = {
                        "Bucket": rule.target_bucket,
                        "Key": object_key,
                        "Body": file_content,
                        **extra_args,
                    }
                    s3.put_object(**put_kwargs)
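The branch inside `do_upload` reduces to a single size comparison. A minimal sketch of that decision, assuming the 10 MiB threshold stated in the docstring (`STREAMING_THRESHOLD_BYTES` is defined elsewhere in the module):

```python
STREAMING_THRESHOLD_BYTES = 10 * 1024 * 1024  # assumed value, per the docstring

def choose_upload_method(file_size: int) -> str:
    # At or above the threshold: stream from disk via upload_file;
    # below it: read the whole file and issue a single put_object.
    return "upload_file" if file_size >= STREAMING_THRESHOLD_BYTES else "put_object"

print(choose_upload_method(1024))              # put_object
print(choose_upload_method(64 * 1024 * 1024))  # upload_file
```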
            try:
                do_upload()
            except (ClientError, S3UploadFailedError) as e:
                error_code = None
                if isinstance(e, ClientError):
                    error_code = e.response['Error']['Code']
                elif isinstance(e, S3UploadFailedError):
                    if "NoSuchBucket" in str(e):
                        error_code = 'NoSuchBucket'

                if error_code == 'NoSuchBucket':
                    logger.info(f"Target bucket {rule.target_bucket} not found. Attempting to create it.")
                    bucket_ready = False
                    try:
                        s3.create_bucket(Bucket=rule.target_bucket)
                        bucket_ready = True
                        logger.info(f"Created target bucket {rule.target_bucket}")
                    except ClientError as bucket_err:
                        if bucket_err.response['Error']['Code'] in ('BucketAlreadyExists', 'BucketAlreadyOwnedByYou'):
                            logger.debug(f"Bucket {rule.target_bucket} already exists (created by another thread)")
                            bucket_ready = True
                        else:
                            logger.error(f"Failed to create target bucket {rule.target_bucket}: {bucket_err}")
                            raise e

                    if bucket_ready:
                        do_upload()
                    else:
                        raise e
            logger.info(f"Replicated {bucket_name}/{object_key} to {conn.name} ({rule.target_bucket})")
            self._update_last_sync(bucket_name, object_key)
            self.failure_store.remove_failure(bucket_name, object_key)
        except (ClientError, OSError, ValueError) as e:
            error_code = None
            if isinstance(e, ClientError):
                error_code = e.response.get('Error', {}).get('Code')
            logger.error(f"Replication failed for {bucket_name}/{object_key}: {e}")
            self.failure_store.add_failure(bucket_name, ReplicationFailure(
                object_key=object_key,
                error_message=str(e),
                timestamp=time.time(),
                failure_count=1,
                bucket_name=bucket_name,
                action=action,
                last_error_code=error_code,
            ))
        except Exception as e:
            logger.exception(f"Unexpected error during replication for {bucket_name}/{object_key}")
            self.failure_store.add_failure(bucket_name, ReplicationFailure(
                object_key=object_key,
                error_message=str(e),
                timestamp=time.time(),
                failure_count=1,
                bucket_name=bucket_name,
                action=action,
                last_error_code=None,
            ))
    def get_failed_items(self, bucket_name: str, limit: int = 50, offset: int = 0) -> List[ReplicationFailure]:
        failures = self.failure_store.load_failures(bucket_name)
        return failures[offset:offset + limit]

    def get_failure_count(self, bucket_name: str) -> int:
        return self.failure_store.get_failure_count(bucket_name)

    def retry_failed_item(self, bucket_name: str, object_key: str) -> bool:
        failure = self.failure_store.get_failure(bucket_name, object_key)
        if not failure:
            return False

        rule = self.get_rule(bucket_name)
        if not rule or not rule.enabled:
            return False

        connection = self.connections.get(rule.target_connection_id)
        if not connection:
            logger.warning(f"Cannot retry: Connection {rule.target_connection_id} not found")
            return False

        if not self.check_endpoint_health(connection):
            logger.warning(f"Cannot retry: Endpoint {connection.name} is not reachable")
            return False

        self._executor.submit(self._replicate_task, bucket_name, object_key, rule, connection, failure.action)
        return True

    def retry_all_failed(self, bucket_name: str) -> Dict[str, int]:
        failures = self.failure_store.load_failures(bucket_name)
        if not failures:
            return {"submitted": 0, "skipped": 0}

        rule = self.get_rule(bucket_name)
        if not rule or not rule.enabled:
            return {"submitted": 0, "skipped": len(failures)}

        connection = self.connections.get(rule.target_connection_id)
        if not connection:
            logger.warning(f"Cannot retry: Connection {rule.target_connection_id} not found")
            return {"submitted": 0, "skipped": len(failures)}

        if not self.check_endpoint_health(connection):
            logger.warning(f"Cannot retry: Endpoint {connection.name} is not reachable")
            return {"submitted": 0, "skipped": len(failures)}

        submitted = 0
        for failure in failures:
            self._executor.submit(self._replicate_task, bucket_name, failure.object_key, rule, connection, failure.action)
            submitted += 1

        return {"submitted": submitted, "skipped": 0}

    def dismiss_failure(self, bucket_name: str, object_key: str) -> bool:
        return self.failure_store.remove_failure(bucket_name, object_key)

    def clear_failures(self, bucket_name: str) -> None:
        self.failure_store.clear_failures(bucket_name)
app/s3_api.py (1948 lines changed; diff suppressed because it is too large)

@@ -1,4 +1,3 @@
-"""Ephemeral store for one-time secrets communicated to the UI."""
 from __future__ import annotations

 import secrets
app/storage.py (1103 lines changed; diff suppressed because it is too large)

@@ -1,7 +1,6 @@
-"""Central location for the application version string."""
 from __future__ import annotations

-APP_VERSION = "0.1.0"
+APP_VERSION = "0.2.0"


 def get_version() -> str:
docker-entrypoint.sh (new file, 5 lines)

@@ -0,0 +1,5 @@
+#!/bin/sh
+set -e
+
+# Run both services using the python runner in production mode
+exec python run.py --prod
docs.md (812 lines changed)

@@ -33,6 +63,63 @@ python run.py --mode api # API only (port 5000)
python run.py --mode ui # UI only (port 5100)
```

### Configuration validation

Validate your configuration before deploying:

```bash
# Show configuration summary
python run.py --show-config
./myfsio --show-config

# Validate and check for issues (exits with code 1 if critical issues found)
python run.py --check-config
./myfsio --check-config
```

### Linux Installation (Recommended for Production)

For production deployments on Linux, use the provided installation script:

```bash
# Download the binary and install script
# Then run the installer with sudo:
sudo ./scripts/install.sh --binary ./myfsio

# Or with custom paths:
sudo ./scripts/install.sh \
  --binary ./myfsio \
  --install-dir /opt/myfsio \
  --data-dir /mnt/storage/myfsio \
  --log-dir /var/log/myfsio \
  --api-url https://s3.example.com \
  --user myfsio

# Non-interactive mode (for automation):
sudo ./scripts/install.sh --binary ./myfsio -y
```

The installer will:
1. Create a dedicated system user
2. Set up directories with proper permissions
3. Generate a secure `SECRET_KEY`
4. Create an environment file at `/opt/myfsio/myfsio.env`
5. Install and configure a systemd service

After installation:
```bash
sudo systemctl start myfsio    # Start the service
sudo systemctl enable myfsio   # Enable on boot
sudo systemctl status myfsio   # Check status
sudo journalctl -u myfsio -f   # View logs
```

To uninstall:
```bash
sudo ./scripts/uninstall.sh              # Full removal
sudo ./scripts/uninstall.sh --keep-data  # Keep data directory
```
### Docker quickstart

The repo now ships a `Dockerfile` so you can run both services in one container:
@@ -69,19 +126,433 @@ The repo now tracks a human-friendly release string inside `app/version.py` (see

## 3. Configuration Reference

All configuration is done via environment variables. The table below lists every supported variable.

### Core Settings

| Variable | Default | Notes |
| --- | --- | --- |
| `STORAGE_ROOT` | `<repo>/data` | Filesystem home for all buckets/objects. |
| `MAX_UPLOAD_SIZE` | `1073741824` (1 GiB) | Bytes. Caps incoming uploads in both API + UI. |
| `UI_PAGE_SIZE` | `100` | `MaxKeys` hint shown in listings. |
| `SECRET_KEY` | Auto-generated | Flask session key. Auto-generates and persists if not set. **Set explicitly in production.** |
| `API_BASE_URL` | `None` | Public URL for presigned URLs. Required behind proxies. |
| `AWS_REGION` | `us-east-1` | Region embedded in SigV4 credential scope. |
| `AWS_SERVICE` | `s3` | Service string for SigV4. |
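Since every setting is a plain environment variable, a hypothetical helper (not part of the codebase) illustrates how an integer setting falls back to its documented default:

```python
import os

def env_int(name: str, default: int) -> int:
    # Read an integer setting from the environment, falling back to the
    # documented default when the variable is unset.
    return int(os.environ.get(name, default))

max_upload = env_int("MAX_UPLOAD_SIZE", 1073741824)  # 1 GiB default
```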
### IAM & Security

| Variable | Default | Notes |
| --- | --- | --- |
| `IAM_CONFIG` | `data/.myfsio.sys/config/iam.json` | Stores users, secrets, and inline policies. |
| `BUCKET_POLICY_PATH` | `data/.myfsio.sys/config/bucket_policies.json` | Bucket policy store (auto hot-reload). |
| `AUTH_MAX_ATTEMPTS` | `5` | Failed login attempts before lockout. |
| `AUTH_LOCKOUT_MINUTES` | `15` | Lockout duration after max failed attempts. |
| `SESSION_LIFETIME_DAYS` | `30` | How long UI sessions remain valid. |
| `SECRET_TTL_SECONDS` | `300` | TTL for ephemeral secrets (presigned URLs). |
| `UI_ENFORCE_BUCKET_POLICIES` | `false` | Whether the UI should enforce bucket policies. |

### CORS (Cross-Origin Resource Sharing)

| Variable | Default | Notes |
| --- | --- | --- |
| `CORS_ORIGINS` | `*` | Comma-separated allowed origins. Use specific domains in production. |
| `CORS_METHODS` | `GET,PUT,POST,DELETE,OPTIONS,HEAD` | Allowed HTTP methods. |
| `CORS_ALLOW_HEADERS` | `*` | Allowed request headers. |
| `CORS_EXPOSE_HEADERS` | `*` | Response headers visible to browsers (e.g., `ETag`). |

### Rate Limiting

| Variable | Default | Notes |
| --- | --- | --- |
| `RATE_LIMIT_DEFAULT` | `200 per minute` | Default rate limit for API endpoints. |
| `RATE_LIMIT_STORAGE_URI` | `memory://` | Storage backend for rate limits. Use `redis://host:port` for distributed setups. |

### Logging

| Variable | Default | Notes |
| --- | --- | --- |
| `LOG_LEVEL` | `INFO` | Log verbosity: `DEBUG`, `INFO`, `WARNING`, `ERROR`. |
| `LOG_TO_FILE` | `true` | Enable file logging. |
| `LOG_DIR` | `<repo>/logs` | Directory for log files. |
| `LOG_FILE` | `app.log` | Log filename. |
| `LOG_MAX_BYTES` | `5242880` (5 MB) | Max log file size before rotation. |
| `LOG_BACKUP_COUNT` | `3` | Number of rotated log files to keep. |

### Encryption

| Variable | Default | Notes |
| --- | --- | --- |
| `ENCRYPTION_ENABLED` | `false` | Enable server-side encryption support. |
| `ENCRYPTION_MASTER_KEY_PATH` | `data/.myfsio.sys/keys/master.key` | Path to the master encryption key file. |
| `DEFAULT_ENCRYPTION_ALGORITHM` | `AES256` | Default algorithm for new encrypted objects. |
| `KMS_ENABLED` | `false` | Enable KMS key management for encryption. |
| `KMS_KEYS_PATH` | `data/.myfsio.sys/keys/kms_keys.json` | Path to store KMS key metadata. |

### Performance Tuning

| Variable | Default | Notes |
| --- | --- | --- |
| `STREAM_CHUNK_SIZE` | `65536` (64 KB) | Chunk size for streaming large files. |
| `MULTIPART_MIN_PART_SIZE` | `5242880` (5 MB) | Minimum part size for multipart uploads. |
| `BUCKET_STATS_CACHE_TTL` | `60` | Seconds to cache bucket statistics. |
| `BULK_DELETE_MAX_KEYS` | `500` | Maximum keys per bulk delete request. |

### Server Settings

| Variable | Default | Notes |
| --- | --- | --- |
| `APP_HOST` | `0.0.0.0` | Network interface to bind to. |
| `APP_PORT` | `5000` | API server port (UI uses 5100). |
| `FLASK_DEBUG` | `0` | Enable Flask debug mode. **Never enable in production.** |
### Production Checklist

Before deploying to production, ensure you:

1. **Set `SECRET_KEY`** - Use a strong, unique value (e.g., `openssl rand -base64 32`)
2. **Restrict CORS** - Set `CORS_ORIGINS` to your specific domains instead of `*`
3. **Configure `API_BASE_URL`** - Required for correct presigned URLs behind proxies
4. **Enable HTTPS** - Use a reverse proxy (nginx, Cloudflare) with TLS termination
5. **Review rate limits** - Adjust `RATE_LIMIT_DEFAULT` based on your needs
6. **Secure master keys** - Back up `ENCRYPTION_MASTER_KEY_PATH` if using encryption
7. **Use `--prod` flag** - Runs with Waitress instead of the Flask dev server
### Proxy Configuration

If running behind a reverse proxy (e.g., Nginx, Cloudflare, or a tunnel), ensure the proxy sets the standard forwarding headers:
- `X-Forwarded-Host`
- `X-Forwarded-Proto`

The application automatically trusts these headers to generate correct presigned URLs (e.g., `https://s3.example.com/...` instead of `http://127.0.0.1:5000/...`). Alternatively, you can explicitly set `API_BASE_URL` to your public endpoint.
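The effect of those headers can be sketched as a small pure function. This is an illustration of the precedence described above (explicit `API_BASE_URL` wins, then forwarding headers, then the local default), not the app's actual implementation:

```python
from typing import Optional

def effective_base_url(headers: dict, api_base_url: Optional[str] = None) -> str:
    # Illustrative only: an explicit API_BASE_URL takes precedence;
    # otherwise the public URL is rebuilt from the forwarding headers.
    if api_base_url:
        return api_base_url
    host = headers.get("X-Forwarded-Host")
    proto = headers.get("X-Forwarded-Proto", "http")
    return f"{proto}://{host}" if host else "http://127.0.0.1:5000"

print(effective_base_url({"X-Forwarded-Host": "s3.example.com",
                          "X-Forwarded-Proto": "https"}))  # https://s3.example.com
```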
## 4. Upgrading and Updates

### Version Checking

The application version is tracked in `app/version.py` and exposed via:
- **Health endpoint:** `GET /healthz` returns JSON with a `version` field
- **Metrics dashboard:** Navigate to `/ui/metrics` to see the running version in the System Status card

To check your current version:

```bash
# API health endpoint
curl http://localhost:5000/healthz

# Or inspect version.py directly
grep APP_VERSION app/version.py
```
### Pre-Update Backup Procedures

**Always back up before upgrading to prevent data loss:**

```bash
# 1. Stop the application
# Ctrl+C if running in terminal, or:
docker stop myfsio  # if using Docker

# 2. Capture one timestamp so every step writes to the same backup dir
BACKUP_DIR="backups/$(date +%Y%m%d_%H%M%S)"
mkdir -p "$BACKUP_DIR"

# 3. Backup configuration files (CRITICAL)
cp -r data/.myfsio.sys/config "$BACKUP_DIR/"

# 4. Backup all data (optional but recommended)
tar -czf "$BACKUP_DIR/data.tar.gz" data/

# 5. Backup logs for audit trail
cp -r logs "$BACKUP_DIR/"
```

**Windows PowerShell:**

```powershell
# Create timestamped backup
$timestamp = Get-Date -Format "yyyyMMdd_HHmmss"
New-Item -ItemType Directory -Path "backups\$timestamp" -Force

# Backup configs
Copy-Item -Recurse "data\.myfsio.sys\config" "backups\$timestamp\"

# Backup entire data directory
Compress-Archive -Path "data\" -DestinationPath "backups\data_$timestamp.zip"
```

**Critical files to backup:**
- `data/.myfsio.sys/config/iam.json` – User accounts and access keys
- `data/.myfsio.sys/config/bucket_policies.json` – Bucket access policies
- `data/.myfsio.sys/config/kms_keys.json` – Encryption keys (if using KMS)
- `data/.myfsio.sys/config/secret_store.json` – Application secrets
### Update Procedures

#### Source Installation Updates

```bash
# 1. Backup (see above)
# 2. Pull latest code
git fetch origin
git checkout main  # or your target branch/tag
git pull

# 3. Check for dependency changes
pip install -r requirements.txt

# 4. Review CHANGELOG/release notes for breaking changes
cat CHANGELOG.md  # if available

# 5. Run migration scripts (if any)
# python scripts/migrate_vX_to_vY.py  # example

# 6. Restart application
python run.py
```

#### Docker Updates

```bash
# 1. Backup (see above)
# 2. Pull/rebuild image
docker pull yourregistry/myfsio:latest
# OR rebuild from source:
docker build -t myfsio:latest .

# 3. Stop and remove old container
docker stop myfsio
docker rm myfsio

# 4. Start new container with same volumes
docker run -d \
  --name myfsio \
  -p 5000:5000 -p 5100:5100 \
  -v "$(pwd)/data:/app/data" \
  -v "$(pwd)/logs:/app/logs" \
  -e SECRET_KEY="your-secret" \
  myfsio:latest

# 5. Verify health
curl http://localhost:5000/healthz
```
### Version Compatibility Checks

Before upgrading across major versions, verify compatibility:

| From Version | To Version | Breaking Changes | Migration Required |
| --- | --- | --- | --- |
| 0.1.x | 0.2.x | None expected | No |
| 0.1.6 | 0.1.7 | None | No |
| < 0.1.0 | >= 0.1.0 | New IAM config format | Yes - run migration script |

**Automatic compatibility detection:**

The application will log warnings on startup if config files need migration:

```
WARNING: IAM config format is outdated (v1). Please run: python scripts/migrate_iam.py
```

**Manual compatibility check:**

```bash
# Compare version schemas
python -c "from app.version import APP_VERSION; print(f'Running: {APP_VERSION}')"
python scripts/check_compatibility.py data/.myfsio.sys/config/
```
### Migration Steps for Breaking Changes

When release notes indicate breaking changes, follow these steps:

#### Config Format Migrations

```bash
# 1. Backup first (critical!)
cp data/.myfsio.sys/config/iam.json data/.myfsio.sys/config/iam.json.backup

# 2. Run provided migration script
python scripts/migrate_iam_v1_to_v2.py

# 3. Validate migration
python scripts/validate_config.py

# 4. Test with read-only mode first (if available)
# python run.py --read-only

# 5. Restart normally
python run.py
```

#### Database/Storage Schema Changes

If the object metadata format changes:

```bash
# 1. Run storage migration script
python scripts/migrate_storage.py --dry-run  # preview changes

# 2. Apply migration
python scripts/migrate_storage.py --apply

# 3. Verify integrity
python scripts/verify_storage.py
```

#### IAM Policy Updates

If IAM action names change (e.g., `s3:Get` → `s3:GetObject`):

```bash
# Migration script will update all policies
python scripts/migrate_policies.py \
  --input data/.myfsio.sys/config/iam.json \
  --backup data/.myfsio.sys/config/iam.json.v1

# Review changes before committing
python scripts/diff_policies.py \
  data/.myfsio.sys/config/iam.json.v1 \
  data/.myfsio.sys/config/iam.json
```
### Rollback Procedures

If an update causes issues, roll back to the previous version:

#### Quick Rollback (Source)

```bash
# 1. Stop application
# Ctrl+C or kill process

# 2. Revert code
git checkout <previous-version-tag>
# OR
git reset --hard HEAD~1

# 3. Restore configs from backup
cp backups/20241213_103000/config/* data/.myfsio.sys/config/

# 4. Downgrade dependencies if needed
pip install -r requirements.txt

# 5. Restart
python run.py
```

#### Docker Rollback

```bash
# 1. Stop current container
docker stop myfsio
docker rm myfsio

# 2. Start previous version
docker run -d \
  --name myfsio \
  -p 5000:5000 -p 5100:5100 \
  -v "$(pwd)/data:/app/data" \
  -v "$(pwd)/logs:/app/logs" \
  -e SECRET_KEY="your-secret" \
  myfsio:0.1.3  # specify previous version tag

# 3. Verify
curl http://localhost:5000/healthz
```

#### Emergency Config Restore

If only the config is corrupted but the code is fine:

```bash
# Stop app
# Restore from latest backup
cp backups/20241213_103000/config/iam.json data/.myfsio.sys/config/
cp backups/20241213_103000/config/bucket_policies.json data/.myfsio.sys/config/

# Restart app
python run.py
```
### Blue-Green Deployment (Zero Downtime)

For production environments requiring zero downtime:

```bash
# 1. Run new version on different ports (e.g., 5001/5101)
APP_PORT=5001 UI_PORT=5101 python run.py &

# 2. Health check new instance
curl http://localhost:5001/healthz

# 3. Update load balancer to route to new ports

# 4. Monitor for issues

# 5. Gracefully stop old instance
kill -SIGTERM <old-pid>
```
### Post-Update Verification

After any update, verify functionality:

```bash
# 1. Health check
curl http://localhost:5000/healthz

# 2. Login to UI
open http://localhost:5100/ui

# 3. Test IAM authentication
curl -H "X-Access-Key: <your-access-key>" -H "X-Secret-Key: <your-secret>" \
     http://localhost:5000/

# 4. Test presigned URL generation
# Via UI or API

# 5. Check logs for errors
tail -n 100 logs/app.log
```
### Automated Update Scripts

Create a custom update script for your environment:

```bash
#!/bin/bash
# update.sh - Automated update with rollback capability

set -e  # Exit on error

VERSION_NEW="$1"
BACKUP_DIR="backups/$(date +%Y%m%d_%H%M%S)"

echo "Creating backup..."
mkdir -p "$BACKUP_DIR"
cp -r data/.myfsio.sys/config "$BACKUP_DIR/"

echo "Updating to version $VERSION_NEW..."
git fetch origin
git checkout "v$VERSION_NEW"
pip install -r requirements.txt

echo "Starting application..."
python run.py &
APP_PID=$!

# Wait and health check
sleep 5
if curl -f http://localhost:5000/healthz; then
    echo "Update successful!"
else
    echo "Health check failed, rolling back..."
    kill $APP_PID
    git checkout -
    cp -r "$BACKUP_DIR"/config/* data/.myfsio.sys/config/
    python run.py &
    exit 1
fi
```
## 4. Authentication & IAM

@@ -94,6 +565,46 @@ Set env vars (or pass overrides to `create_app`) to point the servers at custom

The API expects every request to include `X-Access-Key` and `X-Secret-Key` headers. The UI persists them in the Flask session after login.
### Available IAM Actions

| Action | Description | AWS Aliases |
| --- | --- | --- |
| `list` | List buckets and objects | `s3:ListBucket`, `s3:ListAllMyBuckets`, `s3:ListBucketVersions`, `s3:ListMultipartUploads`, `s3:ListParts` |
| `read` | Download objects | `s3:GetObject`, `s3:GetObjectVersion`, `s3:GetObjectTagging`, `s3:HeadObject`, `s3:HeadBucket` |
| `write` | Upload objects, create buckets | `s3:PutObject`, `s3:CreateBucket`, `s3:CreateMultipartUpload`, `s3:UploadPart`, `s3:CompleteMultipartUpload`, `s3:AbortMultipartUpload`, `s3:CopyObject` |
| `delete` | Remove objects and buckets | `s3:DeleteObject`, `s3:DeleteObjectVersion`, `s3:DeleteBucket` |
| `share` | Manage ACLs | `s3:PutObjectAcl`, `s3:PutBucketAcl`, `s3:GetBucketAcl` |
| `policy` | Manage bucket policies | `s3:PutBucketPolicy`, `s3:GetBucketPolicy`, `s3:DeleteBucketPolicy` |
| `replication` | Configure and manage replication | `s3:GetReplicationConfiguration`, `s3:PutReplicationConfiguration`, `s3:ReplicateObject`, `s3:ReplicateTags`, `s3:ReplicateDelete` |
| `iam:list_users` | View IAM users | `iam:ListUsers` |
| `iam:create_user` | Create IAM users | `iam:CreateUser` |
| `iam:delete_user` | Delete IAM users | `iam:DeleteUser` |
| `iam:rotate_key` | Rotate user secrets | `iam:RotateAccessKey` |
| `iam:update_policy` | Modify user policies | `iam:PutUserPolicy` |
| `iam:*` | All IAM actions (admin wildcard) | — |
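
The alias column suggests a simple lookup from AWS-style action names to MyFSIO's coarse-grained actions. A hand-built sketch of that mapping (the dictionary below covers only a few aliases and is illustrative, not the server's actual table):

```python
# Illustrative alias lookup based on the table above; not MyFSIO's real code.
AWS_ALIAS_TO_ACTION = {
    "s3:ListBucket": "list",
    "s3:GetObject": "read",
    "s3:PutObject": "write",
    "s3:DeleteObject": "delete",
    "s3:PutBucketPolicy": "policy",
    "iam:CreateUser": "iam:create_user",
}

def internal_action(aws_name: str) -> str:
    """Map an AWS-style action name to the internal action, passing through unknowns."""
    return AWS_ALIAS_TO_ACTION.get(aws_name, aws_name)

print(internal_action("s3:GetObject"))  # read
```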

### Example Policies

**Full Control (admin):**
```json
[{"bucket": "*", "actions": ["list", "read", "write", "delete", "share", "policy", "replication", "iam:*"]}]
```

**Read-Only:**
```json
[{"bucket": "*", "actions": ["list", "read"]}]
```

**Single Bucket Access (no listing other buckets):**
```json
[{"bucket": "user-bucket", "actions": ["read", "write", "delete"]}]
```

**Bucket Access with Replication:**
```json
[{"bucket": "my-bucket", "actions": ["list", "read", "write", "delete", "replication"]}]
```
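
As a rough illustration of how such inline policies could be evaluated, the sketch below checks whether any statement grants an action on a bucket. The function name, glob matching, and `iam:*` handling are assumptions for illustration, not MyFSIO's actual evaluator:

```python
# Hypothetical policy check for inline policies shaped like the examples above.
from fnmatch import fnmatch

def is_allowed(policy: list[dict], bucket: str, action: str) -> bool:
    """Return True if any statement grants `action` on `bucket`."""
    for stmt in policy:
        if not fnmatch(bucket, stmt["bucket"]):
            continue  # statement targets a different bucket pattern
        actions = stmt["actions"]
        if action in actions:
            return True
        # assume "iam:*" acts as a wildcard for all iam: actions
        if action.startswith("iam:") and "iam:*" in actions:
            return True
    return False

read_only = [{"bucket": "*", "actions": ["list", "read"]}]
print(is_allowed(read_only, "logs", "read"))    # True
print(is_allowed(read_only, "logs", "delete"))  # False
```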

## 5. Bucket Policies & Presets

- **Storage**: Policies are persisted in `data/.myfsio.sys/config/bucket_policies.json` under `{"policies": {"bucket": {...}}}`.

The UI will reflect this change as soon as the request completes thanks to the hot reload.

### UI Object Browser

The bucket detail page includes a powerful object browser with the following features:

#### Folder Navigation

Objects with forward slashes (`/`) in their keys are displayed as a folder hierarchy. Click a folder row to navigate into it. A breadcrumb navigation bar shows your current path and allows quick navigation back to parent folders or the root.
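
The folder view can be understood as delimiter-style grouping on `/` relative to the current prefix. A self-contained sketch of that grouping (helper names are illustrative, not the UI's actual code):

```python
# Sketch of grouping flat object keys into folders and leaf objects on "/".
def browse(keys: list[str], prefix: str = "") -> tuple[set[str], list[str]]:
    """Split keys under `prefix` into (sub-folders, objects at this level)."""
    folders, objects = set(), []
    for key in keys:
        if not key.startswith(prefix):
            continue  # key lives outside the current folder
        rest = key[len(prefix):]
        if "/" in rest:
            # first path segment after the prefix becomes a folder entry
            folders.add(prefix + rest.split("/", 1)[0] + "/")
        else:
            objects.append(key)
    return folders, objects

keys = ["a.txt", "photos/cat.jpg", "photos/2024/dog.jpg"]
print(browse(keys))             # ({'photos/'}, ['a.txt'])
print(browse(keys, "photos/"))  # ({'photos/2024/'}, ['photos/cat.jpg'])
```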

#### Pagination & Infinite Scroll

- Objects load in configurable batches (50, 100, 150, 200, or 250 per page)
- Scroll to the bottom to automatically load more objects (infinite scroll)
- A **Load more** button is available as a fallback for touch devices or when infinite scroll doesn't trigger
- The footer shows the current load status (e.g., "Showing 100 of 500 objects")

#### Bulk Operations

- Select multiple objects using checkboxes
- **Bulk Delete**: Delete multiple objects at once
- **Bulk Download**: Download selected objects as individual files

#### Search & Filter

Use the search box to filter objects by name in real-time. The filter applies to the currently loaded objects.

#### Error Handling

If object loading fails (e.g., network error), a friendly error message is displayed with a **Retry** button to attempt loading again.

#### Object Preview

Click any object row to view its details in the preview sidebar:

- File size and last modified date
- ETag (content hash)
- Custom metadata (if present)
- Download and presign (share link) buttons
- Version history (when versioning is enabled)

#### Drag & Drop Upload

Drag files directly onto the objects table to upload them to the current bucket and folder path.

## 6. Presigned URLs

- Trigger from the UI using the **Presign** button after selecting an object.

## 7. Encryption

MyFSIO supports **server-side encryption at rest** to protect your data. When enabled, objects are encrypted using AES-256-GCM before being written to disk.

### Encryption Types

| Type | Description |
|------|-------------|
| **AES-256 (SSE-S3)** | Server-managed encryption using a local master key |
| **KMS (SSE-KMS)** | Encryption using customer-managed keys via the built-in KMS |

### Enabling Encryption

#### 1. Set Environment Variables

```powershell
# PowerShell
$env:ENCRYPTION_ENABLED = "true"
$env:KMS_ENABLED = "true"  # Optional, for KMS key management
python run.py
```

```bash
# Bash
export ENCRYPTION_ENABLED=true
export KMS_ENABLED=true
python run.py
```

#### 2. Configure Bucket Default Encryption (UI)

1. Navigate to your bucket in the UI
2. Click the **Properties** tab
3. Find the **Default Encryption** card
4. Click **Enable Encryption**
5. Choose algorithm:
   - **AES-256**: Uses the server's master key
   - **aws:kms**: Uses a KMS-managed key (select from dropdown)
6. Save changes

Once enabled, all **new objects** uploaded to the bucket will be automatically encrypted.

### KMS Key Management

When `KMS_ENABLED=true`, you can manage encryption keys via the KMS API:

```bash
# Create a new KMS key
curl -X POST http://localhost:5000/kms/keys \
  -H "Content-Type: application/json" \
  -H "X-Access-Key: ..." -H "X-Secret-Key: ..." \
  -d '{"alias": "my-key", "description": "Production encryption key"}'

# List all keys
curl http://localhost:5000/kms/keys \
  -H "X-Access-Key: ..." -H "X-Secret-Key: ..."

# Get key details
curl http://localhost:5000/kms/keys/{key-id} \
  -H "X-Access-Key: ..." -H "X-Secret-Key: ..."

# Rotate a key (creates new key material)
curl -X POST http://localhost:5000/kms/keys/{key-id}/rotate \
  -H "X-Access-Key: ..." -H "X-Secret-Key: ..."

# Disable/Enable a key
curl -X POST http://localhost:5000/kms/keys/{key-id}/disable \
  -H "X-Access-Key: ..." -H "X-Secret-Key: ..."

curl -X POST http://localhost:5000/kms/keys/{key-id}/enable \
  -H "X-Access-Key: ..." -H "X-Secret-Key: ..."

# Schedule key deletion (30-day waiting period)
curl -X DELETE "http://localhost:5000/kms/keys/{key-id}?waiting_period_days=30" \
  -H "X-Access-Key: ..." -H "X-Secret-Key: ..."
```

### How It Works

1. **Envelope Encryption**: Each object is encrypted with a unique Data Encryption Key (DEK)
2. **Key Wrapping**: The DEK is encrypted (wrapped) by the master key or KMS key
3. **Storage**: The encrypted DEK is stored alongside the encrypted object
4. **Decryption**: On read, the DEK is unwrapped and used to decrypt the object
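
The envelope pattern can be demonstrated end to end with a toy cipher. The sketch below substitutes a SHA-256 XOR keystream for AES-256-GCM purely so the example is self-contained; it is not a secure cipher and not MyFSIO's implementation:

```python
# Toy illustration of envelope encryption: a fresh DEK per object, wrapped by a
# master key. The XOR keystream stands in for AES-256-GCM; do NOT use for real data.
import hashlib
import secrets

def _keystream_xor(key: bytes, data: bytes) -> bytes:
    """XOR `data` with a SHA-256 counter keystream derived from `key` (symmetric)."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

def encrypt_object(master_key: bytes, plaintext: bytes) -> tuple[bytes, bytes]:
    dek = secrets.token_bytes(32)                  # step 1: unique DEK per object
    wrapped_dek = _keystream_xor(master_key, dek)  # step 2: wrap DEK with master key
    return wrapped_dek, _keystream_xor(dek, plaintext)  # step 3: store both together

def decrypt_object(master_key: bytes, wrapped_dek: bytes, ciphertext: bytes) -> bytes:
    dek = _keystream_xor(master_key, wrapped_dek)  # step 4: unwrap, then decrypt
    return _keystream_xor(dek, ciphertext)

master = secrets.token_bytes(32)
wrapped, blob = encrypt_object(master, b"hello")
assert decrypt_object(master, wrapped, blob) == b"hello"
```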

### Client-Side Encryption

For additional security, you can use client-side encryption. The `ClientEncryptionHelper` class provides utilities:

```python
from app.encryption import ClientEncryptionHelper

# Generate a client-side key
key = ClientEncryptionHelper.generate_key()
key_b64 = ClientEncryptionHelper.key_to_base64(key)

# Encrypt before upload
plaintext = b"sensitive data"
encrypted, metadata = ClientEncryptionHelper.encrypt_for_upload(plaintext, key)

# Upload with metadata headers
# x-amz-meta-x-amz-key: <wrapped-key>
# x-amz-meta-x-amz-iv: <iv>
# x-amz-meta-x-amz-matdesc: <material-description>

# Decrypt after download
decrypted = ClientEncryptionHelper.decrypt_from_download(encrypted, metadata, key)
```

### Important Notes

- **Existing objects are NOT encrypted** - Only new uploads after enabling encryption are encrypted
- **Master key security** - The master key file (`master.key`) should be backed up securely and protected
- **Key rotation** - Rotating a KMS key creates new key material; existing objects remain encrypted with the old material
- **Disabled keys** - Objects encrypted with a disabled key cannot be decrypted until the key is re-enabled
- **Deleted keys** - Once a key is deleted (after the waiting period), objects encrypted with it are permanently inaccessible

### Verifying Encryption

To verify an object is encrypted:

1. Check the raw file in `data/<bucket>/` - it should be unreadable binary
2. Look for `.meta` files containing encryption metadata
3. Download via the API/UI - the object should be automatically decrypted

## 8. Bucket Quotas

MyFSIO supports **storage quotas** to limit how much data a bucket can hold. Quotas are enforced on uploads and multipart completions.

### Quota Types

| Limit | Description |
|-------|-------------|
| **Max Size (MB)** | Maximum total storage in megabytes (includes current objects + archived versions) |
| **Max Objects** | Maximum number of objects (includes current objects + archived versions) |

### Managing Quotas (Admin Only)

Quota management is restricted to administrators (users with `iam:*` or `iam:list_users` permissions).

#### Via UI

1. Navigate to your bucket in the UI
2. Click the **Properties** tab
3. Find the **Storage Quota** card
4. Enter limits:
   - **Max Size (MB)**: Leave empty for unlimited
   - **Max Objects**: Leave empty for unlimited
5. Click **Update Quota**

To remove a quota, click **Remove Quota**.

#### Via API

```bash
# Set quota (max 100MB, max 1000 objects)
curl -X PUT "http://localhost:5000/bucket/<bucket>?quota" \
  -H "Content-Type: application/json" \
  -H "X-Access-Key: ..." -H "X-Secret-Key: ..." \
  -d '{"max_bytes": 104857600, "max_objects": 1000}'

# Get current quota
curl "http://localhost:5000/bucket/<bucket>?quota" \
  -H "X-Access-Key: ..." -H "X-Secret-Key: ..."

# Remove quota
curl -X PUT "http://localhost:5000/bucket/<bucket>?quota" \
  -H "Content-Type: application/json" \
  -H "X-Access-Key: ..." -H "X-Secret-Key: ..." \
  -d '{"max_bytes": null, "max_objects": null}'
```

### Quota Behavior

- **Version Counting**: When versioning is enabled, archived versions count toward the quota
- **Enforcement Points**: Quotas are checked during `PUT` object and `CompleteMultipartUpload` operations
- **Error Response**: When quota is exceeded, the API returns `HTTP 400` with error code `QuotaExceeded`
- **Visibility**: All users can view quota usage in the bucket detail page, but only admins can modify quotas

### Example Error

```xml
<Error>
  <Code>QuotaExceeded</Code>
  <Message>Bucket quota exceeded: storage limit reached</Message>
  <BucketName>my-bucket</BucketName>
</Error>
```

## 9. Site Replication

### Permission Model

Replication uses a two-tier permission system:

| Role | Capabilities |
|------|--------------|
| **Admin** (users with `iam:*` permissions) | Create/delete replication rules, configure connections and target buckets |
| **Users** (with `replication` permission) | Enable/disable (pause/resume) existing replication rules |

> **Note:** The Replication tab is hidden for users without the `replication` permission on the bucket.

This separation allows administrators to pre-configure where data should replicate, while allowing authorized users to toggle replication on/off without accessing connection credentials.

### Architecture

Now, configure the primary instance to replicate to the target.

- **Secret Key**: The secret you generated on the Target.
- Click **Add Connection**.

3. **Enable Replication** (Admin):
   - Navigate to **Buckets** and select the source bucket.
   - Switch to the **Replication** tab.
   - Select the `Secondary Site` connection.
   - Enter the target bucket name (`backup-bucket`).
   - Click **Enable Replication**.

Once configured, users with `replication` permission on this bucket can pause/resume replication without needing access to connection details.

### Verification

1. Upload a file to the source bucket.

```bash
aws --endpoint-url http://target-server:5002 s3 ls s3://backup-bucket
```

### Pausing and Resuming Replication

Users with the `replication` permission (but not admin rights) can pause and resume existing replication rules:

1. Navigate to the bucket's **Replication** tab.
2. If replication is **Active**, click **Pause Replication** to temporarily stop syncing.
3. If replication is **Paused**, click **Resume Replication** to continue syncing.

When paused, new objects uploaded to the source will not replicate until replication is resumed. Objects uploaded while paused will be replicated once resumed.
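
These semantics can be modeled as a pause flag plus a catch-up pass on resume. The queue-based sketch below is a toy model of the described behavior, not the actual sync mechanism:

```python
# Toy model: uploads during a pause are remembered and replicated on resume.
class ReplicationRule:
    def __init__(self) -> None:
        self.active = True
        self.pending: list[str] = []
        self.replicated: list[str] = []

    def on_upload(self, key: str) -> None:
        if self.active:
            self.replicated.append(key)   # replicate immediately
        else:
            self.pending.append(key)      # remember for later

    def pause(self) -> None:
        self.active = False

    def resume(self) -> None:
        self.active = True
        self.replicated.extend(self.pending)  # catch up on objects uploaded while paused
        self.pending.clear()

rule = ReplicationRule()
rule.on_upload("a.txt")
rule.pause()
rule.on_upload("b.txt")
rule.resume()
print(rule.replicated)  # ['a.txt', 'b.txt']
```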

> **Note:** Only admins can create new replication rules, change the target connection/bucket, or delete rules entirely.

### Bidirectional Replication (Active-Active)

To set up two-way replication (Server A ↔ Server B):

1. Follow the steps above to replicate **A → B**.
2. Repeat the process on Server B to replicate **B → A**:
   - Create a connection on Server B pointing to Server A.
   - Enable replication on the target bucket on Server B.

**Loop Prevention**: The system automatically detects replication traffic using a custom User-Agent (`S3ReplicationAgent`). This prevents infinite loops where an object replicated from A to B is immediately replicated back to A.
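
Loop prevention reduces to a header check on incoming writes: traffic carrying the replication agent's User-Agent is not re-replicated. A sketch under that assumption (the function name is illustrative):

```python
# Sketch of User-Agent-based loop prevention for bidirectional replication.
REPLICATION_USER_AGENT = "S3ReplicationAgent"

def should_replicate(request_headers: dict[str, str]) -> bool:
    """Skip replication for writes that were themselves produced by replication."""
    ua = request_headers.get("User-Agent", "")
    return REPLICATION_USER_AGENT not in ua

print(should_replicate({"User-Agent": "aws-cli/2.15"}))            # True
print(should_replicate({"User-Agent": "S3ReplicationAgent/1.0"}))  # False
```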

**Deletes**: Deleting an object on one server will propagate the deletion to the other server.

**Note**: Deleting a bucket will automatically remove its associated replication configuration.

## 11. Running Tests

```bash
pytest -q
```

The suite now includes a boto3 integration test that spins up a live HTTP server.

The suite covers bucket CRUD, presigned downloads, bucket policy enforcement, and regression tests for anonymous reads when a Public policy is attached.

## 12. Troubleshooting

| Symptom | Likely Cause | Fix |
| --- | --- | --- |
| Presign modal errors with 403 | IAM user lacks `read/write/delete` for target bucket or bucket policy denies | Update IAM inline policies or remove conflicting deny statements. |
| Large upload rejected immediately | File exceeds `MAX_UPLOAD_SIZE` | Increase env var or shrink object. |

## 13. API Matrix

```
GET /                           # List buckets
POST /presign/<bucket>/<key>    # Generate SigV4 URL
GET /bucket-policy/<bucket>     # Fetch policy
PUT /bucket-policy/<bucket>     # Upsert policy
DELETE /bucket-policy/<bucket>  # Delete policy
GET /<bucket>?quota             # Get bucket quota
PUT /<bucket>?quota             # Set bucket quota (admin only)
```

**pytest.ini** (new file):

```ini
[pytest]
testpaths = tests
norecursedirs = data .git __pycache__ .venv
markers =
    integration: marks tests as integration tests (may require external services)
```

**requirements.txt** (updated):

```
Flask>=3.1.2
Flask-Limiter>=4.1.1
Flask-Cors>=6.0.2
Flask-WTF>=1.2.2
pytest>=9.0.2
requests>=2.32.5
boto3>=1.42.14
waitress>=3.0.2
psutil>=7.1.3
cryptography>=46.0.3
```

**run.py** (updated, excerpt):

```python
from __future__ import annotations

import argparse
import os
import sys
import warnings
from multiprocessing import Process

from app import create_api_app, create_ui_app
from app.config import AppConfig


def _server_host() -> str:
    ...


def _is_debug_enabled() -> bool:
    return os.getenv("FLASK_DEBUG", "0").lower() in ("1", "true", "yes")


def _is_frozen() -> bool:
    """Check if running as a compiled binary (PyInstaller/Nuitka)."""
    return getattr(sys, 'frozen', False) or '__compiled__' in globals()


def serve_api(port: int, prod: bool = False) -> None:
    app = create_api_app()
    if prod:
        from waitress import serve
        serve(app, host=_server_host(), port=port, ident="MyFSIO")
    else:
        debug = _is_debug_enabled()
        if debug:
            warnings.warn("DEBUG MODE ENABLED - DO NOT USE IN PRODUCTION", RuntimeWarning)
        app.run(host=_server_host(), port=port, debug=debug)


def serve_ui(port: int, prod: bool = False) -> None:
    app = create_ui_app()
    if prod:
        from waitress import serve
        serve(app, host=_server_host(), port=port, ident="MyFSIO")
    else:
        debug = _is_debug_enabled()
        if debug:
            warnings.warn("DEBUG MODE ENABLED - DO NOT USE IN PRODUCTION", RuntimeWarning)
        app.run(host=_server_host(), port=port, debug=debug)


if __name__ == "__main__":
    # ...
    parser.add_argument("--mode", choices=["api", "ui", "both"], default="both")
    parser.add_argument("--api-port", type=int, default=5000)
    parser.add_argument("--ui-port", type=int, default=5100)
    parser.add_argument("--prod", action="store_true", help="Run in production mode using Waitress")
    parser.add_argument("--dev", action="store_true", help="Force development mode (Flask dev server)")
    parser.add_argument("--check-config", action="store_true", help="Validate configuration and exit")
    parser.add_argument("--show-config", action="store_true", help="Show configuration summary and exit")
    args = parser.parse_args()

    # Handle config check/show modes
    if args.check_config or args.show_config:
        config = AppConfig.from_env()
        config.print_startup_summary()
        if args.check_config:
            issues = config.validate_and_report()
            critical = [i for i in issues if i.startswith("CRITICAL:")]
            sys.exit(1 if critical else 0)
        sys.exit(0)

    # Default to production mode when running as compiled binary
    # unless --dev is explicitly passed
    prod_mode = args.prod or (_is_frozen() and not args.dev)

    # Validate configuration before starting
    config = AppConfig.from_env()

    # Show startup summary only on first run (when marker file doesn't exist)
    first_run_marker = config.storage_root / ".myfsio.sys" / ".initialized"
    is_first_run = not first_run_marker.exists()

    if is_first_run:
        config.print_startup_summary()

    # Check for critical issues that should prevent startup
    issues = config.validate_and_report()
    critical_issues = [i for i in issues if i.startswith("CRITICAL:")]
    if critical_issues:
        print("ABORTING: Critical configuration issues detected. Fix them before starting.")
        sys.exit(1)

    # Create the marker file to indicate successful first run
    try:
        first_run_marker.parent.mkdir(parents=True, exist_ok=True)
        first_run_marker.write_text(f"Initialized on {__import__('datetime').datetime.now().isoformat()}\n")
    except OSError:
        pass  # Non-critical, just skip marker creation

    if prod_mode:
        print("Running in production mode (Waitress)")
    else:
        print("Running in development mode (Flask dev server)")

    if args.mode in {"api", "both"}:
        print(f"Starting API server on port {args.api_port}...")
        api_proc = Process(target=serve_api, args=(args.api_port, prod_mode), daemon=True)
        api_proc.start()
    else:
        api_proc = None

    if args.mode in {"ui", "both"}:
        print(f"Starting UI server on port {args.ui_port}...")
        serve_ui(args.ui_port, prod_mode)
    elif api_proc:
        try:
            api_proc.join()
        # ...
```

**scripts/install.sh** (new file, excerpt):

```bash
#!/bin/bash
#
# MyFSIO Installation Script
# This script sets up MyFSIO for production use on Linux systems.
#
# Usage:
#   ./install.sh [OPTIONS]
#
# Options:
#   --install-dir DIR   Installation directory (default: /opt/myfsio)
#   --data-dir DIR      Data directory (default: /var/lib/myfsio)
#   --log-dir DIR       Log directory (default: /var/log/myfsio)
#   --user USER         System user to run as (default: myfsio)
#   --port PORT         API port (default: 5000)
#   --ui-port PORT      UI port (default: 5100)
#   --api-url URL       Public API URL (for presigned URLs behind proxy)
#   --no-systemd        Skip systemd service creation
#   --binary PATH       Path to myfsio binary (or place 'myfsio' in the current directory)
#   -y, --yes           Skip confirmation prompts
#

set -e

INSTALL_DIR="/opt/myfsio"
DATA_DIR="/var/lib/myfsio"
LOG_DIR="/var/log/myfsio"
SERVICE_USER="myfsio"
API_PORT="5000"
UI_PORT="5100"
API_URL=""
SKIP_SYSTEMD=false
BINARY_PATH=""
AUTO_YES=false

while [[ $# -gt 0 ]]; do
    case $1 in
        --install-dir)
            INSTALL_DIR="$2"
            shift 2
            ;;
        --data-dir)
            DATA_DIR="$2"
            shift 2
            ;;
        --log-dir)
            LOG_DIR="$2"
            shift 2
            ;;
        --user)
            SERVICE_USER="$2"
            shift 2
            ;;
        --port)
            API_PORT="$2"
            shift 2
            ;;
        --ui-port)
            UI_PORT="$2"
            shift 2
            ;;
        --api-url)
            API_URL="$2"
            shift 2
            ;;
        --no-systemd)
            SKIP_SYSTEMD=true
            shift
            ;;
        --binary)
            BINARY_PATH="$2"
            shift 2
            ;;
        -y|--yes)
            AUTO_YES=true
            shift
            ;;
        -h|--help)
            head -30 "$0" | tail -25
            exit 0
            ;;
        *)
            echo "Unknown option: $1"
            exit 1
            ;;
    esac
done

echo ""
echo "============================================================"
echo " MyFSIO Installation Script"
echo " S3-Compatible Object Storage"
echo "============================================================"
echo ""
echo "Documentation: https://go.jzwsite.com/myfsio"
echo ""

if [[ $EUID -ne 0 ]]; then
    echo "Error: This script must be run as root (use sudo)"
    exit 1
fi

echo "------------------------------------------------------------"
echo "STEP 1: Review Installation Configuration"
echo "------------------------------------------------------------"
echo ""
echo " Install directory: $INSTALL_DIR"
echo " Data directory: $DATA_DIR"
echo " Log directory: $LOG_DIR"
echo " Service user: $SERVICE_USER"
echo " API port: $API_PORT"
echo " UI port: $UI_PORT"
if [[ -n "$API_URL" ]]; then
    echo " Public API URL: $API_URL"
fi
if [[ -n "$BINARY_PATH" ]]; then
    echo " Binary path: $BINARY_PATH"
fi
echo ""

if [[ "$AUTO_YES" != true ]]; then
    read -p "Do you want to proceed with these settings? [y/N] " -n 1 -r
    echo
    if [[ ! $REPLY =~ ^[Yy]$ ]]; then
        echo "Installation cancelled."
        exit 0
    fi
fi

echo ""
echo "------------------------------------------------------------"
echo "STEP 2: Creating System User"
echo "------------------------------------------------------------"
echo ""
if id "$SERVICE_USER" &>/dev/null; then
    echo " [OK] User '$SERVICE_USER' already exists"
else
    useradd --system --no-create-home --shell /usr/sbin/nologin "$SERVICE_USER"
    echo " [OK] Created user '$SERVICE_USER'"
fi

echo ""
echo "------------------------------------------------------------"
echo "STEP 3: Creating Directories"
echo "------------------------------------------------------------"
echo ""
mkdir -p "$INSTALL_DIR"
echo " [OK] Created $INSTALL_DIR"
mkdir -p "$DATA_DIR"
echo " [OK] Created $DATA_DIR"
mkdir -p "$LOG_DIR"
echo " [OK] Created $LOG_DIR"

echo ""
echo "------------------------------------------------------------"
echo "STEP 4: Installing Binary"
echo "------------------------------------------------------------"
echo ""
if [[ -n "$BINARY_PATH" ]]; then
    if [[ -f "$BINARY_PATH" ]]; then
        cp "$BINARY_PATH" "$INSTALL_DIR/myfsio"
        echo " [OK] Copied binary from $BINARY_PATH"
    else
        echo " [ERROR] Binary not found at $BINARY_PATH"
        exit 1
    fi
elif [[ -f "./myfsio" ]]; then
    cp "./myfsio" "$INSTALL_DIR/myfsio"
    echo " [OK] Copied binary from ./myfsio"
else
    echo " [ERROR] No binary provided."
    echo " Use --binary PATH or place 'myfsio' in current directory"
    exit 1
fi
chmod +x "$INSTALL_DIR/myfsio"
echo " [OK] Set executable permissions"

echo ""
echo "------------------------------------------------------------"
echo "STEP 5: Generating Secret Key"
echo "------------------------------------------------------------"
echo ""
SECRET_KEY=$(openssl rand -base64 32)
echo " [OK] Generated secure SECRET_KEY"

echo ""
echo "------------------------------------------------------------"
echo "STEP 6: Creating Configuration File"
echo "------------------------------------------------------------"
echo ""
cat > "$INSTALL_DIR/myfsio.env" << EOF
# MyFSIO Configuration
# Generated by install.sh on $(date)
```
|
||||||
|
# Documentation: https://go.jzwsite.com/myfsio
|
||||||
|
|
||||||
|
# Storage paths
|
||||||
|
STORAGE_ROOT=$DATA_DIR
|
||||||
|
LOG_DIR=$LOG_DIR
|
||||||
|
|
||||||
|
# Network
|
||||||
|
APP_HOST=0.0.0.0
|
||||||
|
APP_PORT=$API_PORT
|
||||||
|
|
||||||
|
# Security - CHANGE IN PRODUCTION
|
||||||
|
SECRET_KEY=$SECRET_KEY
|
||||||
|
CORS_ORIGINS=*
|
||||||
|
|
||||||
|
# Public URL (set this if behind a reverse proxy)
|
||||||
|
$(if [[ -n "$API_URL" ]]; then echo "API_BASE_URL=$API_URL"; else echo "# API_BASE_URL=https://s3.example.com"; fi)
|
||||||
|
|
||||||
|
# Logging
|
||||||
|
LOG_LEVEL=INFO
|
||||||
|
LOG_TO_FILE=true
|
||||||
|
|
||||||
|
# Rate limiting
|
||||||
|
RATE_LIMIT_DEFAULT=200 per minute
|
||||||
|
|
||||||
|
# Optional: Encryption (uncomment to enable)
|
||||||
|
# ENCRYPTION_ENABLED=true
|
||||||
|
# KMS_ENABLED=true
|
||||||
|
EOF
|
||||||
|
chmod 600 "$INSTALL_DIR/myfsio.env"
|
||||||
|
echo " [OK] Created $INSTALL_DIR/myfsio.env"
|
||||||
|
|
||||||
|
echo ""
|
||||||
|
echo "------------------------------------------------------------"
|
||||||
|
echo "STEP 7: Setting Permissions"
|
||||||
|
echo "------------------------------------------------------------"
|
||||||
|
echo ""
|
||||||
|
chown -R "$SERVICE_USER:$SERVICE_USER" "$INSTALL_DIR"
|
||||||
|
echo " [OK] Set ownership for $INSTALL_DIR"
|
||||||
|
chown -R "$SERVICE_USER:$SERVICE_USER" "$DATA_DIR"
|
||||||
|
echo " [OK] Set ownership for $DATA_DIR"
|
||||||
|
chown -R "$SERVICE_USER:$SERVICE_USER" "$LOG_DIR"
|
||||||
|
echo " [OK] Set ownership for $LOG_DIR"
|
||||||
|
|
||||||
|
if [[ "$SKIP_SYSTEMD" != true ]]; then
|
||||||
|
echo ""
|
||||||
|
echo "------------------------------------------------------------"
|
||||||
|
echo "STEP 8: Creating Systemd Service"
|
||||||
|
echo "------------------------------------------------------------"
|
||||||
|
echo ""
|
||||||
|
cat > /etc/systemd/system/myfsio.service << EOF
|
||||||
|
[Unit]
|
||||||
|
Description=MyFSIO S3-Compatible Storage
|
||||||
|
Documentation=https://go.jzwsite.com/myfsio
|
||||||
|
After=network.target
|
||||||
|
|
||||||
|
[Service]
|
||||||
|
Type=simple
|
||||||
|
User=$SERVICE_USER
|
||||||
|
Group=$SERVICE_USER
|
||||||
|
WorkingDirectory=$INSTALL_DIR
|
||||||
|
EnvironmentFile=$INSTALL_DIR/myfsio.env
|
||||||
|
ExecStart=$INSTALL_DIR/myfsio
|
||||||
|
Restart=on-failure
|
||||||
|
RestartSec=5
|
||||||
|
|
||||||
|
# Security hardening
|
||||||
|
NoNewPrivileges=true
|
||||||
|
ProtectSystem=strict
|
||||||
|
ProtectHome=true
|
||||||
|
ReadWritePaths=$DATA_DIR $LOG_DIR
|
||||||
|
PrivateTmp=true
|
||||||
|
|
||||||
|
# Resource limits (adjust as needed)
|
||||||
|
# LimitNOFILE=65535
|
||||||
|
# MemoryMax=2G
|
||||||
|
|
||||||
|
[Install]
|
||||||
|
WantedBy=multi-user.target
|
||||||
|
EOF
|
||||||
|
|
||||||
|
systemctl daemon-reload
|
||||||
|
echo " [OK] Created /etc/systemd/system/myfsio.service"
|
||||||
|
echo " [OK] Reloaded systemd daemon"
|
||||||
|
else
|
||||||
|
echo ""
|
||||||
|
echo "------------------------------------------------------------"
|
||||||
|
echo "STEP 8: Skipping Systemd Service (--no-systemd flag used)"
|
||||||
|
echo "------------------------------------------------------------"
|
||||||
|
fi
|
||||||
|
|
||||||
|
echo ""
|
||||||
|
echo "============================================================"
|
||||||
|
echo " Installation Complete!"
|
||||||
|
echo "============================================================"
|
||||||
|
echo ""
|
||||||
|
|
||||||
|
if [[ "$SKIP_SYSTEMD" != true ]]; then
|
||||||
|
echo "------------------------------------------------------------"
|
||||||
|
echo "STEP 9: Start the Service"
|
||||||
|
echo "------------------------------------------------------------"
|
||||||
|
echo ""
|
||||||
|
|
||||||
|
if [[ "$AUTO_YES" != true ]]; then
|
||||||
|
read -p "Would you like to start MyFSIO now? [Y/n] " -n 1 -r
|
||||||
|
echo
|
||||||
|
START_SERVICE=true
|
||||||
|
if [[ $REPLY =~ ^[Nn]$ ]]; then
|
||||||
|
START_SERVICE=false
|
||||||
|
fi
|
||||||
|
else
|
||||||
|
START_SERVICE=true
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [[ "$START_SERVICE" == true ]]; then
|
||||||
|
echo " Starting MyFSIO service..."
|
||||||
|
systemctl start myfsio
|
||||||
|
echo " [OK] Service started"
|
||||||
|
echo ""
|
||||||
|
|
||||||
|
read -p "Would you like to enable MyFSIO to start on boot? [Y/n] " -n 1 -r
|
||||||
|
echo
|
||||||
|
if [[ ! $REPLY =~ ^[Nn]$ ]]; then
|
||||||
|
systemctl enable myfsio
|
||||||
|
echo " [OK] Service enabled on boot"
|
||||||
|
fi
|
||||||
|
echo ""
|
||||||
|
|
||||||
|
sleep 2
|
||||||
|
echo " Service Status:"
|
||||||
|
echo " ---------------"
|
||||||
|
if systemctl is-active --quiet myfsio; then
|
||||||
|
echo " [OK] MyFSIO is running"
|
||||||
|
else
|
||||||
|
echo " [WARNING] MyFSIO may not have started correctly"
|
||||||
|
echo " Check logs with: journalctl -u myfsio -f"
|
||||||
|
fi
|
||||||
|
else
|
||||||
|
echo " [SKIPPED] Service not started"
|
||||||
|
echo ""
|
||||||
|
echo " To start manually, run:"
|
||||||
|
echo " sudo systemctl start myfsio"
|
||||||
|
echo ""
|
||||||
|
echo " To enable on boot, run:"
|
||||||
|
echo " sudo systemctl enable myfsio"
|
||||||
|
fi
|
||||||
|
fi
|
||||||
|
|
||||||
|
echo ""
|
||||||
|
echo "============================================================"
|
||||||
|
echo " Summary"
|
||||||
|
echo "============================================================"
|
||||||
|
echo ""
|
||||||
|
echo "Access Points:"
|
||||||
|
echo " API: http://$(hostname -I 2>/dev/null | awk '{print $1}' || echo "localhost"):$API_PORT"
|
||||||
|
echo " UI: http://$(hostname -I 2>/dev/null | awk '{print $1}' || echo "localhost"):$UI_PORT/ui"
|
||||||
|
echo ""
|
||||||
|
echo "Default Credentials:"
|
||||||
|
echo " Username: localadmin"
|
||||||
|
echo " Password: localadmin"
|
||||||
|
echo " [!] WARNING: Change these immediately after first login!"
|
||||||
|
echo ""
|
||||||
|
echo "Configuration Files:"
|
||||||
|
echo " Environment: $INSTALL_DIR/myfsio.env"
|
||||||
|
echo " IAM Users: $DATA_DIR/.myfsio.sys/config/iam.json"
|
||||||
|
echo " Bucket Policies: $DATA_DIR/.myfsio.sys/config/bucket_policies.json"
|
||||||
|
echo ""
|
||||||
|
echo "Useful Commands:"
|
||||||
|
echo " Check status: sudo systemctl status myfsio"
|
||||||
|
echo " View logs: sudo journalctl -u myfsio -f"
|
||||||
|
echo " Restart: sudo systemctl restart myfsio"
|
||||||
|
echo " Stop: sudo systemctl stop myfsio"
|
||||||
|
echo ""
|
||||||
|
echo "Documentation: https://go.jzwsite.com/myfsio"
|
||||||
|
echo ""
|
||||||
|
echo "============================================================"
|
||||||
|
echo " Thank you for installing MyFSIO!"
|
||||||
|
echo "============================================================"
|
||||||
|
echo ""
|
||||||
244 scripts/uninstall.sh Normal file
@@ -0,0 +1,244 @@
#!/bin/bash
#
# MyFSIO Uninstall Script
# This script removes MyFSIO from your system.
#
# Usage:
#   ./uninstall.sh [OPTIONS]
#
# Options:
#   --keep-data        Don't remove data directory
#   --keep-logs        Don't remove log directory
#   --install-dir DIR  Installation directory (default: /opt/myfsio)
#   --data-dir DIR     Data directory (default: /var/lib/myfsio)
#   --log-dir DIR      Log directory (default: /var/log/myfsio)
#   --user USER        System user (default: myfsio)
#   -y, --yes          Skip confirmation prompts
#

set -e

INSTALL_DIR="/opt/myfsio"
DATA_DIR="/var/lib/myfsio"
LOG_DIR="/var/log/myfsio"
SERVICE_USER="myfsio"
KEEP_DATA=false
KEEP_LOGS=false
AUTO_YES=false

while [[ $# -gt 0 ]]; do
    case $1 in
        --keep-data)
            KEEP_DATA=true
            shift
            ;;
        --keep-logs)
            KEEP_LOGS=true
            shift
            ;;
        --install-dir)
            INSTALL_DIR="$2"
            shift 2
            ;;
        --data-dir)
            DATA_DIR="$2"
            shift 2
            ;;
        --log-dir)
            LOG_DIR="$2"
            shift 2
            ;;
        --user)
            SERVICE_USER="$2"
            shift 2
            ;;
        -y|--yes)
            AUTO_YES=true
            shift
            ;;
        -h|--help)
            head -20 "$0" | tail -15
            exit 0
            ;;
        *)
            echo "Unknown option: $1"
            exit 1
            ;;
    esac
done

echo ""
echo "============================================================"
echo " MyFSIO Uninstallation Script"
echo "============================================================"
echo ""
echo "Documentation: https://go.jzwsite.com/myfsio"
echo ""

if [[ $EUID -ne 0 ]]; then
    echo "Error: This script must be run as root (use sudo)"
    exit 1
fi

echo "------------------------------------------------------------"
echo "STEP 1: Review What Will Be Removed"
echo "------------------------------------------------------------"
echo ""
echo "The following items will be removed:"
echo ""
echo " Install directory: $INSTALL_DIR"
if [[ "$KEEP_DATA" != true ]]; then
    echo " Data directory: $DATA_DIR (ALL YOUR DATA WILL BE DELETED!)"
else
    echo " Data directory: $DATA_DIR (WILL BE KEPT)"
fi
if [[ "$KEEP_LOGS" != true ]]; then
    echo " Log directory: $LOG_DIR"
else
    echo " Log directory: $LOG_DIR (WILL BE KEPT)"
fi
echo " Systemd service: /etc/systemd/system/myfsio.service"
echo " System user: $SERVICE_USER"
echo ""

if [[ "$AUTO_YES" != true ]]; then
    echo "WARNING: This action cannot be undone!"
    echo ""
    read -p "Are you sure you want to uninstall MyFSIO? [y/N] " -n 1 -r
    echo
    if [[ ! $REPLY =~ ^[Yy]$ ]]; then
        echo ""
        echo "Uninstallation cancelled."
        exit 0
    fi

    if [[ "$KEEP_DATA" != true ]]; then
        echo ""
        read -p "This will DELETE ALL YOUR DATA. Type 'DELETE' to confirm: " CONFIRM
        if [[ "$CONFIRM" != "DELETE" ]]; then
            echo ""
            echo "Uninstallation cancelled."
            echo "Tip: Use --keep-data to preserve your data directory"
            exit 0
        fi
    fi
fi

echo ""
echo "------------------------------------------------------------"
echo "STEP 2: Stopping Service"
echo "------------------------------------------------------------"
echo ""
if systemctl is-active --quiet myfsio 2>/dev/null; then
    systemctl stop myfsio
    echo " [OK] Stopped myfsio service"
else
    echo " [SKIP] Service not running"
fi

echo ""
echo "------------------------------------------------------------"
echo "STEP 3: Disabling Service"
echo "------------------------------------------------------------"
echo ""
if systemctl is-enabled --quiet myfsio 2>/dev/null; then
    systemctl disable myfsio
    echo " [OK] Disabled myfsio service"
else
    echo " [SKIP] Service not enabled"
fi

echo ""
echo "------------------------------------------------------------"
echo "STEP 4: Removing Systemd Service File"
echo "------------------------------------------------------------"
echo ""
if [[ -f /etc/systemd/system/myfsio.service ]]; then
    rm -f /etc/systemd/system/myfsio.service
    systemctl daemon-reload
    echo " [OK] Removed /etc/systemd/system/myfsio.service"
    echo " [OK] Reloaded systemd daemon"
else
    echo " [SKIP] Service file not found"
fi

echo ""
echo "------------------------------------------------------------"
echo "STEP 5: Removing Installation Directory"
echo "------------------------------------------------------------"
echo ""
if [[ -d "$INSTALL_DIR" ]]; then
    rm -rf "$INSTALL_DIR"
    echo " [OK] Removed $INSTALL_DIR"
else
    echo " [SKIP] Directory not found: $INSTALL_DIR"
fi

echo ""
echo "------------------------------------------------------------"
echo "STEP 6: Removing Data Directory"
echo "------------------------------------------------------------"
echo ""
if [[ "$KEEP_DATA" != true ]]; then
    if [[ -d "$DATA_DIR" ]]; then
        rm -rf "$DATA_DIR"
        echo " [OK] Removed $DATA_DIR"
    else
        echo " [SKIP] Directory not found: $DATA_DIR"
    fi
else
    echo " [KEPT] Data preserved at: $DATA_DIR"
fi

echo ""
echo "------------------------------------------------------------"
echo "STEP 7: Removing Log Directory"
echo "------------------------------------------------------------"
echo ""
if [[ "$KEEP_LOGS" != true ]]; then
    if [[ -d "$LOG_DIR" ]]; then
        rm -rf "$LOG_DIR"
        echo " [OK] Removed $LOG_DIR"
    else
        echo " [SKIP] Directory not found: $LOG_DIR"
    fi
else
    echo " [KEPT] Logs preserved at: $LOG_DIR"
fi

echo ""
echo "------------------------------------------------------------"
echo "STEP 8: Removing System User"
echo "------------------------------------------------------------"
echo ""
if id "$SERVICE_USER" &>/dev/null; then
    userdel "$SERVICE_USER" 2>/dev/null || true
    echo " [OK] Removed user '$SERVICE_USER'"
else
    echo " [SKIP] User not found: $SERVICE_USER"
fi

echo ""
echo "============================================================"
echo " Uninstallation Complete!"
echo "============================================================"
echo ""

if [[ "$KEEP_DATA" == true ]]; then
    echo "Your data has been preserved at: $DATA_DIR"
    echo ""
    echo "To reinstall MyFSIO with existing data, run:"
    echo " curl -fsSL https://go.jzwsite.com/myfsio-install | sudo bash"
    echo ""
fi

if [[ "$KEEP_LOGS" == true ]]; then
    echo "Your logs have been preserved at: $LOG_DIR"
    echo ""
fi

echo "Thank you for using MyFSIO."
echo "Documentation: https://go.jzwsite.com/myfsio"
echo ""
echo "============================================================"
echo ""
2019 static/css/main.css
File diff suppressed because it is too large. Load Diff

Binary file not shown. | Before Width: | Height: | Size: 200 KiB |
Binary file not shown. | Before Width: | Height: | Size: 628 KiB |

BIN static/images/MyFSIO.ico Normal file
Binary file not shown. | After Width: | Height: | Size: 200 KiB |

BIN static/images/MyFSIO.png Normal file
Binary file not shown. | After Width: | Height: | Size: 872 KiB |

192 static/js/bucket-detail-operations.js Normal file
@@ -0,0 +1,192 @@
window.BucketDetailOperations = (function() {
    'use strict';

    let showMessage = function() {};
    let escapeHtml = function(s) { return s; };

    function init(config) {
        showMessage = config.showMessage || showMessage;
        escapeHtml = config.escapeHtml || escapeHtml;
    }

    async function loadLifecycleRules(card, endpoint) {
        if (!card || !endpoint) return;
        const body = card.querySelector('[data-lifecycle-body]');
        if (!body) return;

        try {
            const response = await fetch(endpoint);
            const data = await response.json();

            if (!response.ok) {
                body.innerHTML = `<tr><td colspan="5" class="text-center text-danger py-3">${escapeHtml(data.error || 'Failed to load')}</td></tr>`;
                return;
            }

            const rules = data.rules || [];
            if (rules.length === 0) {
                body.innerHTML = '<tr><td colspan="5" class="text-center text-muted py-3">No lifecycle rules configured</td></tr>';
                return;
            }

            body.innerHTML = rules.map(rule => {
                const actions = [];
                if (rule.expiration_days) actions.push(`Delete after ${rule.expiration_days} days`);
                if (rule.noncurrent_days) actions.push(`Delete old versions after ${rule.noncurrent_days} days`);
                if (rule.abort_mpu_days) actions.push(`Abort incomplete MPU after ${rule.abort_mpu_days} days`);

                return `
                    <tr>
                        <td class="fw-medium">${escapeHtml(rule.id)}</td>
                        <td><code>${escapeHtml(rule.prefix || '(all)')}</code></td>
                        <td>${actions.map(a => `<div class="small">${escapeHtml(a)}</div>`).join('')}</td>
                        <td>
                            <span class="badge ${rule.status === 'Enabled' ? 'text-bg-success' : 'text-bg-secondary'}">${escapeHtml(rule.status)}</span>
                        </td>
                        <td class="text-end">
                            <button class="btn btn-sm btn-outline-danger" onclick="BucketDetailOperations.deleteLifecycleRule('${escapeHtml(rule.id)}')">
                                <svg xmlns="http://www.w3.org/2000/svg" width="12" height="12" fill="currentColor" viewBox="0 0 16 16">
                                    <path d="M5.5 5.5A.5.5 0 0 1 6 6v6a.5.5 0 0 1-1 0V6a.5.5 0 0 1 .5-.5zm2.5 0a.5.5 0 0 1 .5.5v6a.5.5 0 0 1-1 0V6a.5.5 0 0 1 .5-.5zm3 .5a.5.5 0 0 0-1 0v6a.5.5 0 0 0 1 0V6z"/>
                                    <path fill-rule="evenodd" d="M14.5 3a1 1 0 0 1-1 1H13v9a2 2 0 0 1-2 2H5a2 2 0 0 1-2-2V4h-.5a1 1 0 0 1-1-1V2a1 1 0 0 1 1-1H6a1 1 0 0 1 1-1h2a1 1 0 0 1 1 1h3.5a1 1 0 0 1 1 1v1zM4.118 4 4 4.059V13a1 1 0 0 0 1 1h6a1 1 0 0 0 1-1V4.059L11.882 4H4.118zM2.5 3V2h11v1h-11z"/>
                                </svg>
                            </button>
                        </td>
                    </tr>
                `;
            }).join('');
        } catch (err) {
            body.innerHTML = `<tr><td colspan="5" class="text-center text-danger py-3">${escapeHtml(err.message)}</td></tr>`;
        }
    }

    async function loadCorsRules(card, endpoint) {
        if (!card || !endpoint) return;
        const body = document.getElementById('cors-rules-body');
        if (!body) return;

        try {
            const response = await fetch(endpoint);
            const data = await response.json();

            if (!response.ok) {
                body.innerHTML = `<tr><td colspan="5" class="text-center text-danger py-3">${escapeHtml(data.error || 'Failed to load')}</td></tr>`;
                return;
            }

            const rules = data.rules || [];
            if (rules.length === 0) {
                body.innerHTML = '<tr><td colspan="5" class="text-center text-muted py-3">No CORS rules configured</td></tr>';
                return;
            }

            body.innerHTML = rules.map((rule, idx) => `
                <tr>
                    <td>${(rule.allowed_origins || []).map(o => `<code class="d-block">${escapeHtml(o)}</code>`).join('')}</td>
                    <td>${(rule.allowed_methods || []).map(m => `<span class="badge text-bg-secondary me-1">${escapeHtml(m)}</span>`).join('')}</td>
                    <td class="small text-muted">${(rule.allowed_headers || []).slice(0, 3).join(', ')}${(rule.allowed_headers || []).length > 3 ? '...' : ''}</td>
                    <td class="text-muted">${rule.max_age_seconds || 0}s</td>
                    <td class="text-end">
                        <button class="btn btn-sm btn-outline-danger" onclick="BucketDetailOperations.deleteCorsRule(${idx})">
                            <svg xmlns="http://www.w3.org/2000/svg" width="12" height="12" fill="currentColor" viewBox="0 0 16 16">
                                <path d="M5.5 5.5A.5.5 0 0 1 6 6v6a.5.5 0 0 1-1 0V6a.5.5 0 0 1 .5-.5zm2.5 0a.5.5 0 0 1 .5.5v6a.5.5 0 0 1-1 0V6a.5.5 0 0 1 .5-.5zm3 .5a.5.5 0 0 0-1 0v6a.5.5 0 0 0 1 0V6z"/>
                                <path fill-rule="evenodd" d="M14.5 3a1 1 0 0 1-1 1H13v9a2 2 0 0 1-2 2H5a2 2 0 0 1-2-2V4h-.5a1 1 0 0 1-1-1V2a1 1 0 0 1 1-1H6a1 1 0 0 1 1-1h2a1 1 0 0 1 1 1h3.5a1 1 0 0 1 1 1v1zM4.118 4 4 4.059V13a1 1 0 0 0 1 1h6a1 1 0 0 0 1-1V4.059L11.882 4H4.118zM2.5 3V2h11v1h-11z"/>
                            </svg>
                        </button>
                    </td>
                </tr>
            `).join('');
        } catch (err) {
            body.innerHTML = `<tr><td colspan="5" class="text-center text-danger py-3">${escapeHtml(err.message)}</td></tr>`;
        }
    }

    async function loadAcl(card, endpoint) {
        if (!card || !endpoint) return;
        const body = card.querySelector('[data-acl-body]');
        if (!body) return;

        try {
            const response = await fetch(endpoint);
            const data = await response.json();

            if (!response.ok) {
                body.innerHTML = `<tr><td colspan="3" class="text-center text-danger py-3">${escapeHtml(data.error || 'Failed to load')}</td></tr>`;
                return;
            }

            const grants = data.grants || [];
            if (grants.length === 0) {
                body.innerHTML = '<tr><td colspan="3" class="text-center text-muted py-3">No ACL grants configured</td></tr>';
                return;
            }

            body.innerHTML = grants.map(grant => {
                const grantee = grant.grantee_type === 'CanonicalUser'
                    ? grant.display_name || grant.grantee_id
                    : grant.grantee_uri || grant.grantee_type;
                return `
                    <tr>
                        <td class="fw-medium">${escapeHtml(grantee)}</td>
                        <td><span class="badge text-bg-info">${escapeHtml(grant.permission)}</span></td>
                        <td class="text-muted small">${escapeHtml(grant.grantee_type)}</td>
                    </tr>
                `;
            }).join('');
        } catch (err) {
            body.innerHTML = `<tr><td colspan="3" class="text-center text-danger py-3">${escapeHtml(err.message)}</td></tr>`;
        }
    }

    async function deleteLifecycleRule(ruleId) {
        if (!confirm(`Delete lifecycle rule "${ruleId}"?`)) return;
        const card = document.getElementById('lifecycle-rules-card');
        if (!card) return;
        const endpoint = card.dataset.lifecycleUrl;
        const csrfToken = window.getCsrfToken ? window.getCsrfToken() : '';

        try {
            const resp = await fetch(endpoint, {
                method: 'DELETE',
                headers: { 'Content-Type': 'application/json', 'X-CSRFToken': csrfToken },
                body: JSON.stringify({ rule_id: ruleId })
            });
            const data = await resp.json();
            if (!resp.ok) throw new Error(data.error || 'Failed to delete');
            showMessage({ title: 'Rule deleted', body: `Lifecycle rule "${ruleId}" has been deleted.`, variant: 'success' });
            loadLifecycleRules(card, endpoint);
        } catch (err) {
            showMessage({ title: 'Delete failed', body: err.message, variant: 'danger' });
        }
    }

    async function deleteCorsRule(index) {
        if (!confirm('Delete this CORS rule?')) return;
        const card = document.getElementById('cors-rules-card');
        if (!card) return;
        const endpoint = card.dataset.corsUrl;
        const csrfToken = window.getCsrfToken ? window.getCsrfToken() : '';

        try {
            const resp = await fetch(endpoint, {
                method: 'DELETE',
                headers: { 'Content-Type': 'application/json', 'X-CSRFToken': csrfToken },
                body: JSON.stringify({ rule_index: index })
            });
            const data = await resp.json();
            if (!resp.ok) throw new Error(data.error || 'Failed to delete');
            showMessage({ title: 'Rule deleted', body: 'CORS rule has been deleted.', variant: 'success' });
            loadCorsRules(card, endpoint);
        } catch (err) {
            showMessage({ title: 'Delete failed', body: err.message, variant: 'danger' });
        }
    }

    return {
        init: init,
        loadLifecycleRules: loadLifecycleRules,
        loadCorsRules: loadCorsRules,
        loadAcl: loadAcl,
        deleteLifecycleRule: deleteLifecycleRule,
        deleteCorsRule: deleteCorsRule
    };
})();
548 static/js/bucket-detail-upload.js Normal file
@@ -0,0 +1,548 @@
window.BucketDetailUpload = (function() {
    'use strict';

    const MULTIPART_THRESHOLD = 8 * 1024 * 1024;
    const CHUNK_SIZE = 8 * 1024 * 1024;

    let state = {
        isUploading: false,
        uploadProgress: { current: 0, total: 0, currentFile: '' }
    };

    let elements = {};
    let callbacks = {};

    function init(config) {
        elements = {
            uploadForm: config.uploadForm,
            uploadFileInput: config.uploadFileInput,
            uploadModal: config.uploadModal,
            uploadModalEl: config.uploadModalEl,
            uploadSubmitBtn: config.uploadSubmitBtn,
            uploadCancelBtn: config.uploadCancelBtn,
            uploadBtnText: config.uploadBtnText,
            uploadDropZone: config.uploadDropZone,
            uploadDropZoneLabel: config.uploadDropZoneLabel,
            uploadProgressStack: config.uploadProgressStack,
            uploadKeyPrefix: config.uploadKeyPrefix,
            singleFileOptions: config.singleFileOptions,
            bulkUploadProgress: config.bulkUploadProgress,
            bulkUploadStatus: config.bulkUploadStatus,
            bulkUploadCounter: config.bulkUploadCounter,
            bulkUploadProgressBar: config.bulkUploadProgressBar,
            bulkUploadCurrentFile: config.bulkUploadCurrentFile,
            bulkUploadResults: config.bulkUploadResults,
            bulkUploadSuccessAlert: config.bulkUploadSuccessAlert,
            bulkUploadErrorAlert: config.bulkUploadErrorAlert,
            bulkUploadSuccessCount: config.bulkUploadSuccessCount,
            bulkUploadErrorCount: config.bulkUploadErrorCount,
            bulkUploadErrorList: config.bulkUploadErrorList,
            floatingProgress: config.floatingProgress,
            floatingProgressBar: config.floatingProgressBar,
            floatingProgressStatus: config.floatingProgressStatus,
            floatingProgressTitle: config.floatingProgressTitle,
            floatingProgressExpand: config.floatingProgressExpand
        };

        callbacks = {
            showMessage: config.showMessage || function() {},
            formatBytes: config.formatBytes || function(b) { return b + ' bytes'; },
            escapeHtml: config.escapeHtml || function(s) { return s; },
            onUploadComplete: config.onUploadComplete || function() {},
            hasFolders: config.hasFolders || function() { return false; },
            getCurrentPrefix: config.getCurrentPrefix || function() { return ''; }
        };

        setupEventListeners();
        setupBeforeUnload();
    }

    function isUploading() {
        return state.isUploading;
    }

    function setupBeforeUnload() {
        window.addEventListener('beforeunload', (e) => {
            if (state.isUploading) {
                e.preventDefault();
                e.returnValue = 'Upload in progress. Are you sure you want to leave?';
                return e.returnValue;
            }
        });
    }

    function showFloatingProgress() {
        if (elements.floatingProgress) {
            elements.floatingProgress.classList.remove('d-none');
        }
    }

    function hideFloatingProgress() {
        if (elements.floatingProgress) {
            elements.floatingProgress.classList.add('d-none');
        }
    }

    function updateFloatingProgress(current, total, currentFile) {
        state.uploadProgress = { current, total, currentFile: currentFile || '' };
        if (elements.floatingProgressBar && total > 0) {
            const percent = Math.round((current / total) * 100);
            elements.floatingProgressBar.style.width = `${percent}%`;
        }
        if (elements.floatingProgressStatus) {
            if (currentFile) {
                elements.floatingProgressStatus.textContent = `${current}/${total} files - ${currentFile}`;
            } else {
                elements.floatingProgressStatus.textContent = `${current}/${total} files completed`;
            }
        }
        if (elements.floatingProgressTitle) {
            elements.floatingProgressTitle.textContent = `Uploading ${total} file${total !== 1 ? 's' : ''}...`;
        }
    }

    function refreshUploadDropLabel() {
        if (!elements.uploadDropZoneLabel || !elements.uploadFileInput) return;
        const files = elements.uploadFileInput.files;
        if (!files || files.length === 0) {
            elements.uploadDropZoneLabel.textContent = 'No file selected';
            if (elements.singleFileOptions) elements.singleFileOptions.classList.remove('d-none');
            return;
        }
        elements.uploadDropZoneLabel.textContent = files.length === 1 ? files[0].name : `${files.length} files selected`;
        if (elements.singleFileOptions) {
            elements.singleFileOptions.classList.toggle('d-none', files.length > 1);
        }
    }

    function updateUploadBtnText() {
        if (!elements.uploadBtnText || !elements.uploadFileInput) return;
        const files = elements.uploadFileInput.files;
        if (!files || files.length <= 1) {
            elements.uploadBtnText.textContent = 'Upload';
        } else {
            elements.uploadBtnText.textContent = `Upload ${files.length} files`;
        }
    }

    function resetUploadUI() {
        if (elements.bulkUploadProgress) elements.bulkUploadProgress.classList.add('d-none');
        if (elements.bulkUploadResults) elements.bulkUploadResults.classList.add('d-none');
        if (elements.bulkUploadSuccessAlert) elements.bulkUploadSuccessAlert.classList.remove('d-none');
        if (elements.bulkUploadErrorAlert) elements.bulkUploadErrorAlert.classList.add('d-none');
        if (elements.bulkUploadErrorList) elements.bulkUploadErrorList.innerHTML = '';
        if (elements.uploadSubmitBtn) elements.uploadSubmitBtn.disabled = false;
        if (elements.uploadFileInput) elements.uploadFileInput.disabled = false;
        if (elements.uploadProgressStack) elements.uploadProgressStack.innerHTML = '';
        if (elements.uploadDropZone) {
|
||||||
|
elements.uploadDropZone.classList.remove('upload-locked');
|
||||||
|
elements.uploadDropZone.style.pointerEvents = '';
|
||||||
|
}
|
||||||
|
state.isUploading = false;
|
||||||
|
hideFloatingProgress();
|
||||||
|
}
|
||||||
|
|
||||||
|
function setUploadLockState(locked) {
|
||||||
|
if (elements.uploadDropZone) {
|
||||||
|
elements.uploadDropZone.classList.toggle('upload-locked', locked);
|
||||||
|
elements.uploadDropZone.style.pointerEvents = locked ? 'none' : '';
|
||||||
|
}
|
||||||
|
if (elements.uploadFileInput) {
|
||||||
|
elements.uploadFileInput.disabled = locked;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
function createProgressItem(file) {
|
||||||
|
const item = document.createElement('div');
|
||||||
|
item.className = 'upload-progress-item';
|
||||||
|
item.dataset.state = 'uploading';
|
||||||
|
item.innerHTML = `
|
||||||
|
<div class="d-flex justify-content-between align-items-start">
|
||||||
|
<div class="min-width-0 flex-grow-1">
|
||||||
|
<div class="file-name">${callbacks.escapeHtml(file.name)}</div>
|
||||||
|
<div class="file-size">${callbacks.formatBytes(file.size)}</div>
|
||||||
|
</div>
|
||||||
|
<div class="upload-status text-end ms-2">Preparing...</div>
|
||||||
|
</div>
|
||||||
|
<div class="progress-container">
|
||||||
|
<div class="progress">
|
||||||
|
<div class="progress-bar bg-primary" role="progressbar" style="width: 0%"></div>
|
||||||
|
</div>
|
||||||
|
<div class="progress-text">
|
||||||
|
<span class="progress-loaded">0 B</span>
|
||||||
|
<span class="progress-percent">0%</span>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
`;
|
||||||
|
return item;
|
||||||
|
}
|
||||||
|
|
||||||
|
function updateProgressItem(item, { loaded, total, status, progressState, error }) {
|
||||||
|
if (progressState) item.dataset.state = progressState;
|
||||||
|
const statusEl = item.querySelector('.upload-status');
|
||||||
|
const progressBar = item.querySelector('.progress-bar');
|
||||||
|
const progressLoaded = item.querySelector('.progress-loaded');
|
||||||
|
const progressPercent = item.querySelector('.progress-percent');
|
||||||
|
|
||||||
|
if (status) {
|
||||||
|
statusEl.textContent = status;
|
||||||
|
statusEl.className = 'upload-status text-end ms-2';
|
||||||
|
if (progressState === 'success') statusEl.classList.add('success');
|
||||||
|
if (progressState === 'error') statusEl.classList.add('error');
|
||||||
|
}
|
||||||
|
if (typeof loaded === 'number' && typeof total === 'number' && total > 0) {
|
||||||
|
const percent = Math.round((loaded / total) * 100);
|
||||||
|
progressBar.style.width = `${percent}%`;
|
||||||
|
progressLoaded.textContent = `${callbacks.formatBytes(loaded)} / ${callbacks.formatBytes(total)}`;
|
||||||
|
progressPercent.textContent = `${percent}%`;
|
||||||
|
}
|
||||||
|
if (error) {
|
||||||
|
const progressContainer = item.querySelector('.progress-container');
|
||||||
|
if (progressContainer) {
|
||||||
|
progressContainer.innerHTML = `<div class="text-danger small mt-1">${callbacks.escapeHtml(error)}</div>`;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
async function uploadMultipart(file, objectKey, metadata, progressItem, urls) {
|
||||||
|
const csrfToken = document.querySelector('input[name="csrf_token"]')?.value;
|
||||||
|
|
||||||
|
updateProgressItem(progressItem, { status: 'Initiating...', loaded: 0, total: file.size });
|
||||||
|
const initResp = await fetch(urls.initUrl, {
|
||||||
|
method: 'POST',
|
||||||
|
headers: { 'Content-Type': 'application/json', 'X-CSRFToken': csrfToken || '' },
|
||||||
|
body: JSON.stringify({ object_key: objectKey, metadata })
|
||||||
|
});
|
||||||
|
if (!initResp.ok) {
|
||||||
|
const err = await initResp.json().catch(() => ({}));
|
||||||
|
throw new Error(err.error || 'Failed to initiate upload');
|
||||||
|
}
|
||||||
|
const { upload_id } = await initResp.json();
|
||||||
|
|
||||||
|
const partUrl = urls.partTemplate.replace('UPLOAD_ID_PLACEHOLDER', upload_id);
|
||||||
|
const completeUrl = urls.completeTemplate.replace('UPLOAD_ID_PLACEHOLDER', upload_id);
|
||||||
|
const abortUrl = urls.abortTemplate.replace('UPLOAD_ID_PLACEHOLDER', upload_id);
|
||||||
|
|
||||||
|
const parts = [];
|
||||||
|
const totalParts = Math.ceil(file.size / CHUNK_SIZE);
|
||||||
|
let uploadedBytes = 0;
|
||||||
|
|
||||||
|
try {
|
||||||
|
for (let partNumber = 1; partNumber <= totalParts; partNumber++) {
|
||||||
|
const start = (partNumber - 1) * CHUNK_SIZE;
|
||||||
|
const end = Math.min(start + CHUNK_SIZE, file.size);
|
||||||
|
const chunk = file.slice(start, end);
|
||||||
|
|
||||||
|
updateProgressItem(progressItem, {
|
||||||
|
status: `Part ${partNumber}/${totalParts}`,
|
||||||
|
loaded: uploadedBytes,
|
||||||
|
total: file.size
|
||||||
|
});
|
||||||
|
|
||||||
|
const partResp = await fetch(`${partUrl}?partNumber=${partNumber}`, {
|
||||||
|
method: 'PUT',
|
||||||
|
headers: { 'X-CSRFToken': csrfToken || '' },
|
||||||
|
body: chunk
|
||||||
|
});
|
||||||
|
|
||||||
|
if (!partResp.ok) {
|
||||||
|
const err = await partResp.json().catch(() => ({}));
|
||||||
|
throw new Error(err.error || `Part ${partNumber} failed`);
|
||||||
|
}
|
||||||
|
|
||||||
|
const partData = await partResp.json();
|
||||||
|
parts.push({ part_number: partNumber, etag: partData.etag });
|
||||||
|
uploadedBytes += chunk.size;
|
||||||
|
|
||||||
|
updateProgressItem(progressItem, {
|
||||||
|
loaded: uploadedBytes,
|
||||||
|
total: file.size
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
updateProgressItem(progressItem, { status: 'Completing...', loaded: file.size, total: file.size });
|
||||||
|
const completeResp = await fetch(completeUrl, {
|
||||||
|
method: 'POST',
|
||||||
|
headers: { 'Content-Type': 'application/json', 'X-CSRFToken': csrfToken || '' },
|
||||||
|
body: JSON.stringify({ parts })
|
||||||
|
});
|
||||||
|
|
||||||
|
if (!completeResp.ok) {
|
||||||
|
const err = await completeResp.json().catch(() => ({}));
|
||||||
|
throw new Error(err.error || 'Failed to complete upload');
|
||||||
|
}
|
||||||
|
|
||||||
|
return await completeResp.json();
|
||||||
|
} catch (err) {
|
||||||
|
try {
|
||||||
|
await fetch(abortUrl, { method: 'DELETE', headers: { 'X-CSRFToken': csrfToken || '' } });
|
||||||
|
} catch {}
|
||||||
|
throw err;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
async function uploadRegular(file, objectKey, metadata, progressItem, formAction) {
|
||||||
|
return new Promise((resolve, reject) => {
|
||||||
|
const formData = new FormData();
|
||||||
|
formData.append('object', file);
|
||||||
|
formData.append('object_key', objectKey);
|
||||||
|
if (metadata) formData.append('metadata', JSON.stringify(metadata));
|
||||||
|
const csrfToken = document.querySelector('input[name="csrf_token"]')?.value;
|
||||||
|
if (csrfToken) formData.append('csrf_token', csrfToken);
|
||||||
|
|
||||||
|
const xhr = new XMLHttpRequest();
|
||||||
|
xhr.open('POST', formAction, true);
|
||||||
|
xhr.setRequestHeader('X-Requested-With', 'XMLHttpRequest');
|
||||||
|
|
||||||
|
xhr.upload.addEventListener('progress', (e) => {
|
||||||
|
if (e.lengthComputable) {
|
||||||
|
updateProgressItem(progressItem, {
|
||||||
|
status: 'Uploading...',
|
||||||
|
loaded: e.loaded,
|
||||||
|
total: e.total
|
||||||
|
});
|
||||||
|
}
|
||||||
|
});
|
||||||
|
|
||||||
|
xhr.addEventListener('load', () => {
|
||||||
|
if (xhr.status >= 200 && xhr.status < 300) {
|
||||||
|
try {
|
||||||
|
const data = JSON.parse(xhr.responseText);
|
||||||
|
if (data.status === 'error') {
|
||||||
|
reject(new Error(data.message || 'Upload failed'));
|
||||||
|
} else {
|
||||||
|
resolve(data);
|
||||||
|
}
|
||||||
|
} catch {
|
||||||
|
resolve({});
|
||||||
|
}
|
||||||
|
} else {
|
||||||
|
try {
|
||||||
|
const data = JSON.parse(xhr.responseText);
|
||||||
|
reject(new Error(data.message || `Upload failed (${xhr.status})`));
|
||||||
|
} catch {
|
||||||
|
reject(new Error(`Upload failed (${xhr.status})`));
|
||||||
|
}
|
||||||
|
}
|
||||||
|
});
|
||||||
|
|
||||||
|
xhr.addEventListener('error', () => reject(new Error('Network error')));
|
||||||
|
xhr.addEventListener('abort', () => reject(new Error('Upload aborted')));
|
||||||
|
|
||||||
|
xhr.send(formData);
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
async function uploadSingleFile(file, keyPrefix, metadata, progressItem, urls) {
|
||||||
|
const objectKey = keyPrefix ? `${keyPrefix}${file.name}` : file.name;
|
||||||
|
const shouldUseMultipart = file.size >= MULTIPART_THRESHOLD && urls.initUrl;
|
||||||
|
|
||||||
|
if (!progressItem && elements.uploadProgressStack) {
|
||||||
|
progressItem = createProgressItem(file);
|
||||||
|
elements.uploadProgressStack.appendChild(progressItem);
|
||||||
|
}
|
||||||
|
|
||||||
|
try {
|
||||||
|
let result;
|
||||||
|
if (shouldUseMultipart) {
|
||||||
|
updateProgressItem(progressItem, { status: 'Multipart upload...', loaded: 0, total: file.size });
|
||||||
|
result = await uploadMultipart(file, objectKey, metadata, progressItem, urls);
|
||||||
|
} else {
|
||||||
|
updateProgressItem(progressItem, { status: 'Uploading...', loaded: 0, total: file.size });
|
||||||
|
result = await uploadRegular(file, objectKey, metadata, progressItem, urls.formAction);
|
||||||
|
}
|
||||||
|
updateProgressItem(progressItem, { progressState: 'success', status: 'Complete', loaded: file.size, total: file.size });
|
||||||
|
return result;
|
||||||
|
} catch (err) {
|
||||||
|
updateProgressItem(progressItem, { progressState: 'error', status: 'Failed', error: err.message });
|
||||||
|
throw err;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
async function performBulkUpload(files, urls) {
|
||||||
|
if (state.isUploading || !files || files.length === 0) return;
|
||||||
|
|
||||||
|
state.isUploading = true;
|
||||||
|
setUploadLockState(true);
|
||||||
|
const keyPrefix = (elements.uploadKeyPrefix?.value || '').trim();
|
||||||
|
const metadataRaw = elements.uploadForm?.querySelector('textarea[name="metadata"]')?.value?.trim();
|
||||||
|
let metadata = null;
|
||||||
|
if (metadataRaw) {
|
||||||
|
try {
|
||||||
|
metadata = JSON.parse(metadataRaw);
|
||||||
|
} catch {
|
||||||
|
callbacks.showMessage({ title: 'Invalid metadata', body: 'Metadata must be valid JSON.', variant: 'danger' });
|
||||||
|
resetUploadUI();
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if (elements.bulkUploadProgress) elements.bulkUploadProgress.classList.remove('d-none');
|
||||||
|
if (elements.bulkUploadResults) elements.bulkUploadResults.classList.add('d-none');
|
||||||
|
if (elements.uploadSubmitBtn) elements.uploadSubmitBtn.disabled = true;
|
||||||
|
if (elements.uploadFileInput) elements.uploadFileInput.disabled = true;
|
||||||
|
|
||||||
|
const successFiles = [];
|
||||||
|
const errorFiles = [];
|
||||||
|
const total = files.length;
|
||||||
|
|
||||||
|
updateFloatingProgress(0, total, files[0]?.name || '');
|
||||||
|
|
||||||
|
for (let i = 0; i < total; i++) {
|
||||||
|
const file = files[i];
|
||||||
|
const current = i + 1;
|
||||||
|
|
||||||
|
if (elements.bulkUploadCounter) elements.bulkUploadCounter.textContent = `${current}/${total}`;
|
||||||
|
if (elements.bulkUploadCurrentFile) elements.bulkUploadCurrentFile.textContent = `Uploading: ${file.name}`;
|
||||||
|
if (elements.bulkUploadProgressBar) {
|
||||||
|
const percent = Math.round((current / total) * 100);
|
||||||
|
elements.bulkUploadProgressBar.style.width = `${percent}%`;
|
||||||
|
}
|
||||||
|
updateFloatingProgress(i, total, file.name);
|
||||||
|
|
||||||
|
try {
|
||||||
|
await uploadSingleFile(file, keyPrefix, metadata, null, urls);
|
||||||
|
successFiles.push(file.name);
|
||||||
|
} catch (error) {
|
||||||
|
errorFiles.push({ name: file.name, error: error.message || 'Unknown error' });
|
||||||
|
}
|
||||||
|
}
|
||||||
|
updateFloatingProgress(total, total);
|
||||||
|
|
||||||
|
if (elements.bulkUploadProgress) elements.bulkUploadProgress.classList.add('d-none');
|
||||||
|
if (elements.bulkUploadResults) elements.bulkUploadResults.classList.remove('d-none');
|
||||||
|
|
||||||
|
if (elements.bulkUploadSuccessCount) elements.bulkUploadSuccessCount.textContent = successFiles.length;
|
||||||
|
if (successFiles.length === 0 && elements.bulkUploadSuccessAlert) {
|
||||||
|
elements.bulkUploadSuccessAlert.classList.add('d-none');
|
||||||
|
}
|
||||||
|
|
||||||
|
if (errorFiles.length > 0) {
|
||||||
|
if (elements.bulkUploadErrorCount) elements.bulkUploadErrorCount.textContent = errorFiles.length;
|
||||||
|
if (elements.bulkUploadErrorAlert) elements.bulkUploadErrorAlert.classList.remove('d-none');
|
||||||
|
if (elements.bulkUploadErrorList) {
|
||||||
|
elements.bulkUploadErrorList.innerHTML = errorFiles
|
||||||
|
.map(f => `<li><strong>${callbacks.escapeHtml(f.name)}</strong>: ${callbacks.escapeHtml(f.error)}</li>`)
|
||||||
|
.join('');
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
state.isUploading = false;
|
||||||
|
setUploadLockState(false);
|
||||||
|
|
||||||
|
if (successFiles.length > 0) {
|
||||||
|
if (elements.uploadBtnText) elements.uploadBtnText.textContent = 'Refreshing...';
|
||||||
|
callbacks.onUploadComplete(successFiles, errorFiles);
|
||||||
|
} else {
|
||||||
|
if (elements.uploadSubmitBtn) elements.uploadSubmitBtn.disabled = false;
|
||||||
|
if (elements.uploadFileInput) elements.uploadFileInput.disabled = false;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
function setupEventListeners() {
|
||||||
|
if (elements.uploadFileInput) {
|
||||||
|
elements.uploadFileInput.addEventListener('change', () => {
|
||||||
|
if (state.isUploading) return;
|
||||||
|
refreshUploadDropLabel();
|
||||||
|
updateUploadBtnText();
|
||||||
|
resetUploadUI();
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
if (elements.uploadDropZone) {
|
||||||
|
elements.uploadDropZone.addEventListener('click', () => {
|
||||||
|
if (state.isUploading) return;
|
||||||
|
elements.uploadFileInput?.click();
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
if (elements.floatingProgressExpand) {
|
||||||
|
elements.floatingProgressExpand.addEventListener('click', () => {
|
||||||
|
if (elements.uploadModal) {
|
||||||
|
elements.uploadModal.show();
|
||||||
|
}
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
if (elements.uploadModalEl) {
|
||||||
|
elements.uploadModalEl.addEventListener('hide.bs.modal', () => {
|
||||||
|
if (state.isUploading) {
|
||||||
|
showFloatingProgress();
|
||||||
|
}
|
||||||
|
});
|
||||||
|
|
||||||
|
elements.uploadModalEl.addEventListener('hidden.bs.modal', () => {
|
||||||
|
if (!state.isUploading) {
|
||||||
|
resetUploadUI();
|
||||||
|
if (elements.uploadFileInput) elements.uploadFileInput.value = '';
|
||||||
|
refreshUploadDropLabel();
|
||||||
|
updateUploadBtnText();
|
||||||
|
}
|
||||||
|
});
|
||||||
|
|
||||||
|
elements.uploadModalEl.addEventListener('show.bs.modal', () => {
|
||||||
|
if (state.isUploading) {
|
||||||
|
hideFloatingProgress();
|
||||||
|
}
|
||||||
|
if (callbacks.hasFolders() && callbacks.getCurrentPrefix()) {
|
||||||
|
if (elements.uploadKeyPrefix) {
|
||||||
|
elements.uploadKeyPrefix.value = callbacks.getCurrentPrefix();
|
||||||
|
}
|
||||||
|
} else if (elements.uploadKeyPrefix) {
|
||||||
|
elements.uploadKeyPrefix.value = '';
|
||||||
|
}
|
||||||
|
});
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
function wireDropTarget(target, options) {
|
||||||
|
const { highlightClass = '', autoOpenModal = false } = options || {};
|
||||||
|
if (!target) return;
|
||||||
|
|
||||||
|
const preventDefaults = (event) => {
|
||||||
|
event.preventDefault();
|
||||||
|
event.stopPropagation();
|
||||||
|
};
|
||||||
|
|
||||||
|
['dragenter', 'dragover'].forEach((eventName) => {
|
||||||
|
target.addEventListener(eventName, (event) => {
|
||||||
|
preventDefaults(event);
|
||||||
|
if (state.isUploading) return;
|
||||||
|
if (highlightClass) {
|
||||||
|
target.classList.add(highlightClass);
|
||||||
|
}
|
||||||
|
});
|
||||||
|
});
|
||||||
|
|
||||||
|
['dragleave', 'drop'].forEach((eventName) => {
|
||||||
|
target.addEventListener(eventName, (event) => {
|
||||||
|
preventDefaults(event);
|
||||||
|
if (highlightClass) {
|
||||||
|
target.classList.remove(highlightClass);
|
||||||
|
}
|
||||||
|
});
|
||||||
|
});
|
||||||
|
|
||||||
|
target.addEventListener('drop', (event) => {
|
||||||
|
if (state.isUploading) return;
|
||||||
|
if (!event.dataTransfer?.files?.length || !elements.uploadFileInput) {
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
elements.uploadFileInput.files = event.dataTransfer.files;
|
||||||
|
elements.uploadFileInput.dispatchEvent(new Event('change', { bubbles: true }));
|
||||||
|
if (autoOpenModal && elements.uploadModal) {
|
||||||
|
elements.uploadModal.show();
|
||||||
|
}
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
return {
|
||||||
|
init: init,
|
||||||
|
isUploading: isUploading,
|
||||||
|
performBulkUpload: performBulkUpload,
|
||||||
|
wireDropTarget: wireDropTarget,
|
||||||
|
resetUploadUI: resetUploadUI,
|
||||||
|
refreshUploadDropLabel: refreshUploadDropLabel,
|
||||||
|
updateUploadBtnText: updateUploadBtnText
|
||||||
|
};
|
||||||
|
})();
|
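The multipart branch above slices the file with simple `CHUNK_SIZE` arithmetic: parts are numbered from 1, each covers `[start, end)`, and the final part may be short. A standalone sketch of that part-planning math, runnable outside the browser (the 5 MiB chunk size here is an assumed illustration value; the module defines its own `CHUNK_SIZE` constant earlier in the file):

```javascript
// Assumed chunk size for illustration only; mirrors the loop in uploadMultipart.
const CHUNK_SIZE = 5 * 1024 * 1024;

function planParts(fileSize) {
  const totalParts = Math.ceil(fileSize / CHUNK_SIZE);
  const parts = [];
  for (let partNumber = 1; partNumber <= totalParts; partNumber++) {
    const start = (partNumber - 1) * CHUNK_SIZE;        // inclusive byte offset
    const end = Math.min(start + CHUNK_SIZE, fileSize); // exclusive; clamps the last part
    parts.push({ partNumber, start, end, size: end - start });
  }
  return parts;
}

// A 12 MiB file yields three parts: 5 MiB, 5 MiB, and a 2 MiB remainder.
const plan = planParts(12 * 1024 * 1024);
```

Because `end` is clamped with `Math.min`, the remainder never produces an empty trailing part.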
120	static/js/bucket-detail-utils.js	Normal file
@@ -0,0 +1,120 @@
window.BucketDetailUtils = (function() {
  'use strict';

  function setupJsonAutoIndent(textarea) {
    if (!textarea) return;

    textarea.addEventListener('keydown', function(e) {
      if (e.key === 'Enter') {
        e.preventDefault();

        const start = this.selectionStart;
        const end = this.selectionEnd;
        const value = this.value;

        const lineStart = value.lastIndexOf('\n', start - 1) + 1;
        const currentLine = value.substring(lineStart, start);

        const indentMatch = currentLine.match(/^(\s*)/);
        let indent = indentMatch ? indentMatch[1] : '';

        const trimmedLine = currentLine.trim();
        const lastChar = trimmedLine.slice(-1);

        let newIndent = indent;
        let insertAfter = '';

        if (lastChar === '{' || lastChar === '[') {
          newIndent = indent + '  ';

          const charAfterCursor = value.substring(start, start + 1).trim();
          if ((lastChar === '{' && charAfterCursor === '}') ||
              (lastChar === '[' && charAfterCursor === ']')) {
            insertAfter = '\n' + indent;
          }
        } else if (lastChar === ',' || lastChar === ':') {
          newIndent = indent;
        }

        const insertion = '\n' + newIndent + insertAfter;
        const newValue = value.substring(0, start) + insertion + value.substring(end);

        this.value = newValue;

        const newCursorPos = start + 1 + newIndent.length;
        this.selectionStart = this.selectionEnd = newCursorPos;

        this.dispatchEvent(new Event('input', { bubbles: true }));
      }

      if (e.key === 'Tab') {
        e.preventDefault();
        const start = this.selectionStart;
        const end = this.selectionEnd;

        if (e.shiftKey) {
          const lineStart = this.value.lastIndexOf('\n', start - 1) + 1;
          const lineContent = this.value.substring(lineStart, start);
          if (lineContent.startsWith('  ')) {
            this.value = this.value.substring(0, lineStart) +
              this.value.substring(lineStart + 2);
            this.selectionStart = this.selectionEnd = Math.max(lineStart, start - 2);
          }
        } else {
          this.value = this.value.substring(0, start) + '  ' + this.value.substring(end);
          this.selectionStart = this.selectionEnd = start + 2;
        }

        this.dispatchEvent(new Event('input', { bubbles: true }));
      }
    });
  }
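The Enter-key branch of `setupJsonAutoIndent` can be restated as a pure function, which makes the rules easy to verify outside the DOM. This sketch mirrors the logic above (keep the current line's leading whitespace, deepen by two spaces after `{` or `[`, and drop a waiting closer onto its own line); the function name `indentOnEnter` is hypothetical, not part of the module:

```javascript
// Pure restatement of the Enter-key indent rule from setupJsonAutoIndent.
function indentOnEnter(value, cursor) {
  const lineStart = value.lastIndexOf('\n', cursor - 1) + 1;
  const currentLine = value.substring(lineStart, cursor);
  const indent = (currentLine.match(/^(\s*)/) || ['', ''])[1];
  const lastChar = currentLine.trim().slice(-1);

  let newIndent = indent;
  let insertAfter = '';
  if (lastChar === '{' || lastChar === '[') {
    newIndent = indent + '  ';
    const next = value.substring(cursor, cursor + 1).trim();
    // If the matching closer sits right after the cursor, give it its own line.
    if ((lastChar === '{' && next === '}') || (lastChar === '[' && next === ']')) {
      insertAfter = '\n' + indent;
    }
  }

  const insertion = '\n' + newIndent + insertAfter;
  return {
    value: value.slice(0, cursor) + insertion + value.slice(cursor),
    cursor: cursor + 1 + newIndent.length, // lands on the indented blank line
  };
}

// Pressing Enter between '{' and '}' opens an indented blank line:
// indentOnEnter('{}', 1).value === '{\n  \n}'
```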
  function formatBytes(bytes) {
    if (!Number.isFinite(bytes)) return `${bytes} bytes`;
    const units = ['bytes', 'KB', 'MB', 'GB', 'TB'];
    let i = 0;
    let size = bytes;
    while (size >= 1024 && i < units.length - 1) {
      size /= 1024;
      i++;
    }
    return `${size.toFixed(i === 0 ? 0 : 1)} ${units[i]}`;
  }

  function escapeHtml(value) {
    if (value === null || value === undefined) return '';
    return String(value)
      .replace(/&/g, '&amp;')
      .replace(/</g, '&lt;')
      .replace(/>/g, '&gt;')
      .replace(/"/g, '&quot;')
      .replace(/'/g, '&#039;');
  }

  function fallbackCopy(text) {
    const textArea = document.createElement('textarea');
    textArea.value = text;
    textArea.style.position = 'fixed';
    textArea.style.left = '-9999px';
    textArea.style.top = '-9999px';
    document.body.appendChild(textArea);
    textArea.focus();
    textArea.select();
    let success = false;
    try {
      success = document.execCommand('copy');
    } catch {
      success = false;
    }
    document.body.removeChild(textArea);
    return success;
  }

  return {
    setupJsonAutoIndent: setupJsonAutoIndent,
    formatBytes: formatBytes,
    escapeHtml: escapeHtml,
    fallbackCopy: fallbackCopy
  };
})();
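`formatBytes` and `escapeHtml` are pure and easy to exercise outside the browser. A runnable restatement with the HTML entities spelled out (`&amp;`, `&lt;`, and so on, which a browser-rendered diff view tends to decode back into literal characters):

```javascript
// Mirrors BucketDetailUtils.formatBytes: divide by 1024 until under a unit,
// no decimals for raw byte counts, one decimal otherwise.
function formatBytes(bytes) {
  if (!Number.isFinite(bytes)) return `${bytes} bytes`;
  const units = ['bytes', 'KB', 'MB', 'GB', 'TB'];
  let i = 0;
  let size = bytes;
  while (size >= 1024 && i < units.length - 1) {
    size /= 1024;
    i++;
  }
  return `${size.toFixed(i === 0 ? 0 : 1)} ${units[i]}`;
}

// Mirrors BucketDetailUtils.escapeHtml: '&' must be replaced first so the
// later replacements do not double-escape their own output.
function escapeHtml(value) {
  if (value === null || value === undefined) return '';
  return String(value)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#039;');
}
```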
@@ -3,10 +3,10 @@
 <head>
   <meta charset="utf-8" />
   <meta name="viewport" content="width=device-width, initial-scale=1" />
-  <meta name="csrf-token" content="{{ csrf_token() }}" />
+  {% if principal %}<meta name="csrf-token" content="{{ csrf_token() }}" />{% endif %}
   <title>MyFSIO Console</title>
-  <link rel="icon" type="image/png" href="{{ url_for('static', filename='images/MyFISO.png') }}" />
-  <link rel="icon" type="image/x-icon" href="{{ url_for('static', filename='images/MyFISO.ico') }}" />
+  <link rel="icon" type="image/png" href="{{ url_for('static', filename='images/MyFSIO.png') }}" />
+  <link rel="icon" type="image/x-icon" href="{{ url_for('static', filename='images/MyFSIO.ico') }}" />
   <link
     href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.2/dist/css/bootstrap.min.css"
     rel="stylesheet"
@@ -24,106 +24,218 @@
     document.documentElement.dataset.bsTheme = 'light';
     document.documentElement.dataset.theme = 'light';
   }
+  try {
+    if (localStorage.getItem('myfsio-sidebar-collapsed') === 'true') {
+      document.documentElement.classList.add('sidebar-will-collapse');
+    }
+  } catch (err) {}
 })();
 </script>
 <link rel="stylesheet" href="{{ url_for('static', filename='css/main.css') }}" />
 </head>
 <body>
-<nav class="navbar navbar-expand-lg myfsio-nav shadow-sm">
-  <div class="container-fluid">
-    <a class="navbar-brand fw-semibold" href="{{ url_for('ui.buckets_overview') }}">
-      <img
-        src="{{ url_for('static', filename='images/MyFISO.png') }}"
-        alt="MyFSIO logo"
-        class="myfsio-logo"
-        width="32"
-        height="32"
-        decoding="async"
-      />
-      <span class="myfsio-title">MyFSIO</span>
+<header class="mobile-header d-lg-none">
+  <button class="sidebar-toggle-btn" type="button" data-bs-toggle="offcanvas" data-bs-target="#mobileSidebar" aria-controls="mobileSidebar" aria-label="Toggle navigation">
+    <svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" fill="currentColor" viewBox="0 0 16 16">
+      <path fill-rule="evenodd" d="M2.5 12a.5.5 0 0 1 .5-.5h10a.5.5 0 0 1 0 1H3a.5.5 0 0 1-.5-.5zm0-4a.5.5 0 0 1 .5-.5h10a.5.5 0 0 1 0 1H3a.5.5 0 0 1-.5-.5zm0-4a.5.5 0 0 1 .5-.5h10a.5.5 0 0 1 0 1H3a.5.5 0 0 1-.5-.5z"/>
+    </svg>
+  </button>
+  <a class="mobile-brand" href="{{ url_for('ui.buckets_overview') }}">
+    <img src="{{ url_for('static', filename='images/MyFSIO.png') }}" alt="MyFSIO logo" width="28" height="28" />
+    <span>MyFSIO</span>
+  </a>
+  <button class="theme-toggle-mobile" type="button" id="themeToggleMobile" aria-label="Toggle dark mode">
+    <svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" fill="currentColor" class="theme-icon-mobile" id="themeToggleSunMobile" viewBox="0 0 16 16">
+      <path d="M8 11.5a3.5 3.5 0 1 1 0-7 3.5 3.5 0 0 1 0 7zm0 1.5a5 5 0 1 0 0-10 5 5 0 0 0 0 10zM8 0a.5.5 0 0 1 .5.5v1.555a.5.5 0 0 1-1 0V.5A.5.5 0 0 1 8 0zm0 12.945a.5.5 0 0 1 .5.5v2.055a.5.5 0 0 1-1 0v-2.055a.5.5 0 0 1 .5-.5zM2.343 2.343a.5.5 0 0 1 .707 0l1.1 1.1a.5.5 0 1 1-.708.707l-1.1-1.1a.5.5 0 0 1 0-.707zm9.507 9.507a.5.5 0 0 1 .707 0l1.1 1.1a.5.5 0 1 1-.707.708l-1.1-1.1a.5.5 0 0 1 0-.708zM0 8a.5.5 0 0 1 .5-.5h1.555a.5.5 0 0 1 0 1H.5A.5.5 0 0 1 0 8zm12.945 0a.5.5 0 0 1 .5-.5H15.5a.5.5 0 0 1 0 1h-2.055a.5.5 0 0 1-.5-.5zM2.343 13.657a.5.5 0 0 1 0-.707l1.1-1.1a.5.5 0 1 1 .708.707l-1.1 1.1a.5.5 0 0 1-.708 0zm9.507-9.507a.5.5 0 0 1 0-.707l1.1-1.1a.5.5 0 0 1 .707.708l-1.1 1.1a.5.5 0 0 1-.707 0z"/>
+    </svg>
+    <svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" fill="currentColor" class="theme-icon-mobile" id="themeToggleMoonMobile" viewBox="0 0 16 16">
+      <path d="M6 .278a.768.768 0 0 1 .08.858 7.208 7.208 0 0 0-.878 3.46c0 4.021 3.278 7.277 7.318 7.277.527 0 1.04-.055 1.533-.16a.787.787 0 0 1 .81.316.733.733 0 0 1-.031.893A8.349 8.349 0 0 1 8.344 16C3.734 16 0 12.286 0 7.71 0 4.266 2.114 1.312 5.124.06A.752.752 0 0 1 6 .278z"/>
+      <path d="M10.794 3.148a.217.217 0 0 1 .412 0l.387 1.162c.173.518.579.924 1.097 1.097l1.162.387a.217.217 0 0 1 0 .412l-1.162.387a1.734 1.734 0 0 0-1.097 1.097l-.387 1.162a.217.217 0 0 1-.412 0l-.387-1.162A1.734 1.734 0 0 0 9.31 6.593l-1.162-.387a.217.217 0 0 1 0-.412l1.162-.387a1.734 1.734 0 0 0 1.097-1.097l.387-1.162zM13.863.099a.145.145 0 0 1 .274 0l.258.774c.115.346.386.617.732.732l.774.258a.145.145 0 0 1 0 .274l-.774.258a1.156 1.156 0 0 0-.732.732l-.258.774a.145.145 0 0 1-.274 0l-.258-.774a1.156 1.156 0 0 0-.732-.732l-.774-.258a.145.145 0 0 1 0-.274l.774-.258c.346-.115.617-.386.732-.732L13.863.1z"/>
+    </svg>
+  </button>
+</header>
+
+<div class="offcanvas offcanvas-start sidebar-offcanvas" tabindex="-1" id="mobileSidebar" aria-labelledby="mobileSidebarLabel">
+  <div class="offcanvas-header sidebar-header">
+    <a class="sidebar-brand" href="{{ url_for('ui.buckets_overview') }}">
+      <img src="{{ url_for('static', filename='images/MyFSIO.png') }}" alt="MyFSIO logo" class="sidebar-logo" width="36" height="36" />
+      <span class="sidebar-title">MyFSIO</span>
     </a>
-    <button class="navbar-toggler" type="button" data-bs-toggle="collapse" data-bs-target="#navContent" aria-controls="navContent" aria-expanded="false" aria-label="Toggle navigation">
-      <span class="navbar-toggler-icon"></span>
-    </button>
-    <div class="collapse navbar-collapse" id="navContent">
-      <ul class="navbar-nav me-auto mb-2 mb-lg-0">
-        {% if principal %}
-        <li class="nav-item">
-          <a class="nav-link" href="{{ url_for('ui.buckets_overview') }}">Buckets</a>
-        </li>
-        <li class="nav-item">
-          <a class="nav-link {% if not can_manage_iam %}nav-link-muted{% endif %}" href="{{ url_for('ui.iam_dashboard') }}">
-            IAM
-            {% if not can_manage_iam %}<span class="badge ms-2 text-bg-warning">Restricted</span>{% endif %}
-          </a>
-        </li>
-        <li class="nav-item">
-          <a class="nav-link {% if not can_manage_iam %}nav-link-muted{% endif %}" href="{{ url_for('ui.connections_dashboard') }}">
-            Connections
-            {% if not can_manage_iam %}<span class="badge ms-2 text-bg-warning">Restricted</span>{% endif %}
-          </a>
-        </li>
-        {% endif %}
-        {% if principal %}
-        <li class="nav-item">
-          <a class="nav-link" href="{{ url_for('ui.docs_page') }}">Docs</a>
-        </li>
-        {% endif %}
-      </ul>
-      <div class="ms-lg-auto d-flex align-items-center gap-3 text-light flex-wrap">
-        <button
-          class="btn btn-outline-light btn-sm theme-toggle"
-          type="button"
-          id="themeToggle"
-          aria-pressed="false"
-          aria-label="Toggle dark mode"
-        >
-          <span id="themeToggleLabel" class="visually-hidden">Toggle dark mode</span>
-          <svg
-            xmlns="http://www.w3.org/2000/svg"
-            width="16"
-            height="16"
-            fill="currentColor"
-            class="theme-icon"
-            id="themeToggleSun"
-            viewBox="0 0 16 16"
-            aria-hidden="true"
-          >
-            <path
-              d="M8 11.5a3.5 3.5 0 1 1 0-7 3.5 3.5 0 0 1 0 7zm0 1.5a5 5 0 1 0 0-10 5 5 0 0 0 0 10zM8 0a.5.5 0 0 1 .5.5v1.555a.5.5 0 0 1-1 0V.5A.5.5 0 0 1 8 0zm0 12.945a.5.5 0 0 1 .5.5v2.055a.5.5 0 0 1-1 0v-2.055a.5.5 0 0 1 .5-.5zM2.343 2.343a.5.5 0 0 1 .707 0l1.1 1.1a.5.5 0 1 1-.708.707l-1.1-1.1a.5.5 0 0 1 0-.707zm9.507 9.507a.5.5 0 0 1 .707 0l1.1 1.1a.5.5 0 1 1-.707.708l-1.1-1.1a.5.5 0 0 1 0-.708zM0 8a.5.5 0 0 1 .5-.5h1.555a.5.5 0 0 1 0 1H.5A.5.5 0 0 1 0 8zm12.945 0a.5.5 0 0 1 .5-.5H15.5a.5.5 0 0 1 0 1h-2.055a.5.5 0 0 1-.5-.5zM2.343 13.657a.5.5 0 0 1 0-.707l1.1-1.1a.5.5 0 1 1 .708.707l-1.1 1.1a.5.5 0 0 1-.708 0zm9.507-9.507a.5.5 0 0 1 0-.707l1.1-1.1a.5.5 0 0 1 .707.708l-1.1 1.1a.5.5 0 0 1-.707 0z"
-            />
+    <button type="button" class="btn-close btn-close-white" data-bs-dismiss="offcanvas" aria-label="Close"></button>
+  </div>
+  <div class="offcanvas-body sidebar-body">
+    <nav class="sidebar-nav">
+      {% if principal %}
+      <div class="nav-section">
+        <span class="nav-section-title">Navigation</span>
+        <a href="{{ url_for('ui.buckets_overview') }}" class="sidebar-link {% if request.endpoint == 'ui.buckets_overview' or request.endpoint == 'ui.bucket_detail' %}active{% endif %}">
+          <svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" viewBox="0 0 16 16">
+            <path d="M2.522 5H2a.5.5 0 0 0-.494.574l1.372 9.149A1.5 1.5 0 0 0 4.36 16h7.278a1.5 1.5 0 0 0 1.483-1.277l1.373-9.149A.5.5 0 0 0 14 5h-.522A5.5 5.5 0 0 0 2.522 5zm1.005 0a4.5 4.5 0 0 1 8.945 0H3.527z"/>
           </svg>
-        <svg
-          xmlns="http://www.w3.org/2000/svg"
-          width="16"
-          height="16"
-          fill="currentColor"
-          class="theme-icon d-none"
+          <span>Buckets</span>
+        </a>
+        {% if can_manage_iam %}
+        <a href="{{ url_for('ui.iam_dashboard') }}" class="sidebar-link {% if request.endpoint == 'ui.iam_dashboard' %}active{% endif %}">
+          <svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" viewBox="0 0 16 16">
+            <path d="M15 14s1 0 1-1-1-4-5-4-5 3-5 4 1 1 1 1h8zm-7.978-1A.261.261 0 0 1 7 12.996c.001-.264.167-1.03.76-1.72C8.312 10.629 9.282 10 11 10c1.717 0 2.687.63 3.24 1.276.593.69.758 1.457.76 1.72l-.008.002a.274.274 0 0 1-.014.002H7.022zM11 7a2 2 0 1 0 0-4 2 2 0 0 0 0 4zm3-2a3 3 0 1 1-6 0 3 3 0 0 1 6 0zM6.936 9.28a5.88 5.88 0 0 0-1.23-.247A7.35 7.35 0 0 0 5 9c-4 0-5 3-5 4 0 .667.333 1 1 1h4.216A2.238 2.238 0 0 1 5 13c0-1.01.377-2.042 1.09-2.904.243-.294.526-.569.846-.816zM4.92 10A5.493 5.493 0 0 0 4 13H1c0-.26.164-1.03.76-1.724.545-.636 1.492-1.256 3.16-1.275zM1.5 5.5a3 3 0 1 1 6 0 3 3 0 0 1-6 0zm3-2a2 2 0 1 0 0 4 2 2 0 0 0 0-4z"/>
||||||
class="theme-icon d-none"
|
<path d="M15 14s1 0 1-1-1-4-5-4-5 3-5 4 1 1 1 1h8zm-7.978-1A.261.261 0 0 1 7 12.996c.001-.264.167-1.03.76-1.72C8.312 10.629 9.282 10 11 10c1.717 0 2.687.63 3.24 1.276.593.69.758 1.457.76 1.72l-.008.002a.274.274 0 0 1-.014.002H7.022zM11 7a2 2 0 1 0 0-4 2 2 0 0 0 0 4zm3-2a3 3 0 1 1-6 0 3 3 0 0 1 6 0zM6.936 9.28a5.88 5.88 0 0 0-1.23-.247A7.35 7.35 0 0 0 5 9c-4 0-5 3-5 4 0 .667.333 1 1 1h4.216A2.238 2.238 0 0 1 5 13c0-1.01.377-2.042 1.09-2.904.243-.294.526-.569.846-.816zM4.92 10A5.493 5.493 0 0 0 4 13H1c0-.26.164-1.03.76-1.724.545-.636 1.492-1.256 3.16-1.275zM1.5 5.5a3 3 0 1 1 6 0 3 3 0 0 1-6 0zm3-2a2 2 0 1 0 0 4 2 2 0 0 0 0-4z"/>
|
||||||
id="themeToggleMoon"
|
|
||||||
viewBox="0 0 16 16"
|
|
||||||
aria-hidden="true"
|
|
||||||
>
|
|
||||||
<path d="M6 .278a.768.768 0 0 1 .08.858 7.208 7.208 0 0 0-.878 3.46c0 4.021 3.278 7.277 7.318 7.277.527 0 1.04-.055 1.533-.16a.787.787 0 0 1 .81.316.733.733 0 0 1-.031.893A8.349 8.349 0 0 1 8.344 16C3.734 16 0 12.286 0 7.71 0 4.266 2.114 1.312 5.124.06A.752.752 0 0 1 6 .278z"/>
|
|
||||||
<path d="M10.794 3.148a.217.217 0 0 1 .412 0l.387 1.162c.173.518.579.924 1.097 1.097l1.162.387a.217.217 0 0 1 0 .412l-1.162.387a1.734 1.734 0 0 0-1.097 1.097l-.387 1.162a.217.217 0 0 1-.412 0l-.387-1.162A1.734 1.734 0 0 0 9.31 6.593l-1.162-.387a.217.217 0 0 1 0-.412l1.162-.387a1.734 1.734 0 0 0 1.097-1.097l.387-1.162zM13.863.099a.145.145 0 0 1 .274 0l.258.774c.115.346.386.617.732.732l.774.258a.145.145 0 0 1 0 .274l-.774.258a1.156 1.156 0 0 0-.732.732l-.258.774a.145.145 0 0 1-.274 0l-.258-.774a1.156 1.156 0 0 0-.732-.732l-.774-.258a.145.145 0 0 1 0-.274l.774-.258c.346-.115.617-.386.732-.732L13.863.1z"/>
|
|
||||||
</svg>
|
</svg>
|
||||||
</button>
|
<span>IAM</span>
|
||||||
{% if principal %}
|
</a>
|
||||||
<div class="text-end small">
|
<a href="{{ url_for('ui.connections_dashboard') }}" class="sidebar-link {% if request.endpoint == 'ui.connections_dashboard' %}active{% endif %}">
|
||||||
<div class="fw-semibold" title="{{ principal.display_name }}">{{ principal.display_name | truncate(20, true) }}</div>
|
<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" viewBox="0 0 16 16">
|
||||||
<div class="opacity-75">{{ principal.access_key }}</div>
|
<path fill-rule="evenodd" d="M6 3.5A1.5 1.5 0 0 1 7.5 2h1A1.5 1.5 0 0 1 10 3.5v1A1.5 1.5 0 0 1 8.5 6v1H14a.5.5 0 0 1 .5.5v1a.5.5 0 0 1-1 0V8h-5v.5a.5.5 0 0 1-1 0V8h-5v.5a.5.5 0 0 1-1 0v-1A.5.5 0 0 1 2 7h5.5V6A1.5 1.5 0 0 1 6 4.5v-1zM8.5 5a.5.5 0 0 0 .5-.5v-1a.5.5 0 0 0-.5-.5h-1a.5.5 0 0 0-.5.5v1a.5.5 0 0 0 .5.5h1zM0 11.5A1.5 1.5 0 0 1 1.5 10h1A1.5 1.5 0 0 1 4 11.5v1A1.5 1.5 0 0 1 2.5 14h-1A1.5 1.5 0 0 1 0 12.5v-1zm1.5-.5a.5.5 0 0 0-.5.5v1a.5.5 0 0 0 .5.5h1a.5.5 0 0 0 .5-.5v-1a.5.5 0 0 0-.5-.5h-1zm4.5.5A1.5 1.5 0 0 1 7.5 10h1a1.5 1.5 0 0 1 1.5 1.5v1A1.5 1.5 0 0 1 8.5 14h-1A1.5 1.5 0 0 1 6 12.5v-1zm1.5-.5a.5.5 0 0 0-.5.5v1a.5.5 0 0 0 .5.5h1a.5.5 0 0 0 .5-.5v-1a.5.5 0 0 0-.5-.5h-1zm4.5.5a1.5 1.5 0 0 1 1.5-1.5h1a1.5 1.5 0 0 1 1.5 1.5v1a1.5 1.5 0 0 1-1.5 1.5h-1a1.5 1.5 0 0 1-1.5-1.5v-1zm1.5-.5a.5.5 0 0 0-.5.5v1a.5.5 0 0 0 .5.5h1a.5.5 0 0 0 .5-.5v-1a.5.5 0 0 0-.5-.5h-1z"/>
|
||||||
</div>
|
</svg>
|
||||||
<form method="post" action="{{ url_for('ui.logout') }}">
|
<span>Connections</span>
|
||||||
<input type="hidden" name="csrf_token" value="{{ csrf_token() }}" />
|
</a>
|
||||||
<button class="btn btn-outline-light btn-sm" type="submit">Sign out</button>
|
<a href="{{ url_for('ui.metrics_dashboard') }}" class="sidebar-link {% if request.endpoint == 'ui.metrics_dashboard' %}active{% endif %}">
|
||||||
</form>
|
<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" viewBox="0 0 16 16">
|
||||||
|
<path d="M8 4a.5.5 0 0 1 .5.5V6a.5.5 0 0 1-1 0V4.5A.5.5 0 0 1 8 4zM3.732 5.732a.5.5 0 0 1 .707 0l.915.914a.5.5 0 1 1-.708.708l-.914-.915a.5.5 0 0 1 0-.707zM2 10a.5.5 0 0 1 .5-.5h1.586a.5.5 0 0 1 0 1H2.5A.5.5 0 0 1 2 10zm9.5 0a.5.5 0 0 1 .5-.5h1.5a.5.5 0 0 1 0 1H12a.5.5 0 0 1-.5-.5zm.754-4.246a.389.389 0 0 0-.527-.02L7.547 9.31a.91.91 0 1 0 1.302 1.258l3.434-4.297a.389.389 0 0 0-.029-.518z"/>
|
||||||
|
<path fill-rule="evenodd" d="M0 10a8 8 0 1 1 15.547 2.661c-.442 1.253-1.845 1.602-2.932 1.25C11.309 13.488 9.475 13 8 13c-1.474 0-3.31.488-4.615.911-1.087.352-2.49.003-2.932-1.25A7.988 7.988 0 0 1 0 10zm8-7a7 7 0 0 0-6.603 9.329c.203.575.923.876 1.68.63C4.397 12.533 6.358 12 8 12s3.604.532 4.923.96c.757.245 1.477-.056 1.68-.631A7 7 0 0 0 8 3z"/>
|
||||||
|
</svg>
|
||||||
|
<span>Metrics</span>
|
||||||
|
</a>
|
||||||
{% endif %}
|
{% endif %}
|
||||||
</div>
|
</div>
|
||||||
|
<div class="nav-section">
|
||||||
|
<span class="nav-section-title">Resources</span>
|
||||||
|
<a href="{{ url_for('ui.docs_page') }}" class="sidebar-link {% if request.endpoint == 'ui.docs_page' %}active{% endif %}">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" viewBox="0 0 16 16">
|
||||||
|
<path d="M1 2.828c.885-.37 2.154-.769 3.388-.893 1.33-.134 2.458.063 3.112.752v9.746c-.935-.53-2.12-.603-3.213-.493-1.18.12-2.37.461-3.287.811V2.828zm7.5-.141c.654-.689 1.782-.886 3.112-.752 1.234.124 2.503.523 3.388.893v9.923c-.918-.35-2.107-.692-3.287-.81-1.094-.111-2.278-.039-3.213.492V2.687zM8 1.783C7.015.936 5.587.81 4.287.94c-1.514.153-3.042.672-3.994 1.105A.5.5 0 0 0 0 2.5v11a.5.5 0 0 0 .707.455c.882-.4 2.303-.881 3.68-1.02 1.409-.142 2.59.087 3.223.877a.5.5 0 0 0 .78 0c.633-.79 1.814-1.019 3.222-.877 1.378.139 2.8.62 3.681 1.02A.5.5 0 0 0 16 13.5v-11a.5.5 0 0 0-.293-.455c-.952-.433-2.48-.952-3.994-1.105C10.413.809 8.985.936 8 1.783z"/>
|
||||||
|
</svg>
|
||||||
|
<span>Documentation</span>
|
||||||
|
</a>
|
||||||
|
</div>
|
||||||
|
{% endif %}
|
||||||
|
</nav>
|
||||||
|
{% if principal %}
|
||||||
|
<div class="sidebar-footer">
|
||||||
|
<div class="sidebar-user">
|
||||||
|
<div class="user-avatar">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" fill="currentColor" viewBox="0 0 16 16">
|
||||||
|
<path d="M11 6a3 3 0 1 1-6 0 3 3 0 0 1 6 0z"/>
|
||||||
|
<path fill-rule="evenodd" d="M0 8a8 8 0 1 1 16 0A8 8 0 0 1 0 8zm8-7a7 7 0 0 0-5.468 11.37C3.242 11.226 4.805 10 8 10s4.757 1.225 5.468 2.37A7 7 0 0 0 8 1z"/>
|
||||||
|
</svg>
|
||||||
|
</div>
|
||||||
|
<div class="user-info">
|
||||||
|
<div class="user-name" title="{{ principal.display_name }}">{{ principal.display_name | truncate(16, true) }}</div>
|
||||||
|
<div class="user-key">{{ principal.access_key | truncate(12, true) }}</div>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
<form method="post" action="{{ url_for('ui.logout') }}" class="w-100">
|
||||||
|
<input type="hidden" name="csrf_token" value="{{ csrf_token() }}" />
|
||||||
|
<button class="sidebar-logout-btn" type="submit">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" fill="currentColor" viewBox="0 0 16 16">
|
||||||
|
<path fill-rule="evenodd" d="M10 12.5a.5.5 0 0 1-.5.5h-8a.5.5 0 0 1-.5-.5v-9a.5.5 0 0 1 .5-.5h8a.5.5 0 0 1 .5.5v2a.5.5 0 0 0 1 0v-2A1.5 1.5 0 0 0 9.5 2h-8A1.5 1.5 0 0 0 0 3.5v9A1.5 1.5 0 0 0 1.5 14h8a1.5 1.5 0 0 0 1.5-1.5v-2a.5.5 0 0 0-1 0v2z"/>
|
||||||
|
<path fill-rule="evenodd" d="M15.854 8.354a.5.5 0 0 0 0-.708l-3-3a.5.5 0 0 0-.708.708L14.293 7.5H5.5a.5.5 0 0 0 0 1h8.793l-2.147 2.146a.5.5 0 0 0 .708.708l3-3z"/>
|
||||||
|
</svg>
|
||||||
|
<span>Sign out</span>
|
||||||
|
</button>
|
||||||
|
</form>
|
||||||
</div>
|
</div>
|
||||||
|
{% endif %}
|
||||||
</div>
|
</div>
|
||||||
</nav>
|
</div>
|
||||||
<main class="container py-4">
|
|
||||||
{% block content %}{% endblock %}
|
<aside class="sidebar d-none d-lg-flex" id="desktopSidebar">
|
||||||
</main>
|
<div class="sidebar-header">
|
||||||
|
<div class="sidebar-brand" id="sidebarBrand">
|
||||||
|
<img src="{{ url_for('static', filename='images/MyFSIO.png') }}" alt="MyFSIO logo" class="sidebar-logo" width="36" height="36" />
|
||||||
|
<span class="sidebar-title">MyFSIO</span>
|
||||||
|
</div>
|
||||||
|
<button class="sidebar-collapse-btn" type="button" id="sidebarCollapseBtn" aria-label="Collapse sidebar">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" fill="currentColor" viewBox="0 0 16 16">
|
||||||
|
<path fill-rule="evenodd" d="M11.354 1.646a.5.5 0 0 1 0 .708L5.707 8l5.647 5.646a.5.5 0 0 1-.708.708l-6-6a.5.5 0 0 1 0-.708l6-6a.5.5 0 0 1 .708 0z"/>
|
||||||
|
</svg>
|
||||||
|
</button>
|
||||||
|
</div>
|
||||||
|
<div class="sidebar-body">
|
||||||
|
<nav class="sidebar-nav">
|
||||||
|
{% if principal %}
|
||||||
|
<div class="nav-section">
|
||||||
|
<span class="nav-section-title">Navigation</span>
|
||||||
|
<a href="{{ url_for('ui.buckets_overview') }}" class="sidebar-link {% if request.endpoint == 'ui.buckets_overview' or request.endpoint == 'ui.bucket_detail' %}active{% endif %}" data-tooltip="Buckets">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" viewBox="0 0 16 16">
|
||||||
|
<path d="M2.522 5H2a.5.5 0 0 0-.494.574l1.372 9.149A1.5 1.5 0 0 0 4.36 16h7.278a1.5 1.5 0 0 0 1.483-1.277l1.373-9.149A.5.5 0 0 0 14 5h-.522A5.5 5.5 0 0 0 2.522 5zm1.005 0a4.5 4.5 0 0 1 8.945 0H3.527z"/>
|
||||||
|
</svg>
|
||||||
|
<span class="sidebar-link-text">Buckets</span>
|
||||||
|
</a>
|
||||||
|
{% if can_manage_iam %}
|
||||||
|
<a href="{{ url_for('ui.iam_dashboard') }}" class="sidebar-link {% if request.endpoint == 'ui.iam_dashboard' %}active{% endif %}" data-tooltip="IAM">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" viewBox="0 0 16 16">
|
||||||
|
<path d="M15 14s1 0 1-1-1-4-5-4-5 3-5 4 1 1 1 1h8zm-7.978-1A.261.261 0 0 1 7 12.996c.001-.264.167-1.03.76-1.72C8.312 10.629 9.282 10 11 10c1.717 0 2.687.63 3.24 1.276.593.69.758 1.457.76 1.72l-.008.002a.274.274 0 0 1-.014.002H7.022zM11 7a2 2 0 1 0 0-4 2 2 0 0 0 0 4zm3-2a3 3 0 1 1-6 0 3 3 0 0 1 6 0zM6.936 9.28a5.88 5.88 0 0 0-1.23-.247A7.35 7.35 0 0 0 5 9c-4 0-5 3-5 4 0 .667.333 1 1 1h4.216A2.238 2.238 0 0 1 5 13c0-1.01.377-2.042 1.09-2.904.243-.294.526-.569.846-.816zM4.92 10A5.493 5.493 0 0 0 4 13H1c0-.26.164-1.03.76-1.724.545-.636 1.492-1.256 3.16-1.275zM1.5 5.5a3 3 0 1 1 6 0 3 3 0 0 1-6 0zm3-2a2 2 0 1 0 0 4 2 2 0 0 0 0-4z"/>
|
||||||
|
</svg>
|
||||||
|
<span class="sidebar-link-text">IAM</span>
|
||||||
|
</a>
|
||||||
|
<a href="{{ url_for('ui.connections_dashboard') }}" class="sidebar-link {% if request.endpoint == 'ui.connections_dashboard' %}active{% endif %}" data-tooltip="Connections">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" viewBox="0 0 16 16">
|
||||||
|
<path fill-rule="evenodd" d="M6 3.5A1.5 1.5 0 0 1 7.5 2h1A1.5 1.5 0 0 1 10 3.5v1A1.5 1.5 0 0 1 8.5 6v1H14a.5.5 0 0 1 .5.5v1a.5.5 0 0 1-1 0V8h-5v.5a.5.5 0 0 1-1 0V8h-5v.5a.5.5 0 0 1-1 0v-1A.5.5 0 0 1 2 7h5.5V6A1.5 1.5 0 0 1 6 4.5v-1zM8.5 5a.5.5 0 0 0 .5-.5v-1a.5.5 0 0 0-.5-.5h-1a.5.5 0 0 0-.5.5v1a.5.5 0 0 0 .5.5h1zM0 11.5A1.5 1.5 0 0 1 1.5 10h1A1.5 1.5 0 0 1 4 11.5v1A1.5 1.5 0 0 1 2.5 14h-1A1.5 1.5 0 0 1 0 12.5v-1zm1.5-.5a.5.5 0 0 0-.5.5v1a.5.5 0 0 0 .5.5h1a.5.5 0 0 0 .5-.5v-1a.5.5 0 0 0-.5-.5h-1zm4.5.5A1.5 1.5 0 0 1 7.5 10h1a1.5 1.5 0 0 1 1.5 1.5v1A1.5 1.5 0 0 1 8.5 14h-1A1.5 1.5 0 0 1 6 12.5v-1zm1.5-.5a.5.5 0 0 0-.5.5v1a.5.5 0 0 0 .5.5h1a.5.5 0 0 0 .5-.5v-1a.5.5 0 0 0-.5-.5h-1zm4.5.5a1.5 1.5 0 0 1 1.5-1.5h1a1.5 1.5 0 0 1 1.5 1.5v1a1.5 1.5 0 0 1-1.5 1.5h-1a1.5 1.5 0 0 1-1.5-1.5v-1zm1.5-.5a.5.5 0 0 0-.5.5v1a.5.5 0 0 0 .5.5h1a.5.5 0 0 0 .5-.5v-1a.5.5 0 0 0-.5-.5h-1z"/>
|
||||||
|
</svg>
|
||||||
|
<span class="sidebar-link-text">Connections</span>
|
||||||
|
</a>
|
||||||
|
<a href="{{ url_for('ui.metrics_dashboard') }}" class="sidebar-link {% if request.endpoint == 'ui.metrics_dashboard' %}active{% endif %}" data-tooltip="Metrics">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" viewBox="0 0 16 16">
|
||||||
|
<path d="M8 4a.5.5 0 0 1 .5.5V6a.5.5 0 0 1-1 0V4.5A.5.5 0 0 1 8 4zM3.732 5.732a.5.5 0 0 1 .707 0l.915.914a.5.5 0 1 1-.708.708l-.914-.915a.5.5 0 0 1 0-.707zM2 10a.5.5 0 0 1 .5-.5h1.586a.5.5 0 0 1 0 1H2.5A.5.5 0 0 1 2 10zm9.5 0a.5.5 0 0 1 .5-.5h1.5a.5.5 0 0 1 0 1H12a.5.5 0 0 1-.5-.5zm.754-4.246a.389.389 0 0 0-.527-.02L7.547 9.31a.91.91 0 1 0 1.302 1.258l3.434-4.297a.389.389 0 0 0-.029-.518z"/>
|
||||||
|
<path fill-rule="evenodd" d="M0 10a8 8 0 1 1 15.547 2.661c-.442 1.253-1.845 1.602-2.932 1.25C11.309 13.488 9.475 13 8 13c-1.474 0-3.31.488-4.615.911-1.087.352-2.49.003-2.932-1.25A7.988 7.988 0 0 1 0 10zm8-7a7 7 0 0 0-6.603 9.329c.203.575.923.876 1.68.63C4.397 12.533 6.358 12 8 12s3.604.532 4.923.96c.757.245 1.477-.056 1.68-.631A7 7 0 0 0 8 3z"/>
|
||||||
|
</svg>
|
||||||
|
<span class="sidebar-link-text">Metrics</span>
|
||||||
|
</a>
|
||||||
|
{% endif %}
|
||||||
|
</div>
|
||||||
|
<div class="nav-section">
|
||||||
|
<span class="nav-section-title">Resources</span>
|
||||||
|
<a href="{{ url_for('ui.docs_page') }}" class="sidebar-link {% if request.endpoint == 'ui.docs_page' %}active{% endif %}" data-tooltip="Documentation">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" viewBox="0 0 16 16">
|
||||||
|
<path d="M1 2.828c.885-.37 2.154-.769 3.388-.893 1.33-.134 2.458.063 3.112.752v9.746c-.935-.53-2.12-.603-3.213-.493-1.18.12-2.37.461-3.287.811V2.828zm7.5-.141c.654-.689 1.782-.886 3.112-.752 1.234.124 2.503.523 3.388.893v9.923c-.918-.35-2.107-.692-3.287-.81-1.094-.111-2.278-.039-3.213.492V2.687zM8 1.783C7.015.936 5.587.81 4.287.94c-1.514.153-3.042.672-3.994 1.105A.5.5 0 0 0 0 2.5v11a.5.5 0 0 0 .707.455c.882-.4 2.303-.881 3.68-1.02 1.409-.142 2.59.087 3.223.877a.5.5 0 0 0 .78 0c.633-.79 1.814-1.019 3.222-.877 1.378.139 2.8.62 3.681 1.02A.5.5 0 0 0 16 13.5v-11a.5.5 0 0 0-.293-.455c-.952-.433-2.48-.952-3.994-1.105C10.413.809 8.985.936 8 1.783z"/>
|
||||||
|
</svg>
|
||||||
|
<span class="sidebar-link-text">Documentation</span>
|
||||||
|
</a>
|
||||||
|
</div>
|
||||||
|
{% endif %}
|
||||||
|
</nav>
|
||||||
|
</div>
|
||||||
|
<div class="sidebar-footer">
|
||||||
|
<button class="theme-toggle-sidebar" type="button" id="themeToggle" aria-label="Toggle dark mode">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" fill="currentColor" class="theme-icon" id="themeToggleSun" viewBox="0 0 16 16">
|
||||||
|
<path d="M8 11.5a3.5 3.5 0 1 1 0-7 3.5 3.5 0 0 1 0 7zm0 1.5a5 5 0 1 0 0-10 5 5 0 0 0 0 10zM8 0a.5.5 0 0 1 .5.5v1.555a.5.5 0 0 1-1 0V.5A.5.5 0 0 1 8 0zm0 12.945a.5.5 0 0 1 .5.5v2.055a.5.5 0 0 1-1 0v-2.055a.5.5 0 0 1 .5-.5zM2.343 2.343a.5.5 0 0 1 .707 0l1.1 1.1a.5.5 0 1 1-.708.707l-1.1-1.1a.5.5 0 0 1 0-.707zm9.507 9.507a.5.5 0 0 1 .707 0l1.1 1.1a.5.5 0 1 1-.707.708l-1.1-1.1a.5.5 0 0 1 0-.708zM0 8a.5.5 0 0 1 .5-.5h1.555a.5.5 0 0 1 0 1H.5A.5.5 0 0 1 0 8zm12.945 0a.5.5 0 0 1 .5-.5H15.5a.5.5 0 0 1 0 1h-2.055a.5.5 0 0 1-.5-.5zM2.343 13.657a.5.5 0 0 1 0-.707l1.1-1.1a.5.5 0 1 1 .708.707l-1.1 1.1a.5.5 0 0 1-.708 0zm9.507-9.507a.5.5 0 0 1 0-.707l1.1-1.1a.5.5 0 0 1 .707.708l-1.1 1.1a.5.5 0 0 1-.707 0z"/>
|
||||||
|
</svg>
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" fill="currentColor" class="theme-icon" id="themeToggleMoon" viewBox="0 0 16 16">
|
||||||
|
<path d="M6 .278a.768.768 0 0 1 .08.858 7.208 7.208 0 0 0-.878 3.46c0 4.021 3.278 7.277 7.318 7.277.527 0 1.04-.055 1.533-.16a.787.787 0 0 1 .81.316.733.733 0 0 1-.031.893A8.349 8.349 0 0 1 8.344 16C3.734 16 0 12.286 0 7.71 0 4.266 2.114 1.312 5.124.06A.752.752 0 0 1 6 .278z"/>
|
||||||
|
<path d="M10.794 3.148a.217.217 0 0 1 .412 0l.387 1.162c.173.518.579.924 1.097 1.097l1.162.387a.217.217 0 0 1 0 .412l-1.162.387a1.734 1.734 0 0 0-1.097 1.097l-.387 1.162a.217.217 0 0 1-.412 0l-.387-1.162A1.734 1.734 0 0 0 9.31 6.593l-1.162-.387a.217.217 0 0 1 0-.412l1.162-.387a1.734 1.734 0 0 0 1.097-1.097l.387-1.162zM13.863.099a.145.145 0 0 1 .274 0l.258.774c.115.346.386.617.732.732l.774.258a.145.145 0 0 1 0 .274l-.774.258a1.156 1.156 0 0 0-.732.732l-.258.774a.145.145 0 0 1-.274 0l-.258-.774a1.156 1.156 0 0 0-.732-.732l-.774-.258a.145.145 0 0 1 0-.274l.774-.258c.346-.115.617-.386.732-.732L13.863.1z"/>
|
||||||
|
</svg>
|
||||||
|
<span class="theme-toggle-text">Toggle theme</span>
|
||||||
|
</button>
|
||||||
|
{% if principal %}
|
||||||
|
<div class="sidebar-user" data-username="{{ principal.display_name }}">
|
||||||
|
<div class="user-avatar">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" fill="currentColor" viewBox="0 0 16 16">
|
||||||
|
<path d="M11 6a3 3 0 1 1-6 0 3 3 0 0 1 6 0z"/>
|
||||||
|
<path fill-rule="evenodd" d="M0 8a8 8 0 1 1 16 0A8 8 0 0 1 0 8zm8-7a7 7 0 0 0-5.468 11.37C3.242 11.226 4.805 10 8 10s4.757 1.225 5.468 2.37A7 7 0 0 0 8 1z"/>
|
||||||
|
</svg>
|
||||||
|
</div>
|
||||||
|
<div class="user-info">
|
||||||
|
<div class="user-name" title="{{ principal.display_name }}">{{ principal.display_name | truncate(16, true) }}</div>
|
||||||
|
<div class="user-key">{{ principal.access_key | truncate(12, true) }}</div>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
<form method="post" action="{{ url_for('ui.logout') }}" class="w-100">
|
||||||
|
<input type="hidden" name="csrf_token" value="{{ csrf_token() }}" />
|
||||||
|
<button class="sidebar-logout-btn" type="submit">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" fill="currentColor" viewBox="0 0 16 16">
|
||||||
|
<path fill-rule="evenodd" d="M10 12.5a.5.5 0 0 1-.5.5h-8a.5.5 0 0 1-.5-.5v-9a.5.5 0 0 1 .5-.5h8a.5.5 0 0 1 .5.5v2a.5.5 0 0 0 1 0v-2A1.5 1.5 0 0 0 9.5 2h-8A1.5 1.5 0 0 0 0 3.5v9A1.5 1.5 0 0 0 1.5 14h8a1.5 1.5 0 0 0 1.5-1.5v-2a.5.5 0 0 0-1 0v2z"/>
|
||||||
|
<path fill-rule="evenodd" d="M15.854 8.354a.5.5 0 0 0 0-.708l-3-3a.5.5 0 0 0-.708.708L14.293 7.5H5.5a.5.5 0 0 0 0 1h8.793l-2.147 2.146a.5.5 0 0 0 .708.708l3-3z"/>
|
||||||
|
</svg>
|
||||||
|
<span class="logout-text">Sign out</span>
|
||||||
|
</button>
|
||||||
|
</form>
|
||||||
|
{% endif %}
|
||||||
|
</div>
|
||||||
|
</aside>
|
||||||
|
|
||||||
|
<div class="main-wrapper">
|
||||||
|
<main class="main-content">
|
||||||
|
{% block content %}{% endblock %}
|
||||||
|
</main>
|
||||||
|
</div>
|
||||||
<div class="toast-container position-fixed bottom-0 end-0 p-3">
|
<div class="toast-container position-fixed bottom-0 end-0 p-3">
|
||||||
<div id="liveToast" class="toast" role="alert" aria-live="assertive" aria-atomic="true">
|
<div id="liveToast" class="toast" role="alert" aria-live="assertive" aria-atomic="true">
|
||||||
<div class="toast-header">
|
<div class="toast-header">
|
||||||
@@ -163,9 +275,11 @@
 (function () {
   const storageKey = 'myfsio-theme';
   const toggle = document.getElementById('themeToggle');
-  const label = document.getElementById('themeToggleLabel');
+  const toggleMobile = document.getElementById('themeToggleMobile');
   const sunIcon = document.getElementById('themeToggleSun');
   const moonIcon = document.getElementById('themeToggleMoon');
+  const sunIconMobile = document.getElementById('themeToggleSunMobile');
+  const moonIconMobile = document.getElementById('themeToggleMoonMobile');

   const applyTheme = (theme) => {
     document.documentElement.dataset.bsTheme = theme;
@@ -173,34 +287,79 @@
     try {
       localStorage.setItem(storageKey, theme);
     } catch (err) {
-      /* localStorage unavailable */
+      console.log("Error: local storage not available, cannot save theme preference.");
     }
-    if (label) {
-      label.textContent = theme === 'dark' ? 'Switch to light mode' : 'Switch to dark mode';
-    }
-    if (toggle) {
-      toggle.setAttribute('aria-pressed', theme === 'dark' ? 'true' : 'false');
-      toggle.setAttribute('title', theme === 'dark' ? 'Switch to light mode' : 'Switch to dark mode');
-      toggle.setAttribute('aria-label', theme === 'dark' ? 'Switch to light mode' : 'Switch to dark mode');
-    }
+    const isDark = theme === 'dark';
     if (sunIcon && moonIcon) {
-      const isDark = theme === 'dark';
       sunIcon.classList.toggle('d-none', !isDark);
       moonIcon.classList.toggle('d-none', isDark);
     }
+    if (sunIconMobile && moonIconMobile) {
+      sunIconMobile.classList.toggle('d-none', !isDark);
+      moonIconMobile.classList.toggle('d-none', isDark);
+    }
+    [toggle, toggleMobile].forEach(btn => {
+      if (btn) {
+        btn.setAttribute('aria-pressed', isDark ? 'true' : 'false');
+        btn.setAttribute('title', isDark ? 'Switch to light mode' : 'Switch to dark mode');
+        btn.setAttribute('aria-label', isDark ? 'Switch to light mode' : 'Switch to dark mode');
+      }
+    });
   };

   const current = document.documentElement.dataset.bsTheme || 'light';
   applyTheme(current);

-  toggle?.addEventListener('click', () => {
+  const handleToggle = () => {
     const next = document.documentElement.dataset.bsTheme === 'dark' ? 'light' : 'dark';
     applyTheme(next);
+  };
+
+  toggle?.addEventListener('click', handleToggle);
+  toggleMobile?.addEventListener('click', handleToggle);
+})();
+</script>
+<script>
+(function () {
+  const sidebar = document.getElementById('desktopSidebar');
+  const collapseBtn = document.getElementById('sidebarCollapseBtn');
+  const sidebarBrand = document.getElementById('sidebarBrand');
+  const storageKey = 'myfsio-sidebar-collapsed';
+
+  if (!sidebar || !collapseBtn) return;
+
+  const applyCollapsed = (collapsed) => {
+    sidebar.classList.toggle('sidebar-collapsed', collapsed);
+    document.body.classList.toggle('sidebar-is-collapsed', collapsed);
+    document.documentElement.classList.remove('sidebar-will-collapse');
+    try {
+      localStorage.setItem(storageKey, collapsed ? 'true' : 'false');
+    } catch (err) {}
+  };
+
+  try {
+    const stored = localStorage.getItem(storageKey);
+    applyCollapsed(stored === 'true');
+  } catch (err) {
+    document.documentElement.classList.remove('sidebar-will-collapse');
+  }
+
+  collapseBtn.addEventListener('click', () => {
+    const isCollapsed = sidebar.classList.contains('sidebar-collapsed');
+    applyCollapsed(!isCollapsed);
+  });
+
+  sidebarBrand?.addEventListener('click', (e) => {
+    const isCollapsed = sidebar.classList.contains('sidebar-collapsed');
+    if (isCollapsed) {
+      e.preventDefault();
+      applyCollapsed(false);
+    }
   });
 })();
 </script>
 <script>
-// Toast utility
 window.showToast = function(message, title = 'Notification', type = 'info') {
   const toastEl = document.getElementById('liveToast');
   const toastTitle = document.getElementById('toastTitle');
@@ -209,7 +368,6 @@
   toastTitle.textContent = title;
   toastMessage.textContent = message;

-  // Reset classes
   toastEl.classList.remove('text-bg-primary', 'text-bg-success', 'text-bg-danger', 'text-bg-warning');

   if (type === 'success') toastEl.classList.add('text-bg-success');
@@ -222,13 +380,11 @@
 </script>
 <script>
 (function () {
-  // Show flashed messages as toasts
   {% with messages = get_flashed_messages(with_categories=true) %}
   {% if messages %}
   {% for category, message in messages %}
-  // Map Flask categories to Toast types
-  // Flask: success, danger, warning, info
-  // Toast: success, error, warning, info
   var type = "{{ category }}";
   if (type === "danger") type = "error";
   window.showToast({{ message | tojson | safe }}, "Notification", type);
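Aside from the DOM wiring, the theme-toggle hunk above reduces to two small pieces of logic: a pure light/dark transition and a `localStorage` write that must be guarded because storage can be unavailable (private browsing, blocked site data). A minimal standalone sketch of that pattern — `nextTheme` and `saveTheme` are illustrative names, not part of the commits:

```javascript
// Pure state transition: mirrors `dataset.bsTheme === 'dark' ? 'light' : 'dark'`
// in the template's handleToggle.
const nextTheme = (current) => (current === 'dark' ? 'light' : 'dark');

// Guarded persistence: mirrors the template's try/catch around
// localStorage.setItem, returning success instead of throwing.
const saveTheme = (storage, key, theme) => {
  try {
    storage.setItem(key, theme);
    return true;
  } catch (err) {
    return false;
  }
};
```

Passing the storage object in (rather than reading `window.localStorage` directly) keeps the guard testable; the diff inlines the same try/catch at each call site.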
File diff suppressed because it is too large
@@ -40,48 +40,52 @@
 <div class="row g-3" id="buckets-container">
   {% for bucket in buckets %}
   <div class="col-md-6 col-xl-4 bucket-item">
-    <div class="card h-100 shadow-sm border-0 bucket-card" data-bucket-row data-href="{{ bucket.detail_url }}">
+    <div class="card h-100 shadow-sm bucket-card" data-bucket-row data-href="{{ bucket.detail_url }}">
       <div class="card-body">
-        <div class="d-flex justify-content-between align-items-start mb-3">
-          <div class="d-flex align-items-center gap-2">
-            <div class="bg-primary-subtle text-primary rounded p-2">
-              <svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" class="bi bi-hdd-network" viewBox="0 0 16 16">
-                <path d="M4.5 5a.5.5 0 1 0 0-1 .5.5 0 0 0 0 1zM3 4.5a.5.5 0 1 1-1 0 .5.5 0 0 1 1 0z"/>
-                <path d="M0 4a2 2 0 0 1 2-2h12a2 2 0 0 1 2 2v1a2 2 0 0 1-2 2H8.5v3a1.5 1.5 0 0 1 1.5 1.5v3.375a.5.5 0 0 1-.5.5h-2a.5.5 0 0 1-.5-.5V11.5a.5.5 0 0 1 .5-.5h1V9.5a.5.5 0 0 0-.5-.5h-1a.5.5 0 0 0-.5.5v1.5a.5.5 0 0 1 .5.5h1v3.375a.5.5 0 0 1-.5.5h-2a.5.5 0 0 1-.5-.5V11.5a.5.5 0 0 1 .5-.5h1V9.5a.5.5 0 0 0-.5-.5h-1a.5.5 0 0 0-.5.5v1.5a.5.5 0 0 1 .5.5h1v3.375a.5.5 0 0 1-.5.5h-2a.5.5 0 0 1-.5-.5V11.5a.5.5 0 0 1 .5-.5h1V9.5a.5.5 0 0 0-.5-.5h-1a.5.5 0 0 0-.5.5v1.5a.5.5 0 0 1 .5.5h1v3.375a.5.5 0 0 1-.5.5h-2a.5.5 0 0 1-.5-.5V11.5a.5.5 0 0 1 .5-.5h1V9.5a.5.5 0 0 0-.5-.5h-1a.5.5 0 0 0-.5.5v1.5a.5.5 0 0 1 .5.5h1V13.5a1.5 1.5 0 0 1 1.5-1.5h3V7H2a2 2 0 0 1-2-2V4zm1 0a1 1 0 0 0 1 1h12a1 1 0 0 0 1-1V4a1 1 0 0 0-1-1H2a1 1 0 0 0-1 1v1z"/>
+        <div class="d-flex justify-content-between align-items-start mb-2">
+          <div class="d-flex align-items-center gap-3">
+            <div class="bucket-icon">
+              <svg xmlns="http://www.w3.org/2000/svg" width="22" height="22" fill="currentColor" viewBox="0 0 16 16">
+                <path d="M2.522 5H2a.5.5 0 0 0-.494.574l1.372 9.149A1.5 1.5 0 0 0 4.36 16h7.278a1.5 1.5 0 0 0 1.483-1.277l1.373-9.149A.5.5 0 0 0 14 5h-.522A5.5 5.5 0 0 0 2.522 5zm1.005 0a4.5 4.5 0 0 1 8.945 0H3.527z"/>
               </svg>
             </div>
-            <h5 class="card-title mb-0 text-break">{{ bucket.meta.name }}</h5>
+            <div>
+              <h5 class="bucket-name text-break">{{ bucket.meta.name }}</h5>
+              <small class="text-muted">Created {{ bucket.meta.created_at.strftime('%b %d, %Y') }}</small>
+            </div>
           </div>
-          <span class="badge {{ bucket.access_badge }} rounded-pill">{{ bucket.access_label }}</span>
+          <span class="badge {{ bucket.access_badge }} bucket-access-badge">{{ bucket.access_label }}</span>
         </div>

-        <div class="d-flex justify-content-between align-items-end mt-4">
-          <div>
-            <div class="text-muted small mb-1">Storage Used</div>
-            <div class="fw-semibold">{{ bucket.summary.human_size }}</div>
-          </div>
-          <div class="text-end">
-            <div class="text-muted small mb-1">Objects</div>
-            <div class="fw-semibold">{{ bucket.summary.objects }}</div>
+        <div class="bucket-stats">
+          <div class="bucket-stat">
+            <div class="bucket-stat-value">{{ bucket.summary.human_size }}</div>
+            <div class="bucket-stat-label">Storage</div>
+          </div>
+          <div class="bucket-stat">
+            <div class="bucket-stat-value">{{ bucket.summary.objects }}</div>
+            <div class="bucket-stat-label">Objects</div>
           </div>
         </div>
       </div>
-      <div class="card-footer bg-transparent border-top-0 pt-0 pb-3">
-        <small class="text-muted">Created {{ bucket.meta.created_at.strftime('%b %d, %Y') }}</small>
-      </div>
     </div>
   </div>
   {% else %}
   <div class="col-12">
-    <div class="text-center py-5 bg-panel rounded-3 border border-dashed">
-      <div class="mb-3 text-muted">
-        <svg xmlns="http://www.w3.org/2000/svg" width="48" height="48" fill="currentColor" class="bi bi-bucket" viewBox="0 0 16 16">
+    <div class="empty-state bg-panel rounded-3 border border-dashed">
+      <div class="empty-state-icon">
+        <svg xmlns="http://www.w3.org/2000/svg" width="36" height="36" fill="currentColor" viewBox="0 0 16 16">
           <path d="M2.522 5H2a.5.5 0 0 0-.494.574l1.372 9.149A1.5 1.5 0 0 0 4.36 16h7.278a1.5 1.5 0 0 0 1.483-1.277l1.373-9.149A.5.5 0 0 0 14 5h-.522A5.5 5.5 0 0 0 2.522 5zm1.005 0a4.5 4.5 0 0 1 8.945 0H3.527z"/>
         </svg>
       </div>
-      <h5>No buckets found</h5>
-      <p class="text-muted mb-4">Get started by creating your first storage bucket.</p>
-      <button class="btn btn-primary" data-bs-toggle="modal" data-bs-target="#createBucketModal">Create Bucket</button>
+      <h5 class="mb-2">No buckets yet</h5>
+      <p class="text-muted mb-4">Create your first storage bucket to start organizing your files.</p>
+      <button class="btn btn-primary" data-bs-toggle="modal" data-bs-target="#createBucketModal">
+        <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="me-1" viewBox="0 0 16 16">
+          <path fill-rule="evenodd" d="M8 2a.5.5 0 0 1 .5.5v5h5a.5.5 0 0 1 0 1h-5v5a.5.5 0 0 1-1 0v-5h-5a.5.5 0 0 1 0-1h5v-5A.5.5 0 0 1 8 2Z"/>
+        </svg>
+        Create Bucket
+      </button>
     </div>
   </div>
   {% endfor %}
@@ -90,20 +94,31 @@
 <div class="modal fade" id="createBucketModal" tabindex="-1" aria-hidden="true">
   <div class="modal-dialog modal-dialog-centered">
     <div class="modal-content">
-      <div class="modal-header">
-        <h1 class="modal-title fs-5">Create bucket</h1>
+      <div class="modal-header border-0">
+        <h1 class="modal-title fs-5">
+          <svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" class="text-primary" viewBox="0 0 16 16">
+            <path d="M.5 9.9a.5.5 0 0 1 .5.5v2.5a1 1 0 0 0 1 1h12a1 1 0 0 0 1-1v-2.5a.5.5 0 0 1 1 0v2.5a2 2 0 0 1-2 2H2a2 2 0 0 1-2-2v-2.5a.5.5 0 0 1 .5-.5z"/>
+            <path d="M7.646 1.146a.5.5 0 0 1 .708 0l3 3a.5.5 0 0 1-.708.708L8.5 2.707V11.5a.5.5 0 0 1-1 0V2.707L5.354 4.854a.5.5 0 1 1-.708-.708l3-3z"/>
+          </svg>
+          Create bucket
+        </h1>
         <button type="button" class="btn-close" data-bs-dismiss="modal" aria-label="Close"></button>
       </div>
       <form method="post" action="{{ url_for('ui.create_bucket') }}">
         <input type="hidden" name="csrf_token" value="{{ csrf_token() }}" />
-        <div class="modal-body">
-          <label class="form-label">Bucket name</label>
-          <input class="form-control" type="text" name="bucket_name" pattern="[a-z0-9.-]{3,63}" placeholder="team-assets" required />
-          <div class="form-text">Must be 3-63 chars, lowercase letters, numbers, dots, or hyphens.</div>
+        <div class="modal-body pt-0">
+          <label class="form-label fw-medium">Bucket name</label>
+          <input class="form-control" type="text" name="bucket_name" pattern="[a-z0-9.-]{3,63}" placeholder="my-bucket-name" required autofocus />
+          <div class="form-text">Use 3-63 characters: lowercase letters, numbers, dots, or hyphens.</div>
         </div>
         <div class="modal-footer">
           <button type="button" class="btn btn-outline-secondary" data-bs-dismiss="modal">Cancel</button>
-          <button class="btn btn-primary" type="submit">Create</button>
+          <button class="btn btn-primary" type="submit">
+            <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="me-1" viewBox="0 0 16 16">
+              <path fill-rule="evenodd" d="M8 2a.5.5 0 0 1 .5.5v5h5a.5.5 0 0 1 0 1h-5v5a.5.5 0 0 1-1 0v-5h-5a.5.5 0 0 1 0-1h5v-5A.5.5 0 0 1 8 2Z"/>
+            </svg>
+            Create
+          </button>
         </div>
       </form>
     </div>
@@ -115,10 +130,10 @@
 {{ super() }}
 <script>
   (function () {
-    // Search functionality
     const searchInput = document.getElementById('bucket-search');
     const bucketItems = document.querySelectorAll('.bucket-item');
-    const noBucketsMsg = document.querySelector('.text-center.py-5'); // The "No buckets found" empty state
+    const noBucketsMsg = document.querySelector('.text-center.py-5');

     if (searchInput) {
       searchInput.addEventListener('input', (e) => {
@@ -137,7 +152,6 @@
       });
     }

-    // View toggle functionality
     const viewGrid = document.getElementById('view-grid');
     const viewList = document.getElementById('view-list');
     const container = document.getElementById('buckets-container');
@@ -152,8 +166,7 @@
         });
         cards.forEach(card => {
           card.classList.remove('h-100');
-          // Optional: Add flex-row to card-body content if we want a horizontal layout
-          // For now, full-width stacked cards is a good list view
         });
         localStorage.setItem('bucket-view-pref', 'list');
       } else {
@@ -172,7 +185,6 @@
     viewGrid.addEventListener('change', () => setView('grid'));
     viewList.addEventListener('change', () => setView('list'));

-    // Restore preference
     const pref = localStorage.getItem('bucket-view-pref');
     if (pref === 'list') {
       viewList.checked = true;
@@ -3,76 +3,166 @@
 {% block title %}Connections - S3 Compatible Storage{% endblock %}

 {% block content %}
-<div class="row mb-4">
-  <div class="col-md-12">
-    <h2>Remote Connections</h2>
-    <p class="text-muted">Manage connections to other S3-compatible services for replication.</p>
-  </div>
+<div class="page-header d-flex justify-content-between align-items-center mb-4">
+  <div>
+    <p class="text-uppercase text-muted small mb-1">Replication</p>
+    <h1 class="h3 mb-1 d-flex align-items-center gap-2">
+      <svg xmlns="http://www.w3.org/2000/svg" width="28" height="28" fill="currentColor" class="text-primary" viewBox="0 0 16 16">
+        <path d="M4.406 3.342A5.53 5.53 0 0 1 8 2c2.69 0 4.923 2 5.166 4.579C14.758 6.804 16 8.137 16 9.773 16 11.569 14.502 13 12.687 13H3.781C1.708 13 0 11.366 0 9.318c0-1.763 1.266-3.223 2.942-3.593.143-.863.698-1.723 1.464-2.383z"/>
+        <path d="M10.232 8.768l.546-.353a.25.25 0 0 0 0-.418l-.546-.354a.25.25 0 0 1-.116-.21V6.25a.25.25 0 0 0-.25-.25h-.5a.25.25 0 0 0-.25.25v1.183a.25.25 0 0 1-.116.21l-.546.354a.25.25 0 0 0 0 .418l.546.353a.25.25 0 0 1 .116.21v1.183a.25.25 0 0 0 .25.25h.5a.25.25 0 0 0 .25-.25V8.978a.25.25 0 0 1 .116-.21z"/>
+      </svg>
+      Remote Connections
+    </h1>
+    <p class="text-muted mb-0 mt-1">Manage connections to other S3-compatible services for replication.</p>
+  </div>
+  <div class="d-none d-md-block">
+    <span class="badge bg-primary bg-opacity-10 text-primary fs-6 px-3 py-2">
+      {{ connections|length }} connection{{ 's' if connections|length != 1 else '' }}
+    </span>
+  </div>
 </div>

-<div class="row">
-  <div class="col-md-4">
-    <div class="card">
-      <div class="card-header">
-        Add New Connection
+<div class="row g-4">
+  <div class="col-lg-4 col-md-5">
+    <div class="card shadow-sm border-0" style="border-radius: 1rem;">
+      <div class="card-header bg-transparent border-0 pt-4 pb-0 px-4">
+        <h5 class="fw-semibold d-flex align-items-center gap-2 mb-1">
+          <svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" class="text-primary" viewBox="0 0 16 16">
+            <path fill-rule="evenodd" d="M8 2a.5.5 0 0 1 .5.5v5h5a.5.5 0 0 1 0 1h-5v5a.5.5 0 0 1-1 0v-5h-5a.5.5 0 0 1 0-1h5v-5A.5.5 0 0 1 8 2Z"/>
+          </svg>
+          Add New Connection
+        </h5>
+        <p class="text-muted small mb-0">Connect to an S3-compatible endpoint</p>
       </div>
-      <div class="card-body">
-        <form method="POST" action="{{ url_for('ui.create_connection') }}">
+      <div class="card-body px-4 pb-4">
+        <form method="POST" action="{{ url_for('ui.create_connection') }}" id="createConnectionForm">
+          <input type="hidden" name="csrf_token" value="{{ csrf_token() }}"/>
           <div class="mb-3">
-            <label for="name" class="form-label">Name</label>
-            <input type="text" class="form-control" id="name" name="name" required placeholder="e.g. Production Backup">
+            <label for="name" class="form-label fw-medium">Name</label>
+            <input type="text" class="form-control" id="name" name="name" required placeholder="Production Backup">
           </div>
           <div class="mb-3">
-            <label for="endpoint_url" class="form-label">Endpoint URL</label>
+            <label for="endpoint_url" class="form-label fw-medium">Endpoint URL</label>
             <input type="url" class="form-control" id="endpoint_url" name="endpoint_url" required placeholder="https://s3.us-east-1.amazonaws.com">
           </div>
           <div class="mb-3">
-            <label for="region" class="form-label">Region</label>
+            <label for="region" class="form-label fw-medium">Region</label>
             <input type="text" class="form-control" id="region" name="region" value="us-east-1">
           </div>
           <div class="mb-3">
-            <label for="access_key" class="form-label">Access Key</label>
-            <input type="text" class="form-control" id="access_key" name="access_key" required>
+            <label for="access_key" class="form-label fw-medium">Access Key</label>
+            <input type="text" class="form-control font-monospace" id="access_key" name="access_key" required>
           </div>
           <div class="mb-3">
-            <label for="secret_key" class="form-label">Secret Key</label>
-            <input type="password" class="form-control" id="secret_key" name="secret_key" required>
+            <label for="secret_key" class="form-label fw-medium">Secret Key</label>
+            <div class="input-group">
+              <input type="password" class="form-control font-monospace" id="secret_key" name="secret_key" required>
+              <button class="btn btn-outline-secondary" type="button" onclick="togglePassword('secret_key')" title="Toggle visibility">
+                <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" viewBox="0 0 16 16">
+                  <path d="M16 8s-3-5.5-8-5.5S0 8 0 8s3 5.5 8 5.5S16 8 16 8zM1.173 8a13.133 13.133 0 0 1 1.66-2.043C4.12 4.668 5.88 3.5 8 3.5c2.12 0 3.879 1.168 5.168 2.457A13.133 13.133 0 0 1 14.828 8c-.058.087-.122.183-.195.288-.335.48-.83 1.12-1.465 1.755C11.879 11.332 10.119 12.5 8 12.5c-2.12 0-3.879-1.168-5.168-2.457A13.134 13.134 0 0 1 1.172 8z"/>
+                  <path d="M8 5.5a2.5 2.5 0 1 0 0 5 2.5 2.5 0 0 0 0-5zM4.5 8a3.5 3.5 0 1 1 7 0 3.5 3.5 0 0 1-7 0z"/>
+                </svg>
+              </button>
+            </div>
+          </div>
+          <div id="testResult" class="mb-3"></div>
+          <div class="d-grid gap-2">
+            <button type="button" class="btn btn-outline-secondary" id="testConnectionBtn">
+              <svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-1" viewBox="0 0 16 16">
+                <path d="M11.251.068a.5.5 0 0 1 .227.58L9.677 6.5H13a.5.5 0 0 1 .364.843l-8 8.5a.5.5 0 0 1-.842-.49L6.323 9.5H3a.5.5 0 0 1-.364-.843l8-8.5a.5.5 0 0 1 .615-.09z"/>
+              </svg>
+              Test Connection
+            </button>
+            <button type="submit" class="btn btn-primary">
+              <svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-1" viewBox="0 0 16 16">
+                <path fill-rule="evenodd" d="M8 2a.5.5 0 0 1 .5.5v5h5a.5.5 0 0 1 0 1h-5v5a.5.5 0 0 1-1 0v-5h-5a.5.5 0 0 1 0-1h5v-5A.5.5 0 0 1 8 2Z"/>
+              </svg>
+              Add Connection
+            </button>
           </div>
-          <button type="submit" class="btn btn-primary">Add Connection</button>
         </form>
       </div>
     </div>
   </div>

-  <div class="col-md-8">
-    <div class="card">
-      <div class="card-header">
-        Existing Connections
+  <div class="col-lg-8 col-md-7">
+    <div class="card shadow-sm border-0" style="border-radius: 1rem;">
+      <div class="card-header bg-transparent border-0 pt-4 pb-0 px-4 d-flex justify-content-between align-items-center">
+        <div>
+          <h5 class="fw-semibold d-flex align-items-center gap-2 mb-1">
+            <svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" class="text-muted" viewBox="0 0 16 16">
+              <path d="M0 1.5A1.5 1.5 0 0 1 1.5 0h2A1.5 1.5 0 0 1 5 1.5v2A1.5 1.5 0 0 1 3.5 5h-2A1.5 1.5 0 0 1 0 3.5v-2zM1.5 1a.5.5 0 0 0-.5.5v2a.5.5 0 0 0 .5.5h2a.5.5 0 0 0 .5-.5v-2a.5.5 0 0 0-.5-.5h-2zM0 8a2 2 0 0 1 2-2h12a2 2 0 0 1 2 2v5a2 2 0 0 1-2 2H2a2 2 0 0 1-2-2V8zm1 3v2a1 1 0 0 0 1 1h12a1 1 0 0 0 1-1v-2H1zm14-1V8a1 1 0 0 0-1-1H2a1 1 0 0 0-1 1v2h14zM2 8.5a.5.5 0 0 1 .5-.5h9a.5.5 0 0 1 0 1h-9a.5.5 0 0 1-.5-.5zm0 4a.5.5 0 0 1 .5-.5h6a.5.5 0 0 1 0 1h-6a.5.5 0 0 1-.5-.5z"/>
+            </svg>
+            Existing Connections
+          </h5>
+          <p class="text-muted small mb-0">Configured remote endpoints</p>
+        </div>
       </div>
-      <div class="card-body">
+      <div class="card-body px-4 pb-4">
         {% if connections %}
         <div class="table-responsive">
-          <table class="table table-hover">
-            <thead>
+          <table class="table table-hover align-middle mb-0">
+            <thead class="table-light">
               <tr>
-                <th>Name</th>
-                <th>Endpoint</th>
-                <th>Region</th>
-                <th>Access Key</th>
-                <th>Actions</th>
+                <th scope="col" style="width: 50px;">Status</th>
+                <th scope="col">Name</th>
+                <th scope="col">Endpoint</th>
+                <th scope="col">Region</th>
+                <th scope="col">Access Key</th>
+                <th scope="col" class="text-end">Actions</th>
               </tr>
             </thead>
             <tbody>
               {% for conn in connections %}
-              <tr>
-                <td>{{ conn.name }}</td>
-                <td>{{ conn.endpoint_url }}</td>
-                <td>{{ conn.region }}</td>
-                <td><code>{{ conn.access_key }}</code></td>
+              <tr data-connection-id="{{ conn.id }}">
+                <td class="text-center">
+                  <span class="connection-status" data-status="checking" title="Checking...">
+                    <span class="spinner-border spinner-border-sm text-muted" role="status" style="width: 12px; height: 12px;"></span>
+                  </span>
+                </td>
                 <td>
-                  <form method="POST" action="{{ url_for('ui.delete_connection', connection_id=conn.id) }}" onsubmit="return confirm('Are you sure?');" style="display: inline;">
-                    <button type="submit" class="btn btn-sm btn-danger">Delete</button>
-                  </form>
+                  <div class="d-flex align-items-center gap-2">
+                    <div class="connection-icon">
+                      <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" viewBox="0 0 16 16">
+                        <path d="M4.406 3.342A5.53 5.53 0 0 1 8 2c2.69 0 4.923 2 5.166 4.579C14.758 6.804 16 8.137 16 9.773 16 11.569 14.502 13 12.687 13H3.781C1.708 13 0 11.366 0 9.318c0-1.763 1.266-3.223 2.942-3.593.143-.863.698-1.723 1.464-2.383z"/>
+                      </svg>
+                    </div>
+                    <span class="fw-medium">{{ conn.name }}</span>
+                  </div>
+                </td>
+                <td>
+                  <span class="text-muted small text-truncate d-inline-block" style="max-width: 200px;" title="{{ conn.endpoint_url }}">{{ conn.endpoint_url }}</span>
+                </td>
+                <td><span class="badge bg-primary bg-opacity-10 text-primary">{{ conn.region }}</span></td>
+                <td><code class="small">{{ conn.access_key[:8] }}...{{ conn.access_key[-4:] }}</code></td>
+                <td class="text-end">
+                  <div class="btn-group btn-group-sm" role="group">
+                    <button type="button" class="btn btn-outline-secondary"
+                            data-bs-toggle="modal"
+                            data-bs-target="#editConnectionModal"
+                            data-id="{{ conn.id }}"
+                            data-name="{{ conn.name }}"
+                            data-endpoint="{{ conn.endpoint_url }}"
+                            data-region="{{ conn.region }}"
+                            data-access="{{ conn.access_key }}"
+                            data-secret="{{ conn.secret_key }}"
+                            title="Edit connection">
+                      <svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" viewBox="0 0 16 16">
+                        <path d="M12.146.146a.5.5 0 0 1 .708 0l3 3a.5.5 0 0 1 0 .708l-10 10a.5.5 0 0 1-.168.11l-5 2a.5.5 0 0 1-.65-.65l2-5a.5.5 0 0 1 .11-.168l10-10zM11.207 2.5 13.5 4.793 14.793 3.5 12.5 1.207 11.207 2.5zm1.586 3L10.5 3.207 4 9.707V10h.5a.5.5 0 0 1 .5.5v.5h.5a.5.5 0 0 1 .5.5v.5h.293l6.5-6.5z"/>
+                      </svg>
+                    </button>
+                    <button type="button" class="btn btn-outline-danger"
+                            data-bs-toggle="modal"
+                            data-bs-target="#deleteConnectionModal"
+                            data-id="{{ conn.id }}"
+                            data-name="{{ conn.name }}"
+                            title="Delete connection">
+                      <svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" viewBox="0 0 16 16">
+                        <path d="M5.5 5.5A.5.5 0 0 1 6 6v6a.5.5 0 0 1-1 0V6a.5.5 0 0 1 .5-.5zm2.5 0a.5.5 0 0 1 .5.5v6a.5.5 0 0 1-1 0V6a.5.5 0 0 1 .5-.5zm3 .5a.5.5 0 0 0-1 0v6a.5.5 0 0 0 1 0V6z"/>
+                        <path fill-rule="evenodd" d="M14.5 3a1 1 0 0 1-1 1H13v9a2 2 0 0 1-2 2H5a2 2 0 0 1-2-2V4h-.5a1 1 0 0 1-1-1V2a1 1 0 0 1 1-1H6a1 1 0 0 1 1-1h2a1 1 0 0 1 1 1h3.5a1 1 0 0 1 1 1v1zM4.118 4 4 4.059V13a1 1 0 0 0 1 1h6a1 1 0 0 0 1-1V4.059L11.882 4H4.118zM2.5 3V2h11v1h-11z"/>
+                      </svg>
+                    </button>
+                  </div>
                 </td>
               </tr>
               {% endfor %}
@@ -80,10 +170,272 @@
             </table>
           </div>
           {% else %}
-          <p class="text-muted text-center my-4">No remote connections configured.</p>
+          <div class="empty-state text-center py-5">
+            <div class="empty-state-icon mx-auto mb-3">
+              <svg xmlns="http://www.w3.org/2000/svg" width="48" height="48" fill="currentColor" viewBox="0 0 16 16">
+                <path d="M4.406 3.342A5.53 5.53 0 0 1 8 2c2.69 0 4.923 2 5.166 4.579C14.758 6.804 16 8.137 16 9.773 16 11.569 14.502 13 12.687 13H3.781C1.708 13 0 11.366 0 9.318c0-1.763 1.266-3.223 2.942-3.593.143-.863.698-1.723 1.464-2.383z"/>
+              </svg>
+            </div>
+            <h5 class="fw-semibold mb-2">No connections yet</h5>
+            <p class="text-muted mb-0">Add your first remote connection to enable bucket replication.</p>
+          </div>
           {% endif %}
         </div>
       </div>
     </div>
   </div>

+<div class="modal fade" id="editConnectionModal" tabindex="-1" aria-hidden="true">
+  <div class="modal-dialog modal-dialog-centered">
+    <div class="modal-content">
+      <div class="modal-header border-0 pb-0">
+        <h5 class="modal-title fw-semibold">
+          <svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" class="text-primary" viewBox="0 0 16 16">
+            <path d="M12.146.146a.5.5 0 0 1 .708 0l3 3a.5.5 0 0 1 0 .708l-10 10a.5.5 0 0 1-.168.11l-5 2a.5.5 0 0 1-.65-.65l2-5a.5.5 0 0 1 .11-.168l10-10zM11.207 2.5 13.5 4.793 14.793 3.5 12.5 1.207 11.207 2.5zm1.586 3L10.5 3.207 4 9.707V10h.5a.5.5 0 0 1 .5.5v.5h.5a.5.5 0 0 1 .5.5v.5h.293l6.5-6.5zm-9.761 5.175-.106.106-1.528 3.821 3.821-1.528.106-.106A.5.5 0 0 1 5 12.5V12h-.5a.5.5 0 0 1-.5-.5V11h-.5a.5.5 0 0 1-.468-.325z"/>
+          </svg>
+          Edit Connection
+        </h5>
+        <button type="button" class="btn-close" data-bs-dismiss="modal" aria-label="Close"></button>
+      </div>
+      <form method="POST" id="editConnectionForm">
+        <input type="hidden" name="csrf_token" value="{{ csrf_token() }}"/>
+        <div class="modal-body">
+          <div class="mb-3">
+            <label for="edit_name" class="form-label fw-medium">Name</label>
+            <input type="text" class="form-control" id="edit_name" name="name" required>
+          </div>
+          <div class="mb-3">
+            <label for="edit_endpoint_url" class="form-label fw-medium">Endpoint URL</label>
+            <input type="url" class="form-control" id="edit_endpoint_url" name="endpoint_url" required>
+          </div>
+          <div class="mb-3">
+            <label for="edit_region" class="form-label fw-medium">Region</label>
+            <input type="text" class="form-control" id="edit_region" name="region" required>
+          </div>
+          <div class="mb-3">
+            <label for="edit_access_key" class="form-label fw-medium">Access Key</label>
+            <input type="text" class="form-control font-monospace" id="edit_access_key" name="access_key" required>
+          </div>
+          <div class="mb-3">
+            <label for="edit_secret_key" class="form-label fw-medium">Secret Key</label>
+            <div class="input-group">
+              <input type="password" class="form-control font-monospace" id="edit_secret_key" name="secret_key" required>
+              <button class="btn btn-outline-secondary" type="button" onclick="togglePassword('edit_secret_key')">
+                <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" viewBox="0 0 16 16">
+                  <path d="M16 8s-3-5.5-8-5.5S0 8 0 8s3 5.5 8 5.5S16 8 16 8zM1.173 8a13.133 13.133 0 0 1 1.66-2.043C4.12 4.668 5.88 3.5 8 3.5c2.12 0 3.879 1.168 5.168 2.457A13.133 13.133 0 0 1 14.828 8c-.058.087-.122.183-.195.288-.335.48-.83 1.12-1.465 1.755C11.879 11.332 10.119 12.5 8 12.5c-2.12 0-3.879-1.168-5.168-2.457A13.134 13.134 0 0 1 1.172 8z"/>
+                  <path d="M8 5.5a2.5 2.5 0 1 0 0 5 2.5 2.5 0 0 0 0-5zM4.5 8a3.5 3.5 0 1 1 7 0 3.5 3.5 0 0 1-7 0z"/>
+                </svg>
+              </button>
+            </div>
+          </div>
+          <div id="editTestResult" class="mt-2"></div>
+        </div>
+        <div class="modal-footer">
+          <button type="button" class="btn btn-outline-secondary" id="editTestConnectionBtn">
+            <svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-1" viewBox="0 0 16 16">
+              <path d="M11.251.068a.5.5 0 0 1 .227.58L9.677 6.5H13a.5.5 0 0 1 .364.843l-8 8.5a.5.5 0 0 1-.842-.49L6.323 9.5H3a.5.5 0 0 1-.364-.843l8-8.5a.5.5 0 0 1 .615-.09z"/>
+            </svg>
+            Test
+          </button>
+          <button type="button" class="btn btn-outline-secondary" data-bs-dismiss="modal">Cancel</button>
+          <button type="submit" class="btn btn-primary">
+            <svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-1" viewBox="0 0 16 16">
+              <path d="M10.97 4.97a.75.75 0 0 1 1.07 1.05l-3.99 4.99a.75.75 0 0 1-1.08.02L4.324 8.384a.75.75 0 1 1 1.06-1.06l2.094 2.093 3.473-4.425a.267.267 0 0 1 .02-.022z"/>
+            </svg>
+            Save
+          </button>
+        </div>
+      </form>
+    </div>
+  </div>
+</div>
+
+<div class="modal fade" id="deleteConnectionModal" tabindex="-1" aria-hidden="true">
+  <div class="modal-dialog modal-dialog-centered">
+    <div class="modal-content">
+      <div class="modal-header border-0 pb-0">
+        <h5 class="modal-title fw-semibold">
+          <svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" class="text-danger" viewBox="0 0 16 16">
+            <path d="M5.5 5.5A.5.5 0 0 1 6 6v6a.5.5 0 0 1-1 0V6a.5.5 0 0 1 .5-.5zm2.5 0a.5.5 0 0 1 .5.5v6a.5.5 0 0 1-1 0V6a.5.5 0 0 1 .5-.5zm3 .5a.5.5 0 0 0-1 0v6a.5.5 0 0 0 1 0V6z"/>
+            <path fill-rule="evenodd" d="M14.5 3a1 1 0 0 1-1 1H13v9a2 2 0 0 1-2 2H5a2 2 0 0 1-2-2V4h-.5a1 1 0 0 1-1-1V2a1 1 0 0 1 1-1H6a1 1 0 0 1 1-1h2a1 1 0 0 1 1 1h3.5a1 1 0 0 1 1 1v1zM4.118 4 4 4.059V13a1 1 0 0 0 1 1h6a1 1 0 0 0 1-1V4.059L11.882 4H4.118zM2.5 3V2h11v1h-11z"/>
+          </svg>
+          Delete Connection
+        </h5>
+        <button type="button" class="btn-close" data-bs-dismiss="modal" aria-label="Close"></button>
+      </div>
+      <div class="modal-body">
+        <p>Are you sure you want to delete <strong id="deleteConnectionName"></strong>?</p>
+        <div class="alert alert-warning d-flex align-items-start small" role="alert">
+          <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="flex-shrink-0 me-2 mt-0" viewBox="0 0 16 16">
+            <path d="M8 16A8 8 0 1 0 8 0a8 8 0 0 0 0 16zm.93-9.412-1 4.705c-.07.34.029.533.304.533.194 0 .487-.07.686-.246l-.088.416c-.287.346-.92.598-1.465.598-.703 0-1.002-.422-.808-1.319l.738-3.468c.064-.293.006-.399-.287-.47l-.451-.081.082-.381 2.29-.287zM8 5.5a1 1 0 1 1 0-2 1 1 0 0 1 0 2z"/>
+          </svg>
+          <div>This will stop any replication rules using this connection.</div>
+        </div>
+      </div>
+      <div class="modal-footer">
+        <button type="button" class="btn btn-outline-secondary" data-bs-dismiss="modal">Cancel</button>
+        <form method="POST" id="deleteConnectionForm">
+          <input type="hidden" name="csrf_token" value="{{ csrf_token() }}"/>
+          <button type="submit" class="btn btn-danger">
+            <svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-1" viewBox="0 0 16 16">
+              <path d="M5.5 5.5A.5.5 0 0 1 6 6v6a.5.5 0 0 1-1 0V6a.5.5 0 0 1 .5-.5zm2.5 0a.5.5 0 0 1 .5.5v6a.5.5 0 0 1-1 0V6a.5.5 0 0 1 .5-.5zm3 .5a.5.5 0 0 0-1 0v6a.5.5 0 0 0 1 0V6z"/>
+              <path fill-rule="evenodd" d="M14.5 3a1 1 0 0 1-1 1H13v9a2 2 0 0 1-2 2H5a2 2 0 0 1-2-2V4h-.5a1 1 0 0 1-1-1V2a1 1 0 0 1 1-1H6a1 1 0 0 1 1-1h2a1 1 0 0 1 1 1h3.5a1 1 0 0 1 1 1v1zM4.118 4 4 4.059V13a1 1 0 0 0 1 1h6a1 1 0 0 0 1-1V4.059L11.882 4H4.118zM2.5 3V2h11v1h-11z"/>
+            </svg>
+            Delete
+          </button>
+        </form>
+      </div>
+    </div>
+  </div>
+</div>
+
+<script>
+  function togglePassword(id) {
+    const input = document.getElementById(id);
+    if (input.type === "password") {
+      input.type = "text";
+    } else {
+      input.type = "password";
+    }
+  }
+
+  async function testConnection(formId, resultId) {
+    const form = document.getElementById(formId);
+    const resultDiv = document.getElementById(resultId);
+    const formData = new FormData(form);
+    const data = Object.fromEntries(formData.entries());
+
+    resultDiv.innerHTML = '<div class="text-info"><span class="spinner-border spinner-border-sm" role="status" aria-hidden="true"></span> Testing connection...</div>';
+
+    const controller = new AbortController();
+    const timeoutId = setTimeout(() => controller.abort(), 20000);
+
+    try {
+      const response = await fetch("{{ url_for('ui.test_connection') }}", {
+        method: "POST",
+        headers: {
+          "Content-Type": "application/json",
+          "X-CSRFToken": "{{ csrf_token() }}"
+        },
+        body: JSON.stringify(data),
+        signal: controller.signal
+      });
+      clearTimeout(timeoutId);
+
+      const result = await response.json();
+      if (response.ok) {
+        resultDiv.innerHTML = `<div class="text-success">
+          <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="me-1" viewBox="0 0 16 16">
+            <path d="M16 8A8 8 0 1 1 0 8a8 8 0 0 1 16 0zm-3.97-3.03a.75.75 0 0 0-1.08.022L7.477 9.417 5.384 7.323a.75.75 0 0 0-1.06 1.06L6.97 11.03a.75.75 0 0 0 1.079-.02l3.992-4.99a.75.75 0 0 0-.01-1.05z"/>
+          </svg>
+          ${result.message}
+        </div>`;
+      } else {
+        resultDiv.innerHTML = `<div class="text-danger">
+          <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="me-1" viewBox="0 0 16 16">
+            <path d="M16 8A8 8 0 1 1 0 8a8 8 0 0 1 16 0zM5.354 4.646a.5.5 0 1 0-.708.708L7.293 8l-2.647 2.646a.5.5 0 0 0 .708.708L8 8.707l2.646 2.647a.5.5 0 0 0 .708-.708L8.707 8l2.647-2.646a.5.5 0 0 0-.708-.708L8 7.293 5.354 4.646z"/>
+          </svg>
+          ${result.message}
+        </div>`;
+      }
+    } catch (error) {
+      clearTimeout(timeoutId);
+      if (error.name === 'AbortError') {
+        resultDiv.innerHTML = `<div class="text-danger">
+          <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="me-1" viewBox="0 0 16 16">
+            <path d="M16 8A8 8 0 1 1 0 8a8 8 0 0 1 16 0zM5.354 4.646a.5.5 0 1 0-.708.708L7.293 8l-2.647 2.646a.5.5 0 0 0 .708.708L8 8.707l2.646 2.647a.5.5 0 0 0 .708-.708L8.707 8l2.647-2.646a.5.5 0 0 0-.708-.708L8 7.293 5.354 4.646z"/>
+          </svg>
+          Connection test timed out - endpoint may be unreachable
+        </div>`;
+      } else {
+        resultDiv.innerHTML = `<div class="text-danger">
+          <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="me-1" viewBox="0 0 16 16">
+            <path d="M16 8A8 8 0 1 1 0 8a8 8 0 0 1 16 0zM5.354 4.646a.5.5 0 1 0-.708.708L7.293 8l-2.647 2.646a.5.5 0 0 0 .708.708L8 8.707l2.646 2.647a.5.5 0 0 0 .708-.708L8.707 8l2.647-2.646a.5.5 0 0 0-.708-.708L8 7.293 5.354 4.646z"/>
+          </svg>
+          Connection failed: Network error
+        </div>`;
+      }
+    }
+  }
+
+  document.getElementById('testConnectionBtn').addEventListener('click', () => {
+    testConnection('createConnectionForm', 'testResult');
+  });
+
+  document.getElementById('editTestConnectionBtn').addEventListener('click', () => {
+    testConnection('editConnectionForm', 'editTestResult');
+  });
+
+  const editModal = document.getElementById('editConnectionModal');
+  editModal.addEventListener('show.bs.modal', event => {
+    const button = event.relatedTarget;
+    const id = button.getAttribute('data-id');
+
+    document.getElementById('edit_name').value = button.getAttribute('data-name');
+    document.getElementById('edit_endpoint_url').value = button.getAttribute('data-endpoint');
+    document.getElementById('edit_region').value = button.getAttribute('data-region');
+    document.getElementById('edit_access_key').value = button.getAttribute('data-access');
+    document.getElementById('edit_secret_key').value = button.getAttribute('data-secret');
+    document.getElementById('editTestResult').innerHTML = '';
+
+    const form = document.getElementById('editConnectionForm');
+    form.action = "{{ url_for('ui.update_connection', connection_id='CONN_ID') }}".replace('CONN_ID', id);
+  });
+
+  const deleteModal = document.getElementById('deleteConnectionModal');
+  deleteModal.addEventListener('show.bs.modal', event => {
+    const button = event.relatedTarget;
+    const id = button.getAttribute('data-id');
+    const name = button.getAttribute('data-name');
+
+    document.getElementById('deleteConnectionName').textContent = name;
+    const form = document.getElementById('deleteConnectionForm');
+    form.action = "{{ url_for('ui.delete_connection', connection_id='CONN_ID') }}".replace('CONN_ID', id);
+  });
+
+  async function checkConnectionHealth(connectionId, statusEl) {
+    try {
+      const controller = new AbortController();
+      const timeoutId = setTimeout(() => controller.abort(), 15000);
+
+      const response = await fetch(`/ui/connections/${connectionId}/health`, {
+        signal: controller.signal
+      });
+      clearTimeout(timeoutId);
+
+      const data = await response.json();
+      if (data.healthy) {
+        statusEl.innerHTML = `
+          <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="text-success" viewBox="0 0 16 16">
+            <path d="M16 8A8 8 0 1 1 0 8a8 8 0 0 1 16 0zm-3.97-3.03a.75.75 0 0 0-1.08.022L7.477 9.417 5.384 7.323a.75.75 0 0 0-1.06 1.06L6.97 11.03a.75.75 0 0 0 1.079-.02l3.992-4.99a.75.75 0 0 0-.01-1.05z"/>
+          </svg>`;
+        statusEl.setAttribute('data-status', 'healthy');
|
||||||
|
statusEl.setAttribute('title', 'Connected');
|
||||||
|
} else {
|
||||||
|
statusEl.innerHTML = `
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="text-danger" viewBox="0 0 16 16">
|
||||||
|
<path d="M16 8A8 8 0 1 1 0 8a8 8 0 0 1 16 0zM5.354 4.646a.5.5 0 1 0-.708.708L7.293 8l-2.647 2.646a.5.5 0 0 0 .708.708L8 8.707l2.646 2.647a.5.5 0 0 0 .708-.708L8.707 8l2.647-2.646a.5.5 0 0 0-.708-.708L8 7.293 5.354 4.646z"/>
|
||||||
|
</svg>`;
|
||||||
|
statusEl.setAttribute('data-status', 'unhealthy');
|
||||||
|
statusEl.setAttribute('title', data.error || 'Unreachable');
|
||||||
|
}
|
||||||
|
} catch (error) {
|
||||||
|
statusEl.innerHTML = `
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="text-warning" viewBox="0 0 16 16">
|
||||||
|
<path d="M8.982 1.566a1.13 1.13 0 0 0-1.96 0L.165 13.233c-.457.778.091 1.767.98 1.767h13.713c.889 0 1.438-.99.98-1.767L8.982 1.566zM8 5c.535 0 .954.462.9.995l-.35 3.507a.552.552 0 0 1-1.1 0L7.1 5.995A.905.905 0 0 1 8 5zm.002 6a1 1 0 1 1 0 2 1 1 0 0 1 0-2z"/>
|
||||||
|
</svg>`;
|
||||||
|
statusEl.setAttribute('data-status', 'unknown');
|
||||||
|
statusEl.setAttribute('title', 'Could not check status');
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
const connectionRows = document.querySelectorAll('tr[data-connection-id]');
|
||||||
|
connectionRows.forEach((row, index) => {
|
||||||
|
const connectionId = row.getAttribute('data-connection-id');
|
||||||
|
const statusEl = row.querySelector('.connection-status');
|
||||||
|
if (statusEl) {
|
||||||
|
setTimeout(() => checkConnectionHealth(connectionId, statusEl), index * 200);
|
||||||
|
}
|
||||||
|
});
|
||||||
|
</script>
|
||||||
{% endblock %}
|
{% endblock %}
|
||||||
|
|||||||
@@ -14,6 +14,37 @@
</div>
</section>
<div class="row g-4">
  <div class="col-12 d-xl-none">
    <div class="card shadow-sm docs-sidebar-mobile mb-0">
      <div class="card-body py-3">
        <div class="d-flex align-items-center justify-content-between mb-2">
          <h3 class="h6 text-uppercase text-muted mb-0">On this page</h3>
          <button class="btn btn-sm btn-outline-secondary" type="button" data-bs-toggle="collapse" data-bs-target="#mobileDocsToc" aria-expanded="false" aria-controls="mobileDocsToc">
            <svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" viewBox="0 0 16 16">
              <path fill-rule="evenodd" d="M1.646 4.646a.5.5 0 0 1 .708 0L8 10.293l5.646-5.647a.5.5 0 0 1 .708.708l-6 6a.5.5 0 0 1-.708 0l-6-6a.5.5 0 0 1 0-.708z"/>
            </svg>
          </button>
        </div>
        <div class="collapse" id="mobileDocsToc">
          <ul class="list-unstyled docs-toc mb-0 small">
            <li><a href="#setup">Set up & run</a></li>
            <li><a href="#background">Running in background</a></li>
            <li><a href="#auth">Authentication & IAM</a></li>
            <li><a href="#console">Console tour</a></li>
            <li><a href="#automation">Automation / CLI</a></li>
            <li><a href="#api">REST endpoints</a></li>
            <li><a href="#examples">API Examples</a></li>
            <li><a href="#replication">Site Replication</a></li>
            <li><a href="#versioning">Object Versioning</a></li>
            <li><a href="#quotas">Bucket Quotas</a></li>
            <li><a href="#encryption">Encryption</a></li>
            <li><a href="#lifecycle">Lifecycle Rules</a></li>
            <li><a href="#troubleshooting">Troubleshooting</a></li>
          </ul>
        </div>
      </div>
    </div>
  </div>
<div class="col-xl-8">
<article id="setup" class="card shadow-sm docs-section">
<div class="card-body">
@@ -31,20 +62,194 @@
. .venv/Scripts/activate  # PowerShell: .\\.venv\\Scripts\\Activate.ps1
pip install -r requirements.txt

# Run both API and UI (Development)
python run.py

# Run in Production (Waitress server)
python run.py --prod

# Or run individually
python run.py --mode api
python run.py --mode ui
</code></pre>
<h3 class="h6 mt-4 mb-2">Configuration</h3>
<p class="text-muted small">Configuration defaults live in <code>app/config.py</code>. You can override them using environment variables. This is critical for production deployments behind proxies.</p>
<div class="table-responsive">
  <table class="table table-sm table-bordered small mb-0">
    <thead class="table-light">
      <tr>
        <th style="min-width: 180px;">Variable</th>
        <th style="min-width: 120px;">Default</th>
        <th class="text-wrap" style="min-width: 250px;">Description</th>
      </tr>
    </thead>
    <tbody>
      <tr>
        <td><code>API_BASE_URL</code></td>
        <td><code>None</code></td>
        <td>The public URL of the API. <strong>Required</strong> if running behind a proxy. Ensures presigned URLs are generated correctly.</td>
      </tr>
      <tr>
        <td><code>STORAGE_ROOT</code></td>
        <td><code>./data</code></td>
        <td>Directory for buckets and objects.</td>
      </tr>
      <tr>
        <td><code>MAX_UPLOAD_SIZE</code></td>
        <td><code>1 GB</code></td>
        <td>Max request body size in bytes.</td>
      </tr>
      <tr>
        <td><code>SECRET_KEY</code></td>
        <td>(Auto-generated)</td>
        <td>Flask session key. Auto-generates if not set. <strong>Set explicitly in production.</strong></td>
      </tr>
      <tr>
        <td><code>APP_HOST</code></td>
        <td><code>0.0.0.0</code></td>
        <td>Bind interface.</td>
      </tr>
      <tr>
        <td><code>APP_PORT</code></td>
        <td><code>5000</code></td>
        <td>Listen port (UI uses 5100).</td>
      </tr>
      <tr class="table-secondary">
        <td colspan="3" class="fw-semibold">CORS Settings</td>
      </tr>
      <tr>
        <td><code>CORS_ORIGINS</code></td>
        <td><code>*</code></td>
        <td>Allowed origins. <strong>Restrict in production.</strong></td>
      </tr>
      <tr>
        <td><code>CORS_METHODS</code></td>
        <td><code>GET,PUT,POST,DELETE,OPTIONS,HEAD</code></td>
        <td>Allowed HTTP methods.</td>
      </tr>
      <tr>
        <td><code>CORS_ALLOW_HEADERS</code></td>
        <td><code>*</code></td>
        <td>Allowed request headers.</td>
      </tr>
      <tr>
        <td><code>CORS_EXPOSE_HEADERS</code></td>
        <td><code>*</code></td>
        <td>Response headers visible to browsers (e.g., <code>ETag</code>).</td>
      </tr>
      <tr class="table-secondary">
        <td colspan="3" class="fw-semibold">Security Settings</td>
      </tr>
      <tr>
        <td><code>AUTH_MAX_ATTEMPTS</code></td>
        <td><code>5</code></td>
        <td>Failed login attempts before lockout.</td>
      </tr>
      <tr>
        <td><code>AUTH_LOCKOUT_MINUTES</code></td>
        <td><code>15</code></td>
        <td>Lockout duration after max failed attempts.</td>
      </tr>
      <tr>
        <td><code>RATE_LIMIT_DEFAULT</code></td>
        <td><code>200 per minute</code></td>
        <td>Default API rate limit.</td>
      </tr>
      <tr class="table-secondary">
        <td colspan="3" class="fw-semibold">Encryption Settings</td>
      </tr>
      <tr>
        <td><code>ENCRYPTION_ENABLED</code></td>
        <td><code>false</code></td>
        <td>Enable server-side encryption support.</td>
      </tr>
      <tr>
        <td><code>KMS_ENABLED</code></td>
        <td><code>false</code></td>
        <td>Enable KMS key management for encryption.</td>
      </tr>
      <tr class="table-secondary">
        <td colspan="3" class="fw-semibold">Logging Settings</td>
      </tr>
      <tr>
        <td><code>LOG_LEVEL</code></td>
        <td><code>INFO</code></td>
        <td>Log verbosity: DEBUG, INFO, WARNING, ERROR.</td>
      </tr>
      <tr>
        <td><code>LOG_TO_FILE</code></td>
        <td><code>true</code></td>
        <td>Enable file logging.</td>
      </tr>
    </tbody>
  </table>
</div>
<div class="alert alert-warning mt-3 mb-0 small">
  <strong>Production Checklist:</strong> Set <code>SECRET_KEY</code>, restrict <code>CORS_ORIGINS</code>, configure <code>API_BASE_URL</code>, enable HTTPS via a reverse proxy, and use the <code>--prod</code> flag.
</div>
</div>
</article>
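The override pattern the table describes can be sketched as a small helper layer. This is an illustrative sketch only, not the actual contents of `app/config.py`: the helper names (`env_str`, `env_int`, `env_bool`) are ours; only the variable names and defaults come from the table above.

```python
import os

def env_str(name: str, default: str) -> str:
    # Environment value wins; otherwise fall back to the coded default.
    return os.environ.get(name, default)

def env_int(name: str, default: int) -> int:
    raw = os.environ.get(name)
    return int(raw) if raw is not None else default

def env_bool(name: str, default: bool) -> bool:
    raw = os.environ.get(name)
    if raw is None:
        return default
    return raw.strip().lower() in ("1", "true", "yes", "on")

# Names and defaults mirror the configuration table above.
STORAGE_ROOT = env_str("STORAGE_ROOT", "./data")
APP_PORT = env_int("APP_PORT", 5000)
MAX_UPLOAD_SIZE = env_int("MAX_UPLOAD_SIZE", 1024 ** 3)  # 1 GB in bytes
ENCRYPTION_ENABLED = env_bool("ENCRYPTION_ENABLED", False)
```

With this shape, `STORAGE_ROOT=/var/lib/myfsio python run.py --prod` overrides the storage directory without touching the code.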
<article id="background" class="card shadow-sm docs-section">
  <div class="card-body">
    <div class="d-flex align-items-center gap-2 mb-3">
      <span class="docs-section-kicker">02</span>
      <h2 class="h4 mb-0">Running in background</h2>
    </div>
    <p class="text-muted">For production or server deployments, run MyFSIO as a background service so it persists after you close the terminal.</p>

    <h3 class="h6 text-uppercase text-muted mt-4">Quick Start (nohup)</h3>
    <p class="text-muted small">The simplest way to run in the background; survives terminal close:</p>
    <pre class="mb-3"><code class="language-bash"># Using Python
nohup python run.py --prod > /dev/null 2>&1 &

# Using compiled binary
nohup ./myfsio > /dev/null 2>&1 &

# Check if running
ps aux | grep myfsio</code></pre>

    <h3 class="h6 text-uppercase text-muted mt-4">Screen / Tmux</h3>
    <p class="text-muted small">Attach/detach from a persistent session:</p>
    <pre class="mb-3"><code class="language-bash"># Start in a detached screen session
screen -dmS myfsio ./myfsio

# Attach to view logs
screen -r myfsio

# Detach: press Ctrl+A, then D</code></pre>

    <h3 class="h6 text-uppercase text-muted mt-4">Systemd (Recommended for Production)</h3>
    <p class="text-muted small">Create <code>/etc/systemd/system/myfsio.service</code>:</p>
    <pre class="mb-3"><code class="language-ini">[Unit]
Description=MyFSIO S3-Compatible Storage
After=network.target

[Service]
Type=simple
User=myfsio
WorkingDirectory=/opt/myfsio
ExecStart=/opt/myfsio/myfsio
Restart=on-failure
RestartSec=5
Environment=STORAGE_ROOT=/var/lib/myfsio
Environment=API_BASE_URL=https://s3.example.com

[Install]
WantedBy=multi-user.target</code></pre>
    <p class="text-muted small">Then enable and start:</p>
    <pre class="mb-0"><code class="language-bash">sudo systemctl daemon-reload
sudo systemctl enable myfsio
sudo systemctl start myfsio

# Check status
sudo systemctl status myfsio
sudo journalctl -u myfsio -f  # View logs</code></pre>
  </div>
</article>
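When MyFSIO runs as a background service, deployment scripts often need to block until the listener is actually up before doing anything else (for example, before seeding buckets). A small generic sketch, assuming the default API port from the configuration table; nothing here is a MyFSIO API:

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 30.0) -> bool:
    """Poll a TCP port until it accepts connections or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # A successful connect means the service is listening.
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:
            time.sleep(0.5)  # not up yet; back off briefly and retry
    return False
```

Usage after `systemctl start myfsio` might be `wait_for_port("127.0.0.1", 5000)` (5000 being the default `APP_PORT`; adjust for your deployment).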
<article id="auth" class="card shadow-sm docs-section">
  <div class="card-body">
    <div class="d-flex align-items-center gap-2 mb-3">
      <span class="docs-section-kicker">03</span>
      <h2 class="h4 mb-0">Authenticate & manage IAM</h2>
    </div>
    <p class="text-muted">MyFSIO seeds <code>data/.myfsio.sys/config/iam.json</code> with <code>localadmin/localadmin</code>. Sign in once, rotate it, then grant least-privilege access to teammates and tools.</p>
@@ -62,7 +267,7 @@ python run.py --mode ui
<article id="console" class="card shadow-sm docs-section">
  <div class="card-body">
    <div class="d-flex align-items-center gap-2 mb-3">
      <span class="docs-section-kicker">04</span>
      <h2 class="h4 mb-0">Use the console effectively</h2>
    </div>
    <p class="text-muted">Each workspace models an S3 workflow so you can administer buckets end-to-end.</p>
@@ -81,6 +286,15 @@ python run.py --mode ui
      <li>Progress rows highlight retries, throughput, and completion even if you close the modal.</li>
    </ul>
  </div>
  <div>
    <h3 class="h6 text-uppercase text-muted">Object browser</h3>
    <ul>
      <li>Navigate folder hierarchies using breadcrumbs. Objects with <code>/</code> in keys display as folders.</li>
      <li>Infinite scroll loads more objects automatically. Choose batch size (50–250) from the footer dropdown.</li>
      <li>Bulk select objects for multi-delete or multi-download. Filter by name using the search box.</li>
      <li>If loading fails, click <strong>Retry</strong> to attempt again—no page refresh needed.</li>
    </ul>
  </div>
  <div>
    <h3 class="h6 text-uppercase text-muted">Object details</h3>
    <ul>
@@ -101,7 +315,7 @@ python run.py --mode ui
<article id="automation" class="card shadow-sm docs-section">
  <div class="card-body">
    <div class="d-flex align-items-center gap-2 mb-3">
      <span class="docs-section-kicker">05</span>
      <h2 class="h4 mb-0">Automate with CLI & tools</h2>
    </div>
    <p class="text-muted">Point standard S3 clients at {{ api_base }} and reuse the same IAM credentials.</p>
@@ -154,7 +368,7 @@ curl -X POST {{ api_base }}/presign/demo/notes.txt \
<article id="api" class="card shadow-sm docs-section">
  <div class="card-body">
    <div class="d-flex align-items-center gap-2 mb-3">
      <span class="docs-section-kicker">06</span>
      <h2 class="h4 mb-0">Key REST endpoints</h2>
    </div>
    <div class="table-responsive">
@@ -221,13 +435,65 @@ curl -X POST {{ api_base }}/presign/demo/notes.txt \
<article id="examples" class="card shadow-sm docs-section">
  <div class="card-body">
    <div class="d-flex align-items-center gap-2 mb-3">
      <span class="docs-section-kicker">07</span>
      <h2 class="h4 mb-0">API Examples</h2>
    </div>
    <p class="text-muted">Common operations using popular SDKs and tools.</p>

    <h3 class="h6 text-uppercase text-muted mt-4">Python (boto3)</h3>
    <pre class="mb-4"><code class="language-python">import boto3

s3 = boto3.client(
    's3',
    endpoint_url='{{ api_base }}',
    aws_access_key_id='<access_key>',
    aws_secret_access_key='<secret_key>'
)

# List buckets
buckets = s3.list_buckets()['Buckets']

# Create bucket
s3.create_bucket(Bucket='mybucket')

# Upload file
s3.upload_file('local.txt', 'mybucket', 'remote.txt')

# Download file
s3.download_file('mybucket', 'remote.txt', 'downloaded.txt')

# Generate presigned URL (valid 1 hour)
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'mybucket', 'Key': 'remote.txt'},
    ExpiresIn=3600
)</code></pre>

    <h3 class="h6 text-uppercase text-muted mt-4">JavaScript (AWS SDK v3)</h3>
    <pre class="mb-4"><code class="language-javascript">import { S3Client, ListBucketsCommand, PutObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({
  endpoint: '{{ api_base }}',
  region: 'us-east-1',
  credentials: {
    accessKeyId: '<access_key>',
    secretAccessKey: '<secret_key>'
  },
  forcePathStyle: true // Required for S3-compatible services
});

// List buckets
const { Buckets } = await s3.send(new ListBucketsCommand({}));

// Upload object
await s3.send(new PutObjectCommand({
  Bucket: 'mybucket',
  Key: 'hello.txt',
  Body: 'Hello, World!'
}));</code></pre>

    <h3 class="h6 text-uppercase text-muted mt-4">Multipart Upload (Python)</h3>
    <pre class="mb-4"><code class="language-python">import boto3

s3 = boto3.client('s3', endpoint_url='{{ api_base }}')
@@ -235,9 +501,9 @@ s3 = boto3.client('s3', endpoint_url='{{ api_base }}')
response = s3.create_multipart_upload(Bucket='mybucket', Key='large.bin')
upload_id = response['UploadId']

# Upload parts (minimum 5MB each, except last part)
parts = []
chunks = [b'chunk1...', b'chunk2...']
for part_number, chunk in enumerate(chunks, start=1):
    response = s3.upload_part(
        Bucket='mybucket',
@@ -255,12 +521,25 @@ s3.complete_multipart_upload(
    UploadId=upload_id,
    MultipartUpload={'Parts': parts}
)</code></pre>

    <h3 class="h6 text-uppercase text-muted mt-4">Presigned URLs for Sharing</h3>
    <pre class="mb-0"><code class="language-bash"># Generate a download link valid for 15 minutes
curl -X POST "{{ api_base }}/presign/mybucket/photo.jpg" \
  -H "Content-Type: application/json" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -d '{"method": "GET", "expires_in": 900}'

# Generate an upload link (PUT) valid for 1 hour
curl -X POST "{{ api_base }}/presign/mybucket/upload.bin" \
  -H "Content-Type: application/json" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -d '{"method": "PUT", "expires_in": 3600}'</code></pre>
  </div>
</article>
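The multipart sample above uses two hard-coded byte strings; real uploads read the source in fixed-size parts. A minimal sketch of that chunking step, where the 5 MB floor mirrors the minimum part size noted in the sample's comment and `iter_parts` is a hypothetical helper of ours, not a MyFSIO or boto3 API:

```python
MIN_PART_SIZE = 5 * 1024 * 1024  # S3 minimum for every part except the last

def iter_parts(data: bytes, part_size: int = MIN_PART_SIZE):
    """Yield (part_number, chunk) pairs sized for a multipart upload.

    Part numbers start at 1, matching upload_part's PartNumber argument.
    """
    if part_size < MIN_PART_SIZE:
        raise ValueError("part_size must be at least 5 MB")
    for offset in range(0, max(len(data), 1), part_size):
        yield (offset // part_size) + 1, data[offset:offset + part_size]
```

Each `(part_number, chunk)` pair can be fed straight into the `upload_part` loop shown above.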
<article id="replication" class="card shadow-sm docs-section">
  <div class="card-body">
    <div class="d-flex align-items-center gap-2 mb-3">
      <span class="docs-section-kicker">08</span>
      <h2 class="h4 mb-0">Site Replication</h2>
    </div>
    <p class="text-muted">Automatically copy new objects to another MyFSIO instance or S3-compatible service for backup or disaster recovery.</p>
@@ -278,24 +557,429 @@ s3.complete_multipart_upload(
  </li>
</ol>

<div class="alert alert-light border mb-3 overflow-hidden">
  <div class="d-flex flex-column flex-sm-row gap-2 mb-2">
    <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-terminal text-muted mt-1 flex-shrink-0 d-none d-sm-block" viewBox="0 0 16 16">
      <path d="M6 9a.5.5 0 0 1 .5-.5h3a.5.5 0 0 1 0 1h-3A.5.5 0 0 1 6 9zM3.854 4.146a.5.5 0 1 0-.708.708L4.793 6.5 3.146 8.146a.5.5 0 1 0 .708.708l2-2a.5.5 0 0 0 0-.708l-2-2z"/>
      <path d="M2 1a2 2 0 0 0-2 2v10a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V3a2 2 0 0 0-2-2H2zm12 1a1 1 0 0 1 1 1v10a1 1 0 0 1-1 1H2a1 1 0 0 1-1-1V3a1 1 0 0 1 1-1h12z"/>
    </svg>
    <div class="flex-grow-1 min-width-0">
      <strong>Headless Target Setup</strong>
      <p class="small text-muted mb-2">If your target server has no UI, create a <code>setup_target.py</code> script to bootstrap credentials:</p>
      <pre class="mb-0 overflow-auto" style="max-width: 100%;"><code class="language-python"># setup_target.py
from pathlib import Path
from app.iam import IamService
from app.storage import ObjectStorage

# Initialize services (paths match default config)
data_dir = Path("data")
iam = IamService(data_dir / ".myfsio.sys" / "config" / "iam.json")
storage = ObjectStorage(data_dir)

# 1. Create the bucket
bucket_name = "backup-bucket"
try:
    storage.create_bucket(bucket_name)
    print(f"Bucket '{bucket_name}' created.")
except Exception as e:
    print(f"Bucket creation skipped: {e}")

# 2. Create the user
try:
    creds = iam.create_user(
        display_name="Replication User",
        policies=[{"bucket": bucket_name, "actions": ["write", "read", "list"]}]
    )
    print("\n--- CREDENTIALS GENERATED ---")
    print(f"Access Key: {creds['access_key']}")
    print(f"Secret Key: {creds['secret_key']}")
    print("-----------------------------")
except Exception as e:
    print(f"User creation failed: {e}")</code></pre>
      <p class="small text-muted mt-2 mb-0">Save and run: <code>python setup_target.py</code></p>
    </div>
  </div>
</div>

<h3 class="h6 text-uppercase text-muted mt-4">Bidirectional Replication (Active-Active)</h3>
<p class="small text-muted">To set up two-way replication (Server A ↔ Server B):</p>
<ol class="docs-steps mb-3">
  <li>Follow the steps above to replicate <strong>A → B</strong>.</li>
  <li>Repeat the process on Server B to replicate <strong>B → A</strong> (create a connection to A, enable the rule).</li>
</ol>
<p class="small text-muted mb-3">
  <strong>Loop Prevention:</strong> The system automatically detects replication traffic using a custom User-Agent (<code>S3ReplicationAgent</code>). This prevents infinite loops where an object replicated from A to B is immediately replicated back to A.
  <br>
  <strong>Deletes:</strong> Deleting an object on one server will propagate the deletion to the other server.
</p>
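Assuming detection really does key off the User-Agent string as described, the skip decision might look like the sketch below. The function name and shape are ours; only the <code>S3ReplicationAgent</code> token comes from the text.

```python
from typing import Optional

REPLICATION_AGENT = "S3ReplicationAgent"  # token named in the text above

def should_replicate(user_agent: Optional[str]) -> bool:
    """Return False for writes that arrived via the replication agent,
    so an object copied A -> B is not pushed back to A again."""
    if not user_agent:
        return True  # ordinary client traffic: fan out as usual
    return REPLICATION_AGENT not in user_agent
```

A server-side write handler would call this with the request's User-Agent header before enqueueing any replication task.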

<h3 class="h6 text-uppercase text-muted mt-4">Error Handling & Rate Limits</h3>
<p class="small text-muted mb-3">The replication system handles transient failures automatically:</p>
<div class="table-responsive mb-3">
  <table class="table table-sm table-bordered small">
    <thead class="table-light">
      <tr>
        <th>Behavior</th>
        <th>Details</th>
      </tr>
    </thead>
    <tbody>
      <tr>
        <td><strong>Retry Logic</strong></td>
        <td>boto3 automatically handles 429 (rate limit) errors using exponential backoff with <code>max_attempts=2</code></td>
      </tr>
      <tr>
        <td><strong>Concurrency</strong></td>
        <td>Uses a ThreadPoolExecutor with 4 parallel workers for replication tasks</td>
      </tr>
      <tr>
        <td><strong>Timeouts</strong></td>
        <td>Connect: 5s, Read: 30s. Large files use streaming transfers</td>
      </tr>
    </tbody>
  </table>
</div>
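boto3 performs this retry-with-backoff internally; as a rough standalone sketch of the same idea (the <code>max_attempts=2</code> default mirrors the table, but the helper itself is illustrative and not part of MyFSIO):

```python
import time

def with_backoff(fn, max_attempts: int = 2, base_delay: float = 0.5, sleep=time.sleep):
    """Call fn(), retrying on exception with exponential backoff.

    Delay doubles each attempt: base_delay, 2*base_delay, ...
    The final failure is re-raised to the caller.
    """
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error
            sleep(base_delay * (2 ** attempt))
```

A replication worker could wrap each `upload_part`/`put_object` call in `with_backoff` when talking to a rate-limited target; the injectable `sleep` keeps the helper testable.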
|
||||||
|
<div class="alert alert-warning border mb-0">
|
||||||
|
<div class="d-flex gap-2">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-exclamation-triangle text-warning mt-1 flex-shrink-0" viewBox="0 0 16 16">
|
||||||
|
<path d="M7.938 2.016A.13.13 0 0 1 8.002 2a.13.13 0 0 1 .063.016.146.146 0 0 1 .054.057l6.857 11.667c.036.06.035.124.002.183a.163.163 0 0 1-.054.06.116.116 0 0 1-.066.017H1.146a.115.115 0 0 1-.066-.017.163.163 0 0 1-.054-.06.176.176 0 0 1 .002-.183L7.884 2.073a.147.147 0 0 1 .054-.057zm1.044-.45a1.13 1.13 0 0 0-1.96 0L.165 13.233c-.457.778.091 1.767.98 1.767h13.713c.889 0 1.438-.99.98-1.767L8.982 1.566z"/>
|
||||||
|
<path d="M7.002 12a1 1 0 1 1 2 0 1 1 0 0 1-2 0zM7.1 5.995a.905.905 0 1 1 1.8 0l-.35 3.507a.552.552 0 0 1-1.1 0L7.1 5.995z"/>
|
||||||
|
</svg>
|
||||||
|
<div>
|
||||||
|
<strong>Large File Counts:</strong> When replicating buckets with many objects, the target server's rate limits may cause delays. There is no built-in pause mechanism. Consider increasing <code>RATE_LIMIT_DEFAULT</code> on the target server during bulk replication operations.
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
</article>
|
||||||
|
<article id="versioning" class="card shadow-sm docs-section">
<div class="card-body">
<div class="d-flex align-items-center gap-2 mb-3">
<span class="docs-section-kicker">09</span>
<h2 class="h4 mb-0">Object Versioning</h2>
</div>
<p class="text-muted">Keep multiple versions of objects to protect against accidental deletions and overwrites. Restore previous versions at any time.</p>

<h3 class="h6 text-uppercase text-muted mt-4">Enabling Versioning</h3>
<ol class="docs-steps mb-3">
<li>Navigate to your bucket's <strong>Properties</strong> tab.</li>
<li>Find the <strong>Versioning</strong> card and click <strong>Enable</strong>.</li>
<li>All subsequent uploads will create new versions instead of overwriting.</li>
</ol>

<h3 class="h6 text-uppercase text-muted mt-4">Version Operations</h3>
<div class="table-responsive mb-3">
<table class="table table-sm table-bordered small">
<thead class="table-light">
<tr>
<th>Operation</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>View Versions</strong></td>
<td>Click the version icon on any object to see all historical versions with timestamps and sizes.</td>
</tr>
<tr>
<td><strong>Restore Version</strong></td>
<td>Click <strong>Restore</strong> on any version to make it the current version (creates a copy).</td>
</tr>
<tr>
<td><strong>Delete Current</strong></td>
<td>Deleting an object archives it. Previous versions remain accessible.</td>
</tr>
<tr>
<td><strong>Purge All</strong></td>
<td>Permanently delete an object and all its versions. This cannot be undone.</td>
</tr>
</tbody>
</table>
</div>

<h3 class="h6 text-uppercase text-muted mt-4">Archived Objects</h3>
<p class="small text-muted mb-3">When you delete a versioned object, it becomes "archived": the current version is removed, but historical versions remain. The <strong>Archived</strong> tab shows these objects so you can restore them.</p>

<h3 class="h6 text-uppercase text-muted mt-4">API Usage</h3>
<pre class="mb-3"><code class="language-bash"># Enable versioning
curl -X PUT "{{ api_base }}/<bucket>?versioning" \
  -H "Content-Type: application/json" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -d '{"Status": "Enabled"}'

# Get versioning status
curl "{{ api_base }}/<bucket>?versioning" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

# List object versions
curl "{{ api_base }}/<bucket>?versions" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

# Get specific version
curl "{{ api_base }}/<bucket>/<key>?versionId=<version-id>" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"</code></pre>
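<p class="small text-muted">The same endpoints can be scripted. Below is a minimal Python sketch using only the standard library; the base URL, bucket name, credentials, and the helper name are placeholders of ours, not part of the server:</p>
<pre class="mb-3"><code class="language-python">import json
import urllib.request

API_BASE = "http://127.0.0.1:5000"  # placeholder; use your deployment's base URL

def api_request(method, path, access_key, secret_key, body=None):
    """Build a request carrying the X-Access-Key / X-Secret-Key headers."""
    data = json.dumps(body).encode() if body is not None else None
    req = urllib.request.Request(API_BASE + path, data=data, method=method)
    req.add_header("X-Access-Key", access_key)
    req.add_header("X-Secret-Key", secret_key)
    if data is not None:
        req.add_header("Content-Type", "application/json")
    return req

# Enable versioning on "my-bucket", then list its versions.
# urllib.request.urlopen(...) would actually send each request.
enable = api_request("PUT", "/my-bucket?versioning", "AK", "SK",
                     body={"Status": "Enabled"})
listing = api_request("GET", "/my-bucket?versions", "AK", "SK")</code></pre>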
<div class="alert alert-light border mb-0">
<div class="d-flex gap-2">
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-info-circle text-muted mt-1" viewBox="0 0 16 16">
<path d="M8 15A7 7 0 1 1 8 1a7 7 0 0 1 0 14zm0 1A8 8 0 1 0 8 0a8 8 0 0 0 0 16z"/>
<path d="m8.93 6.588-2.29.287-.082.38.45.083c.294.07.352.176.288.469l-.738 3.468c-.194.897.105 1.319.808 1.319.545 0 1.178-.252 1.465-.598l.088-.416c-.2.176-.492.246-.686.246-.275 0-.375-.193-.304-.533L8.93 6.588zM9 4.5a1 1 0 1 1-2 0 1 1 0 0 1 2 0z"/>
</svg>
<div>
<strong>Storage Impact:</strong> Each version consumes storage. Enable quotas to limit total bucket size including all versions.
</div>
</div>
</div>
</div>
</article>
|
||||||
|
<article id="quotas" class="card shadow-sm docs-section">
<div class="card-body">
<div class="d-flex align-items-center gap-2 mb-3">
<span class="docs-section-kicker">10</span>
<h2 class="h4 mb-0">Bucket Quotas</h2>
</div>
<p class="text-muted">Limit how much data a bucket can hold using storage quotas. Quotas are enforced on uploads and multipart completions.</p>

<h3 class="h6 text-uppercase text-muted mt-4">Quota Types</h3>
<div class="table-responsive mb-3">
<table class="table table-sm table-bordered small">
<thead class="table-light">
<tr>
<th>Limit</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Max Size (MB)</strong></td>
<td>Maximum total storage in megabytes (includes current objects + archived versions)</td>
</tr>
<tr>
<td><strong>Max Objects</strong></td>
<td>Maximum number of objects (includes current objects + archived versions)</td>
</tr>
</tbody>
</table>
</div>
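<p class="small text-muted">Both limits combine into a simple admission check at upload time. A hedged sketch of that logic (the function below is illustrative, not the server's internals; totals include archived versions, and <code>None</code> means unlimited):</p>
<pre class="mb-3"><code class="language-python">def upload_allowed(current_bytes, current_objects, new_object_bytes,
                   max_bytes=None, max_objects=None):
    """Would a new upload fit within the bucket quota?"""
    if max_bytes is not None and current_bytes + new_object_bytes > max_bytes:
        return False
    if max_objects is not None and current_objects + 1 > max_objects:
        return False
    return True

# A 100 MB quota expressed in bytes:
MAX_BYTES = 100 * 1024 * 1024  # 104857600

upload_allowed(104_000_000, 999, 5_000_000, MAX_BYTES, 1000)  # False: over the size limit
upload_allowed(1_000_000, 999, 5_000_000, MAX_BYTES, 1000)    # True</code></pre>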
<h3 class="h6 text-uppercase text-muted mt-4">Managing Quotas (Admin Only)</h3>
<p class="small text-muted">Quota management is restricted to administrators (users with <code>iam:*</code> permissions).</p>
<ol class="docs-steps mb-3">
<li>Navigate to your bucket → <strong>Properties</strong> tab → <strong>Storage Quota</strong> card.</li>
<li>Enter limits: <strong>Max Size (MB)</strong> and/or <strong>Max Objects</strong>. Leave empty for unlimited.</li>
<li>Click <strong>Update Quota</strong> to save, or <strong>Remove Quota</strong> to clear limits.</li>
</ol>

<h3 class="h6 text-uppercase text-muted mt-4">API Usage</h3>
<pre class="mb-3"><code class="language-bash"># Set quota (max 100 MB, max 1000 objects)
curl -X PUT "{{ api_base }}/bucket/<bucket>?quota" \
  -H "Content-Type: application/json" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -d '{"max_bytes": 104857600, "max_objects": 1000}'

# Get current quota
curl "{{ api_base }}/bucket/<bucket>?quota" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

# Remove quota
curl -X PUT "{{ api_base }}/bucket/<bucket>?quota" \
  -H "Content-Type: application/json" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -d '{"max_bytes": null, "max_objects": null}'</code></pre>

<div class="alert alert-light border mb-0">
<div class="d-flex gap-2">
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-info-circle text-muted mt-1" viewBox="0 0 16 16">
<path d="M8 15A7 7 0 1 1 8 1a7 7 0 0 1 0 14zm0 1A8 8 0 1 0 8 0a8 8 0 0 0 0 16z"/>
<path d="m8.93 6.588-2.29.287-.082.38.45.083c.294.07.352.176.288.469l-.738 3.468c-.194.897.105 1.319.808 1.319.545 0 1.178-.252 1.465-.598l.088-.416c-.2.176-.492.246-.686.246-.275 0-.375-.193-.304-.533L8.93 6.588zM9 4.5a1 1 0 1 1-2 0 1 1 0 0 1 2 0z"/>
</svg>
<div>
<strong>Version Counting:</strong> When versioning is enabled, archived versions count toward the quota. The quota is checked against total storage, not just current objects.
</div>
</div>
</div>
</div>
</article>
|
||||||
|
<article id="encryption" class="card shadow-sm docs-section">
<div class="card-body">
<div class="d-flex align-items-center gap-2 mb-3">
<span class="docs-section-kicker">11</span>
<h2 class="h4 mb-0">Encryption</h2>
</div>
<p class="text-muted">Protect data at rest with server-side encryption using AES-256-GCM. Objects are encrypted before being written to disk and decrypted transparently on read.</p>

<h3 class="h6 text-uppercase text-muted mt-4">Encryption Types</h3>
<div class="table-responsive mb-3">
<table class="table table-sm table-bordered small">
<thead class="table-light">
<tr>
<th>Type</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>AES-256 (SSE-S3)</strong></td>
<td>Server-managed encryption using a local master key</td>
</tr>
<tr>
<td><strong>KMS (SSE-KMS)</strong></td>
<td>Encryption using customer-managed keys via the built-in KMS</td>
</tr>
</tbody>
</table>
</div>

<h3 class="h6 text-uppercase text-muted mt-4">Enabling Encryption</h3>
<ol class="docs-steps mb-3">
<li>
<strong>Set environment variables:</strong>
<pre class="mb-2"><code class="language-bash"># PowerShell
$env:ENCRYPTION_ENABLED = "true"
$env:KMS_ENABLED = "true"  # Optional
python run.py

# Bash
export ENCRYPTION_ENABLED=true
export KMS_ENABLED=true
python run.py</code></pre>
</li>
<li>
<strong>Configure bucket encryption:</strong> Navigate to your bucket → <strong>Properties</strong> tab → <strong>Default Encryption</strong> card → Click <strong>Enable Encryption</strong>.
</li>
<li>
<strong>Choose an algorithm:</strong> Select <strong>AES-256</strong> for server-managed keys or <strong>aws:kms</strong> to use a KMS-managed key.
</li>
</ol>

<div class="alert alert-warning border-warning bg-warning-subtle mb-3">
<div class="d-flex gap-2">
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-exclamation-triangle mt-1" viewBox="0 0 16 16">
<path d="M7.938 2.016A.13.13 0 0 1 8.002 2a.13.13 0 0 1 .063.016.146.146 0 0 1 .054.057l6.857 11.667c.036.06.035.124.002.183a.163.163 0 0 1-.054.06.116.116 0 0 1-.066.017H1.146a.115.115 0 0 1-.066-.017.163.163 0 0 1-.054-.06.176.176 0 0 1 .002-.183L7.884 2.073a.147.147 0 0 1 .054-.057zm1.044-.45a1.13 1.13 0 0 0-1.96 0L.165 13.233c-.457.778.091 1.767.98 1.767h13.713c.889 0 1.438-.99.98-1.767L8.982 1.566z"/>
<path d="M7.002 12a1 1 0 1 1 2 0 1 1 0 0 1-2 0zM7.1 5.995a.905.905 0 1 1 1.8 0l-.35 3.507a.552.552 0 0 1-1.1 0L7.1 5.995z"/>
</svg>
<div>
<strong>Important:</strong> Only <em>new uploads</em> made after enabling encryption are encrypted. Existing objects remain unencrypted.
</div>
</div>
</div>

<h3 class="h6 text-uppercase text-muted mt-4">KMS Key Management</h3>
<p class="small text-muted">When <code>KMS_ENABLED=true</code>, manage encryption keys via the API:</p>
<pre class="mb-3"><code class="language-bash"># Create a new KMS key
curl -X POST {{ api_base }}/kms/keys \
  -H "Content-Type: application/json" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -d '{"alias": "my-key", "description": "Production key"}'

# List all keys
curl {{ api_base }}/kms/keys \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

# Rotate a key (creates new key material)
curl -X POST {{ api_base }}/kms/keys/{key-id}/rotate \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

# Disable/Enable a key
curl -X POST {{ api_base }}/kms/keys/{key-id}/disable \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

# Schedule key deletion (30-day waiting period)
curl -X DELETE "{{ api_base }}/kms/keys/{key-id}?waiting_period_days=30" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"</code></pre>

<h3 class="h6 text-uppercase text-muted mt-4">How It Works</h3>
<p class="small text-muted mb-0">
<strong>Envelope Encryption:</strong> Each object is encrypted with a unique Data Encryption Key (DEK). The DEK is then encrypted (wrapped) by the master key or KMS key and stored alongside the ciphertext. On read, the DEK is unwrapped and used to decrypt the object transparently.
</p>
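<p class="small text-muted">The wrap/unwrap bookkeeping can be sketched in a few lines of Python. This is a toy illustration of the envelope flow only: the XOR keystream below stands in for AES-256-GCM and provides no real security.</p>
<pre class="mb-3"><code class="language-python">import hashlib
import os

MASTER_KEY = os.urandom(32)  # in the real server this comes from config or the KMS

def _xor(data, key):
    # Toy keystream cipher used for illustration; the server uses AES-256-GCM.
    stream = b""
    counter = 0
    while len(data) > len(stream):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def put_object(plaintext):
    dek = os.urandom(32)  # unique per-object Data Encryption Key
    return {
        "ciphertext": _xor(plaintext, dek),    # object encrypted with the DEK
        "wrapped_dek": _xor(dek, MASTER_KEY),  # DEK wrapped by the master key
    }

def get_object(record):
    dek = _xor(record["wrapped_dek"], MASTER_KEY)  # unwrap the DEK
    return _xor(record["ciphertext"], dek)         # decrypt transparently

assert get_object(put_object(b"hello world")) == b"hello world"</code></pre>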
</div>
</article>
|
||||||
|
<article id="lifecycle" class="card shadow-sm docs-section">
<div class="card-body">
<div class="d-flex align-items-center gap-2 mb-3">
<span class="docs-section-kicker">12</span>
<h2 class="h4 mb-0">Lifecycle Rules</h2>
</div>
<p class="text-muted">Automatically delete expired objects, clean up old versions, and abort incomplete multipart uploads using time-based lifecycle rules.</p>

<h3 class="h6 text-uppercase text-muted mt-4">How It Works</h3>
<p class="small text-muted mb-3">
Lifecycle rules run on a background timer (Python <code>threading.Timer</code>), not a system cron job. The enforcement cycle triggers every <strong>3600 seconds (1 hour)</strong> by default. Each cycle scans all buckets with lifecycle configurations and applies matching rules.
</p>
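<p class="small text-muted">The rescheduling pattern looks roughly like this (a minimal sketch; the real enforcement body lives in the server code):</p>
<pre class="mb-3"><code class="language-python">import threading

ENFORCEMENT_INTERVAL = 3600  # seconds; matches the default cycle described above

def enforce_lifecycle():
    """One enforcement cycle, then re-arm the timer for the next one."""
    # ... scan buckets with lifecycle configs and apply matching rules ...
    schedule_next_cycle()

def schedule_next_cycle(interval=ENFORCEMENT_INTERVAL):
    timer = threading.Timer(interval, enforce_lifecycle)
    timer.daemon = True  # don't keep the process alive just for the timer
    timer.start()
    return timer</code></pre>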
<h3 class="h6 text-uppercase text-muted mt-4">Expiration Types</h3>
<div class="table-responsive mb-3">
<table class="table table-sm table-bordered small">
<thead class="table-light">
<tr>
<th>Type</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Expiration (Days)</strong></td>
<td>Delete current objects older than N days from their last modification</td>
</tr>
<tr>
<td><strong>Expiration (Date)</strong></td>
<td>Delete current objects after a specific date (ISO 8601 format)</td>
</tr>
<tr>
<td><strong>NoncurrentVersionExpiration</strong></td>
<td>Delete non-current (archived) versions older than N days from when they became non-current</td>
</tr>
<tr>
<td><strong>AbortIncompleteMultipartUpload</strong></td>
<td>Abort multipart uploads that have been in progress longer than N days</td>
</tr>
</tbody>
</table>
</div>
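<p class="small text-muted">Matching an <strong>Expiration (Days)</strong> rule reduces to a prefix check plus an age check. A sketch (the rule dict mirrors the JSON shape used by the API; the function itself is illustrative, not the server's code):</p>
<pre class="mb-3"><code class="language-python">from datetime import datetime, timedelta, timezone

def rule_matches(rule, key, last_modified, now=None):
    """Does an Expiration(Days) rule apply to this object right now?"""
    now = now or datetime.now(timezone.utc)
    if rule.get("Status") != "Enabled":
        return False
    if not key.startswith(rule.get("Prefix", "")):
        return False
    days = rule.get("Expiration", {}).get("Days")
    if days is None:
        return False
    return now - last_modified >= timedelta(days=days)

rule = {"ID": "expire-old-logs", "Status": "Enabled",
        "Prefix": "logs/", "Expiration": {"Days": 30}}
old = datetime.now(timezone.utc) - timedelta(days=45)

rule_matches(rule, "logs/app.log", old)   # True
rule_matches(rule, "images/a.png", old)   # False: prefix does not match</code></pre>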
<h3 class="h6 text-uppercase text-muted mt-4">API Usage</h3>
<pre class="mb-3"><code class="language-bash"># Set lifecycle rule (delete objects older than 30 days)
curl -X PUT "{{ api_base }}/<bucket>?lifecycle" \
  -H "Content-Type: application/json" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -d '[{
    "ID": "expire-old-objects",
    "Status": "Enabled",
    "Prefix": "",
    "Expiration": {"Days": 30}
  }]'

# Abort incomplete multipart uploads after 7 days
curl -X PUT "{{ api_base }}/<bucket>?lifecycle" \
  -H "Content-Type: application/json" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -d '[{
    "ID": "cleanup-multipart",
    "Status": "Enabled",
    "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7}
  }]'

# Get current lifecycle configuration
curl "{{ api_base }}/<bucket>?lifecycle" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"</code></pre>

<div class="alert alert-light border mb-0">
<div class="d-flex gap-2">
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-info-circle text-muted mt-1 flex-shrink-0" viewBox="0 0 16 16">
<path d="M8 15A7 7 0 1 1 8 1a7 7 0 0 1 0 14zm0 1A8 8 0 1 0 8 0a8 8 0 0 0 0 16z"/>
<path d="m8.93 6.588-2.29.287-.082.38.45.083c.294.07.352.176.288.469l-.738 3.468c-.194.897.105 1.319.808 1.319.545 0 1.178-.252 1.465-.598l.088-.416c-.2.176-.492.246-.686.246-.275 0-.375-.193-.304-.533L8.93 6.588zM9 4.5a1 1 0 1 1-2 0 1 1 0 0 1 2 0z"/>
</svg>
<div>
<strong>Prefix Filtering:</strong> Use the <code>Prefix</code> field to scope rules to specific paths (e.g., <code>"logs/"</code>). Leave empty to apply to all objects in the bucket.
</div>
</div>
</div>
</div>
</article>

<article id="troubleshooting" class="card shadow-sm docs-section">
<div class="card-body">
<div class="d-flex align-items-center gap-2 mb-3">
<span class="docs-section-kicker">13</span>
<h2 class="h4 mb-0">Troubleshooting & tips</h2>
</div>
<div class="table-responsive">

</tr>
<tr>
<td>Requests hit the wrong host</td>
<td>Proxy headers missing or <code>API_BASE_URL</code> incorrect</td>
<td>Ensure your proxy sends <code>X-Forwarded-Host</code>/<code>Proto</code> headers, or explicitly set <code>API_BASE_URL</code> to your public domain.</td>
</tr>
<tr>
<td>Large folder uploads hitting rate limits (429)</td>
<td><code>RATE_LIMIT_DEFAULT</code> exceeded (200/min)</td>
<td>Increase the rate limit in env config, use a Redis backend (<code>RATE_LIMIT_STORAGE_URI=redis://host:port</code>) for distributed setups, or upload in smaller batches.</td>
</tr>
</tbody>
</table>
<h3 class="h6 text-uppercase text-muted mb-3">On this page</h3>
<ul class="list-unstyled docs-toc mb-4">
<li><a href="#setup">Set up & run</a></li>
<li><a href="#background">Running in background</a></li>
<li><a href="#auth">Authentication & IAM</a></li>
<li><a href="#console">Console tour</a></li>
<li><a href="#automation">Automation / CLI</a></li>
<li><a href="#api">REST endpoints</a></li>
<li><a href="#examples">API Examples</a></li>
<li><a href="#replication">Site Replication</a></li>
<li><a href="#versioning">Object Versioning</a></li>
<li><a href="#quotas">Bucket Quotas</a></li>
<li><a href="#encryption">Encryption</a></li>
<li><a href="#lifecycle">Lifecycle Rules</a></li>
<li><a href="#troubleshooting">Troubleshooting</a></li>
</ul>
<div class="docs-sidebar-callouts">
<div class="page-header d-flex justify-content-between align-items-center mb-4">
<div>
<p class="text-uppercase text-muted small mb-1">Identity & Access Management</p>
<h1 class="h3 mb-1 d-flex align-items-center gap-2">
<svg xmlns="http://www.w3.org/2000/svg" width="28" height="28" fill="currentColor" class="text-primary" viewBox="0 0 16 16">
<path d="M8 8a3 3 0 1 0 0-6 3 3 0 0 0 0 6zm2-3a2 2 0 1 1-4 0 2 2 0 0 1 4 0zm4 8c0 1-1 1-1 1H3s-1 0-1-1 1-4 6-4 6 3 6 4zm-1-.004c-.001-.246-.154-.986-.832-1.664C11.516 10.68 10.289 10 8 10c-2.29 0-3.516.68-4.168 1.332-.678.678-.83 1.418-.832 1.664h10z"/>
</svg>
IAM Configuration
</h1>
<p class="text-muted mb-0 mt-1">Create and manage users with fine-grained bucket permissions.</p>
</div>
<div class="d-flex gap-2">
{% if not iam_locked %}

</div>
{% endif %}

<div class="card shadow-sm border-0" style="border-radius: 1rem;">
<div class="card-header bg-transparent border-0 pt-4 pb-0 px-4 d-flex justify-content-between align-items-center">
<div>
<h5 class="fw-semibold d-flex align-items-center gap-2 mb-1">
<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" class="text-muted" viewBox="0 0 16 16">
<path d="M15 14s1 0 1-1-1-4-5-4-5 3-5 4 1 1 1 1h8zm-7.978-1A.261.261 0 0 1 7 12.996c.001-.264.167-1.03.76-1.72C8.312 10.629 9.282 10 11 10c1.717 0 2.687.63 3.24 1.276.593.69.758 1.457.76 1.72l-.008.002a.274.274 0 0 1-.014.002H7.022zM11 7a2 2 0 1 0 0-4 2 2 0 0 0 0 4zm3-2a3 3 0 1 1-6 0 3 3 0 0 1 6 0zM6.936 9.28a5.88 5.88 0 0 0-1.23-.247A7.35 7.35 0 0 0 5 9c-4 0-5 3-5 4 0 .667.333 1 1 1h4.216A2.238 2.238 0 0 1 5 13c0-1.01.377-2.042 1.09-2.904.243-.294.526-.569.846-.816zM4.92 10A5.493 5.493 0 0 0 4 13H1c0-.26.164-1.03.76-1.724.545-.636 1.492-1.256 3.16-1.275zM1.5 5.5a3 3 0 1 1 6 0 3 3 0 0 1-6 0zm3-2a2 2 0 1 0 0 4 2 2 0 0 0 0-4z"/>
</svg>
Users
</h5>
<p class="text-muted small mb-0">{{ users|length if not iam_locked else '?' }} user{{ 's' if (users|length if not iam_locked else 0) != 1 else '' }} configured</p>
</div>
{% if iam_locked %}<span class="badge bg-warning bg-opacity-10 text-warning">View only</span>{% endif %}
</div>
{% if iam_locked %}
<div class="card-body px-4 pb-4">
<div class="alert alert-secondary d-flex align-items-center mb-0" role="alert">
<svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" fill="currentColor" class="flex-shrink-0 me-2" viewBox="0 0 16 16">
<path d="M8 1a2 2 0 0 1 2 2v4H6V3a2 2 0 0 1 2-2zm3 6V3a3 3 0 0 0-6 0v4a2 2 0 0 0-2 2v5a2 2 0 0 0 2 2h6a2 2 0 0 0 2-2V9a2 2 0 0 0-2-2z"/>
</svg>
<div>Sign in with an administrator account to list or edit IAM users.</div>
</div>
</div>
{% else %}
<div class="card-body px-4 pb-4">
{% if users %}
<div class="row g-3">
{% for user in users %}
<div class="col-md-6 col-xl-4">
<div class="card h-100 iam-user-card">
<div class="card-body">
<div class="d-flex align-items-start justify-content-between mb-3">
<div class="d-flex align-items-center gap-3">
<div class="user-avatar user-avatar-lg">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" fill="currentColor" viewBox="0 0 16 16">
<path d="M8 8a3 3 0 1 0 0-6 3 3 0 0 0 0 6zm2-3a2 2 0 1 1-4 0 2 2 0 0 1 4 0zm4 8c0 1-1 1-1 1H3s-1 0-1-1 1-4 6-4 6 3 6 4zm-1-.004c-.001-.246-.154-.986-.832-1.664C11.516 10.68 10.289 10 8 10c-2.29 0-3.516.68-4.168 1.332-.678.678-.83 1.418-.832 1.664h10z"/>
</svg>
</div>
<div class="min-width-0">
<h6 class="fw-semibold mb-0 text-truncate" title="{{ user.display_name }}">{{ user.display_name }}</h6>
<code class="small text-muted d-block text-truncate" title="{{ user.access_key }}">{{ user.access_key }}</code>
</div>
</div>
<div class="dropdown">
<button class="btn btn-sm btn-icon" type="button" data-bs-toggle="dropdown" aria-expanded="false">
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" viewBox="0 0 16 16">
<path d="M9.5 13a1.5 1.5 0 1 1-3 0 1.5 1.5 0 0 1 3 0zm0-5a1.5 1.5 0 1 1-3 0 1.5 1.5 0 0 1 3 0zm0-5a1.5 1.5 0 1 1-3 0 1.5 1.5 0 0 1 3 0z"/>
</svg>
</button>
<ul class="dropdown-menu dropdown-menu-end">
<li>
<button class="dropdown-item" type="button" data-edit-user="{{ user.access_key }}" data-display-name="{{ user.display_name }}">
<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-2" viewBox="0 0 16 16">
<path d="M12.146.146a.5.5 0 0 1 .708 0l3 3a.5.5 0 0 1 0 .708l-10 10a.5.5 0 0 1-.168.11l-5 2a.5.5 0 0 1-.65-.65l2-5a.5.5 0 0 1 .11-.168l10-10zM11.207 2.5 13.5 4.793 14.793 3.5 12.5 1.207 11.207 2.5zm1.586 3L10.5 3.207 4 9.707V10h.5a.5.5 0 0 1 .5.5v.5h.5a.5.5 0 0 1 .5.5v.5h.293l6.5-6.5z"/>
</svg>
Edit Name
</button>
</li>
<li>
<button class="dropdown-item" type="button" data-rotate-user="{{ user.access_key }}">
<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-2" viewBox="0 0 16 16">
<path d="M11.534 7h3.932a.25.25 0 0 1 .192.41l-1.966 2.36a.25.25 0 0 1-.384 0l-1.966-2.36a.25.25 0 0 1 .192-.41zm-11 2h3.932a.25.25 0 0 0 .192-.41L2.692 6.23a.25.25 0 0 0-.384 0L.342 8.59A.25.25 0 0 0 .534 9z"/>
<path fill-rule="evenodd" d="M8 3c-1.552 0-2.94.707-3.857 1.818a.5.5 0 1 1-.771-.636A6.002 6.002 0 0 1 13.917 7H12.9A5.002 5.002 0 0 0 8 3zM3.1 9a5.002 5.002 0 0 0 8.757 2.182.5.5 0 1 1 .771.636A6.002 6.002 0 0 1 2.083 9H3.1z"/>
</svg>
Rotate Secret
</button>
</li>
<li><hr class="dropdown-divider"></li>
<li>
<button class="dropdown-item text-danger" type="button" data-delete-user="{{ user.access_key }}">
<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-2" viewBox="0 0 16 16">
<path d="M5.5 5.5a.5.5 0 0 1 .5.5v6a.5.5 0 0 1-1 0v-6a.5.5 0 0 1 .5-.5zm2.5 0a.5.5 0 0 1 .5.5v6a.5.5 0 0 1-1 0v-6a.5.5 0 0 1 .5-.5zm3 .5v6a.5.5 0 0 1-1 0v-6a.5.5 0 0 1 1 0z"/>
<path fill-rule="evenodd" d="M14.5 3a1 1 0 0 1-1 1H13v9a2 2 0 0 1-2 2H5a2 2 0 0 1-2-2V4h-.5a1 1 0 0 1-1-1V2a1 1 0 0 1 1-1H6a1 1 0 0 1 1-1h2a1 1 0 0 1 1 1h3.5a1 1 0 0 1 1 1v1zM4.118 4 4 4.059V13a1 1 0 0 0 1 1h6a1 1 0 0 0 1-1V4.059L11.882 4H4.118zM2.5 3V2h11v1h-11z"/>
</svg>
Delete User
</button>
</li>
</ul>
</div>
</div>
<div class="mb-3">
<div class="small text-muted mb-2">Bucket Permissions</div>
<div class="d-flex flex-wrap gap-1">
{% for policy in user.policies %}
<span class="badge bg-primary bg-opacity-10 text-primary">
<svg xmlns="http://www.w3.org/2000/svg" width="10" height="10" fill="currentColor" class="me-1" viewBox="0 0 16 16">
<path d="M2.522 5H2a.5.5 0 0 0-.494.574l1.372 9.149A1.5 1.5 0 0 0 4.36 16h7.278a1.5 1.5 0 0 0 1.483-1.277l1.373-9.149A.5.5 0 0 0 14 5h-.522A5.5 5.5 0 0 0 2.522 5zm1.005 0a4.5 4.5 0 0 1 8.945 0H3.527z"/>
</svg>
{{ policy.bucket }}
{% if '*' in policy.actions %}
<span class="opacity-75">(full)</span>
{% else %}
<span class="opacity-75">({{ policy.actions|length }})</span>
{% endif %}
</span>
{% else %}
<span class="badge bg-secondary bg-opacity-10 text-secondary">No policies</span>
{% endfor %}
</div>
</div>
<button class="btn btn-outline-primary btn-sm w-100" type="button" data-policy-editor data-access-key="{{ user.access_key }}">
<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-1" viewBox="0 0 16 16">
<path d="M8 4.754a3.246 3.246 0 1 0 0 6.492 3.246 3.246 0 0 0 0-6.492zM5.754 8a2.246 2.246 0 1 1 4.492 0 2.246 2.246 0 0 1-4.492 0z"/>
<path d="M9.796 1.343c-.527-1.79-3.065-1.79-3.592 0l-.094.319a.873.873 0 0 1-1.255.52l-.292-.16c-1.64-.892-3.433.902-2.54 2.541l.159.292a.873.873 0 0 1-.52 1.255l-.319.094c-1.79.527-1.79 3.065 0 3.592l.319.094a.873.873 0 0 1 .52 1.255l-.16.292c-.892 1.64.901 3.434 2.541 2.54l.292-.159a.873.873 0 0 1 1.255.52l.094.319c.527 1.79 3.065 1.79 3.592 0l.094-.319a.873.873 0 0 1 1.255-.52l.292.16c1.64.893 3.434-.902 2.54-2.541l-.159-.292a.873.873 0 0 1 .52-1.255l.319-.094c1.79-.527 1.79-3.065 0-3.592l-.319-.094a.873.873 0 0 1-.52-1.255l.16-.292c.893-1.64-.902-3.433-2.541-2.54l-.292.159a.873.873 0 0 1-1.255-.52l-.094-.319z"/>
</svg>
Manage Policies
</button>
</div>
</div>
</div>
{% endfor %}
</div>
{% else %}
<div class="empty-state text-center py-5">
<div class="empty-state-icon mx-auto mb-3">
<svg xmlns="http://www.w3.org/2000/svg" width="48" height="48" fill="currentColor" viewBox="0 0 16 16">
<path d="M15 14s1 0 1-1-1-4-5-4-5 3-5 4 1 1 1 1h8zm-7.978-1A.261.261 0 0 1 7 12.996c.001-.264.167-1.03.76-1.72C8.312 10.629 9.282 10 11 10c1.717 0 2.687.63 3.24 1.276.593.69.758 1.457.76 1.72l-.008.002a.274.274 0 0 1-.014.002H7.022zM11 7a2 2 0 1 0 0-4 2 2 0 0 0 0 4zm3-2a3 3 0 1 1-6 0 3 3 0 0 1 6 0zM6.936 9.28a5.88 5.88 0 0 0-1.23-.247A7.35 7.35 0 0 0 5 9c-4 0-5 3-5 4 0 .667.333 1 1 1h4.216A2.238 2.238 0 0 1 5 13c0-1.01.377-2.042 1.09-2.904.243-.294.526-.569.846-.816zM4.92 10A5.493 5.493 0 0 0 4 13H1c0-.26.164-1.03.76-1.724.545-.636 1.492-1.256 3.16-1.275zM1.5 5.5a3 3 0 1 1 6 0 3 3 0 0 1-6 0zm3-2a2 2 0 1 0 0 4 2 2 0 0 0 0-4z"/>
</svg>
</div>
<h5 class="fw-semibold mb-2">No users yet</h5>
<p class="text-muted mb-3">Create your first IAM user to manage access to your storage.</p>
|
||||||
|
<button class="btn btn-primary" data-bs-toggle="modal" data-bs-target="#createUserModal">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="me-1" viewBox="0 0 16 16">
|
||||||
|
<path fill-rule="evenodd" d="M8 2a.5.5 0 0 1 .5.5v5h5a.5.5 0 0 1 0 1h-5v5a.5.5 0 0 1-1 0v-5h-5a.5.5 0 0 1 0-1h5v-5A.5.5 0 0 1 8 2Z"/>
|
||||||
|
</svg>
|
||||||
|
Create First User
|
||||||
|
</button>
|
||||||
|
</div>
|
||||||
|
{% endif %}
|
||||||
</div>
|
</div>
|
||||||
{% endif %}
|
{% endif %}
|
||||||
</div>
|
</div>
|
||||||
|
|
||||||
<!-- Create User Modal -->
|
|
||||||
<div class="modal fade" id="createUserModal" tabindex="-1" aria-hidden="true">
|
<div class="modal fade" id="createUserModal" tabindex="-1" aria-hidden="true">
|
||||||
<div class="modal-dialog modal-dialog-centered">
|
<div class="modal-dialog modal-dialog-centered">
|
||||||
<div class="modal-content">
|
<div class="modal-content">
|
||||||
<div class="modal-header">
|
<div class="modal-header border-0 pb-0">
|
||||||
<h1 class="modal-title fs-5">Create IAM User</h1>
|
<h1 class="modal-title fs-5 fw-semibold">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" class="text-primary" viewBox="0 0 16 16">
|
||||||
|
<path d="M1 14s-1 0-1-1 1-4 6-4 6 3 6 4-1 1-1 1H1zm5-6a3 3 0 1 0 0-6 3 3 0 0 0 0 6z"/>
|
||||||
|
<path fill-rule="evenodd" d="M13.5 5a.5.5 0 0 1 .5.5V7h1.5a.5.5 0 0 1 0 1H14v1.5a.5.5 0 0 1-1 0V8h-1.5a.5.5 0 0 1 0-1H13V5.5a.5.5 0 0 1 .5-.5z"/>
|
||||||
|
</svg>
|
||||||
|
Create IAM User
|
||||||
|
</h1>
|
||||||
<button type="button" class="btn-close" data-bs-dismiss="modal" aria-label="Close"></button>
|
<button type="button" class="btn-close" data-bs-dismiss="modal" aria-label="Close"></button>
|
||||||
</div>
|
</div>
|
||||||
<form method="post" action="{{ url_for('ui.create_iam_user') }}">
|
<form method="post" action="{{ url_for('ui.create_iam_user') }}">
|
||||||
<input type="hidden" name="csrf_token" value="{{ csrf_token() }}" />
|
<input type="hidden" name="csrf_token" value="{{ csrf_token() }}" />
|
||||||
<div class="modal-body">
|
<div class="modal-body">
|
||||||
<div class="mb-3">
|
<div class="mb-3">
|
||||||
<label class="form-label">Display Name</label>
|
<label class="form-label fw-medium">Display Name</label>
|
||||||
<input class="form-control" type="text" name="display_name" placeholder="Analytics Team" required />
|
<input class="form-control" type="text" name="display_name" placeholder="Analytics Team" required autofocus />
|
||||||
</div>
|
</div>
|
||||||
<div class="mb-3">
|
<div class="mb-3">
|
||||||
<label class="form-label">Initial Policies (JSON)</label>
|
<label class="form-label fw-medium">Initial Policies (JSON)</label>
|
||||||
<textarea class="form-control font-monospace" name="policies" rows="6" spellcheck="false" placeholder='[
|
<textarea class="form-control font-monospace" name="policies" id="createUserPolicies" rows="6" spellcheck="false" placeholder='[
|
||||||
{"bucket": "*", "actions": ["list", "read"]}
|
{"bucket": "*", "actions": ["list", "read"]}
|
||||||
]'></textarea>
|
]'></textarea>
|
||||||
<div class="form-text">Leave blank to grant full control (for bootstrap admins only).</div>
|
<div class="form-text">Leave blank to grant full control (for bootstrap admins only).</div>
|
||||||
</div>
|
</div>
|
||||||
|
<div class="d-flex flex-wrap gap-2">
|
||||||
|
<span class="text-muted small me-2 align-self-center">Quick templates:</span>
|
||||||
|
<button class="btn btn-outline-secondary btn-sm" type="button" data-create-policy-template="full">Full Control</button>
|
||||||
|
<button class="btn btn-outline-secondary btn-sm" type="button" data-create-policy-template="readonly">Read-Only</button>
|
||||||
|
<button class="btn btn-outline-secondary btn-sm" type="button" data-create-policy-template="writer">Read + Write</button>
|
||||||
|
</div>
|
||||||
</div>
|
</div>
|
||||||
<div class="modal-footer">
|
<div class="modal-footer">
|
||||||
<button type="button" class="btn btn-outline-secondary" data-bs-dismiss="modal">Cancel</button>
|
<button type="button" class="btn btn-outline-secondary" data-bs-dismiss="modal">Cancel</button>
|
||||||
<button class="btn btn-primary" type="submit">Create User</button>
|
<button class="btn btn-primary" type="submit">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-1" viewBox="0 0 16 16">
|
||||||
|
<path fill-rule="evenodd" d="M8 2a.5.5 0 0 1 .5.5v5h5a.5.5 0 0 1 0 1h-5v5a.5.5 0 0 1-1 0v-5h-5a.5.5 0 0 1 0-1h5v-5A.5.5 0 0 1 8 2Z"/>
|
||||||
|
</svg>
|
||||||
|
Create User
|
||||||
|
</button>
|
||||||
</div>
|
</div>
|
||||||
</form>
|
</form>
|
||||||
</div>
|
</div>
|
||||||
</div>
|
</div>
|
||||||
</div>
|
</div>
|
||||||
|
|
||||||
<!-- Policy Editor Modal -->
|
|
||||||
<div class="modal fade" id="policyEditorModal" tabindex="-1" aria-hidden="true">
|
<div class="modal fade" id="policyEditorModal" tabindex="-1" aria-hidden="true">
|
||||||
<div class="modal-dialog modal-lg modal-dialog-centered">
|
<div class="modal-dialog modal-lg modal-dialog-centered">
|
||||||
<div class="modal-content">
|
<div class="modal-content">
|
||||||
<div class="modal-header">
|
<div class="modal-header border-0 pb-0">
|
||||||
<h1 class="modal-title fs-5">Edit Policies: <span id="policyEditorUserLabel" class="font-monospace"></span></h1>
|
<h1 class="modal-title fs-5 fw-semibold">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" class="text-primary" viewBox="0 0 16 16">
|
||||||
|
<path d="M8 4.754a3.246 3.246 0 1 0 0 6.492 3.246 3.246 0 0 0 0-6.492zM5.754 8a2.246 2.246 0 1 1 4.492 0 2.246 2.246 0 0 1-4.492 0z"/>
|
||||||
|
<path d="M9.796 1.343c-.527-1.79-3.065-1.79-3.592 0l-.094.319a.873.873 0 0 1-1.255.52l-.292-.16c-1.64-.892-3.433.902-2.54 2.541l.159.292a.873.873 0 0 1-.52 1.255l-.319.094c-1.79.527-1.79 3.065 0 3.592l.319.094a.873.873 0 0 1 .52 1.255l-.16.292c-.892 1.64.901 3.434 2.541 2.54l.292-.159a.873.873 0 0 1 1.255.52l.094.319c.527 1.79 3.065 1.79 3.592 0l.094-.319a.873.873 0 0 1 1.255-.52l.292.16c1.64.893 3.434-.902 2.54-2.541l-.159-.292a.873.873 0 0 1 .52-1.255l.319-.094c1.79-.527 1.79-3.065 0-3.592l-.319-.094a.873.873 0 0 1-.52-1.255l.16-.292c.893-1.64-.902-3.433-2.541-2.54l-.292.159a.873.873 0 0 1-1.255-.52l-.094-.319zm-2.633.283c.246-.835 1.428-.835 1.674 0l.094.319a1.873 1.873 0 0 0 2.693 1.115l.291-.16c.764-.415 1.6.42 1.184 1.185l-.159.292a1.873 1.873 0 0 0 1.116 2.692l.318.094c.835.246.835 1.428 0 1.674l-.319.094a1.873 1.873 0 0 0-1.115 2.693l.16.291c.415.764-.42 1.6-1.185 1.184l-.291-.159a1.873 1.873 0 0 0-2.693 1.116l-.094.318c-.246.835-1.428.835-1.674 0l-.094-.319a1.873 1.873 0 0 0-2.692-1.115l-.292.16c-.764.415-1.6-.42-1.184-1.185l.159-.291A1.873 1.873 0 0 0 1.945 8.93l-.319-.094c-.835-.246-.835-1.428 0-1.674l.319-.094A1.873 1.873 0 0 0 3.06 4.377l-.16-.292c-.415-.764.42-1.6 1.185-1.184l.292.159a1.873 1.873 0 0 0 2.692-1.115l.094-.319z"/>
|
||||||
|
</svg>
|
||||||
|
Edit Policies
|
||||||
|
</h1>
|
||||||
<button type="button" class="btn-close" data-bs-dismiss="modal" aria-label="Close"></button>
|
<button type="button" class="btn-close" data-bs-dismiss="modal" aria-label="Close"></button>
|
||||||
</div>
|
</div>
|
||||||
<div class="modal-body">
|
<div class="modal-body">
|
||||||
|
<p class="text-muted small mb-3">Editing policies for <code id="policyEditorUserLabel"></code></p>
|
||||||
<form
|
<form
|
||||||
id="policyEditorForm"
|
id="policyEditorForm"
|
||||||
method="post"
|
method="post"
|
||||||
@@ -206,11 +290,12 @@
 <input type="hidden" id="policyEditorUser" name="access_key" />

 <div>
-<label class="form-label">Inline Policies (JSON array)</label>
+<label class="form-label fw-medium">Inline Policies (JSON array)</label>
 <textarea class="form-control font-monospace" id="policyEditorDocument" name="policies" rows="12" spellcheck="false"></textarea>
 <div class="form-text">Use standard MyFSIO policy format. Validation happens server-side.</div>
 </div>
 <div class="d-flex flex-wrap gap-2">
+<span class="text-muted small me-2 align-self-center">Quick templates:</span>
 <button class="btn btn-outline-secondary btn-sm" type="button" data-policy-template="full">Full Control</button>
 <button class="btn btn-outline-secondary btn-sm" type="button" data-policy-template="readonly">Read-Only</button>
 <button class="btn btn-outline-secondary btn-sm" type="button" data-policy-template="writer">Read + Write</button>
@@ -219,91 +304,145 @@
 </div>
 <div class="modal-footer">
 <button type="button" class="btn btn-outline-secondary" data-bs-dismiss="modal">Cancel</button>
-<button class="btn btn-primary" type="submit" form="policyEditorForm">Save Policies</button>
+<button class="btn btn-primary" type="submit" form="policyEditorForm">
+<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-1" viewBox="0 0 16 16">
+<path d="M10.97 4.97a.75.75 0 0 1 1.07 1.05l-3.99 4.99a.75.75 0 0 1-1.08.02L4.324 8.384a.75.75 0 1 1 1.06-1.06l2.094 2.093 3.473-4.425a.267.267 0 0 1 .02-.022z"/>
+</svg>
+Save Policies
+</button>
 </div>
 </div>
 </div>
 </div>

-<!-- Edit User Modal -->
 <div class="modal fade" id="editUserModal" tabindex="-1" aria-hidden="true">
 <div class="modal-dialog modal-dialog-centered">
 <div class="modal-content">
-<div class="modal-header">
-<h1 class="modal-title fs-5">Edit User</h1>
+<div class="modal-header border-0 pb-0">
+<h1 class="modal-title fs-5 fw-semibold">
+<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" class="text-primary" viewBox="0 0 16 16">
+<path d="M12.146.146a.5.5 0 0 1 .708 0l3 3a.5.5 0 0 1 0 .708l-10 10a.5.5 0 0 1-.168.11l-5 2a.5.5 0 0 1-.65-.65l2-5a.5.5 0 0 1 .11-.168l10-10zM11.207 2.5 13.5 4.793 14.793 3.5 12.5 1.207 11.207 2.5zm1.586 3L10.5 3.207 4 9.707V10h.5a.5.5 0 0 1 .5.5v.5h.5a.5.5 0 0 1 .5.5v.5h.293l6.5-6.5zm-9.761 5.175-.106.106-1.528 3.821 3.821-1.528.106-.106A.5.5 0 0 1 5 12.5V12h-.5a.5.5 0 0 1-.5-.5V11h-.5a.5.5 0 0 1-.468-.325z"/>
+</svg>
+Edit User
+</h1>
 <button type="button" class="btn-close" data-bs-dismiss="modal" aria-label="Close"></button>
 </div>
 <form method="post" id="editUserForm">
 <input type="hidden" name="csrf_token" value="{{ csrf_token() }}" />
 <div class="modal-body">
 <div class="mb-3">
-<label class="form-label">Display Name</label>
+<label class="form-label fw-medium">Display Name</label>
 <input class="form-control" type="text" name="display_name" id="editUserDisplayName" required />
 </div>
 </div>
 <div class="modal-footer">
 <button type="button" class="btn btn-outline-secondary" data-bs-dismiss="modal">Cancel</button>
-<button class="btn btn-primary" type="submit">Save Changes</button>
+<button class="btn btn-primary" type="submit">
+<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-1" viewBox="0 0 16 16">
+<path d="M10.97 4.97a.75.75 0 0 1 1.07 1.05l-3.99 4.99a.75.75 0 0 1-1.08.02L4.324 8.384a.75.75 0 1 1 1.06-1.06l2.094 2.093 3.473-4.425a.267.267 0 0 1 .02-.022z"/>
+</svg>
+Save Changes
+</button>
 </div>
 </form>
 </div>
 </div>
 </div>

-<!-- Delete User Modal -->
 <div class="modal fade" id="deleteUserModal" tabindex="-1" aria-hidden="true">
 <div class="modal-dialog modal-dialog-centered">
 <div class="modal-content">
-<div class="modal-header">
-<h1 class="modal-title fs-5">Delete User</h1>
+<div class="modal-header border-0 pb-0">
+<h1 class="modal-title fs-5 fw-semibold">
+<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" class="text-danger" viewBox="0 0 16 16">
+<path d="M11 5a3 3 0 1 1-6 0 3 3 0 0 1 6 0M8 7a2 2 0 1 0 0-4 2 2 0 0 0 0 4m.256 7a4.5 4.5 0 0 1-.229-1.004H3c.001-.246.154-.986.832-1.664C4.484 10.68 5.711 10 8 10q.39 0 .74.025c.226-.341.496-.65.804-.918Q9.077 9.014 8 9c-5 0-6 3-6 4s1 1 1 1h5.256Z"/>
+<path d="M12.5 16a3.5 3.5 0 1 0 0-7 3.5 3.5 0 0 0 0 7m-.646-4.854.646.647.646-.647a.5.5 0 0 1 .708.708l-.647.646.647.646a.5.5 0 0 1-.708.708l-.646-.647-.646.647a.5.5 0 0 1-.708-.708l.647-.646-.647-.646a.5.5 0 0 1 .708-.708"/>
+</svg>
+Delete User
+</h1>
 <button type="button" class="btn-close" data-bs-dismiss="modal" aria-label="Close"></button>
 </div>
 <div class="modal-body">
 <p>Are you sure you want to delete user <strong id="deleteUserLabel"></strong>?</p>
-<div id="deleteSelfWarning" class="alert alert-danger d-none">
-<strong>Warning:</strong> You are deleting your own account. You will be logged out immediately and will lose access to this session.
+<div id="deleteSelfWarning" class="alert alert-danger d-flex align-items-start d-none">
+<svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" fill="currentColor" class="flex-shrink-0 me-2 mt-1" viewBox="0 0 16 16">
+<path d="M7.938 2.016A.13.13 0 0 1 8.002 2a.13.13 0 0 1 .063.016.146.146 0 0 1 .054.057l6.857 11.667c.036.06.035.124.002.183a.163.163 0 0 1-.054.06.116.116 0 0 1-.066.017H1.146a.115.115 0 0 1-.066-.017.163.163 0 0 1-.054-.06.176.176 0 0 1 .002-.183L7.884 2.073a.147.147 0 0 1 .054-.057zm1.044-.45a1.13 1.13 0 0 0-1.96 0L.165 13.233c-.457.778.091 1.767.98 1.767h13.713c.889 0 1.438-.99.98-1.767L8.982 1.566z"/>
+<path d="M7.002 12a1 1 0 1 1 2 0 1 1 0 0 1-2 0zM7.1 5.995a.905.905 0 1 1 1.8 0l-.35 3.507a.552.552 0 0 1-1.1 0L7.1 5.995z"/>
+</svg>
+<div>
+<strong>Warning:</strong> You are deleting your own account. You will be logged out immediately.
+</div>
 </div>
-<p class="text-danger mb-0">This action cannot be undone.</p>
+<p class="text-danger small mb-0">This action cannot be undone.</p>
 </div>
 <div class="modal-footer">
 <button type="button" class="btn btn-outline-secondary" data-bs-dismiss="modal">Cancel</button>
 <form method="post" id="deleteUserForm">
 <input type="hidden" name="csrf_token" value="{{ csrf_token() }}" />
-<button class="btn btn-danger" type="submit">Delete User</button>
+<button class="btn btn-danger" type="submit">
+<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-1" viewBox="0 0 16 16">
+<path d="M5.5 5.5A.5.5 0 0 1 6 6v6a.5.5 0 0 1-1 0V6a.5.5 0 0 1 .5-.5zm2.5 0a.5.5 0 0 1 .5.5v6a.5.5 0 0 1-1 0V6a.5.5 0 0 1 .5-.5zm3 .5a.5.5 0 0 0-1 0v6a.5.5 0 0 0 1 0V6z"/>
+<path fill-rule="evenodd" d="M14.5 3a1 1 0 0 1-1 1H13v9a2 2 0 0 1-2 2H5a2 2 0 0 1-2-2V4h-.5a1 1 0 0 1-1-1V2a1 1 0 0 1 1-1H6a1 1 0 0 1 1-1h2a1 1 0 0 1 1 1h3.5a1 1 0 0 1 1 1v1zM4.118 4 4 4.059V13a1 1 0 0 0 1 1h6a1 1 0 0 0 1-1V4.059L11.882 4H4.118zM2.5 3V2h11v1h-11z"/>
+</svg>
+Delete User
+</button>
 </form>
 </div>
 </div>
 </div>
 </div>

-<!-- Rotate Secret Modal -->
 <div class="modal fade" id="rotateSecretModal" tabindex="-1" aria-hidden="true">
 <div class="modal-dialog modal-dialog-centered">
 <div class="modal-content">
-<div class="modal-header">
-<h1 class="modal-title fs-5">Rotate Secret Key</h1>
+<div class="modal-header border-0 pb-0">
+<h1 class="modal-title fs-5 fw-semibold">
+<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" class="text-warning" viewBox="0 0 16 16">
+<path fill-rule="evenodd" d="M8 3a5 5 0 1 0 4.546 2.914.5.5 0 0 1 .908-.417A6 6 0 1 1 8 2v1z"/>
+<path d="M8 4.466V.534a.25.25 0 0 1 .41-.192l2.36 1.966c.12.1.12.284 0 .384L8.41 4.658A.25.25 0 0 1 8 4.466z"/>
+</svg>
+Rotate Secret Key
+</h1>
 <button type="button" class="btn-close" data-bs-dismiss="modal" aria-label="Close"></button>
 </div>
 <div class="modal-body" id="rotateSecretConfirm">
-<p>Are you sure you want to rotate the secret key for <strong id="rotateUserLabel"></strong>?</p>
-<div id="rotateSelfWarning" class="alert alert-warning d-none">
-<strong>Warning:</strong> You are rotating your own secret key. You will need to sign in again with the new key.
-</div>
-<div class="alert alert-warning mb-0">
-The old secret key will stop working immediately. Any applications using it must be updated.
+<p>Rotate the secret key for <strong id="rotateUserLabel"></strong>?</p>
+<div class="alert alert-warning d-flex align-items-start mb-0">
+<svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" fill="currentColor" class="flex-shrink-0 me-2 mt-1" viewBox="0 0 16 16">
+<path d="M7.938 2.016A.13.13 0 0 1 8.002 2a.13.13 0 0 1 .063.016.146.146 0 0 1 .054.057l6.857 11.667c.036.06.035.124.002.183a.163.163 0 0 1-.054.06.116.116 0 0 1-.066.017H1.146a.115.115 0 0 1-.066-.017.163.163 0 0 1-.054-.06.176.176 0 0 1 .002-.183L7.884 2.073a.147.147 0 0 1 .054-.057zm1.044-.45a1.13 1.13 0 0 0-1.96 0L.165 13.233c-.457.778.091 1.767.98 1.767h13.713c.889 0 1.438-.99.98-1.767L8.982 1.566z"/>
+<path d="M7.002 12a1 1 0 1 1 2 0 1 1 0 0 1-2 0zM7.1 5.995a.905.905 0 1 1 1.8 0l-.35 3.507a.552.552 0 0 1-1.1 0L7.1 5.995z"/>
+</svg>
+<div>The old secret key will stop working immediately. Update any applications using it.</div>
 </div>
 </div>
 <div class="modal-body d-none" id="rotateSecretResult">
-<p class="mb-2">Secret rotated successfully!</p>
-<div class="input-group mb-3">
-<input type="text" class="form-control font-monospace" id="newSecretKey" readonly>
-<button class="btn btn-outline-primary" type="button" id="copyNewSecret">Copy</button>
-</div>
-<p class="small text-muted mb-0">Copy this now. It will not be shown again.</p>
+<div class="alert alert-success d-flex align-items-center mb-3">
+<svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" fill="currentColor" class="flex-shrink-0 me-2" viewBox="0 0 16 16">
+<path d="M16 8A8 8 0 1 1 0 8a8 8 0 0 1 16 0zm-3.97-3.03a.75.75 0 0 0-1.08.022L7.477 9.417 5.384 7.323a.75.75 0 0 0-1.06 1.06L6.97 11.03a.75.75 0 0 0 1.079-.02l3.992-4.99a.75.75 0 0 0-.01-1.05z"/>
+</svg>
+<div>Secret rotated successfully!</div>
+</div>
+<label class="form-label fw-medium">New Secret Key</label>
+<div class="input-group">
+<input type="text" class="form-control font-monospace bg-body-tertiary" id="newSecretKey" readonly>
+<button class="btn btn-outline-primary" type="button" id="copyNewSecret">
+<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" viewBox="0 0 16 16">
+<path d="M4 1.5H3a2 2 0 0 0-2 2V14a2 2 0 0 0 2 2h10a2 2 0 0 0 2-2V3.5a2 2 0 0 0-2-2h-1v1h1a1 1 0 0 1 1 1V14a1 1 0 0 1-1 1H3a1 1 0 0 1-1-1V3.5a1 1 0 0 1 1-1h1v-1z"/>
+<path d="M9.5 1a.5.5 0 0 1 .5.5v1a.5.5 0 0 1-.5.5h-3a.5.5 0 0 1-.5-.5v-1a.5.5 0 0 1 .5-.5h3zm-3-1A1.5 1.5 0 0 0 5 1.5v1A1.5 1.5 0 0 0 6.5 4h3A1.5 1.5 0 0 0 11 2.5v-1A1.5 1.5 0 0 0 9.5 0h-3z"/>
+</svg>
+</button>
+</div>
+<p class="form-text mb-0">Copy this now. It will not be shown again.</p>
 </div>
 <div class="modal-footer">
 <button type="button" class="btn btn-outline-secondary" data-bs-dismiss="modal" id="rotateCancelBtn">Cancel</button>
-<button type="button" class="btn btn-primary" id="confirmRotateBtn">Rotate Key</button>
+<button type="button" class="btn btn-warning" id="confirmRotateBtn">
+<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-1" viewBox="0 0 16 16">
+<path fill-rule="evenodd" d="M8 3a5 5 0 1 0 4.546 2.914.5.5 0 0 1 .908-.417A6 6 0 1 1 8 2v1z"/>
+<path d="M8 4.466V.534a.25.25 0 0 1 .41-.192l2.36 1.966c.12.1.12.284 0 .384L8.41 4.658A.25.25 0 0 1 8 4.466z"/>
+</svg>
+Rotate Key
+</button>
 <button type="button" class="btn btn-primary d-none" data-bs-dismiss="modal" id="rotateDoneBtn">Done</button>
 </div>
 </div>
@@ -317,6 +456,80 @@
 {{ super() }}
 <script>
 (function () {
+function setupJsonAutoIndent(textarea) {
+if (!textarea) return;
+
+textarea.addEventListener('keydown', function(e) {
+if (e.key === 'Enter') {
+e.preventDefault();
+
+const start = this.selectionStart;
+const end = this.selectionEnd;
+const value = this.value;
+
+const lineStart = value.lastIndexOf('\n', start - 1) + 1;
+const currentLine = value.substring(lineStart, start);
+
+const indentMatch = currentLine.match(/^(\s*)/);
+let indent = indentMatch ? indentMatch[1] : '';
+
+const trimmedLine = currentLine.trim();
+const lastChar = trimmedLine.slice(-1);
+
+const charBeforeCursor = value.substring(start - 1, start).trim();
+
+let newIndent = indent;
+let insertAfter = '';
+
+if (lastChar === '{' || lastChar === '[') {
+newIndent = indent + '  ';
+
+const charAfterCursor = value.substring(start, start + 1).trim();
+if ((lastChar === '{' && charAfterCursor === '}') ||
+(lastChar === '[' && charAfterCursor === ']')) {
+insertAfter = '\n' + indent;
+}
+} else if (lastChar === ',' || lastChar === ':') {
+newIndent = indent;
+}
+
+const insertion = '\n' + newIndent + insertAfter;
+const newValue = value.substring(0, start) + insertion + value.substring(end);
+
+this.value = newValue;
+
+const newCursorPos = start + 1 + newIndent.length;
+this.selectionStart = this.selectionEnd = newCursorPos;
+
+this.dispatchEvent(new Event('input', { bubbles: true }));
+}
+
+if (e.key === 'Tab') {
+e.preventDefault();
+const start = this.selectionStart;
+const end = this.selectionEnd;
+
+if (e.shiftKey) {
+const lineStart = this.value.lastIndexOf('\n', start - 1) + 1;
+const lineContent = this.value.substring(lineStart, start);
+if (lineContent.startsWith('  ')) {
+this.value = this.value.substring(0, lineStart) +
+this.value.substring(lineStart + 2);
+this.selectionStart = this.selectionEnd = Math.max(lineStart, start - 2);
+}
+} else {
+this.value = this.value.substring(0, start) + '  ' + this.value.substring(end);
+this.selectionStart = this.selectionEnd = start + 2;
+}
+
+this.dispatchEvent(new Event('input', { bubbles: true }));
+}
+});
+}
+
+setupJsonAutoIndent(document.getElementById('policyEditorDocument'));
+setupJsonAutoIndent(document.getElementById('createUserPolicies'));
+
 const currentUserKey = {{ principal.access_key | tojson }};
 const configCopyButtons = document.querySelectorAll('.config-copy');
 configCopyButtons.forEach((button) => {
@@ -357,7 +570,6 @@
 const iamUsersData = document.getElementById('iamUsersJson');
 const users = iamUsersData ? JSON.parse(iamUsersData.textContent || '[]') : [];

-// Policy Editor Logic
 const policyModalEl = document.getElementById('policyEditorModal');
 const policyModal = new bootstrap.Modal(policyModalEl);
 const userLabelEl = document.getElementById('policyEditorUserLabel');
@@ -379,7 +591,7 @@
 full: [
 {
 bucket: '*',
-actions: ['list', 'read', 'write', 'delete', 'share', 'policy', 'iam:list_users', 'iam:*'],
+actions: ['list', 'read', 'write', 'delete', 'share', 'policy', 'replication', 'iam:list_users', 'iam:*'],
 },
 ],
 readonly: [
@@ -404,6 +616,39 @@
 button.addEventListener('click', () => applyTemplate(button.dataset.policyTemplate));
 });

+const createUserPoliciesEl = document.getElementById('createUserPolicies');
+const createTemplateButtons = document.querySelectorAll('[data-create-policy-template]');
+
+const applyCreateTemplate = (name) => {
+const templates = {
+full: [
+{
+bucket: '*',
+actions: ['list', 'read', 'write', 'delete', 'share', 'policy', 'replication', 'iam:list_users', 'iam:*'],
+},
+],
+readonly: [
+{
+bucket: '*',
+actions: ['list', 'read'],
+},
+],
+writer: [
+{
+bucket: '*',
+actions: ['list', 'read', 'write'],
+},
+],
+};
+if (templates[name] && createUserPoliciesEl) {
+createUserPoliciesEl.value = JSON.stringify(templates[name], null, 2);
+}
+};
+
+createTemplateButtons.forEach((button) => {
+button.addEventListener('click', () => applyCreateTemplate(button.dataset.createPolicyTemplate));
+});
+
 formEl?.addEventListener('submit', (event) => {
 const key = userInputEl.value;
 if (!key) {
@@ -427,7 +672,6 @@
 });
 });

-// Edit User Logic
 const editUserModal = new bootstrap.Modal(document.getElementById('editUserModal'));
 const editUserForm = document.getElementById('editUserForm');
 const editUserDisplayName = document.getElementById('editUserDisplayName');
@@ -442,7 +686,6 @@
|
|||||||
});
|
});
|
||||||
});
|
});
|
||||||
|
|
||||||
// Delete User Logic
|
|
||||||
const deleteUserModal = new bootstrap.Modal(document.getElementById('deleteUserModal'));
|
const deleteUserModal = new bootstrap.Modal(document.getElementById('deleteUserModal'));
|
||||||
const deleteUserForm = document.getElementById('deleteUserForm');
|
const deleteUserForm = document.getElementById('deleteUserForm');
|
||||||
const deleteUserLabel = document.getElementById('deleteUserLabel');
|
const deleteUserLabel = document.getElementById('deleteUserLabel');
|
||||||
@@ -464,7 +707,6 @@
|
|||||||
});
|
});
|
||||||
});
|
});
|
||||||
|
|
||||||
// Rotate Secret Logic
|
|
||||||
const rotateSecretModal = new bootstrap.Modal(document.getElementById('rotateSecretModal'));
|
const rotateSecretModal = new bootstrap.Modal(document.getElementById('rotateSecretModal'));
|
||||||
const rotateUserLabel = document.getElementById('rotateUserLabel');
|
const rotateUserLabel = document.getElementById('rotateUserLabel');
|
||||||
const confirmRotateBtn = document.getElementById('confirmRotateBtn');
|
const confirmRotateBtn = document.getElementById('confirmRotateBtn');
|
||||||
@@ -474,7 +716,6 @@
|
|||||||
const rotateSecretResult = document.getElementById('rotateSecretResult');
|
const rotateSecretResult = document.getElementById('rotateSecretResult');
|
||||||
const newSecretKeyInput = document.getElementById('newSecretKey');
|
const newSecretKeyInput = document.getElementById('newSecretKey');
|
||||||
const copyNewSecretBtn = document.getElementById('copyNewSecret');
|
const copyNewSecretBtn = document.getElementById('copyNewSecret');
|
||||||
const rotateSelfWarning = document.getElementById('rotateSelfWarning');
|
|
||||||
let currentRotateKey = null;
|
let currentRotateKey = null;
|
||||||
|
|
||||||
document.querySelectorAll('[data-rotate-user]').forEach(btn => {
|
document.querySelectorAll('[data-rotate-user]').forEach(btn => {
|
||||||
@@ -482,13 +723,6 @@
|
|||||||
currentRotateKey = btn.dataset.rotateUser;
|
currentRotateKey = btn.dataset.rotateUser;
|
||||||
rotateUserLabel.textContent = currentRotateKey;
|
rotateUserLabel.textContent = currentRotateKey;
|
||||||
|
|
||||||
if (currentRotateKey === currentUserKey) {
|
|
||||||
rotateSelfWarning.classList.remove('d-none');
|
|
||||||
} else {
|
|
||||||
rotateSelfWarning.classList.add('d-none');
|
|
||||||
}
|
|
||||||
|
|
||||||
// Reset Modal State
|
|
||||||
rotateSecretConfirm.classList.remove('d-none');
|
rotateSecretConfirm.classList.remove('d-none');
|
||||||
rotateSecretResult.classList.add('d-none');
|
rotateSecretResult.classList.add('d-none');
|
||||||
confirmRotateBtn.classList.remove('d-none');
|
confirmRotateBtn.classList.remove('d-none');
|
||||||
@@ -523,7 +757,6 @@
|
|||||||
const data = await response.json();
|
const data = await response.json();
|
||||||
newSecretKeyInput.value = data.secret_key;
|
newSecretKeyInput.value = data.secret_key;
|
||||||
|
|
||||||
// Show Result
|
|
||||||
rotateSecretConfirm.classList.add('d-none');
|
rotateSecretConfirm.classList.add('d-none');
|
||||||
rotateSecretResult.classList.remove('d-none');
|
rotateSecretResult.classList.remove('d-none');
|
||||||
confirmRotateBtn.classList.add('d-none');
|
confirmRotateBtn.classList.add('d-none');
|
||||||
@@ -531,7 +764,9 @@
|
|||||||
rotateDoneBtn.classList.remove('d-none');
|
rotateDoneBtn.classList.remove('d-none');
|
||||||
|
|
||||||
} catch (err) {
|
} catch (err) {
|
||||||
alert(err.message);
|
if (window.showToast) {
|
||||||
|
window.showToast(err.message, 'Error', 'danger');
|
||||||
|
}
|
||||||
rotateSecretModal.hide();
|
rotateSecretModal.hide();
|
||||||
} finally {
|
} finally {
|
||||||
confirmRotateBtn.disabled = false;
|
confirmRotateBtn.disabled = false;
|
||||||
|
|||||||
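The policy templates in the hunks above are plain data that the UI serializes into a textarea with `JSON.stringify(templates[name], null, 2)`. A minimal Python sketch of the same template table and serialization (the function name `render_policy_template` is illustrative, not part of the codebase), useful for checking that the new `replication` action survives a round-trip:

```python
import json

# Policy templates mirroring the ones defined in the diff above; the "full"
# template now includes the newly added "replication" action.
TEMPLATES = {
    "full": [{"bucket": "*",
              "actions": ["list", "read", "write", "delete", "share", "policy",
                          "replication", "iam:list_users", "iam:*"]}],
    "readonly": [{"bucket": "*", "actions": ["list", "read"]}],
    "writer": [{"bucket": "*", "actions": ["list", "read", "write"]}],
}

def render_policy_template(name):
    """Serialize a named template the way the JS does with
    JSON.stringify(templates[name], null, 2); None for unknown names."""
    policy = TEMPLATES.get(name)
    return json.dumps(policy, indent=2) if policy is not None else None
```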
@@ -1,29 +1,102 @@
 {% extends "base.html" %}
 {% block content %}
-<div class="row align-items-center mt-5 g-4">
-  <div class="col-lg-6">
-    <h1 class="display-6 mb-3">Welcome to <span class="text-primary">MyFSIO</span></h1>
-    <p class="lead text-muted">A developer-friendly object storage solution for prototyping and local development.</p>
-    <p class="text-muted mb-0">Need help getting started? Review the project README and docs for bootstrap credentials, IAM walkthroughs, and bucket policy samples.</p>
+<div class="row align-items-center justify-content-center min-vh-75 g-5">
+  <div class="col-lg-5 d-none d-lg-block">
+    <div class="text-center mb-4">
+      <div class="position-relative d-inline-block mb-4">
+        <div class="login-hero-icon">
+          <svg xmlns="http://www.w3.org/2000/svg" width="64" height="64" fill="currentColor" class="bi bi-cloud-arrow-up" viewBox="0 0 16 16">
+            <path fill-rule="evenodd" d="M7.646 5.146a.5.5 0 0 1 .708 0l2 2a.5.5 0 0 1-.708.708L8.5 6.707V10.5a.5.5 0 0 1-1 0V6.707L6.354 7.854a.5.5 0 1 1-.708-.708l2-2z"/>
+            <path d="M4.406 3.342A5.53 5.53 0 0 1 8 2c2.69 0 4.923 2 5.166 4.579C14.758 6.804 16 8.137 16 9.773 16 11.569 14.502 13 12.687 13H3.781C1.708 13 0 11.366 0 9.318c0-1.763 1.266-3.223 2.942-3.593.143-.863.698-1.723 1.464-2.383z"/>
+          </svg>
+        </div>
+      </div>
+      <h1 class="display-5 fw-bold mb-3">Welcome to <span class="text-gradient">MyFSIO</span></h1>
+      <p class="lead text-muted mb-4">A developer-friendly object storage solution for prototyping and local development.</p>
+      <div class="d-flex justify-content-center gap-4 text-muted">
+        <div class="text-center">
+          <div class="h4 fw-bold text-gradient mb-1">S3</div>
+          <small>Compatible</small>
+        </div>
+        <div class="vr"></div>
+        <div class="text-center">
+          <div class="h4 fw-bold text-gradient mb-1">Fast</div>
+          <small>Local Storage</small>
+        </div>
+        <div class="vr"></div>
+        <div class="text-center">
+          <div class="h4 fw-bold text-gradient mb-1">Secure</div>
+          <small>IAM Support</small>
+        </div>
+      </div>
+    </div>
   </div>
-  <div class="col-lg-5 ms-auto">
-    <div class="card shadow-sm">
-      <div class="card-body">
-        <h2 class="h4 mb-3">Sign in</h2>
+  <div class="col-lg-5 col-md-8 col-sm-10">
+    <div class="card shadow-lg login-card position-relative">
+      <div class="card-body p-4 p-md-5">
+        <div class="text-center mb-4 d-lg-none">
+          <img src="{{ url_for('static', filename='images/MyFSIO.png') }}" alt="MyFSIO" width="48" height="48" class="mb-3 rounded-3">
+          <h2 class="h4 fw-bold">MyFSIO</h2>
+        </div>
+        <h2 class="h4 mb-1 d-none d-lg-block">Sign in</h2>
+        <p class="text-muted mb-4 d-none d-lg-block">Enter your credentials to continue</p>
         <form method="post" action="{{ url_for('ui.login') }}">
           <input type="hidden" name="csrf_token" value="{{ csrf_token() }}" />
           <div class="mb-3">
-            <label class="form-label">Access key</label>
-            <input class="form-control" type="text" name="access_key" required autofocus />
+            <label class="form-label fw-medium">Access key</label>
+            <div class="input-group">
+              <span class="input-group-text bg-transparent">
+                <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-key text-muted" viewBox="0 0 16 16">
+                  <path d="M0 8a4 4 0 0 1 7.465-2H14a.5.5 0 0 1 .354.146l1.5 1.5a.5.5 0 0 1 0 .708l-1.5 1.5a.5.5 0 0 1-.708 0L13 9.207l-.646.647a.5.5 0 0 1-.708 0L11 9.207l-.646.647a.5.5 0 0 1-.708 0L9 9.207l-.646.647A.5.5 0 0 1 8 10h-.535A4 4 0 0 1 0 8zm4-3a3 3 0 1 0 2.712 4.285A.5.5 0 0 1 7.163 9h.63l.853-.854a.5.5 0 0 1 .708 0l.646.647.646-.647a.5.5 0 0 1 .708 0l.646.647.646-.647a.5.5 0 0 1 .708 0l.646.647.793-.793-1-1h-6.63a.5.5 0 0 1-.451-.285A3 3 0 0 0 4 5z"/>
+                  <path d="M4 8a1 1 0 1 1-2 0 1 1 0 0 1 2 0z"/>
+                </svg>
+              </span>
+              <input class="form-control" type="text" name="access_key" required autofocus placeholder="Enter your access key" />
+            </div>
           </div>
           <div class="mb-4">
-            <label class="form-label">Secret key</label>
-            <input class="form-control" type="password" name="secret_key" required />
+            <label class="form-label fw-medium">Secret key</label>
+            <div class="input-group">
+              <span class="input-group-text bg-transparent">
+                <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-shield-lock text-muted" viewBox="0 0 16 16">
+                  <path d="M5.338 1.59a61.44 61.44 0 0 0-2.837.856.481.481 0 0 0-.328.39c-.554 4.157.726 7.19 2.253 9.188a10.725 10.725 0 0 0 2.287 2.233c.346.244.652.42.893.533.12.057.218.095.293.118a.55.55 0 0 0 .101.025.615.615 0 0 0 .1-.025c.076-.023.174-.061.294-.118.24-.113.547-.29.893-.533a10.726 10.726 0 0 0 2.287-2.233c1.527-1.997 2.807-5.031 2.253-9.188a.48.48 0 0 0-.328-.39c-.651-.213-1.75-.56-2.837-.855C9.552 1.29 8.531 1.067 8 1.067c-.53 0-1.552.223-2.662.524zM5.072.56C6.157.265 7.31 0 8 0s1.843.265 2.928.56c1.11.3 2.229.655 2.887.87a1.54 1.54 0 0 1 1.044 1.262c.596 4.477-.787 7.795-2.465 9.99a11.775 11.775 0 0 1-2.517 2.453 7.159 7.159 0 0 1-1.048.625c-.28.132-.581.24-.829.24s-.548-.108-.829-.24a7.158 7.158 0 0 1-1.048-.625 11.777 11.777 0 0 1-2.517-2.453C1.928 10.487.545 7.169 1.141 2.692A1.54 1.54 0 0 1 2.185 1.43 62.456 62.456 0 0 1 5.072.56z"/>
+                  <path d="M9.5 6.5a1.5 1.5 0 0 1-1 1.415l.385 1.99a.5.5 0 0 1-.491.595h-.788a.5.5 0 0 1-.49-.595l.384-1.99a1.5 1.5 0 1 1 2-1.415z"/>
+                </svg>
+              </span>
+              <input class="form-control" type="password" name="secret_key" required placeholder="Enter your secret key" />
+            </div>
           </div>
-          <button class="btn btn-primary w-100" type="submit">Continue</button>
+          <button class="btn btn-primary btn-lg w-100 fw-medium" type="submit">
+            Sign in
+            <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-arrow-right ms-2" viewBox="0 0 16 16">
+              <path fill-rule="evenodd" d="M1 8a.5.5 0 0 1 .5-.5h11.793l-3.147-3.146a.5.5 0 0 1 .708-.708l4 4a.5.5 0 0 1 0 .708l-4 4a.5.5 0 0 1-.708-.708L13.293 8.5H1.5A.5.5 0 0 1 1 8z"/>
+            </svg>
+          </button>
         </form>
+        <div class="text-center mt-4">
+          <small class="text-muted">Need help? Check the <a href="#" class="text-decoration-none">documentation</a></small>
+        </div>
       </div>
     </div>
   </div>
 </div>
+
+<style>
+  .min-vh-75 { min-height: 75vh; }
+  .login-hero-icon {
+    width: 120px;
+    height: 120px;
+    display: flex;
+    align-items: center;
+    justify-content: center;
+    background: linear-gradient(135deg, rgba(59, 130, 246, 0.15) 0%, rgba(139, 92, 246, 0.15) 100%);
+    border-radius: 50%;
+    color: #3b82f6;
+    margin: 0 auto;
+  }
+  [data-theme='dark'] .login-hero-icon {
+    background: linear-gradient(135deg, rgba(59, 130, 246, 0.25) 0%, rgba(139, 92, 246, 0.25) 100%);
+    color: #60a5fa;
+  }
+</style>
 {% endblock %}
templates/metrics.html (new file, 272 lines)
@@ -0,0 +1,272 @@
+{% extends "base.html" %}
+{% block content %}
+<div class="d-flex justify-content-between align-items-center mb-4">
+  <div>
+    <h1 class="h3 mb-1 fw-bold">System Metrics</h1>
+    <p class="text-muted mb-0">Real-time server performance and storage usage</p>
+  </div>
+  <div class="d-flex gap-2 align-items-center">
+    <span class="d-flex align-items-center gap-2 text-muted small">
+      <span class="live-indicator"></span>
+      Live
+    </span>
+    <button class="btn btn-outline-secondary btn-sm" onclick="window.location.reload()">
+      <svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="bi bi-arrow-clockwise me-1" viewBox="0 0 16 16">
+        <path fill-rule="evenodd" d="M8 3a5 5 0 1 0 4.546 2.914.5.5 0 0 1 .908-.417A6 6 0 1 1 8 2v1z"/>
+        <path d="M8 4.466V.534a.25.25 0 0 1 .41-.192l2.36 1.966c.12.1.12.284 0 .384L8.41 4.658A.25.25 0 0 1 8 4.466z"/>
+      </svg>
+      Refresh
+    </button>
+  </div>
+</div>
+
+<div class="row g-4 mb-4">
+  <div class="col-md-6 col-xl-3">
+    <div class="card shadow-sm h-100 border-0 metric-card">
+      <div class="card-body">
+        <div class="d-flex align-items-center justify-content-between mb-3">
+          <h6 class="card-subtitle text-muted text-uppercase small fw-bold mb-0">CPU Usage</h6>
+          <div class="icon-box bg-primary-subtle text-primary rounded-circle p-2">
+            <svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" class="bi bi-cpu" viewBox="0 0 16 16">
+              <path d="M5 0a.5.5 0 0 1 .5.5V2h1V.5a.5.5 0 0 1 1 0V2h1V.5a.5.5 0 0 1 1 0V2h1V.5a.5.5 0 0 1 1 0V2A2.5 2.5 0 0 1 14 4.5h1.5a.5.5 0 0 1 0 1H14v1h1.5a.5.5 0 0 1 0 1H14v1h1.5a.5.5 0 0 1 0 1H14v1h1.5a.5.5 0 0 1 0 1H14a2.5 2.5 0 0 1-2.5 2.5v1.5a.5.5 0 0 1-1 0V14h-1v1.5a.5.5 0 0 1-1 0V14h-1v1.5a.5.5 0 0 1-1 0V14h-1v1.5a.5.5 0 0 1-1 0V14A2.5 2.5 0 0 1 2 11.5H.5a.5.5 0 0 1 0-1H2v-1H.5a.5.5 0 0 1 0-1H2v-1H.5a.5.5 0 0 1 0-1H2v-1H.5a.5.5 0 0 1 0-1H2A2.5 2.5 0 0 1 4.5 2V.5a.5.5 0 0 1 .5-.5zM5 4H5v8h6V4H5z"/>
+            </svg>
+          </div>
+        </div>
+        <h2 class="display-6 fw-bold mb-2 stat-value">{{ cpu_percent }}<span class="fs-4 fw-normal text-muted">%</span></h2>
+        <div class="progress" style="height: 8px; border-radius: 4px;">
+          <div class="progress-bar {% if cpu_percent > 80 %}bg-danger{% elif cpu_percent > 50 %}bg-warning{% else %}bg-primary{% endif %}" role="progressbar" style="width: {{ cpu_percent }}%"></div>
+        </div>
+        <div class="mt-2 d-flex justify-content-between">
+          <small class="text-muted">Current load</small>
+          <small class="{% if cpu_percent > 80 %}text-danger{% elif cpu_percent > 50 %}text-warning{% else %}text-success{% endif %}">
+            {% if cpu_percent > 80 %}High{% elif cpu_percent > 50 %}Medium{% else %}Normal{% endif %}
+          </small>
+        </div>
+      </div>
+    </div>
+  </div>
+
+  <div class="col-md-6 col-xl-3">
+    <div class="card shadow-sm h-100 border-0 metric-card">
+      <div class="card-body">
+        <div class="d-flex align-items-center justify-content-between mb-3">
+          <h6 class="card-subtitle text-muted text-uppercase small fw-bold mb-0">Memory</h6>
+          <div class="icon-box bg-info-subtle text-info rounded-circle p-2">
+            <svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" class="bi bi-memory" viewBox="0 0 16 16">
+              <path d="M1 3a1 1 0 0 0-1 1v8a1 1 0 0 0 1 1h4.586a1 1 0 0 0 .707-.293l.353-.353a.5.5 0 0 1 .708 0l.353.353a1 1 0 0 0 .707.293H15a1 1 0 0 0 1-1V4a1 1 0 0 0-1-1H1Zm.5 1h3a.5.5 0 0 1 .5.5v4a.5.5 0 0 1-.5.5h-3a.5.5 0 0 1-.5-.5v-4a.5.5 0 0 1 .5-.5Zm5 0h3a.5.5 0 0 1 .5.5v4a.5.5 0 0 1-.5.5h-3a.5.5 0 0 1-.5-.5v-4a.5.5 0 0 1 .5-.5Zm4.5.5a.5.5 0 0 1 .5-.5h3a.5.5 0 0 1 .5.5v4a.5.5 0 0 1-.5.5h-3a.5.5 0 0 1-.5-.5v-4Z"/>
+            </svg>
+          </div>
+        </div>
+        <h2 class="display-6 fw-bold mb-2 stat-value">{{ memory.percent }}<span class="fs-4 fw-normal text-muted">%</span></h2>
+        <div class="progress" style="height: 8px; border-radius: 4px;">
+          <div class="progress-bar bg-info" role="progressbar" style="width: {{ memory.percent }}%"></div>
+        </div>
+        <div class="mt-2 d-flex justify-content-between">
+          <small class="text-muted">{{ memory.used }} used</small>
+          <small class="text-muted">{{ memory.total }} total</small>
+        </div>
+      </div>
+    </div>
+  </div>
+
+  <div class="col-md-6 col-xl-3">
+    <div class="card shadow-sm h-100 border-0 metric-card">
+      <div class="card-body">
+        <div class="d-flex align-items-center justify-content-between mb-3">
+          <h6 class="card-subtitle text-muted text-uppercase small fw-bold mb-0">Disk Space</h6>
+          <div class="icon-box bg-warning-subtle text-warning rounded-circle p-2">
+            <svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" class="bi bi-hdd" viewBox="0 0 16 16">
+              <path d="M4.5 11a.5.5 0 1 0 0-1 .5.5 0 0 0 0 1zM3 10.5a.5.5 0 1 1-1 0 .5.5 0 0 1 1 0z"/>
+              <path d="M16 11a2 2 0 0 1-2 2H2a2 2 0 0 1-2-2V9.51c0-.418.105-.83.305-1.197l2.472-4.531A1.5 1.5 0 0 1 4.094 3h7.812a1.5 1.5 0 0 1 1.317.782l2.472 4.53c.2.368.305.78.305 1.198V11zM3.655 4.26 1.592 8.043C1.724 8.014 1.86 8 2 8h12c.14 0 .276.014.408.042L12.345 4.26a.5.5 0 0 0-.439-.26H4.094a.5.5 0 0 0-.439.26zM1 10v1a1 1 0 0 0 1 1h12a1 1 0 0 0 1-1v-1a1 1 0 0 0-1-1H2a1 1 0 0 0-1 1z"/>
+            </svg>
+          </div>
+        </div>
+        <h2 class="display-6 fw-bold mb-2 stat-value">{{ disk.percent }}<span class="fs-4 fw-normal text-muted">%</span></h2>
+        <div class="progress" style="height: 8px; border-radius: 4px;">
+          <div class="progress-bar {% if disk.percent > 90 %}bg-danger{% elif disk.percent > 75 %}bg-warning{% else %}bg-warning{% endif %}" role="progressbar" style="width: {{ disk.percent }}%"></div>
+        </div>
+        <div class="mt-2 d-flex justify-content-between">
+          <small class="text-muted">{{ disk.free }} free</small>
+          <small class="text-muted">{{ disk.total }} total</small>
+        </div>
+      </div>
+    </div>
+  </div>
+
+  <div class="col-md-6 col-xl-3">
+    <div class="card shadow-sm h-100 border-0 metric-card">
+      <div class="card-body">
+        <div class="d-flex align-items-center justify-content-between mb-3">
+          <h6 class="card-subtitle text-muted text-uppercase small fw-bold mb-0">Storage</h6>
+          <div class="icon-box bg-success-subtle text-success rounded-circle p-2">
+            <svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" class="bi bi-database" viewBox="0 0 16 16">
+              <path d="M4.318 2.687C5.234 2.271 6.536 2 8 2s2.766.27 3.682.687C12.644 3.125 13 3.627 13 4c0 .374-.356.875-1.318 1.313C10.766 5.729 9.464 6 8 6s-2.766-.27-3.682-.687C3.356 4.875 3 4.373 3 4c0-.374.356-.875 1.318-1.313ZM13 5.698V7c0 .374-.356.875-1.318 1.313C10.766 8.729 9.464 9 8 9s-2.766-.27-3.682-.687C3.356 7.875 3 7.373 3 7V5.698c.271.202.58.378.904.525C4.978 6.711 6.427 7 8 7s3.022-.289 4.096-.777A4.92 4.92 0 0 0 13 5.698ZM14 4c0-1.007-.875-1.755-1.904-2.223C11.022 1.289 9.573 1 8 1s-3.022.289-4.096.777C2.875 2.245 2 2.993 2 4v9c0 1.007.875 1.755 1.904 2.223C4.978 15.71 6.427 16 8 16s3.022-.289 4.096-.777C13.125 14.755 14 14.007 14 13V4Zm-1 4.698V10c0 .374-.356.875-1.318 1.313C10.766 11.729 9.464 12 8 12s-2.766-.27-3.682-.687C3.356 10.875 3 10.373 3 10V8.698c.271.202.58.378.904.525C4.978 9.71 6.427 10 8 10s3.022-.289 4.096-.777A4.92 4.92 0 0 0 13 8.698Zm0 3V13c0 .374-.356.875-1.318 1.313C10.766 14.729 9.464 15 8 15s-2.766-.27-3.682-.687C3.356 13.875 3 13.373 3 13v-1.302c.271.202.58.378.904.525C4.978 12.71 6.427 13 8 13s3.022-.289 4.096-.777c.324-.147.633-.323.904-.525Z"/>
+            </svg>
+          </div>
+        </div>
+        <h2 class="display-6 fw-bold mb-2 stat-value">{{ app.storage_used }}</h2>
+        <div class="d-flex gap-3 mt-3">
+          <div class="text-center flex-fill">
+            <div class="h5 fw-bold mb-0">{{ app.buckets }}</div>
+            <small class="text-muted">Buckets</small>
+          </div>
+          <div class="vr"></div>
+          <div class="text-center flex-fill">
+            <div class="h5 fw-bold mb-0">{{ app.objects }}</div>
+            <small class="text-muted">Objects</small>
+          </div>
+        </div>
+      </div>
+    </div>
+  </div>
+</div>
+
+<div class="row g-4">
+  <div class="col-lg-8">
+    <div class="card shadow-sm border-0">
+      <div class="card-header bg-transparent border-0 pt-4 px-4 d-flex justify-content-between align-items-center">
+        <h5 class="card-title mb-0 fw-semibold">System Overview</h5>
+      </div>
+      <div class="card-body p-4">
+        <div class="table-responsive">
+          <table class="table table-hover align-middle mb-0">
+            <thead>
+              <tr class="text-muted small text-uppercase">
+                <th class="fw-semibold border-0 pb-3">Resource</th>
+                <th class="fw-semibold border-0 pb-3">Value</th>
+                <th class="fw-semibold border-0 pb-3">Status</th>
+              </tr>
+            </thead>
+            <tbody>
+              <tr>
+                <td class="py-3">
+                  <div class="d-flex align-items-center gap-2">
+                    <div class="bg-secondary-subtle rounded p-2">
+                      <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-hdd-stack text-secondary" viewBox="0 0 16 16">
+                        <path d="M14 10a1 1 0 0 1 1 1v1a1 1 0 0 1-1 1H2a1 1 0 0 1-1-1v-1a1 1 0 0 1 1-1h12zM2 9a2 2 0 0 0-2 2v1a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2v-1a2 2 0 0 0-2-2H2z"/>
+                        <path d="M5 11.5a.5.5 0 1 1-1 0 .5.5 0 0 1 1 0zm-2 0a.5.5 0 1 1-1 0 .5.5 0 0 1 1 0zM14 3a1 1 0 0 1 1 1v1a1 1 0 0 1-1 1H2a1 1 0 0 1-1-1V4a1 1 0 0 1 1-1h12zM2 2a2 2 0 0 0-2 2v1a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V4a2 2 0 0 0-2-2H2z"/>
+                        <path d="M5 4.5a.5.5 0 1 1-1 0 .5.5 0 0 1 1 0zm-2 0a.5.5 0 1 1-1 0 .5.5 0 0 1 1 0z"/>
+                      </svg>
+                    </div>
+                    <span class="fw-medium">Total Disk Capacity</span>
+                  </div>
+                </td>
+                <td class="py-3 fw-semibold">{{ disk.total }}</td>
+                <td class="py-3"><span class="badge bg-secondary-subtle text-secondary">Hardware</span></td>
+              </tr>
+              <tr>
+                <td class="py-3">
+                  <div class="d-flex align-items-center gap-2">
+                    <div class="bg-success-subtle rounded p-2">
+                      <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-check-circle text-success" viewBox="0 0 16 16">
+                        <path d="M8 15A7 7 0 1 1 8 1a7 7 0 0 1 0 14zm0 1A8 8 0 1 0 8 0a8 8 0 0 0 0 16z"/>
+                        <path d="M10.97 4.97a.235.235 0 0 0-.02.022L7.477 9.417 5.384 7.323a.75.75 0 0 0-1.06 1.06L6.97 11.03a.75.75 0 0 0 1.079-.02l3.992-4.99a.75.75 0 0 0-1.071-1.05z"/>
+                      </svg>
+                    </div>
+                    <span class="fw-medium">Available Space</span>
+                  </div>
+                </td>
+                <td class="py-3 fw-semibold">{{ disk.free }}</td>
+                <td class="py-3">
+                  {% if disk.percent > 90 %}
+                  <span class="status-badge status-badge-danger badge bg-danger-subtle text-danger">
+                    <span class="status-badge-dot"></span>Critical
+                  </span>
+                  {% elif disk.percent > 75 %}
+                  <span class="status-badge status-badge-warning badge bg-warning-subtle text-warning">
+                    <span class="status-badge-dot"></span>Low
+                  </span>
+                  {% else %}
+                  <span class="status-badge status-badge-success badge bg-success-subtle text-success">
+                    <span class="status-badge-dot"></span>Good
+                  </span>
+                  {% endif %}
+                </td>
+              </tr>
+              <tr>
+                <td class="py-3">
+                  <div class="d-flex align-items-center gap-2">
+                    <div class="bg-primary-subtle rounded p-2">
+                      <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-bucket text-primary" viewBox="0 0 16 16">
+                        <path d="M2.522 5H2a.5.5 0 0 0-.494.574l1.372 9.149A1.5 1.5 0 0 0 4.36 16h7.278a1.5 1.5 0 0 0 1.483-1.277l1.373-9.149A.5.5 0 0 0 14 5h-.522A5.5 5.5 0 0 0 2.522 5zm1.005 0a4.5 4.5 0 0 1 8.945 0H3.527z"/>
+                      </svg>
+                    </div>
+                    <span class="fw-medium">MyFSIO Data</span>
+                  </div>
+                </td>
+                <td class="py-3 fw-semibold">{{ app.storage_used }}</td>
+                <td class="py-3"><span class="badge bg-primary-subtle text-primary">Application</span></td>
+              </tr>
+              <tr>
+                <td class="py-3">
+                  <div class="d-flex align-items-center gap-2">
+                    <div class="bg-info-subtle rounded p-2">
+                      <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-file-earmark text-info" viewBox="0 0 16 16">
+                        <path d="M14 4.5V14a2 2 0 0 1-2 2H4a2 2 0 0 1-2-2V2a2 2 0 0 1 2-2h5.5L14 4.5zm-3 0A1.5 1.5 0 0 1 9.5 3V1H4a1 1 0 0 0-1 1v12a1 1 0 0 0 1 1h8a1 1 0 0 0 1-1V4.5h-2z"/>
+                      </svg>
+                    </div>
+                    <span class="fw-medium">Total Objects</span>
+                  </div>
+                </td>
+                <td class="py-3 fw-semibold">{{ app.objects }}</td>
+                <td class="py-3"><span class="badge bg-info-subtle text-info">Count</span></td>
+              </tr>
+            </tbody>
+          </table>
+        </div>
+      </div>
+    </div>
+  </div>
+
+  <div class="col-lg-4">
+    {% set has_issues = (cpu_percent > 80) or (memory.percent > 85) or (disk.percent > 90) %}
+    <div class="card shadow-sm border-0 h-100 overflow-hidden" style="background: linear-gradient(135deg, {% if has_issues %}#ef4444 0%, #f97316{% else %}#3b82f6 0%, #8b5cf6{% endif %} 100%);">
+      <div class="card-body p-4 d-flex flex-column justify-content-center text-white position-relative">
+        <div class="position-absolute top-0 end-0 opacity-25" style="transform: translate(20%, -20%);">
+          <svg xmlns="http://www.w3.org/2000/svg" width="160" height="160" fill="currentColor" class="bi bi-{% if has_issues %}exclamation-triangle{% else %}cloud-check{% endif %}" viewBox="0 0 16 16">
+            {% if has_issues %}
+            <path d="M7.938 2.016A.13.13 0 0 1 8.002 2a.13.13 0 0 1 .063.016.146.146 0 0 1 .054.057l6.857 11.667c.036.06.035.124.002.183a.163.163 0 0 1-.054.06.116.116 0 0 1-.066.017H1.146a.115.115 0 0 1-.066-.017.163.163 0 0 1-.054-.06.176.176 0 0 1 .002-.183L7.884 2.073a.147.147 0 0 1 .054-.057zm1.044-.45a1.13 1.13 0 0 0-1.96 0L.165 13.233c-.457.778.091 1.767.98 1.767h13.713c.889 0 1.438-.99.98-1.767L8.982 1.566z"/>
+            <path d="M7.002 12a1 1 0 1 1 2 0 1 1 0 0 1-2 0zM7.1 5.995a.905.905 0 1 1 1.8 0l-.35 3.507a.552.552 0 0 1-1.1 0L7.1 5.995z"/>
+            {% else %}
+            <path fill-rule="evenodd" d="M10.354 6.146a.5.5 0 0 1 0 .708l-3 3a.5.5 0 0 1-.708 0l-1.5-1.5a.5.5 0 1 1 .708-.708L7 8.793l2.646-2.647a.5.5 0 0 1 .708 0z"/>
+            <path d="M4.406 3.342A5.53 5.53 0 0 1 8 2c2.69 0 4.923 2 5.166 4.579C14.758 6.804 16 8.137 16 9.773 16 11.569 14.502 13 12.687 13H3.781C1.708 13 0 11.366 0 9.318c0-1.763 1.266-3.223 2.942-3.593.143-.863.698-1.723 1.464-2.383z"/>
+            {% endif %}
+          </svg>
+        </div>
+        <div class="mb-3">
+          <span class="badge bg-white {% if has_issues %}text-danger{% else %}text-primary{% endif %} fw-semibold px-3 py-2">
+            <svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="bi bi-{% if has_issues %}exclamation-circle-fill{% else %}check-circle-fill{% endif %} me-1" viewBox="0 0 16 16">
+              {% if has_issues %}
+              <path d="M16 8A8 8 0 1 1 0 8a8 8 0 0 1 16 0zM8 4a.905.905 0 0 0-.9.995l.35 3.507a.552.552 0 0 0 1.1 0l.35-3.507A.905.905 0 0 0 8 4zm.002 6a1 1 0 1 0 0 2 1 1 0 0 0 0-2z"/>
+              {% else %}
+              <path d="M16 8A8 8 0 1 1 0 8a8 8 0 0 1 16 0zm-3.97-3.03a.75.75 0 0 0-1.08.022L7.477 9.417 5.384 7.323a.75.75 0 0 0-1.06 1.06L6.97 11.03a.75.75 0 0 0 1.079-.02l3.992-4.99a.75.75 0 0 0-.01-1.05z"/>
+              {% endif %}
+            </svg>
+            v{{ app.version }}
+          </span>
+        </div>
+        <h4 class="card-title fw-bold mb-3">System Health</h4>
+        {% if has_issues %}
+        <ul class="list-unstyled small mb-4 opacity-90">
+          {% if cpu_percent > 80 %}<li class="mb-1">CPU usage is high ({{ cpu_percent }}%)</li>{% endif %}
+          {% if memory.percent > 85 %}<li class="mb-1">Memory usage is high ({{ memory.percent }}%)</li>{% endif %}
+          {% if disk.percent > 90 %}<li class="mb-1">Disk space is critically low ({{ disk.percent }}% used)</li>{% endif %}
+        </ul>
+        {% else %}
+        <p class="card-text opacity-90 mb-4 small">All resources are within normal operating parameters.</p>
+        {% endif %}
+        <div class="d-flex gap-4">
+          <div>
+            <div class="h3 fw-bold mb-0">{{ app.uptime_days }}d</div>
+            <small class="opacity-75">Uptime</small>
+          </div>
+          <div>
+            <div class="h3 fw-bold mb-0">{{ app.buckets }}</div>
+            <small class="opacity-75">Active Buckets</small>
+          </div>
+        </div>
+      </div>
+    </div>
+  </div>
+</div>
+{% endblock %}
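The metrics template above inlines its health thresholds in Jinja conditionals. A minimal Python sketch of that same logic (the function names are illustrative; the real template computes these checks inline):

```python
# Thresholds mirrored from templates/metrics.html.
def cpu_status(cpu_percent):
    """Label used for the CPU card: >80 High, >50 Medium, else Normal."""
    if cpu_percent > 80:
        return "High"
    if cpu_percent > 50:
        return "Medium"
    return "Normal"

def disk_status(disk_percent):
    """Label used for the Available Space row: >90 Critical, >75 Low, else Good."""
    if disk_percent > 90:
        return "Critical"
    if disk_percent > 75:
        return "Low"
    return "Good"

def has_issues(cpu_percent, memory_percent, disk_percent):
    # Mirrors {% set has_issues = ... %}: any one resource over its limit
    # switches the System Health card to the warning gradient.
    return cpu_percent > 80 or memory_percent > 85 or disk_percent > 90
```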
339 tests/test_access_logging.py Normal file
@@ -0,0 +1,339 @@
import io
import json
import time
from datetime import datetime, timezone
from pathlib import Path
from unittest.mock import MagicMock, patch

import pytest

from app.access_logging import (
    AccessLogEntry,
    AccessLoggingService,
    LoggingConfiguration,
)
from app.storage import ObjectStorage


class TestAccessLogEntry:
    def test_default_values(self):
        entry = AccessLogEntry()
        assert entry.bucket_owner == "-"
        assert entry.bucket == "-"
        assert entry.remote_ip == "-"
        assert entry.requester == "-"
        assert entry.operation == "-"
        assert entry.http_status == 200
        assert len(entry.request_id) == 16

    def test_to_log_line(self):
        entry = AccessLogEntry(
            bucket_owner="owner123",
            bucket="my-bucket",
            remote_ip="192.168.1.1",
            requester="user456",
            request_id="REQ123456789012",
            operation="REST.PUT.OBJECT",
            key="test/key.txt",
            request_uri="PUT /my-bucket/test/key.txt HTTP/1.1",
            http_status=200,
            bytes_sent=1024,
            object_size=2048,
            total_time_ms=150,
            referrer="http://example.com",
            user_agent="aws-cli/2.0",
            version_id="v1",
        )
        log_line = entry.to_log_line()

        assert "owner123" in log_line
        assert "my-bucket" in log_line
        assert "192.168.1.1" in log_line
        assert "user456" in log_line
        assert "REST.PUT.OBJECT" in log_line
        assert "test/key.txt" in log_line
        assert "200" in log_line

    def test_to_dict(self):
        entry = AccessLogEntry(
            bucket_owner="owner",
            bucket="bucket",
            remote_ip="10.0.0.1",
            requester="admin",
            request_id="ABC123",
            operation="REST.GET.OBJECT",
            key="file.txt",
            request_uri="GET /bucket/file.txt HTTP/1.1",
            http_status=200,
            bytes_sent=512,
            object_size=512,
            total_time_ms=50,
        )
        result = entry.to_dict()

        assert result["bucket_owner"] == "owner"
        assert result["bucket"] == "bucket"
        assert result["remote_ip"] == "10.0.0.1"
        assert result["requester"] == "admin"
        assert result["operation"] == "REST.GET.OBJECT"
        assert result["key"] == "file.txt"
        assert result["http_status"] == 200
        assert result["bytes_sent"] == 512


class TestLoggingConfiguration:
    def test_default_values(self):
        config = LoggingConfiguration(target_bucket="log-bucket")
        assert config.target_bucket == "log-bucket"
        assert config.target_prefix == ""
        assert config.enabled is True

    def test_to_dict(self):
        config = LoggingConfiguration(
            target_bucket="logs",
            target_prefix="access-logs/",
            enabled=True,
        )
        result = config.to_dict()

        assert "LoggingEnabled" in result
        assert result["LoggingEnabled"]["TargetBucket"] == "logs"
        assert result["LoggingEnabled"]["TargetPrefix"] == "access-logs/"

    def test_from_dict(self):
        data = {
            "LoggingEnabled": {
                "TargetBucket": "my-logs",
                "TargetPrefix": "bucket-logs/",
            }
        }
        config = LoggingConfiguration.from_dict(data)

        assert config is not None
        assert config.target_bucket == "my-logs"
        assert config.target_prefix == "bucket-logs/"
        assert config.enabled is True

    def test_from_dict_no_logging(self):
        data = {}
        config = LoggingConfiguration.from_dict(data)
        assert config is None


@pytest.fixture
def storage(tmp_path: Path):
    storage_root = tmp_path / "data"
    storage_root.mkdir(parents=True)
    return ObjectStorage(storage_root)


@pytest.fixture
def logging_service(tmp_path: Path, storage):
    service = AccessLoggingService(
        tmp_path,
        flush_interval=3600,
        max_buffer_size=10,
    )
    service.set_storage(storage)
    yield service
    service.shutdown()


class TestAccessLoggingService:
    def test_get_bucket_logging_not_configured(self, logging_service):
        result = logging_service.get_bucket_logging("unconfigured-bucket")
        assert result is None

    def test_set_and_get_bucket_logging(self, logging_service):
        config = LoggingConfiguration(
            target_bucket="log-bucket",
            target_prefix="logs/",
        )
        logging_service.set_bucket_logging("source-bucket", config)

        retrieved = logging_service.get_bucket_logging("source-bucket")
        assert retrieved is not None
        assert retrieved.target_bucket == "log-bucket"
        assert retrieved.target_prefix == "logs/"

    def test_delete_bucket_logging(self, logging_service):
        config = LoggingConfiguration(target_bucket="logs")
        logging_service.set_bucket_logging("to-delete", config)
        assert logging_service.get_bucket_logging("to-delete") is not None

        logging_service.delete_bucket_logging("to-delete")
        logging_service._configs.clear()
        assert logging_service.get_bucket_logging("to-delete") is None

    def test_log_request_no_config(self, logging_service):
        logging_service.log_request(
            "no-config-bucket",
            operation="REST.GET.OBJECT",
            key="test.txt",
        )
        stats = logging_service.get_stats()
        assert stats["buffered_entries"] == 0

    def test_log_request_with_config(self, logging_service, storage):
        storage.create_bucket("log-target")

        config = LoggingConfiguration(
            target_bucket="log-target",
            target_prefix="access/",
        )
        logging_service.set_bucket_logging("source-bucket", config)

        logging_service.log_request(
            "source-bucket",
            operation="REST.PUT.OBJECT",
            key="uploaded.txt",
            remote_ip="192.168.1.100",
            requester="test-user",
            http_status=200,
            bytes_sent=1024,
        )

        stats = logging_service.get_stats()
        assert stats["buffered_entries"] == 1

    def test_log_request_disabled_config(self, logging_service):
        config = LoggingConfiguration(
            target_bucket="logs",
            enabled=False,
        )
        logging_service.set_bucket_logging("disabled-bucket", config)

        logging_service.log_request(
            "disabled-bucket",
            operation="REST.GET.OBJECT",
            key="test.txt",
        )

        stats = logging_service.get_stats()
        assert stats["buffered_entries"] == 0

    def test_flush_buffer(self, logging_service, storage):
        storage.create_bucket("flush-target")

        config = LoggingConfiguration(
            target_bucket="flush-target",
            target_prefix="logs/",
        )
        logging_service.set_bucket_logging("flush-source", config)

        for i in range(3):
            logging_service.log_request(
                "flush-source",
                operation="REST.GET.OBJECT",
                key=f"file{i}.txt",
            )

        logging_service.flush()

        objects = storage.list_objects_all("flush-target")
        assert len(objects) >= 1

    def test_auto_flush_on_buffer_size(self, logging_service, storage):
        storage.create_bucket("auto-flush-target")

        config = LoggingConfiguration(
            target_bucket="auto-flush-target",
            target_prefix="",
        )
        logging_service.set_bucket_logging("auto-source", config)

        for i in range(15):
            logging_service.log_request(
                "auto-source",
                operation="REST.GET.OBJECT",
                key=f"file{i}.txt",
            )

        objects = storage.list_objects_all("auto-flush-target")
        assert len(objects) >= 1

    def test_get_stats(self, logging_service, storage):
        storage.create_bucket("stats-target")
        config = LoggingConfiguration(target_bucket="stats-target")
        logging_service.set_bucket_logging("stats-bucket", config)

        logging_service.log_request(
            "stats-bucket",
            operation="REST.GET.OBJECT",
            key="test.txt",
        )

        stats = logging_service.get_stats()
        assert "buffered_entries" in stats
        assert "target_buckets" in stats
        assert stats["buffered_entries"] >= 1

    def test_shutdown_flushes_buffer(self, tmp_path, storage):
        storage.create_bucket("shutdown-target")

        service = AccessLoggingService(tmp_path, flush_interval=3600, max_buffer_size=100)
        service.set_storage(storage)

        config = LoggingConfiguration(target_bucket="shutdown-target")
        service.set_bucket_logging("shutdown-source", config)

        service.log_request(
            "shutdown-source",
            operation="REST.PUT.OBJECT",
            key="final.txt",
        )

        service.shutdown()

        objects = storage.list_objects_all("shutdown-target")
        assert len(objects) >= 1

    def test_logging_caching(self, logging_service):
        config = LoggingConfiguration(target_bucket="cached-logs")
        logging_service.set_bucket_logging("cached-bucket", config)

        logging_service.get_bucket_logging("cached-bucket")
        assert "cached-bucket" in logging_service._configs

    def test_log_request_all_fields(self, logging_service, storage):
        storage.create_bucket("detailed-target")

        config = LoggingConfiguration(target_bucket="detailed-target", target_prefix="detailed/")
        logging_service.set_bucket_logging("detailed-source", config)

        logging_service.log_request(
            "detailed-source",
            operation="REST.PUT.OBJECT",
            key="detailed/file.txt",
            remote_ip="10.0.0.1",
            requester="admin-user",
            request_uri="PUT /detailed-source/detailed/file.txt HTTP/1.1",
            http_status=201,
            error_code="",
            bytes_sent=2048,
            object_size=2048,
            total_time_ms=100,
            referrer="http://admin.example.com",
            user_agent="curl/7.68.0",
            version_id="v1.0",
            request_id="CUSTOM_REQ_ID",
        )

        stats = logging_service.get_stats()
        assert stats["buffered_entries"] == 1

    def test_failed_flush_returns_to_buffer(self, logging_service):
        config = LoggingConfiguration(target_bucket="nonexistent-target")
        logging_service.set_bucket_logging("fail-source", config)

        logging_service.log_request(
            "fail-source",
            operation="REST.GET.OBJECT",
            key="test.txt",
        )

        initial_count = logging_service.get_stats()["buffered_entries"]
        logging_service.flush()

        final_count = logging_service.get_stats()["buffered_entries"]
        assert final_count >= initial_count
284 tests/test_acl.py Normal file
@@ -0,0 +1,284 @@
import json
from pathlib import Path

import pytest

from app.acl import (
    Acl,
    AclGrant,
    AclService,
    ACL_PERMISSION_FULL_CONTROL,
    ACL_PERMISSION_READ,
    ACL_PERMISSION_WRITE,
    ACL_PERMISSION_READ_ACP,
    ACL_PERMISSION_WRITE_ACP,
    GRANTEE_ALL_USERS,
    GRANTEE_AUTHENTICATED_USERS,
    PERMISSION_TO_ACTIONS,
    create_canned_acl,
    CANNED_ACLS,
)


class TestAclGrant:
    def test_to_dict(self):
        grant = AclGrant(grantee="user123", permission=ACL_PERMISSION_READ)
        result = grant.to_dict()
        assert result == {"grantee": "user123", "permission": "READ"}

    def test_from_dict(self):
        data = {"grantee": "admin", "permission": "FULL_CONTROL"}
        grant = AclGrant.from_dict(data)
        assert grant.grantee == "admin"
        assert grant.permission == ACL_PERMISSION_FULL_CONTROL


class TestAcl:
    def test_to_dict(self):
        acl = Acl(
            owner="owner-user",
            grants=[
                AclGrant(grantee="owner-user", permission=ACL_PERMISSION_FULL_CONTROL),
                AclGrant(grantee=GRANTEE_ALL_USERS, permission=ACL_PERMISSION_READ),
            ],
        )
        result = acl.to_dict()
        assert result["owner"] == "owner-user"
        assert len(result["grants"]) == 2
        assert result["grants"][0]["grantee"] == "owner-user"
        assert result["grants"][1]["grantee"] == "*"

    def test_from_dict(self):
        data = {
            "owner": "the-owner",
            "grants": [
                {"grantee": "the-owner", "permission": "FULL_CONTROL"},
                {"grantee": "authenticated", "permission": "READ"},
            ],
        }
        acl = Acl.from_dict(data)
        assert acl.owner == "the-owner"
        assert len(acl.grants) == 2
        assert acl.grants[0].grantee == "the-owner"
        assert acl.grants[1].grantee == GRANTEE_AUTHENTICATED_USERS

    def test_from_dict_empty_grants(self):
        data = {"owner": "solo-owner"}
        acl = Acl.from_dict(data)
        assert acl.owner == "solo-owner"
        assert len(acl.grants) == 0

    def test_get_allowed_actions_owner(self):
        acl = Acl(owner="owner123", grants=[])
        actions = acl.get_allowed_actions("owner123", is_authenticated=True)
        assert actions == PERMISSION_TO_ACTIONS[ACL_PERMISSION_FULL_CONTROL]

    def test_get_allowed_actions_all_users(self):
        acl = Acl(
            owner="owner",
            grants=[AclGrant(grantee=GRANTEE_ALL_USERS, permission=ACL_PERMISSION_READ)],
        )
        actions = acl.get_allowed_actions(None, is_authenticated=False)
        assert "read" in actions
        assert "list" in actions
        assert "write" not in actions

    def test_get_allowed_actions_authenticated_users(self):
        acl = Acl(
            owner="owner",
            grants=[AclGrant(grantee=GRANTEE_AUTHENTICATED_USERS, permission=ACL_PERMISSION_WRITE)],
        )
        actions_authenticated = acl.get_allowed_actions("some-user", is_authenticated=True)
        assert "write" in actions_authenticated
        assert "delete" in actions_authenticated

        actions_anonymous = acl.get_allowed_actions(None, is_authenticated=False)
        assert "write" not in actions_anonymous

    def test_get_allowed_actions_specific_grantee(self):
        acl = Acl(
            owner="owner",
            grants=[
                AclGrant(grantee="user-abc", permission=ACL_PERMISSION_READ),
                AclGrant(grantee="user-xyz", permission=ACL_PERMISSION_WRITE),
            ],
        )
        abc_actions = acl.get_allowed_actions("user-abc", is_authenticated=True)
        assert "read" in abc_actions
        assert "list" in abc_actions
        assert "write" not in abc_actions

        xyz_actions = acl.get_allowed_actions("user-xyz", is_authenticated=True)
        assert "write" in xyz_actions
        assert "read" not in xyz_actions

    def test_get_allowed_actions_combined(self):
        acl = Acl(
            owner="owner",
            grants=[
                AclGrant(grantee=GRANTEE_ALL_USERS, permission=ACL_PERMISSION_READ),
                AclGrant(grantee="special-user", permission=ACL_PERMISSION_WRITE),
            ],
        )
        actions = acl.get_allowed_actions("special-user", is_authenticated=True)
        assert "read" in actions
        assert "list" in actions
        assert "write" in actions
        assert "delete" in actions


class TestCannedAcls:
    def test_private_acl(self):
        acl = create_canned_acl("private", "the-owner")
        assert acl.owner == "the-owner"
        assert len(acl.grants) == 1
        assert acl.grants[0].grantee == "the-owner"
        assert acl.grants[0].permission == ACL_PERMISSION_FULL_CONTROL

    def test_public_read_acl(self):
        acl = create_canned_acl("public-read", "owner")
        assert acl.owner == "owner"
        has_owner_full_control = any(
            g.grantee == "owner" and g.permission == ACL_PERMISSION_FULL_CONTROL for g in acl.grants
        )
        has_public_read = any(
            g.grantee == GRANTEE_ALL_USERS and g.permission == ACL_PERMISSION_READ for g in acl.grants
        )
        assert has_owner_full_control
        assert has_public_read

    def test_public_read_write_acl(self):
        acl = create_canned_acl("public-read-write", "owner")
        assert acl.owner == "owner"
        has_public_read = any(
            g.grantee == GRANTEE_ALL_USERS and g.permission == ACL_PERMISSION_READ for g in acl.grants
        )
        has_public_write = any(
            g.grantee == GRANTEE_ALL_USERS and g.permission == ACL_PERMISSION_WRITE for g in acl.grants
        )
        assert has_public_read
        assert has_public_write

    def test_authenticated_read_acl(self):
        acl = create_canned_acl("authenticated-read", "owner")
        has_authenticated_read = any(
            g.grantee == GRANTEE_AUTHENTICATED_USERS and g.permission == ACL_PERMISSION_READ for g in acl.grants
        )
        assert has_authenticated_read

    def test_unknown_canned_acl_defaults_to_private(self):
        acl = create_canned_acl("unknown-acl", "owner")
        private_acl = create_canned_acl("private", "owner")
        assert acl.to_dict() == private_acl.to_dict()


@pytest.fixture
def acl_service(tmp_path: Path):
    return AclService(tmp_path)


class TestAclService:
    def test_get_bucket_acl_not_exists(self, acl_service):
        result = acl_service.get_bucket_acl("nonexistent-bucket")
        assert result is None

    def test_set_and_get_bucket_acl(self, acl_service):
        acl = Acl(
            owner="bucket-owner",
            grants=[AclGrant(grantee="bucket-owner", permission=ACL_PERMISSION_FULL_CONTROL)],
        )
        acl_service.set_bucket_acl("my-bucket", acl)

        retrieved = acl_service.get_bucket_acl("my-bucket")
        assert retrieved is not None
        assert retrieved.owner == "bucket-owner"
        assert len(retrieved.grants) == 1

    def test_bucket_acl_caching(self, acl_service):
        acl = Acl(owner="cached-owner", grants=[])
        acl_service.set_bucket_acl("cached-bucket", acl)

        acl_service.get_bucket_acl("cached-bucket")
        assert "cached-bucket" in acl_service._bucket_acl_cache

        retrieved = acl_service.get_bucket_acl("cached-bucket")
        assert retrieved.owner == "cached-owner"

    def test_set_bucket_canned_acl(self, acl_service):
        result = acl_service.set_bucket_canned_acl("new-bucket", "public-read", "the-owner")
        assert result.owner == "the-owner"

        retrieved = acl_service.get_bucket_acl("new-bucket")
        assert retrieved is not None
        has_public_read = any(
            g.grantee == GRANTEE_ALL_USERS and g.permission == ACL_PERMISSION_READ for g in retrieved.grants
        )
        assert has_public_read

    def test_delete_bucket_acl(self, acl_service):
        acl = Acl(owner="to-delete-owner", grants=[])
        acl_service.set_bucket_acl("delete-me", acl)
        assert acl_service.get_bucket_acl("delete-me") is not None

        acl_service.delete_bucket_acl("delete-me")
        acl_service._bucket_acl_cache.clear()
        assert acl_service.get_bucket_acl("delete-me") is None

    def test_evaluate_bucket_acl_allowed(self, acl_service):
        acl = Acl(
            owner="owner",
            grants=[AclGrant(grantee=GRANTEE_ALL_USERS, permission=ACL_PERMISSION_READ)],
        )
        acl_service.set_bucket_acl("public-bucket", acl)

        result = acl_service.evaluate_bucket_acl("public-bucket", None, "read", is_authenticated=False)
        assert result is True

    def test_evaluate_bucket_acl_denied(self, acl_service):
        acl = Acl(
            owner="owner",
            grants=[AclGrant(grantee="owner", permission=ACL_PERMISSION_FULL_CONTROL)],
        )
        acl_service.set_bucket_acl("private-bucket", acl)

        result = acl_service.evaluate_bucket_acl("private-bucket", "other-user", "write", is_authenticated=True)
        assert result is False

    def test_evaluate_bucket_acl_no_acl(self, acl_service):
        result = acl_service.evaluate_bucket_acl("no-acl-bucket", "anyone", "read")
        assert result is False

    def test_get_object_acl_from_metadata(self, acl_service):
        metadata = {
            "__acl__": {
                "owner": "object-owner",
                "grants": [{"grantee": "object-owner", "permission": "FULL_CONTROL"}],
            }
        }
        result = acl_service.get_object_acl("bucket", "key", metadata)
        assert result is not None
        assert result.owner == "object-owner"

    def test_get_object_acl_no_acl_in_metadata(self, acl_service):
        metadata = {"Content-Type": "text/plain"}
        result = acl_service.get_object_acl("bucket", "key", metadata)
        assert result is None

    def test_create_object_acl_metadata(self, acl_service):
        acl = Acl(owner="obj-owner", grants=[])
        result = acl_service.create_object_acl_metadata(acl)
        assert "__acl__" in result
        assert result["__acl__"]["owner"] == "obj-owner"

    def test_evaluate_object_acl(self, acl_service):
        metadata = {
            "__acl__": {
                "owner": "obj-owner",
                "grants": [{"grantee": "*", "permission": "READ"}],
            }
        }
        result = acl_service.evaluate_object_acl(metadata, None, "read", is_authenticated=False)
        assert result is True

        result = acl_service.evaluate_object_acl(metadata, None, "write", is_authenticated=False)
        assert result is False
88 tests/test_api_multipart.py Normal file
@@ -0,0 +1,88 @@
import io
import pytest
from xml.etree.ElementTree import fromstring


@pytest.fixture
def client(app):
    return app.test_client()


@pytest.fixture
def auth_headers(app):
    return {
        "X-Access-Key": "test",
        "X-Secret-Key": "secret"
    }


def test_multipart_upload_flow(client, auth_headers):
    # 1. Create bucket
    client.put("/test-bucket", headers=auth_headers)

    # 2. Initiate Multipart Upload
    resp = client.post("/test-bucket/large-file.txt?uploads", headers=auth_headers)
    assert resp.status_code == 200
    root = fromstring(resp.data)
    upload_id = root.find("UploadId").text
    assert upload_id

    # 3. Upload Part 1
    resp = client.put(
        f"/test-bucket/large-file.txt?partNumber=1&uploadId={upload_id}",
        headers=auth_headers,
        data=b"part1"
    )
    assert resp.status_code == 200
    etag1 = resp.headers["ETag"]
    assert etag1

    # 4. Upload Part 2
    resp = client.put(
        f"/test-bucket/large-file.txt?partNumber=2&uploadId={upload_id}",
        headers=auth_headers,
        data=b"part2"
    )
    assert resp.status_code == 200
    etag2 = resp.headers["ETag"]
    assert etag2

    # 5. Complete Multipart Upload
    xml_body = f"""
    <CompleteMultipartUpload>
        <Part>
            <PartNumber>1</PartNumber>
            <ETag>{etag1}</ETag>
        </Part>
        <Part>
            <PartNumber>2</PartNumber>
            <ETag>{etag2}</ETag>
        </Part>
    </CompleteMultipartUpload>
    """
    resp = client.post(
        f"/test-bucket/large-file.txt?uploadId={upload_id}",
        headers=auth_headers,
        data=xml_body
    )
    assert resp.status_code == 200
    root = fromstring(resp.data)
    assert root.find("Key").text == "large-file.txt"

    # 6. Verify object content
    resp = client.get("/test-bucket/large-file.txt", headers=auth_headers)
    assert resp.status_code == 200
    assert resp.data == b"part1part2"


def test_abort_multipart_upload(client, auth_headers):
    client.put("/abort-bucket", headers=auth_headers)

    resp = client.post("/abort-bucket/file.txt?uploads", headers=auth_headers)
    upload_id = fromstring(resp.data).find("UploadId").text

    resp = client.delete(f"/abort-bucket/file.txt?uploadId={upload_id}", headers=auth_headers)
    assert resp.status_code == 204

    resp = client.put(
        f"/abort-bucket/file.txt?partNumber=1&uploadId={upload_id}",
        headers=auth_headers,
        data=b"data"
    )
    assert resp.status_code == 404
@@ -24,14 +24,6 @@ def test_boto3_basic_operations(live_server):
         ),
     )

-    # No need to inject custom headers anymore, as we support SigV4
-    # def _inject_headers(params, **_kwargs):
-    #     headers = params.setdefault("headers", {})
-    #     headers["X-Access-Key"] = "test"
-    #     headers["X-Secret-Key"] = "secret"
-
-    # s3.meta.events.register("before-call.s3", _inject_headers)
-
     s3.create_bucket(Bucket=bucket_name)

     try:
28 tests/test_boto3_multipart.py Normal file
@@ -0,0 +1,28 @@
import uuid
import pytest
import boto3
from botocore.client import Config

@pytest.mark.integration
def test_boto3_multipart_upload(live_server):
    bucket_name = f'mp-test-{uuid.uuid4().hex[:8]}'
    object_key = 'large-file.bin'
    s3 = boto3.client('s3', endpoint_url=live_server, aws_access_key_id='test', aws_secret_access_key='secret', region_name='us-east-1', use_ssl=False, config=Config(signature_version='s3v4', retries={'max_attempts': 1}, s3={'addressing_style': 'path'}))
    s3.create_bucket(Bucket=bucket_name)
    try:
        response = s3.create_multipart_upload(Bucket=bucket_name, Key=object_key)
        upload_id = response['UploadId']
        parts = []
        part1_data = b'A' * 1024
        part2_data = b'B' * 1024
        resp1 = s3.upload_part(Bucket=bucket_name, Key=object_key, PartNumber=1, UploadId=upload_id, Body=part1_data)
        parts.append({'PartNumber': 1, 'ETag': resp1['ETag']})
        resp2 = s3.upload_part(Bucket=bucket_name, Key=object_key, PartNumber=2, UploadId=upload_id, Body=part2_data)
        parts.append({'PartNumber': 2, 'ETag': resp2['ETag']})
        s3.complete_multipart_upload(Bucket=bucket_name, Key=object_key, UploadId=upload_id, MultipartUpload={'Parts': parts})
        obj = s3.get_object(Bucket=bucket_name, Key=object_key)
        content = obj['Body'].read()
        assert content == part1_data + part2_data
        s3.delete_object(Bucket=bucket_name, Key=object_key)
    finally:
        s3.delete_bucket(Bucket=bucket_name)
@@ -38,7 +38,7 @@ def test_unicode_bucket_and_object_names(tmp_path: Path):
     assert storage.get_object_path("unicode-test", key).exists()

     # Verify listing
-    objects = storage.list_objects("unicode-test")
+    objects = storage.list_objects_all("unicode-test")
     assert any(o.key == key for o in objects)


 def test_special_characters_in_metadata(tmp_path: Path):
726 tests/test_encryption.py Normal file
@@ -0,0 +1,726 @@
"""Tests for encryption functionality."""
from __future__ import annotations

import base64
import io
import json
import os
import secrets
import tempfile
from pathlib import Path

import pytest


class TestLocalKeyEncryption:
    """Tests for LocalKeyEncryption provider."""

    def test_create_master_key(self, tmp_path):
        """Test that master key is created if it doesn't exist."""
        from app.encryption import LocalKeyEncryption

        key_path = tmp_path / "keys" / "master.key"
        provider = LocalKeyEncryption(key_path)

        key = provider.master_key

        assert key_path.exists()
        assert len(key) == 32

    def test_load_existing_master_key(self, tmp_path):
        """Test loading an existing master key."""
        from app.encryption import LocalKeyEncryption

        key_path = tmp_path / "master.key"
        original_key = secrets.token_bytes(32)
        key_path.write_text(base64.b64encode(original_key).decode())

        provider = LocalKeyEncryption(key_path)
        loaded_key = provider.master_key

        assert loaded_key == original_key

    def test_encrypt_decrypt_roundtrip(self, tmp_path):
        """Test that data can be encrypted and decrypted correctly."""
        from app.encryption import LocalKeyEncryption

        key_path = tmp_path / "master.key"
        provider = LocalKeyEncryption(key_path)

        plaintext = b"Hello, World! This is a test message."

        result = provider.encrypt(plaintext)

        assert result.ciphertext != plaintext
        assert result.key_id == "local"
        assert len(result.nonce) == 12
        assert len(result.encrypted_data_key) > 0

        decrypted = provider.decrypt(
            result.ciphertext,
            result.nonce,
            result.encrypted_data_key,
            result.key_id,
        )

        assert decrypted == plaintext

    def test_different_data_keys_per_encryption(self, tmp_path):
        """Test that each encryption uses a different data key."""
        from app.encryption import LocalKeyEncryption

        key_path = tmp_path / "master.key"
        provider = LocalKeyEncryption(key_path)

        plaintext = b"Same message"

        result1 = provider.encrypt(plaintext)
        result2 = provider.encrypt(plaintext)

        assert result1.encrypted_data_key != result2.encrypted_data_key
        assert result1.nonce != result2.nonce
        assert result1.ciphertext != result2.ciphertext

    def test_generate_data_key(self, tmp_path):
        """Test data key generation."""
        from app.encryption import LocalKeyEncryption

        key_path = tmp_path / "master.key"
        provider = LocalKeyEncryption(key_path)

        plaintext_key, encrypted_key = provider.generate_data_key()

        assert len(plaintext_key) == 32
        assert len(encrypted_key) > 32

        decrypted_key = provider._decrypt_data_key(encrypted_key)
        assert decrypted_key == plaintext_key

    def test_decrypt_with_wrong_key_fails(self, tmp_path):
        """Test that decryption fails with wrong master key."""
        from app.encryption import LocalKeyEncryption, EncryptionError

        key_path1 = tmp_path / "master1.key"
        key_path2 = tmp_path / "master2.key"

        provider1 = LocalKeyEncryption(key_path1)
        provider2 = LocalKeyEncryption(key_path2)

        plaintext = b"Secret message"
        result = provider1.encrypt(plaintext)

        with pytest.raises(EncryptionError):
            provider2.decrypt(
                result.ciphertext,
                result.nonce,
                result.encrypted_data_key,
                result.key_id,
            )


class TestEncryptionMetadata:
    """Tests for EncryptionMetadata class."""

    def test_to_dict(self):
        """Test converting metadata to dictionary."""
        from app.encryption import EncryptionMetadata

        nonce = secrets.token_bytes(12)
        encrypted_key = secrets.token_bytes(60)

        metadata = EncryptionMetadata(
            algorithm="AES256",
            key_id="local",
            nonce=nonce,
            encrypted_data_key=encrypted_key,
        )

        result = metadata.to_dict()

        assert result["x-amz-server-side-encryption"] == "AES256"
        assert result["x-amz-encryption-key-id"] == "local"
        assert base64.b64decode(result["x-amz-encryption-nonce"]) == nonce
|
||||||
|
assert base64.b64decode(result["x-amz-encrypted-data-key"]) == encrypted_key
|
||||||
|
|
||||||
|
def test_from_dict(self):
|
||||||
|
"""Test creating metadata from dictionary."""
|
||||||
|
from app.encryption import EncryptionMetadata
|
||||||
|
|
||||||
|
nonce = secrets.token_bytes(12)
|
||||||
|
encrypted_key = secrets.token_bytes(60)
|
||||||
|
|
||||||
|
data = {
|
||||||
|
"x-amz-server-side-encryption": "AES256",
|
||||||
|
"x-amz-encryption-key-id": "local",
|
||||||
|
"x-amz-encryption-nonce": base64.b64encode(nonce).decode(),
|
||||||
|
"x-amz-encrypted-data-key": base64.b64encode(encrypted_key).decode(),
|
||||||
|
}
|
||||||
|
|
||||||
|
metadata = EncryptionMetadata.from_dict(data)
|
||||||
|
|
||||||
|
assert metadata is not None
|
||||||
|
assert metadata.algorithm == "AES256"
|
||||||
|
assert metadata.key_id == "local"
|
||||||
|
assert metadata.nonce == nonce
|
||||||
|
assert metadata.encrypted_data_key == encrypted_key
|
||||||
|
|
||||||
|
def test_from_dict_returns_none_for_unencrypted(self):
|
||||||
|
"""Test that from_dict returns None for unencrypted objects."""
|
||||||
|
from app.encryption import EncryptionMetadata
|
||||||
|
|
||||||
|
data = {"some-other-key": "value"}
|
||||||
|
|
||||||
|
metadata = EncryptionMetadata.from_dict(data)
|
||||||
|
|
||||||
|
assert metadata is None
|
||||||
|
|
||||||
|
|
||||||
|
class TestStreamingEncryptor:
|
||||||
|
"""Tests for streaming encryption."""
|
||||||
|
|
||||||
|
def test_encrypt_decrypt_stream(self, tmp_path):
|
||||||
|
"""Test streaming encryption and decryption."""
|
||||||
|
from app.encryption import LocalKeyEncryption, StreamingEncryptor
|
||||||
|
|
||||||
|
key_path = tmp_path / "master.key"
|
||||||
|
provider = LocalKeyEncryption(key_path)
|
||||||
|
encryptor = StreamingEncryptor(provider, chunk_size=1024)
|
||||||
|
|
||||||
|
original_data = b"A" * 5000 + b"B" * 5000 + b"C" * 5000
|
||||||
|
stream = io.BytesIO(original_data)
|
||||||
|
|
||||||
|
encrypted_stream, metadata = encryptor.encrypt_stream(stream)
|
||||||
|
encrypted_data = encrypted_stream.read()
|
||||||
|
|
||||||
|
assert encrypted_data != original_data
|
||||||
|
assert metadata.algorithm == "AES256"
|
||||||
|
|
||||||
|
encrypted_stream = io.BytesIO(encrypted_data)
|
||||||
|
decrypted_stream = encryptor.decrypt_stream(encrypted_stream, metadata)
|
||||||
|
decrypted_data = decrypted_stream.read()
|
||||||
|
|
||||||
|
assert decrypted_data == original_data
|
||||||
|
|
||||||
|
def test_encrypt_small_data(self, tmp_path):
|
||||||
|
"""Test encrypting data smaller than chunk size."""
|
||||||
|
from app.encryption import LocalKeyEncryption, StreamingEncryptor
|
||||||
|
|
||||||
|
key_path = tmp_path / "master.key"
|
||||||
|
provider = LocalKeyEncryption(key_path)
|
||||||
|
encryptor = StreamingEncryptor(provider, chunk_size=1024)
|
||||||
|
|
||||||
|
original_data = b"Small data"
|
||||||
|
stream = io.BytesIO(original_data)
|
||||||
|
|
||||||
|
encrypted_stream, metadata = encryptor.encrypt_stream(stream)
|
||||||
|
encrypted_stream.seek(0)
|
||||||
|
|
||||||
|
decrypted_stream = encryptor.decrypt_stream(encrypted_stream, metadata)
|
||||||
|
decrypted_data = decrypted_stream.read()
|
||||||
|
|
||||||
|
assert decrypted_data == original_data
|
||||||
|
|
||||||
|
def test_encrypt_empty_data(self, tmp_path):
|
||||||
|
"""Test encrypting empty data."""
|
||||||
|
from app.encryption import LocalKeyEncryption, StreamingEncryptor
|
||||||
|
|
||||||
|
key_path = tmp_path / "master.key"
|
||||||
|
provider = LocalKeyEncryption(key_path)
|
||||||
|
encryptor = StreamingEncryptor(provider)
|
||||||
|
|
||||||
|
stream = io.BytesIO(b"")
|
||||||
|
|
||||||
|
encrypted_stream, metadata = encryptor.encrypt_stream(stream)
|
||||||
|
encrypted_stream.seek(0)
|
||||||
|
|
||||||
|
decrypted_stream = encryptor.decrypt_stream(encrypted_stream, metadata)
|
||||||
|
decrypted_data = decrypted_stream.read()
|
||||||
|
|
||||||
|
assert decrypted_data == b""
|
||||||
|
|
||||||
|
|
||||||
|
class TestEncryptionManager:
|
||||||
|
"""Tests for EncryptionManager."""
|
||||||
|
|
||||||
|
def test_encryption_disabled_by_default(self, tmp_path):
|
||||||
|
"""Test that encryption is disabled by default."""
|
||||||
|
from app.encryption import EncryptionManager
|
||||||
|
|
||||||
|
config = {
|
||||||
|
"encryption_enabled": False,
|
||||||
|
"encryption_master_key_path": str(tmp_path / "master.key"),
|
||||||
|
}
|
||||||
|
|
||||||
|
manager = EncryptionManager(config)
|
||||||
|
|
||||||
|
assert not manager.enabled
|
||||||
|
|
||||||
|
def test_encryption_enabled(self, tmp_path):
|
||||||
|
"""Test enabling encryption."""
|
||||||
|
from app.encryption import EncryptionManager
|
||||||
|
|
||||||
|
config = {
|
||||||
|
"encryption_enabled": True,
|
||||||
|
"encryption_master_key_path": str(tmp_path / "master.key"),
|
||||||
|
"default_encryption_algorithm": "AES256",
|
||||||
|
}
|
||||||
|
|
||||||
|
manager = EncryptionManager(config)
|
||||||
|
|
||||||
|
assert manager.enabled
|
||||||
|
assert manager.default_algorithm == "AES256"
|
||||||
|
|
||||||
|
def test_encrypt_decrypt_object(self, tmp_path):
|
||||||
|
"""Test encrypting and decrypting an object."""
|
||||||
|
from app.encryption import EncryptionManager
|
||||||
|
|
||||||
|
config = {
|
||||||
|
"encryption_enabled": True,
|
||||||
|
"encryption_master_key_path": str(tmp_path / "master.key"),
|
||||||
|
}
|
||||||
|
|
||||||
|
manager = EncryptionManager(config)
|
||||||
|
|
||||||
|
plaintext = b"Object data to encrypt"
|
||||||
|
|
||||||
|
ciphertext, metadata = manager.encrypt_object(plaintext)
|
||||||
|
|
||||||
|
assert ciphertext != plaintext
|
||||||
|
assert metadata.algorithm == "AES256"
|
||||||
|
|
||||||
|
decrypted = manager.decrypt_object(ciphertext, metadata)
|
||||||
|
|
||||||
|
assert decrypted == plaintext
|
||||||
|
|
||||||
|
|
||||||
|
class TestClientEncryptionHelper:
|
||||||
|
"""Tests for client-side encryption helpers."""
|
||||||
|
|
||||||
|
def test_generate_client_key(self):
|
||||||
|
"""Test generating a client encryption key."""
|
||||||
|
from app.encryption import ClientEncryptionHelper
|
||||||
|
|
||||||
|
key_info = ClientEncryptionHelper.generate_client_key()
|
||||||
|
|
||||||
|
assert "key" in key_info
|
||||||
|
assert key_info["algorithm"] == "AES-256-GCM"
|
||||||
|
assert "created_at" in key_info
|
||||||
|
|
||||||
|
key = base64.b64decode(key_info["key"])
|
||||||
|
assert len(key) == 32
|
||||||
|
|
||||||
|
def test_encrypt_with_key(self):
|
||||||
|
"""Test encrypting data with a client key."""
|
||||||
|
from app.encryption import ClientEncryptionHelper
|
||||||
|
|
||||||
|
key = base64.b64encode(secrets.token_bytes(32)).decode()
|
||||||
|
plaintext = b"Client-side encrypted data"
|
||||||
|
|
||||||
|
result = ClientEncryptionHelper.encrypt_with_key(plaintext, key)
|
||||||
|
|
||||||
|
assert "ciphertext" in result
|
||||||
|
assert "nonce" in result
|
||||||
|
assert result["algorithm"] == "AES-256-GCM"
|
||||||
|
|
||||||
|
def test_encrypt_decrypt_with_key(self):
|
||||||
|
"""Test round-trip client-side encryption."""
|
||||||
|
from app.encryption import ClientEncryptionHelper
|
||||||
|
|
||||||
|
key = base64.b64encode(secrets.token_bytes(32)).decode()
|
||||||
|
plaintext = b"Client-side encrypted data"
|
||||||
|
|
||||||
|
encrypted = ClientEncryptionHelper.encrypt_with_key(plaintext, key)
|
||||||
|
|
||||||
|
decrypted = ClientEncryptionHelper.decrypt_with_key(
|
||||||
|
encrypted["ciphertext"],
|
||||||
|
encrypted["nonce"],
|
||||||
|
key,
|
||||||
|
)
|
||||||
|
|
||||||
|
assert decrypted == plaintext
|
||||||
|
|
||||||
|
def test_wrong_key_fails(self):
|
||||||
|
"""Test that decryption with wrong key fails."""
|
||||||
|
from app.encryption import ClientEncryptionHelper, EncryptionError
|
||||||
|
|
||||||
|
key1 = base64.b64encode(secrets.token_bytes(32)).decode()
|
||||||
|
key2 = base64.b64encode(secrets.token_bytes(32)).decode()
|
||||||
|
plaintext = b"Secret data"
|
||||||
|
|
||||||
|
encrypted = ClientEncryptionHelper.encrypt_with_key(plaintext, key1)
|
||||||
|
|
||||||
|
with pytest.raises(EncryptionError):
|
||||||
|
ClientEncryptionHelper.decrypt_with_key(
|
||||||
|
encrypted["ciphertext"],
|
||||||
|
encrypted["nonce"],
|
||||||
|
key2,
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
class TestKMSManager:
|
||||||
|
"""Tests for KMS key management."""
|
||||||
|
|
||||||
|
def test_create_key(self, tmp_path):
|
||||||
|
"""Test creating a KMS key."""
|
||||||
|
from app.kms import KMSManager
|
||||||
|
|
||||||
|
keys_path = tmp_path / "kms_keys.json"
|
||||||
|
master_key_path = tmp_path / "master.key"
|
||||||
|
|
||||||
|
kms = KMSManager(keys_path, master_key_path)
|
||||||
|
|
||||||
|
key = kms.create_key("Test key", key_id="test-key-1")
|
||||||
|
|
||||||
|
assert key.key_id == "test-key-1"
|
||||||
|
assert key.description == "Test key"
|
||||||
|
assert key.enabled
|
||||||
|
assert keys_path.exists()
|
||||||
|
|
||||||
|
def test_list_keys(self, tmp_path):
|
||||||
|
"""Test listing KMS keys."""
|
||||||
|
from app.kms import KMSManager
|
||||||
|
|
||||||
|
keys_path = tmp_path / "kms_keys.json"
|
||||||
|
master_key_path = tmp_path / "master.key"
|
||||||
|
|
||||||
|
kms = KMSManager(keys_path, master_key_path)
|
||||||
|
|
||||||
|
kms.create_key("Key 1", key_id="key-1")
|
||||||
|
kms.create_key("Key 2", key_id="key-2")
|
||||||
|
|
||||||
|
keys = kms.list_keys()
|
||||||
|
|
||||||
|
assert len(keys) == 2
|
||||||
|
key_ids = {k.key_id for k in keys}
|
||||||
|
assert "key-1" in key_ids
|
||||||
|
assert "key-2" in key_ids
|
||||||
|
|
||||||
|
def test_get_key(self, tmp_path):
|
||||||
|
"""Test getting a specific key."""
|
||||||
|
from app.kms import KMSManager
|
||||||
|
|
||||||
|
keys_path = tmp_path / "kms_keys.json"
|
||||||
|
master_key_path = tmp_path / "master.key"
|
||||||
|
|
||||||
|
kms = KMSManager(keys_path, master_key_path)
|
||||||
|
|
||||||
|
kms.create_key("Test key", key_id="test-key")
|
||||||
|
|
||||||
|
key = kms.get_key("test-key")
|
||||||
|
|
||||||
|
assert key is not None
|
||||||
|
assert key.key_id == "test-key"
|
||||||
|
|
||||||
|
assert kms.get_key("non-existent") is None
|
||||||
|
|
||||||
|
def test_enable_disable_key(self, tmp_path):
|
||||||
|
"""Test enabling and disabling keys."""
|
||||||
|
from app.kms import KMSManager
|
||||||
|
|
||||||
|
keys_path = tmp_path / "kms_keys.json"
|
||||||
|
master_key_path = tmp_path / "master.key"
|
||||||
|
|
||||||
|
kms = KMSManager(keys_path, master_key_path)
|
||||||
|
|
||||||
|
kms.create_key("Test key", key_id="test-key")
|
||||||
|
|
||||||
|
assert kms.get_key("test-key").enabled
|
||||||
|
|
||||||
|
kms.disable_key("test-key")
|
||||||
|
assert not kms.get_key("test-key").enabled
|
||||||
|
|
||||||
|
kms.enable_key("test-key")
|
||||||
|
assert kms.get_key("test-key").enabled
|
||||||
|
|
||||||
|
def test_delete_key(self, tmp_path):
|
||||||
|
"""Test deleting a key."""
|
||||||
|
from app.kms import KMSManager
|
||||||
|
|
||||||
|
keys_path = tmp_path / "kms_keys.json"
|
||||||
|
master_key_path = tmp_path / "master.key"
|
||||||
|
|
||||||
|
kms = KMSManager(keys_path, master_key_path)
|
||||||
|
|
||||||
|
kms.create_key("Test key", key_id="test-key")
|
||||||
|
assert kms.get_key("test-key") is not None
|
||||||
|
|
||||||
|
kms.delete_key("test-key")
|
||||||
|
assert kms.get_key("test-key") is None
|
||||||
|
|
||||||
|
def test_encrypt_decrypt(self, tmp_path):
|
||||||
|
"""Test KMS encrypt and decrypt."""
|
||||||
|
from app.kms import KMSManager
|
||||||
|
|
||||||
|
keys_path = tmp_path / "kms_keys.json"
|
||||||
|
master_key_path = tmp_path / "master.key"
|
||||||
|
|
||||||
|
kms = KMSManager(keys_path, master_key_path)
|
||||||
|
|
||||||
|
key = kms.create_key("Test key", key_id="test-key")
|
||||||
|
|
||||||
|
plaintext = b"Secret data to encrypt"
|
||||||
|
|
||||||
|
ciphertext = kms.encrypt("test-key", plaintext)
|
||||||
|
|
||||||
|
assert ciphertext != plaintext
|
||||||
|
|
||||||
|
decrypted, key_id = kms.decrypt(ciphertext)
|
||||||
|
|
||||||
|
assert decrypted == plaintext
|
||||||
|
assert key_id == "test-key"
|
||||||
|
|
||||||
|
def test_encrypt_with_context(self, tmp_path):
|
||||||
|
"""Test encryption with encryption context."""
|
||||||
|
from app.kms import KMSManager, EncryptionError
|
||||||
|
|
||||||
|
keys_path = tmp_path / "kms_keys.json"
|
||||||
|
master_key_path = tmp_path / "master.key"
|
||||||
|
|
||||||
|
kms = KMSManager(keys_path, master_key_path)
|
||||||
|
|
||||||
|
kms.create_key("Test key", key_id="test-key")
|
||||||
|
|
||||||
|
plaintext = b"Secret data"
|
||||||
|
context = {"bucket": "test-bucket", "key": "test-key"}
|
||||||
|
|
||||||
|
ciphertext = kms.encrypt("test-key", plaintext, context)
|
||||||
|
|
||||||
|
decrypted, _ = kms.decrypt(ciphertext, context)
|
||||||
|
assert decrypted == plaintext
|
||||||
|
|
||||||
|
with pytest.raises(EncryptionError):
|
||||||
|
kms.decrypt(ciphertext, {"different": "context"})
|
||||||
|
|
||||||
|
def test_generate_data_key(self, tmp_path):
|
||||||
|
"""Test generating a data key."""
|
||||||
|
from app.kms import KMSManager
|
||||||
|
|
||||||
|
keys_path = tmp_path / "kms_keys.json"
|
||||||
|
master_key_path = tmp_path / "master.key"
|
||||||
|
|
||||||
|
kms = KMSManager(keys_path, master_key_path)
|
||||||
|
|
||||||
|
kms.create_key("Test key", key_id="test-key")
|
||||||
|
|
||||||
|
plaintext_key, encrypted_key = kms.generate_data_key("test-key")
|
||||||
|
|
||||||
|
assert len(plaintext_key) == 32
|
||||||
|
assert len(encrypted_key) > 0
|
||||||
|
|
||||||
|
decrypted_key = kms.decrypt_data_key("test-key", encrypted_key)
|
||||||
|
|
||||||
|
assert decrypted_key == plaintext_key
|
||||||
|
|
||||||
|
def test_disabled_key_cannot_encrypt(self, tmp_path):
|
||||||
|
"""Test that disabled keys cannot be used for encryption."""
|
||||||
|
from app.kms import KMSManager, EncryptionError
|
||||||
|
|
||||||
|
keys_path = tmp_path / "kms_keys.json"
|
||||||
|
master_key_path = tmp_path / "master.key"
|
||||||
|
|
||||||
|
kms = KMSManager(keys_path, master_key_path)
|
||||||
|
|
||||||
|
kms.create_key("Test key", key_id="test-key")
|
||||||
|
kms.disable_key("test-key")
|
||||||
|
|
||||||
|
with pytest.raises(EncryptionError, match="disabled"):
|
||||||
|
kms.encrypt("test-key", b"data")
|
||||||
|
|
||||||
|
def test_re_encrypt(self, tmp_path):
|
||||||
|
"""Test re-encrypting data with a different key."""
|
||||||
|
from app.kms import KMSManager
|
||||||
|
|
||||||
|
keys_path = tmp_path / "kms_keys.json"
|
||||||
|
master_key_path = tmp_path / "master.key"
|
||||||
|
|
||||||
|
kms = KMSManager(keys_path, master_key_path)
|
||||||
|
|
||||||
|
kms.create_key("Key 1", key_id="key-1")
|
||||||
|
kms.create_key("Key 2", key_id="key-2")
|
||||||
|
|
||||||
|
plaintext = b"Data to re-encrypt"
|
||||||
|
|
||||||
|
ciphertext1 = kms.encrypt("key-1", plaintext)
|
||||||
|
ciphertext2 = kms.re_encrypt(ciphertext1, "key-2")
|
||||||
|
decrypted, key_id = kms.decrypt(ciphertext2)
|
||||||
|
|
||||||
|
assert decrypted == plaintext
|
||||||
|
assert key_id == "key-2"
|
||||||
|
|
||||||
|
def test_generate_random(self, tmp_path):
|
||||||
|
"""Test generating random bytes."""
|
||||||
|
from app.kms import KMSManager
|
||||||
|
|
||||||
|
keys_path = tmp_path / "kms_keys.json"
|
||||||
|
master_key_path = tmp_path / "master.key"
|
||||||
|
|
||||||
|
kms = KMSManager(keys_path, master_key_path)
|
||||||
|
|
||||||
|
random1 = kms.generate_random(32)
|
||||||
|
random2 = kms.generate_random(32)
|
||||||
|
|
||||||
|
assert len(random1) == 32
|
||||||
|
assert len(random2) == 32
|
||||||
|
assert random1 != random2
|
||||||
|
|
||||||
|
def test_keys_persist_across_instances(self, tmp_path):
|
||||||
|
"""Test that keys persist and can be loaded by new instances."""
|
||||||
|
from app.kms import KMSManager
|
||||||
|
|
||||||
|
keys_path = tmp_path / "kms_keys.json"
|
||||||
|
master_key_path = tmp_path / "master.key"
|
||||||
|
|
||||||
|
kms1 = KMSManager(keys_path, master_key_path)
|
||||||
|
kms1.create_key("Test key", key_id="test-key")
|
||||||
|
|
||||||
|
plaintext = b"Persistent encryption test"
|
||||||
|
ciphertext = kms1.encrypt("test-key", plaintext)
|
||||||
|
|
||||||
|
kms2 = KMSManager(keys_path, master_key_path)
|
||||||
|
|
||||||
|
decrypted, key_id = kms2.decrypt(ciphertext)
|
||||||
|
|
||||||
|
assert decrypted == plaintext
|
||||||
|
assert key_id == "test-key"
|
||||||
|
|
||||||
|
|
||||||
|
class TestKMSEncryptionProvider:
|
||||||
|
"""Tests for KMS encryption provider."""
|
||||||
|
|
||||||
|
def test_kms_encryption_provider(self, tmp_path):
|
||||||
|
"""Test using KMS as an encryption provider."""
|
||||||
|
from app.kms import KMSManager
|
||||||
|
|
||||||
|
keys_path = tmp_path / "kms_keys.json"
|
||||||
|
master_key_path = tmp_path / "master.key"
|
||||||
|
|
||||||
|
kms = KMSManager(keys_path, master_key_path)
|
||||||
|
kms.create_key("Test key", key_id="test-key")
|
||||||
|
|
||||||
|
provider = kms.get_provider("test-key")
|
||||||
|
|
||||||
|
plaintext = b"Data encrypted with KMS provider"
|
||||||
|
|
||||||
|
result = provider.encrypt(plaintext)
|
||||||
|
|
||||||
|
assert result.key_id == "test-key"
|
||||||
|
assert result.ciphertext != plaintext
|
||||||
|
|
||||||
|
decrypted = provider.decrypt(
|
||||||
|
result.ciphertext,
|
||||||
|
result.nonce,
|
||||||
|
result.encrypted_data_key,
|
||||||
|
result.key_id,
|
||||||
|
)
|
||||||
|
|
||||||
|
assert decrypted == plaintext
|
||||||
|
|
||||||
|
|
||||||
|
class TestEncryptedStorage:
|
||||||
|
"""Tests for encrypted storage layer."""
|
||||||
|
|
||||||
|
def test_put_and_get_encrypted_object(self, tmp_path):
|
||||||
|
"""Test storing and retrieving an encrypted object."""
|
||||||
|
from app.storage import ObjectStorage
|
||||||
|
from app.encryption import EncryptionManager
|
||||||
|
from app.encrypted_storage import EncryptedObjectStorage
|
||||||
|
|
||||||
|
storage_root = tmp_path / "storage"
|
||||||
|
storage = ObjectStorage(storage_root)
|
||||||
|
|
||||||
|
config = {
|
||||||
|
"encryption_enabled": True,
|
||||||
|
"encryption_master_key_path": str(tmp_path / "master.key"),
|
||||||
|
"default_encryption_algorithm": "AES256",
|
||||||
|
}
|
||||||
|
encryption = EncryptionManager(config)
|
||||||
|
|
||||||
|
encrypted_storage = EncryptedObjectStorage(storage, encryption)
|
||||||
|
|
||||||
|
storage.create_bucket("test-bucket")
|
||||||
|
storage.set_bucket_encryption("test-bucket", {
|
||||||
|
"Rules": [{"SSEAlgorithm": "AES256"}]
|
||||||
|
})
|
||||||
|
|
||||||
|
original_data = b"This is secret data that should be encrypted"
|
||||||
|
stream = io.BytesIO(original_data)
|
||||||
|
|
||||||
|
meta = encrypted_storage.put_object(
|
||||||
|
"test-bucket",
|
||||||
|
"secret.txt",
|
||||||
|
stream,
|
||||||
|
)
|
||||||
|
|
||||||
|
assert meta is not None
|
||||||
|
|
||||||
|
file_path = storage_root / "test-bucket" / "secret.txt"
|
||||||
|
stored_data = file_path.read_bytes()
|
||||||
|
assert stored_data != original_data
|
||||||
|
|
||||||
|
data, metadata = encrypted_storage.get_object_data("test-bucket", "secret.txt")
|
||||||
|
|
||||||
|
assert data == original_data
|
||||||
|
|
||||||
|
def test_no_encryption_without_config(self, tmp_path):
|
||||||
|
"""Test that objects are not encrypted without bucket config."""
|
||||||
|
from app.storage import ObjectStorage
|
||||||
|
from app.encryption import EncryptionManager
|
||||||
|
from app.encrypted_storage import EncryptedObjectStorage
|
||||||
|
|
||||||
|
storage_root = tmp_path / "storage"
|
||||||
|
storage = ObjectStorage(storage_root)
|
||||||
|
|
||||||
|
config = {
|
||||||
|
"encryption_enabled": True,
|
||||||
|
"encryption_master_key_path": str(tmp_path / "master.key"),
|
||||||
|
}
|
||||||
|
encryption = EncryptionManager(config)
|
||||||
|
|
||||||
|
encrypted_storage = EncryptedObjectStorage(storage, encryption)
|
||||||
|
|
||||||
|
storage.create_bucket("test-bucket")
|
||||||
|
|
||||||
|
original_data = b"Unencrypted data"
|
||||||
|
stream = io.BytesIO(original_data)
|
||||||
|
|
||||||
|
encrypted_storage.put_object("test-bucket", "plain.txt", stream)
|
||||||
|
|
||||||
|
file_path = storage_root / "test-bucket" / "plain.txt"
|
||||||
|
stored_data = file_path.read_bytes()
|
||||||
|
assert stored_data == original_data
|
||||||
|
|
||||||
|
def test_explicit_encryption_request(self, tmp_path):
|
||||||
|
"""Test explicitly requesting encryption."""
|
||||||
|
from app.storage import ObjectStorage
|
||||||
|
from app.encryption import EncryptionManager
|
||||||
|
from app.encrypted_storage import EncryptedObjectStorage
|
||||||
|
|
||||||
|
storage_root = tmp_path / "storage"
|
||||||
|
storage = ObjectStorage(storage_root)
|
||||||
|
|
||||||
|
config = {
|
||||||
|
"encryption_enabled": True,
|
||||||
|
"encryption_master_key_path": str(tmp_path / "master.key"),
|
||||||
|
}
|
||||||
|
encryption = EncryptionManager(config)
|
||||||
|
|
||||||
|
encrypted_storage = EncryptedObjectStorage(storage, encryption)
|
||||||
|
|
||||||
|
storage.create_bucket("test-bucket")
|
||||||
|
|
||||||
|
original_data = b"Explicitly encrypted data"
|
||||||
|
stream = io.BytesIO(original_data)
|
||||||
|
|
||||||
|
encrypted_storage.put_object(
|
||||||
|
"test-bucket",
|
||||||
|
"encrypted.txt",
|
||||||
|
stream,
|
||||||
|
server_side_encryption="AES256",
|
||||||
|
)
|
||||||
|
|
||||||
|
file_path = storage_root / "test-bucket" / "encrypted.txt"
|
||||||
|
stored_data = file_path.read_bytes()
|
||||||
|
assert stored_data != original_data
|
||||||
|
|
||||||
|
data, _ = encrypted_storage.get_object_data("test-bucket", "encrypted.txt")
|
||||||
|
assert data == original_data
|
||||||
tests/test_kms_api.py (new file, 479 lines)
"""Tests for KMS API endpoints."""
from __future__ import annotations

import base64
import json
import secrets

import pytest


@pytest.fixture
def kms_client(tmp_path):
    """Create a test client with KMS enabled."""
    from app import create_app

    app = create_app({
        "TESTING": True,
        "STORAGE_ROOT": str(tmp_path / "storage"),
        "IAM_CONFIG": str(tmp_path / "iam.json"),
        "BUCKET_POLICY_PATH": str(tmp_path / "policies.json"),
        "ENCRYPTION_ENABLED": True,
        "KMS_ENABLED": True,
        "ENCRYPTION_MASTER_KEY_PATH": str(tmp_path / "master.key"),
        "KMS_KEYS_PATH": str(tmp_path / "kms_keys.json"),
    })

    iam_config = {
        "users": [
            {
                "access_key": "test-access-key",
                "secret_key": "test-secret-key",
                "display_name": "Test User",
                "permissions": ["*"]
            }
        ]
    }
    (tmp_path / "iam.json").write_text(json.dumps(iam_config))

    return app.test_client()


@pytest.fixture
def auth_headers():
    """Get authentication headers."""
    return {
        "X-Access-Key": "test-access-key",
        "X-Secret-Key": "test-secret-key",
    }


class TestKMSKeyManagement:
    """Tests for KMS key management endpoints."""

    def test_create_key(self, kms_client, auth_headers):
        """Test creating a KMS key."""
        response = kms_client.post(
            "/kms/keys",
            json={"Description": "Test encryption key"},
            headers=auth_headers,
        )

        assert response.status_code == 200
        data = response.get_json()

        assert "KeyMetadata" in data
        assert data["KeyMetadata"]["Description"] == "Test encryption key"
        assert data["KeyMetadata"]["Enabled"] is True
        assert "KeyId" in data["KeyMetadata"]

    def test_create_key_with_custom_id(self, kms_client, auth_headers):
        """Test creating a key with a custom ID."""
        response = kms_client.post(
            "/kms/keys",
            json={"KeyId": "my-custom-key", "Description": "Custom key"},
            headers=auth_headers,
        )

        assert response.status_code == 200
        data = response.get_json()

        assert data["KeyMetadata"]["KeyId"] == "my-custom-key"

    def test_list_keys(self, kms_client, auth_headers):
        """Test listing KMS keys."""
        kms_client.post("/kms/keys", json={"Description": "Key 1"}, headers=auth_headers)
        kms_client.post("/kms/keys", json={"Description": "Key 2"}, headers=auth_headers)

        response = kms_client.get("/kms/keys", headers=auth_headers)

        assert response.status_code == 200
        data = response.get_json()

        assert "Keys" in data
        assert len(data["Keys"]) == 2

    def test_get_key(self, kms_client, auth_headers):
        """Test getting a specific key."""
        create_response = kms_client.post(
            "/kms/keys",
            json={"KeyId": "test-key", "Description": "Test key"},
            headers=auth_headers,
        )

        response = kms_client.get("/kms/keys/test-key", headers=auth_headers)

        assert response.status_code == 200
        data = response.get_json()

        assert data["KeyMetadata"]["KeyId"] == "test-key"
        assert data["KeyMetadata"]["Description"] == "Test key"

    def test_get_nonexistent_key(self, kms_client, auth_headers):
        """Test getting a key that doesn't exist."""
        response = kms_client.get("/kms/keys/nonexistent", headers=auth_headers)

        assert response.status_code == 404

    def test_delete_key(self, kms_client, auth_headers):
        """Test deleting a key."""
        kms_client.post("/kms/keys", json={"KeyId": "test-key"}, headers=auth_headers)

        response = kms_client.delete("/kms/keys/test-key", headers=auth_headers)

        assert response.status_code == 204

        get_response = kms_client.get("/kms/keys/test-key", headers=auth_headers)
        assert get_response.status_code == 404

    def test_enable_disable_key(self, kms_client, auth_headers):
        """Test enabling and disabling a key."""
        kms_client.post("/kms/keys", json={"KeyId": "test-key"}, headers=auth_headers)

        response = kms_client.post("/kms/keys/test-key/disable", headers=auth_headers)
        assert response.status_code == 200

        get_response = kms_client.get("/kms/keys/test-key", headers=auth_headers)
        assert get_response.get_json()["KeyMetadata"]["Enabled"] is False

        response = kms_client.post("/kms/keys/test-key/enable", headers=auth_headers)
        assert response.status_code == 200

        get_response = kms_client.get("/kms/keys/test-key", headers=auth_headers)
        assert get_response.get_json()["KeyMetadata"]["Enabled"] is True


class TestKMSEncryption:
    """Tests for KMS encryption operations."""

    def test_encrypt_decrypt(self, kms_client, auth_headers):
        """Test encrypting and decrypting data."""
        kms_client.post("/kms/keys", json={"KeyId": "test-key"}, headers=auth_headers)

        plaintext = b"Hello, World!"
        plaintext_b64 = base64.b64encode(plaintext).decode()

        encrypt_response = kms_client.post(
            "/kms/encrypt",
            json={"KeyId": "test-key", "Plaintext": plaintext_b64},
            headers=auth_headers,
        )

        assert encrypt_response.status_code == 200
        encrypt_data = encrypt_response.get_json()

        assert "CiphertextBlob" in encrypt_data
        assert encrypt_data["KeyId"] == "test-key"

        decrypt_response = kms_client.post(
            "/kms/decrypt",
            json={"CiphertextBlob": encrypt_data["CiphertextBlob"]},
            headers=auth_headers,
        )

        assert decrypt_response.status_code == 200
        decrypt_data = decrypt_response.get_json()

        decrypted = base64.b64decode(decrypt_data["Plaintext"])
        assert decrypted == plaintext

    def test_encrypt_with_context(self, kms_client, auth_headers):
        """Test encryption with encryption context."""
        kms_client.post("/kms/keys", json={"KeyId": "test-key"}, headers=auth_headers)

        plaintext = b"Contextualized data"
        plaintext_b64 = base64.b64encode(plaintext).decode()
        context = {"purpose": "testing", "bucket": "my-bucket"}

        encrypt_response = kms_client.post(
            "/kms/encrypt",
            json={
                "KeyId": "test-key",
                "Plaintext": plaintext_b64,
                "EncryptionContext": context,
            },
            headers=auth_headers,
        )

        assert encrypt_response.status_code == 200
        ciphertext = encrypt_response.get_json()["CiphertextBlob"]

        decrypt_response = kms_client.post(
            "/kms/decrypt",
            json={
                "CiphertextBlob": ciphertext,
                "EncryptionContext": context,
            },
            headers=auth_headers,
        )

        assert decrypt_response.status_code == 200

        wrong_context_response = kms_client.post(
            "/kms/decrypt",
            json={
                "CiphertextBlob": ciphertext,
                "EncryptionContext": {"wrong": "context"},
            },
            headers=auth_headers,
        )

        assert wrong_context_response.status_code == 400

    def test_encrypt_missing_key_id(self, kms_client, auth_headers):
        """Test encryption without KeyId."""
        response = kms_client.post(
            "/kms/encrypt",
            json={"Plaintext": base64.b64encode(b"data").decode()},
            headers=auth_headers,
        )

        assert response.status_code == 400
        assert "KeyId is required" in response.get_json()["message"]

    def test_encrypt_missing_plaintext(self, kms_client, auth_headers):
        """Test encryption without Plaintext."""
        kms_client.post("/kms/keys", json={"KeyId": "test-key"}, headers=auth_headers)

        response = kms_client.post(
            "/kms/encrypt",
            json={"KeyId": "test-key"},
            headers=auth_headers,
        )

        assert response.status_code == 400
        assert "Plaintext is required" in response.get_json()["message"]


class TestKMSDataKey:
    """Tests for KMS data key generation."""

    def test_generate_data_key(self, kms_client, auth_headers):
        """Test generating a data key."""
        kms_client.post("/kms/keys", json={"KeyId": "test-key"}, headers=auth_headers)

        response = kms_client.post(
            "/kms/generate-data-key",
            json={"KeyId": "test-key"},
            headers=auth_headers,
        )

        assert response.status_code == 200
        data = response.get_json()

        assert "Plaintext" in data
        assert "CiphertextBlob" in data
        assert data["KeyId"] == "test-key"
|
||||||
|
|
||||||
|
# Verify plaintext key is 256 bits (32 bytes)
|
||||||
|
plaintext_key = base64.b64decode(data["Plaintext"])
|
||||||
|
assert len(plaintext_key) == 32
|
||||||
|
|
||||||
|
def test_generate_data_key_aes_128(self, kms_client, auth_headers):
|
||||||
|
"""Test generating an AES-128 data key."""
|
||||||
|
kms_client.post("/kms/keys", json={"KeyId": "test-key"}, headers=auth_headers)
|
||||||
|
|
||||||
|
response = kms_client.post(
|
||||||
|
"/kms/generate-data-key",
|
||||||
|
json={"KeyId": "test-key", "KeySpec": "AES_128"},
|
||||||
|
headers=auth_headers,
|
||||||
|
)
|
||||||
|
|
||||||
|
assert response.status_code == 200
|
||||||
|
data = response.get_json()
|
||||||
|
|
||||||
|
# Verify plaintext key is 128 bits (16 bytes)
|
||||||
|
plaintext_key = base64.b64decode(data["Plaintext"])
|
||||||
|
assert len(plaintext_key) == 16
|
||||||
|
|
||||||
|
def test_generate_data_key_without_plaintext(self, kms_client, auth_headers):
|
||||||
|
"""Test generating a data key without plaintext."""
|
||||||
|
kms_client.post("/kms/keys", json={"KeyId": "test-key"}, headers=auth_headers)
|
||||||
|
|
||||||
|
response = kms_client.post(
|
||||||
|
"/kms/generate-data-key-without-plaintext",
|
||||||
|
json={"KeyId": "test-key"},
|
||||||
|
headers=auth_headers,
|
||||||
|
)
|
||||||
|
|
||||||
|
assert response.status_code == 200
|
||||||
|
data = response.get_json()
|
||||||
|
|
||||||
|
assert "CiphertextBlob" in data
|
||||||
|
assert "Plaintext" not in data
|
||||||
|
|
||||||
|
|
||||||
|
class TestKMSReEncrypt:
|
||||||
|
"""Tests for KMS re-encryption."""
|
||||||
|
|
||||||
|
def test_re_encrypt(self, kms_client, auth_headers):
|
||||||
|
"""Test re-encrypting data with a different key."""
|
||||||
|
kms_client.post("/kms/keys", json={"KeyId": "key-1"}, headers=auth_headers)
|
||||||
|
kms_client.post("/kms/keys", json={"KeyId": "key-2"}, headers=auth_headers)
|
||||||
|
|
||||||
|
plaintext = b"Data to re-encrypt"
|
||||||
|
encrypt_response = kms_client.post(
|
||||||
|
"/kms/encrypt",
|
||||||
|
json={
|
||||||
|
"KeyId": "key-1",
|
||||||
|
"Plaintext": base64.b64encode(plaintext).decode(),
|
||||||
|
},
|
||||||
|
headers=auth_headers,
|
||||||
|
)
|
||||||
|
|
||||||
|
ciphertext = encrypt_response.get_json()["CiphertextBlob"]
|
||||||
|
|
||||||
|
re_encrypt_response = kms_client.post(
|
||||||
|
"/kms/re-encrypt",
|
||||||
|
json={
|
||||||
|
"CiphertextBlob": ciphertext,
|
||||||
|
"DestinationKeyId": "key-2",
|
||||||
|
},
|
||||||
|
headers=auth_headers,
|
||||||
|
)
|
||||||
|
|
||||||
|
assert re_encrypt_response.status_code == 200
|
||||||
|
data = re_encrypt_response.get_json()
|
||||||
|
|
||||||
|
assert data["SourceKeyId"] == "key-1"
|
||||||
|
assert data["KeyId"] == "key-2"
|
||||||
|
|
||||||
|
decrypt_response = kms_client.post(
|
||||||
|
"/kms/decrypt",
|
||||||
|
json={"CiphertextBlob": data["CiphertextBlob"]},
|
||||||
|
headers=auth_headers,
|
||||||
|
)
|
||||||
|
|
||||||
|
decrypted = base64.b64decode(decrypt_response.get_json()["Plaintext"])
|
||||||
|
assert decrypted == plaintext
|
||||||
|
|
||||||
|
|
||||||
|
class TestKMSRandom:
|
||||||
|
"""Tests for random number generation."""
|
||||||
|
|
||||||
|
def test_generate_random(self, kms_client, auth_headers):
|
||||||
|
"""Test generating random bytes."""
|
||||||
|
response = kms_client.post(
|
||||||
|
"/kms/generate-random",
|
||||||
|
json={"NumberOfBytes": 64},
|
||||||
|
headers=auth_headers,
|
||||||
|
)
|
||||||
|
|
||||||
|
assert response.status_code == 200
|
||||||
|
data = response.get_json()
|
||||||
|
|
||||||
|
random_bytes = base64.b64decode(data["Plaintext"])
|
||||||
|
assert len(random_bytes) == 64
|
||||||
|
|
||||||
|
def test_generate_random_default_size(self, kms_client, auth_headers):
|
||||||
|
"""Test generating random bytes with default size."""
|
||||||
|
response = kms_client.post(
|
||||||
|
"/kms/generate-random",
|
||||||
|
json={},
|
||||||
|
headers=auth_headers,
|
||||||
|
)
|
||||||
|
|
||||||
|
assert response.status_code == 200
|
||||||
|
data = response.get_json()
|
||||||
|
|
||||||
|
random_bytes = base64.b64decode(data["Plaintext"])
|
||||||
|
assert len(random_bytes) == 32
|
||||||
|
|
||||||
|
|
||||||
|
class TestClientSideEncryption:
|
||||||
|
"""Tests for client-side encryption helpers."""
|
||||||
|
|
||||||
|
def test_generate_client_key(self, kms_client, auth_headers):
|
||||||
|
"""Test generating a client encryption key."""
|
||||||
|
response = kms_client.post(
|
||||||
|
"/kms/client/generate-key",
|
||||||
|
headers=auth_headers,
|
||||||
|
)
|
||||||
|
|
||||||
|
assert response.status_code == 200
|
||||||
|
data = response.get_json()
|
||||||
|
|
||||||
|
assert "key" in data
|
||||||
|
assert data["algorithm"] == "AES-256-GCM"
|
||||||
|
|
||||||
|
key = base64.b64decode(data["key"])
|
||||||
|
assert len(key) == 32
|
||||||
|
|
||||||
|
def test_client_encrypt_decrypt(self, kms_client, auth_headers):
|
||||||
|
"""Test client-side encryption and decryption."""
|
||||||
|
key_response = kms_client.post("/kms/client/generate-key", headers=auth_headers)
|
||||||
|
key = key_response.get_json()["key"]
|
||||||
|
|
||||||
|
plaintext = b"Client-side encrypted data"
|
||||||
|
encrypt_response = kms_client.post(
|
||||||
|
"/kms/client/encrypt",
|
||||||
|
json={
|
||||||
|
"Plaintext": base64.b64encode(plaintext).decode(),
|
||||||
|
"Key": key,
|
||||||
|
},
|
||||||
|
headers=auth_headers,
|
||||||
|
)
|
||||||
|
|
||||||
|
assert encrypt_response.status_code == 200
|
||||||
|
encrypted = encrypt_response.get_json()
|
||||||
|
|
||||||
|
decrypt_response = kms_client.post(
|
||||||
|
"/kms/client/decrypt",
|
||||||
|
json={
|
||||||
|
"Ciphertext": encrypted["ciphertext"],
|
||||||
|
"Nonce": encrypted["nonce"],
|
||||||
|
"Key": key,
|
||||||
|
},
|
||||||
|
headers=auth_headers,
|
||||||
|
)
|
||||||
|
|
||||||
|
assert decrypt_response.status_code == 200
|
||||||
|
decrypted = base64.b64decode(decrypt_response.get_json()["Plaintext"])
|
||||||
|
assert decrypted == plaintext
|
||||||
|
|
||||||
|
|
||||||
|
class TestEncryptionMaterials:
|
||||||
|
"""Tests for S3 encryption materials endpoint."""
|
||||||
|
|
||||||
|
def test_get_encryption_materials(self, kms_client, auth_headers):
|
||||||
|
"""Test getting encryption materials for client-side S3 encryption."""
|
||||||
|
kms_client.post("/kms/keys", json={"KeyId": "s3-key"}, headers=auth_headers)
|
||||||
|
|
||||||
|
response = kms_client.post(
|
||||||
|
"/kms/materials/s3-key",
|
||||||
|
json={},
|
||||||
|
headers=auth_headers,
|
||||||
|
)
|
||||||
|
|
||||||
|
assert response.status_code == 200
|
||||||
|
data = response.get_json()
|
||||||
|
|
||||||
|
assert "PlaintextKey" in data
|
||||||
|
assert "EncryptedKey" in data
|
||||||
|
assert data["KeyId"] == "s3-key"
|
||||||
|
assert data["Algorithm"] == "AES-256-GCM"
|
||||||
|
|
||||||
|
key = base64.b64decode(data["PlaintextKey"])
|
||||||
|
assert len(key) == 32
|
||||||
|
|
||||||
|
|
||||||
|
class TestKMSAuthentication:
|
||||||
|
"""Tests for KMS authentication requirements."""
|
||||||
|
|
||||||
|
def test_unauthenticated_request_fails(self, kms_client):
|
||||||
|
"""Test that unauthenticated requests are rejected."""
|
||||||
|
response = kms_client.get("/kms/keys")
|
||||||
|
|
||||||
|
assert response.status_code == 403
|
||||||
|
|
||||||
|
def test_invalid_credentials_fail(self, kms_client):
|
||||||
|
"""Test that invalid credentials are rejected."""
|
||||||
|
response = kms_client.get(
|
||||||
|
"/kms/keys",
|
||||||
|
headers={
|
||||||
|
"X-Access-Key": "wrong-key",
|
||||||
|
"X-Secret-Key": "wrong-secret",
|
||||||
|
},
|
||||||
|
)
|
||||||
|
|
||||||
|
assert response.status_code == 403
|
||||||
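The `generate-data-key` tests above exercise the envelope-encryption pattern: the service returns a plaintext data key for local use plus a wrapped (`CiphertextBlob`) copy to store alongside the data. A minimal sketch of the client-side flow follows; the endpoint behavior and base64 conventions mirror the tests, but the helper names are illustrative, and the XOR "cipher" is a deliberately insecure toy stand-in for AES-GCM:

```python
import base64
import hashlib
import os


def _keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stand-in for AES-GCM: XOR with a SHA-256-derived keystream.
    NOT secure -- it only illustrates where the data key is used."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(x ^ y for x, y in zip(data, out))


def envelope_encrypt(wrapped_key_b64: str, plaintext_key_b64: str, payload: bytes) -> dict:
    """Encrypt payload locally with the plaintext data key; keep only the
    wrapped (KMS-encrypted) key next to the ciphertext."""
    data_key = base64.b64decode(plaintext_key_b64)
    return {"ciphertext": _keystream_xor(data_key, payload), "wrapped_key": wrapped_key_b64}


def envelope_decrypt(record: dict, plaintext_key_b64: str) -> bytes:
    # In real use the plaintext key would come back from POSTing
    # record["wrapped_key"] to /kms/decrypt; here the caller supplies it.
    data_key = base64.b64decode(plaintext_key_b64)
    return _keystream_xor(data_key, record["ciphertext"])


# Simulate what /kms/generate-data-key returns: a 32-byte key, base64-encoded.
key_b64 = base64.b64encode(os.urandom(32)).decode()
record = envelope_encrypt("opaque-wrapped-key", key_b64, b"secret payload")
assert envelope_decrypt(record, key_b64) == b"secret payload"
```

The point of the pattern, which the `without-plaintext` test also checks, is that the plaintext key never needs to be persisted: only the wrapped copy is stored.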
238
tests/test_lifecycle.py
Normal file
@@ -0,0 +1,238 @@
import io
import time
from datetime import datetime, timedelta, timezone
from pathlib import Path
from unittest.mock import MagicMock, patch

import pytest

from app.lifecycle import LifecycleManager, LifecycleResult
from app.storage import ObjectStorage


@pytest.fixture
def storage(tmp_path: Path):
    storage_root = tmp_path / "data"
    storage_root.mkdir(parents=True)
    return ObjectStorage(storage_root)


@pytest.fixture
def lifecycle_manager(storage):
    manager = LifecycleManager(storage, interval_seconds=3600)
    yield manager
    manager.stop()


class TestLifecycleResult:
    def test_default_values(self):
        result = LifecycleResult(bucket_name="test-bucket")
        assert result.bucket_name == "test-bucket"
        assert result.objects_deleted == 0
        assert result.versions_deleted == 0
        assert result.uploads_aborted == 0
        assert result.errors == []
        assert result.execution_time_seconds == 0.0


class TestLifecycleManager:
    def test_start_and_stop(self, lifecycle_manager):
        lifecycle_manager.start()
        assert lifecycle_manager._timer is not None
        assert lifecycle_manager._shutdown is False

        lifecycle_manager.stop()
        assert lifecycle_manager._shutdown is True
        assert lifecycle_manager._timer is None

    def test_start_only_once(self, lifecycle_manager):
        lifecycle_manager.start()
        first_timer = lifecycle_manager._timer

        lifecycle_manager.start()
        assert lifecycle_manager._timer is first_timer

    def test_enforce_rules_no_lifecycle(self, lifecycle_manager, storage):
        storage.create_bucket("no-lifecycle-bucket")

        result = lifecycle_manager.enforce_rules("no-lifecycle-bucket")
        assert result.bucket_name == "no-lifecycle-bucket"
        assert result.objects_deleted == 0

    def test_enforce_rules_disabled_rule(self, lifecycle_manager, storage):
        storage.create_bucket("disabled-bucket")
        storage.set_bucket_lifecycle("disabled-bucket", [
            {
                "ID": "disabled-rule",
                "Status": "Disabled",
                "Prefix": "",
                "Expiration": {"Days": 1},
            }
        ])

        old_object = storage.put_object(
            "disabled-bucket",
            "old-file.txt",
            io.BytesIO(b"old content"),
        )

        result = lifecycle_manager.enforce_rules("disabled-bucket")
        assert result.objects_deleted == 0

    def test_enforce_expiration_by_days(self, lifecycle_manager, storage):
        storage.create_bucket("expire-bucket")
        storage.set_bucket_lifecycle("expire-bucket", [
            {
                "ID": "expire-30-days",
                "Status": "Enabled",
                "Prefix": "",
                "Expiration": {"Days": 30},
            }
        ])

        storage.put_object(
            "expire-bucket",
            "recent-file.txt",
            io.BytesIO(b"recent content"),
        )

        result = lifecycle_manager.enforce_rules("expire-bucket")
        assert result.objects_deleted == 0

    def test_enforce_expiration_with_prefix(self, lifecycle_manager, storage):
        storage.create_bucket("prefix-bucket")
        storage.set_bucket_lifecycle("prefix-bucket", [
            {
                "ID": "expire-logs",
                "Status": "Enabled",
                "Prefix": "logs/",
                "Expiration": {"Days": 1},
            }
        ])

        storage.put_object("prefix-bucket", "logs/old.log", io.BytesIO(b"log data"))
        storage.put_object("prefix-bucket", "data/keep.txt", io.BytesIO(b"keep this"))

        result = lifecycle_manager.enforce_rules("prefix-bucket")

    def test_enforce_all_buckets(self, lifecycle_manager, storage):
        storage.create_bucket("bucket1")
        storage.create_bucket("bucket2")

        results = lifecycle_manager.enforce_all_buckets()
        assert isinstance(results, dict)

    def test_run_now_single_bucket(self, lifecycle_manager, storage):
        storage.create_bucket("run-now-bucket")

        results = lifecycle_manager.run_now("run-now-bucket")
        assert "run-now-bucket" in results

    def test_run_now_all_buckets(self, lifecycle_manager, storage):
        storage.create_bucket("all-bucket-1")
        storage.create_bucket("all-bucket-2")

        results = lifecycle_manager.run_now()
        assert isinstance(results, dict)

    def test_enforce_abort_multipart(self, lifecycle_manager, storage):
        storage.create_bucket("multipart-bucket")
        storage.set_bucket_lifecycle("multipart-bucket", [
            {
                "ID": "abort-old-uploads",
                "Status": "Enabled",
                "Prefix": "",
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
            }
        ])

        upload_id = storage.initiate_multipart_upload("multipart-bucket", "large-file.bin")

        result = lifecycle_manager.enforce_rules("multipart-bucket")
        assert result.uploads_aborted == 0

    def test_enforce_noncurrent_version_expiration(self, lifecycle_manager, storage):
        storage.create_bucket("versioned-bucket")
        storage.set_bucket_versioning("versioned-bucket", True)
        storage.set_bucket_lifecycle("versioned-bucket", [
            {
                "ID": "expire-old-versions",
                "Status": "Enabled",
                "Prefix": "",
                "NoncurrentVersionExpiration": {"NoncurrentDays": 30},
            }
        ])

        storage.put_object("versioned-bucket", "file.txt", io.BytesIO(b"v1"))
        storage.put_object("versioned-bucket", "file.txt", io.BytesIO(b"v2"))

        result = lifecycle_manager.enforce_rules("versioned-bucket")
        assert result.bucket_name == "versioned-bucket"

    def test_execution_time_tracking(self, lifecycle_manager, storage):
        storage.create_bucket("timed-bucket")
        storage.set_bucket_lifecycle("timed-bucket", [
            {
                "ID": "timer-test",
                "Status": "Enabled",
                "Expiration": {"Days": 1},
            }
        ])

        result = lifecycle_manager.enforce_rules("timed-bucket")
        assert result.execution_time_seconds >= 0

    def test_enforce_rules_with_error(self, lifecycle_manager, storage):
        result = lifecycle_manager.enforce_rules("nonexistent-bucket")
        assert len(result.errors) > 0 or result.objects_deleted == 0

    def test_lifecycle_with_date_expiration(self, lifecycle_manager, storage):
        storage.create_bucket("date-bucket")
        past_date = (datetime.now(timezone.utc) - timedelta(days=1)).strftime("%Y-%m-%dT00:00:00Z")
        storage.set_bucket_lifecycle("date-bucket", [
            {
                "ID": "expire-by-date",
                "Status": "Enabled",
                "Prefix": "",
                "Expiration": {"Date": past_date},
            }
        ])

        storage.put_object("date-bucket", "should-expire.txt", io.BytesIO(b"content"))

        result = lifecycle_manager.enforce_rules("date-bucket")

    def test_enforce_with_filter_prefix(self, lifecycle_manager, storage):
        storage.create_bucket("filter-bucket")
        storage.set_bucket_lifecycle("filter-bucket", [
            {
                "ID": "filter-prefix-rule",
                "Status": "Enabled",
                "Filter": {"Prefix": "archive/"},
                "Expiration": {"Days": 1},
            }
        ])

        result = lifecycle_manager.enforce_rules("filter-bucket")
        assert result.bucket_name == "filter-bucket"


class TestLifecycleManagerScheduling:
    def test_schedule_next_respects_shutdown(self, storage):
        manager = LifecycleManager(storage, interval_seconds=1)
        manager._shutdown = True
        manager._schedule_next()
        assert manager._timer is None

    @patch.object(LifecycleManager, "enforce_all_buckets")
    def test_run_enforcement_catches_exceptions(self, mock_enforce, storage):
        mock_enforce.side_effect = Exception("Test error")
        manager = LifecycleManager(storage, interval_seconds=3600)
        manager._shutdown = True
        manager._run_enforcement()

    def test_shutdown_flag_prevents_scheduling(self, storage):
        manager = LifecycleManager(storage, interval_seconds=1)
        manager.start()
        manager.stop()
        assert manager._shutdown is True
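The expiration tests above assert that recent objects survive a `Days`-based rule while a past `Date` rule would match. The core decision those rules imply can be sketched as a small predicate; the rule dict shapes (`{"Days": N}`, `{"Date": ...}`) come from the tests, while the helper itself is illustrative rather than the app's actual implementation:

```python
from datetime import datetime, timedelta, timezone


def is_expired(last_modified, rule, now=None):
    """Decide whether an object falls under a lifecycle Expiration rule.

    Illustrative sketch: supports the Days and Date forms used by the
    tests; returns False when the rule has no Expiration element.
    """
    now = now or datetime.now(timezone.utc)
    expiration = rule.get("Expiration", {})
    if "Days" in expiration:
        return now - last_modified >= timedelta(days=expiration["Days"])
    if "Date" in expiration:
        cutoff = datetime.strptime(
            expiration["Date"], "%Y-%m-%dT%H:%M:%SZ"
        ).replace(tzinfo=timezone.utc)
        return now >= cutoff
    return False


now = datetime(2024, 6, 1, tzinfo=timezone.utc)
rule = {"Expiration": {"Days": 30}}
assert is_expired(now - timedelta(days=31), rule, now) is True   # old enough
assert is_expired(now - timedelta(days=1), rule, now) is False   # too recent
```

This is why `test_enforce_expiration_by_days` expects zero deletions: the object was just written, so the 30-day threshold cannot have elapsed.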
266
tests/test_new_api_endpoints.py
Normal file
@@ -0,0 +1,266 @@
"""Tests for newly implemented S3 API endpoints."""
import io
import pytest
from xml.etree.ElementTree import fromstring


def _stream(data: bytes):
    return io.BytesIO(data)


@pytest.fixture
def storage(app):
    """Get the storage instance from the app."""
    return app.extensions["object_storage"]


class TestListObjectsV2:
    """Tests for ListObjectsV2 endpoint."""

    def test_list_objects_v2_basic(self, client, signer, storage):
        storage.create_bucket("v2-test")
        storage.put_object("v2-test", "file1.txt", _stream(b"hello"))
        storage.put_object("v2-test", "file2.txt", _stream(b"world"))
        storage.put_object("v2-test", "folder/file3.txt", _stream(b"nested"))

        headers = signer("GET", "/v2-test?list-type=2")
        resp = client.get("/v2-test", query_string={"list-type": "2"}, headers=headers)
        assert resp.status_code == 200

        root = fromstring(resp.data)
        assert root.find("KeyCount").text == "3"
        assert root.find("IsTruncated").text == "false"

        keys = [el.find("Key").text for el in root.findall("Contents")]
        assert "file1.txt" in keys
        assert "file2.txt" in keys
        assert "folder/file3.txt" in keys

    def test_list_objects_v2_with_prefix_and_delimiter(self, client, signer, storage):
        storage.create_bucket("prefix-test")
        storage.put_object("prefix-test", "photos/2023/jan.jpg", _stream(b"jan"))
        storage.put_object("prefix-test", "photos/2023/feb.jpg", _stream(b"feb"))
        storage.put_object("prefix-test", "photos/2024/mar.jpg", _stream(b"mar"))
        storage.put_object("prefix-test", "docs/readme.md", _stream(b"readme"))

        headers = signer("GET", "/prefix-test?list-type=2&prefix=photos/&delimiter=/")
        resp = client.get(
            "/prefix-test",
            query_string={"list-type": "2", "prefix": "photos/", "delimiter": "/"},
            headers=headers
        )
        assert resp.status_code == 200

        root = fromstring(resp.data)
        prefixes = [el.find("Prefix").text for el in root.findall("CommonPrefixes")]
        assert "photos/2023/" in prefixes
        assert "photos/2024/" in prefixes
        assert len(root.findall("Contents")) == 0


class TestPutBucketVersioning:
    """Tests for PutBucketVersioning endpoint."""

    def test_put_versioning_enabled(self, client, signer, storage):
        storage.create_bucket("version-test")

        payload = b"""<?xml version="1.0" encoding="UTF-8"?>
<VersioningConfiguration>
    <Status>Enabled</Status>
</VersioningConfiguration>"""

        headers = signer("PUT", "/version-test?versioning", body=payload)
        resp = client.put("/version-test", query_string={"versioning": ""}, data=payload, headers=headers)
        assert resp.status_code == 200

        headers = signer("GET", "/version-test?versioning")
        resp = client.get("/version-test", query_string={"versioning": ""}, headers=headers)
        root = fromstring(resp.data)
        assert root.find("Status").text == "Enabled"

    def test_put_versioning_suspended(self, client, signer, storage):
        storage.create_bucket("suspend-test")
        storage.set_bucket_versioning("suspend-test", True)

        payload = b"""<?xml version="1.0" encoding="UTF-8"?>
<VersioningConfiguration>
    <Status>Suspended</Status>
</VersioningConfiguration>"""

        headers = signer("PUT", "/suspend-test?versioning", body=payload)
        resp = client.put("/suspend-test", query_string={"versioning": ""}, data=payload, headers=headers)
        assert resp.status_code == 200

        headers = signer("GET", "/suspend-test?versioning")
        resp = client.get("/suspend-test", query_string={"versioning": ""}, headers=headers)
        root = fromstring(resp.data)
        assert root.find("Status").text == "Suspended"


class TestDeleteBucketTagging:
    """Tests for DeleteBucketTagging endpoint."""

    def test_delete_bucket_tags(self, client, signer, storage):
        storage.create_bucket("tag-delete-test")
        storage.set_bucket_tags("tag-delete-test", [{"Key": "env", "Value": "test"}])

        headers = signer("DELETE", "/tag-delete-test?tagging")
        resp = client.delete("/tag-delete-test", query_string={"tagging": ""}, headers=headers)
        assert resp.status_code == 204

        headers = signer("GET", "/tag-delete-test?tagging")
        resp = client.get("/tag-delete-test", query_string={"tagging": ""}, headers=headers)
        assert resp.status_code == 404


class TestDeleteBucketCors:
    """Tests for DeleteBucketCors endpoint."""

    def test_delete_bucket_cors(self, client, signer, storage):
        storage.create_bucket("cors-delete-test")
        storage.set_bucket_cors("cors-delete-test", [
            {"AllowedOrigins": ["*"], "AllowedMethods": ["GET"]}
        ])

        headers = signer("DELETE", "/cors-delete-test?cors")
        resp = client.delete("/cors-delete-test", query_string={"cors": ""}, headers=headers)
        assert resp.status_code == 204

        headers = signer("GET", "/cors-delete-test?cors")
        resp = client.get("/cors-delete-test", query_string={"cors": ""}, headers=headers)
        assert resp.status_code == 404


class TestGetBucketLocation:
    """Tests for GetBucketLocation endpoint."""

    def test_get_bucket_location(self, client, signer, storage):
        storage.create_bucket("location-test")

        headers = signer("GET", "/location-test?location")
        resp = client.get("/location-test", query_string={"location": ""}, headers=headers)
        assert resp.status_code == 200

        root = fromstring(resp.data)
        assert root.tag == "LocationConstraint"


class TestBucketAcl:
    """Tests for Bucket ACL operations."""

    def test_get_bucket_acl(self, client, signer, storage):
        storage.create_bucket("acl-test")

        headers = signer("GET", "/acl-test?acl")
        resp = client.get("/acl-test", query_string={"acl": ""}, headers=headers)
        assert resp.status_code == 200

        root = fromstring(resp.data)
        assert root.tag == "AccessControlPolicy"
        assert root.find("Owner/ID") is not None
        assert root.find(".//Permission").text == "FULL_CONTROL"

    def test_put_bucket_acl(self, client, signer, storage):
        storage.create_bucket("acl-put-test")

        headers = signer("PUT", "/acl-put-test?acl")
        headers["x-amz-acl"] = "public-read"
        resp = client.put("/acl-put-test", query_string={"acl": ""}, headers=headers)
        assert resp.status_code == 200


class TestCopyObject:
    """Tests for CopyObject operation."""

    def test_copy_object_basic(self, client, signer, storage):
        storage.create_bucket("copy-src")
        storage.create_bucket("copy-dst")
        storage.put_object("copy-src", "original.txt", _stream(b"original content"))

        headers = signer("PUT", "/copy-dst/copied.txt")
        headers["x-amz-copy-source"] = "/copy-src/original.txt"
        resp = client.put("/copy-dst/copied.txt", headers=headers)
        assert resp.status_code == 200

        root = fromstring(resp.data)
        assert root.tag == "CopyObjectResult"
        assert root.find("ETag") is not None
        assert root.find("LastModified") is not None

        path = storage.get_object_path("copy-dst", "copied.txt")
        assert path.read_bytes() == b"original content"

    def test_copy_object_with_metadata_replace(self, client, signer, storage):
        storage.create_bucket("meta-src")
        storage.create_bucket("meta-dst")
        storage.put_object("meta-src", "source.txt", _stream(b"data"), metadata={"old": "value"})

        headers = signer("PUT", "/meta-dst/target.txt")
        headers["x-amz-copy-source"] = "/meta-src/source.txt"
        headers["x-amz-metadata-directive"] = "REPLACE"
        headers["x-amz-meta-new"] = "metadata"
        resp = client.put("/meta-dst/target.txt", headers=headers)
        assert resp.status_code == 200

        meta = storage.get_object_metadata("meta-dst", "target.txt")
        assert "New" in meta or "new" in meta
        assert "old" not in meta and "Old" not in meta


class TestObjectTagging:
    """Tests for Object tagging operations."""

    def test_put_get_delete_object_tags(self, client, signer, storage):
        storage.create_bucket("obj-tag-test")
        storage.put_object("obj-tag-test", "tagged.txt", _stream(b"content"))

        payload = b"""<?xml version="1.0" encoding="UTF-8"?>
<Tagging>
    <TagSet>
        <Tag><Key>project</Key><Value>demo</Value></Tag>
        <Tag><Key>env</Key><Value>test</Value></Tag>
    </TagSet>
</Tagging>"""

        headers = signer("PUT", "/obj-tag-test/tagged.txt?tagging", body=payload)
        resp = client.put(
            "/obj-tag-test/tagged.txt",
            query_string={"tagging": ""},
            data=payload,
            headers=headers
        )
        assert resp.status_code == 204

        headers = signer("GET", "/obj-tag-test/tagged.txt?tagging")
        resp = client.get("/obj-tag-test/tagged.txt", query_string={"tagging": ""}, headers=headers)
        assert resp.status_code == 200

        root = fromstring(resp.data)
        tags = {el.find("Key").text: el.find("Value").text for el in root.findall(".//Tag")}
        assert tags["project"] == "demo"
        assert tags["env"] == "test"

        headers = signer("DELETE", "/obj-tag-test/tagged.txt?tagging")
        resp = client.delete("/obj-tag-test/tagged.txt", query_string={"tagging": ""}, headers=headers)
        assert resp.status_code == 204

        headers = signer("GET", "/obj-tag-test/tagged.txt?tagging")
        resp = client.get("/obj-tag-test/tagged.txt", query_string={"tagging": ""}, headers=headers)
        root = fromstring(resp.data)
        assert len(root.findall(".//Tag")) == 0

    def test_object_tags_limit(self, client, signer, storage):
        storage.create_bucket("tag-limit")
        storage.put_object("tag-limit", "file.txt", _stream(b"x"))

        tags = "".join(f"<Tag><Key>key{i}</Key><Value>val{i}</Value></Tag>" for i in range(11))
        payload = f"<Tagging><TagSet>{tags}</TagSet></Tagging>".encode()

        headers = signer("PUT", "/tag-limit/file.txt?tagging", body=payload)
        resp = client.put(
            "/tag-limit/file.txt",
            query_string={"tagging": ""},
            data=payload,
            headers=headers
        )
        assert resp.status_code == 400
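The prefix/delimiter test above expects `photos/2023/` and `photos/2024/` to come back as `CommonPrefixes` with no `Contents`. The grouping rule behind that expectation can be sketched in a few lines; this is an illustrative reimplementation of the listing semantics, not the server's actual code:

```python
def group_keys(keys, prefix="", delimiter=""):
    """Split a flat key listing into Contents and CommonPrefixes the way
    ListObjectsV2 does with a prefix and delimiter (illustrative sketch)."""
    contents, common_prefixes = [], set()
    for key in keys:
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        if delimiter and delimiter in rest:
            # Everything up to and including the first delimiter after the
            # prefix is rolled up into a single common prefix.
            common_prefixes.add(prefix + rest.split(delimiter, 1)[0] + delimiter)
        else:
            contents.append(key)
    return contents, sorted(common_prefixes)


keys = ["photos/2023/jan.jpg", "photos/2023/feb.jpg", "photos/2024/mar.jpg", "docs/readme.md"]
contents, prefixes = group_keys(keys, prefix="photos/", delimiter="/")
assert contents == []                                  # all keys roll up
assert prefixes == ["photos/2023/", "photos/2024/"]    # one entry per "folder"
```

Because every key under `photos/` contains a further `/`, nothing is listed directly, which is exactly what the test's `len(root.findall("Contents")) == 0` assertion checks.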
374
tests/test_notifications.py
Normal file
@@ -0,0 +1,374 @@
import json
import time
from datetime import datetime, timezone
from pathlib import Path
from unittest.mock import MagicMock, patch

import pytest

from app.notifications import (
    NotificationConfiguration,
    NotificationEvent,
    NotificationService,
    WebhookDestination,
)


class TestNotificationEvent:
    def test_default_values(self):
        event = NotificationEvent(
            event_name="s3:ObjectCreated:Put",
            bucket_name="test-bucket",
            object_key="test/key.txt",
        )
        assert event.event_name == "s3:ObjectCreated:Put"
        assert event.bucket_name == "test-bucket"
        assert event.object_key == "test/key.txt"
        assert event.object_size == 0
        assert event.etag == ""
        assert event.version_id is None
        assert event.request_id != ""

    def test_to_s3_event(self):
        event = NotificationEvent(
            event_name="s3:ObjectCreated:Put",
            bucket_name="my-bucket",
            object_key="my/object.txt",
            object_size=1024,
            etag="abc123",
            version_id="v1",
            source_ip="192.168.1.1",
            user_identity="user123",
        )
        result = event.to_s3_event()

        assert "Records" in result
        assert len(result["Records"]) == 1

        record = result["Records"][0]
        assert record["eventVersion"] == "2.1"
        assert record["eventSource"] == "myfsio:s3"
        assert record["eventName"] == "s3:ObjectCreated:Put"
        assert record["s3"]["bucket"]["name"] == "my-bucket"
        assert record["s3"]["object"]["key"] == "my/object.txt"
        assert record["s3"]["object"]["size"] == 1024
        assert record["s3"]["object"]["eTag"] == "abc123"
        assert record["s3"]["object"]["versionId"] == "v1"
        assert record["userIdentity"]["principalId"] == "user123"
        assert record["requestParameters"]["sourceIPAddress"] == "192.168.1.1"


class TestWebhookDestination:
    def test_default_values(self):
        dest = WebhookDestination(url="http://example.com/webhook")
        assert dest.url == "http://example.com/webhook"
        assert dest.headers == {}
        assert dest.timeout_seconds == 30
        assert dest.retry_count == 3
        assert dest.retry_delay_seconds == 1

    def test_to_dict(self):
        dest = WebhookDestination(
            url="http://example.com/webhook",
            headers={"X-Custom": "value"},
            timeout_seconds=60,
            retry_count=5,
            retry_delay_seconds=2,
        )
        result = dest.to_dict()
        assert result["url"] == "http://example.com/webhook"
        assert result["headers"] == {"X-Custom": "value"}
        assert result["timeout_seconds"] == 60
        assert result["retry_count"] == 5
        assert result["retry_delay_seconds"] == 2

    def test_from_dict(self):
        data = {
            "url": "http://hook.example.com",
            "headers": {"Authorization": "Bearer token"},
            "timeout_seconds": 45,
            "retry_count": 2,
            "retry_delay_seconds": 5,
        }
        dest = WebhookDestination.from_dict(data)
        assert dest.url == "http://hook.example.com"
        assert dest.headers == {"Authorization": "Bearer token"}
        assert dest.timeout_seconds == 45
        assert dest.retry_count == 2
        assert dest.retry_delay_seconds == 5


class TestNotificationConfiguration:
    def test_matches_event_exact_match(self):
        config = NotificationConfiguration(
            id="config1",
            events=["s3:ObjectCreated:Put"],
            destination=WebhookDestination(url="http://example.com"),
        )
        assert config.matches_event("s3:ObjectCreated:Put", "any/key.txt") is True
        assert config.matches_event("s3:ObjectCreated:Post", "any/key.txt") is False

    def test_matches_event_wildcard(self):
        config = NotificationConfiguration(
            id="config1",
            events=["s3:ObjectCreated:*"],
            destination=WebhookDestination(url="http://example.com"),
        )
        assert config.matches_event("s3:ObjectCreated:Put", "key.txt") is True
        assert config.matches_event("s3:ObjectCreated:Copy", "key.txt") is True
        assert config.matches_event("s3:ObjectRemoved:Delete", "key.txt") is False

    def test_matches_event_with_prefix_filter(self):
        config = NotificationConfiguration(
            id="config1",
            events=["s3:ObjectCreated:*"],
            destination=WebhookDestination(url="http://example.com"),
            prefix_filter="logs/",
        )
        assert config.matches_event("s3:ObjectCreated:Put", "logs/app.log") is True
        assert config.matches_event("s3:ObjectCreated:Put", "data/file.txt") is False

    def test_matches_event_with_suffix_filter(self):
        config = NotificationConfiguration(
            id="config1",
            events=["s3:ObjectCreated:*"],
            destination=WebhookDestination(url="http://example.com"),
            suffix_filter=".jpg",
        )
        assert config.matches_event("s3:ObjectCreated:Put", "photos/image.jpg") is True
        assert config.matches_event("s3:ObjectCreated:Put", "photos/image.png") is False

    def test_matches_event_with_both_filters(self):
        config = NotificationConfiguration(
            id="config1",
            events=["s3:ObjectCreated:*"],
            destination=WebhookDestination(url="http://example.com"),
            prefix_filter="images/",
            suffix_filter=".png",
        )
        assert config.matches_event("s3:ObjectCreated:Put", "images/photo.png") is True
        assert config.matches_event("s3:ObjectCreated:Put", "images/photo.jpg") is False
        assert config.matches_event("s3:ObjectCreated:Put", "documents/file.png") is False

    def test_to_dict(self):
        config = NotificationConfiguration(
            id="my-config",
            events=["s3:ObjectCreated:Put", "s3:ObjectRemoved:Delete"],
            destination=WebhookDestination(url="http://example.com"),
            prefix_filter="logs/",
            suffix_filter=".log",
        )
        result = config.to_dict()
        assert result["Id"] == "my-config"
        assert result["Events"] == ["s3:ObjectCreated:Put", "s3:ObjectRemoved:Delete"]
        assert "Destination" in result
        assert result["Filter"]["Key"]["FilterRules"][0]["Value"] == "logs/"
        assert result["Filter"]["Key"]["FilterRules"][1]["Value"] == ".log"

    def test_from_dict(self):
        data = {
            "Id": "parsed-config",
            "Events": ["s3:ObjectCreated:*"],
            "Destination": {"url": "http://hook.example.com"},
            "Filter": {
                "Key": {
                    "FilterRules": [
                        {"Name": "prefix", "Value": "data/"},
                        {"Name": "suffix", "Value": ".csv"},
                    ]
                }
            },
        }
        config = NotificationConfiguration.from_dict(data)
        assert config.id == "parsed-config"
        assert config.events == ["s3:ObjectCreated:*"]
        assert config.destination.url == "http://hook.example.com"
        assert config.prefix_filter == "data/"
        assert config.suffix_filter == ".csv"


@pytest.fixture
def notification_service(tmp_path: Path):
    service = NotificationService(tmp_path, worker_count=1)
    yield service
    service.shutdown()


class TestNotificationService:
    def test_get_bucket_notifications_empty(self, notification_service):
        result = notification_service.get_bucket_notifications("nonexistent-bucket")
        assert result == []

    def test_set_and_get_bucket_notifications(self, notification_service):
        configs = [
            NotificationConfiguration(
                id="config1",
                events=["s3:ObjectCreated:*"],
                destination=WebhookDestination(url="http://example.com/webhook1"),
            ),
            NotificationConfiguration(
                id="config2",
                events=["s3:ObjectRemoved:*"],
                destination=WebhookDestination(url="http://example.com/webhook2"),
            ),
        ]
        notification_service.set_bucket_notifications("my-bucket", configs)

        retrieved = notification_service.get_bucket_notifications("my-bucket")
        assert len(retrieved) == 2
        assert retrieved[0].id == "config1"
        assert retrieved[1].id == "config2"

    def test_delete_bucket_notifications(self, notification_service):
        configs = [
            NotificationConfiguration(
                id="to-delete",
                events=["s3:ObjectCreated:*"],
                destination=WebhookDestination(url="http://example.com"),
            ),
        ]
        notification_service.set_bucket_notifications("delete-bucket", configs)
        assert len(notification_service.get_bucket_notifications("delete-bucket")) == 1

        notification_service.delete_bucket_notifications("delete-bucket")
        notification_service._configs.clear()
        assert len(notification_service.get_bucket_notifications("delete-bucket")) == 0

    def test_emit_event_no_config(self, notification_service):
        event = NotificationEvent(
            event_name="s3:ObjectCreated:Put",
            bucket_name="no-config-bucket",
            object_key="test.txt",
        )
        notification_service.emit_event(event)
        assert notification_service._stats["events_queued"] == 0

    def test_emit_event_matching_config(self, notification_service):
        configs = [
            NotificationConfiguration(
                id="match-config",
                events=["s3:ObjectCreated:*"],
                destination=WebhookDestination(url="http://example.com/webhook"),
            ),
        ]
        notification_service.set_bucket_notifications("event-bucket", configs)

        event = NotificationEvent(
            event_name="s3:ObjectCreated:Put",
            bucket_name="event-bucket",
            object_key="test.txt",
        )
        notification_service.emit_event(event)
        assert notification_service._stats["events_queued"] == 1

    def test_emit_event_non_matching_config(self, notification_service):
        configs = [
            NotificationConfiguration(
                id="delete-only",
                events=["s3:ObjectRemoved:*"],
                destination=WebhookDestination(url="http://example.com/webhook"),
            ),
        ]
        notification_service.set_bucket_notifications("delete-bucket", configs)

        event = NotificationEvent(
            event_name="s3:ObjectCreated:Put",
            bucket_name="delete-bucket",
            object_key="test.txt",
        )
        notification_service.emit_event(event)
        assert notification_service._stats["events_queued"] == 0

    def test_emit_object_created(self, notification_service):
        configs = [
            NotificationConfiguration(
                id="create-config",
                events=["s3:ObjectCreated:Put"],
                destination=WebhookDestination(url="http://example.com/webhook"),
            ),
        ]
        notification_service.set_bucket_notifications("create-bucket", configs)

        notification_service.emit_object_created(
            "create-bucket",
            "new-file.txt",
            size=1024,
            etag="abc123",
            operation="Put",
        )
        assert notification_service._stats["events_queued"] == 1

    def test_emit_object_removed(self, notification_service):
        configs = [
            NotificationConfiguration(
                id="remove-config",
                events=["s3:ObjectRemoved:Delete"],
                destination=WebhookDestination(url="http://example.com/webhook"),
            ),
        ]
        notification_service.set_bucket_notifications("remove-bucket", configs)

        notification_service.emit_object_removed(
            "remove-bucket",
            "deleted-file.txt",
            operation="Delete",
        )
        assert notification_service._stats["events_queued"] == 1

    def test_get_stats(self, notification_service):
        stats = notification_service.get_stats()
        assert "events_queued" in stats
        assert "events_sent" in stats
        assert "events_failed" in stats

    @patch("app.notifications.requests.post")
    def test_send_notification_success(self, mock_post, notification_service):
        mock_response = MagicMock()
        mock_response.status_code = 200
        mock_post.return_value = mock_response

        event = NotificationEvent(
            event_name="s3:ObjectCreated:Put",
            bucket_name="test-bucket",
            object_key="test.txt",
        )
        destination = WebhookDestination(url="http://example.com/webhook")

        notification_service._send_notification(event, destination)
        mock_post.assert_called_once()

    @patch("app.notifications.requests.post")
    def test_send_notification_retry_on_failure(self, mock_post, notification_service):
        mock_response = MagicMock()
        mock_response.status_code = 500
        mock_response.text = "Internal Server Error"
        mock_post.return_value = mock_response

        event = NotificationEvent(
            event_name="s3:ObjectCreated:Put",
            bucket_name="test-bucket",
            object_key="test.txt",
        )
        destination = WebhookDestination(
            url="http://example.com/webhook",
            retry_count=2,
            retry_delay_seconds=0,
        )

        with pytest.raises(RuntimeError) as exc_info:
            notification_service._send_notification(event, destination)
        assert "Failed after 2 attempts" in str(exc_info.value)
        assert mock_post.call_count == 2

    def test_notification_caching(self, notification_service):
        configs = [
            NotificationConfiguration(
                id="cached-config",
                events=["s3:ObjectCreated:*"],
                destination=WebhookDestination(url="http://example.com"),
            ),
        ]
        notification_service.set_bucket_notifications("cached-bucket", configs)

        notification_service.get_bucket_notifications("cached-bucket")
        assert "cached-bucket" in notification_service._configs
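The `TestNotificationConfiguration` cases above pin down the matching contract: wildcard event patterns plus optional key prefix/suffix filters. A minimal sketch of that logic, assuming `fnmatch`-style wildcards (this is an illustration, not the project's actual `matches_event` implementation):

```python
from fnmatch import fnmatch


def matches_event(events, event_name, object_key, prefix="", suffix=""):
    """Return True if event_name matches a configured pattern and the key passes both filters."""
    if not any(fnmatch(event_name, pattern) for pattern in events):
        return False
    if prefix and not object_key.startswith(prefix):
        return False
    if suffix and not object_key.endswith(suffix):
        return False
    return True


print(matches_event(["s3:ObjectCreated:*"], "s3:ObjectCreated:Put", "logs/app.log", prefix="logs/"))
```

Literal patterns such as `s3:ObjectCreated:Put` match only themselves under `fnmatch`, so the same code covers both the exact-match and wildcard test cases.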
332 tests/test_object_lock.py Normal file
@@ -0,0 +1,332 @@
import json
from datetime import datetime, timedelta, timezone
from pathlib import Path

import pytest

from app.object_lock import (
    ObjectLockConfig,
    ObjectLockError,
    ObjectLockRetention,
    ObjectLockService,
    RetentionMode,
)


class TestRetentionMode:
    def test_governance_mode(self):
        assert RetentionMode.GOVERNANCE.value == "GOVERNANCE"

    def test_compliance_mode(self):
        assert RetentionMode.COMPLIANCE.value == "COMPLIANCE"


class TestObjectLockRetention:
    def test_to_dict(self):
        retain_until = datetime(2025, 12, 31, 23, 59, 59, tzinfo=timezone.utc)
        retention = ObjectLockRetention(
            mode=RetentionMode.GOVERNANCE,
            retain_until_date=retain_until,
        )
        result = retention.to_dict()
        assert result["Mode"] == "GOVERNANCE"
        assert "2025-12-31" in result["RetainUntilDate"]

    def test_from_dict(self):
        data = {
            "Mode": "COMPLIANCE",
            "RetainUntilDate": "2030-06-15T12:00:00+00:00",
        }
        retention = ObjectLockRetention.from_dict(data)
        assert retention is not None
        assert retention.mode == RetentionMode.COMPLIANCE
        assert retention.retain_until_date.year == 2030

    def test_from_dict_empty(self):
        result = ObjectLockRetention.from_dict({})
        assert result is None

    def test_from_dict_missing_mode(self):
        data = {"RetainUntilDate": "2030-06-15T12:00:00+00:00"}
        result = ObjectLockRetention.from_dict(data)
        assert result is None

    def test_from_dict_missing_date(self):
        data = {"Mode": "GOVERNANCE"}
        result = ObjectLockRetention.from_dict(data)
        assert result is None

    def test_is_expired_future_date(self):
        future = datetime.now(timezone.utc) + timedelta(days=30)
        retention = ObjectLockRetention(
            mode=RetentionMode.GOVERNANCE,
            retain_until_date=future,
        )
        assert retention.is_expired() is False

    def test_is_expired_past_date(self):
        past = datetime.now(timezone.utc) - timedelta(days=30)
        retention = ObjectLockRetention(
            mode=RetentionMode.GOVERNANCE,
            retain_until_date=past,
        )
        assert retention.is_expired() is True


class TestObjectLockConfig:
    def test_to_dict_enabled(self):
        config = ObjectLockConfig(enabled=True)
        result = config.to_dict()
        assert result["ObjectLockEnabled"] == "Enabled"

    def test_to_dict_disabled(self):
        config = ObjectLockConfig(enabled=False)
        result = config.to_dict()
        assert result["ObjectLockEnabled"] == "Disabled"

    def test_from_dict_enabled(self):
        data = {"ObjectLockEnabled": "Enabled"}
        config = ObjectLockConfig.from_dict(data)
        assert config.enabled is True

    def test_from_dict_disabled(self):
        data = {"ObjectLockEnabled": "Disabled"}
        config = ObjectLockConfig.from_dict(data)
        assert config.enabled is False

    def test_from_dict_with_default_retention_days(self):
        data = {
            "ObjectLockEnabled": "Enabled",
            "Rule": {
                "DefaultRetention": {
                    "Mode": "GOVERNANCE",
                    "Days": 30,
                }
            },
        }
        config = ObjectLockConfig.from_dict(data)
        assert config.enabled is True
        assert config.default_retention is not None
        assert config.default_retention.mode == RetentionMode.GOVERNANCE

    def test_from_dict_with_default_retention_years(self):
        data = {
            "ObjectLockEnabled": "Enabled",
            "Rule": {
                "DefaultRetention": {
                    "Mode": "COMPLIANCE",
                    "Years": 1,
                }
            },
        }
        config = ObjectLockConfig.from_dict(data)
        assert config.enabled is True
        assert config.default_retention is not None
        assert config.default_retention.mode == RetentionMode.COMPLIANCE


@pytest.fixture
def lock_service(tmp_path: Path):
    return ObjectLockService(tmp_path)


class TestObjectLockService:
    def test_get_bucket_lock_config_default(self, lock_service):
        config = lock_service.get_bucket_lock_config("nonexistent-bucket")
        assert config.enabled is False
        assert config.default_retention is None

    def test_set_and_get_bucket_lock_config(self, lock_service):
        config = ObjectLockConfig(enabled=True)
        lock_service.set_bucket_lock_config("my-bucket", config)

        retrieved = lock_service.get_bucket_lock_config("my-bucket")
        assert retrieved.enabled is True

    def test_enable_bucket_lock(self, lock_service):
        lock_service.enable_bucket_lock("lock-bucket")

        config = lock_service.get_bucket_lock_config("lock-bucket")
        assert config.enabled is True

    def test_is_bucket_lock_enabled(self, lock_service):
        assert lock_service.is_bucket_lock_enabled("new-bucket") is False

        lock_service.enable_bucket_lock("new-bucket")
        assert lock_service.is_bucket_lock_enabled("new-bucket") is True

    def test_get_object_retention_not_set(self, lock_service):
        result = lock_service.get_object_retention("bucket", "key.txt")
        assert result is None

    def test_set_and_get_object_retention(self, lock_service):
        future = datetime.now(timezone.utc) + timedelta(days=30)
        retention = ObjectLockRetention(
            mode=RetentionMode.GOVERNANCE,
            retain_until_date=future,
        )
        lock_service.set_object_retention("bucket", "key.txt", retention)

        retrieved = lock_service.get_object_retention("bucket", "key.txt")
        assert retrieved is not None
        assert retrieved.mode == RetentionMode.GOVERNANCE

    def test_cannot_modify_compliance_retention(self, lock_service):
        future = datetime.now(timezone.utc) + timedelta(days=30)
        retention = ObjectLockRetention(
            mode=RetentionMode.COMPLIANCE,
            retain_until_date=future,
        )
        lock_service.set_object_retention("bucket", "locked.txt", retention)

        new_retention = ObjectLockRetention(
            mode=RetentionMode.GOVERNANCE,
            retain_until_date=future + timedelta(days=10),
        )
        with pytest.raises(ObjectLockError) as exc_info:
            lock_service.set_object_retention("bucket", "locked.txt", new_retention)
        assert "COMPLIANCE" in str(exc_info.value)

    def test_cannot_modify_governance_without_bypass(self, lock_service):
        future = datetime.now(timezone.utc) + timedelta(days=30)
        retention = ObjectLockRetention(
            mode=RetentionMode.GOVERNANCE,
            retain_until_date=future,
        )
        lock_service.set_object_retention("bucket", "gov.txt", retention)

        new_retention = ObjectLockRetention(
            mode=RetentionMode.GOVERNANCE,
            retain_until_date=future + timedelta(days=10),
        )
        with pytest.raises(ObjectLockError) as exc_info:
            lock_service.set_object_retention("bucket", "gov.txt", new_retention)
        assert "GOVERNANCE" in str(exc_info.value)

    def test_can_modify_governance_with_bypass(self, lock_service):
        future = datetime.now(timezone.utc) + timedelta(days=30)
        retention = ObjectLockRetention(
            mode=RetentionMode.GOVERNANCE,
            retain_until_date=future,
        )
        lock_service.set_object_retention("bucket", "bypassable.txt", retention)

        new_retention = ObjectLockRetention(
            mode=RetentionMode.GOVERNANCE,
            retain_until_date=future + timedelta(days=10),
        )
        lock_service.set_object_retention("bucket", "bypassable.txt", new_retention, bypass_governance=True)
        retrieved = lock_service.get_object_retention("bucket", "bypassable.txt")
        assert retrieved.retain_until_date > future

    def test_can_modify_expired_retention(self, lock_service):
        past = datetime.now(timezone.utc) - timedelta(days=30)
        retention = ObjectLockRetention(
            mode=RetentionMode.COMPLIANCE,
            retain_until_date=past,
        )
        lock_service.set_object_retention("bucket", "expired.txt", retention)

        future = datetime.now(timezone.utc) + timedelta(days=30)
        new_retention = ObjectLockRetention(
            mode=RetentionMode.GOVERNANCE,
            retain_until_date=future,
        )
        lock_service.set_object_retention("bucket", "expired.txt", new_retention)
        retrieved = lock_service.get_object_retention("bucket", "expired.txt")
        assert retrieved.mode == RetentionMode.GOVERNANCE

    def test_get_legal_hold_not_set(self, lock_service):
        result = lock_service.get_legal_hold("bucket", "key.txt")
        assert result is False

    def test_set_and_get_legal_hold(self, lock_service):
        lock_service.set_legal_hold("bucket", "held.txt", True)
        assert lock_service.get_legal_hold("bucket", "held.txt") is True

        lock_service.set_legal_hold("bucket", "held.txt", False)
        assert lock_service.get_legal_hold("bucket", "held.txt") is False

    def test_can_delete_object_no_lock(self, lock_service):
        can_delete, reason = lock_service.can_delete_object("bucket", "unlocked.txt")
        assert can_delete is True
        assert reason == ""

    def test_cannot_delete_object_with_legal_hold(self, lock_service):
        lock_service.set_legal_hold("bucket", "held.txt", True)

        can_delete, reason = lock_service.can_delete_object("bucket", "held.txt")
        assert can_delete is False
        assert "legal hold" in reason.lower()

    def test_cannot_delete_object_with_compliance_retention(self, lock_service):
        future = datetime.now(timezone.utc) + timedelta(days=30)
        retention = ObjectLockRetention(
            mode=RetentionMode.COMPLIANCE,
            retain_until_date=future,
        )
        lock_service.set_object_retention("bucket", "compliant.txt", retention)

        can_delete, reason = lock_service.can_delete_object("bucket", "compliant.txt")
        assert can_delete is False
        assert "COMPLIANCE" in reason

    def test_cannot_delete_governance_without_bypass(self, lock_service):
        future = datetime.now(timezone.utc) + timedelta(days=30)
        retention = ObjectLockRetention(
            mode=RetentionMode.GOVERNANCE,
            retain_until_date=future,
        )
        lock_service.set_object_retention("bucket", "governed.txt", retention)

        can_delete, reason = lock_service.can_delete_object("bucket", "governed.txt")
        assert can_delete is False
        assert "GOVERNANCE" in reason

    def test_can_delete_governance_with_bypass(self, lock_service):
        future = datetime.now(timezone.utc) + timedelta(days=30)
        retention = ObjectLockRetention(
            mode=RetentionMode.GOVERNANCE,
            retain_until_date=future,
        )
        lock_service.set_object_retention("bucket", "governed.txt", retention)

        can_delete, reason = lock_service.can_delete_object("bucket", "governed.txt", bypass_governance=True)
        assert can_delete is True
        assert reason == ""

    def test_can_delete_expired_retention(self, lock_service):
        past = datetime.now(timezone.utc) - timedelta(days=30)
        retention = ObjectLockRetention(
            mode=RetentionMode.COMPLIANCE,
            retain_until_date=past,
        )
        lock_service.set_object_retention("bucket", "expired.txt", retention)

        can_delete, reason = lock_service.can_delete_object("bucket", "expired.txt")
        assert can_delete is True

    def test_can_overwrite_is_same_as_delete(self, lock_service):
        future = datetime.now(timezone.utc) + timedelta(days=30)
        retention = ObjectLockRetention(
            mode=RetentionMode.GOVERNANCE,
            retain_until_date=future,
        )
        lock_service.set_object_retention("bucket", "overwrite.txt", retention)

        can_overwrite, _ = lock_service.can_overwrite_object("bucket", "overwrite.txt")
        can_delete, _ = lock_service.can_delete_object("bucket", "overwrite.txt")
        assert can_overwrite == can_delete

    def test_delete_object_lock_metadata(self, lock_service):
        lock_service.set_legal_hold("bucket", "cleanup.txt", True)
        lock_service.delete_object_lock_metadata("bucket", "cleanup.txt")

        assert lock_service.get_legal_hold("bucket", "cleanup.txt") is False

    def test_config_caching(self, lock_service):
        config = ObjectLockConfig(enabled=True)
        lock_service.set_bucket_lock_config("cached-bucket", config)

        lock_service.get_bucket_lock_config("cached-bucket")
        assert "cached-bucket" in lock_service._config_cache
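The `TestObjectLockService` deletion cases above describe a three-way gate: a legal hold always blocks, COMPLIANCE retention blocks until it expires, and GOVERNANCE retention blocks unless the caller passes `bypass_governance`. A hedged, self-contained sketch of that decision (function and parameter names here are assumptions, not the project's actual signatures):

```python
from datetime import datetime, timezone


def can_delete(legal_hold, mode, retain_until, bypass_governance=False):
    """Return (allowed, reason), mirroring the contract the tests above check."""
    if legal_hold:
        return False, "object is under legal hold"
    # Retention only applies while the retain-until date is in the future.
    if mode and retain_until > datetime.now(timezone.utc):
        if mode == "COMPLIANCE":
            return False, "COMPLIANCE retention until " + retain_until.isoformat()
        if mode == "GOVERNANCE" and not bypass_governance:
            return False, "GOVERNANCE retention until " + retain_until.isoformat()
    return True, ""
```

Note that overwrites go through the same gate as deletes (`test_can_overwrite_is_same_as_delete`), which is why a single predicate like this suffices for both paths.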
287 tests/test_replication.py Normal file
@@ -0,0 +1,287 @@
import json
import time
from pathlib import Path
from unittest.mock import MagicMock, patch

import pytest

from app.connections import ConnectionStore, RemoteConnection
from app.replication import (
    ReplicationManager,
    ReplicationRule,
    ReplicationStats,
    REPLICATION_MODE_ALL,
    REPLICATION_MODE_NEW_ONLY,
    _create_s3_client,
)
from app.storage import ObjectStorage


@pytest.fixture
def storage(tmp_path: Path):
    storage_root = tmp_path / "data"
    storage_root.mkdir(parents=True)
    return ObjectStorage(storage_root)


@pytest.fixture
def connections(tmp_path: Path):
    connections_path = tmp_path / "connections.json"
    store = ConnectionStore(connections_path)
    conn = RemoteConnection(
        id="test-conn",
        name="Test Remote",
        endpoint_url="http://localhost:9000",
        access_key="remote-access",
        secret_key="remote-secret",
        region="us-east-1",
    )
    store.add(conn)
    return store


@pytest.fixture
def replication_manager(storage, connections, tmp_path):
    rules_path = tmp_path / "replication_rules.json"
    storage_root = tmp_path / "data"
    storage_root.mkdir(exist_ok=True)
    manager = ReplicationManager(storage, connections, rules_path, storage_root)
    yield manager
    manager.shutdown(wait=False)


class TestReplicationStats:
    def test_to_dict(self):
        stats = ReplicationStats(
            objects_synced=10,
            objects_pending=5,
            objects_orphaned=2,
            bytes_synced=1024,
            last_sync_at=1234567890.0,
            last_sync_key="test/key.txt",
        )
        result = stats.to_dict()
        assert result["objects_synced"] == 10
        assert result["objects_pending"] == 5
        assert result["objects_orphaned"] == 2
        assert result["bytes_synced"] == 1024
        assert result["last_sync_at"] == 1234567890.0
        assert result["last_sync_key"] == "test/key.txt"

    def test_from_dict(self):
        data = {
            "objects_synced": 15,
            "objects_pending": 3,
            "objects_orphaned": 1,
            "bytes_synced": 2048,
            "last_sync_at": 9876543210.0,
            "last_sync_key": "another/key.txt",
        }
        stats = ReplicationStats.from_dict(data)
        assert stats.objects_synced == 15
        assert stats.objects_pending == 3
        assert stats.objects_orphaned == 1
        assert stats.bytes_synced == 2048
        assert stats.last_sync_at == 9876543210.0
        assert stats.last_sync_key == "another/key.txt"

    def test_from_dict_with_defaults(self):
        stats = ReplicationStats.from_dict({})
        assert stats.objects_synced == 0
        assert stats.objects_pending == 0
        assert stats.objects_orphaned == 0
        assert stats.bytes_synced == 0
        assert stats.last_sync_at is None
        assert stats.last_sync_key is None


class TestReplicationRule:
    def test_to_dict(self):
        rule = ReplicationRule(
            bucket_name="source-bucket",
            target_connection_id="test-conn",
            target_bucket="dest-bucket",
            enabled=True,
            mode=REPLICATION_MODE_ALL,
            created_at=1234567890.0,
        )
        result = rule.to_dict()
        assert result["bucket_name"] == "source-bucket"
        assert result["target_connection_id"] == "test-conn"
        assert result["target_bucket"] == "dest-bucket"
        assert result["enabled"] is True
        assert result["mode"] == REPLICATION_MODE_ALL
        assert result["created_at"] == 1234567890.0
        assert "stats" in result

    def test_from_dict(self):
        data = {
            "bucket_name": "my-bucket",
            "target_connection_id": "conn-123",
            "target_bucket": "remote-bucket",
            "enabled": False,
            "mode": REPLICATION_MODE_NEW_ONLY,
            "created_at": 1111111111.0,
            "stats": {"objects_synced": 5},
        }
        rule = ReplicationRule.from_dict(data)
        assert rule.bucket_name == "my-bucket"
        assert rule.target_connection_id == "conn-123"
|
||||||
|
assert rule.target_bucket == "remote-bucket"
|
||||||
|
assert rule.enabled is False
|
||||||
|
assert rule.mode == REPLICATION_MODE_NEW_ONLY
|
||||||
|
assert rule.created_at == 1111111111.0
|
||||||
|
assert rule.stats.objects_synced == 5
|
||||||
|
|
||||||
|
def test_from_dict_defaults_mode(self):
|
||||||
|
data = {
|
||||||
|
"bucket_name": "my-bucket",
|
||||||
|
"target_connection_id": "conn-123",
|
||||||
|
"target_bucket": "remote-bucket",
|
||||||
|
}
|
||||||
|
rule = ReplicationRule.from_dict(data)
|
||||||
|
assert rule.mode == REPLICATION_MODE_NEW_ONLY
|
||||||
|
assert rule.created_at is None
|
||||||
|
|
||||||
|
|
||||||
|
class TestReplicationManager:
|
||||||
|
def test_get_rule_not_exists(self, replication_manager):
|
||||||
|
rule = replication_manager.get_rule("nonexistent-bucket")
|
||||||
|
assert rule is None
|
||||||
|
|
||||||
|
def test_set_and_get_rule(self, replication_manager):
|
||||||
|
rule = ReplicationRule(
|
||||||
|
bucket_name="my-bucket",
|
||||||
|
target_connection_id="test-conn",
|
||||||
|
target_bucket="remote-bucket",
|
||||||
|
enabled=True,
|
||||||
|
mode=REPLICATION_MODE_NEW_ONLY,
|
||||||
|
created_at=time.time(),
|
||||||
|
)
|
||||||
|
replication_manager.set_rule(rule)
|
||||||
|
|
||||||
|
retrieved = replication_manager.get_rule("my-bucket")
|
||||||
|
assert retrieved is not None
|
||||||
|
assert retrieved.bucket_name == "my-bucket"
|
||||||
|
assert retrieved.target_connection_id == "test-conn"
|
||||||
|
assert retrieved.target_bucket == "remote-bucket"
|
||||||
|
|
||||||
|
def test_delete_rule(self, replication_manager):
|
||||||
|
rule = ReplicationRule(
|
||||||
|
bucket_name="to-delete",
|
||||||
|
target_connection_id="test-conn",
|
||||||
|
target_bucket="remote-bucket",
|
||||||
|
)
|
||||||
|
replication_manager.set_rule(rule)
|
||||||
|
assert replication_manager.get_rule("to-delete") is not None
|
||||||
|
|
||||||
|
replication_manager.delete_rule("to-delete")
|
||||||
|
assert replication_manager.get_rule("to-delete") is None
|
||||||
|
|
||||||
|
def test_save_and_reload_rules(self, replication_manager, tmp_path):
|
||||||
|
rule = ReplicationRule(
|
||||||
|
bucket_name="persistent-bucket",
|
||||||
|
target_connection_id="test-conn",
|
||||||
|
target_bucket="remote-bucket",
|
||||||
|
enabled=True,
|
||||||
|
)
|
||||||
|
replication_manager.set_rule(rule)
|
||||||
|
|
||||||
|
rules_path = tmp_path / "replication_rules.json"
|
||||||
|
assert rules_path.exists()
|
||||||
|
data = json.loads(rules_path.read_text())
|
||||||
|
assert "persistent-bucket" in data
|
||||||
|
|
||||||
|
@patch("app.replication._create_s3_client")
|
||||||
|
def test_check_endpoint_health_success(self, mock_create_client, replication_manager, connections):
|
||||||
|
mock_client = MagicMock()
|
||||||
|
mock_client.list_buckets.return_value = {"Buckets": []}
|
||||||
|
mock_create_client.return_value = mock_client
|
||||||
|
|
||||||
|
conn = connections.get("test-conn")
|
||||||
|
result = replication_manager.check_endpoint_health(conn)
|
||||||
|
assert result is True
|
||||||
|
mock_client.list_buckets.assert_called_once()
|
||||||
|
|
||||||
|
@patch("app.replication._create_s3_client")
|
||||||
|
def test_check_endpoint_health_failure(self, mock_create_client, replication_manager, connections):
|
||||||
|
mock_client = MagicMock()
|
||||||
|
mock_client.list_buckets.side_effect = Exception("Connection refused")
|
||||||
|
mock_create_client.return_value = mock_client
|
||||||
|
|
||||||
|
conn = connections.get("test-conn")
|
||||||
|
result = replication_manager.check_endpoint_health(conn)
|
||||||
|
assert result is False
|
||||||
|
|
||||||
|
def test_trigger_replication_no_rule(self, replication_manager):
|
||||||
|
replication_manager.trigger_replication("no-such-bucket", "test.txt", "write")
|
||||||
|
|
||||||
|
def test_trigger_replication_disabled_rule(self, replication_manager):
|
||||||
|
rule = ReplicationRule(
|
||||||
|
bucket_name="disabled-bucket",
|
||||||
|
target_connection_id="test-conn",
|
||||||
|
target_bucket="remote-bucket",
|
||||||
|
enabled=False,
|
||||||
|
)
|
||||||
|
replication_manager.set_rule(rule)
|
||||||
|
replication_manager.trigger_replication("disabled-bucket", "test.txt", "write")
|
||||||
|
|
||||||
|
def test_trigger_replication_missing_connection(self, replication_manager):
|
||||||
|
rule = ReplicationRule(
|
||||||
|
bucket_name="orphan-bucket",
|
||||||
|
target_connection_id="missing-conn",
|
||||||
|
target_bucket="remote-bucket",
|
||||||
|
enabled=True,
|
||||||
|
)
|
||||||
|
replication_manager.set_rule(rule)
|
||||||
|
replication_manager.trigger_replication("orphan-bucket", "test.txt", "write")
|
||||||
|
|
||||||
|
def test_replicate_task_path_traversal_blocked(self, replication_manager, connections):
|
||||||
|
rule = ReplicationRule(
|
||||||
|
bucket_name="secure-bucket",
|
||||||
|
target_connection_id="test-conn",
|
||||||
|
target_bucket="remote-bucket",
|
||||||
|
enabled=True,
|
||||||
|
)
|
||||||
|
replication_manager.set_rule(rule)
|
||||||
|
conn = connections.get("test-conn")
|
||||||
|
|
||||||
|
replication_manager._replicate_task("secure-bucket", "../../../etc/passwd", rule, conn, "write")
|
||||||
|
replication_manager._replicate_task("secure-bucket", "/root/secret", rule, conn, "write")
|
||||||
|
replication_manager._replicate_task("secure-bucket", "..\\..\\windows\\system32", rule, conn, "write")
|
||||||
|
|
||||||
|
|
||||||
|
class TestCreateS3Client:
|
||||||
|
@patch("app.replication.boto3.client")
|
||||||
|
def test_creates_client_with_correct_config(self, mock_boto_client):
|
||||||
|
conn = RemoteConnection(
|
||||||
|
id="test",
|
||||||
|
name="Test",
|
||||||
|
endpoint_url="http://localhost:9000",
|
||||||
|
access_key="access",
|
||||||
|
secret_key="secret",
|
||||||
|
region="eu-west-1",
|
||||||
|
)
|
||||||
|
_create_s3_client(conn)
|
||||||
|
|
||||||
|
mock_boto_client.assert_called_once()
|
||||||
|
call_kwargs = mock_boto_client.call_args[1]
|
||||||
|
assert call_kwargs["endpoint_url"] == "http://localhost:9000"
|
||||||
|
assert call_kwargs["aws_access_key_id"] == "access"
|
||||||
|
assert call_kwargs["aws_secret_access_key"] == "secret"
|
||||||
|
assert call_kwargs["region_name"] == "eu-west-1"
|
||||||
|
|
||||||
|
@patch("app.replication.boto3.client")
|
||||||
|
def test_health_check_mode_minimal_retries(self, mock_boto_client):
|
||||||
|
conn = RemoteConnection(
|
||||||
|
id="test",
|
||||||
|
name="Test",
|
||||||
|
endpoint_url="http://localhost:9000",
|
||||||
|
access_key="access",
|
||||||
|
secret_key="secret",
|
||||||
|
)
|
||||||
|
_create_s3_client(conn, health_check=True)
|
||||||
|
|
||||||
|
call_kwargs = mock_boto_client.call_args[1]
|
||||||
|
config = call_kwargs["config"]
|
||||||
|
assert config.retries["max_attempts"] == 1
|
||||||
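The `TestReplicationStats` tests above pin down a dict round-trip with per-field defaults (`from_dict({})` yields zeros and `None`s). A minimal sketch of a dataclass that would satisfy those assertions — the field names come from the tests, but the `asdict`/filtering implementation is an assumption, not necessarily what `app.replication` actually does:

```python
from dataclasses import dataclass, asdict
from typing import Optional


@dataclass
class ReplicationStats:
    objects_synced: int = 0
    objects_pending: int = 0
    objects_orphaned: int = 0
    bytes_synced: int = 0
    last_sync_at: Optional[float] = None
    last_sync_key: Optional[str] = None

    def to_dict(self) -> dict:
        return asdict(self)

    @classmethod
    def from_dict(cls, data: dict) -> "ReplicationStats":
        # Unknown keys are dropped; missing keys fall back to the defaults above.
        known = {k: data[k] for k in cls.__dataclass_fields__ if k in data}
        return cls(**known)
```

With this shape, `ReplicationStats.from_dict(stats.to_dict())` reconstructs an equal instance, which is the property the paired `test_to_dict`/`test_from_dict` cases exercise.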
191
tests/test_security.py
Normal file
@@ -0,0 +1,191 @@
import hashlib
import hmac
import pytest
from datetime import datetime, timedelta, timezone
from urllib.parse import quote


def _sign(key, msg):
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()


def _get_signature_key(key, date_stamp, region_name, service_name):
    k_date = _sign(("AWS4" + key).encode("utf-8"), date_stamp)
    k_region = _sign(k_date, region_name)
    k_service = _sign(k_region, service_name)
    k_signing = _sign(k_service, "aws4_request")
    return k_signing


def create_signed_headers(
    method,
    path,
    headers=None,
    body=None,
    access_key="test",
    secret_key="secret",
    region="us-east-1",
    service="s3",
    timestamp=None,
):
    if headers is None:
        headers = {}

    if timestamp is None:
        now = datetime.now(timezone.utc)
    else:
        now = timestamp

    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    date_stamp = now.strftime("%Y%m%d")

    headers["X-Amz-Date"] = amz_date
    headers["Host"] = "testserver"

    canonical_uri = quote(path, safe="/-_.~")
    canonical_query_string = ""

    canonical_headers = ""
    signed_headers_list = []
    for k, v in sorted(headers.items(), key=lambda x: x[0].lower()):
        canonical_headers += f"{k.lower()}:{v.strip()}\n"
        signed_headers_list.append(k.lower())

    signed_headers = ";".join(signed_headers_list)

    payload_hash = hashlib.sha256(body or b"").hexdigest()
    headers["X-Amz-Content-Sha256"] = payload_hash

    canonical_request = f"{method}\n{canonical_uri}\n{canonical_query_string}\n{canonical_headers}\n{signed_headers}\n{payload_hash}"

    credential_scope = f"{date_stamp}/{region}/{service}/aws4_request"
    string_to_sign = f"AWS4-HMAC-SHA256\n{amz_date}\n{credential_scope}\n{hashlib.sha256(canonical_request.encode('utf-8')).hexdigest()}"

    signing_key = _get_signature_key(secret_key, date_stamp, region, service)
    signature = hmac.new(signing_key, string_to_sign.encode("utf-8"), hashlib.sha256).hexdigest()

    headers["Authorization"] = (
        f"AWS4-HMAC-SHA256 Credential={access_key}/{credential_scope}, "
        f"SignedHeaders={signed_headers}, Signature={signature}"
    )
    return headers


def test_sigv4_old_date(client):
    # Test with a date 20 minutes in the past
    old_time = datetime.now(timezone.utc) - timedelta(minutes=20)
    headers = create_signed_headers("GET", "/", timestamp=old_time)

    response = client.get("/", headers=headers)
    assert response.status_code == 403
    assert b"Request timestamp too old" in response.data


def test_sigv4_future_date(client):
    # Test with a date 20 minutes in the future
    future_time = datetime.now(timezone.utc) + timedelta(minutes=20)
    headers = create_signed_headers("GET", "/", timestamp=future_time)

    response = client.get("/", headers=headers)
    assert response.status_code == 403
    assert b"Request timestamp too old" in response.data  # The error message is the same

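The `create_signed_headers` helper above implements the SigV4 key-derivation chain: the secret key is folded through four HMAC-SHA256 steps (date, region, service, the literal `"aws4_request"`) before signing the string-to-sign. A self-contained sketch of just that chain, with standalone names rather than the test module's helpers:

```python
import hashlib
import hmac


def _hmac_step(key: bytes, msg: str) -> bytes:
    # One HMAC-SHA256 link in the SigV4 key-derivation chain.
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()


def derive_signing_key(secret_key: str, date_stamp: str, region: str, service: str) -> bytes:
    # kSigning = HMAC(HMAC(HMAC(HMAC("AWS4" + secret, date), region), service), "aws4_request")
    k_date = _hmac_step(("AWS4" + secret_key).encode("utf-8"), date_stamp)
    k_region = _hmac_step(k_date, region)
    k_service = _hmac_step(k_region, service)
    return _hmac_step(k_service, "aws4_request")


signing_key = derive_signing_key("secret", "20240101", "us-east-1", "s3")
# The final request signature is HMAC-SHA256 of the string-to-sign under this key,
# rendered as a 64-character hex digest.
signature = hmac.new(signing_key, b"AWS4-HMAC-SHA256\n...", hashlib.sha256).hexdigest()
```

Because the key depends on the date stamp, a signature is only valid for requests whose `X-Amz-Date` falls inside the server's clock-skew window — which is exactly what the old-date/future-date tests above probe.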
def test_path_traversal_in_key(client, signer):
    headers = signer("PUT", "/test-bucket")
    client.put("/test-bucket", headers=headers)

    # Attempt an upload whose key contains "..". The test client may normalize
    # "/test-bucket/../secret.txt" before it reaches the app, so a rejection
    # here does not prove the storage-layer check fired.
    headers = signer("PUT", "/test-bucket/../secret.txt", body=b"attack")
    response = client.put("/test-bucket/../secret.txt", headers=headers, data=b"attack")

    # A nested key like "folder/../file.txt" is more likely to survive routing
    # (the route is /<bucket_name>/<path:object_key>), but URL normalization can
    # still collapse it to "file.txt" before the app sees it.
    headers = signer("PUT", "/test-bucket/folder/../file.txt", body=b"attack")
    response = client.put("/test-bucket/folder/../file.txt", headers=headers, data=b"attack")

    # Because the outcome depends on Flask/Werkzeug URL handling rather than on
    # our code, the storage-layer check is verified directly in
    # test_storage_path_traversal below.
    pass

def test_storage_path_traversal(app):
    storage = app.extensions["object_storage"]
    from app.storage import StorageError, ObjectStorage
    from app.encrypted_storage import EncryptedObjectStorage

    # Get the underlying ObjectStorage if wrapped
    if isinstance(storage, EncryptedObjectStorage):
        storage = storage.storage

    with pytest.raises(StorageError, match="Object key contains parent directory references"):
        storage._sanitize_object_key("folder/../file.txt")

    with pytest.raises(StorageError, match="Object key contains parent directory references"):
        storage._sanitize_object_key("..")


def test_head_bucket(client, signer):
    headers = signer("PUT", "/head-test")
    client.put("/head-test", headers=headers)

    headers = signer("HEAD", "/head-test")
    response = client.head("/head-test", headers=headers)
    assert response.status_code == 200

    headers = signer("HEAD", "/non-existent")
    response = client.head("/non-existent", headers=headers)
    assert response.status_code == 404


def test_head_object(client, signer):
    headers = signer("PUT", "/head-obj-test")
    client.put("/head-obj-test", headers=headers)

    headers = signer("PUT", "/head-obj-test/obj", body=b"content")
    client.put("/head-obj-test/obj", headers=headers, data=b"content")

    headers = signer("HEAD", "/head-obj-test/obj")
    response = client.head("/head-obj-test/obj", headers=headers)
    assert response.status_code == 200
    assert response.headers["ETag"]
    assert response.headers["Content-Length"] == "7"

    headers = signer("HEAD", "/head-obj-test/missing")
    response = client.head("/head-obj-test/missing", headers=headers)
    assert response.status_code == 404


def test_list_parts(client, signer):
    # Create bucket
    headers = signer("PUT", "/multipart-test")
    client.put("/multipart-test", headers=headers)

    # Initiate multipart upload
    headers = signer("POST", "/multipart-test/obj?uploads")
    response = client.post("/multipart-test/obj?uploads", headers=headers)
    assert response.status_code == 200
    from xml.etree.ElementTree import fromstring
    upload_id = fromstring(response.data).find("UploadId").text

    # Upload part 1
    headers = signer("PUT", f"/multipart-test/obj?partNumber=1&uploadId={upload_id}", body=b"part1")
    client.put(f"/multipart-test/obj?partNumber=1&uploadId={upload_id}", headers=headers, data=b"part1")

    # Upload part 2
    headers = signer("PUT", f"/multipart-test/obj?partNumber=2&uploadId={upload_id}", body=b"part2")
    client.put(f"/multipart-test/obj?partNumber=2&uploadId={upload_id}", headers=headers, data=b"part2")

    # List parts
    headers = signer("GET", f"/multipart-test/obj?uploadId={upload_id}")
    response = client.get(f"/multipart-test/obj?uploadId={upload_id}", headers=headers)
    assert response.status_code == 200

    root = fromstring(response.data)
    assert root.tag == "ListPartsResult"
    parts = root.findall("Part")
    assert len(parts) == 2
    assert parts[0].find("PartNumber").text == "1"
    assert parts[1].find("PartNumber").text == "2"
@@ -99,11 +99,11 @@ def test_delete_object_retries_when_locked(tmp_path, monkeypatch):
     original_unlink = Path.unlink
     attempts = {"count": 0}

-    def flaky_unlink(self):
+    def flaky_unlink(self, missing_ok=False):
         if self == target_path and attempts["count"] < 1:
             attempts["count"] += 1
             raise PermissionError("locked")
-        return original_unlink(self)
+        return original_unlink(self, missing_ok=missing_ok)

     monkeypatch.setattr(Path, "unlink", flaky_unlink)

@@ -220,7 +220,7 @@ def test_bucket_config_filename_allowed(tmp_path):
     storage.create_bucket("demo")
     storage.put_object("demo", ".bucket.json", io.BytesIO(b"{}"))

-    objects = storage.list_objects("demo")
+    objects = storage.list_objects_all("demo")
     assert any(meta.key == ".bucket.json" for meta in objects)

@@ -62,7 +62,7 @@ def test_bulk_delete_json_route(tmp_path: Path):
     assert set(payload["deleted"]) == {"first.txt", "missing.txt"}
     assert payload["errors"] == []

-    listing = storage.list_objects("demo")
+    listing = storage.list_objects_all("demo")
     assert {meta.key for meta in listing} == {"second.txt"}

@@ -92,5 +92,5 @@ def test_bulk_delete_validation(tmp_path: Path):
     assert limit_response.status_code == 400
     assert limit_response.get_json()["status"] == "error"

-    still_there = storage.list_objects("demo")
+    still_there = storage.list_objects_all("demo")
     assert {meta.key for meta in still_there} == {"keep.txt"}
248
tests/test_ui_encryption.py
Normal file
@@ -0,0 +1,248 @@
"""Tests for UI-based encryption configuration."""
import json
from pathlib import Path

import pytest

from app import create_app


def get_csrf_token(response):
    """Extract CSRF token from response HTML."""
    html = response.data.decode("utf-8")
    import re
    match = re.search(r'name="csrf_token"\s+value="([^"]+)"', html)
    return match.group(1) if match else None


def _make_encryption_app(tmp_path: Path, *, kms_enabled: bool = True):
    """Create an app with encryption enabled."""
    storage_root = tmp_path / "data"
    iam_config = tmp_path / "iam.json"
    bucket_policies = tmp_path / "bucket_policies.json"
    iam_payload = {
        "users": [
            {
                "access_key": "test",
                "secret_key": "secret",
                "display_name": "Test User",
                "policies": [{"bucket": "*", "actions": ["list", "read", "write", "delete", "policy"]}],
            },
            {
                "access_key": "readonly",
                "secret_key": "secret",
                "display_name": "Read Only User",
                "policies": [{"bucket": "*", "actions": ["list", "read"]}],
            },
        ]
    }
    iam_config.write_text(json.dumps(iam_payload))

    config = {
        "TESTING": True,
        "STORAGE_ROOT": storage_root,
        "IAM_CONFIG": iam_config,
        "BUCKET_POLICY_PATH": bucket_policies,
        "API_BASE_URL": "http://testserver",
        "SECRET_KEY": "testing",
        "ENCRYPTION_ENABLED": True,
    }

    if kms_enabled:
        config["KMS_ENABLED"] = True
        config["KMS_KEYS_PATH"] = str(tmp_path / "kms_keys.json")
        config["ENCRYPTION_MASTER_KEY_PATH"] = str(tmp_path / "master.key")

    app = create_app(config)
    storage = app.extensions["object_storage"]
    storage.create_bucket("test-bucket")
    return app


class TestUIBucketEncryption:
    """Test bucket encryption configuration via UI."""

    def test_bucket_detail_shows_encryption_card(self, tmp_path):
        """Encryption card should be visible on bucket detail page."""
        app = _make_encryption_app(tmp_path)
        client = app.test_client()

        client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)

        response = client.get("/ui/buckets/test-bucket?tab=properties")
        assert response.status_code == 200

        html = response.data.decode("utf-8")
        assert "Default Encryption" in html
        assert "Encryption Algorithm" in html or "Default encryption disabled" in html

    def test_enable_aes256_encryption(self, tmp_path):
        """Should be able to enable AES-256 encryption."""
        app = _make_encryption_app(tmp_path)
        client = app.test_client()

        client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)

        response = client.get("/ui/buckets/test-bucket?tab=properties")
        csrf_token = get_csrf_token(response)

        response = client.post(
            "/ui/buckets/test-bucket/encryption",
            data={
                "csrf_token": csrf_token,
                "action": "enable",
                "algorithm": "AES256",
            },
            follow_redirects=True,
        )

        assert response.status_code == 200
        html = response.data.decode("utf-8")
        assert "AES-256" in html or "encryption enabled" in html.lower()

    def test_enable_kms_encryption(self, tmp_path):
        """Should be able to enable KMS encryption."""
        app = _make_encryption_app(tmp_path, kms_enabled=True)
        client = app.test_client()

        with app.app_context():
            kms = app.extensions.get("kms")
            if kms:
                key = kms.create_key("test-key")
                key_id = key.key_id
            else:
                pytest.skip("KMS not available")

        client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)

        response = client.get("/ui/buckets/test-bucket?tab=properties")
        csrf_token = get_csrf_token(response)

        response = client.post(
            "/ui/buckets/test-bucket/encryption",
            data={
                "csrf_token": csrf_token,
                "action": "enable",
                "algorithm": "aws:kms",
                "kms_key_id": key_id,
            },
            follow_redirects=True,
        )

        assert response.status_code == 200
        html = response.data.decode("utf-8")
        assert "KMS" in html or "encryption enabled" in html.lower()

    def test_disable_encryption(self, tmp_path):
        """Should be able to disable encryption."""
        app = _make_encryption_app(tmp_path)
        client = app.test_client()

        client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)

        response = client.get("/ui/buckets/test-bucket?tab=properties")
        csrf_token = get_csrf_token(response)

        client.post(
            "/ui/buckets/test-bucket/encryption",
            data={
                "csrf_token": csrf_token,
                "action": "enable",
                "algorithm": "AES256",
            },
        )

        response = client.get("/ui/buckets/test-bucket?tab=properties")
        csrf_token = get_csrf_token(response)

        response = client.post(
            "/ui/buckets/test-bucket/encryption",
            data={
                "csrf_token": csrf_token,
                "action": "disable",
            },
            follow_redirects=True,
        )

        assert response.status_code == 200
        html = response.data.decode("utf-8")
        assert "disabled" in html.lower() or "Default encryption disabled" in html

    def test_invalid_algorithm_rejected(self, tmp_path):
        """Invalid encryption algorithm should be rejected."""
        app = _make_encryption_app(tmp_path)
        client = app.test_client()

        client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)

        response = client.get("/ui/buckets/test-bucket?tab=properties")
        csrf_token = get_csrf_token(response)

        response = client.post(
            "/ui/buckets/test-bucket/encryption",
            data={
                "csrf_token": csrf_token,
                "action": "enable",
                "algorithm": "INVALID",
            },
            follow_redirects=True,
        )

        assert response.status_code == 200
        html = response.data.decode("utf-8")
        assert "Invalid" in html or "danger" in html

    def test_encryption_persists_in_config(self, tmp_path):
        """Encryption config should persist in bucket config."""
        app = _make_encryption_app(tmp_path)
        client = app.test_client()

        client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)

        response = client.get("/ui/buckets/test-bucket?tab=properties")
        csrf_token = get_csrf_token(response)

        client.post(
            "/ui/buckets/test-bucket/encryption",
            data={
                "csrf_token": csrf_token,
                "action": "enable",
                "algorithm": "AES256",
            },
        )

        with app.app_context():
            storage = app.extensions["object_storage"]
            config = storage.get_bucket_encryption("test-bucket")

        assert "Rules" in config
        assert len(config["Rules"]) == 1
        assert config["Rules"][0]["ApplyServerSideEncryptionByDefault"]["SSEAlgorithm"] == "AES256"


class TestUIEncryptionWithoutPermission:
    """Test encryption UI when user lacks permissions."""

    def test_readonly_user_cannot_change_encryption(self, tmp_path):
        """Read-only user should not be able to change encryption settings."""
        app = _make_encryption_app(tmp_path)
        client = app.test_client()

        client.post("/ui/login", data={"access_key": "readonly", "secret_key": "secret"}, follow_redirects=True)

        response = client.get("/ui/buckets/test-bucket?tab=properties")
        csrf_token = get_csrf_token(response)

        response = client.post(
            "/ui/buckets/test-bucket/encryption",
            data={
                "csrf_token": csrf_token,
                "action": "enable",
                "algorithm": "AES256",
            },
            follow_redirects=True,
        )

        assert response.status_code == 200
        html = response.data.decode("utf-8")
        assert "Access denied" in html or "permission" in html.lower() or "not authorized" in html.lower()
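`test_encryption_persists_in_config` asserts an S3-style bucket encryption configuration. Spelled out as a literal, the structure being checked (this mirrors the shape of S3's `GetBucketEncryption` response; the assertion keys come straight from the test above):

```python
# The persisted bucket encryption configuration the test expects.
encryption_config = {
    "Rules": [
        {
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "AES256",  # or "aws:kms" with an accompanying key id
            }
        }
    ]
}

# The test's assertions, restated against the literal:
rule = encryption_config["Rules"][0]
assert rule["ApplyServerSideEncryptionByDefault"]["SSEAlgorithm"] == "AES256"
```

Keeping the on-disk shape identical to the S3 API response means the same structure can be returned verbatim from a `GetBucketEncryption`-style endpoint without translation.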
188
tests/test_ui_pagination.py
Normal file
@@ -0,0 +1,188 @@
"""Tests for UI pagination of bucket objects."""

import json
from io import BytesIO
from pathlib import Path

import pytest

from app import create_app


def _make_app(tmp_path: Path):
    """Create an app for testing."""
    storage_root = tmp_path / "data"
    iam_config = tmp_path / "iam.json"
    bucket_policies = tmp_path / "bucket_policies.json"
    iam_payload = {
        "users": [
            {
                "access_key": "test",
                "secret_key": "secret",
                "display_name": "Test User",
                "policies": [{"bucket": "*", "actions": ["list", "read", "write", "delete", "policy"]}],
            },
        ]
    }
    iam_config.write_text(json.dumps(iam_payload))

    flask_app = create_app(
        {
            "TESTING": True,
            "WTF_CSRF_ENABLED": False,
            "STORAGE_ROOT": storage_root,
            "IAM_CONFIG": iam_config,
            "BUCKET_POLICY_PATH": bucket_policies,
        }
    )
    return flask_app


class TestPaginatedObjectListing:
    """Test paginated object listing API."""

    def test_objects_api_returns_paginated_results(self, tmp_path):
        """Objects API should return paginated results."""
        app = _make_app(tmp_path)
        storage = app.extensions["object_storage"]
        storage.create_bucket("test-bucket")

        # Create 10 test objects
        for i in range(10):
            storage.put_object("test-bucket", f"file{i:02d}.txt", BytesIO(b"content"))

        with app.test_client() as client:
            # Login first
            client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)

            # Request first page of 3 objects
            resp = client.get("/ui/buckets/test-bucket/objects?max_keys=3")
            assert resp.status_code == 200

            data = resp.get_json()
            assert len(data["objects"]) == 3
            assert data["is_truncated"] is True
            assert data["next_continuation_token"] is not None
            assert data["total_count"] == 10

    def test_objects_api_pagination_continuation(self, tmp_path):
        """Objects API should support continuation tokens."""
        app = _make_app(tmp_path)
        storage = app.extensions["object_storage"]
        storage.create_bucket("test-bucket")

        # Create 5 test objects
        for i in range(5):
            storage.put_object("test-bucket", f"file{i:02d}.txt", BytesIO(b"content"))

        with app.test_client() as client:
            client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)

            # Get first page
            resp = client.get("/ui/buckets/test-bucket/objects?max_keys=2")
            assert resp.status_code == 200
            data = resp.get_json()

            first_page_keys = [obj["key"] for obj in data["objects"]]
            assert len(first_page_keys) == 2
            assert data["is_truncated"] is True

            # Get second page
            token = data["next_continuation_token"]
            resp = client.get(f"/ui/buckets/test-bucket/objects?max_keys=2&continuation_token={token}")
            assert resp.status_code == 200
            data = resp.get_json()

            second_page_keys = [obj["key"] for obj in data["objects"]]
            assert len(second_page_keys) == 2

            # No overlap between pages
            assert set(first_page_keys).isdisjoint(set(second_page_keys))

    def test_objects_api_prefix_filter(self, tmp_path):
        """Objects API should support prefix filtering."""
        app = _make_app(tmp_path)
        storage = app.extensions["object_storage"]
        storage.create_bucket("test-bucket")

        # Create objects with different prefixes
        storage.put_object("test-bucket", "logs/access.log", BytesIO(b"log"))
        storage.put_object("test-bucket", "logs/error.log", BytesIO(b"log"))
        storage.put_object("test-bucket", "data/file.txt", BytesIO(b"data"))

        with app.test_client() as client:
            client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)

            # Filter by prefix
            resp = client.get("/ui/buckets/test-bucket/objects?prefix=logs/")
            assert resp.status_code == 200
            data = resp.get_json()

            keys = [obj["key"] for obj in data["objects"]]
            assert all(k.startswith("logs/") for k in keys)
            assert len(keys) == 2

    def test_objects_api_requires_authentication(self, tmp_path):
        """Objects API should require login."""
        app = _make_app(tmp_path)
        storage = app.extensions["object_storage"]
        storage.create_bucket("test-bucket")

        with app.test_client() as client:
            # Don't login
            resp = client.get("/ui/buckets/test-bucket/objects")
            # Should redirect to login
            assert resp.status_code == 302
            assert "/ui/login" in resp.headers.get("Location", "")

    def test_objects_api_returns_object_metadata(self, tmp_path):
        """Objects API should return complete object metadata."""
        app = _make_app(tmp_path)
        storage = app.extensions["object_storage"]
        storage.create_bucket("test-bucket")
        storage.put_object("test-bucket", "test.txt", BytesIO(b"test content"))

        with app.test_client() as client:
            client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)

            resp = client.get("/ui/buckets/test-bucket/objects")
            assert resp.status_code == 200
            data = resp.get_json()

            assert len(data["objects"]) == 1
            obj = data["objects"][0]

            # Check all expected fields
            assert obj["key"] == "test.txt"
            assert obj["size"] == 12  # len("test content")
            assert "last_modified" in obj
            assert "last_modified_display" in obj
            assert "etag" in obj

            # URLs are now returned as templates (not per-object) for performance
            assert "url_templates" in data
            templates = data["url_templates"]
            assert "preview" in templates
            assert "download" in templates
            assert "delete" in templates
            assert "KEY_PLACEHOLDER" in templates["preview"]

    def test_bucket_detail_page_loads_without_objects(self, tmp_path):
        """Bucket detail page should load even with many objects."""
        app = _make_app(tmp_path)
        storage = app.extensions["object_storage"]
        storage.create_bucket("test-bucket")

        # Create many objects
        for i in range(100):
            storage.put_object("test-bucket", f"file{i:03d}.txt", BytesIO(b"x"))

        with app.test_client() as client:
            client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)

            # The page should load quickly (objects loaded via JS)
            resp = client.get("/ui/buckets/test-bucket")
            assert resp.status_code == 200

            html = resp.data.decode("utf-8")
            # Should have the JavaScript loading infrastructure
            assert "loadObjects" in html or "objectsApiUrl" in html
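The listing behaviour these tests assume (sorted keys, prefix filter, `max_keys` cap, and an opaque continuation token, much like S3's ListObjectsV2) can be sketched as a plain function. This is a hypothetical illustration, not the endpoint from `app.py`; the function name `paginate_objects` and the token scheme (last key of the page) are assumptions.

```python
def paginate_objects(keys, prefix="", max_keys=1000, continuation_token=None):
    """Return one page of object keys plus the pagination metadata
    asserted in the tests above (sketch; the real endpoint may differ)."""
    # Filter by prefix and sort, as a bucket listing would.
    matching = sorted(k for k in keys if k.startswith(prefix))
    total_count = len(matching)
    # Resume strictly after the token key, if one was supplied.
    if continuation_token is not None:
        matching = [k for k in matching if k > continuation_token]
    page = matching[:max_keys]
    is_truncated = len(matching) > max_keys
    return {
        "objects": [{"key": k} for k in page],
        "is_truncated": is_truncated,
        "next_continuation_token": page[-1] if is_truncated and page else None,
        "total_count": total_count,
    }
```

Using the last returned key as the token makes pagination stateless on the server: each request re-filters with `k > token`, so pages never overlap even if objects are added between requests.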
@@ -70,8 +70,12 @@ def test_ui_bucket_policy_enforcement_toggle(tmp_path: Path, enforce: bool):
         assert b"Access denied by bucket policy" in response.data
     else:
         assert response.status_code == 200
-        assert b"vid.mp4" in response.data
         assert b"Access denied by bucket policy" not in response.data
+        # Objects are now loaded via async API - check the objects endpoint
+        objects_response = client.get("/ui/buckets/testbucket/objects")
+        assert objects_response.status_code == 200
+        data = objects_response.get_json()
+        assert any(obj["key"] == "vid.mp4" for obj in data["objects"])


 def test_ui_bucket_policy_disabled_by_default(tmp_path: Path):
@@ -109,5 +113,9 @@ def test_ui_bucket_policy_disabled_by_default(tmp_path: Path):
     client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)
     response = client.get("/ui/buckets/testbucket", follow_redirects=True)
     assert response.status_code == 200
-    assert b"vid.mp4" in response.data
     assert b"Access denied by bucket policy" not in response.data
+    # Objects are now loaded via async API - check the objects endpoint
+    objects_response = client.get("/ui/buckets/testbucket/objects")
+    assert objects_response.status_code == 200
+    data = objects_response.get_json()
+    assert any(obj["key"] == "vid.mp4" for obj in data["objects"])
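The `url_templates` / `KEY_PLACEHOLDER` scheme asserted in the metadata test exists so the listing payload carries one URL template per action instead of three URLs per object, which keeps large pages small. A minimal sketch of the client-side substitution, assuming the template paths and the helper name `url_for_object` (both hypothetical; the real values come from the Flask app):

```python
from urllib.parse import quote

# Hypothetical templates, one per action for the whole listing.
url_templates = {
    "preview": "/ui/buckets/test-bucket/objects/KEY_PLACEHOLDER/preview",
    "download": "/ui/buckets/test-bucket/objects/KEY_PLACEHOLDER/download",
    "delete": "/ui/buckets/test-bucket/objects/KEY_PLACEHOLDER/delete",
}


def url_for_object(template, key):
    """Substitute a URL-encoded object key into an action template."""
    # safe="" also percent-encodes "/" so keys like "logs/a.txt"
    # stay a single path segment.
    return template.replace("KEY_PLACEHOLDER", quote(key, safe=""))
```

In the UI this substitution would run in JavaScript as each row is rendered; the Python version above just shows the intended contract.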