Compare commits
80 Commits: 563bb8fa6a...main
| SHA1 |
|---|
| bb6590fc5e |
| 4de936cea9 |
| adb9017580 |
| 4adfcc4131 |
| ebc315c1cc |
| 5ab62a00ff |
| 9c3518de63 |
| a52657e684 |
| 53297abe1e |
| a3b9db544c |
| f5d2e1c488 |
| f04c6a9cdc |
| 7a494abb96 |
| 956d17a649 |
| 5522f9ac04 |
| 3742f0228e |
| ba694cb717 |
| 433d291b4b |
| 899db3421b |
| e3509e997f |
| 1c30200db0 |
| 7ff422d4dc |
| 546d51af9a |
| 0d1fe05fd0 |
| c5d4b2f1cd |
| caf01d6ada |
| a5d19e2982 |
| 692e7e3a6e |
| 78dba93ee0 |
| 93a5aa6618 |
| 9ab750650c |
| 609e9db2f7 |
| 94a55cf2b7 |
| b9cfc45aa2 |
| 2d60e36fbf |
| c78f7fa6b0 |
| b3dce8d13e |
| e792b86485 |
| cdb86aeea7 |
| cdbc156b5b |
| 1df8ff9d25 |
| 05f1b00473 |
| 5ebc97300e |
| d2f9c3bded |
| 9f347f2caa |
| 4ab58e59c2 |
| 32232211a1 |
| bb366cb4cd |
| 1cacb80dd6 |
| e89bbb62dc |
| c8eb3de629 |
| a2745ff2ee |
| 9165e365e6 |
| 01e26754e8 |
| b592fa9fdb |
| cd9734b398 |
| 90893cac27 |
| 6e659902bd |
| 39a707ecbc |
| 4199f8e6c7 |
| adc6770273 |
| f5451c162b |
| aab9ef696a |
| be48f59452 |
| 86c04f85f6 |
| 28cb656d94 |
| 992d9eccd9 |
| 40f3192c5c |
| 2498b950f6 |
| 97435f15e5 |
| 3c44152fc6 |
| 97860669ec |
| 4a5dd76286 |
| d2dc293722 |
| 397515edce |
| 980fced7e4 |
| bae5009ec4 |
| 233780617f |
| fd8fb21517 |
| c6cbe822e1 |
```diff
@@ -1,5 +1,5 @@
 # syntax=docker/dockerfile:1.7
-FROM python:3.11-slim
+FROM python:3.12.12-slim
 
 ENV PYTHONDONTWRITEBYTECODE=1 \
     PYTHONUNBUFFERED=1
@@ -32,6 +32,6 @@ ENV APP_HOST=0.0.0.0 \
     FLASK_DEBUG=0
 
 HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
-    CMD python -c "import requests; requests.get('http://localhost:5000/healthz', timeout=2)"
+    CMD python -c "import requests; requests.get('http://localhost:5000/myfsio/health', timeout=2)"
 
 CMD ["./docker-entrypoint.sh"]
```
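The updated HEALTHCHECK shells out to `python -c` with `requests` to probe `/myfsio/health`; Docker marks the container unhealthy when that command exits non-zero. The same semantics can be sketched with the standard library alone (the endpoint path comes from the diff; the function name and response handling here are assumptions, not the image's actual code):

```python
import urllib.request
import urllib.error


def probe(url: str, timeout: float = 2.0) -> int:
    """Exit-code-style health probe: 0 if the endpoint answers 2xx, else 1.

    Mirrors the requests-based one-liner in the HEALTHCHECK using only the
    standard library. Docker treats a non-zero exit status from the
    HEALTHCHECK CMD as "unhealthy".
    """
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 0 if 200 <= resp.status < 300 else 1
    except (urllib.error.URLError, OSError):
        # Connection refused, DNS failure, or timeout all count as unhealthy.
        return 1


# Inside the container, probe("http://localhost:5000/myfsio/health")
# would return 0 once the app is serving; an unreachable endpoint maps to 1.
```

Note that the image's one-liner depends on `requests` being installed; a stdlib probe like this would avoid that dependency in the healthcheck.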
LICENSE (Normal file, 661 lines)
@@ -0,0 +1,661 @@
```
                    GNU AFFERO GENERAL PUBLIC LICENSE
                       Version 3, 19 November 2007

 Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
 Everyone is permitted to copy and distribute verbatim copies
 of this license document, but changing it is not allowed.

                            Preamble

  The GNU Affero General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure
cooperation with the community in the case of network server software.

  The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
our General Public Licenses are intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.

  When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.

  Developers that use our General Public Licenses protect your rights
with two steps: (1) assert copyright on the software, and (2) offer
you this License which gives you legal permission to copy, distribute
and/or modify the software.

  A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they
receive widespread use, become available for other developers to
incorporate. Many developers of free software are heartened and
encouraged by the resulting cooperation. However, in the case of
software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and
letting the public access it on a server without ever releasing its
source code to the public.

  The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community. It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server. Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.

  An older license, called the Affero General Public License and
published by Affero, was designed to accomplish similar goals. This is
a different license, not a version of the Affero GPL, but Affero has
released a new version of the Affero GPL which permits relicensing under
this license.

  The precise terms and conditions for copying, distribution and
modification follow.

                       TERMS AND CONDITIONS

  0. Definitions.

  "This License" refers to version 3 of the GNU Affero General Public License.

  "Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.

  "The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.

  To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.

  A "covered work" means either the unmodified Program or a work based
on the Program.

  To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.

  To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.

  An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.

  1. Source Code.

  The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.

  A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.

  The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.

  The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.

  The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.

  The Corresponding Source for a work in source code form is that
same work.

  2. Basic Permissions.

  All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.

  You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.

  Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.

  3. Protecting Users' Legal Rights From Anti-Circumvention Law.

  No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.

  When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.

  4. Conveying Verbatim Copies.

  You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.

  You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.

  5. Conveying Modified Source Versions.

  You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:

    a) The work must carry prominent notices stating that you modified
    it, and giving a relevant date.

    b) The work must carry prominent notices stating that it is
    released under this License and any conditions added under section
    7. This requirement modifies the requirement in section 4 to
    "keep intact all notices".

    c) You must license the entire work, as a whole, under this
    License to anyone who comes into possession of a copy. This
    License will therefore apply, along with any applicable section 7
    additional terms, to the whole of the work, and all its parts,
    regardless of how they are packaged. This License gives no
    permission to license the work in any other way, but it does not
    invalidate such permission if you have separately received it.

    d) If the work has interactive user interfaces, each must display
    Appropriate Legal Notices; however, if the Program has interactive
    interfaces that do not display Appropriate Legal Notices, your
    work need not make them do so.

  A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.

  6. Conveying Non-Source Forms.

  You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:

    a) Convey the object code in, or embodied in, a physical product
    (including a physical distribution medium), accompanied by the
    Corresponding Source fixed on a durable physical medium
    customarily used for software interchange.

    b) Convey the object code in, or embodied in, a physical product
    (including a physical distribution medium), accompanied by a
    written offer, valid for at least three years and valid for as
    long as you offer spare parts or customer support for that product
    model, to give anyone who possesses the object code either (1) a
    copy of the Corresponding Source for all the software in the
    product that is covered by this License, on a durable physical
    medium customarily used for software interchange, for a price no
    more than your reasonable cost of physically performing this
    conveying of source, or (2) access to copy the
    Corresponding Source from a network server at no charge.

    c) Convey individual copies of the object code with a copy of the
    written offer to provide the Corresponding Source. This
    alternative is allowed only occasionally and noncommercially, and
    only if you received the object code with such an offer, in accord
    with subsection 6b.

    d) Convey the object code by offering access from a designated
    place (gratis or for a charge), and offer equivalent access to the
    Corresponding Source in the same way through the same place at no
    further charge. You need not require recipients to copy the
    Corresponding Source along with the object code. If the place to
    copy the object code is a network server, the Corresponding Source
    may be on a different server (operated by you or a third party)
    that supports equivalent copying facilities, provided you maintain
    clear directions next to the object code saying where to find the
    Corresponding Source. Regardless of what server hosts the
    Corresponding Source, you remain obligated to ensure that it is
    available for as long as needed to satisfy these requirements.

    e) Convey the object code using peer-to-peer transmission, provided
    you inform other peers where the object code and Corresponding
    Source of the work are being offered to the general public at no
    charge under subsection 6d.

  A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.

  A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.

  "Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.

  If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).

  The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.

  Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.

  7. Additional Terms.

  "Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.

  When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.

  Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:

    a) Disclaiming warranty or limiting liability differently from the
    terms of sections 15 and 16 of this License; or

    b) Requiring preservation of specified reasonable legal notices or
    author attributions in that material or in the Appropriate Legal
    Notices displayed by works containing it; or

    c) Prohibiting misrepresentation of the origin of that material, or
    requiring that modified versions of such material be marked in
    reasonable ways as different from the original version; or

    d) Limiting the use for publicity purposes of names of licensors or
    authors of the material; or

    e) Declining to grant rights under trademark law for use of some
    trade names, trademarks, or service marks; or

    f) Requiring indemnification of licensors and authors of that
    material by anyone who conveys the material (or modified versions of
    it) with contractual assumptions of liability to the recipient, for
    any liability that these contractual assumptions directly impose on
    those licensors and authors.

  All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.

  If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.

  Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.

  8. Termination.

  You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).

  However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.

  Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.

  Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.

  9. Acceptance Not Required for Having Copies.

  You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.

  10. Automatic Licensing of Downstream Recipients.

  Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.

  An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.

  You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.

  11. Patents.

  A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".

  A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.

  Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.

  In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.

  If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.

  If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.

  A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
```
|
||||||
|
specifically granted under this License. You may not convey a covered
|
||||||
|
work if you are a party to an arrangement with a third party that is
|
||||||
|
in the business of distributing software, under which you make payment
|
||||||
|
to the third party based on the extent of your activity of conveying
|
||||||
|
the work, and under which the third party grants, to any of the
|
||||||
|
parties who would receive the covered work from you, a discriminatory
|
||||||
|
patent license (a) in connection with copies of the covered work
|
||||||
|
conveyed by you (or copies made from those copies), or (b) primarily
|
||||||
|
for and in connection with specific products or compilations that
|
||||||
|
contain the covered work, unless you entered into that arrangement,
|
||||||
|
or that patent license was granted, prior to 28 March 2007.
|
||||||
|
|
||||||
|
Nothing in this License shall be construed as excluding or limiting
|
||||||
|
any implied license or other defenses to infringement that may
|
||||||
|
otherwise be available to you under applicable patent law.
|
||||||
|
|
||||||
|
12. No Surrender of Others' Freedom.
|
||||||
|
|
||||||
|
If conditions are imposed on you (whether by court order, agreement or
|
||||||
|
otherwise) that contradict the conditions of this License, they do not
|
||||||
|
excuse you from the conditions of this License. If you cannot convey a
|
||||||
|
covered work so as to satisfy simultaneously your obligations under this
|
||||||
|
License and any other pertinent obligations, then as a consequence you may
|
||||||
|
not convey it at all. For example, if you agree to terms that obligate you
|
||||||
|
to collect a royalty for further conveying from those to whom you convey
|
||||||
|
the Program, the only way you could satisfy both those terms and this
|
||||||
|
License would be to refrain entirely from conveying the Program.
|
||||||
|
|
||||||
|
13. Remote Network Interaction; Use with the GNU General Public License.
|
||||||
|
|
||||||
|
Notwithstanding any other provision of this License, if you modify the
|
||||||
|
Program, your modified version must prominently offer all users
|
||||||
|
interacting with it remotely through a computer network (if your version
|
||||||
|
supports such interaction) an opportunity to receive the Corresponding
|
||||||
|
Source of your version by providing access to the Corresponding Source
|
||||||
|
from a network server at no charge, through some standard or customary
|
||||||
|
means of facilitating copying of software. This Corresponding Source
|
||||||
|
shall include the Corresponding Source for any work covered by version 3
|
||||||
|
of the GNU General Public License that is incorporated pursuant to the
|
||||||
|
following paragraph.
|
||||||
|
|
||||||
|
Notwithstanding any other provision of this License, you have
|
||||||
|
permission to link or combine any covered work with a work licensed
|
||||||
|
under version 3 of the GNU General Public License into a single
|
||||||
|
combined work, and to convey the resulting work. The terms of this
|
||||||
|
License will continue to apply to the part which is the covered work,
|
||||||
|
but the work with which it is combined will remain governed by version
|
||||||
|
3 of the GNU General Public License.
|
||||||
|
|
||||||
|
14. Revised Versions of this License.
|
||||||
|
|
||||||
|
The Free Software Foundation may publish revised and/or new versions of
|
||||||
|
the GNU Affero General Public License from time to time. Such new versions
|
||||||
|
will be similar in spirit to the present version, but may differ in detail to
|
||||||
|
address new problems or concerns.
|
||||||
|
|
||||||
|
Each version is given a distinguishing version number. If the
|
||||||
|
Program specifies that a certain numbered version of the GNU Affero General
|
||||||
|
Public License "or any later version" applies to it, you have the
|
||||||
|
option of following the terms and conditions either of that numbered
|
||||||
|
version or of any later version published by the Free Software
|
||||||
|
Foundation. If the Program does not specify a version number of the
|
||||||
|
GNU Affero General Public License, you may choose any version ever published
|
||||||
|
by the Free Software Foundation.
|
||||||
|
|
||||||
|
If the Program specifies that a proxy can decide which future
|
||||||
|
versions of the GNU Affero General Public License can be used, that proxy's
|
||||||
|
public statement of acceptance of a version permanently authorizes you
|
||||||
|
to choose that version for the Program.
|
||||||
|
|
||||||
|
Later license versions may give you additional or different
|
||||||
|
permissions. However, no additional obligations are imposed on any
|
||||||
|
author or copyright holder as a result of your choosing to follow a
|
||||||
|
later version.
|
||||||
|
|
||||||
|
15. Disclaimer of Warranty.
|
||||||
|
|
||||||
|
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
|
||||||
|
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
|
||||||
|
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
|
||||||
|
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
|
||||||
|
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
|
||||||
|
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
|
||||||
|
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
|
||||||
|
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
|
||||||
|
|
||||||
|
16. Limitation of Liability.
|
||||||
|
|
||||||
|
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
|
||||||
|
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
|
||||||
|
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
|
||||||
|
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
|
||||||
|
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
|
||||||
|
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
|
||||||
|
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
|
||||||
|
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
|
||||||
|
SUCH DAMAGES.
|
||||||
|
|
||||||
|
17. Interpretation of Sections 15 and 16.
|
||||||
|
|
||||||
|
If the disclaimer of warranty and limitation of liability provided
|
||||||
|
above cannot be given local legal effect according to their terms,
|
||||||
|
reviewing courts shall apply local law that most closely approximates
|
||||||
|
an absolute waiver of all civil liability in connection with the
|
||||||
|
Program, unless a warranty or assumption of liability accompanies a
|
||||||
|
copy of the Program in return for a fee.
|
||||||
|
|
||||||
|
END OF TERMS AND CONDITIONS
|
||||||
|
|
||||||
|
How to Apply These Terms to Your New Programs
|
||||||
|
|
||||||
|
If you develop a new program, and you want it to be of the greatest
|
||||||
|
possible use to the public, the best way to achieve this is to make it
|
||||||
|
free software which everyone can redistribute and change under these terms.
|
||||||
|
|
||||||
|
To do so, attach the following notices to the program. It is safest
|
||||||
|
to attach them to the start of each source file to most effectively
|
||||||
|
state the exclusion of warranty; and each file should have at least
|
||||||
|
the "copyright" line and a pointer to where the full notice is found.
|
||||||
|
|
||||||
|
<one line to give the program's name and a brief idea of what it does.>
|
||||||
|
Copyright (C) <year> <name of author>
|
||||||
|
|
||||||
|
This program is free software: you can redistribute it and/or modify
|
||||||
|
it under the terms of the GNU Affero General Public License as published by
|
||||||
|
the Free Software Foundation, either version 3 of the License, or
|
||||||
|
(at your option) any later version.
|
||||||
|
|
||||||
|
This program is distributed in the hope that it will be useful,
|
||||||
|
but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||||
|
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||||
|
GNU Affero General Public License for more details.
|
||||||
|
|
||||||
|
You should have received a copy of the GNU Affero General Public License
|
||||||
|
along with this program. If not, see <https://www.gnu.org/licenses/>.
|
||||||
|
|
||||||
|
Also add information on how to contact you by electronic and paper mail.
|
||||||
|
|
||||||
|
If your software can interact with users remotely through a computer
|
||||||
|
network, you should also make sure that it provides a way for users to
|
||||||
|
get its source. For example, if your program is a web application, its
|
||||||
|
interface could display a "Source" link that leads users to an archive
|
||||||
|
of the code. There are many ways you could offer source, and different
|
||||||
|
solutions will be better for different programs; see section 13 for the
|
||||||
|
specific requirements.
|
||||||
|
|
||||||
|
You should also get your employer (if you work as a programmer) or school,
|
||||||
|
if any, to sign a "copyright disclaimer" for the program, if necessary.
|
||||||
|
For more information on this, and how to apply and follow the GNU AGPL, see
|
||||||
|
<https://www.gnu.org/licenses/>.
|
||||||
README.md
@@ -1,117 +1,245 @@
-# MyFSIO (Flask S3 + IAM)
+# MyFSIO
 
-MyFSIO is a batteries-included, Flask-based recreation of Amazon S3 and IAM workflows built for local development. The design mirrors the [AWS S3 documentation](https://docs.aws.amazon.com/s3/) wherever practical: bucket naming, Signature Version 4 presigning, Version 2012-10-17 bucket policies, IAM-style users, and familiar REST endpoints.
+A lightweight, S3-compatible object storage system built with Flask. MyFSIO implements core AWS S3 REST API operations with filesystem-backed storage, making it ideal for local development, testing, and self-hosted storage scenarios.
 
-## Why MyFSIO?
+## Features
 
-- **Dual servers:** Run both the API (port 5000) and UI (port 5100) with a single command: `python run.py`.
-- **IAM + access keys:** Users, access keys, key rotation, and bucket-scoped actions (`list/read/write/delete/policy`) now live in `data/.myfsio.sys/config/iam.json` and are editable from the IAM dashboard.
-- **Bucket policies + hot reload:** `data/.myfsio.sys/config/bucket_policies.json` uses AWS' policy grammar (Version `2012-10-17`) with a built-in watcher, so editing the JSON file applies immediately. The UI also ships Public/Private/Custom presets for faster edits.
-- **Presigned URLs everywhere:** Signature Version 4 presigned URLs respect IAM + bucket policies and replace the now-removed "share link" feature for public access scenarios.
-- **Modern UI:** Responsive tables, quick filters, preview sidebar, object-level delete buttons, a presign modal, and an inline JSON policy editor that respects dark mode keep bucket management friendly.
-- **Tests & health:** `/healthz` for smoke checks and `pytest` coverage for IAM, CRUD, presign, and policy flows.
+**Core Storage**
+- S3-compatible REST API with AWS Signature Version 4 authentication
+- Bucket and object CRUD operations
+- Object versioning with version history
+- Multipart uploads for large files
+- Presigned URLs (1 second to 7 days validity)
 
-## Architecture at a Glance
+**Security & Access Control**
+- IAM users with access key management and rotation
+- Bucket policies (AWS Policy Version 2012-10-17)
+- Server-side encryption (SSE-S3 and SSE-KMS)
+- Built-in Key Management Service (KMS)
+- Rate limiting per endpoint
+
+**Advanced Features**
+- Cross-bucket replication to remote S3-compatible endpoints
+- Hot-reload for bucket policies (no restart required)
+- CORS configuration per bucket
+
+**Management UI**
+- Web console for bucket and object management
+- IAM dashboard for user administration
+- Inline JSON policy editor with presets
+- Object browser with folder navigation and bulk operations
+- Dark mode support
+
+## Architecture
 
 ```
-+-----------------+       +----------------+
-|   API Server    |<----->| Object storage |
-|  (port 5000)    |       |  (filesystem)  |
-| - S3 routes     |       +----------------+
-| - Presigned URLs|
-| - Bucket policy |
-+-----------------+
-        ^
-        |
-+-----------------+
-|   UI Server     |
-|  (port 5100)    |
-| - Auth console  |
-| - IAM dashboard |
-| - Bucket editor |
-+-----------------+
+ +------------------+          +------------------+
+ |    API Server    |          |    UI Server     |
+ |   (port 5000)    |          |   (port 5100)    |
+ |                  |          |                  |
+ |  - S3 REST API   |<-------->|  - Web Console   |
+ |  - SigV4 Auth    |          |  - IAM Dashboard |
+ |  - Presign URLs  |          |  - Bucket Editor |
+ +--------+---------+          +------------------+
+          |
+          v
+ +------------------+          +------------------+
+ |  Object Storage  |          | System Metadata  |
+ |   (filesystem)   |          |  (.myfsio.sys/)  |
+ |                  |          |                  |
+ |  data/<bucket>/  |          | - IAM config     |
+ |    <objects>     |          | - Bucket policies|
+ |                  |          | - Encryption keys|
+ +------------------+          +------------------+
 ```
 
-Both apps load the same configuration via `AppConfig` so IAM data and bucket policies stay consistent no matter which process you run.
-Bucket policies are automatically reloaded whenever `bucket_policies.json` changes—no restarts required.
-
-## Getting Started
+## Quick Start
 
 ```bash
+# Clone and setup
+git clone https://gitea.jzwsite.com/kqjy/MyFSIO
+cd s3
 python -m venv .venv
-. .venv/Scripts/activate  # PowerShell: .\.venv\Scripts\Activate.ps1
+
+# Activate virtual environment
+# Windows PowerShell:
+.\.venv\Scripts\Activate.ps1
+# Windows CMD:
+.venv\Scripts\activate.bat
+# Linux/macOS:
+source .venv/bin/activate
+
+# Install dependencies
 pip install -r requirements.txt
 
-# Run both API and UI (default)
+# Start both servers
 python run.py
 
-# Or run individually:
-# python run.py --mode api
-# python run.py --mode ui
+# Or start individually
+python run.py --mode api   # API only (port 5000)
+python run.py --mode ui    # UI only (port 5100)
 ```
 
-Visit `http://127.0.0.1:5100/ui` for the console and `http://127.0.0.1:5000/` for the raw API. Override ports/hosts with the environment variables listed below.
+**Default Credentials:** `localadmin` / `localadmin`
 
-## IAM, Access Keys, and Bucket Policies
+- **Web Console:** http://127.0.0.1:5100/ui
+- **API Endpoint:** http://127.0.0.1:5000
 
-- First run creates `data/.myfsio.sys/config/iam.json` with `localadmin / localadmin` (full control). Sign in via the UI, then use the **IAM** tab to create users, rotate secrets, or edit inline policies without touching JSON by hand.
-- Bucket policies live in `data/.myfsio.sys/config/bucket_policies.json` and follow the AWS `arn:aws:s3:::bucket/key` resource syntax with Version `2012-10-17`. Attach/replace/remove policies from the bucket detail page or edit the JSON by hand—changes hot reload automatically.
-- IAM actions include extended verbs (`iam:list_users`, `iam:create_user`, `iam:update_policy`, etc.) so you can control who is allowed to manage other users and policies.
-
-### Bucket Policy Presets & Hot Reload
-
-- **Presets:** Every bucket detail view includes Public (read-only), Private (detach policy), and Custom presets. Public auto-populates a policy that grants anonymous `s3:ListBucket` + `s3:GetObject` access to the entire bucket.
-- **Custom drafts:** Switching back to Custom restores your last manual edit so you can toggle between presets without losing work.
-- **Hot reload:** The server watches `bucket_policies.json` and reloads statements on-the-fly—ideal for editing policies in your favorite editor while testing via curl or the UI.
-
-## Presigned URLs
-
-Presigned URLs follow the AWS CLI playbook:
-
-- Call `POST /presign/<bucket>/<key>` (or use the "Presign" button in the UI) to request a Signature Version 4 URL valid for 1 second to 7 days.
-- The generated URL honors IAM permissions and bucket-policy decisions at generation time and again when somebody fetches it.
-- Because presigned URLs cover both authenticated and public sharing scenarios, the legacy "share link" feature has been removed.
-
 ## Configuration
 
 | Variable | Default | Description |
-| --- | --- | --- |
-| `STORAGE_ROOT` | `<project>/data` | Filesystem root for bucket directories |
-| `MAX_UPLOAD_SIZE` | `1073741824` | Maximum upload size (bytes) |
-| `UI_PAGE_SIZE` | `100` | `MaxKeys` hint for listings |
-| `SECRET_KEY` | `dev-secret-key` | Flask session secret for the UI |
-| `IAM_CONFIG` | `<project>/data/.myfsio.sys/config/iam.json` | IAM user + policy store |
-| `BUCKET_POLICY_PATH` | `<project>/data/.myfsio.sys/config/bucket_policies.json` | Bucket policy store |
-| `API_BASE_URL` | `http://127.0.0.1:5000` | Used by the UI when calling API endpoints (presign, bucket policy) |
-| `AWS_REGION` | `us-east-1` | Region used in Signature V4 scope |
-| `AWS_SERVICE` | `s3` | Service used in Signature V4 scope |
+|----------|---------|-------------|
+| `STORAGE_ROOT` | `./data` | Filesystem root for bucket storage |
+| `IAM_CONFIG` | `.myfsio.sys/config/iam.json` | IAM user and policy store |
+| `BUCKET_POLICY_PATH` | `.myfsio.sys/config/bucket_policies.json` | Bucket policy store |
+| `API_BASE_URL` | `http://127.0.0.1:5000` | API endpoint for UI calls |
+| `MAX_UPLOAD_SIZE` | `1073741824` | Maximum upload size in bytes (1 GB) |
+| `MULTIPART_MIN_PART_SIZE` | `5242880` | Minimum multipart part size (5 MB) |
+| `UI_PAGE_SIZE` | `100` | Default page size for listings |
+| `SECRET_KEY` | `dev-secret-key` | Flask session secret |
+| `AWS_REGION` | `us-east-1` | Region for SigV4 signing |
+| `AWS_SERVICE` | `s3` | Service name for SigV4 signing |
+| `ENCRYPTION_ENABLED` | `false` | Enable server-side encryption |
+| `KMS_ENABLED` | `false` | Enable Key Management Service |
+| `LOG_LEVEL` | `INFO` | Logging verbosity |
 
-> Buckets now live directly under `data/` while system metadata (versions, IAM, bucket policies, multipart uploads, etc.) lives in `data/.myfsio.sys`.
+## Data Layout
 
-## API Cheatsheet (IAM headers required)
-
 ```
-GET    /                       -> List buckets (XML)
-PUT    /<bucket>               -> Create bucket
-DELETE /<bucket>               -> Delete bucket (must be empty)
-GET    /<bucket>               -> List objects (XML)
-PUT    /<bucket>/<key>         -> Upload object (binary stream)
-GET    /<bucket>/<key>         -> Download object
-DELETE /<bucket>/<key>         -> Delete object
-POST   /presign/<bucket>/<key> -> Generate AWS SigV4 presigned URL (JSON)
-GET    /bucket-policy/<bucket> -> Fetch bucket policy (JSON)
-PUT    /bucket-policy/<bucket> -> Attach/replace bucket policy (JSON)
-DELETE /bucket-policy/<bucket> -> Remove bucket policy
+data/
+├── <bucket>/                      # User buckets with objects
+└── .myfsio.sys/                   # System metadata
+    ├── config/
+    │   ├── iam.json               # IAM users and policies
+    │   ├── bucket_policies.json   # Bucket policies
+    │   ├── replication_rules.json
+    │   └── connections.json       # Remote S3 connections
+    ├── buckets/<bucket>/
+    │   ├── meta/                  # Object metadata (.meta.json)
+    │   ├── versions/              # Archived object versions
+    │   └── .bucket.json           # Bucket config (versioning, CORS)
+    ├── multipart/                 # Active multipart uploads
+    └── keys/                      # Encryption keys (SSE-S3/KMS)
 ```
 
+## API Reference
+
+All endpoints require AWS Signature Version 4 authentication unless using presigned URLs or public bucket policies.
+
+### Bucket Operations
+
+| Method | Endpoint | Description |
+|--------|----------|-------------|
+| `GET` | `/` | List all buckets |
+| `PUT` | `/<bucket>` | Create bucket |
+| `DELETE` | `/<bucket>` | Delete bucket (must be empty) |
+| `HEAD` | `/<bucket>` | Check bucket exists |
+
+### Object Operations
+
+| Method | Endpoint | Description |
+|--------|----------|-------------|
+| `GET` | `/<bucket>` | List objects (supports `list-type=2`) |
+| `PUT` | `/<bucket>/<key>` | Upload object |
+| `GET` | `/<bucket>/<key>` | Download object |
+| `DELETE` | `/<bucket>/<key>` | Delete object |
+| `HEAD` | `/<bucket>/<key>` | Get object metadata |
+| `POST` | `/<bucket>/<key>?uploads` | Initiate multipart upload |
+| `PUT` | `/<bucket>/<key>?partNumber=N&uploadId=X` | Upload part |
+| `POST` | `/<bucket>/<key>?uploadId=X` | Complete multipart upload |
+| `DELETE` | `/<bucket>/<key>?uploadId=X` | Abort multipart upload |
+
+### Bucket Policies (S3-compatible)
+
+| Method | Endpoint | Description |
+|--------|----------|-------------|
+| `GET` | `/<bucket>?policy` | Get bucket policy |
+| `PUT` | `/<bucket>?policy` | Set bucket policy |
+| `DELETE` | `/<bucket>?policy` | Delete bucket policy |
+
+### Versioning
+
+| Method | Endpoint | Description |
+|--------|----------|-------------|
+| `GET` | `/<bucket>/<key>?versionId=X` | Get specific version |
+| `DELETE` | `/<bucket>/<key>?versionId=X` | Delete specific version |
+| `GET` | `/<bucket>?versions` | List object versions |
+
+### Health Check
+
+| Method | Endpoint | Description |
+|--------|----------|-------------|
+| `GET` | `/myfsio/health` | Health check endpoint |
+
+## IAM & Access Control
+
+### Users and Access Keys
+
+On first run, MyFSIO creates a default admin user (`localadmin`/`localadmin`). Use the IAM dashboard to:
+
+- Create and delete users
+- Generate and rotate access keys
+- Attach inline policies to users
+- Control IAM management permissions
+
+### Bucket Policies
+
+Bucket policies follow AWS policy grammar (Version `2012-10-17`) with support for:
+
+- Principal-based access (`*` for anonymous, specific users)
+- Action-based permissions (`s3:GetObject`, `s3:PutObject`, etc.)
+- Resource patterns (`arn:aws:s3:::bucket/*`)
+- Condition keys
+
+**Policy Presets:**
+- **Public:** Grants anonymous read access (`s3:GetObject`, `s3:ListBucket`)
+- **Private:** Removes bucket policy (IAM-only access)
+- **Custom:** Manual policy editing with draft preservation
+
+Policies hot-reload when the JSON file changes.
 
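The Version `2012-10-17` grammar described above can be sanity-checked without a running server. A minimal sketch of the public-read preset the README mentions (the bucket name `demo` is illustrative, not a project constant):

```python
import json

# Illustrative public-read bucket policy in the AWS 2012-10-17 grammar,
# granting anonymous s3:GetObject + s3:ListBucket as the Public preset does.
public_read_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::demo",       # the bucket itself (ListBucket)
                "arn:aws:s3:::demo/*",     # every key in the bucket (GetObject)
            ],
        }
    ],
}

# Serialize as it would appear in bucket_policies.json.
policy_json = json.dumps(public_read_policy, indent=2)
```

A document like this could be PUT to `/<bucket>?policy` or written into `bucket_policies.json` by hand, since the store hot-reloads on change.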
+## Server-Side Encryption
+
+MyFSIO supports two encryption modes:
+
+- **SSE-S3:** Server-managed keys with automatic key rotation
+- **SSE-KMS:** Customer-managed keys via built-in KMS
+
+Enable encryption with:
+```bash
+ENCRYPTION_ENABLED=true python run.py
+```
+
+## Cross-Bucket Replication
+
+Replicate objects to remote S3-compatible endpoints:
+
+1. Configure remote connections in the UI
+2. Create replication rules specifying source/destination
+3. Objects are automatically replicated on upload
+
+## Docker
+
+```bash
+docker build -t myfsio .
+docker run -p 5000:5000 -p 5100:5100 -v ./data:/app/data myfsio
+```
 
 ## Testing
 
 ```bash
-pytest -q
+# Run all tests
+pytest tests/ -v
+
+# Run specific test file
+pytest tests/test_api.py -v
+
+# Run with coverage
+pytest tests/ --cov=app --cov-report=html
 ```
 
 ## References
 
-- [Amazon Simple Storage Service Documentation](https://docs.aws.amazon.com/s3/)
-- [Signature Version 4 Signing Process](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html)
-- [Amazon S3 Bucket Policy Examples](https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-bucket-policies.html)
+- [Amazon S3 Documentation](https://docs.aws.amazon.com/s3/)
+- [AWS Signature Version 4](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html)
+- [S3 Bucket Policy Examples](https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-bucket-policies.html)
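The README's `POST /presign/<bucket>/<key>` endpoint hands back a Signature Version 4 presigned URL. The query-string signing such an endpoint mirrors can be sketched with the standard library alone; the host, credentials, and bucket/key below are illustrative placeholders, and MyFSIO's own implementation may differ in detail:

```python
import hashlib
import hmac
from datetime import datetime, timezone
from urllib.parse import quote

def presign_get(host, bucket, key, access_key, secret_key,
                region="us-east-1", service="s3", expires=3600):
    """Build an AWS SigV4 presigned GET URL (UNSIGNED-PAYLOAD variant)."""
    now = datetime.now(timezone.utc)
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")
    scope = f"{datestamp}/{region}/{service}/aws4_request"
    path = f"/{bucket}/{quote(key)}"
    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),
        "X-Amz-SignedHeaders": "host",
    }
    # Canonical query string: sorted keys, RFC 3986 escaping.
    query = "&".join(f"{k}={quote(v, safe='')}" for k, v in sorted(params.items()))
    canonical = "\n".join([
        "GET", path, query, f"host:{host}\n", "host", "UNSIGNED-PAYLOAD",
    ])
    string_to_sign = "\n".join([
        "AWS4-HMAC-SHA256", amz_date, scope,
        hashlib.sha256(canonical.encode()).hexdigest(),
    ])
    def hmac_sha256(key_bytes, msg):
        return hmac.new(key_bytes, msg.encode(), hashlib.sha256).digest()
    # Derive the signing key: date -> region -> service -> "aws4_request".
    signing_key = hmac_sha256(hmac_sha256(hmac_sha256(hmac_sha256(
        b"AWS4" + secret_key.encode(), datestamp), region), service), "aws4_request")
    signature = hmac.new(signing_key, string_to_sign.encode(),
                         hashlib.sha256).hexdigest()
    return f"http://{host}{path}?{query}&X-Amz-Signature={signature}"

# Illustrative call against a local MyFSIO API (credentials are placeholders).
url = presign_get("127.0.0.1:5000", "demo", "hello.txt", "AKIDEXAMPLE", "secret")
```

Any SigV4-aware client (the AWS CLI, boto3 pointed at the local endpoint) performs the same derivation, which is why the generated URL can be validated server-side without state.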
app/__init__.py

@@ -1,20 +1,24 @@
-"""Application factory for the mini S3-compatible object store."""
 from __future__ import annotations
 
 import logging
+import shutil
 import sys
 import time
 import uuid
 from logging.handlers import RotatingFileHandler
 from pathlib import Path
 from datetime import timedelta
-from typing import Any, Dict, Optional
+from typing import Any, Dict, List, Optional
 
 from flask import Flask, g, has_request_context, redirect, render_template, request, url_for
 from flask_cors import CORS
 from flask_wtf.csrf import CSRFError
 from werkzeug.middleware.proxy_fix import ProxyFix
 
+from .access_logging import AccessLoggingService
+from .operation_metrics import OperationMetricsCollector, classify_endpoint
+from .compression import GzipMiddleware
+from .acl import AclService
 from .bucket_policies import BucketPolicyStore
 from .config import AppConfig
 from .connections import ConnectionStore
@@ -22,12 +26,41 @@ from .encryption import EncryptionManager
 from .extensions import limiter, csrf
 from .iam import IamService
 from .kms import KMSManager
+from .lifecycle import LifecycleManager
+from .notifications import NotificationService
+from .object_lock import ObjectLockService
 from .replication import ReplicationManager
 from .secret_store import EphemeralSecretStore
 from .storage import ObjectStorage
 from .version import get_version
 
 
+def _migrate_config_file(active_path: Path, legacy_paths: List[Path]) -> Path:
+    """Migrate config file from legacy locations to the active path.
+
+    Checks each legacy path in order and moves the first one found to the active path.
+    This ensures backward compatibility for users upgrading from older versions.
+    """
+    active_path.parent.mkdir(parents=True, exist_ok=True)
+
+    if active_path.exists():
+        return active_path
+
+    for legacy_path in legacy_paths:
+        if legacy_path.exists():
+            try:
+                shutil.move(str(legacy_path), str(active_path))
+            except OSError:
+                shutil.copy2(legacy_path, active_path)
+                try:
+                    legacy_path.unlink(missing_ok=True)
+                except OSError:
+                    pass
+            break
+
+    return active_path
+
+
def create_app(
|
def create_app(
|
||||||
test_config: Optional[Dict[str, Any]] = None,
|
test_config: Optional[Dict[str, Any]] = None,
|
||||||
*,
|
*,
|
||||||
@@ -58,13 +91,24 @@ def create_app(
     # Trust X-Forwarded-* headers from proxies
     app.wsgi_app = ProxyFix(app.wsgi_app, x_for=1, x_proto=1, x_host=1, x_prefix=1)
+
+    # Enable gzip compression for responses (10-20x smaller JSON payloads)
+    if app.config.get("ENABLE_GZIP", True):
+        app.wsgi_app = GzipMiddleware(app.wsgi_app, compression_level=6)

     _configure_cors(app)
     _configure_logging(app)

     limiter.init_app(app)
     csrf.init_app(app)

-    storage = ObjectStorage(Path(app.config["STORAGE_ROOT"]))
+    storage = ObjectStorage(
+        Path(app.config["STORAGE_ROOT"]),
+        cache_ttl=app.config.get("OBJECT_CACHE_TTL", 5),
+    )
+
+    if app.config.get("WARM_CACHE_ON_STARTUP", True) and not app.config.get("TESTING"):
+        storage.warm_cache_async()
+
     iam = IamService(
         Path(app.config["IAM_CONFIG"]),
         auth_max_attempts=app.config.get("AUTH_MAX_ATTEMPTS", 5),
@@ -73,14 +117,28 @@ def create_app(
     bucket_policies = BucketPolicyStore(Path(app.config["BUCKET_POLICY_PATH"]))
     secret_store = EphemeralSecretStore(default_ttl=app.config.get("SECRET_TTL_SECONDS", 300))

-    # Initialize Replication components
-    connections_path = Path(app.config["STORAGE_ROOT"]) / ".connections.json"
-    replication_rules_path = Path(app.config["STORAGE_ROOT"]) / ".replication_rules.json"
+    storage_root = Path(app.config["STORAGE_ROOT"])
+    config_dir = storage_root / ".myfsio.sys" / "config"
+    config_dir.mkdir(parents=True, exist_ok=True)
+
+    connections_path = _migrate_config_file(
+        active_path=config_dir / "connections.json",
+        legacy_paths=[
+            storage_root / ".myfsio.sys" / "connections.json",
+            storage_root / ".connections.json",
+        ],
+    )
+    replication_rules_path = _migrate_config_file(
+        active_path=config_dir / "replication_rules.json",
+        legacy_paths=[
+            storage_root / ".myfsio.sys" / "replication_rules.json",
+            storage_root / ".replication_rules.json",
+        ],
+    )

     connections = ConnectionStore(connections_path)
-    replication = ReplicationManager(storage, connections, replication_rules_path)
+    replication = ReplicationManager(storage, connections, replication_rules_path, storage_root)

-    # Initialize encryption and KMS
     encryption_config = {
         "encryption_enabled": app.config.get("ENCRYPTION_ENABLED", False),
         "encryption_master_key_path": app.config.get("ENCRYPTION_MASTER_KEY_PATH"),
@@ -95,11 +153,26 @@ def create_app(
     kms_manager = KMSManager(kms_keys_path, kms_master_key_path)
     encryption_manager.set_kms_provider(kms_manager)

-    # Wrap storage with encryption layer if encryption is enabled
     if app.config.get("ENCRYPTION_ENABLED", False):
         from .encrypted_storage import EncryptedObjectStorage
         storage = EncryptedObjectStorage(storage, encryption_manager)

+    acl_service = AclService(storage_root)
+    object_lock_service = ObjectLockService(storage_root)
+    notification_service = NotificationService(storage_root)
+    access_logging_service = AccessLoggingService(storage_root)
+    access_logging_service.set_storage(storage)
+
+    lifecycle_manager = None
+    if app.config.get("LIFECYCLE_ENABLED", False):
+        base_storage = storage.storage if hasattr(storage, 'storage') else storage
+        lifecycle_manager = LifecycleManager(
+            base_storage,
+            interval_seconds=app.config.get("LIFECYCLE_INTERVAL_SECONDS", 3600),
+            storage_root=storage_root,
+        )
+        lifecycle_manager.start()
+
     app.extensions["object_storage"] = storage
     app.extensions["iam"] = iam
     app.extensions["bucket_policies"] = bucket_policies
@@ -109,6 +182,20 @@ def create_app(
     app.extensions["replication"] = replication
     app.extensions["encryption"] = encryption_manager
     app.extensions["kms"] = kms_manager
+    app.extensions["acl"] = acl_service
+    app.extensions["lifecycle"] = lifecycle_manager
+    app.extensions["object_lock"] = object_lock_service
+    app.extensions["notifications"] = notification_service
+    app.extensions["access_logging"] = access_logging_service
+
+    operation_metrics_collector = None
+    if app.config.get("OPERATION_METRICS_ENABLED", False):
+        operation_metrics_collector = OperationMetricsCollector(
+            storage_root,
+            interval_minutes=app.config.get("OPERATION_METRICS_INTERVAL_MINUTES", 5),
+            retention_hours=app.config.get("OPERATION_METRICS_RETENTION_HOURS", 24),
+        )
+    app.extensions["operation_metrics"] = operation_metrics_collector

     @app.errorhandler(500)
     def internal_error(error):
@@ -131,16 +218,49 @@ def create_app(

     @app.template_filter("timestamp_to_datetime")
     def timestamp_to_datetime(value: float) -> str:
-        """Format Unix timestamp as human-readable datetime."""
-        from datetime import datetime
+        """Format Unix timestamp as human-readable datetime in configured timezone."""
+        from datetime import datetime, timezone as dt_timezone
+        from zoneinfo import ZoneInfo
         if not value:
             return "Never"
         try:
-            dt = datetime.fromtimestamp(value)
-            return dt.strftime("%Y-%m-%d %H:%M:%S")
+            dt_utc = datetime.fromtimestamp(value, dt_timezone.utc)
+            display_tz = app.config.get("DISPLAY_TIMEZONE", "UTC")
+            if display_tz and display_tz != "UTC":
+                try:
+                    tz = ZoneInfo(display_tz)
+                    dt_local = dt_utc.astimezone(tz)
+                    return dt_local.strftime("%Y-%m-%d %H:%M:%S")
+                except (KeyError, ValueError):
+                    pass
+            return dt_utc.strftime("%Y-%m-%d %H:%M:%S UTC")
         except (ValueError, OSError):
             return "Unknown"
+
+    @app.template_filter("format_datetime")
+    def format_datetime_filter(dt, include_tz: bool = True) -> str:
+        """Format datetime object as human-readable string in configured timezone."""
+        from datetime import datetime, timezone as dt_timezone
+        from zoneinfo import ZoneInfo
+        if not dt:
+            return ""
+        try:
+            display_tz = app.config.get("DISPLAY_TIMEZONE", "UTC")
+            if display_tz and display_tz != "UTC":
+                try:
+                    tz = ZoneInfo(display_tz)
+                    if dt.tzinfo is None:
+                        dt = dt.replace(tzinfo=dt_timezone.utc)
+                    dt = dt.astimezone(tz)
+                except (KeyError, ValueError):
+                    pass
+            tz_abbr = dt.strftime("%Z") or "UTC"
+            if include_tz:
+                return f"{dt.strftime('%b %d, %Y %H:%M')} ({tz_abbr})"
+            return dt.strftime("%b %d, %Y %H:%M")
+        except (ValueError, AttributeError):
+            return str(dt)

     if include_api:
         from .s3_api import s3_api_bp
         from .kms_api import kms_api_bp
@@ -168,9 +288,9 @@ def create_app(
             return render_template("404.html"), 404
         return error

-    @app.get("/healthz")
+    @app.get("/myfsio/health")
     def healthcheck() -> Dict[str, str]:
-        return {"status": "ok", "version": app.config.get("APP_VERSION", "unknown")}
+        return {"status": "ok"}

     return app

@@ -198,7 +318,7 @@ def _configure_cors(app: Flask) -> None:
 class _RequestContextFilter(logging.Filter):
     """Inject request-specific attributes into log records."""

-    def filter(self, record: logging.LogRecord) -> bool:  # pragma: no cover - simple boilerplate
+    def filter(self, record: logging.LogRecord) -> bool:
         if has_request_context():
             record.request_id = getattr(g, "request_id", "-")
             record.path = request.path
@@ -216,17 +336,17 @@ def _configure_logging(app: Flask) -> None:
     formatter = logging.Formatter(
         "%(asctime)s | %(levelname)s | %(request_id)s | %(method)s %(path)s | %(message)s"
     )

-    # Stream Handler (stdout) - Primary for Docker
     stream_handler = logging.StreamHandler(sys.stdout)
     stream_handler.setFormatter(formatter)
     stream_handler.addFilter(_RequestContextFilter())

     logger = app.logger
+    for handler in logger.handlers[:]:
+        handler.close()
     logger.handlers.clear()
     logger.addHandler(stream_handler)

-    # File Handler (optional, if configured)
     if app.config.get("LOG_TO_FILE"):
         log_file = Path(app.config["LOG_FILE"])
         log_file.parent.mkdir(parents=True, exist_ok=True)
@@ -246,6 +366,7 @@ def _configure_logging(app: Flask) -> None:
     def _log_request_start() -> None:
        g.request_id = uuid.uuid4().hex
        g.request_started_at = time.perf_counter()
+       g.request_bytes_in = request.content_length or 0
        app.logger.info(
            "Request started",
            extra={"path": request.path, "method": request.method, "remote_addr": request.remote_addr},
@@ -267,4 +388,21 @@ def _configure_logging(app: Flask) -> None:
             },
         )
         response.headers["X-Request-Duration-ms"] = f"{duration_ms:.2f}"
+
+        operation_metrics = app.extensions.get("operation_metrics")
+        if operation_metrics:
+            bytes_in = getattr(g, "request_bytes_in", 0)
+            bytes_out = response.content_length or 0
+            error_code = getattr(g, "s3_error_code", None)
+            endpoint_type = classify_endpoint(request.path)
+            operation_metrics.record_request(
+                method=request.method,
+                endpoint_type=endpoint_type,
+                status_code=response.status_code,
+                latency_ms=duration_ms,
+                bytes_in=bytes_in,
+                bytes_out=bytes_out,
+                error_code=error_code,
+            )
+
         return response
app/access_logging.py (new file, 265 lines)
@@ -0,0 +1,265 @@
+from __future__ import annotations
+
+import io
+import json
+import logging
+import queue
+import threading
+import time
+import uuid
+from dataclasses import dataclass, field
+from datetime import datetime, timezone
+from pathlib import Path
+from typing import Any, Dict, List, Optional
+
+logger = logging.getLogger(__name__)
+
+
+@dataclass
+class AccessLogEntry:
+    bucket_owner: str = "-"
+    bucket: str = "-"
+    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
+    remote_ip: str = "-"
+    requester: str = "-"
+    request_id: str = field(default_factory=lambda: uuid.uuid4().hex[:16].upper())
+    operation: str = "-"
+    key: str = "-"
+    request_uri: str = "-"
+    http_status: int = 200
+    error_code: str = "-"
+    bytes_sent: int = 0
+    object_size: int = 0
+    total_time_ms: int = 0
+    turn_around_time_ms: int = 0
+    referrer: str = "-"
+    user_agent: str = "-"
+    version_id: str = "-"
+    host_id: str = "-"
+    signature_version: str = "SigV4"
+    cipher_suite: str = "-"
+    authentication_type: str = "AuthHeader"
+    host_header: str = "-"
+    tls_version: str = "-"
+
+    def to_log_line(self) -> str:
+        time_str = self.timestamp.strftime("[%d/%b/%Y:%H:%M:%S %z]")
+        return (
+            f'{self.bucket_owner} {self.bucket} {time_str} {self.remote_ip} '
+            f'{self.requester} {self.request_id} {self.operation} {self.key} '
+            f'"{self.request_uri}" {self.http_status} {self.error_code or "-"} '
+            f'{self.bytes_sent or "-"} {self.object_size or "-"} {self.total_time_ms or "-"} '
+            f'{self.turn_around_time_ms or "-"} "{self.referrer}" "{self.user_agent}" {self.version_id}'
+        )
+
+    def to_dict(self) -> Dict[str, Any]:
+        return {
+            "bucket_owner": self.bucket_owner,
+            "bucket": self.bucket,
+            "timestamp": self.timestamp.isoformat(),
+            "remote_ip": self.remote_ip,
+            "requester": self.requester,
+            "request_id": self.request_id,
+            "operation": self.operation,
+            "key": self.key,
+            "request_uri": self.request_uri,
+            "http_status": self.http_status,
+            "error_code": self.error_code,
+            "bytes_sent": self.bytes_sent,
+            "object_size": self.object_size,
+            "total_time_ms": self.total_time_ms,
+            "referrer": self.referrer,
+            "user_agent": self.user_agent,
+            "version_id": self.version_id,
+        }
+
+
+@dataclass
+class LoggingConfiguration:
+    target_bucket: str
+    target_prefix: str = ""
+    enabled: bool = True
+
+    def to_dict(self) -> Dict[str, Any]:
+        return {
+            "LoggingEnabled": {
+                "TargetBucket": self.target_bucket,
+                "TargetPrefix": self.target_prefix,
+            }
+        }
+
+    @classmethod
+    def from_dict(cls, data: Dict[str, Any]) -> Optional["LoggingConfiguration"]:
+        logging_enabled = data.get("LoggingEnabled")
+        if not logging_enabled:
+            return None
+        return cls(
+            target_bucket=logging_enabled.get("TargetBucket", ""),
+            target_prefix=logging_enabled.get("TargetPrefix", ""),
+            enabled=True,
+        )
+
+
+class AccessLoggingService:
+    def __init__(self, storage_root: Path, flush_interval: int = 60, max_buffer_size: int = 1000):
+        self.storage_root = storage_root
+        self.flush_interval = flush_interval
+        self.max_buffer_size = max_buffer_size
+        self._configs: Dict[str, LoggingConfiguration] = {}
+        self._buffer: Dict[str, List[AccessLogEntry]] = {}
+        self._buffer_lock = threading.Lock()
+        self._shutdown = threading.Event()
+        self._storage = None
+
+        self._flush_thread = threading.Thread(target=self._flush_loop, name="access-log-flush", daemon=True)
+        self._flush_thread.start()
+
+    def set_storage(self, storage: Any) -> None:
+        self._storage = storage
+
+    def _config_path(self, bucket_name: str) -> Path:
+        return self.storage_root / ".myfsio.sys" / "buckets" / bucket_name / "logging.json"
+
+    def get_bucket_logging(self, bucket_name: str) -> Optional[LoggingConfiguration]:
+        if bucket_name in self._configs:
+            return self._configs[bucket_name]
+
+        config_path = self._config_path(bucket_name)
+        if not config_path.exists():
+            return None
+
+        try:
+            data = json.loads(config_path.read_text(encoding="utf-8"))
+            config = LoggingConfiguration.from_dict(data)
+            if config:
+                self._configs[bucket_name] = config
+            return config
+        except (json.JSONDecodeError, OSError) as e:
+            logger.warning(f"Failed to load logging config for {bucket_name}: {e}")
+            return None
+
+    def set_bucket_logging(self, bucket_name: str, config: LoggingConfiguration) -> None:
+        config_path = self._config_path(bucket_name)
+        config_path.parent.mkdir(parents=True, exist_ok=True)
+        config_path.write_text(json.dumps(config.to_dict(), indent=2), encoding="utf-8")
+        self._configs[bucket_name] = config
+
+    def delete_bucket_logging(self, bucket_name: str) -> None:
+        config_path = self._config_path(bucket_name)
+        try:
+            if config_path.exists():
+                config_path.unlink()
+        except OSError:
+            pass
+        self._configs.pop(bucket_name, None)
+
+    def log_request(
+        self,
+        bucket_name: str,
+        *,
+        operation: str,
+        key: str = "-",
+        remote_ip: str = "-",
+        requester: str = "-",
+        request_uri: str = "-",
+        http_status: int = 200,
+        error_code: str = "",
+        bytes_sent: int = 0,
+        object_size: int = 0,
+        total_time_ms: int = 0,
+        referrer: str = "-",
+        user_agent: str = "-",
+        version_id: str = "-",
+        request_id: str = "",
+    ) -> None:
+        config = self.get_bucket_logging(bucket_name)
+        if not config or not config.enabled:
+            return
+
+        entry = AccessLogEntry(
+            bucket_owner="local-owner",
+            bucket=bucket_name,
+            remote_ip=remote_ip,
+            requester=requester,
+            request_id=request_id or uuid.uuid4().hex[:16].upper(),
+            operation=operation,
+            key=key,
+            request_uri=request_uri,
+            http_status=http_status,
+            error_code=error_code,
+            bytes_sent=bytes_sent,
+            object_size=object_size,
+            total_time_ms=total_time_ms,
+            referrer=referrer,
+            user_agent=user_agent,
+            version_id=version_id,
+        )
+
+        target_key = f"{config.target_bucket}:{config.target_prefix}"
+        should_flush = False
+        with self._buffer_lock:
+            if target_key not in self._buffer:
+                self._buffer[target_key] = []
+            self._buffer[target_key].append(entry)
+            should_flush = len(self._buffer[target_key]) >= self.max_buffer_size
+
+        if should_flush:
+            self._flush_buffer(target_key)
+
+    def _flush_loop(self) -> None:
+        while not self._shutdown.is_set():
+            self._shutdown.wait(timeout=self.flush_interval)
+            if not self._shutdown.is_set():
+                self._flush_all()
+
+    def _flush_all(self) -> None:
+        with self._buffer_lock:
+            targets = list(self._buffer.keys())
+
+        for target_key in targets:
+            self._flush_buffer(target_key)
+
+    def _flush_buffer(self, target_key: str) -> None:
+        with self._buffer_lock:
+            entries = self._buffer.pop(target_key, [])
+
+        if not entries or not self._storage:
+            return
+
+        try:
+            bucket_name, prefix = target_key.split(":", 1)
+        except ValueError:
+            logger.error(f"Invalid target key: {target_key}")
+            return
+
+        now = datetime.now(timezone.utc)
+        log_key = f"{prefix}{now.strftime('%Y-%m-%d-%H-%M-%S')}-{uuid.uuid4().hex[:8]}"
+
+        log_content = "\n".join(entry.to_log_line() for entry in entries) + "\n"
+
+        try:
+            stream = io.BytesIO(log_content.encode("utf-8"))
+            self._storage.put_object(bucket_name, log_key, stream, enforce_quota=False)
+            logger.info(f"Flushed {len(entries)} access log entries to {bucket_name}/{log_key}")
+        except Exception as e:
+            logger.error(f"Failed to write access log to {bucket_name}/{log_key}: {e}")
+            with self._buffer_lock:
+                if target_key not in self._buffer:
+                    self._buffer[target_key] = []
+                self._buffer[target_key] = entries + self._buffer[target_key]
+
+    def flush(self) -> None:
+        self._flush_all()
+
+    def shutdown(self) -> None:
+        self._shutdown.set()
+        self._flush_all()
+        self._flush_thread.join(timeout=5.0)
+
+    def get_stats(self) -> Dict[str, Any]:
+        with self._buffer_lock:
+            buffered = sum(len(entries) for entries in self._buffer.values())
+        return {
+            "buffered_entries": buffered,
+            "target_buckets": len(self._buffer),
+        }
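The `to_log_line` output above follows the Amazon S3 server-access-log line convention, with `-` standing in for empty fields. A trimmed sketch (a subset of the real dataclass's fields, with a fixed timestamp so the output is deterministic) shows the shape of one line:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Entry:
    # Trimmed-down stand-in for AccessLogEntry; field names match the diff.
    bucket_owner: str = "-"
    bucket: str = "-"
    timestamp: datetime = datetime(2024, 1, 2, 3, 4, 5, tzinfo=timezone.utc)
    remote_ip: str = "-"
    requester: str = "-"
    request_id: str = "REQ1"
    operation: str = "REST.GET.OBJECT"
    key: str = "photo.jpg"
    request_uri: str = "GET /bkt/photo.jpg HTTP/1.1"
    http_status: int = 200
    error_code: str = ""
    bytes_sent: int = 1024
    object_size: int = 1024
    total_time_ms: int = 12

    def to_log_line(self) -> str:
        # Same layout as the diff's method, minus the trailing fields.
        time_str = self.timestamp.strftime("[%d/%b/%Y:%H:%M:%S %z]")
        return (
            f'{self.bucket_owner} {self.bucket} {time_str} {self.remote_ip} '
            f'{self.requester} {self.request_id} {self.operation} {self.key} '
            f'"{self.request_uri}" {self.http_status} {self.error_code or "-"} '
            f'{self.bytes_sent or "-"} {self.object_size or "-"} {self.total_time_ms or "-"}'
        )

line = Entry(bucket="bkt").to_log_line()
print(line)
# - bkt [02/Jan/2024:03:04:05 +0000] - - REQ1 REST.GET.OBJECT photo.jpg "GET /bkt/photo.jpg HTTP/1.1" 200 - 1024 1024 12
```

Because zero-valued numeric fields render as `-` via `or "-"`, downstream parsers can split on whitespace outside quotes without special-casing empties.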
app/acl.py (new file, 204 lines)
@@ -0,0 +1,204 @@
+from __future__ import annotations
+
+import json
+from dataclasses import dataclass, field
+from pathlib import Path
+from typing import Any, Dict, List, Optional, Set
+
+
+ACL_PERMISSION_FULL_CONTROL = "FULL_CONTROL"
+ACL_PERMISSION_WRITE = "WRITE"
+ACL_PERMISSION_WRITE_ACP = "WRITE_ACP"
+ACL_PERMISSION_READ = "READ"
+ACL_PERMISSION_READ_ACP = "READ_ACP"
+
+ALL_PERMISSIONS = {
+    ACL_PERMISSION_FULL_CONTROL,
+    ACL_PERMISSION_WRITE,
+    ACL_PERMISSION_WRITE_ACP,
+    ACL_PERMISSION_READ,
+    ACL_PERMISSION_READ_ACP,
+}
+
+PERMISSION_TO_ACTIONS = {
+    ACL_PERMISSION_FULL_CONTROL: {"read", "write", "delete", "list", "share"},
+    ACL_PERMISSION_WRITE: {"write", "delete"},
+    ACL_PERMISSION_WRITE_ACP: {"share"},
+    ACL_PERMISSION_READ: {"read", "list"},
+    ACL_PERMISSION_READ_ACP: {"share"},
+}
+
+GRANTEE_ALL_USERS = "*"
+GRANTEE_AUTHENTICATED_USERS = "authenticated"
+
+
+@dataclass
+class AclGrant:
+    grantee: str
+    permission: str
+
+    def to_dict(self) -> Dict[str, str]:
+        return {"grantee": self.grantee, "permission": self.permission}
+
+    @classmethod
+    def from_dict(cls, data: Dict[str, str]) -> "AclGrant":
+        return cls(grantee=data["grantee"], permission=data["permission"])
+
+
+@dataclass
+class Acl:
+    owner: str
+    grants: List[AclGrant] = field(default_factory=list)
+
+    def to_dict(self) -> Dict[str, Any]:
+        return {
+            "owner": self.owner,
+            "grants": [g.to_dict() for g in self.grants],
+        }
+
+    @classmethod
+    def from_dict(cls, data: Dict[str, Any]) -> "Acl":
+        return cls(
+            owner=data.get("owner", ""),
+            grants=[AclGrant.from_dict(g) for g in data.get("grants", [])],
+        )
+
+    def get_allowed_actions(self, principal_id: Optional[str], is_authenticated: bool = True) -> Set[str]:
+        actions: Set[str] = set()
+        if principal_id and principal_id == self.owner:
+            actions.update(PERMISSION_TO_ACTIONS[ACL_PERMISSION_FULL_CONTROL])
+        for grant in self.grants:
+            if grant.grantee == GRANTEE_ALL_USERS:
+                actions.update(PERMISSION_TO_ACTIONS.get(grant.permission, set()))
+            elif grant.grantee == GRANTEE_AUTHENTICATED_USERS and is_authenticated:
+                actions.update(PERMISSION_TO_ACTIONS.get(grant.permission, set()))
+            elif principal_id and grant.grantee == principal_id:
+                actions.update(PERMISSION_TO_ACTIONS.get(grant.permission, set()))
+        return actions
+
+
+CANNED_ACLS = {
+    "private": lambda owner: Acl(
+        owner=owner,
+        grants=[AclGrant(grantee=owner, permission=ACL_PERMISSION_FULL_CONTROL)],
+    ),
+    "public-read": lambda owner: Acl(
+        owner=owner,
+        grants=[
+            AclGrant(grantee=owner, permission=ACL_PERMISSION_FULL_CONTROL),
+            AclGrant(grantee=GRANTEE_ALL_USERS, permission=ACL_PERMISSION_READ),
+        ],
+    ),
+    "public-read-write": lambda owner: Acl(
+        owner=owner,
+        grants=[
+            AclGrant(grantee=owner, permission=ACL_PERMISSION_FULL_CONTROL),
+            AclGrant(grantee=GRANTEE_ALL_USERS, permission=ACL_PERMISSION_READ),
+            AclGrant(grantee=GRANTEE_ALL_USERS, permission=ACL_PERMISSION_WRITE),
+        ],
+    ),
+    "authenticated-read": lambda owner: Acl(
+        owner=owner,
+        grants=[
+            AclGrant(grantee=owner, permission=ACL_PERMISSION_FULL_CONTROL),
+            AclGrant(grantee=GRANTEE_AUTHENTICATED_USERS, permission=ACL_PERMISSION_READ),
+        ],
+    ),
+    "bucket-owner-read": lambda owner: Acl(
+        owner=owner,
+        grants=[
+            AclGrant(grantee=owner, permission=ACL_PERMISSION_FULL_CONTROL),
+        ],
+    ),
+    "bucket-owner-full-control": lambda owner: Acl(
+        owner=owner,
+        grants=[
+            AclGrant(grantee=owner, permission=ACL_PERMISSION_FULL_CONTROL),
+        ],
+    ),
+}
+
+
+def create_canned_acl(canned_acl: str, owner: str) -> Acl:
+    factory = CANNED_ACLS.get(canned_acl)
+    if not factory:
+        return CANNED_ACLS["private"](owner)
+    return factory(owner)
+
+
+class AclService:
+    def __init__(self, storage_root: Path):
+        self.storage_root = storage_root
+        self._bucket_acl_cache: Dict[str, Acl] = {}
+
+    def _bucket_acl_path(self, bucket_name: str) -> Path:
+        return self.storage_root / ".myfsio.sys" / "buckets" / bucket_name / ".acl.json"
+
+    def get_bucket_acl(self, bucket_name: str) -> Optional[Acl]:
+        if bucket_name in self._bucket_acl_cache:
+            return self._bucket_acl_cache[bucket_name]
+        acl_path = self._bucket_acl_path(bucket_name)
+        if not acl_path.exists():
+            return None
+        try:
+            data = json.loads(acl_path.read_text(encoding="utf-8"))
+            acl = Acl.from_dict(data)
+            self._bucket_acl_cache[bucket_name] = acl
+            return acl
+        except (OSError, json.JSONDecodeError):
+            return None
+
+    def set_bucket_acl(self, bucket_name: str, acl: Acl) -> None:
+        acl_path = self._bucket_acl_path(bucket_name)
+        acl_path.parent.mkdir(parents=True, exist_ok=True)
+        acl_path.write_text(json.dumps(acl.to_dict(), indent=2), encoding="utf-8")
+        self._bucket_acl_cache[bucket_name] = acl
+
+    def set_bucket_canned_acl(self, bucket_name: str, canned_acl: str, owner: str) -> Acl:
+        acl = create_canned_acl(canned_acl, owner)
+        self.set_bucket_acl(bucket_name, acl)
+        return acl
+
+    def delete_bucket_acl(self, bucket_name: str) -> None:
+        acl_path = self._bucket_acl_path(bucket_name)
+        if acl_path.exists():
+            acl_path.unlink()
+        self._bucket_acl_cache.pop(bucket_name, None)
+
+    def evaluate_bucket_acl(
+        self,
+        bucket_name: str,
+        principal_id: Optional[str],
+        action: str,
+        is_authenticated: bool = True,
+    ) -> bool:
+        acl = self.get_bucket_acl(bucket_name)
+        if not acl:
+            return False
+        allowed_actions = acl.get_allowed_actions(principal_id, is_authenticated)
+        return action in allowed_actions
+
+    def get_object_acl(self, bucket_name: str, object_key: str, object_metadata: Dict[str, Any]) -> Optional[Acl]:
+        acl_data = object_metadata.get("__acl__")
+        if not acl_data:
+            return None
+        try:
+            return Acl.from_dict(acl_data)
+        except (TypeError, KeyError):
+            return None
+
+    def create_object_acl_metadata(self, acl: Acl) -> Dict[str, Any]:
+        return {"__acl__": acl.to_dict()}
+
+    def evaluate_object_acl(
+        self,
+        object_metadata: Dict[str, Any],
+        principal_id: Optional[str],
+        action: str,
+        is_authenticated: bool = True,
+    ) -> bool:
+        acl = self.get_object_acl("", "", object_metadata)
+        if not acl:
+            return False
+        allowed_actions = acl.get_allowed_actions(principal_id, is_authenticated)
+        return action in allowed_actions
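The grant evaluation in `Acl.get_allowed_actions` in app/acl.py above reduces to a set union over matching grants. A compressed sketch (plain tuples instead of the `AclGrant` dataclass, and only three of the five permissions) shows how a `public-read` canned ACL resolves for the owner versus an anonymous caller:

```python
# Subset of the diff's PERMISSION_TO_ACTIONS mapping.
PERMISSION_TO_ACTIONS = {
    "FULL_CONTROL": {"read", "write", "delete", "list", "share"},
    "WRITE": {"write", "delete"},
    "READ": {"read", "list"},
}

def allowed_actions(owner, grants, principal_id, is_authenticated=True):
    # Owner implicitly gets FULL_CONTROL; grants are (grantee, permission) pairs.
    actions = set()
    if principal_id and principal_id == owner:
        actions |= PERMISSION_TO_ACTIONS["FULL_CONTROL"]
    for grantee, permission in grants:
        if grantee == "*" or (grantee == "authenticated" and is_authenticated) or grantee == principal_id:
            actions |= PERMISSION_TO_ACTIONS.get(permission, set())
    return actions

# Shape of the "public-read" canned ACL from the diff.
public_read = [("alice", "FULL_CONTROL"), ("*", "READ")]
print(sorted(allowed_actions("alice", public_read, None, is_authenticated=False)))  # ['list', 'read']
print("write" in allowed_actions("alice", public_read, "alice"))                     # True
```

Note that an anonymous principal (`None`) can still match the `*` grantee, which is exactly why `public-read` grants read/list without authentication.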
@@ -1,23 +1,82 @@
-"""Bucket policy loader/enforcer with a subset of AWS semantics."""
 from __future__ import annotations
 
+import ipaddress
 import json
-from dataclasses import dataclass
-from fnmatch import fnmatch
+import re
+import time
+from dataclasses import dataclass, field
+from fnmatch import fnmatch, translate
 from pathlib import Path
-from typing import Any, Dict, Iterable, List, Optional, Sequence
+from typing import Any, Dict, Iterable, List, Optional, Pattern, Sequence, Tuple
 
 
 RESOURCE_PREFIX = "arn:aws:s3:::"
 
 
+def _match_string_like(value: str, pattern: str) -> bool:
+    regex = translate(pattern)
+    return bool(re.match(regex, value, re.IGNORECASE))
+
+
+def _ip_in_cidr(ip_str: str, cidr: str) -> bool:
+    try:
+        ip = ipaddress.ip_address(ip_str)
+        network = ipaddress.ip_network(cidr, strict=False)
+        return ip in network
+    except ValueError:
+        return False
+
+
+def _evaluate_condition_operator(
+    operator: str,
+    condition_key: str,
+    condition_values: List[str],
+    context: Dict[str, Any],
+) -> bool:
+    context_value = context.get(condition_key)
+    op_lower = operator.lower()
+    if_exists = op_lower.endswith("ifexists")
+    if if_exists:
+        op_lower = op_lower[:-8]
+
+    if context_value is None:
+        return if_exists
+
+    context_value_str = str(context_value)
+    context_value_lower = context_value_str.lower()
+
+    if op_lower == "stringequals":
+        return context_value_str in condition_values
+    elif op_lower == "stringnotequals":
+        return context_value_str not in condition_values
+    elif op_lower == "stringequalsignorecase":
+        return context_value_lower in [v.lower() for v in condition_values]
+    elif op_lower == "stringnotequalsignorecase":
+        return context_value_lower not in [v.lower() for v in condition_values]
+    elif op_lower == "stringlike":
+        return any(_match_string_like(context_value_str, p) for p in condition_values)
+    elif op_lower == "stringnotlike":
+        return not any(_match_string_like(context_value_str, p) for p in condition_values)
+    elif op_lower == "ipaddress":
+        return any(_ip_in_cidr(context_value_str, cidr) for cidr in condition_values)
+    elif op_lower == "notipaddress":
+        return not any(_ip_in_cidr(context_value_str, cidr) for cidr in condition_values)
+    elif op_lower == "bool":
+        bool_val = context_value_lower in ("true", "1", "yes")
+        return str(bool_val).lower() in [v.lower() for v in condition_values]
+    elif op_lower == "null":
+        is_null = context_value is None or context_value == ""
+        expected_null = condition_values[0].lower() in ("true", "1", "yes") if condition_values else True
+        return is_null == expected_null
+
+    return True
+
+
 ACTION_ALIASES = {
-    # List actions
     "s3:listbucket": "list",
     "s3:listallmybuckets": "list",
     "s3:listbucketversions": "list",
     "s3:listmultipartuploads": "list",
     "s3:listparts": "list",
-    # Read actions
     "s3:getobject": "read",
     "s3:getobjectversion": "read",
     "s3:getobjecttagging": "read",
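The condition helpers introduced in this hunk lean entirely on the stdlib: `fnmatch.translate` turns an AWS-style glob into an anchored regex, and `ipaddress` does the CIDR containment test. A minimal standalone sketch of those two building blocks (function names here are illustrative re-implementations mirroring `_match_string_like` and `_ip_in_cidr`, not imports from the project):

```python
import ipaddress
import re
from fnmatch import translate

def match_string_like(value: str, pattern: str) -> bool:
    # translate("s3:Get*") -> an anchored regex such as r"(?s:s3:Get.*)\Z"
    return bool(re.match(translate(pattern), value, re.IGNORECASE))

def ip_in_cidr(ip_str: str, cidr: str) -> bool:
    # strict=False tolerates host bits set in the CIDR, e.g. "192.168.1.5/24"
    try:
        return ipaddress.ip_address(ip_str) in ipaddress.ip_network(cidr, strict=False)
    except ValueError:
        return False

print(match_string_like("s3:GetObject", "s3:Get*"))   # True
print(ip_in_cidr("192.168.1.50", "192.168.1.0/24"))   # True
print(ip_in_cidr("10.0.0.1", "192.168.1.0/24"))       # False
```

Note that `IpAddress`/`NotIpAddress` conditions pair naturally with a context key such as `aws:SourceIp` supplied by the caller of `evaluate`.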
@@ -26,7 +85,6 @@ ACTION_ALIASES = {
     "s3:getbucketversioning": "read",
     "s3:headobject": "read",
     "s3:headbucket": "read",
-    # Write actions
     "s3:putobject": "write",
     "s3:createbucket": "write",
     "s3:putobjecttagging": "write",
@@ -36,26 +94,30 @@ ACTION_ALIASES = {
     "s3:completemultipartupload": "write",
     "s3:abortmultipartupload": "write",
     "s3:copyobject": "write",
-    # Delete actions
     "s3:deleteobject": "delete",
     "s3:deleteobjectversion": "delete",
     "s3:deletebucket": "delete",
     "s3:deleteobjecttagging": "delete",
-    # Share actions (ACL)
     "s3:putobjectacl": "share",
     "s3:putbucketacl": "share",
     "s3:getbucketacl": "share",
-    # Policy actions
     "s3:putbucketpolicy": "policy",
     "s3:getbucketpolicy": "policy",
     "s3:deletebucketpolicy": "policy",
-    # Replication actions
     "s3:getreplicationconfiguration": "replication",
     "s3:putreplicationconfiguration": "replication",
     "s3:deletereplicationconfiguration": "replication",
     "s3:replicateobject": "replication",
     "s3:replicatetags": "replication",
     "s3:replicatedelete": "replication",
+    "s3:getlifecycleconfiguration": "lifecycle",
+    "s3:putlifecycleconfiguration": "lifecycle",
+    "s3:deletelifecycleconfiguration": "lifecycle",
+    "s3:getbucketlifecycle": "lifecycle",
+    "s3:putbucketlifecycle": "lifecycle",
+    "s3:getbucketcors": "cors",
+    "s3:putbucketcors": "cors",
+    "s3:deletebucketcors": "cors",
 }
@@ -133,7 +195,20 @@ class BucketPolicyStatement:
     effect: str
     principals: List[str] | str
     actions: List[str]
-    resources: List[tuple[str | None, str | None]]
+    resources: List[Tuple[str | None, str | None]]
+    conditions: Dict[str, Dict[str, List[str]]] = field(default_factory=dict)
+    _compiled_patterns: List[Tuple[str | None, Optional[Pattern[str]]]] | None = None
+
+    def _get_compiled_patterns(self) -> List[Tuple[str | None, Optional[Pattern[str]]]]:
+        if self._compiled_patterns is None:
+            self._compiled_patterns = []
+            for resource_bucket, key_pattern in self.resources:
+                if key_pattern is None:
+                    self._compiled_patterns.append((resource_bucket, None))
+                else:
+                    regex_pattern = translate(key_pattern)
+                    self._compiled_patterns.append((resource_bucket, re.compile(regex_pattern)))
+        return self._compiled_patterns
 
     def matches_principal(self, access_key: Optional[str]) -> bool:
         if self.principals == "*":
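The change above swaps a per-request `fnmatch` call for regexes compiled lazily, once per statement, via `fnmatch.translate`. The caching idea can be sketched in isolation (the `ResourceMatcher` class is illustrative, not part of the project):

```python
import re
from fnmatch import translate

class ResourceMatcher:
    def __init__(self, key_patterns: list[str]):
        self._patterns = key_patterns
        self._compiled: list[re.Pattern[str]] | None = None  # filled on first use

    def _get_compiled(self) -> list[re.Pattern[str]]:
        # Compile each glob exactly once; subsequent matches reuse the regexes.
        if self._compiled is None:
            self._compiled = [re.compile(translate(p)) for p in self._patterns]
        return self._compiled

    def matches(self, key: str) -> bool:
        return any(rx.match(key) for rx in self._get_compiled())

m = ResourceMatcher(["reports/*", "logs/2024/*.gz"])
print(m.matches("reports/q1.csv"))   # True
print(m.matches("logs/2023/a.gz"))   # False
```

One caveat worth knowing: unlike shell globbing, `fnmatch`'s `*` also matches `/`, so `reports/*` matches nested keys such as `reports/2024/q1.csv` — the same semantics the original `fnmatch(key, key_pattern)` call had.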
@@ -149,18 +224,29 @@ class BucketPolicyStatement:
     def matches_resource(self, bucket: Optional[str], object_key: Optional[str]) -> bool:
         bucket = (bucket or "*").lower()
         key = object_key or ""
-        for resource_bucket, key_pattern in self.resources:
+        for resource_bucket, compiled_pattern in self._get_compiled_patterns():
             resource_bucket = (resource_bucket or "*").lower()
             if resource_bucket not in {"*", bucket}:
                 continue
-            if key_pattern is None:
+            if compiled_pattern is None:
                 if not key:
                     return True
                 continue
-            if fnmatch(key, key_pattern):
+            if compiled_pattern.match(key):
                 return True
         return False
 
+    def matches_condition(self, context: Optional[Dict[str, Any]]) -> bool:
+        if not self.conditions:
+            return True
+        if context is None:
+            context = {}
+        for operator, key_values in self.conditions.items():
+            for condition_key, condition_values in key_values.items():
+                if not _evaluate_condition_operator(operator, condition_key, condition_values, context):
+                    return False
+        return True
+
 
 class BucketPolicyStore:
     """Loads bucket policies from disk and evaluates statements."""
@@ -174,8 +260,16 @@ class BucketPolicyStore:
         self._policies: Dict[str, List[BucketPolicyStatement]] = {}
         self._load()
         self._last_mtime = self._current_mtime()
+        # Performance: Avoid stat() on every request
+        self._last_stat_check = 0.0
+        self._stat_check_interval = 1.0  # Only check mtime every 1 second
 
     def maybe_reload(self) -> None:
+        # Performance: Skip stat check if we checked recently
+        now = time.time()
+        if now - self._last_stat_check < self._stat_check_interval:
+            return
+        self._last_stat_check = now
         current = self._current_mtime()
         if current is None or current == self._last_mtime:
             return
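The reload change above layers a cheap time check in front of the `stat()` call, so hot paths pay for at most one filesystem hit per interval. The pattern in isolation (class and names are illustrative; `time.monotonic` is used here instead of `time.time` since it cannot jump backwards):

```python
import time
from pathlib import Path

class ThrottledReloader:
    """Stat the backing file at most once per interval; report when it changed."""

    def __init__(self, path: Path, interval: float = 1.0):
        self.path = path
        self.interval = interval
        self._last_check = 0.0
        self._last_mtime: float | None = None

    def maybe_reload(self) -> bool:
        now = time.monotonic()
        if now - self._last_check < self.interval:
            return False          # checked too recently: skip the stat() entirely
        self._last_check = now
        try:
            mtime = self.path.stat().st_mtime
        except FileNotFoundError:
            return False
        if mtime == self._last_mtime:
            return False
        self._last_mtime = mtime
        return True               # caller should re-read the file now
```

The trade-off is staleness bounded by the interval: an edit to the policy file may take up to one second to be noticed.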
@@ -188,13 +282,13 @@ class BucketPolicyStore:
         except FileNotFoundError:
             return None
 
-    # ------------------------------------------------------------------
     def evaluate(
         self,
         access_key: Optional[str],
         bucket: Optional[str],
         object_key: Optional[str],
         action: str,
+        context: Optional[Dict[str, Any]] = None,
     ) -> str | None:
         bucket = (bucket or "").lower()
         statements = self._policies.get(bucket) or []
@@ -206,6 +300,8 @@ class BucketPolicyStore:
                 continue
             if not statement.matches_resource(bucket, object_key):
                 continue
+            if not statement.matches_condition(context):
+                continue
             if statement.effect == "deny":
                 return "deny"
             decision = "allow"
@@ -229,7 +325,6 @@ class BucketPolicyStore:
         self._policies.pop(bucket, None)
         self._persist()
 
-    # ------------------------------------------------------------------
     def _load(self) -> None:
         try:
             content = self.policy_path.read_text(encoding='utf-8')
@@ -271,6 +366,7 @@ class BucketPolicyStore:
             if not resources:
                 continue
             effect = statement.get("Effect", "Allow").lower()
+            conditions = self._normalize_conditions(statement.get("Condition", {}))
             statements.append(
                 BucketPolicyStatement(
                     sid=statement.get("Sid"),
@@ -278,6 +374,24 @@ class BucketPolicyStore:
                     principals=principals,
                     actions=actions or ["*"],
                     resources=resources,
+                    conditions=conditions,
                 )
             )
         return statements
+
+    def _normalize_conditions(self, condition_block: Dict[str, Any]) -> Dict[str, Dict[str, List[str]]]:
+        if not condition_block or not isinstance(condition_block, dict):
+            return {}
+        normalized: Dict[str, Dict[str, List[str]]] = {}
+        for operator, key_values in condition_block.items():
+            if not isinstance(key_values, dict):
+                continue
+            normalized[operator] = {}
+            for cond_key, cond_values in key_values.items():
+                if isinstance(cond_values, str):
+                    normalized[operator][cond_key] = [cond_values]
+                elif isinstance(cond_values, list):
+                    normalized[operator][cond_key] = [str(v) for v in cond_values]
+                else:
+                    normalized[operator][cond_key] = [str(cond_values)]
+        return normalized
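`_normalize_conditions` exists because AWS policy JSON allows condition values to be a scalar, a list, or a non-string (e.g. a bare boolean), and the evaluator wants lists of strings everywhere. A standalone sketch of that coercion (function name illustrative):

```python
from typing import Any, Dict, List

def normalize_conditions(block: Dict[str, Any]) -> Dict[str, Dict[str, List[str]]]:
    """Coerce every condition value into a list of strings."""
    if not block or not isinstance(block, dict):
        return {}
    out: Dict[str, Dict[str, List[str]]] = {}
    for operator, key_values in block.items():
        if not isinstance(key_values, dict):
            continue  # malformed operator entry: skip rather than fail
        out[operator] = {}
        for key, values in key_values.items():
            if isinstance(values, str):
                out[operator][key] = [values]
            elif isinstance(values, list):
                out[operator][key] = [str(v) for v in values]
            else:
                out[operator][key] = [str(values)]
    return out

print(normalize_conditions({"IpAddress": {"aws:SourceIp": "10.0.0.0/8"}}))
# {'IpAddress': {'aws:SourceIp': ['10.0.0.0/8']}}
```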
94	app/compression.py	Normal file
@@ -0,0 +1,94 @@
+from __future__ import annotations
+
+import gzip
+import io
+from typing import Callable, Iterable, List, Tuple
+
+COMPRESSIBLE_MIMES = frozenset([
+    'application/json',
+    'application/javascript',
+    'application/xml',
+    'text/html',
+    'text/css',
+    'text/plain',
+    'text/xml',
+    'text/javascript',
+    'application/x-ndjson',
+])
+
+MIN_SIZE_FOR_COMPRESSION = 500
+
+
+class GzipMiddleware:
+    def __init__(self, app: Callable, compression_level: int = 6, min_size: int = MIN_SIZE_FOR_COMPRESSION):
+        self.app = app
+        self.compression_level = compression_level
+        self.min_size = min_size
+
+    def __call__(self, environ: dict, start_response: Callable) -> Iterable[bytes]:
+        accept_encoding = environ.get('HTTP_ACCEPT_ENCODING', '')
+        if 'gzip' not in accept_encoding.lower():
+            return self.app(environ, start_response)
+
+        response_started = False
+        status_code = None
+        response_headers: List[Tuple[str, str]] = []
+        content_type = None
+        content_length = None
+        should_compress = False
+        exc_info_holder = [None]
+
+        def custom_start_response(status: str, headers: List[Tuple[str, str]], exc_info=None):
+            nonlocal response_started, status_code, response_headers, content_type, content_length, should_compress
+            response_started = True
+            status_code = int(status.split(' ', 1)[0])
+            response_headers = list(headers)
+            exc_info_holder[0] = exc_info
+
+            for name, value in headers:
+                name_lower = name.lower()
+                if name_lower == 'content-type':
+                    content_type = value.split(';')[0].strip().lower()
+                elif name_lower == 'content-length':
+                    content_length = int(value)
+                elif name_lower == 'content-encoding':
+                    should_compress = False
+                    return start_response(status, headers, exc_info)
+
+            if content_type and content_type in COMPRESSIBLE_MIMES:
+                if content_length is None or content_length >= self.min_size:
+                    should_compress = True
+
+            return None
+
+        response_body = b''.join(self.app(environ, custom_start_response))
+
+        if not response_started:
+            return [response_body]
+
+        if should_compress and len(response_body) >= self.min_size:
+            buf = io.BytesIO()
+            with gzip.GzipFile(fileobj=buf, mode='wb', compresslevel=self.compression_level) as gz:
+                gz.write(response_body)
+            compressed = buf.getvalue()
+
+            if len(compressed) < len(response_body):
+                response_body = compressed
+                new_headers = []
+                for name, value in response_headers:
+                    if name.lower() not in ('content-length', 'content-encoding'):
+                        new_headers.append((name, value))
+                new_headers.append(('Content-Encoding', 'gzip'))
+                new_headers.append(('Content-Length', str(len(response_body))))
+                new_headers.append(('Vary', 'Accept-Encoding'))
+                response_headers = new_headers
+
+        status_str = f"{status_code} " + {
+            200: "OK", 201: "Created", 204: "No Content", 206: "Partial Content",
+            301: "Moved Permanently", 302: "Found", 304: "Not Modified",
+            400: "Bad Request", 401: "Unauthorized", 403: "Forbidden", 404: "Not Found",
+            405: "Method Not Allowed", 409: "Conflict", 500: "Internal Server Error",
+        }.get(status_code, "Unknown")
+
+        start_response(status_str, response_headers, exc_info_holder[0])
+        return [response_body]
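The core decision this middleware makes — buffer the body, gzip it, and only swap in the compressed bytes when they are actually smaller — can be shown without any WSGI plumbing. A minimal sketch of that size-gated compression (the helper name is illustrative):

```python
import gzip
import io

def maybe_gzip(body: bytes, min_size: int = 500, level: int = 6) -> tuple[bytes, bool]:
    """Return (payload, was_compressed). Compress only when it actually pays off."""
    if len(body) < min_size:
        return body, False          # tiny payloads: gzip overhead isn't worth it
    buf = io.BytesIO()
    with gzip.GzipFile(fileobj=buf, mode='wb', compresslevel=level) as gz:
        gz.write(body)
    compressed = buf.getvalue()
    if len(compressed) < len(body):
        return compressed, True     # caller must also rewrite Content-Length/-Encoding
    return body, False              # incompressible data: keep the original

body = b'{"k": "v"}' * 200
payload, used = maybe_gzip(body)
print(used)                                   # True
print(gzip.decompress(payload) == body)       # True
```

When the compressed payload wins, the surrounding middleware drops the stale `Content-Length` header, sets `Content-Encoding: gzip`, and adds `Vary: Accept-Encoding` so caches key on the client's encoding support.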
@@ -1,7 +1,7 @@
-"""Configuration helpers for the S3 clone application."""
 from __future__ import annotations
 
 import os
+import re
 import secrets
 import shutil
 import sys
@@ -10,6 +10,13 @@ from dataclasses import dataclass
 from pathlib import Path
 from typing import Any, Dict, Optional
 
+
+def _validate_rate_limit(value: str) -> str:
+    pattern = r"^\d+\s+per\s+(second|minute|hour|day)$"
+    if not re.match(pattern, value):
+        raise ValueError(f"Invalid rate limit format: {value}. Expected format: '200 per minute'")
+    return value
+
 if getattr(sys, "frozen", False):
     # Running in a PyInstaller bundle
     PROJECT_ROOT = Path(sys._MEIPASS)
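The new `_validate_rate_limit` helper fails fast at startup instead of letting a malformed `RATE_LIMIT_DEFAULT` surface later inside the rate limiter. Its behavior, shown standalone (the function body mirrors the diff; the demo calls are illustrative):

```python
import re

def validate_rate_limit(value: str) -> str:
    # Accepts "<count> per <second|minute|hour|day>", e.g. "200 per minute".
    pattern = r"^\d+\s+per\s+(second|minute|hour|day)$"
    if not re.match(pattern, value):
        raise ValueError(f"Invalid rate limit format: {value}. Expected format: '200 per minute'")
    return value

print(validate_rate_limit("200 per minute"))   # 200 per minute
try:
    validate_rate_limit("fast")
except ValueError:
    print("rejected")                          # rejected
```

Note the regex is deliberately strict: it rejects plural units like `"5 per hours"` and compound forms like `"10 per 2 seconds"`, which some rate-limiter libraries would otherwise accept.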
@@ -68,11 +75,21 @@ class AppConfig:
     stream_chunk_size: int
     multipart_min_part_size: int
     bucket_stats_cache_ttl: int
+    object_cache_ttl: int
     encryption_enabled: bool
     encryption_master_key_path: Path
     kms_enabled: bool
     kms_keys_path: Path
     default_encryption_algorithm: str
+    display_timezone: str
+    lifecycle_enabled: bool
+    lifecycle_interval_seconds: int
+    metrics_history_enabled: bool
+    metrics_history_retention_hours: int
+    metrics_history_interval_minutes: int
+    operation_metrics_enabled: bool
+    operation_metrics_interval_minutes: int
+    operation_metrics_retention_hours: int
 
     @classmethod
     def from_env(cls, overrides: Optional[Dict[str, Any]] = None) -> "AppConfig":
@@ -82,7 +99,7 @@ class AppConfig:
             return overrides.get(name, os.getenv(name, default))
 
         storage_root = Path(_get("STORAGE_ROOT", PROJECT_ROOT / "data")).resolve()
-        max_upload_size = int(_get("MAX_UPLOAD_SIZE", 1024 * 1024 * 1024))  # 1 GiB default
+        max_upload_size = int(_get("MAX_UPLOAD_SIZE", 1024 * 1024 * 1024))
         ui_page_size = int(_get("UI_PAGE_SIZE", 100))
         auth_max_attempts = int(_get("AUTH_MAX_ATTEMPTS", 5))
         auth_lockout_minutes = int(_get("AUTH_LOCKOUT_MINUTES", 15))
@@ -90,6 +107,8 @@ class AppConfig:
         secret_ttl_seconds = int(_get("SECRET_TTL_SECONDS", 300))
         stream_chunk_size = int(_get("STREAM_CHUNK_SIZE", 64 * 1024))
         multipart_min_part_size = int(_get("MULTIPART_MIN_PART_SIZE", 5 * 1024 * 1024))
+        lifecycle_enabled = _get("LIFECYCLE_ENABLED", "false").lower() in ("true", "1", "yes")
+        lifecycle_interval_seconds = int(_get("LIFECYCLE_INTERVAL_SECONDS", 3600))
         default_secret = "dev-secret-key"
         secret_key = str(_get("SECRET_KEY", default_secret))
 
@@ -104,6 +123,10 @@ class AppConfig:
             try:
                 secret_file.parent.mkdir(parents=True, exist_ok=True)
                 secret_file.write_text(generated)
+                try:
+                    os.chmod(secret_file, 0o600)
+                except OSError:
+                    pass
                 secret_key = generated
             except OSError:
                 secret_key = generated
@@ -139,7 +162,7 @@ class AppConfig:
         log_path = log_dir / str(_get("LOG_FILE", "app.log"))
         log_max_bytes = int(_get("LOG_MAX_BYTES", 5 * 1024 * 1024))
         log_backup_count = int(_get("LOG_BACKUP_COUNT", 3))
-        ratelimit_default = str(_get("RATE_LIMIT_DEFAULT", "200 per minute"))
+        ratelimit_default = _validate_rate_limit(str(_get("RATE_LIMIT_DEFAULT", "200 per minute")))
         ratelimit_storage_uri = str(_get("RATE_LIMIT_STORAGE_URI", "memory://"))
 
         def _csv(value: str, default: list[str]) -> list[str]:
@@ -153,15 +176,22 @@ class AppConfig:
         cors_allow_headers = _csv(str(_get("CORS_ALLOW_HEADERS", "*")), ["*"])
         cors_expose_headers = _csv(str(_get("CORS_EXPOSE_HEADERS", "*")), ["*"])
         session_lifetime_days = int(_get("SESSION_LIFETIME_DAYS", 30))
-        bucket_stats_cache_ttl = int(_get("BUCKET_STATS_CACHE_TTL", 60))  # Default 60 seconds
+        bucket_stats_cache_ttl = int(_get("BUCKET_STATS_CACHE_TTL", 60))
+        object_cache_ttl = int(_get("OBJECT_CACHE_TTL", 5))
-        # Encryption settings
         encryption_enabled = str(_get("ENCRYPTION_ENABLED", "0")).lower() in {"1", "true", "yes", "on"}
         encryption_keys_dir = storage_root / ".myfsio.sys" / "keys"
         encryption_master_key_path = Path(_get("ENCRYPTION_MASTER_KEY_PATH", encryption_keys_dir / "master.key")).resolve()
         kms_enabled = str(_get("KMS_ENABLED", "0")).lower() in {"1", "true", "yes", "on"}
         kms_keys_path = Path(_get("KMS_KEYS_PATH", encryption_keys_dir / "kms_keys.json")).resolve()
         default_encryption_algorithm = str(_get("DEFAULT_ENCRYPTION_ALGORITHM", "AES256"))
+        display_timezone = str(_get("DISPLAY_TIMEZONE", "UTC"))
+        metrics_history_enabled = str(_get("METRICS_HISTORY_ENABLED", "0")).lower() in {"1", "true", "yes", "on"}
+        metrics_history_retention_hours = int(_get("METRICS_HISTORY_RETENTION_HOURS", 24))
+        metrics_history_interval_minutes = int(_get("METRICS_HISTORY_INTERVAL_MINUTES", 5))
+        operation_metrics_enabled = str(_get("OPERATION_METRICS_ENABLED", "0")).lower() in {"1", "true", "yes", "on"}
+        operation_metrics_interval_minutes = int(_get("OPERATION_METRICS_INTERVAL_MINUTES", 5))
+        operation_metrics_retention_hours = int(_get("OPERATION_METRICS_RETENTION_HOURS", 24))
 
         return cls(storage_root=storage_root,
                    max_upload_size=max_upload_size,
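All of the new feature flags follow the same idiom: read the raw environment string and compare its lowercased form against a truthy set, so `"1"`, `"true"`, `"yes"`, and `"on"` all enable the feature. A tiny standalone sketch of that parsing (helper name and demo variable are illustrative):

```python
import os

TRUTHY = {"1", "true", "yes", "on"}

def env_flag(name: str, default: str = "0") -> bool:
    # Mirrors the config's pattern: str(...).lower() in {"1", "true", "yes", "on"}
    return str(os.getenv(name, default)).lower() in TRUTHY

os.environ["DEMO_FEATURE_FLAG"] = "Yes"
print(env_flag("DEMO_FEATURE_FLAG"))        # True
print(env_flag("DEMO_FEATURE_MISSING"))     # False
```

A side effect worth noting: anything outside the set, including `"off"`, `"no"`, or a typo like `"ture"`, silently parses as `False`.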
@@ -192,11 +222,21 @@ class AppConfig:
                    stream_chunk_size=stream_chunk_size,
                    multipart_min_part_size=multipart_min_part_size,
                    bucket_stats_cache_ttl=bucket_stats_cache_ttl,
+                   object_cache_ttl=object_cache_ttl,
                    encryption_enabled=encryption_enabled,
                    encryption_master_key_path=encryption_master_key_path,
                    kms_enabled=kms_enabled,
                    kms_keys_path=kms_keys_path,
-                   default_encryption_algorithm=default_encryption_algorithm)
+                   default_encryption_algorithm=default_encryption_algorithm,
+                   display_timezone=display_timezone,
+                   lifecycle_enabled=lifecycle_enabled,
+                   lifecycle_interval_seconds=lifecycle_interval_seconds,
+                   metrics_history_enabled=metrics_history_enabled,
+                   metrics_history_retention_hours=metrics_history_retention_hours,
+                   metrics_history_interval_minutes=metrics_history_interval_minutes,
+                   operation_metrics_enabled=operation_metrics_enabled,
+                   operation_metrics_interval_minutes=operation_metrics_interval_minutes,
+                   operation_metrics_retention_hours=operation_metrics_retention_hours)
 
     def validate_and_report(self) -> list[str]:
         """Validate configuration and return a list of warnings/issues.
@@ -206,7 +246,6 @@ class AppConfig:
         """
         issues = []
 
-        # Check if storage_root is writable
         try:
            test_file = self.storage_root / ".write_test"
            test_file.touch()
@@ -214,24 +253,20 @@ class AppConfig:
         except (OSError, PermissionError) as e:
             issues.append(f"CRITICAL: STORAGE_ROOT '{self.storage_root}' is not writable: {e}")
 
-        # Check if storage_root looks like a temp directory
         storage_str = str(self.storage_root).lower()
         if "/tmp" in storage_str or "\\temp" in storage_str or "appdata\\local\\temp" in storage_str:
             issues.append(f"WARNING: STORAGE_ROOT '{self.storage_root}' appears to be a temporary directory. Data may be lost on reboot!")
 
-        # Check if IAM config path is under storage_root
         try:
             self.iam_config_path.relative_to(self.storage_root)
         except ValueError:
             issues.append(f"WARNING: IAM_CONFIG '{self.iam_config_path}' is outside STORAGE_ROOT '{self.storage_root}'. Consider setting IAM_CONFIG explicitly or ensuring paths are aligned.")
 
-        # Check if bucket policy path is under storage_root
         try:
             self.bucket_policy_path.relative_to(self.storage_root)
         except ValueError:
             issues.append(f"WARNING: BUCKET_POLICY_PATH '{self.bucket_policy_path}' is outside STORAGE_ROOT '{self.storage_root}'. Consider setting BUCKET_POLICY_PATH explicitly.")
 
-        # Check if log path is writable
         try:
             self.log_path.parent.mkdir(parents=True, exist_ok=True)
             test_log = self.log_path.parent / ".write_test"
@@ -240,26 +275,22 @@ class AppConfig:
         except (OSError, PermissionError) as e:
             issues.append(f"WARNING: Log directory '{self.log_path.parent}' is not writable: {e}")
 
-        # Check log path location
         log_str = str(self.log_path).lower()
         if "/tmp" in log_str or "\\temp" in log_str or "appdata\\local\\temp" in log_str:
             issues.append(f"WARNING: LOG_DIR '{self.log_path.parent}' appears to be a temporary directory. Logs may be lost on reboot!")
 
-        # Check if encryption keys path is under storage_root (when encryption is enabled)
         if self.encryption_enabled:
             try:
                 self.encryption_master_key_path.relative_to(self.storage_root)
             except ValueError:
                 issues.append(f"WARNING: ENCRYPTION_MASTER_KEY_PATH '{self.encryption_master_key_path}' is outside STORAGE_ROOT. Ensure proper backup procedures.")
 
-        # Check if KMS keys path is under storage_root (when KMS is enabled)
         if self.kms_enabled:
             try:
                 self.kms_keys_path.relative_to(self.storage_root)
             except ValueError:
                 issues.append(f"WARNING: KMS_KEYS_PATH '{self.kms_keys_path}' is outside STORAGE_ROOT. Ensure proper backup procedures.")
 
-        # Warn about production settings
         if self.secret_key == "dev-secret-key":
             issues.append("WARNING: Using default SECRET_KEY. Set SECRET_KEY environment variable for production.")
 
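The containment checks above lean on `Path.relative_to`, which raises `ValueError` when the path does not live under the given root. A minimal sketch of that idiom (the `is_under` helper is illustrative, not part of the codebase):

```python
from pathlib import Path

def is_under(path: Path, root: Path) -> bool:
    """Return True if `path` lies under `root` (lexical check, no symlink resolution)."""
    try:
        path.relative_to(root)  # raises ValueError when `root` is not a prefix
        return True
    except ValueError:
        return False

print(is_under(Path("/srv/storage/keys/master.key"), Path("/srv/storage")))  # True
print(is_under(Path("/etc/keys/master.key"), Path("/srv/storage")))          # False
```

Note this is purely lexical: a symlink inside `STORAGE_ROOT` pointing elsewhere would still pass, which is acceptable for a backup-procedure warning.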
@@ -313,6 +344,7 @@ class AppConfig:
             "STREAM_CHUNK_SIZE": self.stream_chunk_size,
             "MULTIPART_MIN_PART_SIZE": self.multipart_min_part_size,
             "BUCKET_STATS_CACHE_TTL": self.bucket_stats_cache_ttl,
+            "OBJECT_CACHE_TTL": self.object_cache_ttl,
             "LOG_LEVEL": self.log_level,
             "LOG_TO_FILE": self.log_to_file,
             "LOG_FILE": str(self.log_path),
@@ -330,4 +362,13 @@ class AppConfig:
             "KMS_ENABLED": self.kms_enabled,
             "KMS_KEYS_PATH": str(self.kms_keys_path),
             "DEFAULT_ENCRYPTION_ALGORITHM": self.default_encryption_algorithm,
+            "DISPLAY_TIMEZONE": self.display_timezone,
+            "LIFECYCLE_ENABLED": self.lifecycle_enabled,
+            "LIFECYCLE_INTERVAL_SECONDS": self.lifecycle_interval_seconds,
+            "METRICS_HISTORY_ENABLED": self.metrics_history_enabled,
+            "METRICS_HISTORY_RETENTION_HOURS": self.metrics_history_retention_hours,
+            "METRICS_HISTORY_INTERVAL_MINUTES": self.metrics_history_interval_minutes,
+            "OPERATION_METRICS_ENABLED": self.operation_metrics_enabled,
+            "OPERATION_METRICS_INTERVAL_MINUTES": self.operation_metrics_interval_minutes,
+            "OPERATION_METRICS_RETENTION_HOURS": self.operation_metrics_retention_hours,
         }
@@ -1,4 +1,3 @@
-"""Manage remote S3 connections."""
 from __future__ import annotations
 
 import json
@@ -1,4 +1,3 @@
-"""Encrypted storage layer that wraps ObjectStorage with encryption support."""
 from __future__ import annotations
 
 import io
@@ -79,7 +78,7 @@ class EncryptedObjectStorage:
         kms_key_id: Optional[str] = None,
     ) -> ObjectMeta:
         """Store an object, optionally with encryption.
 
         Args:
             bucket_name: Name of the bucket
             object_key: Key for the object
@@ -87,42 +86,41 @@ class EncryptedObjectStorage:
             metadata: Optional user metadata
             server_side_encryption: Encryption algorithm ("AES256" or "aws:kms")
             kms_key_id: KMS key ID (for aws:kms encryption)
 
         Returns:
             ObjectMeta with object information
 
+        Performance: Uses streaming encryption for large files to reduce memory usage.
         """
         should_encrypt, algorithm, detected_kms_key = self._should_encrypt(
             bucket_name, server_side_encryption
         )
 
         if kms_key_id is None:
             kms_key_id = detected_kms_key
 
         if should_encrypt:
-            data = stream.read()
 
             try:
-                ciphertext, enc_metadata = self.encryption.encrypt_object(
-                    data,
+                # Performance: Use streaming encryption to avoid loading entire file into memory
+                encrypted_stream, enc_metadata = self.encryption.encrypt_stream(
+                    stream,
                     algorithm=algorithm,
-                    kms_key_id=kms_key_id,
                     context={"bucket": bucket_name, "key": object_key},
                 )
 
                 combined_metadata = metadata.copy() if metadata else {}
                 combined_metadata.update(enc_metadata.to_dict())
 
-                encrypted_stream = io.BytesIO(ciphertext)
                 result = self.storage.put_object(
                     bucket_name,
                     object_key,
                     encrypted_stream,
                     metadata=combined_metadata,
                 )
 
                 result.metadata = combined_metadata
                 return result
 
             except EncryptionError as exc:
                 raise StorageError(f"Encryption failed: {exc}") from exc
         else:
@@ -135,33 +133,34 @@ class EncryptedObjectStorage:
 
     def get_object_data(self, bucket_name: str, object_key: str) -> tuple[bytes, Dict[str, str]]:
         """Get object data, decrypting if necessary.
 
         Returns:
             Tuple of (data, metadata)
 
+        Performance: Uses streaming decryption to reduce memory usage.
         """
         path = self.storage.get_object_path(bucket_name, object_key)
         metadata = self.storage.get_object_metadata(bucket_name, object_key)
 
-        with path.open("rb") as f:
-            data = f.read()
 
         enc_metadata = EncryptionMetadata.from_dict(metadata)
         if enc_metadata:
             try:
-                data = self.encryption.decrypt_object(
-                    data,
-                    enc_metadata,
-                    context={"bucket": bucket_name, "key": object_key},
-                )
+                # Performance: Use streaming decryption to avoid loading entire file into memory
+                with path.open("rb") as f:
+                    decrypted_stream = self.encryption.decrypt_stream(f, enc_metadata)
+                    data = decrypted_stream.read()
             except EncryptionError as exc:
                 raise StorageError(f"Decryption failed: {exc}") from exc
+        else:
+            with path.open("rb") as f:
+                data = f.read()
 
         clean_metadata = {
             k: v for k, v in metadata.items()
             if not k.startswith("x-amz-encryption")
             and k != "x-amz-encrypted-data-key"
         }
 
         return data, clean_metadata
 
     def get_object_stream(self, bucket_name: str, object_key: str) -> tuple[BinaryIO, Dict[str, str], int]:
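The `decrypt_stream` call above consumes a simple container format: a 4-byte big-endian chunk count, then each encrypted chunk preceded by a fixed-size length prefix. A standalone sketch of that framing, assuming a 4-byte prefix matching the `HEADER_SIZE` usage in the diff (`frame_chunks`/`unframe_chunks` are hypothetical names, not the codebase's API):

```python
import io

HEADER_SIZE = 4  # assumption: matches StreamingEncryptor.HEADER_SIZE in the diff

def frame_chunks(chunks):
    """Write [count][len][chunk][len][chunk]... into a seekable buffer."""
    out = io.BytesIO()
    out.write(len(chunks).to_bytes(4, "big"))           # chunk count header
    for c in chunks:
        out.write(len(c).to_bytes(HEADER_SIZE, "big"))  # per-chunk length prefix
        out.write(c)
    out.seek(0)
    return out

def unframe_chunks(stream):
    """Read the framing back; yields each chunk's payload."""
    count = int.from_bytes(stream.read(4), "big")
    for _ in range(count):
        size = int.from_bytes(stream.read(HEADER_SIZE), "big")
        yield stream.read(size)

framed = frame_chunks([b"alpha", b"beta"])
print(list(unframe_chunks(framed)))  # [b'alpha', b'beta']
```

Length prefixes are needed because AES-GCM ciphertexts carry a per-chunk authentication tag, so encrypted chunks are longer than the plaintext chunk size.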
@@ -188,8 +187,11 @@ class EncryptedObjectStorage:
     def bucket_stats(self, bucket_name: str, cache_ttl: int = 60):
         return self.storage.bucket_stats(bucket_name, cache_ttl)
 
-    def list_objects(self, bucket_name: str):
-        return self.storage.list_objects(bucket_name)
+    def list_objects(self, bucket_name: str, **kwargs):
+        return self.storage.list_objects(bucket_name, **kwargs)
 
+    def list_objects_all(self, bucket_name: str):
+        return self.storage.list_objects_all(bucket_name)
+
     def get_object_path(self, bucket_name: str, object_key: str):
         return self.storage.get_object_path(bucket_name, object_key)
@@ -157,10 +157,7 @@ class LocalKeyEncryption(EncryptionProvider):
     def decrypt(self, ciphertext: bytes, nonce: bytes, encrypted_data_key: bytes,
                 key_id: str, context: Dict[str, str] | None = None) -> bytes:
         """Decrypt data using envelope encryption."""
-        # Decrypt the data key
        data_key = self._decrypt_data_key(encrypted_data_key)
 
-        # Decrypt the data
         aesgcm = AESGCM(data_key)
         try:
             return aesgcm.decrypt(nonce, ciphertext, None)
@@ -183,81 +180,94 @@ class StreamingEncryptor:
         self.chunk_size = chunk_size
 
     def _derive_chunk_nonce(self, base_nonce: bytes, chunk_index: int) -> bytes:
-        """Derive a unique nonce for each chunk."""
-        # XOR the base nonce with the chunk index
-        nonce_int = int.from_bytes(base_nonce, "big")
-        derived = nonce_int ^ chunk_index
-        return derived.to_bytes(12, "big")
-
-    def encrypt_stream(self, stream: BinaryIO,
-                       context: Dict[str, str] | None = None) -> tuple[BinaryIO, EncryptionMetadata]:
-        """Encrypt a stream and return encrypted stream + metadata."""
+        """Derive a unique nonce for each chunk.
+
+        Performance: Use direct byte manipulation instead of full int conversion.
+        """
+        # Performance: Only modify last 4 bytes instead of full 12-byte conversion
+        return base_nonce[:8] + (chunk_index ^ int.from_bytes(base_nonce[8:], "big")).to_bytes(4, "big")
+
+    def encrypt_stream(self, stream: BinaryIO,
+                       context: Dict[str, str] | None = None) -> tuple[BinaryIO, EncryptionMetadata]:
+        """Encrypt a stream and return encrypted stream + metadata.
+
+        Performance: Writes chunks directly to output buffer instead of accumulating in list.
+        """
         data_key, encrypted_data_key = self.provider.generate_data_key()
         base_nonce = secrets.token_bytes(12)
 
         aesgcm = AESGCM(data_key)
-        encrypted_chunks = []
+        # Performance: Write directly to BytesIO instead of accumulating chunks
+        output = io.BytesIO()
+        output.write(b"\x00\x00\x00\x00")  # Placeholder for chunk count
         chunk_index = 0
 
         while True:
             chunk = stream.read(self.chunk_size)
             if not chunk:
                 break
 
             chunk_nonce = self._derive_chunk_nonce(base_nonce, chunk_index)
             encrypted_chunk = aesgcm.encrypt(chunk_nonce, chunk, None)
 
-            size_prefix = len(encrypted_chunk).to_bytes(self.HEADER_SIZE, "big")
-            encrypted_chunks.append(size_prefix + encrypted_chunk)
+            # Write size prefix + encrypted chunk directly
+            output.write(len(encrypted_chunk).to_bytes(self.HEADER_SIZE, "big"))
+            output.write(encrypted_chunk)
             chunk_index += 1
 
-        header = chunk_index.to_bytes(4, "big")
-        encrypted_data = header + b"".join(encrypted_chunks)
+        # Write actual chunk count to header
+        output.seek(0)
+        output.write(chunk_index.to_bytes(4, "big"))
+        output.seek(0)
 
         metadata = EncryptionMetadata(
             algorithm="AES256",
             key_id=self.provider.KEY_ID if hasattr(self.provider, "KEY_ID") else "local",
             nonce=base_nonce,
             encrypted_data_key=encrypted_data_key,
         )
 
-        return io.BytesIO(encrypted_data), metadata
+        return output, metadata
 
     def decrypt_stream(self, stream: BinaryIO, metadata: EncryptionMetadata) -> BinaryIO:
-        """Decrypt a stream using the provided metadata."""
+        """Decrypt a stream using the provided metadata.
+
+        Performance: Writes chunks directly to output buffer instead of accumulating in list.
+        """
         if isinstance(self.provider, LocalKeyEncryption):
             data_key = self.provider._decrypt_data_key(metadata.encrypted_data_key)
         else:
             raise EncryptionError("Unsupported provider for streaming decryption")
 
         aesgcm = AESGCM(data_key)
         base_nonce = metadata.nonce
 
         chunk_count_bytes = stream.read(4)
         if len(chunk_count_bytes) < 4:
             raise EncryptionError("Invalid encrypted stream: missing header")
         chunk_count = int.from_bytes(chunk_count_bytes, "big")
 
-        decrypted_chunks = []
+        # Performance: Write directly to BytesIO instead of accumulating chunks
+        output = io.BytesIO()
         for chunk_index in range(chunk_count):
             size_bytes = stream.read(self.HEADER_SIZE)
             if len(size_bytes) < self.HEADER_SIZE:
                 raise EncryptionError(f"Invalid encrypted stream: truncated at chunk {chunk_index}")
             chunk_size = int.from_bytes(size_bytes, "big")
 
             encrypted_chunk = stream.read(chunk_size)
             if len(encrypted_chunk) < chunk_size:
                 raise EncryptionError(f"Invalid encrypted stream: incomplete chunk {chunk_index}")
 
             chunk_nonce = self._derive_chunk_nonce(base_nonce, chunk_index)
             try:
                 decrypted_chunk = aesgcm.decrypt(chunk_nonce, encrypted_chunk, None)
-                decrypted_chunks.append(decrypted_chunk)
+                output.write(decrypted_chunk)  # Write directly instead of appending to list
             except Exception as exc:
                 raise EncryptionError(f"Failed to decrypt chunk {chunk_index}: {exc}") from exc
 
-        return io.BytesIO(b"".join(decrypted_chunks))
+        output.seek(0)
+        return output
 
 
 class EncryptionManager:
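The old and new `_derive_chunk_nonce` implementations agree whenever `chunk_index` fits in 32 bits: XOR with a sub-2^32 index only touches the low 4 bytes of the 12-byte nonce, so slicing off the 8-byte prefix is safe. A quick equivalence check (function names are illustrative, not the class API):

```python
import secrets

def derive_nonce_full_int(base_nonce: bytes, chunk_index: int) -> bytes:
    # Original approach: XOR over the whole 12-byte nonce as one integer
    return (int.from_bytes(base_nonce, "big") ^ chunk_index).to_bytes(12, "big")

def derive_nonce_tail(base_nonce: bytes, chunk_index: int) -> bytes:
    # Optimized approach: only the last 4 bytes participate in the XOR
    return base_nonce[:8] + (chunk_index ^ int.from_bytes(base_nonce[8:], "big")).to_bytes(4, "big")

base = secrets.token_bytes(12)
assert all(
    derive_nonce_full_int(base, i) == derive_nonce_tail(base, i)
    for i in range(10_000)
)
```

The 32-bit bound is already enforced by the format: the chunk count header is 4 bytes, so no valid stream can reach a chunk index of 2^32.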
@@ -343,13 +353,113 @@ class EncryptionManager:
         return encryptor.decrypt_stream(stream, metadata)
 
 
+class SSECEncryption(EncryptionProvider):
+    """SSE-C: Server-Side Encryption with Customer-Provided Keys.
+
+    The client provides the encryption key with each request.
+    Server encrypts/decrypts but never stores the key.
+
+    Required headers for PUT:
+    - x-amz-server-side-encryption-customer-algorithm: AES256
+    - x-amz-server-side-encryption-customer-key: Base64-encoded 256-bit key
+    - x-amz-server-side-encryption-customer-key-MD5: Base64-encoded MD5 of key
+    """
+
+    KEY_ID = "customer-provided"
+
+    def __init__(self, customer_key: bytes):
+        if len(customer_key) != 32:
+            raise EncryptionError("Customer key must be exactly 256 bits (32 bytes)")
+        self.customer_key = customer_key
+
+    @classmethod
+    def from_headers(cls, headers: Dict[str, str]) -> "SSECEncryption":
+        algorithm = headers.get("x-amz-server-side-encryption-customer-algorithm", "")
+        if algorithm.upper() != "AES256":
+            raise EncryptionError(f"Unsupported SSE-C algorithm: {algorithm}. Only AES256 is supported.")
+
+        key_b64 = headers.get("x-amz-server-side-encryption-customer-key", "")
+        if not key_b64:
+            raise EncryptionError("Missing x-amz-server-side-encryption-customer-key header")
+
+        key_md5_b64 = headers.get("x-amz-server-side-encryption-customer-key-md5", "")
+
+        try:
+            customer_key = base64.b64decode(key_b64)
+        except Exception as e:
+            raise EncryptionError(f"Invalid base64 in customer key: {e}") from e
+
+        if len(customer_key) != 32:
+            raise EncryptionError(f"Customer key must be 256 bits, got {len(customer_key) * 8} bits")
+
+        if key_md5_b64:
+            import hashlib
+            expected_md5 = base64.b64encode(hashlib.md5(customer_key).digest()).decode()
+            if key_md5_b64 != expected_md5:
+                raise EncryptionError("Customer key MD5 mismatch")
+
+        return cls(customer_key)
+
+    def encrypt(self, plaintext: bytes, context: Dict[str, str] | None = None) -> EncryptionResult:
+        aesgcm = AESGCM(self.customer_key)
+        nonce = secrets.token_bytes(12)
+        ciphertext = aesgcm.encrypt(nonce, plaintext, None)
+
+        return EncryptionResult(
+            ciphertext=ciphertext,
+            nonce=nonce,
+            key_id=self.KEY_ID,
+            encrypted_data_key=b"",
+        )
+
+    def decrypt(self, ciphertext: bytes, nonce: bytes, encrypted_data_key: bytes,
+                key_id: str, context: Dict[str, str] | None = None) -> bytes:
+        aesgcm = AESGCM(self.customer_key)
+        try:
+            return aesgcm.decrypt(nonce, ciphertext, None)
+        except Exception as exc:
+            raise EncryptionError(f"SSE-C decryption failed: {exc}") from exc
+
+    def generate_data_key(self) -> tuple[bytes, bytes]:
+        return self.customer_key, b""
+
+
+@dataclass
+class SSECMetadata:
+    algorithm: str = "AES256"
+    nonce: bytes = b""
+    key_md5: str = ""
+
+    def to_dict(self) -> Dict[str, str]:
+        return {
+            "x-amz-server-side-encryption-customer-algorithm": self.algorithm,
+            "x-amz-encryption-nonce": base64.b64encode(self.nonce).decode(),
+            "x-amz-server-side-encryption-customer-key-MD5": self.key_md5,
+        }
+
+    @classmethod
+    def from_dict(cls, data: Dict[str, str]) -> Optional["SSECMetadata"]:
+        algorithm = data.get("x-amz-server-side-encryption-customer-algorithm")
+        if not algorithm:
+            return None
+        try:
+            nonce = base64.b64decode(data.get("x-amz-encryption-nonce", ""))
+            return cls(
+                algorithm=algorithm,
+                nonce=nonce,
+                key_md5=data.get("x-amz-server-side-encryption-customer-key-MD5", ""),
+            )
+        except Exception:
+            return None
+
+
 class ClientEncryptionHelper:
     """Helpers for client-side encryption.
 
     Client-side encryption is performed by the client, but this helper
     provides key generation and materials for clients that need them.
     """
 
     @staticmethod
     def generate_client_key() -> Dict[str, str]:
         """Generate a new client encryption key."""
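The `from_headers` validation added above can be exercised with headers built the way an SSE-C client would build them: base64-encode a 256-bit key and the base64 of its MD5 digest. A sketch of both sides of that handshake, with the server-side checks mirrored inline rather than calling the class (header names are the ones from the diff):

```python
import base64
import hashlib
import secrets

key = secrets.token_bytes(32)  # 256-bit customer-provided key
headers = {
    "x-amz-server-side-encryption-customer-algorithm": "AES256",
    "x-amz-server-side-encryption-customer-key": base64.b64encode(key).decode(),
    "x-amz-server-side-encryption-customer-key-md5":
        base64.b64encode(hashlib.md5(key).digest()).decode(),
}

# Server-side verification, mirroring SSECEncryption.from_headers:
decoded = base64.b64decode(headers["x-amz-server-side-encryption-customer-key"])
assert len(decoded) == 32, "key must be exactly 256 bits"
expected_md5 = base64.b64encode(hashlib.md5(decoded).digest()).decode()
assert headers["x-amz-server-side-encryption-customer-key-md5"] == expected_md5
```

The MD5 header is an integrity check on transport of the key, not a security mechanism; the key itself must only ever travel over TLS since the server never persists it.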
@@ -1,4 +1,3 @@
-"""Standardized error handling for API and UI responses."""
 from __future__ import annotations
 
 import logging
@@ -1,4 +1,3 @@
-"""Application-wide extension instances."""
 from flask import g
 from flask_limiter import Limiter
 from flask_limiter.util import get_remote_address
131 app/iam.py
@@ -1,21 +1,22 @@
-"""Lightweight IAM-style user and policy management."""
 from __future__ import annotations
 
+import hmac
 import json
 import math
 import secrets
+import time
 from collections import deque
 from dataclasses import dataclass
-from datetime import datetime, timedelta
+from datetime import datetime, timedelta, timezone
 from pathlib import Path
-from typing import Any, Deque, Dict, Iterable, List, Optional, Sequence, Set
+from typing import Any, Deque, Dict, Iterable, List, Optional, Sequence, Set, Tuple
 
 
 class IamError(RuntimeError):
     """Raised when authentication or authorization fails."""
 
 
-S3_ACTIONS = {"list", "read", "write", "delete", "share", "policy", "replication"}
+S3_ACTIONS = {"list", "read", "write", "delete", "share", "policy", "replication", "lifecycle", "cors"}
 IAM_ACTIONS = {
     "iam:list_users",
     "iam:create_user",
@@ -26,14 +27,12 @@ IAM_ACTIONS = {
 ALLOWED_ACTIONS = (S3_ACTIONS | IAM_ACTIONS) | {"iam:*"}
 
 ACTION_ALIASES = {
-    # List actions
     "list": "list",
     "s3:listbucket": "list",
     "s3:listallmybuckets": "list",
     "s3:listbucketversions": "list",
     "s3:listmultipartuploads": "list",
     "s3:listparts": "list",
-    # Read actions
     "read": "read",
     "s3:getobject": "read",
     "s3:getobjectversion": "read",
@@ -43,7 +42,6 @@ ACTION_ALIASES = {
     "s3:getbucketversioning": "read",
     "s3:headobject": "read",
     "s3:headbucket": "read",
-    # Write actions
     "write": "write",
     "s3:putobject": "write",
     "s3:createbucket": "write",
@@ -54,23 +52,19 @@ ACTION_ALIASES = {
     "s3:completemultipartupload": "write",
     "s3:abortmultipartupload": "write",
     "s3:copyobject": "write",
-    # Delete actions
     "delete": "delete",
     "s3:deleteobject": "delete",
     "s3:deleteobjectversion": "delete",
     "s3:deletebucket": "delete",
     "s3:deleteobjecttagging": "delete",
-    # Share actions (ACL)
     "share": "share",
     "s3:putobjectacl": "share",
     "s3:putbucketacl": "share",
     "s3:getbucketacl": "share",
-    # Policy actions
     "policy": "policy",
     "s3:putbucketpolicy": "policy",
     "s3:getbucketpolicy": "policy",
     "s3:deletebucketpolicy": "policy",
-    # Replication actions
     "replication": "replication",
     "s3:getreplicationconfiguration": "replication",
     "s3:putreplicationconfiguration": "replication",
@@ -78,7 +72,16 @@ ACTION_ALIASES = {
     "s3:replicateobject": "replication",
     "s3:replicatetags": "replication",
     "s3:replicatedelete": "replication",
-    # IAM actions
+    "lifecycle": "lifecycle",
+    "s3:getlifecycleconfiguration": "lifecycle",
+    "s3:putlifecycleconfiguration": "lifecycle",
+    "s3:deletelifecycleconfiguration": "lifecycle",
+    "s3:getbucketlifecycle": "lifecycle",
+    "s3:putbucketlifecycle": "lifecycle",
+    "cors": "cors",
+    "s3:getbucketcors": "cors",
+    "s3:putbucketcors": "cors",
+    "s3:deletebucketcors": "cors",
     "iam:listusers": "iam:list_users",
     "iam:createuser": "iam:create_user",
     "iam:deleteuser": "iam:delete_user",
@@ -115,17 +118,26 @@ class IamService:
         self._raw_config: Dict[str, Any] = {}
         self._failed_attempts: Dict[str, Deque[datetime]] = {}
         self._last_load_time = 0.0
+        self._credential_cache: Dict[str, Tuple[str, Principal, float]] = {}
+        self._cache_ttl = 60.0
+        self._last_stat_check = 0.0
+        self._stat_check_interval = 1.0
+        self._sessions: Dict[str, Dict[str, Any]] = {}
         self._load()
 
     def _maybe_reload(self) -> None:
         """Reload configuration if the file has changed on disk."""
+        now = time.time()
+        if now - self._last_stat_check < self._stat_check_interval:
+            return
+        self._last_stat_check = now
         try:
             if self.config_path.stat().st_mtime > self._last_load_time:
                 self._load()
+                self._credential_cache.clear()
         except OSError:
             pass
 
-    # ---------------------- authz helpers ----------------------
     def authenticate(self, access_key: str, secret_key: str) -> Principal:
         self._maybe_reload()
         access_key = (access_key or "").strip()
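The throttling added to `_maybe_reload` trades at most one interval of staleness for skipping a `stat()` syscall on every authentication. The pattern in isolation (class and attribute names here are illustrative, not the real `IamService` API):

```python
import tempfile
import time
from pathlib import Path

class ThrottledReloader:
    """Sketch of a stat-throttled config reload (illustrative names)."""

    def __init__(self, path: Path, interval: float = 1.0):
        self.path = path
        self.interval = interval
        self._last_stat_check = 0.0  # last time we actually called stat()
        self._last_load_time = 0.0   # mtime of the config we last loaded
        self.reload_count = 0

    def maybe_reload(self) -> None:
        now = time.time()
        if now - self._last_stat_check < self.interval:
            return  # throttle: skip the stat() syscall on hot paths
        self._last_stat_check = now
        try:
            mtime = self.path.stat().st_mtime
            if mtime > self._last_load_time:
                self._last_load_time = mtime
                self.reload_count += 1  # stands in for _load() + cache clear
        except OSError:
            pass  # file briefly missing mid-rewrite; keep current config

with tempfile.NamedTemporaryFile(delete=False) as tmp:
    cfg = Path(tmp.name)

reloader = ThrottledReloader(cfg, interval=60.0)
reloader.maybe_reload()  # first call: stat() runs, file loads
reloader.maybe_reload()  # within the interval: returns immediately
print(reloader.reload_count)  # 1
```

Clearing the credential cache on reload (as the diff does) is what keeps the cache and the config file from drifting apart.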
@@ -138,7 +150,7 @@ class IamService:
                 f"Access temporarily locked. Try again in {seconds} seconds."
             )
         record = self._users.get(access_key)
-        if not record or record["secret_key"] != secret_key:
+        if not record or not hmac.compare_digest(record["secret_key"], secret_key):
             self._record_failed_attempt(access_key)
             raise IamError("Invalid credentials")
         self._clear_failed_attempts(access_key)
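`hmac.compare_digest` replaces `!=` so the secret comparison takes time independent of how long a matching prefix is, closing a timing side channel on credential checks. A minimal illustration (the key value is a made-up example, not a real credential):

```python
import hmac

stored_secret = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"

def check(candidate: str) -> bool:
    # Constant-time comparison: runtime does not depend on how many
    # leading characters of the candidate happen to match.
    return hmac.compare_digest(stored_secret, candidate)

print(check("wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"))  # True
print(check("wrong-secret"))                               # False
```

Note `compare_digest` requires both arguments to be the same type (both `str` with ASCII content, or both `bytes`); the diff compares two strings, which satisfies that.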
@@ -149,7 +161,7 @@ class IamService:
             return
         attempts = self._failed_attempts.setdefault(access_key, deque())
         self._prune_attempts(attempts)
-        attempts.append(datetime.now())
+        attempts.append(datetime.now(timezone.utc))
 
     def _clear_failed_attempts(self, access_key: str) -> None:
         if not access_key:
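Switching every `datetime.now()` to `datetime.now(timezone.utc)` keeps the lockout deque uniformly timezone-aware; mixing naive and aware values would blow up at comparison time rather than silently misbehave. A small demonstration:

```python
from datetime import datetime, timezone

aware = datetime.now(timezone.utc)
naive = datetime.now()

# Comparing naive and aware datetimes raises TypeError, which is why the
# lockout bookkeeping must use one convention consistently.
try:
    _ = naive < aware
except TypeError as exc:
    print("comparison failed:", exc)

# Aware-vs-aware arithmetic is well-defined and unaffected by DST shifts:
elapsed = (datetime.now(timezone.utc) - aware).total_seconds()
print(elapsed >= 0)  # True
```

UTC also makes the lockout window immune to local clock jumps at daylight-saving transitions, which could otherwise extend or shorten a lockout by an hour.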
@@ -157,7 +169,7 @@ class IamService:
         self._failed_attempts.pop(access_key, None)
 
     def _prune_attempts(self, attempts: Deque[datetime]) -> None:
-        cutoff = datetime.now() - self.auth_lockout_window
+        cutoff = datetime.now(timezone.utc) - self.auth_lockout_window
         while attempts and attempts[0] < cutoff:
             attempts.popleft()
 
@@ -178,21 +190,73 @@ class IamService:
         if len(attempts) < self.auth_max_attempts:
             return 0
         oldest = attempts[0]
-        elapsed = (datetime.now() - oldest).total_seconds()
+        elapsed = (datetime.now(timezone.utc) - oldest).total_seconds()
         return int(max(0, self.auth_lockout_window.total_seconds() - elapsed))
 
-    def principal_for_key(self, access_key: str) -> Principal:
+    def create_session_token(self, access_key: str, duration_seconds: int = 3600) -> str:
+        """Create a temporary session token for an access key."""
         self._maybe_reload()
         record = self._users.get(access_key)
         if not record:
             raise IamError("Unknown access key")
-        return self._build_principal(access_key, record)
+        self._cleanup_expired_sessions()
+        token = secrets.token_urlsafe(32)
+        expires_at = time.time() + duration_seconds
+        self._sessions[token] = {
+            "access_key": access_key,
+            "expires_at": expires_at,
+        }
+        return token
+
+    def validate_session_token(self, access_key: str, session_token: str) -> bool:
+        """Validate a session token for an access key."""
+        session = self._sessions.get(session_token)
+        if not session:
+            return False
+        if session["access_key"] != access_key:
+            return False
+        if time.time() > session["expires_at"]:
+            del self._sessions[session_token]
+            return False
+        return True
+
+    def _cleanup_expired_sessions(self) -> None:
+        """Remove expired session tokens."""
+        now = time.time()
+        expired = [token for token, data in self._sessions.items() if now > data["expires_at"]]
+        for token in expired:
+            del self._sessions[token]
+
+    def principal_for_key(self, access_key: str) -> Principal:
+        now = time.time()
+        cached = self._credential_cache.get(access_key)
+        if cached:
+            secret, principal, cached_time = cached
+            if now - cached_time < self._cache_ttl:
|
||||||
|
return principal
|
||||||
|
|
||||||
|
self._maybe_reload()
|
||||||
|
record = self._users.get(access_key)
|
||||||
|
if not record:
|
||||||
|
raise IamError("Unknown access key")
|
||||||
|
principal = self._build_principal(access_key, record)
|
||||||
|
self._credential_cache[access_key] = (record["secret_key"], principal, now)
|
||||||
|
return principal
|
||||||
|
|
||||||
def secret_for_key(self, access_key: str) -> str:
|
def secret_for_key(self, access_key: str) -> str:
|
||||||
|
now = time.time()
|
||||||
|
cached = self._credential_cache.get(access_key)
|
||||||
|
if cached:
|
||||||
|
secret, principal, cached_time = cached
|
||||||
|
if now - cached_time < self._cache_ttl:
|
||||||
|
return secret
|
||||||
|
|
||||||
self._maybe_reload()
|
self._maybe_reload()
|
||||||
record = self._users.get(access_key)
|
record = self._users.get(access_key)
|
||||||
if not record:
|
if not record:
|
||||||
raise IamError("Unknown access key")
|
raise IamError("Unknown access key")
|
||||||
|
principal = self._build_principal(access_key, record)
|
||||||
|
self._credential_cache[access_key] = (record["secret_key"], principal, now)
|
||||||
return record["secret_key"]
|
return record["secret_key"]
|
||||||
|
|
||||||
def authorize(self, principal: Principal, bucket_name: str | None, action: str) -> None:
|
def authorize(self, principal: Principal, bucket_name: str | None, action: str) -> None:
|
||||||
@@ -218,7 +282,6 @@ class IamService:
|
|||||||
return True
|
return True
|
||||||
return False
|
return False
|
||||||
|
|
||||||
# ---------------------- management helpers ----------------------
|
|
||||||
def list_users(self) -> List[Dict[str, Any]]:
|
def list_users(self) -> List[Dict[str, Any]]:
|
||||||
listing: List[Dict[str, Any]] = []
|
listing: List[Dict[str, Any]] = []
|
||||||
for access_key, record in self._users.items():
|
for access_key, record in self._users.items():
|
||||||
@@ -291,7 +354,6 @@ class IamService:
|
|||||||
self._save()
|
self._save()
|
||||||
self._load()
|
self._load()
|
||||||
|
|
||||||
# ---------------------- config helpers ----------------------
|
|
||||||
def _load(self) -> None:
|
def _load(self) -> None:
|
||||||
try:
|
try:
|
||||||
self._last_load_time = self.config_path.stat().st_mtime
|
self._last_load_time = self.config_path.stat().st_mtime
|
||||||
@@ -337,7 +399,6 @@ class IamService:
|
|||||||
except (OSError, PermissionError) as e:
|
except (OSError, PermissionError) as e:
|
||||||
raise IamError(f"Cannot save IAM config: {e}")
|
raise IamError(f"Cannot save IAM config: {e}")
|
||||||
|
|
||||||
# ---------------------- insight helpers ----------------------
|
|
||||||
def config_summary(self) -> Dict[str, Any]:
|
def config_summary(self) -> Dict[str, Any]:
|
||||||
return {
|
return {
|
||||||
"path": str(self.config_path),
|
"path": str(self.config_path),
|
||||||
@@ -446,11 +507,33 @@ class IamService:
|
|||||||
raise IamError("User not found")
|
raise IamError("User not found")
|
||||||
|
|
||||||
def get_secret_key(self, access_key: str) -> str | None:
|
def get_secret_key(self, access_key: str) -> str | None:
|
||||||
|
now = time.time()
|
||||||
|
cached = self._credential_cache.get(access_key)
|
||||||
|
if cached:
|
||||||
|
secret, principal, cached_time = cached
|
||||||
|
if now - cached_time < self._cache_ttl:
|
||||||
|
return secret
|
||||||
|
|
||||||
self._maybe_reload()
|
self._maybe_reload()
|
||||||
record = self._users.get(access_key)
|
record = self._users.get(access_key)
|
||||||
return record["secret_key"] if record else None
|
if record:
|
||||||
|
principal = self._build_principal(access_key, record)
|
||||||
|
self._credential_cache[access_key] = (record["secret_key"], principal, now)
|
||||||
|
return record["secret_key"]
|
||||||
|
return None
|
||||||
|
|
||||||
def get_principal(self, access_key: str) -> Principal | None:
|
def get_principal(self, access_key: str) -> Principal | None:
|
||||||
|
now = time.time()
|
||||||
|
cached = self._credential_cache.get(access_key)
|
||||||
|
if cached:
|
||||||
|
secret, principal, cached_time = cached
|
||||||
|
if now - cached_time < self._cache_ttl:
|
||||||
|
return principal
|
||||||
|
|
||||||
self._maybe_reload()
|
self._maybe_reload()
|
||||||
record = self._users.get(access_key)
|
record = self._users.get(access_key)
|
||||||
return self._build_principal(access_key, record) if record else None
|
if record:
|
||||||
|
principal = self._build_principal(access_key, record)
|
||||||
|
self._credential_cache[access_key] = (record["secret_key"], principal, now)
|
||||||
|
return principal
|
||||||
|
return None
|
||||||
|
|||||||
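
The diff above swaps a plain `!=` secret comparison for `hmac.compare_digest`, which guards against timing attacks. A minimal standalone sketch of the difference (example values are hypothetical, not from this repo):

```python
import hmac

stored = "s3cretkey"
supplied = "s3cretkey"

# A naive comparison short-circuits at the first mismatching byte,
# so response timing can leak how much of the secret's prefix matched.
naive_equal = stored == supplied

# hmac.compare_digest examines every byte regardless of mismatches,
# so comparison time does not depend on where the inputs differ.
safe_equal = hmac.compare_digest(stored, supplied)

print(naive_equal, safe_equal)  # True True
```

Both arguments must be the same type (both `str` of ASCII characters, or both bytes-like) for `compare_digest`.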
app/kms.py (23 changes)
@@ -1,4 +1,3 @@
-"""Key Management Service (KMS) for encryption key management."""
from __future__ import annotations

import base64
@@ -211,7 +210,27 @@ class KMSManager:
        """List all keys."""
        self._load_keys()
        return list(self._keys.values())

+    def get_default_key_id(self) -> str:
+        """Get the default KMS key ID, creating one if none exist."""
+        self._load_keys()
+        for key in self._keys.values():
+            if key.enabled:
+                return key.key_id
+        default_key = self.create_key(description="Default KMS Key")
+        return default_key.key_id
+
+    def get_provider(self, key_id: str | None = None) -> "KMSEncryptionProvider":
+        """Get a KMS encryption provider for the specified key."""
+        if key_id is None:
+            key_id = self.get_default_key_id()
+        key = self.get_key(key_id)
+        if not key:
+            raise EncryptionError(f"Key not found: {key_id}")
+        if not key.enabled:
+            raise EncryptionError(f"Key is disabled: {key_id}")
+        return KMSEncryptionProvider(self, key_id)
+
    def enable_key(self, key_id: str) -> None:
        """Enable a key."""
        self._load_keys()
@@ -1,4 +1,3 @@
-"""KMS and encryption API endpoints."""
from __future__ import annotations

import base64
@@ -33,9 +32,6 @@ def _encryption():
def _error_response(code: str, message: str, status: int) -> tuple[Dict[str, Any], int]:
    return {"__type": code, "message": message}, status

-# ---------------------- Key Management ----------------------
-
-
@kms_api_bp.route("/keys", methods=["GET", "POST"])
@limiter.limit("30 per minute")
def list_or_create_keys():
@@ -65,7 +61,6 @@ def list_or_create_keys():
    except EncryptionError as exc:
        return _error_response("KMSInternalException", str(exc), 400)

-    # GET - List keys
    keys = kms.list_keys()
    return jsonify({
        "Keys": [{"KeyId": k.key_id, "KeyArn": k.arn} for k in keys],
@@ -96,7 +91,6 @@ def get_or_delete_key(key_id: str):
    except EncryptionError as exc:
        return _error_response("NotFoundException", str(exc), 404)

-    # GET
    key = kms.get_key(key_id)
    if not key:
        return _error_response("NotFoundException", f"Key not found: {key_id}", 404)
@@ -149,9 +143,6 @@ def disable_key(key_id: str):
    except EncryptionError as exc:
        return _error_response("NotFoundException", str(exc), 404)

-# ---------------------- Encryption Operations ----------------------
-
-
@kms_api_bp.route("/encrypt", methods=["POST"])
@limiter.limit("60 per minute")
def encrypt_data():
@@ -251,7 +242,6 @@ def generate_data_key():
    try:
        plaintext_key, encrypted_key = kms.generate_data_key(key_id, context)

-        # Trim key if AES_128 requested
        if key_spec == "AES_128":
            plaintext_key = plaintext_key[:16]

@@ -322,10 +312,7 @@ def re_encrypt():
        return _error_response("ValidationException", "CiphertextBlob must be base64 encoded", 400)

    try:
-        # First decrypt, get source key id
        plaintext, source_key_id = kms.decrypt(ciphertext, source_context)
-
-        # Re-encrypt with destination key
        new_ciphertext = kms.encrypt(destination_key_id, plaintext, destination_context)

        return jsonify({
@@ -365,9 +352,6 @@ def generate_random():
    except EncryptionError as exc:
        return _error_response("ValidationException", str(exc), 400)

-# ---------------------- Client-Side Encryption Helpers ----------------------
-
-
@kms_api_bp.route("/client/generate-key", methods=["POST"])
@limiter.limit("30 per minute")
def generate_client_key():
@@ -427,9 +411,6 @@ def client_decrypt():
    except Exception as exc:
        return _error_response("DecryptionError", str(exc), 400)

-# ---------------------- Encryption Materials for S3 Client-Side Encryption ----------------------
-
-
@kms_api_bp.route("/materials/<key_id>", methods=["POST"])
@limiter.limit("60 per minute")
def get_encryption_materials(key_id: str):
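
The `AES_128` branch in `generate_data_key` above returns the first 16 bytes of the generated key material. A standalone sketch of that trimming (the 32-byte generation step here is an assumption standing in for the real KMS call):

```python
import os

# Stand-in for the KMS-generated data key: 32 bytes, i.e. AES_256 material.
plaintext_key = os.urandom(32)

# For an AES_128 key spec, keep only the first 16 bytes (128 bits).
key_spec = "AES_128"
if key_spec == "AES_128":
    plaintext_key = plaintext_key[:16]

print(len(plaintext_key))  # 16
```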
app/lifecycle.py (new file, 335 lines)
@@ -0,0 +1,335 @@
from __future__ import annotations

import json
import logging
import threading
import time
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
from pathlib import Path
from typing import Any, Dict, List, Optional

from .storage import ObjectStorage, StorageError

logger = logging.getLogger(__name__)


@dataclass
class LifecycleResult:
    bucket_name: str
    objects_deleted: int = 0
    versions_deleted: int = 0
    uploads_aborted: int = 0
    errors: List[str] = field(default_factory=list)
    execution_time_seconds: float = 0.0


@dataclass
class LifecycleExecutionRecord:
    timestamp: float
    bucket_name: str
    objects_deleted: int
    versions_deleted: int
    uploads_aborted: int
    errors: List[str]
    execution_time_seconds: float

    def to_dict(self) -> dict:
        return {
            "timestamp": self.timestamp,
            "bucket_name": self.bucket_name,
            "objects_deleted": self.objects_deleted,
            "versions_deleted": self.versions_deleted,
            "uploads_aborted": self.uploads_aborted,
            "errors": self.errors,
            "execution_time_seconds": self.execution_time_seconds,
        }

    @classmethod
    def from_dict(cls, data: dict) -> "LifecycleExecutionRecord":
        return cls(
            timestamp=data["timestamp"],
            bucket_name=data["bucket_name"],
            objects_deleted=data["objects_deleted"],
            versions_deleted=data["versions_deleted"],
            uploads_aborted=data["uploads_aborted"],
            errors=data.get("errors", []),
            execution_time_seconds=data["execution_time_seconds"],
        )

    @classmethod
    def from_result(cls, result: LifecycleResult) -> "LifecycleExecutionRecord":
        return cls(
            timestamp=time.time(),
            bucket_name=result.bucket_name,
            objects_deleted=result.objects_deleted,
            versions_deleted=result.versions_deleted,
            uploads_aborted=result.uploads_aborted,
            errors=result.errors.copy(),
            execution_time_seconds=result.execution_time_seconds,
        )


class LifecycleHistoryStore:
    MAX_HISTORY_PER_BUCKET = 50

    def __init__(self, storage_root: Path) -> None:
        self.storage_root = storage_root
        self._lock = threading.Lock()

    def _get_history_path(self, bucket_name: str) -> Path:
        return self.storage_root / ".myfsio.sys" / "buckets" / bucket_name / "lifecycle_history.json"

    def load_history(self, bucket_name: str) -> List[LifecycleExecutionRecord]:
        path = self._get_history_path(bucket_name)
        if not path.exists():
            return []
        try:
            with open(path, "r") as f:
                data = json.load(f)
            return [LifecycleExecutionRecord.from_dict(d) for d in data.get("executions", [])]
        except (OSError, ValueError, KeyError) as e:
            logger.error(f"Failed to load lifecycle history for {bucket_name}: {e}")
            return []

    def save_history(self, bucket_name: str, records: List[LifecycleExecutionRecord]) -> None:
        path = self._get_history_path(bucket_name)
        path.parent.mkdir(parents=True, exist_ok=True)
        data = {"executions": [r.to_dict() for r in records[:self.MAX_HISTORY_PER_BUCKET]]}
        try:
            with open(path, "w") as f:
                json.dump(data, f, indent=2)
        except OSError as e:
            logger.error(f"Failed to save lifecycle history for {bucket_name}: {e}")

    def add_record(self, bucket_name: str, record: LifecycleExecutionRecord) -> None:
        with self._lock:
            records = self.load_history(bucket_name)
            records.insert(0, record)
            self.save_history(bucket_name, records)

    def get_history(self, bucket_name: str, limit: int = 50, offset: int = 0) -> List[LifecycleExecutionRecord]:
        records = self.load_history(bucket_name)
        return records[offset:offset + limit]


class LifecycleManager:
    def __init__(self, storage: ObjectStorage, interval_seconds: int = 3600, storage_root: Optional[Path] = None):
        self.storage = storage
        self.interval_seconds = interval_seconds
        self.storage_root = storage_root
        self._timer: Optional[threading.Timer] = None
        self._shutdown = False
        self._lock = threading.Lock()
        self.history_store = LifecycleHistoryStore(storage_root) if storage_root else None

    def start(self) -> None:
        if self._timer is not None:
            return
        self._shutdown = False
        self._schedule_next()
        logger.info(f"Lifecycle manager started with interval {self.interval_seconds}s")

    def stop(self) -> None:
        self._shutdown = True
        if self._timer:
            self._timer.cancel()
            self._timer = None
        logger.info("Lifecycle manager stopped")

    def _schedule_next(self) -> None:
        if self._shutdown:
            return
        self._timer = threading.Timer(self.interval_seconds, self._run_enforcement)
        self._timer.daemon = True
        self._timer.start()

    def _run_enforcement(self) -> None:
        if self._shutdown:
            return
        try:
            self.enforce_all_buckets()
        except Exception as e:
            logger.error(f"Lifecycle enforcement failed: {e}")
        finally:
            self._schedule_next()

    def enforce_all_buckets(self) -> Dict[str, LifecycleResult]:
        results = {}
        try:
            buckets = self.storage.list_buckets()
            for bucket in buckets:
                result = self.enforce_rules(bucket.name)
                if result.objects_deleted > 0 or result.versions_deleted > 0 or result.uploads_aborted > 0:
                    results[bucket.name] = result
        except StorageError as e:
            logger.error(f"Failed to list buckets for lifecycle: {e}")
        return results

    def enforce_rules(self, bucket_name: str) -> LifecycleResult:
        start_time = time.time()
        result = LifecycleResult(bucket_name=bucket_name)

        try:
            lifecycle = self.storage.get_bucket_lifecycle(bucket_name)
            if not lifecycle:
                return result

            for rule in lifecycle:
                if rule.get("Status") != "Enabled":
                    continue
                rule_id = rule.get("ID", "unknown")
                prefix = rule.get("Prefix", rule.get("Filter", {}).get("Prefix", ""))

                self._enforce_expiration(bucket_name, rule, prefix, result)
                self._enforce_noncurrent_expiration(bucket_name, rule, prefix, result)
                self._enforce_abort_multipart(bucket_name, rule, result)

        except StorageError as e:
            result.errors.append(str(e))
            logger.error(f"Lifecycle enforcement error for {bucket_name}: {e}")

        result.execution_time_seconds = time.time() - start_time
        if result.objects_deleted > 0 or result.versions_deleted > 0 or result.uploads_aborted > 0 or result.errors:
            logger.info(
                f"Lifecycle enforcement for {bucket_name}: "
                f"deleted={result.objects_deleted}, versions={result.versions_deleted}, "
                f"aborted={result.uploads_aborted}, time={result.execution_time_seconds:.2f}s"
            )
            if self.history_store:
                record = LifecycleExecutionRecord.from_result(result)
                self.history_store.add_record(bucket_name, record)
        return result

    def _enforce_expiration(
        self, bucket_name: str, rule: Dict[str, Any], prefix: str, result: LifecycleResult
    ) -> None:
        expiration = rule.get("Expiration", {})
        if not expiration:
            return

        days = expiration.get("Days")
        date_str = expiration.get("Date")

        if days:
            cutoff = datetime.now(timezone.utc) - timedelta(days=days)
        elif date_str:
            try:
                cutoff = datetime.fromisoformat(date_str.replace("Z", "+00:00"))
            except ValueError:
                return
        else:
            return

        try:
            objects = self.storage.list_objects_all(bucket_name)
            for obj in objects:
                if prefix and not obj.key.startswith(prefix):
                    continue
                if obj.last_modified < cutoff:
                    try:
                        self.storage.delete_object(bucket_name, obj.key)
                        result.objects_deleted += 1
                    except StorageError as e:
                        result.errors.append(f"Failed to delete {obj.key}: {e}")
        except StorageError as e:
            result.errors.append(f"Failed to list objects: {e}")

    def _enforce_noncurrent_expiration(
        self, bucket_name: str, rule: Dict[str, Any], prefix: str, result: LifecycleResult
    ) -> None:
        noncurrent = rule.get("NoncurrentVersionExpiration", {})
        noncurrent_days = noncurrent.get("NoncurrentDays")
        if not noncurrent_days:
            return

        cutoff = datetime.now(timezone.utc) - timedelta(days=noncurrent_days)

        try:
            objects = self.storage.list_objects_all(bucket_name)
            for obj in objects:
                if prefix and not obj.key.startswith(prefix):
                    continue
                try:
                    versions = self.storage.list_object_versions(bucket_name, obj.key)
                    for version in versions:
                        archived_at_str = version.get("archived_at", "")
                        if not archived_at_str:
                            continue
                        try:
                            archived_at = datetime.fromisoformat(archived_at_str.replace("Z", "+00:00"))
                            if archived_at < cutoff:
                                version_id = version.get("version_id")
                                if version_id:
                                    self.storage.delete_object_version(bucket_name, obj.key, version_id)
                                    result.versions_deleted += 1
                        except (ValueError, StorageError) as e:
                            result.errors.append(f"Failed to process version: {e}")
                except StorageError:
                    pass
        except StorageError as e:
            result.errors.append(f"Failed to list objects: {e}")

        try:
            orphaned = self.storage.list_orphaned_objects(bucket_name)
            for item in orphaned:
                obj_key = item.get("key", "")
                if prefix and not obj_key.startswith(prefix):
                    continue
                try:
                    versions = self.storage.list_object_versions(bucket_name, obj_key)
                    for version in versions:
                        archived_at_str = version.get("archived_at", "")
                        if not archived_at_str:
                            continue
                        try:
                            archived_at = datetime.fromisoformat(archived_at_str.replace("Z", "+00:00"))
                            if archived_at < cutoff:
                                version_id = version.get("version_id")
                                if version_id:
                                    self.storage.delete_object_version(bucket_name, obj_key, version_id)
                                    result.versions_deleted += 1
                        except (ValueError, StorageError) as e:
                            result.errors.append(f"Failed to process orphaned version: {e}")
                except StorageError:
                    pass
        except StorageError as e:
            result.errors.append(f"Failed to list orphaned objects: {e}")

    def _enforce_abort_multipart(
        self, bucket_name: str, rule: Dict[str, Any], result: LifecycleResult
    ) -> None:
        abort_config = rule.get("AbortIncompleteMultipartUpload", {})
        days_after = abort_config.get("DaysAfterInitiation")
        if not days_after:
            return

        cutoff = datetime.now(timezone.utc) - timedelta(days=days_after)

        try:
            uploads = self.storage.list_multipart_uploads(bucket_name)
            for upload in uploads:
                created_at_str = upload.get("created_at", "")
                if not created_at_str:
                    continue
                try:
                    created_at = datetime.fromisoformat(created_at_str.replace("Z", "+00:00"))
                    if created_at < cutoff:
                        upload_id = upload.get("upload_id")
                        if upload_id:
                            self.storage.abort_multipart_upload(bucket_name, upload_id)
                            result.uploads_aborted += 1
                except (ValueError, StorageError) as e:
                    result.errors.append(f"Failed to abort upload: {e}")
        except StorageError as e:
            result.errors.append(f"Failed to list multipart uploads: {e}")

    def run_now(self, bucket_name: Optional[str] = None) -> Dict[str, LifecycleResult]:
        if bucket_name:
            return {bucket_name: self.enforce_rules(bucket_name)}
        return self.enforce_all_buckets()

    def get_execution_history(self, bucket_name: str, limit: int = 50, offset: int = 0) -> List[LifecycleExecutionRecord]:
        if not self.history_store:
            return []
        return self.history_store.get_history(bucket_name, limit, offset)
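
The lifecycle rules above repeatedly parse timestamps with `datetime.fromisoformat(s.replace("Z", "+00:00"))` before comparing against a timezone-aware cutoff. A standalone sketch of that pattern (the example timestamp and cutoff are made up):

```python
from datetime import datetime, timedelta, timezone

# Older Python versions' fromisoformat does not accept a trailing "Z",
# hence the replace("Z", "+00:00") normalization used by the lifecycle code.
stamp = "2024-05-01T12:00:00Z"
archived_at = datetime.fromisoformat(stamp.replace("Z", "+00:00"))

# Both sides of the comparison are timezone-aware, so "<" is well defined;
# comparing aware and naive datetimes would raise TypeError instead.
cutoff = datetime(2024, 6, 1, tzinfo=timezone.utc) - timedelta(days=7)
print(archived_at < cutoff)  # True
```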
334
app/notifications.py
Normal file
334
app/notifications.py
Normal file
@@ -0,0 +1,334 @@
|
|||||||
|
from __future__ import annotations
|
||||||
|
|
||||||
|
import json
|
||||||
|
import logging
|
||||||
|
import queue
|
||||||
|
import threading
|
||||||
|
import time
|
||||||
|
import uuid
|
||||||
|
from dataclasses import dataclass, field
|
||||||
|
from datetime import datetime, timezone
|
||||||
|
from pathlib import Path
|
||||||
|
from typing import Any, Dict, List, Optional
|
||||||
|
from urllib.parse import urlparse
|
||||||
|
|
||||||
|
import requests
|
||||||
|
|
||||||
|
logger = logging.getLogger(__name__)
|
||||||
|
|
||||||
|
|
||||||
|
@dataclass
|
||||||
|
class NotificationEvent:
|
||||||
|
event_name: str
|
||||||
|
bucket_name: str
|
||||||
|
object_key: str
|
||||||
|
object_size: int = 0
|
||||||
|
etag: str = ""
|
||||||
|
version_id: Optional[str] = None
|
||||||
|
timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
|
||||||
|
request_id: str = field(default_factory=lambda: uuid.uuid4().hex)
|
||||||
|
source_ip: str = ""
|
||||||
|
user_identity: str = ""
|
||||||
|
|
||||||
|
def to_s3_event(self) -> Dict[str, Any]:
|
||||||
|
return {
|
||||||
|
"Records": [
|
||||||
|
{
|
||||||
|
"eventVersion": "2.1",
|
||||||
|
"eventSource": "myfsio:s3",
|
||||||
|
"awsRegion": "local",
|
||||||
|
"eventTime": self.timestamp.strftime("%Y-%m-%dT%H:%M:%S.000Z"),
|
||||||
|
"eventName": self.event_name,
|
||||||
|
"userIdentity": {
|
||||||
|
"principalId": self.user_identity or "ANONYMOUS",
|
||||||
|
},
|
||||||
|
"requestParameters": {
|
||||||
|
"sourceIPAddress": self.source_ip or "127.0.0.1",
|
||||||
|
},
|
||||||
|
"responseElements": {
|
||||||
|
"x-amz-request-id": self.request_id,
|
||||||
|
"x-amz-id-2": self.request_id,
|
||||||
|
},
|
||||||
|
"s3": {
|
||||||
|
"s3SchemaVersion": "1.0",
|
||||||
|
"configurationId": "notification",
|
||||||
|
"bucket": {
|
||||||
|
"name": self.bucket_name,
|
||||||
|
"ownerIdentity": {"principalId": "local"},
|
||||||
|
"arn": f"arn:aws:s3:::{self.bucket_name}",
|
||||||
|
},
|
||||||
|
"object": {
|
||||||
|
"key": self.object_key,
|
||||||
|
"size": self.object_size,
|
||||||
|
"eTag": self.etag,
|
||||||
|
"versionId": self.version_id or "null",
|
||||||
|
"sequencer": f"{int(time.time() * 1000):016X}",
|
||||||
|
},
|
||||||
|
},
|
||||||
|
}
|
||||||
|
]
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
@dataclass
|
||||||
|
class WebhookDestination:
|
||||||
|
url: str
|
||||||
|
headers: Dict[str, str] = field(default_factory=dict)
|
||||||
|
timeout_seconds: int = 30
|
||||||
|
    retry_count: int = 3
    retry_delay_seconds: int = 1

    def to_dict(self) -> Dict[str, Any]:
        return {
            "url": self.url,
            "headers": self.headers,
            "timeout_seconds": self.timeout_seconds,
            "retry_count": self.retry_count,
            "retry_delay_seconds": self.retry_delay_seconds,
        }

    @classmethod
    def from_dict(cls, data: Dict[str, Any]) -> "WebhookDestination":
        return cls(
            url=data.get("url", ""),
            headers=data.get("headers", {}),
            timeout_seconds=data.get("timeout_seconds", 30),
            retry_count=data.get("retry_count", 3),
            retry_delay_seconds=data.get("retry_delay_seconds", 1),
        )


@dataclass
class NotificationConfiguration:
    id: str
    events: List[str]
    destination: WebhookDestination
    prefix_filter: str = ""
    suffix_filter: str = ""

    def matches_event(self, event_name: str, object_key: str) -> bool:
        event_match = False
        for pattern in self.events:
            if pattern.endswith("*"):
                base = pattern[:-1]
                if event_name.startswith(base):
                    event_match = True
                    break
            elif pattern == event_name:
                event_match = True
                break

        if not event_match:
            return False

        if self.prefix_filter and not object_key.startswith(self.prefix_filter):
            return False
        if self.suffix_filter and not object_key.endswith(self.suffix_filter):
            return False

        return True

    def to_dict(self) -> Dict[str, Any]:
        return {
            "Id": self.id,
            "Events": self.events,
            "Destination": self.destination.to_dict(),
            "Filter": {
                "Key": {
                    "FilterRules": [
                        {"Name": "prefix", "Value": self.prefix_filter},
                        {"Name": "suffix", "Value": self.suffix_filter},
                    ]
                }
            },
        }

    @classmethod
    def from_dict(cls, data: Dict[str, Any]) -> "NotificationConfiguration":
        prefix = ""
        suffix = ""
        filter_data = data.get("Filter", {})
        key_filter = filter_data.get("Key", {})
        for rule in key_filter.get("FilterRules", []):
            if rule.get("Name") == "prefix":
                prefix = rule.get("Value", "")
            elif rule.get("Name") == "suffix":
                suffix = rule.get("Value", "")

        return cls(
            id=data.get("Id", uuid.uuid4().hex),
            events=data.get("Events", []),
            destination=WebhookDestination.from_dict(data.get("Destination", {})),
            prefix_filter=prefix,
            suffix_filter=suffix,
        )

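The wildcard handling in `matches_event` above treats a trailing `*` as a prefix match on the event name, then applies the key prefix/suffix filters. A minimal standalone sketch of the same rules (the function name `event_matches` is illustrative, not part of the module):

```python
def event_matches(patterns, event_name, object_key, prefix="", suffix=""):
    """Mirror of NotificationConfiguration.matches_event: wildcard event
    patterns plus optional key prefix/suffix filters."""
    # A trailing "*" makes the pattern a prefix match on the event name.
    if not any(
        event_name.startswith(p[:-1]) if p.endswith("*") else p == event_name
        for p in patterns
    ):
        return False
    if prefix and not object_key.startswith(prefix):
        return False
    if suffix and not object_key.endswith(suffix):
        return False
    return True


print(event_matches(["s3:ObjectCreated:*"], "s3:ObjectCreated:Put", "logs/a.csv", prefix="logs/"))
```

Note that both filters must pass: a key matching the prefix but not the suffix is still rejected.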
class NotificationService:
    def __init__(self, storage_root: Path, worker_count: int = 2):
        self.storage_root = storage_root
        self._configs: Dict[str, List[NotificationConfiguration]] = {}
        self._queue: queue.Queue[tuple[NotificationEvent, WebhookDestination]] = queue.Queue()
        self._workers: List[threading.Thread] = []
        self._shutdown = threading.Event()
        self._stats = {
            "events_queued": 0,
            "events_sent": 0,
            "events_failed": 0,
        }

        for i in range(worker_count):
            worker = threading.Thread(target=self._worker_loop, name=f"notification-worker-{i}", daemon=True)
            worker.start()
            self._workers.append(worker)

    def _config_path(self, bucket_name: str) -> Path:
        return self.storage_root / ".myfsio.sys" / "buckets" / bucket_name / "notifications.json"

    def get_bucket_notifications(self, bucket_name: str) -> List[NotificationConfiguration]:
        if bucket_name in self._configs:
            return self._configs[bucket_name]

        config_path = self._config_path(bucket_name)
        if not config_path.exists():
            return []

        try:
            data = json.loads(config_path.read_text(encoding="utf-8"))
            configs = [NotificationConfiguration.from_dict(c) for c in data.get("configurations", [])]
            self._configs[bucket_name] = configs
            return configs
        except (json.JSONDecodeError, OSError) as e:
            logger.warning(f"Failed to load notification config for {bucket_name}: {e}")
            return []

    def set_bucket_notifications(
        self, bucket_name: str, configurations: List[NotificationConfiguration]
    ) -> None:
        config_path = self._config_path(bucket_name)
        config_path.parent.mkdir(parents=True, exist_ok=True)

        data = {"configurations": [c.to_dict() for c in configurations]}
        config_path.write_text(json.dumps(data, indent=2), encoding="utf-8")
        self._configs[bucket_name] = configurations

    def delete_bucket_notifications(self, bucket_name: str) -> None:
        config_path = self._config_path(bucket_name)
        try:
            if config_path.exists():
                config_path.unlink()
        except OSError:
            pass
        self._configs.pop(bucket_name, None)

    def emit_event(self, event: NotificationEvent) -> None:
        configurations = self.get_bucket_notifications(event.bucket_name)
        if not configurations:
            return

        for config in configurations:
            if config.matches_event(event.event_name, event.object_key):
                self._queue.put((event, config.destination))
                self._stats["events_queued"] += 1
                logger.debug(
                    f"Queued notification for {event.event_name} on {event.bucket_name}/{event.object_key}"
                )

    def emit_object_created(
        self,
        bucket_name: str,
        object_key: str,
        *,
        size: int = 0,
        etag: str = "",
        version_id: Optional[str] = None,
        request_id: str = "",
        source_ip: str = "",
        user_identity: str = "",
        operation: str = "Put",
    ) -> None:
        event = NotificationEvent(
            event_name=f"s3:ObjectCreated:{operation}",
            bucket_name=bucket_name,
            object_key=object_key,
            object_size=size,
            etag=etag,
            version_id=version_id,
            request_id=request_id or uuid.uuid4().hex,
            source_ip=source_ip,
            user_identity=user_identity,
        )
        self.emit_event(event)

    def emit_object_removed(
        self,
        bucket_name: str,
        object_key: str,
        *,
        version_id: Optional[str] = None,
        request_id: str = "",
        source_ip: str = "",
        user_identity: str = "",
        operation: str = "Delete",
    ) -> None:
        event = NotificationEvent(
            event_name=f"s3:ObjectRemoved:{operation}",
            bucket_name=bucket_name,
            object_key=object_key,
            version_id=version_id,
            request_id=request_id or uuid.uuid4().hex,
            source_ip=source_ip,
            user_identity=user_identity,
        )
        self.emit_event(event)

    def _worker_loop(self) -> None:
        while not self._shutdown.is_set():
            try:
                event, destination = self._queue.get(timeout=1.0)
            except queue.Empty:
                continue

            try:
                self._send_notification(event, destination)
                self._stats["events_sent"] += 1
            except Exception as e:
                self._stats["events_failed"] += 1
                logger.error(f"Failed to send notification: {e}")
            finally:
                self._queue.task_done()

    def _send_notification(self, event: NotificationEvent, destination: WebhookDestination) -> None:
        payload = event.to_s3_event()
        headers = {"Content-Type": "application/json", **destination.headers}

        last_error = None
        for attempt in range(destination.retry_count):
            try:
                response = requests.post(
                    destination.url,
                    json=payload,
                    headers=headers,
                    timeout=destination.timeout_seconds,
                )
                if response.status_code < 400:
                    logger.info(
                        f"Notification sent: {event.event_name} -> {destination.url} (status={response.status_code})"
                    )
                    return
                last_error = f"HTTP {response.status_code}: {response.text[:200]}"
            except requests.RequestException as e:
                last_error = str(e)

            if attempt < destination.retry_count - 1:
                time.sleep(destination.retry_delay_seconds * (attempt + 1))

        raise RuntimeError(f"Failed after {destination.retry_count} attempts: {last_error}")

    def get_stats(self) -> Dict[str, int]:
        return dict(self._stats)

    def shutdown(self) -> None:
        self._shutdown.set()
        for worker in self._workers:
            worker.join(timeout=5.0)
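`_send_notification` above retries with a linearly growing delay: `retry_delay_seconds * (attempt + 1)` between attempts, with no sleep after the final attempt. A small sketch that isolates just that schedule (the helper name `delivery_delays` is illustrative, not part of the module):

```python
def delivery_delays(retry_count, retry_delay_seconds):
    """Sleep durations between webhook attempts, matching the backoff in
    _send_notification: delay grows linearly, and there is no sleep after
    the last attempt."""
    return [retry_delay_seconds * (attempt + 1) for attempt in range(retry_count - 1)]


# With the defaults (retry_count=3, retry_delay_seconds=1) the worker
# sleeps 1s after the first failure and 2s after the second.
print(delivery_delays(3, 1))
```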
234
app/object_lock.py
Normal file
@@ -0,0 +1,234 @@
from __future__ import annotations

import json
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum
from pathlib import Path
from typing import Any, Dict, Optional


class RetentionMode(Enum):
    GOVERNANCE = "GOVERNANCE"
    COMPLIANCE = "COMPLIANCE"


class ObjectLockError(Exception):
    pass


@dataclass
class ObjectLockRetention:
    mode: RetentionMode
    retain_until_date: datetime

    def to_dict(self) -> Dict[str, str]:
        return {
            "Mode": self.mode.value,
            "RetainUntilDate": self.retain_until_date.isoformat(),
        }

    @classmethod
    def from_dict(cls, data: Dict[str, Any]) -> Optional["ObjectLockRetention"]:
        if not data:
            return None
        mode_str = data.get("Mode")
        date_str = data.get("RetainUntilDate")
        if not mode_str or not date_str:
            return None
        try:
            mode = RetentionMode(mode_str)
            retain_until = datetime.fromisoformat(date_str.replace("Z", "+00:00"))
            return cls(mode=mode, retain_until_date=retain_until)
        except (ValueError, KeyError):
            return None

    def is_expired(self) -> bool:
        return datetime.now(timezone.utc) > self.retain_until_date


@dataclass
class ObjectLockConfig:
    enabled: bool = False
    default_retention: Optional[ObjectLockRetention] = None

    def to_dict(self) -> Dict[str, Any]:
        result: Dict[str, Any] = {"ObjectLockEnabled": "Enabled" if self.enabled else "Disabled"}
        if self.default_retention:
            result["Rule"] = {
                "DefaultRetention": {
                    "Mode": self.default_retention.mode.value,
                    "Days": None,
                    "Years": None,
                }
            }
        return result

    @classmethod
    def from_dict(cls, data: Dict[str, Any]) -> "ObjectLockConfig":
        enabled = data.get("ObjectLockEnabled") == "Enabled"
        default_retention = None
        rule = data.get("Rule")
        if rule and "DefaultRetention" in rule:
            dr = rule["DefaultRetention"]
            mode_str = dr.get("Mode", "GOVERNANCE")
            days = dr.get("Days")
            years = dr.get("Years")
            if days or years:
                from datetime import timedelta
                now = datetime.now(timezone.utc)
                if years:
                    delta = timedelta(days=int(years) * 365)
                else:
                    delta = timedelta(days=int(days))
                default_retention = ObjectLockRetention(
                    mode=RetentionMode(mode_str),
                    retain_until_date=now + delta,
                )
        return cls(enabled=enabled, default_retention=default_retention)

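`ObjectLockConfig.from_dict` above converts a default-retention `Days`/`Years` value into a concrete `retain_until_date`, with `Years` taking precedence and a year approximated as 365 days. A minimal sketch of that conversion (`default_retain_until` is an illustrative name, not part of the module):

```python
from datetime import datetime, timedelta, timezone


def default_retain_until(days=None, years=None, now=None):
    """Compute the retain-until timestamp the way ObjectLockConfig.from_dict
    does: Years wins over Days, and a year is approximated as 365 days."""
    now = now or datetime.now(timezone.utc)
    if years:
        return now + timedelta(days=int(years) * 365)
    return now + timedelta(days=int(days))


start = datetime(2024, 1, 1, tzinfo=timezone.utc)
print(default_retain_until(days=30, now=start))
```

The 365-day approximation means a one-year retention starting in a leap year lands a day short of the calendar anniversary.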
class ObjectLockService:
    def __init__(self, storage_root: Path):
        self.storage_root = storage_root
        self._config_cache: Dict[str, ObjectLockConfig] = {}

    def _bucket_lock_config_path(self, bucket_name: str) -> Path:
        return self.storage_root / ".myfsio.sys" / "buckets" / bucket_name / "object_lock.json"

    def _object_lock_meta_path(self, bucket_name: str, object_key: str) -> Path:
        safe_key = object_key.replace("/", "_").replace("\\", "_")
        return (
            self.storage_root / ".myfsio.sys" / "buckets" / bucket_name /
            "locks" / f"{safe_key}.lock.json"
        )

    def get_bucket_lock_config(self, bucket_name: str) -> ObjectLockConfig:
        if bucket_name in self._config_cache:
            return self._config_cache[bucket_name]

        config_path = self._bucket_lock_config_path(bucket_name)
        if not config_path.exists():
            return ObjectLockConfig(enabled=False)

        try:
            data = json.loads(config_path.read_text(encoding="utf-8"))
            config = ObjectLockConfig.from_dict(data)
            self._config_cache[bucket_name] = config
            return config
        except (json.JSONDecodeError, OSError):
            return ObjectLockConfig(enabled=False)

    def set_bucket_lock_config(self, bucket_name: str, config: ObjectLockConfig) -> None:
        config_path = self._bucket_lock_config_path(bucket_name)
        config_path.parent.mkdir(parents=True, exist_ok=True)
        config_path.write_text(json.dumps(config.to_dict()), encoding="utf-8")
        self._config_cache[bucket_name] = config

    def enable_bucket_lock(self, bucket_name: str) -> None:
        config = self.get_bucket_lock_config(bucket_name)
        config.enabled = True
        self.set_bucket_lock_config(bucket_name, config)

    def is_bucket_lock_enabled(self, bucket_name: str) -> bool:
        return self.get_bucket_lock_config(bucket_name).enabled

    def get_object_retention(self, bucket_name: str, object_key: str) -> Optional[ObjectLockRetention]:
        meta_path = self._object_lock_meta_path(bucket_name, object_key)
        if not meta_path.exists():
            return None
        try:
            data = json.loads(meta_path.read_text(encoding="utf-8"))
            return ObjectLockRetention.from_dict(data.get("retention", {}))
        except (json.JSONDecodeError, OSError):
            return None

    def set_object_retention(
        self,
        bucket_name: str,
        object_key: str,
        retention: ObjectLockRetention,
        bypass_governance: bool = False,
    ) -> None:
        existing = self.get_object_retention(bucket_name, object_key)
        if existing and not existing.is_expired():
            if existing.mode == RetentionMode.COMPLIANCE:
                raise ObjectLockError(
                    "Cannot modify retention on object with COMPLIANCE mode until retention expires"
                )
            if existing.mode == RetentionMode.GOVERNANCE and not bypass_governance:
                raise ObjectLockError(
                    "Cannot modify GOVERNANCE retention without bypass-governance permission"
                )

        meta_path = self._object_lock_meta_path(bucket_name, object_key)
        meta_path.parent.mkdir(parents=True, exist_ok=True)

        existing_data: Dict[str, Any] = {}
        if meta_path.exists():
            try:
                existing_data = json.loads(meta_path.read_text(encoding="utf-8"))
            except (json.JSONDecodeError, OSError):
                pass

        existing_data["retention"] = retention.to_dict()
        meta_path.write_text(json.dumps(existing_data), encoding="utf-8")

    def get_legal_hold(self, bucket_name: str, object_key: str) -> bool:
        meta_path = self._object_lock_meta_path(bucket_name, object_key)
        if not meta_path.exists():
            return False
        try:
            data = json.loads(meta_path.read_text(encoding="utf-8"))
            return data.get("legal_hold", False)
        except (json.JSONDecodeError, OSError):
            return False

    def set_legal_hold(self, bucket_name: str, object_key: str, enabled: bool) -> None:
        meta_path = self._object_lock_meta_path(bucket_name, object_key)
        meta_path.parent.mkdir(parents=True, exist_ok=True)

        existing_data: Dict[str, Any] = {}
        if meta_path.exists():
            try:
                existing_data = json.loads(meta_path.read_text(encoding="utf-8"))
            except (json.JSONDecodeError, OSError):
                pass

        existing_data["legal_hold"] = enabled
        meta_path.write_text(json.dumps(existing_data), encoding="utf-8")

    def can_delete_object(
        self,
        bucket_name: str,
        object_key: str,
        bypass_governance: bool = False,
    ) -> tuple[bool, str]:
        if self.get_legal_hold(bucket_name, object_key):
            return False, "Object is under legal hold"

        retention = self.get_object_retention(bucket_name, object_key)
        if retention and not retention.is_expired():
            if retention.mode == RetentionMode.COMPLIANCE:
                return False, f"Object is locked in COMPLIANCE mode until {retention.retain_until_date.isoformat()}"
            if retention.mode == RetentionMode.GOVERNANCE:
                if not bypass_governance:
                    return False, f"Object is locked in GOVERNANCE mode until {retention.retain_until_date.isoformat()}"

        return True, ""

    def can_overwrite_object(
        self,
        bucket_name: str,
        object_key: str,
        bypass_governance: bool = False,
    ) -> tuple[bool, str]:
        return self.can_delete_object(bucket_name, object_key, bypass_governance)

    def delete_object_lock_metadata(self, bucket_name: str, object_key: str) -> None:
        meta_path = self._object_lock_meta_path(bucket_name, object_key)
        try:
            if meta_path.exists():
                meta_path.unlink()
        except OSError:
            pass
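The `can_delete_object` check above evaluates three gates in order: legal hold, unexpired COMPLIANCE retention, then unexpired GOVERNANCE retention (which `bypass_governance` can override). A condensed sketch of that decision table (`may_delete` is an illustrative standalone helper; the `mode` strings mirror the `RetentionMode` values):

```python
def may_delete(legal_hold, mode, expired, bypass_governance=False):
    """Decision order from ObjectLockService.can_delete_object:
    legal hold blocks unconditionally; unexpired COMPLIANCE blocks;
    unexpired GOVERNANCE blocks unless bypassed."""
    if legal_hold:
        return False
    if mode and not expired:
        if mode == "COMPLIANCE":
            return False
        if mode == "GOVERNANCE" and not bypass_governance:
            return False
    return True


print(may_delete(False, "GOVERNANCE", expired=False, bypass_governance=True))
```

Note that `bypass_governance` never overrides a legal hold or COMPLIANCE retention, only GOVERNANCE.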
271
app/operation_metrics.py
Normal file
@@ -0,0 +1,271 @@
from __future__ import annotations

import json
import logging
import threading
import time
from dataclasses import dataclass, field
from datetime import datetime, timezone
from pathlib import Path
from typing import Any, Dict, List, Optional

logger = logging.getLogger(__name__)


@dataclass
class OperationStats:
    count: int = 0
    success_count: int = 0
    error_count: int = 0
    latency_sum_ms: float = 0.0
    latency_min_ms: float = float("inf")
    latency_max_ms: float = 0.0
    bytes_in: int = 0
    bytes_out: int = 0

    def record(self, latency_ms: float, success: bool, bytes_in: int = 0, bytes_out: int = 0) -> None:
        self.count += 1
        if success:
            self.success_count += 1
        else:
            self.error_count += 1
        self.latency_sum_ms += latency_ms
        if latency_ms < self.latency_min_ms:
            self.latency_min_ms = latency_ms
        if latency_ms > self.latency_max_ms:
            self.latency_max_ms = latency_ms
        self.bytes_in += bytes_in
        self.bytes_out += bytes_out

    def to_dict(self) -> Dict[str, Any]:
        avg_latency = self.latency_sum_ms / self.count if self.count > 0 else 0.0
        min_latency = self.latency_min_ms if self.latency_min_ms != float("inf") else 0.0
        return {
            "count": self.count,
            "success_count": self.success_count,
            "error_count": self.error_count,
            "latency_avg_ms": round(avg_latency, 2),
            "latency_min_ms": round(min_latency, 2),
            "latency_max_ms": round(self.latency_max_ms, 2),
            "bytes_in": self.bytes_in,
            "bytes_out": self.bytes_out,
        }

    def merge(self, other: "OperationStats") -> None:
        self.count += other.count
        self.success_count += other.success_count
        self.error_count += other.error_count
        self.latency_sum_ms += other.latency_sum_ms
        if other.latency_min_ms < self.latency_min_ms:
            self.latency_min_ms = other.latency_min_ms
        if other.latency_max_ms > self.latency_max_ms:
            self.latency_max_ms = other.latency_max_ms
        self.bytes_in += other.bytes_in
        self.bytes_out += other.bytes_out

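`OperationStats.record` above keeps only running sums, counts, and min/max, so the average latency is derived at read time (`latency_sum_ms / count`) rather than stored. A compact sketch of the same constant-memory aggregation (the helper `aggregate` is illustrative):

```python
def aggregate(latencies_ms):
    """Constant-memory latency aggregation as in OperationStats:
    track sum/min/max while recording, derive the average on read."""
    total, lo, hi = 0.0, float("inf"), 0.0
    for ms in latencies_ms:
        total += ms
        lo = min(lo, ms)
        hi = max(hi, ms)
    count = len(latencies_ms)
    avg = total / count if count else 0.0
    # Mirror to_dict's handling of the empty case: min reports 0.0, not inf.
    return {"avg": round(avg, 2), "min": lo if count else 0.0, "max": hi}


print(aggregate([12.0, 30.0, 18.0]))
```

This shape also makes `merge` cheap: sums and counts add, and min/max combine with `min`/`max`.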
@dataclass
class MetricsSnapshot:
    timestamp: datetime
    window_seconds: int
    by_method: Dict[str, Dict[str, Any]]
    by_endpoint: Dict[str, Dict[str, Any]]
    by_status_class: Dict[str, int]
    error_codes: Dict[str, int]
    totals: Dict[str, Any]

    def to_dict(self) -> Dict[str, Any]:
        return {
            "timestamp": self.timestamp.isoformat(),
            "window_seconds": self.window_seconds,
            "by_method": self.by_method,
            "by_endpoint": self.by_endpoint,
            "by_status_class": self.by_status_class,
            "error_codes": self.error_codes,
            "totals": self.totals,
        }

    @classmethod
    def from_dict(cls, data: Dict[str, Any]) -> "MetricsSnapshot":
        return cls(
            timestamp=datetime.fromisoformat(data["timestamp"]),
            window_seconds=data.get("window_seconds", 300),
            by_method=data.get("by_method", {}),
            by_endpoint=data.get("by_endpoint", {}),
            by_status_class=data.get("by_status_class", {}),
            error_codes=data.get("error_codes", {}),
            totals=data.get("totals", {}),
        )

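`MetricsSnapshot` above round-trips its timestamp through `isoformat`/`fromisoformat` so snapshots survive the JSON history file without losing timezone information. A minimal sketch of that round trip:

```python
import json
from datetime import datetime, timezone

ts = datetime(2024, 6, 1, 12, 0, tzinfo=timezone.utc)
# Serialize the way MetricsSnapshot.to_dict does...
payload = json.dumps({"timestamp": ts.isoformat(), "window_seconds": 300})
# ...and restore the way from_dict does.
restored = datetime.fromisoformat(json.loads(payload)["timestamp"])
print(restored == ts)
```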
class OperationMetricsCollector:
    def __init__(
        self,
        storage_root: Path,
        interval_minutes: int = 5,
        retention_hours: int = 24,
    ):
        self.storage_root = storage_root
        self.interval_seconds = interval_minutes * 60
        self.retention_hours = retention_hours
        self._lock = threading.Lock()
        self._by_method: Dict[str, OperationStats] = {}
        self._by_endpoint: Dict[str, OperationStats] = {}
        self._by_status_class: Dict[str, int] = {}
        self._error_codes: Dict[str, int] = {}
        self._totals = OperationStats()
        self._window_start = time.time()
        self._shutdown = threading.Event()
        self._snapshots: List[MetricsSnapshot] = []

        self._load_history()

        self._snapshot_thread = threading.Thread(
            target=self._snapshot_loop, name="operation-metrics-snapshot", daemon=True
        )
        self._snapshot_thread.start()

    def _config_path(self) -> Path:
        return self.storage_root / ".myfsio.sys" / "config" / "operation_metrics.json"

    def _load_history(self) -> None:
        config_path = self._config_path()
        if not config_path.exists():
            return
        try:
            data = json.loads(config_path.read_text(encoding="utf-8"))
            snapshots_data = data.get("snapshots", [])
            self._snapshots = [MetricsSnapshot.from_dict(s) for s in snapshots_data]
            self._prune_old_snapshots()
        except (json.JSONDecodeError, OSError, KeyError) as e:
            logger.warning(f"Failed to load operation metrics history: {e}")

    def _save_history(self) -> None:
        config_path = self._config_path()
        config_path.parent.mkdir(parents=True, exist_ok=True)
        try:
            data = {"snapshots": [s.to_dict() for s in self._snapshots]}
            config_path.write_text(json.dumps(data, indent=2), encoding="utf-8")
        except OSError as e:
            logger.warning(f"Failed to save operation metrics history: {e}")

    def _prune_old_snapshots(self) -> None:
        if not self._snapshots:
            return
        cutoff = datetime.now(timezone.utc).timestamp() - (self.retention_hours * 3600)
        self._snapshots = [
            s for s in self._snapshots if s.timestamp.timestamp() > cutoff
        ]

    def _snapshot_loop(self) -> None:
        while not self._shutdown.is_set():
            self._shutdown.wait(timeout=self.interval_seconds)
            if not self._shutdown.is_set():
                self._take_snapshot()

    def _take_snapshot(self) -> None:
        with self._lock:
            now = datetime.now(timezone.utc)
            window_seconds = int(time.time() - self._window_start)

            snapshot = MetricsSnapshot(
                timestamp=now,
                window_seconds=window_seconds,
                by_method={k: v.to_dict() for k, v in self._by_method.items()},
                by_endpoint={k: v.to_dict() for k, v in self._by_endpoint.items()},
                by_status_class=dict(self._by_status_class),
                error_codes=dict(self._error_codes),
                totals=self._totals.to_dict(),
            )

            self._snapshots.append(snapshot)
            self._prune_old_snapshots()
            self._save_history()

            self._by_method.clear()
            self._by_endpoint.clear()
            self._by_status_class.clear()
            self._error_codes.clear()
            self._totals = OperationStats()
            self._window_start = time.time()

    def record_request(
        self,
        method: str,
        endpoint_type: str,
        status_code: int,
        latency_ms: float,
        bytes_in: int = 0,
        bytes_out: int = 0,
        error_code: Optional[str] = None,
    ) -> None:
        success = 200 <= status_code < 400
        status_class = f"{status_code // 100}xx"

        with self._lock:
            if method not in self._by_method:
                self._by_method[method] = OperationStats()
            self._by_method[method].record(latency_ms, success, bytes_in, bytes_out)

            if endpoint_type not in self._by_endpoint:
                self._by_endpoint[endpoint_type] = OperationStats()
            self._by_endpoint[endpoint_type].record(latency_ms, success, bytes_in, bytes_out)

            self._by_status_class[status_class] = self._by_status_class.get(status_class, 0) + 1

            if error_code:
                self._error_codes[error_code] = self._error_codes.get(error_code, 0) + 1

            self._totals.record(latency_ms, success, bytes_in, bytes_out)

    def get_current_stats(self) -> Dict[str, Any]:
        with self._lock:
            window_seconds = int(time.time() - self._window_start)
            return {
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "window_seconds": window_seconds,
                "by_method": {k: v.to_dict() for k, v in self._by_method.items()},
                "by_endpoint": {k: v.to_dict() for k, v in self._by_endpoint.items()},
                "by_status_class": dict(self._by_status_class),
                "error_codes": dict(self._error_codes),
                "totals": self._totals.to_dict(),
            }

    def get_history(self, hours: Optional[int] = None) -> List[Dict[str, Any]]:
        with self._lock:
            snapshots = list(self._snapshots)

        if hours:
            cutoff = datetime.now(timezone.utc).timestamp() - (hours * 3600)
            snapshots = [s for s in snapshots if s.timestamp.timestamp() > cutoff]

        return [s.to_dict() for s in snapshots]

    def shutdown(self) -> None:
        self._shutdown.set()
        self._take_snapshot()
        self._snapshot_thread.join(timeout=5.0)

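`record_request` above derives both a success flag and a status class from the numeric HTTP code: 2xx and 3xx responses count as success, and integer division by 100 buckets the code into its class. The same two expressions in isolation (`classify_status` is an illustrative name):

```python
def classify_status(status_code):
    """Success flag and status class as computed in record_request."""
    return 200 <= status_code < 400, f"{status_code // 100}xx"


print(classify_status(204))
print(classify_status(503))
```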
def classify_endpoint(path: str) -> str:
    if not path or path == "/":
        return "service"

    path = path.rstrip("/")

    if path.startswith("/ui"):
        return "ui"

    if path.startswith("/kms"):
        return "kms"

    if path.startswith("/myfsio"):
        return "service"

    parts = path.lstrip("/").split("/")
    if len(parts) == 0:
        return "service"
    elif len(parts) == 1:
        return "bucket"
    else:
        return "object"
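Given the rules in `classify_endpoint` above (special prefixes first, then path depth), some representative classifications, shown with a trimmed copy of the function so the examples run standalone:

```python
def classify_endpoint(path):
    # Trimmed copy of classify_endpoint above, for standalone use.
    if not path or path == "/":
        return "service"
    path = path.rstrip("/")
    if path.startswith("/ui"):
        return "ui"
    if path.startswith("/kms"):
        return "kms"
    if path.startswith("/myfsio"):
        return "service"
    # One path segment names a bucket; deeper paths address an object.
    parts = path.lstrip("/").split("/")
    if len(parts) == 1:
        return "bucket"
    return "object"


for p in ["/", "/ui/buckets", "/photos", "/photos/2024/cat.jpg"]:
    print(p, "->", classify_endpoint(p))
```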
@@ -1,4 +1,3 @@
|
|||||||
"""Background replication worker."""
|
|
||||||
from __future__ import annotations
|
from __future__ import annotations
|
||||||
|
|
||||||
import json
|
import json
|
||||||
@@ -9,7 +8,7 @@ import time
|
|||||||
from concurrent.futures import ThreadPoolExecutor
|
from concurrent.futures import ThreadPoolExecutor
|
||||||
from dataclasses import dataclass, field
|
from dataclasses import dataclass, field
|
||||||
from pathlib import Path
|
from pathlib import Path
|
||||||
from typing import Dict, Optional
|
from typing import Any, Dict, List, Optional
|
||||||
|
|
||||||
import boto3
|
import boto3
|
||||||
from botocore.config import Config
|
from botocore.config import Config
|
||||||
@@ -22,18 +21,47 @@ from .storage import ObjectStorage, StorageError
|
|||||||
logger = logging.getLogger(__name__)
|
logger = logging.getLogger(__name__)
|
||||||
|
|
||||||
REPLICATION_USER_AGENT = "S3ReplicationAgent/1.0"
|
REPLICATION_USER_AGENT = "S3ReplicationAgent/1.0"
|
||||||
|
REPLICATION_CONNECT_TIMEOUT = 5
|
||||||
|
REPLICATION_READ_TIMEOUT = 30
|
||||||
|
STREAMING_THRESHOLD_BYTES = 10 * 1024 * 1024
|
||||||
|
|
||||||
REPLICATION_MODE_NEW_ONLY = "new_only"
|
REPLICATION_MODE_NEW_ONLY = "new_only"
|
||||||
REPLICATION_MODE_ALL = "all"
|
REPLICATION_MODE_ALL = "all"
|
||||||
|
|
||||||
|
|
||||||
|
def _create_s3_client(connection: RemoteConnection, *, health_check: bool = False) -> Any:
|
||||||
|
"""Create a boto3 S3 client for the given connection.
|
||||||
|
Args:
|
||||||
|
connection: Remote S3 connection configuration
|
||||||
|
health_check: If True, use minimal retries for quick health checks
|
||||||
|
"""
|
||||||
|
config = Config(
|
||||||
|
user_agent_extra=REPLICATION_USER_AGENT,
|
||||||
|
connect_timeout=REPLICATION_CONNECT_TIMEOUT,
|
||||||
|
read_timeout=REPLICATION_READ_TIMEOUT,
|
||||||
|
retries={'max_attempts': 1 if health_check else 2},
|
||||||
|
signature_version='s3v4',
|
||||||
|
s3={'addressing_style': 'path'},
|
||||||
|
+        request_checksum_calculation='when_required',
+        response_checksum_validation='when_required',
+    )
+    return boto3.client(
+        "s3",
+        endpoint_url=connection.endpoint_url,
+        aws_access_key_id=connection.access_key,
+        aws_secret_access_key=connection.secret_key,
+        region_name=connection.region or 'us-east-1',
+        config=config,
+    )


 @dataclass
 class ReplicationStats:
     """Statistics for replication operations - computed dynamically."""
-    objects_synced: int = 0  # Objects that exist in both source and destination
-    objects_pending: int = 0  # Objects in source but not in destination
-    objects_orphaned: int = 0  # Objects in destination but not in source (will be deleted)
-    bytes_synced: int = 0  # Total bytes synced to destination
+    objects_synced: int = 0
+    objects_pending: int = 0
+    objects_orphaned: int = 0
+    bytes_synced: int = 0
     last_sync_at: Optional[float] = None
     last_sync_key: Optional[str] = None

@@ -59,6 +87,40 @@ class ReplicationStats:
         )
+
+
+@dataclass
+class ReplicationFailure:
+    object_key: str
+    error_message: str
+    timestamp: float
+    failure_count: int
+    bucket_name: str
+    action: str
+    last_error_code: Optional[str] = None
+
+    def to_dict(self) -> dict:
+        return {
+            "object_key": self.object_key,
+            "error_message": self.error_message,
+            "timestamp": self.timestamp,
+            "failure_count": self.failure_count,
+            "bucket_name": self.bucket_name,
+            "action": self.action,
+            "last_error_code": self.last_error_code,
+        }
+
+    @classmethod
+    def from_dict(cls, data: dict) -> "ReplicationFailure":
+        return cls(
+            object_key=data["object_key"],
+            error_message=data["error_message"],
+            timestamp=data["timestamp"],
+            failure_count=data["failure_count"],
+            bucket_name=data["bucket_name"],
+            action=data["action"],
+            last_error_code=data.get("last_error_code"),
+        )


 @dataclass
 class ReplicationRule:
     bucket_name: str

@@ -83,7 +145,6 @@ class ReplicationRule:
     @classmethod
     def from_dict(cls, data: dict) -> "ReplicationRule":
         stats_data = data.pop("stats", {})
-        # Handle old rules without mode/created_at
         if "mode" not in data:
             data["mode"] = REPLICATION_MODE_NEW_ONLY
         if "created_at" not in data:
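The `ReplicationFailure` dataclass added in this diff serializes through plain dicts so the failure log can be persisted as JSON; `from_dict` uses `data.get("last_error_code")` so records written before that field existed still load. A self-contained sketch of the same round-trip pattern (the class body mirrors the diff above):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ReplicationFailure:
    object_key: str
    error_message: str
    timestamp: float
    failure_count: int
    bucket_name: str
    action: str
    last_error_code: Optional[str] = None

    def to_dict(self) -> dict:
        return {
            "object_key": self.object_key,
            "error_message": self.error_message,
            "timestamp": self.timestamp,
            "failure_count": self.failure_count,
            "bucket_name": self.bucket_name,
            "action": self.action,
            "last_error_code": self.last_error_code,
        }

    @classmethod
    def from_dict(cls, data: dict) -> "ReplicationFailure":
        # .get() keeps pre-upgrade records (without last_error_code) loadable
        return cls(
            object_key=data["object_key"],
            error_message=data["error_message"],
            timestamp=data["timestamp"],
            failure_count=data["failure_count"],
            bucket_name=data["bucket_name"],
            action=data["action"],
            last_error_code=data.get("last_error_code"),
        )


failure = ReplicationFailure("a/b.txt", "timeout", 123.0, 1, "photos", "write")
assert ReplicationFailure.from_dict(failure.to_dict()) == failure
```

Dropping `last_error_code` from the dict before loading still succeeds, which is the backward-compatibility property the `.get()` call buys.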
@@ -93,16 +154,98 @@ class ReplicationRule:
         return rule
+
+
+class ReplicationFailureStore:
+    MAX_FAILURES_PER_BUCKET = 50
+
+    def __init__(self, storage_root: Path) -> None:
+        self.storage_root = storage_root
+        self._lock = threading.Lock()
+
+    def _get_failures_path(self, bucket_name: str) -> Path:
+        return self.storage_root / ".myfsio.sys" / "buckets" / bucket_name / "replication_failures.json"
+
+    def load_failures(self, bucket_name: str) -> List[ReplicationFailure]:
+        path = self._get_failures_path(bucket_name)
+        if not path.exists():
+            return []
+        try:
+            with open(path, "r") as f:
+                data = json.load(f)
+            return [ReplicationFailure.from_dict(d) for d in data.get("failures", [])]
+        except (OSError, ValueError, KeyError) as e:
+            logger.error(f"Failed to load replication failures for {bucket_name}: {e}")
+            return []
+
+    def save_failures(self, bucket_name: str, failures: List[ReplicationFailure]) -> None:
+        path = self._get_failures_path(bucket_name)
+        path.parent.mkdir(parents=True, exist_ok=True)
+        data = {"failures": [f.to_dict() for f in failures[:self.MAX_FAILURES_PER_BUCKET]]}
+        try:
+            with open(path, "w") as f:
+                json.dump(data, f, indent=2)
+        except OSError as e:
+            logger.error(f"Failed to save replication failures for {bucket_name}: {e}")
+
+    def add_failure(self, bucket_name: str, failure: ReplicationFailure) -> None:
+        with self._lock:
+            failures = self.load_failures(bucket_name)
+            existing = next((f for f in failures if f.object_key == failure.object_key), None)
+            if existing:
+                existing.failure_count += 1
+                existing.timestamp = failure.timestamp
+                existing.error_message = failure.error_message
+                existing.last_error_code = failure.last_error_code
+            else:
+                failures.insert(0, failure)
+            self.save_failures(bucket_name, failures)
+
+    def remove_failure(self, bucket_name: str, object_key: str) -> bool:
+        with self._lock:
+            failures = self.load_failures(bucket_name)
+            original_len = len(failures)
+            failures = [f for f in failures if f.object_key != object_key]
+            if len(failures) < original_len:
+                self.save_failures(bucket_name, failures)
+                return True
+            return False
+
+    def clear_failures(self, bucket_name: str) -> None:
+        with self._lock:
+            path = self._get_failures_path(bucket_name)
+            if path.exists():
+                path.unlink()
+
+    def get_failure(self, bucket_name: str, object_key: str) -> Optional[ReplicationFailure]:
+        failures = self.load_failures(bucket_name)
+        return next((f for f in failures if f.object_key == object_key), None)
+
+    def get_failure_count(self, bucket_name: str) -> int:
+        return len(self.load_failures(bucket_name))
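`ReplicationFailureStore.add_failure` dedupes by object key — a repeat failure bumps the counter and refreshes the message instead of growing the log — and `save_failures` caps each bucket at 50 entries. The cap-and-dedupe logic can be sketched without the JSON persistence (dict records stand in for the dataclass; the cap value mirrors the diff):

```python
from typing import List

MAX_FAILURES_PER_BUCKET = 50  # same cap the store applies when saving


def add_failure(failures: List[dict], failure: dict) -> List[dict]:
    """Dedupe by object_key: bump the counter on repeats, else insert newest-first."""
    existing = next((f for f in failures if f["object_key"] == failure["object_key"]), None)
    if existing:
        existing["failure_count"] += 1
        existing["error_message"] = failure["error_message"]  # keep the latest error
    else:
        failures.insert(0, failure)
    return failures[:MAX_FAILURES_PER_BUCKET]


log: List[dict] = []
log = add_failure(log, {"object_key": "a.txt", "error_message": "timeout", "failure_count": 1})
log = add_failure(log, {"object_key": "a.txt", "error_message": "refused", "failure_count": 1})
assert len(log) == 1 and log[0]["failure_count"] == 2
```

Newest-first insertion means the UI's "failed items" list naturally shows the most recent distinct failures without a separate sort.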
 class ReplicationManager:
-    def __init__(self, storage: ObjectStorage, connections: ConnectionStore, rules_path: Path) -> None:
+    def __init__(self, storage: ObjectStorage, connections: ConnectionStore, rules_path: Path, storage_root: Path) -> None:
         self.storage = storage
         self.connections = connections
         self.rules_path = rules_path
+        self.storage_root = storage_root
         self._rules: Dict[str, ReplicationRule] = {}
         self._stats_lock = threading.Lock()
         self._executor = ThreadPoolExecutor(max_workers=4, thread_name_prefix="ReplicationWorker")
+        self._shutdown = False
+        self.failure_store = ReplicationFailureStore(storage_root)
         self.reload_rules()

+    def shutdown(self, wait: bool = True) -> None:
+        """Shutdown the replication executor gracefully.
+
+        Args:
+            wait: If True, wait for pending tasks to complete
+        """
+        self._shutdown = True
+        self._executor.shutdown(wait=wait)
+        logger.info("Replication manager shut down")
+
     def reload_rules(self) -> None:
         if not self.rules_path.exists():
             self._rules = {}
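The `_shutdown` flag added to `__init__` pairs with an early return at the top of `_replicate_task` later in this diff: `executor.shutdown(wait=True)` only drains the queue, so tasks that were already queued still run, and the flag turns them into no-ops. A standalone sketch of that pattern (class and names are illustrative, not the project's API):

```python
import threading
from concurrent.futures import ThreadPoolExecutor


class Worker:
    def __init__(self) -> None:
        self._shutdown = False
        self._executor = ThreadPoolExecutor(max_workers=2, thread_name_prefix="ReplicationWorker")
        self._lock = threading.Lock()
        self.completed = 0

    def task(self) -> None:
        if self._shutdown:  # queued tasks become no-ops once shutdown begins
            return
        with self._lock:
            self.completed += 1

    def shutdown(self, wait: bool = True) -> None:
        self._shutdown = True
        self._executor.shutdown(wait=wait)


w = Worker()
for _ in range(5):
    w._executor.submit(w.task)
w.shutdown(wait=True)
assert 0 <= w.completed <= 5  # tasks queued before the flag flipped may or may not run
```

Checking the flag inside the task (rather than only refusing new submissions) is what makes shutdown prompt even with a deep queue.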
@@ -121,13 +264,33 @@ class ReplicationManager:
         with open(self.rules_path, "w") as f:
             json.dump(data, f, indent=2)

+    def check_endpoint_health(self, connection: RemoteConnection) -> bool:
+        """Check if a remote endpoint is reachable and responsive.
+
+        Returns True if endpoint is healthy, False otherwise.
+        Uses short timeouts to prevent blocking.
+        """
+        try:
+            s3 = _create_s3_client(connection, health_check=True)
+            s3.list_buckets()
+            return True
+        except Exception as e:
+            logger.warning(f"Endpoint health check failed for {connection.name} ({connection.endpoint_url}): {e}")
+            return False
+
     def get_rule(self, bucket_name: str) -> Optional[ReplicationRule]:
         return self._rules.get(bucket_name)

     def set_rule(self, rule: ReplicationRule) -> None:
+        old_rule = self._rules.get(rule.bucket_name)
+        was_all_mode = old_rule and old_rule.mode == REPLICATION_MODE_ALL if old_rule else False
         self._rules[rule.bucket_name] = rule
         self.save_rules()

+        if rule.mode == REPLICATION_MODE_ALL and rule.enabled and not was_all_mode:
+            logger.info(f"Replication mode ALL enabled for {rule.bucket_name}, triggering sync of existing objects")
+            self._executor.submit(self.replicate_existing_objects, rule.bucket_name)
+
     def delete_rule(self, bucket_name: str) -> None:
         if bucket_name in self._rules:
             del self._rules[bucket_name]

@@ -151,22 +314,14 @@ class ReplicationManager:

         connection = self.connections.get(rule.target_connection_id)
         if not connection:
-            return rule.stats  # Return cached stats if connection unavailable
+            return rule.stats

         try:
-            # Get source objects
-            source_objects = self.storage.list_objects(bucket_name)
+            source_objects = self.storage.list_objects_all(bucket_name)
             source_keys = {obj.key: obj.size for obj in source_objects}

-            # Get destination objects
-            s3 = boto3.client(
-                "s3",
-                endpoint_url=connection.endpoint_url,
-                aws_access_key_id=connection.access_key,
-                aws_secret_access_key=connection.secret_key,
-                region_name=connection.region,
-            )
+            s3 = _create_s3_client(connection)

             dest_keys = set()
             bytes_synced = 0
             paginator = s3.get_paginator('list_objects_v2')

@@ -178,24 +333,18 @@ class ReplicationManager:
                         bytes_synced += obj.get('Size', 0)
             except ClientError as e:
                 if e.response['Error']['Code'] == 'NoSuchBucket':
-                    # Destination bucket doesn't exist yet
                     dest_keys = set()
                 else:
                     raise

-            # Compute stats
-            synced = source_keys.keys() & dest_keys  # Objects in both
-            orphaned = dest_keys - source_keys.keys()  # In dest but not source
+            synced = source_keys.keys() & dest_keys
+            orphaned = dest_keys - source_keys.keys()

-            # For "new_only" mode, we can't determine pending since we don't know
-            # which objects existed before replication was enabled. Only "all" mode
-            # should show pending (objects that should be replicated but aren't yet).
             if rule.mode == REPLICATION_MODE_ALL:
-                pending = source_keys.keys() - dest_keys  # In source but not dest
+                pending = source_keys.keys() - dest_keys
             else:
-                pending = set()  # New-only mode: don't show pre-existing as pending
+                pending = set()

-            # Update cached stats with computed values
             rule.stats.objects_synced = len(synced)
             rule.stats.objects_pending = len(pending)
             rule.stats.objects_orphaned = len(orphaned)
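The sync-status hunk reduces to three set operations over the two key listings: intersection for synced, destination-minus-source for orphaned, and source-minus-destination for pending (only meaningful in "all" mode, since "new_only" cannot distinguish pre-existing objects). Worked with toy data (values illustrative):

```python
source = {"a.txt": 10, "b.txt": 20, "c.txt": 30}  # key -> size on the source side
dest = {"a.txt", "b.txt", "z.txt"}                # keys present on the destination

synced = source.keys() & dest       # replicated already
orphaned = dest - source.keys()     # on destination only; cleanup candidates
pending = source.keys() - dest      # still to replicate ("all" mode only)

assert synced == {"a.txt", "b.txt"}
assert orphaned == {"z.txt"}
assert pending == {"c.txt"}
```

Because `dict.keys()` is a set-like view, no intermediate `set(...)` copies are needed; the same view also feeds the byte totals via the stored sizes.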
@@ -205,7 +354,7 @@ class ReplicationManager:

         except (ClientError, StorageError) as e:
             logger.error(f"Failed to compute sync status for {bucket_name}: {e}")
-            return rule.stats  # Return cached stats on error
+            return rule.stats

     def replicate_existing_objects(self, bucket_name: str) -> None:
         """Trigger replication for all existing objects in a bucket."""

@@ -218,8 +367,12 @@ class ReplicationManager:
             logger.warning(f"Cannot replicate existing objects: Connection {rule.target_connection_id} not found")
             return

+        if not self.check_endpoint_health(connection):
+            logger.warning(f"Cannot replicate existing objects: Endpoint {connection.name} ({connection.endpoint_url}) is not reachable")
+            return
+
         try:
-            objects = self.storage.list_objects(bucket_name)
+            objects = self.storage.list_objects_all(bucket_name)
             logger.info(f"Starting replication of {len(objects)} existing objects from {bucket_name}")
             for obj in objects:
                 self._executor.submit(self._replicate_task, bucket_name, obj.key, rule, connection, "write")

@@ -233,13 +386,7 @@ class ReplicationManager:
             raise ValueError(f"Connection {connection_id} not found")

         try:
-            s3 = boto3.client(
-                "s3",
-                endpoint_url=connection.endpoint_url,
-                aws_access_key_id=connection.access_key,
-                aws_secret_access_key=connection.secret_key,
-                region_name=connection.region,
-            )
+            s3 = _create_s3_client(connection)
             s3.create_bucket(Bucket=bucket_name)
         except ClientError as e:
             logger.error(f"Failed to create remote bucket {bucket_name}: {e}")

@@ -255,39 +402,53 @@ class ReplicationManager:
             logger.warning(f"Replication skipped for {bucket_name}/{object_key}: Connection {rule.target_connection_id} not found")
             return

+        if not self.check_endpoint_health(connection):
+            logger.warning(f"Replication skipped for {bucket_name}/{object_key}: Endpoint {connection.name} ({connection.endpoint_url}) is not reachable")
+            return
+
         self._executor.submit(self._replicate_task, bucket_name, object_key, rule, connection, action)

     def _replicate_task(self, bucket_name: str, object_key: str, rule: ReplicationRule, conn: RemoteConnection, action: str) -> None:
+        if self._shutdown:
+            return
+
+        current_rule = self.get_rule(bucket_name)
+        if not current_rule or not current_rule.enabled:
+            logger.debug(f"Replication skipped for {bucket_name}/{object_key}: rule disabled or removed")
+            return
+
         if ".." in object_key or object_key.startswith("/") or object_key.startswith("\\"):
             logger.error(f"Invalid object key in replication (path traversal attempt): {object_key}")
             return

         try:
             from .storage import ObjectStorage
             ObjectStorage._sanitize_object_key(object_key)
         except StorageError as e:
             logger.error(f"Object key validation failed in replication: {e}")
             return

-        file_size = 0
         try:
-            config = Config(user_agent_extra=REPLICATION_USER_AGENT)
-            s3 = boto3.client(
-                "s3",
-                endpoint_url=conn.endpoint_url,
-                aws_access_key_id=conn.access_key,
-                aws_secret_access_key=conn.secret_key,
-                region_name=conn.region,
-                config=config,
-            )
+            s3 = _create_s3_client(conn)

             if action == "delete":
                 try:
                     s3.delete_object(Bucket=rule.target_bucket, Key=object_key)
                     logger.info(f"Replicated DELETE {bucket_name}/{object_key} to {conn.name} ({rule.target_bucket})")
                     self._update_last_sync(bucket_name, object_key)
+                    self.failure_store.remove_failure(bucket_name, object_key)
                 except ClientError as e:
+                    error_code = e.response.get('Error', {}).get('Code')
                     logger.error(f"Replication DELETE failed for {bucket_name}/{object_key}: {e}")
+                    self.failure_store.add_failure(bucket_name, ReplicationFailure(
+                        object_key=object_key,
+                        error_message=str(e),
+                        timestamp=time.time(),
+                        failure_count=1,
+                        bucket_name=bucket_name,
+                        action="delete",
+                        last_error_code=error_code,
+                    ))
                 return

         try:

@@ -296,61 +457,153 @@ class ReplicationManager:
             logger.error(f"Source object not found: {bucket_name}/{object_key}")
             return

-            metadata = self.storage.get_object_metadata(bucket_name, object_key)
-
-            extra_args = {}
-            if metadata:
-                extra_args["Metadata"] = metadata
-
-            # Guess content type to prevent corruption/wrong handling
             content_type, _ = mimetypes.guess_type(path)
             file_size = path.stat().st_size

             logger.info(f"Replicating {bucket_name}/{object_key}: Size={file_size}, ContentType={content_type}")

-            try:
-                with path.open("rb") as f:
-                    s3.put_object(
-                        Bucket=rule.target_bucket,
-                        Key=object_key,
-                        Body=f,
-                        ContentLength=file_size,
-                        ContentType=content_type or "application/octet-stream",
-                        Metadata=metadata or {}
-                    )
+            def do_upload() -> None:
+                """Upload object using appropriate method based on file size.
+
+                For small files (< 10 MiB): Read into memory for simpler handling
+                For large files: Use streaming upload to avoid memory issues
+                """
+                extra_args = {}
+                if content_type:
+                    extra_args["ContentType"] = content_type
+
+                if file_size >= STREAMING_THRESHOLD_BYTES:
+                    s3.upload_file(
+                        str(path),
+                        rule.target_bucket,
+                        object_key,
+                        ExtraArgs=extra_args if extra_args else None,
                     )
+                else:
+                    file_content = path.read_bytes()
+                    put_kwargs = {
+                        "Bucket": rule.target_bucket,
+                        "Key": object_key,
+                        "Body": file_content,
+                        **extra_args,
+                    }
+                    s3.put_object(**put_kwargs)
+
+            try:
+                do_upload()
             except (ClientError, S3UploadFailedError) as e:
-                is_no_bucket = False
+                error_code = None
                 if isinstance(e, ClientError):
-                    if e.response['Error']['Code'] == 'NoSuchBucket':
-                        is_no_bucket = True
+                    error_code = e.response['Error']['Code']
                 elif isinstance(e, S3UploadFailedError):
                     if "NoSuchBucket" in str(e):
-                        is_no_bucket = True
+                        error_code = 'NoSuchBucket'

-                if is_no_bucket:
+                if error_code == 'NoSuchBucket':
                     logger.info(f"Target bucket {rule.target_bucket} not found. Attempting to create it.")
+                    bucket_ready = False
                     try:
                         s3.create_bucket(Bucket=rule.target_bucket)
-                        # Retry upload
-                        with path.open("rb") as f:
-                            s3.put_object(
-                                Bucket=rule.target_bucket,
-                                Key=object_key,
-                                Body=f,
-                                ContentLength=file_size,
-                                ContentType=content_type or "application/octet-stream",
-                                Metadata=metadata or {}
-                            )
-                    except Exception as create_err:
-                        logger.error(f"Failed to create target bucket {rule.target_bucket}: {create_err}")
-                        raise e  # Raise original error
+                        bucket_ready = True
+                        logger.info(f"Created target bucket {rule.target_bucket}")
+                    except ClientError as bucket_err:
+                        if bucket_err.response['Error']['Code'] in ('BucketAlreadyExists', 'BucketAlreadyOwnedByYou'):
+                            logger.debug(f"Bucket {rule.target_bucket} already exists (created by another thread)")
+                            bucket_ready = True
+                        else:
+                            logger.error(f"Failed to create target bucket {rule.target_bucket}: {bucket_err}")
+                            raise e
+
+                    if bucket_ready:
+                        do_upload()
                 else:
                     raise e

             logger.info(f"Replicated {bucket_name}/{object_key} to {conn.name} ({rule.target_bucket})")
             self._update_last_sync(bucket_name, object_key)
+            self.failure_store.remove_failure(bucket_name, object_key)

         except (ClientError, OSError, ValueError) as e:
+            error_code = None
+            if isinstance(e, ClientError):
+                error_code = e.response.get('Error', {}).get('Code')
             logger.error(f"Replication failed for {bucket_name}/{object_key}: {e}")
-        except Exception:
+            self.failure_store.add_failure(bucket_name, ReplicationFailure(
+                object_key=object_key,
+                error_message=str(e),
+                timestamp=time.time(),
+                failure_count=1,
+                bucket_name=bucket_name,
+                action=action,
+                last_error_code=error_code,
+            ))
+        except Exception as e:
             logger.exception(f"Unexpected error during replication for {bucket_name}/{object_key}")
+            self.failure_store.add_failure(bucket_name, ReplicationFailure(
+                object_key=object_key,
+                error_message=str(e),
+                timestamp=time.time(),
+                failure_count=1,
+                bucket_name=bucket_name,
+                action=action,
+                last_error_code=None,
+            ))
+
+    def get_failed_items(self, bucket_name: str, limit: int = 50, offset: int = 0) -> List[ReplicationFailure]:
+        failures = self.failure_store.load_failures(bucket_name)
+        return failures[offset:offset + limit]
+
+    def get_failure_count(self, bucket_name: str) -> int:
+        return self.failure_store.get_failure_count(bucket_name)
+
+    def retry_failed_item(self, bucket_name: str, object_key: str) -> bool:
+        failure = self.failure_store.get_failure(bucket_name, object_key)
+        if not failure:
+            return False
+
+        rule = self.get_rule(bucket_name)
+        if not rule or not rule.enabled:
+            return False
+
+        connection = self.connections.get(rule.target_connection_id)
+        if not connection:
+            logger.warning(f"Cannot retry: Connection {rule.target_connection_id} not found")
+            return False
+
+        if not self.check_endpoint_health(connection):
+            logger.warning(f"Cannot retry: Endpoint {connection.name} is not reachable")
+            return False
+
+        self._executor.submit(self._replicate_task, bucket_name, object_key, rule, connection, failure.action)
+        return True
+
+    def retry_all_failed(self, bucket_name: str) -> Dict[str, int]:
+        failures = self.failure_store.load_failures(bucket_name)
+        if not failures:
+            return {"submitted": 0, "skipped": 0}
+
+        rule = self.get_rule(bucket_name)
+        if not rule or not rule.enabled:
+            return {"submitted": 0, "skipped": len(failures)}
+
+        connection = self.connections.get(rule.target_connection_id)
+        if not connection:
+            logger.warning(f"Cannot retry: Connection {rule.target_connection_id} not found")
+            return {"submitted": 0, "skipped": len(failures)}
+
+        if not self.check_endpoint_health(connection):
+            logger.warning(f"Cannot retry: Endpoint {connection.name} is not reachable")
+            return {"submitted": 0, "skipped": len(failures)}
+
+        submitted = 0
+        for failure in failures:
+            self._executor.submit(self._replicate_task, bucket_name, failure.object_key, rule, connection, failure.action)
+            submitted += 1

+        return {"submitted": submitted, "skipped": 0}
+
+    def dismiss_failure(self, bucket_name: str, object_key: str) -> bool:
+        return self.failure_store.remove_failure(bucket_name, object_key)
+
+    def clear_failures(self, bucket_name: str) -> None:
+        self.failure_store.clear_failures(bucket_name)
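The traversal guard at the top of `_replicate_task` rejects hostile keys before any client work happens: `..` segments and absolute-looking prefixes are refused outright, with `ObjectStorage._sanitize_object_key` as a second line of defense. The same check as a standalone predicate (helper name is illustrative):

```python
def is_safe_object_key(key: str) -> bool:
    """Reject path-traversal style keys before touching the remote side."""
    return ".." not in key and not key.startswith(("/", "\\"))


assert is_safe_object_key("photos/2024/cat.jpg")
assert not is_safe_object_key("../etc/passwd")
assert not is_safe_object_key("/abs/path")
assert not is_safe_object_key("\\windows\\share")
```

Checking for `..` anywhere in the key (not just as a leading segment) is deliberately conservative: it also rejects keys like `a/../b` that would normalize outside their prefix on a filesystem-backed store.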
app/s3_api.py — 1281 changed lines (file diff suppressed because it is too large)
@@ -1,4 +1,3 @@
-"""Ephemeral store for one-time secrets communicated to the UI."""
 from __future__ import annotations

 import secrets
app/storage.py — 727 changed lines (file diff suppressed because it is too large)
@@ -1,7 +1,6 @@
-"""Central location for the application version string."""
 from __future__ import annotations

-APP_VERSION = "0.1.5"
+APP_VERSION = "0.2.2"


 def get_version() -> str:
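The docs below mention automatic compatibility detection across releases like the `0.1.5` → `0.2.2` bump above. For simple `X.Y.Z` strings, a tuple comparison is enough to order versions and flag minor-boundary crossings; this is a hypothetical helper for illustration, not the project's actual detection code:

```python
def parse_version(v: str) -> tuple:
    # "0.2.2" -> (0, 2, 2); tuples compare element-wise, so ordering is numeric
    return tuple(int(part) for part in v.split("."))


def crosses_minor(old: str, new: str) -> bool:
    """True when an upgrade crosses a major or minor boundary (e.g. 0.1.x -> 0.2.x)."""
    return parse_version(old)[:2] != parse_version(new)[:2]


assert parse_version("0.2.2") > parse_version("0.1.5")
assert crosses_minor("0.1.5", "0.2.2")
assert not crosses_minor("0.1.6", "0.1.7")
```

Numeric tuple comparison avoids the classic string-comparison trap where `"0.10.0" < "0.9.0"`.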
docs.md — 435 changed lines
@@ -122,7 +122,7 @@ With these volumes attached you can rebuild/restart the container without losing

 ### Versioning

-The repo now tracks a human-friendly release string inside `app/version.py` (see the `APP_VERSION` constant). Edit that value whenever you cut a release. The constant flows into Flask as `APP_VERSION` and is exposed via `GET /healthz`, so you can monitor deployments or surface it in UIs.
+The repo now tracks a human-friendly release string inside `app/version.py` (see the `APP_VERSION` constant). Edit that value whenever you cut a release. The constant flows into Flask as `APP_VERSION` and is exposed via `GET /myfsio/health`, so you can monitor deployments or surface it in UIs.

 ## 3. Configuration Reference
@@ -189,6 +189,52 @@ All configuration is done via environment variables. The table below lists every
 | `KMS_ENABLED` | `false` | Enable KMS key management for encryption. |
 | `KMS_KEYS_PATH` | `data/.myfsio.sys/keys/kms_keys.json` | Path to store KMS key metadata. |
+
+## Lifecycle Rules
+
+Lifecycle rules automate object management by scheduling deletions based on object age.
+
+### Enabling Lifecycle Enforcement
+
+By default, lifecycle enforcement is disabled. Enable it by setting the environment variable:
+
+```bash
+LIFECYCLE_ENABLED=true python run.py
+```
+
+Or in your `myfsio.env` file:
+
+```
+LIFECYCLE_ENABLED=true
+LIFECYCLE_INTERVAL_SECONDS=3600  # Check interval (default: 1 hour)
+```
+
+### Configuring Rules
+
+Once enabled, configure lifecycle rules via:
+- **Web UI:** Bucket Details → Lifecycle tab → Add Rule
+- **S3 API:** `PUT /<bucket>?lifecycle` with XML configuration
+
+### Available Actions
+
+| Action | Description |
+|--------|-------------|
+| **Expiration** | Delete current version objects after N days |
+| **NoncurrentVersionExpiration** | Delete old versions N days after becoming noncurrent (requires versioning) |
+| **AbortIncompleteMultipartUpload** | Clean up incomplete multipart uploads after N days |
+
+### Example Configuration (XML)
+
+```xml
+<LifecycleConfiguration>
+  <Rule>
+    <ID>DeleteOldLogs</ID>
+    <Status>Enabled</Status>
+    <Filter><Prefix>logs/</Prefix></Filter>
+    <Expiration><Days>30</Days></Expiration>
+  </Rule>
+</LifecycleConfiguration>
+```
### Performance Tuning
|
### Performance Tuning
|
||||||
|
|
||||||
| Variable | Default | Notes |
|
| Variable | Default | Notes |
|
||||||
@@ -231,14 +277,14 @@ The application automatically trusts these headers to generate correct presigned
|
|||||||
### Version Checking
|
### Version Checking
|
||||||
|
|
||||||
The application version is tracked in `app/version.py` and exposed via:
|
The application version is tracked in `app/version.py` and exposed via:
|
||||||
- **Health endpoint:** `GET /healthz` returns JSON with `version` field
|
- **Health endpoint:** `GET /myfsio/health` returns JSON with `version` field
|
||||||
- **Metrics dashboard:** Navigate to `/ui/metrics` to see the running version in the System Status card
|
- **Metrics dashboard:** Navigate to `/ui/metrics` to see the running version in the System Status card
|
||||||
|
|
||||||
To check your current version:
|
To check your current version:
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
# API health endpoint
|
# API health endpoint
|
||||||
curl http://localhost:5000/healthz
|
curl http://localhost:5000/myfsio/health
|
||||||
|
|
||||||
# Or inspect version.py directly
|
# Or inspect version.py directly
|
||||||
cat app/version.py | grep APP_VERSION
|
cat app/version.py | grep APP_VERSION
|
||||||
@@ -331,7 +377,7 @@ docker run -d \
|
|||||||
myfsio:latest
|
myfsio:latest
|
||||||
|
|
||||||
# 5. Verify health
|
# 5. Verify health
|
||||||
curl http://localhost:5000/healthz
|
curl http://localhost:5000/myfsio/health
|
||||||
```
|
```
|
||||||

### Version Compatibility Checks

@@ -341,6 +387,7 @@ Before upgrading across major versions, verify compatibility:

| From Version | To Version | Breaking Changes | Migration Required |
|--------------|------------|------------------|-------------------|
| 0.1.x | 0.2.x | None expected | No |
| 0.1.6 | 0.1.7 | None | No |
| < 0.1.0 | >= 0.1.0 | New IAM config format | Yes - run migration script |

**Automatic compatibility detection:**
@@ -455,7 +502,7 @@ docker run -d \
```bash
  myfsio:0.1.3  # specify previous version tag

# 3. Verify
curl http://localhost:5000/myfsio/health
```

#### Emergency Config Restore
@@ -481,7 +528,7 @@ For production environments requiring zero downtime:
```bash
APP_PORT=5001 UI_PORT=5101 python run.py &

# 2. Health check new instance
curl http://localhost:5001/myfsio/health

# 3. Update load balancer to route to new ports
```
@@ -497,7 +544,7 @@ After any update, verify functionality:

```bash
# 1. Health check
curl http://localhost:5000/myfsio/health

# 2. Login to UI
open http://localhost:5100/ui
```
@@ -541,7 +588,7 @@ APP_PID=$!
```bash
# Wait and health check
sleep 5
if curl -f http://localhost:5000/myfsio/health; then
  echo "Update successful!"
else
  echo "Health check failed, rolling back..."
```
@@ -555,6 +602,10 @@ fi

## 4. Authentication & IAM

MyFSIO implements a comprehensive Identity and Access Management (IAM) system that controls who can access your buckets and what operations they can perform. The system supports both simple action-based permissions and AWS-compatible policy syntax.

### Getting Started

1. On first boot, `data/.myfsio.sys/config/iam.json` is seeded with `localadmin / localadmin`, which has wildcard access.
2. Sign into the UI using those credentials, then open **IAM**:
   - **Create user**: supply a display name and optional JSON inline policy array.
@@ -562,48 +613,241 @@ fi
   - **Policy editor**: select a user, paste an array of objects (`{"bucket": "*", "actions": ["list", "read"]}`), and submit. Alias support includes AWS-style verbs (e.g., `s3:GetObject`).
3. Wildcard action `iam:*` is supported for admin user definitions.

### Authentication

The API expects every request to include authentication headers. The UI persists them in the Flask session after login.

| Header | Description |
| --- | --- |
| `X-Access-Key` | The user's access key identifier |
| `X-Secret-Key` | The user's secret key for signing |

**Security Features:**
- **Lockout Protection**: After `AUTH_MAX_ATTEMPTS` (default: 5) failed login attempts, the account is locked for `AUTH_LOCKOUT_MINUTES` (default: 15 minutes).
- **Session Management**: UI sessions remain valid for `SESSION_LIFETIME_DAYS` (default: 30 days).
- **Hot Reload**: IAM configuration changes take effect immediately without restart.

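The lockout rule above can be pictured as a per-key failure counter. A minimal Python sketch of that behaviour (illustrative only: the function names and in-memory store are assumptions, not MyFSIO's actual implementation):

```python
import time

AUTH_MAX_ATTEMPTS = 5
AUTH_LOCKOUT_MINUTES = 15

_failures = {}      # access key -> list of failure timestamps
_locked_until = {}  # access key -> unlock time (epoch seconds)

def record_failure(access_key, now=None):
    """Count a failed login; lock the key after AUTH_MAX_ATTEMPTS failures."""
    now = time.time() if now is None else now
    attempts = _failures.setdefault(access_key, [])
    attempts.append(now)
    if len(attempts) >= AUTH_MAX_ATTEMPTS:
        _locked_until[access_key] = now + AUTH_LOCKOUT_MINUTES * 60
        attempts.clear()

def is_locked(access_key, now=None):
    """True while the lockout window for this key is still open."""
    now = time.time() if now is None else now
    return _locked_until.get(access_key, 0.0) > now
```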

### Permission Model

MyFSIO uses a two-layer permission model:

1. **IAM User Policies** – Define what a user can do across the system (stored in `iam.json`)
2. **Bucket Policies** – Define who can access a specific bucket (stored in `bucket_policies.json`)

Both layers are evaluated for each request. A user must have permission in their IAM policy AND the bucket policy must allow the action (or have no explicit deny).

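The two-layer rule can be sketched as a single boolean check. A minimal Python illustration (the function and its arguments are hypothetical, not MyFSIO's internal API):

```python
def is_allowed(iam_policies, bucket, action, bucket_policy_allows=None):
    """Both layers must pass: the IAM policy must grant the action,
    and the bucket policy (when one exists) must allow it."""
    iam_ok = any(
        (p["bucket"] == "*" or p["bucket"].lower() == bucket.lower())
        and ("*" in p["actions"] or action in p["actions"])
        for p in iam_policies
    )
    if not iam_ok:
        return False
    # No bucket policy at all: the IAM grant stands on its own.
    if bucket_policy_allows is None:
        return True
    return bucket_policy_allows

policies = [{"bucket": "reports", "actions": ["list", "read"]}]
print(is_allowed(policies, "reports", "read"))         # -> True
print(is_allowed(policies, "reports", "write"))        # -> False
print(is_allowed(policies, "reports", "read", False))  # -> False (bucket policy denies)
```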
### Available IAM Actions

#### S3 Actions (Bucket/Object Operations)

| Action | Description | AWS Aliases |
| --- | --- | --- |
| `list` | List buckets and objects | `s3:ListBucket`, `s3:ListAllMyBuckets`, `s3:ListBucketVersions`, `s3:ListMultipartUploads`, `s3:ListParts` |
| `read` | Download objects, get metadata | `s3:GetObject`, `s3:GetObjectVersion`, `s3:GetObjectTagging`, `s3:GetObjectVersionTagging`, `s3:GetObjectAcl`, `s3:GetBucketVersioning`, `s3:HeadObject`, `s3:HeadBucket` |
| `write` | Upload objects, create buckets, manage tags | `s3:PutObject`, `s3:CreateBucket`, `s3:PutObjectTagging`, `s3:PutBucketVersioning`, `s3:CreateMultipartUpload`, `s3:UploadPart`, `s3:CompleteMultipartUpload`, `s3:AbortMultipartUpload`, `s3:CopyObject` |
| `delete` | Remove objects, versions, and buckets | `s3:DeleteObject`, `s3:DeleteObjectVersion`, `s3:DeleteBucket`, `s3:DeleteObjectTagging` |
| `share` | Manage Access Control Lists (ACLs) | `s3:PutObjectAcl`, `s3:PutBucketAcl`, `s3:GetBucketAcl` |
| `policy` | Manage bucket policies | `s3:PutBucketPolicy`, `s3:GetBucketPolicy`, `s3:DeleteBucketPolicy` |
| `lifecycle` | Manage lifecycle rules | `s3:GetLifecycleConfiguration`, `s3:PutLifecycleConfiguration`, `s3:DeleteLifecycleConfiguration`, `s3:GetBucketLifecycle`, `s3:PutBucketLifecycle` |
| `cors` | Manage CORS configuration | `s3:GetBucketCors`, `s3:PutBucketCors`, `s3:DeleteBucketCors` |
| `replication` | Configure and manage replication | `s3:GetReplicationConfiguration`, `s3:PutReplicationConfiguration`, `s3:DeleteReplicationConfiguration`, `s3:ReplicateObject`, `s3:ReplicateTags`, `s3:ReplicateDelete` |

#### IAM Actions (User Management)

| Action | Description | AWS Aliases |
| --- | --- | --- |
| `iam:list_users` | View all IAM users and their policies | `iam:ListUsers` |
| `iam:create_user` | Create new IAM users | `iam:CreateUser` |
| `iam:delete_user` | Delete IAM users | `iam:DeleteUser` |
| `iam:rotate_key` | Rotate user secret keys | `iam:RotateAccessKey` |
| `iam:update_policy` | Modify user policies | `iam:PutUserPolicy` |
| `iam:*` | **Admin wildcard** – grants all IAM actions | — |

#### Wildcards

| Wildcard | Scope | Description |
| --- | --- | --- |
| `*` (in actions) | All S3 actions | Grants `list`, `read`, `write`, `delete`, `share`, `policy`, `lifecycle`, `cors`, `replication` |
| `iam:*` | All IAM actions | Grants all `iam:*` actions for user management |
| `*` (in bucket) | All buckets | Policy applies to every bucket |

### IAM Policy Structure

User policies are stored as a JSON array of policy objects. Each object specifies a bucket and the allowed actions:

```json
[
  {
    "bucket": "<bucket-name-or-wildcard>",
    "actions": ["<action1>", "<action2>", ...]
  }
]
```

**Fields:**
- `bucket`: The bucket name (case-insensitive) or `*` for all buckets
- `actions`: Array of action strings (simple names or AWS aliases)

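A quick way to sanity-check a policy document before pasting it into the editor is to validate its shape. A small Python sketch (the validator is illustrative, not part of MyFSIO):

```python
def validate_policies(document):
    """Return a list of problems; an empty list means the shape looks right."""
    problems = []
    if not isinstance(document, list):
        return ["policy document must be a JSON array"]
    for i, entry in enumerate(document):
        if not isinstance(entry, dict):
            problems.append(f"entry {i}: must be an object")
            continue
        if not isinstance(entry.get("bucket"), str):
            problems.append(f"entry {i}: 'bucket' must be a string")
        actions = entry.get("actions")
        if not isinstance(actions, list) or not all(isinstance(a, str) for a in actions):
            problems.append(f"entry {i}: 'actions' must be an array of strings")
    return problems

print(validate_policies([{"bucket": "*", "actions": ["list", "read"]}]))  # -> []
```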
### Example User Policies

**Full Administrator (complete system access):**
```json
[{"bucket": "*", "actions": ["list", "read", "write", "delete", "share", "policy", "lifecycle", "cors", "replication", "iam:*"]}]
```

**Read-Only User (browse and download only):**
```json
[{"bucket": "*", "actions": ["list", "read"]}]
```

**Single Bucket Full Access (no access to other buckets):**
```json
[{"bucket": "user-bucket", "actions": ["list", "read", "write", "delete"]}]
```

**Multiple Bucket Access (different permissions per bucket):**
```json
[
  {"bucket": "public-data", "actions": ["list", "read"]},
  {"bucket": "my-uploads", "actions": ["list", "read", "write", "delete"]},
  {"bucket": "team-shared", "actions": ["list", "read", "write"]}
]
```

**IAM Manager (manage users but no data access):**
```json
[{"bucket": "*", "actions": ["iam:list_users", "iam:create_user", "iam:delete_user", "iam:rotate_key", "iam:update_policy"]}]
```

**Replication Operator (manage replication only):**
```json
[{"bucket": "*", "actions": ["list", "read", "replication"]}]
```

**Lifecycle Manager (configure object expiration):**
```json
[{"bucket": "*", "actions": ["list", "lifecycle"]}]
```

**CORS Administrator (configure cross-origin access):**
```json
[{"bucket": "*", "actions": ["cors"]}]
```

**Bucket Administrator (full bucket config, no IAM access):**
```json
[{"bucket": "my-bucket", "actions": ["list", "read", "write", "delete", "policy", "lifecycle", "cors"]}]
```

**Upload-Only User (write but cannot read back):**
```json
[{"bucket": "drop-box", "actions": ["write"]}]
```

**Backup Operator (read, list, and replicate):**
```json
[{"bucket": "*", "actions": ["list", "read", "replication"]}]
```

### Using AWS-Style Action Names

You can use AWS S3 action names instead of simple names. They are automatically normalized:

```json
[
  {
    "bucket": "my-bucket",
    "actions": [
      "s3:ListBucket",
      "s3:GetObject",
      "s3:PutObject",
      "s3:DeleteObject"
    ]
  }
]
```

This is equivalent to:
```json
[{"bucket": "my-bucket", "actions": ["list", "read", "write", "delete"]}]
```

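The normalization described above amounts to a lookup table from AWS verbs to simple names. A small Python sketch with a partial alias table drawn from the tables above (not exhaustive, and not MyFSIO's actual mapping code):

```python
# Partial alias table; see the action tables above for the full set.
ALIASES = {
    "s3:ListBucket": "list",
    "s3:ListAllMyBuckets": "list",
    "s3:GetObject": "read",
    "s3:HeadObject": "read",
    "s3:PutObject": "write",
    "s3:CreateBucket": "write",
    "s3:DeleteObject": "delete",
    "s3:DeleteBucket": "delete",
}

def normalize(actions):
    """Map AWS-style verbs to their simple names; simple names pass through."""
    return sorted({ALIASES.get(a, a) for a in actions})

print(normalize(["s3:ListBucket", "s3:GetObject", "s3:PutObject", "s3:DeleteObject"]))
# -> ['delete', 'list', 'read', 'write']
```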
### Managing Users via API

```bash
# List all users (requires iam:list_users)
curl http://localhost:5000/iam/users \
  -H "X-Access-Key: ..." -H "X-Secret-Key: ..."

# Create a new user (requires iam:create_user)
curl -X POST http://localhost:5000/iam/users \
  -H "Content-Type: application/json" \
  -H "X-Access-Key: ..." -H "X-Secret-Key: ..." \
  -d '{
    "display_name": "New User",
    "policies": [{"bucket": "*", "actions": ["list", "read"]}]
  }'

# Rotate user secret (requires iam:rotate_key)
curl -X POST http://localhost:5000/iam/users/<access-key>/rotate \
  -H "X-Access-Key: ..." -H "X-Secret-Key: ..."

# Update user policies (requires iam:update_policy)
curl -X PUT http://localhost:5000/iam/users/<access-key>/policies \
  -H "Content-Type: application/json" \
  -H "X-Access-Key: ..." -H "X-Secret-Key: ..." \
  -d '[{"bucket": "*", "actions": ["list", "read", "write"]}]'

# Delete a user (requires iam:delete_user)
curl -X DELETE http://localhost:5000/iam/users/<access-key> \
  -H "X-Access-Key: ..." -H "X-Secret-Key: ..."
```

### Permission Precedence

When a request is made, permissions are evaluated in this order:

1. **Authentication** – Verify the access key and secret key are valid
2. **Lockout Check** – Ensure the account is not locked due to failed attempts
3. **IAM Policy Check** – Verify the user has the required action for the target bucket
4. **Bucket Policy Check** – If a bucket policy exists, verify it allows the action

A request is allowed only if:
- The IAM policy grants the action, AND
- The bucket policy allows the action (or no bucket policy exists)

### Common Permission Scenarios

| Scenario | Required Actions |
| --- | --- |
| Browse bucket contents | `list` |
| Download a file | `read` |
| Upload a file | `write` |
| Delete a file | `delete` |
| Generate presigned URL (GET) | `read` |
| Generate presigned URL (PUT) | `write` |
| Generate presigned URL (DELETE) | `delete` |
| Enable versioning | `write` (includes `s3:PutBucketVersioning`) |
| View bucket policy | `policy` |
| Modify bucket policy | `policy` |
| Configure lifecycle rules | `lifecycle` |
| View lifecycle rules | `lifecycle` |
| Configure CORS | `cors` |
| View CORS rules | `cors` |
| Configure replication | `replication` (admin-only for creation) |
| Pause/resume replication | `replication` |
| Manage other users | `iam:*` or specific `iam:` actions |
| Set bucket quotas | `iam:*` or `iam:list_users` (admin feature) |

### Security Best Practices

1. **Principle of Least Privilege** – Grant only the permissions users need
2. **Avoid Wildcards** – Use specific bucket names instead of `*` when possible
3. **Rotate Secrets Regularly** – Use the rotate key feature periodically
4. **Separate Admin Accounts** – Don't use admin accounts for daily operations
5. **Monitor Failed Logins** – Check logs for repeated authentication failures
6. **Use Bucket Policies for Fine-Grained Control** – Combine with IAM for defense in depth

## 5. Bucket Policies & Presets

- **Storage**: Policies are persisted in `data/.myfsio.sys/config/bucket_policies.json` under `{"policies": {"bucket": {...}}}`.
@@ -616,7 +860,7 @@ The API expects every request to include `X-Access-Key` and `X-Secret-Key` heade
### Editing via CLI

```bash
curl -X PUT "http://127.0.0.1:5000/test?policy" \
  -H "Content-Type: application/json" \
  -H "X-Access-Key: ..." -H "X-Secret-Key: ..." \
  -d '{
```

@@ -634,12 +878,53 @@ curl -X PUT http://127.0.0.1:5000/bucket-policy/test \

The UI will reflect this change as soon as the request completes thanks to the hot reload.

### UI Object Browser

The bucket detail page includes a powerful object browser with the following features:

#### Folder Navigation

Objects with forward slashes (`/`) in their keys are displayed as a folder hierarchy. Click a folder row to navigate into it. A breadcrumb navigation bar shows your current path and allows quick navigation back to parent folders or the root.

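The folder view described above can be derived purely from key prefixes. A minimal Python sketch of that grouping (illustrative, not the UI's actual code):

```python
def list_folder(keys, prefix=""):
    """Split object keys under `prefix` into immediate subfolders and files."""
    folders, files = set(), []
    for key in keys:
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        if "/" in rest:
            # Keep only the first path segment as a folder entry.
            folders.add(rest.split("/", 1)[0] + "/")
        else:
            files.append(rest)
    return sorted(folders), files

keys = ["logs/2024/app.log", "logs/2024/err.log", "logs/readme.txt", "data.csv"]
print(list_folder(keys))           # -> (['logs/'], ['data.csv'])
print(list_folder(keys, "logs/"))  # -> (['2024/'], ['readme.txt'])
```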
#### Pagination & Infinite Scroll

- Objects load in configurable batches (50, 100, 150, 200, or 250 per page)
- Scroll to the bottom to automatically load more objects (infinite scroll)
- A **Load more** button is available as a fallback for touch devices or when infinite scroll doesn't trigger
- The footer shows the current load status (e.g., "Showing 100 of 500 objects")

#### Bulk Operations

- Select multiple objects using checkboxes
- **Bulk Delete**: Delete multiple objects at once
- **Bulk Download**: Download selected objects as individual files

#### Search & Filter

Use the search box to filter objects by name in real time. The filter applies to the currently loaded objects.

#### Error Handling

If object loading fails (e.g., a network error), a friendly error message is displayed with a **Retry** button to attempt loading again.

#### Object Preview

Click any object row to view its details in the preview sidebar:
- File size and last modified date
- ETag (content hash)
- Custom metadata (if present)
- Download and presign (share link) buttons
- Version history (when versioning is enabled)

#### Drag & Drop Upload

Drag files directly onto the objects table to upload them to the current bucket and folder path.

## 6. Presigned URLs

- Trigger from the UI using the **Presign** button after selecting an object.
- Supported methods: `GET`, `PUT`, `DELETE`; expiration must be `1..604800` seconds.
- The service signs requests using the caller's IAM credentials and enforces bucket policies both when issuing and when the presigned URL is used.
- Legacy share links have been removed; presigned URLs now handle both private and public workflows.

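The validation rules above (the method whitelist plus the `1..604800` second window) can be sketched as a small pre-flight check. Illustrative Python, not MyFSIO's server-side code:

```python
ALLOWED_METHODS = {"GET", "PUT", "DELETE"}
MAX_EXPIRES = 604_800  # 7 days, the documented upper bound

def validate_presign(method, expires_in):
    """Return an error message, or None when the request is acceptable."""
    if method.upper() not in ALLOWED_METHODS:
        return f"unsupported method: {method}"
    if not 1 <= expires_in <= MAX_EXPIRES:
        return f"expires_in must be within 1..{MAX_EXPIRES} seconds"
    return None

print(validate_presign("GET", 900))   # -> None (valid request)
print(validate_presign("POST", 900))  # -> unsupported method: POST
```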
### Multipart Upload Example

@@ -862,7 +1147,84 @@ curl -X PUT "http://localhost:5000/bucket/<bucket>?quota" \
```
</Error>
```

## 9. Operation Metrics

Operation metrics provide real-time visibility into API request statistics, including request counts, latency, error rates, and bandwidth usage.

### Enabling Operation Metrics

By default, operation metrics are disabled. Enable by setting the environment variable:

```bash
OPERATION_METRICS_ENABLED=true python run.py
```

Or in your `myfsio.env` file:
```
OPERATION_METRICS_ENABLED=true
OPERATION_METRICS_INTERVAL_MINUTES=5
OPERATION_METRICS_RETENTION_HOURS=24
```

### Configuration Options

| Variable | Default | Description |
|----------|---------|-------------|
| `OPERATION_METRICS_ENABLED` | `false` | Enable/disable operation metrics |
| `OPERATION_METRICS_INTERVAL_MINUTES` | `5` | Snapshot interval (minutes) |
| `OPERATION_METRICS_RETENTION_HOURS` | `24` | History retention period (hours) |

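These settings follow the usual environment-variable pattern. A minimal Python sketch of how such values could be read with the documented defaults (illustrative; MyFSIO's actual config loader may differ):

```python
import os

def load_metrics_config(env=os.environ):
    """Read the three metrics settings, falling back to the documented defaults."""
    return {
        "enabled": env.get("OPERATION_METRICS_ENABLED", "false").lower() == "true",
        "interval_minutes": int(env.get("OPERATION_METRICS_INTERVAL_MINUTES", "5")),
        "retention_hours": int(env.get("OPERATION_METRICS_RETENTION_HOURS", "24")),
    }

print(load_metrics_config({}))
# -> {'enabled': False, 'interval_minutes': 5, 'retention_hours': 24}
```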
### What's Tracked

**Request Statistics:**
- Request counts by HTTP method (GET, PUT, POST, DELETE, HEAD, OPTIONS)
- Response status codes grouped by class (2xx, 3xx, 4xx, 5xx)
- Latency statistics (min, max, average)
- Bytes transferred in/out

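The min/max/average latency summary can be computed from raw per-request samples. A tiny Python sketch (illustrative only, not MyFSIO's aggregation code):

```python
def latency_stats(samples_ms):
    """Collapse raw per-request latencies into the min/max/average summary."""
    if not samples_ms:
        return {"min": 0.0, "max": 0.0, "avg": 0.0}
    return {
        "min": min(samples_ms),
        "max": max(samples_ms),
        "avg": sum(samples_ms) / len(samples_ms),
    }

print(latency_stats([12.0, 3.5, 40.0]))  # -> {'min': 3.5, 'max': 40.0, 'avg': 18.5}
```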
**Endpoint Breakdown:**
- `object` - Object operations (GET/PUT/DELETE objects)
- `bucket` - Bucket operations (list, create, delete buckets)
- `ui` - Web UI requests
- `service` - Health checks, internal endpoints
- `kms` - KMS API operations

**S3 Error Codes:**
Tracks API-specific error codes like `NoSuchKey`, `AccessDenied`, `BucketNotFound`. Note: these are separate from HTTP status codes; a 404 from the UI won't appear here, only S3 API errors.

### API Endpoints

```bash
# Get current operation metrics
curl http://localhost:5100/ui/metrics/operations \
  -H "X-Access-Key: ..." -H "X-Secret-Key: ..."

# Get operation metrics history
curl http://localhost:5100/ui/metrics/operations/history \
  -H "X-Access-Key: ..." -H "X-Secret-Key: ..."

# Filter history by time range
curl "http://localhost:5100/ui/metrics/operations/history?hours=6" \
  -H "X-Access-Key: ..." -H "X-Secret-Key: ..."
```

### Storage Location

Operation metrics data is stored at:
```
data/.myfsio.sys/config/operation_metrics.json
```

### UI Dashboard

When enabled, the Metrics page (`/ui/metrics`) shows an "API Operations" section with:
- Summary cards: Requests, Success Rate, Errors, Latency, Bytes In, Bytes Out
- Charts: Requests by Method (doughnut), Requests by Status (bar), Requests by Endpoint (horizontal bar)
- S3 Error Codes table with distribution

Data refreshes every 5 seconds.

## 10. Site Replication

### Permission Model

@@ -999,7 +1361,7 @@ To set up two-way replication (Server A ↔ Server B):

**Note**: Deleting a bucket will automatically remove its associated replication configuration.

## 12. Running Tests

```bash
pytest -q
```

@@ -1009,7 +1371,7 @@ The suite now includes a boto3 integration test that spins up a live HTTP server

The suite covers bucket CRUD, presigned downloads, bucket policy enforcement, and regression tests for anonymous reads when a Public policy is attached.

## 13. Troubleshooting

| Symptom | Likely Cause | Fix |
| --- | --- | --- |

@@ -1018,7 +1380,7 @@ The suite covers bucket CRUD, presigned downloads, bucket policy enforcement, an
| Presign modal errors with 403 | IAM user lacks `read/write/delete` for target bucket or bucket policy denies | Update IAM inline policies or remove conflicting deny statements. |
| Large upload rejected immediately | File exceeds `MAX_UPLOAD_SIZE` | Increase env var or shrink object. |

## 14. API Matrix

```
GET /                        # List buckets
```

@@ -1028,10 +1390,9 @@ GET /<bucket> # List objects
```
PUT /<bucket>/<key>          # Upload object
GET /<bucket>/<key>          # Download object
DELETE /<bucket>/<key>       # Delete object
GET /<bucket>?policy         # Fetch policy
PUT /<bucket>?policy         # Upsert policy
DELETE /<bucket>?policy      # Delete policy
GET /<bucket>?quota          # Get bucket quota
PUT /<bucket>?quota          # Set bucket quota (admin only)
```

pytest.ini (new file)

@@ -0,0 +1,5 @@
[pytest]
testpaths = tests
norecursedirs = data .git __pycache__ .venv
markers =
    integration: marks tests as integration tests (may require external services)
@@ -1,10 +1,12 @@
Flask>=3.1.2
Flask-Limiter>=4.1.1
Flask-Cors>=6.0.2
Flask-WTF>=1.2.2
python-dotenv>=1.2.1
pytest>=9.0.2
requests>=2.32.5
boto3>=1.42.14
waitress>=3.0.2
psutil>=7.1.3
cryptography>=46.0.3
defusedxml>=0.7.1

run.py

@@ -6,6 +6,17 @@ import os
import sys
import warnings
from multiprocessing import Process
from pathlib import Path

from dotenv import load_dotenv

for _env_file in [
    Path("/opt/myfsio/myfsio.env"),
    Path.cwd() / ".env",
    Path.cwd() / "myfsio.env",
]:
    if _env_file.exists():
        load_dotenv(_env_file, override=True)

from app import create_api_app, create_ui_app
from app.config import AppConfig
@@ -4,8 +4,6 @@
# This script sets up MyFSIO for production use on Linux systems.
#
# Usage:
#   ./install.sh [OPTIONS]
#
# Options:

@@ -23,14 +21,6 @@

set -e

INSTALL_DIR="/opt/myfsio"
DATA_DIR="/var/lib/myfsio"
LOG_DIR="/var/log/myfsio"

@@ -42,7 +32,6 @@ SKIP_SYSTEMD=false
BINARY_PATH=""
AUTO_YES=false

while [[ $# -gt 0 ]]; do
  case $1 in
    --install-dir)

@@ -90,27 +79,30 @@ while [[ $# -gt 0 ]]; do
      exit 0
      ;;
    *)
      echo "Unknown option: $1"
      exit 1
      ;;
  esac
done

echo ""
echo "============================================================"
echo "  MyFSIO Installation Script"
echo "  S3-Compatible Object Storage"
echo "============================================================"
echo ""
echo "Documentation: https://go.jzwsite.com/myfsio"
echo ""

if [[ $EUID -ne 0 ]]; then
  echo "Error: This script must be run as root (use sudo)"
  exit 1
fi

echo "------------------------------------------------------------"
echo "STEP 1: Review Installation Configuration"
echo "------------------------------------------------------------"
echo ""
echo "  Install directory: $INSTALL_DIR"
echo "  Data directory: $DATA_DIR"
echo "  Log directory: $LOG_DIR"

@@ -125,9 +117,8 @@ if [[ -n "$BINARY_PATH" ]]; then
fi
echo ""

if [[ "$AUTO_YES" != true ]]; then
  read -p "Do you want to proceed with these settings? [y/N] " -n 1 -r
  echo
  if [[ ! $REPLY =~ ^[Yy]$ ]]; then
    echo "Installation cancelled."

@@ -136,48 +127,70 @@ fi
fi

echo ""
echo "------------------------------------------------------------"
echo "STEP 2: Creating System User"
echo "------------------------------------------------------------"
echo ""
if id "$SERVICE_USER" &>/dev/null; then
  echo "  [OK] User '$SERVICE_USER' already exists"
else
  useradd --system --no-create-home --shell /usr/sbin/nologin "$SERVICE_USER"
  echo "  [OK] Created user '$SERVICE_USER'"
fi

echo ""
echo "------------------------------------------------------------"
echo "STEP 3: Creating Directories"
echo "------------------------------------------------------------"
echo ""
mkdir -p "$INSTALL_DIR"
|
mkdir -p "$INSTALL_DIR"
|
||||||
|
echo " [OK] Created $INSTALL_DIR"
|
||||||
mkdir -p "$DATA_DIR"
|
mkdir -p "$DATA_DIR"
|
||||||
|
echo " [OK] Created $DATA_DIR"
|
||||||
mkdir -p "$LOG_DIR"
|
mkdir -p "$LOG_DIR"
|
||||||
echo " Created $INSTALL_DIR"
|
echo " [OK] Created $LOG_DIR"
|
||||||
echo " Created $DATA_DIR"
|
|
||||||
echo " Created $LOG_DIR"
|
|
||||||
|
|
||||||
echo -e "${GREEN}[3/7]${NC} Installing binary..."
|
echo ""
|
||||||
|
echo "------------------------------------------------------------"
|
||||||
|
echo "STEP 4: Installing Binary"
|
||||||
|
echo "------------------------------------------------------------"
|
||||||
|
echo ""
|
||||||
if [[ -n "$BINARY_PATH" ]]; then
|
if [[ -n "$BINARY_PATH" ]]; then
|
||||||
if [[ -f "$BINARY_PATH" ]]; then
|
if [[ -f "$BINARY_PATH" ]]; then
|
||||||
cp "$BINARY_PATH" "$INSTALL_DIR/myfsio"
|
cp "$BINARY_PATH" "$INSTALL_DIR/myfsio"
|
||||||
echo " Copied binary from $BINARY_PATH"
|
echo " [OK] Copied binary from $BINARY_PATH"
|
||||||
else
|
else
|
||||||
echo -e "${RED}Error: Binary not found at $BINARY_PATH${NC}"
|
echo " [ERROR] Binary not found at $BINARY_PATH"
|
||||||
exit 1
|
exit 1
|
||||||
fi
|
fi
|
||||||
elif [[ -f "./myfsio" ]]; then
|
elif [[ -f "./myfsio" ]]; then
|
||||||
cp "./myfsio" "$INSTALL_DIR/myfsio"
|
cp "./myfsio" "$INSTALL_DIR/myfsio"
|
||||||
echo " Copied binary from ./myfsio"
|
echo " [OK] Copied binary from ./myfsio"
|
||||||
else
|
else
|
||||||
echo -e "${RED}Error: No binary provided. Use --binary PATH or place 'myfsio' in current directory${NC}"
|
echo " [ERROR] No binary provided."
|
||||||
|
echo " Use --binary PATH or place 'myfsio' in current directory"
|
||||||
exit 1
|
exit 1
|
||||||
fi
|
fi
|
||||||
chmod +x "$INSTALL_DIR/myfsio"
|
chmod +x "$INSTALL_DIR/myfsio"
|
||||||
|
echo " [OK] Set executable permissions"
|
||||||
|
|
||||||
echo -e "${GREEN}[4/7]${NC} Generating secret key..."
|
echo ""
|
||||||
|
echo "------------------------------------------------------------"
|
||||||
|
echo "STEP 5: Generating Secret Key"
|
||||||
|
echo "------------------------------------------------------------"
|
||||||
|
echo ""
|
||||||
SECRET_KEY=$(openssl rand -base64 32)
|
SECRET_KEY=$(openssl rand -base64 32)
|
||||||
echo " Generated secure SECRET_KEY"
|
echo " [OK] Generated secure SECRET_KEY"
|
||||||
|
|
||||||
echo -e "${GREEN}[5/7]${NC} Creating environment file..."
|
echo ""
|
||||||
|
echo "------------------------------------------------------------"
|
||||||
|
echo "STEP 6: Creating Configuration File"
|
||||||
|
echo "------------------------------------------------------------"
|
||||||
|
echo ""
|
||||||
cat > "$INSTALL_DIR/myfsio.env" << EOF
|
cat > "$INSTALL_DIR/myfsio.env" << EOF
|
||||||
# MyFSIO Configuration
|
# MyFSIO Configuration
|
||||||
# Generated by install.sh on $(date)
|
# Generated by install.sh on $(date)
|
||||||
|
# Documentation: https://go.jzwsite.com/myfsio
|
||||||
|
|
||||||
# Storage paths
|
# Storage paths
|
||||||
STORAGE_ROOT=$DATA_DIR
|
STORAGE_ROOT=$DATA_DIR
|
||||||
@@ -206,20 +219,30 @@ RATE_LIMIT_DEFAULT=200 per minute
|
|||||||
# KMS_ENABLED=true
|
# KMS_ENABLED=true
|
||||||
EOF
|
EOF
|
||||||
chmod 600 "$INSTALL_DIR/myfsio.env"
|
chmod 600 "$INSTALL_DIR/myfsio.env"
|
||||||
echo " Created $INSTALL_DIR/myfsio.env"
|
echo " [OK] Created $INSTALL_DIR/myfsio.env"
|
||||||
|
|
||||||
echo -e "${GREEN}[6/7]${NC} Setting permissions..."
|
echo ""
|
||||||
|
echo "------------------------------------------------------------"
|
||||||
|
echo "STEP 7: Setting Permissions"
|
||||||
|
echo "------------------------------------------------------------"
|
||||||
|
echo ""
|
||||||
chown -R "$SERVICE_USER:$SERVICE_USER" "$INSTALL_DIR"
|
chown -R "$SERVICE_USER:$SERVICE_USER" "$INSTALL_DIR"
|
||||||
|
echo " [OK] Set ownership for $INSTALL_DIR"
|
||||||
chown -R "$SERVICE_USER:$SERVICE_USER" "$DATA_DIR"
|
chown -R "$SERVICE_USER:$SERVICE_USER" "$DATA_DIR"
|
||||||
|
echo " [OK] Set ownership for $DATA_DIR"
|
||||||
chown -R "$SERVICE_USER:$SERVICE_USER" "$LOG_DIR"
|
chown -R "$SERVICE_USER:$SERVICE_USER" "$LOG_DIR"
|
||||||
echo " Set ownership to $SERVICE_USER"
|
echo " [OK] Set ownership for $LOG_DIR"
|
||||||
|
|
||||||
if [[ "$SKIP_SYSTEMD" != true ]]; then
|
if [[ "$SKIP_SYSTEMD" != true ]]; then
|
||||||
echo -e "${GREEN}[7/7]${NC} Creating systemd service..."
|
echo ""
|
||||||
|
echo "------------------------------------------------------------"
|
||||||
|
echo "STEP 8: Creating Systemd Service"
|
||||||
|
echo "------------------------------------------------------------"
|
||||||
|
echo ""
|
||||||
cat > /etc/systemd/system/myfsio.service << EOF
|
cat > /etc/systemd/system/myfsio.service << EOF
|
||||||
[Unit]
|
[Unit]
|
||||||
Description=MyFSIO S3-Compatible Storage
|
Description=MyFSIO S3-Compatible Storage
|
||||||
Documentation=https://github.com/yourusername/myfsio
|
Documentation=https://go.jzwsite.com/myfsio
|
||||||
After=network.target
|
After=network.target
|
||||||
|
|
||||||
[Service]
|
[Service]
|
||||||
@@ -248,45 +271,100 @@ WantedBy=multi-user.target
|
|||||||
EOF
|
EOF
|
||||||
|
|
||||||
systemctl daemon-reload
|
systemctl daemon-reload
|
||||||
echo " Created /etc/systemd/system/myfsio.service"
|
echo " [OK] Created /etc/systemd/system/myfsio.service"
|
||||||
|
echo " [OK] Reloaded systemd daemon"
|
||||||
else
|
else
|
||||||
echo -e "${GREEN}[7/7]${NC} Skipping systemd service (--no-systemd)"
|
echo ""
|
||||||
|
echo "------------------------------------------------------------"
|
||||||
|
echo "STEP 8: Skipping Systemd Service (--no-systemd flag used)"
|
||||||
|
echo "------------------------------------------------------------"
|
||||||
fi
|
fi
|
||||||
|
|
||||||
echo ""
|
echo ""
|
||||||
echo -e "${GREEN}╔══════════════════════════════════════════════════════════╗${NC}"
|
echo "============================================================"
|
||||||
echo -e "${GREEN}║ Installation Complete! ║${NC}"
|
echo " Installation Complete!"
|
||||||
echo -e "${GREEN}╚══════════════════════════════════════════════════════════╝${NC}"
|
echo "============================================================"
|
||||||
echo ""
|
echo ""
|
||||||
echo -e "${YELLOW}Next steps:${NC}"
|
|
||||||
|
if [[ "$SKIP_SYSTEMD" != true ]]; then
|
||||||
|
echo "------------------------------------------------------------"
|
||||||
|
echo "STEP 9: Start the Service"
|
||||||
|
echo "------------------------------------------------------------"
|
||||||
|
echo ""
|
||||||
|
|
||||||
|
if [[ "$AUTO_YES" != true ]]; then
|
||||||
|
read -p "Would you like to start MyFSIO now? [Y/n] " -n 1 -r
|
||||||
|
echo
|
||||||
|
START_SERVICE=true
|
||||||
|
if [[ $REPLY =~ ^[Nn]$ ]]; then
|
||||||
|
START_SERVICE=false
|
||||||
|
fi
|
||||||
|
else
|
||||||
|
START_SERVICE=true
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [[ "$START_SERVICE" == true ]]; then
|
||||||
|
echo " Starting MyFSIO service..."
|
||||||
|
systemctl start myfsio
|
||||||
|
echo " [OK] Service started"
|
||||||
|
echo ""
|
||||||
|
|
||||||
|
read -p "Would you like to enable MyFSIO to start on boot? [Y/n] " -n 1 -r
|
||||||
|
echo
|
||||||
|
if [[ ! $REPLY =~ ^[Nn]$ ]]; then
|
||||||
|
systemctl enable myfsio
|
||||||
|
echo " [OK] Service enabled on boot"
|
||||||
|
fi
|
||||||
|
echo ""
|
||||||
|
|
||||||
|
sleep 2
|
||||||
|
echo " Service Status:"
|
||||||
|
echo " ---------------"
|
||||||
|
if systemctl is-active --quiet myfsio; then
|
||||||
|
echo " [OK] MyFSIO is running"
|
||||||
|
else
|
||||||
|
echo " [WARNING] MyFSIO may not have started correctly"
|
||||||
|
echo " Check logs with: journalctl -u myfsio -f"
|
||||||
|
fi
|
||||||
|
else
|
||||||
|
echo " [SKIPPED] Service not started"
|
||||||
|
echo ""
|
||||||
|
echo " To start manually, run:"
|
||||||
|
echo " sudo systemctl start myfsio"
|
||||||
|
echo ""
|
||||||
|
echo " To enable on boot, run:"
|
||||||
|
echo " sudo systemctl enable myfsio"
|
||||||
|
fi
|
||||||
|
fi
|
||||||
|
|
||||||
echo ""
|
echo ""
|
||||||
echo " 1. Review configuration:"
|
echo "============================================================"
|
||||||
echo " ${BLUE}cat $INSTALL_DIR/myfsio.env${NC}"
|
echo " Summary"
|
||||||
|
echo "============================================================"
|
||||||
echo ""
|
echo ""
|
||||||
echo " 2. Start the service:"
|
echo "Access Points:"
|
||||||
echo " ${BLUE}sudo systemctl start myfsio${NC}"
|
echo " API: http://$(hostname -I 2>/dev/null | awk '{print $1}' || echo "localhost"):$API_PORT"
|
||||||
|
echo " UI: http://$(hostname -I 2>/dev/null | awk '{print $1}' || echo "localhost"):$UI_PORT/ui"
|
||||||
echo ""
|
echo ""
|
||||||
echo " 3. Enable on boot:"
|
echo "Default Credentials:"
|
||||||
echo " ${BLUE}sudo systemctl enable myfsio${NC}"
|
|
||||||
echo ""
|
|
||||||
echo " 4. Check status:"
|
|
||||||
echo " ${BLUE}sudo systemctl status myfsio${NC}"
|
|
||||||
echo ""
|
|
||||||
echo " 5. View logs:"
|
|
||||||
echo " ${BLUE}sudo journalctl -u myfsio -f${NC}"
|
|
||||||
echo " ${BLUE}tail -f $LOG_DIR/app.log${NC}"
|
|
||||||
echo ""
|
|
||||||
echo -e "${YELLOW}Access:${NC}"
|
|
||||||
echo " API: http://$(hostname -I | awk '{print $1}'):$API_PORT"
|
|
||||||
echo " UI: http://$(hostname -I | awk '{print $1}'):$UI_PORT/ui"
|
|
||||||
echo ""
|
|
||||||
echo -e "${YELLOW}Default credentials:${NC}"
|
|
||||||
echo " Username: localadmin"
|
echo " Username: localadmin"
|
||||||
echo " Password: localadmin"
|
echo " Password: localadmin"
|
||||||
echo -e " ${RED}⚠ Change these immediately after first login!${NC}"
|
echo " [!] WARNING: Change these immediately after first login!"
|
||||||
echo ""
|
echo ""
|
||||||
echo -e "${YELLOW}Configuration files:${NC}"
|
echo "Configuration Files:"
|
||||||
echo " Environment: $INSTALL_DIR/myfsio.env"
|
echo " Environment: $INSTALL_DIR/myfsio.env"
|
||||||
echo " IAM Users: $DATA_DIR/.myfsio.sys/config/iam.json"
|
echo " IAM Users: $DATA_DIR/.myfsio.sys/config/iam.json"
|
||||||
echo " Bucket Policies: $DATA_DIR/.myfsio.sys/config/bucket_policies.json"
|
echo " Bucket Policies: $DATA_DIR/.myfsio.sys/config/bucket_policies.json"
|
||||||
echo ""
|
echo ""
|
||||||
|
echo "Useful Commands:"
|
||||||
|
echo " Check status: sudo systemctl status myfsio"
|
||||||
|
echo " View logs: sudo journalctl -u myfsio -f"
|
||||||
|
echo " Restart: sudo systemctl restart myfsio"
|
||||||
|
echo " Stop: sudo systemctl stop myfsio"
|
||||||
|
echo ""
|
||||||
|
echo "Documentation: https://go.jzwsite.com/myfsio"
|
||||||
|
echo ""
|
||||||
|
echo "============================================================"
|
||||||
|
echo " Thank you for installing MyFSIO!"
|
||||||
|
echo "============================================================"
|
||||||
|
echo ""
|
||||||
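The install hunks above drop the ANSI color variables in favor of plain-text STEP banners and `[OK]`/`[SKIP]` markers, which survive log capture and non-TTY output. A minimal standalone sketch of that logging style (the `step`, `ok`, and `skip` helper names are illustrative, not functions defined in install.sh):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Illustrative helpers mirroring the plain-text progress style the diff adopts.
step() {
  echo ""
  echo "------------------------------------------------------------"
  echo "STEP $1: $2"
  echo "------------------------------------------------------------"
  echo ""
}
ok()   { echo " [OK] $1"; }
skip() { echo " [SKIP] $1"; }

# Demo run against a throwaway directory.
step 1 "Creating Directories"
DEMO_DIR="$(mktemp -d)"
mkdir -p "$DEMO_DIR/data"
ok "Created $DEMO_DIR/data"
rm -rf "$DEMO_DIR"
```

Because the markers are fixed-width prefixes, `grep '\[OK\]'` or `grep '^STEP'` can summarize a captured install log.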
@@ -18,13 +18,6 @@
 
 set -e
 
-# Colors
-RED='\033[0;31m'
-GREEN='\033[0;32m'
-YELLOW='\033[1;33m'
-NC='\033[0m'
 
-# Default values
 INSTALL_DIR="/opt/myfsio"
 DATA_DIR="/var/lib/myfsio"
 LOG_DIR="/var/log/myfsio"
@@ -33,7 +26,6 @@ KEEP_DATA=false
 KEEP_LOGS=false
 AUTO_YES=false
 
-# Parse arguments
 while [[ $# -gt 0 ]]; do
 case $1 in
 --keep-data)
@@ -69,106 +61,184 @@ while [[ $# -gt 0 ]]; do
 exit 0
 ;;
 *)
-echo -e "${RED}Unknown option: $1${NC}"
+echo "Unknown option: $1"
 exit 1
 ;;
 esac
 done
 
-echo -e "${RED}"
+echo ""
-echo "╔══════════════════════════════════════════════════════════╗"
+echo "============================================================"
-echo "║ MyFSIO Uninstallation ║"
+echo " MyFSIO Uninstallation Script"
-echo "╚══════════════════════════════════════════════════════════╝"
+echo "============================================================"
-echo -e "${NC}"
+echo ""
+echo "Documentation: https://go.jzwsite.com/myfsio"
+echo ""
 
-# Check if running as root
 if [[ $EUID -ne 0 ]]; then
-echo -e "${RED}Error: This script must be run as root (use sudo)${NC}"
+echo "Error: This script must be run as root (use sudo)"
 exit 1
 fi
 
-echo -e "${YELLOW}The following will be removed:${NC}"
+echo "------------------------------------------------------------"
+echo "STEP 1: Review What Will Be Removed"
+echo "------------------------------------------------------------"
+echo ""
+echo "The following items will be removed:"
+echo ""
 echo " Install directory: $INSTALL_DIR"
 if [[ "$KEEP_DATA" != true ]]; then
-echo -e " Data directory: $DATA_DIR ${RED}(ALL YOUR DATA!)${NC}"
+echo " Data directory: $DATA_DIR (ALL YOUR DATA WILL BE DELETED!)"
 else
-echo " Data directory: $DATA_DIR (KEPT)"
+echo " Data directory: $DATA_DIR (WILL BE KEPT)"
 fi
 if [[ "$KEEP_LOGS" != true ]]; then
 echo " Log directory: $LOG_DIR"
 else
-echo " Log directory: $LOG_DIR (KEPT)"
+echo " Log directory: $LOG_DIR (WILL BE KEPT)"
 fi
 echo " Systemd service: /etc/systemd/system/myfsio.service"
 echo " System user: $SERVICE_USER"
 echo ""
 
 if [[ "$AUTO_YES" != true ]]; then
-echo -e "${RED}WARNING: This action cannot be undone!${NC}"
+echo "WARNING: This action cannot be undone!"
+echo ""
 read -p "Are you sure you want to uninstall MyFSIO? [y/N] " -n 1 -r
 echo
 if [[ ! $REPLY =~ ^[Yy]$ ]]; then
+echo ""
 echo "Uninstallation cancelled."
 exit 0
 fi
 
+if [[ "$KEEP_DATA" != true ]]; then
+echo ""
+read -p "This will DELETE ALL YOUR DATA. Type 'DELETE' to confirm: " CONFIRM
+if [[ "$CONFIRM" != "DELETE" ]]; then
+echo ""
+echo "Uninstallation cancelled."
+echo "Tip: Use --keep-data to preserve your data directory"
+exit 0
+fi
+fi
 fi
 
 echo ""
-echo -e "${GREEN}[1/5]${NC} Stopping service..."
+echo "------------------------------------------------------------"
+echo "STEP 2: Stopping Service"
+echo "------------------------------------------------------------"
+echo ""
 if systemctl is-active --quiet myfsio 2>/dev/null; then
 systemctl stop myfsio
-echo " Stopped myfsio service"
+echo " [OK] Stopped myfsio service"
 else
-echo " Service not running"
+echo " [SKIP] Service not running"
-fi
 
-echo -e "${GREEN}[2/5]${NC} Disabling service..."
-if systemctl is-enabled --quiet myfsio 2>/dev/null; then
-systemctl disable myfsio
-echo " Disabled myfsio service"
-else
-echo " Service not enabled"
-fi
 
-echo -e "${GREEN}[3/5]${NC} Removing systemd service..."
-if [[ -f /etc/systemd/system/myfsio.service ]]; then
-rm -f /etc/systemd/system/myfsio.service
-systemctl daemon-reload
-echo " Removed /etc/systemd/system/myfsio.service"
-else
-echo " Service file not found"
-fi
 
-echo -e "${GREEN}[4/5]${NC} Removing directories..."
-if [[ -d "$INSTALL_DIR" ]]; then
-rm -rf "$INSTALL_DIR"
-echo " Removed $INSTALL_DIR"
-fi
 
-if [[ "$KEEP_DATA" != true ]] && [[ -d "$DATA_DIR" ]]; then
-rm -rf "$DATA_DIR"
-echo " Removed $DATA_DIR"
-elif [[ "$KEEP_DATA" == true ]]; then
-echo " Kept $DATA_DIR"
-fi
 
-if [[ "$KEEP_LOGS" != true ]] && [[ -d "$LOG_DIR" ]]; then
-rm -rf "$LOG_DIR"
-echo " Removed $LOG_DIR"
-elif [[ "$KEEP_LOGS" == true ]]; then
-echo " Kept $LOG_DIR"
-fi
 
-echo -e "${GREEN}[5/5]${NC} Removing system user..."
-if id "$SERVICE_USER" &>/dev/null; then
-userdel "$SERVICE_USER" 2>/dev/null || true
-echo " Removed user '$SERVICE_USER'"
-else
-echo " User not found"
 fi
 
 echo ""
-echo -e "${GREEN}MyFSIO has been uninstalled.${NC}"
+echo "------------------------------------------------------------"
-if [[ "$KEEP_DATA" == true ]]; then
+echo "STEP 3: Disabling Service"
-echo -e "${YELLOW}Data preserved at: $DATA_DIR${NC}"
+echo "------------------------------------------------------------"
+echo ""
+if systemctl is-enabled --quiet myfsio 2>/dev/null; then
+systemctl disable myfsio
+echo " [OK] Disabled myfsio service"
+else
+echo " [SKIP] Service not enabled"
 fi
 
+echo ""
+echo "------------------------------------------------------------"
+echo "STEP 4: Removing Systemd Service File"
+echo "------------------------------------------------------------"
+echo ""
+if [[ -f /etc/systemd/system/myfsio.service ]]; then
+rm -f /etc/systemd/system/myfsio.service
+systemctl daemon-reload
+echo " [OK] Removed /etc/systemd/system/myfsio.service"
+echo " [OK] Reloaded systemd daemon"
+else
+echo " [SKIP] Service file not found"
+fi
 
+echo ""
+echo "------------------------------------------------------------"
+echo "STEP 5: Removing Installation Directory"
+echo "------------------------------------------------------------"
+echo ""
+if [[ -d "$INSTALL_DIR" ]]; then
+rm -rf "$INSTALL_DIR"
+echo " [OK] Removed $INSTALL_DIR"
+else
+echo " [SKIP] Directory not found: $INSTALL_DIR"
+fi
 
+echo ""
+echo "------------------------------------------------------------"
+echo "STEP 6: Removing Data Directory"
+echo "------------------------------------------------------------"
+echo ""
+if [[ "$KEEP_DATA" != true ]]; then
+if [[ -d "$DATA_DIR" ]]; then
+rm -rf "$DATA_DIR"
+echo " [OK] Removed $DATA_DIR"
+else
+echo " [SKIP] Directory not found: $DATA_DIR"
+fi
+else
+echo " [KEPT] Data preserved at: $DATA_DIR"
+fi
 
+echo ""
+echo "------------------------------------------------------------"
+echo "STEP 7: Removing Log Directory"
+echo "------------------------------------------------------------"
+echo ""
+if [[ "$KEEP_LOGS" != true ]]; then
+if [[ -d "$LOG_DIR" ]]; then
+rm -rf "$LOG_DIR"
+echo " [OK] Removed $LOG_DIR"
+else
+echo " [SKIP] Directory not found: $LOG_DIR"
+fi
+else
+echo " [KEPT] Logs preserved at: $LOG_DIR"
+fi
 
+echo ""
+echo "------------------------------------------------------------"
+echo "STEP 8: Removing System User"
+echo "------------------------------------------------------------"
+echo ""
+if id "$SERVICE_USER" &>/dev/null; then
+userdel "$SERVICE_USER" 2>/dev/null || true
+echo " [OK] Removed user '$SERVICE_USER'"
+else
+echo " [SKIP] User not found: $SERVICE_USER"
+fi
 
+echo ""
+echo "============================================================"
+echo " Uninstallation Complete!"
+echo "============================================================"
+echo ""
 
+if [[ "$KEEP_DATA" == true ]]; then
+echo "Your data has been preserved at: $DATA_DIR"
+echo ""
+echo "To reinstall MyFSIO with existing data, run:"
+echo " curl -fsSL https://go.jzwsite.com/myfsio-install | sudo bash"
+echo ""
+fi
 
+if [[ "$KEEP_LOGS" == true ]]; then
+echo "Your logs have been preserved at: $LOG_DIR"
+echo ""
+fi
 
+echo "Thank you for using MyFSIO."
+echo "Documentation: https://go.jzwsite.com/myfsio"
+echo ""
+echo "============================================================"
+echo ""
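The uninstall hunks add a second, type-to-confirm guard before the data directory is destroyed, separate from the usual y/N prompt. A minimal standalone sketch of that pattern (the `confirm_delete` function name is illustrative and not part of uninstall.sh; it reads from stdin so it can be piped for testing):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Illustrative re-creation of the "Type 'DELETE' to confirm" guard the diff adds.
# Returns success only when the user typed the exact magic word.
confirm_delete() {
  local confirm
  read -r -p "This will DELETE ALL YOUR DATA. Type 'DELETE' to confirm: " confirm
  [[ "$confirm" == "DELETE" ]]
}

# Demo: feed the confirmation via stdin instead of a terminal.
if echo "DELETE" | confirm_delete; then
  echo "confirmed"
else
  echo "cancelled"
fi
```

Requiring a full word rather than a single keystroke makes an accidental destructive answer much less likely, which is why it guards only the data-deletion path.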
1249  static/css/main.css  (file diff suppressed because it is too large)
BIN   (binary file not shown; before: 200 KiB)
BIN   (binary file not shown; before: 628 KiB)
BIN   static/images/MyFSIO.ico  (new file; binary file not shown; after: 200 KiB)
BIN   static/images/MyFSIO.png  (new file; binary file not shown; after: 872 KiB)
4238  static/js/bucket-detail-main.js  (new file; file diff suppressed because it is too large)
192   static/js/bucket-detail-operations.js  (new file)
@@ -0,0 +1,192 @@
|
|||||||
|
window.BucketDetailOperations = (function() {
|
||||||
|
'use strict';
|
||||||
|
|
||||||
|
let showMessage = function() {};
|
||||||
|
let escapeHtml = function(s) { return s; };
|
||||||
|
|
||||||
|
function init(config) {
|
||||||
|
showMessage = config.showMessage || showMessage;
|
||||||
|
escapeHtml = config.escapeHtml || escapeHtml;
|
||||||
|
}
|
||||||
|
|
||||||
|
async function loadLifecycleRules(card, endpoint) {
|
||||||
|
if (!card || !endpoint) return;
|
||||||
|
const body = card.querySelector('[data-lifecycle-body]');
|
||||||
|
if (!body) return;
|
||||||
|
|
||||||
|
try {
|
||||||
|
const response = await fetch(endpoint);
|
||||||
|
const data = await response.json();
|
||||||
|
|
||||||
|
if (!response.ok) {
|
||||||
|
body.innerHTML = `<tr><td colspan="5" class="text-center text-danger py-3">${escapeHtml(data.error || 'Failed to load')}</td></tr>`;
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
const rules = data.rules || [];
|
||||||
|
if (rules.length === 0) {
|
||||||
|
body.innerHTML = '<tr><td colspan="5" class="text-center text-muted py-3">No lifecycle rules configured</td></tr>';
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
body.innerHTML = rules.map(rule => {
|
||||||
|
const actions = [];
|
||||||
|
if (rule.expiration_days) actions.push(`Delete after ${rule.expiration_days} days`);
|
||||||
|
if (rule.noncurrent_days) actions.push(`Delete old versions after ${rule.noncurrent_days} days`);
|
||||||
|
if (rule.abort_mpu_days) actions.push(`Abort incomplete MPU after ${rule.abort_mpu_days} days`);
|
||||||
|
|
||||||
|
return `
|
||||||
|
<tr>
|
||||||
|
<td class="fw-medium">${escapeHtml(rule.id)}</td>
|
||||||
|
<td><code>${escapeHtml(rule.prefix || '(all)')}</code></td>
|
||||||
|
<td>${actions.map(a => `<div class="small">${escapeHtml(a)}</div>`).join('')}</td>
|
||||||
|
<td>
|
||||||
|
<span class="badge ${rule.status === 'Enabled' ? 'text-bg-success' : 'text-bg-secondary'}">${escapeHtml(rule.status)}</span>
|
||||||
|
</td>
|
||||||
|
<td class="text-end">
|
||||||
|
<button class="btn btn-sm btn-outline-danger" onclick="BucketDetailOperations.deleteLifecycleRule('${escapeHtml(rule.id)}')">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="12" height="12" fill="currentColor" viewBox="0 0 16 16">
|
||||||
|
<path d="M5.5 5.5A.5.5 0 0 1 6 6v6a.5.5 0 0 1-1 0V6a.5.5 0 0 1 .5-.5zm2.5 0a.5.5 0 0 1 .5.5v6a.5.5 0 0 1-1 0V6a.5.5 0 0 1 .5-.5zm3 .5a.5.5 0 0 0-1 0v6a.5.5 0 0 0 1 0V6z"/>
|
||||||
|
<path fill-rule="evenodd" d="M14.5 3a1 1 0 0 1-1 1H13v9a2 2 0 0 1-2 2H5a2 2 0 0 1-2-2V4h-.5a1 1 0 0 1-1-1V2a1 1 0 0 1 1-1H6a1 1 0 0 1 1-1h2a1 1 0 0 1 1 1h3.5a1 1 0 0 1 1 1v1zM4.118 4 4 4.059V13a1 1 0 0 0 1 1h6a1 1 0 0 0 1-1V4.059L11.882 4H4.118zM2.5 3V2h11v1h-11z"/>
|
||||||
|
</svg>
|
||||||
|
</button>
|
||||||
|
</td>
|
||||||
|
</tr>
|
||||||
|
`;
|
||||||
|
}).join('');
|
||||||
|
} catch (err) {
|
||||||
|
body.innerHTML = `<tr><td colspan="5" class="text-center text-danger py-3">${escapeHtml(err.message)}</td></tr>`;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
async function loadCorsRules(card, endpoint) {
|
||||||
|
if (!card || !endpoint) return;
|
||||||
|
const body = document.getElementById('cors-rules-body');
|
||||||
|
if (!body) return;
|
||||||
|
|
||||||
|
try {
|
||||||
|
const response = await fetch(endpoint);
|
||||||
|
const data = await response.json();
|
||||||
|
|
||||||
|
if (!response.ok) {
|
||||||
|
body.innerHTML = `<tr><td colspan="5" class="text-center text-danger py-3">${escapeHtml(data.error || 'Failed to load')}</td></tr>`;
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
const rules = data.rules || [];
|
||||||
|
if (rules.length === 0) {
|
||||||
|
body.innerHTML = '<tr><td colspan="5" class="text-center text-muted py-3">No CORS rules configured</td></tr>';
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
body.innerHTML = rules.map((rule, idx) => `
|
||||||
|
<tr>
|
||||||
|
<td>${(rule.allowed_origins || []).map(o => `<code class="d-block">${escapeHtml(o)}</code>`).join('')}</td>
|
||||||
|
<td>${(rule.allowed_methods || []).map(m => `<span class="badge text-bg-secondary me-1">${escapeHtml(m)}</span>`).join('')}</td>
|
||||||
|
<td class="small text-muted">${(rule.allowed_headers || []).slice(0, 3).join(', ')}${(rule.allowed_headers || []).length > 3 ? '...' : ''}</td>
|
||||||
|
<td class="text-muted">${rule.max_age_seconds || 0}s</td>
|
||||||
|
<td class="text-end">
|
||||||
|
<button class="btn btn-sm btn-outline-danger" onclick="BucketDetailOperations.deleteCorsRule(${idx})">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="12" height="12" fill="currentColor" viewBox="0 0 16 16">
|
||||||
|
<path d="M5.5 5.5A.5.5 0 0 1 6 6v6a.5.5 0 0 1-1 0V6a.5.5 0 0 1 .5-.5zm2.5 0a.5.5 0 0 1 .5.5v6a.5.5 0 0 1-1 0V6a.5.5 0 0 1 .5-.5zm3 .5a.5.5 0 0 0-1 0v6a.5.5 0 0 0 1 0V6z"/>
|
||||||
|
<path fill-rule="evenodd" d="M14.5 3a1 1 0 0 1-1 1H13v9a2 2 0 0 1-2 2H5a2 2 0 0 1-2-2V4h-.5a1 1 0 0 1-1-1V2a1 1 0 0 1 1-1H6a1 1 0 0 1 1-1h2a1 1 0 0 1 1 1h3.5a1 1 0 0 1 1 1v1zM4.118 4 4 4.059V13a1 1 0 0 0 1 1h6a1 1 0 0 0 1-1V4.059L11.882 4H4.118zM2.5 3V2h11v1h-11z"/>
|
||||||
|
</svg>
|
||||||
|
</button>
|
||||||
|
</td>
|
||||||
|
</tr>
|
||||||
|
`).join('');
|
||||||
|
} catch (err) {
|
||||||
|
body.innerHTML = `<tr><td colspan="5" class="text-center text-danger py-3">${escapeHtml(err.message)}</td></tr>`;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
async function loadAcl(card, endpoint) {
|
||||||
|
  if (!card || !endpoint) return;
  const body = card.querySelector('[data-acl-body]');
  if (!body) return;

  try {
    const response = await fetch(endpoint);
    const data = await response.json();

    if (!response.ok) {
      body.innerHTML = `<tr><td colspan="3" class="text-center text-danger py-3">${escapeHtml(data.error || 'Failed to load')}</td></tr>`;
      return;
    }

    const grants = data.grants || [];
    if (grants.length === 0) {
      body.innerHTML = '<tr><td colspan="3" class="text-center text-muted py-3">No ACL grants configured</td></tr>';
      return;
    }

    body.innerHTML = grants.map(grant => {
      const grantee = grant.grantee_type === 'CanonicalUser'
        ? grant.display_name || grant.grantee_id
        : grant.grantee_uri || grant.grantee_type;
      return `
        <tr>
          <td class="fw-medium">${escapeHtml(grantee)}</td>
          <td><span class="badge text-bg-info">${escapeHtml(grant.permission)}</span></td>
          <td class="text-muted small">${escapeHtml(grant.grantee_type)}</td>
        </tr>
      `;
    }).join('');
  } catch (err) {
    body.innerHTML = `<tr><td colspan="3" class="text-center text-danger py-3">${escapeHtml(err.message)}</td></tr>`;
  }
}

async function deleteLifecycleRule(ruleId) {
  if (!confirm(`Delete lifecycle rule "${ruleId}"?`)) return;
  const card = document.getElementById('lifecycle-rules-card');
  if (!card) return;
  const endpoint = card.dataset.lifecycleUrl;
  const csrfToken = window.getCsrfToken ? window.getCsrfToken() : '';

  try {
    const resp = await fetch(endpoint, {
      method: 'DELETE',
      headers: { 'Content-Type': 'application/json', 'X-CSRFToken': csrfToken },
      body: JSON.stringify({ rule_id: ruleId })
    });
    const data = await resp.json();
    if (!resp.ok) throw new Error(data.error || 'Failed to delete');
    showMessage({ title: 'Rule deleted', body: `Lifecycle rule "${ruleId}" has been deleted.`, variant: 'success' });
    loadLifecycleRules(card, endpoint);
  } catch (err) {
    showMessage({ title: 'Delete failed', body: err.message, variant: 'danger' });
  }
}

async function deleteCorsRule(index) {
  if (!confirm('Delete this CORS rule?')) return;
  const card = document.getElementById('cors-rules-card');
  if (!card) return;
  const endpoint = card.dataset.corsUrl;
  const csrfToken = window.getCsrfToken ? window.getCsrfToken() : '';

  try {
    const resp = await fetch(endpoint, {
      method: 'DELETE',
      headers: { 'Content-Type': 'application/json', 'X-CSRFToken': csrfToken },
      body: JSON.stringify({ rule_index: index })
    });
    const data = await resp.json();
    if (!resp.ok) throw new Error(data.error || 'Failed to delete');
    showMessage({ title: 'Rule deleted', body: 'CORS rule has been deleted.', variant: 'success' });
    loadCorsRules(card, endpoint);
  } catch (err) {
    showMessage({ title: 'Delete failed', body: err.message, variant: 'danger' });
  }
}

return {
  init: init,
  loadLifecycleRules: loadLifecycleRules,
  loadCorsRules: loadCorsRules,
  loadAcl: loadAcl,
  deleteLifecycleRule: deleteLifecycleRule,
  deleteCorsRule: deleteCorsRule
};
})();
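For reference, loadAcl() above picks each row's label with an inline ternary: CanonicalUser grants show a display name (falling back to the grantee ID), while group grants show their URI. A minimal sketch of that rule as a standalone function — granteeLabel() is a hypothetical extraction for illustration, not part of the module:

```javascript
// Hypothetical extraction of the grantee-display ternary used in loadAcl().
function granteeLabel(grant) {
  return grant.grantee_type === 'CanonicalUser'
    ? grant.display_name || grant.grantee_id
    : grant.grantee_uri || grant.grantee_type;
}

// CanonicalUser without a display name falls back to the ID:
console.log(granteeLabel({ grantee_type: 'CanonicalUser', grantee_id: 'abc123' }));
// Group grants show their URI:
console.log(granteeLabel({ grantee_type: 'Group', grantee_uri: 'http://acs.amazonaws.com/groups/global/AllUsers' }));
```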
static/js/bucket-detail-upload.js (new file, 548 lines)
@@ -0,0 +1,548 @@
window.BucketDetailUpload = (function() {
  'use strict';

  const MULTIPART_THRESHOLD = 8 * 1024 * 1024;
  const CHUNK_SIZE = 8 * 1024 * 1024;

  let state = {
    isUploading: false,
    uploadProgress: { current: 0, total: 0, currentFile: '' }
  };

  let elements = {};
  let callbacks = {};

  function init(config) {
    elements = {
      uploadForm: config.uploadForm,
      uploadFileInput: config.uploadFileInput,
      uploadModal: config.uploadModal,
      uploadModalEl: config.uploadModalEl,
      uploadSubmitBtn: config.uploadSubmitBtn,
      uploadCancelBtn: config.uploadCancelBtn,
      uploadBtnText: config.uploadBtnText,
      uploadDropZone: config.uploadDropZone,
      uploadDropZoneLabel: config.uploadDropZoneLabel,
      uploadProgressStack: config.uploadProgressStack,
      uploadKeyPrefix: config.uploadKeyPrefix,
      singleFileOptions: config.singleFileOptions,
      bulkUploadProgress: config.bulkUploadProgress,
      bulkUploadStatus: config.bulkUploadStatus,
      bulkUploadCounter: config.bulkUploadCounter,
      bulkUploadProgressBar: config.bulkUploadProgressBar,
      bulkUploadCurrentFile: config.bulkUploadCurrentFile,
      bulkUploadResults: config.bulkUploadResults,
      bulkUploadSuccessAlert: config.bulkUploadSuccessAlert,
      bulkUploadErrorAlert: config.bulkUploadErrorAlert,
      bulkUploadSuccessCount: config.bulkUploadSuccessCount,
      bulkUploadErrorCount: config.bulkUploadErrorCount,
      bulkUploadErrorList: config.bulkUploadErrorList,
      floatingProgress: config.floatingProgress,
      floatingProgressBar: config.floatingProgressBar,
      floatingProgressStatus: config.floatingProgressStatus,
      floatingProgressTitle: config.floatingProgressTitle,
      floatingProgressExpand: config.floatingProgressExpand
    };

    callbacks = {
      showMessage: config.showMessage || function() {},
      formatBytes: config.formatBytes || function(b) { return b + ' bytes'; },
      escapeHtml: config.escapeHtml || function(s) { return s; },
      onUploadComplete: config.onUploadComplete || function() {},
      hasFolders: config.hasFolders || function() { return false; },
      getCurrentPrefix: config.getCurrentPrefix || function() { return ''; }
    };

    setupEventListeners();
    setupBeforeUnload();
  }

  function isUploading() {
    return state.isUploading;
  }

  function setupBeforeUnload() {
    window.addEventListener('beforeunload', (e) => {
      if (state.isUploading) {
        e.preventDefault();
        e.returnValue = 'Upload in progress. Are you sure you want to leave?';
        return e.returnValue;
      }
    });
  }

  function showFloatingProgress() {
    if (elements.floatingProgress) {
      elements.floatingProgress.classList.remove('d-none');
    }
  }

  function hideFloatingProgress() {
    if (elements.floatingProgress) {
      elements.floatingProgress.classList.add('d-none');
    }
  }

  function updateFloatingProgress(current, total, currentFile) {
    state.uploadProgress = { current, total, currentFile: currentFile || '' };
    if (elements.floatingProgressBar && total > 0) {
      const percent = Math.round((current / total) * 100);
      elements.floatingProgressBar.style.width = `${percent}%`;
    }
    if (elements.floatingProgressStatus) {
      if (currentFile) {
        elements.floatingProgressStatus.textContent = `${current}/${total} files - ${currentFile}`;
      } else {
        elements.floatingProgressStatus.textContent = `${current}/${total} files completed`;
      }
    }
    if (elements.floatingProgressTitle) {
      elements.floatingProgressTitle.textContent = `Uploading ${total} file${total !== 1 ? 's' : ''}...`;
    }
  }

  function refreshUploadDropLabel() {
    if (!elements.uploadDropZoneLabel || !elements.uploadFileInput) return;
    const files = elements.uploadFileInput.files;
    if (!files || files.length === 0) {
      elements.uploadDropZoneLabel.textContent = 'No file selected';
      if (elements.singleFileOptions) elements.singleFileOptions.classList.remove('d-none');
      return;
    }
    elements.uploadDropZoneLabel.textContent = files.length === 1 ? files[0].name : `${files.length} files selected`;
    if (elements.singleFileOptions) {
      elements.singleFileOptions.classList.toggle('d-none', files.length > 1);
    }
  }

  function updateUploadBtnText() {
    if (!elements.uploadBtnText || !elements.uploadFileInput) return;
    const files = elements.uploadFileInput.files;
    if (!files || files.length <= 1) {
      elements.uploadBtnText.textContent = 'Upload';
    } else {
      elements.uploadBtnText.textContent = `Upload ${files.length} files`;
    }
  }

  function resetUploadUI() {
    if (elements.bulkUploadProgress) elements.bulkUploadProgress.classList.add('d-none');
    if (elements.bulkUploadResults) elements.bulkUploadResults.classList.add('d-none');
    if (elements.bulkUploadSuccessAlert) elements.bulkUploadSuccessAlert.classList.remove('d-none');
    if (elements.bulkUploadErrorAlert) elements.bulkUploadErrorAlert.classList.add('d-none');
    if (elements.bulkUploadErrorList) elements.bulkUploadErrorList.innerHTML = '';
    if (elements.uploadSubmitBtn) elements.uploadSubmitBtn.disabled = false;
    if (elements.uploadFileInput) elements.uploadFileInput.disabled = false;
    if (elements.uploadProgressStack) elements.uploadProgressStack.innerHTML = '';
    if (elements.uploadDropZone) {
      elements.uploadDropZone.classList.remove('upload-locked');
      elements.uploadDropZone.style.pointerEvents = '';
    }
    state.isUploading = false;
    hideFloatingProgress();
  }

  function setUploadLockState(locked) {
    if (elements.uploadDropZone) {
      elements.uploadDropZone.classList.toggle('upload-locked', locked);
      elements.uploadDropZone.style.pointerEvents = locked ? 'none' : '';
    }
    if (elements.uploadFileInput) {
      elements.uploadFileInput.disabled = locked;
    }
  }

  function createProgressItem(file) {
    const item = document.createElement('div');
    item.className = 'upload-progress-item';
    item.dataset.state = 'uploading';
    item.innerHTML = `
      <div class="d-flex justify-content-between align-items-start">
        <div class="min-width-0 flex-grow-1">
          <div class="file-name">${callbacks.escapeHtml(file.name)}</div>
          <div class="file-size">${callbacks.formatBytes(file.size)}</div>
        </div>
        <div class="upload-status text-end ms-2">Preparing...</div>
      </div>
      <div class="progress-container">
        <div class="progress">
          <div class="progress-bar bg-primary" role="progressbar" style="width: 0%"></div>
        </div>
        <div class="progress-text">
          <span class="progress-loaded">0 B</span>
          <span class="progress-percent">0%</span>
        </div>
      </div>
    `;
    return item;
  }

  function updateProgressItem(item, { loaded, total, status, progressState, error }) {
    if (progressState) item.dataset.state = progressState;
    const statusEl = item.querySelector('.upload-status');
    const progressBar = item.querySelector('.progress-bar');
    const progressLoaded = item.querySelector('.progress-loaded');
    const progressPercent = item.querySelector('.progress-percent');

    if (status) {
      statusEl.textContent = status;
      statusEl.className = 'upload-status text-end ms-2';
      if (progressState === 'success') statusEl.classList.add('success');
      if (progressState === 'error') statusEl.classList.add('error');
    }
    if (typeof loaded === 'number' && typeof total === 'number' && total > 0) {
      const percent = Math.round((loaded / total) * 100);
      progressBar.style.width = `${percent}%`;
      progressLoaded.textContent = `${callbacks.formatBytes(loaded)} / ${callbacks.formatBytes(total)}`;
      progressPercent.textContent = `${percent}%`;
    }
    if (error) {
      const progressContainer = item.querySelector('.progress-container');
      if (progressContainer) {
        progressContainer.innerHTML = `<div class="text-danger small mt-1">${callbacks.escapeHtml(error)}</div>`;
      }
    }
  }

  async function uploadMultipart(file, objectKey, metadata, progressItem, urls) {
    const csrfToken = document.querySelector('input[name="csrf_token"]')?.value;

    updateProgressItem(progressItem, { status: 'Initiating...', loaded: 0, total: file.size });
    const initResp = await fetch(urls.initUrl, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json', 'X-CSRFToken': csrfToken || '' },
      body: JSON.stringify({ object_key: objectKey, metadata })
    });
    if (!initResp.ok) {
      const err = await initResp.json().catch(() => ({}));
      throw new Error(err.error || 'Failed to initiate upload');
    }
    const { upload_id } = await initResp.json();

    const partUrl = urls.partTemplate.replace('UPLOAD_ID_PLACEHOLDER', upload_id);
    const completeUrl = urls.completeTemplate.replace('UPLOAD_ID_PLACEHOLDER', upload_id);
    const abortUrl = urls.abortTemplate.replace('UPLOAD_ID_PLACEHOLDER', upload_id);

    const parts = [];
    const totalParts = Math.ceil(file.size / CHUNK_SIZE);
    let uploadedBytes = 0;

    try {
      for (let partNumber = 1; partNumber <= totalParts; partNumber++) {
        const start = (partNumber - 1) * CHUNK_SIZE;
        const end = Math.min(start + CHUNK_SIZE, file.size);
        const chunk = file.slice(start, end);

        updateProgressItem(progressItem, {
          status: `Part ${partNumber}/${totalParts}`,
          loaded: uploadedBytes,
          total: file.size
        });

        const partResp = await fetch(`${partUrl}?partNumber=${partNumber}`, {
          method: 'PUT',
          headers: { 'X-CSRFToken': csrfToken || '' },
          body: chunk
        });

        if (!partResp.ok) {
          const err = await partResp.json().catch(() => ({}));
          throw new Error(err.error || `Part ${partNumber} failed`);
        }

        const partData = await partResp.json();
        parts.push({ part_number: partNumber, etag: partData.etag });
        uploadedBytes += chunk.size;

        updateProgressItem(progressItem, {
          loaded: uploadedBytes,
          total: file.size
        });
      }

      updateProgressItem(progressItem, { status: 'Completing...', loaded: file.size, total: file.size });
      const completeResp = await fetch(completeUrl, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json', 'X-CSRFToken': csrfToken || '' },
        body: JSON.stringify({ parts })
      });

      if (!completeResp.ok) {
        const err = await completeResp.json().catch(() => ({}));
        throw new Error(err.error || 'Failed to complete upload');
      }

      return await completeResp.json();
    } catch (err) {
      try {
        await fetch(abortUrl, { method: 'DELETE', headers: { 'X-CSRFToken': csrfToken || '' } });
      } catch {}
      throw err;
    }
  }

  async function uploadRegular(file, objectKey, metadata, progressItem, formAction) {
    return new Promise((resolve, reject) => {
      const formData = new FormData();
      formData.append('object', file);
      formData.append('object_key', objectKey);
      if (metadata) formData.append('metadata', JSON.stringify(metadata));
      const csrfToken = document.querySelector('input[name="csrf_token"]')?.value;
      if (csrfToken) formData.append('csrf_token', csrfToken);

      const xhr = new XMLHttpRequest();
      xhr.open('POST', formAction, true);
      xhr.setRequestHeader('X-Requested-With', 'XMLHttpRequest');

      xhr.upload.addEventListener('progress', (e) => {
        if (e.lengthComputable) {
          updateProgressItem(progressItem, {
            status: 'Uploading...',
            loaded: e.loaded,
            total: e.total
          });
        }
      });

      xhr.addEventListener('load', () => {
        if (xhr.status >= 200 && xhr.status < 300) {
          try {
            const data = JSON.parse(xhr.responseText);
            if (data.status === 'error') {
              reject(new Error(data.message || 'Upload failed'));
            } else {
              resolve(data);
            }
          } catch {
            resolve({});
          }
        } else {
          try {
            const data = JSON.parse(xhr.responseText);
            reject(new Error(data.message || `Upload failed (${xhr.status})`));
          } catch {
            reject(new Error(`Upload failed (${xhr.status})`));
          }
        }
      });

      xhr.addEventListener('error', () => reject(new Error('Network error')));
      xhr.addEventListener('abort', () => reject(new Error('Upload aborted')));

      xhr.send(formData);
    });
  }

  async function uploadSingleFile(file, keyPrefix, metadata, progressItem, urls) {
    const objectKey = keyPrefix ? `${keyPrefix}${file.name}` : file.name;
    const shouldUseMultipart = file.size >= MULTIPART_THRESHOLD && urls.initUrl;

    if (!progressItem && elements.uploadProgressStack) {
      progressItem = createProgressItem(file);
      elements.uploadProgressStack.appendChild(progressItem);
    }

    try {
      let result;
      if (shouldUseMultipart) {
        updateProgressItem(progressItem, { status: 'Multipart upload...', loaded: 0, total: file.size });
        result = await uploadMultipart(file, objectKey, metadata, progressItem, urls);
      } else {
        updateProgressItem(progressItem, { status: 'Uploading...', loaded: 0, total: file.size });
        result = await uploadRegular(file, objectKey, metadata, progressItem, urls.formAction);
      }
      updateProgressItem(progressItem, { progressState: 'success', status: 'Complete', loaded: file.size, total: file.size });
      return result;
    } catch (err) {
      updateProgressItem(progressItem, { progressState: 'error', status: 'Failed', error: err.message });
      throw err;
    }
  }

  async function performBulkUpload(files, urls) {
    if (state.isUploading || !files || files.length === 0) return;

    state.isUploading = true;
    setUploadLockState(true);
    const keyPrefix = (elements.uploadKeyPrefix?.value || '').trim();
    const metadataRaw = elements.uploadForm?.querySelector('textarea[name="metadata"]')?.value?.trim();
    let metadata = null;
    if (metadataRaw) {
      try {
        metadata = JSON.parse(metadataRaw);
      } catch {
        callbacks.showMessage({ title: 'Invalid metadata', body: 'Metadata must be valid JSON.', variant: 'danger' });
        resetUploadUI();
        return;
      }
    }

    if (elements.bulkUploadProgress) elements.bulkUploadProgress.classList.remove('d-none');
    if (elements.bulkUploadResults) elements.bulkUploadResults.classList.add('d-none');
    if (elements.uploadSubmitBtn) elements.uploadSubmitBtn.disabled = true;
    if (elements.uploadFileInput) elements.uploadFileInput.disabled = true;

    const successFiles = [];
    const errorFiles = [];
    const total = files.length;

    updateFloatingProgress(0, total, files[0]?.name || '');

    for (let i = 0; i < total; i++) {
      const file = files[i];
      const current = i + 1;

      if (elements.bulkUploadCounter) elements.bulkUploadCounter.textContent = `${current}/${total}`;
      if (elements.bulkUploadCurrentFile) elements.bulkUploadCurrentFile.textContent = `Uploading: ${file.name}`;
      if (elements.bulkUploadProgressBar) {
        const percent = Math.round((current / total) * 100);
        elements.bulkUploadProgressBar.style.width = `${percent}%`;
      }
      updateFloatingProgress(i, total, file.name);

      try {
        await uploadSingleFile(file, keyPrefix, metadata, null, urls);
        successFiles.push(file.name);
      } catch (error) {
        errorFiles.push({ name: file.name, error: error.message || 'Unknown error' });
      }
    }
    updateFloatingProgress(total, total);

    if (elements.bulkUploadProgress) elements.bulkUploadProgress.classList.add('d-none');
    if (elements.bulkUploadResults) elements.bulkUploadResults.classList.remove('d-none');

    if (elements.bulkUploadSuccessCount) elements.bulkUploadSuccessCount.textContent = successFiles.length;
    if (successFiles.length === 0 && elements.bulkUploadSuccessAlert) {
      elements.bulkUploadSuccessAlert.classList.add('d-none');
    }

    if (errorFiles.length > 0) {
      if (elements.bulkUploadErrorCount) elements.bulkUploadErrorCount.textContent = errorFiles.length;
      if (elements.bulkUploadErrorAlert) elements.bulkUploadErrorAlert.classList.remove('d-none');
      if (elements.bulkUploadErrorList) {
        elements.bulkUploadErrorList.innerHTML = errorFiles
          .map(f => `<li><strong>${callbacks.escapeHtml(f.name)}</strong>: ${callbacks.escapeHtml(f.error)}</li>`)
          .join('');
      }
    }

    state.isUploading = false;
    setUploadLockState(false);

    if (successFiles.length > 0) {
      if (elements.uploadBtnText) elements.uploadBtnText.textContent = 'Refreshing...';
      callbacks.onUploadComplete(successFiles, errorFiles);
    } else {
      if (elements.uploadSubmitBtn) elements.uploadSubmitBtn.disabled = false;
      if (elements.uploadFileInput) elements.uploadFileInput.disabled = false;
    }
  }

  function setupEventListeners() {
    if (elements.uploadFileInput) {
      elements.uploadFileInput.addEventListener('change', () => {
        if (state.isUploading) return;
        refreshUploadDropLabel();
        updateUploadBtnText();
        resetUploadUI();
      });
    }

    if (elements.uploadDropZone) {
      elements.uploadDropZone.addEventListener('click', () => {
        if (state.isUploading) return;
        elements.uploadFileInput?.click();
      });
    }

    if (elements.floatingProgressExpand) {
      elements.floatingProgressExpand.addEventListener('click', () => {
        if (elements.uploadModal) {
          elements.uploadModal.show();
        }
      });
    }

    if (elements.uploadModalEl) {
      elements.uploadModalEl.addEventListener('hide.bs.modal', () => {
        if (state.isUploading) {
          showFloatingProgress();
        }
      });

      elements.uploadModalEl.addEventListener('hidden.bs.modal', () => {
        if (!state.isUploading) {
          resetUploadUI();
          if (elements.uploadFileInput) elements.uploadFileInput.value = '';
          refreshUploadDropLabel();
          updateUploadBtnText();
        }
      });

      elements.uploadModalEl.addEventListener('show.bs.modal', () => {
        if (state.isUploading) {
          hideFloatingProgress();
        }
        if (callbacks.hasFolders() && callbacks.getCurrentPrefix()) {
          if (elements.uploadKeyPrefix) {
            elements.uploadKeyPrefix.value = callbacks.getCurrentPrefix();
          }
        } else if (elements.uploadKeyPrefix) {
          elements.uploadKeyPrefix.value = '';
        }
      });
    }
  }

  function wireDropTarget(target, options) {
    const { highlightClass = '', autoOpenModal = false } = options || {};
    if (!target) return;

    const preventDefaults = (event) => {
      event.preventDefault();
      event.stopPropagation();
    };

    ['dragenter', 'dragover'].forEach((eventName) => {
      target.addEventListener(eventName, (event) => {
        preventDefaults(event);
        if (state.isUploading) return;
        if (highlightClass) {
          target.classList.add(highlightClass);
        }
      });
    });

    ['dragleave', 'drop'].forEach((eventName) => {
      target.addEventListener(eventName, (event) => {
        preventDefaults(event);
        if (highlightClass) {
          target.classList.remove(highlightClass);
        }
      });
    });

    target.addEventListener('drop', (event) => {
      if (state.isUploading) return;
      if (!event.dataTransfer?.files?.length || !elements.uploadFileInput) {
        return;
      }
      elements.uploadFileInput.files = event.dataTransfer.files;
      elements.uploadFileInput.dispatchEvent(new Event('change', { bubbles: true }));
      if (autoOpenModal && elements.uploadModal) {
        elements.uploadModal.show();
      }
    });
  }

  return {
    init: init,
    isUploading: isUploading,
    performBulkUpload: performBulkUpload,
    wireDropTarget: wireDropTarget,
    resetUploadUI: resetUploadUI,
    refreshUploadDropLabel: refreshUploadDropLabel,
    updateUploadBtnText: updateUploadBtnText
  };
})();
static/js/bucket-detail-utils.js (new file, 120 lines)
@@ -0,0 +1,120 @@
window.BucketDetailUtils = (function() {
  'use strict';

  function setupJsonAutoIndent(textarea) {
    if (!textarea) return;

    textarea.addEventListener('keydown', function(e) {
      if (e.key === 'Enter') {
        e.preventDefault();

        const start = this.selectionStart;
        const end = this.selectionEnd;
        const value = this.value;

        const lineStart = value.lastIndexOf('\n', start - 1) + 1;
        const currentLine = value.substring(lineStart, start);

        const indentMatch = currentLine.match(/^(\s*)/);
        let indent = indentMatch ? indentMatch[1] : '';

        const trimmedLine = currentLine.trim();
        const lastChar = trimmedLine.slice(-1);

        let newIndent = indent;
        let insertAfter = '';

        if (lastChar === '{' || lastChar === '[') {
          newIndent = indent + '  ';

          const charAfterCursor = value.substring(start, start + 1).trim();
          if ((lastChar === '{' && charAfterCursor === '}') ||
              (lastChar === '[' && charAfterCursor === ']')) {
            insertAfter = '\n' + indent;
          }
        } else if (lastChar === ',' || lastChar === ':') {
          newIndent = indent;
        }

        const insertion = '\n' + newIndent + insertAfter;
        const newValue = value.substring(0, start) + insertion + value.substring(end);

        this.value = newValue;

        const newCursorPos = start + 1 + newIndent.length;
        this.selectionStart = this.selectionEnd = newCursorPos;

        this.dispatchEvent(new Event('input', { bubbles: true }));
      }

      if (e.key === 'Tab') {
        e.preventDefault();
        const start = this.selectionStart;
        const end = this.selectionEnd;

        if (e.shiftKey) {
          const lineStart = this.value.lastIndexOf('\n', start - 1) + 1;
          const lineContent = this.value.substring(lineStart, start);
          if (lineContent.startsWith('  ')) {
            this.value = this.value.substring(0, lineStart) +
              this.value.substring(lineStart + 2);
            this.selectionStart = this.selectionEnd = Math.max(lineStart, start - 2);
          }
        } else {
          this.value = this.value.substring(0, start) + '  ' + this.value.substring(end);
          this.selectionStart = this.selectionEnd = start + 2;
        }

        this.dispatchEvent(new Event('input', { bubbles: true }));
      }
    });
  }

  function formatBytes(bytes) {
    if (!Number.isFinite(bytes)) return `${bytes} bytes`;
    const units = ['bytes', 'KB', 'MB', 'GB', 'TB'];
    let i = 0;
    let size = bytes;
    while (size >= 1024 && i < units.length - 1) {
      size /= 1024;
      i++;
    }
    return `${size.toFixed(i === 0 ? 0 : 1)} ${units[i]}`;
  }

  function escapeHtml(value) {
    if (value === null || value === undefined) return '';
    return String(value)
      .replace(/&/g, '&amp;')
      .replace(/</g, '&lt;')
      .replace(/>/g, '&gt;')
      .replace(/"/g, '&quot;')
      .replace(/'/g, '&#39;');
  }

  function fallbackCopy(text) {
    const textArea = document.createElement('textarea');
    textArea.value = text;
    textArea.style.position = 'fixed';
    textArea.style.left = '-9999px';
    textArea.style.top = '-9999px';
    document.body.appendChild(textArea);
    textArea.focus();
    textArea.select();
    let success = false;
    try {
      success = document.execCommand('copy');
    } catch {
      success = false;
    }
    document.body.removeChild(textArea);
    return success;
  }

  return {
    setupJsonAutoIndent: setupJsonAutoIndent,
    formatBytes: formatBytes,
    escapeHtml: escapeHtml,
    fallbackCopy: fallbackCopy
  };
})();
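The formatBytes() helper above divides by 1024 until the value drops below one unit, printing whole numbers for raw bytes and one decimal place otherwise. A standalone copy of that function, reproduced here only to illustrate the rounding behavior:

```javascript
// Standalone copy of the formatBytes() helper above, for illustration.
function formatBytes(bytes) {
  if (!Number.isFinite(bytes)) return `${bytes} bytes`;
  const units = ['bytes', 'KB', 'MB', 'GB', 'TB'];
  let i = 0;
  let size = bytes;
  while (size >= 1024 && i < units.length - 1) {
    size /= 1024;
    i++;
  }
  // i === 0 means the value never crossed 1024, so no decimal is shown.
  return `${size.toFixed(i === 0 ? 0 : 1)} ${units[i]}`;
}

console.log(formatBytes(512));              // "512 bytes"
console.log(formatBytes(1536));             // "1.5 KB"
console.log(formatBytes(8 * 1024 * 1024));  // "8.0 MB"
```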
static/js/connections-management.js (new file, 344 lines)
@@ -0,0 +1,344 @@
window.ConnectionsManagement = (function() {
  'use strict';

  var endpoints = {};
  var csrfToken = '';

  function init(config) {
    endpoints = config.endpoints || {};
    csrfToken = config.csrfToken || '';

    setupEventListeners();
    checkAllConnectionHealth();
  }

  function togglePassword(id) {
    var input = document.getElementById(id);
    if (input) {
      input.type = input.type === 'password' ? 'text' : 'password';
    }
  }

  async function testConnection(formId, resultId) {
    var form = document.getElementById(formId);
    var resultDiv = document.getElementById(resultId);
    if (!form || !resultDiv) return;

    var formData = new FormData(form);
    var data = {};
    formData.forEach(function(value, key) {
      if (key !== 'csrf_token') {
        data[key] = value;
      }
    });

    resultDiv.innerHTML = '<div class="text-info"><span class="spinner-border spinner-border-sm" role="status" aria-hidden="true"></span> Testing connection...</div>';

    var controller = new AbortController();
    var timeoutId = setTimeout(function() { controller.abort(); }, 20000);

    try {
      var response = await fetch(endpoints.test, {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          'X-CSRFToken': csrfToken
        },
        body: JSON.stringify(data),
        signal: controller.signal
      });
      clearTimeout(timeoutId);

      var result = await response.json();
      if (response.ok) {
        resultDiv.innerHTML = '<div class="text-success">' +
          '<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="me-1" viewBox="0 0 16 16">' +
          '<path d="M16 8A8 8 0 1 1 0 8a8 8 0 0 1 16 0zm-3.97-3.03a.75.75 0 0 0-1.08.022L7.477 9.417 5.384 7.323a.75.75 0 0 0-1.06 1.06L6.97 11.03a.75.75 0 0 0 1.079-.02l3.992-4.99a.75.75 0 0 0-.01-1.05z"/>' +
          '</svg>' + window.UICore.escapeHtml(result.message) + '</div>';
      } else {
        resultDiv.innerHTML = '<div class="text-danger">' +
          '<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="me-1" viewBox="0 0 16 16">' +
          '<path d="M16 8A8 8 0 1 1 0 8a8 8 0 0 1 16 0zM5.354 4.646a.5.5 0 1 0-.708.708L7.293 8l-2.647 2.646a.5.5 0 0 0 .708.708L8 8.707l2.646 2.647a.5.5 0 0 0 .708-.708L8.707 8l2.647-2.646a.5.5 0 0 0-.708-.708L8 7.293 5.354 4.646z"/>' +
          '</svg>' + window.UICore.escapeHtml(result.message) + '</div>';
      }
    } catch (error) {
      clearTimeout(timeoutId);
      var message = error.name === 'AbortError'
        ? 'Connection test timed out - endpoint may be unreachable'
        : 'Connection failed: Network error';
      resultDiv.innerHTML = '<div class="text-danger">' +
        '<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="me-1" viewBox="0 0 16 16">' +
        '<path d="M16 8A8 8 0 1 1 0 8a8 8 0 0 1 16 0zM5.354 4.646a.5.5 0 1 0-.708.708L7.293 8l-2.647 2.646a.5.5 0 0 0 .708.708L8 8.707l2.646 2.647a.5.5 0 0 0 .708-.708L8.707 8l2.647-2.646a.5.5 0 0 0-.708-.708L8 7.293 5.354 4.646z"/>' +
        '</svg>' + message + '</div>';
    }
  }

  async function checkConnectionHealth(connectionId, statusEl) {
    if (!statusEl) return;

    try {
      var controller = new AbortController();
      var timeoutId = setTimeout(function() { controller.abort(); }, 15000);
|
var response = await fetch(endpoints.healthTemplate.replace('CONNECTION_ID', connectionId), {
|
||||||
|
signal: controller.signal
|
||||||
|
});
|
||||||
|
clearTimeout(timeoutId);
|
||||||
|
|
||||||
|
var data = await response.json();
|
||||||
|
if (data.healthy) {
|
||||||
|
statusEl.innerHTML = '<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="text-success" viewBox="0 0 16 16">' +
|
||||||
|
'<path d="M16 8A8 8 0 1 1 0 8a8 8 0 0 1 16 0zm-3.97-3.03a.75.75 0 0 0-1.08.022L7.477 9.417 5.384 7.323a.75.75 0 0 0-1.06 1.06L6.97 11.03a.75.75 0 0 0 1.079-.02l3.992-4.99a.75.75 0 0 0-.01-1.05z"/></svg>';
|
||||||
|
statusEl.setAttribute('data-status', 'healthy');
|
||||||
|
statusEl.setAttribute('title', 'Connected');
|
||||||
|
} else {
|
||||||
|
statusEl.innerHTML = '<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="text-danger" viewBox="0 0 16 16">' +
|
||||||
|
'<path d="M16 8A8 8 0 1 1 0 8a8 8 0 0 1 16 0zM5.354 4.646a.5.5 0 1 0-.708.708L7.293 8l-2.647 2.646a.5.5 0 0 0 .708.708L8 8.707l2.646 2.647a.5.5 0 0 0 .708-.708L8.707 8l2.647-2.646a.5.5 0 0 0-.708-.708L8 7.293 5.354 4.646z"/></svg>';
|
||||||
|
statusEl.setAttribute('data-status', 'unhealthy');
|
||||||
|
statusEl.setAttribute('title', data.error || 'Unreachable');
|
||||||
|
}
|
||||||
|
} catch (error) {
|
||||||
|
statusEl.innerHTML = '<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="text-warning" viewBox="0 0 16 16">' +
|
||||||
|
'<path d="M8.982 1.566a1.13 1.13 0 0 0-1.96 0L.165 13.233c-.457.778.091 1.767.98 1.767h13.713c.889 0 1.438-.99.98-1.767L8.982 1.566zM8 5c.535 0 .954.462.9.995l-.35 3.507a.552.552 0 0 1-1.1 0L7.1 5.995A.905.905 0 0 1 8 5zm.002 6a1 1 0 1 1 0 2 1 1 0 0 1 0-2z"/></svg>';
|
||||||
|
statusEl.setAttribute('data-status', 'unknown');
|
||||||
|
statusEl.setAttribute('title', 'Could not check status');
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
function checkAllConnectionHealth() {
|
||||||
|
var rows = document.querySelectorAll('tr[data-connection-id]');
|
||||||
|
rows.forEach(function(row, index) {
|
||||||
|
var connectionId = row.getAttribute('data-connection-id');
|
||||||
|
var statusEl = row.querySelector('.connection-status');
|
||||||
|
if (statusEl) {
|
||||||
|
setTimeout(function() {
|
||||||
|
checkConnectionHealth(connectionId, statusEl);
|
||||||
|
}, index * 200);
|
||||||
|
}
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
function updateConnectionCount() {
|
||||||
|
var countBadge = document.querySelector('.badge.bg-primary.bg-opacity-10.text-primary.fs-6');
|
||||||
|
if (countBadge) {
|
||||||
|
var remaining = document.querySelectorAll('tr[data-connection-id]').length;
|
||||||
|
countBadge.textContent = remaining + ' connection' + (remaining !== 1 ? 's' : '');
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
function createConnectionRowHtml(conn) {
|
||||||
|
var ak = conn.access_key || '';
|
||||||
|
var maskedKey = ak.length > 12 ? ak.slice(0, 8) + '...' + ak.slice(-4) : ak;
|
||||||
|
|
||||||
|
return '<tr data-connection-id="' + window.UICore.escapeHtml(conn.id) + '">' +
|
||||||
|
'<td class="text-center">' +
|
||||||
|
'<span class="connection-status" data-status="checking" title="Checking...">' +
|
||||||
|
'<span class="spinner-border spinner-border-sm text-muted" role="status" style="width: 12px; height: 12px;"></span>' +
|
||||||
|
'</span></td>' +
|
||||||
|
'<td><div class="d-flex align-items-center gap-2">' +
|
||||||
|
'<div class="connection-icon"><svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" viewBox="0 0 16 16">' +
|
||||||
|
'<path d="M4.406 3.342A5.53 5.53 0 0 1 8 2c2.69 0 4.923 2 5.166 4.579C14.758 6.804 16 8.137 16 9.773 16 11.569 14.502 13 12.687 13H3.781C1.708 13 0 11.366 0 9.318c0-1.763 1.266-3.223 2.942-3.593.143-.863.698-1.723 1.464-2.383z"/></svg></div>' +
|
||||||
|
'<span class="fw-medium">' + window.UICore.escapeHtml(conn.name) + '</span>' +
|
||||||
|
'</div></td>' +
|
||||||
|
'<td><span class="text-muted small text-truncate d-inline-block" style="max-width: 200px;" title="' + window.UICore.escapeHtml(conn.endpoint_url) + '">' + window.UICore.escapeHtml(conn.endpoint_url) + '</span></td>' +
|
||||||
|
'<td><span class="badge bg-primary bg-opacity-10 text-primary">' + window.UICore.escapeHtml(conn.region) + '</span></td>' +
|
||||||
|
'<td><code class="small">' + window.UICore.escapeHtml(maskedKey) + '</code></td>' +
|
||||||
|
'<td class="text-end"><div class="btn-group btn-group-sm" role="group">' +
|
||||||
|
'<button type="button" class="btn btn-outline-secondary" data-bs-toggle="modal" data-bs-target="#editConnectionModal" ' +
|
||||||
|
'data-id="' + window.UICore.escapeHtml(conn.id) + '" data-name="' + window.UICore.escapeHtml(conn.name) + '" ' +
|
||||||
|
'data-endpoint="' + window.UICore.escapeHtml(conn.endpoint_url) + '" data-region="' + window.UICore.escapeHtml(conn.region) + '" ' +
|
||||||
|
'data-access="' + window.UICore.escapeHtml(conn.access_key) + '" data-secret="' + window.UICore.escapeHtml(conn.secret_key || '') + '" title="Edit connection">' +
|
||||||
|
'<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" viewBox="0 0 16 16">' +
|
||||||
|
'<path d="M12.146.146a.5.5 0 0 1 .708 0l3 3a.5.5 0 0 1 0 .708l-10 10a.5.5 0 0 1-.168.11l-5 2a.5.5 0 0 1-.65-.65l2-5a.5.5 0 0 1 .11-.168l10-10zM11.207 2.5 13.5 4.793 14.793 3.5 12.5 1.207 11.207 2.5zm1.586 3L10.5 3.207 4 9.707V10h.5a.5.5 0 0 1 .5.5v.5h.5a.5.5 0 0 1 .5.5v.5h.293l6.5-6.5z"/></svg></button>' +
|
||||||
|
'<button type="button" class="btn btn-outline-danger" data-bs-toggle="modal" data-bs-target="#deleteConnectionModal" ' +
|
||||||
|
'data-id="' + window.UICore.escapeHtml(conn.id) + '" data-name="' + window.UICore.escapeHtml(conn.name) + '" title="Delete connection">' +
|
||||||
|
'<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" viewBox="0 0 16 16">' +
|
||||||
|
'<path d="M5.5 5.5A.5.5 0 0 1 6 6v6a.5.5 0 0 1-1 0V6a.5.5 0 0 1 .5-.5zm2.5 0a.5.5 0 0 1 .5.5v6a.5.5 0 0 1-1 0V6a.5.5 0 0 1 .5-.5zm3 .5a.5.5 0 0 0-1 0v6a.5.5 0 0 0 1 0V6z"/>' +
|
||||||
|
'<path fill-rule="evenodd" d="M14.5 3a1 1 0 0 1-1 1H13v9a2 2 0 0 1-2 2H5a2 2 0 0 1-2-2V4h-.5a1 1 0 0 1-1-1V2a1 1 0 0 1 1-1H6a1 1 0 0 1 1-1h2a1 1 0 0 1 1 1h3.5a1 1 0 0 1 1 1v1zM4.118 4 4 4.059V13a1 1 0 0 0 1 1h6a1 1 0 0 0 1-1V4.059L11.882 4H4.118zM2.5 3V2h11v1h-11z"/></svg></button>' +
|
||||||
|
'</div></td></tr>';
|
||||||
|
}
|
||||||
|
|
||||||
|
function setupEventListeners() {
|
||||||
|
var testBtn = document.getElementById('testConnectionBtn');
|
||||||
|
if (testBtn) {
|
||||||
|
testBtn.addEventListener('click', function() {
|
||||||
|
testConnection('createConnectionForm', 'testResult');
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
var editTestBtn = document.getElementById('editTestConnectionBtn');
|
||||||
|
if (editTestBtn) {
|
||||||
|
editTestBtn.addEventListener('click', function() {
|
||||||
|
testConnection('editConnectionForm', 'editTestResult');
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
var editModal = document.getElementById('editConnectionModal');
|
||||||
|
if (editModal) {
|
||||||
|
editModal.addEventListener('show.bs.modal', function(event) {
|
||||||
|
var button = event.relatedTarget;
|
||||||
|
if (!button) return;
|
||||||
|
|
||||||
|
var id = button.getAttribute('data-id');
|
||||||
|
|
||||||
|
document.getElementById('edit_name').value = button.getAttribute('data-name') || '';
|
||||||
|
document.getElementById('edit_endpoint_url').value = button.getAttribute('data-endpoint') || '';
|
||||||
|
document.getElementById('edit_region').value = button.getAttribute('data-region') || '';
|
||||||
|
document.getElementById('edit_access_key').value = button.getAttribute('data-access') || '';
|
||||||
|
document.getElementById('edit_secret_key').value = button.getAttribute('data-secret') || '';
|
||||||
|
document.getElementById('editTestResult').innerHTML = '';
|
||||||
|
|
||||||
|
var form = document.getElementById('editConnectionForm');
|
||||||
|
form.action = endpoints.updateTemplate.replace('CONNECTION_ID', id);
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
var deleteModal = document.getElementById('deleteConnectionModal');
|
||||||
|
if (deleteModal) {
|
||||||
|
deleteModal.addEventListener('show.bs.modal', function(event) {
|
||||||
|
var button = event.relatedTarget;
|
||||||
|
if (!button) return;
|
||||||
|
|
||||||
|
var id = button.getAttribute('data-id');
|
||||||
|
var name = button.getAttribute('data-name');
|
||||||
|
|
||||||
|
document.getElementById('deleteConnectionName').textContent = name;
|
||||||
|
var form = document.getElementById('deleteConnectionForm');
|
||||||
|
form.action = endpoints.deleteTemplate.replace('CONNECTION_ID', id);
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
var createForm = document.getElementById('createConnectionForm');
|
||||||
|
if (createForm) {
|
||||||
|
createForm.addEventListener('submit', function(e) {
|
||||||
|
e.preventDefault();
|
||||||
|
window.UICore.submitFormAjax(createForm, {
|
||||||
|
successMessage: 'Connection created',
|
||||||
|
onSuccess: function(data) {
|
||||||
|
createForm.reset();
|
||||||
|
document.getElementById('testResult').innerHTML = '';
|
||||||
|
|
||||||
|
if (data.connection) {
|
||||||
|
var emptyState = document.querySelector('.empty-state');
|
||||||
|
if (emptyState) {
|
||||||
|
var cardBody = emptyState.closest('.card-body');
|
||||||
|
if (cardBody) {
|
||||||
|
cardBody.innerHTML = '<div class="table-responsive"><table class="table table-hover align-middle mb-0">' +
|
||||||
|
'<thead class="table-light"><tr>' +
|
||||||
|
'<th scope="col" style="width: 50px;">Status</th>' +
|
||||||
|
'<th scope="col">Name</th><th scope="col">Endpoint</th>' +
|
||||||
|
'<th scope="col">Region</th><th scope="col">Access Key</th>' +
|
||||||
|
'<th scope="col" class="text-end">Actions</th></tr></thead>' +
|
||||||
|
'<tbody></tbody></table></div>';
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
var tbody = document.querySelector('table tbody');
|
||||||
|
if (tbody) {
|
||||||
|
tbody.insertAdjacentHTML('beforeend', createConnectionRowHtml(data.connection));
|
||||||
|
var newRow = tbody.lastElementChild;
|
||||||
|
var statusEl = newRow.querySelector('.connection-status');
|
||||||
|
if (statusEl) {
|
||||||
|
checkConnectionHealth(data.connection.id, statusEl);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
updateConnectionCount();
|
||||||
|
} else {
|
||||||
|
location.reload();
|
||||||
|
}
|
||||||
|
}
|
||||||
|
});
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
var editForm = document.getElementById('editConnectionForm');
|
||||||
|
if (editForm) {
|
||||||
|
editForm.addEventListener('submit', function(e) {
|
||||||
|
e.preventDefault();
|
||||||
|
window.UICore.submitFormAjax(editForm, {
|
||||||
|
successMessage: 'Connection updated',
|
||||||
|
onSuccess: function(data) {
|
||||||
|
var modal = bootstrap.Modal.getInstance(document.getElementById('editConnectionModal'));
|
||||||
|
if (modal) modal.hide();
|
||||||
|
|
||||||
|
var connId = editForm.action.split('/').slice(-2)[0];
|
||||||
|
var row = document.querySelector('tr[data-connection-id="' + connId + '"]');
|
||||||
|
if (row && data.connection) {
|
||||||
|
var nameCell = row.querySelector('.fw-medium');
|
||||||
|
if (nameCell) nameCell.textContent = data.connection.name;
|
||||||
|
|
||||||
|
var endpointCell = row.querySelector('.text-truncate');
|
||||||
|
if (endpointCell) {
|
||||||
|
endpointCell.textContent = data.connection.endpoint_url;
|
||||||
|
endpointCell.title = data.connection.endpoint_url;
|
||||||
|
}
|
||||||
|
|
||||||
|
var regionBadge = row.querySelector('.badge.bg-primary');
|
||||||
|
if (regionBadge) regionBadge.textContent = data.connection.region;
|
||||||
|
|
||||||
|
var accessCode = row.querySelector('code.small');
|
||||||
|
if (accessCode && data.connection.access_key) {
|
||||||
|
var ak = data.connection.access_key;
|
||||||
|
accessCode.textContent = ak.slice(0, 8) + '...' + ak.slice(-4);
|
||||||
|
}
|
||||||
|
|
||||||
|
var editBtn = row.querySelector('[data-bs-target="#editConnectionModal"]');
|
||||||
|
if (editBtn) {
|
||||||
|
editBtn.setAttribute('data-name', data.connection.name);
|
||||||
|
editBtn.setAttribute('data-endpoint', data.connection.endpoint_url);
|
||||||
|
editBtn.setAttribute('data-region', data.connection.region);
|
||||||
|
editBtn.setAttribute('data-access', data.connection.access_key);
|
||||||
|
if (data.connection.secret_key) {
|
||||||
|
editBtn.setAttribute('data-secret', data.connection.secret_key);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
var deleteBtn = row.querySelector('[data-bs-target="#deleteConnectionModal"]');
|
||||||
|
if (deleteBtn) {
|
||||||
|
deleteBtn.setAttribute('data-name', data.connection.name);
|
||||||
|
}
|
||||||
|
|
||||||
|
var statusEl = row.querySelector('.connection-status');
|
||||||
|
if (statusEl) {
|
||||||
|
checkConnectionHealth(connId, statusEl);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
});
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
var deleteForm = document.getElementById('deleteConnectionForm');
|
||||||
|
if (deleteForm) {
|
||||||
|
deleteForm.addEventListener('submit', function(e) {
|
||||||
|
e.preventDefault();
|
||||||
|
window.UICore.submitFormAjax(deleteForm, {
|
||||||
|
successMessage: 'Connection deleted',
|
||||||
|
onSuccess: function(data) {
|
||||||
|
var modal = bootstrap.Modal.getInstance(document.getElementById('deleteConnectionModal'));
|
||||||
|
if (modal) modal.hide();
|
||||||
|
|
||||||
|
var connId = deleteForm.action.split('/').slice(-2)[0];
|
||||||
|
var row = document.querySelector('tr[data-connection-id="' + connId + '"]');
|
||||||
|
if (row) {
|
||||||
|
row.remove();
|
||||||
|
}
|
||||||
|
|
||||||
|
updateConnectionCount();
|
||||||
|
|
||||||
|
if (document.querySelectorAll('tr[data-connection-id]').length === 0) {
|
||||||
|
location.reload();
|
||||||
|
}
|
||||||
|
}
|
||||||
|
});
|
||||||
|
});
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
return {
|
||||||
|
init: init,
|
||||||
|
togglePassword: togglePassword,
|
||||||
|
testConnection: testConnection,
|
||||||
|
checkConnectionHealth: checkConnectionHealth
|
||||||
|
};
|
||||||
|
})();
|
||||||
static/js/iam-management.js (new file, 545 lines)
@@ -0,0 +1,545 @@
|
|||||||
|
window.IAMManagement = (function() {
|
||||||
|
'use strict';
|
||||||
|
|
||||||
|
var users = [];
|
||||||
|
var currentUserKey = null;
|
||||||
|
var endpoints = {};
|
||||||
|
var csrfToken = '';
|
||||||
|
var iamLocked = false;
|
||||||
|
|
||||||
|
var policyModal = null;
|
||||||
|
var editUserModal = null;
|
||||||
|
var deleteUserModal = null;
|
||||||
|
var rotateSecretModal = null;
|
||||||
|
var currentRotateKey = null;
|
||||||
|
var currentEditKey = null;
|
||||||
|
var currentDeleteKey = null;
|
||||||
|
|
||||||
|
var policyTemplates = {
|
||||||
|
full: [{ bucket: '*', actions: ['list', 'read', 'write', 'delete', 'share', 'policy', 'replication', 'iam:list_users', 'iam:*'] }],
|
||||||
|
readonly: [{ bucket: '*', actions: ['list', 'read'] }],
|
||||||
|
writer: [{ bucket: '*', actions: ['list', 'read', 'write'] }]
|
||||||
|
};
|
||||||
|
|
||||||
|
function init(config) {
|
||||||
|
users = config.users || [];
|
||||||
|
currentUserKey = config.currentUserKey || null;
|
||||||
|
endpoints = config.endpoints || {};
|
||||||
|
csrfToken = config.csrfToken || '';
|
||||||
|
iamLocked = config.iamLocked || false;
|
||||||
|
|
||||||
|
if (iamLocked) return;
|
||||||
|
|
||||||
|
initModals();
|
||||||
|
setupJsonAutoIndent();
|
||||||
|
setupCopyButtons();
|
||||||
|
setupPolicyEditor();
|
||||||
|
setupCreateUserModal();
|
||||||
|
setupEditUserModal();
|
||||||
|
setupDeleteUserModal();
|
||||||
|
setupRotateSecretModal();
|
||||||
|
setupFormHandlers();
|
||||||
|
}
|
||||||
|
|
||||||
|
function initModals() {
|
||||||
|
var policyModalEl = document.getElementById('policyEditorModal');
|
||||||
|
var editModalEl = document.getElementById('editUserModal');
|
||||||
|
var deleteModalEl = document.getElementById('deleteUserModal');
|
||||||
|
var rotateModalEl = document.getElementById('rotateSecretModal');
|
||||||
|
|
||||||
|
if (policyModalEl) policyModal = new bootstrap.Modal(policyModalEl);
|
||||||
|
if (editModalEl) editUserModal = new bootstrap.Modal(editModalEl);
|
||||||
|
if (deleteModalEl) deleteUserModal = new bootstrap.Modal(deleteModalEl);
|
||||||
|
if (rotateModalEl) rotateSecretModal = new bootstrap.Modal(rotateModalEl);
|
||||||
|
}
|
||||||
|
|
||||||
|
function setupJsonAutoIndent() {
|
||||||
|
window.UICore.setupJsonAutoIndent(document.getElementById('policyEditorDocument'));
|
||||||
|
window.UICore.setupJsonAutoIndent(document.getElementById('createUserPolicies'));
|
||||||
|
}
|
||||||
|
|
||||||
|
function setupCopyButtons() {
|
||||||
|
document.querySelectorAll('.config-copy').forEach(function(button) {
|
||||||
|
button.addEventListener('click', async function() {
|
||||||
|
var targetId = button.dataset.copyTarget;
|
||||||
|
var target = document.getElementById(targetId);
|
||||||
|
if (!target) return;
|
||||||
|
await window.UICore.copyToClipboard(target.innerText, button, 'Copy JSON');
|
||||||
|
});
|
||||||
|
});
|
||||||
|
|
||||||
|
var secretCopyButton = document.querySelector('[data-secret-copy]');
|
||||||
|
if (secretCopyButton) {
|
||||||
|
secretCopyButton.addEventListener('click', async function() {
|
||||||
|
var secretInput = document.getElementById('disclosedSecretValue');
|
||||||
|
if (!secretInput) return;
|
||||||
|
await window.UICore.copyToClipboard(secretInput.value, secretCopyButton, 'Copy');
|
||||||
|
});
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
function getUserPolicies(accessKey) {
|
||||||
|
var user = users.find(function(u) { return u.access_key === accessKey; });
|
||||||
|
return user ? JSON.stringify(user.policies, null, 2) : '';
|
||||||
|
}
|
||||||
|
|
||||||
|
function applyPolicyTemplate(name, textareaEl) {
|
||||||
|
if (policyTemplates[name] && textareaEl) {
|
||||||
|
textareaEl.value = JSON.stringify(policyTemplates[name], null, 2);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
function setupPolicyEditor() {
|
||||||
|
var userLabelEl = document.getElementById('policyEditorUserLabel');
|
||||||
|
var userInputEl = document.getElementById('policyEditorUser');
|
||||||
|
var textareaEl = document.getElementById('policyEditorDocument');
|
||||||
|
|
||||||
|
document.querySelectorAll('[data-policy-template]').forEach(function(button) {
|
||||||
|
button.addEventListener('click', function() {
|
||||||
|
applyPolicyTemplate(button.dataset.policyTemplate, textareaEl);
|
||||||
|
});
|
||||||
|
});
|
||||||
|
|
||||||
|
document.querySelectorAll('[data-policy-editor]').forEach(function(button) {
|
||||||
|
button.addEventListener('click', function() {
|
||||||
|
var key = button.getAttribute('data-access-key');
|
||||||
|
if (!key) return;
|
||||||
|
|
||||||
|
userLabelEl.textContent = key;
|
||||||
|
userInputEl.value = key;
|
||||||
|
textareaEl.value = getUserPolicies(key);
|
||||||
|
|
||||||
|
policyModal.show();
|
||||||
|
});
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
function setupCreateUserModal() {
|
||||||
|
var createUserPoliciesEl = document.getElementById('createUserPolicies');
|
||||||
|
|
||||||
|
document.querySelectorAll('[data-create-policy-template]').forEach(function(button) {
|
||||||
|
button.addEventListener('click', function() {
|
||||||
|
applyPolicyTemplate(button.dataset.createPolicyTemplate, createUserPoliciesEl);
|
||||||
|
});
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
function setupEditUserModal() {
|
||||||
|
var editUserForm = document.getElementById('editUserForm');
|
||||||
|
var editUserDisplayName = document.getElementById('editUserDisplayName');
|
||||||
|
|
||||||
|
document.querySelectorAll('[data-edit-user]').forEach(function(btn) {
|
||||||
|
btn.addEventListener('click', function() {
|
||||||
|
var key = btn.dataset.editUser;
|
||||||
|
var name = btn.dataset.displayName;
|
||||||
|
currentEditKey = key;
|
||||||
|
editUserDisplayName.value = name;
|
||||||
|
editUserForm.action = endpoints.updateUser.replace('ACCESS_KEY', key);
|
||||||
|
editUserModal.show();
|
||||||
|
});
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
function setupDeleteUserModal() {
|
||||||
|
var deleteUserForm = document.getElementById('deleteUserForm');
|
||||||
|
var deleteUserLabel = document.getElementById('deleteUserLabel');
|
||||||
|
var deleteSelfWarning = document.getElementById('deleteSelfWarning');
|
||||||
|
|
||||||
|
document.querySelectorAll('[data-delete-user]').forEach(function(btn) {
|
||||||
|
btn.addEventListener('click', function() {
|
||||||
|
var key = btn.dataset.deleteUser;
|
||||||
|
currentDeleteKey = key;
|
||||||
|
deleteUserLabel.textContent = key;
|
||||||
|
deleteUserForm.action = endpoints.deleteUser.replace('ACCESS_KEY', key);
|
||||||
|
|
||||||
|
if (key === currentUserKey) {
|
||||||
|
deleteSelfWarning.classList.remove('d-none');
|
||||||
|
} else {
|
||||||
|
deleteSelfWarning.classList.add('d-none');
|
||||||
|
}
|
||||||
|
|
||||||
|
deleteUserModal.show();
|
||||||
|
});
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
function setupRotateSecretModal() {
|
||||||
|
var rotateUserLabel = document.getElementById('rotateUserLabel');
|
||||||
|
var confirmRotateBtn = document.getElementById('confirmRotateBtn');
|
||||||
|
var rotateCancelBtn = document.getElementById('rotateCancelBtn');
|
||||||
|
var rotateDoneBtn = document.getElementById('rotateDoneBtn');
|
||||||
|
var rotateSecretConfirm = document.getElementById('rotateSecretConfirm');
|
||||||
|
var rotateSecretResult = document.getElementById('rotateSecretResult');
|
||||||
|
var newSecretKeyInput = document.getElementById('newSecretKey');
|
||||||
|
var copyNewSecretBtn = document.getElementById('copyNewSecret');
|
||||||
|
|
||||||
|
document.querySelectorAll('[data-rotate-user]').forEach(function(btn) {
|
||||||
|
btn.addEventListener('click', function() {
|
||||||
|
currentRotateKey = btn.dataset.rotateUser;
|
||||||
|
rotateUserLabel.textContent = currentRotateKey;
|
||||||
|
|
||||||
|
rotateSecretConfirm.classList.remove('d-none');
|
||||||
|
rotateSecretResult.classList.add('d-none');
|
||||||
|
confirmRotateBtn.classList.remove('d-none');
|
||||||
|
rotateCancelBtn.classList.remove('d-none');
|
||||||
|
rotateDoneBtn.classList.add('d-none');
|
||||||
|
|
||||||
|
rotateSecretModal.show();
|
||||||
|
});
|
||||||
|
});
|
||||||
|
|
||||||
|
if (confirmRotateBtn) {
|
||||||
|
confirmRotateBtn.addEventListener('click', async function() {
|
||||||
|
if (!currentRotateKey) return;
|
||||||
|
|
||||||
|
window.UICore.setButtonLoading(confirmRotateBtn, true, 'Rotating...');
|
||||||
|
|
||||||
|
try {
|
||||||
|
var url = endpoints.rotateSecret.replace('ACCESS_KEY', currentRotateKey);
|
||||||
|
var response = await fetch(url, {
|
||||||
|
method: 'POST',
|
||||||
|
headers: {
|
||||||
|
'Accept': 'application/json',
|
||||||
|
'X-CSRFToken': csrfToken
|
||||||
|
}
|
||||||
|
});
|
||||||
|
|
||||||
|
if (!response.ok) {
|
||||||
|
var data = await response.json();
|
||||||
|
throw new Error(data.error || 'Failed to rotate secret');
|
||||||
|
}
|
||||||
|
|
||||||
|
var data = await response.json();
|
||||||
|
newSecretKeyInput.value = data.secret_key;
|
||||||
|
|
||||||
|
rotateSecretConfirm.classList.add('d-none');
|
||||||
|
rotateSecretResult.classList.remove('d-none');
|
||||||
|
confirmRotateBtn.classList.add('d-none');
|
||||||
|
rotateCancelBtn.classList.add('d-none');
|
||||||
|
rotateDoneBtn.classList.remove('d-none');
|
||||||
|
|
||||||
|
} catch (err) {
|
||||||
|
if (window.showToast) {
|
||||||
|
window.showToast(err.message, 'Error', 'danger');
|
||||||
|
}
|
||||||
|
rotateSecretModal.hide();
|
||||||
|
} finally {
|
||||||
|
window.UICore.setButtonLoading(confirmRotateBtn, false);
|
||||||
|
}
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
if (copyNewSecretBtn) {
|
||||||
|
copyNewSecretBtn.addEventListener('click', async function() {
|
||||||
|
await window.UICore.copyToClipboard(newSecretKeyInput.value, copyNewSecretBtn, 'Copy');
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
if (rotateDoneBtn) {
|
||||||
|
rotateDoneBtn.addEventListener('click', function() {
|
||||||
|
window.location.reload();
|
||||||
|
});
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
function createUserCardHtml(accessKey, displayName, policies) {
|
||||||
|
var policyBadges = '';
|
||||||
|
if (policies && policies.length > 0) {
|
||||||
|
policyBadges = policies.map(function(p) {
|
||||||
|
var actionText = p.actions && p.actions.includes('*') ? 'full' : (p.actions ? p.actions.length : 0);
|
||||||
|
return '<span class="badge bg-primary bg-opacity-10 text-primary">' +
|
||||||
|
'<svg xmlns="http://www.w3.org/2000/svg" width="10" height="10" fill="currentColor" class="me-1" viewBox="0 0 16 16">' +
|
||||||
|
'<path d="M2.522 5H2a.5.5 0 0 0-.494.574l1.372 9.149A1.5 1.5 0 0 0 4.36 16h7.278a1.5 1.5 0 0 0 1.483-1.277l1.373-9.149A.5.5 0 0 0 14 5h-.522A5.5 5.5 0 0 0 2.522 5zm1.005 0a4.5 4.5 0 0 1 8.945 0H3.527z"/>' +
|
||||||
|
'</svg>' + window.UICore.escapeHtml(p.bucket) +
|
||||||
|
'<span class="opacity-75">(' + actionText + ')</span></span>';
|
||||||
|
}).join('');
|
||||||
|
} else {
|
||||||
|
policyBadges = '<span class="badge bg-secondary bg-opacity-10 text-secondary">No policies</span>';
|
||||||
|
}
|
||||||
|
|
||||||
|
return '<div class="col-md-6 col-xl-4">' +
|
||||||
|
'<div class="card h-100 iam-user-card">' +
|
||||||
|
'<div class="card-body">' +
|
||||||
|
'<div class="d-flex align-items-start justify-content-between mb-3">' +
|
||||||
|
'<div class="d-flex align-items-center gap-3 min-width-0 overflow-hidden">' +
|
||||||
|
'<div class="user-avatar user-avatar-lg flex-shrink-0">' +
|
||||||
|
'<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" fill="currentColor" viewBox="0 0 16 16">' +
|
||||||
|
'<path d="M8 8a3 3 0 1 0 0-6 3 3 0 0 0 0 6zm2-3a2 2 0 1 1-4 0 2 2 0 0 1 4 0zm4 8c0 1-1 1-1 1H3s-1 0-1-1 1-4 6-4 6 3 6 4zm-1-.004c-.001-.246-.154-.986-.832-1.664C11.516 10.68 10.289 10 8 10c-2.29 0-3.516.68-4.168 1.332-.678.678-.83 1.418-.832 1.664h10z"/>' +
|
||||||
|
'</svg></div>' +
|
||||||
|
'<div class="min-width-0">' +
|
||||||
|
'<h6 class="fw-semibold mb-0 text-truncate" title="' + window.UICore.escapeHtml(displayName) + '">' + window.UICore.escapeHtml(displayName) + '</h6>' +
|
||||||
|
'<code class="small text-muted d-block text-truncate" title="' + window.UICore.escapeHtml(accessKey) + '">' + window.UICore.escapeHtml(accessKey) + '</code>' +
|
||||||
|
'</div></div>' +
|
||||||
|
'<div class="dropdown flex-shrink-0">' +
|
||||||
|
'<button class="btn btn-sm btn-icon" type="button" data-bs-toggle="dropdown" aria-expanded="false">' +
|
||||||
|
'<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" viewBox="0 0 16 16">' +
|
||||||
|
'<path d="M9.5 13a1.5 1.5 0 1 1-3 0 1.5 1.5 0 0 1 3 0zm0-5a1.5 1.5 0 1 1-3 0 1.5 1.5 0 0 1 3 0zm0-5a1.5 1.5 0 1 1-3 0 1.5 1.5 0 0 1 3 0z"/>' +
|
||||||
|
'</svg></button>' +
|
||||||
|
'<ul class="dropdown-menu dropdown-menu-end">' +
|
||||||
|
'<li><button class="dropdown-item" type="button" data-edit-user="' + window.UICore.escapeHtml(accessKey) + '" data-display-name="' + window.UICore.escapeHtml(displayName) + '">' +
|
||||||
|
'<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-2" viewBox="0 0 16 16"><path d="M12.146.146a.5.5 0 0 1 .708 0l3 3a.5.5 0 0 1 0 .708l-10 10a.5.5 0 0 1-.168.11l-5 2a.5.5 0 0 1-.65-.65l2-5a.5.5 0 0 1 .11-.168l10-10zM11.207 2.5 13.5 4.793 14.793 3.5 12.5 1.207 11.207 2.5zm1.586 3L10.5 3.207 4 9.707V10h.5a.5.5 0 0 1 .5.5v.5h.5a.5.5 0 0 1 .5.5v.5h.293l6.5-6.5z"/></svg>Edit Name</button></li>' +
|
||||||
|
'<li><button class="dropdown-item" type="button" data-rotate-user="' + window.UICore.escapeHtml(accessKey) + '">' +
|
||||||
|
'<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-2" viewBox="0 0 16 16"><path d="M11.534 7h3.932a.25.25 0 0 1 .192.41l-1.966 2.36a.25.25 0 0 1-.384 0l-1.966-2.36a.25.25 0 0 1 .192-.41zm-11 2h3.932a.25.25 0 0 0 .192-.41L2.692 6.23a.25.25 0 0 0-.384 0L.342 8.59A.25.25 0 0 0 .534 9z"/><path fill-rule="evenodd" d="M8 3c-1.552 0-2.94.707-3.857 1.818a.5.5 0 1 1-.771-.636A6.002 6.002 0 0 1 13.917 7H12.9A5.002 5.002 0 0 0 8 3zM3.1 9a5.002 5.002 0 0 0 8.757 2.182.5.5 0 1 1 .771.636A6.002 6.002 0 0 1 2.083 9H3.1z"/></svg>Rotate Secret</button></li>' +
|
||||||
|
'<li><hr class="dropdown-divider"></li>' +
|
||||||
|
      '<li><button class="dropdown-item text-danger" type="button" data-delete-user="' + window.UICore.escapeHtml(accessKey) + '">' +
      '<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-2" viewBox="0 0 16 16"><path d="M5.5 5.5a.5.5 0 0 1 .5.5v6a.5.5 0 0 1-1 0v-6a.5.5 0 0 1 .5-.5zm2.5 0a.5.5 0 0 1 .5.5v6a.5.5 0 0 1-1 0v-6a.5.5 0 0 1 .5-.5zm3 .5v6a.5.5 0 0 1-1 0v-6a.5.5 0 0 1 1 0z"/><path fill-rule="evenodd" d="M14.5 3a1 1 0 0 1-1 1H13v9a2 2 0 0 1-2 2H5a2 2 0 0 1-2-2V4h-.5a1 1 0 0 1-1-1V2a1 1 0 0 1 1-1H6a1 1 0 0 1 1-1h2a1 1 0 0 1 1 1h3.5a1 1 0 0 1 1 1v1zM4.118 4 4 4.059V13a1 1 0 0 0 1 1h6a1 1 0 0 0 1-1V4.059L11.882 4H4.118zM2.5 3V2h11v1h-11z"/></svg>Delete User</button></li>' +
      '</ul></div></div>' +
      '<div class="mb-3">' +
      '<div class="small text-muted mb-2">Bucket Permissions</div>' +
      '<div class="d-flex flex-wrap gap-1">' + policyBadges + '</div></div>' +
      '<button class="btn btn-outline-primary btn-sm w-100" type="button" data-policy-editor data-access-key="' + window.UICore.escapeHtml(accessKey) + '">' +
      '<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-1" viewBox="0 0 16 16"><path d="M8 4.754a3.246 3.246 0 1 0 0 6.492 3.246 3.246 0 0 0 0-6.492zM5.754 8a2.246 2.246 0 1 1 4.492 0 2.246 2.246 0 0 1-4.492 0z"/><path d="M9.796 1.343c-.527-1.79-3.065-1.79-3.592 0l-.094.319a.873.873 0 0 1-1.255.52l-.292-.16c-1.64-.892-3.433.902-2.54 2.541l.159.292a.873.873 0 0 1-.52 1.255l-.319.094c-1.79.527-1.79 3.065 0 3.592l.319.094a.873.873 0 0 1 .52 1.255l-.16.292c-.892 1.64.901 3.434 2.541 2.54l.292-.159a.873.873 0 0 1 1.255.52l.094.319c.527 1.79 3.065 1.79 3.592 0l.094-.319a.873.873 0 0 1 1.255-.52l.292.16c1.64.893 3.434-.902 2.54-2.541l-.159-.292a.873.873 0 0 1 .52-1.255l.319-.094c1.79-.527 1.79-3.065 0-3.592l-.319-.094a.873.873 0 0 1-.52-1.255l.16-.292c.893-1.64-.902-3.433-2.541-2.54l-.292.159a.873.873 0 0 1-1.255-.52l-.094-.319z"/></svg>Manage Policies</button>' +
      '</div></div></div>';
  }

  function attachUserCardHandlers(cardElement, accessKey, displayName) {
    var editBtn = cardElement.querySelector('[data-edit-user]');
    if (editBtn) {
      editBtn.addEventListener('click', function() {
        currentEditKey = accessKey;
        document.getElementById('editUserDisplayName').value = displayName;
        document.getElementById('editUserForm').action = endpoints.updateUser.replace('ACCESS_KEY', accessKey);
        editUserModal.show();
      });
    }

    var deleteBtn = cardElement.querySelector('[data-delete-user]');
    if (deleteBtn) {
      deleteBtn.addEventListener('click', function() {
        currentDeleteKey = accessKey;
        document.getElementById('deleteUserLabel').textContent = accessKey;
        document.getElementById('deleteUserForm').action = endpoints.deleteUser.replace('ACCESS_KEY', accessKey);
        var deleteSelfWarning = document.getElementById('deleteSelfWarning');
        if (accessKey === currentUserKey) {
          deleteSelfWarning.classList.remove('d-none');
        } else {
          deleteSelfWarning.classList.add('d-none');
        }
        deleteUserModal.show();
      });
    }

    var rotateBtn = cardElement.querySelector('[data-rotate-user]');
    if (rotateBtn) {
      rotateBtn.addEventListener('click', function() {
        currentRotateKey = accessKey;
        document.getElementById('rotateUserLabel').textContent = accessKey;
        document.getElementById('rotateSecretConfirm').classList.remove('d-none');
        document.getElementById('rotateSecretResult').classList.add('d-none');
        document.getElementById('confirmRotateBtn').classList.remove('d-none');
        document.getElementById('rotateCancelBtn').classList.remove('d-none');
        document.getElementById('rotateDoneBtn').classList.add('d-none');
        rotateSecretModal.show();
      });
    }

    var policyBtn = cardElement.querySelector('[data-policy-editor]');
    if (policyBtn) {
      policyBtn.addEventListener('click', function() {
        document.getElementById('policyEditorUserLabel').textContent = accessKey;
        document.getElementById('policyEditorUser').value = accessKey;
        document.getElementById('policyEditorDocument').value = getUserPolicies(accessKey);
        policyModal.show();
      });
    }
  }

  function updateUserCount() {
    var countEl = document.querySelector('.card-header .text-muted.small');
    if (countEl) {
      var count = document.querySelectorAll('.iam-user-card').length;
      countEl.textContent = count + ' user' + (count !== 1 ? 's' : '') + ' configured';
    }
  }

  function setupFormHandlers() {
    var createUserForm = document.querySelector('#createUserModal form');
    if (createUserForm) {
      createUserForm.addEventListener('submit', function(e) {
        e.preventDefault();
        window.UICore.submitFormAjax(createUserForm, {
          successMessage: 'User created',
          onSuccess: function(data) {
            var modal = bootstrap.Modal.getInstance(document.getElementById('createUserModal'));
            if (modal) modal.hide();
            createUserForm.reset();

            var existingAlert = document.querySelector('.alert.alert-info.border-0.shadow-sm');
            if (existingAlert) existingAlert.remove();

            if (data.secret_key) {
              var alertHtml = '<div class="alert alert-info border-0 shadow-sm mb-4" role="alert" id="newUserSecretAlert">' +
                '<div class="d-flex align-items-start gap-2 mb-2">' +
                '<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" class="bi bi-key flex-shrink-0 mt-1" viewBox="0 0 16 16">' +
                '<path d="M0 8a4 4 0 0 1 7.465-2H14a.5.5 0 0 1 .354.146l1.5 1.5a.5.5 0 0 1 0 .708l-1.5 1.5a.5.5 0 0 1-.708 0L13 9.207l-.646.647a.5.5 0 0 1-.708 0L11 9.207l-.646.647a.5.5 0 0 1-.708 0L9 9.207l-.646.647A.5.5 0 0 1 8 10h-.535A4 4 0 0 1 0 8zm4-3a3 3 0 1 0 2.712 4.285A.5.5 0 0 1 7.163 9h.63l.853-.854a.5.5 0 0 1 .708 0l.646.647.646-.647a.5.5 0 0 1 .708 0l.646.647.646-.647a.5.5 0 0 1 .708 0l.646.647.793-.793-1-1h-6.63a.5.5 0 0 1-.451-.285A3 3 0 0 0 4 5z"/><path d="M4 8a1 1 0 1 1-2 0 1 1 0 0 1 2 0z"/>' +
                '</svg>' +
                '<div class="flex-grow-1">' +
                '<div class="fw-semibold">New user created: <code>' + window.UICore.escapeHtml(data.access_key) + '</code></div>' +
                '<p class="mb-2 small">This secret is only shown once. Copy it now and store it securely.</p>' +
                '</div>' +
                '<button type="button" class="btn-close" data-bs-dismiss="alert" aria-label="Close"></button>' +
                '</div>' +
                '<div class="input-group">' +
                '<span class="input-group-text"><strong>Secret key</strong></span>' +
                '<input class="form-control font-monospace" type="text" value="' + window.UICore.escapeHtml(data.secret_key) + '" readonly id="newUserSecret" />' +
                '<button class="btn btn-outline-primary" type="button" id="copyNewUserSecret">Copy</button>' +
                '</div></div>';
              var container = document.querySelector('.page-header');
              if (container) {
                container.insertAdjacentHTML('afterend', alertHtml);
                document.getElementById('copyNewUserSecret').addEventListener('click', async function() {
                  await window.UICore.copyToClipboard(data.secret_key, this, 'Copy');
                });
              }
            }

            var usersGrid = document.querySelector('.row.g-3');
            var emptyState = document.querySelector('.empty-state');
            if (emptyState) {
              var emptyCol = emptyState.closest('.col-12');
              if (emptyCol) emptyCol.remove();
              if (!usersGrid) {
                var cardBody = document.querySelector('.card-body.px-4.pb-4');
                if (cardBody) {
                  cardBody.innerHTML = '<div class="row g-3"></div>';
                  usersGrid = cardBody.querySelector('.row.g-3');
                }
              }
            }

            if (usersGrid) {
              var cardHtml = createUserCardHtml(data.access_key, data.display_name, data.policies);
              usersGrid.insertAdjacentHTML('beforeend', cardHtml);
              var newCard = usersGrid.lastElementChild;
              attachUserCardHandlers(newCard, data.access_key, data.display_name);
              users.push({
                access_key: data.access_key,
                display_name: data.display_name,
                policies: data.policies || []
              });
              updateUserCount();
            }
          }
        });
      });
    }

    var policyEditorForm = document.getElementById('policyEditorForm');
    if (policyEditorForm) {
      policyEditorForm.addEventListener('submit', function(e) {
        e.preventDefault();
        var userInputEl = document.getElementById('policyEditorUser');
        var key = userInputEl.value;
        if (!key) return;

        var template = policyEditorForm.dataset.actionTemplate;
        policyEditorForm.action = template.replace('ACCESS_KEY_PLACEHOLDER', key);

        window.UICore.submitFormAjax(policyEditorForm, {
          successMessage: 'Policies updated',
          onSuccess: function(data) {
            policyModal.hide();

            var userCard = document.querySelector('[data-access-key="' + key + '"]');
            if (userCard) {
              var badgeContainer = userCard.closest('.iam-user-card').querySelector('.d-flex.flex-wrap.gap-1');
              if (badgeContainer && data.policies) {
                var badges = data.policies.map(function(p) {
                  return '<span class="badge bg-primary bg-opacity-10 text-primary">' +
                    '<svg xmlns="http://www.w3.org/2000/svg" width="10" height="10" fill="currentColor" class="me-1" viewBox="0 0 16 16">' +
                    '<path d="M2.522 5H2a.5.5 0 0 0-.494.574l1.372 9.149A1.5 1.5 0 0 0 4.36 16h7.278a1.5 1.5 0 0 0 1.483-1.277l1.373-9.149A.5.5 0 0 0 14 5h-.522A5.5 5.5 0 0 0 2.522 5zm1.005 0a4.5 4.5 0 0 1 8.945 0H3.527z"/>' +
                    '</svg>' + window.UICore.escapeHtml(p.bucket) +
                    '<span class="opacity-75">(' + (p.actions.includes('*') ? 'full' : p.actions.length) + ')</span></span>';
                }).join('');
                badgeContainer.innerHTML = badges || '<span class="badge bg-secondary bg-opacity-10 text-secondary">No policies</span>';
              }
            }

            var userIndex = users.findIndex(function(u) { return u.access_key === key; });
            if (userIndex >= 0 && data.policies) {
              users[userIndex].policies = data.policies;
            }
          }
        });
      });
    }

    var editUserForm = document.getElementById('editUserForm');
    if (editUserForm) {
      editUserForm.addEventListener('submit', function(e) {
        e.preventDefault();
        var key = currentEditKey;
        window.UICore.submitFormAjax(editUserForm, {
          successMessage: 'User updated',
          onSuccess: function(data) {
            editUserModal.hide();

            var newName = data.display_name || document.getElementById('editUserDisplayName').value;
            var editBtn = document.querySelector('[data-edit-user="' + key + '"]');
            if (editBtn) {
              editBtn.setAttribute('data-display-name', newName);
              var card = editBtn.closest('.iam-user-card');
              if (card) {
                var nameEl = card.querySelector('h6');
                if (nameEl) {
                  nameEl.textContent = newName;
                  nameEl.title = newName;
                }
              }
            }

            var userIndex = users.findIndex(function(u) { return u.access_key === key; });
            if (userIndex >= 0) {
              users[userIndex].display_name = newName;
            }

            if (key === currentUserKey) {
              document.querySelectorAll('.sidebar-user .user-name').forEach(function(el) {
                var truncated = newName.length > 16 ? newName.substring(0, 16) + '...' : newName;
                el.textContent = truncated;
                el.title = newName;
              });
              document.querySelectorAll('.sidebar-user[data-username]').forEach(function(el) {
                el.setAttribute('data-username', newName);
              });
            }
          }
        });
      });
    }

    var deleteUserForm = document.getElementById('deleteUserForm');
    if (deleteUserForm) {
      deleteUserForm.addEventListener('submit', function(e) {
        e.preventDefault();
        var key = currentDeleteKey;
        window.UICore.submitFormAjax(deleteUserForm, {
          successMessage: 'User deleted',
          onSuccess: function(data) {
            deleteUserModal.hide();

            if (key === currentUserKey) {
              window.location.href = '/ui/';
              return;
            }

            var deleteBtn = document.querySelector('[data-delete-user="' + key + '"]');
            if (deleteBtn) {
              var cardCol = deleteBtn.closest('[class*="col-"]');
              if (cardCol) {
                cardCol.remove();
              }
            }

            users = users.filter(function(u) { return u.access_key !== key; });
            updateUserCount();
          }
        });
      });
    }
  }

  return {
    init: init
  };
})();
static/js/ui-core.js (new file, 324 lines)
@@ -0,0 +1,324 @@
window.UICore = (function() {
  'use strict';

  function getCsrfToken() {
    const meta = document.querySelector('meta[name="csrf-token"]');
    return meta ? meta.getAttribute('content') : '';
  }

  function formatBytes(bytes) {
    if (!Number.isFinite(bytes)) return bytes + ' bytes';
    const units = ['bytes', 'KB', 'MB', 'GB', 'TB'];
    let i = 0;
    let size = bytes;
    while (size >= 1024 && i < units.length - 1) {
      size /= 1024;
      i++;
    }
    return size.toFixed(i === 0 ? 0 : 1) + ' ' + units[i];
  }

  function escapeHtml(value) {
    if (value === null || value === undefined) return '';
    return String(value)
      .replace(/&/g, '&amp;')
      .replace(/</g, '&lt;')
      .replace(/>/g, '&gt;')
      .replace(/"/g, '&quot;')
      .replace(/'/g, '&#39;');
  }

  async function submitFormAjax(form, options) {
    options = options || {};
    var onSuccess = options.onSuccess || function() {};
    var onError = options.onError || function() {};
    var successMessage = options.successMessage || 'Operation completed';

    var formData = new FormData(form);
    var csrfToken = getCsrfToken();
    var submitBtn = form.querySelector('[type="submit"]');
    var originalHtml = submitBtn ? submitBtn.innerHTML : '';

    try {
      if (submitBtn) {
        submitBtn.disabled = true;
        submitBtn.innerHTML = '<span class="spinner-border spinner-border-sm me-1"></span>Saving...';
      }

      var formAction = form.getAttribute('action') || form.action;
      var response = await fetch(formAction, {
        method: form.getAttribute('method') || 'POST',
        headers: {
          'X-CSRFToken': csrfToken,
          'Accept': 'application/json',
          'X-Requested-With': 'XMLHttpRequest'
        },
        body: formData,
        redirect: 'follow'
      });

      var contentType = response.headers.get('content-type') || '';
      if (!contentType.includes('application/json')) {
        throw new Error('Server returned an unexpected response. Please try again.');
      }

      var data = await response.json();

      if (!response.ok) {
        throw new Error(data.error || 'HTTP ' + response.status);
      }

      window.showToast(data.message || successMessage, 'Success', 'success');
      onSuccess(data);
    } catch (err) {
      window.showToast(err.message, 'Error', 'error');
      onError(err);
    } finally {
      if (submitBtn) {
        submitBtn.disabled = false;
        submitBtn.innerHTML = originalHtml;
      }
    }
  }

  function PollingManager() {
    this.intervals = {};
    this.callbacks = {};
    this.timers = {};
    this.defaults = {
      replication: 30000,
      lifecycle: 60000,
      connectionHealth: 60000,
      bucketStats: 120000
    };
    this._loadSettings();
  }

  PollingManager.prototype._loadSettings = function() {
    try {
      var stored = localStorage.getItem('myfsio-polling-intervals');
      if (stored) {
        var settings = JSON.parse(stored);
        for (var key in settings) {
          if (settings.hasOwnProperty(key)) {
            this.defaults[key] = settings[key];
          }
        }
      }
    } catch (e) {
      console.warn('Failed to load polling settings:', e);
    }
  };

  PollingManager.prototype.saveSettings = function(settings) {
    try {
      for (var key in settings) {
        if (settings.hasOwnProperty(key)) {
          this.defaults[key] = settings[key];
        }
      }
      localStorage.setItem('myfsio-polling-intervals', JSON.stringify(this.defaults));
    } catch (e) {
      console.warn('Failed to save polling settings:', e);
    }
  };

  PollingManager.prototype.start = function(key, callback, interval) {
    this.stop(key);
    var ms = interval !== undefined ? interval : (this.defaults[key] || 30000);
    if (ms <= 0) return;

    this.callbacks[key] = callback;
    this.intervals[key] = ms;

    callback();

    var self = this;
    this.timers[key] = setInterval(function() {
      if (!document.hidden) {
        callback();
      }
    }, ms);
  };

  PollingManager.prototype.stop = function(key) {
    if (this.timers[key]) {
      clearInterval(this.timers[key]);
      delete this.timers[key];
    }
  };

  PollingManager.prototype.stopAll = function() {
    for (var key in this.timers) {
      if (this.timers.hasOwnProperty(key)) {
        clearInterval(this.timers[key]);
      }
    }
    this.timers = {};
  };

  PollingManager.prototype.updateInterval = function(key, newInterval) {
    var callback = this.callbacks[key];
    this.defaults[key] = newInterval;
    this.saveSettings(this.defaults);
    if (callback) {
      this.start(key, callback, newInterval);
    }
  };

  PollingManager.prototype.getSettings = function() {
    var result = {};
    for (var key in this.defaults) {
      if (this.defaults.hasOwnProperty(key)) {
        result[key] = this.defaults[key];
      }
    }
    return result;
  };

  var pollingManager = new PollingManager();

  document.addEventListener('visibilitychange', function() {
    if (document.hidden) {
      pollingManager.stopAll();
    } else {
      for (var key in pollingManager.callbacks) {
        if (pollingManager.callbacks.hasOwnProperty(key)) {
          pollingManager.start(key, pollingManager.callbacks[key], pollingManager.intervals[key]);
        }
      }
    }
  });

  return {
    getCsrfToken: getCsrfToken,
    formatBytes: formatBytes,
    escapeHtml: escapeHtml,
    submitFormAjax: submitFormAjax,
    PollingManager: PollingManager,
    pollingManager: pollingManager
  };
})();

window.pollingManager = window.UICore.pollingManager;

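As a quick sanity check, the two pure helpers exported above (`formatBytes`, `escapeHtml`) can be exercised outside the browser. This sketch copies their bodies verbatim from the module so it runs under plain Node without a DOM; it is illustration only, not part of the file:

```javascript
// Standalone copies of the pure UICore helpers, for illustration only.
function formatBytes(bytes) {
  if (!Number.isFinite(bytes)) return bytes + ' bytes';
  const units = ['bytes', 'KB', 'MB', 'GB', 'TB'];
  let i = 0;
  let size = bytes;
  while (size >= 1024 && i < units.length - 1) {
    size /= 1024;
    i++;
  }
  // Whole numbers below 1 KB, one decimal place above.
  return size.toFixed(i === 0 ? 0 : 1) + ' ' + units[i];
}

function escapeHtml(value) {
  if (value === null || value === undefined) return '';
  return String(value)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

console.log(formatBytes(512));                    // "512 bytes"
console.log(formatBytes(1536));                   // "1.5 KB"
console.log(formatBytes(5 * 1024 * 1024 * 1024)); // "5.0 GB"
console.log(escapeHtml('<a href="x">'));          // "&lt;a href=&quot;x&quot;&gt;"
```

Note that `&` is replaced first, so already-escaped input is double-escaped by design; callers pass raw values such as access keys.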
window.UICore.copyToClipboard = async function(text, button, originalText) {
  try {
    await navigator.clipboard.writeText(text);
    if (button) {
      var prevText = button.textContent;
      button.textContent = 'Copied!';
      setTimeout(function() {
        button.textContent = originalText || prevText;
      }, 1500);
    }
    return true;
  } catch (err) {
    console.error('Copy failed:', err);
    return false;
  }
};

window.UICore.setButtonLoading = function(button, isLoading, loadingText) {
  if (!button) return;
  if (isLoading) {
    button._originalHtml = button.innerHTML;
    button._originalDisabled = button.disabled;
    button.disabled = true;
    button.innerHTML = '<span class="spinner-border spinner-border-sm me-1"></span>' + (loadingText || 'Loading...');
  } else {
    button.disabled = button._originalDisabled || false;
    button.innerHTML = button._originalHtml || button.innerHTML;
  }
};

window.UICore.updateBadgeCount = function(selector, count, singular, plural) {
  var badge = document.querySelector(selector);
  if (badge) {
    var label = count === 1 ? (singular || '') : (plural || 's');
    badge.textContent = count + ' ' + label;
  }
};

window.UICore.setupJsonAutoIndent = function(textarea) {
  if (!textarea) return;

  textarea.addEventListener('keydown', function(e) {
    if (e.key === 'Enter') {
      e.preventDefault();

      var start = this.selectionStart;
      var end = this.selectionEnd;
      var value = this.value;

      var lineStart = value.lastIndexOf('\n', start - 1) + 1;
      var currentLine = value.substring(lineStart, start);

      var indentMatch = currentLine.match(/^(\s*)/);
      var indent = indentMatch ? indentMatch[1] : '';

      var trimmedLine = currentLine.trim();
      var lastChar = trimmedLine.slice(-1);

      var newIndent = indent;
      var insertAfter = '';

      if (lastChar === '{' || lastChar === '[') {
        newIndent = indent + '  ';

        var charAfterCursor = value.substring(start, start + 1).trim();
        if ((lastChar === '{' && charAfterCursor === '}') ||
            (lastChar === '[' && charAfterCursor === ']')) {
          insertAfter = '\n' + indent;
        }
      } else if (lastChar === ',' || lastChar === ':') {
        newIndent = indent;
      }

      var insertion = '\n' + newIndent + insertAfter;
      var newValue = value.substring(0, start) + insertion + value.substring(end);

      this.value = newValue;

      var newCursorPos = start + 1 + newIndent.length;
      this.selectionStart = this.selectionEnd = newCursorPos;

      this.dispatchEvent(new Event('input', { bubbles: true }));
    }

    if (e.key === 'Tab') {
      e.preventDefault();
      var start = this.selectionStart;
      var end = this.selectionEnd;

      if (e.shiftKey) {
        var lineStart = this.value.lastIndexOf('\n', start - 1) + 1;
        var lineContent = this.value.substring(lineStart, start);
        if (lineContent.startsWith('  ')) {
          this.value = this.value.substring(0, lineStart) +
            this.value.substring(lineStart + 2);
          this.selectionStart = this.selectionEnd = Math.max(lineStart, start - 2);
        }
      } else {
        this.value = this.value.substring(0, start) + '  ' + this.value.substring(end);
        this.selectionStart = this.selectionEnd = start + 2;
      }

      this.dispatchEvent(new Event('input', { bubbles: true }));
    }
  });
};

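The Enter branch of `setupJsonAutoIndent` above reduces to a pure string computation: look at the line before the cursor, keep its indentation, deepen by one two-space level after `{` or `[`, and push a matching closer onto its own line. This sketch isolates that decision in a standalone function (`computeEnterInsertion` is a hypothetical name, not part of the file) so it can be tested without a textarea:

```javascript
// Hypothetical standalone version of the Enter-key logic in setupJsonAutoIndent:
// given the text before and after the cursor, compute the string to insert.
function computeEnterInsertion(before, after) {
  var lineStart = before.lastIndexOf('\n') + 1;
  var currentLine = before.substring(lineStart);
  var indent = (currentLine.match(/^(\s*)/) || ['', ''])[1];
  var lastChar = currentLine.trim().slice(-1);

  var newIndent = indent;
  var insertAfter = '';
  if (lastChar === '{' || lastChar === '[') {
    newIndent = indent + '  '; // deepen by one two-space level
    var charAfterCursor = after.substring(0, 1).trim();
    if ((lastChar === '{' && charAfterCursor === '}') ||
        (lastChar === '[' && charAfterCursor === ']')) {
      insertAfter = '\n' + indent; // keep the closing bracket on its own line
    }
  }
  return '\n' + newIndent + insertAfter;
}

// Pressing Enter between "{" and "}" opens an indented line and
// pushes the closer down, matching the textarea behaviour above.
console.log(JSON.stringify(computeEnterInsertion('{', '}'))); // "\n  \n"
```

In the real handler the returned string is spliced into `textarea.value` and the cursor is placed at the end of the new indentation.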
document.addEventListener('DOMContentLoaded', function() {
  var flashMessage = sessionStorage.getItem('flashMessage');
  if (flashMessage) {
    sessionStorage.removeItem('flashMessage');
    try {
      var msg = JSON.parse(flashMessage);
      if (window.showToast) {
        window.showToast(msg.body || msg.title, msg.title, msg.variant || 'info');
      }
    } catch (e) {}
  }
});
@@ -5,8 +5,8 @@
     <meta name="viewport" content="width=device-width, initial-scale=1" />
     {% if principal %}<meta name="csrf-token" content="{{ csrf_token() }}" />{% endif %}
     <title>MyFSIO Console</title>
-    <link rel="icon" type="image/png" href="{{ url_for('static', filename='images/MyFISO.png') }}" />
-    <link rel="icon" type="image/x-icon" href="{{ url_for('static', filename='images/MyFISO.ico') }}" />
+    <link rel="icon" type="image/png" href="{{ url_for('static', filename='images/MyFSIO.png') }}" />
+    <link rel="icon" type="image/x-icon" href="{{ url_for('static', filename='images/MyFSIO.ico') }}" />
     <link
       href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.2/dist/css/bootstrap.min.css"
       rel="stylesheet"
@@ -24,105 +24,218 @@
       document.documentElement.dataset.bsTheme = 'light';
       document.documentElement.dataset.theme = 'light';
     }
+    try {
+      if (localStorage.getItem('myfsio-sidebar-collapsed') === 'true') {
+        document.documentElement.classList.add('sidebar-will-collapse');
+      }
+    } catch (err) {}
   })();
 </script>
 <link rel="stylesheet" href="{{ url_for('static', filename='css/main.css') }}" />
 </head>
 <body>
-<nav class="navbar navbar-expand-lg myfsio-nav shadow-sm">
-  <div class="container-fluid">
-    <a class="navbar-brand fw-semibold" href="{{ url_for('ui.buckets_overview') }}">
-      <img
-        src="{{ url_for('static', filename='images/MyFISO.png') }}"
-        alt="MyFSIO logo"
-        class="myfsio-logo"
-        width="32"
-        height="32"
-        decoding="async"
-      />
-      <span class="myfsio-title">MyFSIO</span>
-    </a>
-    <button class="navbar-toggler" type="button" data-bs-toggle="collapse" data-bs-target="#navContent" aria-controls="navContent" aria-expanded="false" aria-label="Toggle navigation">
-      <span class="navbar-toggler-icon"></span>
-    </button>
-    <div class="collapse navbar-collapse" id="navContent">
-      <ul class="navbar-nav me-auto mb-2 mb-lg-0">
-        {% if principal %}
-        <li class="nav-item">
-          <a class="nav-link" href="{{ url_for('ui.buckets_overview') }}">Buckets</a>
-        </li>
-        {% if can_manage_iam %}
-        <li class="nav-item">
-          <a class="nav-link" href="{{ url_for('ui.iam_dashboard') }}">IAM</a>
-        </li>
-        <li class="nav-item">
-          <a class="nav-link" href="{{ url_for('ui.connections_dashboard') }}">Connections</a>
-        </li>
-        <li class="nav-item">
-          <a class="nav-link" href="{{ url_for('ui.metrics_dashboard') }}">Metrics</a>
-        </li>
-        {% endif %}
-        {% endif %}
-        {% if principal %}
-        <li class="nav-item">
-          <a class="nav-link" href="{{ url_for('ui.docs_page') }}">Docs</a>
-        </li>
-        {% endif %}
-      </ul>
-      <div class="ms-lg-auto d-flex align-items-center gap-3 text-light flex-wrap">
-        <button
-          class="btn btn-outline-light btn-sm theme-toggle"
-          type="button"
-          id="themeToggle"
-          aria-pressed="false"
-          aria-label="Toggle dark mode"
-        >
-          <span id="themeToggleLabel" class="visually-hidden">Toggle dark mode</span>
-          <svg
-            xmlns="http://www.w3.org/2000/svg"
-            width="16"
-            height="16"
-            fill="currentColor"
-            class="theme-icon"
-            id="themeToggleSun"
-            viewBox="0 0 16 16"
-            aria-hidden="true"
-          >
-            <path
-              d="M8 11.5a3.5 3.5 0 1 1 0-7 3.5 3.5 0 0 1 0 7zm0 1.5a5 5 0 1 0 0-10 5 5 0 0 0 0 10zM8 0a.5.5 0 0 1 .5.5v1.555a.5.5 0 0 1-1 0V.5A.5.5 0 0 1 8 0zm0 12.945a.5.5 0 0 1 .5.5v2.055a.5.5 0 0 1-1 0v-2.055a.5.5 0 0 1 .5-.5zM2.343 2.343a.5.5 0 0 1 .707 0l1.1 1.1a.5.5 0 1 1-.708.707l-1.1-1.1a.5.5 0 0 1 0-.707zm9.507 9.507a.5.5 0 0 1 .707 0l1.1 1.1a.5.5 0 1 1-.707.708l-1.1-1.1a.5.5 0 0 1 0-.708zM0 8a.5.5 0 0 1 .5-.5h1.555a.5.5 0 0 1 0 1H.5A.5.5 0 0 1 0 8zm12.945 0a.5.5 0 0 1 .5-.5H15.5a.5.5 0 0 1 0 1h-2.055a.5.5 0 0 1-.5-.5zM2.343 13.657a.5.5 0 0 1 0-.707l1.1-1.1a.5.5 0 1 1 .708.707l-1.1 1.1a.5.5 0 0 1-.708 0zm9.507-9.507a.5.5 0 0 1 0-.707l1.1-1.1a.5.5 0 0 1 .707.708l-1.1 1.1a.5.5 0 0 1-.707 0z"
-            />
-          </svg>
-          <svg
-            xmlns="http://www.w3.org/2000/svg"
-            width="16"
+<header class="mobile-header d-lg-none">
+  <button class="sidebar-toggle-btn" type="button" data-bs-toggle="offcanvas" data-bs-target="#mobileSidebar" aria-controls="mobileSidebar" aria-label="Toggle navigation">
+    <svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" fill="currentColor" viewBox="0 0 16 16">
+      <path fill-rule="evenodd" d="M2.5 12a.5.5 0 0 1 .5-.5h10a.5.5 0 0 1 0 1H3a.5.5 0 0 1-.5-.5zm0-4a.5.5 0 0 1 .5-.5h10a.5.5 0 0 1 0 1H3a.5.5 0 0 1-.5-.5zm0-4a.5.5 0 0 1 .5-.5h10a.5.5 0 0 1 0 1H3a.5.5 0 0 1-.5-.5z"/>
+    </svg>
+  </button>
+  <a class="mobile-brand" href="{{ url_for('ui.buckets_overview') }}">
+    <img src="{{ url_for('static', filename='images/MyFSIO.png') }}" alt="MyFSIO logo" width="28" height="28" />
+    <span>MyFSIO</span>
+  </a>
+  <button class="theme-toggle-mobile" type="button" id="themeToggleMobile" aria-label="Toggle dark mode">
+    <svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" fill="currentColor" class="theme-icon-mobile" id="themeToggleSunMobile" viewBox="0 0 16 16">
+      <path d="M8 11.5a3.5 3.5 0 1 1 0-7 3.5 3.5 0 0 1 0 7zm0 1.5a5 5 0 1 0 0-10 5 5 0 0 0 0 10zM8 0a.5.5 0 0 1 .5.5v1.555a.5.5 0 0 1-1 0V.5A.5.5 0 0 1 8 0zm0 12.945a.5.5 0 0 1 .5.5v2.055a.5.5 0 0 1-1 0v-2.055a.5.5 0 0 1 .5-.5zM2.343 2.343a.5.5 0 0 1 .707 0l1.1 1.1a.5.5 0 1 1-.708.707l-1.1-1.1a.5.5 0 0 1 0-.707zm9.507 9.507a.5.5 0 0 1 .707 0l1.1 1.1a.5.5 0 1 1-.707.708l-1.1-1.1a.5.5 0 0 1 0-.708zM0 8a.5.5 0 0 1 .5-.5h1.555a.5.5 0 0 1 0 1H.5A.5.5 0 0 1 0 8zm12.945 0a.5.5 0 0 1 .5-.5H15.5a.5.5 0 0 1 0 1h-2.055a.5.5 0 0 1-.5-.5zM2.343 13.657a.5.5 0 0 1 0-.707l1.1-1.1a.5.5 0 1 1 .708.707l-1.1 1.1a.5.5 0 0 1-.708 0zm9.507-9.507a.5.5 0 0 1 0-.707l1.1-1.1a.5.5 0 0 1 .707.708l-1.1 1.1a.5.5 0 0 1-.707 0z"/>
+    </svg>
+    <svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" fill="currentColor" class="theme-icon-mobile" id="themeToggleMoonMobile" viewBox="0 0 16 16">
+      <path d="M6 .278a.768.768 0 0 1 .08.858 7.208 7.208 0 0 0-.878 3.46c0 4.021 3.278 7.277 7.318 7.277.527 0 1.04-.055 1.533-.16a.787.787 0 0 1 .81.316.733.733 0 0 1-.031.893A8.349 8.349 0 0 1 8.344 16C3.734 16 0 12.286 0 7.71 0 4.266 2.114 1.312 5.124.06A.752.752 0 0 1 6 .278z"/>
+      <path d="M10.794 3.148a.217.217 0 0 1 .412 0l.387 1.162c.173.518.579.924 1.097 1.097l1.162.387a.217.217 0 0 1 0 .412l-1.162.387a1.734 1.734 0 0 0-1.097 1.097l-.387 1.162a.217.217 0 0 1-.412 0l-.387-1.162A1.734 1.734 0 0 0 9.31 6.593l-1.162-.387a.217.217 0 0 1 0-.412l1.162-.387a1.734 1.734 0 0 0 1.097-1.097l.387-1.162zM13.863.099a.145.145 0 0 1 .274 0l.258.774c.115.346.386.617.732.732l.774.258a.145.145 0 0 1 0 .274l-.774.258a1.156 1.156 0 0 0-.732.732l-.258.774a.145.145 0 0 1-.274 0l-.258-.774a1.156 1.156 0 0 0-.732-.732l-.774-.258a.145.145 0 0 1 0-.274l.774-.258c.346-.115.617-.386.732-.732L13.863.1z"/>
+    </svg>
+  </button>
+</header>
+
+<div class="offcanvas offcanvas-start sidebar-offcanvas" tabindex="-1" id="mobileSidebar" aria-labelledby="mobileSidebarLabel">
+  <div class="offcanvas-header sidebar-header">
+    <a class="sidebar-brand" href="{{ url_for('ui.buckets_overview') }}">
+      <img src="{{ url_for('static', filename='images/MyFSIO.png') }}" alt="MyFSIO logo" class="sidebar-logo" width="36" height="36" />
+      <span class="sidebar-title">MyFSIO</span>
+    </a>
+    <button type="button" class="btn-close btn-close-white" data-bs-dismiss="offcanvas" aria-label="Close"></button>
+  </div>
+  <div class="offcanvas-body sidebar-body">
+    <nav class="sidebar-nav">
+      {% if principal %}
+      <div class="nav-section">
+        <span class="nav-section-title">Navigation</span>
+        <a href="{{ url_for('ui.buckets_overview') }}" class="sidebar-link {% if request.endpoint == 'ui.buckets_overview' or request.endpoint == 'ui.bucket_detail' %}active{% endif %}">
+          <svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" viewBox="0 0 16 16">
+            <path d="M2.522 5H2a.5.5 0 0 0-.494.574l1.372 9.149A1.5 1.5 0 0 0 4.36 16h7.278a1.5 1.5 0 0 0 1.483-1.277l1.373-9.149A.5.5 0 0 0 14 5h-.522A5.5 5.5 0 0 0 2.522 5zm1.005 0a4.5 4.5 0 0 1 8.945 0H3.527z"/>
+          </svg>
+          <span>Buckets</span>
+        </a>
+        {% if can_manage_iam %}
+        <a href="{{ url_for('ui.iam_dashboard') }}" class="sidebar-link {% if request.endpoint == 'ui.iam_dashboard' %}active{% endif %}">
+          <svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" viewBox="0 0 16 16">
+            <path d="M15 14s1 0 1-1-1-4-5-4-5 3-5 4 1 1 1 1h8zm-7.978-1A.261.261 0 0 1 7 12.996c.001-.264.167-1.03.76-1.72C8.312 10.629 9.282 10 11 10c1.717 0 2.687.63 3.24 1.276.593.69.758 1.457.76 1.72l-.008.002a.274.274 0 0 1-.014.002H7.022zM11 7a2 2 0 1 0 0-4 2 2 0 0 0 0 4zm3-2a3 3 0 1 1-6 0 3 3 0 0 1 6 0zM6.936 9.28a5.88 5.88 0 0 0-1.23-.247A7.35 7.35 0 0 0 5 9c-4 0-5 3-5 4 0 .667.333 1 1 1h4.216A2.238 2.238 0 0 1 5 13c0-1.01.377-2.042 1.09-2.904.243-.294.526-.569.846-.816zM4.92 10A5.493 5.493 0 0 0 4 13H1c0-.26.164-1.03.76-1.724.545-.636 1.492-1.256 3.16-1.275zM1.5 5.5a3 3 0 1 1 6 0 3 3 0 0 1-6 0zm3-2a2 2 0 1 0 0 4 2 2 0 0 0 0-4z"/>
+          </svg>
+          <span>IAM</span>
+        </a>
+        <a href="{{ url_for('ui.connections_dashboard') }}" class="sidebar-link {% if request.endpoint == 'ui.connections_dashboard' %}active{% endif %}">
|
||||||
height="16"
|
<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" viewBox="0 0 16 16">
|
||||||
fill="currentColor"
|
<path fill-rule="evenodd" d="M6 3.5A1.5 1.5 0 0 1 7.5 2h1A1.5 1.5 0 0 1 10 3.5v1A1.5 1.5 0 0 1 8.5 6v1H14a.5.5 0 0 1 .5.5v1a.5.5 0 0 1-1 0V8h-5v.5a.5.5 0 0 1-1 0V8h-5v.5a.5.5 0 0 1-1 0v-1A.5.5 0 0 1 2 7h5.5V6A1.5 1.5 0 0 1 6 4.5v-1zM8.5 5a.5.5 0 0 0 .5-.5v-1a.5.5 0 0 0-.5-.5h-1a.5.5 0 0 0-.5.5v1a.5.5 0 0 0 .5.5h1zM0 11.5A1.5 1.5 0 0 1 1.5 10h1A1.5 1.5 0 0 1 4 11.5v1A1.5 1.5 0 0 1 2.5 14h-1A1.5 1.5 0 0 1 0 12.5v-1zm1.5-.5a.5.5 0 0 0-.5.5v1a.5.5 0 0 0 .5.5h1a.5.5 0 0 0 .5-.5v-1a.5.5 0 0 0-.5-.5h-1zm4.5.5A1.5 1.5 0 0 1 7.5 10h1a1.5 1.5 0 0 1 1.5 1.5v1A1.5 1.5 0 0 1 8.5 14h-1A1.5 1.5 0 0 1 6 12.5v-1zm1.5-.5a.5.5 0 0 0-.5.5v1a.5.5 0 0 0 .5.5h1a.5.5 0 0 0 .5-.5v-1a.5.5 0 0 0-.5-.5h-1zm4.5.5a1.5 1.5 0 0 1 1.5-1.5h1a1.5 1.5 0 0 1 1.5 1.5v1a1.5 1.5 0 0 1-1.5 1.5h-1a1.5 1.5 0 0 1-1.5-1.5v-1zm1.5-.5a.5.5 0 0 0-.5.5v1a.5.5 0 0 0 .5.5h1a.5.5 0 0 0 .5-.5v-1a.5.5 0 0 0-.5-.5h-1z"/>
|
||||||
class="theme-icon d-none"
|
|
||||||
id="themeToggleMoon"
|
|
||||||
viewBox="0 0 16 16"
|
|
||||||
aria-hidden="true"
|
|
||||||
>
|
|
||||||
<path d="M6 .278a.768.768 0 0 1 .08.858 7.208 7.208 0 0 0-.878 3.46c0 4.021 3.278 7.277 7.318 7.277.527 0 1.04-.055 1.533-.16a.787.787 0 0 1 .81.316.733.733 0 0 1-.031.893A8.349 8.349 0 0 1 8.344 16C3.734 16 0 12.286 0 7.71 0 4.266 2.114 1.312 5.124.06A.752.752 0 0 1 6 .278z"/>
|
|
||||||
<path d="M10.794 3.148a.217.217 0 0 1 .412 0l.387 1.162c.173.518.579.924 1.097 1.097l1.162.387a.217.217 0 0 1 0 .412l-1.162.387a1.734 1.734 0 0 0-1.097 1.097l-.387 1.162a.217.217 0 0 1-.412 0l-.387-1.162A1.734 1.734 0 0 0 9.31 6.593l-1.162-.387a.217.217 0 0 1 0-.412l1.162-.387a1.734 1.734 0 0 0 1.097-1.097l.387-1.162zM13.863.099a.145.145 0 0 1 .274 0l.258.774c.115.346.386.617.732.732l.774.258a.145.145 0 0 1 0 .274l-.774.258a1.156 1.156 0 0 0-.732.732l-.258.774a.145.145 0 0 1-.274 0l-.258-.774a1.156 1.156 0 0 0-.732-.732l-.774-.258a.145.145 0 0 1 0-.274l.774-.258c.346-.115.617-.386.732-.732L13.863.1z"/>
|
|
||||||
</svg>
|
</svg>
|
||||||
</button>
|
<span>Connections</span>
|
||||||
{% if principal %}
|
</a>
|
||||||
<div class="text-end small">
|
<a href="{{ url_for('ui.metrics_dashboard') }}" class="sidebar-link {% if request.endpoint == 'ui.metrics_dashboard' %}active{% endif %}">
|
||||||
<div class="fw-semibold" title="{{ principal.display_name }}">{{ principal.display_name | truncate(20, true) }}</div>
|
<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" viewBox="0 0 16 16">
|
||||||
<div class="opacity-75">{{ principal.access_key }}</div>
|
<path d="M8 4a.5.5 0 0 1 .5.5V6a.5.5 0 0 1-1 0V4.5A.5.5 0 0 1 8 4zM3.732 5.732a.5.5 0 0 1 .707 0l.915.914a.5.5 0 1 1-.708.708l-.914-.915a.5.5 0 0 1 0-.707zM2 10a.5.5 0 0 1 .5-.5h1.586a.5.5 0 0 1 0 1H2.5A.5.5 0 0 1 2 10zm9.5 0a.5.5 0 0 1 .5-.5h1.5a.5.5 0 0 1 0 1H12a.5.5 0 0 1-.5-.5zm.754-4.246a.389.389 0 0 0-.527-.02L7.547 9.31a.91.91 0 1 0 1.302 1.258l3.434-4.297a.389.389 0 0 0-.029-.518z"/>
|
||||||
</div>
|
<path fill-rule="evenodd" d="M0 10a8 8 0 1 1 15.547 2.661c-.442 1.253-1.845 1.602-2.932 1.25C11.309 13.488 9.475 13 8 13c-1.474 0-3.31.488-4.615.911-1.087.352-2.49.003-2.932-1.25A7.988 7.988 0 0 1 0 10zm8-7a7 7 0 0 0-6.603 9.329c.203.575.923.876 1.68.63C4.397 12.533 6.358 12 8 12s3.604.532 4.923.96c.757.245 1.477-.056 1.68-.631A7 7 0 0 0 8 3z"/>
|
||||||
<form method="post" action="{{ url_for('ui.logout') }}">
|
</svg>
|
||||||
<input type="hidden" name="csrf_token" value="{{ csrf_token() }}" />
|
<span>Metrics</span>
|
||||||
<button class="btn btn-outline-light btn-sm" type="submit">Sign out</button>
|
</a>
|
||||||
</form>
|
|
||||||
{% endif %}
|
{% endif %}
|
||||||
</div>
|
</div>
|
||||||
|
<div class="nav-section">
|
||||||
|
<span class="nav-section-title">Resources</span>
|
||||||
|
<a href="{{ url_for('ui.docs_page') }}" class="sidebar-link {% if request.endpoint == 'ui.docs_page' %}active{% endif %}">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" viewBox="0 0 16 16">
|
||||||
|
<path d="M1 2.828c.885-.37 2.154-.769 3.388-.893 1.33-.134 2.458.063 3.112.752v9.746c-.935-.53-2.12-.603-3.213-.493-1.18.12-2.37.461-3.287.811V2.828zm7.5-.141c.654-.689 1.782-.886 3.112-.752 1.234.124 2.503.523 3.388.893v9.923c-.918-.35-2.107-.692-3.287-.81-1.094-.111-2.278-.039-3.213.492V2.687zM8 1.783C7.015.936 5.587.81 4.287.94c-1.514.153-3.042.672-3.994 1.105A.5.5 0 0 0 0 2.5v11a.5.5 0 0 0 .707.455c.882-.4 2.303-.881 3.68-1.02 1.409-.142 2.59.087 3.223.877a.5.5 0 0 0 .78 0c.633-.79 1.814-1.019 3.222-.877 1.378.139 2.8.62 3.681 1.02A.5.5 0 0 0 16 13.5v-11a.5.5 0 0 0-.293-.455c-.952-.433-2.48-.952-3.994-1.105C10.413.809 8.985.936 8 1.783z"/>
|
||||||
|
</svg>
|
||||||
|
<span>Documentation</span>
|
||||||
|
</a>
|
||||||
|
</div>
|
||||||
|
{% endif %}
|
||||||
|
</nav>
|
||||||
|
{% if principal %}
|
||||||
|
<div class="sidebar-footer">
|
||||||
|
<div class="sidebar-user">
|
||||||
|
<div class="user-avatar">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" fill="currentColor" viewBox="0 0 16 16">
|
||||||
|
<path d="M11 6a3 3 0 1 1-6 0 3 3 0 0 1 6 0z"/>
|
||||||
|
<path fill-rule="evenodd" d="M0 8a8 8 0 1 1 16 0A8 8 0 0 1 0 8zm8-7a7 7 0 0 0-5.468 11.37C3.242 11.226 4.805 10 8 10s4.757 1.225 5.468 2.37A7 7 0 0 0 8 1z"/>
|
||||||
|
</svg>
|
||||||
|
</div>
|
||||||
|
<div class="user-info">
|
||||||
|
<div class="user-name" title="{{ principal.display_name }}">{{ principal.display_name | truncate(16, true) }}</div>
|
||||||
|
<div class="user-key">{{ principal.access_key | truncate(12, true) }}</div>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
<form method="post" action="{{ url_for('ui.logout') }}" class="w-100">
|
||||||
|
<input type="hidden" name="csrf_token" value="{{ csrf_token() }}" />
|
||||||
|
<button class="sidebar-logout-btn" type="submit">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" fill="currentColor" viewBox="0 0 16 16">
|
||||||
|
<path fill-rule="evenodd" d="M10 12.5a.5.5 0 0 1-.5.5h-8a.5.5 0 0 1-.5-.5v-9a.5.5 0 0 1 .5-.5h8a.5.5 0 0 1 .5.5v2a.5.5 0 0 0 1 0v-2A1.5 1.5 0 0 0 9.5 2h-8A1.5 1.5 0 0 0 0 3.5v9A1.5 1.5 0 0 0 1.5 14h8a1.5 1.5 0 0 0 1.5-1.5v-2a.5.5 0 0 0-1 0v2z"/>
|
||||||
|
<path fill-rule="evenodd" d="M15.854 8.354a.5.5 0 0 0 0-.708l-3-3a.5.5 0 0 0-.708.708L14.293 7.5H5.5a.5.5 0 0 0 0 1h8.793l-2.147 2.146a.5.5 0 0 0 .708.708l3-3z"/>
|
||||||
|
</svg>
|
||||||
|
<span>Sign out</span>
|
||||||
|
</button>
|
||||||
|
</form>
|
||||||
</div>
|
</div>
|
||||||
|
{% endif %}
|
||||||
</div>
|
</div>
|
||||||
</nav>
|
</div>
|
||||||
<main class="container py-4">
|
|
||||||
{% block content %}{% endblock %}
|
<aside class="sidebar d-none d-lg-flex" id="desktopSidebar">
|
||||||
</main>
|
<div class="sidebar-header">
|
||||||
|
<div class="sidebar-brand" id="sidebarBrand">
|
||||||
|
<img src="{{ url_for('static', filename='images/MyFSIO.png') }}" alt="MyFSIO logo" class="sidebar-logo" width="36" height="36" />
|
||||||
|
<span class="sidebar-title">MyFSIO</span>
|
||||||
|
</div>
|
||||||
|
<button class="sidebar-collapse-btn" type="button" id="sidebarCollapseBtn" aria-label="Collapse sidebar">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" fill="currentColor" viewBox="0 0 16 16">
|
||||||
|
<path fill-rule="evenodd" d="M11.354 1.646a.5.5 0 0 1 0 .708L5.707 8l5.647 5.646a.5.5 0 0 1-.708.708l-6-6a.5.5 0 0 1 0-.708l6-6a.5.5 0 0 1 .708 0z"/>
|
||||||
|
</svg>
|
||||||
|
</button>
|
||||||
|
</div>
|
||||||
|
<div class="sidebar-body">
|
||||||
|
<nav class="sidebar-nav">
|
||||||
|
{% if principal %}
|
||||||
|
<div class="nav-section">
|
||||||
|
<span class="nav-section-title">Navigation</span>
|
||||||
|
<a href="{{ url_for('ui.buckets_overview') }}" class="sidebar-link {% if request.endpoint == 'ui.buckets_overview' or request.endpoint == 'ui.bucket_detail' %}active{% endif %}" data-tooltip="Buckets">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" viewBox="0 0 16 16">
|
||||||
|
<path d="M2.522 5H2a.5.5 0 0 0-.494.574l1.372 9.149A1.5 1.5 0 0 0 4.36 16h7.278a1.5 1.5 0 0 0 1.483-1.277l1.373-9.149A.5.5 0 0 0 14 5h-.522A5.5 5.5 0 0 0 2.522 5zm1.005 0a4.5 4.5 0 0 1 8.945 0H3.527z"/>
|
||||||
|
</svg>
|
||||||
|
<span class="sidebar-link-text">Buckets</span>
|
||||||
|
</a>
|
||||||
|
{% if can_manage_iam %}
|
||||||
|
<a href="{{ url_for('ui.iam_dashboard') }}" class="sidebar-link {% if request.endpoint == 'ui.iam_dashboard' %}active{% endif %}" data-tooltip="IAM">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" viewBox="0 0 16 16">
|
||||||
|
<path d="M15 14s1 0 1-1-1-4-5-4-5 3-5 4 1 1 1 1h8zm-7.978-1A.261.261 0 0 1 7 12.996c.001-.264.167-1.03.76-1.72C8.312 10.629 9.282 10 11 10c1.717 0 2.687.63 3.24 1.276.593.69.758 1.457.76 1.72l-.008.002a.274.274 0 0 1-.014.002H7.022zM11 7a2 2 0 1 0 0-4 2 2 0 0 0 0 4zm3-2a3 3 0 1 1-6 0 3 3 0 0 1 6 0zM6.936 9.28a5.88 5.88 0 0 0-1.23-.247A7.35 7.35 0 0 0 5 9c-4 0-5 3-5 4 0 .667.333 1 1 1h4.216A2.238 2.238 0 0 1 5 13c0-1.01.377-2.042 1.09-2.904.243-.294.526-.569.846-.816zM4.92 10A5.493 5.493 0 0 0 4 13H1c0-.26.164-1.03.76-1.724.545-.636 1.492-1.256 3.16-1.275zM1.5 5.5a3 3 0 1 1 6 0 3 3 0 0 1-6 0zm3-2a2 2 0 1 0 0 4 2 2 0 0 0 0-4z"/>
|
||||||
|
</svg>
|
||||||
|
<span class="sidebar-link-text">IAM</span>
|
||||||
|
</a>
|
||||||
|
<a href="{{ url_for('ui.connections_dashboard') }}" class="sidebar-link {% if request.endpoint == 'ui.connections_dashboard' %}active{% endif %}" data-tooltip="Connections">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" viewBox="0 0 16 16">
|
||||||
|
<path fill-rule="evenodd" d="M6 3.5A1.5 1.5 0 0 1 7.5 2h1A1.5 1.5 0 0 1 10 3.5v1A1.5 1.5 0 0 1 8.5 6v1H14a.5.5 0 0 1 .5.5v1a.5.5 0 0 1-1 0V8h-5v.5a.5.5 0 0 1-1 0V8h-5v.5a.5.5 0 0 1-1 0v-1A.5.5 0 0 1 2 7h5.5V6A1.5 1.5 0 0 1 6 4.5v-1zM8.5 5a.5.5 0 0 0 .5-.5v-1a.5.5 0 0 0-.5-.5h-1a.5.5 0 0 0-.5.5v1a.5.5 0 0 0 .5.5h1zM0 11.5A1.5 1.5 0 0 1 1.5 10h1A1.5 1.5 0 0 1 4 11.5v1A1.5 1.5 0 0 1 2.5 14h-1A1.5 1.5 0 0 1 0 12.5v-1zm1.5-.5a.5.5 0 0 0-.5.5v1a.5.5 0 0 0 .5.5h1a.5.5 0 0 0 .5-.5v-1a.5.5 0 0 0-.5-.5h-1zm4.5.5A1.5 1.5 0 0 1 7.5 10h1a1.5 1.5 0 0 1 1.5 1.5v1A1.5 1.5 0 0 1 8.5 14h-1A1.5 1.5 0 0 1 6 12.5v-1zm1.5-.5a.5.5 0 0 0-.5.5v1a.5.5 0 0 0 .5.5h1a.5.5 0 0 0 .5-.5v-1a.5.5 0 0 0-.5-.5h-1zm4.5.5a1.5 1.5 0 0 1 1.5-1.5h1a1.5 1.5 0 0 1 1.5 1.5v1a1.5 1.5 0 0 1-1.5 1.5h-1a1.5 1.5 0 0 1-1.5-1.5v-1zm1.5-.5a.5.5 0 0 0-.5.5v1a.5.5 0 0 0 .5.5h1a.5.5 0 0 0 .5-.5v-1a.5.5 0 0 0-.5-.5h-1z"/>
|
||||||
|
</svg>
|
||||||
|
<span class="sidebar-link-text">Connections</span>
|
||||||
|
</a>
|
||||||
|
<a href="{{ url_for('ui.metrics_dashboard') }}" class="sidebar-link {% if request.endpoint == 'ui.metrics_dashboard' %}active{% endif %}" data-tooltip="Metrics">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" viewBox="0 0 16 16">
|
||||||
|
<path d="M8 4a.5.5 0 0 1 .5.5V6a.5.5 0 0 1-1 0V4.5A.5.5 0 0 1 8 4zM3.732 5.732a.5.5 0 0 1 .707 0l.915.914a.5.5 0 1 1-.708.708l-.914-.915a.5.5 0 0 1 0-.707zM2 10a.5.5 0 0 1 .5-.5h1.586a.5.5 0 0 1 0 1H2.5A.5.5 0 0 1 2 10zm9.5 0a.5.5 0 0 1 .5-.5h1.5a.5.5 0 0 1 0 1H12a.5.5 0 0 1-.5-.5zm.754-4.246a.389.389 0 0 0-.527-.02L7.547 9.31a.91.91 0 1 0 1.302 1.258l3.434-4.297a.389.389 0 0 0-.029-.518z"/>
|
||||||
|
<path fill-rule="evenodd" d="M0 10a8 8 0 1 1 15.547 2.661c-.442 1.253-1.845 1.602-2.932 1.25C11.309 13.488 9.475 13 8 13c-1.474 0-3.31.488-4.615.911-1.087.352-2.49.003-2.932-1.25A7.988 7.988 0 0 1 0 10zm8-7a7 7 0 0 0-6.603 9.329c.203.575.923.876 1.68.63C4.397 12.533 6.358 12 8 12s3.604.532 4.923.96c.757.245 1.477-.056 1.68-.631A7 7 0 0 0 8 3z"/>
|
||||||
|
</svg>
|
||||||
|
<span class="sidebar-link-text">Metrics</span>
|
||||||
|
</a>
|
||||||
|
{% endif %}
|
||||||
|
</div>
|
||||||
|
<div class="nav-section">
|
||||||
|
<span class="nav-section-title">Resources</span>
|
||||||
|
<a href="{{ url_for('ui.docs_page') }}" class="sidebar-link {% if request.endpoint == 'ui.docs_page' %}active{% endif %}" data-tooltip="Documentation">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" viewBox="0 0 16 16">
|
||||||
|
<path d="M1 2.828c.885-.37 2.154-.769 3.388-.893 1.33-.134 2.458.063 3.112.752v9.746c-.935-.53-2.12-.603-3.213-.493-1.18.12-2.37.461-3.287.811V2.828zm7.5-.141c.654-.689 1.782-.886 3.112-.752 1.234.124 2.503.523 3.388.893v9.923c-.918-.35-2.107-.692-3.287-.81-1.094-.111-2.278-.039-3.213.492V2.687zM8 1.783C7.015.936 5.587.81 4.287.94c-1.514.153-3.042.672-3.994 1.105A.5.5 0 0 0 0 2.5v11a.5.5 0 0 0 .707.455c.882-.4 2.303-.881 3.68-1.02 1.409-.142 2.59.087 3.223.877a.5.5 0 0 0 .78 0c.633-.79 1.814-1.019 3.222-.877 1.378.139 2.8.62 3.681 1.02A.5.5 0 0 0 16 13.5v-11a.5.5 0 0 0-.293-.455c-.952-.433-2.48-.952-3.994-1.105C10.413.809 8.985.936 8 1.783z"/>
|
||||||
|
</svg>
|
||||||
|
<span class="sidebar-link-text">Documentation</span>
|
||||||
|
</a>
|
||||||
|
</div>
|
||||||
|
{% endif %}
|
||||||
|
</nav>
|
||||||
|
</div>
|
||||||
|
<div class="sidebar-footer">
|
||||||
|
<button class="theme-toggle-sidebar" type="button" id="themeToggle" aria-label="Toggle dark mode">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" fill="currentColor" class="theme-icon" id="themeToggleSun" viewBox="0 0 16 16">
|
||||||
|
<path d="M8 11.5a3.5 3.5 0 1 1 0-7 3.5 3.5 0 0 1 0 7zm0 1.5a5 5 0 1 0 0-10 5 5 0 0 0 0 10zM8 0a.5.5 0 0 1 .5.5v1.555a.5.5 0 0 1-1 0V.5A.5.5 0 0 1 8 0zm0 12.945a.5.5 0 0 1 .5.5v2.055a.5.5 0 0 1-1 0v-2.055a.5.5 0 0 1 .5-.5zM2.343 2.343a.5.5 0 0 1 .707 0l1.1 1.1a.5.5 0 1 1-.708.707l-1.1-1.1a.5.5 0 0 1 0-.707zm9.507 9.507a.5.5 0 0 1 .707 0l1.1 1.1a.5.5 0 1 1-.707.708l-1.1-1.1a.5.5 0 0 1 0-.708zM0 8a.5.5 0 0 1 .5-.5h1.555a.5.5 0 0 1 0 1H.5A.5.5 0 0 1 0 8zm12.945 0a.5.5 0 0 1 .5-.5H15.5a.5.5 0 0 1 0 1h-2.055a.5.5 0 0 1-.5-.5zM2.343 13.657a.5.5 0 0 1 0-.707l1.1-1.1a.5.5 0 1 1 .708.707l-1.1 1.1a.5.5 0 0 1-.708 0zm9.507-9.507a.5.5 0 0 1 0-.707l1.1-1.1a.5.5 0 0 1 .707.708l-1.1 1.1a.5.5 0 0 1-.707 0z"/>
|
||||||
|
</svg>
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" fill="currentColor" class="theme-icon" id="themeToggleMoon" viewBox="0 0 16 16">
|
||||||
|
<path d="M6 .278a.768.768 0 0 1 .08.858 7.208 7.208 0 0 0-.878 3.46c0 4.021 3.278 7.277 7.318 7.277.527 0 1.04-.055 1.533-.16a.787.787 0 0 1 .81.316.733.733 0 0 1-.031.893A8.349 8.349 0 0 1 8.344 16C3.734 16 0 12.286 0 7.71 0 4.266 2.114 1.312 5.124.06A.752.752 0 0 1 6 .278z"/>
|
||||||
|
<path d="M10.794 3.148a.217.217 0 0 1 .412 0l.387 1.162c.173.518.579.924 1.097 1.097l1.162.387a.217.217 0 0 1 0 .412l-1.162.387a1.734 1.734 0 0 0-1.097 1.097l-.387 1.162a.217.217 0 0 1-.412 0l-.387-1.162A1.734 1.734 0 0 0 9.31 6.593l-1.162-.387a.217.217 0 0 1 0-.412l1.162-.387a1.734 1.734 0 0 0 1.097-1.097l.387-1.162zM13.863.099a.145.145 0 0 1 .274 0l.258.774c.115.346.386.617.732.732l.774.258a.145.145 0 0 1 0 .274l-.774.258a1.156 1.156 0 0 0-.732.732l-.258.774a.145.145 0 0 1-.274 0l-.258-.774a1.156 1.156 0 0 0-.732-.732l-.774-.258a.145.145 0 0 1 0-.274l.774-.258c.346-.115.617-.386.732-.732L13.863.1z"/>
|
||||||
|
</svg>
|
||||||
|
<span class="theme-toggle-text">Toggle theme</span>
|
||||||
|
</button>
|
||||||
|
{% if principal %}
|
||||||
|
<div class="sidebar-user" data-username="{{ principal.display_name }}">
|
||||||
|
<div class="user-avatar">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" fill="currentColor" viewBox="0 0 16 16">
|
||||||
|
<path d="M11 6a3 3 0 1 1-6 0 3 3 0 0 1 6 0z"/>
|
||||||
|
<path fill-rule="evenodd" d="M0 8a8 8 0 1 1 16 0A8 8 0 0 1 0 8zm8-7a7 7 0 0 0-5.468 11.37C3.242 11.226 4.805 10 8 10s4.757 1.225 5.468 2.37A7 7 0 0 0 8 1z"/>
|
||||||
|
</svg>
|
||||||
|
</div>
|
||||||
|
<div class="user-info">
|
||||||
|
<div class="user-name" title="{{ principal.display_name }}">{{ principal.display_name | truncate(16, true) }}</div>
|
||||||
|
<div class="user-key">{{ principal.access_key | truncate(12, true) }}</div>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
<form method="post" action="{{ url_for('ui.logout') }}" class="w-100">
|
||||||
|
<input type="hidden" name="csrf_token" value="{{ csrf_token() }}" />
|
||||||
|
<button class="sidebar-logout-btn" type="submit">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" fill="currentColor" viewBox="0 0 16 16">
|
||||||
|
<path fill-rule="evenodd" d="M10 12.5a.5.5 0 0 1-.5.5h-8a.5.5 0 0 1-.5-.5v-9a.5.5 0 0 1 .5-.5h8a.5.5 0 0 1 .5.5v2a.5.5 0 0 0 1 0v-2A1.5 1.5 0 0 0 9.5 2h-8A1.5 1.5 0 0 0 0 3.5v9A1.5 1.5 0 0 0 1.5 14h8a1.5 1.5 0 0 0 1.5-1.5v-2a.5.5 0 0 0-1 0v2z"/>
|
||||||
|
<path fill-rule="evenodd" d="M15.854 8.354a.5.5 0 0 0 0-.708l-3-3a.5.5 0 0 0-.708.708L14.293 7.5H5.5a.5.5 0 0 0 0 1h8.793l-2.147 2.146a.5.5 0 0 0 .708.708l3-3z"/>
|
||||||
|
</svg>
|
||||||
|
<span class="logout-text">Sign out</span>
|
||||||
|
</button>
|
||||||
|
</form>
|
||||||
|
{% endif %}
|
||||||
|
</div>
|
||||||
|
</aside>
|
||||||
|
|
||||||
|
<div class="main-wrapper">
|
||||||
|
<main class="main-content">
|
||||||
|
{% block content %}{% endblock %}
|
||||||
|
</main>
|
||||||
|
</div>
|
||||||
<div class="toast-container position-fixed bottom-0 end-0 p-3">
|
<div class="toast-container position-fixed bottom-0 end-0 p-3">
|
||||||
<div id="liveToast" class="toast" role="alert" aria-live="assertive" aria-atomic="true">
|
<div id="liveToast" class="toast" role="alert" aria-live="assertive" aria-atomic="true">
|
||||||
<div class="toast-header">
|
<div class="toast-header">
|
||||||
@@ -162,9 +275,11 @@
     (function () {
       const storageKey = 'myfsio-theme';
      const toggle = document.getElementById('themeToggle');
-      const label = document.getElementById('themeToggleLabel');
+      const toggleMobile = document.getElementById('themeToggleMobile');
       const sunIcon = document.getElementById('themeToggleSun');
       const moonIcon = document.getElementById('themeToggleMoon');
+      const sunIconMobile = document.getElementById('themeToggleSunMobile');
+      const moonIconMobile = document.getElementById('themeToggleMoonMobile');
+
       const applyTheme = (theme) => {
         document.documentElement.dataset.bsTheme = theme;
@@ -172,34 +287,79 @@
         try {
           localStorage.setItem(storageKey, theme);
         } catch (err) {
-          /* localStorage unavailable */
-        }
-        if (label) {
-          label.textContent = theme === 'dark' ? 'Switch to light mode' : 'Switch to dark mode';
-        }
-        if (toggle) {
-          toggle.setAttribute('aria-pressed', theme === 'dark' ? 'true' : 'false');
-          toggle.setAttribute('title', theme === 'dark' ? 'Switch to light mode' : 'Switch to dark mode');
-          toggle.setAttribute('aria-label', theme === 'dark' ? 'Switch to light mode' : 'Switch to dark mode');
+          console.log("Error: local storage not available, cannot save theme preference.");
         }
+        const isDark = theme === 'dark';
         if (sunIcon && moonIcon) {
-          const isDark = theme === 'dark';
           sunIcon.classList.toggle('d-none', !isDark);
           moonIcon.classList.toggle('d-none', isDark);
         }
+        if (sunIconMobile && moonIconMobile) {
+          sunIconMobile.classList.toggle('d-none', !isDark);
+          moonIconMobile.classList.toggle('d-none', isDark);
+        }
+        [toggle, toggleMobile].forEach(btn => {
+          if (btn) {
+            btn.setAttribute('aria-pressed', isDark ? 'true' : 'false');
+            btn.setAttribute('title', isDark ? 'Switch to light mode' : 'Switch to dark mode');
+            btn.setAttribute('aria-label', isDark ? 'Switch to light mode' : 'Switch to dark mode');
+          }
+        });
       };

       const current = document.documentElement.dataset.bsTheme || 'light';
       applyTheme(current);

-      toggle?.addEventListener('click', () => {
+      const handleToggle = () => {
         const next = document.documentElement.dataset.bsTheme === 'dark' ? 'light' : 'dark';
         applyTheme(next);
+      };
+
+      toggle?.addEventListener('click', handleToggle);
+      toggleMobile?.addEventListener('click', handleToggle);
+    })();
+  </script>
+  <script>
+    (function () {
+      const sidebar = document.getElementById('desktopSidebar');
+      const collapseBtn = document.getElementById('sidebarCollapseBtn');
+      const sidebarBrand = document.getElementById('sidebarBrand');
+      const storageKey = 'myfsio-sidebar-collapsed';
+
+      if (!sidebar || !collapseBtn) return;
+
+      const applyCollapsed = (collapsed) => {
+        sidebar.classList.toggle('sidebar-collapsed', collapsed);
+        document.body.classList.toggle('sidebar-is-collapsed', collapsed);
+        document.documentElement.classList.remove('sidebar-will-collapse');
+        try {
+          localStorage.setItem(storageKey, collapsed ? 'true' : 'false');
+        } catch (err) {}
+      };
+
+      try {
+        const stored = localStorage.getItem(storageKey);
+        applyCollapsed(stored === 'true');
+      } catch (err) {
+        document.documentElement.classList.remove('sidebar-will-collapse');
+      }
+
+      collapseBtn.addEventListener('click', () => {
+        const isCollapsed = sidebar.classList.contains('sidebar-collapsed');
+        applyCollapsed(!isCollapsed);
+      });
+
+      sidebarBrand?.addEventListener('click', (e) => {
+        const isCollapsed = sidebar.classList.contains('sidebar-collapsed');
+        if (isCollapsed) {
+          e.preventDefault();
+          applyCollapsed(false);
+        }
       });
     })();
   </script>
   <script>
-    // Toast utility
     window.showToast = function(message, title = 'Notification', type = 'info') {
       const toastEl = document.getElementById('liveToast');
       const toastTitle = document.getElementById('toastTitle');
@@ -207,8 +367,7 @@

       toastTitle.textContent = title;
       toastMessage.textContent = message;

-      // Reset classes
       toastEl.classList.remove('text-bg-primary', 'text-bg-success', 'text-bg-danger', 'text-bg-warning');

       if (type === 'success') toastEl.classList.add('text-bg-success');
@@ -221,13 +380,11 @@
   </script>
   <script>
     (function () {
-      // Show flashed messages as toasts
       {% with messages = get_flashed_messages(with_categories=true) %}
        {% if messages %}
          {% for category, message in messages %}
-            // Map Flask categories to Toast types
-            // Flask: success, danger, warning, info
-            // Toast: success, error, warning, info
            var type = "{{ category }}";
            if (type === "danger") type = "error";
            window.showToast({{ message | tojson | safe }}, "Notification", type);
@@ -236,6 +393,8 @@
       {% endwith %}
     })();
   </script>
+  <script src="{{ url_for('static', filename='js/ui-core.js') }}"></script>
   {% block extra_scripts %}{% endblock %}

 </body>
 </html>
File diff suppressed because it is too large
@@ -46,13 +46,12 @@
           <div class="d-flex align-items-center gap-3">
             <div class="bucket-icon">
               <svg xmlns="http://www.w3.org/2000/svg" width="22" height="22" fill="currentColor" viewBox="0 0 16 16">
-                <path d="M4.5 5a.5.5 0 1 0 0-1 .5.5 0 0 0 0 1zM3 4.5a.5.5 0 1 1-1 0 .5.5 0 0 1 1 0z"/>
-                <path d="M0 4a2 2 0 0 1 2-2h12a2 2 0 0 1 2 2v1a2 2 0 0 1-2 2H8.5v3a1.5 1.5 0 0 1 1.5 1.5H11a.5.5 0 0 1 0 1h-1v1h1a.5.5 0 0 1 0 1h-1v1a.5.5 0 0 1-1 0v-1H6v1a.5.5 0 0 1-1 0v-1H4a.5.5 0 0 1 0-1h1v-1H4a.5.5 0 0 1 0-1h1.5A1.5 1.5 0 0 1 7 10.5V7H2a2 2 0 0 1-2-2V4zm1 0v1a1 1 0 0 0 1 1h12a1 1 0 0 0 1-1V4a1 1 0 0 0-1-1H2a1 1 0 0 0-1 1zm5 7.5v1h3v-1a.5.5 0 0 0-.5-.5h-2a.5.5 0 0 0-.5.5z"/>
+                <path d="M2.522 5H2a.5.5 0 0 0-.494.574l1.372 9.149A1.5 1.5 0 0 0 4.36 16h7.278a1.5 1.5 0 0 0 1.483-1.277l1.373-9.149A.5.5 0 0 0 14 5h-.522A5.5 5.5 0 0 0 2.522 5zm1.005 0a4.5 4.5 0 0 1 8.945 0H3.527z"/>
               </svg>
             </div>
             <div>
               <h5 class="bucket-name text-break">{{ bucket.meta.name }}</h5>
-              <small class="text-muted">Created {{ bucket.meta.created_at.strftime('%b %d, %Y') }}</small>
+              <small class="text-muted">Created {{ bucket.meta.created_at | format_datetime }}</small>
             </div>
           </div>
           <span class="badge {{ bucket.access_badge }} bucket-access-badge">{{ bucket.access_label }}</span>
@@ -105,7 +104,7 @@
         </h1>
         <button type="button" class="btn-close" data-bs-dismiss="modal" aria-label="Close"></button>
       </div>
-      <form method="post" action="{{ url_for('ui.create_bucket') }}">
+      <form method="post" action="{{ url_for('ui.create_bucket') }}" id="createBucketForm">
         <input type="hidden" name="csrf_token" value="{{ csrf_token() }}" />
         <div class="modal-body pt-0">
           <label class="form-label fw-medium">Bucket name</label>
@@ -131,10 +130,10 @@
   {{ super() }}
   <script>
     (function () {
-      // Search functionality
       const searchInput = document.getElementById('bucket-search');
       const bucketItems = document.querySelectorAll('.bucket-item');
-      const noBucketsMsg = document.querySelector('.text-center.py-5'); // The "No buckets found" empty state
+      const noBucketsMsg = document.querySelector('.text-center.py-5');

       if (searchInput) {
         searchInput.addEventListener('input', (e) => {
@@ -153,7 +152,6 @@
         });
       }

-      // View toggle functionality
       const viewGrid = document.getElementById('view-grid');
       const viewList = document.getElementById('view-list');
       const container = document.getElementById('buckets-container');
@@ -168,8 +166,7 @@
           });
           cards.forEach(card => {
             card.classList.remove('h-100');
-            // Optional: Add flex-row to card-body content if we want a horizontal layout
-            // For now, full-width stacked cards is a good list view
           });
           localStorage.setItem('bucket-view-pref', 'list');
         } else {
@@ -188,7 +185,6 @@
       viewGrid.addEventListener('change', () => setView('grid'));
       viewList.addEventListener('change', () => setView('list'));

-      // Restore preference
       const pref = localStorage.getItem('bucket-view-pref');
       if (pref === 'list') {
         viewList.checked = true;
@@ -209,6 +205,25 @@
|
|||||||
});
|
});
|
||||||
row.style.cursor = 'pointer';
|
row.style.cursor = 'pointer';
|
||||||
});
|
});
|
||||||
|
|
||||||
|
var createForm = document.getElementById('createBucketForm');
|
||||||
|
if (createForm) {
|
||||||
|
createForm.addEventListener('submit', function(e) {
|
||||||
|
e.preventDefault();
|
||||||
|
window.UICore.submitFormAjax(createForm, {
|
||||||
|
successMessage: 'Bucket created',
|
||||||
|
onSuccess: function(data) {
|
||||||
|
var modal = bootstrap.Modal.getInstance(document.getElementById('createBucketModal'));
|
||||||
|
if (modal) modal.hide();
|
||||||
|
if (data.bucket_name) {
|
||||||
|
window.location.href = '{{ url_for("ui.bucket_detail", bucket_name="__BUCKET__") }}'.replace('__BUCKET__', data.bucket_name);
|
||||||
|
} else {
|
||||||
|
location.reload();
|
||||||
|
}
|
||||||
|
}
|
||||||
|
});
|
||||||
|
});
|
||||||
|
}
|
||||||
})();
|
})();
|
||||||
</script>
|
</script>
|
||||||
{% endblock %}
|
{% endblock %}
|
||||||
|
|||||||
@@ -8,8 +8,8 @@
<p class="text-uppercase text-muted small mb-1">Replication</p>
<h1 class="h3 mb-1 d-flex align-items-center gap-2">
<svg xmlns="http://www.w3.org/2000/svg" width="28" height="28" fill="currentColor" class="text-primary" viewBox="0 0 16 16">
-<path d="M4.5 5a.5.5 0 1 0 0-1 .5.5 0 0 0 0 1zM3 4.5a.5.5 0 1 1-1 0 .5.5 0 0 1 1 0z"/>
-<path d="M0 4a2 2 0 0 1 2-2h12a2 2 0 0 1 2 2v1a2 2 0 0 1-2 2H8.5v3a1.5 1.5 0 0 1 1.5 1.5H12a.5.5 0 0 1 0 1H4a.5.5 0 0 1 0-1h2A1.5 1.5 0 0 1 7.5 10V7H2a2 2 0 0 1-2-2V4zm1 0v1a1 1 0 0 0 1 1h12a1 1 0 0 0 1-1V4a1 1 0 0 0-1-1H2a1 1 0 0 0-1 1z"/>
+<path d="M4.406 3.342A5.53 5.53 0 0 1 8 2c2.69 0 4.923 2 5.166 4.579C14.758 6.804 16 8.137 16 9.773 16 11.569 14.502 13 12.687 13H3.781C1.708 13 0 11.366 0 9.318c0-1.763 1.266-3.223 2.942-3.593.143-.863.698-1.723 1.464-2.383z"/>
+<path d="M10.232 8.768l.546-.353a.25.25 0 0 0 0-.418l-.546-.354a.25.25 0 0 1-.116-.21V6.25a.25.25 0 0 0-.25-.25h-.5a.25.25 0 0 0-.25.25v1.183a.25.25 0 0 1-.116.21l-.546.354a.25.25 0 0 0 0 .418l.546.353a.25.25 0 0 1 .116.21v1.183a.25.25 0 0 0 .25.25h.5a.25.25 0 0 0 .25-.25V8.978a.25.25 0 0 1 .116-.21z"/>
</svg>
Remote Connections
</h1>
@@ -57,7 +57,7 @@
<label for="secret_key" class="form-label fw-medium">Secret Key</label>
<div class="input-group">
<input type="password" class="form-control font-monospace" id="secret_key" name="secret_key" required>
-<button class="btn btn-outline-secondary" type="button" onclick="togglePassword('secret_key')" title="Toggle visibility">
+<button class="btn btn-outline-secondary" type="button" onclick="ConnectionsManagement.togglePassword('secret_key')" title="Toggle visibility">
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" viewBox="0 0 16 16">
<path d="M16 8s-3-5.5-8-5.5S0 8 0 8s3 5.5 8 5.5S16 8 16 8zM1.173 8a13.133 13.133 0 0 1 1.66-2.043C4.12 4.668 5.88 3.5 8 3.5c2.12 0 3.879 1.168 5.168 2.457A13.133 13.133 0 0 1 14.828 8c-.058.087-.122.183-.195.288-.335.48-.83 1.12-1.465 1.755C11.879 11.332 10.119 12.5 8 12.5c-2.12 0-3.879-1.168-5.168-2.457A13.134 13.134 0 0 1 1.172 8z"/>
<path d="M8 5.5a2.5 2.5 0 1 0 0 5 2.5 2.5 0 0 0 0-5zM4.5 8a3.5 3.5 0 1 1 7 0 3.5 3.5 0 0 1-7 0z"/>
@@ -104,6 +104,7 @@
<table class="table table-hover align-middle mb-0">
<thead class="table-light">
<tr>
+<th scope="col" style="width: 50px;">Status</th>
<th scope="col">Name</th>
<th scope="col">Endpoint</th>
<th scope="col">Region</th>
@@ -113,13 +114,17 @@
</thead>
<tbody>
{% for conn in connections %}
-<tr>
+<tr data-connection-id="{{ conn.id }}">
+<td class="text-center">
+<span class="connection-status" data-status="checking" title="Checking...">
+<span class="spinner-border spinner-border-sm text-muted" role="status" style="width: 12px; height: 12px;"></span>
+</span>
+</td>
<td>
<div class="d-flex align-items-center gap-2">
<div class="connection-icon">
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" viewBox="0 0 16 16">
-<path d="M4.5 5a.5.5 0 1 0 0-1 .5.5 0 0 0 0 1zM3 4.5a.5.5 0 1 1-1 0 .5.5 0 0 1 1 0z"/>
-<path d="M0 4a2 2 0 0 1 2-2h12a2 2 0 0 1 2 2v1a2 2 0 0 1-2 2H8.5v3a1.5 1.5 0 0 1 1.5 1.5H12a.5.5 0 0 1 0 1H4a.5.5 0 0 1 0-1h2A1.5 1.5 0 0 1 7.5 10V7H2a2 2 0 0 1-2-2V4zm1 0v1a1 1 0 0 0 1 1h12a1 1 0 0 0 1-1V4a1 1 0 0 0-1-1H2a1 1 0 0 0-1 1z"/>
+<path d="M4.406 3.342A5.53 5.53 0 0 1 8 2c2.69 0 4.923 2 5.166 4.579C14.758 6.804 16 8.137 16 9.773 16 11.569 14.502 13 12.687 13H3.781C1.708 13 0 11.366 0 9.318c0-1.763 1.266-3.223 2.942-3.593.143-.863.698-1.723 1.464-2.383z"/>
</svg>
</div>
<span class="fw-medium">{{ conn.name }}</span>
@@ -168,8 +173,7 @@
<div class="empty-state text-center py-5">
<div class="empty-state-icon mx-auto mb-3">
<svg xmlns="http://www.w3.org/2000/svg" width="48" height="48" fill="currentColor" viewBox="0 0 16 16">
-<path d="M4.5 5a.5.5 0 1 0 0-1 .5.5 0 0 0 0 1zM3 4.5a.5.5 0 1 1-1 0 .5.5 0 0 1 1 0z"/>
-<path d="M0 4a2 2 0 0 1 2-2h12a2 2 0 0 1 2 2v1a2 2 0 0 1-2 2H8.5v3a1.5 1.5 0 0 1 1.5 1.5H12a.5.5 0 0 1 0 1H4a.5.5 0 0 1 0-1h2A1.5 1.5 0 0 1 7.5 10V7H2a2 2 0 0 1-2-2V4zm1 0v1a1 1 0 0 0 1 1h12a1 1 0 0 0 1-1V4a1 1 0 0 0-1-1H2a1 1 0 0 0-1 1z"/>
+<path d="M4.406 3.342A5.53 5.53 0 0 1 8 2c2.69 0 4.923 2 5.166 4.579C14.758 6.804 16 8.137 16 9.773 16 11.569 14.502 13 12.687 13H3.781C1.708 13 0 11.366 0 9.318c0-1.763 1.266-3.223 2.942-3.593.143-.863.698-1.723 1.464-2.383z"/>
</svg>
</div>
<h5 class="fw-semibold mb-2">No connections yet</h5>
@@ -181,7 +185,6 @@
</div>
</div>

-<!-- Edit Connection Modal -->
<div class="modal fade" id="editConnectionModal" tabindex="-1" aria-hidden="true">
<div class="modal-dialog modal-dialog-centered">
<div class="modal-content">
@@ -217,7 +220,7 @@
<label for="edit_secret_key" class="form-label fw-medium">Secret Key</label>
<div class="input-group">
<input type="password" class="form-control font-monospace" id="edit_secret_key" name="secret_key" required>
-<button class="btn btn-outline-secondary" type="button" onclick="togglePassword('edit_secret_key')">
+<button class="btn btn-outline-secondary" type="button" onclick="ConnectionsManagement.togglePassword('edit_secret_key')">
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" viewBox="0 0 16 16">
<path d="M16 8s-3-5.5-8-5.5S0 8 0 8s3 5.5 8 5.5S16 8 16 8zM1.173 8a13.133 13.133 0 0 1 1.66-2.043C4.12 4.668 5.88 3.5 8 3.5c2.12 0 3.879 1.168 5.168 2.457A13.133 13.133 0 0 1 14.828 8c-.058.087-.122.183-.195.288-.335.48-.83 1.12-1.465 1.755C11.879 11.332 10.119 12.5 8 12.5c-2.12 0-3.879-1.168-5.168-2.457A13.134 13.134 0 0 1 1.172 8z"/>
<path d="M8 5.5a2.5 2.5 0 1 0 0 5 2.5 2.5 0 0 0 0-5zM4.5 8a3.5 3.5 0 1 1 7 0 3.5 3.5 0 0 1-7 0z"/>
@@ -247,7 +250,6 @@
</div>
</div>

-<!-- Delete Connection Modal -->
<div class="modal fade" id="deleteConnectionModal" tabindex="-1" aria-hidden="true">
<div class="modal-dialog modal-dialog-centered">
<div class="modal-content">
@@ -287,80 +289,16 @@
</div>
</div>

+<script src="{{ url_for('static', filename='js/connections-management.js') }}"></script>
<script>
-function togglePassword(id) {
-  const input = document.getElementById(id);
-  if (input.type === "password") {
-    input.type = "text";
-  } else {
-    input.type = "password";
-  }
-}
-
-// Test Connection Logic
-async function testConnection(formId, resultId) {
-  const form = document.getElementById(formId);
-  const resultDiv = document.getElementById(resultId);
-  const formData = new FormData(form);
-  const data = Object.fromEntries(formData.entries());
-
-  resultDiv.innerHTML = '<div class="text-info"><span class="spinner-border spinner-border-sm" role="status" aria-hidden="true"></span> Testing...</div>';
-
-  try {
-    const response = await fetch("{{ url_for('ui.test_connection') }}", {
-      method: "POST",
-      headers: {
-        "Content-Type": "application/json",
-        "X-CSRFToken": "{{ csrf_token() }}"
-      },
-      body: JSON.stringify(data)
-    });
-
-    const result = await response.json();
-    if (response.ok) {
-      resultDiv.innerHTML = `<div class="text-success"><i class="bi bi-check-circle"></i> ${result.message}</div>`;
-    } else {
-      resultDiv.innerHTML = `<div class="text-danger"><i class="bi bi-exclamation-circle"></i> ${result.message}</div>`;
-    }
-  } catch (error) {
-    resultDiv.innerHTML = `<div class="text-danger"><i class="bi bi-exclamation-circle"></i> Connection failed</div>`;
-  }
-}
-
-document.getElementById('testConnectionBtn').addEventListener('click', () => {
-  testConnection('createConnectionForm', 'testResult');
-});
-
-document.getElementById('editTestConnectionBtn').addEventListener('click', () => {
-  testConnection('editConnectionForm', 'editTestResult');
-});
-
-// Modal Event Listeners
-const editModal = document.getElementById('editConnectionModal');
-editModal.addEventListener('show.bs.modal', event => {
-  const button = event.relatedTarget;
-  const id = button.getAttribute('data-id');
-
-  document.getElementById('edit_name').value = button.getAttribute('data-name');
-  document.getElementById('edit_endpoint_url').value = button.getAttribute('data-endpoint');
-  document.getElementById('edit_region').value = button.getAttribute('data-region');
-  document.getElementById('edit_access_key').value = button.getAttribute('data-access');
-  document.getElementById('edit_secret_key').value = button.getAttribute('data-secret');
-  document.getElementById('editTestResult').innerHTML = '';
-
-  const form = document.getElementById('editConnectionForm');
-  form.action = "{{ url_for('ui.update_connection', connection_id='CONN_ID') }}".replace('CONN_ID', id);
-});
-
-const deleteModal = document.getElementById('deleteConnectionModal');
-deleteModal.addEventListener('show.bs.modal', event => {
-  const button = event.relatedTarget;
-  const id = button.getAttribute('data-id');
-  const name = button.getAttribute('data-name');
-
-  document.getElementById('deleteConnectionName').textContent = name;
-  const form = document.getElementById('deleteConnectionForm');
-  form.action = "{{ url_for('ui.delete_connection', connection_id='CONN_ID') }}".replace('CONN_ID', id);
-});
+ConnectionsManagement.init({
+  csrfToken: "{{ csrf_token() }}",
+  endpoints: {
+    test: "{{ url_for('ui.test_connection') }}",
+    updateTemplate: "{{ url_for('ui.update_connection', connection_id='CONNECTION_ID') }}",
+    deleteTemplate: "{{ url_for('ui.delete_connection', connection_id='CONNECTION_ID') }}",
+    healthTemplate: "/ui/connections/CONNECTION_ID/health"
+  }
+});
</script>
{% endblock %}
@@ -14,6 +14,39 @@
</div>
</section>
<div class="row g-4">
+<div class="col-12 d-xl-none">
+<div class="card shadow-sm docs-sidebar-mobile mb-0">
+<div class="card-body py-3">
+<div class="d-flex align-items-center justify-content-between mb-2">
+<h3 class="h6 text-uppercase text-muted mb-0">On this page</h3>
+<button class="btn btn-sm btn-outline-secondary" type="button" data-bs-toggle="collapse" data-bs-target="#mobileDocsToc" aria-expanded="false" aria-controls="mobileDocsToc">
+<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" viewBox="0 0 16 16">
+<path fill-rule="evenodd" d="M1.646 4.646a.5.5 0 0 1 .708 0L8 10.293l5.646-5.647a.5.5 0 0 1 .708.708l-6 6a.5.5 0 0 1-.708 0l-6-6a.5.5 0 0 1 0-.708z"/>
+</svg>
+</button>
+</div>
+<div class="collapse" id="mobileDocsToc">
+<ul class="list-unstyled docs-toc mb-0 small">
+<li><a href="#setup">Set up & run</a></li>
+<li><a href="#background">Running in background</a></li>
+<li><a href="#auth">Authentication & IAM</a></li>
+<li><a href="#console">Console tour</a></li>
+<li><a href="#automation">Automation / CLI</a></li>
+<li><a href="#api">REST endpoints</a></li>
+<li><a href="#examples">API Examples</a></li>
+<li><a href="#replication">Site Replication</a></li>
+<li><a href="#versioning">Object Versioning</a></li>
+<li><a href="#quotas">Bucket Quotas</a></li>
+<li><a href="#encryption">Encryption</a></li>
+<li><a href="#lifecycle">Lifecycle Rules</a></li>
+<li><a href="#metrics">Metrics History</a></li>
+<li><a href="#operation-metrics">Operation Metrics</a></li>
+<li><a href="#troubleshooting">Troubleshooting</a></li>
+</ul>
+</div>
+</div>
+</div>
+</div>
<div class="col-xl-8">
<article id="setup" class="card shadow-sm docs-section">
<div class="card-body">
@@ -47,9 +80,9 @@ python run.py --mode ui
<table class="table table-sm table-bordered small mb-0">
<thead class="table-light">
<tr>
-<th>Variable</th>
-<th>Default</th>
-<th>Description</th>
+<th style="min-width: 180px;">Variable</th>
+<th style="min-width: 120px;">Default</th>
+<th class="text-wrap" style="min-width: 250px;">Description</th>
</tr>
</thead>
<tbody>
@@ -150,6 +183,24 @@ python run.py --mode ui
<td><code>true</code></td>
<td>Enable file logging.</td>
</tr>
+<tr class="table-secondary">
+<td colspan="3" class="fw-semibold">Metrics History Settings</td>
+</tr>
+<tr>
+<td><code>METRICS_HISTORY_ENABLED</code></td>
+<td><code>false</code></td>
+<td>Enable metrics history recording and charts (opt-in).</td>
+</tr>
+<tr>
+<td><code>METRICS_HISTORY_RETENTION_HOURS</code></td>
+<td><code>24</code></td>
+<td>How long to retain metrics history data.</td>
+</tr>
+<tr>
+<td><code>METRICS_HISTORY_INTERVAL_MINUTES</code></td>
+<td><code>5</code></td>
+<td>Interval between history snapshots.</td>
+</tr>
</tbody>
</table>
</div>
@@ -255,6 +306,15 @@ sudo journalctl -u myfsio -f # View logs</code></pre>
<li>Progress rows highlight retries, throughput, and completion even if you close the modal.</li>
</ul>
</div>
+<div>
+<h3 class="h6 text-uppercase text-muted">Object browser</h3>
+<ul>
+<li>Navigate folder hierarchies using breadcrumbs. Objects with <code>/</code> in keys display as folders.</li>
+<li>Infinite scroll loads more objects automatically. Choose batch size (50–250) from the footer dropdown.</li>
+<li>Bulk select objects for multi-delete or multi-download. Filter by name using the search box.</li>
+<li>If loading fails, click <strong>Retry</strong> to attempt again—no page refresh needed.</li>
+</ul>
+</div>
<div>
<h3 class="h6 text-uppercase text-muted">Object details</h3>
<ul>
@@ -316,11 +376,8 @@ curl -X PUT {{ api_base }}/demo/notes.txt \
-H "X-Secret-Key: <secret_key>" \
--data-binary @notes.txt

-curl -X POST {{ api_base }}/presign/demo/notes.txt \
-  -H "Content-Type: application/json" \
-  -H "X-Access-Key: <access_key>" \
-  -H "X-Secret-Key: <secret_key>" \
-  -d '{"method":"GET", "expires_in": 900}'
+# Presigned URLs are generated via the UI
+# Use the "Presign" button in the object browser
</code></pre>
</div>
</div>
@@ -378,13 +435,8 @@ curl -X POST {{ api_base }}/presign/demo/notes.txt \
</tr>
<tr>
<td>GET/PUT/DELETE</td>
-<td><code>/bucket-policy/<bucket></code></td>
-<td>Fetch, upsert, or remove a bucket policy.</td>
+<td><code>/<bucket>?policy</code></td>
+<td>Fetch, upsert, or remove a bucket policy (S3-compatible).</td>
-</tr>
-<tr>
-<td>POST</td>
-<td><code>/presign/<bucket>/<key></code></td>
-<td>Generate SigV4 URLs for GET/PUT/DELETE with custom expiry.</td>
</tr>
</tbody>
</table>
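The `?policy` endpoint above accepts S3-style bucket-policy documents. As a hedged illustration (the bucket name, principal, and action values below are placeholders, not taken from this diff), a minimal policy body could be built like this:

```python
import json

# Illustrative S3-style policy for PUT /<bucket>?policy.
# Principal and resource values are placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": ["<access_key>"]},
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::demo/*"],
        }
    ],
}
body = json.dumps(policy)  # send as the request body
```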
@@ -398,10 +450,62 @@ curl -X POST {{ api_base }}/presign/demo/notes.txt \
<span class="docs-section-kicker">07</span>
<h2 class="h4 mb-0">API Examples</h2>
</div>
-<p class="text-muted">Common operations using boto3.</p>
+<p class="text-muted">Common operations using popular SDKs and tools.</p>

-<h5 class="mt-4">Multipart Upload</h5>
-<pre><code class="language-python">import boto3
+<h3 class="h6 text-uppercase text-muted mt-4">Python (boto3)</h3>
+<pre class="mb-4"><code class="language-python">import boto3
+
+s3 = boto3.client(
+    's3',
+    endpoint_url='{{ api_base }}',
+    aws_access_key_id='<access_key>',
+    aws_secret_access_key='<secret_key>'
+)
+
+# List buckets
+buckets = s3.list_buckets()['Buckets']
+
+# Create bucket
+s3.create_bucket(Bucket='mybucket')
+
+# Upload file
+s3.upload_file('local.txt', 'mybucket', 'remote.txt')
+
+# Download file
+s3.download_file('mybucket', 'remote.txt', 'downloaded.txt')
+
+# Generate presigned URL (valid 1 hour)
+url = s3.generate_presigned_url(
+    'get_object',
+    Params={'Bucket': 'mybucket', 'Key': 'remote.txt'},
+    ExpiresIn=3600
+)</code></pre>
+
+<h3 class="h6 text-uppercase text-muted mt-4">JavaScript (AWS SDK v3)</h3>
+<pre class="mb-4"><code class="language-javascript">import { S3Client, ListBucketsCommand, PutObjectCommand } from '@aws-sdk/client-s3';
+
+const s3 = new S3Client({
+  endpoint: '{{ api_base }}',
+  region: 'us-east-1',
+  credentials: {
+    accessKeyId: '<access_key>',
+    secretAccessKey: '<secret_key>'
+  },
+  forcePathStyle: true // Required for S3-compatible services
+});
+
+// List buckets
+const { Buckets } = await s3.send(new ListBucketsCommand({}));
+
+// Upload object
+await s3.send(new PutObjectCommand({
+  Bucket: 'mybucket',
+  Key: 'hello.txt',
+  Body: 'Hello, World!'
+}));</code></pre>
+
+<h3 class="h6 text-uppercase text-muted mt-4">Multipart Upload (Python)</h3>
+<pre class="mb-4"><code class="language-python">import boto3

s3 = boto3.client('s3', endpoint_url='{{ api_base }}')

@@ -409,9 +513,9 @@ s3 = boto3.client('s3', endpoint_url='{{ api_base }}')
response = s3.create_multipart_upload(Bucket='mybucket', Key='large.bin')
upload_id = response['UploadId']

-# Upload parts
+# Upload parts (minimum 5MB each, except last part)
parts = []
-chunks = [b'chunk1', b'chunk2'] # Example data chunks
+chunks = [b'chunk1...', b'chunk2...']
for part_number, chunk in enumerate(chunks, start=1):
    response = s3.upload_part(
        Bucket='mybucket',
@@ -429,6 +533,18 @@ s3.complete_multipart_upload(
    UploadId=upload_id,
    MultipartUpload={'Parts': parts}
)</code></pre>
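The updated comment notes that every part except the last must be at least 5 MB. A minimal, stdlib-only sketch of slicing a payload to satisfy that rule (the helper name `iter_parts` is illustrative, not from the codebase):

```python
def iter_parts(data: bytes, part_size: int = 5 * 1024 * 1024):
    """Yield consecutive slices of data; all but the last are exactly part_size."""
    for offset in range(0, len(data), part_size):
        yield data[offset:offset + part_size]

# Example: a 12 MiB payload splits into 5 MiB, 5 MiB, and 2 MiB parts.
sizes = [len(p) for p in iter_parts(b"x" * (12 * 1024 * 1024))]
```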

+<h3 class="h6 text-uppercase text-muted mt-4">Presigned URLs for Sharing</h3>
+<pre class="mb-0"><code class="language-text"># Generate presigned URLs via the UI:
+# 1. Navigate to your bucket in the object browser
+# 2. Select the object you want to share
+# 3. Click the "Presign" button
+# 4. Choose method (GET/PUT/DELETE) and expiration time
+# 5. Copy the generated URL
+
+# Supported options:
+# - Method: GET (download), PUT (upload), DELETE (remove)
+# - Expiration: 1 second to 7 days (604800 seconds)</code></pre>
</div>
</article>
<article id="replication" class="card shadow-sm docs-section">
@@ -452,15 +568,46 @@ s3.complete_multipart_upload(
</li>
</ol>

-<div class="alert alert-light border mb-0">
-<div class="d-flex gap-2">
-<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-terminal text-muted mt-1" viewBox="0 0 16 16">
+<div class="alert alert-light border mb-3 overflow-hidden">
+<div class="d-flex flex-column flex-sm-row gap-2 mb-2">
+<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-terminal text-muted mt-1 flex-shrink-0 d-none d-sm-block" viewBox="0 0 16 16">
<path d="M6 9a.5.5 0 0 1 .5-.5h3a.5.5 0 0 1 0 1h-3A.5.5 0 0 1 6 9zM3.854 4.146a.5.5 0 1 0-.708.708L4.793 6.5 3.146 8.146a.5.5 0 1 0 .708.708l2-2a.5.5 0 0 0 0-.708l-2-2z"/>
<path d="M2 1a2 2 0 0 0-2 2v10a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V3a2 2 0 0 0-2-2H2zm12 1a1 1 0 0 1 1 1v10a1 1 0 0 1-1 1H2a1 1 0 0 1-1-1V3a1 1 0 0 1 1-1h12z"/>
</svg>
-<div>
-<strong>Headless Target Setup?</strong>
-<p class="small text-muted mb-0">If your target server has no UI, use the Python API directly to bootstrap credentials. See <code>docs.md</code> in the project root for the <code>setup_target.py</code> script.</p>
+<div class="flex-grow-1 min-width-0">
+<strong>Headless Target Setup</strong>
+<p class="small text-muted mb-2">If your target server has no UI, create a <code>setup_target.py</code> script to bootstrap credentials:</p>
+<pre class="mb-0 overflow-auto" style="max-width: 100%;"><code class="language-python"># setup_target.py
+from pathlib import Path
+from app.iam import IamService
+from app.storage import ObjectStorage
+
+# Initialize services (paths match default config)
+data_dir = Path("data")
+iam = IamService(data_dir / ".myfsio.sys" / "config" / "iam.json")
+storage = ObjectStorage(data_dir)
+
+# 1. Create the bucket
+bucket_name = "backup-bucket"
+try:
+    storage.create_bucket(bucket_name)
+    print(f"Bucket '{bucket_name}' created.")
+except Exception as e:
+    print(f"Bucket creation skipped: {e}")
+
+# 2. Create the user
+try:
+    creds = iam.create_user(
+        display_name="Replication User",
+        policies=[{"bucket": bucket_name, "actions": ["write", "read", "list"]}]
+    )
+    print("\n--- CREDENTIALS GENERATED ---")
+    print(f"Access Key: {creds['access_key']}")
+    print(f"Secret Key: {creds['secret_key']}")
+    print("-----------------------------")
+except Exception as e:
+    print(f"User creation failed: {e}")</code></pre>
+<p class="small text-muted mt-2 mb-0">Save and run: <code>python setup_target.py</code></p>
</div>
</div>
</div>
@@ -471,11 +618,129 @@ s3.complete_multipart_upload(
<li>Follow the steps above to replicate <strong>A → B</strong>.</li>
<li>Repeat the process on Server B to replicate <strong>B → A</strong> (create a connection to A, enable rule).</li>
</ol>
-<p class="small text-muted mb-0">
+<p class="small text-muted mb-3">
<strong>Loop Prevention:</strong> The system automatically detects replication traffic using a custom User-Agent (<code>S3ReplicationAgent</code>). This prevents infinite loops where an object replicated from A to B is immediately replicated back to A.
<br>
<strong>Deletes:</strong> Deleting an object on one server will propagate the deletion to the other server.
</p>
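The loop-prevention behavior described above can be sketched as a simple header check; the function name and exact matching rule here are illustrative, not taken from the codebase:

```python
REPLICATION_USER_AGENT = "S3ReplicationAgent"

def is_replication_request(headers: dict) -> bool:
    """True when the request was issued by the replication worker itself."""
    return REPLICATION_USER_AGENT in headers.get("User-Agent", "")

# A PUT carrying the replication agent is applied locally but not
# re-replicated, breaking the A -> B -> A cycle.
```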

+<h3 class="h6 text-uppercase text-muted mt-4">Error Handling & Rate Limits</h3>
+<p class="small text-muted mb-3">The replication system handles transient failures automatically:</p>
+<div class="table-responsive mb-3">
+<table class="table table-sm table-bordered small">
+<thead class="table-light">
+<tr>
+<th>Behavior</th>
+<th>Details</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td><strong>Retry Logic</strong></td>
+<td>boto3 automatically handles 429 (rate limit) errors using exponential backoff with <code>max_attempts=2</code></td>
+</tr>
+<tr>
+<td><strong>Concurrency</strong></td>
+<td>Uses a ThreadPoolExecutor with 4 parallel workers for replication tasks</td>
+</tr>
+<tr>
+<td><strong>Timeouts</strong></td>
+<td>Connect: 5s, Read: 30s. Large files use streaming transfers</td>
+</tr>
+</tbody>
+</table>
+</div>
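The retry behavior in the table above rests on exponential backoff. A stdlib-only sketch of that delay schedule, assuming full jitter (the base and cap constants are illustrative; the real values come from boto3's standard retry mode):

```python
import random

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 20.0) -> float:
    """Full-jitter exponential backoff: delay grows as base * 2**attempt, capped."""
    return random.uniform(0.0, min(cap, base * (2 ** attempt)))

# attempt 0 -> up to 1s, attempt 1 -> up to 2s, attempt 2 -> up to 4s, ...
```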
<div class="alert alert-warning border mb-0">
<div class="d-flex gap-2">
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-exclamation-triangle text-warning mt-1 flex-shrink-0" viewBox="0 0 16 16">
<path d="M7.938 2.016A.13.13 0 0 1 8.002 2a.13.13 0 0 1 .063.016.146.146 0 0 1 .054.057l6.857 11.667c.036.06.035.124.002.183a.163.163 0 0 1-.054.06.116.116 0 0 1-.066.017H1.146a.115.115 0 0 1-.066-.017.163.163 0 0 1-.054-.06.176.176 0 0 1 .002-.183L7.884 2.073a.147.147 0 0 1 .054-.057zm1.044-.45a1.13 1.13 0 0 0-1.96 0L.165 13.233c-.457.778.091 1.767.98 1.767h13.713c.889 0 1.438-.99.98-1.767L8.982 1.566z"/>
<path d="M7.002 12a1 1 0 1 1 2 0 1 1 0 0 1-2 0zM7.1 5.995a.905.905 0 1 1 1.8 0l-.35 3.507a.552.552 0 0 1-1.1 0L7.1 5.995z"/>
</svg>
<div>
<strong>Large File Counts:</strong> When replicating buckets with many objects, the target server's rate limits may cause delays. There is no built-in pause mechanism. Consider increasing <code>RATE_LIMIT_DEFAULT</code> on the target server during bulk replication operations.
</div>
</div>
</div>
</div>
</article>
<article id="versioning" class="card shadow-sm docs-section">
<div class="card-body">
<div class="d-flex align-items-center gap-2 mb-3">
<span class="docs-section-kicker">09</span>
<h2 class="h4 mb-0">Object Versioning</h2>
</div>
<p class="text-muted">Keep multiple versions of objects to protect against accidental deletions and overwrites. Restore previous versions at any time.</p>

<h3 class="h6 text-uppercase text-muted mt-4">Enabling Versioning</h3>
<ol class="docs-steps mb-3">
<li>Navigate to your bucket's <strong>Properties</strong> tab.</li>
<li>Find the <strong>Versioning</strong> card and click <strong>Enable</strong>.</li>
<li>All subsequent uploads will create new versions instead of overwriting.</li>
</ol>

<h3 class="h6 text-uppercase text-muted mt-4">Version Operations</h3>
<div class="table-responsive mb-3">
<table class="table table-sm table-bordered small">
<thead class="table-light">
<tr>
<th>Operation</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>View Versions</strong></td>
<td>Click the version icon on any object to see all historical versions with timestamps and sizes.</td>
</tr>
<tr>
<td><strong>Restore Version</strong></td>
<td>Click <strong>Restore</strong> on any version to make it the current version (creates a copy).</td>
</tr>
<tr>
<td><strong>Delete Current</strong></td>
<td>Deleting an object archives it. Previous versions remain accessible.</td>
</tr>
<tr>
<td><strong>Purge All</strong></td>
<td>Permanently delete an object and all its versions. This cannot be undone.</td>
</tr>
</tbody>
</table>
</div>

<h3 class="h6 text-uppercase text-muted mt-4">Archived Objects</h3>
<p class="small text-muted mb-3">When you delete a versioned object, it becomes "archived": the current version is removed but historical versions remain. The <strong>Archived</strong> tab shows these objects so you can restore them.</p>

<h3 class="h6 text-uppercase text-muted mt-4">API Usage</h3>
<pre class="mb-3"><code class="language-bash"># Enable versioning
curl -X PUT "{{ api_base }}/<bucket>?versioning" \
  -H "Content-Type: application/json" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -d '{"Status": "Enabled"}'

# Get versioning status
curl "{{ api_base }}/<bucket>?versioning" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

# List object versions
curl "{{ api_base }}/<bucket>?versions" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

# Get specific version
curl "{{ api_base }}/<bucket>/<key>?versionId=<version-id>" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"</code></pre>

<div class="alert alert-light border mb-0">
<div class="d-flex gap-2">
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-info-circle text-muted mt-1" viewBox="0 0 16 16">
<path d="M8 15A7 7 0 1 1 8 1a7 7 0 0 1 0 14zm0 1A8 8 0 1 0 8 0a8 8 0 0 0 0 16z"/>
<path d="m8.93 6.588-2.29.287-.082.38.45.083c.294.07.352.176.288.469l-.738 3.468c-.194.897.105 1.319.808 1.319.545 0 1.178-.252 1.465-.598l.088-.416c-.2.176-.492.246-.686.246-.275 0-.375-.193-.304-.533L8.93 6.588zM9 4.5a1 1 0 1 1-2 0 1 1 0 0 1 2 0z"/>
</svg>
<div>
<strong>Storage Impact:</strong> Each version consumes storage. Enable quotas to limit total bucket size including all versions.
</div>
</div>
</div>
</div>
</article>
<article id="quotas" class="card shadow-sm docs-section">

@@ -640,10 +905,283 @@ curl -X DELETE "{{ api_base }}/kms/keys/{key-id}?waiting_period_days=30" \

</p>
</div>
</article>
<article id="lifecycle" class="card shadow-sm docs-section">
<div class="card-body">
<div class="d-flex align-items-center gap-2 mb-3">
<span class="docs-section-kicker">12</span>
<h2 class="h4 mb-0">Lifecycle Rules</h2>
</div>
<p class="text-muted">Automatically delete expired objects, clean up old versions, and abort incomplete multipart uploads using time-based lifecycle rules.</p>

<h3 class="h6 text-uppercase text-muted mt-4">How It Works</h3>
<p class="small text-muted mb-3">
Lifecycle rules run on a background timer (Python <code>threading.Timer</code>), not a system cronjob. The enforcement cycle triggers every <strong>3600 seconds (1 hour)</strong> by default. Each cycle scans all buckets with lifecycle configurations and applies matching rules.
</p>
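A minimal sketch of such a self-rescheduling timer, assuming nothing beyond the <code>threading.Timer</code> mechanism named above (the function and argument names are illustrative, not the project's actual code):

```python
import threading

def schedule_enforcement(interval_seconds: float, enforce, timer_holder: dict) -> None:
    """Run enforce() now, then reschedule it every interval_seconds.

    threading.Timer fires only once, so each cycle re-arms the next one."""
    enforce()
    timer = threading.Timer(
        interval_seconds,
        schedule_enforcement,
        args=(interval_seconds, enforce, timer_holder),
    )
    timer.daemon = True  # don't block interpreter shutdown
    timer.start()
    timer_holder["timer"] = timer  # keep a handle so the cycle can be cancelled
```

Because each run re-arms the timer, a slow enforcement pass simply delays the next one; there is no overlap between cycles.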

<h3 class="h6 text-uppercase text-muted mt-4">Expiration Types</h3>
<div class="table-responsive mb-3">
<table class="table table-sm table-bordered small">
<thead class="table-light">
<tr>
<th>Type</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Expiration (Days)</strong></td>
<td>Delete current objects older than N days from their last modification</td>
</tr>
<tr>
<td><strong>Expiration (Date)</strong></td>
<td>Delete current objects after a specific date (ISO 8601 format)</td>
</tr>
<tr>
<td><strong>NoncurrentVersionExpiration</strong></td>
<td>Delete non-current (archived) versions older than N days from when they became non-current</td>
</tr>
<tr>
<td><strong>AbortIncompleteMultipartUpload</strong></td>
<td>Abort multipart uploads that have been in progress longer than N days</td>
</tr>
</tbody>
</table>
</div>
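The Days-based expiration check in the table boils down to an age comparison plus the prefix filter; a hedged sketch (the rule's field names mirror the lifecycle JSON used in the API examples, while the helper itself is illustrative):

```python
from datetime import datetime, timedelta, timezone

def is_expired(key: str, last_modified: datetime, rule: dict, now: datetime) -> bool:
    """Decide whether a current object matches an Expiration/Days rule."""
    if rule.get("Status") != "Enabled":
        return False
    if not key.startswith(rule.get("Prefix", "")):
        return False  # rule is scoped to another path
    days = rule.get("Expiration", {}).get("Days")
    if days is None:
        return False  # not a Days-based expiration rule
    return now - last_modified >= timedelta(days=days)
```

An empty <code>Prefix</code> matches every key, which is why leaving it blank applies the rule bucket-wide.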

<h3 class="h6 text-uppercase text-muted mt-4">API Usage</h3>
<pre class="mb-3"><code class="language-bash"># Set lifecycle rule (delete objects older than 30 days)
curl -X PUT "{{ api_base }}/<bucket>?lifecycle" \
  -H "Content-Type: application/json" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -d '[{
    "ID": "expire-old-objects",
    "Status": "Enabled",
    "Prefix": "",
    "Expiration": {"Days": 30}
  }]'

# Abort incomplete multipart uploads after 7 days
curl -X PUT "{{ api_base }}/<bucket>?lifecycle" \
  -H "Content-Type: application/json" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -d '[{
    "ID": "cleanup-multipart",
    "Status": "Enabled",
    "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7}
  }]'

# Get current lifecycle configuration
curl "{{ api_base }}/<bucket>?lifecycle" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"</code></pre>

<div class="alert alert-light border mb-0">
<div class="d-flex gap-2">
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-info-circle text-muted mt-1 flex-shrink-0" viewBox="0 0 16 16">
<path d="M8 15A7 7 0 1 1 8 1a7 7 0 0 1 0 14zm0 1A8 8 0 1 0 8 0a8 8 0 0 0 0 16z"/>
<path d="m8.93 6.588-2.29.287-.082.38.45.083c.294.07.352.176.288.469l-.738 3.468c-.194.897.105 1.319.808 1.319.545 0 1.178-.252 1.465-.598l.088-.416c-.2.176-.492.246-.686.246-.275 0-.375-.193-.304-.533L8.93 6.588zM9 4.5a1 1 0 1 1-2 0 1 1 0 0 1 2 0z"/>
</svg>
<div>
<strong>Prefix Filtering:</strong> Use the <code>Prefix</code> field to scope rules to specific paths (e.g., <code>"logs/"</code>). Leave empty to apply to all objects in the bucket.
</div>
</div>
</div>
</div>
</article>
<article id="metrics" class="card shadow-sm docs-section">
<div class="card-body">
<div class="d-flex align-items-center gap-2 mb-3">
<span class="docs-section-kicker">13</span>
<h2 class="h4 mb-0">Metrics History</h2>
</div>
<p class="text-muted">Track CPU, memory, and disk usage over time with optional metrics history. Disabled by default to minimize overhead.</p>

<h3 class="h6 text-uppercase text-muted mt-4">Enabling Metrics History</h3>
<p class="small text-muted">Set the environment variable to opt in:</p>
<pre class="mb-3"><code class="language-bash"># PowerShell
$env:METRICS_HISTORY_ENABLED = "true"
python run.py

# Bash
export METRICS_HISTORY_ENABLED=true
python run.py</code></pre>

<h3 class="h6 text-uppercase text-muted mt-4">Configuration Options</h3>
<div class="table-responsive mb-3">
<table class="table table-sm table-bordered small">
<thead class="table-light">
<tr>
<th>Variable</th>
<th>Default</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>METRICS_HISTORY_ENABLED</code></td>
<td><code>false</code></td>
<td>Enable/disable metrics history recording</td>
</tr>
<tr>
<td><code>METRICS_HISTORY_RETENTION_HOURS</code></td>
<td><code>24</code></td>
<td>How long to keep history data (hours)</td>
</tr>
<tr>
<td><code>METRICS_HISTORY_INTERVAL_MINUTES</code></td>
<td><code>5</code></td>
<td>Interval between snapshots (minutes)</td>
</tr>
</tbody>
</table>
</div>
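Retention plus a snapshot interval amounts to appending a timestamped sample and dropping anything older than the window; a minimal illustrative sketch (function and field names are assumptions, not the project's actual storage code):

```python
from datetime import datetime, timedelta, timezone

def record_snapshot(history: list, sample: dict, now: datetime,
                    retention_hours: int = 24) -> list:
    """Append a timestamped sample, then prune entries past the retention window."""
    history.append({"ts": now, **sample})
    cutoff = now - timedelta(hours=retention_hours)
    return [entry for entry in history if entry["ts"] >= cutoff]
```

With the default 5-minute interval and 24-hour retention, the history holds at most 288 samples per metric, keeping the JSON file small.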

<h3 class="h6 text-uppercase text-muted mt-4">API Endpoints</h3>
<pre class="mb-3"><code class="language-bash"># Get metrics history (last 24 hours by default)
curl "{{ api_base | replace('/api', '/ui') }}/metrics/history" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

# Get history for a specific time range
curl "{{ api_base | replace('/api', '/ui') }}/metrics/history?hours=6" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

# Get current settings
curl "{{ api_base | replace('/api', '/ui') }}/metrics/settings" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

# Update settings at runtime
curl -X PUT "{{ api_base | replace('/api', '/ui') }}/metrics/settings" \
  -H "Content-Type: application/json" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
  -d '{"enabled": true, "retention_hours": 48, "interval_minutes": 10}'</code></pre>

<h3 class="h6 text-uppercase text-muted mt-4">Storage Location</h3>
<p class="small text-muted mb-3">History data is stored at:</p>
<code class="d-block mb-3">data/.myfsio.sys/config/metrics_history.json</code>

<div class="alert alert-light border mb-0">
<div class="d-flex gap-2">
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-info-circle text-muted mt-1 flex-shrink-0" viewBox="0 0 16 16">
<path d="M8 15A7 7 0 1 1 8 1a7 7 0 0 1 0 14zm0 1A8 8 0 1 0 8 0a8 8 0 0 0 0 16z"/>
<path d="m8.93 6.588-2.29.287-.082.38.45.083c.294.07.352.176.288.469l-.738 3.468c-.194.897.105 1.319.808 1.319.545 0 1.178-.252 1.465-.598l.088-.416c-.2.176-.492.246-.686.246-.275 0-.375-.193-.304-.533L8.93 6.588zM9 4.5a1 1 0 1 1-2 0 1 1 0 0 1 2 0z"/>
</svg>
<div>
<strong>UI Charts:</strong> When enabled, the Metrics dashboard displays line charts showing CPU, memory, and disk usage trends with a time range selector (1h, 6h, 24h, 7d).
</div>
</div>
</div>
</div>
</article>
<article id="operation-metrics" class="card shadow-sm docs-section">
<div class="card-body">
<div class="d-flex align-items-center gap-2 mb-3">
<span class="docs-section-kicker">14</span>
<h2 class="h4 mb-0">Operation Metrics</h2>
</div>
<p class="text-muted">Track API request statistics including request counts, latency, error rates, and bandwidth usage. Provides real-time visibility into API operations.</p>

<h3 class="h6 text-uppercase text-muted mt-4">Enabling Operation Metrics</h3>
<p class="small text-muted">Set the environment variable to opt in:</p>
<pre class="mb-3"><code class="language-bash"># PowerShell
$env:OPERATION_METRICS_ENABLED = "true"
python run.py

# Bash
export OPERATION_METRICS_ENABLED=true
python run.py</code></pre>

<h3 class="h6 text-uppercase text-muted mt-4">Configuration Options</h3>
<div class="table-responsive mb-3">
<table class="table table-sm table-bordered small">
<thead class="table-light">
<tr>
<th>Variable</th>
<th>Default</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>OPERATION_METRICS_ENABLED</code></td>
<td><code>false</code></td>
<td>Enable/disable operation metrics collection</td>
</tr>
<tr>
<td><code>OPERATION_METRICS_INTERVAL_MINUTES</code></td>
<td><code>5</code></td>
<td>Interval between snapshots (minutes)</td>
</tr>
<tr>
<td><code>OPERATION_METRICS_RETENTION_HOURS</code></td>
<td><code>24</code></td>
<td>How long to keep history data (hours)</td>
</tr>
</tbody>
</table>
</div>

<h3 class="h6 text-uppercase text-muted mt-4">What's Tracked</h3>
<div class="row g-3 mb-4">
<div class="col-md-6">
<div class="bg-light rounded p-3 h-100">
<h6 class="small fw-bold mb-2">Request Statistics</h6>
<ul class="small text-muted mb-0 ps-3">
<li>Request counts by HTTP method (GET, PUT, POST, DELETE)</li>
<li>Response status codes (2xx, 3xx, 4xx, 5xx)</li>
<li>Average, min, max latency</li>
<li>Bytes transferred in/out</li>
</ul>
</div>
</div>
<div class="col-md-6">
<div class="bg-light rounded p-3 h-100">
<h6 class="small fw-bold mb-2">Endpoint Breakdown</h6>
<ul class="small text-muted mb-0 ps-3">
<li><code>object</code> - Object operations (GET/PUT/DELETE)</li>
<li><code>bucket</code> - Bucket operations</li>
<li><code>ui</code> - Web UI requests</li>
<li><code>service</code> - Health checks, etc.</li>
</ul>
</div>
</div>
</div>
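The per-method counters and latency aggregates listed above can be sketched as a toy recorder (the class and field names are assumptions for illustration, not the project's actual code):

```python
class OperationStats:
    """Accumulate request counts by HTTP method plus min/avg/max latency."""

    def __init__(self):
        self.by_method = {}   # e.g. {"GET": 12, "PUT": 3}
        self.latencies = []   # per-request latency samples, in ms

    def record(self, method: str, latency_ms: float) -> None:
        self.by_method[method] = self.by_method.get(method, 0) + 1
        self.latencies.append(latency_ms)

    def summary(self) -> dict:
        n = len(self.latencies)
        return {
            "requests": self.by_method,
            "latency": {
                "min": min(self.latencies) if n else None,
                "max": max(self.latencies) if n else None,
                "avg": sum(self.latencies) / n if n else None,
            },
        }
```

A snapshot of <code>summary()</code> taken every interval is what ends up in the history file.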

<h3 class="h6 text-uppercase text-muted mt-4">S3 Error Codes</h3>
<p class="small text-muted">The dashboard tracks S3 API-specific error codes like <code>NoSuchKey</code>, <code>AccessDenied</code>, and <code>BucketNotFound</code>. These are separate from HTTP status codes: a 404 from the UI won't appear here, only S3 API errors.</p>

<h3 class="h6 text-uppercase text-muted mt-4">API Endpoints</h3>
<pre class="mb-3"><code class="language-bash"># Get current operation metrics
curl "{{ api_base | replace('/api', '/ui') }}/metrics/operations" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

# Get operation metrics history
curl "{{ api_base | replace('/api', '/ui') }}/metrics/operations/history" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"

# Filter history by time range
curl "{{ api_base | replace('/api', '/ui') }}/metrics/operations/history?hours=6" \
  -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"</code></pre>

<h3 class="h6 text-uppercase text-muted mt-4">Storage Location</h3>
<p class="small text-muted mb-3">Operation metrics data is stored at:</p>
<code class="d-block mb-3">data/.myfsio.sys/config/operation_metrics.json</code>

<div class="alert alert-light border mb-0">
<div class="d-flex gap-2">
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-info-circle text-muted mt-1 flex-shrink-0" viewBox="0 0 16 16">
<path d="M8 15A7 7 0 1 1 8 1a7 7 0 0 1 0 14zm0 1A8 8 0 1 0 8 0a8 8 0 0 0 0 16z"/>
<path d="m8.93 6.588-2.29.287-.082.38.45.083c.294.07.352.176.288.469l-.738 3.468c-.194.897.105 1.319.808 1.319.545 0 1.178-.252 1.465-.598l.088-.416c-.2.176-.492.246-.686.246-.275 0-.375-.193-.304-.533L8.93 6.588zM9 4.5a1 1 0 1 1-2 0 1 1 0 0 1 2 0z"/>
</svg>
<div>
<strong>UI Dashboard:</strong> When enabled, the Metrics page shows an "API Operations" section with summary cards, charts for requests by method/status/endpoint, and an S3 error codes table. Data refreshes every 5 seconds.
</div>
</div>
</div>
</div>
</article>
<article id="troubleshooting" class="card shadow-sm docs-section">
<div class="card-body">
<div class="d-flex align-items-center gap-2 mb-3">
<span class="docs-section-kicker">15</span>
<h2 class="h4 mb-0">Troubleshooting & tips</h2>
</div>
<div class="table-responsive">

@@ -681,6 +1219,11 @@ curl -X DELETE "{{ api_base }}/kms/keys/{key-id}?waiting_period_days=30" \

<td>Proxy headers missing or <code>API_BASE_URL</code> incorrect</td>
<td>Ensure your proxy sends <code>X-Forwarded-Host</code>/<code>Proto</code> headers, or explicitly set <code>API_BASE_URL</code> to your public domain.</td>
</tr>
<tr>
<td>Large folder uploads hitting rate limits (429)</td>
<td><code>RATE_LIMIT_DEFAULT</code> exceeded (200/min)</td>
<td>Increase the rate limit in the env config, use a Redis backend (<code>RATE_LIMIT_STORAGE_URI=redis://host:port</code>) for distributed setups, or upload in smaller batches.</td>
</tr>
</tbody>
</table>
</div>

@@ -700,8 +1243,12 @@ curl -X DELETE "{{ api_base }}/kms/keys/{key-id}?waiting_period_days=30" \

<li><a href="#api">REST endpoints</a></li>
<li><a href="#examples">API Examples</a></li>
<li><a href="#replication">Site Replication</a></li>
<li><a href="#versioning">Object Versioning</a></li>
<li><a href="#quotas">Bucket Quotas</a></li>
<li><a href="#encryption">Encryption</a></li>
<li><a href="#lifecycle">Lifecycle Rules</a></li>
<li><a href="#metrics">Metrics History</a></li>
<li><a href="#operation-metrics">Operation Metrics</a></li>
<li><a href="#troubleshooting">Troubleshooting</a></li>
</ul>
<div class="docs-sidebar-callouts">
@@ -10,6 +10,7 @@

</svg>
IAM Configuration
</h1>
<p class="text-muted mb-0 mt-1">Create and manage users with fine-grained bucket permissions.</p>
</div>
<div class="d-flex gap-2">
{% if not iam_locked %}

@@ -109,35 +110,68 @@

{% else %}
<div class="card-body px-4 pb-4">
{% if users %}
<div class="row g-3">
{% for user in users %}
<div class="col-md-6 col-xl-4">
<div class="card h-100 iam-user-card">
<div class="card-body">
<div class="d-flex align-items-start justify-content-between mb-3">
<div class="d-flex align-items-center gap-3 min-width-0 overflow-hidden">
<div class="user-avatar user-avatar-lg flex-shrink-0">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" fill="currentColor" viewBox="0 0 16 16">
<path d="M8 8a3 3 0 1 0 0-6 3 3 0 0 0 0 6zm2-3a2 2 0 1 1-4 0 2 2 0 0 1 4 0zm4 8c0 1-1 1-1 1H3s-1 0-1-1 1-4 6-4 6 3 6 4zm-1-.004c-.001-.246-.154-.986-.832-1.664C11.516 10.68 10.289 10 8 10c-2.29 0-3.516.68-4.168 1.332-.678.678-.83 1.418-.832 1.664h10z"/>
</svg>
</div>
<div class="min-width-0">
<h6 class="fw-semibold mb-0 text-truncate" title="{{ user.display_name }}">{{ user.display_name }}</h6>
<code class="small text-muted d-block text-truncate" title="{{ user.access_key }}">{{ user.access_key }}</code>
</div>
</div>
<div class="dropdown flex-shrink-0">
<button class="btn btn-sm btn-icon" type="button" data-bs-toggle="dropdown" aria-expanded="false">
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" viewBox="0 0 16 16">
<path d="M9.5 13a1.5 1.5 0 1 1-3 0 1.5 1.5 0 0 1 3 0zm0-5a1.5 1.5 0 1 1-3 0 1.5 1.5 0 0 1 3 0zm0-5a1.5 1.5 0 1 1-3 0 1.5 1.5 0 0 1 3 0z"/>
</svg>
</button>
<ul class="dropdown-menu dropdown-menu-end">
<li>
<button class="dropdown-item" type="button" data-edit-user="{{ user.access_key }}" data-display-name="{{ user.display_name }}">
<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-2" viewBox="0 0 16 16">
<path d="M12.146.146a.5.5 0 0 1 .708 0l3 3a.5.5 0 0 1 0 .708l-10 10a.5.5 0 0 1-.168.11l-5 2a.5.5 0 0 1-.65-.65l2-5a.5.5 0 0 1 .11-.168l10-10zM11.207 2.5 13.5 4.793 14.793 3.5 12.5 1.207 11.207 2.5zm1.586 3L10.5 3.207 4 9.707V10h.5a.5.5 0 0 1 .5.5v.5h.5a.5.5 0 0 1 .5.5v.5h.293l6.5-6.5z"/>
</svg>
Edit Name
</button>
</li>
<li>
<button class="dropdown-item" type="button" data-rotate-user="{{ user.access_key }}">
<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-2" viewBox="0 0 16 16">
<path d="M11.534 7h3.932a.25.25 0 0 1 .192.41l-1.966 2.36a.25.25 0 0 1-.384 0l-1.966-2.36a.25.25 0 0 1 .192-.41zm-11 2h3.932a.25.25 0 0 0 .192-.41L2.692 6.23a.25.25 0 0 0-.384 0L.342 8.59A.25.25 0 0 0 .534 9z"/>
<path fill-rule="evenodd" d="M8 3c-1.552 0-2.94.707-3.857 1.818a.5.5 0 1 1-.771-.636A6.002 6.002 0 0 1 13.917 7H12.9A5.002 5.002 0 0 0 8 3zM3.1 9a5.002 5.002 0 0 0 8.757 2.182.5.5 0 1 1 .771.636A6.002 6.002 0 0 1 2.083 9H3.1z"/>
</svg>
Rotate Secret
</button>
</li>
<li><hr class="dropdown-divider"></li>
<li>
<button class="dropdown-item text-danger" type="button" data-delete-user="{{ user.access_key }}">
<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-2" viewBox="0 0 16 16">
<path d="M5.5 5.5a.5.5 0 0 1 .5.5v6a.5.5 0 0 1-1 0v-6a.5.5 0 0 1 .5-.5zm2.5 0a.5.5 0 0 1 .5.5v6a.5.5 0 0 1-1 0v-6a.5.5 0 0 1 .5-.5zm3 .5v6a.5.5 0 0 1-1 0v-6a.5.5 0 0 1 1 0z"/>
<path fill-rule="evenodd" d="M14.5 3a1 1 0 0 1-1 1H13v9a2 2 0 0 1-2 2H5a2 2 0 0 1-2-2V4h-.5a1 1 0 0 1-1-1V2a1 1 0 0 1 1-1H6a1 1 0 0 1 1-1h2a1 1 0 0 1 1 1h3.5a1 1 0 0 1 1 1v1zM4.118 4 4 4.059V13a1 1 0 0 0 1 1h6a1 1 0 0 0 1-1V4.059L11.882 4H4.118zM2.5 3V2h11v1h-11z"/>
</svg>
Delete User
</button>
</li>
</ul>
</div>
</div>
<div class="mb-3">
<div class="small text-muted mb-2">Bucket Permissions</div>
<div class="d-flex flex-wrap gap-1">
{% for policy in user.policies %}
<span class="badge bg-primary bg-opacity-10 text-primary">
<svg xmlns="http://www.w3.org/2000/svg" width="10" height="10" fill="currentColor" class="me-1" viewBox="0 0 16 16">
<path d="M2.522 5H2a.5.5 0 0 0-.494.574l1.372 9.149A1.5 1.5 0 0 0 4.36 16h7.278a1.5 1.5 0 0 0 1.483-1.277l1.373-9.149A.5.5 0 0 0 14 5h-.522A5.5 5.5 0 0 0 2.522 5zm1.005 0a4.5 4.5 0 0 1 8.945 0H3.527z"/>
</svg>
{{ policy.bucket }}
{% if '*' in policy.actions %}
<span class="opacity-75">(full)</span>

@@ -149,38 +183,18 @@

<span class="badge bg-secondary bg-opacity-10 text-secondary">No policies</span>
{% endfor %}
</div>
</div>
<button class="btn btn-outline-primary btn-sm w-100" type="button" data-policy-editor data-access-key="{{ user.access_key }}">
<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-1" viewBox="0 0 16 16">
<path d="M8 4.754a3.246 3.246 0 1 0 0 6.492 3.246 3.246 0 0 0 0-6.492zM5.754 8a2.246 2.246 0 1 1 4.492 0 2.246 2.246 0 0 1-4.492 0z"/>
<path d="M9.796 1.343c-.527-1.79-3.065-1.79-3.592 0l-.094.319a.873.873 0 0 1-1.255.52l-.292-.16c-1.64-.892-3.433.902-2.54 2.541l.159.292a.873.873 0 0 1-.52 1.255l-.319.094c-1.79.527-1.79 3.065 0 3.592l.319.094a.873.873 0 0 1 .52 1.255l-.16.292c-.892 1.64.901 3.434 2.541 2.54l.292-.159a.873.873 0 0 1 1.255.52l.094.319c.527 1.79 3.065 1.79 3.592 0l.094-.319a.873.873 0 0 1 1.255-.52l.292.16c1.64.893 3.434-.902 2.54-2.541l-.159-.292a.873.873 0 0 1 .52-1.255l.319-.094c1.79-.527 1.79-3.065 0-3.592l-.319-.094a.873.873 0 0 1-.52-1.255l.16-.292c.893-1.64-.902-3.433-2.541-2.54l-.292.159a.873.873 0 0 1-1.255-.52l-.094-.319z"/>
</svg>
Manage Policies
</button>
</div>
</div>
</div>
{% endfor %}
</div>
{% else %}
<div class="empty-state text-center py-5">
@@ -203,7 +217,6 @@
{% endif %}
</div>

-<!-- Create User Modal -->
<div class="modal fade" id="createUserModal" tabindex="-1" aria-hidden="true">
<div class="modal-dialog modal-dialog-centered">
<div class="modal-content">
@@ -252,7 +265,6 @@
</div>
</div>

-<!-- Policy Editor Modal -->
<div class="modal fade" id="policyEditorModal" tabindex="-1" aria-hidden="true">
<div class="modal-dialog modal-lg modal-dialog-centered">
<div class="modal-content">
@@ -303,7 +315,6 @@
</div>
</div>

-<!-- Edit User Modal -->
<div class="modal fade" id="editUserModal" tabindex="-1" aria-hidden="true">
<div class="modal-dialog modal-dialog-centered">
<div class="modal-content">
@@ -338,15 +349,14 @@
</div>
</div>

-<!-- Delete User Modal -->
<div class="modal fade" id="deleteUserModal" tabindex="-1" aria-hidden="true">
<div class="modal-dialog modal-dialog-centered">
<div class="modal-content">
<div class="modal-header border-0 pb-0">
<h1 class="modal-title fs-5 fw-semibold">
<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" class="text-danger" viewBox="0 0 16 16">
-<path d="M1 14s-1 0-1-1 1-4 6-4 6 3 6 4-1 1-1 1H1zm5-6a3 3 0 1 0 0-6 3 3 0 0 0 0 6z"/>
-<path fill-rule="evenodd" d="M11 1.5v1h5v1h-1v9a2 2 0 0 1-2 2H3a2 2 0 0 1-2-2v-9H0v-1h5v-1a1 1 0 0 1 1-1h4a1 1 0 0 1 1 1zM4.118 4 4 4.059V13a1 1 0 0 0 1 1h6a1 1 0 0 0 1-1V4.059L11.882 4H4.118z"/>
+<path d="M11 5a3 3 0 1 1-6 0 3 3 0 0 1 6 0M8 7a2 2 0 1 0 0-4 2 2 0 0 0 0 4m.256 7a4.5 4.5 0 0 1-.229-1.004H3c.001-.246.154-.986.832-1.664C4.484 10.68 5.711 10 8 10q.39 0 .74.025c.226-.341.496-.65.804-.918Q9.077 9.014 8 9c-5 0-6 3-6 4s1 1 1 1h5.256Z"/>
+<path d="M12.5 16a3.5 3.5 0 1 0 0-7 3.5 3.5 0 0 0 0 7m-.646-4.854.646.647.646-.647a.5.5 0 0 1 .708.708l-.647.646.647.646a.5.5 0 0 1-.708.708l-.646-.647-.646.647a.5.5 0 0 1-.708-.708l.647-.646-.647-.646a.5.5 0 0 1 .708-.708"/>
</svg>
Delete User
</h1>
@@ -382,7 +392,6 @@
</div>
</div>

-<!-- Rotate Secret Modal -->
<div class="modal fade" id="rotateSecretModal" tabindex="-1" aria-hidden="true">
<div class="modal-dialog modal-dialog-centered">
<div class="modal-content">
@@ -445,272 +454,20 @@

{% block extra_scripts %}
{{ super() }}
+<script src="{{ url_for('static', filename='js/iam-management.js') }}"></script>
<script>
-(function () {
-const currentUserKey = {{ principal.access_key | tojson }};
-const configCopyButtons = document.querySelectorAll('.config-copy');
-configCopyButtons.forEach((button) => {
-button.addEventListener('click', async () => {
-const targetId = button.dataset.copyTarget;
-const target = document.getElementById(targetId);
-if (!target) return;
-const text = target.innerText;
-try {
-await navigator.clipboard.writeText(text);
-button.textContent = 'Copied!';
-setTimeout(() => {
-button.textContent = 'Copy JSON';
-}, 1500);
-} catch (err) {
-console.error('Unable to copy IAM config', err);
-}
-});
-});
-
-const secretCopyButton = document.querySelector('[data-secret-copy]');
-if (secretCopyButton) {
-secretCopyButton.addEventListener('click', async () => {
-const secretInput = document.getElementById('disclosedSecretValue');
-if (!secretInput) return;
-try {
-await navigator.clipboard.writeText(secretInput.value);
-secretCopyButton.textContent = 'Copied!';
-setTimeout(() => {
-secretCopyButton.textContent = 'Copy';
-}, 1500);
-} catch (err) {
-console.error('Unable to copy IAM secret', err);
-}
-});
-}
-
-const iamUsersData = document.getElementById('iamUsersJson');
-const users = iamUsersData ? JSON.parse(iamUsersData.textContent || '[]') : [];
-
-// Policy Editor Logic
-const policyModalEl = document.getElementById('policyEditorModal');
-const policyModal = new bootstrap.Modal(policyModalEl);
-const userLabelEl = document.getElementById('policyEditorUserLabel');
-const userInputEl = document.getElementById('policyEditorUser');
-const textareaEl = document.getElementById('policyEditorDocument');
-const formEl = document.getElementById('policyEditorForm');
-const templateButtons = document.querySelectorAll('[data-policy-template]');
-const iamLocked = {{ iam_locked | tojson }};
-
-if (iamLocked) return;
-
-const userPolicies = (accessKey) => {
-const target = users.find((user) => user.access_key === accessKey);
-return target ? JSON.stringify(target.policies, null, 2) : '';
-};
-
-const applyTemplate = (name) => {
-const templates = {
-full: [
-{
-bucket: '*',
-actions: ['list', 'read', 'write', 'delete', 'share', 'policy', 'replication', 'iam:list_users', 'iam:*'],
-},
-],
-readonly: [
-{
-bucket: '*',
-actions: ['list', 'read'],
-},
-],
-writer: [
-{
-bucket: '*',
-actions: ['list', 'read', 'write'],
-},
-],
-};
-if (templates[name]) {
-textareaEl.value = JSON.stringify(templates[name], null, 2);
-}
-};
-
-templateButtons.forEach((button) => {
-button.addEventListener('click', () => applyTemplate(button.dataset.policyTemplate));
-});
-
-// Create User modal template buttons
-const createUserPoliciesEl = document.getElementById('createUserPolicies');
-const createTemplateButtons = document.querySelectorAll('[data-create-policy-template]');
-
-const applyCreateTemplate = (name) => {
-const templates = {
-full: [
-{
-bucket: '*',
-actions: ['list', 'read', 'write', 'delete', 'share', 'policy', 'replication', 'iam:list_users', 'iam:*'],
-},
-],
-readonly: [
-{
-bucket: '*',
-actions: ['list', 'read'],
-},
-],
-writer: [
-{
-bucket: '*',
-actions: ['list', 'read', 'write'],
-},
-],
-};
-if (templates[name] && createUserPoliciesEl) {
-createUserPoliciesEl.value = JSON.stringify(templates[name], null, 2);
-}
-};
-
-createTemplateButtons.forEach((button) => {
-button.addEventListener('click', () => applyCreateTemplate(button.dataset.createPolicyTemplate));
-});
-
-formEl?.addEventListener('submit', (event) => {
-const key = userInputEl.value;
-if (!key) {
-event.preventDefault();
-return;
-}
-const template = formEl.dataset.actionTemplate;
-formEl.action = template.replace('ACCESS_KEY_PLACEHOLDER', key);
-});
-
-document.querySelectorAll('[data-policy-editor]').forEach((button) => {
-button.addEventListener('click', () => {
-const key = button.getAttribute('data-access-key');
-if (!key) return;
-
-userLabelEl.textContent = key;
-userInputEl.value = key;
-textareaEl.value = userPolicies(key);
-
-policyModal.show();
-});
-});
-
-// Edit User Logic
-const editUserModal = new bootstrap.Modal(document.getElementById('editUserModal'));
-const editUserForm = document.getElementById('editUserForm');
-const editUserDisplayName = document.getElementById('editUserDisplayName');
-
-document.querySelectorAll('[data-edit-user]').forEach(btn => {
-btn.addEventListener('click', () => {
-const key = btn.dataset.editUser;
-const name = btn.dataset.displayName;
-editUserDisplayName.value = name;
-editUserForm.action = "{{ url_for('ui.update_iam_user', access_key='ACCESS_KEY') }}".replace('ACCESS_KEY', key);
-editUserModal.show();
-});
-});
-
-// Delete User Logic
-const deleteUserModal = new bootstrap.Modal(document.getElementById('deleteUserModal'));
-const deleteUserForm = document.getElementById('deleteUserForm');
-const deleteUserLabel = document.getElementById('deleteUserLabel');
-const deleteSelfWarning = document.getElementById('deleteSelfWarning');
-
-document.querySelectorAll('[data-delete-user]').forEach(btn => {
-btn.addEventListener('click', () => {
-const key = btn.dataset.deleteUser;
-deleteUserLabel.textContent = key;
-deleteUserForm.action = "{{ url_for('ui.delete_iam_user', access_key='ACCESS_KEY') }}".replace('ACCESS_KEY', key);
-
-if (key === currentUserKey) {
-deleteSelfWarning.classList.remove('d-none');
-} else {
-deleteSelfWarning.classList.add('d-none');
-}
-
-deleteUserModal.show();
-});
-});
-
-// Rotate Secret Logic
-const rotateSecretModal = new bootstrap.Modal(document.getElementById('rotateSecretModal'));
-const rotateUserLabel = document.getElementById('rotateUserLabel');
-const confirmRotateBtn = document.getElementById('confirmRotateBtn');
-const rotateCancelBtn = document.getElementById('rotateCancelBtn');
-const rotateDoneBtn = document.getElementById('rotateDoneBtn');
-const rotateSecretConfirm = document.getElementById('rotateSecretConfirm');
-const rotateSecretResult = document.getElementById('rotateSecretResult');
-const newSecretKeyInput = document.getElementById('newSecretKey');
-const copyNewSecretBtn = document.getElementById('copyNewSecret');
-let currentRotateKey = null;
-
-document.querySelectorAll('[data-rotate-user]').forEach(btn => {
-btn.addEventListener('click', () => {
-currentRotateKey = btn.dataset.rotateUser;
-rotateUserLabel.textContent = currentRotateKey;
-
-// Reset Modal State
-rotateSecretConfirm.classList.remove('d-none');
-rotateSecretResult.classList.add('d-none');
-confirmRotateBtn.classList.remove('d-none');
-rotateCancelBtn.classList.remove('d-none');
-rotateDoneBtn.classList.add('d-none');
-
-rotateSecretModal.show();
-});
-});
-
-confirmRotateBtn.addEventListener('click', async () => {
-if (!currentRotateKey) return;
-
-confirmRotateBtn.disabled = true;
-confirmRotateBtn.textContent = "Rotating...";
-
-try {
-const url = "{{ url_for('ui.rotate_iam_secret', access_key='ACCESS_KEY') }}".replace('ACCESS_KEY', currentRotateKey);
-const response = await fetch(url, {
-method: 'POST',
-headers: {
-'Accept': 'application/json',
-'X-CSRFToken': "{{ csrf_token() }}"
-}
-});
-
-if (!response.ok) {
-const data = await response.json();
-throw new Error(data.error || 'Failed to rotate secret');
-}
-
-const data = await response.json();
-newSecretKeyInput.value = data.secret_key;
-
-// Show Result
-rotateSecretConfirm.classList.add('d-none');
-rotateSecretResult.classList.remove('d-none');
-confirmRotateBtn.classList.add('d-none');
-rotateCancelBtn.classList.add('d-none');
-rotateDoneBtn.classList.remove('d-none');
-
-} catch (err) {
-if (window.showToast) {
-window.showToast(err.message, 'Error', 'danger');
-}
-rotateSecretModal.hide();
-} finally {
-confirmRotateBtn.disabled = false;
-confirmRotateBtn.textContent = "Rotate Key";
-}
-});
-
-copyNewSecretBtn.addEventListener('click', async () => {
-try {
-await navigator.clipboard.writeText(newSecretKeyInput.value);
-copyNewSecretBtn.textContent = 'Copied!';
-setTimeout(() => copyNewSecretBtn.textContent = 'Copy', 1500);
-} catch (err) {
-console.error('Failed to copy', err);
-}
-});
-
-rotateDoneBtn.addEventListener('click', () => {
-window.location.reload();
-});
-})();
+IAMManagement.init({
+users: JSON.parse(document.getElementById('iamUsersJson').textContent || '[]'),
+currentUserKey: {{ principal.access_key | tojson }},
+iamLocked: {{ iam_locked | tojson }},
+csrfToken: "{{ csrf_token() }}",
+endpoints: {
+createUser: "{{ url_for('ui.create_iam_user') }}",
+updateUser: "{{ url_for('ui.update_iam_user', access_key='ACCESS_KEY') }}",
+deleteUser: "{{ url_for('ui.delete_iam_user', access_key='ACCESS_KEY') }}",
+updatePolicies: "{{ url_for('ui.update_iam_policies', access_key='ACCESS_KEY') }}",
+rotateSecret: "{{ url_for('ui.rotate_iam_secret', access_key='ACCESS_KEY') }}"
+}
+});
</script>
{% endblock %}
@@ -35,7 +35,7 @@
<div class="card shadow-lg login-card position-relative">
<div class="card-body p-4 p-md-5">
<div class="text-center mb-4 d-lg-none">
-<img src="{{ url_for('static', filename='images/MyFISO.png') }}" alt="MyFSIO" width="48" height="48" class="mb-3 rounded-3">
+<img src="{{ url_for('static', filename='images/MyFSIO.png') }}" alt="MyFSIO" width="48" height="48" class="mb-3 rounded-3">
<h2 class="h4 fw-bold">MyFSIO</h2>
</div>
<h2 class="h4 mb-1 d-none d-lg-block">Sign in</h2>
@@ -6,11 +6,11 @@
<p class="text-muted mb-0">Real-time server performance and storage usage</p>
</div>
<div class="d-flex gap-2 align-items-center">
-<span class="d-flex align-items-center gap-2 text-muted small">
+<span class="d-flex align-items-center gap-2 text-muted small" id="metricsLiveIndicator">
<span class="live-indicator"></span>
-Live
+Auto-refresh: <span id="refreshCountdown">5</span>s
</span>
-<button class="btn btn-outline-secondary btn-sm" onclick="window.location.reload()">
+<button class="btn btn-outline-secondary btn-sm" id="refreshMetricsBtn">
<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="bi bi-arrow-clockwise me-1" viewBox="0 0 16 16">
<path fill-rule="evenodd" d="M8 3a5 5 0 1 0 4.546 2.914.5.5 0 0 1 .908-.417A6 6 0 1 1 8 2v1z"/>
<path d="M8 4.466V.534a.25.25 0 0 1 .41-.192l2.36 1.966c.12.1.12.284 0 .384L8.41 4.658A.25.25 0 0 1 8 4.466z"/>
@@ -32,15 +32,13 @@
</svg>
</div>
</div>
-<h2 class="display-6 fw-bold mb-2 stat-value">{{ cpu_percent }}<span class="fs-4 fw-normal text-muted">%</span></h2>
+<h2 class="display-6 fw-bold mb-2 stat-value"><span data-metric="cpu_percent">{{ cpu_percent }}</span><span class="fs-4 fw-normal text-muted">%</span></h2>
<div class="progress" style="height: 8px; border-radius: 4px;">
-<div class="progress-bar {% if cpu_percent > 80 %}bg-danger{% elif cpu_percent > 50 %}bg-warning{% else %}bg-primary{% endif %}" role="progressbar" style="width: {{ cpu_percent }}%"></div>
+<div class="progress-bar bg-primary" data-metric="cpu_bar" role="progressbar" style="width: {{ cpu_percent }}%"></div>
</div>
<div class="mt-2 d-flex justify-content-between">
<small class="text-muted">Current load</small>
-<small class="{% if cpu_percent > 80 %}text-danger{% elif cpu_percent > 50 %}text-warning{% else %}text-success{% endif %}">
-{% if cpu_percent > 80 %}High{% elif cpu_percent > 50 %}Medium{% else %}Normal{% endif %}
-</small>
+<small data-metric="cpu_status" class="text-success">Normal</small>
</div>
</div>
</div>
@@ -57,13 +55,13 @@
</svg>
</div>
</div>
-<h2 class="display-6 fw-bold mb-2 stat-value">{{ memory.percent }}<span class="fs-4 fw-normal text-muted">%</span></h2>
+<h2 class="display-6 fw-bold mb-2 stat-value"><span data-metric="memory_percent">{{ memory.percent }}</span><span class="fs-4 fw-normal text-muted">%</span></h2>
<div class="progress" style="height: 8px; border-radius: 4px;">
-<div class="progress-bar bg-info" role="progressbar" style="width: {{ memory.percent }}%"></div>
+<div class="progress-bar bg-info" data-metric="memory_bar" role="progressbar" style="width: {{ memory.percent }}%"></div>
</div>
<div class="mt-2 d-flex justify-content-between">
-<small class="text-muted">{{ memory.used }} used</small>
+<small class="text-muted"><span data-metric="memory_used">{{ memory.used }}</span> used</small>
-<small class="text-muted">{{ memory.total }} total</small>
+<small class="text-muted"><span data-metric="memory_total">{{ memory.total }}</span> total</small>
</div>
</div>
</div>
@@ -81,13 +79,13 @@
</svg>
</div>
</div>
-<h2 class="display-6 fw-bold mb-2 stat-value">{{ disk.percent }}<span class="fs-4 fw-normal text-muted">%</span></h2>
+<h2 class="display-6 fw-bold mb-2 stat-value"><span data-metric="disk_percent">{{ disk.percent }}</span><span class="fs-4 fw-normal text-muted">%</span></h2>
<div class="progress" style="height: 8px; border-radius: 4px;">
-<div class="progress-bar {% if disk.percent > 90 %}bg-danger{% elif disk.percent > 75 %}bg-warning{% else %}bg-warning{% endif %}" role="progressbar" style="width: {{ disk.percent }}%"></div>
+<div class="progress-bar bg-warning" data-metric="disk_bar" role="progressbar" style="width: {{ disk.percent }}%"></div>
</div>
<div class="mt-2 d-flex justify-content-between">
-<small class="text-muted">{{ disk.free }} free</small>
+<small class="text-muted"><span data-metric="disk_free">{{ disk.free }}</span> free</small>
-<small class="text-muted">{{ disk.total }} total</small>
+<small class="text-muted"><span data-metric="disk_total">{{ disk.total }}</span> total</small>
</div>
</div>
</div>
@@ -104,15 +102,15 @@
</svg>
</div>
</div>
-<h2 class="display-6 fw-bold mb-2 stat-value">{{ app.storage_used }}</h2>
+<h2 class="display-6 fw-bold mb-2 stat-value" data-metric="storage_used">{{ app.storage_used }}</h2>
<div class="d-flex gap-3 mt-3">
<div class="text-center flex-fill">
-<div class="h5 fw-bold mb-0">{{ app.buckets }}</div>
+<div class="h5 fw-bold mb-0" data-metric="buckets_count">{{ app.buckets }}</div>
<small class="text-muted">Buckets</small>
</div>
<div class="vr"></div>
<div class="text-center flex-fill">
-<div class="h5 fw-bold mb-0">{{ app.objects }}</div>
+<div class="h5 fw-bold mb-0" data-metric="objects_count">{{ app.objects }}</div>
<small class="text-muted">Objects</small>
</div>
</div>
@@ -219,24 +217,42 @@
</div>

<div class="col-lg-4">
-<div class="card shadow-sm border-0 h-100 overflow-hidden" style="background: linear-gradient(135deg, #3b82f6 0%, #8b5cf6 100%);">
+{% set has_issues = (cpu_percent > 80) or (memory.percent > 85) or (disk.percent > 90) %}
+<div class="card shadow-sm border-0 h-100 overflow-hidden" style="background: linear-gradient(135deg, {% if has_issues %}#ef4444 0%, #f97316{% else %}#3b82f6 0%, #8b5cf6{% endif %} 100%);">
<div class="card-body p-4 d-flex flex-column justify-content-center text-white position-relative">
<div class="position-absolute top-0 end-0 opacity-25" style="transform: translate(20%, -20%);">
-<svg xmlns="http://www.w3.org/2000/svg" width="160" height="160" fill="currentColor" class="bi bi-cloud-check" viewBox="0 0 16 16">
+<svg xmlns="http://www.w3.org/2000/svg" width="160" height="160" fill="currentColor" class="bi bi-{% if has_issues %}exclamation-triangle{% else %}cloud-check{% endif %}" viewBox="0 0 16 16">
+{% if has_issues %}
+<path d="M7.938 2.016A.13.13 0 0 1 8.002 2a.13.13 0 0 1 .063.016.146.146 0 0 1 .054.057l6.857 11.667c.036.06.035.124.002.183a.163.163 0 0 1-.054.06.116.116 0 0 1-.066.017H1.146a.115.115 0 0 1-.066-.017.163.163 0 0 1-.054-.06.176.176 0 0 1 .002-.183L7.884 2.073a.147.147 0 0 1 .054-.057zm1.044-.45a1.13 1.13 0 0 0-1.96 0L.165 13.233c-.457.778.091 1.767.98 1.767h13.713c.889 0 1.438-.99.98-1.767L8.982 1.566z"/>
+<path d="M7.002 12a1 1 0 1 1 2 0 1 1 0 0 1-2 0zM7.1 5.995a.905.905 0 1 1 1.8 0l-.35 3.507a.552.552 0 0 1-1.1 0L7.1 5.995z"/>
+{% else %}
<path fill-rule="evenodd" d="M10.354 6.146a.5.5 0 0 1 0 .708l-3 3a.5.5 0 0 1-.708 0l-1.5-1.5a.5.5 0 1 1 .708-.708L7 8.793l2.646-2.647a.5.5 0 0 1 .708 0z"/>
<path d="M4.406 3.342A5.53 5.53 0 0 1 8 2c2.69 0 4.923 2 5.166 4.579C14.758 6.804 16 8.137 16 9.773 16 11.569 14.502 13 12.687 13H3.781C1.708 13 0 11.366 0 9.318c0-1.763 1.266-3.223 2.942-3.593.143-.863.698-1.723 1.464-2.383z"/>
+{% endif %}
</svg>
</div>
<div class="mb-3">
-<span class="badge bg-white text-primary fw-semibold px-3 py-2">
+<span class="badge bg-white {% if has_issues %}text-danger{% else %}text-primary{% endif %} fw-semibold px-3 py-2">
-<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="bi bi-check-circle-fill me-1" viewBox="0 0 16 16">
+<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="bi bi-{% if has_issues %}exclamation-circle-fill{% else %}check-circle-fill{% endif %} me-1" viewBox="0 0 16 16">
+{% if has_issues %}
+<path d="M16 8A8 8 0 1 1 0 8a8 8 0 0 1 16 0zM8 4a.905.905 0 0 0-.9.995l.35 3.507a.552.552 0 0 0 1.1 0l.35-3.507A.905.905 0 0 0 8 4zm.002 6a1 1 0 1 0 0 2 1 1 0 0 0 0-2z"/>
+{% else %}
<path d="M16 8A8 8 0 1 1 0 8a8 8 0 0 1 16 0zm-3.97-3.03a.75.75 0 0 0-1.08.022L7.477 9.417 5.384 7.323a.75.75 0 0 0-1.06 1.06L6.97 11.03a.75.75 0 0 0 1.079-.02l3.992-4.99a.75.75 0 0 0-.01-1.05z"/>
+{% endif %}
</svg>
v{{ app.version }}
</span>
</div>
-<h4 class="card-title fw-bold mb-3">System Status</h4>
+<h4 class="card-title fw-bold mb-3">System Health</h4>
-<p class="card-text opacity-90 mb-4">All systems operational. Your storage infrastructure is running smoothly with no detected issues.</p>
+{% if has_issues %}
+<ul class="list-unstyled small mb-4 opacity-90">
+{% if cpu_percent > 80 %}<li class="mb-1">CPU usage is high ({{ cpu_percent }}%)</li>{% endif %}
+{% if memory.percent > 85 %}<li class="mb-1">Memory usage is high ({{ memory.percent }}%)</li>{% endif %}
+{% if disk.percent > 90 %}<li class="mb-1">Disk space is critically low ({{ disk.percent }}% used)</li>{% endif %}
+</ul>
+{% else %}
+<p class="card-text opacity-90 mb-4 small">All resources are within normal operating parameters.</p>
+{% endif %}
<div class="d-flex gap-4">
<div>
<div class="h3 fw-bold mb-0">{{ app.uptime_days }}d</div>
@@ -251,4 +267,629 @@
</div>
</div>
</div>

+{% if operation_metrics_enabled %}
+<div class="row g-4 mt-2">
+<div class="col-12">
+<div class="card shadow-sm border-0">
+<div class="card-header bg-transparent border-0 pt-4 px-4 d-flex justify-content-between align-items-center">
+<h5 class="card-title mb-0 fw-semibold">API Operations</h5>
+<div class="d-flex align-items-center gap-3">
+<span class="small text-muted" id="opStatus">Loading...</span>
+<button class="btn btn-outline-secondary btn-sm" id="resetOpMetricsBtn" title="Reset current window">
+<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="bi bi-arrow-counterclockwise" viewBox="0 0 16 16">
+<path fill-rule="evenodd" d="M8 3a5 5 0 1 1-4.546 2.914.5.5 0 0 0-.908-.417A6 6 0 1 0 8 2v1z"/>
+<path d="M8 4.466V.534a.25.25 0 0 0-.41-.192L5.23 2.308a.25.25 0 0 0 0 .384l2.36 1.966A.25.25 0 0 0 8 4.466z"/>
+</svg>
+</button>
+</div>
+</div>
+<div class="card-body p-4">
+<div class="row g-3 mb-4">
+<div class="col-6 col-md-4 col-lg-2">
+<div class="text-center p-3 bg-light rounded h-100">
+<h4 class="fw-bold mb-1" id="opTotalRequests">0</h4>
+<small class="text-muted">Requests</small>
+</div>
+</div>
+<div class="col-6 col-md-4 col-lg-2">
+<div class="text-center p-3 bg-light rounded h-100">
+<h4 class="fw-bold mb-1 text-success" id="opSuccessRate">0%</h4>
+<small class="text-muted">Success</small>
+</div>
+</div>
+<div class="col-6 col-md-4 col-lg-2">
+<div class="text-center p-3 bg-light rounded h-100">
+<h4 class="fw-bold mb-1 text-danger" id="opErrorCount">0</h4>
+<small class="text-muted">Errors</small>
+</div>
+</div>
+<div class="col-6 col-md-4 col-lg-2">
+<div class="text-center p-3 bg-light rounded h-100">
+<h4 class="fw-bold mb-1 text-info" id="opAvgLatency">0ms</h4>
+<small class="text-muted">Latency</small>
+</div>
+</div>
+<div class="col-6 col-md-4 col-lg-2">
+<div class="text-center p-3 bg-light rounded h-100">
+<h4 class="fw-bold mb-1 text-primary" id="opBytesIn">0 B</h4>
+<small class="text-muted">Bytes In</small>
+</div>
+</div>
+<div class="col-6 col-md-4 col-lg-2">
+<div class="text-center p-3 bg-light rounded h-100">
+<h4 class="fw-bold mb-1 text-secondary" id="opBytesOut">0 B</h4>
+<small class="text-muted">Bytes Out</small>
+</div>
+</div>
+</div>
+<div class="row g-4">
+<div class="col-lg-6">
+<div class="bg-light rounded p-3">
+<h6 class="text-muted small fw-bold text-uppercase mb-3">Requests by Method</h6>
+<div style="height: 220px; display: flex; align-items: center; justify-content: center;">
+<canvas id="methodChart"></canvas>
+</div>
+</div>
+</div>
+<div class="col-lg-6">
+<div class="bg-light rounded p-3">
+<h6 class="text-muted small fw-bold text-uppercase mb-3">Requests by Status</h6>
+<div style="height: 220px;">
+<canvas id="statusChart"></canvas>
+</div>
+</div>
+</div>
+</div>
+<div class="row g-4 mt-1">
+<div class="col-lg-6">
+<div class="bg-light rounded p-3">
+<h6 class="text-muted small fw-bold text-uppercase mb-3">Requests by Endpoint</h6>
+<div style="height: 180px;">
+<canvas id="endpointChart"></canvas>
+</div>
+</div>
+</div>
+<div class="col-lg-6">
+<div class="bg-light rounded p-3 h-100 d-flex flex-column">
+<div class="d-flex justify-content-between align-items-start mb-3">
+<h6 class="text-muted small fw-bold text-uppercase mb-0">S3 Error Codes</h6>
+<span class="badge bg-secondary-subtle text-secondary" style="font-size: 0.65rem;" title="Tracks S3 API errors like NoSuchKey, AccessDenied, etc.">API Only</span>
+</div>
+<div class="flex-grow-1 d-flex flex-column" style="min-height: 150px;">
+<div class="d-flex border-bottom pb-2 mb-2" style="font-size: 0.75rem;">
|
||||||
|
<div class="text-muted fw-semibold" style="flex: 1;">Code</div>
|
||||||
|
<div class="text-muted fw-semibold text-end" style="width: 60px;">Count</div>
|
||||||
|
<div class="text-muted fw-semibold text-end" style="width: 100px;">Distribution</div>
|
||||||
|
</div>
|
||||||
|
<div id="errorCodesContainer" class="flex-grow-1" style="overflow-y: auto;">
|
||||||
|
<div id="errorCodesBody">
|
||||||
|
<div class="text-muted small text-center py-4">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" fill="currentColor" class="bi bi-check-circle mb-2 text-success" viewBox="0 0 16 16">
|
||||||
|
<path d="M8 15A7 7 0 1 1 8 1a7 7 0 0 1 0 14zm0 1A8 8 0 1 0 8 0a8 8 0 0 0 0 16z"/>
|
||||||
|
<path d="M10.97 4.97a.235.235 0 0 0-.02.022L7.477 9.417 5.384 7.323a.75.75 0 0 0-1.06 1.06L6.97 11.03a.75.75 0 0 0 1.079-.02l3.992-4.99a.75.75 0 0 0-1.071-1.05z"/>
|
||||||
|
</svg>
|
||||||
|
<div>No S3 API errors</div>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
{% endif %}
|
||||||
|
|
||||||
|
{% if metrics_history_enabled %}
<div class="row g-4 mt-2">
<div class="col-12">
<div class="card shadow-sm border-0">
<div class="card-header bg-transparent border-0 pt-4 px-4 d-flex justify-content-between align-items-center">
<h5 class="card-title mb-0 fw-semibold">Metrics History</h5>
<div class="d-flex gap-2 align-items-center">
<select class="form-select form-select-sm" id="historyTimeRange" style="width: auto;">
<option value="1">Last 1 hour</option>
<option value="6">Last 6 hours</option>
<option value="24" selected>Last 24 hours</option>
<option value="168">Last 7 days</option>
</select>
</div>
</div>
<div class="card-body p-4">
<div class="row">
<div class="col-md-4 mb-4">
<h6 class="text-muted small fw-bold text-uppercase mb-3">CPU Usage</h6>
<canvas id="cpuHistoryChart" height="200"></canvas>
</div>
<div class="col-md-4 mb-4">
<h6 class="text-muted small fw-bold text-uppercase mb-3">Memory Usage</h6>
<canvas id="memoryHistoryChart" height="200"></canvas>
</div>
<div class="col-md-4 mb-4">
<h6 class="text-muted small fw-bold text-uppercase mb-3">Disk Usage</h6>
<canvas id="diskHistoryChart" height="200"></canvas>
</div>
</div>
<p class="text-muted small mb-0 text-center" id="historyStatus">Loading history data...</p>
</div>
</div>
</div>
</div>
{% endif %}
{% endblock %}

{% block extra_scripts %}
{% if metrics_history_enabled or operation_metrics_enabled %}
<script src="https://cdn.jsdelivr.net/npm/chart.js@4.4.1/dist/chart.umd.min.js"></script>
{% endif %}
<script>
(function() {
  var refreshInterval = 5000;
  var countdown = 5;
  var countdownEl = document.getElementById('refreshCountdown');
  var refreshBtn = document.getElementById('refreshMetricsBtn');
  var countdownTimer = null;
  var fetchTimer = null;

  function updateMetrics() {
    fetch('/ui/metrics/api')
      .then(function(resp) { return resp.json(); })
      .then(function(data) {
        var el;
        el = document.querySelector('[data-metric="cpu_percent"]');
        if (el) el.textContent = data.cpu_percent.toFixed(2);
        el = document.querySelector('[data-metric="cpu_bar"]');
        if (el) {
          el.style.width = data.cpu_percent + '%';
          el.className = 'progress-bar ' + (data.cpu_percent > 80 ? 'bg-danger' : data.cpu_percent > 50 ? 'bg-warning' : 'bg-primary');
        }
        el = document.querySelector('[data-metric="cpu_status"]');
        if (el) {
          el.textContent = data.cpu_percent > 80 ? 'High' : data.cpu_percent > 50 ? 'Medium' : 'Normal';
          el.className = data.cpu_percent > 80 ? 'text-danger' : data.cpu_percent > 50 ? 'text-warning' : 'text-success';
        }

        el = document.querySelector('[data-metric="memory_percent"]');
        if (el) el.textContent = data.memory.percent.toFixed(2);
        el = document.querySelector('[data-metric="memory_bar"]');
        if (el) el.style.width = data.memory.percent + '%';
        el = document.querySelector('[data-metric="memory_used"]');
        if (el) el.textContent = data.memory.used;
        el = document.querySelector('[data-metric="memory_total"]');
        if (el) el.textContent = data.memory.total;

        el = document.querySelector('[data-metric="disk_percent"]');
        if (el) el.textContent = data.disk.percent.toFixed(2);
        el = document.querySelector('[data-metric="disk_bar"]');
        if (el) {
          el.style.width = data.disk.percent + '%';
          el.className = 'progress-bar ' + (data.disk.percent > 90 ? 'bg-danger' : 'bg-warning');
        }
        el = document.querySelector('[data-metric="disk_free"]');
        if (el) el.textContent = data.disk.free;
        el = document.querySelector('[data-metric="disk_total"]');
        if (el) el.textContent = data.disk.total;

        el = document.querySelector('[data-metric="storage_used"]');
        if (el) el.textContent = data.app.storage_used;
        el = document.querySelector('[data-metric="buckets_count"]');
        if (el) el.textContent = data.app.buckets;
        el = document.querySelector('[data-metric="objects_count"]');
        if (el) el.textContent = data.app.objects;

        countdown = 5;
      })
      .catch(function(err) {
        console.error('Metrics fetch error:', err);
      });
  }

  function startCountdown() {
    if (countdownTimer) clearInterval(countdownTimer);
    countdown = 5;
    if (countdownEl) countdownEl.textContent = countdown;
    countdownTimer = setInterval(function() {
      countdown--;
      if (countdownEl) countdownEl.textContent = countdown;
      if (countdown <= 0) {
        countdown = 5;
      }
    }, 1000);
  }

  function startPolling() {
    if (fetchTimer) clearInterval(fetchTimer);
    fetchTimer = setInterval(function() {
      if (!document.hidden) {
        updateMetrics();
      }
    }, refreshInterval);
    startCountdown();
  }

  if (refreshBtn) {
    refreshBtn.addEventListener('click', function() {
      updateMetrics();
      countdown = 5;
      if (countdownEl) countdownEl.textContent = countdown;
    });
  }

  document.addEventListener('visibilitychange', function() {
    if (!document.hidden) {
      updateMetrics();
      startPolling();
    }
  });

  startPolling();
})();

{% if operation_metrics_enabled %}
(function() {
  var methodChart = null;
  var statusChart = null;
  var endpointChart = null;
  var opStatus = document.getElementById('opStatus');
  var opTimer = null;
  var methodColors = {
    'GET': '#0d6efd',
    'PUT': '#198754',
    'POST': '#ffc107',
    'DELETE': '#dc3545',
    'HEAD': '#6c757d',
    'OPTIONS': '#0dcaf0'
  };
  var statusColors = {
    '2xx': '#198754',
    '3xx': '#0dcaf0',
    '4xx': '#ffc107',
    '5xx': '#dc3545'
  };
  var endpointColors = {
    'object': '#0d6efd',
    'bucket': '#198754',
    'ui': '#6c757d',
    'service': '#0dcaf0',
    'kms': '#ffc107'
  };

  function formatBytes(bytes) {
    if (bytes === 0) return '0 B';
    var k = 1024;
    var sizes = ['B', 'KB', 'MB', 'GB', 'TB'];
    var i = Math.floor(Math.log(bytes) / Math.log(k));
    return parseFloat((bytes / Math.pow(k, i)).toFixed(1)) + ' ' + sizes[i];
  }

  function initOpCharts() {
    var methodCtx = document.getElementById('methodChart');
    var statusCtx = document.getElementById('statusChart');
    var endpointCtx = document.getElementById('endpointChart');

    if (methodCtx) {
      methodChart = new Chart(methodCtx, {
        type: 'doughnut',
        data: {
          labels: [],
          datasets: [{
            data: [],
            backgroundColor: []
          }]
        },
        options: {
          responsive: true,
          maintainAspectRatio: false,
          animation: false,
          plugins: {
            legend: { position: 'right', labels: { boxWidth: 12, font: { size: 11 } } }
          }
        }
      });
    }

    if (statusCtx) {
      statusChart = new Chart(statusCtx, {
        type: 'bar',
        data: {
          labels: [],
          datasets: [{
            data: [],
            backgroundColor: []
          }]
        },
        options: {
          responsive: true,
          maintainAspectRatio: false,
          animation: false,
          plugins: { legend: { display: false } },
          scales: {
            y: { beginAtZero: true, ticks: { stepSize: 1 } }
          }
        }
      });
    }

    if (endpointCtx) {
      endpointChart = new Chart(endpointCtx, {
        type: 'bar',
        data: {
          labels: [],
          datasets: [{
            data: [],
            backgroundColor: []
          }]
        },
        options: {
          responsive: true,
          maintainAspectRatio: false,
          indexAxis: 'y',
          animation: false,
          plugins: { legend: { display: false } },
          scales: {
            x: { beginAtZero: true, ticks: { stepSize: 1 } }
          }
        }
      });
    }
  }

  function updateOpMetrics() {
    if (document.hidden) return;
    fetch('/ui/metrics/operations')
      .then(function(r) { return r.json(); })
      .then(function(data) {
        if (!data.enabled || !data.stats) {
          if (opStatus) opStatus.textContent = 'Operation metrics not available';
          return;
        }
        var stats = data.stats;
        var totals = stats.totals || {};

        var totalEl = document.getElementById('opTotalRequests');
        var successEl = document.getElementById('opSuccessRate');
        var errorEl = document.getElementById('opErrorCount');
        var latencyEl = document.getElementById('opAvgLatency');
        var bytesInEl = document.getElementById('opBytesIn');
        var bytesOutEl = document.getElementById('opBytesOut');

        if (totalEl) totalEl.textContent = totals.count || 0;
        if (successEl) {
          var rate = totals.count > 0 ? ((totals.success_count / totals.count) * 100).toFixed(1) : 0;
          successEl.textContent = rate + '%';
        }
        if (errorEl) errorEl.textContent = totals.error_count || 0;
        if (latencyEl) latencyEl.textContent = (totals.latency_avg_ms || 0).toFixed(1) + 'ms';
        if (bytesInEl) bytesInEl.textContent = formatBytes(totals.bytes_in || 0);
        if (bytesOutEl) bytesOutEl.textContent = formatBytes(totals.bytes_out || 0);

        if (methodChart && stats.by_method) {
          var methods = Object.keys(stats.by_method);
          var methodData = methods.map(function(m) { return stats.by_method[m].count; });
          var methodBg = methods.map(function(m) { return methodColors[m] || '#6c757d'; });
          methodChart.data.labels = methods;
          methodChart.data.datasets[0].data = methodData;
          methodChart.data.datasets[0].backgroundColor = methodBg;
          methodChart.update('none');
        }

        if (statusChart && stats.by_status_class) {
          var statuses = Object.keys(stats.by_status_class).sort();
          var statusData = statuses.map(function(s) { return stats.by_status_class[s]; });
          var statusBg = statuses.map(function(s) { return statusColors[s] || '#6c757d'; });
          statusChart.data.labels = statuses;
          statusChart.data.datasets[0].data = statusData;
          statusChart.data.datasets[0].backgroundColor = statusBg;
          statusChart.update('none');
        }

        if (endpointChart && stats.by_endpoint) {
          var endpoints = Object.keys(stats.by_endpoint);
          var endpointData = endpoints.map(function(e) { return stats.by_endpoint[e].count; });
          var endpointBg = endpoints.map(function(e) { return endpointColors[e] || '#6c757d'; });
          endpointChart.data.labels = endpoints;
          endpointChart.data.datasets[0].data = endpointData;
          endpointChart.data.datasets[0].backgroundColor = endpointBg;
          endpointChart.update('none');
        }

        var errorBody = document.getElementById('errorCodesBody');
        if (errorBody && stats.error_codes) {
          var errorCodes = Object.entries(stats.error_codes);
          errorCodes.sort(function(a, b) { return b[1] - a[1]; });
          var totalErrors = errorCodes.reduce(function(sum, e) { return sum + e[1]; }, 0);
          errorCodes = errorCodes.slice(0, 10);
          if (errorCodes.length === 0) {
            errorBody.innerHTML = '<div class="text-muted small text-center py-4">' +
              '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" fill="currentColor" class="bi bi-check-circle mb-2 text-success" viewBox="0 0 16 16">' +
              '<path d="M8 15A7 7 0 1 1 8 1a7 7 0 0 1 0 14zm0 1A8 8 0 1 0 8 0a8 8 0 0 0 0 16z"/>' +
              '<path d="M10.97 4.97a.235.235 0 0 0-.02.022L7.477 9.417 5.384 7.323a.75.75 0 0 0-1.06 1.06L6.97 11.03a.75.75 0 0 0 1.079-.02l3.992-4.99a.75.75 0 0 0-1.071-1.05z"/>' +
              '</svg><div>No S3 API errors</div></div>';
          } else {
            errorBody.innerHTML = errorCodes.map(function(e) {
              var pct = totalErrors > 0 ? ((e[1] / totalErrors) * 100).toFixed(0) : 0;
              return '<div class="d-flex align-items-center py-1" style="font-size: 0.8rem;">' +
                '<div style="flex: 1;"><code class="text-danger">' + e[0] + '</code></div>' +
                '<div class="text-end fw-semibold" style="width: 60px;">' + e[1] + '</div>' +
                '<div style="width: 100px; padding-left: 10px;"><div class="progress" style="height: 6px;"><div class="progress-bar bg-danger" style="width: ' + pct + '%"></div></div></div>' +
                '</div>';
            }).join('');
          }
        }

        var windowMins = Math.floor(stats.window_seconds / 60);
        var windowSecs = stats.window_seconds % 60;
        var windowStr = windowMins > 0 ? windowMins + 'm ' + windowSecs + 's' : windowSecs + 's';
        if (opStatus) opStatus.textContent = 'Window: ' + windowStr + ' | ' + new Date().toLocaleTimeString();
      })
      .catch(function(err) {
        console.error('Operation metrics fetch error:', err);
        if (opStatus) opStatus.textContent = 'Failed to load';
      });
  }

  function startOpPolling() {
    if (opTimer) clearInterval(opTimer);
    opTimer = setInterval(updateOpMetrics, 5000);
  }

  var resetBtn = document.getElementById('resetOpMetricsBtn');
  if (resetBtn) {
    resetBtn.addEventListener('click', function() {
      updateOpMetrics();
    });
  }

  document.addEventListener('visibilitychange', function() {
    if (document.hidden) {
      if (opTimer) clearInterval(opTimer);
      opTimer = null;
    } else {
      updateOpMetrics();
      startOpPolling();
    }
  });

  initOpCharts();
  updateOpMetrics();
  startOpPolling();
})();
{% endif %}

{% if metrics_history_enabled %}
(function() {
  var cpuChart = null;
  var memoryChart = null;
  var diskChart = null;
  var historyStatus = document.getElementById('historyStatus');
  var timeRangeSelect = document.getElementById('historyTimeRange');
  var historyTimer = null;
  var MAX_DATA_POINTS = 500;

  function createChart(ctx, label, color) {
    return new Chart(ctx, {
      type: 'line',
      data: {
        labels: [],
        datasets: [{
          label: label,
          data: [],
          borderColor: color,
          backgroundColor: color + '20',
          fill: true,
          tension: 0.3,
          pointRadius: 3,
          pointHoverRadius: 6,
          hitRadius: 10
        }]
      },
      options: {
        responsive: true,
        maintainAspectRatio: true,
        animation: false,
        plugins: {
          legend: { display: false },
          tooltip: {
            callbacks: {
              label: function(ctx) { return ctx.parsed.y.toFixed(2) + '%'; }
            }
          }
        },
        scales: {
          x: {
            display: true,
            ticks: { maxRotation: 0, font: { size: 10 }, autoSkip: true, maxTicksLimit: 10 }
          },
          y: {
            display: true,
            min: 0,
            max: 100,
            ticks: { callback: function(v) { return v + '%'; } }
          }
        }
      }
    });
  }

  function initCharts() {
    var cpuCtx = document.getElementById('cpuHistoryChart');
    var memCtx = document.getElementById('memoryHistoryChart');
    var diskCtx = document.getElementById('diskHistoryChart');
    if (cpuCtx) cpuChart = createChart(cpuCtx, 'CPU %', '#0d6efd');
    if (memCtx) memoryChart = createChart(memCtx, 'Memory %', '#0dcaf0');
    if (diskCtx) diskChart = createChart(diskCtx, 'Disk %', '#ffc107');
  }

  function formatTime(ts) {
    var d = new Date(ts);
    return d.toLocaleTimeString([], { hour: '2-digit', minute: '2-digit' });
  }

  function loadHistory() {
    if (document.hidden) return;
    var hours = timeRangeSelect ? timeRangeSelect.value : 24;
    fetch('/ui/metrics/history?hours=' + hours)
      .then(function(r) { return r.json(); })
      .then(function(data) {
        if (!data.enabled || !data.history || data.history.length === 0) {
          if (historyStatus) historyStatus.textContent = 'No history data available yet. Data is recorded every ' + (data.interval_minutes || 5) + ' minutes.';
          return;
        }
        var history = data.history.slice(-MAX_DATA_POINTS);
        var labels = history.map(function(h) { return formatTime(h.timestamp); });
        var cpuData = history.map(function(h) { return h.cpu_percent; });
        var memData = history.map(function(h) { return h.memory_percent; });
        var diskData = history.map(function(h) { return h.disk_percent; });

        if (cpuChart) {
          cpuChart.data.labels = labels;
          cpuChart.data.datasets[0].data = cpuData;
          cpuChart.update('none');
        }
        if (memoryChart) {
          memoryChart.data.labels = labels;
          memoryChart.data.datasets[0].data = memData;
          memoryChart.update('none');
        }
        if (diskChart) {
          diskChart.data.labels = labels;
          diskChart.data.datasets[0].data = diskData;
          diskChart.update('none');
        }
        if (historyStatus) historyStatus.textContent = 'Showing ' + history.length + ' data points';
      })
      .catch(function(err) {
        console.error('History fetch error:', err);
        if (historyStatus) historyStatus.textContent = 'Failed to load history data';
      });
  }

  function startHistoryPolling() {
    if (historyTimer) clearInterval(historyTimer);
    historyTimer = setInterval(loadHistory, 60000);
  }

  if (timeRangeSelect) {
    timeRangeSelect.addEventListener('change', loadHistory);
  }

  document.addEventListener('visibilitychange', function() {
    if (document.hidden) {
      if (historyTimer) clearInterval(historyTimer);
      historyTimer = null;
    } else {
      loadHistory();
      startHistoryPolling();
    }
  });

  initCharts();
  loadHistory();
  startHistoryPolling();
})();
{% endif %}
</script>
{% endblock %}

@@ -35,6 +35,7 @@ def app(tmp_path: Path):
     flask_app = create_api_app(
         {
             "TESTING": True,
+            "SECRET_KEY": "testing",
             "STORAGE_ROOT": storage_root,
             "IAM_CONFIG": iam_config,
             "BUCKET_POLICY_PATH": bucket_policies,

tests/test_access_logging.py (new file, 339 lines)
@@ -0,0 +1,339 @@
import io
import json
import time
from datetime import datetime, timezone
from pathlib import Path
from unittest.mock import MagicMock, patch

import pytest

from app.access_logging import (
    AccessLogEntry,
    AccessLoggingService,
    LoggingConfiguration,
)
from app.storage import ObjectStorage


class TestAccessLogEntry:
    def test_default_values(self):
        entry = AccessLogEntry()
        assert entry.bucket_owner == "-"
        assert entry.bucket == "-"
        assert entry.remote_ip == "-"
        assert entry.requester == "-"
        assert entry.operation == "-"
        assert entry.http_status == 200
        assert len(entry.request_id) == 16

    def test_to_log_line(self):
        entry = AccessLogEntry(
            bucket_owner="owner123",
            bucket="my-bucket",
            remote_ip="192.168.1.1",
            requester="user456",
            request_id="REQ123456789012",
            operation="REST.PUT.OBJECT",
            key="test/key.txt",
            request_uri="PUT /my-bucket/test/key.txt HTTP/1.1",
            http_status=200,
            bytes_sent=1024,
            object_size=2048,
            total_time_ms=150,
            referrer="http://example.com",
            user_agent="aws-cli/2.0",
            version_id="v1",
        )
        log_line = entry.to_log_line()

        assert "owner123" in log_line
        assert "my-bucket" in log_line
        assert "192.168.1.1" in log_line
        assert "user456" in log_line
        assert "REST.PUT.OBJECT" in log_line
        assert "test/key.txt" in log_line
        assert "200" in log_line

    def test_to_dict(self):
        entry = AccessLogEntry(
            bucket_owner="owner",
            bucket="bucket",
            remote_ip="10.0.0.1",
            requester="admin",
            request_id="ABC123",
            operation="REST.GET.OBJECT",
            key="file.txt",
            request_uri="GET /bucket/file.txt HTTP/1.1",
            http_status=200,
            bytes_sent=512,
            object_size=512,
            total_time_ms=50,
        )
        result = entry.to_dict()

        assert result["bucket_owner"] == "owner"
        assert result["bucket"] == "bucket"
        assert result["remote_ip"] == "10.0.0.1"
        assert result["requester"] == "admin"
        assert result["operation"] == "REST.GET.OBJECT"
        assert result["key"] == "file.txt"
        assert result["http_status"] == 200
        assert result["bytes_sent"] == 512


class TestLoggingConfiguration:
    def test_default_values(self):
        config = LoggingConfiguration(target_bucket="log-bucket")
        assert config.target_bucket == "log-bucket"
        assert config.target_prefix == ""
        assert config.enabled is True

    def test_to_dict(self):
        config = LoggingConfiguration(
            target_bucket="logs",
            target_prefix="access-logs/",
            enabled=True,
        )
        result = config.to_dict()

        assert "LoggingEnabled" in result
        assert result["LoggingEnabled"]["TargetBucket"] == "logs"
        assert result["LoggingEnabled"]["TargetPrefix"] == "access-logs/"

    def test_from_dict(self):
        data = {
            "LoggingEnabled": {
                "TargetBucket": "my-logs",
                "TargetPrefix": "bucket-logs/",
            }
        }
        config = LoggingConfiguration.from_dict(data)

        assert config is not None
        assert config.target_bucket == "my-logs"
        assert config.target_prefix == "bucket-logs/"
        assert config.enabled is True

    def test_from_dict_no_logging(self):
        data = {}
        config = LoggingConfiguration.from_dict(data)
        assert config is None


@pytest.fixture
def storage(tmp_path: Path):
    storage_root = tmp_path / "data"
    storage_root.mkdir(parents=True)
    return ObjectStorage(storage_root)


@pytest.fixture
def logging_service(tmp_path: Path, storage):
    service = AccessLoggingService(
        tmp_path,
        flush_interval=3600,
        max_buffer_size=10,
    )
    service.set_storage(storage)
    yield service
    service.shutdown()


class TestAccessLoggingService:
    def test_get_bucket_logging_not_configured(self, logging_service):
        result = logging_service.get_bucket_logging("unconfigured-bucket")
        assert result is None

    def test_set_and_get_bucket_logging(self, logging_service):
        config = LoggingConfiguration(
            target_bucket="log-bucket",
            target_prefix="logs/",
        )
        logging_service.set_bucket_logging("source-bucket", config)

        retrieved = logging_service.get_bucket_logging("source-bucket")
        assert retrieved is not None
        assert retrieved.target_bucket == "log-bucket"
        assert retrieved.target_prefix == "logs/"

    def test_delete_bucket_logging(self, logging_service):
        config = LoggingConfiguration(target_bucket="logs")
        logging_service.set_bucket_logging("to-delete", config)
        assert logging_service.get_bucket_logging("to-delete") is not None

        logging_service.delete_bucket_logging("to-delete")
        # Drop the in-memory config cache so the next lookup re-reads persisted state.
        logging_service._configs.clear()
        assert logging_service.get_bucket_logging("to-delete") is None

    def test_log_request_no_config(self, logging_service):
        logging_service.log_request(
            "no-config-bucket",
            operation="REST.GET.OBJECT",
            key="test.txt",
        )
        stats = logging_service.get_stats()
        assert stats["buffered_entries"] == 0

    def test_log_request_with_config(self, logging_service, storage):
        storage.create_bucket("log-target")

        config = LoggingConfiguration(
            target_bucket="log-target",
            target_prefix="access/",
        )
        logging_service.set_bucket_logging("source-bucket", config)

        logging_service.log_request(
            "source-bucket",
            operation="REST.PUT.OBJECT",
            key="uploaded.txt",
            remote_ip="192.168.1.100",
            requester="test-user",
            http_status=200,
            bytes_sent=1024,
        )

        stats = logging_service.get_stats()
        assert stats["buffered_entries"] == 1

    def test_log_request_disabled_config(self, logging_service):
        config = LoggingConfiguration(
            target_bucket="logs",
            enabled=False,
        )
        logging_service.set_bucket_logging("disabled-bucket", config)

        logging_service.log_request(
            "disabled-bucket",
            operation="REST.GET.OBJECT",
            key="test.txt",
        )

        stats = logging_service.get_stats()
        assert stats["buffered_entries"] == 0

    def test_flush_buffer(self, logging_service, storage):
        storage.create_bucket("flush-target")

        config = LoggingConfiguration(
            target_bucket="flush-target",
            target_prefix="logs/",
        )
        logging_service.set_bucket_logging("flush-source", config)

        for i in range(3):
            logging_service.log_request(
                "flush-source",
                operation="REST.GET.OBJECT",
                key=f"file{i}.txt",
            )

        logging_service.flush()

        objects = storage.list_objects_all("flush-target")
        assert len(objects) >= 1

    def test_auto_flush_on_buffer_size(self, logging_service, storage):
        storage.create_bucket("auto-flush-target")

        config = LoggingConfiguration(
            target_bucket="auto-flush-target",
            target_prefix="",
        )
        logging_service.set_bucket_logging("auto-source", config)

        for i in range(15):
            logging_service.log_request(
                "auto-source",
                operation="REST.GET.OBJECT",
                key=f"file{i}.txt",
            )

        objects = storage.list_objects_all("auto-flush-target")
        assert len(objects) >= 1

    def test_get_stats(self, logging_service, storage):
        storage.create_bucket("stats-target")
        config = LoggingConfiguration(target_bucket="stats-target")
        logging_service.set_bucket_logging("stats-bucket", config)

        logging_service.log_request(
            "stats-bucket",
            operation="REST.GET.OBJECT",
            key="test.txt",
        )

        stats = logging_service.get_stats()
        assert "buffered_entries" in stats
        assert "target_buckets" in stats
        assert stats["buffered_entries"] >= 1

    def test_shutdown_flushes_buffer(self, tmp_path, storage):
|
||||||
|
storage.create_bucket("shutdown-target")
|
||||||
|
|
||||||
|
service = AccessLoggingService(tmp_path, flush_interval=3600, max_buffer_size=100)
|
||||||
|
service.set_storage(storage)
|
||||||
|
|
||||||
|
config = LoggingConfiguration(target_bucket="shutdown-target")
|
||||||
|
service.set_bucket_logging("shutdown-source", config)
|
||||||
|
|
||||||
|
service.log_request(
|
||||||
|
"shutdown-source",
|
||||||
|
operation="REST.PUT.OBJECT",
|
||||||
|
key="final.txt",
|
||||||
|
)
|
||||||
|
|
||||||
|
service.shutdown()
|
||||||
|
|
||||||
|
objects = storage.list_objects_all("shutdown-target")
|
||||||
|
assert len(objects) >= 1
|
||||||
|
|
||||||
|
def test_logging_caching(self, logging_service):
|
||||||
|
config = LoggingConfiguration(target_bucket="cached-logs")
|
||||||
|
logging_service.set_bucket_logging("cached-bucket", config)
|
||||||
|
|
||||||
|
logging_service.get_bucket_logging("cached-bucket")
|
||||||
|
assert "cached-bucket" in logging_service._configs
|
||||||
|
|
||||||
|
def test_log_request_all_fields(self, logging_service, storage):
|
||||||
|
storage.create_bucket("detailed-target")
|
||||||
|
|
||||||
|
config = LoggingConfiguration(target_bucket="detailed-target", target_prefix="detailed/")
|
||||||
|
logging_service.set_bucket_logging("detailed-source", config)
|
||||||
|
|
||||||
|
logging_service.log_request(
|
||||||
|
"detailed-source",
|
||||||
|
operation="REST.PUT.OBJECT",
|
||||||
|
key="detailed/file.txt",
|
||||||
|
remote_ip="10.0.0.1",
|
||||||
|
requester="admin-user",
|
||||||
|
request_uri="PUT /detailed-source/detailed/file.txt HTTP/1.1",
|
||||||
|
http_status=201,
|
||||||
|
error_code="",
|
||||||
|
bytes_sent=2048,
|
||||||
|
object_size=2048,
|
||||||
|
total_time_ms=100,
|
||||||
|
referrer="http://admin.example.com",
|
||||||
|
user_agent="curl/7.68.0",
|
||||||
|
version_id="v1.0",
|
||||||
|
request_id="CUSTOM_REQ_ID",
|
||||||
|
)
|
||||||
|
|
||||||
|
stats = logging_service.get_stats()
|
||||||
|
assert stats["buffered_entries"] == 1
|
||||||
|
|
||||||
|
def test_failed_flush_returns_to_buffer(self, logging_service):
|
||||||
|
config = LoggingConfiguration(target_bucket="nonexistent-target")
|
||||||
|
logging_service.set_bucket_logging("fail-source", config)
|
||||||
|
|
||||||
|
logging_service.log_request(
|
||||||
|
"fail-source",
|
||||||
|
operation="REST.GET.OBJECT",
|
||||||
|
key="test.txt",
|
||||||
|
)
|
||||||
|
|
||||||
|
initial_count = logging_service.get_stats()["buffered_entries"]
|
||||||
|
logging_service.flush()
|
||||||
|
|
||||||
|
final_count = logging_service.get_stats()["buffered_entries"]
|
||||||
|
assert final_count >= initial_count
|
||||||
284 tests/test_acl.py (new file)
@@ -0,0 +1,284 @@
import json
from pathlib import Path

import pytest

from app.acl import (
    Acl,
    AclGrant,
    AclService,
    ACL_PERMISSION_FULL_CONTROL,
    ACL_PERMISSION_READ,
    ACL_PERMISSION_WRITE,
    ACL_PERMISSION_READ_ACP,
    ACL_PERMISSION_WRITE_ACP,
    GRANTEE_ALL_USERS,
    GRANTEE_AUTHENTICATED_USERS,
    PERMISSION_TO_ACTIONS,
    create_canned_acl,
    CANNED_ACLS,
)


class TestAclGrant:
    def test_to_dict(self):
        grant = AclGrant(grantee="user123", permission=ACL_PERMISSION_READ)
        result = grant.to_dict()
        assert result == {"grantee": "user123", "permission": "READ"}

    def test_from_dict(self):
        data = {"grantee": "admin", "permission": "FULL_CONTROL"}
        grant = AclGrant.from_dict(data)
        assert grant.grantee == "admin"
        assert grant.permission == ACL_PERMISSION_FULL_CONTROL


class TestAcl:
    def test_to_dict(self):
        acl = Acl(
            owner="owner-user",
            grants=[
                AclGrant(grantee="owner-user", permission=ACL_PERMISSION_FULL_CONTROL),
                AclGrant(grantee=GRANTEE_ALL_USERS, permission=ACL_PERMISSION_READ),
            ],
        )
        result = acl.to_dict()
        assert result["owner"] == "owner-user"
        assert len(result["grants"]) == 2
        assert result["grants"][0]["grantee"] == "owner-user"
        assert result["grants"][1]["grantee"] == "*"

    def test_from_dict(self):
        data = {
            "owner": "the-owner",
            "grants": [
                {"grantee": "the-owner", "permission": "FULL_CONTROL"},
                {"grantee": "authenticated", "permission": "READ"},
            ],
        }
        acl = Acl.from_dict(data)
        assert acl.owner == "the-owner"
        assert len(acl.grants) == 2
        assert acl.grants[0].grantee == "the-owner"
        assert acl.grants[1].grantee == GRANTEE_AUTHENTICATED_USERS

    def test_from_dict_empty_grants(self):
        data = {"owner": "solo-owner"}
        acl = Acl.from_dict(data)
        assert acl.owner == "solo-owner"
        assert len(acl.grants) == 0

    def test_get_allowed_actions_owner(self):
        acl = Acl(owner="owner123", grants=[])
        actions = acl.get_allowed_actions("owner123", is_authenticated=True)
        assert actions == PERMISSION_TO_ACTIONS[ACL_PERMISSION_FULL_CONTROL]

    def test_get_allowed_actions_all_users(self):
        acl = Acl(
            owner="owner",
            grants=[AclGrant(grantee=GRANTEE_ALL_USERS, permission=ACL_PERMISSION_READ)],
        )
        actions = acl.get_allowed_actions(None, is_authenticated=False)
        assert "read" in actions
        assert "list" in actions
        assert "write" not in actions

    def test_get_allowed_actions_authenticated_users(self):
        acl = Acl(
            owner="owner",
            grants=[AclGrant(grantee=GRANTEE_AUTHENTICATED_USERS, permission=ACL_PERMISSION_WRITE)],
        )
        actions_authenticated = acl.get_allowed_actions("some-user", is_authenticated=True)
        assert "write" in actions_authenticated
        assert "delete" in actions_authenticated

        actions_anonymous = acl.get_allowed_actions(None, is_authenticated=False)
        assert "write" not in actions_anonymous

    def test_get_allowed_actions_specific_grantee(self):
        acl = Acl(
            owner="owner",
            grants=[
                AclGrant(grantee="user-abc", permission=ACL_PERMISSION_READ),
                AclGrant(grantee="user-xyz", permission=ACL_PERMISSION_WRITE),
            ],
        )
        abc_actions = acl.get_allowed_actions("user-abc", is_authenticated=True)
        assert "read" in abc_actions
        assert "list" in abc_actions
        assert "write" not in abc_actions

        xyz_actions = acl.get_allowed_actions("user-xyz", is_authenticated=True)
        assert "write" in xyz_actions
        assert "read" not in xyz_actions

    def test_get_allowed_actions_combined(self):
        acl = Acl(
            owner="owner",
            grants=[
                AclGrant(grantee=GRANTEE_ALL_USERS, permission=ACL_PERMISSION_READ),
                AclGrant(grantee="special-user", permission=ACL_PERMISSION_WRITE),
            ],
        )
        actions = acl.get_allowed_actions("special-user", is_authenticated=True)
        assert "read" in actions
        assert "list" in actions
        assert "write" in actions
        assert "delete" in actions


class TestCannedAcls:
    def test_private_acl(self):
        acl = create_canned_acl("private", "the-owner")
        assert acl.owner == "the-owner"
        assert len(acl.grants) == 1
        assert acl.grants[0].grantee == "the-owner"
        assert acl.grants[0].permission == ACL_PERMISSION_FULL_CONTROL

    def test_public_read_acl(self):
        acl = create_canned_acl("public-read", "owner")
        assert acl.owner == "owner"
        has_owner_full_control = any(
            g.grantee == "owner" and g.permission == ACL_PERMISSION_FULL_CONTROL for g in acl.grants
        )
        has_public_read = any(
            g.grantee == GRANTEE_ALL_USERS and g.permission == ACL_PERMISSION_READ for g in acl.grants
        )
        assert has_owner_full_control
        assert has_public_read

    def test_public_read_write_acl(self):
        acl = create_canned_acl("public-read-write", "owner")
        assert acl.owner == "owner"
        has_public_read = any(
            g.grantee == GRANTEE_ALL_USERS and g.permission == ACL_PERMISSION_READ for g in acl.grants
        )
        has_public_write = any(
            g.grantee == GRANTEE_ALL_USERS and g.permission == ACL_PERMISSION_WRITE for g in acl.grants
        )
        assert has_public_read
        assert has_public_write

    def test_authenticated_read_acl(self):
        acl = create_canned_acl("authenticated-read", "owner")
        has_authenticated_read = any(
            g.grantee == GRANTEE_AUTHENTICATED_USERS and g.permission == ACL_PERMISSION_READ for g in acl.grants
        )
        assert has_authenticated_read

    def test_unknown_canned_acl_defaults_to_private(self):
        acl = create_canned_acl("unknown-acl", "owner")
        private_acl = create_canned_acl("private", "owner")
        assert acl.to_dict() == private_acl.to_dict()


@pytest.fixture
def acl_service(tmp_path: Path):
    return AclService(tmp_path)


class TestAclService:
    def test_get_bucket_acl_not_exists(self, acl_service):
        result = acl_service.get_bucket_acl("nonexistent-bucket")
        assert result is None

    def test_set_and_get_bucket_acl(self, acl_service):
        acl = Acl(
            owner="bucket-owner",
            grants=[AclGrant(grantee="bucket-owner", permission=ACL_PERMISSION_FULL_CONTROL)],
        )
        acl_service.set_bucket_acl("my-bucket", acl)

        retrieved = acl_service.get_bucket_acl("my-bucket")
        assert retrieved is not None
        assert retrieved.owner == "bucket-owner"
        assert len(retrieved.grants) == 1

    def test_bucket_acl_caching(self, acl_service):
        acl = Acl(owner="cached-owner", grants=[])
        acl_service.set_bucket_acl("cached-bucket", acl)

        acl_service.get_bucket_acl("cached-bucket")
        assert "cached-bucket" in acl_service._bucket_acl_cache

        retrieved = acl_service.get_bucket_acl("cached-bucket")
        assert retrieved.owner == "cached-owner"

    def test_set_bucket_canned_acl(self, acl_service):
        result = acl_service.set_bucket_canned_acl("new-bucket", "public-read", "the-owner")
        assert result.owner == "the-owner"

        retrieved = acl_service.get_bucket_acl("new-bucket")
        assert retrieved is not None
        has_public_read = any(
            g.grantee == GRANTEE_ALL_USERS and g.permission == ACL_PERMISSION_READ for g in retrieved.grants
        )
        assert has_public_read

    def test_delete_bucket_acl(self, acl_service):
        acl = Acl(owner="to-delete-owner", grants=[])
        acl_service.set_bucket_acl("delete-me", acl)
        assert acl_service.get_bucket_acl("delete-me") is not None

        acl_service.delete_bucket_acl("delete-me")
        acl_service._bucket_acl_cache.clear()
        assert acl_service.get_bucket_acl("delete-me") is None

    def test_evaluate_bucket_acl_allowed(self, acl_service):
        acl = Acl(
            owner="owner",
            grants=[AclGrant(grantee=GRANTEE_ALL_USERS, permission=ACL_PERMISSION_READ)],
        )
        acl_service.set_bucket_acl("public-bucket", acl)

        result = acl_service.evaluate_bucket_acl("public-bucket", None, "read", is_authenticated=False)
        assert result is True

    def test_evaluate_bucket_acl_denied(self, acl_service):
        acl = Acl(
            owner="owner",
            grants=[AclGrant(grantee="owner", permission=ACL_PERMISSION_FULL_CONTROL)],
        )
        acl_service.set_bucket_acl("private-bucket", acl)

        result = acl_service.evaluate_bucket_acl("private-bucket", "other-user", "write", is_authenticated=True)
        assert result is False

    def test_evaluate_bucket_acl_no_acl(self, acl_service):
        result = acl_service.evaluate_bucket_acl("no-acl-bucket", "anyone", "read")
        assert result is False

    def test_get_object_acl_from_metadata(self, acl_service):
        metadata = {
            "__acl__": {
                "owner": "object-owner",
                "grants": [{"grantee": "object-owner", "permission": "FULL_CONTROL"}],
            }
        }
        result = acl_service.get_object_acl("bucket", "key", metadata)
        assert result is not None
        assert result.owner == "object-owner"

    def test_get_object_acl_no_acl_in_metadata(self, acl_service):
        metadata = {"Content-Type": "text/plain"}
        result = acl_service.get_object_acl("bucket", "key", metadata)
        assert result is None

    def test_create_object_acl_metadata(self, acl_service):
        acl = Acl(owner="obj-owner", grants=[])
        result = acl_service.create_object_acl_metadata(acl)
        assert "__acl__" in result
        assert result["__acl__"]["owner"] == "obj-owner"

    def test_evaluate_object_acl(self, acl_service):
        metadata = {
            "__acl__": {
                "owner": "obj-owner",
                "grants": [{"grantee": "*", "permission": "READ"}],
            }
        }
        result = acl_service.evaluate_object_acl(metadata, None, "read", is_authenticated=False)
        assert result is True

        result = acl_service.evaluate_object_acl(metadata, None, "write", is_authenticated=False)
        assert result is False
@@ -1,6 +1,3 @@
-from urllib.parse import urlsplit
-
-
 def test_bucket_and_object_lifecycle(client, signer):
     headers = signer("PUT", "/photos")
     response = client.put("/photos", headers=headers)
@@ -104,12 +101,12 @@ def test_request_id_header_present(client, signer):
     assert response.headers.get("X-Request-ID")


-def test_healthcheck_returns_version(client):
-    response = client.get("/healthz")
+def test_healthcheck_returns_status(client):
+    response = client.get("/myfsio/health")
     data = response.get_json()
     assert response.status_code == 200
     assert data["status"] == "ok"
-    assert "version" in data
+    assert "version" not in data


 def test_missing_credentials_denied(client):
@@ -117,36 +114,20 @@ def test_missing_credentials_denied(client):
     assert response.status_code == 403


-def test_presign_and_bucket_policies(client, signer):
-    # Create bucket and object
+def test_bucket_policies_deny_reads(client, signer):
+    import json

     headers = signer("PUT", "/docs")
     assert client.put("/docs", headers=headers).status_code == 200

     headers = signer("PUT", "/docs/readme.txt", body=b"content")
     assert client.put("/docs/readme.txt", headers=headers, data=b"content").status_code == 200

-    # Generate presigned GET URL and follow it
-    json_body = {"method": "GET", "expires_in": 120}
-    # Flask test client json parameter automatically sets Content-Type and serializes body
-    # But for signing we need the body bytes.
-    import json
-    body_bytes = json.dumps(json_body).encode("utf-8")
-    headers = signer("POST", "/presign/docs/readme.txt", headers={"Content-Type": "application/json"}, body=body_bytes)
-
-    response = client.post(
-        "/presign/docs/readme.txt",
-        headers=headers,
-        json=json_body,
-    )
+    headers = signer("GET", "/docs/readme.txt")
+    response = client.get("/docs/readme.txt", headers=headers)
     assert response.status_code == 200
-    presigned_url = response.get_json()["url"]
-    parts = urlsplit(presigned_url)
-    presigned_path = f"{parts.path}?{parts.query}"
-    download = client.get(presigned_path)
-    assert download.status_code == 200
-    assert download.data == b"content"
+    assert response.data == b"content"

-    # Attach a deny policy for GETs
     policy = {
         "Version": "2012-10-17",
         "Statement": [
@@ -160,29 +141,26 @@ def test_presign_and_bucket_policies(client, signer):
         ],
     }
     policy_bytes = json.dumps(policy).encode("utf-8")
-    headers = signer("PUT", "/bucket-policy/docs", headers={"Content-Type": "application/json"}, body=policy_bytes)
-    assert client.put("/bucket-policy/docs", headers=headers, json=policy).status_code == 204
+    headers = signer("PUT", "/docs?policy", headers={"Content-Type": "application/json"}, body=policy_bytes)
+    assert client.put("/docs?policy", headers=headers, json=policy).status_code == 204

-    headers = signer("GET", "/bucket-policy/docs")
-    fetched = client.get("/bucket-policy/docs", headers=headers)
+    headers = signer("GET", "/docs?policy")
+    fetched = client.get("/docs?policy", headers=headers)
     assert fetched.status_code == 200
     assert fetched.get_json()["Version"] == "2012-10-17"

-    # Reads are now denied by bucket policy
     headers = signer("GET", "/docs/readme.txt")
     denied = client.get("/docs/readme.txt", headers=headers)
     assert denied.status_code == 403

-    # Presign attempts are also denied
-    json_body = {"method": "GET", "expires_in": 60}
-    body_bytes = json.dumps(json_body).encode("utf-8")
-    headers = signer("POST", "/presign/docs/readme.txt", headers={"Content-Type": "application/json"}, body=body_bytes)
-    response = client.post(
-        "/presign/docs/readme.txt",
-        headers=headers,
-        json=json_body,
-    )
-    assert response.status_code == 403
+    headers = signer("DELETE", "/docs?policy")
+    assert client.delete("/docs?policy", headers=headers).status_code == 204
+
+    headers = signer("DELETE", "/docs/readme.txt")
+    assert client.delete("/docs/readme.txt", headers=headers).status_code == 204
+
+    headers = signer("DELETE", "/docs")
+    assert client.delete("/docs", headers=headers).status_code == 204


 def test_trailing_slash_returns_xml(client):
@@ -193,9 +171,11 @@ def test_trailing_slash_returns_xml(client):


 def test_public_policy_allows_anonymous_list_and_read(client, signer):
+    import json
+
     headers = signer("PUT", "/public")
     assert client.put("/public", headers=headers).status_code == 200

     headers = signer("PUT", "/public/hello.txt", body=b"hi")
     assert client.put("/public/hello.txt", headers=headers, data=b"hi").status_code == 200

@@ -221,10 +201,9 @@ def test_public_policy_allows_anonymous_list_and_read(client, signer):
             },
         ],
     }
-    import json
     policy_bytes = json.dumps(policy).encode("utf-8")
-    headers = signer("PUT", "/bucket-policy/public", headers={"Content-Type": "application/json"}, body=policy_bytes)
-    assert client.put("/bucket-policy/public", headers=headers, json=policy).status_code == 204
+    headers = signer("PUT", "/public?policy", headers={"Content-Type": "application/json"}, body=policy_bytes)
+    assert client.put("/public?policy", headers=headers, json=policy).status_code == 204

     list_response = client.get("/public")
     assert list_response.status_code == 200
@@ -236,18 +215,20 @@ def test_public_policy_allows_anonymous_list_and_read(client, signer):

     headers = signer("DELETE", "/public/hello.txt")
     assert client.delete("/public/hello.txt", headers=headers).status_code == 204

-    headers = signer("DELETE", "/bucket-policy/public")
-    assert client.delete("/bucket-policy/public", headers=headers).status_code == 204
+    headers = signer("DELETE", "/public?policy")
+    assert client.delete("/public?policy", headers=headers).status_code == 204

     headers = signer("DELETE", "/public")
     assert client.delete("/public", headers=headers).status_code == 204


 def test_principal_dict_with_object_get_only(client, signer):
+    import json
+
     headers = signer("PUT", "/mixed")
     assert client.put("/mixed", headers=headers).status_code == 200

     headers = signer("PUT", "/mixed/only.txt", body=b"ok")
     assert client.put("/mixed/only.txt", headers=headers, data=b"ok").status_code == 200

@@ -270,10 +251,9 @@ def test_principal_dict_with_object_get_only(client, signer):
             },
         ],
     }
-    import json
     policy_bytes = json.dumps(policy).encode("utf-8")
-    headers = signer("PUT", "/bucket-policy/mixed", headers={"Content-Type": "application/json"}, body=policy_bytes)
-    assert client.put("/bucket-policy/mixed", headers=headers, json=policy).status_code == 204
+    headers = signer("PUT", "/mixed?policy", headers={"Content-Type": "application/json"}, body=policy_bytes)
+    assert client.put("/mixed?policy", headers=headers, json=policy).status_code == 204

     assert client.get("/mixed").status_code == 403
     allowed = client.get("/mixed/only.txt")
@@ -282,18 +262,20 @@ def test_principal_dict_with_object_get_only(client, signer):

     headers = signer("DELETE", "/mixed/only.txt")
     assert client.delete("/mixed/only.txt", headers=headers).status_code == 204

-    headers = signer("DELETE", "/bucket-policy/mixed")
-    assert client.delete("/bucket-policy/mixed", headers=headers).status_code == 204
+    headers = signer("DELETE", "/mixed?policy")
+    assert client.delete("/mixed?policy", headers=headers).status_code == 204

     headers = signer("DELETE", "/mixed")
     assert client.delete("/mixed", headers=headers).status_code == 204


 def test_bucket_policy_wildcard_resource_allows_object_get(client, signer):
+    import json
+
     headers = signer("PUT", "/test")
     assert client.put("/test", headers=headers).status_code == 200

     headers = signer("PUT", "/test/vid.mp4", body=b"video")
     assert client.put("/test/vid.mp4", headers=headers, data=b"video").status_code == 200

@@ -314,10 +296,9 @@ def test_bucket_policy_wildcard_resource_allows_object_get(client, signer):
             },
         ],
     }
-    import json
     policy_bytes = json.dumps(policy).encode("utf-8")
-    headers = signer("PUT", "/bucket-policy/test", headers={"Content-Type": "application/json"}, body=policy_bytes)
-    assert client.put("/bucket-policy/test", headers=headers, json=policy).status_code == 204
+    headers = signer("PUT", "/test?policy", headers={"Content-Type": "application/json"}, body=policy_bytes)
+    assert client.put("/test?policy", headers=headers, json=policy).status_code == 204

     listing = client.get("/test")
     assert listing.status_code == 403
@@ -327,10 +308,10 @@ def test_bucket_policy_wildcard_resource_allows_object_get(client, signer):

     headers = signer("DELETE", "/test/vid.mp4")
     assert client.delete("/test/vid.mp4", headers=headers).status_code == 204

-    headers = signer("DELETE", "/bucket-policy/test")
-    assert client.delete("/bucket-policy/test", headers=headers).status_code == 204
+    headers = signer("DELETE", "/test?policy")
+    assert client.delete("/test?policy", headers=headers).status_code == 204

     headers = signer("DELETE", "/test")
     assert client.delete("/test", headers=headers).status_code == 204

@@ -8,8 +8,6 @@ def client(app):

 @pytest.fixture
 def auth_headers(app):
-    # Create a test user and return headers
-    # Using the user defined in conftest.py
     return {
         "X-Access-Key": "test",
         "X-Secret-Key": "secret"
@@ -75,19 +73,16 @@ def test_multipart_upload_flow(client, auth_headers):

 def test_abort_multipart_upload(client, auth_headers):
     client.put("/abort-bucket", headers=auth_headers)

-    # Initiate
     resp = client.post("/abort-bucket/file.txt?uploads", headers=auth_headers)
     upload_id = fromstring(resp.data).find("UploadId").text

-    # Abort
     resp = client.delete(f"/abort-bucket/file.txt?uploadId={upload_id}", headers=auth_headers)
     assert resp.status_code == 204

-    # Try to upload part (should fail)
     resp = client.put(
         f"/abort-bucket/file.txt?partNumber=1&uploadId={upload_id}",
         headers=auth_headers,
         data=b"data"
     )
-    assert resp.status_code == 404 # NoSuchUpload
+    assert resp.status_code == 404
@@ -38,7 +38,7 @@ def test_unicode_bucket_and_object_names(tmp_path: Path):
     assert storage.get_object_path("unicode-test", key).exists()

     # Verify listing
-    objects = storage.list_objects("unicode-test")
+    objects = storage.list_objects_all("unicode-test")
     assert any(o.key == key for o in objects)


 def test_special_characters_in_metadata(tmp_path: Path):
|||||||
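The `list_objects` → `list_objects_all` change in the hunk above suggests a split between a paged listing call and a convenience call that drains every page. A minimal sketch of that pattern, assuming hypothetical names (`TinyStore`, `ObjectMeta` are illustrations, not the repository's actual API):

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class ObjectMeta:
    key: str


class TinyStore:
    """Toy store illustrating paged vs. exhaustive listing."""

    PAGE_SIZE = 2

    def __init__(self) -> None:
        self._objects: dict = {}

    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data

    def list_objects(self, start_after: Optional[str] = None) -> List[ObjectMeta]:
        # One "page" of results, in key order, like a truncated S3 listing.
        keys = sorted(k for k in self._objects if start_after is None or k > start_after)
        return [ObjectMeta(k) for k in keys[: self.PAGE_SIZE]]

    def list_objects_all(self) -> List[ObjectMeta]:
        # Drain pages until one comes back short of PAGE_SIZE.
        out: List[ObjectMeta] = []
        marker = None
        while True:
            page = self.list_objects(start_after=marker)
            out.extend(page)
            if len(page) < self.PAGE_SIZE:
                return out
            marker = page[-1].key
```

With this split, the `assert any(o.key == key for o in objects)` check in the test only holds reliably against the exhaustive variant, which is plausibly why the call site was updated.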
@@ -21,12 +21,11 @@ class TestLocalKeyEncryption:
 
         key_path = tmp_path / "keys" / "master.key"
         provider = LocalKeyEncryption(key_path)
 
-        # Access master key to trigger creation
         key = provider.master_key
 
         assert key_path.exists()
-        assert len(key) == 32  # 256-bit key
+        assert len(key) == 32
 
     def test_load_existing_master_key(self, tmp_path):
         """Test loading an existing master key."""
@@ -49,16 +48,14 @@ class TestLocalKeyEncryption:
         provider = LocalKeyEncryption(key_path)
 
         plaintext = b"Hello, World! This is a test message."
 
-        # Encrypt
         result = provider.encrypt(plaintext)
 
         assert result.ciphertext != plaintext
         assert result.key_id == "local"
         assert len(result.nonce) == 12
         assert len(result.encrypted_data_key) > 0
 
-        # Decrypt
         decrypted = provider.decrypt(
             result.ciphertext,
             result.nonce,
@@ -79,12 +76,9 @@ class TestLocalKeyEncryption:
 
         result1 = provider.encrypt(plaintext)
         result2 = provider.encrypt(plaintext)
 
-        # Different encrypted data keys
         assert result1.encrypted_data_key != result2.encrypted_data_key
-        # Different nonces
         assert result1.nonce != result2.nonce
-        # Different ciphertexts
         assert result1.ciphertext != result2.ciphertext
 
     def test_generate_data_key(self, tmp_path):
@@ -95,30 +89,26 @@ class TestLocalKeyEncryption:
         provider = LocalKeyEncryption(key_path)
 
         plaintext_key, encrypted_key = provider.generate_data_key()
 
         assert len(plaintext_key) == 32
-        assert len(encrypted_key) > 32  # nonce + ciphertext + tag
+        assert len(encrypted_key) > 32
 
-        # Verify we can decrypt the key
         decrypted_key = provider._decrypt_data_key(encrypted_key)
         assert decrypted_key == plaintext_key
 
     def test_decrypt_with_wrong_key_fails(self, tmp_path):
         """Test that decryption fails with wrong master key."""
         from app.encryption import LocalKeyEncryption, EncryptionError
 
-        # Create two providers with different keys
         key_path1 = tmp_path / "master1.key"
         key_path2 = tmp_path / "master2.key"
 
         provider1 = LocalKeyEncryption(key_path1)
         provider2 = LocalKeyEncryption(key_path2)
 
-        # Encrypt with provider1
         plaintext = b"Secret message"
         result = provider1.encrypt(plaintext)
 
-        # Try to decrypt with provider2
         with pytest.raises(EncryptionError):
             provider2.decrypt(
                 result.ciphertext,
@@ -195,19 +185,16 @@ class TestStreamingEncryptor:
         key_path = tmp_path / "master.key"
         provider = LocalKeyEncryption(key_path)
         encryptor = StreamingEncryptor(provider, chunk_size=1024)
 
-        # Create test data
-        original_data = b"A" * 5000 + b"B" * 5000 + b"C" * 5000  # 15KB
+        original_data = b"A" * 5000 + b"B" * 5000 + b"C" * 5000
         stream = io.BytesIO(original_data)
 
-        # Encrypt
         encrypted_stream, metadata = encryptor.encrypt_stream(stream)
         encrypted_data = encrypted_stream.read()
 
         assert encrypted_data != original_data
         assert metadata.algorithm == "AES256"
 
-        # Decrypt
         encrypted_stream = io.BytesIO(encrypted_data)
         decrypted_stream = encryptor.decrypt_stream(encrypted_stream, metadata)
         decrypted_data = decrypted_stream.read()
@@ -318,8 +305,7 @@ class TestClientEncryptionHelper:
         assert "key" in key_info
         assert key_info["algorithm"] == "AES-256-GCM"
         assert "created_at" in key_info
 
-        # Verify key is 256 bits
         key = base64.b64decode(key_info["key"])
         assert len(key) == 32
 
@@ -424,8 +410,7 @@ class TestKMSManager:
 
         assert key is not None
         assert key.key_id == "test-key"
 
-        # Non-existent key
         assert kms.get_key("non-existent") is None
 
     def test_enable_disable_key(self, tmp_path):
@@ -438,15 +423,12 @@ class TestKMSManager:
         kms = KMSManager(keys_path, master_key_path)
 
         kms.create_key("Test key", key_id="test-key")
 
-        # Initially enabled
         assert kms.get_key("test-key").enabled
 
-        # Disable
         kms.disable_key("test-key")
         assert not kms.get_key("test-key").enabled
 
-        # Enable
         kms.enable_key("test-key")
         assert kms.get_key("test-key").enabled
 
@@ -502,12 +484,10 @@ class TestKMSManager:
         context = {"bucket": "test-bucket", "key": "test-key"}
 
         ciphertext = kms.encrypt("test-key", plaintext, context)
 
-        # Decrypt with same context succeeds
         decrypted, _ = kms.decrypt(ciphertext, context)
         assert decrypted == plaintext
 
-        # Decrypt with different context fails
         with pytest.raises(EncryptionError):
             kms.decrypt(ciphertext, {"different": "context"})
 
@@ -526,8 +506,7 @@ class TestKMSManager:
 
         assert len(plaintext_key) == 32
         assert len(encrypted_key) > 0
 
-        # Decrypt the encrypted key
         decrypted_key = kms.decrypt_data_key("test-key", encrypted_key)
 
         assert decrypted_key == plaintext_key
@@ -560,14 +539,9 @@ class TestKMSManager:
         kms.create_key("Key 2", key_id="key-2")
 
         plaintext = b"Data to re-encrypt"
 
-        # Encrypt with key-1
         ciphertext1 = kms.encrypt("key-1", plaintext)
-
-        # Re-encrypt with key-2
         ciphertext2 = kms.re_encrypt(ciphertext1, "key-2")
-
-        # Decrypt with key-2
         decrypted, key_id = kms.decrypt(ciphertext2)
 
         assert decrypted == plaintext
@@ -587,7 +561,7 @@ class TestKMSManager:
 
         assert len(random1) == 32
         assert len(random2) == 32
-        assert random1 != random2  # Very unlikely to be equal
+        assert random1 != random2
 
     def test_keys_persist_across_instances(self, tmp_path):
         """Test that keys persist and can be loaded by new instances."""
@@ -595,15 +569,13 @@ class TestKMSManager:
 
         keys_path = tmp_path / "kms_keys.json"
         master_key_path = tmp_path / "master.key"
 
-        # Create key with first instance
         kms1 = KMSManager(keys_path, master_key_path)
         kms1.create_key("Test key", key_id="test-key")
 
         plaintext = b"Persistent encryption test"
         ciphertext = kms1.encrypt("test-key", plaintext)
 
-        # Create new instance and verify key works
         kms2 = KMSManager(keys_path, master_key_path)
 
         decrypted, key_id = kms2.decrypt(ciphertext)
@@ -664,31 +636,27 @@ class TestEncryptedStorage:
         encryption = EncryptionManager(config)
 
         encrypted_storage = EncryptedObjectStorage(storage, encryption)
 
-        # Create bucket with encryption config
         storage.create_bucket("test-bucket")
         storage.set_bucket_encryption("test-bucket", {
             "Rules": [{"SSEAlgorithm": "AES256"}]
         })
 
-        # Put object
         original_data = b"This is secret data that should be encrypted"
         stream = io.BytesIO(original_data)
 
         meta = encrypted_storage.put_object(
             "test-bucket",
             "secret.txt",
             stream,
         )
 
         assert meta is not None
 
-        # Verify file on disk is encrypted (not plaintext)
         file_path = storage_root / "test-bucket" / "secret.txt"
         stored_data = file_path.read_bytes()
         assert stored_data != original_data
 
-        # Get object - should be decrypted
         data, metadata = encrypted_storage.get_object_data("test-bucket", "secret.txt")
 
         assert data == original_data
@@ -711,14 +679,12 @@ class TestEncryptedStorage:
         encrypted_storage = EncryptedObjectStorage(storage, encryption)
 
         storage.create_bucket("test-bucket")
-        # No encryption config
 
         original_data = b"Unencrypted data"
         stream = io.BytesIO(original_data)
 
         encrypted_storage.put_object("test-bucket", "plain.txt", stream)
 
-        # Verify file on disk is NOT encrypted
         file_path = storage_root / "test-bucket" / "plain.txt"
         stored_data = file_path.read_bytes()
         assert stored_data == original_data
@@ -744,20 +710,17 @@ class TestEncryptedStorage:
 
         original_data = b"Explicitly encrypted data"
         stream = io.BytesIO(original_data)
 
-        # Request encryption explicitly
         encrypted_storage.put_object(
             "test-bucket",
             "encrypted.txt",
             stream,
             server_side_encryption="AES256",
         )
 
-        # Verify file is encrypted
         file_path = storage_root / "test-bucket" / "encrypted.txt"
         stored_data = file_path.read_bytes()
         assert stored_data != original_data
 
-        # Get object - should be decrypted
         data, _ = encrypted_storage.get_object_data("test-bucket", "encrypted.txt")
         assert data == original_data
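The encryption tests above assert the core envelope-encryption invariants: a fresh 256-bit data key and a fresh 12-byte nonce per `encrypt()` call, with the data key itself wrapped under a master key. A stdlib-only toy sketch of that envelope shape follows; the SHA-256 XOR keystream is a stand-in for AES-GCM and is NOT secure, and `ToyEnvelope`/`EnvelopeResult` are hypothetical names, not the repository's `LocalKeyEncryption` API:

```python
import hashlib
import secrets
from dataclasses import dataclass


@dataclass
class EnvelopeResult:
    ciphertext: bytes
    nonce: bytes
    encrypted_data_key: bytes
    key_id: str


class ToyEnvelope:
    """Envelope shape only: per-object data key, wrapped under a master key."""

    def __init__(self) -> None:
        self.master_key = secrets.token_bytes(32)
        self.key_id = "local"

    @staticmethod
    def _keystream(key: bytes, nonce: bytes, n: int) -> bytes:
        # Insecure placeholder for a real AEAD cipher.
        out = b""
        counter = 0
        while len(out) < n:
            out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
            counter += 1
        return out[:n]

    @staticmethod
    def _xor(data: bytes, ks: bytes) -> bytes:
        return bytes(a ^ b for a, b in zip(data, ks))

    def encrypt(self, plaintext: bytes) -> EnvelopeResult:
        data_key = secrets.token_bytes(32)   # fresh 256-bit key per object
        nonce = secrets.token_bytes(12)      # fresh 96-bit nonce per object
        ct = self._xor(plaintext, self._keystream(data_key, nonce, len(plaintext)))
        # Wrap the data key under the master key, with its own nonce prefixed.
        key_nonce = secrets.token_bytes(12)
        wrapped = key_nonce + self._xor(data_key, self._keystream(self.master_key, key_nonce, 32))
        return EnvelopeResult(ct, nonce, wrapped, self.key_id)

    def decrypt(self, r: EnvelopeResult) -> bytes:
        key_nonce, wrapped = r.encrypted_data_key[:12], r.encrypted_data_key[12:]
        data_key = self._xor(wrapped, self._keystream(self.master_key, key_nonce, 32))
        return self._xor(r.ciphertext, self._keystream(data_key, r.nonce, len(r.ciphertext)))
```

Because both the data key and the nonces are drawn fresh each call, two encryptions of the same plaintext differ in nonce, wrapped key, and ciphertext, which is exactly what `test_encrypt_produces_different_results` style assertions check.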
@@ -15,6 +15,7 @@ def kms_client(tmp_path):
 
     app = create_app({
         "TESTING": True,
+        "SECRET_KEY": "testing",
         "STORAGE_ROOT": str(tmp_path / "storage"),
         "IAM_CONFIG": str(tmp_path / "iam.json"),
         "BUCKET_POLICY_PATH": str(tmp_path / "policies.json"),
@@ -23,8 +24,7 @@ def kms_client(tmp_path):
         "ENCRYPTION_MASTER_KEY_PATH": str(tmp_path / "master.key"),
         "KMS_KEYS_PATH": str(tmp_path / "kms_keys.json"),
     })
 
-    # Create default IAM config with admin user
     iam_config = {
         "users": [
             {
@@ -83,7 +83,6 @@ class TestKMSKeyManagement:
 
     def test_list_keys(self, kms_client, auth_headers):
         """Test listing KMS keys."""
-        # Create some keys
         kms_client.post("/kms/keys", json={"Description": "Key 1"}, headers=auth_headers)
         kms_client.post("/kms/keys", json={"Description": "Key 2"}, headers=auth_headers)
 
@@ -97,7 +96,6 @@ class TestKMSKeyManagement:
 
     def test_get_key(self, kms_client, auth_headers):
         """Test getting a specific key."""
-        # Create a key
         create_response = kms_client.post(
             "/kms/keys",
             json={"KeyId": "test-key", "Description": "Test key"},
@@ -120,36 +118,28 @@ class TestKMSKeyManagement:
 
     def test_delete_key(self, kms_client, auth_headers):
         """Test deleting a key."""
-        # Create a key
         kms_client.post("/kms/keys", json={"KeyId": "test-key"}, headers=auth_headers)
 
-        # Delete it
         response = kms_client.delete("/kms/keys/test-key", headers=auth_headers)
 
         assert response.status_code == 204
 
-        # Verify it's gone
         get_response = kms_client.get("/kms/keys/test-key", headers=auth_headers)
         assert get_response.status_code == 404
 
     def test_enable_disable_key(self, kms_client, auth_headers):
         """Test enabling and disabling a key."""
-        # Create a key
         kms_client.post("/kms/keys", json={"KeyId": "test-key"}, headers=auth_headers)
 
-        # Disable
         response = kms_client.post("/kms/keys/test-key/disable", headers=auth_headers)
         assert response.status_code == 200
 
-        # Verify disabled
         get_response = kms_client.get("/kms/keys/test-key", headers=auth_headers)
         assert get_response.get_json()["KeyMetadata"]["Enabled"] is False
 
-        # Enable
         response = kms_client.post("/kms/keys/test-key/enable", headers=auth_headers)
         assert response.status_code == 200
 
-        # Verify enabled
         get_response = kms_client.get("/kms/keys/test-key", headers=auth_headers)
         assert get_response.get_json()["KeyMetadata"]["Enabled"] is True
 
@@ -159,13 +149,11 @@ class TestKMSEncryption:
 
     def test_encrypt_decrypt(self, kms_client, auth_headers):
         """Test encrypting and decrypting data."""
-        # Create a key
         kms_client.post("/kms/keys", json={"KeyId": "test-key"}, headers=auth_headers)
 
         plaintext = b"Hello, World!"
         plaintext_b64 = base64.b64encode(plaintext).decode()
 
-        # Encrypt
         encrypt_response = kms_client.post(
             "/kms/encrypt",
             json={"KeyId": "test-key", "Plaintext": plaintext_b64},
@@ -177,8 +165,7 @@ class TestKMSEncryption:
 
         assert "CiphertextBlob" in encrypt_data
         assert encrypt_data["KeyId"] == "test-key"
 
-        # Decrypt
         decrypt_response = kms_client.post(
             "/kms/decrypt",
             json={"CiphertextBlob": encrypt_data["CiphertextBlob"]},
@@ -198,8 +185,7 @@ class TestKMSEncryption:
         plaintext = b"Contextualized data"
         plaintext_b64 = base64.b64encode(plaintext).decode()
         context = {"purpose": "testing", "bucket": "my-bucket"}
 
-        # Encrypt with context
         encrypt_response = kms_client.post(
             "/kms/encrypt",
             json={
@@ -212,8 +198,7 @@ class TestKMSEncryption:
 
         assert encrypt_response.status_code == 200
         ciphertext = encrypt_response.get_json()["CiphertextBlob"]
 
-        # Decrypt with same context succeeds
         decrypt_response = kms_client.post(
             "/kms/decrypt",
             json={
@@ -224,8 +209,7 @@ class TestKMSEncryption:
         )
 
         assert decrypt_response.status_code == 200
 
-        # Decrypt with wrong context fails
         wrong_context_response = kms_client.post(
             "/kms/decrypt",
             json={
@@ -325,11 +309,9 @@ class TestKMSReEncrypt:
 
     def test_re_encrypt(self, kms_client, auth_headers):
         """Test re-encrypting data with a different key."""
-        # Create two keys
         kms_client.post("/kms/keys", json={"KeyId": "key-1"}, headers=auth_headers)
         kms_client.post("/kms/keys", json={"KeyId": "key-2"}, headers=auth_headers)
 
-        # Encrypt with key-1
         plaintext = b"Data to re-encrypt"
         encrypt_response = kms_client.post(
             "/kms/encrypt",
@@ -341,8 +323,7 @@ class TestKMSReEncrypt:
         )
 
         ciphertext = encrypt_response.get_json()["CiphertextBlob"]
 
-        # Re-encrypt with key-2
         re_encrypt_response = kms_client.post(
             "/kms/re-encrypt",
             json={
@@ -357,8 +338,7 @@ class TestKMSReEncrypt:
 
         assert data["SourceKeyId"] == "key-1"
         assert data["KeyId"] == "key-2"
 
-        # Verify new ciphertext can be decrypted
         decrypt_response = kms_client.post(
             "/kms/decrypt",
             json={"CiphertextBlob": data["CiphertextBlob"]},
@@ -398,7 +378,7 @@ class TestKMSRandom:
         data = response.get_json()
 
         random_bytes = base64.b64decode(data["Plaintext"])
-        assert len(random_bytes) == 32  # Default is 32 bytes
+        assert len(random_bytes) == 32
 
 
 class TestClientSideEncryption:
@@ -422,11 +402,9 @@ class TestClientSideEncryption:
 
     def test_client_encrypt_decrypt(self, kms_client, auth_headers):
         """Test client-side encryption and decryption."""
-        # Generate a key
         key_response = kms_client.post("/kms/client/generate-key", headers=auth_headers)
         key = key_response.get_json()["key"]
 
-        # Encrypt
         plaintext = b"Client-side encrypted data"
         encrypt_response = kms_client.post(
             "/kms/client/encrypt",
@@ -439,8 +417,7 @@ class TestClientSideEncryption:
 
         assert encrypt_response.status_code == 200
         encrypted = encrypt_response.get_json()
 
-        # Decrypt
         decrypt_response = kms_client.post(
             "/kms/client/decrypt",
             json={
@@ -461,7 +438,6 @@ class TestEncryptionMaterials:
 
     def test_get_encryption_materials(self, kms_client, auth_headers):
         """Test getting encryption materials for client-side S3 encryption."""
-        # Create a key
         kms_client.post("/kms/keys", json={"KeyId": "s3-key"}, headers=auth_headers)
 
         response = kms_client.post(
@@ -477,8 +453,7 @@ class TestEncryptionMaterials:
         assert "EncryptedKey" in data
         assert data["KeyId"] == "s3-key"
         assert data["Algorithm"] == "AES-256-GCM"
 
-        # Verify key is 256 bits
         key = base64.b64decode(data["PlaintextKey"])
         assert len(key) == 32
 
@@ -489,8 +464,7 @@ class TestKMSAuthentication:
     def test_unauthenticated_request_fails(self, kms_client):
         """Test that unauthenticated requests are rejected."""
         response = kms_client.get("/kms/keys")
 
-        # Should fail with 403 (no credentials)
         assert response.status_code == 403
 
     def test_invalid_credentials_fail(self, kms_client):
tests/test_lifecycle.py (new file, 238 lines)
@@ -0,0 +1,238 @@
+import io
+import time
+from datetime import datetime, timedelta, timezone
+from pathlib import Path
+from unittest.mock import MagicMock, patch
+
+import pytest
+
+from app.lifecycle import LifecycleManager, LifecycleResult
+from app.storage import ObjectStorage
+
+
+@pytest.fixture
+def storage(tmp_path: Path):
+    storage_root = tmp_path / "data"
+    storage_root.mkdir(parents=True)
+    return ObjectStorage(storage_root)
+
+
+@pytest.fixture
+def lifecycle_manager(storage):
+    manager = LifecycleManager(storage, interval_seconds=3600)
+    yield manager
+    manager.stop()
+
+
+class TestLifecycleResult:
+    def test_default_values(self):
+        result = LifecycleResult(bucket_name="test-bucket")
+        assert result.bucket_name == "test-bucket"
+        assert result.objects_deleted == 0
+        assert result.versions_deleted == 0
+        assert result.uploads_aborted == 0
+        assert result.errors == []
+        assert result.execution_time_seconds == 0.0
+
+
+class TestLifecycleManager:
+    def test_start_and_stop(self, lifecycle_manager):
+        lifecycle_manager.start()
+        assert lifecycle_manager._timer is not None
+        assert lifecycle_manager._shutdown is False
+
+        lifecycle_manager.stop()
+        assert lifecycle_manager._shutdown is True
+        assert lifecycle_manager._timer is None
+
+    def test_start_only_once(self, lifecycle_manager):
+        lifecycle_manager.start()
+        first_timer = lifecycle_manager._timer
+
+        lifecycle_manager.start()
+        assert lifecycle_manager._timer is first_timer
+
+    def test_enforce_rules_no_lifecycle(self, lifecycle_manager, storage):
+        storage.create_bucket("no-lifecycle-bucket")
+
+        result = lifecycle_manager.enforce_rules("no-lifecycle-bucket")
+        assert result.bucket_name == "no-lifecycle-bucket"
+        assert result.objects_deleted == 0
+
+    def test_enforce_rules_disabled_rule(self, lifecycle_manager, storage):
+        storage.create_bucket("disabled-bucket")
+        storage.set_bucket_lifecycle("disabled-bucket", [
+            {
+                "ID": "disabled-rule",
+                "Status": "Disabled",
+                "Prefix": "",
+                "Expiration": {"Days": 1},
+            }
+        ])
+
+        old_object = storage.put_object(
+            "disabled-bucket",
+            "old-file.txt",
+            io.BytesIO(b"old content"),
+        )
+
+        result = lifecycle_manager.enforce_rules("disabled-bucket")
+        assert result.objects_deleted == 0
+
+    def test_enforce_expiration_by_days(self, lifecycle_manager, storage):
+        storage.create_bucket("expire-bucket")
+        storage.set_bucket_lifecycle("expire-bucket", [
+            {
+                "ID": "expire-30-days",
+                "Status": "Enabled",
+                "Prefix": "",
+                "Expiration": {"Days": 30},
+            }
+        ])
+
+        storage.put_object(
+            "expire-bucket",
+            "recent-file.txt",
+            io.BytesIO(b"recent content"),
+        )
+
+        result = lifecycle_manager.enforce_rules("expire-bucket")
+        assert result.objects_deleted == 0
+
+    def test_enforce_expiration_with_prefix(self, lifecycle_manager, storage):
+        storage.create_bucket("prefix-bucket")
+        storage.set_bucket_lifecycle("prefix-bucket", [
+            {
+                "ID": "expire-logs",
+                "Status": "Enabled",
+                "Prefix": "logs/",
+                "Expiration": {"Days": 1},
+            }
+        ])
+
+        storage.put_object("prefix-bucket", "logs/old.log", io.BytesIO(b"log data"))
+        storage.put_object("prefix-bucket", "data/keep.txt", io.BytesIO(b"keep this"))
+
+        result = lifecycle_manager.enforce_rules("prefix-bucket")
+
+    def test_enforce_all_buckets(self, lifecycle_manager, storage):
+        storage.create_bucket("bucket1")
+        storage.create_bucket("bucket2")
+
+        results = lifecycle_manager.enforce_all_buckets()
+        assert isinstance(results, dict)
+
+    def test_run_now_single_bucket(self, lifecycle_manager, storage):
+        storage.create_bucket("run-now-bucket")
|
||||||
|
|
||||||
|
results = lifecycle_manager.run_now("run-now-bucket")
|
||||||
|
assert "run-now-bucket" in results
|
||||||
|
|
||||||
|
def test_run_now_all_buckets(self, lifecycle_manager, storage):
|
||||||
|
storage.create_bucket("all-bucket-1")
|
||||||
|
storage.create_bucket("all-bucket-2")
|
||||||
|
|
||||||
|
results = lifecycle_manager.run_now()
|
||||||
|
assert isinstance(results, dict)
|
||||||
|
|
||||||
|
def test_enforce_abort_multipart(self, lifecycle_manager, storage):
|
||||||
|
storage.create_bucket("multipart-bucket")
|
||||||
|
storage.set_bucket_lifecycle("multipart-bucket", [
|
||||||
|
{
|
||||||
|
"ID": "abort-old-uploads",
|
||||||
|
"Status": "Enabled",
|
||||||
|
"Prefix": "",
|
||||||
|
"AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
|
||||||
|
}
|
||||||
|
])
|
||||||
|
|
||||||
|
upload_id = storage.initiate_multipart_upload("multipart-bucket", "large-file.bin")
|
||||||
|
|
||||||
|
result = lifecycle_manager.enforce_rules("multipart-bucket")
|
||||||
|
assert result.uploads_aborted == 0
|
||||||
|
|
||||||
|
def test_enforce_noncurrent_version_expiration(self, lifecycle_manager, storage):
|
||||||
|
storage.create_bucket("versioned-bucket")
|
||||||
|
storage.set_bucket_versioning("versioned-bucket", True)
|
||||||
|
storage.set_bucket_lifecycle("versioned-bucket", [
|
||||||
|
{
|
||||||
|
"ID": "expire-old-versions",
|
||||||
|
"Status": "Enabled",
|
||||||
|
"Prefix": "",
|
||||||
|
"NoncurrentVersionExpiration": {"NoncurrentDays": 30},
|
||||||
|
}
|
||||||
|
])
|
||||||
|
|
||||||
|
storage.put_object("versioned-bucket", "file.txt", io.BytesIO(b"v1"))
|
||||||
|
storage.put_object("versioned-bucket", "file.txt", io.BytesIO(b"v2"))
|
||||||
|
|
||||||
|
result = lifecycle_manager.enforce_rules("versioned-bucket")
|
||||||
|
assert result.bucket_name == "versioned-bucket"
|
||||||
|
|
||||||
|
def test_execution_time_tracking(self, lifecycle_manager, storage):
|
||||||
|
storage.create_bucket("timed-bucket")
|
||||||
|
storage.set_bucket_lifecycle("timed-bucket", [
|
||||||
|
{
|
||||||
|
"ID": "timer-test",
|
||||||
|
"Status": "Enabled",
|
||||||
|
"Expiration": {"Days": 1},
|
||||||
|
}
|
||||||
|
])
|
||||||
|
|
||||||
|
result = lifecycle_manager.enforce_rules("timed-bucket")
|
||||||
|
assert result.execution_time_seconds >= 0
|
||||||
|
|
||||||
|
def test_enforce_rules_with_error(self, lifecycle_manager, storage):
|
||||||
|
result = lifecycle_manager.enforce_rules("nonexistent-bucket")
|
||||||
|
assert len(result.errors) > 0 or result.objects_deleted == 0
|
||||||
|
|
||||||
|
def test_lifecycle_with_date_expiration(self, lifecycle_manager, storage):
|
||||||
|
storage.create_bucket("date-bucket")
|
||||||
|
past_date = (datetime.now(timezone.utc) - timedelta(days=1)).strftime("%Y-%m-%dT00:00:00Z")
|
||||||
|
storage.set_bucket_lifecycle("date-bucket", [
|
||||||
|
{
|
||||||
|
"ID": "expire-by-date",
|
||||||
|
"Status": "Enabled",
|
||||||
|
"Prefix": "",
|
||||||
|
"Expiration": {"Date": past_date},
|
||||||
|
}
|
||||||
|
])
|
||||||
|
|
||||||
|
storage.put_object("date-bucket", "should-expire.txt", io.BytesIO(b"content"))
|
||||||
|
|
||||||
|
result = lifecycle_manager.enforce_rules("date-bucket")
|
||||||
|
|
||||||
|
def test_enforce_with_filter_prefix(self, lifecycle_manager, storage):
|
||||||
|
storage.create_bucket("filter-bucket")
|
||||||
|
storage.set_bucket_lifecycle("filter-bucket", [
|
||||||
|
{
|
||||||
|
"ID": "filter-prefix-rule",
|
||||||
|
"Status": "Enabled",
|
||||||
|
"Filter": {"Prefix": "archive/"},
|
||||||
|
"Expiration": {"Days": 1},
|
||||||
|
}
|
||||||
|
])
|
||||||
|
|
||||||
|
result = lifecycle_manager.enforce_rules("filter-bucket")
|
||||||
|
assert result.bucket_name == "filter-bucket"
|
||||||
|
|
||||||
|
|
||||||
|
class TestLifecycleManagerScheduling:
|
||||||
|
def test_schedule_next_respects_shutdown(self, storage):
|
||||||
|
manager = LifecycleManager(storage, interval_seconds=1)
|
||||||
|
manager._shutdown = True
|
||||||
|
manager._schedule_next()
|
||||||
|
assert manager._timer is None
|
||||||
|
|
||||||
|
@patch.object(LifecycleManager, "enforce_all_buckets")
|
||||||
|
def test_run_enforcement_catches_exceptions(self, mock_enforce, storage):
|
||||||
|
mock_enforce.side_effect = Exception("Test error")
|
||||||
|
manager = LifecycleManager(storage, interval_seconds=3600)
|
||||||
|
manager._shutdown = True
|
||||||
|
manager._run_enforcement()
|
||||||
|
|
||||||
|
def test_shutdown_flag_prevents_scheduling(self, storage):
|
||||||
|
manager = LifecycleManager(storage, interval_seconds=1)
|
||||||
|
manager.start()
|
||||||
|
manager.stop()
|
||||||
|
assert manager._shutdown is True
|
||||||
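The scheduling tests above pin down a re-arming timer contract: `start()` is idempotent, `stop()` cancels and clears the timer, and `_schedule_next()` refuses to re-arm once the shutdown flag is set. A minimal sketch of that pattern with `threading.Timer` — `PeriodicEnforcer` is a hypothetical stand-in for illustration, not the project's actual `LifecycleManager`:

```python
import threading


class PeriodicEnforcer:
    """Illustrative sketch of the start/stop contract the tests assume."""

    def __init__(self, interval_seconds: float, task):
        self._interval = interval_seconds
        self._task = task  # callable invoked on each tick
        self._timer = None
        self._shutdown = False

    def start(self):
        if self._timer is not None:
            return  # idempotent: a second start() keeps the existing timer
        self._shutdown = False
        self._schedule_next()

    def stop(self):
        self._shutdown = True
        if self._timer is not None:
            self._timer.cancel()
            self._timer = None

    def _schedule_next(self):
        if self._shutdown:
            self._timer = None  # never re-arm after stop()
            return
        self._timer = threading.Timer(self._interval, self._run)
        self._timer.daemon = True
        self._timer.start()

    def _run(self):
        try:
            self._task()  # an exception must not kill the schedule
        except Exception:
            pass
        self._timer = None
        self._schedule_next()
```

The daemon flag keeps a pending timer from blocking interpreter exit, which is why the fixture above can simply call `stop()` in its teardown.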
@@ -4,7 +4,6 @@ import pytest
 from xml.etree.ElementTree import fromstring
 
 
-# Helper to create file-like stream
 def _stream(data: bytes):
     return io.BytesIO(data)
 
@@ -19,13 +18,11 @@ class TestListObjectsV2:
     """Tests for ListObjectsV2 endpoint."""
 
     def test_list_objects_v2_basic(self, client, signer, storage):
-        # Create bucket and objects
         storage.create_bucket("v2-test")
         storage.put_object("v2-test", "file1.txt", _stream(b"hello"))
         storage.put_object("v2-test", "file2.txt", _stream(b"world"))
         storage.put_object("v2-test", "folder/file3.txt", _stream(b"nested"))
 
-        # ListObjectsV2 request
         headers = signer("GET", "/v2-test?list-type=2")
         resp = client.get("/v2-test", query_string={"list-type": "2"}, headers=headers)
         assert resp.status_code == 200
@@ -46,7 +43,6 @@ class TestListObjectsV2:
         storage.put_object("prefix-test", "photos/2024/mar.jpg", _stream(b"mar"))
         storage.put_object("prefix-test", "docs/readme.md", _stream(b"readme"))
 
-        # List with prefix and delimiter
         headers = signer("GET", "/prefix-test?list-type=2&prefix=photos/&delimiter=/")
         resp = client.get(
             "/prefix-test",
@@ -56,11 +52,10 @@ class TestListObjectsV2:
         assert resp.status_code == 200
 
         root = fromstring(resp.data)
-        # Should show common prefixes for 2023/ and 2024/
         prefixes = [el.find("Prefix").text for el in root.findall("CommonPrefixes")]
         assert "photos/2023/" in prefixes
         assert "photos/2024/" in prefixes
-        assert len(root.findall("Contents")) == 0  # No direct files under photos/
+        assert len(root.findall("Contents")) == 0
 
 
 class TestPutBucketVersioning:
@@ -78,7 +73,6 @@ class TestPutBucketVersioning:
         resp = client.put("/version-test", query_string={"versioning": ""}, data=payload, headers=headers)
         assert resp.status_code == 200
 
-        # Verify via GET
         headers = signer("GET", "/version-test?versioning")
         resp = client.get("/version-test", query_string={"versioning": ""}, headers=headers)
         root = fromstring(resp.data)
@@ -110,15 +104,13 @@ class TestDeleteBucketTagging:
         storage.create_bucket("tag-delete-test")
         storage.set_bucket_tags("tag-delete-test", [{"Key": "env", "Value": "test"}])
 
-        # Delete tags
         headers = signer("DELETE", "/tag-delete-test?tagging")
         resp = client.delete("/tag-delete-test", query_string={"tagging": ""}, headers=headers)
         assert resp.status_code == 204
 
-        # Verify tags are gone
         headers = signer("GET", "/tag-delete-test?tagging")
         resp = client.get("/tag-delete-test", query_string={"tagging": ""}, headers=headers)
-        assert resp.status_code == 404  # NoSuchTagSet
+        assert resp.status_code == 404
 
 
 class TestDeleteBucketCors:
@@ -130,15 +122,13 @@ class TestDeleteBucketCors:
             {"AllowedOrigins": ["*"], "AllowedMethods": ["GET"]}
         ])
 
-        # Delete CORS
         headers = signer("DELETE", "/cors-delete-test?cors")
         resp = client.delete("/cors-delete-test", query_string={"cors": ""}, headers=headers)
         assert resp.status_code == 204
 
-        # Verify CORS is gone
         headers = signer("GET", "/cors-delete-test?cors")
         resp = client.get("/cors-delete-test", query_string={"cors": ""}, headers=headers)
-        assert resp.status_code == 404  # NoSuchCORSConfiguration
+        assert resp.status_code == 404
 
 
 class TestGetBucketLocation:
@@ -173,7 +163,6 @@ class TestBucketAcl:
     def test_put_bucket_acl(self, client, signer, storage):
         storage.create_bucket("acl-put-test")
 
-        # PUT with canned ACL header
         headers = signer("PUT", "/acl-put-test?acl")
         headers["x-amz-acl"] = "public-read"
         resp = client.put("/acl-put-test", query_string={"acl": ""}, headers=headers)
@@ -188,7 +177,6 @@ class TestCopyObject:
         storage.create_bucket("copy-dst")
         storage.put_object("copy-src", "original.txt", _stream(b"original content"))
 
-        # Copy object
         headers = signer("PUT", "/copy-dst/copied.txt")
         headers["x-amz-copy-source"] = "/copy-src/original.txt"
         resp = client.put("/copy-dst/copied.txt", headers=headers)
@@ -199,7 +187,6 @@ class TestCopyObject:
         assert root.find("ETag") is not None
         assert root.find("LastModified") is not None
 
-        # Verify copy exists
         path = storage.get_object_path("copy-dst", "copied.txt")
         assert path.read_bytes() == b"original content"
 
@@ -208,7 +195,6 @@ class TestCopyObject:
         storage.create_bucket("meta-dst")
         storage.put_object("meta-src", "source.txt", _stream(b"data"), metadata={"old": "value"})
 
-        # Copy with REPLACE directive
         headers = signer("PUT", "/meta-dst/target.txt")
         headers["x-amz-copy-source"] = "/meta-src/source.txt"
         headers["x-amz-metadata-directive"] = "REPLACE"
@@ -216,7 +202,6 @@ class TestCopyObject:
         resp = client.put("/meta-dst/target.txt", headers=headers)
         assert resp.status_code == 200
 
-        # Verify new metadata (note: header keys are Title-Cased)
         meta = storage.get_object_metadata("meta-dst", "target.txt")
         assert "New" in meta or "new" in meta
         assert "old" not in meta and "Old" not in meta
@@ -229,7 +214,6 @@ class TestObjectTagging:
         storage.create_bucket("obj-tag-test")
         storage.put_object("obj-tag-test", "tagged.txt", _stream(b"content"))
 
-        # PUT tags
         payload = b"""<?xml version="1.0" encoding="UTF-8"?>
 <Tagging>
   <TagSet>
@@ -247,7 +231,6 @@ class TestObjectTagging:
         )
         assert resp.status_code == 204
 
-        # GET tags
         headers = signer("GET", "/obj-tag-test/tagged.txt?tagging")
         resp = client.get("/obj-tag-test/tagged.txt", query_string={"tagging": ""}, headers=headers)
         assert resp.status_code == 200
@@ -257,12 +240,10 @@ class TestObjectTagging:
         assert tags["project"] == "demo"
         assert tags["env"] == "test"
 
-        # DELETE tags
         headers = signer("DELETE", "/obj-tag-test/tagged.txt?tagging")
         resp = client.delete("/obj-tag-test/tagged.txt", query_string={"tagging": ""}, headers=headers)
         assert resp.status_code == 204
 
-        # Verify empty
         headers = signer("GET", "/obj-tag-test/tagged.txt?tagging")
         resp = client.get("/obj-tag-test/tagged.txt", query_string={"tagging": ""}, headers=headers)
         root = fromstring(resp.data)
@@ -272,7 +253,6 @@ class TestObjectTagging:
         storage.create_bucket("tag-limit")
         storage.put_object("tag-limit", "file.txt", _stream(b"x"))
 
-        # Try to set 11 tags (limit is 10)
         tags = "".join(f"<Tag><Key>key{i}</Key><Value>val{i}</Value></Tag>" for i in range(11))
         payload = f"<Tagging><TagSet>{tags}</TagSet></Tagging>".encode()
 
374	tests/test_notifications.py	Normal file
@@ -0,0 +1,374 @@
import json
import time
from datetime import datetime, timezone
from pathlib import Path
from unittest.mock import MagicMock, patch

import pytest

from app.notifications import (
    NotificationConfiguration,
    NotificationEvent,
    NotificationService,
    WebhookDestination,
)


class TestNotificationEvent:
    def test_default_values(self):
        event = NotificationEvent(
            event_name="s3:ObjectCreated:Put",
            bucket_name="test-bucket",
            object_key="test/key.txt",
        )
        assert event.event_name == "s3:ObjectCreated:Put"
        assert event.bucket_name == "test-bucket"
        assert event.object_key == "test/key.txt"
        assert event.object_size == 0
        assert event.etag == ""
        assert event.version_id is None
        assert event.request_id != ""

    def test_to_s3_event(self):
        event = NotificationEvent(
            event_name="s3:ObjectCreated:Put",
            bucket_name="my-bucket",
            object_key="my/object.txt",
            object_size=1024,
            etag="abc123",
            version_id="v1",
            source_ip="192.168.1.1",
            user_identity="user123",
        )
        result = event.to_s3_event()

        assert "Records" in result
        assert len(result["Records"]) == 1

        record = result["Records"][0]
        assert record["eventVersion"] == "2.1"
        assert record["eventSource"] == "myfsio:s3"
        assert record["eventName"] == "s3:ObjectCreated:Put"
        assert record["s3"]["bucket"]["name"] == "my-bucket"
        assert record["s3"]["object"]["key"] == "my/object.txt"
        assert record["s3"]["object"]["size"] == 1024
        assert record["s3"]["object"]["eTag"] == "abc123"
        assert record["s3"]["object"]["versionId"] == "v1"
        assert record["userIdentity"]["principalId"] == "user123"
        assert record["requestParameters"]["sourceIPAddress"] == "192.168.1.1"


class TestWebhookDestination:
    def test_default_values(self):
        dest = WebhookDestination(url="http://example.com/webhook")
        assert dest.url == "http://example.com/webhook"
        assert dest.headers == {}
        assert dest.timeout_seconds == 30
        assert dest.retry_count == 3
        assert dest.retry_delay_seconds == 1

    def test_to_dict(self):
        dest = WebhookDestination(
            url="http://example.com/webhook",
            headers={"X-Custom": "value"},
            timeout_seconds=60,
            retry_count=5,
            retry_delay_seconds=2,
        )
        result = dest.to_dict()
        assert result["url"] == "http://example.com/webhook"
        assert result["headers"] == {"X-Custom": "value"}
        assert result["timeout_seconds"] == 60
        assert result["retry_count"] == 5
        assert result["retry_delay_seconds"] == 2

    def test_from_dict(self):
        data = {
            "url": "http://hook.example.com",
            "headers": {"Authorization": "Bearer token"},
            "timeout_seconds": 45,
            "retry_count": 2,
            "retry_delay_seconds": 5,
        }
        dest = WebhookDestination.from_dict(data)
        assert dest.url == "http://hook.example.com"
        assert dest.headers == {"Authorization": "Bearer token"}
        assert dest.timeout_seconds == 45
        assert dest.retry_count == 2
        assert dest.retry_delay_seconds == 5


class TestNotificationConfiguration:
    def test_matches_event_exact_match(self):
        config = NotificationConfiguration(
            id="config1",
            events=["s3:ObjectCreated:Put"],
            destination=WebhookDestination(url="http://example.com"),
        )
        assert config.matches_event("s3:ObjectCreated:Put", "any/key.txt") is True
        assert config.matches_event("s3:ObjectCreated:Post", "any/key.txt") is False

    def test_matches_event_wildcard(self):
        config = NotificationConfiguration(
            id="config1",
            events=["s3:ObjectCreated:*"],
            destination=WebhookDestination(url="http://example.com"),
        )
        assert config.matches_event("s3:ObjectCreated:Put", "key.txt") is True
        assert config.matches_event("s3:ObjectCreated:Copy", "key.txt") is True
        assert config.matches_event("s3:ObjectRemoved:Delete", "key.txt") is False

    def test_matches_event_with_prefix_filter(self):
        config = NotificationConfiguration(
            id="config1",
            events=["s3:ObjectCreated:*"],
            destination=WebhookDestination(url="http://example.com"),
            prefix_filter="logs/",
        )
        assert config.matches_event("s3:ObjectCreated:Put", "logs/app.log") is True
        assert config.matches_event("s3:ObjectCreated:Put", "data/file.txt") is False

    def test_matches_event_with_suffix_filter(self):
        config = NotificationConfiguration(
            id="config1",
            events=["s3:ObjectCreated:*"],
            destination=WebhookDestination(url="http://example.com"),
            suffix_filter=".jpg",
        )
        assert config.matches_event("s3:ObjectCreated:Put", "photos/image.jpg") is True
        assert config.matches_event("s3:ObjectCreated:Put", "photos/image.png") is False

    def test_matches_event_with_both_filters(self):
        config = NotificationConfiguration(
            id="config1",
            events=["s3:ObjectCreated:*"],
            destination=WebhookDestination(url="http://example.com"),
            prefix_filter="images/",
            suffix_filter=".png",
        )
        assert config.matches_event("s3:ObjectCreated:Put", "images/photo.png") is True
        assert config.matches_event("s3:ObjectCreated:Put", "images/photo.jpg") is False
        assert config.matches_event("s3:ObjectCreated:Put", "documents/file.png") is False

    def test_to_dict(self):
        config = NotificationConfiguration(
            id="my-config",
            events=["s3:ObjectCreated:Put", "s3:ObjectRemoved:Delete"],
            destination=WebhookDestination(url="http://example.com"),
            prefix_filter="logs/",
            suffix_filter=".log",
        )
        result = config.to_dict()
        assert result["Id"] == "my-config"
        assert result["Events"] == ["s3:ObjectCreated:Put", "s3:ObjectRemoved:Delete"]
        assert "Destination" in result
        assert result["Filter"]["Key"]["FilterRules"][0]["Value"] == "logs/"
        assert result["Filter"]["Key"]["FilterRules"][1]["Value"] == ".log"

    def test_from_dict(self):
        data = {
            "Id": "parsed-config",
            "Events": ["s3:ObjectCreated:*"],
            "Destination": {"url": "http://hook.example.com"},
            "Filter": {
                "Key": {
                    "FilterRules": [
                        {"Name": "prefix", "Value": "data/"},
                        {"Name": "suffix", "Value": ".csv"},
                    ]
                }
            },
        }
        config = NotificationConfiguration.from_dict(data)
        assert config.id == "parsed-config"
        assert config.events == ["s3:ObjectCreated:*"]
        assert config.destination.url == "http://hook.example.com"
        assert config.prefix_filter == "data/"
        assert config.suffix_filter == ".csv"


@pytest.fixture
def notification_service(tmp_path: Path):
    service = NotificationService(tmp_path, worker_count=1)
    yield service
    service.shutdown()


class TestNotificationService:
    def test_get_bucket_notifications_empty(self, notification_service):
        result = notification_service.get_bucket_notifications("nonexistent-bucket")
        assert result == []

    def test_set_and_get_bucket_notifications(self, notification_service):
        configs = [
            NotificationConfiguration(
                id="config1",
                events=["s3:ObjectCreated:*"],
                destination=WebhookDestination(url="http://example.com/webhook1"),
            ),
            NotificationConfiguration(
                id="config2",
                events=["s3:ObjectRemoved:*"],
                destination=WebhookDestination(url="http://example.com/webhook2"),
            ),
        ]
        notification_service.set_bucket_notifications("my-bucket", configs)

        retrieved = notification_service.get_bucket_notifications("my-bucket")
        assert len(retrieved) == 2
        assert retrieved[0].id == "config1"
        assert retrieved[1].id == "config2"

    def test_delete_bucket_notifications(self, notification_service):
        configs = [
            NotificationConfiguration(
                id="to-delete",
                events=["s3:ObjectCreated:*"],
                destination=WebhookDestination(url="http://example.com"),
            ),
        ]
        notification_service.set_bucket_notifications("delete-bucket", configs)
        assert len(notification_service.get_bucket_notifications("delete-bucket")) == 1

        notification_service.delete_bucket_notifications("delete-bucket")
        notification_service._configs.clear()
        assert len(notification_service.get_bucket_notifications("delete-bucket")) == 0

    def test_emit_event_no_config(self, notification_service):
        event = NotificationEvent(
            event_name="s3:ObjectCreated:Put",
            bucket_name="no-config-bucket",
            object_key="test.txt",
        )
        notification_service.emit_event(event)
        assert notification_service._stats["events_queued"] == 0

    def test_emit_event_matching_config(self, notification_service):
        configs = [
            NotificationConfiguration(
                id="match-config",
                events=["s3:ObjectCreated:*"],
                destination=WebhookDestination(url="http://example.com/webhook"),
            ),
        ]
        notification_service.set_bucket_notifications("event-bucket", configs)

        event = NotificationEvent(
            event_name="s3:ObjectCreated:Put",
            bucket_name="event-bucket",
            object_key="test.txt",
        )
        notification_service.emit_event(event)
        assert notification_service._stats["events_queued"] == 1

    def test_emit_event_non_matching_config(self, notification_service):
        configs = [
            NotificationConfiguration(
                id="delete-only",
                events=["s3:ObjectRemoved:*"],
                destination=WebhookDestination(url="http://example.com/webhook"),
            ),
        ]
        notification_service.set_bucket_notifications("delete-bucket", configs)

        event = NotificationEvent(
            event_name="s3:ObjectCreated:Put",
            bucket_name="delete-bucket",
            object_key="test.txt",
        )
        notification_service.emit_event(event)
        assert notification_service._stats["events_queued"] == 0

    def test_emit_object_created(self, notification_service):
        configs = [
            NotificationConfiguration(
                id="create-config",
                events=["s3:ObjectCreated:Put"],
                destination=WebhookDestination(url="http://example.com/webhook"),
            ),
        ]
        notification_service.set_bucket_notifications("create-bucket", configs)

        notification_service.emit_object_created(
            "create-bucket",
            "new-file.txt",
            size=1024,
            etag="abc123",
            operation="Put",
        )
        assert notification_service._stats["events_queued"] == 1

    def test_emit_object_removed(self, notification_service):
        configs = [
            NotificationConfiguration(
                id="remove-config",
                events=["s3:ObjectRemoved:Delete"],
                destination=WebhookDestination(url="http://example.com/webhook"),
            ),
        ]
        notification_service.set_bucket_notifications("remove-bucket", configs)

        notification_service.emit_object_removed(
            "remove-bucket",
            "deleted-file.txt",
            operation="Delete",
        )
        assert notification_service._stats["events_queued"] == 1

    def test_get_stats(self, notification_service):
        stats = notification_service.get_stats()
        assert "events_queued" in stats
        assert "events_sent" in stats
        assert "events_failed" in stats

    @patch("app.notifications.requests.post")
    def test_send_notification_success(self, mock_post, notification_service):
        mock_response = MagicMock()
        mock_response.status_code = 200
        mock_post.return_value = mock_response

        event = NotificationEvent(
            event_name="s3:ObjectCreated:Put",
            bucket_name="test-bucket",
            object_key="test.txt",
        )
        destination = WebhookDestination(url="http://example.com/webhook")

        notification_service._send_notification(event, destination)
        mock_post.assert_called_once()

    @patch("app.notifications.requests.post")
    def test_send_notification_retry_on_failure(self, mock_post, notification_service):
        mock_response = MagicMock()
        mock_response.status_code = 500
        mock_response.text = "Internal Server Error"
        mock_post.return_value = mock_response

        event = NotificationEvent(
            event_name="s3:ObjectCreated:Put",
            bucket_name="test-bucket",
            object_key="test.txt",
        )
        destination = WebhookDestination(
            url="http://example.com/webhook",
            retry_count=2,
            retry_delay_seconds=0,
        )

        with pytest.raises(RuntimeError) as exc_info:
            notification_service._send_notification(event, destination)
        assert "Failed after 2 attempts" in str(exc_info.value)
        assert mock_post.call_count == 2

    def test_notification_caching(self, notification_service):
|
||||||
|
configs = [
|
||||||
|
NotificationConfiguration(
|
||||||
|
id="cached-config",
|
||||||
|
events=["s3:ObjectCreated:*"],
|
||||||
|
destination=WebhookDestination(url="http://example.com"),
|
||||||
|
),
|
||||||
|
]
|
||||||
|
notification_service.set_bucket_notifications("cached-bucket", configs)
|
||||||
|
|
||||||
|
notification_service.get_bucket_notifications("cached-bucket")
|
||||||
|
assert "cached-bucket" in notification_service._configs
|
||||||
332	tests/test_object_lock.py	Normal file
@@ -0,0 +1,332 @@
import json
from datetime import datetime, timedelta, timezone
from pathlib import Path

import pytest

from app.object_lock import (
    ObjectLockConfig,
    ObjectLockError,
    ObjectLockRetention,
    ObjectLockService,
    RetentionMode,
)


class TestRetentionMode:
    def test_governance_mode(self):
        assert RetentionMode.GOVERNANCE.value == "GOVERNANCE"

    def test_compliance_mode(self):
        assert RetentionMode.COMPLIANCE.value == "COMPLIANCE"


class TestObjectLockRetention:
    def test_to_dict(self):
        retain_until = datetime(2025, 12, 31, 23, 59, 59, tzinfo=timezone.utc)
        retention = ObjectLockRetention(
            mode=RetentionMode.GOVERNANCE,
            retain_until_date=retain_until,
        )
        result = retention.to_dict()
        assert result["Mode"] == "GOVERNANCE"
        assert "2025-12-31" in result["RetainUntilDate"]

    def test_from_dict(self):
        data = {
            "Mode": "COMPLIANCE",
            "RetainUntilDate": "2030-06-15T12:00:00+00:00",
        }
        retention = ObjectLockRetention.from_dict(data)
        assert retention is not None
        assert retention.mode == RetentionMode.COMPLIANCE
        assert retention.retain_until_date.year == 2030

    def test_from_dict_empty(self):
        result = ObjectLockRetention.from_dict({})
        assert result is None

    def test_from_dict_missing_mode(self):
        data = {"RetainUntilDate": "2030-06-15T12:00:00+00:00"}
        result = ObjectLockRetention.from_dict(data)
        assert result is None

    def test_from_dict_missing_date(self):
        data = {"Mode": "GOVERNANCE"}
        result = ObjectLockRetention.from_dict(data)
        assert result is None

    def test_is_expired_future_date(self):
        future = datetime.now(timezone.utc) + timedelta(days=30)
        retention = ObjectLockRetention(
            mode=RetentionMode.GOVERNANCE,
            retain_until_date=future,
        )
        assert retention.is_expired() is False

    def test_is_expired_past_date(self):
        past = datetime.now(timezone.utc) - timedelta(days=30)
        retention = ObjectLockRetention(
            mode=RetentionMode.GOVERNANCE,
            retain_until_date=past,
        )
        assert retention.is_expired() is True
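
The expiry tests above hinge on comparing the retention deadline against a timezone-aware clock. A minimal sketch of how such a check can work — `RetentionSketch` is a hypothetical stand-in, not the actual `ObjectLockRetention` from `app.object_lock`:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone


@dataclass
class RetentionSketch:
    """Hypothetical stand-in for ObjectLockRetention's expiry logic."""

    retain_until_date: datetime  # assumed timezone-aware

    def is_expired(self) -> bool:
        # Compare against an aware "now"; mixing naive and aware datetimes
        # raises TypeError, which surfaces bad input early.
        return datetime.now(timezone.utc) >= self.retain_until_date


now = datetime.now(timezone.utc)
print(RetentionSketch(now - timedelta(days=30)).is_expired())  # True
print(RetentionSketch(now + timedelta(days=30)).is_expired())  # False
```

Storing only aware datetimes keeps the comparison unambiguous regardless of the server's local timezone.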


class TestObjectLockConfig:
    def test_to_dict_enabled(self):
        config = ObjectLockConfig(enabled=True)
        result = config.to_dict()
        assert result["ObjectLockEnabled"] == "Enabled"

    def test_to_dict_disabled(self):
        config = ObjectLockConfig(enabled=False)
        result = config.to_dict()
        assert result["ObjectLockEnabled"] == "Disabled"

    def test_from_dict_enabled(self):
        data = {"ObjectLockEnabled": "Enabled"}
        config = ObjectLockConfig.from_dict(data)
        assert config.enabled is True

    def test_from_dict_disabled(self):
        data = {"ObjectLockEnabled": "Disabled"}
        config = ObjectLockConfig.from_dict(data)
        assert config.enabled is False

    def test_from_dict_with_default_retention_days(self):
        data = {
            "ObjectLockEnabled": "Enabled",
            "Rule": {
                "DefaultRetention": {
                    "Mode": "GOVERNANCE",
                    "Days": 30,
                }
            },
        }
        config = ObjectLockConfig.from_dict(data)
        assert config.enabled is True
        assert config.default_retention is not None
        assert config.default_retention.mode == RetentionMode.GOVERNANCE

    def test_from_dict_with_default_retention_years(self):
        data = {
            "ObjectLockEnabled": "Enabled",
            "Rule": {
                "DefaultRetention": {
                    "Mode": "COMPLIANCE",
                    "Years": 1,
                }
            },
        }
        config = ObjectLockConfig.from_dict(data)
        assert config.enabled is True
        assert config.default_retention is not None
        assert config.default_retention.mode == RetentionMode.COMPLIANCE


@pytest.fixture
def lock_service(tmp_path: Path):
    return ObjectLockService(tmp_path)


class TestObjectLockService:
    def test_get_bucket_lock_config_default(self, lock_service):
        config = lock_service.get_bucket_lock_config("nonexistent-bucket")
        assert config.enabled is False
        assert config.default_retention is None

    def test_set_and_get_bucket_lock_config(self, lock_service):
        config = ObjectLockConfig(enabled=True)
        lock_service.set_bucket_lock_config("my-bucket", config)

        retrieved = lock_service.get_bucket_lock_config("my-bucket")
        assert retrieved.enabled is True

    def test_enable_bucket_lock(self, lock_service):
        lock_service.enable_bucket_lock("lock-bucket")

        config = lock_service.get_bucket_lock_config("lock-bucket")
        assert config.enabled is True

    def test_is_bucket_lock_enabled(self, lock_service):
        assert lock_service.is_bucket_lock_enabled("new-bucket") is False

        lock_service.enable_bucket_lock("new-bucket")
        assert lock_service.is_bucket_lock_enabled("new-bucket") is True

    def test_get_object_retention_not_set(self, lock_service):
        result = lock_service.get_object_retention("bucket", "key.txt")
        assert result is None

    def test_set_and_get_object_retention(self, lock_service):
        future = datetime.now(timezone.utc) + timedelta(days=30)
        retention = ObjectLockRetention(
            mode=RetentionMode.GOVERNANCE,
            retain_until_date=future,
        )
        lock_service.set_object_retention("bucket", "key.txt", retention)

        retrieved = lock_service.get_object_retention("bucket", "key.txt")
        assert retrieved is not None
        assert retrieved.mode == RetentionMode.GOVERNANCE

    def test_cannot_modify_compliance_retention(self, lock_service):
        future = datetime.now(timezone.utc) + timedelta(days=30)
        retention = ObjectLockRetention(
            mode=RetentionMode.COMPLIANCE,
            retain_until_date=future,
        )
        lock_service.set_object_retention("bucket", "locked.txt", retention)

        new_retention = ObjectLockRetention(
            mode=RetentionMode.GOVERNANCE,
            retain_until_date=future + timedelta(days=10),
        )
        with pytest.raises(ObjectLockError) as exc_info:
            lock_service.set_object_retention("bucket", "locked.txt", new_retention)
        assert "COMPLIANCE" in str(exc_info.value)

    def test_cannot_modify_governance_without_bypass(self, lock_service):
        future = datetime.now(timezone.utc) + timedelta(days=30)
        retention = ObjectLockRetention(
            mode=RetentionMode.GOVERNANCE,
            retain_until_date=future,
        )
        lock_service.set_object_retention("bucket", "gov.txt", retention)

        new_retention = ObjectLockRetention(
            mode=RetentionMode.GOVERNANCE,
            retain_until_date=future + timedelta(days=10),
        )
        with pytest.raises(ObjectLockError) as exc_info:
            lock_service.set_object_retention("bucket", "gov.txt", new_retention)
        assert "GOVERNANCE" in str(exc_info.value)

    def test_can_modify_governance_with_bypass(self, lock_service):
        future = datetime.now(timezone.utc) + timedelta(days=30)
        retention = ObjectLockRetention(
            mode=RetentionMode.GOVERNANCE,
            retain_until_date=future,
        )
        lock_service.set_object_retention("bucket", "bypassable.txt", retention)

        new_retention = ObjectLockRetention(
            mode=RetentionMode.GOVERNANCE,
            retain_until_date=future + timedelta(days=10),
        )
        lock_service.set_object_retention("bucket", "bypassable.txt", new_retention, bypass_governance=True)
        retrieved = lock_service.get_object_retention("bucket", "bypassable.txt")
        assert retrieved.retain_until_date > future

    def test_can_modify_expired_retention(self, lock_service):
        past = datetime.now(timezone.utc) - timedelta(days=30)
        retention = ObjectLockRetention(
            mode=RetentionMode.COMPLIANCE,
            retain_until_date=past,
        )
        lock_service.set_object_retention("bucket", "expired.txt", retention)

        future = datetime.now(timezone.utc) + timedelta(days=30)
        new_retention = ObjectLockRetention(
            mode=RetentionMode.GOVERNANCE,
            retain_until_date=future,
        )
        lock_service.set_object_retention("bucket", "expired.txt", new_retention)
        retrieved = lock_service.get_object_retention("bucket", "expired.txt")
        assert retrieved.mode == RetentionMode.GOVERNANCE

    def test_get_legal_hold_not_set(self, lock_service):
        result = lock_service.get_legal_hold("bucket", "key.txt")
        assert result is False

    def test_set_and_get_legal_hold(self, lock_service):
        lock_service.set_legal_hold("bucket", "held.txt", True)
        assert lock_service.get_legal_hold("bucket", "held.txt") is True

        lock_service.set_legal_hold("bucket", "held.txt", False)
        assert lock_service.get_legal_hold("bucket", "held.txt") is False

    def test_can_delete_object_no_lock(self, lock_service):
        can_delete, reason = lock_service.can_delete_object("bucket", "unlocked.txt")
        assert can_delete is True
        assert reason == ""

    def test_cannot_delete_object_with_legal_hold(self, lock_service):
        lock_service.set_legal_hold("bucket", "held.txt", True)

        can_delete, reason = lock_service.can_delete_object("bucket", "held.txt")
        assert can_delete is False
        assert "legal hold" in reason.lower()

    def test_cannot_delete_object_with_compliance_retention(self, lock_service):
        future = datetime.now(timezone.utc) + timedelta(days=30)
        retention = ObjectLockRetention(
            mode=RetentionMode.COMPLIANCE,
            retain_until_date=future,
        )
        lock_service.set_object_retention("bucket", "compliant.txt", retention)

        can_delete, reason = lock_service.can_delete_object("bucket", "compliant.txt")
        assert can_delete is False
        assert "COMPLIANCE" in reason

    def test_cannot_delete_governance_without_bypass(self, lock_service):
        future = datetime.now(timezone.utc) + timedelta(days=30)
        retention = ObjectLockRetention(
            mode=RetentionMode.GOVERNANCE,
            retain_until_date=future,
        )
        lock_service.set_object_retention("bucket", "governed.txt", retention)

        can_delete, reason = lock_service.can_delete_object("bucket", "governed.txt")
        assert can_delete is False
        assert "GOVERNANCE" in reason

    def test_can_delete_governance_with_bypass(self, lock_service):
        future = datetime.now(timezone.utc) + timedelta(days=30)
        retention = ObjectLockRetention(
            mode=RetentionMode.GOVERNANCE,
            retain_until_date=future,
        )
        lock_service.set_object_retention("bucket", "governed.txt", retention)

        can_delete, reason = lock_service.can_delete_object("bucket", "governed.txt", bypass_governance=True)
        assert can_delete is True
        assert reason == ""

    def test_can_delete_expired_retention(self, lock_service):
        past = datetime.now(timezone.utc) - timedelta(days=30)
        retention = ObjectLockRetention(
            mode=RetentionMode.COMPLIANCE,
            retain_until_date=past,
        )
        lock_service.set_object_retention("bucket", "expired.txt", retention)

        can_delete, reason = lock_service.can_delete_object("bucket", "expired.txt")
        assert can_delete is True

    def test_can_overwrite_is_same_as_delete(self, lock_service):
        future = datetime.now(timezone.utc) + timedelta(days=30)
        retention = ObjectLockRetention(
            mode=RetentionMode.GOVERNANCE,
            retain_until_date=future,
        )
        lock_service.set_object_retention("bucket", "overwrite.txt", retention)

        can_overwrite, _ = lock_service.can_overwrite_object("bucket", "overwrite.txt")
        can_delete, _ = lock_service.can_delete_object("bucket", "overwrite.txt")
        assert can_overwrite == can_delete

    def test_delete_object_lock_metadata(self, lock_service):
        lock_service.set_legal_hold("bucket", "cleanup.txt", True)
        lock_service.delete_object_lock_metadata("bucket", "cleanup.txt")

        assert lock_service.get_legal_hold("bucket", "cleanup.txt") is False

    def test_config_caching(self, lock_service):
        config = ObjectLockConfig(enabled=True)
        lock_service.set_bucket_lock_config("cached-bucket", config)

        lock_service.get_bucket_lock_config("cached-bucket")
        assert "cached-bucket" in lock_service._config_cache
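
The service tests above encode a precedence for deletion checks: a legal hold always blocks, unexpired COMPLIANCE retention can never be bypassed, and unexpired GOVERNANCE retention can be bypassed explicitly. A hedged sketch of that decision order — the function name and signature are illustrative, not the actual `ObjectLockService` internals:

```python
from __future__ import annotations

from datetime import datetime, timedelta, timezone


def can_delete_sketch(legal_hold: bool, mode: str | None,
                      retain_until: datetime | None,
                      bypass_governance: bool = False) -> tuple[bool, str]:
    """Illustrative precedence: legal hold first, then unexpired retention."""
    if legal_hold:
        return False, "object is under legal hold"
    if mode and retain_until and retain_until > datetime.now(timezone.utc):
        if mode == "COMPLIANCE":
            return False, "COMPLIANCE retention active"   # never bypassable
        if mode == "GOVERNANCE" and not bypass_governance:
            return False, "GOVERNANCE retention active"
    return True, ""


future = datetime.now(timezone.utc) + timedelta(days=30)
print(can_delete_sketch(True, None, None))                   # legal hold blocks
print(can_delete_sketch(False, "GOVERNANCE", future))        # blocked without bypass
print(can_delete_sketch(False, "GOVERNANCE", future, True))  # (True, '')
```

Checking the hold before the retention rule is what makes `test_cannot_delete_object_with_legal_hold` pass even when no retention is set.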
297	tests/test_operation_metrics.py	Normal file
@@ -0,0 +1,297 @@
import threading
import time
from pathlib import Path

import pytest

from app.operation_metrics import (
    OperationMetricsCollector,
    OperationStats,
    classify_endpoint,
)


class TestOperationStats:
    def test_initial_state(self):
        stats = OperationStats()
        assert stats.count == 0
        assert stats.success_count == 0
        assert stats.error_count == 0
        assert stats.latency_sum_ms == 0.0
        assert stats.bytes_in == 0
        assert stats.bytes_out == 0

    def test_record_success(self):
        stats = OperationStats()
        stats.record(latency_ms=50.0, success=True, bytes_in=100, bytes_out=200)

        assert stats.count == 1
        assert stats.success_count == 1
        assert stats.error_count == 0
        assert stats.latency_sum_ms == 50.0
        assert stats.latency_min_ms == 50.0
        assert stats.latency_max_ms == 50.0
        assert stats.bytes_in == 100
        assert stats.bytes_out == 200

    def test_record_error(self):
        stats = OperationStats()
        stats.record(latency_ms=100.0, success=False, bytes_in=50, bytes_out=0)

        assert stats.count == 1
        assert stats.success_count == 0
        assert stats.error_count == 1

    def test_latency_min_max(self):
        stats = OperationStats()
        stats.record(latency_ms=50.0, success=True)
        stats.record(latency_ms=10.0, success=True)
        stats.record(latency_ms=100.0, success=True)

        assert stats.latency_min_ms == 10.0
        assert stats.latency_max_ms == 100.0
        assert stats.latency_sum_ms == 160.0

    def test_to_dict(self):
        stats = OperationStats()
        stats.record(latency_ms=50.0, success=True, bytes_in=100, bytes_out=200)
        stats.record(latency_ms=100.0, success=False, bytes_in=50, bytes_out=0)

        result = stats.to_dict()
        assert result["count"] == 2
        assert result["success_count"] == 1
        assert result["error_count"] == 1
        assert result["latency_avg_ms"] == 75.0
        assert result["latency_min_ms"] == 50.0
        assert result["latency_max_ms"] == 100.0
        assert result["bytes_in"] == 150
        assert result["bytes_out"] == 200

    def test_to_dict_empty(self):
        stats = OperationStats()
        result = stats.to_dict()
        assert result["count"] == 0
        assert result["latency_avg_ms"] == 0.0
        assert result["latency_min_ms"] == 0.0

    def test_merge(self):
        stats1 = OperationStats()
        stats1.record(latency_ms=50.0, success=True, bytes_in=100, bytes_out=200)

        stats2 = OperationStats()
        stats2.record(latency_ms=10.0, success=True, bytes_in=50, bytes_out=100)
        stats2.record(latency_ms=100.0, success=False, bytes_in=25, bytes_out=50)

        stats1.merge(stats2)

        assert stats1.count == 3
        assert stats1.success_count == 2
        assert stats1.error_count == 1
        assert stats1.latency_min_ms == 10.0
        assert stats1.latency_max_ms == 100.0
        assert stats1.bytes_in == 175
        assert stats1.bytes_out == 350
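
Merging two running aggregates, as `test_merge` exercises, is mostly additive, but min/max need care when one side has recorded nothing. A small sketch under the assumption that unset min/max are represented as `None` (the real `OperationStats` may use a different sentinel):

```python
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class StatsSketch:
    """Illustrative running aggregate; not the actual OperationStats."""

    count: int = 0
    latency_sum_ms: float = 0.0
    latency_min_ms: float | None = None  # None means "nothing recorded yet"
    latency_max_ms: float | None = None

    def record(self, latency_ms: float) -> None:
        self.count += 1
        self.latency_sum_ms += latency_ms
        self.latency_min_ms = latency_ms if self.latency_min_ms is None else min(self.latency_min_ms, latency_ms)
        self.latency_max_ms = latency_ms if self.latency_max_ms is None else max(self.latency_max_ms, latency_ms)

    def merge(self, other: StatsSketch) -> None:
        # Counts and sums add; min/max only combine when the other side has data.
        self.count += other.count
        self.latency_sum_ms += other.latency_sum_ms
        if other.latency_min_ms is not None:
            self.latency_min_ms = other.latency_min_ms if self.latency_min_ms is None else min(self.latency_min_ms, other.latency_min_ms)
        if other.latency_max_ms is not None:
            self.latency_max_ms = other.latency_max_ms if self.latency_max_ms is None else max(self.latency_max_ms, other.latency_max_ms)


a, b = StatsSketch(), StatsSketch()
a.record(50.0)
b.record(10.0)
b.record(100.0)
a.merge(b)
print(a.count, a.latency_min_ms, a.latency_max_ms)  # 3 10.0 100.0
```

Without the `None` guards, merging an empty shard would clobber a populated minimum.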


class TestClassifyEndpoint:
    def test_root_path(self):
        assert classify_endpoint("/") == "service"
        assert classify_endpoint("") == "service"

    def test_ui_paths(self):
        assert classify_endpoint("/ui") == "ui"
        assert classify_endpoint("/ui/buckets") == "ui"
        assert classify_endpoint("/ui/metrics") == "ui"

    def test_kms_paths(self):
        assert classify_endpoint("/kms") == "kms"
        assert classify_endpoint("/kms/keys") == "kms"

    def test_service_paths(self):
        assert classify_endpoint("/myfsio/health") == "service"

    def test_bucket_paths(self):
        assert classify_endpoint("/mybucket") == "bucket"
        assert classify_endpoint("/mybucket/") == "bucket"

    def test_object_paths(self):
        assert classify_endpoint("/mybucket/mykey") == "object"
        assert classify_endpoint("/mybucket/folder/nested/key.txt") == "object"
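
The classification these tests pin down is a dispatch on the first path segment. A sketch that reproduces the asserted behaviour — an assumed re-creation, not the actual `app.operation_metrics` source:

```python
def classify_endpoint_sketch(path: str) -> str:
    """Classify a request path by its first segment (illustrative)."""
    path = path.strip("/")
    if not path:
        return "service"  # "/" and "" are service-level requests
    head, _, rest = path.partition("/")
    if head in ("ui", "kms"):
        return head
    if head == "myfsio":
        return "service"  # internal endpoints such as /myfsio/health
    # Anything else is S3-style: bare segment = bucket, deeper = object.
    return "object" if rest else "bucket"


for p in ("/", "/ui/buckets", "/kms/keys", "/myfsio/health", "/mybucket/", "/mybucket/mykey"):
    print(p, "->", classify_endpoint_sketch(p))
```

Stripping slashes first is what makes `/mybucket/` classify as a bucket rather than an object with an empty key.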


class TestOperationMetricsCollector:
    def test_record_and_get_stats(self, tmp_path: Path):
        collector = OperationMetricsCollector(
            storage_root=tmp_path,
            interval_minutes=60,
            retention_hours=24,
        )

        try:
            collector.record_request(
                method="GET",
                endpoint_type="bucket",
                status_code=200,
                latency_ms=50.0,
                bytes_in=0,
                bytes_out=1000,
            )

            collector.record_request(
                method="PUT",
                endpoint_type="object",
                status_code=201,
                latency_ms=100.0,
                bytes_in=500,
                bytes_out=0,
            )

            collector.record_request(
                method="GET",
                endpoint_type="object",
                status_code=404,
                latency_ms=25.0,
                bytes_in=0,
                bytes_out=0,
                error_code="NoSuchKey",
            )

            stats = collector.get_current_stats()

            assert stats["totals"]["count"] == 3
            assert stats["totals"]["success_count"] == 2
            assert stats["totals"]["error_count"] == 1

            assert "GET" in stats["by_method"]
            assert stats["by_method"]["GET"]["count"] == 2
            assert "PUT" in stats["by_method"]
            assert stats["by_method"]["PUT"]["count"] == 1

            assert "bucket" in stats["by_endpoint"]
            assert "object" in stats["by_endpoint"]
            assert stats["by_endpoint"]["object"]["count"] == 2

            assert stats["by_status_class"]["2xx"] == 2
            assert stats["by_status_class"]["4xx"] == 1

            assert stats["error_codes"]["NoSuchKey"] == 1
        finally:
            collector.shutdown()

    def test_thread_safety(self, tmp_path: Path):
        collector = OperationMetricsCollector(
            storage_root=tmp_path,
            interval_minutes=60,
            retention_hours=24,
        )

        try:
            num_threads = 5
            requests_per_thread = 100
            threads = []

            def record_requests():
                for _ in range(requests_per_thread):
                    collector.record_request(
                        method="GET",
                        endpoint_type="object",
                        status_code=200,
                        latency_ms=10.0,
                    )

            for _ in range(num_threads):
                t = threading.Thread(target=record_requests)
                threads.append(t)
                t.start()

            for t in threads:
                t.join()

            stats = collector.get_current_stats()
            assert stats["totals"]["count"] == num_threads * requests_per_thread
        finally:
            collector.shutdown()

    def test_status_class_categorization(self, tmp_path: Path):
        collector = OperationMetricsCollector(
            storage_root=tmp_path,
            interval_minutes=60,
            retention_hours=24,
        )

        try:
            collector.record_request("GET", "object", 200, 10.0)
            collector.record_request("GET", "object", 204, 10.0)
            collector.record_request("GET", "object", 301, 10.0)
            collector.record_request("GET", "object", 304, 10.0)
            collector.record_request("GET", "object", 400, 10.0)
            collector.record_request("GET", "object", 403, 10.0)
            collector.record_request("GET", "object", 404, 10.0)
            collector.record_request("GET", "object", 500, 10.0)
            collector.record_request("GET", "object", 503, 10.0)

            stats = collector.get_current_stats()
            assert stats["by_status_class"]["2xx"] == 2
            assert stats["by_status_class"]["3xx"] == 2
            assert stats["by_status_class"]["4xx"] == 3
            assert stats["by_status_class"]["5xx"] == 2
        finally:
            collector.shutdown()

    def test_error_code_tracking(self, tmp_path: Path):
        collector = OperationMetricsCollector(
            storage_root=tmp_path,
            interval_minutes=60,
            retention_hours=24,
        )

        try:
            collector.record_request("GET", "object", 404, 10.0, error_code="NoSuchKey")
            collector.record_request("GET", "object", 404, 10.0, error_code="NoSuchKey")
            collector.record_request("GET", "bucket", 403, 10.0, error_code="AccessDenied")
            collector.record_request("PUT", "object", 500, 10.0, error_code="InternalError")

            stats = collector.get_current_stats()
            assert stats["error_codes"]["NoSuchKey"] == 2
            assert stats["error_codes"]["AccessDenied"] == 1
            assert stats["error_codes"]["InternalError"] == 1
        finally:
            collector.shutdown()

    def test_history_persistence(self, tmp_path: Path):
        collector = OperationMetricsCollector(
            storage_root=tmp_path,
            interval_minutes=60,
            retention_hours=24,
        )

        try:
            collector.record_request("GET", "object", 200, 10.0)
            collector._take_snapshot()

            history = collector.get_history()
            assert len(history) == 1
            assert history[0]["totals"]["count"] == 1

            config_path = tmp_path / ".myfsio.sys" / "config" / "operation_metrics.json"
            assert config_path.exists()
        finally:
            collector.shutdown()

    def test_get_history_with_hours_filter(self, tmp_path: Path):
        collector = OperationMetricsCollector(
            storage_root=tmp_path,
            interval_minutes=60,
            retention_hours=24,
        )

        try:
            collector.record_request("GET", "object", 200, 10.0)
            collector._take_snapshot()

            history_all = collector.get_history()
            history_recent = collector.get_history(hours=1)

            assert len(history_all) >= len(history_recent)
        finally:
            collector.shutdown()
287	tests/test_replication.py	Normal file
@@ -0,0 +1,287 @@
import json
import time
from pathlib import Path
from unittest.mock import MagicMock, patch

import pytest

from app.connections import ConnectionStore, RemoteConnection
from app.replication import (
    ReplicationManager,
    ReplicationRule,
    ReplicationStats,
    REPLICATION_MODE_ALL,
    REPLICATION_MODE_NEW_ONLY,
    _create_s3_client,
)
from app.storage import ObjectStorage


@pytest.fixture
def storage(tmp_path: Path):
    storage_root = tmp_path / "data"
    storage_root.mkdir(parents=True)
    return ObjectStorage(storage_root)


@pytest.fixture
def connections(tmp_path: Path):
    connections_path = tmp_path / "connections.json"
    store = ConnectionStore(connections_path)
    conn = RemoteConnection(
        id="test-conn",
        name="Test Remote",
        endpoint_url="http://localhost:9000",
        access_key="remote-access",
        secret_key="remote-secret",
        region="us-east-1",
    )
    store.add(conn)
    return store


@pytest.fixture
def replication_manager(storage, connections, tmp_path):
    rules_path = tmp_path / "replication_rules.json"
    storage_root = tmp_path / "data"
    storage_root.mkdir(exist_ok=True)
    manager = ReplicationManager(storage, connections, rules_path, storage_root)
    yield manager
    manager.shutdown(wait=False)


class TestReplicationStats:
    def test_to_dict(self):
        stats = ReplicationStats(
            objects_synced=10,
            objects_pending=5,
            objects_orphaned=2,
            bytes_synced=1024,
            last_sync_at=1234567890.0,
            last_sync_key="test/key.txt",
        )
        result = stats.to_dict()
        assert result["objects_synced"] == 10
        assert result["objects_pending"] == 5
        assert result["objects_orphaned"] == 2
        assert result["bytes_synced"] == 1024
        assert result["last_sync_at"] == 1234567890.0
        assert result["last_sync_key"] == "test/key.txt"

    def test_from_dict(self):
        data = {
            "objects_synced": 15,
            "objects_pending": 3,
            "objects_orphaned": 1,
            "bytes_synced": 2048,
            "last_sync_at": 9876543210.0,
            "last_sync_key": "another/key.txt",
        }
        stats = ReplicationStats.from_dict(data)
        assert stats.objects_synced == 15
        assert stats.objects_pending == 3
        assert stats.objects_orphaned == 1
        assert stats.bytes_synced == 2048
        assert stats.last_sync_at == 9876543210.0
        assert stats.last_sync_key == "another/key.txt"

    def test_from_dict_with_defaults(self):
        stats = ReplicationStats.from_dict({})
        assert stats.objects_synced == 0
        assert stats.objects_pending == 0
        assert stats.objects_orphaned == 0
        assert stats.bytes_synced == 0
        assert stats.last_sync_at is None
        assert stats.last_sync_key is None
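
`test_from_dict_with_defaults` implies that deserialization must tolerate missing keys rather than raise. A sketch of that tolerant round-trip built on `dict.get` — the field names come from the tests, but the class itself is a hypothetical stand-in for `ReplicationStats`:

```python
from __future__ import annotations

from dataclasses import asdict, dataclass
from typing import Any


@dataclass
class ReplicationStatsSketch:
    """Illustrative stand-in; field names taken from the tests."""

    objects_synced: int = 0
    objects_pending: int = 0
    objects_orphaned: int = 0
    bytes_synced: int = 0
    last_sync_at: float | None = None
    last_sync_key: str | None = None

    def to_dict(self) -> dict[str, Any]:
        return asdict(self)

    @classmethod
    def from_dict(cls, data: dict[str, Any]) -> ReplicationStatsSketch:
        # Missing keys fall back to the dataclass defaults, so {} is valid.
        return cls(
            objects_synced=int(data.get("objects_synced", 0)),
            objects_pending=int(data.get("objects_pending", 0)),
            objects_orphaned=int(data.get("objects_orphaned", 0)),
            bytes_synced=int(data.get("bytes_synced", 0)),
            last_sync_at=data.get("last_sync_at"),
            last_sync_key=data.get("last_sync_key"),
        )


empty = ReplicationStatsSketch.from_dict({})
print(empty.objects_synced, empty.last_sync_at)  # 0 None
roundtrip = ReplicationStatsSketch.from_dict(ReplicationStatsSketch(objects_synced=10).to_dict())
print(roundtrip.objects_synced)  # 10
```

Defaulting at the deserialization boundary keeps persisted JSON forward-compatible when new counters are added later.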


class TestReplicationRule:
    def test_to_dict(self):
        rule = ReplicationRule(
            bucket_name="source-bucket",
            target_connection_id="test-conn",
            target_bucket="dest-bucket",
            enabled=True,
            mode=REPLICATION_MODE_ALL,
            created_at=1234567890.0,
        )
        result = rule.to_dict()
        assert result["bucket_name"] == "source-bucket"
        assert result["target_connection_id"] == "test-conn"
        assert result["target_bucket"] == "dest-bucket"
        assert result["enabled"] is True
        assert result["mode"] == REPLICATION_MODE_ALL
        assert result["created_at"] == 1234567890.0
        assert "stats" in result

    def test_from_dict(self):
        data = {
            "bucket_name": "my-bucket",
            "target_connection_id": "conn-123",
            "target_bucket": "remote-bucket",
            "enabled": False,
            "mode": REPLICATION_MODE_NEW_ONLY,
            "created_at": 1111111111.0,
            "stats": {"objects_synced": 5},
        }
        rule = ReplicationRule.from_dict(data)
        assert rule.bucket_name == "my-bucket"
        assert rule.target_connection_id == "conn-123"
        assert rule.target_bucket == "remote-bucket"
        assert rule.enabled is False
        assert rule.mode == REPLICATION_MODE_NEW_ONLY
        assert rule.created_at == 1111111111.0
        assert rule.stats.objects_synced == 5

    def test_from_dict_defaults_mode(self):
        data = {
            "bucket_name": "my-bucket",
            "target_connection_id": "conn-123",
            "target_bucket": "remote-bucket",
        }
        rule = ReplicationRule.from_dict(data)
        assert rule.mode == REPLICATION_MODE_NEW_ONLY
        assert rule.created_at is None


class TestReplicationManager:
    def test_get_rule_not_exists(self, replication_manager):
        rule = replication_manager.get_rule("nonexistent-bucket")
        assert rule is None

    def test_set_and_get_rule(self, replication_manager):
        rule = ReplicationRule(
            bucket_name="my-bucket",
            target_connection_id="test-conn",
            target_bucket="remote-bucket",
            enabled=True,
            mode=REPLICATION_MODE_NEW_ONLY,
            created_at=time.time(),
        )
        replication_manager.set_rule(rule)
|
||||||
|
|
||||||
|
retrieved = replication_manager.get_rule("my-bucket")
|
||||||
|
assert retrieved is not None
|
||||||
|
assert retrieved.bucket_name == "my-bucket"
|
||||||
|
assert retrieved.target_connection_id == "test-conn"
|
||||||
|
assert retrieved.target_bucket == "remote-bucket"
|
||||||
|
|
||||||
|
def test_delete_rule(self, replication_manager):
|
||||||
|
rule = ReplicationRule(
|
||||||
|
bucket_name="to-delete",
|
||||||
|
target_connection_id="test-conn",
|
||||||
|
target_bucket="remote-bucket",
|
||||||
|
)
|
||||||
|
replication_manager.set_rule(rule)
|
||||||
|
assert replication_manager.get_rule("to-delete") is not None
|
||||||
|
|
||||||
|
replication_manager.delete_rule("to-delete")
|
||||||
|
assert replication_manager.get_rule("to-delete") is None
|
||||||
|
|
||||||
|
def test_save_and_reload_rules(self, replication_manager, tmp_path):
|
||||||
|
rule = ReplicationRule(
|
||||||
|
bucket_name="persistent-bucket",
|
||||||
|
target_connection_id="test-conn",
|
||||||
|
target_bucket="remote-bucket",
|
||||||
|
enabled=True,
|
||||||
|
)
|
||||||
|
replication_manager.set_rule(rule)
|
||||||
|
|
||||||
|
rules_path = tmp_path / "replication_rules.json"
|
||||||
|
assert rules_path.exists()
|
||||||
|
data = json.loads(rules_path.read_text())
|
||||||
|
assert "persistent-bucket" in data
|
||||||
|
|
||||||
|
@patch("app.replication._create_s3_client")
|
||||||
|
def test_check_endpoint_health_success(self, mock_create_client, replication_manager, connections):
|
||||||
|
mock_client = MagicMock()
|
||||||
|
mock_client.list_buckets.return_value = {"Buckets": []}
|
||||||
|
mock_create_client.return_value = mock_client
|
||||||
|
|
||||||
|
conn = connections.get("test-conn")
|
||||||
|
result = replication_manager.check_endpoint_health(conn)
|
||||||
|
assert result is True
|
||||||
|
mock_client.list_buckets.assert_called_once()
|
||||||
|
|
||||||
|
@patch("app.replication._create_s3_client")
|
||||||
|
def test_check_endpoint_health_failure(self, mock_create_client, replication_manager, connections):
|
||||||
|
mock_client = MagicMock()
|
||||||
|
mock_client.list_buckets.side_effect = Exception("Connection refused")
|
||||||
|
mock_create_client.return_value = mock_client
|
||||||
|
|
||||||
|
conn = connections.get("test-conn")
|
||||||
|
result = replication_manager.check_endpoint_health(conn)
|
||||||
|
assert result is False
|
||||||
|
|
||||||
|
def test_trigger_replication_no_rule(self, replication_manager):
|
||||||
|
replication_manager.trigger_replication("no-such-bucket", "test.txt", "write")
|
||||||
|
|
||||||
|
def test_trigger_replication_disabled_rule(self, replication_manager):
|
||||||
|
rule = ReplicationRule(
|
||||||
|
bucket_name="disabled-bucket",
|
||||||
|
target_connection_id="test-conn",
|
||||||
|
target_bucket="remote-bucket",
|
||||||
|
enabled=False,
|
||||||
|
)
|
||||||
|
replication_manager.set_rule(rule)
|
||||||
|
replication_manager.trigger_replication("disabled-bucket", "test.txt", "write")
|
||||||
|
|
||||||
|
def test_trigger_replication_missing_connection(self, replication_manager):
|
||||||
|
rule = ReplicationRule(
|
||||||
|
bucket_name="orphan-bucket",
|
||||||
|
target_connection_id="missing-conn",
|
||||||
|
target_bucket="remote-bucket",
|
||||||
|
enabled=True,
|
||||||
|
)
|
||||||
|
replication_manager.set_rule(rule)
|
||||||
|
replication_manager.trigger_replication("orphan-bucket", "test.txt", "write")
|
||||||
|
|
||||||
|
def test_replicate_task_path_traversal_blocked(self, replication_manager, connections):
|
||||||
|
rule = ReplicationRule(
|
||||||
|
bucket_name="secure-bucket",
|
||||||
|
target_connection_id="test-conn",
|
||||||
|
target_bucket="remote-bucket",
|
||||||
|
enabled=True,
|
||||||
|
)
|
||||||
|
replication_manager.set_rule(rule)
|
||||||
|
conn = connections.get("test-conn")
|
||||||
|
|
||||||
|
replication_manager._replicate_task("secure-bucket", "../../../etc/passwd", rule, conn, "write")
|
||||||
|
replication_manager._replicate_task("secure-bucket", "/root/secret", rule, conn, "write")
|
||||||
|
replication_manager._replicate_task("secure-bucket", "..\\..\\windows\\system32", rule, conn, "write")
|
||||||
|
|
||||||
|
|
||||||
|
class TestCreateS3Client:
|
||||||
|
@patch("app.replication.boto3.client")
|
||||||
|
def test_creates_client_with_correct_config(self, mock_boto_client):
|
||||||
|
conn = RemoteConnection(
|
||||||
|
id="test",
|
||||||
|
name="Test",
|
||||||
|
endpoint_url="http://localhost:9000",
|
||||||
|
access_key="access",
|
||||||
|
secret_key="secret",
|
||||||
|
region="eu-west-1",
|
||||||
|
)
|
||||||
|
_create_s3_client(conn)
|
||||||
|
|
||||||
|
mock_boto_client.assert_called_once()
|
||||||
|
call_kwargs = mock_boto_client.call_args[1]
|
||||||
|
assert call_kwargs["endpoint_url"] == "http://localhost:9000"
|
||||||
|
assert call_kwargs["aws_access_key_id"] == "access"
|
||||||
|
assert call_kwargs["aws_secret_access_key"] == "secret"
|
||||||
|
assert call_kwargs["region_name"] == "eu-west-1"
|
||||||
|
|
||||||
|
@patch("app.replication.boto3.client")
|
||||||
|
def test_health_check_mode_minimal_retries(self, mock_boto_client):
|
||||||
|
conn = RemoteConnection(
|
||||||
|
id="test",
|
||||||
|
name="Test",
|
||||||
|
endpoint_url="http://localhost:9000",
|
||||||
|
access_key="access",
|
||||||
|
secret_key="secret",
|
||||||
|
)
|
||||||
|
_create_s3_client(conn, health_check=True)
|
||||||
|
|
||||||
|
call_kwargs = mock_boto_client.call_args[1]
|
||||||
|
config = call_kwargs["config"]
|
||||||
|
assert config.retries["max_attempts"] == 1
|
||||||
@@ -220,7 +220,7 @@ def test_bucket_config_filename_allowed(tmp_path):
     storage.create_bucket("demo")
     storage.put_object("demo", ".bucket.json", io.BytesIO(b"{}"))

-    objects = storage.list_objects("demo")
+    objects = storage.list_objects_all("demo")
     assert any(meta.key == ".bucket.json" for meta in objects)

@@ -62,7 +62,7 @@ def test_bulk_delete_json_route(tmp_path: Path):
     assert set(payload["deleted"]) == {"first.txt", "missing.txt"}
     assert payload["errors"] == []

-    listing = storage.list_objects("demo")
+    listing = storage.list_objects_all("demo")
     assert {meta.key for meta in listing} == {"second.txt"}

@@ -92,5 +92,5 @@ def test_bulk_delete_validation(tmp_path: Path):
     assert limit_response.status_code == 400
     assert limit_response.get_json()["status"] == "error"

-    still_there = storage.list_objects("demo")
+    still_there = storage.list_objects_all("demo")
     assert {meta.key for meta in still_there} == {"keep.txt"}

@@ -66,10 +70,9 @@ class TestUIBucketEncryption:
         """Encryption card should be visible on bucket detail page."""
         app = _make_encryption_app(tmp_path)
         client = app.test_client()

-        # Login first
         client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)

         response = client.get("/ui/buckets/test-bucket?tab=properties")
         assert response.status_code == 200

@@ -81,15 +80,12 @@ class TestUIBucketEncryption:
         """Should be able to enable AES-256 encryption."""
         app = _make_encryption_app(tmp_path)
         client = app.test_client()

-        # Login
         client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)

-        # Get CSRF token
         response = client.get("/ui/buckets/test-bucket?tab=properties")
         csrf_token = get_csrf_token(response)

-        # Enable AES-256 encryption
         response = client.post(
             "/ui/buckets/test-bucket/encryption",
             data={
@@ -102,15 +98,13 @@ class TestUIBucketEncryption:

         assert response.status_code == 200
         html = response.data.decode("utf-8")
-        # Should see success message or enabled state
         assert "AES-256" in html or "encryption enabled" in html.lower()

     def test_enable_kms_encryption(self, tmp_path):
         """Should be able to enable KMS encryption."""
         app = _make_encryption_app(tmp_path, kms_enabled=True)
         client = app.test_client()

-        # Create a KMS key first
         with app.app_context():
             kms = app.extensions.get("kms")
             if kms:
@@ -118,15 +112,12 @@ class TestUIBucketEncryption:
                 key_id = key.key_id
             else:
                 pytest.skip("KMS not available")

-        # Login
         client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)

-        # Get CSRF token
         response = client.get("/ui/buckets/test-bucket?tab=properties")
         csrf_token = get_csrf_token(response)

-        # Enable KMS encryption
         response = client.post(
             "/ui/buckets/test-bucket/encryption",
             data={
@@ -146,11 +137,9 @@ class TestUIBucketEncryption:
         """Should be able to disable encryption."""
         app = _make_encryption_app(tmp_path)
         client = app.test_client()

-        # Login
         client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)

-        # First enable encryption
         response = client.get("/ui/buckets/test-bucket?tab=properties")
         csrf_token = get_csrf_token(response)

@@ -162,8 +151,7 @@ class TestUIBucketEncryption:
                 "algorithm": "AES256",
             },
         )

-        # Now disable it
         response = client.get("/ui/buckets/test-bucket?tab=properties")
         csrf_token = get_csrf_token(response)

@@ -184,13 +172,12 @@ class TestUIBucketEncryption:
         """Invalid encryption algorithm should be rejected."""
         app = _make_encryption_app(tmp_path)
         client = app.test_client()

-        # Login
         client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)

         response = client.get("/ui/buckets/test-bucket?tab=properties")
         csrf_token = get_csrf_token(response)

         response = client.post(
             "/ui/buckets/test-bucket/encryption",
             data={
@@ -200,23 +187,21 @@ class TestUIBucketEncryption:
             },
             follow_redirects=True,
         )

         assert response.status_code == 200
         html = response.data.decode("utf-8")
         assert "Invalid" in html or "danger" in html

     def test_encryption_persists_in_config(self, tmp_path):
         """Encryption config should persist in bucket config."""
         app = _make_encryption_app(tmp_path)
         client = app.test_client()

-        # Login
         client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)

-        # Enable encryption
         response = client.get("/ui/buckets/test-bucket?tab=properties")
         csrf_token = get_csrf_token(response)

         client.post(
             "/ui/buckets/test-bucket/encryption",
             data={
@@ -225,8 +210,7 @@ class TestUIBucketEncryption:
                 "algorithm": "AES256",
             },
         )

-        # Verify it's stored
         with app.app_context():
             storage = app.extensions["object_storage"]
             config = storage.get_bucket_encryption("test-bucket")
@@ -243,14 +227,12 @@ class TestUIEncryptionWithoutPermission:
         """Read-only user should not be able to change encryption settings."""
         app = _make_encryption_app(tmp_path)
         client = app.test_client()

-        # Login as readonly user
         client.post("/ui/login", data={"access_key": "readonly", "secret_key": "secret"}, follow_redirects=True)

-        # This should fail or be rejected
         response = client.get("/ui/buckets/test-bucket?tab=properties")
         csrf_token = get_csrf_token(response)

         response = client.post(
             "/ui/buckets/test-bucket/encryption",
             data={
@@ -260,9 +242,7 @@ class TestUIEncryptionWithoutPermission:
             },
             follow_redirects=True,
         )

-        # Should either redirect with error or show permission denied
         assert response.status_code == 200
         html = response.data.decode("utf-8")
-        # Should contain error about permission denied
         assert "Access denied" in html or "permission" in html.lower() or "not authorized" in html.lower()

tests/test_ui_pagination.py (new file, 189 lines)
@@ -0,0 +1,189 @@
"""Tests for UI pagination of bucket objects."""
import json
from io import BytesIO
from pathlib import Path

import pytest

from app import create_app


def _make_app(tmp_path: Path):
    """Create an app for testing."""
    storage_root = tmp_path / "data"
    iam_config = tmp_path / "iam.json"
    bucket_policies = tmp_path / "bucket_policies.json"
    iam_payload = {
        "users": [
            {
                "access_key": "test",
                "secret_key": "secret",
                "display_name": "Test User",
                "policies": [{"bucket": "*", "actions": ["list", "read", "write", "delete", "policy"]}],
            },
        ]
    }
    iam_config.write_text(json.dumps(iam_payload))

    flask_app = create_app(
        {
            "TESTING": True,
            "SECRET_KEY": "testing",
            "WTF_CSRF_ENABLED": False,
            "STORAGE_ROOT": storage_root,
            "IAM_CONFIG": iam_config,
            "BUCKET_POLICY_PATH": bucket_policies,
        }
    )
    return flask_app


class TestPaginatedObjectListing:
    """Test paginated object listing API."""

    def test_objects_api_returns_paginated_results(self, tmp_path):
        """Objects API should return paginated results."""
        app = _make_app(tmp_path)
        storage = app.extensions["object_storage"]
        storage.create_bucket("test-bucket")

        # Create 10 test objects
        for i in range(10):
            storage.put_object("test-bucket", f"file{i:02d}.txt", BytesIO(b"content"))

        with app.test_client() as client:
            # Login first
            client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)

            # Request first page of 3 objects
            resp = client.get("/ui/buckets/test-bucket/objects?max_keys=3")
            assert resp.status_code == 200

            data = resp.get_json()
            assert len(data["objects"]) == 3
            assert data["is_truncated"] is True
            assert data["next_continuation_token"] is not None
            assert data["total_count"] == 10

    def test_objects_api_pagination_continuation(self, tmp_path):
        """Objects API should support continuation tokens."""
        app = _make_app(tmp_path)
        storage = app.extensions["object_storage"]
        storage.create_bucket("test-bucket")

        # Create 5 test objects
        for i in range(5):
            storage.put_object("test-bucket", f"file{i:02d}.txt", BytesIO(b"content"))

        with app.test_client() as client:
            client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)

            # Get first page
            resp = client.get("/ui/buckets/test-bucket/objects?max_keys=2")
            assert resp.status_code == 200
            data = resp.get_json()

            first_page_keys = [obj["key"] for obj in data["objects"]]
            assert len(first_page_keys) == 2
            assert data["is_truncated"] is True

            # Get second page
            token = data["next_continuation_token"]
            resp = client.get(f"/ui/buckets/test-bucket/objects?max_keys=2&continuation_token={token}")
            assert resp.status_code == 200
            data = resp.get_json()

            second_page_keys = [obj["key"] for obj in data["objects"]]
            assert len(second_page_keys) == 2

            # No overlap between pages
            assert set(first_page_keys).isdisjoint(set(second_page_keys))

    def test_objects_api_prefix_filter(self, tmp_path):
        """Objects API should support prefix filtering."""
        app = _make_app(tmp_path)
        storage = app.extensions["object_storage"]
        storage.create_bucket("test-bucket")

        # Create objects with different prefixes
        storage.put_object("test-bucket", "logs/access.log", BytesIO(b"log"))
        storage.put_object("test-bucket", "logs/error.log", BytesIO(b"log"))
        storage.put_object("test-bucket", "data/file.txt", BytesIO(b"data"))

        with app.test_client() as client:
            client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)

            # Filter by prefix
            resp = client.get("/ui/buckets/test-bucket/objects?prefix=logs/")
            assert resp.status_code == 200
            data = resp.get_json()

            keys = [obj["key"] for obj in data["objects"]]
            assert all(k.startswith("logs/") for k in keys)
            assert len(keys) == 2

    def test_objects_api_requires_authentication(self, tmp_path):
        """Objects API should require login."""
        app = _make_app(tmp_path)
        storage = app.extensions["object_storage"]
        storage.create_bucket("test-bucket")

        with app.test_client() as client:
            # Don't login
            resp = client.get("/ui/buckets/test-bucket/objects")
            # Should redirect to login
            assert resp.status_code == 302
            assert "/ui/login" in resp.headers.get("Location", "")

    def test_objects_api_returns_object_metadata(self, tmp_path):
        """Objects API should return complete object metadata."""
        app = _make_app(tmp_path)
        storage = app.extensions["object_storage"]
        storage.create_bucket("test-bucket")
        storage.put_object("test-bucket", "test.txt", BytesIO(b"test content"))

        with app.test_client() as client:
            client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)

            resp = client.get("/ui/buckets/test-bucket/objects")
            assert resp.status_code == 200
            data = resp.get_json()

            assert len(data["objects"]) == 1
            obj = data["objects"][0]

            # Check all expected fields
            assert obj["key"] == "test.txt"
            assert obj["size"] == 12  # len("test content")
            assert "last_modified" in obj
            assert "last_modified_display" in obj
            assert "etag" in obj

            # URLs are now returned as templates (not per-object) for performance
            assert "url_templates" in data
            templates = data["url_templates"]
            assert "preview" in templates
            assert "download" in templates
            assert "delete" in templates
            assert "KEY_PLACEHOLDER" in templates["preview"]

    def test_bucket_detail_page_loads_without_objects(self, tmp_path):
        """Bucket detail page should load even with many objects."""
        app = _make_app(tmp_path)
        storage = app.extensions["object_storage"]
        storage.create_bucket("test-bucket")

        # Create many objects
        for i in range(100):
            storage.put_object("test-bucket", f"file{i:03d}.txt", BytesIO(b"x"))

        with app.test_client() as client:
            client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)

            # The page should load quickly (objects loaded via JS)
            resp = client.get("/ui/buckets/test-bucket")
            assert resp.status_code == 200

            html = resp.data.decode("utf-8")
            # Should have the JavaScript loading infrastructure (external JS file)
            assert "bucket-detail-main.js" in html
@@ -70,8 +70,12 @@ def test_ui_bucket_policy_enforcement_toggle(tmp_path: Path, enforce: bool):
         assert b"Access denied by bucket policy" in response.data
     else:
         assert response.status_code == 200
-        assert b"vid.mp4" in response.data
         assert b"Access denied by bucket policy" not in response.data
+        # Objects are now loaded via async API - check the objects endpoint
+        objects_response = client.get("/ui/buckets/testbucket/objects")
+        assert objects_response.status_code == 200
+        data = objects_response.get_json()
+        assert any(obj["key"] == "vid.mp4" for obj in data["objects"])


 def test_ui_bucket_policy_disabled_by_default(tmp_path: Path):
@@ -109,5 +113,9 @@ def test_ui_bucket_policy_disabled_by_default(tmp_path: Path):
     client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)
     response = client.get("/ui/buckets/testbucket", follow_redirects=True)
     assert response.status_code == 200
-    assert b"vid.mp4" in response.data
     assert b"Access denied by bucket policy" not in response.data
+    # Objects are now loaded via async API - check the objects endpoint
+    objects_response = client.get("/ui/buckets/testbucket/objects")
+    assert objects_response.status_code == 200
+    data = objects_response.get_json()
+    assert any(obj["key"] == "vid.mp4" for obj in data["objects"])