Compare commits
4 commits: 956d17a649...v0.1.3

| Author | SHA1 | Date |
| --- | --- | --- |
| | bae5009ec4 | |
| | 233780617f | |
| | fd8fb21517 | |
| | c6cbe822e1 | |
Dockerfile

@@ -1,5 +1,5 @@
 # syntax=docker/dockerfile:1.7
-FROM python:3.12.12-slim
+FROM python:3.11-slim
 
 ENV PYTHONDONTWRITEBYTECODE=1 \
     PYTHONUNBUFFERED=1
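The only functional change in this hunk is the base-image pin. A quick way to confirm which interpreter a built image actually ships (a minimal sketch; the `myfsio` tag is taken from the `docker build -t myfsio .` example in the README's removed Docker section later in this compare):

```python
import subprocess

# Ask the built image which Python it ships; the "myfsio" tag follows the
# README's `docker build -t myfsio .` example.
result = subprocess.run(
    ["docker", "run", "--rm", "myfsio", "python", "--version"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip() or result.stderr.strip())  # e.g. "Python 3.11.x"
```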
661 LICENSE

@@ -1,661 +0,0 @@
-                    GNU AFFERO GENERAL PUBLIC LICENSE
-                       Version 3, 19 November 2007
[... the remaining 659 deleted lines are the rest of the verbatim GNU AGPL-3.0
text: the FSF copyright notice, the Preamble, Terms and Conditions sections
0-17, and the "How to Apply These Terms to Your New Programs" appendix. The
file is removed in its entirety. ...]
298 README.md

@@ -1,251 +1,117 @@
-# MyFSIO
+# MyFSIO (Flask S3 + IAM)
 
-A lightweight, S3-compatible object storage system built with Flask. MyFSIO implements core AWS S3 REST API operations with filesystem-backed storage, making it ideal for local development, testing, and self-hosted storage scenarios.
+MyFSIO is a batteries-included, Flask-based recreation of Amazon S3 and IAM workflows built for local development. The design mirrors the [AWS S3 documentation](https://docs.aws.amazon.com/s3/) wherever practical: bucket naming, Signature Version 4 presigning, Version 2012-10-17 bucket policies, IAM-style users, and familiar REST endpoints.
 
-## Features
+## Why MyFSIO?
 
-**Core Storage**
-- S3-compatible REST API with AWS Signature Version 4 authentication
-- Bucket and object CRUD operations
-- Object versioning with version history
-- Multipart uploads for large files
-- Presigned URLs (1 second to 7 days validity)
-
-**Security & Access Control**
-- IAM users with access key management and rotation
-- Bucket policies (AWS Policy Version 2012-10-17)
-- Server-side encryption (SSE-S3 and SSE-KMS)
-- Built-in Key Management Service (KMS)
-- Rate limiting per endpoint
-
-**Advanced Features**
-- Cross-bucket replication to remote S3-compatible endpoints
-- Hot-reload for bucket policies (no restart required)
-- CORS configuration per bucket
-
-**Management UI**
-- Web console for bucket and object management
-- IAM dashboard for user administration
-- Inline JSON policy editor with presets
-- Object browser with folder navigation and bulk operations
-- Dark mode support
-
-## Architecture
+- **Dual servers:** Run both the API (port 5000) and UI (port 5100) with a single command: `python run.py`.
+- **IAM + access keys:** Users, access keys, key rotation, and bucket-scoped actions (`list/read/write/delete/policy`) now live in `data/.myfsio.sys/config/iam.json` and are editable from the IAM dashboard.
+- **Bucket policies + hot reload:** `data/.myfsio.sys/config/bucket_policies.json` uses AWS' policy grammar (Version `2012-10-17`) with a built-in watcher, so editing the JSON file applies immediately. The UI also ships Public/Private/Custom presets for faster edits.
+- **Presigned URLs everywhere:** Signature Version 4 presigned URLs respect IAM + bucket policies and replace the now-removed "share link" feature for public access scenarios.
+- **Modern UI:** Responsive tables, quick filters, preview sidebar, object-level delete buttons, a presign modal, and an inline JSON policy editor that respects dark mode keep bucket management friendly.
+- **Tests & health:** `/healthz` for smoke checks and `pytest` coverage for IAM, CRUD, presign, and policy flows.
+
+## Architecture at a Glance
 
 ```
-+------------------+         +------------------+
-|    API Server    |         |    UI Server     |
-|    (port 5000)   |         |    (port 5100)   |
-|                  |         |                  |
-|  - S3 REST API   |<------->|  - Web Console   |
-|  - SigV4 Auth    |         |  - IAM Dashboard |
-|  - Presign URLs  |         |  - Bucket Editor |
-+--------+---------+         +------------------+
-         |
-         v
-+------------------+         +------------------+
-|  Object Storage  |         | System Metadata  |
-|   (filesystem)   |         |  (.myfsio.sys/)  |
-|                  |         |                  |
-|  data/<bucket>/  |         | - IAM config     |
-|    <objects>     |         | - Bucket policies|
-|                  |         | - Encryption keys|
-+------------------+         +------------------+
++-----------------+       +----------------+
+|   API Server    |<----->| Object storage |
+|   (port 5000)   |       |  (filesystem)  |
+| - S3 routes     |       +----------------+
+| - Presigned URLs|
+| - Bucket policy |
++-----------------+
+        ^
+        |
++-----------------+
+|   UI Server     |
+|   (port 5100)   |
+| - Auth console  |
+| - IAM dashboard |
+| - Bucket editor |
++-----------------+
 ```
 
-## Quick Start
+Both apps load the same configuration via `AppConfig` so IAM data and bucket policies stay consistent no matter which process you run.
+
+Bucket policies are automatically reloaded whenever `bucket_policies.json` changes—no restarts required.
+
+## Getting Started
 
 ```bash
-# Clone and setup
-git clone https://gitea.jzwsite.com/kqjy/MyFSIO
-cd s3
 python -m venv .venv
-
-# Activate virtual environment
-# Windows PowerShell:
-.\.venv\Scripts\Activate.ps1
-# Windows CMD:
-.venv\Scripts\activate.bat
-# Linux/macOS:
-source .venv/bin/activate
-
-# Install dependencies
+. .venv/Scripts/activate  # PowerShell: .\.venv\Scripts\Activate.ps1
 pip install -r requirements.txt
 
-# Start both servers
+# Run both API and UI (default)
 python run.py
 
-# Or start individually
-python run.py --mode api  # API only (port 5000)
-python run.py --mode ui   # UI only (port 5100)
+# Or run individually:
+# python run.py --mode api
+# python run.py --mode ui
 ```
 
-**Default Credentials:** `localadmin` / `localadmin`
-
-- **Web Console:** http://127.0.0.1:5100/ui
-- **API Endpoint:** http://127.0.0.1:5000
+Visit `http://127.0.0.1:5100/ui` for the console and `http://127.0.0.1:5000/` for the raw API. Override ports/hosts with the environment variables listed below.
+
+## IAM, Access Keys, and Bucket Policies
+
+- First run creates `data/.myfsio.sys/config/iam.json` with `localadmin / localadmin` (full control). Sign in via the UI, then use the **IAM** tab to create users, rotate secrets, or edit inline policies without touching JSON by hand.
+- Bucket policies live in `data/.myfsio.sys/config/bucket_policies.json` and follow the AWS `arn:aws:s3:::bucket/key` resource syntax with Version `2012-10-17`. Attach/replace/remove policies from the bucket detail page or edit the JSON by hand—changes hot reload automatically.
+- IAM actions include extended verbs (`iam:list_users`, `iam:create_user`, `iam:update_policy`, etc.) so you can control who is allowed to manage other users and policies.
+
+### Bucket Policy Presets & Hot Reload
+
+- **Presets:** Every bucket detail view includes Public (read-only), Private (detach policy), and Custom presets. Public auto-populates a policy that grants anonymous `s3:ListBucket` + `s3:GetObject` access to the entire bucket.
+- **Custom drafts:** Switching back to Custom restores your last manual edit so you can toggle between presets without losing work.
+- **Hot reload:** The server watches `bucket_policies.json` and reloads statements on-the-fly—ideal for editing policies in your favorite editor while testing via curl or the UI.
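The Public preset described above expands to an ordinary Version `2012-10-17` policy document. A minimal sketch of attaching it through the documented `PUT /bucket-policy/<bucket>` endpoint (the bucket name is hypothetical, and the IAM authentication headers, whose exact names this page does not show, are omitted):

```python
import requests

# The "Public" preset per the README: anonymous s3:ListBucket + s3:GetObject
# over the whole bucket, expressed in AWS Version 2012-10-17 grammar.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": ["s3:ListBucket", "s3:GetObject"],
        "Resource": [
            "arn:aws:s3:::demo-bucket",    # bucket-level, for ListBucket
            "arn:aws:s3:::demo-bucket/*",  # object-level, for GetObject
        ],
    }],
}

# Attach/replace via the documented endpoint; IAM auth headers omitted here.
resp = requests.put("http://127.0.0.1:5000/bucket-policy/demo-bucket", json=policy)
resp.raise_for_status()
```

Editing `data/.myfsio.sys/config/bucket_policies.json` by hand achieves the same result, since the watcher hot-reloads the file.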
+
+## Presigned URLs
+
+Presigned URLs follow the AWS CLI playbook:
+
+- Call `POST /presign/<bucket>/<key>` (or use the "Presign" button in the UI) to request a Signature Version 4 URL valid for 1 second to 7 days.
+- The generated URL honors IAM permissions and bucket-policy decisions at generation time and again when somebody fetches it.
+- Because presigned URLs cover both authenticated and public sharing scenarios, the legacy "share link" feature has been removed.
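The README documents a dedicated presign endpoint rather than client-side signing. A hedged sketch of requesting a URL (only the route and the 1-second-to-7-day window come from this page; the JSON request and response shapes are assumptions, and IAM auth headers are again omitted):

```python
import requests

# POST /presign/<bucket>/<key> returns a SigV4 presigned URL as JSON.
# The body schema below is hypothetical; bucket and key names are examples.
resp = requests.post(
    "http://127.0.0.1:5000/presign/demo-bucket/report.pdf",
    json={"expires_in": 3600},  # anywhere from 1 second to 7 days
)
print(resp.json())  # expected to contain the presigned URL
```

Because the server re-checks IAM and bucket policies when the URL is fetched, a presigned URL can stop working if the underlying permissions are later revoked.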
 ## Configuration
 
 | Variable | Default | Description |
-|----------|---------|-------------|
-| `STORAGE_ROOT` | `./data` | Filesystem root for bucket storage |
-| `IAM_CONFIG` | `.myfsio.sys/config/iam.json` | IAM user and policy store |
-| `BUCKET_POLICY_PATH` | `.myfsio.sys/config/bucket_policies.json` | Bucket policy store |
-| `API_BASE_URL` | `http://127.0.0.1:5000` | API endpoint for UI calls |
-| `MAX_UPLOAD_SIZE` | `1073741824` | Maximum upload size in bytes (1 GB) |
-| `MULTIPART_MIN_PART_SIZE` | `5242880` | Minimum multipart part size (5 MB) |
-| `UI_PAGE_SIZE` | `100` | Default page size for listings |
-| `SECRET_KEY` | `dev-secret-key` | Flask session secret |
-| `AWS_REGION` | `us-east-1` | Region for SigV4 signing |
-| `AWS_SERVICE` | `s3` | Service name for SigV4 signing |
-| `ENCRYPTION_ENABLED` | `false` | Enable server-side encryption |
-| `KMS_ENABLED` | `false` | Enable Key Management Service |
-| `LOG_LEVEL` | `INFO` | Logging verbosity |
+| --- | --- | --- |
+| `STORAGE_ROOT` | `<project>/data` | Filesystem root for bucket directories |
+| `MAX_UPLOAD_SIZE` | `1073741824` | Maximum upload size (bytes) |
+| `UI_PAGE_SIZE` | `100` | `MaxKeys` hint for listings |
+| `SECRET_KEY` | `dev-secret-key` | Flask session secret for the UI |
+| `IAM_CONFIG` | `<project>/data/.myfsio.sys/config/iam.json` | IAM user + policy store |
+| `BUCKET_POLICY_PATH` | `<project>/data/.myfsio.sys/config/bucket_policies.json` | Bucket policy store |
+| `API_BASE_URL` | `http://127.0.0.1:5000` | Used by the UI when calling API endpoints (presign, bucket policy) |
+| `AWS_REGION` | `us-east-1` | Region used in Signature V4 scope |
+| `AWS_SERVICE` | `s3` | Service used in Signature V4 scope |
 
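Every row in this table is read from the process environment. A minimal sketch of launching the API with a couple of overrides (the values are illustrative; `run.py --mode api` comes from the Getting Started section above):

```python
import os
import subprocess

# Override two documented settings for a throwaway instance.
env = dict(
    os.environ,
    STORAGE_ROOT="/tmp/myfsio-data",         # bucket directories land here
    MAX_UPLOAD_SIZE=str(256 * 1024 * 1024),  # shrink the 1073741824-byte default
)
subprocess.run(["python", "run.py", "--mode", "api"], env=env, check=True)
```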
## Data Layout
|
> Buckets now live directly under `data/` while system metadata (versions, IAM, bucket policies, multipart uploads, etc.) lives in `data/.myfsio.sys`.
|
||||||
|
|
||||||
|
## API Cheatsheet (IAM headers required)
|
||||||
|
|
||||||
```
|
```
|
||||||
data/
|
GET / -> List buckets (XML)
|
||||||
├── <bucket>/ # User buckets with objects
|
PUT /<bucket> -> Create bucket
|
||||||
└── .myfsio.sys/ # System metadata
|
DELETE /<bucket> -> Delete bucket (must be empty)
|
||||||
├── config/
|
GET /<bucket> -> List objects (XML)
|
||||||
│ ├── iam.json # IAM users and policies
|
PUT /<bucket>/<key> -> Upload object (binary stream)
|
||||||
│ ├── bucket_policies.json # Bucket policies
|
GET /<bucket>/<key> -> Download object
|
||||||
│ ├── replication_rules.json
|
DELETE /<bucket>/<key> -> Delete object
|
||||||
│ └── connections.json # Remote S3 connections
|
POST /presign/<bucket>/<key> -> Generate AWS SigV4 presigned URL (JSON)
|
||||||
├── buckets/<bucket>/
|
GET /bucket-policy/<bucket> -> Fetch bucket policy (JSON)
|
||||||
│ ├── meta/ # Object metadata (.meta.json)
|
PUT /bucket-policy/<bucket> -> Attach/replace bucket policy (JSON)
|
||||||
│ ├── versions/ # Archived object versions
|
DELETE /bucket-policy/<bucket> -> Remove bucket policy
|
||||||
│ └── .bucket.json # Bucket config (versioning, CORS)
|
|
||||||
├── multipart/ # Active multipart uploads
|
|
||||||
└── keys/ # Encryption keys (SSE-S3/KMS)
|
|
||||||
```
|
|
||||||
|
|
||||||
## API Reference
|
|
||||||
|
|
||||||
All endpoints require AWS Signature Version 4 authentication unless using presigned URLs or public bucket policies.
|
|
||||||
|
|
||||||
### Bucket Operations
|
|
||||||
|
|
||||||
| Method | Endpoint | Description |
|
|
||||||
|--------|----------|-------------|
|
|
||||||
| `GET` | `/` | List all buckets |
|
|
||||||
| `PUT` | `/<bucket>` | Create bucket |
|
|
||||||
| `DELETE` | `/<bucket>` | Delete bucket (must be empty) |
|
|
||||||
| `HEAD` | `/<bucket>` | Check bucket exists |
|
|
||||||
|
|
||||||
### Object Operations
|
|
||||||
|
|
||||||
| Method | Endpoint | Description |
|
|
||||||
|--------|----------|-------------|
|
|
||||||
| `GET` | `/<bucket>` | List objects (supports `list-type=2`) |
|
|
||||||
| `PUT` | `/<bucket>/<key>` | Upload object |
|
|
||||||
| `GET` | `/<bucket>/<key>` | Download object |
|
|
||||||
| `DELETE` | `/<bucket>/<key>` | Delete object |
|
|
||||||
| `HEAD` | `/<bucket>/<key>` | Get object metadata |
|
|
||||||
| `POST` | `/<bucket>/<key>?uploads` | Initiate multipart upload |
|
|
||||||
| `PUT` | `/<bucket>/<key>?partNumber=N&uploadId=X` | Upload part |
|
|
||||||
| `POST` | `/<bucket>/<key>?uploadId=X` | Complete multipart upload |
|
|
||||||
| `DELETE` | `/<bucket>/<key>?uploadId=X` | Abort multipart upload |
|
|
||||||
|
|
||||||
### Presigned URLs
|
|
||||||
|
|
||||||
| Method | Endpoint | Description |
|
|
||||||
|--------|----------|-------------|
|
|
||||||
| `POST` | `/presign/<bucket>/<key>` | Generate presigned URL |
|
|
||||||
|
|
||||||
### Bucket Policies
|
|
||||||
|
|
||||||
| Method | Endpoint | Description |
|
|
||||||
|--------|----------|-------------|
|
|
||||||
| `GET` | `/bucket-policy/<bucket>` | Get bucket policy |
|
|
||||||
| `PUT` | `/bucket-policy/<bucket>` | Set bucket policy |
|
|
||||||
| `DELETE` | `/bucket-policy/<bucket>` | Delete bucket policy |
|
|
||||||
|
|
||||||
### Versioning
|
|
||||||
|
|
||||||
| Method | Endpoint | Description |
|
|
||||||
|--------|----------|-------------|
|
|
||||||
| `GET` | `/<bucket>/<key>?versionId=X` | Get specific version |
|
|
||||||
| `DELETE` | `/<bucket>/<key>?versionId=X` | Delete specific version |
|
|
||||||
| `GET` | `/<bucket>?versions` | List object versions |
|
|
||||||
|
|
||||||
### Health Check
|
|
||||||
|
|
||||||
| Method | Endpoint | Description |
|
|
||||||
|--------|----------|-------------|
|
|
||||||
| `GET` | `/healthz` | Health check endpoint |
|
|
||||||
|
|
||||||
## IAM & Access Control
|
|
||||||
|
|
||||||
### Users and Access Keys
|
|
||||||
|
|
||||||
On first run, MyFSIO creates a default admin user (`localadmin`/`localadmin`). Use the IAM dashboard to:
|
|
||||||
|
|
||||||
- Create and delete users
|
|
||||||
- Generate and rotate access keys
|
|
||||||
- Attach inline policies to users
|
|
||||||
- Control IAM management permissions
|
|
||||||
|
|
||||||
### Bucket Policies
|
|
||||||
|
|
||||||
Bucket policies follow AWS policy grammar (Version `2012-10-17`) with support for:
|
|
||||||
|
|
||||||
- Principal-based access (`*` for anonymous, specific users)
|
|
||||||
- Action-based permissions (`s3:GetObject`, `s3:PutObject`, etc.)
|
|
||||||
- Resource patterns (`arn:aws:s3:::bucket/*`)
|
|
||||||
- Condition keys
|
|
||||||
|
|
||||||
**Policy Presets:**
|
|
||||||
- **Public:** Grants anonymous read access (`s3:GetObject`, `s3:ListBucket`)
|
|
||||||
- **Private:** Removes bucket policy (IAM-only access)
|
|
||||||
- **Custom:** Manual policy editing with draft preservation
|
|
||||||
|
|
||||||
Policies hot-reload when the JSON file changes.
|
|
||||||
|
|
||||||
## Server-Side Encryption
|
|
||||||
|
|
||||||
MyFSIO supports two encryption modes:
|
|
||||||
|
|
||||||
- **SSE-S3:** Server-managed keys with automatic key rotation
|
|
||||||
- **SSE-KMS:** Customer-managed keys via built-in KMS
|
|
||||||
|
|
||||||
Enable encryption with:
|
|
||||||
```bash
|
|
||||||
ENCRYPTION_ENABLED=true python run.py
|
|
||||||
```
|
|
||||||
|
|
||||||
## Cross-Bucket Replication

Replicate objects to remote S3-compatible endpoints:

1. Configure remote connections in the UI
2. Create replication rules specifying source/destination
3. Objects are automatically replicated on upload

## Docker

```bash
docker build -t myfsio .
docker run -p 5000:5000 -p 5100:5100 -v ./data:/app/data myfsio
```

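The bind mount above keeps data on the host; a named volume works too, as a sketch:

```bash
# Same image and ports, but with a Docker-managed volume for the data directory
docker volume create myfsio-data
docker run -d --name myfsio \
  -p 5000:5000 -p 5100:5100 \
  -v myfsio-data:/app/data \
  myfsio
```
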
## Testing

```bash
# Run all tests
pytest tests/ -v

# Run specific test file
pytest tests/test_api.py -v

# Run with coverage
pytest tests/ --cov=app --cov-report=html
```

## References

- [Amazon S3 Documentation](https://docs.aws.amazon.com/s3/)
- [AWS Signature Version 4](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html)
- [S3 Bucket Policy Examples](https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-bucket-policies.html)


128 app/__init__.py

@@ -1,23 +1,20 @@
+"""Application factory for the mini S3-compatible object store."""
 from __future__ import annotations

 import logging
-import shutil
 import sys
 import time
 import uuid
 from logging.handlers import RotatingFileHandler
 from pathlib import Path
 from datetime import timedelta
-from typing import Any, Dict, List, Optional
+from typing import Any, Dict, Optional

 from flask import Flask, g, has_request_context, redirect, render_template, request, url_for
 from flask_cors import CORS
 from flask_wtf.csrf import CSRFError
 from werkzeug.middleware.proxy_fix import ProxyFix

-from .access_logging import AccessLoggingService
-from .compression import GzipMiddleware
-from .acl import AclService
 from .bucket_policies import BucketPolicyStore
 from .config import AppConfig
 from .connections import ConnectionStore
@@ -25,41 +22,12 @@ from .encryption import EncryptionManager
 from .extensions import limiter, csrf
 from .iam import IamService
 from .kms import KMSManager
-from .lifecycle import LifecycleManager
-from .notifications import NotificationService
-from .object_lock import ObjectLockService
 from .replication import ReplicationManager
 from .secret_store import EphemeralSecretStore
 from .storage import ObjectStorage
 from .version import get_version


-def _migrate_config_file(active_path: Path, legacy_paths: List[Path]) -> Path:
-    """Migrate config file from legacy locations to the active path.
-
-    Checks each legacy path in order and moves the first one found to the active path.
-    This ensures backward compatibility for users upgrading from older versions.
-    """
-    active_path.parent.mkdir(parents=True, exist_ok=True)
-
-    if active_path.exists():
-        return active_path
-
-    for legacy_path in legacy_paths:
-        if legacy_path.exists():
-            try:
-                shutil.move(str(legacy_path), str(active_path))
-            except OSError:
-                shutil.copy2(legacy_path, active_path)
-                try:
-                    legacy_path.unlink(missing_ok=True)
-                except OSError:
-                    pass
-            break
-
-    return active_path
-
-
 def create_app(
     test_config: Optional[Dict[str, Any]] = None,
     *,
@@ -90,24 +58,13 @@ def create_app(
     # Trust X-Forwarded-* headers from proxies
     app.wsgi_app = ProxyFix(app.wsgi_app, x_for=1, x_proto=1, x_host=1, x_prefix=1)

-    # Enable gzip compression for responses (10-20x smaller JSON payloads)
-    if app.config.get("ENABLE_GZIP", True):
-        app.wsgi_app = GzipMiddleware(app.wsgi_app, compression_level=6)
-
     _configure_cors(app)
     _configure_logging(app)

     limiter.init_app(app)
     csrf.init_app(app)

-    storage = ObjectStorage(
-        Path(app.config["STORAGE_ROOT"]),
-        cache_ttl=app.config.get("OBJECT_CACHE_TTL", 5),
-    )
-
-    if app.config.get("WARM_CACHE_ON_STARTUP", True) and not app.config.get("TESTING"):
-        storage.warm_cache_async()
-
+    storage = ObjectStorage(Path(app.config["STORAGE_ROOT"]))
     iam = IamService(
         Path(app.config["IAM_CONFIG"]),
         auth_max_attempts=app.config.get("AUTH_MAX_ATTEMPTS", 5),
@@ -116,28 +73,14 @@ def create_app(
     bucket_policies = BucketPolicyStore(Path(app.config["BUCKET_POLICY_PATH"]))
     secret_store = EphemeralSecretStore(default_ttl=app.config.get("SECRET_TTL_SECONDS", 300))

-    storage_root = Path(app.config["STORAGE_ROOT"])
-    config_dir = storage_root / ".myfsio.sys" / "config"
-    config_dir.mkdir(parents=True, exist_ok=True)
-
-    connections_path = _migrate_config_file(
-        active_path=config_dir / "connections.json",
-        legacy_paths=[
-            storage_root / ".myfsio.sys" / "connections.json",
-            storage_root / ".connections.json",
-        ],
-    )
-    replication_rules_path = _migrate_config_file(
-        active_path=config_dir / "replication_rules.json",
-        legacy_paths=[
-            storage_root / ".myfsio.sys" / "replication_rules.json",
-            storage_root / ".replication_rules.json",
-        ],
-    )
-
+    # Initialize Replication components
+    connections_path = Path(app.config["STORAGE_ROOT"]) / ".connections.json"
+    replication_rules_path = Path(app.config["STORAGE_ROOT"]) / ".replication_rules.json"
     connections = ConnectionStore(connections_path)
-    replication = ReplicationManager(storage, connections, replication_rules_path, storage_root)
+    replication = ReplicationManager(storage, connections, replication_rules_path)

+    # Initialize encryption and KMS
     encryption_config = {
         "encryption_enabled": app.config.get("ENCRYPTION_ENABLED", False),
         "encryption_master_key_path": app.config.get("ENCRYPTION_MASTER_KEY_PATH"),
@@ -152,26 +95,11 @@ def create_app(
     kms_manager = KMSManager(kms_keys_path, kms_master_key_path)
     encryption_manager.set_kms_provider(kms_manager)

+    # Wrap storage with encryption layer if encryption is enabled
     if app.config.get("ENCRYPTION_ENABLED", False):
         from .encrypted_storage import EncryptedObjectStorage
         storage = EncryptedObjectStorage(storage, encryption_manager)

-    acl_service = AclService(storage_root)
-    object_lock_service = ObjectLockService(storage_root)
-    notification_service = NotificationService(storage_root)
-    access_logging_service = AccessLoggingService(storage_root)
-    access_logging_service.set_storage(storage)
-
-    lifecycle_manager = None
-    if app.config.get("LIFECYCLE_ENABLED", False):
-        base_storage = storage.storage if hasattr(storage, 'storage') else storage
-        lifecycle_manager = LifecycleManager(
-            base_storage,
-            interval_seconds=app.config.get("LIFECYCLE_INTERVAL_SECONDS", 3600),
-            storage_root=storage_root,
-        )
-        lifecycle_manager.start()
-
     app.extensions["object_storage"] = storage
     app.extensions["iam"] = iam
     app.extensions["bucket_policies"] = bucket_policies
@@ -181,11 +109,6 @@ def create_app(
     app.extensions["replication"] = replication
     app.extensions["encryption"] = encryption_manager
     app.extensions["kms"] = kms_manager
-    app.extensions["acl"] = acl_service
-    app.extensions["lifecycle"] = lifecycle_manager
-    app.extensions["object_lock"] = object_lock_service
-    app.extensions["notifications"] = notification_service
-    app.extensions["access_logging"] = access_logging_service

     @app.errorhandler(500)
     def internal_error(error):
@@ -208,22 +131,13 @@ def create_app(

     @app.template_filter("timestamp_to_datetime")
     def timestamp_to_datetime(value: float) -> str:
-        """Format Unix timestamp as human-readable datetime in configured timezone."""
-        from datetime import datetime, timezone as dt_timezone
-        from zoneinfo import ZoneInfo
+        """Format Unix timestamp as human-readable datetime."""
+        from datetime import datetime
         if not value:
             return "Never"
         try:
-            dt_utc = datetime.fromtimestamp(value, dt_timezone.utc)
-            display_tz = app.config.get("DISPLAY_TIMEZONE", "UTC")
-            if display_tz and display_tz != "UTC":
-                try:
-                    tz = ZoneInfo(display_tz)
-                    dt_local = dt_utc.astimezone(tz)
-                    return dt_local.strftime("%Y-%m-%d %H:%M:%S")
-                except (KeyError, ValueError):
-                    pass
-            return dt_utc.strftime("%Y-%m-%d %H:%M:%S UTC")
+            dt = datetime.fromtimestamp(value)
+            return dt.strftime("%Y-%m-%d %H:%M:%S")
         except (ValueError, OSError):
             return "Unknown"
@@ -271,12 +185,14 @@ def create_ui_app(test_config: Optional[Dict[str, Any]] = None) -> Flask:

 def _configure_cors(app: Flask) -> None:
     origins = app.config.get("CORS_ORIGINS", ["*"])
-    methods = app.config.get("CORS_METHODS", ["GET", "PUT", "POST", "DELETE", "OPTIONS", "HEAD"])
-    allow_headers = app.config.get("CORS_ALLOW_HEADERS", ["*"])
-    expose_headers = app.config.get("CORS_EXPOSE_HEADERS", ["*"])
+    methods = app.config.get("CORS_METHODS", ["GET", "PUT", "POST", "DELETE", "OPTIONS"])
+    allow_headers = app.config.get(
+        "CORS_ALLOW_HEADERS",
+        ["Content-Type", "X-Access-Key", "X-Secret-Key", "X-Amz-Date", "X-Amz-SignedHeaders"],
+    )
     CORS(
         app,
-        resources={r"/*": {"origins": origins, "methods": methods, "allow_headers": allow_headers, "expose_headers": expose_headers}},
+        resources={r"/*": {"origins": origins, "methods": methods, "allow_headers": allow_headers}},
         supports_credentials=True,
     )

@@ -284,7 +200,7 @@ def _configure_cors(app: Flask) -> None:
 class _RequestContextFilter(logging.Filter):
     """Inject request-specific attributes into log records."""

-    def filter(self, record: logging.LogRecord) -> bool:
+    def filter(self, record: logging.LogRecord) -> bool:  # pragma: no cover - simple boilerplate
         if has_request_context():
             record.request_id = getattr(g, "request_id", "-")
             record.path = request.path
@@ -303,16 +219,16 @@ def _configure_logging(app: Flask) -> None:
         "%(asctime)s | %(levelname)s | %(request_id)s | %(method)s %(path)s | %(message)s"
     )

+    # Stream Handler (stdout) - Primary for Docker
     stream_handler = logging.StreamHandler(sys.stdout)
     stream_handler.setFormatter(formatter)
     stream_handler.addFilter(_RequestContextFilter())

     logger = app.logger
-    for handler in logger.handlers[:]:
-        handler.close()
     logger.handlers.clear()
     logger.addHandler(stream_handler)

+    # File Handler (optional, if configured)
     if app.config.get("LOG_TO_FILE"):
         log_file = Path(app.config["LOG_FILE"])
         log_file.parent.mkdir(parents=True, exist_ok=True)

app/access_logging.py

@@ -1,265 +0,0 @@
-from __future__ import annotations
-
-import io
-import json
-import logging
-import queue
-import threading
-import time
-import uuid
-from dataclasses import dataclass, field
-from datetime import datetime, timezone
-from pathlib import Path
-from typing import Any, Dict, List, Optional
-
-logger = logging.getLogger(__name__)
-
-
-@dataclass
-class AccessLogEntry:
-    bucket_owner: str = "-"
-    bucket: str = "-"
-    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
-    remote_ip: str = "-"
-    requester: str = "-"
-    request_id: str = field(default_factory=lambda: uuid.uuid4().hex[:16].upper())
-    operation: str = "-"
-    key: str = "-"
-    request_uri: str = "-"
-    http_status: int = 200
-    error_code: str = "-"
-    bytes_sent: int = 0
-    object_size: int = 0
-    total_time_ms: int = 0
-    turn_around_time_ms: int = 0
-    referrer: str = "-"
-    user_agent: str = "-"
-    version_id: str = "-"
-    host_id: str = "-"
-    signature_version: str = "SigV4"
-    cipher_suite: str = "-"
-    authentication_type: str = "AuthHeader"
-    host_header: str = "-"
-    tls_version: str = "-"
-
-    def to_log_line(self) -> str:
-        time_str = self.timestamp.strftime("[%d/%b/%Y:%H:%M:%S %z]")
-        return (
-            f'{self.bucket_owner} {self.bucket} {time_str} {self.remote_ip} '
-            f'{self.requester} {self.request_id} {self.operation} {self.key} '
-            f'"{self.request_uri}" {self.http_status} {self.error_code or "-"} '
-            f'{self.bytes_sent or "-"} {self.object_size or "-"} {self.total_time_ms or "-"} '
-            f'{self.turn_around_time_ms or "-"} "{self.referrer}" "{self.user_agent}" {self.version_id}'
-        )
-
-    def to_dict(self) -> Dict[str, Any]:
-        return {
-            "bucket_owner": self.bucket_owner,
-            "bucket": self.bucket,
-            "timestamp": self.timestamp.isoformat(),
-            "remote_ip": self.remote_ip,
-            "requester": self.requester,
-            "request_id": self.request_id,
-            "operation": self.operation,
-            "key": self.key,
-            "request_uri": self.request_uri,
-            "http_status": self.http_status,
-            "error_code": self.error_code,
-            "bytes_sent": self.bytes_sent,
-            "object_size": self.object_size,
-            "total_time_ms": self.total_time_ms,
-            "referrer": self.referrer,
-            "user_agent": self.user_agent,
-            "version_id": self.version_id,
-        }
-
-
-@dataclass
-class LoggingConfiguration:
-    target_bucket: str
-    target_prefix: str = ""
-    enabled: bool = True
-
-    def to_dict(self) -> Dict[str, Any]:
-        return {
-            "LoggingEnabled": {
-                "TargetBucket": self.target_bucket,
-                "TargetPrefix": self.target_prefix,
-            }
-        }
-
-    @classmethod
-    def from_dict(cls, data: Dict[str, Any]) -> Optional["LoggingConfiguration"]:
-        logging_enabled = data.get("LoggingEnabled")
-        if not logging_enabled:
-            return None
-        return cls(
-            target_bucket=logging_enabled.get("TargetBucket", ""),
-            target_prefix=logging_enabled.get("TargetPrefix", ""),
-            enabled=True,
-        )
-
-
-class AccessLoggingService:
-    def __init__(self, storage_root: Path, flush_interval: int = 60, max_buffer_size: int = 1000):
-        self.storage_root = storage_root
-        self.flush_interval = flush_interval
-        self.max_buffer_size = max_buffer_size
-        self._configs: Dict[str, LoggingConfiguration] = {}
-        self._buffer: Dict[str, List[AccessLogEntry]] = {}
-        self._buffer_lock = threading.Lock()
-        self._shutdown = threading.Event()
-        self._storage = None
-
-        self._flush_thread = threading.Thread(target=self._flush_loop, name="access-log-flush", daemon=True)
-        self._flush_thread.start()
-
-    def set_storage(self, storage: Any) -> None:
-        self._storage = storage
-
-    def _config_path(self, bucket_name: str) -> Path:
-        return self.storage_root / ".myfsio.sys" / "buckets" / bucket_name / "logging.json"
-
-    def get_bucket_logging(self, bucket_name: str) -> Optional[LoggingConfiguration]:
-        if bucket_name in self._configs:
-            return self._configs[bucket_name]
-
-        config_path = self._config_path(bucket_name)
-        if not config_path.exists():
-            return None
-
-        try:
-            data = json.loads(config_path.read_text(encoding="utf-8"))
-            config = LoggingConfiguration.from_dict(data)
-            if config:
-                self._configs[bucket_name] = config
-            return config
-        except (json.JSONDecodeError, OSError) as e:
-            logger.warning(f"Failed to load logging config for {bucket_name}: {e}")
-            return None
-
-    def set_bucket_logging(self, bucket_name: str, config: LoggingConfiguration) -> None:
-        config_path = self._config_path(bucket_name)
-        config_path.parent.mkdir(parents=True, exist_ok=True)
-        config_path.write_text(json.dumps(config.to_dict(), indent=2), encoding="utf-8")
-        self._configs[bucket_name] = config
-
-    def delete_bucket_logging(self, bucket_name: str) -> None:
-        config_path = self._config_path(bucket_name)
-        try:
-            if config_path.exists():
-                config_path.unlink()
-        except OSError:
-            pass
-        self._configs.pop(bucket_name, None)
-
-    def log_request(
-        self,
-        bucket_name: str,
-        *,
-        operation: str,
-        key: str = "-",
-        remote_ip: str = "-",
-        requester: str = "-",
-        request_uri: str = "-",
-        http_status: int = 200,
-        error_code: str = "",
-        bytes_sent: int = 0,
-        object_size: int = 0,
-        total_time_ms: int = 0,
-        referrer: str = "-",
-        user_agent: str = "-",
-        version_id: str = "-",
-        request_id: str = "",
-    ) -> None:
-        config = self.get_bucket_logging(bucket_name)
-        if not config or not config.enabled:
-            return
-
-        entry = AccessLogEntry(
-            bucket_owner="local-owner",
-            bucket=bucket_name,
-            remote_ip=remote_ip,
-            requester=requester,
-            request_id=request_id or uuid.uuid4().hex[:16].upper(),
-            operation=operation,
-            key=key,
-            request_uri=request_uri,
-            http_status=http_status,
-            error_code=error_code,
-            bytes_sent=bytes_sent,
-            object_size=object_size,
-            total_time_ms=total_time_ms,
-            referrer=referrer,
-            user_agent=user_agent,
-            version_id=version_id,
-        )
-
-        target_key = f"{config.target_bucket}:{config.target_prefix}"
-        should_flush = False
-        with self._buffer_lock:
-            if target_key not in self._buffer:
-                self._buffer[target_key] = []
-            self._buffer[target_key].append(entry)
-            should_flush = len(self._buffer[target_key]) >= self.max_buffer_size
-
-        if should_flush:
-            self._flush_buffer(target_key)
-
-    def _flush_loop(self) -> None:
-        while not self._shutdown.is_set():
-            self._shutdown.wait(timeout=self.flush_interval)
-            if not self._shutdown.is_set():
-                self._flush_all()
-
-    def _flush_all(self) -> None:
-        with self._buffer_lock:
-            targets = list(self._buffer.keys())
-
-        for target_key in targets:
-            self._flush_buffer(target_key)
-
-    def _flush_buffer(self, target_key: str) -> None:
-        with self._buffer_lock:
-            entries = self._buffer.pop(target_key, [])
-
-        if not entries or not self._storage:
-            return
-
-        try:
-            bucket_name, prefix = target_key.split(":", 1)
-        except ValueError:
-            logger.error(f"Invalid target key: {target_key}")
-            return
-
-        now = datetime.now(timezone.utc)
-        log_key = f"{prefix}{now.strftime('%Y-%m-%d-%H-%M-%S')}-{uuid.uuid4().hex[:8]}"
-
-        log_content = "\n".join(entry.to_log_line() for entry in entries) + "\n"
-
-        try:
-            stream = io.BytesIO(log_content.encode("utf-8"))
-            self._storage.put_object(bucket_name, log_key, stream, enforce_quota=False)
-            logger.info(f"Flushed {len(entries)} access log entries to {bucket_name}/{log_key}")
-        except Exception as e:
-            logger.error(f"Failed to write access log to {bucket_name}/{log_key}: {e}")
-            with self._buffer_lock:
-                if target_key not in self._buffer:
-                    self._buffer[target_key] = []
-                self._buffer[target_key] = entries + self._buffer[target_key]
-
-    def flush(self) -> None:
-        self._flush_all()
-
-    def shutdown(self) -> None:
-        self._shutdown.set()
-        self._flush_all()
-        self._flush_thread.join(timeout=5.0)
-
-    def get_stats(self) -> Dict[str, Any]:
-        with self._buffer_lock:
-            buffered = sum(len(entries) for entries in self._buffer.values())
-        return {
-            "buffered_entries": buffered,
-            "target_buckets": len(self._buffer),
-        }

204 app/acl.py

@@ -1,204 +0,0 @@
-from __future__ import annotations
-
-import json
-from dataclasses import dataclass, field
-from pathlib import Path
-from typing import Any, Dict, List, Optional, Set
-
-
-ACL_PERMISSION_FULL_CONTROL = "FULL_CONTROL"
-ACL_PERMISSION_WRITE = "WRITE"
-ACL_PERMISSION_WRITE_ACP = "WRITE_ACP"
-ACL_PERMISSION_READ = "READ"
-ACL_PERMISSION_READ_ACP = "READ_ACP"
-
-ALL_PERMISSIONS = {
-    ACL_PERMISSION_FULL_CONTROL,
-    ACL_PERMISSION_WRITE,
-    ACL_PERMISSION_WRITE_ACP,
-    ACL_PERMISSION_READ,
-    ACL_PERMISSION_READ_ACP,
-}
-
-PERMISSION_TO_ACTIONS = {
-    ACL_PERMISSION_FULL_CONTROL: {"read", "write", "delete", "list", "share"},
-    ACL_PERMISSION_WRITE: {"write", "delete"},
-    ACL_PERMISSION_WRITE_ACP: {"share"},
-    ACL_PERMISSION_READ: {"read", "list"},
-    ACL_PERMISSION_READ_ACP: {"share"},
-}
-
-GRANTEE_ALL_USERS = "*"
-GRANTEE_AUTHENTICATED_USERS = "authenticated"
-
-
-@dataclass
-class AclGrant:
-    grantee: str
-    permission: str
-
-    def to_dict(self) -> Dict[str, str]:
-        return {"grantee": self.grantee, "permission": self.permission}
-
-    @classmethod
-    def from_dict(cls, data: Dict[str, str]) -> "AclGrant":
-        return cls(grantee=data["grantee"], permission=data["permission"])
-
-
-@dataclass
-class Acl:
-    owner: str
-    grants: List[AclGrant] = field(default_factory=list)
-
-    def to_dict(self) -> Dict[str, Any]:
-        return {
-            "owner": self.owner,
-            "grants": [g.to_dict() for g in self.grants],
-        }
-
-    @classmethod
-    def from_dict(cls, data: Dict[str, Any]) -> "Acl":
-        return cls(
-            owner=data.get("owner", ""),
-            grants=[AclGrant.from_dict(g) for g in data.get("grants", [])],
-        )
-
-    def get_allowed_actions(self, principal_id: Optional[str], is_authenticated: bool = True) -> Set[str]:
-        actions: Set[str] = set()
-        if principal_id and principal_id == self.owner:
-            actions.update(PERMISSION_TO_ACTIONS[ACL_PERMISSION_FULL_CONTROL])
-        for grant in self.grants:
-            if grant.grantee == GRANTEE_ALL_USERS:
-                actions.update(PERMISSION_TO_ACTIONS.get(grant.permission, set()))
-            elif grant.grantee == GRANTEE_AUTHENTICATED_USERS and is_authenticated:
-                actions.update(PERMISSION_TO_ACTIONS.get(grant.permission, set()))
-            elif principal_id and grant.grantee == principal_id:
-                actions.update(PERMISSION_TO_ACTIONS.get(grant.permission, set()))
-        return actions
-
-
-CANNED_ACLS = {
-    "private": lambda owner: Acl(
-        owner=owner,
-        grants=[AclGrant(grantee=owner, permission=ACL_PERMISSION_FULL_CONTROL)],
-    ),
-    "public-read": lambda owner: Acl(
-        owner=owner,
-        grants=[
-            AclGrant(grantee=owner, permission=ACL_PERMISSION_FULL_CONTROL),
-            AclGrant(grantee=GRANTEE_ALL_USERS, permission=ACL_PERMISSION_READ),
-        ],
-    ),
-    "public-read-write": lambda owner: Acl(
-        owner=owner,
-        grants=[
-            AclGrant(grantee=owner, permission=ACL_PERMISSION_FULL_CONTROL),
-            AclGrant(grantee=GRANTEE_ALL_USERS, permission=ACL_PERMISSION_READ),
-            AclGrant(grantee=GRANTEE_ALL_USERS, permission=ACL_PERMISSION_WRITE),
-        ],
-    ),
-    "authenticated-read": lambda owner: Acl(
-        owner=owner,
-        grants=[
-            AclGrant(grantee=owner, permission=ACL_PERMISSION_FULL_CONTROL),
-            AclGrant(grantee=GRANTEE_AUTHENTICATED_USERS, permission=ACL_PERMISSION_READ),
-        ],
-    ),
-    "bucket-owner-read": lambda owner: Acl(
-        owner=owner,
-        grants=[
-            AclGrant(grantee=owner, permission=ACL_PERMISSION_FULL_CONTROL),
-        ],
-    ),
-    "bucket-owner-full-control": lambda owner: Acl(
-        owner=owner,
-        grants=[
-            AclGrant(grantee=owner, permission=ACL_PERMISSION_FULL_CONTROL),
-        ],
-    ),
-}
-
-
-def create_canned_acl(canned_acl: str, owner: str) -> Acl:
-    factory = CANNED_ACLS.get(canned_acl)
-    if not factory:
-        return CANNED_ACLS["private"](owner)
-    return factory(owner)
-
-
-class AclService:
-    def __init__(self, storage_root: Path):
-        self.storage_root = storage_root
-        self._bucket_acl_cache: Dict[str, Acl] = {}
-
-    def _bucket_acl_path(self, bucket_name: str) -> Path:
-        return self.storage_root / ".myfsio.sys" / "buckets" / bucket_name / ".acl.json"
-
-    def get_bucket_acl(self, bucket_name: str) -> Optional[Acl]:
-        if bucket_name in self._bucket_acl_cache:
-            return self._bucket_acl_cache[bucket_name]
-        acl_path = self._bucket_acl_path(bucket_name)
-        if not acl_path.exists():
-            return None
-        try:
-            data = json.loads(acl_path.read_text(encoding="utf-8"))
-            acl = Acl.from_dict(data)
-            self._bucket_acl_cache[bucket_name] = acl
-            return acl
-        except (OSError, json.JSONDecodeError):
-            return None
-
-    def set_bucket_acl(self, bucket_name: str, acl: Acl) -> None:
-        acl_path = self._bucket_acl_path(bucket_name)
-        acl_path.parent.mkdir(parents=True, exist_ok=True)
-        acl_path.write_text(json.dumps(acl.to_dict(), indent=2), encoding="utf-8")
-        self._bucket_acl_cache[bucket_name] = acl
-
-    def set_bucket_canned_acl(self, bucket_name: str, canned_acl: str, owner: str) -> Acl:
-        acl = create_canned_acl(canned_acl, owner)
-        self.set_bucket_acl(bucket_name, acl)
-        return acl
-
-    def delete_bucket_acl(self, bucket_name: str) -> None:
-        acl_path = self._bucket_acl_path(bucket_name)
-        if acl_path.exists():
-            acl_path.unlink()
-        self._bucket_acl_cache.pop(bucket_name, None)
-
-    def evaluate_bucket_acl(
-        self,
-        bucket_name: str,
-        principal_id: Optional[str],
-        action: str,
-        is_authenticated: bool = True,
-    ) -> bool:
-        acl = self.get_bucket_acl(bucket_name)
-        if not acl:
-            return False
-        allowed_actions = acl.get_allowed_actions(principal_id, is_authenticated)
-        return action in allowed_actions
-
-    def get_object_acl(self, bucket_name: str, object_key: str, object_metadata: Dict[str, Any]) -> Optional[Acl]:
-        acl_data = object_metadata.get("__acl__")
-        if not acl_data:
-            return None
-        try:
-            return Acl.from_dict(acl_data)
-        except (TypeError, KeyError):
-            return None
-
-    def create_object_acl_metadata(self, acl: Acl) -> Dict[str, Any]:
-        return {"__acl__": acl.to_dict()}
-
-    def evaluate_object_acl(
-        self,
-        object_metadata: Dict[str, Any],
-        principal_id: Optional[str],
-        action: str,
-        is_authenticated: bool = True,
-    ) -> bool:
-        acl = self.get_object_acl("", "", object_metadata)
-        if not acl:
-            return False
-        allowed_actions = acl.get_allowed_actions(principal_id, is_authenticated)
-        return action in allowed_actions

app/bucket_policies.py

@@ -1,82 +1,23 @@
+"""Bucket policy loader/enforcer with a subset of AWS semantics."""
 from __future__ import annotations

-import ipaddress
 import json
-import re
-import time
-from dataclasses import dataclass, field
-from fnmatch import fnmatch, translate
+from dataclasses import dataclass
+from fnmatch import fnmatch
 from pathlib import Path
-from typing import Any, Dict, Iterable, List, Optional, Pattern, Sequence, Tuple
+from typing import Any, Dict, Iterable, List, Optional, Sequence


 RESOURCE_PREFIX = "arn:aws:s3:::"


-def _match_string_like(value: str, pattern: str) -> bool:
-    regex = translate(pattern)
-    return bool(re.match(regex, value, re.IGNORECASE))
-
-
-def _ip_in_cidr(ip_str: str, cidr: str) -> bool:
-    try:
-        ip = ipaddress.ip_address(ip_str)
-        network = ipaddress.ip_network(cidr, strict=False)
-        return ip in network
-    except ValueError:
-        return False
-
-
-def _evaluate_condition_operator(
-    operator: str,
-    condition_key: str,
-    condition_values: List[str],
-    context: Dict[str, Any],
-) -> bool:
-    context_value = context.get(condition_key)
-    op_lower = operator.lower()
-    if_exists = op_lower.endswith("ifexists")
-    if if_exists:
-        op_lower = op_lower[:-8]
-
-    if context_value is None:
-        return if_exists
-
-    context_value_str = str(context_value)
-    context_value_lower = context_value_str.lower()
-
-    if op_lower == "stringequals":
-        return context_value_str in condition_values
-    elif op_lower == "stringnotequals":
-        return context_value_str not in condition_values
-    elif op_lower == "stringequalsignorecase":
-        return context_value_lower in [v.lower() for v in condition_values]
-    elif op_lower == "stringnotequalsignorecase":
-        return context_value_lower not in [v.lower() for v in condition_values]
-    elif op_lower == "stringlike":
-        return any(_match_string_like(context_value_str, p) for p in condition_values)
-    elif op_lower == "stringnotlike":
-        return not any(_match_string_like(context_value_str, p) for p in condition_values)
-    elif op_lower == "ipaddress":
-        return any(_ip_in_cidr(context_value_str, cidr) for cidr in condition_values)
-    elif op_lower == "notipaddress":
-        return not any(_ip_in_cidr(context_value_str, cidr) for cidr in condition_values)
-    elif op_lower == "bool":
-        bool_val = context_value_lower in ("true", "1", "yes")
-        return str(bool_val).lower() in [v.lower() for v in condition_values]
-    elif op_lower == "null":
-        is_null = context_value is None or context_value == ""
-        expected_null = condition_values[0].lower() in ("true", "1", "yes") if condition_values else True
-        return is_null == expected_null
-
-    return True
-
-
 ACTION_ALIASES = {
+    # List actions
     "s3:listbucket": "list",
     "s3:listallmybuckets": "list",
     "s3:listbucketversions": "list",
     "s3:listmultipartuploads": "list",
     "s3:listparts": "list",
+    # Read actions
     "s3:getobject": "read",
     "s3:getobjectversion": "read",
     "s3:getobjecttagging": "read",
@@ -85,6 +26,7 @@ ACTION_ALIASES = {
     "s3:getbucketversioning": "read",
     "s3:headobject": "read",
     "s3:headbucket": "read",
+    # Write actions
     "s3:putobject": "write",
     "s3:createbucket": "write",
     "s3:putobjecttagging": "write",
@@ -94,30 +36,26 @@ ACTION_ALIASES = {
     "s3:completemultipartupload": "write",
     "s3:abortmultipartupload": "write",
     "s3:copyobject": "write",
+    # Delete actions
     "s3:deleteobject": "delete",
     "s3:deleteobjectversion": "delete",
     "s3:deletebucket": "delete",
     "s3:deleteobjecttagging": "delete",
+    # Share actions (ACL)
     "s3:putobjectacl": "share",
     "s3:putbucketacl": "share",
     "s3:getbucketacl": "share",
+    # Policy actions
     "s3:putbucketpolicy": "policy",
     "s3:getbucketpolicy": "policy",
     "s3:deletebucketpolicy": "policy",
+    # Replication actions
     "s3:getreplicationconfiguration": "replication",
     "s3:putreplicationconfiguration": "replication",
     "s3:deletereplicationconfiguration": "replication",
     "s3:replicateobject": "replication",
     "s3:replicatetags": "replication",
     "s3:replicatedelete": "replication",
-    "s3:getlifecycleconfiguration": "lifecycle",
-    "s3:putlifecycleconfiguration": "lifecycle",
-    "s3:deletelifecycleconfiguration": "lifecycle",
-    "s3:getbucketlifecycle": "lifecycle",
-    "s3:putbucketlifecycle": "lifecycle",
-    "s3:getbucketcors": "cors",
-    "s3:putbucketcors": "cors",
-    "s3:deletebucketcors": "cors",
 }

@@ -195,20 +133,7 @@ class BucketPolicyStatement:
     effect: str
     principals: List[str] | str
     actions: List[str]
-    resources: List[Tuple[str | None, str | None]]
-    conditions: Dict[str, Dict[str, List[str]]] = field(default_factory=dict)
-    _compiled_patterns: List[Tuple[str | None, Optional[Pattern[str]]]] | None = None
-
-    def _get_compiled_patterns(self) -> List[Tuple[str | None, Optional[Pattern[str]]]]:
-        if self._compiled_patterns is None:
-            self._compiled_patterns = []
-            for resource_bucket, key_pattern in self.resources:
-                if key_pattern is None:
-                    self._compiled_patterns.append((resource_bucket, None))
-                else:
-                    regex_pattern = translate(key_pattern)
-                    self._compiled_patterns.append((resource_bucket, re.compile(regex_pattern)))
-        return self._compiled_patterns
+    resources: List[tuple[str | None, str | None]]

     def matches_principal(self, access_key: Optional[str]) -> bool:
         if self.principals == "*":
@@ -224,29 +149,18 @@ class BucketPolicyStatement:
     def matches_resource(self, bucket: Optional[str], object_key: Optional[str]) -> bool:
         bucket = (bucket or "*").lower()
         key = object_key or ""
-        for resource_bucket, compiled_pattern in self._get_compiled_patterns():
+        for resource_bucket, key_pattern in self.resources:
             resource_bucket = (resource_bucket or "*").lower()
             if resource_bucket not in {"*", bucket}:
                 continue
-            if compiled_pattern is None:
+            if key_pattern is None:
                 if not key:
                     return True
                 continue
-            if compiled_pattern.match(key):
+            if fnmatch(key, key_pattern):
                 return True
         return False

-    def matches_condition(self, context: Optional[Dict[str, Any]]) -> bool:
-        if not self.conditions:
-            return True
-        if context is None:
-            context = {}
-        for operator, key_values in self.conditions.items():
-            for condition_key, condition_values in key_values.items():
-                if not _evaluate_condition_operator(operator, condition_key, condition_values, context):
-                    return False
-        return True
-

 class BucketPolicyStore:
     """Loads bucket policies from disk and evaluates statements."""
@@ -260,16 +174,8 @@ class BucketPolicyStore:
         self._policies: Dict[str, List[BucketPolicyStatement]] = {}
         self._load()
         self._last_mtime = self._current_mtime()
-        # Performance: Avoid stat() on every request
-        self._last_stat_check = 0.0
-        self._stat_check_interval = 1.0  # Only check mtime every 1 second

     def maybe_reload(self) -> None:
-        # Performance: Skip stat check if we checked recently
-        now = time.time()
-        if now - self._last_stat_check < self._stat_check_interval:
-            return
-        self._last_stat_check = now
         current = self._current_mtime()
         if current is None or current == self._last_mtime:
             return
@@ -282,13 +188,13 @@ class BucketPolicyStore:
         except FileNotFoundError:
             return None

+    # ------------------------------------------------------------------
     def evaluate(
         self,
         access_key: Optional[str],
         bucket: Optional[str],
         object_key: Optional[str],
         action: str,
-        context: Optional[Dict[str, Any]] = None,
     ) -> str | None:
         bucket = (bucket or "").lower()
         statements = self._policies.get(bucket) or []
@@ -300,8 +206,6 @@ class BucketPolicyStore:
                 continue
             if not statement.matches_resource(bucket, object_key):
                 continue
-            if not statement.matches_condition(context):
-                continue
             if statement.effect == "deny":
                 return "deny"
             decision = "allow"
@@ -325,6 +229,7 @@ class BucketPolicyStore:
         self._policies.pop(bucket, None)
         self._persist()

+    # ------------------------------------------------------------------
     def _load(self) -> None:
         try:
             content = self.policy_path.read_text(encoding='utf-8')
@@ -366,7 +271,6 @@ class BucketPolicyStore:
             if not resources:
                 continue
             effect = statement.get("Effect", "Allow").lower()
-            conditions = self._normalize_conditions(statement.get("Condition", {}))
             statements.append(
                 BucketPolicyStatement(
                     sid=statement.get("Sid"),
@@ -374,24 +278,6 @@ class BucketPolicyStore:
                     principals=principals,
                     actions=actions or ["*"],
                     resources=resources,
-                    conditions=conditions,
                 )
             )
         return statements

-    def _normalize_conditions(self, condition_block: Dict[str, Any]) -> Dict[str, Dict[str, List[str]]]:
-        if not condition_block or not isinstance(condition_block, dict):
-            return {}
-        normalized: Dict[str, Dict[str, List[str]]] = {}
-        for operator, key_values in condition_block.items():
-            if not isinstance(key_values, dict):
-                continue
-            normalized[operator] = {}
-            for cond_key, cond_values in key_values.items():
-                if isinstance(cond_values, str):
-                    normalized[operator][cond_key] = [cond_values]
-                elif isinstance(cond_values, list):
-                    normalized[operator][cond_key] = [str(v) for v in cond_values]
-                else:
-                    normalized[operator][cond_key] = [str(cond_values)]
-        return normalized

app/compression.py

@@ -1,94 +0,0 @@
-from __future__ import annotations
-
-import gzip
-import io
-from typing import Callable, Iterable, List, Tuple
-
-COMPRESSIBLE_MIMES = frozenset([
-    'application/json',
-    'application/javascript',
-    'application/xml',
-    'text/html',
-    'text/css',
-    'text/plain',
-    'text/xml',
-    'text/javascript',
-    'application/x-ndjson',
-])
-
-MIN_SIZE_FOR_COMPRESSION = 500
-
-
-class GzipMiddleware:
-    def __init__(self, app: Callable, compression_level: int = 6, min_size: int = MIN_SIZE_FOR_COMPRESSION):
-        self.app = app
-        self.compression_level = compression_level
-        self.min_size = min_size
-
-    def __call__(self, environ: dict, start_response: Callable) -> Iterable[bytes]:
-        accept_encoding = environ.get('HTTP_ACCEPT_ENCODING', '')
-        if 'gzip' not in accept_encoding.lower():
-            return self.app(environ, start_response)
-
-        response_started = False
-        status_code = None
-        response_headers: List[Tuple[str, str]] = []
-        content_type = None
-        content_length = None
-        should_compress = False
-        exc_info_holder = [None]
-
-        def custom_start_response(status: str, headers: List[Tuple[str, str]], exc_info=None):
-            nonlocal response_started, status_code, response_headers, content_type, content_length, should_compress
-            response_started = True
-            status_code = int(status.split(' ', 1)[0])
-            response_headers = list(headers)
-            exc_info_holder[0] = exc_info
-
-            for name, value in headers:
-                name_lower = name.lower()
-                if name_lower == 'content-type':
-                    content_type = value.split(';')[0].strip().lower()
-                elif name_lower == 'content-length':
-                    content_length = int(value)
-                elif name_lower == 'content-encoding':
-                    should_compress = False
-                    return start_response(status, headers, exc_info)
-
-            if content_type and content_type in COMPRESSIBLE_MIMES:
-                if content_length is None or content_length >= self.min_size:
-                    should_compress = True
-
-            return None
-
-        response_body = b''.join(self.app(environ, custom_start_response))
-
-        if not response_started:
-            return [response_body]
-
-        if should_compress and len(response_body) >= self.min_size:
-            buf = io.BytesIO()
-            with gzip.GzipFile(fileobj=buf, mode='wb', compresslevel=self.compression_level) as gz:
-                gz.write(response_body)
-            compressed = buf.getvalue()
-
-            if len(compressed) < len(response_body):
-                response_body = compressed
-                new_headers = []
-                for name, value in response_headers:
-                    if name.lower() not in ('content-length', 'content-encoding'):
-                        new_headers.append((name, value))
-                new_headers.append(('Content-Encoding', 'gzip'))
-                new_headers.append(('Content-Length', str(len(response_body))))
-                new_headers.append(('Vary', 'Accept-Encoding'))
-                response_headers = new_headers
-
-        status_str = f"{status_code} " + {
-            200: "OK", 201: "Created", 204: "No Content", 206: "Partial Content",
-            301: "Moved Permanently", 302: "Found", 304: "Not Modified",
-            400: "Bad Request", 401: "Unauthorized", 403: "Forbidden", 404: "Not Found",
-            405: "Method Not Allowed", 409: "Conflict", 500: "Internal Server Error",
-        }.get(status_code, "Unknown")
-
-        start_response(status_str, response_headers, exc_info_holder[0])
-        return [response_body]
143
app/config.py
143
app/config.py
@@ -1,3 +1,4 @@
|
|||||||
|
"""Configuration helpers for the S3 clone application."""
|
||||||
from __future__ import annotations
|
from __future__ import annotations
|
||||||
|
|
||||||
import os
|
import os
|
||||||
@@ -58,7 +59,6 @@ class AppConfig:
|
|||||||
cors_origins: list[str]
|
cors_origins: list[str]
|
||||||
cors_methods: list[str]
|
cors_methods: list[str]
|
||||||
cors_allow_headers: list[str]
|
cors_allow_headers: list[str]
|
||||||
cors_expose_headers: list[str]
|
|
||||||
session_lifetime_days: int
|
session_lifetime_days: int
|
||||||
auth_max_attempts: int
|
auth_max_attempts: int
|
||||||
auth_lockout_minutes: int
|
auth_lockout_minutes: int
|
||||||
@@ -67,15 +67,11 @@ class AppConfig:
|
|||||||
stream_chunk_size: int
|
stream_chunk_size: int
|
||||||
multipart_min_part_size: int
|
multipart_min_part_size: int
|
||||||
bucket_stats_cache_ttl: int
|
bucket_stats_cache_ttl: int
|
||||||
object_cache_ttl: int
|
|
||||||
encryption_enabled: bool
|
encryption_enabled: bool
|
||||||
encryption_master_key_path: Path
|
encryption_master_key_path: Path
|
||||||
kms_enabled: bool
|
kms_enabled: bool
|
||||||
kms_keys_path: Path
|
kms_keys_path: Path
|
||||||
default_encryption_algorithm: str
|
default_encryption_algorithm: str
|
||||||
display_timezone: str
|
|
||||||
lifecycle_enabled: bool
|
|
||||||
lifecycle_interval_seconds: int
|
|
||||||
|
|
||||||
@classmethod
|
@classmethod
|
||||||
def from_env(cls, overrides: Optional[Dict[str, Any]] = None) -> "AppConfig":
|
def from_env(cls, overrides: Optional[Dict[str, Any]] = None) -> "AppConfig":
|
||||||
@@ -85,7 +81,7 @@ class AppConfig:
|
|||||||
return overrides.get(name, os.getenv(name, default))
|
return overrides.get(name, os.getenv(name, default))
|
||||||
|
|
||||||
storage_root = Path(_get("STORAGE_ROOT", PROJECT_ROOT / "data")).resolve()
|
storage_root = Path(_get("STORAGE_ROOT", PROJECT_ROOT / "data")).resolve()
|
||||||
max_upload_size = int(_get("MAX_UPLOAD_SIZE", 1024 * 1024 * 1024))
|
max_upload_size = int(_get("MAX_UPLOAD_SIZE", 1024 * 1024 * 1024)) # 1 GiB default
|
||||||
ui_page_size = int(_get("UI_PAGE_SIZE", 100))
|
ui_page_size = int(_get("UI_PAGE_SIZE", 100))
|
||||||
auth_max_attempts = int(_get("AUTH_MAX_ATTEMPTS", 5))
|
auth_max_attempts = int(_get("AUTH_MAX_ATTEMPTS", 5))
|
||||||
auth_lockout_minutes = int(_get("AUTH_LOCKOUT_MINUTES", 15))
|
auth_lockout_minutes = int(_get("AUTH_LOCKOUT_MINUTES", 15))
|
||||||
@@ -93,8 +89,6 @@ class AppConfig:
|
|||||||
secret_ttl_seconds = int(_get("SECRET_TTL_SECONDS", 300))
|
secret_ttl_seconds = int(_get("SECRET_TTL_SECONDS", 300))
|
||||||
stream_chunk_size = int(_get("STREAM_CHUNK_SIZE", 64 * 1024))
|
stream_chunk_size = int(_get("STREAM_CHUNK_SIZE", 64 * 1024))
|
||||||
multipart_min_part_size = int(_get("MULTIPART_MIN_PART_SIZE", 5 * 1024 * 1024))
|
multipart_min_part_size = int(_get("MULTIPART_MIN_PART_SIZE", 5 * 1024 * 1024))
|
||||||
lifecycle_enabled = _get("LIFECYCLE_ENABLED", "false").lower() in ("true", "1", "yes")
|
|
||||||
lifecycle_interval_seconds = int(_get("LIFECYCLE_INTERVAL_SECONDS", 3600))
|
|
||||||
default_secret = "dev-secret-key"
|
default_secret = "dev-secret-key"
|
||||||
secret_key = str(_get("SECRET_KEY", default_secret))
|
secret_key = str(_get("SECRET_KEY", default_secret))
|
||||||
|
|
||||||
@@ -109,10 +103,6 @@ class AppConfig:
|
|||||||
try:
|
try:
|
||||||
secret_file.parent.mkdir(parents=True, exist_ok=True)
|
secret_file.parent.mkdir(parents=True, exist_ok=True)
|
||||||
secret_file.write_text(generated)
|
secret_file.write_text(generated)
|
||||||
try:
|
|
||||||
os.chmod(secret_file, 0o600)
|
|
||||||
except OSError:
|
|
||||||
pass
|
|
||||||
secret_key = generated
|
secret_key = generated
|
||||||
except OSError:
|
except OSError:
|
||||||
secret_key = generated
|
secret_key = generated
|
||||||
@@ -120,19 +110,19 @@ class AppConfig:
         iam_env_override = "IAM_CONFIG" in overrides or "IAM_CONFIG" in os.environ
         bucket_policy_override = "BUCKET_POLICY_PATH" in overrides or "BUCKET_POLICY_PATH" in os.environ

-        default_iam_path = storage_root / ".myfsio.sys" / "config" / "iam.json"
-        default_bucket_policy_path = storage_root / ".myfsio.sys" / "config" / "bucket_policies.json"
+        default_iam_path = PROJECT_ROOT / "data" / ".myfsio.sys" / "config" / "iam.json"
+        default_bucket_policy_path = PROJECT_ROOT / "data" / ".myfsio.sys" / "config" / "bucket_policies.json"

         iam_config_path = Path(_get("IAM_CONFIG", default_iam_path)).resolve()
         bucket_policy_path = Path(_get("BUCKET_POLICY_PATH", default_bucket_policy_path)).resolve()

         iam_config_path = _prepare_config_file(
             iam_config_path,
-            legacy_path=None if iam_env_override else storage_root / "iam.json",
+            legacy_path=None if iam_env_override else PROJECT_ROOT / "data" / "iam.json",
         )
         bucket_policy_path = _prepare_config_file(
             bucket_policy_path,
-            legacy_path=None if bucket_policy_override else storage_root / "bucket_policies.json",
+            legacy_path=None if bucket_policy_override else PROJECT_ROOT / "data" / "bucket_policies.json",
         )
         api_base_url = _get("API_BASE_URL", None)
         if api_base_url:
@@ -143,7 +133,7 @@ class AppConfig:
         enforce_ui_policies = str(_get("UI_ENFORCE_BUCKET_POLICIES", "0")).lower() in {"1", "true", "yes", "on"}
         log_level = str(_get("LOG_LEVEL", "INFO")).upper()
         log_to_file = str(_get("LOG_TO_FILE", "1")).lower() in {"1", "true", "yes", "on"}
-        log_dir = Path(_get("LOG_DIR", storage_root.parent / "logs")).resolve()
+        log_dir = Path(_get("LOG_DIR", PROJECT_ROOT / "logs")).resolve()
         log_dir.mkdir(parents=True, exist_ok=True)
         log_path = log_dir / str(_get("LOG_FILE", "app.log"))
         log_max_bytes = int(_get("LOG_MAX_BYTES", 5 * 1024 * 1024))
@@ -158,20 +148,28 @@ class AppConfig:
             return parts or default

         cors_origins = _csv(str(_get("CORS_ORIGINS", "*")), ["*"])
-        cors_methods = _csv(str(_get("CORS_METHODS", "GET,PUT,POST,DELETE,OPTIONS,HEAD")), ["GET", "PUT", "POST", "DELETE", "OPTIONS", "HEAD"])
-        cors_allow_headers = _csv(str(_get("CORS_ALLOW_HEADERS", "*")), ["*"])
-        cors_expose_headers = _csv(str(_get("CORS_EXPOSE_HEADERS", "*")), ["*"])
+        cors_methods = _csv(str(_get("CORS_METHODS", "GET,PUT,POST,DELETE,OPTIONS")), ["GET", "PUT", "POST", "DELETE", "OPTIONS"])
+        cors_allow_headers = _csv(str(_get("CORS_ALLOW_HEADERS", "Content-Type,X-Access-Key,X-Secret-Key,X-Amz-Algorithm,X-Amz-Credential,X-Amz-Date,X-Amz-Expires,X-Amz-SignedHeaders,X-Amz-Signature")), [
+            "Content-Type",
+            "X-Access-Key",
+            "X-Secret-Key",
+            "X-Amz-Algorithm",
+            "X-Amz-Credential",
+            "X-Amz-Date",
+            "X-Amz-Expires",
+            "X-Amz-SignedHeaders",
+            "X-Amz-Signature",
+        ])
         session_lifetime_days = int(_get("SESSION_LIFETIME_DAYS", 30))
-        bucket_stats_cache_ttl = int(_get("BUCKET_STATS_CACHE_TTL", 60))
-        object_cache_ttl = int(_get("OBJECT_CACHE_TTL", 5))
+        bucket_stats_cache_ttl = int(_get("BUCKET_STATS_CACHE_TTL", 60))  # Default 60 seconds

+        # Encryption settings
         encryption_enabled = str(_get("ENCRYPTION_ENABLED", "0")).lower() in {"1", "true", "yes", "on"}
         encryption_keys_dir = storage_root / ".myfsio.sys" / "keys"
         encryption_master_key_path = Path(_get("ENCRYPTION_MASTER_KEY_PATH", encryption_keys_dir / "master.key")).resolve()
         kms_enabled = str(_get("KMS_ENABLED", "0")).lower() in {"1", "true", "yes", "on"}
         kms_keys_path = Path(_get("KMS_KEYS_PATH", encryption_keys_dir / "kms_keys.json")).resolve()
         default_encryption_algorithm = str(_get("DEFAULT_ENCRYPTION_ALGORITHM", "AES256"))
-        display_timezone = str(_get("DISPLAY_TIMEZONE", "UTC"))

         return cls(storage_root=storage_root,
                    max_upload_size=max_upload_size,
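Both CORS settings flow through the `_csv` helper, which splits a comma-separated environment value and falls back to a default list when the value is empty. A minimal sketch of that parsing (the `parse_csv` name is illustrative):

from typing import List

def parse_csv(value: str, default: List[str]) -> List[str]:
    # Split on commas, trim whitespace, and drop empty entries.
    parts = [part.strip() for part in value.split(",") if part.strip()]
    return parts or default

assert parse_csv("GET, PUT,POST", ["*"]) == ["GET", "PUT", "POST"]
assert parse_csv("", ["*"]) == ["*"]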
@@ -193,7 +191,6 @@ class AppConfig:
                    cors_origins=cors_origins,
                    cors_methods=cors_methods,
                    cors_allow_headers=cors_allow_headers,
-                   cors_expose_headers=cors_expose_headers,
                    session_lifetime_days=session_lifetime_days,
                    auth_max_attempts=auth_max_attempts,
                    auth_lockout_minutes=auth_lockout_minutes,
@@ -202,102 +199,11 @@ class AppConfig:
                    stream_chunk_size=stream_chunk_size,
                    multipart_min_part_size=multipart_min_part_size,
                    bucket_stats_cache_ttl=bucket_stats_cache_ttl,
-                   object_cache_ttl=object_cache_ttl,
                    encryption_enabled=encryption_enabled,
                    encryption_master_key_path=encryption_master_key_path,
                    kms_enabled=kms_enabled,
                    kms_keys_path=kms_keys_path,
-                   default_encryption_algorithm=default_encryption_algorithm,
-                   display_timezone=display_timezone,
-                   lifecycle_enabled=lifecycle_enabled,
-                   lifecycle_interval_seconds=lifecycle_interval_seconds)
-
-    def validate_and_report(self) -> list[str]:
-        """Validate configuration and return a list of warnings/issues.
-
-        Call this at startup to detect potential misconfigurations before
-        the application fully commits to running.
-        """
-        issues = []
-
-        try:
-            test_file = self.storage_root / ".write_test"
-            test_file.touch()
-            test_file.unlink()
-        except (OSError, PermissionError) as e:
-            issues.append(f"CRITICAL: STORAGE_ROOT '{self.storage_root}' is not writable: {e}")
-
-        storage_str = str(self.storage_root).lower()
-        if "/tmp" in storage_str or "\\temp" in storage_str or "appdata\\local\\temp" in storage_str:
-            issues.append(f"WARNING: STORAGE_ROOT '{self.storage_root}' appears to be a temporary directory. Data may be lost on reboot!")
-
-        try:
-            self.iam_config_path.relative_to(self.storage_root)
-        except ValueError:
-            issues.append(f"WARNING: IAM_CONFIG '{self.iam_config_path}' is outside STORAGE_ROOT '{self.storage_root}'. Consider setting IAM_CONFIG explicitly or ensuring paths are aligned.")
-
-        try:
-            self.bucket_policy_path.relative_to(self.storage_root)
-        except ValueError:
-            issues.append(f"WARNING: BUCKET_POLICY_PATH '{self.bucket_policy_path}' is outside STORAGE_ROOT '{self.storage_root}'. Consider setting BUCKET_POLICY_PATH explicitly.")
-
-        try:
-            self.log_path.parent.mkdir(parents=True, exist_ok=True)
-            test_log = self.log_path.parent / ".write_test"
-            test_log.touch()
-            test_log.unlink()
-        except (OSError, PermissionError) as e:
-            issues.append(f"WARNING: Log directory '{self.log_path.parent}' is not writable: {e}")
-
-        log_str = str(self.log_path).lower()
-        if "/tmp" in log_str or "\\temp" in log_str or "appdata\\local\\temp" in log_str:
-            issues.append(f"WARNING: LOG_DIR '{self.log_path.parent}' appears to be a temporary directory. Logs may be lost on reboot!")
-
-        if self.encryption_enabled:
-            try:
-                self.encryption_master_key_path.relative_to(self.storage_root)
-            except ValueError:
-                issues.append(f"WARNING: ENCRYPTION_MASTER_KEY_PATH '{self.encryption_master_key_path}' is outside STORAGE_ROOT. Ensure proper backup procedures.")
-
-        if self.kms_enabled:
-            try:
-                self.kms_keys_path.relative_to(self.storage_root)
-            except ValueError:
-                issues.append(f"WARNING: KMS_KEYS_PATH '{self.kms_keys_path}' is outside STORAGE_ROOT. Ensure proper backup procedures.")
-
-        if self.secret_key == "dev-secret-key":
-            issues.append("WARNING: Using default SECRET_KEY. Set SECRET_KEY environment variable for production.")
-
-        if "*" in self.cors_origins:
-            issues.append("INFO: CORS_ORIGINS is set to '*'. Consider restricting to specific domains in production.")
-
-        return issues
-
-    def print_startup_summary(self) -> None:
-        """Print a summary of the configuration at startup."""
-        print("\n" + "=" * 60)
-        print("MyFSIO Configuration Summary")
-        print("=" * 60)
-        print(f"  STORAGE_ROOT: {self.storage_root}")
-        print(f"  IAM_CONFIG: {self.iam_config_path}")
-        print(f"  BUCKET_POLICY: {self.bucket_policy_path}")
-        print(f"  LOG_PATH: {self.log_path}")
-        if self.api_base_url:
-            print(f"  API_BASE_URL: {self.api_base_url}")
-        if self.encryption_enabled:
-            print(f"  ENCRYPTION: Enabled (Master key: {self.encryption_master_key_path})")
-        if self.kms_enabled:
-            print(f"  KMS: Enabled (Keys: {self.kms_keys_path})")
-        print("=" * 60)
-
-        issues = self.validate_and_report()
-        if issues:
-            print("\nConfiguration Issues Detected:")
-            for issue in issues:
-                print(f"  • {issue}")
-            print()
-        else:
-            print("  ✓ Configuration validated successfully\n")
+                   default_encryption_algorithm=default_encryption_algorithm)

     def to_flask_config(self) -> Dict[str, Any]:
         return {
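The removed `validate_and_report` returns plain strings prefixed with a severity tag, so the caller decides how loudly to react. A hedged usage sketch (assuming an `AppConfig` instance built as shown above):

# At startup: surface configuration problems without aborting on mere warnings.
cfg = AppConfig.from_env()
for issue in cfg.validate_and_report():
    if issue.startswith("CRITICAL"):
        raise SystemExit(issue)   # refuse to run on unrecoverable problems
    print(issue)                  # log WARNING/INFO issues and continue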
@@ -318,7 +224,6 @@ class AppConfig:
             "STREAM_CHUNK_SIZE": self.stream_chunk_size,
             "MULTIPART_MIN_PART_SIZE": self.multipart_min_part_size,
             "BUCKET_STATS_CACHE_TTL": self.bucket_stats_cache_ttl,
-            "OBJECT_CACHE_TTL": self.object_cache_ttl,
             "LOG_LEVEL": self.log_level,
             "LOG_TO_FILE": self.log_to_file,
             "LOG_FILE": str(self.log_path),
@@ -329,14 +234,10 @@ class AppConfig:
             "CORS_ORIGINS": self.cors_origins,
             "CORS_METHODS": self.cors_methods,
             "CORS_ALLOW_HEADERS": self.cors_allow_headers,
-            "CORS_EXPOSE_HEADERS": self.cors_expose_headers,
             "SESSION_LIFETIME_DAYS": self.session_lifetime_days,
             "ENCRYPTION_ENABLED": self.encryption_enabled,
             "ENCRYPTION_MASTER_KEY_PATH": str(self.encryption_master_key_path),
             "KMS_ENABLED": self.kms_enabled,
             "KMS_KEYS_PATH": str(self.kms_keys_path),
             "DEFAULT_ENCRYPTION_ALGORITHM": self.default_encryption_algorithm,
-            "DISPLAY_TIMEZONE": self.display_timezone,
-            "LIFECYCLE_ENABLED": self.lifecycle_enabled,
-            "LIFECYCLE_INTERVAL_SECONDS": self.lifecycle_interval_seconds,
         }
@@ -1,3 +1,4 @@
+"""Manage remote S3 connections."""
 from __future__ import annotations

 import json
@@ -1,3 +1,4 @@
+"""Encrypted storage layer that wraps ObjectStorage with encryption support."""
 from __future__ import annotations

 import io
@@ -89,8 +90,6 @@ class EncryptedObjectStorage:

         Returns:
             ObjectMeta with object information
-
-        Performance: Uses streaming encryption for large files to reduce memory usage.
         """
         should_encrypt, algorithm, detected_kms_key = self._should_encrypt(
             bucket_name, server_side_encryption
@@ -100,17 +99,20 @@ class EncryptedObjectStorage:
         kms_key_id = detected_kms_key

         if should_encrypt:
+            data = stream.read()
+
             try:
-                # Performance: Use streaming encryption to avoid loading entire file into memory
-                encrypted_stream, enc_metadata = self.encryption.encrypt_stream(
-                    stream,
+                ciphertext, enc_metadata = self.encryption.encrypt_object(
+                    data,
                     algorithm=algorithm,
+                    kms_key_id=kms_key_id,
                     context={"bucket": bucket_name, "key": object_key},
                 )

                 combined_metadata = metadata.copy() if metadata else {}
                 combined_metadata.update(enc_metadata.to_dict())

+                encrypted_stream = io.BytesIO(ciphertext)
                 result = self.storage.put_object(
                     bucket_name,
                     object_key,
@@ -136,24 +138,23 @@ class EncryptedObjectStorage:

         Returns:
             Tuple of (data, metadata)
-
-        Performance: Uses streaming decryption to reduce memory usage.
         """
         path = self.storage.get_object_path(bucket_name, object_key)
         metadata = self.storage.get_object_metadata(bucket_name, object_key)

+        with path.open("rb") as f:
+            data = f.read()
+
         enc_metadata = EncryptionMetadata.from_dict(metadata)
         if enc_metadata:
             try:
-                # Performance: Use streaming decryption to avoid loading entire file into memory
-                with path.open("rb") as f:
-                    decrypted_stream = self.encryption.decrypt_stream(f, enc_metadata)
-                    data = decrypted_stream.read()
+                data = self.encryption.decrypt_object(
+                    data,
+                    enc_metadata,
+                    context={"bucket": bucket_name, "key": object_key},
+                )
             except EncryptionError as exc:
                 raise StorageError(f"Decryption failed: {exc}") from exc
-        else:
-            with path.open("rb") as f:
-                data = f.read()

         clean_metadata = {
             k: v for k, v in metadata.items()
@@ -187,11 +188,8 @@ class EncryptedObjectStorage:
     def bucket_stats(self, bucket_name: str, cache_ttl: int = 60):
         return self.storage.bucket_stats(bucket_name, cache_ttl)

-    def list_objects(self, bucket_name: str, **kwargs):
-        return self.storage.list_objects(bucket_name, **kwargs)
-
-    def list_objects_all(self, bucket_name: str):
-        return self.storage.list_objects_all(bucket_name)
+    def list_objects(self, bucket_name: str):
+        return self.storage.list_objects(bucket_name)

     def get_object_path(self, bucket_name: str, object_key: str):
         return self.storage.get_object_path(bucket_name, object_key)
@@ -157,7 +157,10 @@ class LocalKeyEncryption(EncryptionProvider):
     def decrypt(self, ciphertext: bytes, nonce: bytes, encrypted_data_key: bytes,
                 key_id: str, context: Dict[str, str] | None = None) -> bytes:
         """Decrypt data using envelope encryption."""
+        # Decrypt the data key
        data_key = self._decrypt_data_key(encrypted_data_key)

+        # Decrypt the data
         aesgcm = AESGCM(data_key)
         try:
             return aesgcm.decrypt(nonce, ciphertext, None)
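`LocalKeyEncryption` follows the standard envelope pattern: a fresh per-object data key encrypts the payload with AES-GCM, and only a wrapped copy of that data key is stored alongside the ciphertext. A self-contained sketch of the idea using the same `cryptography` primitive (key handling simplified; the real provider wraps the data key with a persisted master key rather than one held in memory):

import secrets
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

master = AESGCM(AESGCM.generate_key(bit_length=256))   # stands in for the master key

# Encrypt: fresh data key + nonce per object, then wrap the data key.
data_key = AESGCM.generate_key(bit_length=256)
nonce = secrets.token_bytes(12)
ciphertext = AESGCM(data_key).encrypt(nonce, b"object bytes", None)
key_nonce = secrets.token_bytes(12)
encrypted_data_key = master.encrypt(key_nonce, data_key, None)

# Decrypt: unwrap the data key, then decrypt the payload.
recovered_key = master.decrypt(key_nonce, encrypted_data_key, None)
assert AESGCM(recovered_key).decrypt(nonce, ciphertext, None) == b"object bytes"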
@@ -180,26 +183,21 @@ class StreamingEncryptor:
         self.chunk_size = chunk_size

     def _derive_chunk_nonce(self, base_nonce: bytes, chunk_index: int) -> bytes:
-        """Derive a unique nonce for each chunk.
-
-        Performance: Use direct byte manipulation instead of full int conversion.
-        """
-        # Performance: Only modify last 4 bytes instead of full 12-byte conversion
-        return base_nonce[:8] + (chunk_index ^ int.from_bytes(base_nonce[8:], "big")).to_bytes(4, "big")
+        """Derive a unique nonce for each chunk."""
+        # XOR the base nonce with the chunk index
+        nonce_int = int.from_bytes(base_nonce, "big")
+        derived = nonce_int ^ chunk_index
+        return derived.to_bytes(12, "big")

     def encrypt_stream(self, stream: BinaryIO,
                        context: Dict[str, str] | None = None) -> tuple[BinaryIO, EncryptionMetadata]:
-        """Encrypt a stream and return encrypted stream + metadata.
-
-        Performance: Writes chunks directly to output buffer instead of accumulating in list.
-        """
+        """Encrypt a stream and return encrypted stream + metadata."""
         data_key, encrypted_data_key = self.provider.generate_data_key()
         base_nonce = secrets.token_bytes(12)

         aesgcm = AESGCM(data_key)
-        # Performance: Write directly to BytesIO instead of accumulating chunks
-        output = io.BytesIO()
-        output.write(b"\x00\x00\x00\x00")  # Placeholder for chunk count
+        encrypted_chunks = []
         chunk_index = 0

         while True:
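Both versions of `_derive_chunk_nonce` produce the same value for realistic chunk counts: XOR-ing the chunk index into the full 96-bit nonce only ever touches the last four bytes while the index fits in 32 bits, which is exactly what the byte-slicing variant exploits. A quick check of that equivalence:

import secrets

base_nonce = secrets.token_bytes(12)
for i in (0, 1, 7, 2**20, 2**32 - 1):
    full = (int.from_bytes(base_nonce, "big") ^ i).to_bytes(12, "big")
    fast = base_nonce[:8] + (i ^ int.from_bytes(base_nonce[8:], "big")).to_bytes(4, "big")
    assert full == fast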
@@ -210,15 +208,12 @@ class StreamingEncryptor:
             chunk_nonce = self._derive_chunk_nonce(base_nonce, chunk_index)
             encrypted_chunk = aesgcm.encrypt(chunk_nonce, chunk, None)

-            # Write size prefix + encrypted chunk directly
-            output.write(len(encrypted_chunk).to_bytes(self.HEADER_SIZE, "big"))
-            output.write(encrypted_chunk)
+            size_prefix = len(encrypted_chunk).to_bytes(self.HEADER_SIZE, "big")
+            encrypted_chunks.append(size_prefix + encrypted_chunk)
             chunk_index += 1

-        # Write actual chunk count to header
-        output.seek(0)
-        output.write(chunk_index.to_bytes(4, "big"))
-        output.seek(0)
+        header = chunk_index.to_bytes(4, "big")
+        encrypted_data = header + b"".join(encrypted_chunks)

         metadata = EncryptionMetadata(
             algorithm="AES256",
@@ -227,13 +222,10 @@ class StreamingEncryptor:
             encrypted_data_key=encrypted_data_key,
         )

-        return output, metadata
+        return io.BytesIO(encrypted_data), metadata

     def decrypt_stream(self, stream: BinaryIO, metadata: EncryptionMetadata) -> BinaryIO:
-        """Decrypt a stream using the provided metadata.
-
-        Performance: Writes chunks directly to output buffer instead of accumulating in list.
-        """
+        """Decrypt a stream using the provided metadata."""
         if isinstance(self.provider, LocalKeyEncryption):
             data_key = self.provider._decrypt_data_key(metadata.encrypted_data_key)
         else:
@@ -247,8 +239,7 @@ class StreamingEncryptor:
             raise EncryptionError("Invalid encrypted stream: missing header")
         chunk_count = int.from_bytes(chunk_count_bytes, "big")

-        # Performance: Write directly to BytesIO instead of accumulating chunks
-        output = io.BytesIO()
+        decrypted_chunks = []
         for chunk_index in range(chunk_count):
             size_bytes = stream.read(self.HEADER_SIZE)
             if len(size_bytes) < self.HEADER_SIZE:
@@ -262,12 +253,11 @@ class StreamingEncryptor:
             chunk_nonce = self._derive_chunk_nonce(base_nonce, chunk_index)
             try:
                 decrypted_chunk = aesgcm.decrypt(chunk_nonce, encrypted_chunk, None)
-                output.write(decrypted_chunk)  # Write directly instead of appending to list
+                decrypted_chunks.append(decrypted_chunk)
             except Exception as exc:
                 raise EncryptionError(f"Failed to decrypt chunk {chunk_index}: {exc}") from exc

-        output.seek(0)
-        return output
+        return io.BytesIO(b"".join(decrypted_chunks))


 class EncryptionManager:
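Either way, the on-disk layout of an encrypted stream is the same: a 4-byte big-endian chunk count, then each chunk as a length prefix of `HEADER_SIZE` bytes followed by its AES-GCM ciphertext. A minimal reader for that framing (assuming `HEADER_SIZE` is 4; its actual value is not shown in this diff):

from typing import BinaryIO, Iterator

HEADER_SIZE = 4  # assumed to match the per-chunk size prefix

def iter_encrypted_chunks(stream: BinaryIO) -> Iterator[bytes]:
    # 4-byte big-endian chunk count, then length-prefixed ciphertext chunks.
    chunk_count = int.from_bytes(stream.read(4), "big")
    for _ in range(chunk_count):
        size = int.from_bytes(stream.read(HEADER_SIZE), "big")
        yield stream.read(size)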
@@ -353,106 +343,6 @@ class EncryptionManager:
         return encryptor.decrypt_stream(stream, metadata)


-class SSECEncryption(EncryptionProvider):
-    """SSE-C: Server-Side Encryption with Customer-Provided Keys.
-
-    The client provides the encryption key with each request.
-    Server encrypts/decrypts but never stores the key.
-
-    Required headers for PUT:
-    - x-amz-server-side-encryption-customer-algorithm: AES256
-    - x-amz-server-side-encryption-customer-key: Base64-encoded 256-bit key
-    - x-amz-server-side-encryption-customer-key-MD5: Base64-encoded MD5 of key
-    """
-
-    KEY_ID = "customer-provided"
-
-    def __init__(self, customer_key: bytes):
-        if len(customer_key) != 32:
-            raise EncryptionError("Customer key must be exactly 256 bits (32 bytes)")
-        self.customer_key = customer_key
-
-    @classmethod
-    def from_headers(cls, headers: Dict[str, str]) -> "SSECEncryption":
-        algorithm = headers.get("x-amz-server-side-encryption-customer-algorithm", "")
-        if algorithm.upper() != "AES256":
-            raise EncryptionError(f"Unsupported SSE-C algorithm: {algorithm}. Only AES256 is supported.")
-
-        key_b64 = headers.get("x-amz-server-side-encryption-customer-key", "")
-        if not key_b64:
-            raise EncryptionError("Missing x-amz-server-side-encryption-customer-key header")
-
-        key_md5_b64 = headers.get("x-amz-server-side-encryption-customer-key-md5", "")
-
-        try:
-            customer_key = base64.b64decode(key_b64)
-        except Exception as e:
-            raise EncryptionError(f"Invalid base64 in customer key: {e}") from e
-
-        if len(customer_key) != 32:
-            raise EncryptionError(f"Customer key must be 256 bits, got {len(customer_key) * 8} bits")
-
-        if key_md5_b64:
-            import hashlib
-            expected_md5 = base64.b64encode(hashlib.md5(customer_key).digest()).decode()
-            if key_md5_b64 != expected_md5:
-                raise EncryptionError("Customer key MD5 mismatch")
-
-        return cls(customer_key)
-
-    def encrypt(self, plaintext: bytes, context: Dict[str, str] | None = None) -> EncryptionResult:
-        aesgcm = AESGCM(self.customer_key)
-        nonce = secrets.token_bytes(12)
-        ciphertext = aesgcm.encrypt(nonce, plaintext, None)
-
-        return EncryptionResult(
-            ciphertext=ciphertext,
-            nonce=nonce,
-            key_id=self.KEY_ID,
-            encrypted_data_key=b"",
-        )
-
-    def decrypt(self, ciphertext: bytes, nonce: bytes, encrypted_data_key: bytes,
-                key_id: str, context: Dict[str, str] | None = None) -> bytes:
-        aesgcm = AESGCM(self.customer_key)
-        try:
-            return aesgcm.decrypt(nonce, ciphertext, None)
-        except Exception as exc:
-            raise EncryptionError(f"SSE-C decryption failed: {exc}") from exc
-
-    def generate_data_key(self) -> tuple[bytes, bytes]:
-        return self.customer_key, b""
-
-
-@dataclass
-class SSECMetadata:
-    algorithm: str = "AES256"
-    nonce: bytes = b""
-    key_md5: str = ""
-
-    def to_dict(self) -> Dict[str, str]:
-        return {
-            "x-amz-server-side-encryption-customer-algorithm": self.algorithm,
-            "x-amz-encryption-nonce": base64.b64encode(self.nonce).decode(),
-            "x-amz-server-side-encryption-customer-key-MD5": self.key_md5,
-        }
-
-    @classmethod
-    def from_dict(cls, data: Dict[str, str]) -> Optional["SSECMetadata"]:
-        algorithm = data.get("x-amz-server-side-encryption-customer-algorithm")
-        if not algorithm:
-            return None
-        try:
-            nonce = base64.b64decode(data.get("x-amz-encryption-nonce", ""))
-            return cls(
-                algorithm=algorithm,
-                nonce=nonce,
-                key_md5=data.get("x-amz-server-side-encryption-customer-key-MD5", ""),
-            )
-        except Exception:
-            return None


 class ClientEncryptionHelper:
     """Helpers for client-side encryption.
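The removed SSE-C provider expects the client to send the raw key plus its MD5 so the server can verify transport integrity before using the key, exactly as S3's SSE-C headers work. A hedged client-side sketch of building those headers:

import base64
import hashlib
import secrets

customer_key = secrets.token_bytes(32)  # 256-bit key, kept by the client only
headers = {
    "x-amz-server-side-encryption-customer-algorithm": "AES256",
    "x-amz-server-side-encryption-customer-key": base64.b64encode(customer_key).decode(),
    "x-amz-server-side-encryption-customer-key-MD5": base64.b64encode(
        hashlib.md5(customer_key).digest()
    ).decode(),
}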
@@ -1,3 +1,4 @@
+"""Standardized error handling for API and UI responses."""
 from __future__ import annotations

 import logging
@@ -1,3 +1,4 @@
+"""Application-wide extension instances."""
 from flask import g
 from flask_limiter import Limiter
 from flask_limiter.util import get_remote_address

app/iam.py (126)
@@ -1,21 +1,21 @@
+"""Lightweight IAM-style user and policy management."""
 from __future__ import annotations

 import json
 import math
 import secrets
-import time
 from collections import deque
 from dataclasses import dataclass
-from datetime import datetime, timedelta, timezone
+from datetime import datetime, timedelta
 from pathlib import Path
-from typing import Any, Deque, Dict, Iterable, List, Optional, Sequence, Set, Tuple
+from typing import Any, Deque, Dict, Iterable, List, Optional, Sequence, Set


 class IamError(RuntimeError):
     """Raised when authentication or authorization fails."""


-S3_ACTIONS = {"list", "read", "write", "delete", "share", "policy", "replication", "lifecycle", "cors"}
+S3_ACTIONS = {"list", "read", "write", "delete", "share", "policy", "replication"}
 IAM_ACTIONS = {
     "iam:list_users",
     "iam:create_user",
@@ -26,12 +26,14 @@ IAM_ACTIONS = {
 ALLOWED_ACTIONS = (S3_ACTIONS | IAM_ACTIONS) | {"iam:*"}

 ACTION_ALIASES = {
+    # List actions
     "list": "list",
     "s3:listbucket": "list",
     "s3:listallmybuckets": "list",
     "s3:listbucketversions": "list",
     "s3:listmultipartuploads": "list",
     "s3:listparts": "list",
+    # Read actions
     "read": "read",
     "s3:getobject": "read",
     "s3:getobjectversion": "read",
@@ -41,6 +43,7 @@ ACTION_ALIASES = {
     "s3:getbucketversioning": "read",
     "s3:headobject": "read",
     "s3:headbucket": "read",
+    # Write actions
     "write": "write",
     "s3:putobject": "write",
     "s3:createbucket": "write",
@@ -51,19 +54,23 @@ ACTION_ALIASES = {
     "s3:completemultipartupload": "write",
     "s3:abortmultipartupload": "write",
     "s3:copyobject": "write",
+    # Delete actions
     "delete": "delete",
     "s3:deleteobject": "delete",
     "s3:deleteobjectversion": "delete",
     "s3:deletebucket": "delete",
     "s3:deleteobjecttagging": "delete",
+    # Share actions (ACL)
     "share": "share",
     "s3:putobjectacl": "share",
     "s3:putbucketacl": "share",
     "s3:getbucketacl": "share",
+    # Policy actions
     "policy": "policy",
     "s3:putbucketpolicy": "policy",
     "s3:getbucketpolicy": "policy",
     "s3:deletebucketpolicy": "policy",
+    # Replication actions
     "replication": "replication",
     "s3:getreplicationconfiguration": "replication",
     "s3:putreplicationconfiguration": "replication",
@@ -71,16 +78,7 @@ ACTION_ALIASES = {
     "s3:replicateobject": "replication",
     "s3:replicatetags": "replication",
     "s3:replicatedelete": "replication",
-    "lifecycle": "lifecycle",
-    "s3:getlifecycleconfiguration": "lifecycle",
-    "s3:putlifecycleconfiguration": "lifecycle",
-    "s3:deletelifecycleconfiguration": "lifecycle",
-    "s3:getbucketlifecycle": "lifecycle",
-    "s3:putbucketlifecycle": "lifecycle",
-    "cors": "cors",
-    "s3:getbucketcors": "cors",
-    "s3:putbucketcors": "cors",
-    "s3:deletebucketcors": "cors",
+    # IAM actions
     "iam:listusers": "iam:list_users",
     "iam:createuser": "iam:create_user",
     "iam:deleteuser": "iam:delete_user",
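ACTION_ALIASES maps both the short internal verbs and lower-cased S3 action names onto the small internal action set, so policy checks accept either spelling. A sketch of the normalization step against a trimmed copy of the table (the `normalize_action` name is illustrative):

ALIASES = {
    "read": "read",
    "s3:getobject": "read",
    "write": "write",
    "s3:putobject": "write",
}

def normalize_action(action: str) -> str:
    # S3 action names are matched case-insensitively.
    try:
        return ALIASES[action.lower()]
    except KeyError:
        raise ValueError(f"Unknown action: {action}")

assert normalize_action("s3:GetObject") == "read"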
@@ -117,26 +115,17 @@ class IamService:
         self._raw_config: Dict[str, Any] = {}
         self._failed_attempts: Dict[str, Deque[datetime]] = {}
         self._last_load_time = 0.0
-        self._credential_cache: Dict[str, Tuple[str, Principal, float]] = {}
-        self._cache_ttl = 60.0
-        self._last_stat_check = 0.0
-        self._stat_check_interval = 1.0
-        self._sessions: Dict[str, Dict[str, Any]] = {}
         self._load()

     def _maybe_reload(self) -> None:
         """Reload configuration if the file has changed on disk."""
-        now = time.time()
-        if now - self._last_stat_check < self._stat_check_interval:
-            return
-        self._last_stat_check = now
         try:
             if self.config_path.stat().st_mtime > self._last_load_time:
                 self._load()
-                self._credential_cache.clear()
         except OSError:
             pass

+    # ---------------------- authz helpers ----------------------
     def authenticate(self, access_key: str, secret_key: str) -> Principal:
         self._maybe_reload()
         access_key = (access_key or "").strip()
@@ -160,7 +149,7 @@ class IamService:
             return
         attempts = self._failed_attempts.setdefault(access_key, deque())
         self._prune_attempts(attempts)
-        attempts.append(datetime.now(timezone.utc))
+        attempts.append(datetime.now())

     def _clear_failed_attempts(self, access_key: str) -> None:
         if not access_key:
@@ -168,7 +157,7 @@ class IamService:
         self._failed_attempts.pop(access_key, None)

     def _prune_attempts(self, attempts: Deque[datetime]) -> None:
-        cutoff = datetime.now(timezone.utc) - self.auth_lockout_window
+        cutoff = datetime.now() - self.auth_lockout_window
         while attempts and attempts[0] < cutoff:
             attempts.popleft()

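Failed logins are tracked per access key in a deque; entries older than the lockout window are pruned before each check, so the lockout decision reduces to a length comparison. A compact sketch of that sliding-window counter:

from collections import deque
from datetime import datetime, timedelta, timezone

WINDOW = timedelta(minutes=15)
MAX_ATTEMPTS = 5
attempts = deque()

def record_failure() -> bool:
    """Record a failed attempt; return True when the key should be locked out."""
    now = datetime.now(timezone.utc)
    while attempts and attempts[0] < now - WINDOW:
        attempts.popleft()          # drop attempts outside the window
    attempts.append(now)
    return len(attempts) >= MAX_ATTEMPTS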
@@ -189,73 +178,21 @@ class IamService:
         if len(attempts) < self.auth_max_attempts:
             return 0
         oldest = attempts[0]
-        elapsed = (datetime.now(timezone.utc) - oldest).total_seconds()
+        elapsed = (datetime.now() - oldest).total_seconds()
         return int(max(0, self.auth_lockout_window.total_seconds() - elapsed))

-    def create_session_token(self, access_key: str, duration_seconds: int = 3600) -> str:
-        """Create a temporary session token for an access key."""
-        self._maybe_reload()
-        record = self._users.get(access_key)
-        if not record:
-            raise IamError("Unknown access key")
-        self._cleanup_expired_sessions()
-        token = secrets.token_urlsafe(32)
-        expires_at = time.time() + duration_seconds
-        self._sessions[token] = {
-            "access_key": access_key,
-            "expires_at": expires_at,
-        }
-        return token
-
-    def validate_session_token(self, access_key: str, session_token: str) -> bool:
-        """Validate a session token for an access key."""
-        session = self._sessions.get(session_token)
-        if not session:
-            return False
-        if session["access_key"] != access_key:
-            return False
-        if time.time() > session["expires_at"]:
-            del self._sessions[session_token]
-            return False
-        return True
-
-    def _cleanup_expired_sessions(self) -> None:
-        """Remove expired session tokens."""
-        now = time.time()
-        expired = [token for token, data in self._sessions.items() if now > data["expires_at"]]
-        for token in expired:
-            del self._sessions[token]
-
     def principal_for_key(self, access_key: str) -> Principal:
-        now = time.time()
-        cached = self._credential_cache.get(access_key)
-        if cached:
-            secret, principal, cached_time = cached
-            if now - cached_time < self._cache_ttl:
-                return principal
-
         self._maybe_reload()
         record = self._users.get(access_key)
         if not record:
             raise IamError("Unknown access key")
-        principal = self._build_principal(access_key, record)
-        self._credential_cache[access_key] = (record["secret_key"], principal, now)
-        return principal
+        return self._build_principal(access_key, record)

     def secret_for_key(self, access_key: str) -> str:
-        now = time.time()
-        cached = self._credential_cache.get(access_key)
-        if cached:
-            secret, principal, cached_time = cached
-            if now - cached_time < self._cache_ttl:
-                return secret
-
         self._maybe_reload()
         record = self._users.get(access_key)
         if not record:
             raise IamError("Unknown access key")
-        principal = self._build_principal(access_key, record)
-        self._credential_cache[access_key] = (record["secret_key"], principal, now)
         return record["secret_key"]

     def authorize(self, principal: Principal, bucket_name: str | None, action: str) -> None:
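The removed session-token helpers amount to an in-memory store keyed by an opaque `secrets.token_urlsafe` value with a wall-clock expiry. A minimal standalone sketch of the same mechanism:

import secrets
import time

sessions = {}

def create_session_token(access_key: str, duration_seconds: int = 3600) -> str:
    token = secrets.token_urlsafe(32)   # unguessable, URL-safe token
    sessions[token] = {"access_key": access_key, "expires_at": time.time() + duration_seconds}
    return token

def validate_session_token(access_key: str, token: str) -> bool:
    session = sessions.get(token)
    if not session or session["access_key"] != access_key:
        return False
    if time.time() > session["expires_at"]:
        del sessions[token]             # lazily evict expired tokens
        return False
    return True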
@@ -281,6 +218,7 @@ class IamService:
             return True
         return False

+    # ---------------------- management helpers ----------------------
     def list_users(self) -> List[Dict[str, Any]]:
         listing: List[Dict[str, Any]] = []
         for access_key, record in self._users.items():
@@ -353,6 +291,7 @@ class IamService:
         self._save()
         self._load()

+    # ---------------------- config helpers ----------------------
     def _load(self) -> None:
         try:
             self._last_load_time = self.config_path.stat().st_mtime
@@ -398,6 +337,7 @@ class IamService:
         except (OSError, PermissionError) as e:
             raise IamError(f"Cannot save IAM config: {e}")

+    # ---------------------- insight helpers ----------------------
     def config_summary(self) -> Dict[str, Any]:
         return {
             "path": str(self.config_path),
@@ -506,33 +446,11 @@ class IamService:
             raise IamError("User not found")

     def get_secret_key(self, access_key: str) -> str | None:
-        now = time.time()
-        cached = self._credential_cache.get(access_key)
-        if cached:
-            secret, principal, cached_time = cached
-            if now - cached_time < self._cache_ttl:
-                return secret
-
         self._maybe_reload()
         record = self._users.get(access_key)
-        if record:
-            principal = self._build_principal(access_key, record)
-            self._credential_cache[access_key] = (record["secret_key"], principal, now)
-            return record["secret_key"]
-        return None
+        return record["secret_key"] if record else None

     def get_principal(self, access_key: str) -> Principal | None:
-        now = time.time()
-        cached = self._credential_cache.get(access_key)
-        if cached:
-            secret, principal, cached_time = cached
-            if now - cached_time < self._cache_ttl:
-                return principal
-
         self._maybe_reload()
         record = self._users.get(access_key)
-        if record:
-            principal = self._build_principal(access_key, record)
-            self._credential_cache[access_key] = (record["secret_key"], principal, now)
-            return principal
-        return None
+        return self._build_principal(access_key, record) if record else None

app/kms.py (21)
@@ -1,3 +1,4 @@
+"""Key Management Service (KMS) for encryption key management."""
 from __future__ import annotations

 import base64
@@ -211,26 +212,6 @@ class KMSManager:
         self._load_keys()
         return list(self._keys.values())

-    def get_default_key_id(self) -> str:
-        """Get the default KMS key ID, creating one if none exist."""
-        self._load_keys()
-        for key in self._keys.values():
-            if key.enabled:
-                return key.key_id
-        default_key = self.create_key(description="Default KMS Key")
-        return default_key.key_id
-
-    def get_provider(self, key_id: str | None = None) -> "KMSEncryptionProvider":
-        """Get a KMS encryption provider for the specified key."""
-        if key_id is None:
-            key_id = self.get_default_key_id()
-        key = self.get_key(key_id)
-        if not key:
-            raise EncryptionError(f"Key not found: {key_id}")
-        if not key.enabled:
-            raise EncryptionError(f"Key is disabled: {key_id}")
-        return KMSEncryptionProvider(self, key_id)
-
     def enable_key(self, key_id: str) -> None:
         """Enable a key."""
         self._load_keys()
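The removed `get_provider` resolves a usable key in two steps: fall back to a default (creating one on first use) when no key id is given, then refuse keys that are missing or disabled. A hedged sketch of that selection logic against a plain dict of records (`KeyRecord` here is illustrative, not the project's type):

from dataclasses import dataclass

@dataclass
class KeyRecord:
    key_id: str
    enabled: bool = True

keys = {"k1": KeyRecord("k1", enabled=False), "k2": KeyRecord("k2")}

def select_key(key_id: str | None = None) -> KeyRecord:
    if key_id is None:
        # Default: the first enabled key.
        key_id = next((k.key_id for k in keys.values() if k.enabled), None)
        if key_id is None:
            raise LookupError("no enabled keys")
    key = keys.get(key_id)
    if key is None:
        raise LookupError(f"Key not found: {key_id}")
    if not key.enabled:
        raise LookupError(f"Key is disabled: {key_id}")
    return key

assert select_key().key_id == "k2"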
@@ -1,3 +1,4 @@
+"""KMS and encryption API endpoints."""
 from __future__ import annotations

 import base64
@@ -32,6 +33,9 @@ def _encryption():
 def _error_response(code: str, message: str, status: int) -> tuple[Dict[str, Any], int]:
     return {"__type": code, "message": message}, status

+
+# ---------------------- Key Management ----------------------
+
 @kms_api_bp.route("/keys", methods=["GET", "POST"])
 @limiter.limit("30 per minute")
 def list_or_create_keys():
@@ -61,6 +65,7 @@ def list_or_create_keys():
     except EncryptionError as exc:
         return _error_response("KMSInternalException", str(exc), 400)

+    # GET - List keys
     keys = kms.list_keys()
     return jsonify({
         "Keys": [{"KeyId": k.key_id, "KeyArn": k.arn} for k in keys],
@@ -91,6 +96,7 @@ def get_or_delete_key(key_id: str):
     except EncryptionError as exc:
         return _error_response("NotFoundException", str(exc), 404)

+    # GET
     key = kms.get_key(key_id)
     if not key:
         return _error_response("NotFoundException", f"Key not found: {key_id}", 404)
@@ -143,6 +149,9 @@ def disable_key(key_id: str):
     except EncryptionError as exc:
         return _error_response("NotFoundException", str(exc), 404)

+
+# ---------------------- Encryption Operations ----------------------
+
 @kms_api_bp.route("/encrypt", methods=["POST"])
 @limiter.limit("60 per minute")
 def encrypt_data():
@@ -242,6 +251,7 @@ def generate_data_key():
     try:
         plaintext_key, encrypted_key = kms.generate_data_key(key_id, context)

+        # Trim key if AES_128 requested
         if key_spec == "AES_128":
             plaintext_key = plaintext_key[:16]

@@ -312,7 +322,10 @@ def re_encrypt():
         return _error_response("ValidationException", "CiphertextBlob must be base64 encoded", 400)

     try:
+        # First decrypt, get source key id
         plaintext, source_key_id = kms.decrypt(ciphertext, source_context)

+        # Re-encrypt with destination key
         new_ciphertext = kms.encrypt(destination_key_id, plaintext, destination_context)

         return jsonify({
@@ -352,6 +365,9 @@ def generate_random():
     except EncryptionError as exc:
         return _error_response("ValidationException", str(exc), 400)

+
+# ---------------------- Client-Side Encryption Helpers ----------------------
+
 @kms_api_bp.route("/client/generate-key", methods=["POST"])
 @limiter.limit("30 per minute")
 def generate_client_key():
@@ -411,6 +427,9 @@ def client_decrypt():
     except Exception as exc:
         return _error_response("DecryptionError", str(exc), 400)

+
+# ---------------------- Encryption Materials for S3 Client-Side Encryption ----------------------
+
 @kms_api_bp.route("/materials/<key_id>", methods=["POST"])
 @limiter.limit("60 per minute")
 def get_encryption_materials(key_id: str):
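Each route above caps request rates with flask-limiter decorators, keyed on the caller's remote address. A minimal self-contained setup of that pattern (standard flask-limiter usage; the app and route names are illustrative):

from flask import Flask
from flask_limiter import Limiter
from flask_limiter.util import get_remote_address

app = Flask(__name__)
limiter = Limiter(key_func=get_remote_address, app=app)

@app.route("/encrypt", methods=["POST"])
@limiter.limit("60 per minute")   # same style of per-route cap as above
def encrypt_data():
    return {"ok": True}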
app/lifecycle.py (335)
@@ -1,335 +0,0 @@
|
|||||||
from __future__ import annotations
|
|
||||||
|
|
||||||
import json
|
|
||||||
import logging
|
|
||||||
import threading
|
|
||||||
import time
|
|
||||||
from dataclasses import dataclass, field
|
|
||||||
from datetime import datetime, timedelta, timezone
|
|
||||||
from pathlib import Path
|
|
||||||
from typing import Any, Dict, List, Optional
|
|
||||||
|
|
||||||
from .storage import ObjectStorage, StorageError
|
|
||||||
|
|
||||||
logger = logging.getLogger(__name__)
|
|
||||||
|
|
||||||
|
|
||||||
@dataclass
|
|
||||||
class LifecycleResult:
|
|
||||||
bucket_name: str
|
|
||||||
objects_deleted: int = 0
|
|
||||||
versions_deleted: int = 0
|
|
||||||
uploads_aborted: int = 0
|
|
||||||
errors: List[str] = field(default_factory=list)
|
|
||||||
execution_time_seconds: float = 0.0
|
|
||||||
|
|
||||||
|
|
||||||
@dataclass
|
|
||||||
class LifecycleExecutionRecord:
|
|
||||||
timestamp: float
|
|
||||||
bucket_name: str
|
|
||||||
objects_deleted: int
|
|
||||||
versions_deleted: int
|
|
||||||
uploads_aborted: int
|
|
||||||
errors: List[str]
|
|
||||||
execution_time_seconds: float
|
|
||||||
|
|
||||||
def to_dict(self) -> dict:
|
|
||||||
return {
|
|
||||||
"timestamp": self.timestamp,
|
|
||||||
"bucket_name": self.bucket_name,
|
|
||||||
"objects_deleted": self.objects_deleted,
|
|
||||||
"versions_deleted": self.versions_deleted,
|
|
||||||
"uploads_aborted": self.uploads_aborted,
|
|
||||||
"errors": self.errors,
|
|
||||||
"execution_time_seconds": self.execution_time_seconds,
|
|
||||||
}
|
|
||||||
|
|
||||||
@classmethod
|
|
||||||
def from_dict(cls, data: dict) -> "LifecycleExecutionRecord":
|
|
||||||
return cls(
|
|
||||||
timestamp=data["timestamp"],
|
|
||||||
bucket_name=data["bucket_name"],
|
|
||||||
objects_deleted=data["objects_deleted"],
|
|
||||||
versions_deleted=data["versions_deleted"],
|
|
||||||
uploads_aborted=data["uploads_aborted"],
|
|
||||||
errors=data.get("errors", []),
|
|
||||||
execution_time_seconds=data["execution_time_seconds"],
|
|
||||||
)
|
|
||||||
|
|
||||||
@classmethod
|
|
||||||
def from_result(cls, result: LifecycleResult) -> "LifecycleExecutionRecord":
|
|
||||||
return cls(
|
|
||||||
timestamp=time.time(),
|
|
||||||
bucket_name=result.bucket_name,
|
|
||||||
objects_deleted=result.objects_deleted,
|
|
||||||
versions_deleted=result.versions_deleted,
|
|
||||||
uploads_aborted=result.uploads_aborted,
|
|
||||||
errors=result.errors.copy(),
|
|
||||||
execution_time_seconds=result.execution_time_seconds,
|
|
||||||
)
|
|
||||||
|
|
||||||
|
|
||||||
class LifecycleHistoryStore:
|
|
||||||
MAX_HISTORY_PER_BUCKET = 50
|
|
||||||
|
|
||||||
def __init__(self, storage_root: Path) -> None:
|
|
||||||
self.storage_root = storage_root
|
|
||||||
self._lock = threading.Lock()
|
|
||||||
|
|
||||||
def _get_history_path(self, bucket_name: str) -> Path:
|
|
||||||
return self.storage_root / ".myfsio.sys" / "buckets" / bucket_name / "lifecycle_history.json"
|
|
||||||
|
|
||||||
def load_history(self, bucket_name: str) -> List[LifecycleExecutionRecord]:
|
|
||||||
path = self._get_history_path(bucket_name)
|
|
||||||
if not path.exists():
|
|
||||||
return []
|
|
||||||
try:
|
|
||||||
with open(path, "r") as f:
|
|
||||||
data = json.load(f)
|
|
||||||
return [LifecycleExecutionRecord.from_dict(d) for d in data.get("executions", [])]
|
|
||||||
except (OSError, ValueError, KeyError) as e:
|
|
||||||
logger.error(f"Failed to load lifecycle history for {bucket_name}: {e}")
|
|
||||||
return []
|
|
||||||
|
|
||||||
def save_history(self, bucket_name: str, records: List[LifecycleExecutionRecord]) -> None:
|
|
||||||
path = self._get_history_path(bucket_name)
|
|
||||||
path.parent.mkdir(parents=True, exist_ok=True)
|
|
||||||
data = {"executions": [r.to_dict() for r in records[:self.MAX_HISTORY_PER_BUCKET]]}
|
|
||||||
try:
|
|
||||||
with open(path, "w") as f:
|
|
||||||
json.dump(data, f, indent=2)
|
|
||||||
except OSError as e:
|
|
||||||
logger.error(f"Failed to save lifecycle history for {bucket_name}: {e}")
|
|
||||||
|
|
||||||
def add_record(self, bucket_name: str, record: LifecycleExecutionRecord) -> None:
|
|
||||||
with self._lock:
|
|
||||||
records = self.load_history(bucket_name)
|
|
||||||
records.insert(0, record)
|
|
||||||
self.save_history(bucket_name, records)
|
|
||||||
|
|
||||||
def get_history(self, bucket_name: str, limit: int = 50, offset: int = 0) -> List[LifecycleExecutionRecord]:
|
|
||||||
records = self.load_history(bucket_name)
|
|
||||||
return records[offset:offset + limit]
|
|
||||||
|
|
||||||
|
|
||||||
class LifecycleManager:
    def __init__(self, storage: ObjectStorage, interval_seconds: int = 3600, storage_root: Optional[Path] = None):
        self.storage = storage
        self.interval_seconds = interval_seconds
        self.storage_root = storage_root
        self._timer: Optional[threading.Timer] = None
        self._shutdown = False
        self._lock = threading.Lock()
        self.history_store = LifecycleHistoryStore(storage_root) if storage_root else None

    def start(self) -> None:
        if self._timer is not None:
            return
        self._shutdown = False
        self._schedule_next()
        logger.info(f"Lifecycle manager started with interval {self.interval_seconds}s")

    def stop(self) -> None:
        self._shutdown = True
        if self._timer:
            self._timer.cancel()
            self._timer = None
        logger.info("Lifecycle manager stopped")

    def _schedule_next(self) -> None:
        if self._shutdown:
            return
        self._timer = threading.Timer(self.interval_seconds, self._run_enforcement)
        self._timer.daemon = True
        self._timer.start()

    def _run_enforcement(self) -> None:
        if self._shutdown:
            return
        try:
            self.enforce_all_buckets()
        except Exception as e:
            logger.error(f"Lifecycle enforcement failed: {e}")
        finally:
            self._schedule_next()

    def enforce_all_buckets(self) -> Dict[str, LifecycleResult]:
        results = {}
        try:
            buckets = self.storage.list_buckets()
            for bucket in buckets:
                result = self.enforce_rules(bucket.name)
                if result.objects_deleted > 0 or result.versions_deleted > 0 or result.uploads_aborted > 0:
                    results[bucket.name] = result
        except StorageError as e:
            logger.error(f"Failed to list buckets for lifecycle: {e}")
        return results

    def enforce_rules(self, bucket_name: str) -> LifecycleResult:
        start_time = time.time()
        result = LifecycleResult(bucket_name=bucket_name)

        try:
            lifecycle = self.storage.get_bucket_lifecycle(bucket_name)
            if not lifecycle:
                return result

            for rule in lifecycle:
                if rule.get("Status") != "Enabled":
                    continue
                rule_id = rule.get("ID", "unknown")
                prefix = rule.get("Prefix", rule.get("Filter", {}).get("Prefix", ""))

                self._enforce_expiration(bucket_name, rule, prefix, result)
                self._enforce_noncurrent_expiration(bucket_name, rule, prefix, result)
                self._enforce_abort_multipart(bucket_name, rule, result)

        except StorageError as e:
            result.errors.append(str(e))
            logger.error(f"Lifecycle enforcement error for {bucket_name}: {e}")

        result.execution_time_seconds = time.time() - start_time
        if result.objects_deleted > 0 or result.versions_deleted > 0 or result.uploads_aborted > 0 or result.errors:
            logger.info(
                f"Lifecycle enforcement for {bucket_name}: "
                f"deleted={result.objects_deleted}, versions={result.versions_deleted}, "
                f"aborted={result.uploads_aborted}, time={result.execution_time_seconds:.2f}s"
            )
            if self.history_store:
                record = LifecycleExecutionRecord.from_result(result)
                self.history_store.add_record(bucket_name, record)
        return result

    def _enforce_expiration(
        self, bucket_name: str, rule: Dict[str, Any], prefix: str, result: LifecycleResult
    ) -> None:
        expiration = rule.get("Expiration", {})
        if not expiration:
            return

        days = expiration.get("Days")
        date_str = expiration.get("Date")

        if days:
            cutoff = datetime.now(timezone.utc) - timedelta(days=days)
        elif date_str:
            try:
                cutoff = datetime.fromisoformat(date_str.replace("Z", "+00:00"))
            except ValueError:
                return
        else:
            return

        try:
            objects = self.storage.list_objects_all(bucket_name)
            for obj in objects:
                if prefix and not obj.key.startswith(prefix):
                    continue
                if obj.last_modified < cutoff:
                    try:
                        self.storage.delete_object(bucket_name, obj.key)
                        result.objects_deleted += 1
                    except StorageError as e:
                        result.errors.append(f"Failed to delete {obj.key}: {e}")
        except StorageError as e:
            result.errors.append(f"Failed to list objects: {e}")

    def _enforce_noncurrent_expiration(
        self, bucket_name: str, rule: Dict[str, Any], prefix: str, result: LifecycleResult
    ) -> None:
        noncurrent = rule.get("NoncurrentVersionExpiration", {})
        noncurrent_days = noncurrent.get("NoncurrentDays")
        if not noncurrent_days:
            return

        cutoff = datetime.now(timezone.utc) - timedelta(days=noncurrent_days)

        try:
            objects = self.storage.list_objects_all(bucket_name)
            for obj in objects:
                if prefix and not obj.key.startswith(prefix):
                    continue
                try:
                    versions = self.storage.list_object_versions(bucket_name, obj.key)
                    for version in versions:
                        archived_at_str = version.get("archived_at", "")
                        if not archived_at_str:
                            continue
                        try:
                            archived_at = datetime.fromisoformat(archived_at_str.replace("Z", "+00:00"))
                            if archived_at < cutoff:
                                version_id = version.get("version_id")
                                if version_id:
                                    self.storage.delete_object_version(bucket_name, obj.key, version_id)
                                    result.versions_deleted += 1
                        except (ValueError, StorageError) as e:
                            result.errors.append(f"Failed to process version: {e}")
                except StorageError:
                    pass
        except StorageError as e:
            result.errors.append(f"Failed to list objects: {e}")

        try:
            orphaned = self.storage.list_orphaned_objects(bucket_name)
            for item in orphaned:
                obj_key = item.get("key", "")
                if prefix and not obj_key.startswith(prefix):
                    continue
                try:
                    versions = self.storage.list_object_versions(bucket_name, obj_key)
                    for version in versions:
                        archived_at_str = version.get("archived_at", "")
                        if not archived_at_str:
                            continue
                        try:
                            archived_at = datetime.fromisoformat(archived_at_str.replace("Z", "+00:00"))
                            if archived_at < cutoff:
                                version_id = version.get("version_id")
                                if version_id:
                                    self.storage.delete_object_version(bucket_name, obj_key, version_id)
                                    result.versions_deleted += 1
                        except (ValueError, StorageError) as e:
                            result.errors.append(f"Failed to process orphaned version: {e}")
                except StorageError:
                    pass
        except StorageError as e:
            result.errors.append(f"Failed to list orphaned objects: {e}")

    def _enforce_abort_multipart(
        self, bucket_name: str, rule: Dict[str, Any], result: LifecycleResult
    ) -> None:
        abort_config = rule.get("AbortIncompleteMultipartUpload", {})
        days_after = abort_config.get("DaysAfterInitiation")
        if not days_after:
            return

        cutoff = datetime.now(timezone.utc) - timedelta(days=days_after)

        try:
            uploads = self.storage.list_multipart_uploads(bucket_name)
            for upload in uploads:
                created_at_str = upload.get("created_at", "")
                if not created_at_str:
                    continue
                try:
                    created_at = datetime.fromisoformat(created_at_str.replace("Z", "+00:00"))
                    if created_at < cutoff:
                        upload_id = upload.get("upload_id")
                        if upload_id:
                            self.storage.abort_multipart_upload(bucket_name, upload_id)
                            result.uploads_aborted += 1
                except (ValueError, StorageError) as e:
                    result.errors.append(f"Failed to abort upload: {e}")
        except StorageError as e:
            result.errors.append(f"Failed to list multipart uploads: {e}")

    def run_now(self, bucket_name: Optional[str] = None) -> Dict[str, LifecycleResult]:
        if bucket_name:
            return {bucket_name: self.enforce_rules(bucket_name)}
        return self.enforce_all_buckets()

    def get_execution_history(self, bucket_name: str, limit: int = 50, offset: int = 0) -> List[LifecycleExecutionRecord]:
        if not self.history_store:
            return []
        return self.history_store.get_history(bucket_name, limit, offset)

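Note: the manager runs on a chain of daemon threading.Timer objects: start() arms the first timer, _run_enforcement() sweeps every bucket, and the finally clause re-arms the timer so a failing sweep never stops the loop. A hedged wiring sketch; the ObjectStorage constructor shown is an assumption, not part of this diff:

    root = Path("/tmp/myfsio-data")                 # hypothetical storage root
    storage = ObjectStorage(root)                   # assumed constructor
    manager = LifecycleManager(storage, interval_seconds=3600, storage_root=root)
    manager.start()                 # schedules the first background sweep
    results = manager.run_now()     # or force a synchronous sweep of all buckets
    manager.stop()                  # cancels the pending timer
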
@@ -1,334 +0,0 @@
from __future__ import annotations

import json
import logging
import queue
import threading
import time
import uuid
from dataclasses import dataclass, field
from datetime import datetime, timezone
from pathlib import Path
from typing import Any, Dict, List, Optional
from urllib.parse import urlparse

import requests

logger = logging.getLogger(__name__)


@dataclass
class NotificationEvent:
    event_name: str
    bucket_name: str
    object_key: str
    object_size: int = 0
    etag: str = ""
    version_id: Optional[str] = None
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    request_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    source_ip: str = ""
    user_identity: str = ""

    def to_s3_event(self) -> Dict[str, Any]:
        return {
            "Records": [
                {
                    "eventVersion": "2.1",
                    "eventSource": "myfsio:s3",
                    "awsRegion": "local",
                    "eventTime": self.timestamp.strftime("%Y-%m-%dT%H:%M:%S.000Z"),
                    "eventName": self.event_name,
                    "userIdentity": {
                        "principalId": self.user_identity or "ANONYMOUS",
                    },
                    "requestParameters": {
                        "sourceIPAddress": self.source_ip or "127.0.0.1",
                    },
                    "responseElements": {
                        "x-amz-request-id": self.request_id,
                        "x-amz-id-2": self.request_id,
                    },
                    "s3": {
                        "s3SchemaVersion": "1.0",
                        "configurationId": "notification",
                        "bucket": {
                            "name": self.bucket_name,
                            "ownerIdentity": {"principalId": "local"},
                            "arn": f"arn:aws:s3:::{self.bucket_name}",
                        },
                        "object": {
                            "key": self.object_key,
                            "size": self.object_size,
                            "eTag": self.etag,
                            "versionId": self.version_id or "null",
                            "sequencer": f"{int(time.time() * 1000):016X}",
                        },
                    },
                }
            ]
        }


@dataclass
class WebhookDestination:
    url: str
    headers: Dict[str, str] = field(default_factory=dict)
    timeout_seconds: int = 30
    retry_count: int = 3
    retry_delay_seconds: int = 1

    def to_dict(self) -> Dict[str, Any]:
        return {
            "url": self.url,
            "headers": self.headers,
            "timeout_seconds": self.timeout_seconds,
            "retry_count": self.retry_count,
            "retry_delay_seconds": self.retry_delay_seconds,
        }

    @classmethod
    def from_dict(cls, data: Dict[str, Any]) -> "WebhookDestination":
        return cls(
            url=data.get("url", ""),
            headers=data.get("headers", {}),
            timeout_seconds=data.get("timeout_seconds", 30),
            retry_count=data.get("retry_count", 3),
            retry_delay_seconds=data.get("retry_delay_seconds", 1),
        )

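Note: WebhookDestination round-trips through plain dicts, and from_dict() falls back to the dataclass defaults (30 s timeout, 3 attempts, 1 s linear backoff base) for any missing key, so a config persisted with only a URL still loads. For example (URL is illustrative):

    dest = WebhookDestination.from_dict({"url": "https://hooks.example.com/s3"})
    assert dest.timeout_seconds == 30 and dest.retry_count == 3
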
@dataclass
class NotificationConfiguration:
    id: str
    events: List[str]
    destination: WebhookDestination
    prefix_filter: str = ""
    suffix_filter: str = ""

    def matches_event(self, event_name: str, object_key: str) -> bool:
        event_match = False
        for pattern in self.events:
            if pattern.endswith("*"):
                base = pattern[:-1]
                if event_name.startswith(base):
                    event_match = True
                    break
            elif pattern == event_name:
                event_match = True
                break

        if not event_match:
            return False

        if self.prefix_filter and not object_key.startswith(self.prefix_filter):
            return False
        if self.suffix_filter and not object_key.endswith(self.suffix_filter):
            return False

        return True

    def to_dict(self) -> Dict[str, Any]:
        return {
            "Id": self.id,
            "Events": self.events,
            "Destination": self.destination.to_dict(),
            "Filter": {
                "Key": {
                    "FilterRules": [
                        {"Name": "prefix", "Value": self.prefix_filter},
                        {"Name": "suffix", "Value": self.suffix_filter},
                    ]
                }
            },
        }

    @classmethod
    def from_dict(cls, data: Dict[str, Any]) -> "NotificationConfiguration":
        prefix = ""
        suffix = ""
        filter_data = data.get("Filter", {})
        key_filter = filter_data.get("Key", {})
        for rule in key_filter.get("FilterRules", []):
            if rule.get("Name") == "prefix":
                prefix = rule.get("Value", "")
            elif rule.get("Name") == "suffix":
                suffix = rule.get("Value", "")

        return cls(
            id=data.get("Id", uuid.uuid4().hex),
            events=data.get("Events", []),
            destination=WebhookDestination.from_dict(data.get("Destination", {})),
            prefix_filter=prefix,
            suffix_filter=suffix,
        )

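Note: matches_event() requires all three checks to pass: an event-name match (exact, or a trailing-* wildcard prefix) plus the optional key prefix and suffix filters. A quick illustration with made-up values:

    cfg = NotificationConfiguration(
        id="demo",
        events=["s3:ObjectCreated:*"],
        destination=WebhookDestination(url="https://hooks.example.com/s3"),  # illustrative
        prefix_filter="logs/",
        suffix_filter=".json",
    )
    assert cfg.matches_event("s3:ObjectCreated:Put", "logs/2024/app.json")
    assert not cfg.matches_event("s3:ObjectRemoved:Delete", "logs/2024/app.json")  # event mismatch
    assert not cfg.matches_event("s3:ObjectCreated:Put", "images/photo.png")       # prefix mismatch
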
class NotificationService:
    def __init__(self, storage_root: Path, worker_count: int = 2):
        self.storage_root = storage_root
        self._configs: Dict[str, List[NotificationConfiguration]] = {}
        self._queue: queue.Queue[tuple[NotificationEvent, WebhookDestination]] = queue.Queue()
        self._workers: List[threading.Thread] = []
        self._shutdown = threading.Event()
        self._stats = {
            "events_queued": 0,
            "events_sent": 0,
            "events_failed": 0,
        }

        for i in range(worker_count):
            worker = threading.Thread(target=self._worker_loop, name=f"notification-worker-{i}", daemon=True)
            worker.start()
            self._workers.append(worker)

    def _config_path(self, bucket_name: str) -> Path:
        return self.storage_root / ".myfsio.sys" / "buckets" / bucket_name / "notifications.json"

    def get_bucket_notifications(self, bucket_name: str) -> List[NotificationConfiguration]:
        if bucket_name in self._configs:
            return self._configs[bucket_name]

        config_path = self._config_path(bucket_name)
        if not config_path.exists():
            return []

        try:
            data = json.loads(config_path.read_text(encoding="utf-8"))
            configs = [NotificationConfiguration.from_dict(c) for c in data.get("configurations", [])]
            self._configs[bucket_name] = configs
            return configs
        except (json.JSONDecodeError, OSError) as e:
            logger.warning(f"Failed to load notification config for {bucket_name}: {e}")
            return []

    def set_bucket_notifications(
        self, bucket_name: str, configurations: List[NotificationConfiguration]
    ) -> None:
        config_path = self._config_path(bucket_name)
        config_path.parent.mkdir(parents=True, exist_ok=True)

        data = {"configurations": [c.to_dict() for c in configurations]}
        config_path.write_text(json.dumps(data, indent=2), encoding="utf-8")
        self._configs[bucket_name] = configurations

    def delete_bucket_notifications(self, bucket_name: str) -> None:
        config_path = self._config_path(bucket_name)
        try:
            if config_path.exists():
                config_path.unlink()
        except OSError:
            pass
        self._configs.pop(bucket_name, None)

    def emit_event(self, event: NotificationEvent) -> None:
        configurations = self.get_bucket_notifications(event.bucket_name)
        if not configurations:
            return

        for config in configurations:
            if config.matches_event(event.event_name, event.object_key):
                self._queue.put((event, config.destination))
                self._stats["events_queued"] += 1
                logger.debug(
                    f"Queued notification for {event.event_name} on {event.bucket_name}/{event.object_key}"
                )

    def emit_object_created(
        self,
        bucket_name: str,
        object_key: str,
        *,
        size: int = 0,
        etag: str = "",
        version_id: Optional[str] = None,
        request_id: str = "",
        source_ip: str = "",
        user_identity: str = "",
        operation: str = "Put",
    ) -> None:
        event = NotificationEvent(
            event_name=f"s3:ObjectCreated:{operation}",
            bucket_name=bucket_name,
            object_key=object_key,
            object_size=size,
            etag=etag,
            version_id=version_id,
            request_id=request_id or uuid.uuid4().hex,
            source_ip=source_ip,
            user_identity=user_identity,
        )
        self.emit_event(event)

    def emit_object_removed(
        self,
        bucket_name: str,
        object_key: str,
        *,
        version_id: Optional[str] = None,
        request_id: str = "",
        source_ip: str = "",
        user_identity: str = "",
        operation: str = "Delete",
    ) -> None:
        event = NotificationEvent(
            event_name=f"s3:ObjectRemoved:{operation}",
            bucket_name=bucket_name,
            object_key=object_key,
            version_id=version_id,
            request_id=request_id or uuid.uuid4().hex,
            source_ip=source_ip,
            user_identity=user_identity,
        )
        self.emit_event(event)

    def _worker_loop(self) -> None:
        while not self._shutdown.is_set():
            try:
                event, destination = self._queue.get(timeout=1.0)
            except queue.Empty:
                continue

            try:
                self._send_notification(event, destination)
                self._stats["events_sent"] += 1
            except Exception as e:
                self._stats["events_failed"] += 1
                logger.error(f"Failed to send notification: {e}")
            finally:
                self._queue.task_done()

    def _send_notification(self, event: NotificationEvent, destination: WebhookDestination) -> None:
        payload = event.to_s3_event()
        headers = {"Content-Type": "application/json", **destination.headers}

        last_error = None
        for attempt in range(destination.retry_count):
            try:
                response = requests.post(
                    destination.url,
                    json=payload,
                    headers=headers,
                    timeout=destination.timeout_seconds,
                )
                if response.status_code < 400:
                    logger.info(
                        f"Notification sent: {event.event_name} -> {destination.url} (status={response.status_code})"
                    )
                    return
                last_error = f"HTTP {response.status_code}: {response.text[:200]}"
            except requests.RequestException as e:
                last_error = str(e)

            if attempt < destination.retry_count - 1:
                time.sleep(destination.retry_delay_seconds * (attempt + 1))

        raise RuntimeError(f"Failed after {destination.retry_count} attempts: {last_error}")

    def get_stats(self) -> Dict[str, int]:
        return dict(self._stats)

    def shutdown(self) -> None:
        self._shutdown.set()
        for worker in self._workers:
            worker.join(timeout=5.0)

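Note: end to end, the service loads per-bucket configurations lazily, filters them with matches_event(), and hands matching (event, destination) pairs to daemon worker threads that POST the S3-style JSON payload with retries. A minimal sketch, assuming a scratch storage root and an illustrative endpoint:

    svc = NotificationService(Path("/tmp/myfsio-data"), worker_count=2)
    cfg = NotificationConfiguration(
        id="audit",
        events=["s3:ObjectCreated:*"],
        destination=WebhookDestination(url="https://hooks.example.com/s3"),
    )
    svc.set_bucket_notifications("my-bucket", [cfg])
    svc.emit_object_created("my-bucket", "logs/app.json", size=123, etag="abc123")
    svc.shutdown()  # sets the shutdown event and joins the workers
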
@@ -1,234 +0,0 @@
from __future__ import annotations

import json
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum
from pathlib import Path
from typing import Any, Dict, Optional


class RetentionMode(Enum):
    GOVERNANCE = "GOVERNANCE"
    COMPLIANCE = "COMPLIANCE"


class ObjectLockError(Exception):
    pass


@dataclass
class ObjectLockRetention:
    mode: RetentionMode
    retain_until_date: datetime

    def to_dict(self) -> Dict[str, str]:
        return {
            "Mode": self.mode.value,
            "RetainUntilDate": self.retain_until_date.isoformat(),
        }

    @classmethod
    def from_dict(cls, data: Dict[str, Any]) -> Optional["ObjectLockRetention"]:
        if not data:
            return None
        mode_str = data.get("Mode")
        date_str = data.get("RetainUntilDate")
        if not mode_str or not date_str:
            return None
        try:
            mode = RetentionMode(mode_str)
            retain_until = datetime.fromisoformat(date_str.replace("Z", "+00:00"))
            return cls(mode=mode, retain_until_date=retain_until)
        except (ValueError, KeyError):
            return None

    def is_expired(self) -> bool:
        return datetime.now(timezone.utc) > self.retain_until_date

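Note: from_dict() deliberately returns None for missing or malformed data instead of raising, so callers can treat "no parseable retention" the same as "no retention". For instance:

    ret = ObjectLockRetention.from_dict({
        "Mode": "GOVERNANCE",
        "RetainUntilDate": "2030-01-01T00:00:00Z",  # trailing Z is normalized to +00:00
    })
    assert ret is not None and not ret.is_expired()  # assuming this runs before 2030
    assert ObjectLockRetention.from_dict({"Mode": "GOVERNANCE"}) is None  # date missing
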
@dataclass
class ObjectLockConfig:
    enabled: bool = False
    default_retention: Optional[ObjectLockRetention] = None

    def to_dict(self) -> Dict[str, Any]:
        result: Dict[str, Any] = {"ObjectLockEnabled": "Enabled" if self.enabled else "Disabled"}
        if self.default_retention:
            result["Rule"] = {
                "DefaultRetention": {
                    "Mode": self.default_retention.mode.value,
                    "Days": None,
                    "Years": None,
                }
            }
        return result

    @classmethod
    def from_dict(cls, data: Dict[str, Any]) -> "ObjectLockConfig":
        enabled = data.get("ObjectLockEnabled") == "Enabled"
        default_retention = None
        rule = data.get("Rule")
        if rule and "DefaultRetention" in rule:
            dr = rule["DefaultRetention"]
            mode_str = dr.get("Mode", "GOVERNANCE")
            days = dr.get("Days")
            years = dr.get("Years")
            if days or years:
                from datetime import timedelta
                now = datetime.now(timezone.utc)
                if years:
                    delta = timedelta(days=int(years) * 365)
                else:
                    delta = timedelta(days=int(days))
                default_retention = ObjectLockRetention(
                    mode=RetentionMode(mode_str),
                    retain_until_date=now + delta,
                )
        return cls(enabled=enabled, default_retention=default_retention)


class ObjectLockService:
    def __init__(self, storage_root: Path):
        self.storage_root = storage_root
        self._config_cache: Dict[str, ObjectLockConfig] = {}

    def _bucket_lock_config_path(self, bucket_name: str) -> Path:
        return self.storage_root / ".myfsio.sys" / "buckets" / bucket_name / "object_lock.json"

    def _object_lock_meta_path(self, bucket_name: str, object_key: str) -> Path:
        safe_key = object_key.replace("/", "_").replace("\\", "_")
        return (
            self.storage_root / ".myfsio.sys" / "buckets" / bucket_name /
            "locks" / f"{safe_key}.lock.json"
        )

    def get_bucket_lock_config(self, bucket_name: str) -> ObjectLockConfig:
        if bucket_name in self._config_cache:
            return self._config_cache[bucket_name]

        config_path = self._bucket_lock_config_path(bucket_name)
        if not config_path.exists():
            return ObjectLockConfig(enabled=False)

        try:
            data = json.loads(config_path.read_text(encoding="utf-8"))
            config = ObjectLockConfig.from_dict(data)
            self._config_cache[bucket_name] = config
            return config
        except (json.JSONDecodeError, OSError):
            return ObjectLockConfig(enabled=False)

    def set_bucket_lock_config(self, bucket_name: str, config: ObjectLockConfig) -> None:
        config_path = self._bucket_lock_config_path(bucket_name)
        config_path.parent.mkdir(parents=True, exist_ok=True)
        config_path.write_text(json.dumps(config.to_dict()), encoding="utf-8")
        self._config_cache[bucket_name] = config

    def enable_bucket_lock(self, bucket_name: str) -> None:
        config = self.get_bucket_lock_config(bucket_name)
        config.enabled = True
        self.set_bucket_lock_config(bucket_name, config)

    def is_bucket_lock_enabled(self, bucket_name: str) -> bool:
        return self.get_bucket_lock_config(bucket_name).enabled

    def get_object_retention(self, bucket_name: str, object_key: str) -> Optional[ObjectLockRetention]:
        meta_path = self._object_lock_meta_path(bucket_name, object_key)
        if not meta_path.exists():
            return None
        try:
            data = json.loads(meta_path.read_text(encoding="utf-8"))
            return ObjectLockRetention.from_dict(data.get("retention", {}))
        except (json.JSONDecodeError, OSError):
            return None

    def set_object_retention(
        self,
        bucket_name: str,
        object_key: str,
        retention: ObjectLockRetention,
        bypass_governance: bool = False,
    ) -> None:
        existing = self.get_object_retention(bucket_name, object_key)
        if existing and not existing.is_expired():
            if existing.mode == RetentionMode.COMPLIANCE:
                raise ObjectLockError(
                    "Cannot modify retention on object with COMPLIANCE mode until retention expires"
                )
            if existing.mode == RetentionMode.GOVERNANCE and not bypass_governance:
                raise ObjectLockError(
                    "Cannot modify GOVERNANCE retention without bypass-governance permission"
                )

        meta_path = self._object_lock_meta_path(bucket_name, object_key)
        meta_path.parent.mkdir(parents=True, exist_ok=True)

        existing_data: Dict[str, Any] = {}
        if meta_path.exists():
            try:
                existing_data = json.loads(meta_path.read_text(encoding="utf-8"))
            except (json.JSONDecodeError, OSError):
                pass

        existing_data["retention"] = retention.to_dict()
        meta_path.write_text(json.dumps(existing_data), encoding="utf-8")

    def get_legal_hold(self, bucket_name: str, object_key: str) -> bool:
        meta_path = self._object_lock_meta_path(bucket_name, object_key)
        if not meta_path.exists():
            return False
        try:
            data = json.loads(meta_path.read_text(encoding="utf-8"))
            return data.get("legal_hold", False)
        except (json.JSONDecodeError, OSError):
            return False

    def set_legal_hold(self, bucket_name: str, object_key: str, enabled: bool) -> None:
        meta_path = self._object_lock_meta_path(bucket_name, object_key)
        meta_path.parent.mkdir(parents=True, exist_ok=True)

        existing_data: Dict[str, Any] = {}
        if meta_path.exists():
            try:
                existing_data = json.loads(meta_path.read_text(encoding="utf-8"))
            except (json.JSONDecodeError, OSError):
                pass

        existing_data["legal_hold"] = enabled
        meta_path.write_text(json.dumps(existing_data), encoding="utf-8")

    def can_delete_object(
        self,
        bucket_name: str,
        object_key: str,
        bypass_governance: bool = False,
    ) -> tuple[bool, str]:
        if self.get_legal_hold(bucket_name, object_key):
            return False, "Object is under legal hold"

        retention = self.get_object_retention(bucket_name, object_key)
        if retention and not retention.is_expired():
            if retention.mode == RetentionMode.COMPLIANCE:
                return False, f"Object is locked in COMPLIANCE mode until {retention.retain_until_date.isoformat()}"
            if retention.mode == RetentionMode.GOVERNANCE:
                if not bypass_governance:
                    return False, f"Object is locked in GOVERNANCE mode until {retention.retain_until_date.isoformat()}"

        return True, ""

    def can_overwrite_object(
        self,
        bucket_name: str,
        object_key: str,
        bypass_governance: bool = False,
    ) -> tuple[bool, str]:
        return self.can_delete_object(bucket_name, object_key, bypass_governance)

    def delete_object_lock_metadata(self, bucket_name: str, object_key: str) -> None:
        meta_path = self._object_lock_meta_path(bucket_name, object_key)
        try:
            if meta_path.exists():
                meta_path.unlink()
        except OSError:
            pass

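Note: the deletion gate checks legal hold first, then retention: COMPLIANCE always blocks until expiry, while GOVERNANCE blocks unless the caller asserts bypass_governance. A minimal sketch with an assumed scratch root:

    from datetime import datetime, timedelta, timezone
    from pathlib import Path

    svc = ObjectLockService(Path("/tmp/myfsio-data"))  # hypothetical storage root
    svc.set_object_retention(
        "my-bucket", "report.pdf",
        ObjectLockRetention(
            mode=RetentionMode.GOVERNANCE,
            retain_until_date=datetime.now(timezone.utc) + timedelta(days=30),
        ),
    )
    allowed, reason = svc.can_delete_object("my-bucket", "report.pdf")
    assert not allowed                      # GOVERNANCE lock in force
    allowed, _ = svc.can_delete_object("my-bucket", "report.pdf", bypass_governance=True)
    assert allowed
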
@@ -1,3 +1,4 @@
|
|||||||
|
"""Background replication worker."""
|
||||||
from __future__ import annotations
|
from __future__ import annotations
|
||||||
|
|
||||||
import json
|
import json
|
||||||
@@ -8,7 +9,7 @@ import time
|
|||||||
from concurrent.futures import ThreadPoolExecutor
|
from concurrent.futures import ThreadPoolExecutor
|
||||||
from dataclasses import dataclass, field
|
from dataclasses import dataclass, field
|
||||||
from pathlib import Path
|
from pathlib import Path
|
||||||
from typing import Any, Dict, List, Optional
|
from typing import Dict, Optional
|
||||||
|
|
||||||
import boto3
|
import boto3
|
||||||
from botocore.config import Config
|
from botocore.config import Config
|
||||||
@@ -21,47 +22,18 @@ from .storage import ObjectStorage, StorageError
|
|||||||
logger = logging.getLogger(__name__)
|
logger = logging.getLogger(__name__)
|
||||||
|
|
||||||
REPLICATION_USER_AGENT = "S3ReplicationAgent/1.0"
|
REPLICATION_USER_AGENT = "S3ReplicationAgent/1.0"
|
||||||
REPLICATION_CONNECT_TIMEOUT = 5
|
|
||||||
REPLICATION_READ_TIMEOUT = 30
|
|
||||||
STREAMING_THRESHOLD_BYTES = 10 * 1024 * 1024
|
|
||||||
|
|
||||||
REPLICATION_MODE_NEW_ONLY = "new_only"
|
REPLICATION_MODE_NEW_ONLY = "new_only"
|
||||||
REPLICATION_MODE_ALL = "all"
|
REPLICATION_MODE_ALL = "all"
|
||||||
|
|
||||||
|
|
||||||
def _create_s3_client(connection: RemoteConnection, *, health_check: bool = False) -> Any:
|
|
||||||
"""Create a boto3 S3 client for the given connection.
|
|
||||||
Args:
|
|
||||||
connection: Remote S3 connection configuration
|
|
||||||
health_check: If True, use minimal retries for quick health checks
|
|
||||||
"""
|
|
||||||
config = Config(
|
|
||||||
user_agent_extra=REPLICATION_USER_AGENT,
|
|
||||||
connect_timeout=REPLICATION_CONNECT_TIMEOUT,
|
|
||||||
read_timeout=REPLICATION_READ_TIMEOUT,
|
|
||||||
retries={'max_attempts': 1 if health_check else 2},
|
|
||||||
signature_version='s3v4',
|
|
||||||
s3={'addressing_style': 'path'},
|
|
||||||
request_checksum_calculation='when_required',
|
|
||||||
response_checksum_validation='when_required',
|
|
||||||
)
|
|
||||||
return boto3.client(
|
|
||||||
"s3",
|
|
||||||
endpoint_url=connection.endpoint_url,
|
|
||||||
aws_access_key_id=connection.access_key,
|
|
||||||
aws_secret_access_key=connection.secret_key,
|
|
||||||
region_name=connection.region or 'us-east-1',
|
|
||||||
config=config,
|
|
||||||
)
|
|
||||||
|
|
||||||
|
|
||||||
@dataclass
|
@dataclass
|
||||||
class ReplicationStats:
|
class ReplicationStats:
|
||||||
"""Statistics for replication operations - computed dynamically."""
|
"""Statistics for replication operations - computed dynamically."""
|
||||||
objects_synced: int = 0
|
objects_synced: int = 0 # Objects that exist in both source and destination
|
||||||
objects_pending: int = 0
|
objects_pending: int = 0 # Objects in source but not in destination
|
||||||
objects_orphaned: int = 0
|
objects_orphaned: int = 0 # Objects in destination but not in source (will be deleted)
|
||||||
bytes_synced: int = 0
|
bytes_synced: int = 0 # Total bytes synced to destination
|
||||||
last_sync_at: Optional[float] = None
|
last_sync_at: Optional[float] = None
|
||||||
last_sync_key: Optional[str] = None
|
last_sync_key: Optional[str] = None
|
||||||
|
|
||||||
@@ -87,40 +59,6 @@ class ReplicationStats:
|
|||||||
)
|
)
|
||||||
|
|
||||||
|
|
||||||
@dataclass
|
|
||||||
class ReplicationFailure:
|
|
||||||
object_key: str
|
|
||||||
error_message: str
|
|
||||||
timestamp: float
|
|
||||||
failure_count: int
|
|
||||||
bucket_name: str
|
|
||||||
action: str
|
|
||||||
last_error_code: Optional[str] = None
|
|
||||||
|
|
||||||
def to_dict(self) -> dict:
|
|
||||||
return {
|
|
||||||
"object_key": self.object_key,
|
|
||||||
"error_message": self.error_message,
|
|
||||||
"timestamp": self.timestamp,
|
|
||||||
"failure_count": self.failure_count,
|
|
||||||
"bucket_name": self.bucket_name,
|
|
||||||
"action": self.action,
|
|
||||||
"last_error_code": self.last_error_code,
|
|
||||||
}
|
|
||||||
|
|
||||||
@classmethod
|
|
||||||
def from_dict(cls, data: dict) -> "ReplicationFailure":
|
|
||||||
return cls(
|
|
||||||
object_key=data["object_key"],
|
|
||||||
error_message=data["error_message"],
|
|
||||||
timestamp=data["timestamp"],
|
|
||||||
failure_count=data["failure_count"],
|
|
||||||
bucket_name=data["bucket_name"],
|
|
||||||
action=data["action"],
|
|
||||||
last_error_code=data.get("last_error_code"),
|
|
||||||
)
|
|
||||||
|
|
||||||
|
|
||||||
@dataclass
|
@dataclass
|
||||||
class ReplicationRule:
|
class ReplicationRule:
|
||||||
bucket_name: str
|
bucket_name: str
|
||||||
@@ -145,6 +83,7 @@ class ReplicationRule:
|
|||||||
@classmethod
|
@classmethod
|
||||||
def from_dict(cls, data: dict) -> "ReplicationRule":
|
def from_dict(cls, data: dict) -> "ReplicationRule":
|
||||||
stats_data = data.pop("stats", {})
|
stats_data = data.pop("stats", {})
|
||||||
|
# Handle old rules without mode/created_at
|
||||||
if "mode" not in data:
|
if "mode" not in data:
|
||||||
data["mode"] = REPLICATION_MODE_NEW_ONLY
|
data["mode"] = REPLICATION_MODE_NEW_ONLY
|
||||||
if "created_at" not in data:
|
if "created_at" not in data:
|
||||||
@@ -154,98 +93,16 @@ class ReplicationRule:
|
|||||||
return rule
|
return rule
|
||||||
|
|
||||||
|
|
||||||
class ReplicationFailureStore:
|
|
||||||
MAX_FAILURES_PER_BUCKET = 50
|
|
||||||
|
|
||||||
def __init__(self, storage_root: Path) -> None:
|
|
||||||
self.storage_root = storage_root
|
|
||||||
self._lock = threading.Lock()
|
|
||||||
|
|
||||||
def _get_failures_path(self, bucket_name: str) -> Path:
|
|
||||||
return self.storage_root / ".myfsio.sys" / "buckets" / bucket_name / "replication_failures.json"
|
|
||||||
|
|
||||||
def load_failures(self, bucket_name: str) -> List[ReplicationFailure]:
|
|
||||||
path = self._get_failures_path(bucket_name)
|
|
||||||
if not path.exists():
|
|
||||||
return []
|
|
||||||
try:
|
|
||||||
with open(path, "r") as f:
|
|
||||||
data = json.load(f)
|
|
||||||
return [ReplicationFailure.from_dict(d) for d in data.get("failures", [])]
|
|
||||||
except (OSError, ValueError, KeyError) as e:
|
|
||||||
logger.error(f"Failed to load replication failures for {bucket_name}: {e}")
|
|
||||||
return []
|
|
||||||
|
|
||||||
def save_failures(self, bucket_name: str, failures: List[ReplicationFailure]) -> None:
|
|
||||||
path = self._get_failures_path(bucket_name)
|
|
||||||
path.parent.mkdir(parents=True, exist_ok=True)
|
|
||||||
data = {"failures": [f.to_dict() for f in failures[:self.MAX_FAILURES_PER_BUCKET]]}
|
|
||||||
try:
|
|
||||||
with open(path, "w") as f:
|
|
||||||
json.dump(data, f, indent=2)
|
|
||||||
except OSError as e:
|
|
||||||
logger.error(f"Failed to save replication failures for {bucket_name}: {e}")
|
|
||||||
|
|
||||||
def add_failure(self, bucket_name: str, failure: ReplicationFailure) -> None:
|
|
||||||
with self._lock:
|
|
||||||
failures = self.load_failures(bucket_name)
|
|
||||||
existing = next((f for f in failures if f.object_key == failure.object_key), None)
|
|
||||||
if existing:
|
|
||||||
existing.failure_count += 1
|
|
||||||
existing.timestamp = failure.timestamp
|
|
||||||
existing.error_message = failure.error_message
|
|
||||||
existing.last_error_code = failure.last_error_code
|
|
||||||
else:
|
|
||||||
failures.insert(0, failure)
|
|
||||||
self.save_failures(bucket_name, failures)
|
|
||||||
|
|
||||||
def remove_failure(self, bucket_name: str, object_key: str) -> bool:
|
|
||||||
with self._lock:
|
|
||||||
failures = self.load_failures(bucket_name)
|
|
||||||
original_len = len(failures)
|
|
||||||
failures = [f for f in failures if f.object_key != object_key]
|
|
||||||
if len(failures) < original_len:
|
|
||||||
self.save_failures(bucket_name, failures)
|
|
||||||
return True
|
|
||||||
return False
|
|
||||||
|
|
||||||
def clear_failures(self, bucket_name: str) -> None:
|
|
||||||
with self._lock:
|
|
||||||
path = self._get_failures_path(bucket_name)
|
|
||||||
if path.exists():
|
|
||||||
path.unlink()
|
|
||||||
|
|
||||||
def get_failure(self, bucket_name: str, object_key: str) -> Optional[ReplicationFailure]:
|
|
||||||
failures = self.load_failures(bucket_name)
|
|
||||||
return next((f for f in failures if f.object_key == object_key), None)
|
|
||||||
|
|
||||||
def get_failure_count(self, bucket_name: str) -> int:
|
|
||||||
return len(self.load_failures(bucket_name))
|
|
||||||
|
|
||||||
|
|
||||||
class ReplicationManager:
|
class ReplicationManager:
|
||||||
def __init__(self, storage: ObjectStorage, connections: ConnectionStore, rules_path: Path, storage_root: Path) -> None:
|
def __init__(self, storage: ObjectStorage, connections: ConnectionStore, rules_path: Path) -> None:
|
||||||
self.storage = storage
|
self.storage = storage
|
||||||
self.connections = connections
|
self.connections = connections
|
||||||
self.rules_path = rules_path
|
self.rules_path = rules_path
|
||||||
self.storage_root = storage_root
|
|
||||||
self._rules: Dict[str, ReplicationRule] = {}
|
self._rules: Dict[str, ReplicationRule] = {}
|
||||||
self._stats_lock = threading.Lock()
|
self._stats_lock = threading.Lock()
|
||||||
self._executor = ThreadPoolExecutor(max_workers=4, thread_name_prefix="ReplicationWorker")
|
self._executor = ThreadPoolExecutor(max_workers=4, thread_name_prefix="ReplicationWorker")
|
||||||
self._shutdown = False
|
|
||||||
self.failure_store = ReplicationFailureStore(storage_root)
|
|
||||||
self.reload_rules()
|
self.reload_rules()
|
||||||
|
|
||||||
def shutdown(self, wait: bool = True) -> None:
|
|
||||||
"""Shutdown the replication executor gracefully.
|
|
||||||
|
|
||||||
Args:
|
|
||||||
wait: If True, wait for pending tasks to complete
|
|
||||||
"""
|
|
||||||
self._shutdown = True
|
|
||||||
self._executor.shutdown(wait=wait)
|
|
||||||
logger.info("Replication manager shut down")
|
|
||||||
|
|
||||||
def reload_rules(self) -> None:
|
def reload_rules(self) -> None:
|
||||||
if not self.rules_path.exists():
|
if not self.rules_path.exists():
|
||||||
self._rules = {}
|
self._rules = {}
|
||||||
@@ -264,33 +121,13 @@ class ReplicationManager:
|
|||||||
with open(self.rules_path, "w") as f:
|
with open(self.rules_path, "w") as f:
|
||||||
json.dump(data, f, indent=2)
|
json.dump(data, f, indent=2)
|
||||||
|
|
||||||
def check_endpoint_health(self, connection: RemoteConnection) -> bool:
|
|
||||||
"""Check if a remote endpoint is reachable and responsive.
|
|
||||||
|
|
||||||
Returns True if endpoint is healthy, False otherwise.
|
|
||||||
Uses short timeouts to prevent blocking.
|
|
||||||
"""
|
|
||||||
try:
|
|
||||||
s3 = _create_s3_client(connection, health_check=True)
|
|
||||||
s3.list_buckets()
|
|
||||||
return True
|
|
||||||
except Exception as e:
|
|
||||||
logger.warning(f"Endpoint health check failed for {connection.name} ({connection.endpoint_url}): {e}")
|
|
||||||
return False
|
|
||||||
|
|
||||||
def get_rule(self, bucket_name: str) -> Optional[ReplicationRule]:
|
def get_rule(self, bucket_name: str) -> Optional[ReplicationRule]:
|
||||||
return self._rules.get(bucket_name)
|
return self._rules.get(bucket_name)
|
||||||
|
|
||||||
def set_rule(self, rule: ReplicationRule) -> None:
|
def set_rule(self, rule: ReplicationRule) -> None:
|
||||||
old_rule = self._rules.get(rule.bucket_name)
|
|
||||||
was_all_mode = old_rule and old_rule.mode == REPLICATION_MODE_ALL if old_rule else False
|
|
||||||
self._rules[rule.bucket_name] = rule
|
self._rules[rule.bucket_name] = rule
|
||||||
self.save_rules()
|
self.save_rules()
|
||||||
|
|
||||||
if rule.mode == REPLICATION_MODE_ALL and rule.enabled and not was_all_mode:
|
|
||||||
logger.info(f"Replication mode ALL enabled for {rule.bucket_name}, triggering sync of existing objects")
|
|
||||||
self._executor.submit(self.replicate_existing_objects, rule.bucket_name)
|
|
||||||
|
|
||||||
def delete_rule(self, bucket_name: str) -> None:
|
def delete_rule(self, bucket_name: str) -> None:
|
||||||
if bucket_name in self._rules:
|
if bucket_name in self._rules:
|
||||||
del self._rules[bucket_name]
|
del self._rules[bucket_name]
|
||||||
@@ -314,13 +151,21 @@ class ReplicationManager:
|
|||||||
|
|
||||||
connection = self.connections.get(rule.target_connection_id)
|
connection = self.connections.get(rule.target_connection_id)
|
||||||
if not connection:
|
if not connection:
|
||||||
return rule.stats
|
return rule.stats # Return cached stats if connection unavailable
|
||||||
|
|
||||||
try:
|
try:
|
||||||
source_objects = self.storage.list_objects_all(bucket_name)
|
# Get source objects
|
||||||
|
source_objects = self.storage.list_objects(bucket_name)
|
||||||
source_keys = {obj.key: obj.size for obj in source_objects}
|
source_keys = {obj.key: obj.size for obj in source_objects}
|
||||||
|
|
||||||
s3 = _create_s3_client(connection)
|
# Get destination objects
|
||||||
|
s3 = boto3.client(
|
||||||
|
"s3",
|
||||||
|
endpoint_url=connection.endpoint_url,
|
||||||
|
aws_access_key_id=connection.access_key,
|
||||||
|
aws_secret_access_key=connection.secret_key,
|
||||||
|
region_name=connection.region,
|
||||||
|
)
|
||||||
|
|
||||||
dest_keys = set()
|
dest_keys = set()
|
||||||
bytes_synced = 0
|
bytes_synced = 0
|
||||||
@@ -333,18 +178,24 @@ class ReplicationManager:
|
|||||||
bytes_synced += obj.get('Size', 0)
|
bytes_synced += obj.get('Size', 0)
|
||||||
except ClientError as e:
|
except ClientError as e:
|
||||||
if e.response['Error']['Code'] == 'NoSuchBucket':
|
if e.response['Error']['Code'] == 'NoSuchBucket':
|
||||||
|
# Destination bucket doesn't exist yet
|
||||||
dest_keys = set()
|
dest_keys = set()
|
||||||
else:
|
else:
|
||||||
raise
|
raise
|
||||||
|
|
||||||
synced = source_keys.keys() & dest_keys
|
# Compute stats
|
||||||
orphaned = dest_keys - source_keys.keys()
|
synced = source_keys.keys() & dest_keys # Objects in both
|
||||||
|
orphaned = dest_keys - source_keys.keys() # In dest but not source
|
||||||
|
|
||||||
|
# For "new_only" mode, we can't determine pending since we don't know
|
||||||
|
# which objects existed before replication was enabled. Only "all" mode
|
||||||
|
# should show pending (objects that should be replicated but aren't yet).
|
||||||
if rule.mode == REPLICATION_MODE_ALL:
|
if rule.mode == REPLICATION_MODE_ALL:
|
||||||
pending = source_keys.keys() - dest_keys
|
pending = source_keys.keys() - dest_keys # In source but not dest
|
||||||
else:
|
else:
|
||||||
pending = set()
|
pending = set() # New-only mode: don't show pre-existing as pending
|
||||||
|
|
||||||
|
# Update cached stats with computed values
|
||||||
rule.stats.objects_synced = len(synced)
|
rule.stats.objects_synced = len(synced)
|
||||||
rule.stats.objects_pending = len(pending)
|
rule.stats.objects_pending = len(pending)
|
||||||
rule.stats.objects_orphaned = len(orphaned)
|
rule.stats.objects_orphaned = len(orphaned)
|
||||||
@@ -354,7 +205,7 @@ class ReplicationManager:
|
|||||||
|
|
||||||
except (ClientError, StorageError) as e:
|
except (ClientError, StorageError) as e:
|
||||||
logger.error(f"Failed to compute sync status for {bucket_name}: {e}")
|
logger.error(f"Failed to compute sync status for {bucket_name}: {e}")
|
||||||
return rule.stats
|
return rule.stats # Return cached stats on error
|
||||||
|
|
||||||
def replicate_existing_objects(self, bucket_name: str) -> None:
|
def replicate_existing_objects(self, bucket_name: str) -> None:
|
||||||
"""Trigger replication for all existing objects in a bucket."""
|
"""Trigger replication for all existing objects in a bucket."""
|
||||||
@@ -367,12 +218,8 @@ class ReplicationManager:
|
|||||||
logger.warning(f"Cannot replicate existing objects: Connection {rule.target_connection_id} not found")
|
logger.warning(f"Cannot replicate existing objects: Connection {rule.target_connection_id} not found")
|
||||||
return
|
return
|
||||||
|
|
||||||
if not self.check_endpoint_health(connection):
|
|
||||||
logger.warning(f"Cannot replicate existing objects: Endpoint {connection.name} ({connection.endpoint_url}) is not reachable")
|
|
||||||
return
|
|
||||||
|
|
||||||
try:
|
try:
|
||||||
objects = self.storage.list_objects_all(bucket_name)
|
objects = self.storage.list_objects(bucket_name)
|
||||||
logger.info(f"Starting replication of {len(objects)} existing objects from {bucket_name}")
|
logger.info(f"Starting replication of {len(objects)} existing objects from {bucket_name}")
|
||||||
for obj in objects:
|
for obj in objects:
|
||||||
self._executor.submit(self._replicate_task, bucket_name, obj.key, rule, connection, "write")
|
self._executor.submit(self._replicate_task, bucket_name, obj.key, rule, connection, "write")
|
||||||
@@ -386,7 +233,13 @@ class ReplicationManager:
|
|||||||
raise ValueError(f"Connection {connection_id} not found")
|
raise ValueError(f"Connection {connection_id} not found")
|
||||||
|
|
||||||
try:
|
try:
|
||||||
s3 = _create_s3_client(connection)
|
s3 = boto3.client(
|
||||||
|
"s3",
|
||||||
|
endpoint_url=connection.endpoint_url,
|
||||||
|
aws_access_key_id=connection.access_key,
|
||||||
|
aws_secret_access_key=connection.secret_key,
|
||||||
|
region_name=connection.region,
|
||||||
|
)
|
||||||
s3.create_bucket(Bucket=bucket_name)
|
s3.create_bucket(Bucket=bucket_name)
|
||||||
except ClientError as e:
|
except ClientError as e:
|
||||||
logger.error(f"Failed to create remote bucket {bucket_name}: {e}")
|
logger.error(f"Failed to create remote bucket {bucket_name}: {e}")
|
||||||
@@ -402,21 +255,9 @@ class ReplicationManager:
|
|||||||
logger.warning(f"Replication skipped for {bucket_name}/{object_key}: Connection {rule.target_connection_id} not found")
|
logger.warning(f"Replication skipped for {bucket_name}/{object_key}: Connection {rule.target_connection_id} not found")
|
||||||
return
|
return
|
||||||
|
|
||||||
if not self.check_endpoint_health(connection):
|
|
||||||
logger.warning(f"Replication skipped for {bucket_name}/{object_key}: Endpoint {connection.name} ({connection.endpoint_url}) is not reachable")
|
|
||||||
return
|
|
||||||
|
|
||||||
self._executor.submit(self._replicate_task, bucket_name, object_key, rule, connection, action)
|
self._executor.submit(self._replicate_task, bucket_name, object_key, rule, connection, action)
|
||||||
|
|
||||||
def _replicate_task(self, bucket_name: str, object_key: str, rule: ReplicationRule, conn: RemoteConnection, action: str) -> None:
|
def _replicate_task(self, bucket_name: str, object_key: str, rule: ReplicationRule, conn: RemoteConnection, action: str) -> None:
|
||||||
if self._shutdown:
|
|
||||||
return
|
|
||||||
|
|
||||||
current_rule = self.get_rule(bucket_name)
|
|
||||||
if not current_rule or not current_rule.enabled:
|
|
||||||
logger.debug(f"Replication skipped for {bucket_name}/{object_key}: rule disabled or removed")
|
|
||||||
return
|
|
||||||
|
|
||||||
if ".." in object_key or object_key.startswith("/") or object_key.startswith("\\"):
|
if ".." in object_key or object_key.startswith("/") or object_key.startswith("\\"):
|
||||||
logger.error(f"Invalid object key in replication (path traversal attempt): {object_key}")
|
logger.error(f"Invalid object key in replication (path traversal attempt): {object_key}")
|
||||||
return
|
return
|
||||||
@@ -428,27 +269,25 @@ class ReplicationManager:
|
|||||||
logger.error(f"Object key validation failed in replication: {e}")
|
logger.error(f"Object key validation failed in replication: {e}")
|
||||||
return
|
return
|
||||||
|
|
||||||
|
file_size = 0
|
||||||
try:
|
try:
|
||||||
s3 = _create_s3_client(conn)
|
config = Config(user_agent_extra=REPLICATION_USER_AGENT)
|
||||||
|
s3 = boto3.client(
|
||||||
|
"s3",
|
||||||
|
endpoint_url=conn.endpoint_url,
|
||||||
|
aws_access_key_id=conn.access_key,
|
||||||
|
aws_secret_access_key=conn.secret_key,
|
||||||
|
region_name=conn.region,
|
||||||
|
config=config,
|
||||||
|
)
|
||||||
|
|
||||||
if action == "delete":
|
if action == "delete":
|
||||||
try:
|
try:
|
||||||
s3.delete_object(Bucket=rule.target_bucket, Key=object_key)
|
s3.delete_object(Bucket=rule.target_bucket, Key=object_key)
|
||||||
logger.info(f"Replicated DELETE {bucket_name}/{object_key} to {conn.name} ({rule.target_bucket})")
|
logger.info(f"Replicated DELETE {bucket_name}/{object_key} to {conn.name} ({rule.target_bucket})")
|
||||||
self._update_last_sync(bucket_name, object_key)
|
self._update_last_sync(bucket_name, object_key)
|
||||||
self.failure_store.remove_failure(bucket_name, object_key)
|
|
||||||
except ClientError as e:
|
except ClientError as e:
|
||||||
error_code = e.response.get('Error', {}).get('Code')
|
|
||||||
logger.error(f"Replication DELETE failed for {bucket_name}/{object_key}: {e}")
|
logger.error(f"Replication DELETE failed for {bucket_name}/{object_key}: {e}")
|
||||||
self.failure_store.add_failure(bucket_name, ReplicationFailure(
|
|
||||||
object_key=object_key,
|
|
||||||
error_message=str(e),
|
|
||||||
timestamp=time.time(),
|
|
||||||
failure_count=1,
|
|
||||||
bucket_name=bucket_name,
|
|
||||||
action="delete",
|
|
||||||
last_error_code=error_code,
|
|
||||||
))
|
|
||||||
return
|
return
|
||||||
|
|
||||||
try:
|
try:
|
||||||
@@ -457,153 +296,61 @@ class ReplicationManager:
|
|||||||
logger.error(f"Source object not found: {bucket_name}/{object_key}")
|
logger.error(f"Source object not found: {bucket_name}/{object_key}")
|
||||||
return
|
return
|
||||||
|
|
||||||
|
metadata = self.storage.get_object_metadata(bucket_name, object_key)
|
||||||
|
|
||||||
|
extra_args = {}
|
||||||
|
if metadata:
|
||||||
|
extra_args["Metadata"] = metadata
|
||||||
|
|
||||||
|
# Guess content type to prevent corruption/wrong handling
|
||||||
content_type, _ = mimetypes.guess_type(path)
|
content_type, _ = mimetypes.guess_type(path)
|
||||||
file_size = path.stat().st_size
|
file_size = path.stat().st_size
|
||||||
|
|
||||||
logger.info(f"Replicating {bucket_name}/{object_key}: Size={file_size}, ContentType={content_type}")
|
logger.info(f"Replicating {bucket_name}/{object_key}: Size={file_size}, ContentType={content_type}")
|
||||||
|
|
||||||
def do_upload() -> None:
|
|
||||||
"""Upload object using appropriate method based on file size.
|
|
||||||
|
|
||||||
For small files (< 10 MiB): Read into memory for simpler handling
|
|
||||||
For large files: Use streaming upload to avoid memory issues
|
|
||||||
"""
|
|
||||||
extra_args = {}
|
|
||||||
if content_type:
|
|
||||||
extra_args["ContentType"] = content_type
|
|
||||||
|
|
||||||
if file_size >= STREAMING_THRESHOLD_BYTES:
|
|
||||||
s3.upload_file(
|
|
||||||
str(path),
|
|
||||||
rule.target_bucket,
|
|
||||||
object_key,
|
|
||||||
ExtraArgs=extra_args if extra_args else None,
|
|
||||||
)
|
|
||||||
else:
|
|
||||||
file_content = path.read_bytes()
|
|
||||||
put_kwargs = {
|
|
||||||
"Bucket": rule.target_bucket,
|
|
||||||
"Key": object_key,
|
|
||||||
"Body": file_content,
|
|
||||||
**extra_args,
|
|
||||||
}
|
|
||||||
s3.put_object(**put_kwargs)
|
|
||||||
|
|
||||||
try:
|
try:
|
||||||
do_upload()
|
with path.open("rb") as f:
|
||||||
|
s3.put_object(
|
||||||
|
Bucket=rule.target_bucket,
|
||||||
|
Key=object_key,
|
||||||
|
Body=f,
|
||||||
|
ContentLength=file_size,
|
||||||
|
ContentType=content_type or "application/octet-stream",
|
||||||
|
Metadata=metadata or {}
|
||||||
|
)
|
||||||
except (ClientError, S3UploadFailedError) as e:
|
             except (ClientError, S3UploadFailedError) as e:
-                error_code = None
+                is_no_bucket = False
                 if isinstance(e, ClientError):
-                    error_code = e.response['Error']['Code']
+                    if e.response['Error']['Code'] == 'NoSuchBucket':
+                        is_no_bucket = True
                 elif isinstance(e, S3UploadFailedError):
                     if "NoSuchBucket" in str(e):
-                        error_code = 'NoSuchBucket'
+                        is_no_bucket = True
 
-                if error_code == 'NoSuchBucket':
+                if is_no_bucket:
                     logger.info(f"Target bucket {rule.target_bucket} not found. Attempting to create it.")
-                    bucket_ready = False
                     try:
                         s3.create_bucket(Bucket=rule.target_bucket)
-                        bucket_ready = True
-                        logger.info(f"Created target bucket {rule.target_bucket}")
-                    except ClientError as bucket_err:
-                        if bucket_err.response['Error']['Code'] in ('BucketAlreadyExists', 'BucketAlreadyOwnedByYou'):
-                            logger.debug(f"Bucket {rule.target_bucket} already exists (created by another thread)")
-                            bucket_ready = True
-                        else:
-                            logger.error(f"Failed to create target bucket {rule.target_bucket}: {bucket_err}")
-                            raise e
-
-                    if bucket_ready:
-                        do_upload()
+                        # Retry upload
+                        with path.open("rb") as f:
+                            s3.put_object(
+                                Bucket=rule.target_bucket,
+                                Key=object_key,
+                                Body=f,
+                                ContentLength=file_size,
+                                ContentType=content_type or "application/octet-stream",
+                                Metadata=metadata or {}
+                            )
+                    except Exception as create_err:
+                        logger.error(f"Failed to create target bucket {rule.target_bucket}: {create_err}")
+                        raise e  # Raise original error
                 else:
                     raise e
 
             logger.info(f"Replicated {bucket_name}/{object_key} to {conn.name} ({rule.target_bucket})")
             self._update_last_sync(bucket_name, object_key)
-            self.failure_store.remove_failure(bucket_name, object_key)
 
         except (ClientError, OSError, ValueError) as e:
-            error_code = None
-            if isinstance(e, ClientError):
-                error_code = e.response.get('Error', {}).get('Code')
             logger.error(f"Replication failed for {bucket_name}/{object_key}: {e}")
-            self.failure_store.add_failure(bucket_name, ReplicationFailure(
-                object_key=object_key,
-                error_message=str(e),
-                timestamp=time.time(),
-                failure_count=1,
-                bucket_name=bucket_name,
-                action=action,
-                last_error_code=error_code,
-            ))
-        except Exception as e:
+        except Exception:
             logger.exception(f"Unexpected error during replication for {bucket_name}/{object_key}")
-            self.failure_store.add_failure(bucket_name, ReplicationFailure(
-                object_key=object_key,
-                error_message=str(e),
-                timestamp=time.time(),
-                failure_count=1,
-                bucket_name=bucket_name,
-                action=action,
-                last_error_code=None,
-            ))
-
-    def get_failed_items(self, bucket_name: str, limit: int = 50, offset: int = 0) -> List[ReplicationFailure]:
-        failures = self.failure_store.load_failures(bucket_name)
-        return failures[offset:offset + limit]
-
-    def get_failure_count(self, bucket_name: str) -> int:
-        return self.failure_store.get_failure_count(bucket_name)
-
-    def retry_failed_item(self, bucket_name: str, object_key: str) -> bool:
-        failure = self.failure_store.get_failure(bucket_name, object_key)
-        if not failure:
-            return False
-
-        rule = self.get_rule(bucket_name)
-        if not rule or not rule.enabled:
-            return False
-
-        connection = self.connections.get(rule.target_connection_id)
-        if not connection:
-            logger.warning(f"Cannot retry: Connection {rule.target_connection_id} not found")
-            return False
-
-        if not self.check_endpoint_health(connection):
-            logger.warning(f"Cannot retry: Endpoint {connection.name} is not reachable")
-            return False
-
-        self._executor.submit(self._replicate_task, bucket_name, object_key, rule, connection, failure.action)
-        return True
-
-    def retry_all_failed(self, bucket_name: str) -> Dict[str, int]:
-        failures = self.failure_store.load_failures(bucket_name)
-        if not failures:
-            return {"submitted": 0, "skipped": 0}
-
-        rule = self.get_rule(bucket_name)
-        if not rule or not rule.enabled:
-            return {"submitted": 0, "skipped": len(failures)}
-
-        connection = self.connections.get(rule.target_connection_id)
-        if not connection:
-            logger.warning(f"Cannot retry: Connection {rule.target_connection_id} not found")
-            return {"submitted": 0, "skipped": len(failures)}
-
-        if not self.check_endpoint_health(connection):
-            logger.warning(f"Cannot retry: Endpoint {connection.name} is not reachable")
-            return {"submitted": 0, "skipped": len(failures)}
-
-        submitted = 0
-        for failure in failures:
-            self._executor.submit(self._replicate_task, bucket_name, failure.object_key, rule, connection, failure.action)
-            submitted += 1
-
-        return {"submitted": submitted, "skipped": 0}
-
-    def dismiss_failure(self, bucket_name: str, object_key: str) -> bool:
-        return self.failure_store.remove_failure(bucket_name, object_key)
-
-    def clear_failures(self, bucket_name: str) -> None:
-        self.failure_store.clear_failures(bucket_name)
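The new error path above boils down to a create-and-retry pattern. A minimal standalone sketch of the same idea with boto3 (the endpoint URL and credentials are placeholders, not taken from this diff):

```python
# Sketch: upload an object, creating the target bucket once if it is missing.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client(
    "s3",
    endpoint_url="http://127.0.0.1:5000",  # assumed local MyFSIO endpoint
    aws_access_key_id="ACCESS",
    aws_secret_access_key="SECRET",
)

def put_with_bucket_autocreate(bucket: str, key: str, body: bytes) -> None:
    try:
        s3.put_object(Bucket=bucket, Key=key, Body=body)
    except ClientError as err:
        if err.response["Error"]["Code"] != "NoSuchBucket":
            raise
        s3.create_bucket(Bucket=bucket)  # create the target, then retry once
        s3.put_object(Bucket=bucket, Key=key, Body=body)
```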
1057  app/s3_api.py
File diff suppressed because it is too large.
@@ -1,3 +1,4 @@
+"""Ephemeral store for one-time secrets communicated to the UI."""
 from __future__ import annotations
 
 import secrets
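The docstring added above describes a read-once secret store. A minimal sketch of what such a store can look like; the field names, read-once semantics, and 300-second TTL here are assumptions (the real implementation is in the suppressed diffs):

```python
# Hypothetical one-time secret store: values are handed out exactly once
# and expire after a TTL.
import secrets
import time

class OneTimeSecretStore:
    def __init__(self, ttl_seconds: float = 300.0) -> None:
        self._ttl = ttl_seconds
        self._items: dict[str, tuple[float, str]] = {}

    def put(self, value: str) -> str:
        token = secrets.token_urlsafe(16)  # unguessable handle
        self._items[token] = (time.monotonic(), value)
        return token

    def take(self, token: str) -> str | None:
        entry = self._items.pop(token, None)  # read-once: pop removes it
        if entry is None:
            return None
        created, value = entry
        if time.monotonic() - created > self._ttl:
            return None  # expired
        return value
```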
665  app/storage.py
File diff suppressed because it is too large.
@@ -1,6 +1,7 @@
+"""Central location for the application version string."""
 from __future__ import annotations
 
-APP_VERSION = "0.2.2"
+APP_VERSION = "0.1.3"
 
 
 def get_version() -> str:
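Since `APP_VERSION` is what the health endpoint reports (per the docs.md content below), a quick way to confirm which tag is actually running, assuming `/healthz` returns JSON with a `version` field:

```python
# Sketch: confirm the running version after a deploy or rollback.
import requests

resp = requests.get("http://localhost:5000/healthz", timeout=5)
resp.raise_for_status()
print(resp.json().get("version"))  # expect "0.1.3" on this tag
```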
802  docs.md
@@ -33,63 +33,6 @@ python run.py --mode api # API only (port 5000)
 python run.py --mode ui # UI only (port 5100)
 ```
 
-### Configuration validation
-
-Validate your configuration before deploying:
-
-```bash
-# Show configuration summary
-python run.py --show-config
-./myfsio --show-config
-
-# Validate and check for issues (exits with code 1 if critical issues found)
-python run.py --check-config
-./myfsio --check-config
-```
-
-### Linux Installation (Recommended for Production)
-
-For production deployments on Linux, use the provided installation script:
-
-```bash
-# Download the binary and install script
-# Then run the installer with sudo:
-sudo ./scripts/install.sh --binary ./myfsio
-
-# Or with custom paths:
-sudo ./scripts/install.sh \
-  --binary ./myfsio \
-  --install-dir /opt/myfsio \
-  --data-dir /mnt/storage/myfsio \
-  --log-dir /var/log/myfsio \
-  --api-url https://s3.example.com \
-  --user myfsio
-
-# Non-interactive mode (for automation):
-sudo ./scripts/install.sh --binary ./myfsio -y
-```
-
-The installer will:
-1. Create a dedicated system user
-2. Set up directories with proper permissions
-3. Generate a secure `SECRET_KEY`
-4. Create an environment file at `/opt/myfsio/myfsio.env`
-5. Install and configure a systemd service
-
-After installation:
-```bash
-sudo systemctl start myfsio    # Start the service
-sudo systemctl enable myfsio   # Enable on boot
-sudo systemctl status myfsio   # Check status
-sudo journalctl -u myfsio -f   # View logs
-```
-
-To uninstall:
-```bash
-sudo ./scripts/uninstall.sh              # Full removal
-sudo ./scripts/uninstall.sh --keep-data  # Keep data directory
-```
-
 ### Docker quickstart
 
 The repo now ships a `Dockerfile` so you can run both services in one container:
@@ -126,143 +69,23 @@ The repo now tracks a human-friendly release string inside `app/version.py` (see
 
 ## 3. Configuration Reference
 
-All configuration is done via environment variables. The table below lists every supported variable.
-
-### Core Settings
-
 | Variable | Default | Notes |
 | --- | --- | --- |
 | `STORAGE_ROOT` | `<repo>/data` | Filesystem home for all buckets/objects. |
-| `MAX_UPLOAD_SIZE` | `1073741824` (1 GiB) | Bytes. Caps incoming uploads in both API + UI. |
+| `MAX_UPLOAD_SIZE` | `1073741824` | Bytes. Caps incoming uploads in both API + UI. |
 | `UI_PAGE_SIZE` | `100` | `MaxKeys` hint shown in listings. |
-| `SECRET_KEY` | Auto-generated | Flask session key. Auto-generates and persists if not set. **Set explicitly in production.** |
-| `API_BASE_URL` | `None` | Public URL for presigned URLs. Required behind proxies. |
+| `SECRET_KEY` | `dev-secret-key` | Flask session key for UI auth. |
+| `IAM_CONFIG` | `<repo>/data/.myfsio.sys/config/iam.json` | Stores users, secrets, and inline policies. |
+| `BUCKET_POLICY_PATH` | `<repo>/data/.myfsio.sys/config/bucket_policies.json` | Bucket policy store (auto hot-reload). |
+| `API_BASE_URL` | `None` | Used by the UI to hit API endpoints (presign/policy). If unset, the UI will auto-detect the host or use `X-Forwarded-*` headers. |
 | `AWS_REGION` | `us-east-1` | Region embedded in SigV4 credential scope. |
 | `AWS_SERVICE` | `s3` | Service string for SigV4. |
 
-### IAM & Security
-
-| Variable | Default | Notes |
-| --- | --- | --- |
-| `IAM_CONFIG` | `data/.myfsio.sys/config/iam.json` | Stores users, secrets, and inline policies. |
-| `BUCKET_POLICY_PATH` | `data/.myfsio.sys/config/bucket_policies.json` | Bucket policy store (auto hot-reload). |
-| `AUTH_MAX_ATTEMPTS` | `5` | Failed login attempts before lockout. |
-| `AUTH_LOCKOUT_MINUTES` | `15` | Lockout duration after max failed attempts. |
-| `SESSION_LIFETIME_DAYS` | `30` | How long UI sessions remain valid. |
-| `SECRET_TTL_SECONDS` | `300` | TTL for ephemeral secrets (presigned URLs). |
-| `UI_ENFORCE_BUCKET_POLICIES` | `false` | Whether the UI should enforce bucket policies. |
-
-### CORS (Cross-Origin Resource Sharing)
-
-| Variable | Default | Notes |
-| --- | --- | --- |
-| `CORS_ORIGINS` | `*` | Comma-separated allowed origins. Use specific domains in production. |
-| `CORS_METHODS` | `GET,PUT,POST,DELETE,OPTIONS,HEAD` | Allowed HTTP methods. |
-| `CORS_ALLOW_HEADERS` | `*` | Allowed request headers. |
-| `CORS_EXPOSE_HEADERS` | `*` | Response headers visible to browsers (e.g., `ETag`). |
-
-### Rate Limiting
-
-| Variable | Default | Notes |
-| --- | --- | --- |
-| `RATE_LIMIT_DEFAULT` | `200 per minute` | Default rate limit for API endpoints. |
-| `RATE_LIMIT_STORAGE_URI` | `memory://` | Storage backend for rate limits. Use `redis://host:port` for distributed setups. |
-
-### Logging
-
-| Variable | Default | Notes |
-| --- | --- | --- |
-| `LOG_LEVEL` | `INFO` | Log verbosity: `DEBUG`, `INFO`, `WARNING`, `ERROR`. |
-| `LOG_TO_FILE` | `true` | Enable file logging. |
-| `LOG_DIR` | `<repo>/logs` | Directory for log files. |
-| `LOG_FILE` | `app.log` | Log filename. |
-| `LOG_MAX_BYTES` | `5242880` (5 MB) | Max log file size before rotation. |
-| `LOG_BACKUP_COUNT` | `3` | Number of rotated log files to keep. |
-
-### Encryption
-
-| Variable | Default | Notes |
-| --- | --- | --- |
 | `ENCRYPTION_ENABLED` | `false` | Enable server-side encryption support. |
-| `ENCRYPTION_MASTER_KEY_PATH` | `data/.myfsio.sys/keys/master.key` | Path to the master encryption key file. |
-| `DEFAULT_ENCRYPTION_ALGORITHM` | `AES256` | Default algorithm for new encrypted objects. |
 | `KMS_ENABLED` | `false` | Enable KMS key management for encryption. |
-| `KMS_KEYS_PATH` | `data/.myfsio.sys/keys/kms_keys.json` | Path to store KMS key metadata. |
+| `KMS_KEYS_PATH` | `data/kms_keys.json` | Path to store KMS key metadata. |
+| `ENCRYPTION_MASTER_KEY_PATH` | `data/master.key` | Path to the master encryption key file. |
 
+Set env vars (or pass overrides to `create_app`) to point the servers at custom paths.
-## Lifecycle Rules
-
-Lifecycle rules automate object management by scheduling deletions based on object age.
-
-### Enabling Lifecycle Enforcement
-
-By default, lifecycle enforcement is disabled. Enable it by setting the environment variable:
-
-```bash
-LIFECYCLE_ENABLED=true python run.py
-```
-
-Or in your `myfsio.env` file:
-```
-LIFECYCLE_ENABLED=true
-LIFECYCLE_INTERVAL_SECONDS=3600  # Check interval (default: 1 hour)
-```
-
-### Configuring Rules
-
-Once enabled, configure lifecycle rules via:
-- **Web UI:** Bucket Details → Lifecycle tab → Add Rule
-- **S3 API:** `PUT /<bucket>?lifecycle` with XML configuration
-
-### Available Actions
-
-| Action | Description |
-|--------|-------------|
-| **Expiration** | Delete current version objects after N days |
-| **NoncurrentVersionExpiration** | Delete old versions N days after becoming noncurrent (requires versioning) |
-| **AbortIncompleteMultipartUpload** | Clean up incomplete multipart uploads after N days |
-
-### Example Configuration (XML)
-
-```xml
-<LifecycleConfiguration>
-  <Rule>
-    <ID>DeleteOldLogs</ID>
-    <Status>Enabled</Status>
-    <Filter><Prefix>logs/</Prefix></Filter>
-    <Expiration><Days>30</Days></Expiration>
-  </Rule>
-</LifecycleConfiguration>
-```
-
-### Performance Tuning
-
-| Variable | Default | Notes |
-| --- | --- | --- |
-| `STREAM_CHUNK_SIZE` | `65536` (64 KB) | Chunk size for streaming large files. |
-| `MULTIPART_MIN_PART_SIZE` | `5242880` (5 MB) | Minimum part size for multipart uploads. |
-| `BUCKET_STATS_CACHE_TTL` | `60` | Seconds to cache bucket statistics. |
-| `BULK_DELETE_MAX_KEYS` | `500` | Maximum keys per bulk delete request. |
-
-### Server Settings
-
-| Variable | Default | Notes |
-| --- | --- | --- |
-| `APP_HOST` | `0.0.0.0` | Network interface to bind to. |
-| `APP_PORT` | `5000` | API server port (UI uses 5100). |
-| `FLASK_DEBUG` | `0` | Enable Flask debug mode. **Never enable in production.** |
-
-### Production Checklist
-
-Before deploying to production, ensure you:
-
-1. **Set `SECRET_KEY`** - Use a strong, unique value (e.g., `openssl rand -base64 32`)
-2. **Restrict CORS** - Set `CORS_ORIGINS` to your specific domains instead of `*`
-3. **Configure `API_BASE_URL`** - Required for correct presigned URLs behind proxies
-4. **Enable HTTPS** - Use a reverse proxy (nginx, Cloudflare) with TLS termination
-5. **Review rate limits** - Adjust `RATE_LIMIT_DEFAULT` based on your needs
-6. **Secure master keys** - Back up `ENCRYPTION_MASTER_KEY_PATH` if using encryption
-7. **Use `--prod` flag** - Runs with Waitress instead of Flask dev server
 
 ### Proxy Configuration
 
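For the removed checklist's `SECRET_KEY` advice, a value can also be minted without openssl; a Python one-liner equivalent in spirit to `openssl rand -base64 32`:

```python
# Generate a strong Flask session key; paste the output into myfsio.env.
import secrets

print(secrets.token_urlsafe(32))
```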
@@ -272,340 +95,8 @@ If running behind a reverse proxy (e.g., Nginx, Cloudflare, or a tunnel), ensure
 
 The application automatically trusts these headers to generate correct presigned URLs (e.g., `https://s3.example.com/...` instead of `http://127.0.0.1:5000/...`). Alternatively, you can explicitly set `API_BASE_URL` to your public endpoint.
 
-## 4. Upgrading and Updates
-
-### Version Checking
-
-The application version is tracked in `app/version.py` and exposed via:
-- **Health endpoint:** `GET /healthz` returns JSON with `version` field
-- **Metrics dashboard:** Navigate to `/ui/metrics` to see the running version in the System Status card
-
-To check your current version:
-
-```bash
-# API health endpoint
-curl http://localhost:5000/healthz
-
-# Or inspect version.py directly
-cat app/version.py | grep APP_VERSION
-```
-
-### Pre-Update Backup Procedures
-
-**Always backup before upgrading to prevent data loss:**
-
-```bash
-# 1. Stop the application
-# Ctrl+C if running in terminal, or:
-docker stop myfsio  # if using Docker
-
-# 2. Backup configuration files (CRITICAL)
-mkdir -p backups/$(date +%Y%m%d_%H%M%S)
-cp -r data/.myfsio.sys/config backups/$(date +%Y%m%d_%H%M%S)/
-
-# 3. Backup all data (optional but recommended)
-tar -czf backups/data_$(date +%Y%m%d_%H%M%S).tar.gz data/
-
-# 4. Backup logs for audit trail
-cp -r logs backups/$(date +%Y%m%d_%H%M%S)/
-```
-
-**Windows PowerShell:**
-
-```powershell
-# Create timestamped backup
-$timestamp = Get-Date -Format "yyyyMMdd_HHmmss"
-New-Item -ItemType Directory -Path "backups\$timestamp" -Force
-
-# Backup configs
-Copy-Item -Recurse "data\.myfsio.sys\config" "backups\$timestamp\"
-
-# Backup entire data directory
-Compress-Archive -Path "data\" -DestinationPath "backups\data_$timestamp.zip"
-```
-
-**Critical files to backup:**
-- `data/.myfsio.sys/config/iam.json` – User accounts and access keys
-- `data/.myfsio.sys/config/bucket_policies.json` – Bucket access policies
-- `data/.myfsio.sys/config/kms_keys.json` – Encryption keys (if using KMS)
-- `data/.myfsio.sys/config/secret_store.json` – Application secrets
-
-### Update Procedures
-
-#### Source Installation Updates
-
-```bash
-# 1. Backup (see above)
-# 2. Pull latest code
-git fetch origin
-git checkout main  # or your target branch/tag
-git pull
-
-# 3. Check for dependency changes
-pip install -r requirements.txt
-
-# 4. Review CHANGELOG/release notes for breaking changes
-cat CHANGELOG.md  # if available
-
-# 5. Run migration scripts (if any)
-# python scripts/migrate_vX_to_vY.py  # example
-
-# 6. Restart application
-python run.py
-```
-
-#### Docker Updates
-
-```bash
-# 1. Backup (see above)
-# 2. Pull/rebuild image
-docker pull yourregistry/myfsio:latest
-# OR rebuild from source:
-docker build -t myfsio:latest .
-
-# 3. Stop and remove old container
-docker stop myfsio
-docker rm myfsio
-
-# 4. Start new container with same volumes
-docker run -d \
-  --name myfsio \
-  -p 5000:5000 -p 5100:5100 \
-  -v "$(pwd)/data:/app/data" \
-  -v "$(pwd)/logs:/app/logs" \
-  -e SECRET_KEY="your-secret" \
-  myfsio:latest
-
-# 5. Verify health
-curl http://localhost:5000/healthz
-```
-
-### Version Compatibility Checks
-
-Before upgrading across major versions, verify compatibility:
-
-| From Version | To Version | Breaking Changes | Migration Required |
-|--------------|------------|------------------|-------------------|
-| 0.1.x | 0.2.x | None expected | No |
-| 0.1.6 | 0.1.7 | None | No |
-| < 0.1.0 | >= 0.1.0 | New IAM config format | Yes - run migration script |
-
-**Automatic compatibility detection:**
-
-The application will log warnings on startup if config files need migration:
-
-```
-WARNING: IAM config format is outdated (v1). Please run: python scripts/migrate_iam.py
-```
-
-**Manual compatibility check:**
-
-```bash
-# Compare version schemas
-python -c "from app.version import APP_VERSION; print(f'Running: {APP_VERSION}')"
-python scripts/check_compatibility.py data/.myfsio.sys/config/
-```
-
-### Migration Steps for Breaking Changes
-
-When release notes indicate breaking changes, follow these steps:
-
-#### Config Format Migrations
-
-```bash
-# 1. Backup first (critical!)
-cp data/.myfsio.sys/config/iam.json data/.myfsio.sys/config/iam.json.backup
-
-# 2. Run provided migration script
-python scripts/migrate_iam_v1_to_v2.py
-
-# 3. Validate migration
-python scripts/validate_config.py
-
-# 4. Test with read-only mode first (if available)
-# python run.py --read-only
-
-# 5. Restart normally
-python run.py
-```
-
-#### Database/Storage Schema Changes
-
-If object metadata format changes:
-
-```bash
-# 1. Run storage migration script
-python scripts/migrate_storage.py --dry-run  # preview changes
-
-# 2. Apply migration
-python scripts/migrate_storage.py --apply
-
-# 3. Verify integrity
-python scripts/verify_storage.py
-```
-
-#### IAM Policy Updates
-
-If IAM action names change (e.g., `s3:Get` → `s3:GetObject`):
-
-```bash
-# Migration script will update all policies
-python scripts/migrate_policies.py \
-  --input data/.myfsio.sys/config/iam.json \
-  --backup data/.myfsio.sys/config/iam.json.v1
-
-# Review changes before committing
-python scripts/diff_policies.py \
-  data/.myfsio.sys/config/iam.json.v1 \
-  data/.myfsio.sys/config/iam.json
-```
-
-### Rollback Procedures
-
-If an update causes issues, rollback to the previous version:
-
-#### Quick Rollback (Source)
-
-```bash
-# 1. Stop application
-# Ctrl+C or kill process
-
-# 2. Revert code
-git checkout <previous-version-tag>
-# OR
-git reset --hard HEAD~1
-
-# 3. Restore configs from backup
-cp backups/20241213_103000/config/* data/.myfsio.sys/config/
-
-# 4. Downgrade dependencies if needed
-pip install -r requirements.txt
-
-# 5. Restart
-python run.py
-```
-
-#### Docker Rollback
-
-```bash
-# 1. Stop current container
-docker stop myfsio
-docker rm myfsio
-
-# 2. Start previous version
-docker run -d \
-  --name myfsio \
-  -p 5000:5000 -p 5100:5100 \
-  -v "$(pwd)/data:/app/data" \
-  -v "$(pwd)/logs:/app/logs" \
-  -e SECRET_KEY="your-secret" \
-  myfsio:0.1.3  # specify previous version tag
-
-# 3. Verify
-curl http://localhost:5000/healthz
-```
-
-#### Emergency Config Restore
-
-If only config is corrupted but code is fine:
-
-```bash
-# Stop app
-# Restore from latest backup
-cp backups/20241213_103000/config/iam.json data/.myfsio.sys/config/
-cp backups/20241213_103000/config/bucket_policies.json data/.myfsio.sys/config/
-
-# Restart app
-python run.py
-```
-
-### Blue-Green Deployment (Zero Downtime)
-
-For production environments requiring zero downtime:
-
-```bash
-# 1. Run new version on different port (e.g., 5001/5101)
-APP_PORT=5001 UI_PORT=5101 python run.py &
-
-# 2. Health check new instance
-curl http://localhost:5001/healthz
-
-# 3. Update load balancer to route to new ports
-
-# 4. Monitor for issues
-
-# 5. Gracefully stop old instance
-kill -SIGTERM <old-pid>
-```
-
-### Post-Update Verification
-
-After any update, verify functionality:
-
-```bash
-# 1. Health check
-curl http://localhost:5000/healthz
-
-# 2. Login to UI
-open http://localhost:5100/ui
-
-# 3. Test IAM authentication
-curl -H "X-Amz-Security-Token: <your-access-key>:<your-secret>" \
-  http://localhost:5000/
-
-# 4. Test presigned URL generation
-# Via UI or API
-
-# 5. Check logs for errors
-tail -n 100 logs/myfsio.log
-```
-
-### Automated Update Scripts
-
-Create a custom update script for your environment:
-
-```bash
-#!/bin/bash
-# update.sh - Automated update with rollback capability
-
-set -e  # Exit on error
-
-VERSION_NEW="$1"
-BACKUP_DIR="backups/$(date +%Y%m%d_%H%M%S)"
-
-echo "Creating backup..."
-mkdir -p "$BACKUP_DIR"
-cp -r data/.myfsio.sys/config "$BACKUP_DIR/"
-
-echo "Updating to version $VERSION_NEW..."
-git fetch origin
-git checkout "v$VERSION_NEW"
-pip install -r requirements.txt
-
-echo "Starting application..."
-python run.py &
-APP_PID=$!
-
-# Wait and health check
-sleep 5
-if curl -f http://localhost:5000/healthz; then
-    echo "Update successful!"
-else
-    echo "Health check failed, rolling back..."
-    kill $APP_PID
-    git checkout -
-    cp -r "$BACKUP_DIR/config/*" data/.myfsio.sys/config/
-    python run.py &
-    exit 1
-fi
-```
-
 ## 4. Authentication & IAM
 
-MyFSIO implements a comprehensive Identity and Access Management (IAM) system that controls who can access your buckets and what operations they can perform. The system supports both simple action-based permissions and AWS-compatible policy syntax.
-
-### Getting Started
-
 1. On first boot, `data/.myfsio.sys/config/iam.json` is seeded with `localadmin / localadmin` that has wildcard access.
 2. Sign into the UI using those credentials, then open **IAM**:
    - **Create user**: supply a display name and optional JSON inline policy array.
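The removed backup recipe translates directly to Python; a sketch mirroring the same steps (paths as in the docs above; adjust for your deployment):

```python
# Timestamped backup of the config directory plus a full-data tarball.
import shutil
import tarfile
from datetime import datetime
from pathlib import Path

stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
backup_dir = Path("backups") / stamp
backup_dir.mkdir(parents=True, exist_ok=True)

# Critical config files (iam.json, bucket_policies.json, ...)
shutil.copytree("data/.myfsio.sys/config", backup_dir / "config")

# Optional full-data snapshot
with tarfile.open(Path("backups") / f"data_{stamp}.tar.gz", "w:gz") as tar:
    tar.add("data", arcname="data")
```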
@@ -613,241 +104,48 @@ MyFSIO implements a comprehensive Identity and Access Management (IAM) system th
    - **Policy editor**: select a user, paste an array of objects (`{"bucket": "*", "actions": ["list", "read"]}`), and submit. Alias support includes AWS-style verbs (e.g., `s3:GetObject`).
 3. Wildcard action `iam:*` is supported for admin user definitions.
 
 ### Authentication
-
-The API expects every request to include authentication headers. The UI persists them in the Flask session after login.
-
-| Header | Description |
-| --- | --- |
-| `X-Access-Key` | The user's access key identifier |
-| `X-Secret-Key` | The user's secret key for signing |
-
-**Security Features:**
-- **Lockout Protection**: After `AUTH_MAX_ATTEMPTS` (default: 5) failed login attempts, the account is locked for `AUTH_LOCKOUT_MINUTES` (default: 15 minutes).
-- **Session Management**: UI sessions remain valid for `SESSION_LIFETIME_DAYS` (default: 30 days).
-- **Hot Reload**: IAM configuration changes take effect immediately without restart.
-
-### Permission Model
-
-MyFSIO uses a two-layer permission model:
-
-1. **IAM User Policies** – Define what a user can do across the system (stored in `iam.json`)
-2. **Bucket Policies** – Define who can access a specific bucket (stored in `bucket_policies.json`)
-
-Both layers are evaluated for each request. A user must have permission in their IAM policy AND the bucket policy must allow the action (or have no explicit deny).
+The API expects every request to include `X-Access-Key` and `X-Secret-Key` headers. The UI persists them in the Flask session after login.
 
 ### Available IAM Actions
 
-#### S3 Actions (Bucket/Object Operations)
-
 | Action | Description | AWS Aliases |
 | --- | --- | --- |
 | `list` | List buckets and objects | `s3:ListBucket`, `s3:ListAllMyBuckets`, `s3:ListBucketVersions`, `s3:ListMultipartUploads`, `s3:ListParts` |
-| `read` | Download objects, get metadata | `s3:GetObject`, `s3:GetObjectVersion`, `s3:GetObjectTagging`, `s3:GetObjectVersionTagging`, `s3:GetObjectAcl`, `s3:GetBucketVersioning`, `s3:HeadObject`, `s3:HeadBucket` |
-| `write` | Upload objects, create buckets, manage tags | `s3:PutObject`, `s3:CreateBucket`, `s3:PutObjectTagging`, `s3:PutBucketVersioning`, `s3:CreateMultipartUpload`, `s3:UploadPart`, `s3:CompleteMultipartUpload`, `s3:AbortMultipartUpload`, `s3:CopyObject` |
-| `delete` | Remove objects, versions, and buckets | `s3:DeleteObject`, `s3:DeleteObjectVersion`, `s3:DeleteBucket`, `s3:DeleteObjectTagging` |
-| `share` | Manage Access Control Lists (ACLs) | `s3:PutObjectAcl`, `s3:PutBucketAcl`, `s3:GetBucketAcl` |
+| `read` | Download objects | `s3:GetObject`, `s3:GetObjectVersion`, `s3:GetObjectTagging`, `s3:HeadObject`, `s3:HeadBucket` |
+| `write` | Upload objects, create buckets | `s3:PutObject`, `s3:CreateBucket`, `s3:CreateMultipartUpload`, `s3:UploadPart`, `s3:CompleteMultipartUpload`, `s3:AbortMultipartUpload`, `s3:CopyObject` |
+| `delete` | Remove objects and buckets | `s3:DeleteObject`, `s3:DeleteObjectVersion`, `s3:DeleteBucket` |
+| `share` | Manage ACLs | `s3:PutObjectAcl`, `s3:PutBucketAcl`, `s3:GetBucketAcl` |
 | `policy` | Manage bucket policies | `s3:PutBucketPolicy`, `s3:GetBucketPolicy`, `s3:DeleteBucketPolicy` |
-| `lifecycle` | Manage lifecycle rules | `s3:GetLifecycleConfiguration`, `s3:PutLifecycleConfiguration`, `s3:DeleteLifecycleConfiguration`, `s3:GetBucketLifecycle`, `s3:PutBucketLifecycle` |
-| `cors` | Manage CORS configuration | `s3:GetBucketCors`, `s3:PutBucketCors`, `s3:DeleteBucketCors` |
-| `replication` | Configure and manage replication | `s3:GetReplicationConfiguration`, `s3:PutReplicationConfiguration`, `s3:DeleteReplicationConfiguration`, `s3:ReplicateObject`, `s3:ReplicateTags`, `s3:ReplicateDelete` |
+| `replication` | Configure and manage replication | `s3:GetReplicationConfiguration`, `s3:PutReplicationConfiguration`, `s3:ReplicateObject`, `s3:ReplicateTags`, `s3:ReplicateDelete` |
+| `iam:list_users` | View IAM users | `iam:ListUsers` |
+| `iam:create_user` | Create IAM users | `iam:CreateUser` |
 
-#### IAM Actions (User Management)
-
-| Action | Description | AWS Aliases |
-| --- | --- | --- |
-| `iam:list_users` | View all IAM users and their policies | `iam:ListUsers` |
-| `iam:create_user` | Create new IAM users | `iam:CreateUser` |
 | `iam:delete_user` | Delete IAM users | `iam:DeleteUser` |
-| `iam:rotate_key` | Rotate user secret keys | `iam:RotateAccessKey` |
+| `iam:rotate_key` | Rotate user secrets | `iam:RotateAccessKey` |
 | `iam:update_policy` | Modify user policies | `iam:PutUserPolicy` |
-| `iam:*` | **Admin wildcard** – grants all IAM actions | — |
+| `iam:*` | All IAM actions (admin wildcard) | — |
 
-#### Wildcards
+### Example Policies
 
-| Wildcard | Scope | Description |
-| --- | --- | --- |
-| `*` (in actions) | All S3 actions | Grants `list`, `read`, `write`, `delete`, `share`, `policy`, `lifecycle`, `cors`, `replication` |
-| `iam:*` | All IAM actions | Grants all `iam:*` actions for user management |
-| `*` (in bucket) | All buckets | Policy applies to every bucket |
-
-### IAM Policy Structure
-
-User policies are stored as a JSON array of policy objects. Each object specifies a bucket and the allowed actions:
-
+**Full Control (admin):**
 ```json
-[
-  {
-    "bucket": "<bucket-name-or-wildcard>",
-    "actions": ["<action1>", "<action2>", ...]
-  }
-]
+[{"bucket": "*", "actions": ["list", "read", "write", "delete", "share", "policy", "replication", "iam:*"]}]
 ```
 
-**Fields:**
-- `bucket`: The bucket name (case-insensitive) or `*` for all buckets
-- `actions`: Array of action strings (simple names or AWS aliases)
-
-### Example User Policies
-
-**Full Administrator (complete system access):**
-```json
-[{"bucket": "*", "actions": ["list", "read", "write", "delete", "share", "policy", "lifecycle", "cors", "replication", "iam:*"]}]
-```
-
-**Read-Only User (browse and download only):**
+**Read-Only:**
 ```json
 [{"bucket": "*", "actions": ["list", "read"]}]
 ```
 
-**Single Bucket Full Access (no access to other buckets):**
+**Single Bucket Access (no listing other buckets):**
 ```json
-[{"bucket": "user-bucket", "actions": ["list", "read", "write", "delete"]}]
+[{"bucket": "user-bucket", "actions": ["read", "write", "delete"]}]
 ```
 
-**Multiple Bucket Access (different permissions per bucket):**
+**Bucket Access with Replication:**
 ```json
-[
-  {"bucket": "public-data", "actions": ["list", "read"]},
-  {"bucket": "my-uploads", "actions": ["list", "read", "write", "delete"]},
-  {"bucket": "team-shared", "actions": ["list", "read", "write"]}
-]
+[{"bucket": "my-bucket", "actions": ["list", "read", "write", "delete", "replication"]}]
 ```
 
-**IAM Manager (manage users but no data access):**
-```json
-[{"bucket": "*", "actions": ["iam:list_users", "iam:create_user", "iam:delete_user", "iam:rotate_key", "iam:update_policy"]}]
-```
-
-**Replication Operator (manage replication only):**
-```json
-[{"bucket": "*", "actions": ["list", "read", "replication"]}]
-```
-
-**Lifecycle Manager (configure object expiration):**
-```json
-[{"bucket": "*", "actions": ["list", "lifecycle"]}]
-```
-
-**CORS Administrator (configure cross-origin access):**
-```json
-[{"bucket": "*", "actions": ["cors"]}]
-```
-
-**Bucket Administrator (full bucket config, no IAM access):**
-```json
-[{"bucket": "my-bucket", "actions": ["list", "read", "write", "delete", "policy", "lifecycle", "cors"]}]
-```
-
-**Upload-Only User (write but cannot read back):**
-```json
-[{"bucket": "drop-box", "actions": ["write"]}]
-```
-
-**Backup Operator (read, list, and replicate):**
-```json
-[{"bucket": "*", "actions": ["list", "read", "replication"]}]
-```
-
-### Using AWS-Style Action Names
-
-You can use AWS S3 action names instead of simple names. They are automatically normalized:
-
-```json
-[
-  {
-    "bucket": "my-bucket",
-    "actions": [
-      "s3:ListBucket",
-      "s3:GetObject",
-      "s3:PutObject",
-      "s3:DeleteObject"
-    ]
-  }
-]
-```
-
-This is equivalent to:
-```json
-[{"bucket": "my-bucket", "actions": ["list", "read", "write", "delete"]}]
-```
-
-### Managing Users via API
-
-```bash
-# List all users (requires iam:list_users)
-curl http://localhost:5000/iam/users \
-  -H "X-Access-Key: ..." -H "X-Secret-Key: ..."
-
-# Create a new user (requires iam:create_user)
-curl -X POST http://localhost:5000/iam/users \
-  -H "Content-Type: application/json" \
-  -H "X-Access-Key: ..." -H "X-Secret-Key: ..." \
-  -d '{
-    "display_name": "New User",
-    "policies": [{"bucket": "*", "actions": ["list", "read"]}]
-  }'
-
-# Rotate user secret (requires iam:rotate_key)
-curl -X POST http://localhost:5000/iam/users/<access-key>/rotate \
-  -H "X-Access-Key: ..." -H "X-Secret-Key: ..."
-
-# Update user policies (requires iam:update_policy)
-curl -X PUT http://localhost:5000/iam/users/<access-key>/policies \
-  -H "Content-Type: application/json" \
-  -H "X-Access-Key: ..." -H "X-Secret-Key: ..." \
-  -d '[{"bucket": "*", "actions": ["list", "read", "write"]}]'
-
-# Delete a user (requires iam:delete_user)
-curl -X DELETE http://localhost:5000/iam/users/<access-key> \
-  -H "X-Access-Key: ..." -H "X-Secret-Key: ..."
-```
-
-### Permission Precedence
-
-When a request is made, permissions are evaluated in this order:
-
-1. **Authentication** – Verify the access key and secret key are valid
-2. **Lockout Check** – Ensure the account is not locked due to failed attempts
-3. **IAM Policy Check** – Verify the user has the required action for the target bucket
-4. **Bucket Policy Check** – If a bucket policy exists, verify it allows the action
-
-A request is allowed only if:
-- The IAM policy grants the action, AND
-- The bucket policy allows the action (or no bucket policy exists)
-
-### Common Permission Scenarios
-
-| Scenario | Required Actions |
-| --- | --- |
-| Browse bucket contents | `list` |
-| Download a file | `read` |
-| Upload a file | `write` |
-| Delete a file | `delete` |
-| Generate presigned URL (GET) | `read` |
-| Generate presigned URL (PUT) | `write` |
-| Generate presigned URL (DELETE) | `delete` |
-| Enable versioning | `write` (includes `s3:PutBucketVersioning`) |
-| View bucket policy | `policy` |
-| Modify bucket policy | `policy` |
-| Configure lifecycle rules | `lifecycle` |
-| View lifecycle rules | `lifecycle` |
-| Configure CORS | `cors` |
-| View CORS rules | `cors` |
-| Configure replication | `replication` (admin-only for creation) |
-| Pause/resume replication | `replication` |
-| Manage other users | `iam:*` or specific `iam:` actions |
-| Set bucket quotas | `iam:*` or `iam:list_users` (admin feature) |
-
-### Security Best Practices
-
-1. **Principle of Least Privilege** – Grant only the permissions users need
-2. **Avoid Wildcards** – Use specific bucket names instead of `*` when possible
-3. **Rotate Secrets Regularly** – Use the rotate key feature periodically
-4. **Separate Admin Accounts** – Don't use admin accounts for daily operations
-5. **Monitor Failed Logins** – Check logs for repeated authentication failures
-6. **Use Bucket Policies for Fine-Grained Control** – Combine with IAM for defense in depth
-
 ## 5. Bucket Policies & Presets
 
 - **Storage**: Policies are persisted in `data/.myfsio.sys/config/bucket_policies.json` under `{"policies": {"bucket": {...}}}`.
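The removed curl examples translate directly to `requests`; a sketch using the seeded `localadmin` credentials from the Getting Started list above (endpoints and headers are taken from the removed docs; the response shapes are assumed to be JSON):

```python
# List users and create a read-only user against the IAM endpoints.
import requests

BASE = "http://localhost:5000"
AUTH = {"X-Access-Key": "localadmin", "X-Secret-Key": "localadmin"}

# List users (requires iam:list_users)
print(requests.get(f"{BASE}/iam/users", headers=AUTH, timeout=5).json())

# Create a read-only user (requires iam:create_user)
payload = {
    "display_name": "New User",
    "policies": [{"bucket": "*", "actions": ["list", "read"]}],
}
resp = requests.post(f"{BASE}/iam/users", json=payload, headers=AUTH, timeout=5)
resp.raise_for_status()
```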
@@ -878,48 +176,6 @@ curl -X PUT http://127.0.0.1:5000/bucket-policy/test \
 
 The UI will reflect this change as soon as the request completes thanks to the hot reload.
 
-### UI Object Browser
-
-The bucket detail page includes a powerful object browser with the following features:
-
-#### Folder Navigation
-
-Objects with forward slashes (`/`) in their keys are displayed as a folder hierarchy. Click a folder row to navigate into it. A breadcrumb navigation bar shows your current path and allows quick navigation back to parent folders or the root.
-
-#### Pagination & Infinite Scroll
-
-- Objects load in configurable batches (50, 100, 150, 200, or 250 per page)
-- Scroll to the bottom to automatically load more objects (infinite scroll)
-- A **Load more** button is available as a fallback for touch devices or when infinite scroll doesn't trigger
-- The footer shows the current load status (e.g., "Showing 100 of 500 objects")
-
-#### Bulk Operations
-
-- Select multiple objects using checkboxes
-- **Bulk Delete**: Delete multiple objects at once
-- **Bulk Download**: Download selected objects as individual files
-
-#### Search & Filter
-
-Use the search box to filter objects by name in real-time. The filter applies to the currently loaded objects.
-
-#### Error Handling
-
-If object loading fails (e.g., network error), a friendly error message is displayed with a **Retry** button to attempt loading again.
-
-#### Object Preview
-
-Click any object row to view its details in the preview sidebar:
-- File size and last modified date
-- ETag (content hash)
-- Custom metadata (if present)
-- Download and presign (share link) buttons
-- Version history (when versioning is enabled)
-
-#### Drag & Drop Upload
-
-Drag files directly onto the objects table to upload them to the current bucket and folder path.
-
 ## 6. Presigned URLs
 
 - Trigger from the UI using the **Presign** button after selecting an object.
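Presigned URLs can also be minted client-side with boto3, assuming the API's SigV4 support extends to presigning (the bucket, key, and credentials below are placeholders):

```python
# Sketch: generate a one-hour presigned GET URL against the API endpoint.
import boto3
from botocore.client import Config

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:5000",
    aws_access_key_id="ACCESS",
    aws_secret_access_key="SECRET",
    region_name="us-east-1",            # matches AWS_REGION default
    config=Config(signature_version="s3v4"),
)
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-bucket", "Key": "report.pdf"},
    ExpiresIn=3600,
)
print(url)
```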
@@ -1321,3 +577,9 @@ DELETE /bucket-policy/<bucket>   # Delete policy
 GET /<bucket>?quota              # Get bucket quota
 PUT /<bucket>?quota              # Set bucket quota (admin only)
 ```
+
+## 14. Next Steps
+
+- Tailor IAM + policy JSON files for team-ready presets.
+- Wrap `run_api.py` with gunicorn or another WSGI server for long-running workloads (see the sketch below).
+- Extend `bucket_policies.json` to cover Deny statements that simulate production security controls.
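For the gunicorn bullet added above, a minimal sketch of a WSGI entry module, assuming `create_api_app()` (imported in run.py below) returns the Flask application:

```python
# wsgi.py — hypothetical entry module for a WSGI server.
from app import create_api_app

app = create_api_app()

# Launch with, e.g.:  gunicorn -w 4 -b 0.0.0.0:5000 wsgi:app
```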
@@ -1,5 +0,0 @@
-[pytest]
-testpaths = tests
-norecursedirs = data .git __pycache__ .venv
-markers =
-    integration: marks tests as integration tests (may require external services)
@@ -1,11 +1,10 @@
 Flask>=3.1.2
-Flask-Limiter>=4.1.1
+Flask-Limiter>=4.1.0
-Flask-Cors>=6.0.2
+Flask-Cors>=6.0.1
 Flask-WTF>=1.2.2
-python-dotenv>=1.2.1
-pytest>=9.0.2
+pytest>=9.0.1
 requests>=2.32.5
-boto3>=1.42.14
+boto3>=1.42.1
 waitress>=3.0.2
 psutil>=7.1.3
 cryptography>=46.0.3
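After installing the downgraded pins above, a quick stdlib check that the environment actually matches:

```python
# Print installed versions for the packages changed in this diff.
from importlib.metadata import version

for pkg in ("Flask", "Flask-Limiter", "Flask-Cors", "boto3", "pytest"):
    print(pkg, version(pkg))
```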
48  run.py
@@ -6,20 +6,8 @@ import os
 import sys
 import warnings
 from multiprocessing import Process
-from pathlib import Path
-
-from dotenv import load_dotenv
-
-for _env_file in [
-    Path("/opt/myfsio/myfsio.env"),
-    Path.cwd() / ".env",
-    Path.cwd() / "myfsio.env",
-]:
-    if _env_file.exists():
-        load_dotenv(_env_file, override=True)
 
 from app import create_api_app, create_ui_app
-from app.config import AppConfig
 
 
 def _server_host() -> str:
@@ -67,48 +55,12 @@ if __name__ == "__main__":
     parser.add_argument("--ui-port", type=int, default=5100)
     parser.add_argument("--prod", action="store_true", help="Run in production mode using Waitress")
     parser.add_argument("--dev", action="store_true", help="Force development mode (Flask dev server)")
-    parser.add_argument("--check-config", action="store_true", help="Validate configuration and exit")
-    parser.add_argument("--show-config", action="store_true", help="Show configuration summary and exit")
     args = parser.parse_args()
 
-    # Handle config check/show modes
-    if args.check_config or args.show_config:
-        config = AppConfig.from_env()
-        config.print_startup_summary()
-        if args.check_config:
-            issues = config.validate_and_report()
-            critical = [i for i in issues if i.startswith("CRITICAL:")]
-            sys.exit(1 if critical else 0)
-        sys.exit(0)
-
     # Default to production mode when running as compiled binary
     # unless --dev is explicitly passed
     prod_mode = args.prod or (_is_frozen() and not args.dev)
 
-    # Validate configuration before starting
-    config = AppConfig.from_env()
-
-    # Show startup summary only on first run (when marker file doesn't exist)
-    first_run_marker = config.storage_root / ".myfsio.sys" / ".initialized"
-    is_first_run = not first_run_marker.exists()
-
-    if is_first_run:
-        config.print_startup_summary()
-
-    # Check for critical issues that should prevent startup
-    issues = config.validate_and_report()
-    critical_issues = [i for i in issues if i.startswith("CRITICAL:")]
-    if critical_issues:
-        print("ABORTING: Critical configuration issues detected. Fix them before starting.")
-        sys.exit(1)
-
-    # Create the marker file to indicate successful first run
-    try:
-        first_run_marker.parent.mkdir(parents=True, exist_ok=True)
-        first_run_marker.write_text(f"Initialized on {__import__('datetime').datetime.now().isoformat()}\n")
-    except OSError:
-        pass  # Non-critical, just skip marker creation
-
     if prod_mode:
         print("Running in production mode (Waitress)")
     else:
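run.py starts the API and UI as separate processes; a stripped-down sketch of that layout (hosts and ports here are illustrative; see the diff above for the real startup logic):

```python
# Two-process launcher sketch: API on 5000, UI on 5100.
from multiprocessing import Process

from app import create_api_app, create_ui_app

def serve_api() -> None:
    create_api_app().run(host="0.0.0.0", port=5000)

def serve_ui() -> None:
    create_ui_app().run(host="0.0.0.0", port=5100)

if __name__ == "__main__":
    procs = [Process(target=serve_api), Process(target=serve_ui)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```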
@@ -1,370 +0,0 @@
|
|||||||
#!/bin/bash
|
|
||||||
#
|
|
||||||
# MyFSIO Installation Script
|
|
||||||
# This script sets up MyFSIO for production use on Linux systems.
|
|
||||||
#
|
|
||||||
# Usage:
|
|
||||||
# ./install.sh [OPTIONS]
|
|
||||||
#
|
|
||||||
# Options:
|
|
||||||
# --install-dir DIR Installation directory (default: /opt/myfsio)
|
|
||||||
# --data-dir DIR Data directory (default: /var/lib/myfsio)
|
|
||||||
# --log-dir DIR Log directory (default: /var/log/myfsio)
|
|
||||||
# --user USER System user to run as (default: myfsio)
|
|
||||||
# --port PORT API port (default: 5000)
|
|
||||||
# --ui-port PORT UI port (default: 5100)
|
|
||||||
# --api-url URL Public API URL (for presigned URLs behind proxy)
|
|
||||||
# --no-systemd Skip systemd service creation
|
|
||||||
# --binary PATH Path to myfsio binary (will download if not provided)
|
|
||||||
# -y, --yes Skip confirmation prompts
|
|
||||||
#
|
|
||||||
|
|
||||||
set -e
|
|
||||||
|
|
||||||
INSTALL_DIR="/opt/myfsio"
|
|
||||||
DATA_DIR="/var/lib/myfsio"
|
|
||||||
LOG_DIR="/var/log/myfsio"
|
|
||||||
SERVICE_USER="myfsio"
|
|
||||||
API_PORT="5000"
|
|
||||||
UI_PORT="5100"
|
|
||||||
API_URL=""
|
|
||||||
SKIP_SYSTEMD=false
|
|
||||||
BINARY_PATH=""
|
|
||||||
AUTO_YES=false
|
|
||||||
|
|
||||||
while [[ $# -gt 0 ]]; do
|
|
||||||
case $1 in
|
|
||||||
--install-dir)
|
|
||||||
INSTALL_DIR="$2"
|
|
||||||
shift 2
|
|
||||||
;;
|
|
||||||
--data-dir)
|
|
||||||
DATA_DIR="$2"
|
|
||||||
shift 2
|
|
||||||
;;
|
|
||||||
--log-dir)
|
|
||||||
LOG_DIR="$2"
|
|
||||||
shift 2
|
|
||||||
;;
|
|
||||||
--user)
|
|
||||||
SERVICE_USER="$2"
|
|
||||||
shift 2
|
|
||||||
;;
|
|
||||||
--port)
|
|
||||||
API_PORT="$2"
|
|
||||||
shift 2
|
|
||||||
;;
|
|
||||||
--ui-port)
|
|
||||||
UI_PORT="$2"
|
|
||||||
shift 2
|
|
||||||
;;
|
|
||||||
--api-url)
|
|
||||||
API_URL="$2"
|
|
||||||
shift 2
|
|
||||||
;;
|
|
||||||
--no-systemd)
|
|
||||||
SKIP_SYSTEMD=true
|
|
||||||
shift
|
|
||||||
;;
|
|
||||||
--binary)
|
|
||||||
BINARY_PATH="$2"
|
|
||||||
shift 2
|
|
||||||
;;
|
|
||||||
-y|--yes)
|
|
||||||
AUTO_YES=true
|
|
||||||
shift
|
|
||||||
;;
|
|
||||||
-h|--help)
|
|
||||||
head -30 "$0" | tail -25
|
|
||||||
exit 0
|
|
||||||
;;
|
|
||||||
*)
|
|
||||||
echo "Unknown option: $1"
|
|
||||||
exit 1
|
|
||||||
;;
|
|
||||||
esac
|
|
||||||
done
|
|
||||||
|
|
||||||
echo ""
|
|
||||||
echo "============================================================"
|
|
||||||
echo " MyFSIO Installation Script"
|
|
||||||
echo " S3-Compatible Object Storage"
|
|
||||||
echo "============================================================"
|
|
||||||
echo ""
|
|
||||||
echo "Documentation: https://go.jzwsite.com/myfsio"
|
|
||||||
echo ""
|
|
||||||
|
|
||||||
if [[ $EUID -ne 0 ]]; then
|
|
||||||
echo "Error: This script must be run as root (use sudo)"
|
|
||||||
exit 1
|
|
||||||
fi
|
|
||||||
|
|
||||||
echo "------------------------------------------------------------"
|
|
||||||
echo "STEP 1: Review Installation Configuration"
|
|
||||||
echo "------------------------------------------------------------"
|
|
||||||
echo ""
|
|
||||||
echo " Install directory: $INSTALL_DIR"
|
|
||||||
echo " Data directory: $DATA_DIR"
|
|
||||||
echo " Log directory: $LOG_DIR"
|
|
||||||
echo " Service user: $SERVICE_USER"
|
|
||||||
echo " API port: $API_PORT"
|
|
||||||
echo " UI port: $UI_PORT"
|
|
||||||
if [[ -n "$API_URL" ]]; then
|
|
||||||
echo " Public API URL: $API_URL"
|
|
||||||
fi
|
|
||||||
if [[ -n "$BINARY_PATH" ]]; then
|
|
||||||
echo " Binary path: $BINARY_PATH"
|
|
||||||
fi
|
|
||||||
echo ""
|
|
||||||
|
|
||||||
if [[ "$AUTO_YES" != true ]]; then
|
|
||||||
read -p "Do you want to proceed with these settings? [y/N] " -n 1 -r
|
|
||||||
echo
|
|
||||||
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
|
|
||||||
echo "Installation cancelled."
|
|
||||||
exit 0
|
|
||||||
fi
|
|
||||||
fi
|
|
||||||
|
|
||||||
echo ""
|
|
||||||
echo "------------------------------------------------------------"
|
|
||||||
echo "STEP 2: Creating System User"
|
|
||||||
echo "------------------------------------------------------------"
|
|
||||||
echo ""
|
|
||||||
if id "$SERVICE_USER" &>/dev/null; then
|
|
||||||
echo " [OK] User '$SERVICE_USER' already exists"
|
|
||||||
else
|
|
||||||
useradd --system --no-create-home --shell /usr/sbin/nologin "$SERVICE_USER"
|
|
||||||
echo " [OK] Created user '$SERVICE_USER'"
|
|
||||||
fi
|
|
||||||
|
|
||||||
echo ""
|
|
||||||
echo "------------------------------------------------------------"
|
|
||||||
echo "STEP 3: Creating Directories"
|
|
||||||
echo "------------------------------------------------------------"
|
|
||||||
echo ""
|
|
||||||
mkdir -p "$INSTALL_DIR"
|
|
||||||
echo " [OK] Created $INSTALL_DIR"
|
|
||||||
mkdir -p "$DATA_DIR"
|
|
||||||
echo " [OK] Created $DATA_DIR"
|
|
||||||
mkdir -p "$LOG_DIR"
|
|
||||||
echo " [OK] Created $LOG_DIR"
|
|
||||||
|
|
||||||
echo ""
|
|
||||||
echo "------------------------------------------------------------"
|
|
||||||
echo "STEP 4: Installing Binary"
|
|
||||||
echo "------------------------------------------------------------"
|
|
||||||
echo ""
|
|
||||||
if [[ -n "$BINARY_PATH" ]]; then
|
|
||||||
if [[ -f "$BINARY_PATH" ]]; then
|
|
||||||
cp "$BINARY_PATH" "$INSTALL_DIR/myfsio"
|
|
||||||
echo " [OK] Copied binary from $BINARY_PATH"
|
|
||||||
else
|
|
||||||
echo " [ERROR] Binary not found at $BINARY_PATH"
|
|
||||||
exit 1
|
|
||||||
fi
|
|
||||||
elif [[ -f "./myfsio" ]]; then
|
|
||||||
cp "./myfsio" "$INSTALL_DIR/myfsio"
|
|
||||||
echo " [OK] Copied binary from ./myfsio"
|
|
||||||
else
|
|
||||||
echo " [ERROR] No binary provided."
|
|
||||||
echo " Use --binary PATH or place 'myfsio' in current directory"
|
|
||||||
exit 1
|
|
||||||
fi
|
|
||||||
chmod +x "$INSTALL_DIR/myfsio"
|
|
||||||
echo " [OK] Set executable permissions"
|
|
||||||
|
|
||||||
echo ""
|
|
||||||
echo "------------------------------------------------------------"
|
|
||||||
echo "STEP 5: Generating Secret Key"
|
|
||||||
echo "------------------------------------------------------------"
|
|
||||||
echo ""
|
|
||||||
SECRET_KEY=$(openssl rand -base64 32)
|
|
||||||
echo " [OK] Generated secure SECRET_KEY"
|
|
||||||
|
|
||||||
echo ""
|
|
||||||
echo "------------------------------------------------------------"
|
|
||||||
echo "STEP 6: Creating Configuration File"
|
|
||||||
echo "------------------------------------------------------------"
|
|
||||||
echo ""
|
|
||||||
cat > "$INSTALL_DIR/myfsio.env" << EOF
|
|
||||||
# MyFSIO Configuration
|
|
||||||
# Generated by install.sh on $(date)
|
|
||||||
# Documentation: https://go.jzwsite.com/myfsio
|
|
||||||
|
|
||||||
# Storage paths
|
|
||||||
STORAGE_ROOT=$DATA_DIR
|
|
||||||
LOG_DIR=$LOG_DIR
|
|
||||||
|
|
||||||
# Network
|
|
||||||
APP_HOST=0.0.0.0
|
|
||||||
APP_PORT=$API_PORT
|
|
||||||
|
|
||||||
# Security - CHANGE IN PRODUCTION
|
|
||||||
SECRET_KEY=$SECRET_KEY
|
|
||||||
CORS_ORIGINS=*
|
|
||||||
|
|
||||||
# Public URL (set this if behind a reverse proxy)
|
|
||||||
$(if [[ -n "$API_URL" ]]; then echo "API_BASE_URL=$API_URL"; else echo "# API_BASE_URL=https://s3.example.com"; fi)
|
|
||||||
|
|
||||||
# Logging
|
|
||||||
LOG_LEVEL=INFO
|
|
||||||
LOG_TO_FILE=true
|
|
||||||
|
|
||||||
# Rate limiting
|
|
||||||
RATE_LIMIT_DEFAULT=200 per minute
|
|
||||||
|
|
||||||
# Optional: Encryption (uncomment to enable)
|
|
||||||
# ENCRYPTION_ENABLED=true
|
|
||||||
# KMS_ENABLED=true
|
|
||||||
EOF
|
|
||||||
chmod 600 "$INSTALL_DIR/myfsio.env"
|
|
||||||
echo " [OK] Created $INSTALL_DIR/myfsio.env"
|
|
||||||
|
|
||||||
echo ""
|
|
||||||
echo "------------------------------------------------------------"
|
|
||||||
echo "STEP 7: Setting Permissions"
|
|
||||||
echo "------------------------------------------------------------"
|
|
||||||
echo ""
|
|
||||||
chown -R "$SERVICE_USER:$SERVICE_USER" "$INSTALL_DIR"
|
|
||||||
echo " [OK] Set ownership for $INSTALL_DIR"
|
|
||||||
chown -R "$SERVICE_USER:$SERVICE_USER" "$DATA_DIR"
|
|
||||||
echo " [OK] Set ownership for $DATA_DIR"
|
|
||||||
chown -R "$SERVICE_USER:$SERVICE_USER" "$LOG_DIR"
|
|
||||||
echo " [OK] Set ownership for $LOG_DIR"
|
|
||||||
|
|
||||||
if [[ "$SKIP_SYSTEMD" != true ]]; then
|
|
||||||
echo ""
|
|
||||||
echo "------------------------------------------------------------"
|
|
||||||
echo "STEP 8: Creating Systemd Service"
|
|
||||||
echo "------------------------------------------------------------"
|
|
||||||
echo ""
|
|
||||||
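    # ProtectSystem=strict mounts the filesystem read-only for the service;
    # ReadWritePaths below re-grants write access only to the data and log dirs.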
    cat > /etc/systemd/system/myfsio.service << EOF
[Unit]
Description=MyFSIO S3-Compatible Storage
Documentation=https://go.jzwsite.com/myfsio
After=network.target

[Service]
Type=simple
User=$SERVICE_USER
Group=$SERVICE_USER
WorkingDirectory=$INSTALL_DIR
EnvironmentFile=$INSTALL_DIR/myfsio.env
ExecStart=$INSTALL_DIR/myfsio
Restart=on-failure
RestartSec=5

# Security hardening
NoNewPrivileges=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=$DATA_DIR $LOG_DIR
PrivateTmp=true

# Resource limits (adjust as needed)
# LimitNOFILE=65535
# MemoryMax=2G

[Install]
WantedBy=multi-user.target
EOF

    systemctl daemon-reload
    echo " [OK] Created /etc/systemd/system/myfsio.service"
    echo " [OK] Reloaded systemd daemon"
else
    echo ""
    echo "------------------------------------------------------------"
    echo "STEP 8: Skipping Systemd Service (--no-systemd flag used)"
    echo "------------------------------------------------------------"
fi

echo ""
echo "============================================================"
echo " Installation Complete!"
echo "============================================================"
echo ""

if [[ "$SKIP_SYSTEMD" != true ]]; then
    echo "------------------------------------------------------------"
    echo "STEP 9: Start the Service"
    echo "------------------------------------------------------------"
    echo ""

    if [[ "$AUTO_YES" != true ]]; then
        read -p "Would you like to start MyFSIO now? [Y/n] " -n 1 -r
        echo
        START_SERVICE=true
        if [[ $REPLY =~ ^[Nn]$ ]]; then
            START_SERVICE=false
        fi
    else
        START_SERVICE=true
    fi

    if [[ "$START_SERVICE" == true ]]; then
        echo " Starting MyFSIO service..."
        systemctl start myfsio
        echo " [OK] Service started"
        echo ""

        read -p "Would you like to enable MyFSIO to start on boot? [Y/n] " -n 1 -r
        echo
        if [[ ! $REPLY =~ ^[Nn]$ ]]; then
            systemctl enable myfsio
            echo " [OK] Service enabled on boot"
        fi
        echo ""

        sleep 2
        echo " Service Status:"
        echo " ---------------"
        if systemctl is-active --quiet myfsio; then
            echo " [OK] MyFSIO is running"
        else
            echo " [WARNING] MyFSIO may not have started correctly"
            echo " Check logs with: journalctl -u myfsio -f"
        fi
    else
        echo " [SKIPPED] Service not started"
        echo ""
        echo " To start manually, run:"
        echo " sudo systemctl start myfsio"
        echo ""
        echo " To enable on boot, run:"
        echo " sudo systemctl enable myfsio"
    fi
fi

echo ""
echo "============================================================"
echo " Summary"
echo "============================================================"
echo ""
echo "Access Points:"
echo " API: http://$(hostname -I 2>/dev/null | awk '{print $1}' || echo "localhost"):$API_PORT"
echo " UI: http://$(hostname -I 2>/dev/null | awk '{print $1}' || echo "localhost"):$UI_PORT/ui"
echo ""
echo "Default Credentials:"
echo " Username: localadmin"
echo " Password: localadmin"
echo " [!] WARNING: Change these immediately after first login!"
echo ""
echo "Configuration Files:"
echo " Environment: $INSTALL_DIR/myfsio.env"
echo " IAM Users: $DATA_DIR/.myfsio.sys/config/iam.json"
echo " Bucket Policies: $DATA_DIR/.myfsio.sys/config/bucket_policies.json"
echo ""
echo "Useful Commands:"
echo " Check status: sudo systemctl status myfsio"
echo " View logs: sudo journalctl -u myfsio -f"
echo " Restart: sudo systemctl restart myfsio"
echo " Stop: sudo systemctl stop myfsio"
echo ""
echo "Documentation: https://go.jzwsite.com/myfsio"
echo ""
echo "============================================================"
echo " Thank you for installing MyFSIO!"
echo "============================================================"
echo ""
@@ -1,244 +0,0 @@
#!/bin/bash
#
# MyFSIO Uninstall Script
# This script removes MyFSIO from your system.
#
# Usage:
#   ./uninstall.sh [OPTIONS]
#
# Options:
#   --keep-data          Don't remove data directory
#   --keep-logs          Don't remove log directory
#   --install-dir DIR    Installation directory (default: /opt/myfsio)
#   --data-dir DIR       Data directory (default: /var/lib/myfsio)
#   --log-dir DIR        Log directory (default: /var/log/myfsio)
#   --user USER          System user (default: myfsio)
#   -y, --yes            Skip confirmation prompts
#
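# Example (editor's illustration, built from the options above):
#   sudo ./uninstall.sh --keep-data -y
#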

set -e

INSTALL_DIR="/opt/myfsio"
DATA_DIR="/var/lib/myfsio"
LOG_DIR="/var/log/myfsio"
SERVICE_USER="myfsio"
KEEP_DATA=false
KEEP_LOGS=false
AUTO_YES=false

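# Parse command-line flags; value-taking options consume two positional args.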
while [[ $# -gt 0 ]]; do
    case $1 in
        --keep-data)
            KEEP_DATA=true
            shift
            ;;
        --keep-logs)
            KEEP_LOGS=true
            shift
            ;;
        --install-dir)
            INSTALL_DIR="$2"
            shift 2
            ;;
        --data-dir)
            DATA_DIR="$2"
            shift 2
            ;;
        --log-dir)
            LOG_DIR="$2"
            shift 2
            ;;
        --user)
            SERVICE_USER="$2"
            shift 2
            ;;
        -y|--yes)
            AUTO_YES=true
            shift
            ;;
        -h|--help)
            head -20 "$0" | tail -15
            exit 0
            ;;
        *)
            echo "Unknown option: $1"
            exit 1
            ;;
    esac
done

echo ""
echo "============================================================"
echo " MyFSIO Uninstallation Script"
echo "============================================================"
echo ""
echo "Documentation: https://go.jzwsite.com/myfsio"
echo ""

if [[ $EUID -ne 0 ]]; then
    echo "Error: This script must be run as root (use sudo)"
    exit 1
fi

echo "------------------------------------------------------------"
echo "STEP 1: Review What Will Be Removed"
echo "------------------------------------------------------------"
echo ""
echo "The following items will be removed:"
echo ""
echo " Install directory: $INSTALL_DIR"
if [[ "$KEEP_DATA" != true ]]; then
    echo " Data directory: $DATA_DIR (ALL YOUR DATA WILL BE DELETED!)"
else
    echo " Data directory: $DATA_DIR (WILL BE KEPT)"
fi
if [[ "$KEEP_LOGS" != true ]]; then
    echo " Log directory: $LOG_DIR"
else
    echo " Log directory: $LOG_DIR (WILL BE KEPT)"
fi
echo " Systemd service: /etc/systemd/system/myfsio.service"
echo " System user: $SERVICE_USER"
echo ""

if [[ "$AUTO_YES" != true ]]; then
    echo "WARNING: This action cannot be undone!"
    echo ""
    read -p "Are you sure you want to uninstall MyFSIO? [y/N] " -n 1 -r
    echo
    if [[ ! $REPLY =~ ^[Yy]$ ]]; then
        echo ""
        echo "Uninstallation cancelled."
        exit 0
    fi

    if [[ "$KEEP_DATA" != true ]]; then
        echo ""
        read -p "This will DELETE ALL YOUR DATA. Type 'DELETE' to confirm: " CONFIRM
        if [[ "$CONFIRM" != "DELETE" ]]; then
            echo ""
            echo "Uninstallation cancelled."
            echo "Tip: Use --keep-data to preserve your data directory"
            exit 0
        fi
    fi
fi

echo ""
echo "------------------------------------------------------------"
echo "STEP 2: Stopping Service"
echo "------------------------------------------------------------"
echo ""
if systemctl is-active --quiet myfsio 2>/dev/null; then
    systemctl stop myfsio
    echo " [OK] Stopped myfsio service"
else
    echo " [SKIP] Service not running"
fi

echo ""
echo "------------------------------------------------------------"
echo "STEP 3: Disabling Service"
echo "------------------------------------------------------------"
echo ""
if systemctl is-enabled --quiet myfsio 2>/dev/null; then
    systemctl disable myfsio
    echo " [OK] Disabled myfsio service"
else
    echo " [SKIP] Service not enabled"
fi

echo ""
echo "------------------------------------------------------------"
echo "STEP 4: Removing Systemd Service File"
echo "------------------------------------------------------------"
echo ""
if [[ -f /etc/systemd/system/myfsio.service ]]; then
    rm -f /etc/systemd/system/myfsio.service
    systemctl daemon-reload
    echo " [OK] Removed /etc/systemd/system/myfsio.service"
    echo " [OK] Reloaded systemd daemon"
else
    echo " [SKIP] Service file not found"
fi

echo ""
echo "------------------------------------------------------------"
echo "STEP 5: Removing Installation Directory"
echo "------------------------------------------------------------"
echo ""
if [[ -d "$INSTALL_DIR" ]]; then
    rm -rf "$INSTALL_DIR"
    echo " [OK] Removed $INSTALL_DIR"
else
    echo " [SKIP] Directory not found: $INSTALL_DIR"
fi

echo ""
echo "------------------------------------------------------------"
echo "STEP 6: Removing Data Directory"
echo "------------------------------------------------------------"
echo ""
if [[ "$KEEP_DATA" != true ]]; then
    if [[ -d "$DATA_DIR" ]]; then
        rm -rf "$DATA_DIR"
        echo " [OK] Removed $DATA_DIR"
    else
        echo " [SKIP] Directory not found: $DATA_DIR"
    fi
else
    echo " [KEPT] Data preserved at: $DATA_DIR"
fi

echo ""
echo "------------------------------------------------------------"
echo "STEP 7: Removing Log Directory"
echo "------------------------------------------------------------"
echo ""
if [[ "$KEEP_LOGS" != true ]]; then
    if [[ -d "$LOG_DIR" ]]; then
        rm -rf "$LOG_DIR"
        echo " [OK] Removed $LOG_DIR"
    else
        echo " [SKIP] Directory not found: $LOG_DIR"
    fi
else
    echo " [KEPT] Logs preserved at: $LOG_DIR"
fi

echo ""
echo "------------------------------------------------------------"
echo "STEP 8: Removing System User"
echo "------------------------------------------------------------"
echo ""
if id "$SERVICE_USER" &>/dev/null; then
    userdel "$SERVICE_USER" 2>/dev/null || true
    echo " [OK] Removed user '$SERVICE_USER'"
else
    echo " [SKIP] User not found: $SERVICE_USER"
fi

echo ""
echo "============================================================"
echo " Uninstallation Complete!"
echo "============================================================"
echo ""

if [[ "$KEEP_DATA" == true ]]; then
    echo "Your data has been preserved at: $DATA_DIR"
    echo ""
    echo "To reinstall MyFSIO with existing data, run:"
    echo " curl -fsSL https://go.jzwsite.com/myfsio-install | sudo bash"
    echo ""
fi

if [[ "$KEEP_LOGS" == true ]]; then
    echo "Your logs have been preserved at: $LOG_DIR"
    echo ""
fi

echo "Thank you for using MyFSIO."
echo "Documentation: https://go.jzwsite.com/myfsio"
echo ""
echo "============================================================"
echo ""
1235	static/css/main.css
File diff suppressed because it is too large
BIN	static/images/MyFISO.ico (new file; after: 200 KiB)
BIN	static/images/MyFISO.png (new file; after: 628 KiB)
BIN	(binary file removed, not shown; before: 200 KiB)
BIN	(binary file removed, not shown; before: 872 KiB)
File diff suppressed because it is too large
@@ -1,192 +0,0 @@
window.BucketDetailOperations = (function() {
    'use strict';

    let showMessage = function() {};
    let escapeHtml = function(s) { return s; };

    function init(config) {
        showMessage = config.showMessage || showMessage;
        escapeHtml = config.escapeHtml || escapeHtml;
    }

    async function loadLifecycleRules(card, endpoint) {
        if (!card || !endpoint) return;
        const body = card.querySelector('[data-lifecycle-body]');
        if (!body) return;

        try {
            const response = await fetch(endpoint);
            const data = await response.json();

            if (!response.ok) {
                body.innerHTML = `<tr><td colspan="5" class="text-center text-danger py-3">${escapeHtml(data.error || 'Failed to load')}</td></tr>`;
                return;
            }

            const rules = data.rules || [];
            if (rules.length === 0) {
                body.innerHTML = '<tr><td colspan="5" class="text-center text-muted py-3">No lifecycle rules configured</td></tr>';
                return;
            }

            body.innerHTML = rules.map(rule => {
                const actions = [];
                if (rule.expiration_days) actions.push(`Delete after ${rule.expiration_days} days`);
                if (rule.noncurrent_days) actions.push(`Delete old versions after ${rule.noncurrent_days} days`);
                if (rule.abort_mpu_days) actions.push(`Abort incomplete MPU after ${rule.abort_mpu_days} days`);

                return `
                    <tr>
                        <td class="fw-medium">${escapeHtml(rule.id)}</td>
                        <td><code>${escapeHtml(rule.prefix || '(all)')}</code></td>
                        <td>${actions.map(a => `<div class="small">${escapeHtml(a)}</div>`).join('')}</td>
                        <td>
                            <span class="badge ${rule.status === 'Enabled' ? 'text-bg-success' : 'text-bg-secondary'}">${escapeHtml(rule.status)}</span>
                        </td>
                        <td class="text-end">
                            <button class="btn btn-sm btn-outline-danger" onclick="BucketDetailOperations.deleteLifecycleRule('${escapeHtml(rule.id)}')">
                                <svg xmlns="http://www.w3.org/2000/svg" width="12" height="12" fill="currentColor" viewBox="0 0 16 16">
                                    <path d="M5.5 5.5A.5.5 0 0 1 6 6v6a.5.5 0 0 1-1 0V6a.5.5 0 0 1 .5-.5zm2.5 0a.5.5 0 0 1 .5.5v6a.5.5 0 0 1-1 0V6a.5.5 0 0 1 .5-.5zm3 .5a.5.5 0 0 0-1 0v6a.5.5 0 0 0 1 0V6z"/>
                                    <path fill-rule="evenodd" d="M14.5 3a1 1 0 0 1-1 1H13v9a2 2 0 0 1-2 2H5a2 2 0 0 1-2-2V4h-.5a1 1 0 0 1-1-1V2a1 1 0 0 1 1-1H6a1 1 0 0 1 1-1h2a1 1 0 0 1 1 1h3.5a1 1 0 0 1 1 1v1zM4.118 4 4 4.059V13a1 1 0 0 0 1 1h6a1 1 0 0 0 1-1V4.059L11.882 4H4.118zM2.5 3V2h11v1h-11z"/>
                                </svg>
                            </button>
                        </td>
                    </tr>
                `;
            }).join('');
        } catch (err) {
            body.innerHTML = `<tr><td colspan="5" class="text-center text-danger py-3">${escapeHtml(err.message)}</td></tr>`;
        }
    }

    async function loadCorsRules(card, endpoint) {
        if (!card || !endpoint) return;
        const body = document.getElementById('cors-rules-body');
        if (!body) return;

        try {
            const response = await fetch(endpoint);
            const data = await response.json();

            if (!response.ok) {
                body.innerHTML = `<tr><td colspan="5" class="text-center text-danger py-3">${escapeHtml(data.error || 'Failed to load')}</td></tr>`;
                return;
            }

            const rules = data.rules || [];
            if (rules.length === 0) {
                body.innerHTML = '<tr><td colspan="5" class="text-center text-muted py-3">No CORS rules configured</td></tr>';
                return;
            }

            body.innerHTML = rules.map((rule, idx) => `
                <tr>
                    <td>${(rule.allowed_origins || []).map(o => `<code class="d-block">${escapeHtml(o)}</code>`).join('')}</td>
                    <td>${(rule.allowed_methods || []).map(m => `<span class="badge text-bg-secondary me-1">${escapeHtml(m)}</span>`).join('')}</td>
                    <td class="small text-muted">${(rule.allowed_headers || []).slice(0, 3).join(', ')}${(rule.allowed_headers || []).length > 3 ? '...' : ''}</td>
                    <td class="text-muted">${rule.max_age_seconds || 0}s</td>
                    <td class="text-end">
                        <button class="btn btn-sm btn-outline-danger" onclick="BucketDetailOperations.deleteCorsRule(${idx})">
                            <svg xmlns="http://www.w3.org/2000/svg" width="12" height="12" fill="currentColor" viewBox="0 0 16 16">
                                <path d="M5.5 5.5A.5.5 0 0 1 6 6v6a.5.5 0 0 1-1 0V6a.5.5 0 0 1 .5-.5zm2.5 0a.5.5 0 0 1 .5.5v6a.5.5 0 0 1-1 0V6a.5.5 0 0 1 .5-.5zm3 .5a.5.5 0 0 0-1 0v6a.5.5 0 0 0 1 0V6z"/>
                                <path fill-rule="evenodd" d="M14.5 3a1 1 0 0 1-1 1H13v9a2 2 0 0 1-2 2H5a2 2 0 0 1-2-2V4h-.5a1 1 0 0 1-1-1V2a1 1 0 0 1 1-1H6a1 1 0 0 1 1-1h2a1 1 0 0 1 1 1h3.5a1 1 0 0 1 1 1v1zM4.118 4 4 4.059V13a1 1 0 0 0 1 1h6a1 1 0 0 0 1-1V4.059L11.882 4H4.118zM2.5 3V2h11v1h-11z"/>
                            </svg>
                        </button>
                    </td>
                </tr>
            `).join('');
        } catch (err) {
            body.innerHTML = `<tr><td colspan="5" class="text-center text-danger py-3">${escapeHtml(err.message)}</td></tr>`;
        }
    }

    async function loadAcl(card, endpoint) {
        if (!card || !endpoint) return;
        const body = card.querySelector('[data-acl-body]');
        if (!body) return;

        try {
            const response = await fetch(endpoint);
            const data = await response.json();

            if (!response.ok) {
                body.innerHTML = `<tr><td colspan="3" class="text-center text-danger py-3">${escapeHtml(data.error || 'Failed to load')}</td></tr>`;
                return;
            }

            const grants = data.grants || [];
            if (grants.length === 0) {
                body.innerHTML = '<tr><td colspan="3" class="text-center text-muted py-3">No ACL grants configured</td></tr>';
                return;
            }

            body.innerHTML = grants.map(grant => {
                const grantee = grant.grantee_type === 'CanonicalUser'
                    ? grant.display_name || grant.grantee_id
                    : grant.grantee_uri || grant.grantee_type;
                return `
                    <tr>
                        <td class="fw-medium">${escapeHtml(grantee)}</td>
                        <td><span class="badge text-bg-info">${escapeHtml(grant.permission)}</span></td>
                        <td class="text-muted small">${escapeHtml(grant.grantee_type)}</td>
                    </tr>
                `;
            }).join('');
        } catch (err) {
            body.innerHTML = `<tr><td colspan="3" class="text-center text-danger py-3">${escapeHtml(err.message)}</td></tr>`;
        }
    }

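    // Both delete helpers below send a JSON DELETE with the CSRF token and
    // re-fetch the affected table on success.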
    async function deleteLifecycleRule(ruleId) {
        if (!confirm(`Delete lifecycle rule "${ruleId}"?`)) return;
        const card = document.getElementById('lifecycle-rules-card');
        if (!card) return;
        const endpoint = card.dataset.lifecycleUrl;
        const csrfToken = window.getCsrfToken ? window.getCsrfToken() : '';

        try {
            const resp = await fetch(endpoint, {
                method: 'DELETE',
                headers: { 'Content-Type': 'application/json', 'X-CSRFToken': csrfToken },
                body: JSON.stringify({ rule_id: ruleId })
            });
            const data = await resp.json();
            if (!resp.ok) throw new Error(data.error || 'Failed to delete');
            showMessage({ title: 'Rule deleted', body: `Lifecycle rule "${ruleId}" has been deleted.`, variant: 'success' });
            loadLifecycleRules(card, endpoint);
        } catch (err) {
            showMessage({ title: 'Delete failed', body: err.message, variant: 'danger' });
        }
    }

    async function deleteCorsRule(index) {
        if (!confirm('Delete this CORS rule?')) return;
        const card = document.getElementById('cors-rules-card');
        if (!card) return;
        const endpoint = card.dataset.corsUrl;
        const csrfToken = window.getCsrfToken ? window.getCsrfToken() : '';

        try {
            const resp = await fetch(endpoint, {
                method: 'DELETE',
                headers: { 'Content-Type': 'application/json', 'X-CSRFToken': csrfToken },
                body: JSON.stringify({ rule_index: index })
            });
            const data = await resp.json();
            if (!resp.ok) throw new Error(data.error || 'Failed to delete');
            showMessage({ title: 'Rule deleted', body: 'CORS rule has been deleted.', variant: 'success' });
            loadCorsRules(card, endpoint);
        } catch (err) {
            showMessage({ title: 'Delete failed', body: err.message, variant: 'danger' });
        }
    }

    return {
        init: init,
        loadLifecycleRules: loadLifecycleRules,
        loadCorsRules: loadCorsRules,
        loadAcl: loadAcl,
        deleteLifecycleRule: deleteLifecycleRule,
        deleteCorsRule: deleteCorsRule
    };
})();
@@ -1,548 +0,0 @@
window.BucketDetailUpload = (function() {
    'use strict';

    const MULTIPART_THRESHOLD = 8 * 1024 * 1024;
    const CHUNK_SIZE = 8 * 1024 * 1024;

    let state = {
        isUploading: false,
        uploadProgress: { current: 0, total: 0, currentFile: '' }
    };

    let elements = {};
    let callbacks = {};

    function init(config) {
        elements = {
            uploadForm: config.uploadForm,
            uploadFileInput: config.uploadFileInput,
            uploadModal: config.uploadModal,
            uploadModalEl: config.uploadModalEl,
            uploadSubmitBtn: config.uploadSubmitBtn,
            uploadCancelBtn: config.uploadCancelBtn,
            uploadBtnText: config.uploadBtnText,
            uploadDropZone: config.uploadDropZone,
            uploadDropZoneLabel: config.uploadDropZoneLabel,
            uploadProgressStack: config.uploadProgressStack,
            uploadKeyPrefix: config.uploadKeyPrefix,
            singleFileOptions: config.singleFileOptions,
            bulkUploadProgress: config.bulkUploadProgress,
            bulkUploadStatus: config.bulkUploadStatus,
            bulkUploadCounter: config.bulkUploadCounter,
            bulkUploadProgressBar: config.bulkUploadProgressBar,
            bulkUploadCurrentFile: config.bulkUploadCurrentFile,
            bulkUploadResults: config.bulkUploadResults,
            bulkUploadSuccessAlert: config.bulkUploadSuccessAlert,
            bulkUploadErrorAlert: config.bulkUploadErrorAlert,
            bulkUploadSuccessCount: config.bulkUploadSuccessCount,
            bulkUploadErrorCount: config.bulkUploadErrorCount,
            bulkUploadErrorList: config.bulkUploadErrorList,
            floatingProgress: config.floatingProgress,
            floatingProgressBar: config.floatingProgressBar,
            floatingProgressStatus: config.floatingProgressStatus,
            floatingProgressTitle: config.floatingProgressTitle,
            floatingProgressExpand: config.floatingProgressExpand
        };

        callbacks = {
            showMessage: config.showMessage || function() {},
            formatBytes: config.formatBytes || function(b) { return b + ' bytes'; },
            escapeHtml: config.escapeHtml || function(s) { return s; },
            onUploadComplete: config.onUploadComplete || function() {},
            hasFolders: config.hasFolders || function() { return false; },
            getCurrentPrefix: config.getCurrentPrefix || function() { return ''; }
        };

        setupEventListeners();
        setupBeforeUnload();
    }

    function isUploading() {
        return state.isUploading;
    }

    function setupBeforeUnload() {
        window.addEventListener('beforeunload', (e) => {
            if (state.isUploading) {
                e.preventDefault();
                e.returnValue = 'Upload in progress. Are you sure you want to leave?';
                return e.returnValue;
            }
        });
    }

    function showFloatingProgress() {
        if (elements.floatingProgress) {
            elements.floatingProgress.classList.remove('d-none');
        }
    }

    function hideFloatingProgress() {
        if (elements.floatingProgress) {
            elements.floatingProgress.classList.add('d-none');
        }
    }

    function updateFloatingProgress(current, total, currentFile) {
        state.uploadProgress = { current, total, currentFile: currentFile || '' };
        if (elements.floatingProgressBar && total > 0) {
            const percent = Math.round((current / total) * 100);
            elements.floatingProgressBar.style.width = `${percent}%`;
        }
        if (elements.floatingProgressStatus) {
            if (currentFile) {
                elements.floatingProgressStatus.textContent = `${current}/${total} files - ${currentFile}`;
            } else {
                elements.floatingProgressStatus.textContent = `${current}/${total} files completed`;
            }
        }
        if (elements.floatingProgressTitle) {
            elements.floatingProgressTitle.textContent = `Uploading ${total} file${total !== 1 ? 's' : ''}...`;
        }
    }

    function refreshUploadDropLabel() {
        if (!elements.uploadDropZoneLabel || !elements.uploadFileInput) return;
        const files = elements.uploadFileInput.files;
        if (!files || files.length === 0) {
            elements.uploadDropZoneLabel.textContent = 'No file selected';
            if (elements.singleFileOptions) elements.singleFileOptions.classList.remove('d-none');
            return;
        }
        elements.uploadDropZoneLabel.textContent = files.length === 1 ? files[0].name : `${files.length} files selected`;
        if (elements.singleFileOptions) {
            elements.singleFileOptions.classList.toggle('d-none', files.length > 1);
        }
    }

    function updateUploadBtnText() {
        if (!elements.uploadBtnText || !elements.uploadFileInput) return;
        const files = elements.uploadFileInput.files;
        if (!files || files.length <= 1) {
            elements.uploadBtnText.textContent = 'Upload';
        } else {
            elements.uploadBtnText.textContent = `Upload ${files.length} files`;
        }
    }

    function resetUploadUI() {
        if (elements.bulkUploadProgress) elements.bulkUploadProgress.classList.add('d-none');
        if (elements.bulkUploadResults) elements.bulkUploadResults.classList.add('d-none');
        if (elements.bulkUploadSuccessAlert) elements.bulkUploadSuccessAlert.classList.remove('d-none');
        if (elements.bulkUploadErrorAlert) elements.bulkUploadErrorAlert.classList.add('d-none');
        if (elements.bulkUploadErrorList) elements.bulkUploadErrorList.innerHTML = '';
        if (elements.uploadSubmitBtn) elements.uploadSubmitBtn.disabled = false;
        if (elements.uploadFileInput) elements.uploadFileInput.disabled = false;
        if (elements.uploadProgressStack) elements.uploadProgressStack.innerHTML = '';
        if (elements.uploadDropZone) {
            elements.uploadDropZone.classList.remove('upload-locked');
            elements.uploadDropZone.style.pointerEvents = '';
        }
        state.isUploading = false;
        hideFloatingProgress();
    }

    function setUploadLockState(locked) {
        if (elements.uploadDropZone) {
            elements.uploadDropZone.classList.toggle('upload-locked', locked);
            elements.uploadDropZone.style.pointerEvents = locked ? 'none' : '';
        }
        if (elements.uploadFileInput) {
            elements.uploadFileInput.disabled = locked;
        }
    }

    function createProgressItem(file) {
        const item = document.createElement('div');
        item.className = 'upload-progress-item';
        item.dataset.state = 'uploading';
        item.innerHTML = `
            <div class="d-flex justify-content-between align-items-start">
                <div class="min-width-0 flex-grow-1">
                    <div class="file-name">${callbacks.escapeHtml(file.name)}</div>
                    <div class="file-size">${callbacks.formatBytes(file.size)}</div>
                </div>
                <div class="upload-status text-end ms-2">Preparing...</div>
            </div>
            <div class="progress-container">
                <div class="progress">
                    <div class="progress-bar bg-primary" role="progressbar" style="width: 0%"></div>
                </div>
                <div class="progress-text">
                    <span class="progress-loaded">0 B</span>
                    <span class="progress-percent">0%</span>
                </div>
            </div>
        `;
        return item;
    }

    function updateProgressItem(item, { loaded, total, status, progressState, error }) {
        if (progressState) item.dataset.state = progressState;
        const statusEl = item.querySelector('.upload-status');
        const progressBar = item.querySelector('.progress-bar');
        const progressLoaded = item.querySelector('.progress-loaded');
        const progressPercent = item.querySelector('.progress-percent');

        if (status) {
            statusEl.textContent = status;
            statusEl.className = 'upload-status text-end ms-2';
            if (progressState === 'success') statusEl.classList.add('success');
            if (progressState === 'error') statusEl.classList.add('error');
        }
        if (typeof loaded === 'number' && typeof total === 'number' && total > 0) {
            const percent = Math.round((loaded / total) * 100);
            progressBar.style.width = `${percent}%`;
            progressLoaded.textContent = `${callbacks.formatBytes(loaded)} / ${callbacks.formatBytes(total)}`;
            progressPercent.textContent = `${percent}%`;
        }
        if (error) {
            const progressContainer = item.querySelector('.progress-container');
            if (progressContainer) {
                progressContainer.innerHTML = `<div class="text-danger small mt-1">${callbacks.escapeHtml(error)}</div>`;
            }
        }
    }

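    // Multipart path: initiate an upload, PUT each CHUNK_SIZE slice with its
    // part number, then POST the collected part ETags to complete; on any
    // failure the upload is aborted server-side via the abort URL.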
    async function uploadMultipart(file, objectKey, metadata, progressItem, urls) {
        const csrfToken = document.querySelector('input[name="csrf_token"]')?.value;

        updateProgressItem(progressItem, { status: 'Initiating...', loaded: 0, total: file.size });
        const initResp = await fetch(urls.initUrl, {
            method: 'POST',
            headers: { 'Content-Type': 'application/json', 'X-CSRFToken': csrfToken || '' },
            body: JSON.stringify({ object_key: objectKey, metadata })
        });
        if (!initResp.ok) {
            const err = await initResp.json().catch(() => ({}));
            throw new Error(err.error || 'Failed to initiate upload');
        }
        const { upload_id } = await initResp.json();

        const partUrl = urls.partTemplate.replace('UPLOAD_ID_PLACEHOLDER', upload_id);
        const completeUrl = urls.completeTemplate.replace('UPLOAD_ID_PLACEHOLDER', upload_id);
        const abortUrl = urls.abortTemplate.replace('UPLOAD_ID_PLACEHOLDER', upload_id);

        const parts = [];
        const totalParts = Math.ceil(file.size / CHUNK_SIZE);
        let uploadedBytes = 0;

        try {
            for (let partNumber = 1; partNumber <= totalParts; partNumber++) {
                const start = (partNumber - 1) * CHUNK_SIZE;
                const end = Math.min(start + CHUNK_SIZE, file.size);
                const chunk = file.slice(start, end);

                updateProgressItem(progressItem, {
                    status: `Part ${partNumber}/${totalParts}`,
                    loaded: uploadedBytes,
                    total: file.size
                });

                const partResp = await fetch(`${partUrl}?partNumber=${partNumber}`, {
                    method: 'PUT',
                    headers: { 'X-CSRFToken': csrfToken || '' },
                    body: chunk
                });

                if (!partResp.ok) {
                    const err = await partResp.json().catch(() => ({}));
                    throw new Error(err.error || `Part ${partNumber} failed`);
                }

                const partData = await partResp.json();
                parts.push({ part_number: partNumber, etag: partData.etag });
                uploadedBytes += chunk.size;

                updateProgressItem(progressItem, {
                    loaded: uploadedBytes,
                    total: file.size
                });
            }

            updateProgressItem(progressItem, { status: 'Completing...', loaded: file.size, total: file.size });
            const completeResp = await fetch(completeUrl, {
                method: 'POST',
                headers: { 'Content-Type': 'application/json', 'X-CSRFToken': csrfToken || '' },
                body: JSON.stringify({ parts })
            });

            if (!completeResp.ok) {
                const err = await completeResp.json().catch(() => ({}));
                throw new Error(err.error || 'Failed to complete upload');
            }

            return await completeResp.json();
        } catch (err) {
            try {
                await fetch(abortUrl, { method: 'DELETE', headers: { 'X-CSRFToken': csrfToken || '' } });
            } catch {}
            throw err;
        }
    }

    async function uploadRegular(file, objectKey, metadata, progressItem, formAction) {
        return new Promise((resolve, reject) => {
            const formData = new FormData();
            formData.append('object', file);
            formData.append('object_key', objectKey);
            if (metadata) formData.append('metadata', JSON.stringify(metadata));
            const csrfToken = document.querySelector('input[name="csrf_token"]')?.value;
            if (csrfToken) formData.append('csrf_token', csrfToken);

            const xhr = new XMLHttpRequest();
            xhr.open('POST', formAction, true);
            xhr.setRequestHeader('X-Requested-With', 'XMLHttpRequest');

            xhr.upload.addEventListener('progress', (e) => {
                if (e.lengthComputable) {
                    updateProgressItem(progressItem, {
                        status: 'Uploading...',
                        loaded: e.loaded,
                        total: e.total
                    });
                }
            });

            xhr.addEventListener('load', () => {
                if (xhr.status >= 200 && xhr.status < 300) {
                    try {
                        const data = JSON.parse(xhr.responseText);
                        if (data.status === 'error') {
                            reject(new Error(data.message || 'Upload failed'));
                        } else {
                            resolve(data);
                        }
                    } catch {
                        resolve({});
                    }
                } else {
                    try {
                        const data = JSON.parse(xhr.responseText);
                        reject(new Error(data.message || `Upload failed (${xhr.status})`));
                    } catch {
                        reject(new Error(`Upload failed (${xhr.status})`));
                    }
                }
            });

            xhr.addEventListener('error', () => reject(new Error('Network error')));
            xhr.addEventListener('abort', () => reject(new Error('Upload aborted')));

            xhr.send(formData);
        });
    }

    async function uploadSingleFile(file, keyPrefix, metadata, progressItem, urls) {
        const objectKey = keyPrefix ? `${keyPrefix}${file.name}` : file.name;
        const shouldUseMultipart = file.size >= MULTIPART_THRESHOLD && urls.initUrl;

        if (!progressItem && elements.uploadProgressStack) {
            progressItem = createProgressItem(file);
            elements.uploadProgressStack.appendChild(progressItem);
        }

        try {
            let result;
            if (shouldUseMultipart) {
                updateProgressItem(progressItem, { status: 'Multipart upload...', loaded: 0, total: file.size });
                result = await uploadMultipart(file, objectKey, metadata, progressItem, urls);
            } else {
                updateProgressItem(progressItem, { status: 'Uploading...', loaded: 0, total: file.size });
                result = await uploadRegular(file, objectKey, metadata, progressItem, urls.formAction);
            }
            updateProgressItem(progressItem, { progressState: 'success', status: 'Complete', loaded: file.size, total: file.size });
            return result;
        } catch (err) {
            updateProgressItem(progressItem, { progressState: 'error', status: 'Failed', error: err.message });
            throw err;
        }
    }

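    // Bulk uploads run sequentially (one file at a time) so per-file progress
    // and failures stay attributable; errors are collected rather than
    // aborting the whole batch.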
    async function performBulkUpload(files, urls) {
        if (state.isUploading || !files || files.length === 0) return;

        state.isUploading = true;
        setUploadLockState(true);
        const keyPrefix = (elements.uploadKeyPrefix?.value || '').trim();
        const metadataRaw = elements.uploadForm?.querySelector('textarea[name="metadata"]')?.value?.trim();
        let metadata = null;
        if (metadataRaw) {
            try {
                metadata = JSON.parse(metadataRaw);
            } catch {
                callbacks.showMessage({ title: 'Invalid metadata', body: 'Metadata must be valid JSON.', variant: 'danger' });
                resetUploadUI();
                return;
            }
        }

        if (elements.bulkUploadProgress) elements.bulkUploadProgress.classList.remove('d-none');
        if (elements.bulkUploadResults) elements.bulkUploadResults.classList.add('d-none');
        if (elements.uploadSubmitBtn) elements.uploadSubmitBtn.disabled = true;
        if (elements.uploadFileInput) elements.uploadFileInput.disabled = true;

        const successFiles = [];
        const errorFiles = [];
        const total = files.length;

        updateFloatingProgress(0, total, files[0]?.name || '');

        for (let i = 0; i < total; i++) {
            const file = files[i];
            const current = i + 1;

            if (elements.bulkUploadCounter) elements.bulkUploadCounter.textContent = `${current}/${total}`;
            if (elements.bulkUploadCurrentFile) elements.bulkUploadCurrentFile.textContent = `Uploading: ${file.name}`;
            if (elements.bulkUploadProgressBar) {
                const percent = Math.round((current / total) * 100);
                elements.bulkUploadProgressBar.style.width = `${percent}%`;
            }
            updateFloatingProgress(i, total, file.name);

            try {
                await uploadSingleFile(file, keyPrefix, metadata, null, urls);
                successFiles.push(file.name);
            } catch (error) {
                errorFiles.push({ name: file.name, error: error.message || 'Unknown error' });
            }
        }
        updateFloatingProgress(total, total);

        if (elements.bulkUploadProgress) elements.bulkUploadProgress.classList.add('d-none');
        if (elements.bulkUploadResults) elements.bulkUploadResults.classList.remove('d-none');

        if (elements.bulkUploadSuccessCount) elements.bulkUploadSuccessCount.textContent = successFiles.length;
        if (successFiles.length === 0 && elements.bulkUploadSuccessAlert) {
            elements.bulkUploadSuccessAlert.classList.add('d-none');
        }

        if (errorFiles.length > 0) {
            if (elements.bulkUploadErrorCount) elements.bulkUploadErrorCount.textContent = errorFiles.length;
            if (elements.bulkUploadErrorAlert) elements.bulkUploadErrorAlert.classList.remove('d-none');
            if (elements.bulkUploadErrorList) {
                elements.bulkUploadErrorList.innerHTML = errorFiles
                    .map(f => `<li><strong>${callbacks.escapeHtml(f.name)}</strong>: ${callbacks.escapeHtml(f.error)}</li>`)
                    .join('');
            }
        }

        state.isUploading = false;
        setUploadLockState(false);

        if (successFiles.length > 0) {
            if (elements.uploadBtnText) elements.uploadBtnText.textContent = 'Refreshing...';
            callbacks.onUploadComplete(successFiles, errorFiles);
        } else {
            if (elements.uploadSubmitBtn) elements.uploadSubmitBtn.disabled = false;
            if (elements.uploadFileInput) elements.uploadFileInput.disabled = false;
        }
    }

    function setupEventListeners() {
        if (elements.uploadFileInput) {
            elements.uploadFileInput.addEventListener('change', () => {
                if (state.isUploading) return;
                refreshUploadDropLabel();
                updateUploadBtnText();
                resetUploadUI();
            });
        }

        if (elements.uploadDropZone) {
            elements.uploadDropZone.addEventListener('click', () => {
                if (state.isUploading) return;
                elements.uploadFileInput?.click();
            });
        }

        if (elements.floatingProgressExpand) {
            elements.floatingProgressExpand.addEventListener('click', () => {
                if (elements.uploadModal) {
                    elements.uploadModal.show();
                }
            });
        }

        if (elements.uploadModalEl) {
            elements.uploadModalEl.addEventListener('hide.bs.modal', () => {
                if (state.isUploading) {
                    showFloatingProgress();
                }
            });

            elements.uploadModalEl.addEventListener('hidden.bs.modal', () => {
                if (!state.isUploading) {
                    resetUploadUI();
                    if (elements.uploadFileInput) elements.uploadFileInput.value = '';
                    refreshUploadDropLabel();
                    updateUploadBtnText();
                }
            });

            elements.uploadModalEl.addEventListener('show.bs.modal', () => {
                if (state.isUploading) {
                    hideFloatingProgress();
                }
                if (callbacks.hasFolders() && callbacks.getCurrentPrefix()) {
                    if (elements.uploadKeyPrefix) {
                        elements.uploadKeyPrefix.value = callbacks.getCurrentPrefix();
                    }
                } else if (elements.uploadKeyPrefix) {
                    elements.uploadKeyPrefix.value = '';
                }
            });
        }
    }

    function wireDropTarget(target, options) {
        const { highlightClass = '', autoOpenModal = false } = options || {};
        if (!target) return;

        const preventDefaults = (event) => {
            event.preventDefault();
            event.stopPropagation();
        };

        ['dragenter', 'dragover'].forEach((eventName) => {
            target.addEventListener(eventName, (event) => {
                preventDefaults(event);
                if (state.isUploading) return;
                if (highlightClass) {
                    target.classList.add(highlightClass);
                }
            });
        });

        ['dragleave', 'drop'].forEach((eventName) => {
            target.addEventListener(eventName, (event) => {
                preventDefaults(event);
                if (highlightClass) {
                    target.classList.remove(highlightClass);
                }
            });
        });

        target.addEventListener('drop', (event) => {
            if (state.isUploading) return;
            if (!event.dataTransfer?.files?.length || !elements.uploadFileInput) {
                return;
            }
            elements.uploadFileInput.files = event.dataTransfer.files;
            elements.uploadFileInput.dispatchEvent(new Event('change', { bubbles: true }));
            if (autoOpenModal && elements.uploadModal) {
                elements.uploadModal.show();
            }
        });
    }

    return {
        init: init,
        isUploading: isUploading,
        performBulkUpload: performBulkUpload,
        wireDropTarget: wireDropTarget,
        resetUploadUI: resetUploadUI,
        refreshUploadDropLabel: refreshUploadDropLabel,
        updateUploadBtnText: updateUploadBtnText
    };
})();
@@ -1,120 +0,0 @@
window.BucketDetailUtils = (function() {
    'use strict';

    function setupJsonAutoIndent(textarea) {
        if (!textarea) return;

        textarea.addEventListener('keydown', function(e) {
            if (e.key === 'Enter') {
                e.preventDefault();

                const start = this.selectionStart;
                const end = this.selectionEnd;
                const value = this.value;

                const lineStart = value.lastIndexOf('\n', start - 1) + 1;
                const currentLine = value.substring(lineStart, start);

                const indentMatch = currentLine.match(/^(\s*)/);
                let indent = indentMatch ? indentMatch[1] : '';

                const trimmedLine = currentLine.trim();
                const lastChar = trimmedLine.slice(-1);

                let newIndent = indent;
                let insertAfter = '';

                if (lastChar === '{' || lastChar === '[') {
                    newIndent = indent + '  ';

                    const charAfterCursor = value.substring(start, start + 1).trim();
                    if ((lastChar === '{' && charAfterCursor === '}') ||
                        (lastChar === '[' && charAfterCursor === ']')) {
                        insertAfter = '\n' + indent;
                    }
                } else if (lastChar === ',' || lastChar === ':') {
                    newIndent = indent;
                }

                const insertion = '\n' + newIndent + insertAfter;
                const newValue = value.substring(0, start) + insertion + value.substring(end);

                this.value = newValue;

                const newCursorPos = start + 1 + newIndent.length;
                this.selectionStart = this.selectionEnd = newCursorPos;

                this.dispatchEvent(new Event('input', { bubbles: true }));
            }

            if (e.key === 'Tab') {
                e.preventDefault();
                const start = this.selectionStart;
                const end = this.selectionEnd;

                if (e.shiftKey) {
                    const lineStart = this.value.lastIndexOf('\n', start - 1) + 1;
                    const lineContent = this.value.substring(lineStart, start);
                    if (lineContent.startsWith('  ')) {
                        this.value = this.value.substring(0, lineStart) +
                            this.value.substring(lineStart + 2);
                        this.selectionStart = this.selectionEnd = Math.max(lineStart, start - 2);
                    }
                } else {
                    this.value = this.value.substring(0, start) + '  ' + this.value.substring(end);
                    this.selectionStart = this.selectionEnd = start + 2;
                }

                this.dispatchEvent(new Event('input', { bubbles: true }));
            }
        });
    }

    function formatBytes(bytes) {
        if (!Number.isFinite(bytes)) return `${bytes} bytes`;
        const units = ['bytes', 'KB', 'MB', 'GB', 'TB'];
        let i = 0;
        let size = bytes;
        while (size >= 1024 && i < units.length - 1) {
            size /= 1024;
            i++;
        }
        return `${size.toFixed(i === 0 ? 0 : 1)} ${units[i]}`;
    }

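    // Escapes the five HTML-special characters so untrusted strings can be
    // interpolated into innerHTML safely.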
    function escapeHtml(value) {
        if (value === null || value === undefined) return '';
        return String(value)
            .replace(/&/g, '&amp;')
            .replace(/</g, '&lt;')
            .replace(/>/g, '&gt;')
            .replace(/"/g, '&quot;')
            .replace(/'/g, '&#39;');
    }

    function fallbackCopy(text) {
        const textArea = document.createElement('textarea');
        textArea.value = text;
        textArea.style.position = 'fixed';
        textArea.style.left = '-9999px';
        textArea.style.top = '-9999px';
        document.body.appendChild(textArea);
        textArea.focus();
        textArea.select();
        let success = false;
        try {
            success = document.execCommand('copy');
        } catch {
            success = false;
        }
        document.body.removeChild(textArea);
        return success;
    }

    return {
        setupJsonAutoIndent: setupJsonAutoIndent,
        formatBytes: formatBytes,
        escapeHtml: escapeHtml,
        fallbackCopy: fallbackCopy
    };
})();
@@ -1,344 +0,0 @@
window.ConnectionsManagement = (function() {
    'use strict';

    var endpoints = {};
    var csrfToken = '';

    function init(config) {
        endpoints = config.endpoints || {};
        csrfToken = config.csrfToken || '';

        setupEventListeners();
        checkAllConnectionHealth();
    }

    function togglePassword(id) {
        var input = document.getElementById(id);
        if (input) {
            input.type = input.type === 'password' ? 'text' : 'password';
        }
    }

    async function testConnection(formId, resultId) {
        var form = document.getElementById(formId);
        var resultDiv = document.getElementById(resultId);
        if (!form || !resultDiv) return;

        var formData = new FormData(form);
        var data = {};
        formData.forEach(function(value, key) {
            if (key !== 'csrf_token') {
                data[key] = value;
            }
        });

        resultDiv.innerHTML = '<div class="text-info"><span class="spinner-border spinner-border-sm" role="status" aria-hidden="true"></span> Testing connection...</div>';

        var controller = new AbortController();
        var timeoutId = setTimeout(function() { controller.abort(); }, 20000);

        try {
            var response = await fetch(endpoints.test, {
                method: 'POST',
                headers: {
                    'Content-Type': 'application/json',
                    'X-CSRFToken': csrfToken
                },
                body: JSON.stringify(data),
                signal: controller.signal
            });
            clearTimeout(timeoutId);

            var result = await response.json();
            if (response.ok) {
                resultDiv.innerHTML = '<div class="text-success">' +
                    '<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="me-1" viewBox="0 0 16 16">' +
                    '<path d="M16 8A8 8 0 1 1 0 8a8 8 0 0 1 16 0zm-3.97-3.03a.75.75 0 0 0-1.08.022L7.477 9.417 5.384 7.323a.75.75 0 0 0-1.06 1.06L6.97 11.03a.75.75 0 0 0 1.079-.02l3.992-4.99a.75.75 0 0 0-.01-1.05z"/>' +
                    '</svg>' + window.UICore.escapeHtml(result.message) + '</div>';
            } else {
                resultDiv.innerHTML = '<div class="text-danger">' +
                    '<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="me-1" viewBox="0 0 16 16">' +
                    '<path d="M16 8A8 8 0 1 1 0 8a8 8 0 0 1 16 0zM5.354 4.646a.5.5 0 1 0-.708.708L7.293 8l-2.647 2.646a.5.5 0 0 0 .708.708L8 8.707l2.646 2.647a.5.5 0 0 0 .708-.708L8.707 8l2.647-2.646a.5.5 0 0 0-.708-.708L8 7.293 5.354 4.646z"/>' +
                    '</svg>' + window.UICore.escapeHtml(result.message) + '</div>';
            }
        } catch (error) {
            clearTimeout(timeoutId);
            var message = error.name === 'AbortError'
                ? 'Connection test timed out - endpoint may be unreachable'
                : 'Connection failed: Network error';
            resultDiv.innerHTML = '<div class="text-danger">' +
                '<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="me-1" viewBox="0 0 16 16">' +
                '<path d="M16 8A8 8 0 1 1 0 8a8 8 0 0 1 16 0zM5.354 4.646a.5.5 0 1 0-.708.708L7.293 8l-2.647 2.646a.5.5 0 0 0 .708.708L8 8.707l2.646 2.647a.5.5 0 0 0 .708-.708L8.707 8l2.647-2.646a.5.5 0 0 0-.708-.708L8 7.293 5.354 4.646z"/>' +
                '</svg>' + message + '</div>';
        }
    }

    async function checkConnectionHealth(connectionId, statusEl) {
        if (!statusEl) return;

        try {
            var controller = new AbortController();
            var timeoutId = setTimeout(function() { controller.abort(); }, 15000);

            var response = await fetch(endpoints.healthTemplate.replace('CONNECTION_ID', connectionId), {
                signal: controller.signal
            });
            clearTimeout(timeoutId);

            var data = await response.json();
            if (data.healthy) {
                statusEl.innerHTML = '<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="text-success" viewBox="0 0 16 16">' +
                    '<path d="M16 8A8 8 0 1 1 0 8a8 8 0 0 1 16 0zm-3.97-3.03a.75.75 0 0 0-1.08.022L7.477 9.417 5.384 7.323a.75.75 0 0 0-1.06 1.06L6.97 11.03a.75.75 0 0 0 1.079-.02l3.992-4.99a.75.75 0 0 0-.01-1.05z"/></svg>';
                statusEl.setAttribute('data-status', 'healthy');
                statusEl.setAttribute('title', 'Connected');
            } else {
                statusEl.innerHTML = '<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="text-danger" viewBox="0 0 16 16">' +
                    '<path d="M16 8A8 8 0 1 1 0 8a8 8 0 0 1 16 0zM5.354 4.646a.5.5 0 1 0-.708.708L7.293 8l-2.647 2.646a.5.5 0 0 0 .708.708L8 8.707l2.646 2.647a.5.5 0 0 0 .708-.708L8.707 8l2.647-2.646a.5.5 0 0 0-.708-.708L8 7.293 5.354 4.646z"/></svg>';
                statusEl.setAttribute('data-status', 'unhealthy');
                statusEl.setAttribute('title', data.error || 'Unreachable');
            }
        } catch (error) {
            statusEl.innerHTML = '<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="text-warning" viewBox="0 0 16 16">' +
                '<path d="M8.982 1.566a1.13 1.13 0 0 0-1.96 0L.165 13.233c-.457.778.091 1.767.98 1.767h13.713c.889 0 1.438-.99.98-1.767L8.982 1.566zM8 5c.535 0 .954.462.9.995l-.35 3.507a.552.552 0 0 1-1.1 0L7.1 5.995A.905.905 0 0 1 8 5zm.002 6a1 1 0 1 1 0 2 1 1 0 0 1 0-2z"/></svg>';
            statusEl.setAttribute('data-status', 'unknown');
            statusEl.setAttribute('title', 'Could not check status');
        }
    }

function checkAllConnectionHealth() {
|
|
||||||
var rows = document.querySelectorAll('tr[data-connection-id]');
|
|
||||||
rows.forEach(function(row, index) {
|
|
||||||
var connectionId = row.getAttribute('data-connection-id');
|
|
||||||
var statusEl = row.querySelector('.connection-status');
|
|
||||||
if (statusEl) {
|
|
||||||
setTimeout(function() {
|
|
||||||
checkConnectionHealth(connectionId, statusEl);
|
|
||||||
}, index * 200);
|
|
||||||
}
|
|
||||||
});
|
|
||||||
}
|
|
||||||
|
|
||||||
function updateConnectionCount() {
|
|
||||||
var countBadge = document.querySelector('.badge.bg-primary.bg-opacity-10.text-primary.fs-6');
|
|
||||||
if (countBadge) {
|
|
||||||
var remaining = document.querySelectorAll('tr[data-connection-id]').length;
|
|
||||||
countBadge.textContent = remaining + ' connection' + (remaining !== 1 ? 's' : '');
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
function createConnectionRowHtml(conn) {
|
|
||||||
var ak = conn.access_key || '';
|
|
||||||
var maskedKey = ak.length > 12 ? ak.slice(0, 8) + '...' + ak.slice(-4) : ak;
|
|
||||||
|
|
||||||
return '<tr data-connection-id="' + window.UICore.escapeHtml(conn.id) + '">' +
|
|
||||||
'<td class="text-center">' +
|
|
||||||
'<span class="connection-status" data-status="checking" title="Checking...">' +
|
|
||||||
'<span class="spinner-border spinner-border-sm text-muted" role="status" style="width: 12px; height: 12px;"></span>' +
|
|
||||||
'</span></td>' +
|
|
||||||
'<td><div class="d-flex align-items-center gap-2">' +
|
|
||||||
'<div class="connection-icon"><svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" viewBox="0 0 16 16">' +
|
|
||||||
'<path d="M4.406 3.342A5.53 5.53 0 0 1 8 2c2.69 0 4.923 2 5.166 4.579C14.758 6.804 16 8.137 16 9.773 16 11.569 14.502 13 12.687 13H3.781C1.708 13 0 11.366 0 9.318c0-1.763 1.266-3.223 2.942-3.593.143-.863.698-1.723 1.464-2.383z"/></svg></div>' +
|
|
||||||
'<span class="fw-medium">' + window.UICore.escapeHtml(conn.name) + '</span>' +
|
|
||||||
'</div></td>' +
|
|
||||||
'<td><span class="text-muted small text-truncate d-inline-block" style="max-width: 200px;" title="' + window.UICore.escapeHtml(conn.endpoint_url) + '">' + window.UICore.escapeHtml(conn.endpoint_url) + '</span></td>' +
|
|
||||||
'<td><span class="badge bg-primary bg-opacity-10 text-primary">' + window.UICore.escapeHtml(conn.region) + '</span></td>' +
|
|
||||||
'<td><code class="small">' + window.UICore.escapeHtml(maskedKey) + '</code></td>' +
|
|
||||||
'<td class="text-end"><div class="btn-group btn-group-sm" role="group">' +
|
|
||||||
'<button type="button" class="btn btn-outline-secondary" data-bs-toggle="modal" data-bs-target="#editConnectionModal" ' +
|
|
||||||
'data-id="' + window.UICore.escapeHtml(conn.id) + '" data-name="' + window.UICore.escapeHtml(conn.name) + '" ' +
|
|
||||||
'data-endpoint="' + window.UICore.escapeHtml(conn.endpoint_url) + '" data-region="' + window.UICore.escapeHtml(conn.region) + '" ' +
|
|
||||||
'data-access="' + window.UICore.escapeHtml(conn.access_key) + '" data-secret="' + window.UICore.escapeHtml(conn.secret_key || '') + '" title="Edit connection">' +
|
|
||||||
'<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" viewBox="0 0 16 16">' +
|
|
||||||
'<path d="M12.146.146a.5.5 0 0 1 .708 0l3 3a.5.5 0 0 1 0 .708l-10 10a.5.5 0 0 1-.168.11l-5 2a.5.5 0 0 1-.65-.65l2-5a.5.5 0 0 1 .11-.168l10-10zM11.207 2.5 13.5 4.793 14.793 3.5 12.5 1.207 11.207 2.5zm1.586 3L10.5 3.207 4 9.707V10h.5a.5.5 0 0 1 .5.5v.5h.5a.5.5 0 0 1 .5.5v.5h.293l6.5-6.5z"/></svg></button>' +
|
|
||||||
'<button type="button" class="btn btn-outline-danger" data-bs-toggle="modal" data-bs-target="#deleteConnectionModal" ' +
|
|
||||||
'data-id="' + window.UICore.escapeHtml(conn.id) + '" data-name="' + window.UICore.escapeHtml(conn.name) + '" title="Delete connection">' +
|
|
||||||
'<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" viewBox="0 0 16 16">' +
|
|
||||||
'<path d="M5.5 5.5A.5.5 0 0 1 6 6v6a.5.5 0 0 1-1 0V6a.5.5 0 0 1 .5-.5zm2.5 0a.5.5 0 0 1 .5.5v6a.5.5 0 0 1-1 0V6a.5.5 0 0 1 .5-.5zm3 .5a.5.5 0 0 0-1 0v6a.5.5 0 0 0 1 0V6z"/>' +
|
|
||||||
'<path fill-rule="evenodd" d="M14.5 3a1 1 0 0 1-1 1H13v9a2 2 0 0 1-2 2H5a2 2 0 0 1-2-2V4h-.5a1 1 0 0 1-1-1V2a1 1 0 0 1 1-1H6a1 1 0 0 1 1-1h2a1 1 0 0 1 1 1h3.5a1 1 0 0 1 1 1v1zM4.118 4 4 4.059V13a1 1 0 0 0 1 1h6a1 1 0 0 0 1-1V4.059L11.882 4H4.118zM2.5 3V2h11v1h-11z"/></svg></button>' +
|
|
||||||
'</div></td></tr>';
|
|
||||||
}
|
|
||||||
|
|
||||||
function setupEventListeners() {
|
|
||||||
var testBtn = document.getElementById('testConnectionBtn');
|
|
||||||
if (testBtn) {
|
|
||||||
testBtn.addEventListener('click', function() {
|
|
||||||
testConnection('createConnectionForm', 'testResult');
|
|
||||||
});
|
|
||||||
}
|
|
||||||
|
|
||||||
var editTestBtn = document.getElementById('editTestConnectionBtn');
|
|
||||||
if (editTestBtn) {
|
|
||||||
editTestBtn.addEventListener('click', function() {
|
|
||||||
testConnection('editConnectionForm', 'editTestResult');
|
|
||||||
});
|
|
||||||
}
|
|
||||||
|
|
||||||
var editModal = document.getElementById('editConnectionModal');
|
|
||||||
if (editModal) {
|
|
||||||
editModal.addEventListener('show.bs.modal', function(event) {
|
|
||||||
var button = event.relatedTarget;
|
|
||||||
if (!button) return;
|
|
||||||
|
|
||||||
var id = button.getAttribute('data-id');
|
|
||||||
|
|
||||||
document.getElementById('edit_name').value = button.getAttribute('data-name') || '';
|
|
||||||
document.getElementById('edit_endpoint_url').value = button.getAttribute('data-endpoint') || '';
|
|
||||||
document.getElementById('edit_region').value = button.getAttribute('data-region') || '';
|
|
||||||
document.getElementById('edit_access_key').value = button.getAttribute('data-access') || '';
|
|
||||||
document.getElementById('edit_secret_key').value = button.getAttribute('data-secret') || '';
|
|
||||||
document.getElementById('editTestResult').innerHTML = '';
|
|
||||||
|
|
||||||
var form = document.getElementById('editConnectionForm');
|
|
||||||
form.action = endpoints.updateTemplate.replace('CONNECTION_ID', id);
|
|
||||||
});
|
|
||||||
}
|
|
||||||
|
|
||||||
var deleteModal = document.getElementById('deleteConnectionModal');
|
|
||||||
if (deleteModal) {
|
|
||||||
deleteModal.addEventListener('show.bs.modal', function(event) {
|
|
||||||
var button = event.relatedTarget;
|
|
||||||
if (!button) return;
|
|
||||||
|
|
||||||
var id = button.getAttribute('data-id');
|
|
||||||
var name = button.getAttribute('data-name');
|
|
||||||
|
|
||||||
document.getElementById('deleteConnectionName').textContent = name;
|
|
||||||
var form = document.getElementById('deleteConnectionForm');
|
|
||||||
form.action = endpoints.deleteTemplate.replace('CONNECTION_ID', id);
|
|
||||||
});
|
|
||||||
}
|
|
||||||
|
|
||||||
var createForm = document.getElementById('createConnectionForm');
|
|
||||||
if (createForm) {
|
|
||||||
createForm.addEventListener('submit', function(e) {
|
|
||||||
e.preventDefault();
|
|
||||||
window.UICore.submitFormAjax(createForm, {
|
|
||||||
successMessage: 'Connection created',
|
|
||||||
onSuccess: function(data) {
|
|
||||||
createForm.reset();
|
|
||||||
document.getElementById('testResult').innerHTML = '';
|
|
||||||
|
|
||||||
if (data.connection) {
|
|
||||||
var emptyState = document.querySelector('.empty-state');
|
|
||||||
if (emptyState) {
|
|
||||||
var cardBody = emptyState.closest('.card-body');
|
|
||||||
if (cardBody) {
|
|
||||||
cardBody.innerHTML = '<div class="table-responsive"><table class="table table-hover align-middle mb-0">' +
|
|
||||||
'<thead class="table-light"><tr>' +
|
|
||||||
'<th scope="col" style="width: 50px;">Status</th>' +
|
|
||||||
'<th scope="col">Name</th><th scope="col">Endpoint</th>' +
|
|
||||||
'<th scope="col">Region</th><th scope="col">Access Key</th>' +
|
|
||||||
'<th scope="col" class="text-end">Actions</th></tr></thead>' +
|
|
||||||
'<tbody></tbody></table></div>';
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
var tbody = document.querySelector('table tbody');
|
|
||||||
if (tbody) {
|
|
||||||
tbody.insertAdjacentHTML('beforeend', createConnectionRowHtml(data.connection));
|
|
||||||
var newRow = tbody.lastElementChild;
|
|
||||||
var statusEl = newRow.querySelector('.connection-status');
|
|
||||||
if (statusEl) {
|
|
||||||
checkConnectionHealth(data.connection.id, statusEl);
|
|
||||||
}
|
|
||||||
}
|
|
||||||
updateConnectionCount();
|
|
||||||
} else {
|
|
||||||
location.reload();
|
|
||||||
}
|
|
||||||
}
|
|
||||||
});
|
|
||||||
});
|
|
||||||
}
|
|
||||||
|
|
||||||
var editForm = document.getElementById('editConnectionForm');
|
|
||||||
if (editForm) {
|
|
||||||
editForm.addEventListener('submit', function(e) {
|
|
||||||
e.preventDefault();
|
|
||||||
window.UICore.submitFormAjax(editForm, {
|
|
||||||
successMessage: 'Connection updated',
|
|
||||||
onSuccess: function(data) {
|
|
||||||
var modal = bootstrap.Modal.getInstance(document.getElementById('editConnectionModal'));
|
|
||||||
if (modal) modal.hide();
|
|
||||||
|
|
||||||
var connId = editForm.action.split('/').slice(-2)[0];
|
|
||||||
var row = document.querySelector('tr[data-connection-id="' + connId + '"]');
|
|
||||||
if (row && data.connection) {
|
|
||||||
var nameCell = row.querySelector('.fw-medium');
|
|
||||||
if (nameCell) nameCell.textContent = data.connection.name;
|
|
||||||
|
|
||||||
var endpointCell = row.querySelector('.text-truncate');
|
|
||||||
if (endpointCell) {
|
|
||||||
endpointCell.textContent = data.connection.endpoint_url;
|
|
||||||
endpointCell.title = data.connection.endpoint_url;
|
|
||||||
}
|
|
||||||
|
|
||||||
var regionBadge = row.querySelector('.badge.bg-primary');
|
|
||||||
if (regionBadge) regionBadge.textContent = data.connection.region;
|
|
||||||
|
|
||||||
var accessCode = row.querySelector('code.small');
|
|
||||||
if (accessCode && data.connection.access_key) {
|
|
||||||
var ak = data.connection.access_key;
|
|
||||||
accessCode.textContent = ak.slice(0, 8) + '...' + ak.slice(-4);
|
|
||||||
}
|
|
||||||
|
|
||||||
var editBtn = row.querySelector('[data-bs-target="#editConnectionModal"]');
|
|
||||||
if (editBtn) {
|
|
||||||
editBtn.setAttribute('data-name', data.connection.name);
|
|
||||||
editBtn.setAttribute('data-endpoint', data.connection.endpoint_url);
|
|
||||||
editBtn.setAttribute('data-region', data.connection.region);
|
|
||||||
editBtn.setAttribute('data-access', data.connection.access_key);
|
|
||||||
if (data.connection.secret_key) {
|
|
||||||
editBtn.setAttribute('data-secret', data.connection.secret_key);
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
var deleteBtn = row.querySelector('[data-bs-target="#deleteConnectionModal"]');
|
|
||||||
if (deleteBtn) {
|
|
||||||
deleteBtn.setAttribute('data-name', data.connection.name);
|
|
||||||
}
|
|
||||||
|
|
||||||
var statusEl = row.querySelector('.connection-status');
|
|
||||||
if (statusEl) {
|
|
||||||
checkConnectionHealth(connId, statusEl);
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
});
|
|
||||||
});
|
|
||||||
}
|
|
||||||
|
|
||||||
var deleteForm = document.getElementById('deleteConnectionForm');
|
|
||||||
if (deleteForm) {
|
|
||||||
deleteForm.addEventListener('submit', function(e) {
|
|
||||||
e.preventDefault();
|
|
||||||
window.UICore.submitFormAjax(deleteForm, {
|
|
||||||
successMessage: 'Connection deleted',
|
|
||||||
onSuccess: function(data) {
|
|
||||||
var modal = bootstrap.Modal.getInstance(document.getElementById('deleteConnectionModal'));
|
|
||||||
if (modal) modal.hide();
|
|
||||||
|
|
||||||
var connId = deleteForm.action.split('/').slice(-2)[0];
|
|
||||||
var row = document.querySelector('tr[data-connection-id="' + connId + '"]');
|
|
||||||
if (row) {
|
|
||||||
row.remove();
|
|
||||||
}
|
|
||||||
|
|
||||||
updateConnectionCount();
|
|
||||||
|
|
||||||
if (document.querySelectorAll('tr[data-connection-id]').length === 0) {
|
|
||||||
location.reload();
|
|
||||||
}
|
|
||||||
}
|
|
||||||
});
|
|
||||||
});
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
return {
|
|
||||||
init: init,
|
|
||||||
togglePassword: togglePassword,
|
|
||||||
testConnection: testConnection,
|
|
||||||
checkConnectionHealth: checkConnectionHealth
|
|
||||||
};
|
|
||||||
})();
|
|
||||||
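The init body of this connections module sits above this excerpt, but the code shown reads endpoints.test, endpoints.healthTemplate, endpoints.updateTemplate, endpoints.deleteTemplate, and csrfToken from module state. A hypothetical bootstrap sketch, assuming the IIFE is assigned to a global (the name window.ConnectionsManagement and the URL paths are assumptions; only the CONNECTION_ID token and the config keys come from the code above, and init(config) is assumed to populate them the way IAMManagement.init does in the next file):

    // Hypothetical wiring; global name and endpoint URLs are assumptions.
    window.ConnectionsManagement.init({
      csrfToken: window.UICore.getCsrfToken(),
      endpoints: {
        test: '/ui/connections/test',
        healthTemplate: '/ui/connections/CONNECTION_ID/health',
        updateTemplate: '/ui/connections/CONNECTION_ID/update',
        deleteTemplate: '/ui/connections/CONNECTION_ID/delete'
      }
    });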
@@ -1,545 +0,0 @@
window.IAMManagement = (function() {
  'use strict';

  var users = [];
  var currentUserKey = null;
  var endpoints = {};
  var csrfToken = '';
  var iamLocked = false;

  var policyModal = null;
  var editUserModal = null;
  var deleteUserModal = null;
  var rotateSecretModal = null;
  var currentRotateKey = null;
  var currentEditKey = null;
  var currentDeleteKey = null;

  var policyTemplates = {
    full: [{ bucket: '*', actions: ['list', 'read', 'write', 'delete', 'share', 'policy', 'replication', 'iam:list_users', 'iam:*'] }],
    readonly: [{ bucket: '*', actions: ['list', 'read'] }],
    writer: [{ bucket: '*', actions: ['list', 'read', 'write'] }]
  };

  function init(config) {
    users = config.users || [];
    currentUserKey = config.currentUserKey || null;
    endpoints = config.endpoints || {};
    csrfToken = config.csrfToken || '';
    iamLocked = config.iamLocked || false;

    if (iamLocked) return;

    initModals();
    setupJsonAutoIndent();
    setupCopyButtons();
    setupPolicyEditor();
    setupCreateUserModal();
    setupEditUserModal();
    setupDeleteUserModal();
    setupRotateSecretModal();
    setupFormHandlers();
  }

  function initModals() {
    var policyModalEl = document.getElementById('policyEditorModal');
    var editModalEl = document.getElementById('editUserModal');
    var deleteModalEl = document.getElementById('deleteUserModal');
    var rotateModalEl = document.getElementById('rotateSecretModal');

    if (policyModalEl) policyModal = new bootstrap.Modal(policyModalEl);
    if (editModalEl) editUserModal = new bootstrap.Modal(editModalEl);
    if (deleteModalEl) deleteUserModal = new bootstrap.Modal(deleteModalEl);
    if (rotateModalEl) rotateSecretModal = new bootstrap.Modal(rotateModalEl);
  }

  function setupJsonAutoIndent() {
    window.UICore.setupJsonAutoIndent(document.getElementById('policyEditorDocument'));
    window.UICore.setupJsonAutoIndent(document.getElementById('createUserPolicies'));
  }

  function setupCopyButtons() {
    document.querySelectorAll('.config-copy').forEach(function(button) {
      button.addEventListener('click', async function() {
        var targetId = button.dataset.copyTarget;
        var target = document.getElementById(targetId);
        if (!target) return;
        await window.UICore.copyToClipboard(target.innerText, button, 'Copy JSON');
      });
    });

    var secretCopyButton = document.querySelector('[data-secret-copy]');
    if (secretCopyButton) {
      secretCopyButton.addEventListener('click', async function() {
        var secretInput = document.getElementById('disclosedSecretValue');
        if (!secretInput) return;
        await window.UICore.copyToClipboard(secretInput.value, secretCopyButton, 'Copy');
      });
    }
  }

  function getUserPolicies(accessKey) {
    var user = users.find(function(u) { return u.access_key === accessKey; });
    return user ? JSON.stringify(user.policies, null, 2) : '';
  }

  function applyPolicyTemplate(name, textareaEl) {
    if (policyTemplates[name] && textareaEl) {
      textareaEl.value = JSON.stringify(policyTemplates[name], null, 2);
    }
  }

  function setupPolicyEditor() {
    var userLabelEl = document.getElementById('policyEditorUserLabel');
    var userInputEl = document.getElementById('policyEditorUser');
    var textareaEl = document.getElementById('policyEditorDocument');

    document.querySelectorAll('[data-policy-template]').forEach(function(button) {
      button.addEventListener('click', function() {
        applyPolicyTemplate(button.dataset.policyTemplate, textareaEl);
      });
    });

    document.querySelectorAll('[data-policy-editor]').forEach(function(button) {
      button.addEventListener('click', function() {
        var key = button.getAttribute('data-access-key');
        if (!key) return;

        userLabelEl.textContent = key;
        userInputEl.value = key;
        textareaEl.value = getUserPolicies(key);

        policyModal.show();
      });
    });
  }

  function setupCreateUserModal() {
    var createUserPoliciesEl = document.getElementById('createUserPolicies');

    document.querySelectorAll('[data-create-policy-template]').forEach(function(button) {
      button.addEventListener('click', function() {
        applyPolicyTemplate(button.dataset.createPolicyTemplate, createUserPoliciesEl);
      });
    });
  }

  function setupEditUserModal() {
    var editUserForm = document.getElementById('editUserForm');
    var editUserDisplayName = document.getElementById('editUserDisplayName');

    document.querySelectorAll('[data-edit-user]').forEach(function(btn) {
      btn.addEventListener('click', function() {
        var key = btn.dataset.editUser;
        var name = btn.dataset.displayName;
        currentEditKey = key;
        editUserDisplayName.value = name;
        editUserForm.action = endpoints.updateUser.replace('ACCESS_KEY', key);
        editUserModal.show();
      });
    });
  }

  function setupDeleteUserModal() {
    var deleteUserForm = document.getElementById('deleteUserForm');
    var deleteUserLabel = document.getElementById('deleteUserLabel');
    var deleteSelfWarning = document.getElementById('deleteSelfWarning');

    document.querySelectorAll('[data-delete-user]').forEach(function(btn) {
      btn.addEventListener('click', function() {
        var key = btn.dataset.deleteUser;
        currentDeleteKey = key;
        deleteUserLabel.textContent = key;
        deleteUserForm.action = endpoints.deleteUser.replace('ACCESS_KEY', key);

        if (key === currentUserKey) {
          deleteSelfWarning.classList.remove('d-none');
        } else {
          deleteSelfWarning.classList.add('d-none');
        }

        deleteUserModal.show();
      });
    });
  }

  function setupRotateSecretModal() {
    var rotateUserLabel = document.getElementById('rotateUserLabel');
    var confirmRotateBtn = document.getElementById('confirmRotateBtn');
    var rotateCancelBtn = document.getElementById('rotateCancelBtn');
    var rotateDoneBtn = document.getElementById('rotateDoneBtn');
    var rotateSecretConfirm = document.getElementById('rotateSecretConfirm');
    var rotateSecretResult = document.getElementById('rotateSecretResult');
    var newSecretKeyInput = document.getElementById('newSecretKey');
    var copyNewSecretBtn = document.getElementById('copyNewSecret');

    document.querySelectorAll('[data-rotate-user]').forEach(function(btn) {
      btn.addEventListener('click', function() {
        currentRotateKey = btn.dataset.rotateUser;
        rotateUserLabel.textContent = currentRotateKey;

        rotateSecretConfirm.classList.remove('d-none');
        rotateSecretResult.classList.add('d-none');
        confirmRotateBtn.classList.remove('d-none');
        rotateCancelBtn.classList.remove('d-none');
        rotateDoneBtn.classList.add('d-none');

        rotateSecretModal.show();
      });
    });

    if (confirmRotateBtn) {
      confirmRotateBtn.addEventListener('click', async function() {
        if (!currentRotateKey) return;

        window.UICore.setButtonLoading(confirmRotateBtn, true, 'Rotating...');

        try {
          var url = endpoints.rotateSecret.replace('ACCESS_KEY', currentRotateKey);
          var response = await fetch(url, {
            method: 'POST',
            headers: {
              'Accept': 'application/json',
              'X-CSRFToken': csrfToken
            }
          });

          if (!response.ok) {
            var data = await response.json();
            throw new Error(data.error || 'Failed to rotate secret');
          }

          var data = await response.json();
          newSecretKeyInput.value = data.secret_key;

          rotateSecretConfirm.classList.add('d-none');
          rotateSecretResult.classList.remove('d-none');
          confirmRotateBtn.classList.add('d-none');
          rotateCancelBtn.classList.add('d-none');
          rotateDoneBtn.classList.remove('d-none');

        } catch (err) {
          if (window.showToast) {
            window.showToast(err.message, 'Error', 'danger');
          }
          rotateSecretModal.hide();
        } finally {
          window.UICore.setButtonLoading(confirmRotateBtn, false);
        }
      });
    }

    if (copyNewSecretBtn) {
      copyNewSecretBtn.addEventListener('click', async function() {
        await window.UICore.copyToClipboard(newSecretKeyInput.value, copyNewSecretBtn, 'Copy');
      });
    }

    if (rotateDoneBtn) {
      rotateDoneBtn.addEventListener('click', function() {
        window.location.reload();
      });
    }
  }

  function createUserCardHtml(accessKey, displayName, policies) {
    var policyBadges = '';
    if (policies && policies.length > 0) {
      policyBadges = policies.map(function(p) {
        var actionText = p.actions && p.actions.includes('*') ? 'full' : (p.actions ? p.actions.length : 0);
        return '<span class="badge bg-primary bg-opacity-10 text-primary">' +
          '<svg xmlns="http://www.w3.org/2000/svg" width="10" height="10" fill="currentColor" class="me-1" viewBox="0 0 16 16">' +
          '<path d="M2.522 5H2a.5.5 0 0 0-.494.574l1.372 9.149A1.5 1.5 0 0 0 4.36 16h7.278a1.5 1.5 0 0 0 1.483-1.277l1.373-9.149A.5.5 0 0 0 14 5h-.522A5.5 5.5 0 0 0 2.522 5zm1.005 0a4.5 4.5 0 0 1 8.945 0H3.527z"/>' +
          '</svg>' + window.UICore.escapeHtml(p.bucket) +
          '<span class="opacity-75">(' + actionText + ')</span></span>';
      }).join('');
    } else {
      policyBadges = '<span class="badge bg-secondary bg-opacity-10 text-secondary">No policies</span>';
    }

    return '<div class="col-md-6 col-xl-4">' +
      '<div class="card h-100 iam-user-card">' +
      '<div class="card-body">' +
      '<div class="d-flex align-items-start justify-content-between mb-3">' +
      '<div class="d-flex align-items-center gap-3 min-width-0 overflow-hidden">' +
      '<div class="user-avatar user-avatar-lg flex-shrink-0">' +
      '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" fill="currentColor" viewBox="0 0 16 16">' +
      '<path d="M8 8a3 3 0 1 0 0-6 3 3 0 0 0 0 6zm2-3a2 2 0 1 1-4 0 2 2 0 0 1 4 0zm4 8c0 1-1 1-1 1H3s-1 0-1-1 1-4 6-4 6 3 6 4zm-1-.004c-.001-.246-.154-.986-.832-1.664C11.516 10.68 10.289 10 8 10c-2.29 0-3.516.68-4.168 1.332-.678.678-.83 1.418-.832 1.664h10z"/>' +
      '</svg></div>' +
      '<div class="min-width-0">' +
      '<h6 class="fw-semibold mb-0 text-truncate" title="' + window.UICore.escapeHtml(displayName) + '">' + window.UICore.escapeHtml(displayName) + '</h6>' +
      '<code class="small text-muted d-block text-truncate" title="' + window.UICore.escapeHtml(accessKey) + '">' + window.UICore.escapeHtml(accessKey) + '</code>' +
      '</div></div>' +
      '<div class="dropdown flex-shrink-0">' +
      '<button class="btn btn-sm btn-icon" type="button" data-bs-toggle="dropdown" aria-expanded="false">' +
      '<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" viewBox="0 0 16 16">' +
      '<path d="M9.5 13a1.5 1.5 0 1 1-3 0 1.5 1.5 0 0 1 3 0zm0-5a1.5 1.5 0 1 1-3 0 1.5 1.5 0 0 1 3 0zm0-5a1.5 1.5 0 1 1-3 0 1.5 1.5 0 0 1 3 0z"/>' +
      '</svg></button>' +
      '<ul class="dropdown-menu dropdown-menu-end">' +
      '<li><button class="dropdown-item" type="button" data-edit-user="' + window.UICore.escapeHtml(accessKey) + '" data-display-name="' + window.UICore.escapeHtml(displayName) + '">' +
      '<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-2" viewBox="0 0 16 16"><path d="M12.146.146a.5.5 0 0 1 .708 0l3 3a.5.5 0 0 1 0 .708l-10 10a.5.5 0 0 1-.168.11l-5 2a.5.5 0 0 1-.65-.65l2-5a.5.5 0 0 1 .11-.168l10-10zM11.207 2.5 13.5 4.793 14.793 3.5 12.5 1.207 11.207 2.5zm1.586 3L10.5 3.207 4 9.707V10h.5a.5.5 0 0 1 .5.5v.5h.5a.5.5 0 0 1 .5.5v.5h.293l6.5-6.5z"/></svg>Edit Name</button></li>' +
      '<li><button class="dropdown-item" type="button" data-rotate-user="' + window.UICore.escapeHtml(accessKey) + '">' +
      '<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-2" viewBox="0 0 16 16"><path d="M11.534 7h3.932a.25.25 0 0 1 .192.41l-1.966 2.36a.25.25 0 0 1-.384 0l-1.966-2.36a.25.25 0 0 1 .192-.41zm-11 2h3.932a.25.25 0 0 0 .192-.41L2.692 6.23a.25.25 0 0 0-.384 0L.342 8.59A.25.25 0 0 0 .534 9z"/><path fill-rule="evenodd" d="M8 3c-1.552 0-2.94.707-3.857 1.818a.5.5 0 1 1-.771-.636A6.002 6.002 0 0 1 13.917 7H12.9A5.002 5.002 0 0 0 8 3zM3.1 9a5.002 5.002 0 0 0 8.757 2.182.5.5 0 1 1 .771.636A6.002 6.002 0 0 1 2.083 9H3.1z"/></svg>Rotate Secret</button></li>' +
      '<li><hr class="dropdown-divider"></li>' +
      '<li><button class="dropdown-item text-danger" type="button" data-delete-user="' + window.UICore.escapeHtml(accessKey) + '">' +
      '<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-2" viewBox="0 0 16 16"><path d="M5.5 5.5a.5.5 0 0 1 .5.5v6a.5.5 0 0 1-1 0v-6a.5.5 0 0 1 .5-.5zm2.5 0a.5.5 0 0 1 .5.5v6a.5.5 0 0 1-1 0v-6a.5.5 0 0 1 .5-.5zm3 .5v6a.5.5 0 0 1-1 0v-6a.5.5 0 0 1 1 0z"/><path fill-rule="evenodd" d="M14.5 3a1 1 0 0 1-1 1H13v9a2 2 0 0 1-2 2H5a2 2 0 0 1-2-2V4h-.5a1 1 0 0 1-1-1V2a1 1 0 0 1 1-1H6a1 1 0 0 1 1-1h2a1 1 0 0 1 1 1h3.5a1 1 0 0 1 1 1v1zM4.118 4 4 4.059V13a1 1 0 0 0 1 1h6a1 1 0 0 0 1-1V4.059L11.882 4H4.118zM2.5 3V2h11v1h-11z"/></svg>Delete User</button></li>' +
      '</ul></div></div>' +
      '<div class="mb-3">' +
      '<div class="small text-muted mb-2">Bucket Permissions</div>' +
      '<div class="d-flex flex-wrap gap-1">' + policyBadges + '</div></div>' +
      '<button class="btn btn-outline-primary btn-sm w-100" type="button" data-policy-editor data-access-key="' + window.UICore.escapeHtml(accessKey) + '">' +
      '<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-1" viewBox="0 0 16 16"><path d="M8 4.754a3.246 3.246 0 1 0 0 6.492 3.246 3.246 0 0 0 0-6.492zM5.754 8a2.246 2.246 0 1 1 4.492 0 2.246 2.246 0 0 1-4.492 0z"/><path d="M9.796 1.343c-.527-1.79-3.065-1.79-3.592 0l-.094.319a.873.873 0 0 1-1.255.52l-.292-.16c-1.64-.892-3.433.902-2.54 2.541l.159.292a.873.873 0 0 1-.52 1.255l-.319.094c-1.79.527-1.79 3.065 0 3.592l.319.094a.873.873 0 0 1 .52 1.255l-.16.292c-.892 1.64.901 3.434 2.541 2.54l.292-.159a.873.873 0 0 1 1.255.52l.094.319c.527 1.79 3.065 1.79 3.592 0l.094-.319a.873.873 0 0 1 1.255-.52l.292.16c1.64.893 3.434-.902 2.54-2.541l-.159-.292a.873.873 0 0 1 .52-1.255l.319-.094c1.79-.527 1.79-3.065 0-3.592l-.319-.094a.873.873 0 0 1-.52-1.255l.16-.292c.893-1.64-.902-3.433-2.541-2.54l-.292.159a.873.873 0 0 1-1.255-.52l-.094-.319z"/></svg>Manage Policies</button>' +
      '</div></div></div>';
  }

  function attachUserCardHandlers(cardElement, accessKey, displayName) {
    var editBtn = cardElement.querySelector('[data-edit-user]');
    if (editBtn) {
      editBtn.addEventListener('click', function() {
        currentEditKey = accessKey;
        document.getElementById('editUserDisplayName').value = displayName;
        document.getElementById('editUserForm').action = endpoints.updateUser.replace('ACCESS_KEY', accessKey);
        editUserModal.show();
      });
    }

    var deleteBtn = cardElement.querySelector('[data-delete-user]');
    if (deleteBtn) {
      deleteBtn.addEventListener('click', function() {
        currentDeleteKey = accessKey;
        document.getElementById('deleteUserLabel').textContent = accessKey;
        document.getElementById('deleteUserForm').action = endpoints.deleteUser.replace('ACCESS_KEY', accessKey);
        var deleteSelfWarning = document.getElementById('deleteSelfWarning');
        if (accessKey === currentUserKey) {
          deleteSelfWarning.classList.remove('d-none');
        } else {
          deleteSelfWarning.classList.add('d-none');
        }
        deleteUserModal.show();
      });
    }

    var rotateBtn = cardElement.querySelector('[data-rotate-user]');
    if (rotateBtn) {
      rotateBtn.addEventListener('click', function() {
        currentRotateKey = accessKey;
        document.getElementById('rotateUserLabel').textContent = accessKey;
        document.getElementById('rotateSecretConfirm').classList.remove('d-none');
        document.getElementById('rotateSecretResult').classList.add('d-none');
        document.getElementById('confirmRotateBtn').classList.remove('d-none');
        document.getElementById('rotateCancelBtn').classList.remove('d-none');
        document.getElementById('rotateDoneBtn').classList.add('d-none');
        rotateSecretModal.show();
      });
    }

    var policyBtn = cardElement.querySelector('[data-policy-editor]');
    if (policyBtn) {
      policyBtn.addEventListener('click', function() {
        document.getElementById('policyEditorUserLabel').textContent = accessKey;
        document.getElementById('policyEditorUser').value = accessKey;
        document.getElementById('policyEditorDocument').value = getUserPolicies(accessKey);
        policyModal.show();
      });
    }
  }

  function updateUserCount() {
    var countEl = document.querySelector('.card-header .text-muted.small');
    if (countEl) {
      var count = document.querySelectorAll('.iam-user-card').length;
      countEl.textContent = count + ' user' + (count !== 1 ? 's' : '') + ' configured';
    }
  }

  function setupFormHandlers() {
    var createUserForm = document.querySelector('#createUserModal form');
    if (createUserForm) {
      createUserForm.addEventListener('submit', function(e) {
        e.preventDefault();
        window.UICore.submitFormAjax(createUserForm, {
          successMessage: 'User created',
          onSuccess: function(data) {
            var modal = bootstrap.Modal.getInstance(document.getElementById('createUserModal'));
            if (modal) modal.hide();
            createUserForm.reset();

            var existingAlert = document.querySelector('.alert.alert-info.border-0.shadow-sm');
            if (existingAlert) existingAlert.remove();

            if (data.secret_key) {
              var alertHtml = '<div class="alert alert-info border-0 shadow-sm mb-4" role="alert" id="newUserSecretAlert">' +
                '<div class="d-flex align-items-start gap-2 mb-2">' +
                '<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" class="bi bi-key flex-shrink-0 mt-1" viewBox="0 0 16 16">' +
                '<path d="M0 8a4 4 0 0 1 7.465-2H14a.5.5 0 0 1 .354.146l1.5 1.5a.5.5 0 0 1 0 .708l-1.5 1.5a.5.5 0 0 1-.708 0L13 9.207l-.646.647a.5.5 0 0 1-.708 0L11 9.207l-.646.647a.5.5 0 0 1-.708 0L9 9.207l-.646.647A.5.5 0 0 1 8 10h-.535A4 4 0 0 1 0 8zm4-3a3 3 0 1 0 2.712 4.285A.5.5 0 0 1 7.163 9h.63l.853-.854a.5.5 0 0 1 .708 0l.646.647.646-.647a.5.5 0 0 1 .708 0l.646.647.646-.647a.5.5 0 0 1 .708 0l.646.647.793-.793-1-1h-6.63a.5.5 0 0 1-.451-.285A3 3 0 0 0 4 5z"/><path d="M4 8a1 1 0 1 1-2 0 1 1 0 0 1 2 0z"/>' +
                '</svg>' +
                '<div class="flex-grow-1">' +
                '<div class="fw-semibold">New user created: <code>' + window.UICore.escapeHtml(data.access_key) + '</code></div>' +
                '<p class="mb-2 small">This secret is only shown once. Copy it now and store it securely.</p>' +
                '</div>' +
                '<button type="button" class="btn-close" data-bs-dismiss="alert" aria-label="Close"></button>' +
                '</div>' +
                '<div class="input-group">' +
                '<span class="input-group-text"><strong>Secret key</strong></span>' +
                '<input class="form-control font-monospace" type="text" value="' + window.UICore.escapeHtml(data.secret_key) + '" readonly id="newUserSecret" />' +
                '<button class="btn btn-outline-primary" type="button" id="copyNewUserSecret">Copy</button>' +
                '</div></div>';
              var container = document.querySelector('.page-header');
              if (container) {
                container.insertAdjacentHTML('afterend', alertHtml);
                document.getElementById('copyNewUserSecret').addEventListener('click', async function() {
                  await window.UICore.copyToClipboard(data.secret_key, this, 'Copy');
                });
              }
            }

            var usersGrid = document.querySelector('.row.g-3');
            var emptyState = document.querySelector('.empty-state');
            if (emptyState) {
              var emptyCol = emptyState.closest('.col-12');
              if (emptyCol) emptyCol.remove();
              if (!usersGrid) {
                var cardBody = document.querySelector('.card-body.px-4.pb-4');
                if (cardBody) {
                  cardBody.innerHTML = '<div class="row g-3"></div>';
                  usersGrid = cardBody.querySelector('.row.g-3');
                }
              }
            }

            if (usersGrid) {
              var cardHtml = createUserCardHtml(data.access_key, data.display_name, data.policies);
              usersGrid.insertAdjacentHTML('beforeend', cardHtml);
              var newCard = usersGrid.lastElementChild;
              attachUserCardHandlers(newCard, data.access_key, data.display_name);
              users.push({
                access_key: data.access_key,
                display_name: data.display_name,
                policies: data.policies || []
              });
              updateUserCount();
            }
          }
        });
      });
    }

    var policyEditorForm = document.getElementById('policyEditorForm');
    if (policyEditorForm) {
      policyEditorForm.addEventListener('submit', function(e) {
        e.preventDefault();
        var userInputEl = document.getElementById('policyEditorUser');
        var key = userInputEl.value;
        if (!key) return;

        var template = policyEditorForm.dataset.actionTemplate;
        policyEditorForm.action = template.replace('ACCESS_KEY_PLACEHOLDER', key);

        window.UICore.submitFormAjax(policyEditorForm, {
          successMessage: 'Policies updated',
          onSuccess: function(data) {
            policyModal.hide();

            var userCard = document.querySelector('[data-access-key="' + key + '"]');
            if (userCard) {
              var badgeContainer = userCard.closest('.iam-user-card').querySelector('.d-flex.flex-wrap.gap-1');
              if (badgeContainer && data.policies) {
                var badges = data.policies.map(function(p) {
                  return '<span class="badge bg-primary bg-opacity-10 text-primary">' +
                    '<svg xmlns="http://www.w3.org/2000/svg" width="10" height="10" fill="currentColor" class="me-1" viewBox="0 0 16 16">' +
                    '<path d="M2.522 5H2a.5.5 0 0 0-.494.574l1.372 9.149A1.5 1.5 0 0 0 4.36 16h7.278a1.5 1.5 0 0 0 1.483-1.277l1.373-9.149A.5.5 0 0 0 14 5h-.522A5.5 5.5 0 0 0 2.522 5zm1.005 0a4.5 4.5 0 0 1 8.945 0H3.527z"/>' +
                    '</svg>' + window.UICore.escapeHtml(p.bucket) +
                    '<span class="opacity-75">(' + (p.actions.includes('*') ? 'full' : p.actions.length) + ')</span></span>';
                }).join('');
                badgeContainer.innerHTML = badges || '<span class="badge bg-secondary bg-opacity-10 text-secondary">No policies</span>';
              }
            }

            var userIndex = users.findIndex(function(u) { return u.access_key === key; });
            if (userIndex >= 0 && data.policies) {
              users[userIndex].policies = data.policies;
            }
          }
        });
      });
    }

    var editUserForm = document.getElementById('editUserForm');
    if (editUserForm) {
      editUserForm.addEventListener('submit', function(e) {
        e.preventDefault();
        var key = currentEditKey;
        window.UICore.submitFormAjax(editUserForm, {
          successMessage: 'User updated',
          onSuccess: function(data) {
            editUserModal.hide();

            var newName = data.display_name || document.getElementById('editUserDisplayName').value;
            var editBtn = document.querySelector('[data-edit-user="' + key + '"]');
            if (editBtn) {
              editBtn.setAttribute('data-display-name', newName);
              var card = editBtn.closest('.iam-user-card');
              if (card) {
                var nameEl = card.querySelector('h6');
                if (nameEl) {
                  nameEl.textContent = newName;
                  nameEl.title = newName;
                }
              }
            }

            var userIndex = users.findIndex(function(u) { return u.access_key === key; });
            if (userIndex >= 0) {
              users[userIndex].display_name = newName;
            }

            if (key === currentUserKey) {
              document.querySelectorAll('.sidebar-user .user-name').forEach(function(el) {
                var truncated = newName.length > 16 ? newName.substring(0, 16) + '...' : newName;
                el.textContent = truncated;
                el.title = newName;
              });
              document.querySelectorAll('.sidebar-user[data-username]').forEach(function(el) {
                el.setAttribute('data-username', newName);
              });
            }
          }
        });
      });
    }

    var deleteUserForm = document.getElementById('deleteUserForm');
    if (deleteUserForm) {
      deleteUserForm.addEventListener('submit', function(e) {
        e.preventDefault();
        var key = currentDeleteKey;
        window.UICore.submitFormAjax(deleteUserForm, {
          successMessage: 'User deleted',
          onSuccess: function(data) {
            deleteUserModal.hide();

            if (key === currentUserKey) {
              window.location.href = '/ui/';
              return;
            }

            var deleteBtn = document.querySelector('[data-delete-user="' + key + '"]');
            if (deleteBtn) {
              var cardCol = deleteBtn.closest('[class*="col-"]');
              if (cardCol) {
                cardCol.remove();
              }
            }

            users = users.filter(function(u) { return u.access_key !== key; });
            updateUserCount();
          }
        });
      });
    }
  }

  return {
    init: init
  };
})();
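The config contract of IAMManagement.init is fully visible above. A minimal, illustrative call sketch (the endpoint paths and key values are assumptions; only the config keys and the literal ACCESS_KEY token, which the module substitutes via String.replace, come from the code):

    // Illustrative bootstrap; paths and keys are placeholders, not the app's real routes.
    window.IAMManagement.init({
      users: [{ access_key: 'AKIAEXAMPLE', display_name: 'admin', policies: [] }],
      currentUserKey: 'AKIAEXAMPLE',
      csrfToken: window.UICore.getCsrfToken(),
      iamLocked: false,
      endpoints: {
        updateUser: '/ui/iam/users/ACCESS_KEY',
        deleteUser: '/ui/iam/users/ACCESS_KEY/delete',
        rotateSecret: '/ui/iam/users/ACCESS_KEY/rotate'
      }
    });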
@@ -1,324 +0,0 @@
window.UICore = (function() {
  'use strict';

  function getCsrfToken() {
    const meta = document.querySelector('meta[name="csrf-token"]');
    return meta ? meta.getAttribute('content') : '';
  }

  function formatBytes(bytes) {
    if (!Number.isFinite(bytes)) return bytes + ' bytes';
    const units = ['bytes', 'KB', 'MB', 'GB', 'TB'];
    let i = 0;
    let size = bytes;
    while (size >= 1024 && i < units.length - 1) {
      size /= 1024;
      i++;
    }
    return size.toFixed(i === 0 ? 0 : 1) + ' ' + units[i];
  }

  function escapeHtml(value) {
    if (value === null || value === undefined) return '';
    return String(value)
      .replace(/&/g, '&amp;')
      .replace(/</g, '&lt;')
      .replace(/>/g, '&gt;')
      .replace(/"/g, '&quot;')
      .replace(/'/g, '&#39;');
  }

  async function submitFormAjax(form, options) {
    options = options || {};
    var onSuccess = options.onSuccess || function() {};
    var onError = options.onError || function() {};
    var successMessage = options.successMessage || 'Operation completed';

    var formData = new FormData(form);
    var csrfToken = getCsrfToken();
    var submitBtn = form.querySelector('[type="submit"]');
    var originalHtml = submitBtn ? submitBtn.innerHTML : '';

    try {
      if (submitBtn) {
        submitBtn.disabled = true;
        submitBtn.innerHTML = '<span class="spinner-border spinner-border-sm me-1"></span>Saving...';
      }

      var formAction = form.getAttribute('action') || form.action;
      var response = await fetch(formAction, {
        method: form.getAttribute('method') || 'POST',
        headers: {
          'X-CSRFToken': csrfToken,
          'Accept': 'application/json',
          'X-Requested-With': 'XMLHttpRequest'
        },
        body: formData,
        redirect: 'follow'
      });

      var contentType = response.headers.get('content-type') || '';
      if (!contentType.includes('application/json')) {
        throw new Error('Server returned an unexpected response. Please try again.');
      }

      var data = await response.json();

      if (!response.ok) {
        throw new Error(data.error || 'HTTP ' + response.status);
      }

      window.showToast(data.message || successMessage, 'Success', 'success');
      onSuccess(data);

    } catch (err) {
      window.showToast(err.message, 'Error', 'error');
      onError(err);
    } finally {
      if (submitBtn) {
        submitBtn.disabled = false;
        submitBtn.innerHTML = originalHtml;
      }
    }
  }

  function PollingManager() {
    this.intervals = {};
    this.callbacks = {};
    this.timers = {};
    this.defaults = {
      replication: 30000,
      lifecycle: 60000,
      connectionHealth: 60000,
      bucketStats: 120000
    };
    this._loadSettings();
  }

  PollingManager.prototype._loadSettings = function() {
    try {
      var stored = localStorage.getItem('myfsio-polling-intervals');
      if (stored) {
        var settings = JSON.parse(stored);
        for (var key in settings) {
          if (settings.hasOwnProperty(key)) {
            this.defaults[key] = settings[key];
          }
        }
      }
    } catch (e) {
      console.warn('Failed to load polling settings:', e);
    }
  };

  PollingManager.prototype.saveSettings = function(settings) {
    try {
      for (var key in settings) {
        if (settings.hasOwnProperty(key)) {
          this.defaults[key] = settings[key];
        }
      }
      localStorage.setItem('myfsio-polling-intervals', JSON.stringify(this.defaults));
    } catch (e) {
      console.warn('Failed to save polling settings:', e);
    }
  };

  PollingManager.prototype.start = function(key, callback, interval) {
    this.stop(key);
    var ms = interval !== undefined ? interval : (this.defaults[key] || 30000);
    if (ms <= 0) return;

    this.callbacks[key] = callback;
    this.intervals[key] = ms;

    callback();

    var self = this;
    this.timers[key] = setInterval(function() {
      if (!document.hidden) {
        callback();
      }
    }, ms);
  };

  PollingManager.prototype.stop = function(key) {
    if (this.timers[key]) {
      clearInterval(this.timers[key]);
      delete this.timers[key];
    }
  };

  PollingManager.prototype.stopAll = function() {
    for (var key in this.timers) {
      if (this.timers.hasOwnProperty(key)) {
        clearInterval(this.timers[key]);
      }
    }
    this.timers = {};
  };

  PollingManager.prototype.updateInterval = function(key, newInterval) {
    var callback = this.callbacks[key];
    this.defaults[key] = newInterval;
    this.saveSettings(this.defaults);
    if (callback) {
      this.start(key, callback, newInterval);
    }
  };

  PollingManager.prototype.getSettings = function() {
    var result = {};
    for (var key in this.defaults) {
      if (this.defaults.hasOwnProperty(key)) {
        result[key] = this.defaults[key];
      }
    }
    return result;
  };

  var pollingManager = new PollingManager();

  document.addEventListener('visibilitychange', function() {
    if (document.hidden) {
      pollingManager.stopAll();
    } else {
      for (var key in pollingManager.callbacks) {
        if (pollingManager.callbacks.hasOwnProperty(key)) {
          pollingManager.start(key, pollingManager.callbacks[key], pollingManager.intervals[key]);
        }
      }
    }
  });

  return {
    getCsrfToken: getCsrfToken,
    formatBytes: formatBytes,
    escapeHtml: escapeHtml,
    submitFormAjax: submitFormAjax,
    PollingManager: PollingManager,
    pollingManager: pollingManager
  };
})();

window.pollingManager = window.UICore.pollingManager;

window.UICore.copyToClipboard = async function(text, button, originalText) {
  try {
    await navigator.clipboard.writeText(text);
    if (button) {
      var prevText = button.textContent;
      button.textContent = 'Copied!';
      setTimeout(function() {
        button.textContent = originalText || prevText;
      }, 1500);
    }
    return true;
  } catch (err) {
    console.error('Copy failed:', err);
    return false;
  }
};

window.UICore.setButtonLoading = function(button, isLoading, loadingText) {
  if (!button) return;
  if (isLoading) {
    button._originalHtml = button.innerHTML;
    button._originalDisabled = button.disabled;
    button.disabled = true;
    button.innerHTML = '<span class="spinner-border spinner-border-sm me-1"></span>' + (loadingText || 'Loading...');
  } else {
    button.disabled = button._originalDisabled || false;
    button.innerHTML = button._originalHtml || button.innerHTML;
  }
};

window.UICore.updateBadgeCount = function(selector, count, singular, plural) {
  var badge = document.querySelector(selector);
  if (badge) {
    var label = count === 1 ? (singular || '') : (plural || 's');
    badge.textContent = count + ' ' + label;
  }
};

window.UICore.setupJsonAutoIndent = function(textarea) {
  if (!textarea) return;

  textarea.addEventListener('keydown', function(e) {
    if (e.key === 'Enter') {
      e.preventDefault();

      var start = this.selectionStart;
      var end = this.selectionEnd;
      var value = this.value;

      var lineStart = value.lastIndexOf('\n', start - 1) + 1;
      var currentLine = value.substring(lineStart, start);

      var indentMatch = currentLine.match(/^(\s*)/);
      var indent = indentMatch ? indentMatch[1] : '';

      var trimmedLine = currentLine.trim();
      var lastChar = trimmedLine.slice(-1);

      var newIndent = indent;
      var insertAfter = '';

      if (lastChar === '{' || lastChar === '[') {
        newIndent = indent + '  ';

        var charAfterCursor = value.substring(start, start + 1).trim();
        if ((lastChar === '{' && charAfterCursor === '}') ||
            (lastChar === '[' && charAfterCursor === ']')) {
          insertAfter = '\n' + indent;
        }
      } else if (lastChar === ',' || lastChar === ':') {
        newIndent = indent;
      }

      var insertion = '\n' + newIndent + insertAfter;
      var newValue = value.substring(0, start) + insertion + value.substring(end);

      this.value = newValue;

      var newCursorPos = start + 1 + newIndent.length;
      this.selectionStart = this.selectionEnd = newCursorPos;

      this.dispatchEvent(new Event('input', { bubbles: true }));
    }

    if (e.key === 'Tab') {
      e.preventDefault();
      var start = this.selectionStart;
      var end = this.selectionEnd;

      if (e.shiftKey) {
        var lineStart = this.value.lastIndexOf('\n', start - 1) + 1;
        var lineContent = this.value.substring(lineStart, start);
        if (lineContent.startsWith('  ')) {
          this.value = this.value.substring(0, lineStart) +
            this.value.substring(lineStart + 2);
          this.selectionStart = this.selectionEnd = Math.max(lineStart, start - 2);
        }
      } else {
        this.value = this.value.substring(0, start) + '  ' + this.value.substring(end);
        this.selectionStart = this.selectionEnd = start + 2;
      }

      this.dispatchEvent(new Event('input', { bubbles: true }));
    }
  });
};

document.addEventListener('DOMContentLoaded', function() {
  var flashMessage = sessionStorage.getItem('flashMessage');
  if (flashMessage) {
    sessionStorage.removeItem('flashMessage');
    try {
      var msg = JSON.parse(flashMessage);
      if (window.showToast) {
        window.showToast(msg.body || msg.title, msg.title, msg.variant || 'info');
      }
    } catch (e) {}
  }
});
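UICore's PollingManager persists per-key intervals under the localStorage key myfsio-polling-intervals and skips ticks while the tab is hidden (the visibilitychange handler above also stops and restarts all timers). A usage sketch against the exported singleton; the poll key and its 60000 ms default come from the defaults table above, while the callback body is illustrative:

    // Start polling: fires the callback immediately, then every interval.
    window.UICore.pollingManager.start('connectionHealth', function() {
      console.log('poll tick at', new Date().toISOString());
    });
    // Persist a new interval and restart the timer with it.
    window.UICore.pollingManager.updateInterval('connectionHealth', 30000);
    // Stop this key's timer (stopAll() clears every key).
    window.UICore.pollingManager.stop('connectionHealth');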
@@ -5,8 +5,8 @@
 <meta name="viewport" content="width=device-width, initial-scale=1" />
 {% if principal %}<meta name="csrf-token" content="{{ csrf_token() }}" />{% endif %}
 <title>MyFSIO Console</title>
-<link rel="icon" type="image/png" href="{{ url_for('static', filename='images/MyFSIO.png') }}" />
-<link rel="icon" type="image/x-icon" href="{{ url_for('static', filename='images/MyFSIO.ico') }}" />
+<link rel="icon" type="image/png" href="{{ url_for('static', filename='images/MyFISO.png') }}" />
+<link rel="icon" type="image/x-icon" href="{{ url_for('static', filename='images/MyFISO.ico') }}" />
 <link
   href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.2/dist/css/bootstrap.min.css"
   rel="stylesheet"
@@ -24,218 +24,105 @@
       document.documentElement.dataset.bsTheme = 'light';
       document.documentElement.dataset.theme = 'light';
     }
-    try {
-      if (localStorage.getItem('myfsio-sidebar-collapsed') === 'true') {
-        document.documentElement.classList.add('sidebar-will-collapse');
-      }
-    } catch (err) {}
   })();
   </script>
   <link rel="stylesheet" href="{{ url_for('static', filename='css/main.css') }}" />
 </head>
 <body>
-  <header class="mobile-header d-lg-none">
-    <button class="sidebar-toggle-btn" type="button" data-bs-toggle="offcanvas" data-bs-target="#mobileSidebar" aria-controls="mobileSidebar" aria-label="Toggle navigation">
-      <svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" fill="currentColor" viewBox="0 0 16 16">
-        <path fill-rule="evenodd" d="M2.5 12a.5.5 0 0 1 .5-.5h10a.5.5 0 0 1 0 1H3a.5.5 0 0 1-.5-.5zm0-4a.5.5 0 0 1 .5-.5h10a.5.5 0 0 1 0 1H3a.5.5 0 0 1-.5-.5zm0-4a.5.5 0 0 1 .5-.5h10a.5.5 0 0 1 0 1H3a.5.5 0 0 1-.5-.5z"/>
-      </svg>
-    </button>
-    <a class="mobile-brand" href="{{ url_for('ui.buckets_overview') }}">
-      <img src="{{ url_for('static', filename='images/MyFSIO.png') }}" alt="MyFSIO logo" width="28" height="28" />
-      <span>MyFSIO</span>
+  <nav class="navbar navbar-expand-lg myfsio-nav shadow-sm">
+    <div class="container-fluid">
+      <a class="navbar-brand fw-semibold" href="{{ url_for('ui.buckets_overview') }}">
+        <img
+          src="{{ url_for('static', filename='images/MyFISO.png') }}"
+          alt="MyFSIO logo"
+          class="myfsio-logo"
+          width="32"
+          height="32"
+          decoding="async"
+        />
+        <span class="myfsio-title">MyFSIO</span>
       </a>
-    <button class="theme-toggle-mobile" type="button" id="themeToggleMobile" aria-label="Toggle dark mode">
-      <svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" fill="currentColor" class="theme-icon-mobile" id="themeToggleSunMobile" viewBox="0 0 16 16">
-        <path d="M8 11.5a3.5 3.5 0 1 1 0-7 3.5 3.5 0 0 1 0 7zm0 1.5a5 5 0 1 0 0-10 5 5 0 0 0 0 10zM8 0a.5.5 0 0 1 .5.5v1.555a.5.5 0 0 1-1 0V.5A.5.5 0 0 1 8 0zm0 12.945a.5.5 0 0 1 .5.5v2.055a.5.5 0 0 1-1 0v-2.055a.5.5 0 0 1 .5-.5zM2.343 2.343a.5.5 0 0 1 .707 0l1.1 1.1a.5.5 0 1 1-.708.707l-1.1-1.1a.5.5 0 0 1 0-.707zm9.507 9.507a.5.5 0 0 1 .707 0l1.1 1.1a.5.5 0 1 1-.707.708l-1.1-1.1a.5.5 0 0 1 0-.708zM0 8a.5.5 0 0 1 .5-.5h1.555a.5.5 0 0 1 0 1H.5A.5.5 0 0 1 0 8zm12.945 0a.5.5 0 0 1 .5-.5H15.5a.5.5 0 0 1 0 1h-2.055a.5.5 0 0 1-.5-.5zM2.343 13.657a.5.5 0 0 1 0-.707l1.1-1.1a.5.5 0 1 1 .708.707l-1.1 1.1a.5.5 0 0 1-.708 0zm9.507-9.507a.5.5 0 0 1 0-.707l1.1-1.1a.5.5 0 0 1 .707.708l-1.1 1.1a.5.5 0 0 1-.707 0z"/>
+      <button class="navbar-toggler" type="button" data-bs-toggle="collapse" data-bs-target="#navContent" aria-controls="navContent" aria-expanded="false" aria-label="Toggle navigation">
+        <span class="navbar-toggler-icon"></span>
+      </button>
+      <div class="collapse navbar-collapse" id="navContent">
+        <ul class="navbar-nav me-auto mb-2 mb-lg-0">
+          {% if principal %}
+          <li class="nav-item">
+            <a class="nav-link" href="{{ url_for('ui.buckets_overview') }}">Buckets</a>
+          </li>
+          {% if can_manage_iam %}
+          <li class="nav-item">
+            <a class="nav-link" href="{{ url_for('ui.iam_dashboard') }}">IAM</a>
+          </li>
+          <li class="nav-item">
+            <a class="nav-link" href="{{ url_for('ui.connections_dashboard') }}">Connections</a>
+          </li>
+          <li class="nav-item">
+            <a class="nav-link" href="{{ url_for('ui.metrics_dashboard') }}">Metrics</a>
+          </li>
+          {% endif %}
+          {% endif %}
+          {% if principal %}
+          <li class="nav-item">
+            <a class="nav-link" href="{{ url_for('ui.docs_page') }}">Docs</a>
+          </li>
+          {% endif %}
+        </ul>
+        <div class="ms-lg-auto d-flex align-items-center gap-3 text-light flex-wrap">
+          <button
+            class="btn btn-outline-light btn-sm theme-toggle"
+            type="button"
+            id="themeToggle"
+            aria-pressed="false"
+            aria-label="Toggle dark mode"
+          >
+            <span id="themeToggleLabel" class="visually-hidden">Toggle dark mode</span>
+            <svg
+              xmlns="http://www.w3.org/2000/svg"
+              width="16"
+              height="16"
+              fill="currentColor"
+              class="theme-icon"
+              id="themeToggleSun"
+              viewBox="0 0 16 16"
+              aria-hidden="true"
+            >
+              <path
+                d="M8 11.5a3.5 3.5 0 1 1 0-7 3.5 3.5 0 0 1 0 7zm0 1.5a5 5 0 1 0 0-10 5 5 0 0 0 0 10zM8 0a.5.5 0 0 1 .5.5v1.555a.5.5 0 0 1-1 0V.5A.5.5 0 0 1 8 0zm0 12.945a.5.5 0 0 1 .5.5v2.055a.5.5 0 0 1-1 0v-2.055a.5.5 0 0 1 .5-.5zM2.343 2.343a.5.5 0 0 1 .707 0l1.1 1.1a.5.5 0 1 1-.708.707l-1.1-1.1a.5.5 0 0 1 0-.707zm9.507 9.507a.5.5 0 0 1 .707 0l1.1 1.1a.5.5 0 1 1-.707.708l-1.1-1.1a.5.5 0 0 1 0-.708zM0 8a.5.5 0 0 1 .5-.5h1.555a.5.5 0 0 1 0 1H.5A.5.5 0 0 1 0 8zm12.945 0a.5.5 0 0 1 .5-.5H15.5a.5.5 0 0 1 0 1h-2.055a.5.5 0 0 1-.5-.5zM2.343 13.657a.5.5 0 0 1 0-.707l1.1-1.1a.5.5 0 1 1 .708.707l-1.1 1.1a.5.5 0 0 1-.708 0zm9.507-9.507a.5.5 0 0 1 0-.707l1.1-1.1a.5.5 0 0 1 .707.708l-1.1 1.1a.5.5 0 0 1-.707 0z"
+              />
       </svg>
-      <svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" fill="currentColor" class="theme-icon-mobile" id="themeToggleMoonMobile" viewBox="0 0 16 16">
+            <svg
+              xmlns="http://www.w3.org/2000/svg"
+              width="16"
+              height="16"
+              fill="currentColor"
+              class="theme-icon d-none"
+              id="themeToggleMoon"
+              viewBox="0 0 16 16"
+              aria-hidden="true"
+            >
         <path d="M6 .278a.768.768 0 0 1 .08.858 7.208 7.208 0 0 0-.878 3.46c0 4.021 3.278 7.277 7.318 7.277.527 0 1.04-.055 1.533-.16a.787.787 0 0 1 .81.316.733.733 0 0 1-.031.893A8.349 8.349 0 0 1 8.344 16C3.734 16 0 12.286 0 7.71 0 4.266 2.114 1.312 5.124.06A.752.752 0 0 1 6 .278z"/>
         <path d="M10.794 3.148a.217.217 0 0 1 .412 0l.387 1.162c.173.518.579.924 1.097 1.097l1.162.387a.217.217 0 0 1 0 .412l-1.162.387a1.734 1.734 0 0 0-1.097 1.097l-.387 1.162a.217.217 0 0 1-.412 0l-.387-1.162A1.734 1.734 0 0 0 9.31 6.593l-1.162-.387a.217.217 0 0 1 0-.412l1.162-.387a1.734 1.734 0 0 0 1.097-1.097l.387-1.162zM13.863.099a.145.145 0 0 1 .274 0l.258.774c.115.346.386.617.732.732l.774.258a.145.145 0 0 1 0 .274l-.774.258a1.156 1.156 0 0 0-.732.732l-.258.774a.145.145 0 0 1-.274 0l-.258-.774a1.156 1.156 0 0 0-.732-.732l-.774-.258a.145.145 0 0 1 0-.274l.774-.258c.346-.115.617-.386.732-.732L13.863.1z"/>
       </svg>
     </button>
-  </header>
-
-  <div class="offcanvas offcanvas-start sidebar-offcanvas" tabindex="-1" id="mobileSidebar" aria-labelledby="mobileSidebarLabel">
-    <div class="offcanvas-header sidebar-header">
-      <a class="sidebar-brand" href="{{ url_for('ui.buckets_overview') }}">
-        <img src="{{ url_for('static', filename='images/MyFSIO.png') }}" alt="MyFSIO logo" class="sidebar-logo" width="36" height="36" />
-        <span class="sidebar-title">MyFSIO</span>
-      </a>
-      <button type="button" class="btn-close btn-close-white" data-bs-dismiss="offcanvas" aria-label="Close"></button>
-    </div>
-    <div class="offcanvas-body sidebar-body">
-      <nav class="sidebar-nav">
       {% if principal %}
-        <div class="nav-section">
-          <span class="nav-section-title">Navigation</span>
-          <a href="{{ url_for('ui.buckets_overview') }}" class="sidebar-link {% if request.endpoint == 'ui.buckets_overview' or request.endpoint == 'ui.bucket_detail' %}active{% endif %}">
-            <svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" viewBox="0 0 16 16">
-              <path d="M2.522 5H2a.5.5 0 0 0-.494.574l1.372 9.149A1.5 1.5 0 0 0 4.36 16h7.278a1.5 1.5 0 0 0 1.483-1.277l1.373-9.149A.5.5 0 0 0 14 5h-.522A5.5 5.5 0 0 0 2.522 5zm1.005 0a4.5 4.5 0 0 1 8.945 0H3.527z"/>
-            </svg>
-            <span>Buckets</span>
-          </a>
-          {% if can_manage_iam %}
-          <a href="{{ url_for('ui.iam_dashboard') }}" class="sidebar-link {% if request.endpoint == 'ui.iam_dashboard' %}active{% endif %}">
-            <svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" viewBox="0 0 16 16">
-              <path d="M15 14s1 0 1-1-1-4-5-4-5 3-5 4 1 1 1 1h8zm-7.978-1A.261.261 0 0 1 7 12.996c.001-.264.167-1.03.76-1.72C8.312 10.629 9.282 10 11 10c1.717 0 2.687.63 3.24 1.276.593.69.758 1.457.76 1.72l-.008.002a.274.274 0 0 1-.014.002H7.022zM11 7a2 2 0 1 0 0-4 2 2 0 0 0 0 4zm3-2a3 3 0 1 1-6 0 3 3 0 0 1 6 0zM6.936 9.28a5.88 5.88 0 0 0-1.23-.247A7.35 7.35 0 0 0 5 9c-4 0-5 3-5 4 0 .667.333 1 1 1h4.216A2.238 2.238 0 0 1 5 13c0-1.01.377-2.042 1.09-2.904.243-.294.526-.569.846-.816zM4.92 10A5.493 5.493 0 0 0 4 13H1c0-.26.164-1.03.76-1.724.545-.636 1.492-1.256 3.16-1.275zM1.5 5.5a3 3 0 1 1 6 0 3 3 0 0 1-6 0zm3-2a2 2 0 1 0 0 4 2 2 0 0 0 0-4z"/>
-            </svg>
-            <span>IAM</span>
-          </a>
-          <a href="{{ url_for('ui.connections_dashboard') }}" class="sidebar-link {% if request.endpoint == 'ui.connections_dashboard' %}active{% endif %}">
-            <svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" viewBox="0 0 16 16">
-              <path fill-rule="evenodd" d="M6 3.5A1.5 1.5 0 0 1 7.5 2h1A1.5 1.5 0 0 1 10 3.5v1A1.5 1.5 0 0 1 8.5 6v1H14a.5.5 0 0 1 .5.5v1a.5.5 0 0 1-1 0V8h-5v.5a.5.5 0 0 1-1 0V8h-5v.5a.5.5 0 0 1-1 0v-1A.5.5 0 0 1 2 7h5.5V6A1.5 1.5 0 0 1 6 4.5v-1zM8.5 5a.5.5 0 0 0 .5-.5v-1a.5.5 0 0 0-.5-.5h-1a.5.5 0 0 0-.5.5v1a.5.5 0 0 0 .5.5h1zM0 11.5A1.5 1.5 0 0 1 1.5 10h1A1.5 1.5 0 0 1 4 11.5v1A1.5 1.5 0 0 1 2.5 14h-1A1.5 1.5 0 0 1 0 12.5v-1zm1.5-.5a.5.5 0 0 0-.5.5v1a.5.5 0 0 0 .5.5h1a.5.5 0 0 0 .5-.5v-1a.5.5 0 0 0-.5-.5h-1zm4.5.5A1.5 1.5 0 0 1 7.5 10h1a1.5 1.5 0 0 1 1.5 1.5v1A1.5 1.5 0 0 1 8.5 14h-1A1.5 1.5 0 0 1 6 12.5v-1zm1.5-.5a.5.5 0 0 0-.5.5v1a.5.5 0 0 0 .5.5h1a.5.5 0 0 0 .5-.5v-1a.5.5 0 0 0-.5-.5h-1zm4.5.5a1.5 1.5 0 0 1 1.5-1.5h1a1.5 1.5 0 0 1 1.5 1.5v1a1.5 1.5 0 0 1-1.5 1.5h-1a1.5 1.5 0 0 1-1.5-1.5v-1zm1.5-.5a.5.5 0 0 0-.5.5v1a.5.5 0 0 0 .5.5h1a.5.5 0 0 0 .5-.5v-1a.5.5 0 0 0-.5-.5h-1z"/>
-            </svg>
-            <span>Connections</span>
-          </a>
-          <a href="{{ url_for('ui.metrics_dashboard') }}" class="sidebar-link {% if request.endpoint == 'ui.metrics_dashboard' %}active{% endif %}">
-            <svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" viewBox="0 0 16 16">
-              <path d="M8 4a.5.5 0 0 1 .5.5V6a.5.5 0 0 1-1 0V4.5A.5.5 0 0 1 8 4zM3.732 5.732a.5.5 0 0 1 .707 0l.915.914a.5.5 0 1 1-.708.708l-.914-.915a.5.5 0 0 1 0-.707zM2 10a.5.5 0 0 1 .5-.5h1.586a.5.5 0 0 1 0 1H2.5A.5.5 0 0 1 2 10zm9.5 0a.5.5 0 0 1 .5-.5h1.5a.5.5 0 0 1 0 1H12a.5.5 0 0 1-.5-.5zm.754-4.246a.389.389 0 0 0-.527-.02L7.547 9.31a.91.91 0 1 0 1.302 1.258l3.434-4.297a.389.389 0 0 0-.029-.518z"/>
-              <path fill-rule="evenodd" d="M0 10a8 8 0 1 1 15.547 2.661c-.442 1.253-1.845 1.602-2.932 1.25C11.309 13.488 9.475 13 8 13c-1.474 0-3.31.488-4.615.911-1.087.352-2.49.003-2.932-1.25A7.988 7.988 0 0 1 0 10zm8-7a7 7 0 0 0-6.603 9.329c.203.575.923.876 1.68.63C4.397 12.533 6.358 12 8 12s3.604.532 4.923.96c.757.245 1.477-.056 1.68-.631A7 7 0 0 0 8 3z"/>
-            </svg>
-            <span>Metrics</span>
-          </a>
-          {% endif %}
+      <div class="text-end small">
+        <div class="fw-semibold" title="{{ principal.display_name }}">{{ principal.display_name | truncate(20, true) }}</div>
+        <div class="opacity-75">{{ principal.access_key }}</div>
       </div>
-        <div class="nav-section">
-          <span class="nav-section-title">Resources</span>
-          <a href="{{ url_for('ui.docs_page') }}" class="sidebar-link {% if request.endpoint == 'ui.docs_page' %}active{% endif %}">
-            <svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" viewBox="0 0 16 16">
-              <path d="M1 2.828c.885-.37 2.154-.769 3.388-.893 1.33-.134 2.458.063 3.112.752v9.746c-.935-.53-2.12-.603-3.213-.493-1.18.12-2.37.461-3.287.811V2.828zm7.5-.141c.654-.689 1.782-.886 3.112-.752 1.234.124 2.503.523 3.388.893v9.923c-.918-.35-2.107-.692-3.287-.81-1.094-.111-2.278-.039-3.213.492V2.687zM8 1.783C7.015.936 5.587.81 4.287.94c-1.514.153-3.042.672-3.994 1.105A.5.5 0 0 0 0 2.5v11a.5.5 0 0 0 .707.455c.882-.4 2.303-.881 3.68-1.02 1.409-.142 2.59.087 3.223.877a.5.5 0 0 0 .78 0c.633-.79 1.814-1.019 3.222-.877 1.378.139 2.8.62 3.681 1.02A.5.5 0 0 0 16 13.5v-11a.5.5 0 0 0-.293-.455c-.952-.433-2.48-.952-3.994-1.105C10.413.809 8.985.936 8 1.783z"/>
-            </svg>
-            <span>Documentation</span>
-          </a>
-        </div>
-      {% endif %}
-      </nav>
-      {% if principal %}
-      <div class="sidebar-footer">
-        <div class="sidebar-user">
-          <div class="user-avatar">
-            <svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" fill="currentColor" viewBox="0 0 16 16">
-              <path d="M11 6a3 3 0 1 1-6 0 3 3 0 0 1 6 0z"/>
-              <path fill-rule="evenodd" d="M0 8a8 8 0 1 1 16 0A8 8 0 0 1 0 8zm8-7a7 7 0 0 0-5.468 11.37C3.242 11.226 4.805 10 8 10s4.757 1.225 5.468 2.37A7 7 0 0 0 8 1z"/>
-            </svg>
-          </div>
-          <div class="user-info">
-            <div class="user-name" title="{{ principal.display_name }}">{{ principal.display_name | truncate(16, true) }}</div>
-            <div class="user-key">{{ principal.access_key | truncate(12, true) }}</div>
-          </div>
-        </div>
-        <form method="post" action="{{ url_for('ui.logout') }}" class="w-100">
+      <form method="post" action="{{ url_for('ui.logout') }}">
         <input type="hidden" name="csrf_token" value="{{ csrf_token() }}" />
-          <button class="sidebar-logout-btn" type="submit">
-            <svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" fill="currentColor" viewBox="0 0 16 16">
-              <path fill-rule="evenodd" d="M10 12.5a.5.5 0 0 1-.5.5h-8a.5.5 0 0 1-.5-.5v-9a.5.5 0 0 1 .5-.5h8a.5.5 0 0 1 .5.5v2a.5.5 0 0 0 1 0v-2A1.5 1.5 0 0 0 9.5 2h-8A1.5 1.5 0 0 0 0 3.5v9A1.5 1.5 0 0 0 1.5 14h8a1.5 1.5 0 0 0 1.5-1.5v-2a.5.5 0 0 0-1 0v2z"/>
-              <path fill-rule="evenodd" d="M15.854 8.354a.5.5 0 0 0 0-.708l-3-3a.5.5 0 0 0-.708.708L14.293 7.5H5.5a.5.5 0 0 0 0 1h8.793l-2.147 2.146a.5.5 0 0 0 .708.708l3-3z"/>
-            </svg>
-            <span>Sign out</span>
-          </button>
-        </form>
-      </div>
-      {% endif %}
-    </div>
-  </div>
-
-  <aside class="sidebar d-none d-lg-flex" id="desktopSidebar">
-    <div class="sidebar-header">
-      <div class="sidebar-brand" id="sidebarBrand">
-        <img src="{{ url_for('static', filename='images/MyFSIO.png') }}" alt="MyFSIO logo" class="sidebar-logo" width="36" height="36" />
-        <span class="sidebar-title">MyFSIO</span>
-      </div>
-      <button class="sidebar-collapse-btn" type="button" id="sidebarCollapseBtn" aria-label="Collapse sidebar">
-        <svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" fill="currentColor" viewBox="0 0 16 16">
-          <path fill-rule="evenodd" d="M11.354 1.646a.5.5 0 0 1 0 .708L5.707 8l5.647 5.646a.5.5 0 0 1-.708.708l-6-6a.5.5 0 0 1 0-.708l6-6a.5.5 0 0 1 .708 0z"/>
-        </svg>
-      </button>
-    </div>
-    <div class="sidebar-body">
-      <nav class="sidebar-nav">
-        {% if principal %}
-        <div class="nav-section">
-          <span class="nav-section-title">Navigation</span>
-          <a href="{{ url_for('ui.buckets_overview') }}" class="sidebar-link {% if request.endpoint == 'ui.buckets_overview' or request.endpoint == 'ui.bucket_detail' %}active{% endif %}" data-tooltip="Buckets">
-            <svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" viewBox="0 0 16 16">
-              <path d="M2.522 5H2a.5.5 0 0 0-.494.574l1.372 9.149A1.5 1.5 0 0 0 4.36 16h7.278a1.5 1.5 0 0 0 1.483-1.277l1.373-9.149A.5.5 0 0 0 14 5h-.522A5.5 5.5 0 0 0 2.522 5zm1.005 0a4.5 4.5 0 0 1 8.945 0H3.527z"/>
-            </svg>
-            <span class="sidebar-link-text">Buckets</span>
-          </a>
-          {% if can_manage_iam %}
-          <a href="{{ url_for('ui.iam_dashboard') }}" class="sidebar-link {% if request.endpoint == 'ui.iam_dashboard' %}active{% endif %}" data-tooltip="IAM">
-            <svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" viewBox="0 0 16 16">
-              <path d="M15 14s1 0 1-1-1-4-5-4-5 3-5 4 1 1 1 1h8zm-7.978-1A.261.261 0 0 1 7 12.996c.001-.264.167-1.03.76-1.72C8.312 10.629 9.282 10 11 10c1.717 0 2.687.63 3.24 1.276.593.69.758 1.457.76 1.72l-.008.002a.274.274 0 0 1-.014.002H7.022zM11 7a2 2 0 1 0 0-4 2 2 0 0 0 0 4zm3-2a3 3 0 1 1-6 0 3 3 0 0 1 6 0zM6.936 9.28a5.88 5.88 0 0 0-1.23-.247A7.35 7.35 0 0 0 5 9c-4 0-5 3-5 4 0 .667.333 1 1 1h4.216A2.238 2.238 0 0 1 5 13c0-1.01.377-2.042 1.09-2.904.243-.294.526-.569.846-.816zM4.92 10A5.493 5.493 0 0 0 4 13H1c0-.26.164-1.03.76-1.724.545-.636 1.492-1.256 3.16-1.275zM1.5 5.5a3 3 0 1 1 6 0 3 3 0 0 1-6 0zm3-2a2 2 0 1 0 0 4 2 2 0 0 0 0-4z"/>
-            </svg>
-            <span class="sidebar-link-text">IAM</span>
-          </a>
-          <a href="{{ url_for('ui.connections_dashboard') }}" class="sidebar-link {% if request.endpoint == 'ui.connections_dashboard' %}active{% endif %}" data-tooltip="Connections">
-            <svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" viewBox="0 0 16 16">
-              <path fill-rule="evenodd" d="M6 3.5A1.5 1.5 0 0 1 7.5 2h1A1.5 1.5 0 0 1 10 3.5v1A1.5 1.5 0 0 1 8.5 6v1H14a.5.5 0 0 1 .5.5v1a.5.5 0 0 1-1 0V8h-5v.5a.5.5 0 0 1-1 0V8h-5v.5a.5.5 0 0 1-1 0v-1A.5.5 0 0 1 2 7h5.5V6A1.5 1.5 0 0 1 6 4.5v-1zM8.5 5a.5.5 0 0 0 .5-.5v-1a.5.5 0 0 0-.5-.5h-1a.5.5 0 0 0-.5.5v1a.5.5 0 0 0 .5.5h1zM0 11.5A1.5 1.5 0 0 1 1.5 10h1A1.5 1.5 0 0 1 4 11.5v1A1.5 1.5 0 0 1 2.5 14h-1A1.5 1.5 0 0 1 0 12.5v-1zm1.5-.5a.5.5 0 0 0-.5.5v1a.5.5 0 0 0 .5.5h1a.5.5 0 0 0 .5-.5v-1a.5.5 0 0 0-.5-.5h-1zm4.5.5A1.5 1.5 0 0 1 7.5 10h1a1.5 1.5 0 0 1 1.5 1.5v1A1.5 1.5 0 0 1 8.5 14h-1A1.5 1.5 0 0 1 6 12.5v-1zm1.5-.5a.5.5 0 0 0-.5.5v1a.5.5 0 0 0 .5.5h1a.5.5 0 0 0 .5-.5v-1a.5.5 0 0 0-.5-.5h-1zm4.5.5a1.5 1.5 0 0 1 1.5-1.5h1a1.5 1.5 0 0 1 1.5 1.5v1a1.5 1.5 0 0 1-1.5 1.5h-1a1.5 1.5 0 0 1-1.5-1.5v-1zm1.5-.5a.5.5 0 0 0-.5.5v1a.5.5 0 0 0 .5.5h1a.5.5 0 0 0 .5-.5v-1a.5.5 0 0 0-.5-.5h-1z"/>
-            </svg>
-            <span class="sidebar-link-text">Connections</span>
-          </a>
-          <a href="{{ url_for('ui.metrics_dashboard') }}" class="sidebar-link {% if request.endpoint == 'ui.metrics_dashboard' %}active{% endif %}" data-tooltip="Metrics">
-            <svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" viewBox="0 0 16 16">
-              <path d="M8 4a.5.5 0 0 1 .5.5V6a.5.5 0 0 1-1 0V4.5A.5.5 0 0 1 8 4zM3.732 5.732a.5.5 0 0 1 .707 0l.915.914a.5.5 0 1 1-.708.708l-.914-.915a.5.5 0 0 1 0-.707zM2 10a.5.5 0 0 1 .5-.5h1.586a.5.5 0 0 1 0 1H2.5A.5.5 0 0 1 2 10zm9.5 0a.5.5 0 0 1 .5-.5h1.5a.5.5 0 0 1 0 1H12a.5.5 0 0 1-.5-.5zm.754-4.246a.389.389 0 0 0-.527-.02L7.547 9.31a.91.91 0 1 0 1.302 1.258l3.434-4.297a.389.389 0 0 0-.029-.518z"/>
-              <path fill-rule="evenodd" d="M0 10a8 8 0 1 1 15.547 2.661c-.442 1.253-1.845 1.602-2.932 1.25C11.309 13.488 9.475 13 8 13c-1.474 0-3.31.488-4.615.911-1.087.352-2.49.003-2.932-1.25A7.988 7.988 0 0 1 0 10zm8-7a7 7 0 0 0-6.603 9.329c.203.575.923.876 1.68.63C4.397 12.533 6.358 12 8 12s3.604.532 4.923.96c.757.245 1.477-.056 1.68-.631A7 7 0 0 0 8 3z"/>
-            </svg>
-            <span class="sidebar-link-text">Metrics</span>
-          </a>
-          {% endif %}
-        </div>
-        <div class="nav-section">
-          <span class="nav-section-title">Resources</span>
-          <a href="{{ url_for('ui.docs_page') }}" class="sidebar-link {% if request.endpoint == 'ui.docs_page' %}active{% endif %}" data-tooltip="Documentation">
-            <svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" viewBox="0 0 16 16">
-              <path d="M1 2.828c.885-.37 2.154-.769 3.388-.893 1.33-.134 2.458.063 3.112.752v9.746c-.935-.53-2.12-.603-3.213-.493-1.18.12-2.37.461-3.287.811V2.828zm7.5-.141c.654-.689 1.782-.886 3.112-.752 1.234.124 2.503.523 3.388.893v9.923c-.918-.35-2.107-.692-3.287-.81-1.094-.111-2.278-.039-3.213.492V2.687zM8 1.783C7.015.936 5.587.81 4.287.94c-1.514.153-3.042.672-3.994 1.105A.5.5 0 0 0 0 2.5v11a.5.5 0 0 0 .707.455c.882-.4 2.303-.881 3.68-1.02 1.409-.142 2.59.087 3.223.877a.5.5 0 0 0 .78 0c.633-.79 1.814-1.019 3.222-.877 1.378.139 2.8.62 3.681 1.02A.5.5 0 0 0 16 13.5v-11a.5.5 0 0 0-.293-.455c-.952-.433-2.48-.952-3.994-1.105C10.413.809 8.985.936 8 1.783z"/>
-            </svg>
-            <span class="sidebar-link-text">Documentation</span>
-          </a>
-        </div>
-        {% endif %}
-      </nav>
-    </div>
-    <div class="sidebar-footer">
-      <button class="theme-toggle-sidebar" type="button" id="themeToggle" aria-label="Toggle dark mode">
-        <svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" fill="currentColor" class="theme-icon" id="themeToggleSun" viewBox="0 0 16 16">
-          <path d="M8 11.5a3.5 3.5 0 1 1 0-7 3.5 3.5 0 0 1 0 7zm0 1.5a5 5 0 1 0 0-10 5 5 0 0 0 0 10zM8 0a.5.5 0 0 1 .5.5v1.555a.5.5 0 0 1-1 0V.5A.5.5 0 0 1 8 0zm0 12.945a.5.5 0 0 1 .5.5v2.055a.5.5 0 0 1-1 0v-2.055a.5.5 0 0 1 .5-.5zM2.343 2.343a.5.5 0 0 1 .707 0l1.1 1.1a.5.5 0 1 1-.708.707l-1.1-1.1a.5.5 0 0 1 0-.707zm9.507 9.507a.5.5 0 0 1 .707 0l1.1 1.1a.5.5 0 1 1-.707.708l-1.1-1.1a.5.5 0 0 1 0-.708zM0 8a.5.5 0 0 1 .5-.5h1.555a.5.5 0 0 1 0 1H.5A.5.5 0 0 1 0 8zm12.945 0a.5.5 0 0 1 .5-.5H15.5a.5.5 0 0 1 0 1h-2.055a.5.5 0 0 1-.5-.5zM2.343 13.657a.5.5 0 0 1 0-.707l1.1-1.1a.5.5 0 1 1 .708.707l-1.1 1.1a.5.5 0 0 1-.708 0zm9.507-9.507a.5.5 0 0 1 0-.707l1.1-1.1a.5.5 0 0 1 .707.708l-1.1 1.1a.5.5 0 0 1-.707 0z"/>
-        </svg>
-        <svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" fill="currentColor" class="theme-icon" id="themeToggleMoon" viewBox="0 0 16 16">
-          <path d="M6 .278a.768.768 0 0 1 .08.858 7.208 7.208 0 0 0-.878 3.46c0 4.021 3.278 7.277 7.318 7.277.527 0 1.04-.055 1.533-.16a.787.787 0 0 1 .81.316.733.733 0 0 1-.031.893A8.349 8.349 0 0 1 8.344 16C3.734 16 0 12.286 0 7.71 0 4.266 2.114 1.312 5.124.06A.752.752 0 0 1 6 .278z"/>
-          <path d="M10.794 3.148a.217.217 0 0 1 .412 0l.387 1.162c.173.518.579.924 1.097 1.097l1.162.387a.217.217 0 0 1 0 .412l-1.162.387a1.734 1.734 0 0 0-1.097 1.097l-.387 1.162a.217.217 0 0 1-.412 0l-.387-1.162A1.734 1.734 0 0 0 9.31 6.593l-1.162-.387a.217.217 0 0 1 0-.412l1.162-.387a1.734 1.734 0 0 0 1.097-1.097l.387-1.162zM13.863.099a.145.145 0 0 1 .274 0l.258.774c.115.346.386.617.732.732l.774.258a.145.145 0 0 1 0 .274l-.774.258a1.156 1.156 0 0 0-.732.732l-.258.774a.145.145 0 0 1-.274 0l-.258-.774a1.156 1.156 0 0 0-.732-.732l-.774-.258a.145.145 0 0 1 0-.274l.774-.258c.346-.115.617-.386.732-.732L13.863.1z"/>
-        </svg>
-        <span class="theme-toggle-text">Toggle theme</span>
-      </button>
-      {% if principal %}
-      <div class="sidebar-user" data-username="{{ principal.display_name }}">
-        <div class="user-avatar">
-          <svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" fill="currentColor" viewBox="0 0 16 16">
-            <path d="M11 6a3 3 0 1 1-6 0 3 3 0 0 1 6 0z"/>
-            <path fill-rule="evenodd" d="M0 8a8 8 0 1 1 16 0A8 8 0 0 1 0 8zm8-7a7 7 0 0 0-5.468 11.37C3.242 11.226 4.805 10 8 10s4.757 1.225 5.468 2.37A7 7 0 0 0 8 1z"/>
-          </svg>
-        </div>
-        <div class="user-info">
-          <div class="user-name" title="{{ principal.display_name }}">{{ principal.display_name | truncate(16, true) }}</div>
-          <div class="user-key">{{ principal.access_key | truncate(12, true) }}</div>
-        </div>
-      </div>
-      <form method="post" action="{{ url_for('ui.logout') }}" class="w-100">
-        <input type="hidden" name="csrf_token" value="{{ csrf_token() }}" />
-        <button class="sidebar-logout-btn" type="submit">
-          <svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" fill="currentColor" viewBox="0 0 16 16">
-            <path fill-rule="evenodd" d="M10 12.5a.5.5 0 0 1-.5.5h-8a.5.5 0 0 1-.5-.5v-9a.5.5 0 0 1 .5-.5h8a.5.5 0 0 1 .5.5v2a.5.5 0 0 0 1 0v-2A1.5 1.5 0 0 0 9.5 2h-8A1.5 1.5 0 0 0 0 3.5v9A1.5 1.5 0 0 0 1.5 14h8a1.5 1.5 0 0 0 1.5-1.5v-2a.5.5 0 0 0-1 0v2z"/>
-            <path fill-rule="evenodd" d="M15.854 8.354a.5.5 0 0 0 0-.708l-3-3a.5.5 0 0 0-.708.708L14.293 7.5H5.5a.5.5 0 0 0 0 1h8.793l-2.147 2.146a.5.5 0 0 0 .708.708l3-3z"/>
-          </svg>
-          <span class="logout-text">Sign out</span>
-        </button>
+        <button class="btn btn-outline-light btn-sm" type="submit">Sign out</button>
       </form>
       {% endif %}
     </div>
-  </aside>
-  <div class="main-wrapper">
-    <main class="main-content">
+    </div>
+  </nav>
+  <main class="container py-4">
     {% block content %}{% endblock %}
   </main>
-  </div>
   <div class="toast-container position-fixed bottom-0 end-0 p-3">
     <div id="liveToast" class="toast" role="alert" aria-live="assertive" aria-atomic="true">
       <div class="toast-header">
@@ -275,11 +162,9 @@
   (function () {
     const storageKey = 'myfsio-theme';
     const toggle = document.getElementById('themeToggle');
-    const toggleMobile = document.getElementById('themeToggleMobile');
+    const label = document.getElementById('themeToggleLabel');
     const sunIcon = document.getElementById('themeToggleSun');
    const moonIcon = document.getElementById('themeToggleMoon');
-    const sunIconMobile = document.getElementById('themeToggleSunMobile');
-    const moonIconMobile = document.getElementById('themeToggleMoonMobile');

     const applyTheme = (theme) => {
       document.documentElement.dataset.bsTheme = theme;
@@ -287,79 +172,34 @@
       try {
         localStorage.setItem(storageKey, theme);
       } catch (err) {
-        console.log("Error: local storage not available, cannot save theme preference.");
+        /* localStorage unavailable */
+      }
+      if (label) {
+        label.textContent = theme === 'dark' ? 'Switch to light mode' : 'Switch to dark mode';
+      }
+      if (toggle) {
+        toggle.setAttribute('aria-pressed', theme === 'dark' ? 'true' : 'false');
+        toggle.setAttribute('title', theme === 'dark' ? 'Switch to light mode' : 'Switch to dark mode');
+        toggle.setAttribute('aria-label', theme === 'dark' ? 'Switch to light mode' : 'Switch to dark mode');
       }
-      const isDark = theme === 'dark';
       if (sunIcon && moonIcon) {
+        const isDark = theme === 'dark';
         sunIcon.classList.toggle('d-none', !isDark);
         moonIcon.classList.toggle('d-none', isDark);
       }
-      if (sunIconMobile && moonIconMobile) {
-        sunIconMobile.classList.toggle('d-none', !isDark);
-        moonIconMobile.classList.toggle('d-none', isDark);
-      }
-      [toggle, toggleMobile].forEach(btn => {
-        if (btn) {
-          btn.setAttribute('aria-pressed', isDark ? 'true' : 'false');
-          btn.setAttribute('title', isDark ? 'Switch to light mode' : 'Switch to dark mode');
-          btn.setAttribute('aria-label', isDark ? 'Switch to light mode' : 'Switch to dark mode');
-        }
-      });
     };

     const current = document.documentElement.dataset.bsTheme || 'light';
     applyTheme(current);

-    const handleToggle = () => {
+    toggle?.addEventListener('click', () => {
       const next = document.documentElement.dataset.bsTheme === 'dark' ? 'light' : 'dark';
       applyTheme(next);
-    };
-
-    toggle?.addEventListener('click', handleToggle);
-    toggleMobile?.addEventListener('click', handleToggle);
-  })();
-  </script>
-  <script>
-  (function () {
-    const sidebar = document.getElementById('desktopSidebar');
-    const collapseBtn = document.getElementById('sidebarCollapseBtn');
-    const sidebarBrand = document.getElementById('sidebarBrand');
-    const storageKey = 'myfsio-sidebar-collapsed';
-
-    if (!sidebar || !collapseBtn) return;
-
-    const applyCollapsed = (collapsed) => {
-      sidebar.classList.toggle('sidebar-collapsed', collapsed);
-      document.body.classList.toggle('sidebar-is-collapsed', collapsed);
-      document.documentElement.classList.remove('sidebar-will-collapse');
-      try {
-        localStorage.setItem(storageKey, collapsed ? 'true' : 'false');
-      } catch (err) {}
-    };
-
-    try {
-      const stored = localStorage.getItem(storageKey);
-      applyCollapsed(stored === 'true');
-    } catch (err) {
-      document.documentElement.classList.remove('sidebar-will-collapse');
-    }
-
-    collapseBtn.addEventListener('click', () => {
-      const isCollapsed = sidebar.classList.contains('sidebar-collapsed');
-      applyCollapsed(!isCollapsed);
-    });
-
-    sidebarBrand?.addEventListener('click', (e) => {
-      const isCollapsed = sidebar.classList.contains('sidebar-collapsed');
-      if (isCollapsed) {
-        e.preventDefault();
-        applyCollapsed(false);
-      }
     });
   })();
   </script>
   <script>
+  // Toast utility
   window.showToast = function(message, title = 'Notification', type = 'info') {
     const toastEl = document.getElementById('liveToast');
     const toastTitle = document.getElementById('toastTitle');
@@ -368,6 +208,7 @@
     toastTitle.textContent = title;
     toastMessage.textContent = message;

+    // Reset classes
     toastEl.classList.remove('text-bg-primary', 'text-bg-success', 'text-bg-danger', 'text-bg-warning');

     if (type === 'success') toastEl.classList.add('text-bg-success');
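Both sides of this compare keep the same global window.showToast helper, so templates can raise notifications without touching Bootstrap directly. A quick usage sketch; the 'error' variant is inferred from the flash-category mapping a few hunks below, so treat it as an assumption:

    // showToast(message, title, type) - the type maps onto the Bootstrap
    // text-bg-* classes that the function toggles above.
    window.showToast('Object uploaded', 'Upload', 'success');
    window.showToast('Endpoint unreachable', 'Connection', 'error'); // 'error' inferred from the danger->error mapping below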
@@ -380,11 +221,13 @@
   </script>
   <script>
   (function () {
+    // Show flashed messages as toasts
     {% with messages = get_flashed_messages(with_categories=true) %}
     {% if messages %}
     {% for category, message in messages %}
+    // Map Flask categories to Toast types
+    // Flask: success, danger, warning, info
+    // Toast: success, error, warning, info
     var type = "{{ category }}";
     if (type === "danger") type = "error";
     window.showToast({{ message | tojson | safe }}, "Notification", type);
@@ -393,8 +236,6 @@
     {% endwith %}
   })();
   </script>
-  <script src="{{ url_for('static', filename='js/ui-core.js') }}"></script>
   {% block extra_scripts %}{% endblock %}
-
 </body>
 </html>
File diff suppressed because it is too large
@@ -46,7 +46,8 @@
         <div class="d-flex align-items-center gap-3">
           <div class="bucket-icon">
             <svg xmlns="http://www.w3.org/2000/svg" width="22" height="22" fill="currentColor" viewBox="0 0 16 16">
-              <path d="M2.522 5H2a.5.5 0 0 0-.494.574l1.372 9.149A1.5 1.5 0 0 0 4.36 16h7.278a1.5 1.5 0 0 0 1.483-1.277l1.373-9.149A.5.5 0 0 0 14 5h-.522A5.5 5.5 0 0 0 2.522 5zm1.005 0a4.5 4.5 0 0 1 8.945 0H3.527z"/>
+              <path d="M4.5 5a.5.5 0 1 0 0-1 .5.5 0 0 0 0 1zM3 4.5a.5.5 0 1 1-1 0 .5.5 0 0 1 1 0z"/>
+              <path d="M0 4a2 2 0 0 1 2-2h12a2 2 0 0 1 2 2v1a2 2 0 0 1-2 2H8.5v3a1.5 1.5 0 0 1 1.5 1.5H11a.5.5 0 0 1 0 1h-1v1h1a.5.5 0 0 1 0 1h-1v1a.5.5 0 0 1-1 0v-1H6v1a.5.5 0 0 1-1 0v-1H4a.5.5 0 0 1 0-1h1v-1H4a.5.5 0 0 1 0-1h1.5A1.5 1.5 0 0 1 7 10.5V7H2a2 2 0 0 1-2-2V4zm1 0v1a1 1 0 0 0 1 1h12a1 1 0 0 0 1-1V4a1 1 0 0 0-1-1H2a1 1 0 0 0-1 1zm5 7.5v1h3v-1a.5.5 0 0 0-.5-.5h-2a.5.5 0 0 0-.5.5z"/>
             </svg>
           </div>
           <div>
@@ -104,7 +105,7 @@
           </h1>
           <button type="button" class="btn-close" data-bs-dismiss="modal" aria-label="Close"></button>
         </div>
-        <form method="post" action="{{ url_for('ui.create_bucket') }}" id="createBucketForm">
+        <form method="post" action="{{ url_for('ui.create_bucket') }}">
           <input type="hidden" name="csrf_token" value="{{ csrf_token() }}" />
           <div class="modal-body pt-0">
             <label class="form-label fw-medium">Bucket name</label>
@@ -130,10 +131,10 @@
 {{ super() }}
 <script>
   (function () {
+    // Search functionality
     const searchInput = document.getElementById('bucket-search');
     const bucketItems = document.querySelectorAll('.bucket-item');
-    const noBucketsMsg = document.querySelector('.text-center.py-5');
+    const noBucketsMsg = document.querySelector('.text-center.py-5'); // The "No buckets found" empty state

     if (searchInput) {
       searchInput.addEventListener('input', (e) => {
@@ -152,6 +153,7 @@
       });
     }

+    // View toggle functionality
     const viewGrid = document.getElementById('view-grid');
     const viewList = document.getElementById('view-list');
     const container = document.getElementById('buckets-container');
@@ -166,7 +168,8 @@
         });
         cards.forEach(card => {
           card.classList.remove('h-100');
+          // Optional: Add flex-row to card-body content if we want a horizontal layout
+          // For now, full-width stacked cards is a good list view
         });
         localStorage.setItem('bucket-view-pref', 'list');
       } else {
@@ -185,6 +188,7 @@
     viewGrid.addEventListener('change', () => setView('grid'));
     viewList.addEventListener('change', () => setView('list'));

+    // Restore preference
     const pref = localStorage.getItem('bucket-view-pref');
     if (pref === 'list') {
       viewList.checked = true;
@@ -205,25 +209,6 @@
       });
       row.style.cursor = 'pointer';
     });
-
-    var createForm = document.getElementById('createBucketForm');
-    if (createForm) {
-      createForm.addEventListener('submit', function(e) {
-        e.preventDefault();
-        window.UICore.submitFormAjax(createForm, {
-          successMessage: 'Bucket created',
-          onSuccess: function(data) {
-            var modal = bootstrap.Modal.getInstance(document.getElementById('createBucketModal'));
-            if (modal) modal.hide();
-            if (data.bucket_name) {
-              window.location.href = '{{ url_for("ui.bucket_detail", bucket_name="__BUCKET__") }}'.replace('__BUCKET__', data.bucket_name);
-            } else {
-              location.reload();
-            }
-          }
-        });
-      });
-    }
   })();
 </script>
 {% endblock %}
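The handler removed in the hunk above depends on window.UICore.submitFormAjax from js/ui-core.js, whose diff is suppressed in this compare. A hedged sketch of what such a helper plausibly does, judging only from the call site (the option names successMessage/onSuccess and the bucket_name response field come from the removed lines; everything else here is assumed, not the actual ui-core.js implementation):

    // Hypothetical reconstruction of UICore.submitFormAjax based solely on
    // how the removed createBucketForm handler calls it.
    window.UICore = window.UICore || {};
    window.UICore.submitFormAjax = function (form, options) {
      fetch(form.action, {
        method: form.method || 'POST',
        body: new FormData(form),
        headers: { 'X-Requested-With': 'XMLHttpRequest' }
      })
        .then(function (response) {
          if (!response.ok) throw new Error('Request failed: ' + response.status);
          return response.json();
        })
        .then(function (data) {
          if (window.showToast && options.successMessage) {
            window.showToast(options.successMessage, 'Notification', 'success');
          }
          if (options.onSuccess) options.onSuccess(data);
        })
        .catch(function (err) {
          if (window.showToast) window.showToast(err.message, 'Error', 'error');
        });
    };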
@@ -8,8 +8,8 @@
       <p class="text-uppercase text-muted small mb-1">Replication</p>
       <h1 class="h3 mb-1 d-flex align-items-center gap-2">
         <svg xmlns="http://www.w3.org/2000/svg" width="28" height="28" fill="currentColor" class="text-primary" viewBox="0 0 16 16">
-          <path d="M4.406 3.342A5.53 5.53 0 0 1 8 2c2.69 0 4.923 2 5.166 4.579C14.758 6.804 16 8.137 16 9.773 16 11.569 14.502 13 12.687 13H3.781C1.708 13 0 11.366 0 9.318c0-1.763 1.266-3.223 2.942-3.593.143-.863.698-1.723 1.464-2.383z"/>
-          <path d="M10.232 8.768l.546-.353a.25.25 0 0 0 0-.418l-.546-.354a.25.25 0 0 1-.116-.21V6.25a.25.25 0 0 0-.25-.25h-.5a.25.25 0 0 0-.25.25v1.183a.25.25 0 0 1-.116.21l-.546.354a.25.25 0 0 0 0 .418l.546.353a.25.25 0 0 1 .116.21v1.183a.25.25 0 0 0 .25.25h.5a.25.25 0 0 0 .25-.25V8.978a.25.25 0 0 1 .116-.21z"/>
+          <path d="M4.5 5a.5.5 0 1 0 0-1 .5.5 0 0 0 0 1zM3 4.5a.5.5 0 1 1-1 0 .5.5 0 0 1 1 0z"/>
+          <path d="M0 4a2 2 0 0 1 2-2h12a2 2 0 0 1 2 2v1a2 2 0 0 1-2 2H8.5v3a1.5 1.5 0 0 1 1.5 1.5H12a.5.5 0 0 1 0 1H4a.5.5 0 0 1 0-1h2A1.5 1.5 0 0 1 7.5 10V7H2a2 2 0 0 1-2-2V4zm1 0v1a1 1 0 0 0 1 1h12a1 1 0 0 0 1-1V4a1 1 0 0 0-1-1H2a1 1 0 0 0-1 1z"/>
         </svg>
         Remote Connections
       </h1>
@@ -57,7 +57,7 @@
             <label for="secret_key" class="form-label fw-medium">Secret Key</label>
             <div class="input-group">
               <input type="password" class="form-control font-monospace" id="secret_key" name="secret_key" required>
-              <button class="btn btn-outline-secondary" type="button" onclick="ConnectionsManagement.togglePassword('secret_key')" title="Toggle visibility">
+              <button class="btn btn-outline-secondary" type="button" onclick="togglePassword('secret_key')" title="Toggle visibility">
                 <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" viewBox="0 0 16 16">
                   <path d="M16 8s-3-5.5-8-5.5S0 8 0 8s3 5.5 8 5.5S16 8 16 8zM1.173 8a13.133 13.133 0 0 1 1.66-2.043C4.12 4.668 5.88 3.5 8 3.5c2.12 0 3.879 1.168 5.168 2.457A13.133 13.133 0 0 1 14.828 8c-.058.087-.122.183-.195.288-.335.48-.83 1.12-1.465 1.755C11.879 11.332 10.119 12.5 8 12.5c-2.12 0-3.879-1.168-5.168-2.457A13.134 13.134 0 0 1 1.172 8z"/>
                   <path d="M8 5.5a2.5 2.5 0 1 0 0 5 2.5 2.5 0 0 0 0-5zM4.5 8a3.5 3.5 0 1 1 7 0 3.5 3.5 0 0 1-7 0z"/>
@@ -104,7 +104,6 @@
       <table class="table table-hover align-middle mb-0">
         <thead class="table-light">
           <tr>
-            <th scope="col" style="width: 50px;">Status</th>
             <th scope="col">Name</th>
             <th scope="col">Endpoint</th>
             <th scope="col">Region</th>
@@ -114,17 +113,13 @@
         </thead>
         <tbody>
           {% for conn in connections %}
-          <tr data-connection-id="{{ conn.id }}">
-            <td class="text-center">
-              <span class="connection-status" data-status="checking" title="Checking...">
-                <span class="spinner-border spinner-border-sm text-muted" role="status" style="width: 12px; height: 12px;"></span>
-              </span>
-            </td>
+          <tr>
             <td>
               <div class="d-flex align-items-center gap-2">
                 <div class="connection-icon">
                   <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" viewBox="0 0 16 16">
-                    <path d="M4.406 3.342A5.53 5.53 0 0 1 8 2c2.69 0 4.923 2 5.166 4.579C14.758 6.804 16 8.137 16 9.773 16 11.569 14.502 13 12.687 13H3.781C1.708 13 0 11.366 0 9.318c0-1.763 1.266-3.223 2.942-3.593.143-.863.698-1.723 1.464-2.383z"/>
+                    <path d="M4.5 5a.5.5 0 1 0 0-1 .5.5 0 0 0 0 1zM3 4.5a.5.5 0 1 1-1 0 .5.5 0 0 1 1 0z"/>
+                    <path d="M0 4a2 2 0 0 1 2-2h12a2 2 0 0 1 2 2v1a2 2 0 0 1-2 2H8.5v3a1.5 1.5 0 0 1 1.5 1.5H12a.5.5 0 0 1 0 1H4a.5.5 0 0 1 0-1h2A1.5 1.5 0 0 1 7.5 10V7H2a2 2 0 0 1-2-2V4zm1 0v1a1 1 0 0 0 1 1h12a1 1 0 0 0 1-1V4a1 1 0 0 0-1-1H2a1 1 0 0 0-1 1z"/>
                   </svg>
                 </div>
                 <span class="fw-medium">{{ conn.name }}</span>
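The Status column removed above is populated by js/connections-management.js, which this compare does not show. A hypothetical sketch of the health check it presumably runs, stitched together from the hooks visible here (the /ui/connections/ID/health URL pattern from the init() call a few hunks below, plus the data-connection-id and data-status attributes); the function name and response shape are assumptions:

    // Assumed polling behind the removed Status column; only the URL
    // pattern and DOM hooks are taken from this diff.
    async function refreshConnectionStatus(row) {
      const id = row.dataset.connectionId;
      const badge = row.querySelector('.connection-status');
      if (!badge) return;
      try {
        const res = await fetch('/ui/connections/' + id + '/health');
        const body = await res.json(); // assumed shape: { ok: boolean }
        badge.dataset.status = body.ok ? 'online' : 'offline';
      } catch (err) {
        badge.dataset.status = 'offline';
      }
    }

    document.querySelectorAll('tr[data-connection-id]').forEach(refreshConnectionStatus);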
@@ -173,7 +168,8 @@
       <div class="empty-state text-center py-5">
         <div class="empty-state-icon mx-auto mb-3">
           <svg xmlns="http://www.w3.org/2000/svg" width="48" height="48" fill="currentColor" viewBox="0 0 16 16">
-            <path d="M4.406 3.342A5.53 5.53 0 0 1 8 2c2.69 0 4.923 2 5.166 4.579C14.758 6.804 16 8.137 16 9.773 16 11.569 14.502 13 12.687 13H3.781C1.708 13 0 11.366 0 9.318c0-1.763 1.266-3.223 2.942-3.593.143-.863.698-1.723 1.464-2.383z"/>
+            <path d="M4.5 5a.5.5 0 1 0 0-1 .5.5 0 0 0 0 1zM3 4.5a.5.5 0 1 1-1 0 .5.5 0 0 1 1 0z"/>
+            <path d="M0 4a2 2 0 0 1 2-2h12a2 2 0 0 1 2 2v1a2 2 0 0 1-2 2H8.5v3a1.5 1.5 0 0 1 1.5 1.5H12a.5.5 0 0 1 0 1H4a.5.5 0 0 1 0-1h2A1.5 1.5 0 0 1 7.5 10V7H2a2 2 0 0 1-2-2V4zm1 0v1a1 1 0 0 0 1 1h12a1 1 0 0 0 1-1V4a1 1 0 0 0-1-1H2a1 1 0 0 0-1 1z"/>
           </svg>
         </div>
         <h5 class="fw-semibold mb-2">No connections yet</h5>
@@ -185,6 +181,7 @@
       </div>
     </div>

+    <!-- Edit Connection Modal -->
     <div class="modal fade" id="editConnectionModal" tabindex="-1" aria-hidden="true">
       <div class="modal-dialog modal-dialog-centered">
         <div class="modal-content">
@@ -220,7 +217,7 @@
             <label for="edit_secret_key" class="form-label fw-medium">Secret Key</label>
             <div class="input-group">
               <input type="password" class="form-control font-monospace" id="edit_secret_key" name="secret_key" required>
-              <button class="btn btn-outline-secondary" type="button" onclick="ConnectionsManagement.togglePassword('edit_secret_key')">
+              <button class="btn btn-outline-secondary" type="button" onclick="togglePassword('edit_secret_key')">
                 <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" viewBox="0 0 16 16">
                   <path d="M16 8s-3-5.5-8-5.5S0 8 0 8s3 5.5 8 5.5S16 8 16 8zM1.173 8a13.133 13.133 0 0 1 1.66-2.043C4.12 4.668 5.88 3.5 8 3.5c2.12 0 3.879 1.168 5.168 2.457A13.133 13.133 0 0 1 14.828 8c-.058.087-.122.183-.195.288-.335.48-.83 1.12-1.465 1.755C11.879 11.332 10.119 12.5 8 12.5c-2.12 0-3.879-1.168-5.168-2.457A13.134 13.134 0 0 1 1.172 8z"/>
                   <path d="M8 5.5a2.5 2.5 0 1 0 0 5 2.5 2.5 0 0 0 0-5zM4.5 8a3.5 3.5 0 1 1 7 0 3.5 3.5 0 0 1-7 0z"/>
@@ -250,6 +247,7 @@
       </div>
     </div>

+    <!-- Delete Connection Modal -->
    <div class="modal fade" id="deleteConnectionModal" tabindex="-1" aria-hidden="true">
       <div class="modal-dialog modal-dialog-centered">
         <div class="modal-content">
@@ -289,16 +287,80 @@
     </div>
   </div>

-  <script src="{{ url_for('static', filename='js/connections-management.js') }}"></script>
   <script>
-    ConnectionsManagement.init({
-      csrfToken: "{{ csrf_token() }}",
-      endpoints: {
-        test: "{{ url_for('ui.test_connection') }}",
-        updateTemplate: "{{ url_for('ui.update_connection', connection_id='CONNECTION_ID') }}",
-        deleteTemplate: "{{ url_for('ui.delete_connection', connection_id='CONNECTION_ID') }}",
-        healthTemplate: "/ui/connections/CONNECTION_ID/health"
+    function togglePassword(id) {
+      const input = document.getElementById(id);
+      if (input.type === "password") {
+        input.type = "text";
+      } else {
+        input.type = "password";
       }
+    }
+
+    // Test Connection Logic
+    async function testConnection(formId, resultId) {
+      const form = document.getElementById(formId);
+      const resultDiv = document.getElementById(resultId);
+      const formData = new FormData(form);
+      const data = Object.fromEntries(formData.entries());
+
+      resultDiv.innerHTML = '<div class="text-info"><span class="spinner-border spinner-border-sm" role="status" aria-hidden="true"></span> Testing...</div>';
+
+      try {
+        const response = await fetch("{{ url_for('ui.test_connection') }}", {
+          method: "POST",
+          headers: {
+            "Content-Type": "application/json",
+            "X-CSRFToken": "{{ csrf_token() }}"
+          },
+          body: JSON.stringify(data)
+        });
+
+        const result = await response.json();
+        if (response.ok) {
+          resultDiv.innerHTML = `<div class="text-success"><i class="bi bi-check-circle"></i> ${result.message}</div>`;
+        } else {
+          resultDiv.innerHTML = `<div class="text-danger"><i class="bi bi-exclamation-circle"></i> ${result.message}</div>`;
+        }
+      } catch (error) {
+        resultDiv.innerHTML = `<div class="text-danger"><i class="bi bi-exclamation-circle"></i> Connection failed</div>`;
+      }
+    }
+
+    document.getElementById('testConnectionBtn').addEventListener('click', () => {
+      testConnection('createConnectionForm', 'testResult');
+    });
+
+    document.getElementById('editTestConnectionBtn').addEventListener('click', () => {
+      testConnection('editConnectionForm', 'editTestResult');
+    });
+
+    // Modal Event Listeners
+    const editModal = document.getElementById('editConnectionModal');
+    editModal.addEventListener('show.bs.modal', event => {
+      const button = event.relatedTarget;
+      const id = button.getAttribute('data-id');
+
+      document.getElementById('edit_name').value = button.getAttribute('data-name');
+      document.getElementById('edit_endpoint_url').value = button.getAttribute('data-endpoint');
+      document.getElementById('edit_region').value = button.getAttribute('data-region');
+      document.getElementById('edit_access_key').value = button.getAttribute('data-access');
+      document.getElementById('edit_secret_key').value = button.getAttribute('data-secret');
+      document.getElementById('editTestResult').innerHTML = '';
+
+      const form = document.getElementById('editConnectionForm');
+      form.action = "{{ url_for('ui.update_connection', connection_id='CONN_ID') }}".replace('CONN_ID', id);
+    });
+
+    const deleteModal = document.getElementById('deleteConnectionModal');
+    deleteModal.addEventListener('show.bs.modal', event => {
+      const button = event.relatedTarget;
+      const id = button.getAttribute('data-id');
+      const name = button.getAttribute('data-name');
+
+      document.getElementById('deleteConnectionName').textContent = name;
+      const form = document.getElementById('deleteConnectionForm');
+      form.action = "{{ url_for('ui.delete_connection', connection_id='CONN_ID') }}".replace('CONN_ID', id);
     });
   </script>
 {% endblock %}
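The inline testConnection added above bakes the CSRF token into the page with Jinja. Since base.html also exposes it as a <meta name="csrf-token"> tag, an external script (such as the connections-management.js that the newer side loads) could read it from the DOM instead; a sketch, with the helper name ours:

    // Sketch: the same CSRF-protected JSON POST without Jinja interpolation,
    // reading the token from the <meta name="csrf-token"> tag that base.html
    // renders for signed-in users.
    async function postJson(url, payload) {
      const meta = document.querySelector('meta[name="csrf-token"]');
      const response = await fetch(url, {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          'X-CSRFToken': meta ? meta.content : ''
        },
        body: JSON.stringify(payload)
      });
      return response.json();
    }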
@@ -14,37 +14,6 @@
     </div>
   </section>
   <div class="row g-4">
-    <div class="col-12 d-xl-none">
-      <div class="card shadow-sm docs-sidebar-mobile mb-0">
-        <div class="card-body py-3">
-          <div class="d-flex align-items-center justify-content-between mb-2">
-            <h3 class="h6 text-uppercase text-muted mb-0">On this page</h3>
-            <button class="btn btn-sm btn-outline-secondary" type="button" data-bs-toggle="collapse" data-bs-target="#mobileDocsToc" aria-expanded="false" aria-controls="mobileDocsToc">
-              <svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" viewBox="0 0 16 16">
-                <path fill-rule="evenodd" d="M1.646 4.646a.5.5 0 0 1 .708 0L8 10.293l5.646-5.647a.5.5 0 0 1 .708.708l-6 6a.5.5 0 0 1-.708 0l-6-6a.5.5 0 0 1 0-.708z"/>
-              </svg>
-            </button>
-          </div>
-          <div class="collapse" id="mobileDocsToc">
-            <ul class="list-unstyled docs-toc mb-0 small">
-              <li><a href="#setup">Set up & run</a></li>
-              <li><a href="#background">Running in background</a></li>
-              <li><a href="#auth">Authentication & IAM</a></li>
-              <li><a href="#console">Console tour</a></li>
-              <li><a href="#automation">Automation / CLI</a></li>
-              <li><a href="#api">REST endpoints</a></li>
-              <li><a href="#examples">API Examples</a></li>
-              <li><a href="#replication">Site Replication</a></li>
-              <li><a href="#versioning">Object Versioning</a></li>
-              <li><a href="#quotas">Bucket Quotas</a></li>
-              <li><a href="#encryption">Encryption</a></li>
-              <li><a href="#lifecycle">Lifecycle Rules</a></li>
-              <li><a href="#troubleshooting">Troubleshooting</a></li>
-            </ul>
-          </div>
-        </div>
-      </div>
-    </div>
     <div class="col-xl-8">
       <article id="setup" class="card shadow-sm docs-section">
         <div class="card-body">
@@ -78,16 +47,16 @@ python run.py --mode ui
           <table class="table table-sm table-bordered small mb-0">
             <thead class="table-light">
               <tr>
-                <th style="min-width: 180px;">Variable</th>
-                <th style="min-width: 120px;">Default</th>
-                <th class="text-wrap" style="min-width: 250px;">Description</th>
+                <th>Variable</th>
+                <th>Default</th>
+                <th>Description</th>
               </tr>
             </thead>
             <tbody>
               <tr>
                 <td><code>API_BASE_URL</code></td>
-                <td><code>None</code></td>
-                <td>The public URL of the API. <strong>Required</strong> if running behind a proxy. Ensures presigned URLs are generated correctly.</td>
+                <td><code>http://127.0.0.1:5000</code></td>
+                <td>The public URL of the API. <strong>Required</strong> if running behind a proxy or if the UI and API are on different domains. Ensures presigned URLs are generated correctly.</td>
               </tr>
               <tr>
                 <td><code>STORAGE_ROOT</code></td>
@@ -96,13 +65,13 @@ python run.py --mode ui
               </tr>
               <tr>
                 <td><code>MAX_UPLOAD_SIZE</code></td>
-                <td><code>1 GB</code></td>
-                <td>Max request body size in bytes.</td>
+                <td><code>5 GB</code></td>
+                <td>Max request body size.</td>
               </tr>
               <tr>
                 <td><code>SECRET_KEY</code></td>
-                <td>(Auto-generated)</td>
-                <td>Flask session key. Auto-generates if not set. <strong>Set explicitly in production.</strong></td>
+                <td>(Random)</td>
+                <td>Flask session key. Set this in production.</td>
               </tr>
               <tr>
                 <td><code>APP_HOST</code></td>
@@ -112,51 +81,7 @@ python run.py --mode ui
|
|||||||
<tr>
|
<tr>
|
||||||
<td><code>APP_PORT</code></td>
|
<td><code>APP_PORT</code></td>
|
||||||
<td><code>5000</code></td>
|
<td><code>5000</code></td>
|
||||||
<td>Listen port (UI uses 5100).</td>
|
<td>Listen port.</td>
|
||||||
</tr>
|
|
||||||
<tr class="table-secondary">
|
|
||||||
<td colspan="3" class="fw-semibold">CORS Settings</td>
|
|
||||||
</tr>
|
|
||||||
<tr>
|
|
||||||
<td><code>CORS_ORIGINS</code></td>
|
|
||||||
<td><code>*</code></td>
|
|
||||||
<td>Allowed origins. <strong>Restrict in production.</strong></td>
|
|
||||||
</tr>
|
|
||||||
<tr>
|
|
||||||
<td><code>CORS_METHODS</code></td>
|
|
||||||
<td><code>GET,PUT,POST,DELETE,OPTIONS,HEAD</code></td>
|
|
||||||
<td>Allowed HTTP methods.</td>
|
|
||||||
</tr>
|
|
||||||
<tr>
|
|
||||||
<td><code>CORS_ALLOW_HEADERS</code></td>
|
|
||||||
<td><code>*</code></td>
|
|
||||||
<td>Allowed request headers.</td>
|
|
||||||
</tr>
|
|
||||||
<tr>
|
|
||||||
<td><code>CORS_EXPOSE_HEADERS</code></td>
|
|
||||||
<td><code>*</code></td>
|
|
||||||
<td>Response headers visible to browsers (e.g., <code>ETag</code>).</td>
|
|
||||||
</tr>
|
|
||||||
<tr class="table-secondary">
|
|
||||||
<td colspan="3" class="fw-semibold">Security Settings</td>
|
|
||||||
</tr>
|
|
||||||
<tr>
|
|
||||||
<td><code>AUTH_MAX_ATTEMPTS</code></td>
|
|
||||||
<td><code>5</code></td>
|
|
||||||
<td>Failed login attempts before lockout.</td>
|
|
||||||
</tr>
|
|
||||||
<tr>
|
|
||||||
<td><code>AUTH_LOCKOUT_MINUTES</code></td>
|
|
||||||
<td><code>15</code></td>
|
|
||||||
<td>Lockout duration after max failed attempts.</td>
|
|
||||||
</tr>
|
|
||||||
<tr>
|
|
||||||
<td><code>RATE_LIMIT_DEFAULT</code></td>
|
|
||||||
<td><code>200 per minute</code></td>
|
|
||||||
<td>Default API rate limit.</td>
|
|
||||||
</tr>
|
|
||||||
<tr class="table-secondary">
|
|
||||||
<td colspan="3" class="fw-semibold">Encryption Settings</td>
|
|
||||||
</tr>
|
</tr>
|
||||||
<tr>
|
<tr>
|
||||||
<td><code>ENCRYPTION_ENABLED</code></td>
|
<td><code>ENCRYPTION_ENABLED</code></td>
|
||||||
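The `AUTH_MAX_ATTEMPTS` / `AUTH_LOCKOUT_MINUTES` defaults documented in the hunk above describe a counter-based login lockout. A minimal sketch of that pattern, assuming an in-memory, single-process store (the function names and storage are illustrative, not MyFSIO's actual internals):

```python
import time

AUTH_MAX_ATTEMPTS = 5        # failed logins before lockout (table default)
AUTH_LOCKOUT_MINUTES = 15    # lockout duration (table default)

_failures = {}  # access_key -> (failure_count, first_failure_timestamp)

def register_failure(access_key: str) -> None:
    """Record one failed login attempt for this key."""
    count, since = _failures.get(access_key, (0, time.time()))
    _failures[access_key] = (count + 1, since)

def is_locked(access_key: str) -> bool:
    """True while the key has exceeded the attempt budget recently."""
    count, since = _failures.get(access_key, (0, 0.0))
    if count < AUTH_MAX_ATTEMPTS:
        return False
    if time.time() - since > AUTH_LOCKOUT_MINUTES * 60:
        _failures.pop(access_key, None)  # lockout window expired; reset
        return False
    return True
```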
@@ -168,25 +93,9 @@ python run.py --mode ui
 <td><code>false</code></td>
 <td>Enable KMS key management for encryption.</td>
 </tr>
-<tr class="table-secondary">
-<td colspan="3" class="fw-semibold">Logging Settings</td>
-</tr>
-<tr>
-<td><code>LOG_LEVEL</code></td>
-<td><code>INFO</code></td>
-<td>Log verbosity: DEBUG, INFO, WARNING, ERROR.</td>
-</tr>
-<tr>
-<td><code>LOG_TO_FILE</code></td>
-<td><code>true</code></td>
-<td>Enable file logging.</td>
-</tr>
 </tbody>
 </table>
 </div>
-<div class="alert alert-warning mt-3 mb-0 small">
-<strong>Production Checklist:</strong> Set <code>SECRET_KEY</code>, restrict <code>CORS_ORIGINS</code>, configure <code>API_BASE_URL</code>, enable HTTPS via reverse proxy, and use <code>--prod</code> flag.
-</div>
 </div>
 </article>
 <article id="background" class="card shadow-sm docs-section">
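The removed "Production Checklist" advice still holds for the new revision: a random per-boot `SECRET_KEY` invalidates sessions on every restart. One standard-library way to mint a fixed value to export as `SECRET_KEY` (illustrative, not a MyFSIO command):

```python
import secrets

# Run once and keep the output in your deployment secrets,
# e.g. SECRET_KEY=<output> in the service environment.
print(secrets.token_hex(32))
```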
@@ -231,7 +140,7 @@ WorkingDirectory=/opt/myfsio
 ExecStart=/opt/myfsio/myfsio
 Restart=on-failure
 RestartSec=5
-Environment=STORAGE_ROOT=/var/lib/myfsio
+Environment=MYFSIO_DATA_DIR=/var/lib/myfsio
 Environment=API_BASE_URL=https://s3.example.com

 [Install]
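This hunk renames the unit's data-directory variable from `STORAGE_ROOT` to `MYFSIO_DATA_DIR`. A deployment script that must span both revisions can accept either name; a minimal sketch (the fallback order and the `data` default are assumptions):

```python
import os
from pathlib import Path

# Prefer the new variable, fall back to the old one, then to ./data.
data_dir = Path(
    os.environ.get("MYFSIO_DATA_DIR")
    or os.environ.get("STORAGE_ROOT")
    or "data"
)
data_dir.mkdir(parents=True, exist_ok=True)
print(f"Using data directory: {data_dir}")
```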
@@ -286,15 +195,6 @@ sudo journalctl -u myfsio -f # View logs</code></pre>
 <li>Progress rows highlight retries, throughput, and completion even if you close the modal.</li>
 </ul>
 </div>
-<div>
-<h3 class="h6 text-uppercase text-muted">Object browser</h3>
-<ul>
-<li>Navigate folder hierarchies using breadcrumbs. Objects with <code>/</code> in keys display as folders.</li>
-<li>Infinite scroll loads more objects automatically. Choose batch size (50–250) from the footer dropdown.</li>
-<li>Bulk select objects for multi-delete or multi-download. Filter by name using the search box.</li>
-<li>If loading fails, click <strong>Retry</strong> to attempt again—no page refresh needed.</li>
-</ul>
-</div>
 <div>
 <h3 class="h6 text-uppercase text-muted">Object details</h3>
 <ul>
@@ -438,62 +338,10 @@ curl -X POST {{ api_base }}/presign/demo/notes.txt \
 <span class="docs-section-kicker">07</span>
 <h2 class="h4 mb-0">API Examples</h2>
 </div>
-<p class="text-muted">Common operations using popular SDKs and tools.</p>
+<p class="text-muted">Common operations using boto3.</p>

-<h3 class="h6 text-uppercase text-muted mt-4">Python (boto3)</h3>
+<h5 class="mt-4">Multipart Upload</h5>
-<pre class="mb-4"><code class="language-python">import boto3
+<pre><code class="language-python">import boto3

-s3 = boto3.client(
-'s3',
-endpoint_url='{{ api_base }}',
-aws_access_key_id='<access_key>',
-aws_secret_access_key='<secret_key>'
-)
-
-# List buckets
-buckets = s3.list_buckets()['Buckets']
-
-# Create bucket
-s3.create_bucket(Bucket='mybucket')
-
-# Upload file
-s3.upload_file('local.txt', 'mybucket', 'remote.txt')
-
-# Download file
-s3.download_file('mybucket', 'remote.txt', 'downloaded.txt')
-
-# Generate presigned URL (valid 1 hour)
-url = s3.generate_presigned_url(
-'get_object',
-Params={'Bucket': 'mybucket', 'Key': 'remote.txt'},
-ExpiresIn=3600
-)</code></pre>
-
-<h3 class="h6 text-uppercase text-muted mt-4">JavaScript (AWS SDK v3)</h3>
-<pre class="mb-4"><code class="language-javascript">import { S3Client, ListBucketsCommand, PutObjectCommand } from '@aws-sdk/client-s3';
-
-const s3 = new S3Client({
-endpoint: '{{ api_base }}',
-region: 'us-east-1',
-credentials: {
-accessKeyId: '<access_key>',
-secretAccessKey: '<secret_key>'
-},
-forcePathStyle: true // Required for S3-compatible services
-});
-
-// List buckets
-const { Buckets } = await s3.send(new ListBucketsCommand({}));
-
-// Upload object
-await s3.send(new PutObjectCommand({
-Bucket: 'mybucket',
-Key: 'hello.txt',
-Body: 'Hello, World!'
-}));</code></pre>
-
-<h3 class="h6 text-uppercase text-muted mt-4">Multipart Upload (Python)</h3>
-<pre class="mb-4"><code class="language-python">import boto3

 s3 = boto3.client('s3', endpoint_url='{{ api_base }}')

@@ -501,9 +349,9 @@ s3 = boto3.client('s3', endpoint_url='{{ api_base }}')
 response = s3.create_multipart_upload(Bucket='mybucket', Key='large.bin')
 upload_id = response['UploadId']

-# Upload parts (minimum 5MB each, except last part)
+# Upload parts
 parts = []
-chunks = [b'chunk1...', b'chunk2...']
+chunks = [b'chunk1', b'chunk2'] # Example data chunks
 for part_number, chunk in enumerate(chunks, start=1):
 response = s3.upload_part(
 Bucket='mybucket',
@@ -521,19 +369,6 @@ s3.complete_multipart_upload(
 UploadId=upload_id,
 MultipartUpload={'Parts': parts}
 )</code></pre>
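The dropped comment ("minimum 5MB each, except last part") is worth keeping in mind: S3-style multipart APIs commonly reject non-final parts smaller than 5 MB. A self-contained sketch that streams real 5 MB parts from a local file (endpoint and placeholder credentials mirror the docs; Python 3.8+ for the walrus operator):

```python
import boto3

PART_SIZE = 5 * 1024 * 1024  # 5 MB minimum for every part except the last

s3 = boto3.client(
    "s3",
    endpoint_url="http://127.0.0.1:5000",
    aws_access_key_id="<access_key>",
    aws_secret_access_key="<secret_key>",
)

upload = s3.create_multipart_upload(Bucket="mybucket", Key="large.bin")
parts = []
with open("large.bin", "rb") as fh:
    part_number = 1
    while chunk := fh.read(PART_SIZE):
        resp = s3.upload_part(
            Bucket="mybucket", Key="large.bin",
            PartNumber=part_number, UploadId=upload["UploadId"], Body=chunk,
        )
        parts.append({"PartNumber": part_number, "ETag": resp["ETag"]})
        part_number += 1

s3.complete_multipart_upload(
    Bucket="mybucket", Key="large.bin",
    UploadId=upload["UploadId"], MultipartUpload={"Parts": parts},
)
```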
-
-<h3 class="h6 text-uppercase text-muted mt-4">Presigned URLs for Sharing</h3>
-<pre class="mb-0"><code class="language-bash"># Generate a download link valid for 15 minutes
-curl -X POST "{{ api_base }}/presign/mybucket/photo.jpg" \
--H "Content-Type: application/json" \
--H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
--d '{"method": "GET", "expires_in": 900}'
-
-# Generate an upload link (PUT) valid for 1 hour
-curl -X POST "{{ api_base }}/presign/mybucket/upload.bin" \
--H "Content-Type: application/json" \
--H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
--d '{"method": "PUT", "expires_in": 3600}'</code></pre>
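The curl examples removed above map directly onto the custom `/presign` endpoint. The same request from Python, for reference (the `requests` library is an assumption; the endpoint shape and headers come from the curl commands):

```python
import requests

API_BASE = "http://127.0.0.1:5000"  # documented default; use your API_BASE_URL

resp = requests.post(
    f"{API_BASE}/presign/mybucket/photo.jpg",
    headers={"X-Access-Key": "<key>", "X-Secret-Key": "<secret>"},
    json={"method": "GET", "expires_in": 900},  # 15-minute download link
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # response body contains the presigned URL
```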
 </div>
 </article>
 <article id="replication" class="card shadow-sm docs-section">
@@ -557,46 +392,15 @@ curl -X POST "{{ api_base }}/presign/mybucket/upload.bin" \
 </li>
 </ol>

-<div class="alert alert-light border mb-3 overflow-hidden">
+<div class="alert alert-light border mb-0">
-<div class="d-flex flex-column flex-sm-row gap-2 mb-2">
+<div class="d-flex gap-2">
-<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-terminal text-muted mt-1 flex-shrink-0 d-none d-sm-block" viewBox="0 0 16 16">
+<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-terminal text-muted mt-1" viewBox="0 0 16 16">
 <path d="M6 9a.5.5 0 0 1 .5-.5h3a.5.5 0 0 1 0 1h-3A.5.5 0 0 1 6 9zM3.854 4.146a.5.5 0 1 0-.708.708L4.793 6.5 3.146 8.146a.5.5 0 1 0 .708.708l2-2a.5.5 0 0 0 0-.708l-2-2z"/>
 <path d="M2 1a2 2 0 0 0-2 2v10a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V3a2 2 0 0 0-2-2H2zm12 1a1 1 0 0 1 1 1v10a1 1 0 0 1-1 1H2a1 1 0 0 1-1-1V3a1 1 0 0 1 1-1h12z"/>
 </svg>
-<div class="flex-grow-1 min-width-0">
+<div>
-<strong>Headless Target Setup</strong>
+<strong>Headless Target Setup?</strong>
-<p class="small text-muted mb-2">If your target server has no UI, create a <code>setup_target.py</code> script to bootstrap credentials:</p>
+<p class="small text-muted mb-0">If your target server has no UI, use the Python API directly to bootstrap credentials. See <code>docs.md</code> in the project root for the <code>setup_target.py</code> script.</p>
-<pre class="mb-0 overflow-auto" style="max-width: 100%;"><code class="language-python"># setup_target.py
-from pathlib import Path
-from app.iam import IamService
-from app.storage import ObjectStorage
-
-# Initialize services (paths match default config)
-data_dir = Path("data")
-iam = IamService(data_dir / ".myfsio.sys" / "config" / "iam.json")
-storage = ObjectStorage(data_dir)
-
-# 1. Create the bucket
-bucket_name = "backup-bucket"
-try:
-storage.create_bucket(bucket_name)
-print(f"Bucket '{bucket_name}' created.")
-except Exception as e:
-print(f"Bucket creation skipped: {e}")
-
-# 2. Create the user
-try:
-creds = iam.create_user(
-display_name="Replication User",
-policies=[{"bucket": bucket_name, "actions": ["write", "read", "list"]}]
-)
-print("\n--- CREDENTIALS GENERATED ---")
-print(f"Access Key: {creds['access_key']}")
-print(f"Secret Key: {creds['secret_key']}")
-print("-----------------------------")
-except Exception as e:
-print(f"User creation failed: {e}")</code></pre>
-<p class="small text-muted mt-2 mb-0">Save and run: <code>python setup_target.py</code></p>
 </div>
 </div>
 </div>
@@ -607,129 +411,11 @@ except Exception as e:
 <li>Follow the steps above to replicate <strong>A → B</strong>.</li>
 <li>Repeat the process on Server B to replicate <strong>B → A</strong> (create a connection to A, enable rule).</li>
 </ol>
-<p class="small text-muted mb-3">
+<p class="small text-muted mb-0">
 <strong>Loop Prevention:</strong> The system automatically detects replication traffic using a custom User-Agent (<code>S3ReplicationAgent</code>). This prevents infinite loops where an object replicated from A to B is immediately replicated back to A.
 <br>
 <strong>Deletes:</strong> Deleting an object on one server will propagate the deletion to the other server.
 </p>
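Loop prevention hinges on recognizing the replication client's `S3ReplicationAgent` User-Agent. A minimal sketch of the check (the function is illustrative; MyFSIO's actual hook may differ):

```python
REPLICATION_AGENT = "S3ReplicationAgent"

def is_replication_request(headers: dict) -> bool:
    """True when a write was issued by a peer's replication worker."""
    return REPLICATION_AGENT in headers.get("User-Agent", "")

# The PUT-object handler can then skip re-queueing replication for such
# writes, so an object copied A -> B is never immediately copied back to A.
print(is_replication_request({"User-Agent": "S3ReplicationAgent/1.0"}))  # True
```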
-
-<h3 class="h6 text-uppercase text-muted mt-4">Error Handling & Rate Limits</h3>
-<p class="small text-muted mb-3">The replication system handles transient failures automatically:</p>
-<div class="table-responsive mb-3">
-<table class="table table-sm table-bordered small">
-<thead class="table-light">
-<tr>
-<th>Behavior</th>
-<th>Details</th>
-</tr>
-</thead>
-<tbody>
-<tr>
-<td><strong>Retry Logic</strong></td>
-<td>boto3 automatically handles 429 (rate limit) errors using exponential backoff with <code>max_attempts=2</code></td>
-</tr>
-<tr>
-<td><strong>Concurrency</strong></td>
-<td>Uses a ThreadPoolExecutor with 4 parallel workers for replication tasks</td>
-</tr>
-<tr>
-<td><strong>Timeouts</strong></td>
-<td>Connect: 5s, Read: 30s. Large files use streaming transfers</td>
-</tr>
-</tbody>
-</table>
-</div>
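The retry and timeout figures in the removed table correspond to a botocore client configuration roughly like the following (a sketch; MyFSIO's internal client setup may differ):

```python
import boto3
from botocore.config import Config

replication_config = Config(
    retries={"max_attempts": 2, "mode": "standard"},  # backs off on 429s
    connect_timeout=5,   # seconds, per the table above
    read_timeout=30,     # seconds, per the table above
)

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example.com",
    aws_access_key_id="<access_key>",
    aws_secret_access_key="<secret_key>",
    config=replication_config,
)
```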
<div class="alert alert-warning border mb-0">
|
|
||||||
<div class="d-flex gap-2">
|
|
||||||
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-exclamation-triangle text-warning mt-1 flex-shrink-0" viewBox="0 0 16 16">
|
|
||||||
<path d="M7.938 2.016A.13.13 0 0 1 8.002 2a.13.13 0 0 1 .063.016.146.146 0 0 1 .054.057l6.857 11.667c.036.06.035.124.002.183a.163.163 0 0 1-.054.06.116.116 0 0 1-.066.017H1.146a.115.115 0 0 1-.066-.017.163.163 0 0 1-.054-.06.176.176 0 0 1 .002-.183L7.884 2.073a.147.147 0 0 1 .054-.057zm1.044-.45a1.13 1.13 0 0 0-1.96 0L.165 13.233c-.457.778.091 1.767.98 1.767h13.713c.889 0 1.438-.99.98-1.767L8.982 1.566z"/>
|
|
||||||
<path d="M7.002 12a1 1 0 1 1 2 0 1 1 0 0 1-2 0zM7.1 5.995a.905.905 0 1 1 1.8 0l-.35 3.507a.552.552 0 0 1-1.1 0L7.1 5.995z"/>
|
|
||||||
</svg>
|
|
||||||
<div>
|
|
||||||
<strong>Large File Counts:</strong> When replicating buckets with many objects, the target server's rate limits may cause delays. There is no built-in pause mechanism. Consider increasing <code>RATE_LIMIT_DEFAULT</code> on the target server during bulk replication operations.
|
|
||||||
</div>
|
|
||||||
</div>
|
|
||||||
</div>
|
|
||||||
</div>
|
|
||||||
</article>
|
|
||||||
<article id="versioning" class="card shadow-sm docs-section">
|
|
||||||
<div class="card-body">
|
|
||||||
<div class="d-flex align-items-center gap-2 mb-3">
|
|
||||||
<span class="docs-section-kicker">09</span>
|
|
||||||
<h2 class="h4 mb-0">Object Versioning</h2>
|
|
||||||
</div>
|
|
||||||
<p class="text-muted">Keep multiple versions of objects to protect against accidental deletions and overwrites. Restore previous versions at any time.</p>
|
|
||||||
|
|
||||||
<h3 class="h6 text-uppercase text-muted mt-4">Enabling Versioning</h3>
|
|
||||||
<ol class="docs-steps mb-3">
|
|
||||||
<li>Navigate to your bucket's <strong>Properties</strong> tab.</li>
|
|
||||||
<li>Find the <strong>Versioning</strong> card and click <strong>Enable</strong>.</li>
|
|
||||||
<li>All subsequent uploads will create new versions instead of overwriting.</li>
|
|
||||||
</ol>
|
|
||||||
|
|
||||||
<h3 class="h6 text-uppercase text-muted mt-4">Version Operations</h3>
|
|
||||||
<div class="table-responsive mb-3">
|
|
||||||
<table class="table table-sm table-bordered small">
|
|
||||||
<thead class="table-light">
|
|
||||||
<tr>
|
|
||||||
<th>Operation</th>
|
|
||||||
<th>Description</th>
|
|
||||||
</tr>
|
|
||||||
</thead>
|
|
||||||
<tbody>
|
|
||||||
<tr>
|
|
||||||
<td><strong>View Versions</strong></td>
|
|
||||||
<td>Click the version icon on any object to see all historical versions with timestamps and sizes.</td>
|
|
||||||
</tr>
|
|
||||||
<tr>
|
|
||||||
<td><strong>Restore Version</strong></td>
|
|
||||||
<td>Click <strong>Restore</strong> on any version to make it the current version (creates a copy).</td>
|
|
||||||
</tr>
|
|
||||||
<tr>
|
|
||||||
<td><strong>Delete Current</strong></td>
|
|
||||||
<td>Deleting an object archives it. Previous versions remain accessible.</td>
|
|
||||||
</tr>
|
|
||||||
<tr>
|
|
||||||
<td><strong>Purge All</strong></td>
|
|
||||||
<td>Permanently delete an object and all its versions. This cannot be undone.</td>
|
|
||||||
</tr>
|
|
||||||
</tbody>
|
|
||||||
</table>
|
|
||||||
</div>
|
|
||||||
|
|
||||||
<h3 class="h6 text-uppercase text-muted mt-4">Archived Objects</h3>
|
|
||||||
<p class="small text-muted mb-3">When you delete a versioned object, it becomes "archived" - the current version is removed but historical versions remain. The <strong>Archived</strong> tab shows these objects so you can restore them.</p>
|
|
||||||
|
|
||||||
<h3 class="h6 text-uppercase text-muted mt-4">API Usage</h3>
|
|
||||||
<pre class="mb-3"><code class="language-bash"># Enable versioning
|
|
||||||
curl -X PUT "{{ api_base }}/<bucket>?versioning" \
|
|
||||||
-H "Content-Type: application/json" \
|
|
||||||
-H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
|
|
||||||
-d '{"Status": "Enabled"}'
|
|
||||||
|
|
||||||
# Get versioning status
|
|
||||||
curl "{{ api_base }}/<bucket>?versioning" \
|
|
||||||
-H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"
|
|
||||||
|
|
||||||
# List object versions
|
|
||||||
curl "{{ api_base }}/<bucket>?versions" \
|
|
||||||
-H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"
|
|
||||||
|
|
||||||
# Get specific version
|
|
||||||
curl "{{ api_base }}/<bucket>/<key>?versionId=<version-id>" \
|
|
||||||
-H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"</code></pre>
|
|
||||||
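The removed versioning calls translate one-to-one into any HTTP client. For example, enabling versioning and listing versions from Python (`requests` assumed; paths and headers come from the curl above, and the response body is printed as-is since its format is not shown in the doc):

```python
import requests

API_BASE = "http://127.0.0.1:5000"
AUTH = {"X-Access-Key": "<key>", "X-Secret-Key": "<secret>"}

# Enable versioning on a bucket.
resp = requests.put(f"{API_BASE}/mybucket?versioning",
                    headers=AUTH, json={"Status": "Enabled"}, timeout=10)
resp.raise_for_status()

# List all versions of the bucket's objects.
versions = requests.get(f"{API_BASE}/mybucket?versions", headers=AUTH, timeout=10)
print(versions.text)
```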
-
-<div class="alert alert-light border mb-0">
-<div class="d-flex gap-2">
-<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-info-circle text-muted mt-1" viewBox="0 0 16 16">
-<path d="M8 15A7 7 0 1 1 8 1a7 7 0 0 1 0 14zm0 1A8 8 0 1 0 8 0a8 8 0 0 0 0 16z"/>
-<path d="m8.93 6.588-2.29.287-.082.38.45.083c.294.07.352.176.288.469l-.738 3.468c-.194.897.105 1.319.808 1.319.545 0 1.178-.252 1.465-.598l.088-.416c-.2.176-.492.246-.686.246-.275 0-.375-.193-.304-.533L8.93 6.588zM9 4.5a1 1 0 1 1-2 0 1 1 0 0 1 2 0z"/>
-</svg>
-<div>
-<strong>Storage Impact:</strong> Each version consumes storage. Enable quotas to limit total bucket size including all versions.
-</div>
-</div>
-</div>
 </div>
 </article>
 <article id="quotas" class="card shadow-sm docs-section">
@@ -894,92 +580,10 @@ curl -X DELETE "{{ api_base }}/kms/keys/{key-id}?waiting_period_days=30" \
 </p>
 </div>
 </article>
-<article id="lifecycle" class="card shadow-sm docs-section">
-<div class="card-body">
-<div class="d-flex align-items-center gap-2 mb-3">
-<span class="docs-section-kicker">12</span>
-<h2 class="h4 mb-0">Lifecycle Rules</h2>
-</div>
-<p class="text-muted">Automatically delete expired objects, clean up old versions, and abort incomplete multipart uploads using time-based lifecycle rules.</p>
-
-<h3 class="h6 text-uppercase text-muted mt-4">How It Works</h3>
-<p class="small text-muted mb-3">
-Lifecycle rules run on a background timer (Python <code>threading.Timer</code>), not a system cronjob. The enforcement cycle triggers every <strong>3600 seconds (1 hour)</strong> by default. Each cycle scans all buckets with lifecycle configurations and applies matching rules.
-</p>
-
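The removed "How It Works" text pins the enforcement loop to `threading.Timer` with a 3600-second period. The re-arming pattern it describes looks roughly like this (function body and names are illustrative):

```python
import threading

ENFORCE_INTERVAL_SECONDS = 3600  # one enforcement cycle per hour (doc default)

def enforce_lifecycle_rules():
    """Scan buckets with lifecycle configs and apply matching rules."""
    try:
        pass  # expiration, noncurrent-version cleanup, multipart aborts...
    finally:
        # Re-arm the timer; a daemon thread will not block process exit.
        timer = threading.Timer(ENFORCE_INTERVAL_SECONDS, enforce_lifecycle_rules)
        timer.daemon = True
        timer.start()

enforce_lifecycle_rules()  # kick off the first cycle immediately
```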
<h3 class="h6 text-uppercase text-muted mt-4">Expiration Types</h3>
|
|
||||||
<div class="table-responsive mb-3">
|
|
||||||
<table class="table table-sm table-bordered small">
|
|
||||||
<thead class="table-light">
|
|
||||||
<tr>
|
|
||||||
<th>Type</th>
|
|
||||||
<th>Description</th>
|
|
||||||
</tr>
|
|
||||||
</thead>
|
|
||||||
<tbody>
|
|
||||||
<tr>
|
|
||||||
<td><strong>Expiration (Days)</strong></td>
|
|
||||||
<td>Delete current objects older than N days from their last modification</td>
|
|
||||||
</tr>
|
|
||||||
<tr>
|
|
||||||
<td><strong>Expiration (Date)</strong></td>
|
|
||||||
<td>Delete current objects after a specific date (ISO 8601 format)</td>
|
|
||||||
</tr>
|
|
||||||
<tr>
|
|
||||||
<td><strong>NoncurrentVersionExpiration</strong></td>
|
|
||||||
<td>Delete non-current (archived) versions older than N days from when they became non-current</td>
|
|
||||||
</tr>
|
|
||||||
<tr>
|
|
||||||
<td><strong>AbortIncompleteMultipartUpload</strong></td>
|
|
||||||
<td>Abort multipart uploads that have been in progress longer than N days</td>
|
|
||||||
</tr>
|
|
||||||
</tbody>
|
|
||||||
</table>
|
|
||||||
</div>
|
|
||||||
|
|
||||||
<h3 class="h6 text-uppercase text-muted mt-4">API Usage</h3>
|
|
||||||
<pre class="mb-3"><code class="language-bash"># Set lifecycle rule (delete objects older than 30 days)
|
|
||||||
curl -X PUT "{{ api_base }}/<bucket>?lifecycle" \
|
|
||||||
-H "Content-Type: application/json" \
|
|
||||||
-H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
|
|
||||||
-d '[{
|
|
||||||
"ID": "expire-old-objects",
|
|
||||||
"Status": "Enabled",
|
|
||||||
"Prefix": "",
|
|
||||||
"Expiration": {"Days": 30}
|
|
||||||
}]'
|
|
||||||
|
|
||||||
# Abort incomplete multipart uploads after 7 days
|
|
||||||
curl -X PUT "{{ api_base }}/<bucket>?lifecycle" \
|
|
||||||
-H "Content-Type: application/json" \
|
|
||||||
-H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
|
|
||||||
-d '[{
|
|
||||||
"ID": "cleanup-multipart",
|
|
||||||
"Status": "Enabled",
|
|
||||||
"AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7}
|
|
||||||
}]'
|
|
||||||
|
|
||||||
# Get current lifecycle configuration
|
|
||||||
curl "{{ api_base }}/<bucket>?lifecycle" \
|
|
||||||
-H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"</code></pre>
|
|
||||||
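The same lifecycle payloads from Python (`requests` assumed; sending both rules in a single document is an assumption, since the curl examples above set them in separate PUTs):

```python
import requests

API_BASE = "http://127.0.0.1:5000"
AUTH = {"X-Access-Key": "<key>", "X-Secret-Key": "<secret>"}

rules = [
    {"ID": "expire-old-objects", "Status": "Enabled",
     "Prefix": "", "Expiration": {"Days": 30}},
    {"ID": "cleanup-multipart", "Status": "Enabled",
     "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7}},
]

resp = requests.put(f"{API_BASE}/mybucket?lifecycle",
                    headers=AUTH, json=rules, timeout=10)
resp.raise_for_status()
print(requests.get(f"{API_BASE}/mybucket?lifecycle", headers=AUTH, timeout=10).text)
```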
-
-<div class="alert alert-light border mb-0">
-<div class="d-flex gap-2">
-<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-info-circle text-muted mt-1 flex-shrink-0" viewBox="0 0 16 16">
-<path d="M8 15A7 7 0 1 1 8 1a7 7 0 0 1 0 14zm0 1A8 8 0 1 0 8 0a8 8 0 0 0 0 16z"/>
-<path d="m8.93 6.588-2.29.287-.082.38.45.083c.294.07.352.176.288.469l-.738 3.468c-.194.897.105 1.319.808 1.319.545 0 1.178-.252 1.465-.598l.088-.416c-.2.176-.492.246-.686.246-.275 0-.375-.193-.304-.533L8.93 6.588zM9 4.5a1 1 0 1 1-2 0 1 1 0 0 1 2 0z"/>
-</svg>
-<div>
-<strong>Prefix Filtering:</strong> Use the <code>Prefix</code> field to scope rules to specific paths (e.g., <code>"logs/"</code>). Leave empty to apply to all objects in the bucket.
-</div>
-</div>
-</div>
-</div>
-</article>
 <article id="troubleshooting" class="card shadow-sm docs-section">
 <div class="card-body">
 <div class="d-flex align-items-center gap-2 mb-3">
-<span class="docs-section-kicker">13</span>
+<span class="docs-section-kicker">12</span>
 <h2 class="h4 mb-0">Troubleshooting & tips</h2>
 </div>
 <div class="table-responsive">
@@ -1017,11 +621,6 @@ curl "{{ api_base }}/<bucket>?lifecycle" \
 <td>Proxy headers missing or <code>API_BASE_URL</code> incorrect</td>
 <td>Ensure your proxy sends <code>X-Forwarded-Host</code>/<code>Proto</code> headers, or explicitly set <code>API_BASE_URL</code> to your public domain.</td>
 </tr>
-<tr>
-<td>Large folder uploads hitting rate limits (429)</td>
-<td><code>RATE_LIMIT_DEFAULT</code> exceeded (200/min)</td>
-<td>Increase rate limit in env config, use Redis backend (<code>RATE_LIMIT_STORAGE_URI=redis://host:port</code>) for distributed setups, or upload in smaller batches.</td>
-</tr>
 </tbody>
 </table>
 </div>
@@ -1041,10 +640,8 @@ curl "{{ api_base }}/<bucket>?lifecycle" \
 <li><a href="#api">REST endpoints</a></li>
 <li><a href="#examples">API Examples</a></li>
 <li><a href="#replication">Site Replication</a></li>
-<li><a href="#versioning">Object Versioning</a></li>
 <li><a href="#quotas">Bucket Quotas</a></li>
 <li><a href="#encryption">Encryption</a></li>
-<li><a href="#lifecycle">Lifecycle Rules</a></li>
 <li><a href="#troubleshooting">Troubleshooting</a></li>
 </ul>
 <div class="docs-sidebar-callouts">
@@ -10,7 +10,6 @@
 </svg>
 IAM Configuration
 </h1>
-<p class="text-muted mb-0 mt-1">Create and manage users with fine-grained bucket permissions.</p>
 </div>
 <div class="d-flex gap-2">
 {% if not iam_locked %}
@@ -110,68 +109,35 @@
 {% else %}
 <div class="card-body px-4 pb-4">
 {% if users %}
-<div class="row g-3">
+<div class="table-responsive">
+<table class="table table-hover align-middle mb-0">
+<thead class="table-light">
+<tr>
+<th scope="col">User</th>
+<th scope="col">Policies</th>
+<th scope="col" class="text-end">Actions</th>
+</tr>
+</thead>
+<tbody>
 {% for user in users %}
-<div class="col-md-6 col-xl-4">
+<tr>
-<div class="card h-100 iam-user-card">
+<td>
-<div class="card-body">
+<div class="d-flex align-items-center gap-3">
-<div class="d-flex align-items-start justify-content-between mb-3">
+<div class="user-avatar">
-<div class="d-flex align-items-center gap-3 min-width-0 overflow-hidden">
+<svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" fill="currentColor" viewBox="0 0 16 16">
-<div class="user-avatar user-avatar-lg flex-shrink-0">
-<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" fill="currentColor" viewBox="0 0 16 16">
 <path d="M8 8a3 3 0 1 0 0-6 3 3 0 0 0 0 6zm2-3a2 2 0 1 1-4 0 2 2 0 0 1 4 0zm4 8c0 1-1 1-1 1H3s-1 0-1-1 1-4 6-4 6 3 6 4zm-1-.004c-.001-.246-.154-.986-.832-1.664C11.516 10.68 10.289 10 8 10c-2.29 0-3.516.68-4.168 1.332-.678.678-.83 1.418-.832 1.664h10z"/>
 </svg>
 </div>
-<div class="min-width-0">
+<div>
-<h6 class="fw-semibold mb-0 text-truncate" title="{{ user.display_name }}">{{ user.display_name }}</h6>
+<div class="fw-medium">{{ user.display_name }}</div>
-<code class="small text-muted d-block text-truncate" title="{{ user.access_key }}">{{ user.access_key }}</code>
+<code class="small text-muted">{{ user.access_key }}</code>
 </div>
 </div>
-<div class="dropdown flex-shrink-0">
+</td>
-<button class="btn btn-sm btn-icon" type="button" data-bs-toggle="dropdown" aria-expanded="false">
+<td>
-<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" viewBox="0 0 16 16">
-<path d="M9.5 13a1.5 1.5 0 1 1-3 0 1.5 1.5 0 0 1 3 0zm0-5a1.5 1.5 0 1 1-3 0 1.5 1.5 0 0 1 3 0zm0-5a1.5 1.5 0 1 1-3 0 1.5 1.5 0 0 1 3 0z"/>
-</svg>
-</button>
-<ul class="dropdown-menu dropdown-menu-end">
-<li>
-<button class="dropdown-item" type="button" data-edit-user="{{ user.access_key }}" data-display-name="{{ user.display_name }}">
-<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-2" viewBox="0 0 16 16">
-<path d="M12.146.146a.5.5 0 0 1 .708 0l3 3a.5.5 0 0 1 0 .708l-10 10a.5.5 0 0 1-.168.11l-5 2a.5.5 0 0 1-.65-.65l2-5a.5.5 0 0 1 .11-.168l10-10zM11.207 2.5 13.5 4.793 14.793 3.5 12.5 1.207 11.207 2.5zm1.586 3L10.5 3.207 4 9.707V10h.5a.5.5 0 0 1 .5.5v.5h.5a.5.5 0 0 1 .5.5v.5h.293l6.5-6.5z"/>
-</svg>
-Edit Name
-</button>
-</li>
-<li>
-<button class="dropdown-item" type="button" data-rotate-user="{{ user.access_key }}">
-<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-2" viewBox="0 0 16 16">
-<path d="M11.534 7h3.932a.25.25 0 0 1 .192.41l-1.966 2.36a.25.25 0 0 1-.384 0l-1.966-2.36a.25.25 0 0 1 .192-.41zm-11 2h3.932a.25.25 0 0 0 .192-.41L2.692 6.23a.25.25 0 0 0-.384 0L.342 8.59A.25.25 0 0 0 .534 9z"/>
-<path fill-rule="evenodd" d="M8 3c-1.552 0-2.94.707-3.857 1.818a.5.5 0 1 1-.771-.636A6.002 6.002 0 0 1 13.917 7H12.9A5.002 5.002 0 0 0 8 3zM3.1 9a5.002 5.002 0 0 0 8.757 2.182.5.5 0 1 1 .771.636A6.002 6.002 0 0 1 2.083 9H3.1z"/>
-</svg>
-Rotate Secret
-</button>
-</li>
-<li><hr class="dropdown-divider"></li>
-<li>
-<button class="dropdown-item text-danger" type="button" data-delete-user="{{ user.access_key }}">
-<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-2" viewBox="0 0 16 16">
-<path d="M5.5 5.5a.5.5 0 0 1 .5.5v6a.5.5 0 0 1-1 0v-6a.5.5 0 0 1 .5-.5zm2.5 0a.5.5 0 0 1 .5.5v6a.5.5 0 0 1-1 0v-6a.5.5 0 0 1 .5-.5zm3 .5v6a.5.5 0 0 1-1 0v-6a.5.5 0 0 1 1 0z"/>
-<path fill-rule="evenodd" d="M14.5 3a1 1 0 0 1-1 1H13v9a2 2 0 0 1-2 2H5a2 2 0 0 1-2-2V4h-.5a1 1 0 0 1-1-1V2a1 1 0 0 1 1-1H6a1 1 0 0 1 1-1h2a1 1 0 0 1 1 1h3.5a1 1 0 0 1 1 1v1zM4.118 4 4 4.059V13a1 1 0 0 0 1 1h6a1 1 0 0 0 1-1V4.059L11.882 4H4.118zM2.5 3V2h11v1h-11z"/>
-</svg>
-Delete User
-</button>
-</li>
-</ul>
-</div>
-</div>
-<div class="mb-3">
-<div class="small text-muted mb-2">Bucket Permissions</div>
 <div class="d-flex flex-wrap gap-1">
 {% for policy in user.policies %}
 <span class="badge bg-primary bg-opacity-10 text-primary">
-<svg xmlns="http://www.w3.org/2000/svg" width="10" height="10" fill="currentColor" class="me-1" viewBox="0 0 16 16">
-<path d="M2.522 5H2a.5.5 0 0 0-.494.574l1.372 9.149A1.5 1.5 0 0 0 4.36 16h7.278a1.5 1.5 0 0 0 1.483-1.277l1.373-9.149A.5.5 0 0 0 14 5h-.522A5.5 5.5 0 0 0 2.522 5zm1.005 0a4.5 4.5 0 0 1 8.945 0H3.527z"/>
-</svg>
 {{ policy.bucket }}
 {% if '*' in policy.actions %}
 <span class="opacity-75">(full)</span>
@@ -183,18 +149,38 @@
<span class="badge bg-secondary bg-opacity-10 text-secondary">No policies</span>
|
<span class="badge bg-secondary bg-opacity-10 text-secondary">No policies</span>
|
||||||
{% endfor %}
|
{% endfor %}
|
||||||
</div>
|
</div>
|
||||||
</div>
|
</td>
|
||||||
<button class="btn btn-outline-primary btn-sm w-100" type="button" data-policy-editor data-access-key="{{ user.access_key }}">
|
<td class="text-end">
|
||||||
<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-1" viewBox="0 0 16 16">
|
<div class="btn-group btn-group-sm" role="group">
|
||||||
|
<button class="btn btn-outline-primary" type="button" data-rotate-user="{{ user.access_key }}" title="Rotate Secret">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" viewBox="0 0 16 16">
|
||||||
|
<path d="M11.534 7h3.932a.25.25 0 0 1 .192.41l-1.966 2.36a.25.25 0 0 1-.384 0l-1.966-2.36a.25.25 0 0 1 .192-.41zm-11 2h3.932a.25.25 0 0 0 .192-.41L2.692 6.23a.25.25 0 0 0-.384 0L.342 8.59A.25.25 0 0 0 .534 9z"/>
|
||||||
|
<path fill-rule="evenodd" d="M8 3c-1.552 0-2.94.707-3.857 1.818a.5.5 0 1 1-.771-.636A6.002 6.002 0 0 1 13.917 7H12.9A5.002 5.002 0 0 0 8 3zM3.1 9a5.002 5.002 0 0 0 8.757 2.182.5.5 0 1 1 .771.636A6.002 6.002 0 0 1 2.083 9H3.1z"/>
|
||||||
|
</svg>
|
||||||
|
</button>
|
||||||
|
<button class="btn btn-outline-secondary" type="button" data-edit-user="{{ user.access_key }}" data-display-name="{{ user.display_name }}" title="Edit User">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" viewBox="0 0 16 16">
|
||||||
|
<path d="M12.146.146a.5.5 0 0 1 .708 0l3 3a.5.5 0 0 1 0 .708l-10 10a.5.5 0 0 1-.168.11l-5 2a.5.5 0 0 1-.65-.65l2-5a.5.5 0 0 1 .11-.168l10-10zM11.207 2.5 13.5 4.793 14.793 3.5 12.5 1.207 11.207 2.5zm1.586 3L10.5 3.207 4 9.707V10h.5a.5.5 0 0 1 .5.5v.5h.5a.5.5 0 0 1 .5.5v.5h.293l6.5-6.5z"/>
|
||||||
|
</svg>
|
||||||
|
</button>
|
||||||
|
<button class="btn btn-outline-secondary" type="button" data-policy-editor data-access-key="{{ user.access_key }}" title="Edit Policies">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" viewBox="0 0 16 16">
|
||||||
<path d="M8 4.754a3.246 3.246 0 1 0 0 6.492 3.246 3.246 0 0 0 0-6.492zM5.754 8a2.246 2.246 0 1 1 4.492 0 2.246 2.246 0 0 1-4.492 0z"/>
|
<path d="M8 4.754a3.246 3.246 0 1 0 0 6.492 3.246 3.246 0 0 0 0-6.492zM5.754 8a2.246 2.246 0 1 1 4.492 0 2.246 2.246 0 0 1-4.492 0z"/>
|
||||||
<path d="M9.796 1.343c-.527-1.79-3.065-1.79-3.592 0l-.094.319a.873.873 0 0 1-1.255.52l-.292-.16c-1.64-.892-3.433.902-2.54 2.541l.159.292a.873.873 0 0 1-.52 1.255l-.319.094c-1.79.527-1.79 3.065 0 3.592l.319.094a.873.873 0 0 1 .52 1.255l-.16.292c-.892 1.64.901 3.434 2.541 2.54l.292-.159a.873.873 0 0 1 1.255.52l.094.319c.527 1.79 3.065 1.79 3.592 0l.094-.319a.873.873 0 0 1 1.255-.52l.292.16c1.64.893 3.434-.902 2.54-2.541l-.159-.292a.873.873 0 0 1 .52-1.255l.319-.094c1.79-.527 1.79-3.065 0-3.592l-.319-.094a.873.873 0 0 1-.52-1.255l.16-.292c.893-1.64-.902-3.433-2.541-2.54l-.292.159a.873.873 0 0 1-1.255-.52l-.094-.319z"/>
|
<path d="M9.796 1.343c-.527-1.79-3.065-1.79-3.592 0l-.094.319a.873.873 0 0 1-1.255.52l-.292-.16c-1.64-.892-3.433.902-2.54 2.541l.159.292a.873.873 0 0 1-.52 1.255l-.319.094c-1.79.527-1.79 3.065 0 3.592l.319.094a.873.873 0 0 1 .52 1.255l-.16.292c-.892 1.64.901 3.434 2.541 2.54l.292-.159a.873.873 0 0 1 1.255.52l.094.319c.527 1.79 3.065 1.79 3.592 0l.094-.319a.873.873 0 0 1 1.255-.52l.292.16c1.64.893 3.434-.902 2.54-2.541l-.159-.292a.873.873 0 0 1 .52-1.255l.319-.094c1.79-.527 1.79-3.065 0-3.592l-.319-.094a.873.873 0 0 1-.52-1.255l.16-.292c.893-1.64-.902-3.433-2.541-2.54l-.292.159a.873.873 0 0 1-1.255-.52l-.094-.319z"/>
|
||||||
</svg>
|
</svg>
|
||||||
Manage Policies
|
</button>
|
||||||
|
<button class="btn btn-outline-danger" type="button" data-delete-user="{{ user.access_key }}" title="Delete User">
|
||||||
|
<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" viewBox="0 0 16 16">
|
||||||
|
<path d="M5.5 5.5a.5.5 0 0 1 .5.5v6a.5.5 0 0 1-1 0v-6a.5.5 0 0 1 .5-.5zm2.5 0a.5.5 0 0 1 .5.5v6a.5.5 0 0 1-1 0v-6a.5.5 0 0 1 .5-.5zm3 .5v6a.5.5 0 0 1-1 0v-6a.5.5 0 0 1 1 0z"/>
|
||||||
|
<path fill-rule="evenodd" d="M14.5 3a1 1 0 0 1-1 1H13v9a2 2 0 0 1-2 2H5a2 2 0 0 1-2-2V4h-.5a1 1 0 0 1-1-1V2a1 1 0 0 1 1-1H6a1 1 0 0 1 1-1h2a1 1 0 0 1 1 1h3.5a1 1 0 0 1 1 1v1zM4.118 4 4 4.059V13a1 1 0 0 0 1 1h6a1 1 0 0 0 1-1V4.059L11.882 4H4.118zM2.5 3V2h11v1h-11z"/>
|
||||||
|
</svg>
|
||||||
</button>
|
</button>
|
||||||
</div>
|
</div>
|
||||||
</div>
|
</td>
|
||||||
</div>
|
</tr>
|
||||||
{% endfor %}
|
{% endfor %}
|
||||||
|
</tbody>
|
||||||
|
</table>
|
||||||
</div>
|
</div>
|
||||||
{% else %}
|
{% else %}
|
||||||
<div class="empty-state text-center py-5">
|
<div class="empty-state text-center py-5">
|
||||||
@@ -217,6 +203,7 @@
|
|||||||
 {% endif %}
 </div>

+<!-- Create User Modal -->
 <div class="modal fade" id="createUserModal" tabindex="-1" aria-hidden="true">
 <div class="modal-dialog modal-dialog-centered">
 <div class="modal-content">
@@ -265,6 +252,7 @@
 </div>
 </div>

+<!-- Policy Editor Modal -->
 <div class="modal fade" id="policyEditorModal" tabindex="-1" aria-hidden="true">
 <div class="modal-dialog modal-lg modal-dialog-centered">
 <div class="modal-content">
@@ -315,6 +303,7 @@
 </div>
 </div>

+<!-- Edit User Modal -->
 <div class="modal fade" id="editUserModal" tabindex="-1" aria-hidden="true">
 <div class="modal-dialog modal-dialog-centered">
 <div class="modal-content">
@@ -349,14 +338,15 @@
 </div>
 </div>

+<!-- Delete User Modal -->
 <div class="modal fade" id="deleteUserModal" tabindex="-1" aria-hidden="true">
 <div class="modal-dialog modal-dialog-centered">
 <div class="modal-content">
 <div class="modal-header border-0 pb-0">
 <h1 class="modal-title fs-5 fw-semibold">
 <svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" class="text-danger" viewBox="0 0 16 16">
-<path d="M11 5a3 3 0 1 1-6 0 3 3 0 0 1 6 0M8 7a2 2 0 1 0 0-4 2 2 0 0 0 0 4m.256 7a4.5 4.5 0 0 1-.229-1.004H3c.001-.246.154-.986.832-1.664C4.484 10.68 5.711 10 8 10q.39 0 .74.025c.226-.341.496-.65.804-.918Q9.077 9.014 8 9c-5 0-6 3-6 4s1 1 1 1h5.256Z"/>
+<path d="M1 14s-1 0-1-1 1-4 6-4 6 3 6 4-1 1-1 1H1zm5-6a3 3 0 1 0 0-6 3 3 0 0 0 0 6z"/>
-<path d="M12.5 16a3.5 3.5 0 1 0 0-7 3.5 3.5 0 0 0 0 7m-.646-4.854.646.647.646-.647a.5.5 0 0 1 .708.708l-.647.646.647.646a.5.5 0 0 1-.708.708l-.646-.647-.646.647a.5.5 0 0 1-.708-.708l.647-.646-.647-.646a.5.5 0 0 1 .708-.708"/>
+<path fill-rule="evenodd" d="M11 1.5v1h5v1h-1v9a2 2 0 0 1-2 2H3a2 2 0 0 1-2-2v-9H0v-1h5v-1a1 1 0 0 1 1-1h4a1 1 0 0 1 1 1zM4.118 4 4 4.059V13a1 1 0 0 0 1 1h6a1 1 0 0 0 1-1V4.059L11.882 4H4.118z"/>
 </svg>
 Delete User
 </h1>
@@ -392,6 +382,7 @@
 </div>
 </div>

+<!-- Rotate Secret Modal -->
 <div class="modal fade" id="rotateSecretModal" tabindex="-1" aria-hidden="true">
 <div class="modal-dialog modal-dialog-centered">
 <div class="modal-content">
@@ -454,20 +445,272 @@

 {% block extra_scripts %}
 {{ super() }}
-<script src="{{ url_for('static', filename='js/iam-management.js') }}"></script>
 <script>
-IAMManagement.init({
+(function () {
-users: JSON.parse(document.getElementById('iamUsersJson').textContent || '[]'),
+const currentUserKey = {{ principal.access_key | tojson }};
-currentUserKey: {{ principal.access_key | tojson }},
+const configCopyButtons = document.querySelectorAll('.config-copy');
-iamLocked: {{ iam_locked | tojson }},
+configCopyButtons.forEach((button) => {
-csrfToken: "{{ csrf_token() }}",
+button.addEventListener('click', async () => {
-endpoints: {
+const targetId = button.dataset.copyTarget;
-createUser: "{{ url_for('ui.create_iam_user') }}",
+const target = document.getElementById(targetId);
-updateUser: "{{ url_for('ui.update_iam_user', access_key='ACCESS_KEY') }}",
+if (!target) return;
-deleteUser: "{{ url_for('ui.delete_iam_user', access_key='ACCESS_KEY') }}",
+const text = target.innerText;
-updatePolicies: "{{ url_for('ui.update_iam_policies', access_key='ACCESS_KEY') }}",
+try {
-rotateSecret: "{{ url_for('ui.rotate_iam_secret', access_key='ACCESS_KEY') }}"
+await navigator.clipboard.writeText(text);
+button.textContent = 'Copied!';
+setTimeout(() => {
+button.textContent = 'Copy JSON';
+}, 1500);
+} catch (err) {
+console.error('Unable to copy IAM config', err);
 }
 });
+});
+
+const secretCopyButton = document.querySelector('[data-secret-copy]');
+if (secretCopyButton) {
+secretCopyButton.addEventListener('click', async () => {
+const secretInput = document.getElementById('disclosedSecretValue');
+if (!secretInput) return;
+try {
+await navigator.clipboard.writeText(secretInput.value);
+secretCopyButton.textContent = 'Copied!';
+setTimeout(() => {
+secretCopyButton.textContent = 'Copy';
+}, 1500);
+} catch (err) {
+console.error('Unable to copy IAM secret', err);
+}
+});
+}
+
+const iamUsersData = document.getElementById('iamUsersJson');
+const users = iamUsersData ? JSON.parse(iamUsersData.textContent || '[]') : [];
+
+// Policy Editor Logic
+const policyModalEl = document.getElementById('policyEditorModal');
+const policyModal = new bootstrap.Modal(policyModalEl);
+const userLabelEl = document.getElementById('policyEditorUserLabel');
+const userInputEl = document.getElementById('policyEditorUser');
+const textareaEl = document.getElementById('policyEditorDocument');
+const formEl = document.getElementById('policyEditorForm');
+const templateButtons = document.querySelectorAll('[data-policy-template]');
+const iamLocked = {{ iam_locked | tojson }};
+
+if (iamLocked) return;
+
+const userPolicies = (accessKey) => {
+const target = users.find((user) => user.access_key === accessKey);
+return target ? JSON.stringify(target.policies, null, 2) : '';
+};
+
+const applyTemplate = (name) => {
+const templates = {
+full: [
+{
+bucket: '*',
+actions: ['list', 'read', 'write', 'delete', 'share', 'policy', 'replication', 'iam:list_users', 'iam:*'],
+},
+],
+readonly: [
+{
+bucket: '*',
+actions: ['list', 'read'],
+},
+],
+writer: [
+{
+bucket: '*',
+actions: ['list', 'read', 'write'],
+},
+],
+};
+if (templates[name]) {
+textareaEl.value = JSON.stringify(templates[name], null, 2);
+}
+};
+
+templateButtons.forEach((button) => {
+button.addEventListener('click', () => applyTemplate(button.dataset.policyTemplate));
+});
+
+// Create User modal template buttons
+const createUserPoliciesEl = document.getElementById('createUserPolicies');
+const createTemplateButtons = document.querySelectorAll('[data-create-policy-template]');
+
+const applyCreateTemplate = (name) => {
+const templates = {
+full: [
+{
+bucket: '*',
+actions: ['list', 'read', 'write', 'delete', 'share', 'policy', 'replication', 'iam:list_users', 'iam:*'],
+},
+],
+readonly: [
+{
+bucket: '*',
+actions: ['list', 'read'],
+},
+],
+writer: [
+{
+bucket: '*',
+actions: ['list', 'read', 'write'],
+},
+],
+};
+if (templates[name] && createUserPoliciesEl) {
+createUserPoliciesEl.value = JSON.stringify(templates[name], null, 2);
+}
+};
+
+createTemplateButtons.forEach((button) => {
+button.addEventListener('click', () => applyCreateTemplate(button.dataset.createPolicyTemplate));
+});
+
+formEl?.addEventListener('submit', (event) => {
+const key = userInputEl.value;
+if (!key) {
+event.preventDefault();
+return;
+}
+const template = formEl.dataset.actionTemplate;
+formEl.action = template.replace('ACCESS_KEY_PLACEHOLDER', key);
+});
+
+document.querySelectorAll('[data-policy-editor]').forEach((button) => {
+button.addEventListener('click', () => {
+const key = button.getAttribute('data-access-key');
+if (!key) return;
+
+userLabelEl.textContent = key;
+userInputEl.value = key;
+textareaEl.value = userPolicies(key);
+
+policyModal.show();
+});
+});
+
+// Edit User Logic
+const editUserModal = new bootstrap.Modal(document.getElementById('editUserModal'));
+const editUserForm = document.getElementById('editUserForm');
+const editUserDisplayName = document.getElementById('editUserDisplayName');
+
+document.querySelectorAll('[data-edit-user]').forEach(btn => {
+btn.addEventListener('click', () => {
+const key = btn.dataset.editUser;
+const name = btn.dataset.displayName;
+editUserDisplayName.value = name;
+editUserForm.action = "{{ url_for('ui.update_iam_user', access_key='ACCESS_KEY') }}".replace('ACCESS_KEY', key);
+editUserModal.show();
+});
+});
+
+// Delete User Logic
+const deleteUserModal = new bootstrap.Modal(document.getElementById('deleteUserModal'));
+const deleteUserForm = document.getElementById('deleteUserForm');
+const deleteUserLabel = document.getElementById('deleteUserLabel');
+const deleteSelfWarning = document.getElementById('deleteSelfWarning');
+
+document.querySelectorAll('[data-delete-user]').forEach(btn => {
+btn.addEventListener('click', () => {
+const key = btn.dataset.deleteUser;
+deleteUserLabel.textContent = key;
+deleteUserForm.action = "{{ url_for('ui.delete_iam_user', access_key='ACCESS_KEY') }}".replace('ACCESS_KEY', key);
+
+if (key === currentUserKey) {
+deleteSelfWarning.classList.remove('d-none');
+} else {
+deleteSelfWarning.classList.add('d-none');
+}
+
+deleteUserModal.show();
+});
+});
+
+// Rotate Secret Logic
+const rotateSecretModal = new bootstrap.Modal(document.getElementById('rotateSecretModal'));
+const rotateUserLabel = document.getElementById('rotateUserLabel');
+const confirmRotateBtn = document.getElementById('confirmRotateBtn');
+const rotateCancelBtn = document.getElementById('rotateCancelBtn');
+const rotateDoneBtn = document.getElementById('rotateDoneBtn');
+const rotateSecretConfirm = document.getElementById('rotateSecretConfirm');
+const rotateSecretResult = document.getElementById('rotateSecretResult');
+const newSecretKeyInput = document.getElementById('newSecretKey');
+const copyNewSecretBtn = document.getElementById('copyNewSecret');
+let currentRotateKey = null;
+
+document.querySelectorAll('[data-rotate-user]').forEach(btn => {
+btn.addEventListener('click', () => {
+currentRotateKey = btn.dataset.rotateUser;
+rotateUserLabel.textContent = currentRotateKey;
+
+// Reset Modal State
+rotateSecretConfirm.classList.remove('d-none');
+rotateSecretResult.classList.add('d-none');
+confirmRotateBtn.classList.remove('d-none');
+rotateCancelBtn.classList.remove('d-none');
+rotateDoneBtn.classList.add('d-none');
+
+rotateSecretModal.show();
+});
+});
+
+confirmRotateBtn.addEventListener('click', async () => {
+if (!currentRotateKey) return;
+
+confirmRotateBtn.disabled = true;
+confirmRotateBtn.textContent = "Rotating...";
+
+try {
+const url = "{{ url_for('ui.rotate_iam_secret', access_key='ACCESS_KEY') }}".replace('ACCESS_KEY', currentRotateKey);
+const response = await fetch(url, {
+method: 'POST',
+headers: {
+'Accept': 'application/json',
+'X-CSRFToken': "{{ csrf_token() }}"
+}
+});
+
+if (!response.ok) {
+const data = await response.json();
+throw new Error(data.error || 'Failed to rotate secret');
+}
+
+const data = await response.json();
+newSecretKeyInput.value = data.secret_key;
+
+// Show Result
+rotateSecretConfirm.classList.add('d-none');
+rotateSecretResult.classList.remove('d-none');
+confirmRotateBtn.classList.add('d-none');
+rotateCancelBtn.classList.add('d-none');
+rotateDoneBtn.classList.remove('d-none');
+
+} catch (err) {
+if (window.showToast) {
+window.showToast(err.message, 'Error', 'danger');
+}
+rotateSecretModal.hide();
+} finally {
+confirmRotateBtn.disabled = false;
+confirmRotateBtn.textContent = "Rotate Key";
+}
+});
+
+copyNewSecretBtn.addEventListener('click', async () => {
+try {
+await navigator.clipboard.writeText(newSecretKeyInput.value);
+copyNewSecretBtn.textContent = 'Copied!';
+setTimeout(() => copyNewSecretBtn.textContent = 'Copy', 1500);
+} catch (err) {
+console.error('Failed to copy', err);
+}
+});
+
+rotateDoneBtn.addEventListener('click', () => {
+window.location.reload();
+});
+})();
 </script>
 {% endblock %}
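The new inline script rotates a secret with a CSRF-protected POST and reads `secret_key` from the JSON response. The equivalent call from outside the browser, for scripting (a sketch: `requests` is assumed, the path shown is illustrative since the real one comes from `url_for('ui.rotate_iam_secret', ...)`, and you must first establish the UI session that provides the cookie and CSRF token):

```python
import requests

session = requests.Session()
# ...log in to the UI here so the session cookie and CSRF token exist...

resp = session.post(
    "http://127.0.0.1:5100/iam/users/<access_key>/rotate",  # illustrative path
    headers={"Accept": "application/json", "X-CSRFToken": "<csrf_token>"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json()["secret_key"])  # the newly generated secret
```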
@@ -35,7 +35,7 @@
 <div class="card shadow-lg login-card position-relative">
 <div class="card-body p-4 p-md-5">
 <div class="text-center mb-4 d-lg-none">
-<img src="{{ url_for('static', filename='images/MyFSIO.png') }}" alt="MyFSIO" width="48" height="48" class="mb-3 rounded-3">
+<img src="{{ url_for('static', filename='images/MyFISO.png') }}" alt="MyFSIO" width="48" height="48" class="mb-3 rounded-3">
 <h2 class="h4 fw-bold">MyFSIO</h2>
 </div>
 <h2 class="h4 mb-1 d-none d-lg-block">Sign in</h2>
@@ -6,11 +6,11 @@
<p class="text-muted mb-0">Real-time server performance and storage usage</p>
|
<p class="text-muted mb-0">Real-time server performance and storage usage</p>
|
||||||
</div>
|
</div>
|
||||||
<div class="d-flex gap-2 align-items-center">
|
<div class="d-flex gap-2 align-items-center">
|
||||||
<span class="d-flex align-items-center gap-2 text-muted small" id="metricsLiveIndicator">
|
<span class="d-flex align-items-center gap-2 text-muted small">
|
||||||
<span class="live-indicator"></span>
|
<span class="live-indicator"></span>
|
||||||
Auto-refresh: <span id="refreshCountdown">5</span>s
|
Live
|
||||||
</span>
|
</span>
|
||||||
<button class="btn btn-outline-secondary btn-sm" id="refreshMetricsBtn">
|
<button class="btn btn-outline-secondary btn-sm" onclick="window.location.reload()">
|
||||||
<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="bi bi-arrow-clockwise me-1" viewBox="0 0 16 16">
|
<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="bi bi-arrow-clockwise me-1" viewBox="0 0 16 16">
|
||||||
<path fill-rule="evenodd" d="M8 3a5 5 0 1 0 4.546 2.914.5.5 0 0 1 .908-.417A6 6 0 1 1 8 2v1z"/>
|
<path fill-rule="evenodd" d="M8 3a5 5 0 1 0 4.546 2.914.5.5 0 0 1 .908-.417A6 6 0 1 1 8 2v1z"/>
|
||||||
<path d="M8 4.466V.534a.25.25 0 0 1 .41-.192l2.36 1.966c.12.1.12.284 0 .384L8.41 4.658A.25.25 0 0 1 8 4.466z"/>
|
<path d="M8 4.466V.534a.25.25 0 0 1 .41-.192l2.36 1.966c.12.1.12.284 0 .384L8.41 4.658A.25.25 0 0 1 8 4.466z"/>
|
||||||
@@ -32,13 +32,15 @@
|
|||||||
</svg>
|
</svg>
|
||||||
</div>
|
</div>
|
||||||
</div>
|
</div>
|
||||||
<h2 class="display-6 fw-bold mb-2 stat-value"><span data-metric="cpu_percent">{{ cpu_percent }}</span><span class="fs-4 fw-normal text-muted">%</span></h2>
|
<h2 class="display-6 fw-bold mb-2 stat-value">{{ cpu_percent }}<span class="fs-4 fw-normal text-muted">%</span></h2>
|
||||||
<div class="progress" style="height: 8px; border-radius: 4px;">
|
<div class="progress" style="height: 8px; border-radius: 4px;">
|
||||||
<div class="progress-bar bg-primary" data-metric="cpu_bar" role="progressbar" style="width: {{ cpu_percent }}%"></div>
|
<div class="progress-bar {% if cpu_percent > 80 %}bg-danger{% elif cpu_percent > 50 %}bg-warning{% else %}bg-primary{% endif %}" role="progressbar" style="width: {{ cpu_percent }}%"></div>
|
||||||
</div>
|
</div>
|
||||||
<div class="mt-2 d-flex justify-content-between">
|
<div class="mt-2 d-flex justify-content-between">
|
||||||
<small class="text-muted">Current load</small>
|
<small class="text-muted">Current load</small>
|
||||||
<small data-metric="cpu_status" class="text-success">Normal</small>
|
<small class="{% if cpu_percent > 80 %}text-danger{% elif cpu_percent > 50 %}text-warning{% else %}text-success{% endif %}">
|
||||||
|
{% if cpu_percent > 80 %}High{% elif cpu_percent > 50 %}Medium{% else %}Normal{% endif %}
|
||||||
|
</small>
|
||||||
</div>
|
</div>
|
||||||
</div>
|
</div>
|
||||||
</div>
|
</div>
|
||||||
@@ -55,13 +57,13 @@
                     </svg>
                 </div>
             </div>
-            <h2 class="display-6 fw-bold mb-2 stat-value"><span data-metric="memory_percent">{{ memory.percent }}</span><span class="fs-4 fw-normal text-muted">%</span></h2>
+            <h2 class="display-6 fw-bold mb-2 stat-value">{{ memory.percent }}<span class="fs-4 fw-normal text-muted">%</span></h2>
             <div class="progress" style="height: 8px; border-radius: 4px;">
-                <div class="progress-bar bg-info" data-metric="memory_bar" role="progressbar" style="width: {{ memory.percent }}%"></div>
+                <div class="progress-bar bg-info" role="progressbar" style="width: {{ memory.percent }}%"></div>
             </div>
             <div class="mt-2 d-flex justify-content-between">
-                <small class="text-muted"><span data-metric="memory_used">{{ memory.used }}</span> used</small>
-                <small class="text-muted"><span data-metric="memory_total">{{ memory.total }}</span> total</small>
+                <small class="text-muted">{{ memory.used }} used</small>
+                <small class="text-muted">{{ memory.total }} total</small>
             </div>
         </div>
     </div>

@@ -79,13 +81,13 @@
                     </svg>
                 </div>
             </div>
-            <h2 class="display-6 fw-bold mb-2 stat-value"><span data-metric="disk_percent">{{ disk.percent }}</span><span class="fs-4 fw-normal text-muted">%</span></h2>
+            <h2 class="display-6 fw-bold mb-2 stat-value">{{ disk.percent }}<span class="fs-4 fw-normal text-muted">%</span></h2>
             <div class="progress" style="height: 8px; border-radius: 4px;">
-                <div class="progress-bar bg-warning" data-metric="disk_bar" role="progressbar" style="width: {{ disk.percent }}%"></div>
+                <div class="progress-bar {% if disk.percent > 90 %}bg-danger{% elif disk.percent > 75 %}bg-warning{% else %}bg-warning{% endif %}" role="progressbar" style="width: {{ disk.percent }}%"></div>
             </div>
             <div class="mt-2 d-flex justify-content-between">
-                <small class="text-muted"><span data-metric="disk_free">{{ disk.free }}</span> free</small>
-                <small class="text-muted"><span data-metric="disk_total">{{ disk.total }}</span> total</small>
+                <small class="text-muted">{{ disk.free }} free</small>
+                <small class="text-muted">{{ disk.total }} total</small>
             </div>
         </div>
     </div>

@@ -102,15 +104,15 @@
                     </svg>
                 </div>
             </div>
-            <h2 class="display-6 fw-bold mb-2 stat-value" data-metric="storage_used">{{ app.storage_used }}</h2>
+            <h2 class="display-6 fw-bold mb-2 stat-value">{{ app.storage_used }}</h2>
             <div class="d-flex gap-3 mt-3">
                 <div class="text-center flex-fill">
-                    <div class="h5 fw-bold mb-0" data-metric="buckets_count">{{ app.buckets }}</div>
+                    <div class="h5 fw-bold mb-0">{{ app.buckets }}</div>
                     <small class="text-muted">Buckets</small>
                 </div>
                 <div class="vr"></div>
                 <div class="text-center flex-fill">
-                    <div class="h5 fw-bold mb-0" data-metric="objects_count">{{ app.objects }}</div>
+                    <div class="h5 fw-bold mb-0">{{ app.objects }}</div>
                     <small class="text-muted">Objects</small>
                 </div>
             </div>

@@ -124,6 +126,7 @@
     <div class="card shadow-sm border-0">
         <div class="card-header bg-transparent border-0 pt-4 px-4 d-flex justify-content-between align-items-center">
             <h5 class="card-title mb-0 fw-semibold">System Overview</h5>
+            <span class="badge bg-primary-subtle text-primary">Live</span>
         </div>
         <div class="card-body p-4">
             <div class="table-responsive">

@@ -217,45 +220,27 @@
     </div>

     <div class="col-lg-4">
-        {% set has_issues = (cpu_percent > 80) or (memory.percent > 85) or (disk.percent > 90) %}
-        <div class="card shadow-sm border-0 h-100 overflow-hidden" style="background: linear-gradient(135deg, {% if has_issues %}#ef4444 0%, #f97316{% else %}#3b82f6 0%, #8b5cf6{% endif %} 100%);">
+        <div class="card shadow-sm border-0 h-100 overflow-hidden" style="background: linear-gradient(135deg, #3b82f6 0%, #8b5cf6 100%);">
            <div class="card-body p-4 d-flex flex-column justify-content-center text-white position-relative">
                <div class="position-absolute top-0 end-0 opacity-25" style="transform: translate(20%, -20%);">
-                    <svg xmlns="http://www.w3.org/2000/svg" width="160" height="160" fill="currentColor" class="bi bi-{% if has_issues %}exclamation-triangle{% else %}cloud-check{% endif %}" viewBox="0 0 16 16">
-                        {% if has_issues %}
-                        <path d="M7.938 2.016A.13.13 0 0 1 8.002 2a.13.13 0 0 1 .063.016.146.146 0 0 1 .054.057l6.857 11.667c.036.06.035.124.002.183a.163.163 0 0 1-.054.06.116.116 0 0 1-.066.017H1.146a.115.115 0 0 1-.066-.017.163.163 0 0 1-.054-.06.176.176 0 0 1 .002-.183L7.884 2.073a.147.147 0 0 1 .054-.057zm1.044-.45a1.13 1.13 0 0 0-1.96 0L.165 13.233c-.457.778.091 1.767.98 1.767h13.713c.889 0 1.438-.99.98-1.767L8.982 1.566z"/>
-                        <path d="M7.002 12a1 1 0 1 1 2 0 1 1 0 0 1-2 0zM7.1 5.995a.905.905 0 1 1 1.8 0l-.35 3.507a.552.552 0 0 1-1.1 0L7.1 5.995z"/>
-                        {% else %}
+                    <svg xmlns="http://www.w3.org/2000/svg" width="160" height="160" fill="currentColor" class="bi bi-cloud-check" viewBox="0 0 16 16">
                        <path fill-rule="evenodd" d="M10.354 6.146a.5.5 0 0 1 0 .708l-3 3a.5.5 0 0 1-.708 0l-1.5-1.5a.5.5 0 1 1 .708-.708L7 8.793l2.646-2.647a.5.5 0 0 1 .708 0z"/>
                        <path d="M4.406 3.342A5.53 5.53 0 0 1 8 2c2.69 0 4.923 2 5.166 4.579C14.758 6.804 16 8.137 16 9.773 16 11.569 14.502 13 12.687 13H3.781C1.708 13 0 11.366 0 9.318c0-1.763 1.266-3.223 2.942-3.593.143-.863.698-1.723 1.464-2.383z"/>
-                        {% endif %}
                    </svg>
                </div>
                <div class="mb-3">
-                    <span class="badge bg-white {% if has_issues %}text-danger{% else %}text-primary{% endif %} fw-semibold px-3 py-2">
-                        <svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="bi bi-{% if has_issues %}exclamation-circle-fill{% else %}check-circle-fill{% endif %} me-1" viewBox="0 0 16 16">
-                            {% if has_issues %}
-                            <path d="M16 8A8 8 0 1 1 0 8a8 8 0 0 1 16 0zM8 4a.905.905 0 0 0-.9.995l.35 3.507a.552.552 0 0 0 1.1 0l.35-3.507A.905.905 0 0 0 8 4zm.002 6a1 1 0 1 0 0 2 1 1 0 0 0 0-2z"/>
-                            {% else %}
+                    <span class="badge bg-white text-primary fw-semibold px-3 py-2">
+                        <svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="bi bi-check-circle-fill me-1" viewBox="0 0 16 16">
                            <path d="M16 8A8 8 0 1 1 0 8a8 8 0 0 1 16 0zm-3.97-3.03a.75.75 0 0 0-1.08.022L7.477 9.417 5.384 7.323a.75.75 0 0 0-1.06 1.06L6.97 11.03a.75.75 0 0 0 1.079-.02l3.992-4.99a.75.75 0 0 0-.01-1.05z"/>
-                            {% endif %}
                        </svg>
-                        v{{ app.version }}
+                        Healthy
                    </span>
                </div>
-                <h4 class="card-title fw-bold mb-3">System Health</h4>
-                {% if has_issues %}
-                <ul class="list-unstyled small mb-4 opacity-90">
-                    {% if cpu_percent > 80 %}<li class="mb-1">CPU usage is high ({{ cpu_percent }}%)</li>{% endif %}
-                    {% if memory.percent > 85 %}<li class="mb-1">Memory usage is high ({{ memory.percent }}%)</li>{% endif %}
-                    {% if disk.percent > 90 %}<li class="mb-1">Disk space is critically low ({{ disk.percent }}% used)</li>{% endif %}
-                </ul>
-                {% else %}
-                <p class="card-text opacity-90 mb-4 small">All resources are within normal operating parameters.</p>
-                {% endif %}
+                <h4 class="card-title fw-bold mb-3">System Status</h4>
+                <p class="card-text opacity-90 mb-4">All systems operational. Your storage infrastructure is running smoothly with no detected issues.</p>
                <div class="d-flex gap-4">
                    <div>
-                        <div class="h3 fw-bold mb-0">{{ app.uptime_days }}d</div>
+                        <div class="h3 fw-bold mb-0">99.9%</div>
                        <small class="opacity-75">Uptime</small>
                    </div>
                    <div>

@@ -268,109 +253,3 @@
         </div>
     </div>
 {% endblock %}
-
-{% block extra_scripts %}
-<script>
-(function() {
-    var refreshInterval = 5000;
-    var countdown = 5;
-    var countdownEl = document.getElementById('refreshCountdown');
-    var refreshBtn = document.getElementById('refreshMetricsBtn');
-    var countdownTimer = null;
-    var fetchTimer = null;
-
-    function updateMetrics() {
-        fetch('/ui/metrics/api')
-            .then(function(resp) { return resp.json(); })
-            .then(function(data) {
-                var el;
-                el = document.querySelector('[data-metric="cpu_percent"]');
-                if (el) el.textContent = data.cpu_percent;
-                el = document.querySelector('[data-metric="cpu_bar"]');
-                if (el) {
-                    el.style.width = data.cpu_percent + '%';
-                    el.className = 'progress-bar ' + (data.cpu_percent > 80 ? 'bg-danger' : data.cpu_percent > 50 ? 'bg-warning' : 'bg-primary');
-                }
-                el = document.querySelector('[data-metric="cpu_status"]');
-                if (el) {
-                    el.textContent = data.cpu_percent > 80 ? 'High' : data.cpu_percent > 50 ? 'Medium' : 'Normal';
-                    el.className = data.cpu_percent > 80 ? 'text-danger' : data.cpu_percent > 50 ? 'text-warning' : 'text-success';
-                }
-
-                el = document.querySelector('[data-metric="memory_percent"]');
-                if (el) el.textContent = data.memory.percent;
-                el = document.querySelector('[data-metric="memory_bar"]');
-                if (el) el.style.width = data.memory.percent + '%';
-                el = document.querySelector('[data-metric="memory_used"]');
-                if (el) el.textContent = data.memory.used;
-                el = document.querySelector('[data-metric="memory_total"]');
-                if (el) el.textContent = data.memory.total;
-
-                el = document.querySelector('[data-metric="disk_percent"]');
-                if (el) el.textContent = data.disk.percent;
-                el = document.querySelector('[data-metric="disk_bar"]');
-                if (el) {
-                    el.style.width = data.disk.percent + '%';
-                    el.className = 'progress-bar ' + (data.disk.percent > 90 ? 'bg-danger' : 'bg-warning');
-                }
-                el = document.querySelector('[data-metric="disk_free"]');
-                if (el) el.textContent = data.disk.free;
-                el = document.querySelector('[data-metric="disk_total"]');
-                if (el) el.textContent = data.disk.total;
-
-                el = document.querySelector('[data-metric="storage_used"]');
-                if (el) el.textContent = data.app.storage_used;
-                el = document.querySelector('[data-metric="buckets_count"]');
-                if (el) el.textContent = data.app.buckets;
-                el = document.querySelector('[data-metric="objects_count"]');
-                if (el) el.textContent = data.app.objects;
-
-                countdown = 5;
-            })
-            .catch(function(err) {
-                console.error('Metrics fetch error:', err);
-            });
-    }
-
-    function startCountdown() {
-        if (countdownTimer) clearInterval(countdownTimer);
-        countdown = 5;
-        if (countdownEl) countdownEl.textContent = countdown;
-        countdownTimer = setInterval(function() {
-            countdown--;
-            if (countdownEl) countdownEl.textContent = countdown;
-            if (countdown <= 0) {
-                countdown = 5;
-            }
-        }, 1000);
-    }
-
-    function startPolling() {
-        if (fetchTimer) clearInterval(fetchTimer);
-        fetchTimer = setInterval(function() {
-            if (!document.hidden) {
-                updateMetrics();
-            }
-        }, refreshInterval);
-        startCountdown();
-    }
-
-    if (refreshBtn) {
-        refreshBtn.addEventListener('click', function() {
-            updateMetrics();
-            countdown = 5;
-            if (countdownEl) countdownEl.textContent = countdown;
-        });
-    }
-
-    document.addEventListener('visibilitychange', function() {
-        if (!document.hidden) {
-            updateMetrics();
-            startPolling();
-        }
-    });
-
-    startPolling();
-})();
-</script>
-{% endblock %}

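Note: the script removed above polls /ui/metrics/api every 5 seconds and patches the [data-metric] nodes in place. For reference, a minimal sketch of a handler returning the JSON shape that script reads (field names taken from the data.* accesses above); Flask and psutil are assumptions here, and the app numbers are placeholders — this is not the repo's actual route code.

# Hypothetical sketch of the /ui/metrics/api payload the deleted script expects.
# psutil and the blueprint wiring are assumptions; only the JSON keys come
# from the data.* accesses in the removed JavaScript.
import psutil
from flask import Blueprint, jsonify

ui = Blueprint("ui", __name__, url_prefix="/ui")

@ui.route("/metrics/api")
def metrics_api():
    mem = psutil.virtual_memory()
    disk = psutil.disk_usage("/")
    return jsonify({
        "cpu_percent": psutil.cpu_percent(),
        "memory": {"percent": mem.percent, "used": mem.used, "total": mem.total},
        "disk": {"percent": disk.percent, "free": disk.free, "total": disk.total},
        "app": {"storage_used": "0 B", "buckets": 0, "objects": 0},  # placeholder values
    })
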
@@ -1,339 +0,0 @@
-import io
-import json
-import time
-from datetime import datetime, timezone
-from pathlib import Path
-from unittest.mock import MagicMock, patch
-
-import pytest
-
-from app.access_logging import (
-    AccessLogEntry,
-    AccessLoggingService,
-    LoggingConfiguration,
-)
-from app.storage import ObjectStorage
-
-
-class TestAccessLogEntry:
-    def test_default_values(self):
-        entry = AccessLogEntry()
-        assert entry.bucket_owner == "-"
-        assert entry.bucket == "-"
-        assert entry.remote_ip == "-"
-        assert entry.requester == "-"
-        assert entry.operation == "-"
-        assert entry.http_status == 200
-        assert len(entry.request_id) == 16
-
-    def test_to_log_line(self):
-        entry = AccessLogEntry(
-            bucket_owner="owner123",
-            bucket="my-bucket",
-            remote_ip="192.168.1.1",
-            requester="user456",
-            request_id="REQ123456789012",
-            operation="REST.PUT.OBJECT",
-            key="test/key.txt",
-            request_uri="PUT /my-bucket/test/key.txt HTTP/1.1",
-            http_status=200,
-            bytes_sent=1024,
-            object_size=2048,
-            total_time_ms=150,
-            referrer="http://example.com",
-            user_agent="aws-cli/2.0",
-            version_id="v1",
-        )
-        log_line = entry.to_log_line()
-
-        assert "owner123" in log_line
-        assert "my-bucket" in log_line
-        assert "192.168.1.1" in log_line
-        assert "user456" in log_line
-        assert "REST.PUT.OBJECT" in log_line
-        assert "test/key.txt" in log_line
-        assert "200" in log_line
-
-    def test_to_dict(self):
-        entry = AccessLogEntry(
-            bucket_owner="owner",
-            bucket="bucket",
-            remote_ip="10.0.0.1",
-            requester="admin",
-            request_id="ABC123",
-            operation="REST.GET.OBJECT",
-            key="file.txt",
-            request_uri="GET /bucket/file.txt HTTP/1.1",
-            http_status=200,
-            bytes_sent=512,
-            object_size=512,
-            total_time_ms=50,
-        )
-        result = entry.to_dict()
-
-        assert result["bucket_owner"] == "owner"
-        assert result["bucket"] == "bucket"
-        assert result["remote_ip"] == "10.0.0.1"
-        assert result["requester"] == "admin"
-        assert result["operation"] == "REST.GET.OBJECT"
-        assert result["key"] == "file.txt"
-        assert result["http_status"] == 200
-        assert result["bytes_sent"] == 512
-
-
-class TestLoggingConfiguration:
-    def test_default_values(self):
-        config = LoggingConfiguration(target_bucket="log-bucket")
-        assert config.target_bucket == "log-bucket"
-        assert config.target_prefix == ""
-        assert config.enabled is True
-
-    def test_to_dict(self):
-        config = LoggingConfiguration(
-            target_bucket="logs",
-            target_prefix="access-logs/",
-            enabled=True,
-        )
-        result = config.to_dict()
-
-        assert "LoggingEnabled" in result
-        assert result["LoggingEnabled"]["TargetBucket"] == "logs"
-        assert result["LoggingEnabled"]["TargetPrefix"] == "access-logs/"
-
-    def test_from_dict(self):
-        data = {
-            "LoggingEnabled": {
-                "TargetBucket": "my-logs",
-                "TargetPrefix": "bucket-logs/",
-            }
-        }
-        config = LoggingConfiguration.from_dict(data)
-
-        assert config is not None
-        assert config.target_bucket == "my-logs"
-        assert config.target_prefix == "bucket-logs/"
-        assert config.enabled is True
-
-    def test_from_dict_no_logging(self):
-        data = {}
-        config = LoggingConfiguration.from_dict(data)
-        assert config is None
-
-
-@pytest.fixture
-def storage(tmp_path: Path):
-    storage_root = tmp_path / "data"
-    storage_root.mkdir(parents=True)
-    return ObjectStorage(storage_root)
-
-
-@pytest.fixture
-def logging_service(tmp_path: Path, storage):
-    service = AccessLoggingService(
-        tmp_path,
-        flush_interval=3600,
-        max_buffer_size=10,
-    )
-    service.set_storage(storage)
-    yield service
-    service.shutdown()
-
-
-class TestAccessLoggingService:
-    def test_get_bucket_logging_not_configured(self, logging_service):
-        result = logging_service.get_bucket_logging("unconfigured-bucket")
-        assert result is None
-
-    def test_set_and_get_bucket_logging(self, logging_service):
-        config = LoggingConfiguration(
-            target_bucket="log-bucket",
-            target_prefix="logs/",
-        )
-        logging_service.set_bucket_logging("source-bucket", config)
-
-        retrieved = logging_service.get_bucket_logging("source-bucket")
-        assert retrieved is not None
-        assert retrieved.target_bucket == "log-bucket"
-        assert retrieved.target_prefix == "logs/"
-
-    def test_delete_bucket_logging(self, logging_service):
-        config = LoggingConfiguration(target_bucket="logs")
-        logging_service.set_bucket_logging("to-delete", config)
-        assert logging_service.get_bucket_logging("to-delete") is not None
-
-        logging_service.delete_bucket_logging("to-delete")
-        logging_service._configs.clear()
-        assert logging_service.get_bucket_logging("to-delete") is None
-
-    def test_log_request_no_config(self, logging_service):
-        logging_service.log_request(
-            "no-config-bucket",
-            operation="REST.GET.OBJECT",
-            key="test.txt",
-        )
-        stats = logging_service.get_stats()
-        assert stats["buffered_entries"] == 0
-
-    def test_log_request_with_config(self, logging_service, storage):
-        storage.create_bucket("log-target")
-
-        config = LoggingConfiguration(
-            target_bucket="log-target",
-            target_prefix="access/",
-        )
-        logging_service.set_bucket_logging("source-bucket", config)
-
-        logging_service.log_request(
-            "source-bucket",
-            operation="REST.PUT.OBJECT",
-            key="uploaded.txt",
-            remote_ip="192.168.1.100",
-            requester="test-user",
-            http_status=200,
-            bytes_sent=1024,
-        )
-
-        stats = logging_service.get_stats()
-        assert stats["buffered_entries"] == 1
-
-    def test_log_request_disabled_config(self, logging_service):
-        config = LoggingConfiguration(
-            target_bucket="logs",
-            enabled=False,
-        )
-        logging_service.set_bucket_logging("disabled-bucket", config)
-
-        logging_service.log_request(
-            "disabled-bucket",
-            operation="REST.GET.OBJECT",
-            key="test.txt",
-        )
-
-        stats = logging_service.get_stats()
-        assert stats["buffered_entries"] == 0
-
-    def test_flush_buffer(self, logging_service, storage):
-        storage.create_bucket("flush-target")
-
-        config = LoggingConfiguration(
-            target_bucket="flush-target",
-            target_prefix="logs/",
-        )
-        logging_service.set_bucket_logging("flush-source", config)
-
-        for i in range(3):
-            logging_service.log_request(
-                "flush-source",
-                operation="REST.GET.OBJECT",
-                key=f"file{i}.txt",
-            )
-
-        logging_service.flush()
-
-        objects = storage.list_objects_all("flush-target")
-        assert len(objects) >= 1
-
-    def test_auto_flush_on_buffer_size(self, logging_service, storage):
-        storage.create_bucket("auto-flush-target")
-
-        config = LoggingConfiguration(
-            target_bucket="auto-flush-target",
-            target_prefix="",
-        )
-        logging_service.set_bucket_logging("auto-source", config)
-
-        for i in range(15):
-            logging_service.log_request(
-                "auto-source",
-                operation="REST.GET.OBJECT",
-                key=f"file{i}.txt",
-            )
-
-        objects = storage.list_objects_all("auto-flush-target")
-        assert len(objects) >= 1
-
-    def test_get_stats(self, logging_service, storage):
-        storage.create_bucket("stats-target")
-        config = LoggingConfiguration(target_bucket="stats-target")
-        logging_service.set_bucket_logging("stats-bucket", config)
-
-        logging_service.log_request(
-            "stats-bucket",
-            operation="REST.GET.OBJECT",
-            key="test.txt",
-        )
-
-        stats = logging_service.get_stats()
-        assert "buffered_entries" in stats
-        assert "target_buckets" in stats
-        assert stats["buffered_entries"] >= 1
-
-    def test_shutdown_flushes_buffer(self, tmp_path, storage):
-        storage.create_bucket("shutdown-target")
-
-        service = AccessLoggingService(tmp_path, flush_interval=3600, max_buffer_size=100)
-        service.set_storage(storage)
-
-        config = LoggingConfiguration(target_bucket="shutdown-target")
-        service.set_bucket_logging("shutdown-source", config)
-
-        service.log_request(
-            "shutdown-source",
-            operation="REST.PUT.OBJECT",
-            key="final.txt",
-        )
-
-        service.shutdown()
-
-        objects = storage.list_objects_all("shutdown-target")
-        assert len(objects) >= 1
-
-    def test_logging_caching(self, logging_service):
-        config = LoggingConfiguration(target_bucket="cached-logs")
-        logging_service.set_bucket_logging("cached-bucket", config)
-
-        logging_service.get_bucket_logging("cached-bucket")
-        assert "cached-bucket" in logging_service._configs
-
-    def test_log_request_all_fields(self, logging_service, storage):
-        storage.create_bucket("detailed-target")
-
-        config = LoggingConfiguration(target_bucket="detailed-target", target_prefix="detailed/")
-        logging_service.set_bucket_logging("detailed-source", config)
-
-        logging_service.log_request(
-            "detailed-source",
-            operation="REST.PUT.OBJECT",
-            key="detailed/file.txt",
-            remote_ip="10.0.0.1",
-            requester="admin-user",
-            request_uri="PUT /detailed-source/detailed/file.txt HTTP/1.1",
-            http_status=201,
-            error_code="",
-            bytes_sent=2048,
-            object_size=2048,
-            total_time_ms=100,
-            referrer="http://admin.example.com",
-            user_agent="curl/7.68.0",
-            version_id="v1.0",
-            request_id="CUSTOM_REQ_ID",
-        )
-
-        stats = logging_service.get_stats()
-        assert stats["buffered_entries"] == 1
-
-    def test_failed_flush_returns_to_buffer(self, logging_service):
-        config = LoggingConfiguration(target_bucket="nonexistent-target")
-        logging_service.set_bucket_logging("fail-source", config)
-
-        logging_service.log_request(
-            "fail-source",
-            operation="REST.GET.OBJECT",
-            key="test.txt",
-        )
-
-        initial_count = logging_service.get_stats()["buffered_entries"]
-        logging_service.flush()
-
-        final_count = logging_service.get_stats()["buffered_entries"]
-        assert final_count >= initial_count

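Note: the deleted tests above pin down the LoggingConfiguration round-trip format. A minimal sketch of a class that would satisfy them — field names and dict shape are inferred from the assertions; the real app.access_logging implementation may differ in details:

# Minimal sketch of the S3-style bucket-logging config the deleted tests describe.
from dataclasses import dataclass
from typing import Optional

@dataclass
class LoggingConfiguration:
    target_bucket: str
    target_prefix: str = ""
    enabled: bool = True

    def to_dict(self) -> dict:
        # S3-compatible wire format asserted by test_to_dict
        return {
            "LoggingEnabled": {
                "TargetBucket": self.target_bucket,
                "TargetPrefix": self.target_prefix,
            }
        }

    @classmethod
    def from_dict(cls, data: dict) -> Optional["LoggingConfiguration"]:
        enabled = data.get("LoggingEnabled")
        if not enabled:
            return None  # matches test_from_dict_no_logging
        return cls(
            target_bucket=enabled.get("TargetBucket", ""),
            target_prefix=enabled.get("TargetPrefix", ""),
        )
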
@@ -1,284 +0,0 @@
-import json
-from pathlib import Path
-
-import pytest
-
-from app.acl import (
-    Acl,
-    AclGrant,
-    AclService,
-    ACL_PERMISSION_FULL_CONTROL,
-    ACL_PERMISSION_READ,
-    ACL_PERMISSION_WRITE,
-    ACL_PERMISSION_READ_ACP,
-    ACL_PERMISSION_WRITE_ACP,
-    GRANTEE_ALL_USERS,
-    GRANTEE_AUTHENTICATED_USERS,
-    PERMISSION_TO_ACTIONS,
-    create_canned_acl,
-    CANNED_ACLS,
-)
-
-
-class TestAclGrant:
-    def test_to_dict(self):
-        grant = AclGrant(grantee="user123", permission=ACL_PERMISSION_READ)
-        result = grant.to_dict()
-        assert result == {"grantee": "user123", "permission": "READ"}
-
-    def test_from_dict(self):
-        data = {"grantee": "admin", "permission": "FULL_CONTROL"}
-        grant = AclGrant.from_dict(data)
-        assert grant.grantee == "admin"
-        assert grant.permission == ACL_PERMISSION_FULL_CONTROL
-
-
-class TestAcl:
-    def test_to_dict(self):
-        acl = Acl(
-            owner="owner-user",
-            grants=[
-                AclGrant(grantee="owner-user", permission=ACL_PERMISSION_FULL_CONTROL),
-                AclGrant(grantee=GRANTEE_ALL_USERS, permission=ACL_PERMISSION_READ),
-            ],
-        )
-        result = acl.to_dict()
-        assert result["owner"] == "owner-user"
-        assert len(result["grants"]) == 2
-        assert result["grants"][0]["grantee"] == "owner-user"
-        assert result["grants"][1]["grantee"] == "*"
-
-    def test_from_dict(self):
-        data = {
-            "owner": "the-owner",
-            "grants": [
-                {"grantee": "the-owner", "permission": "FULL_CONTROL"},
-                {"grantee": "authenticated", "permission": "READ"},
-            ],
-        }
-        acl = Acl.from_dict(data)
-        assert acl.owner == "the-owner"
-        assert len(acl.grants) == 2
-        assert acl.grants[0].grantee == "the-owner"
-        assert acl.grants[1].grantee == GRANTEE_AUTHENTICATED_USERS
-
-    def test_from_dict_empty_grants(self):
-        data = {"owner": "solo-owner"}
-        acl = Acl.from_dict(data)
-        assert acl.owner == "solo-owner"
-        assert len(acl.grants) == 0
-
-    def test_get_allowed_actions_owner(self):
-        acl = Acl(owner="owner123", grants=[])
-        actions = acl.get_allowed_actions("owner123", is_authenticated=True)
-        assert actions == PERMISSION_TO_ACTIONS[ACL_PERMISSION_FULL_CONTROL]
-
-    def test_get_allowed_actions_all_users(self):
-        acl = Acl(
-            owner="owner",
-            grants=[AclGrant(grantee=GRANTEE_ALL_USERS, permission=ACL_PERMISSION_READ)],
-        )
-        actions = acl.get_allowed_actions(None, is_authenticated=False)
-        assert "read" in actions
-        assert "list" in actions
-        assert "write" not in actions
-
-    def test_get_allowed_actions_authenticated_users(self):
-        acl = Acl(
-            owner="owner",
-            grants=[AclGrant(grantee=GRANTEE_AUTHENTICATED_USERS, permission=ACL_PERMISSION_WRITE)],
-        )
-        actions_authenticated = acl.get_allowed_actions("some-user", is_authenticated=True)
-        assert "write" in actions_authenticated
-        assert "delete" in actions_authenticated
-
-        actions_anonymous = acl.get_allowed_actions(None, is_authenticated=False)
-        assert "write" not in actions_anonymous
-
-    def test_get_allowed_actions_specific_grantee(self):
-        acl = Acl(
-            owner="owner",
-            grants=[
-                AclGrant(grantee="user-abc", permission=ACL_PERMISSION_READ),
-                AclGrant(grantee="user-xyz", permission=ACL_PERMISSION_WRITE),
-            ],
-        )
-        abc_actions = acl.get_allowed_actions("user-abc", is_authenticated=True)
-        assert "read" in abc_actions
-        assert "list" in abc_actions
-        assert "write" not in abc_actions
-
-        xyz_actions = acl.get_allowed_actions("user-xyz", is_authenticated=True)
-        assert "write" in xyz_actions
-        assert "read" not in xyz_actions
-
-    def test_get_allowed_actions_combined(self):
-        acl = Acl(
-            owner="owner",
-            grants=[
-                AclGrant(grantee=GRANTEE_ALL_USERS, permission=ACL_PERMISSION_READ),
-                AclGrant(grantee="special-user", permission=ACL_PERMISSION_WRITE),
-            ],
-        )
-        actions = acl.get_allowed_actions("special-user", is_authenticated=True)
-        assert "read" in actions
-        assert "list" in actions
-        assert "write" in actions
-        assert "delete" in actions
-
-
-class TestCannedAcls:
-    def test_private_acl(self):
-        acl = create_canned_acl("private", "the-owner")
-        assert acl.owner == "the-owner"
-        assert len(acl.grants) == 1
-        assert acl.grants[0].grantee == "the-owner"
-        assert acl.grants[0].permission == ACL_PERMISSION_FULL_CONTROL
-
-    def test_public_read_acl(self):
-        acl = create_canned_acl("public-read", "owner")
-        assert acl.owner == "owner"
-        has_owner_full_control = any(
-            g.grantee == "owner" and g.permission == ACL_PERMISSION_FULL_CONTROL for g in acl.grants
-        )
-        has_public_read = any(
-            g.grantee == GRANTEE_ALL_USERS and g.permission == ACL_PERMISSION_READ for g in acl.grants
-        )
-        assert has_owner_full_control
-        assert has_public_read
-
-    def test_public_read_write_acl(self):
-        acl = create_canned_acl("public-read-write", "owner")
-        assert acl.owner == "owner"
-        has_public_read = any(
-            g.grantee == GRANTEE_ALL_USERS and g.permission == ACL_PERMISSION_READ for g in acl.grants
-        )
-        has_public_write = any(
-            g.grantee == GRANTEE_ALL_USERS and g.permission == ACL_PERMISSION_WRITE for g in acl.grants
-        )
-        assert has_public_read
-        assert has_public_write
-
-    def test_authenticated_read_acl(self):
-        acl = create_canned_acl("authenticated-read", "owner")
-        has_authenticated_read = any(
-            g.grantee == GRANTEE_AUTHENTICATED_USERS and g.permission == ACL_PERMISSION_READ for g in acl.grants
-        )
-        assert has_authenticated_read
-
-    def test_unknown_canned_acl_defaults_to_private(self):
-        acl = create_canned_acl("unknown-acl", "owner")
-        private_acl = create_canned_acl("private", "owner")
-        assert acl.to_dict() == private_acl.to_dict()
-
-
-@pytest.fixture
-def acl_service(tmp_path: Path):
-    return AclService(tmp_path)
-
-
-class TestAclService:
-    def test_get_bucket_acl_not_exists(self, acl_service):
-        result = acl_service.get_bucket_acl("nonexistent-bucket")
-        assert result is None
-
-    def test_set_and_get_bucket_acl(self, acl_service):
-        acl = Acl(
-            owner="bucket-owner",
-            grants=[AclGrant(grantee="bucket-owner", permission=ACL_PERMISSION_FULL_CONTROL)],
-        )
-        acl_service.set_bucket_acl("my-bucket", acl)
-
-        retrieved = acl_service.get_bucket_acl("my-bucket")
-        assert retrieved is not None
-        assert retrieved.owner == "bucket-owner"
-        assert len(retrieved.grants) == 1
-
-    def test_bucket_acl_caching(self, acl_service):
-        acl = Acl(owner="cached-owner", grants=[])
-        acl_service.set_bucket_acl("cached-bucket", acl)
-
-        acl_service.get_bucket_acl("cached-bucket")
-        assert "cached-bucket" in acl_service._bucket_acl_cache
-
-        retrieved = acl_service.get_bucket_acl("cached-bucket")
-        assert retrieved.owner == "cached-owner"
-
-    def test_set_bucket_canned_acl(self, acl_service):
-        result = acl_service.set_bucket_canned_acl("new-bucket", "public-read", "the-owner")
-        assert result.owner == "the-owner"
-
-        retrieved = acl_service.get_bucket_acl("new-bucket")
-        assert retrieved is not None
-        has_public_read = any(
-            g.grantee == GRANTEE_ALL_USERS and g.permission == ACL_PERMISSION_READ for g in retrieved.grants
-        )
-        assert has_public_read
-
-    def test_delete_bucket_acl(self, acl_service):
-        acl = Acl(owner="to-delete-owner", grants=[])
-        acl_service.set_bucket_acl("delete-me", acl)
-        assert acl_service.get_bucket_acl("delete-me") is not None
-
-        acl_service.delete_bucket_acl("delete-me")
-        acl_service._bucket_acl_cache.clear()
-        assert acl_service.get_bucket_acl("delete-me") is None
-
-    def test_evaluate_bucket_acl_allowed(self, acl_service):
-        acl = Acl(
-            owner="owner",
-            grants=[AclGrant(grantee=GRANTEE_ALL_USERS, permission=ACL_PERMISSION_READ)],
-        )
-        acl_service.set_bucket_acl("public-bucket", acl)
-
-        result = acl_service.evaluate_bucket_acl("public-bucket", None, "read", is_authenticated=False)
-        assert result is True
-
-    def test_evaluate_bucket_acl_denied(self, acl_service):
-        acl = Acl(
-            owner="owner",
-            grants=[AclGrant(grantee="owner", permission=ACL_PERMISSION_FULL_CONTROL)],
-        )
-        acl_service.set_bucket_acl("private-bucket", acl)
-
-        result = acl_service.evaluate_bucket_acl("private-bucket", "other-user", "write", is_authenticated=True)
-        assert result is False
-
-    def test_evaluate_bucket_acl_no_acl(self, acl_service):
-        result = acl_service.evaluate_bucket_acl("no-acl-bucket", "anyone", "read")
-        assert result is False
-
-    def test_get_object_acl_from_metadata(self, acl_service):
-        metadata = {
-            "__acl__": {
-                "owner": "object-owner",
-                "grants": [{"grantee": "object-owner", "permission": "FULL_CONTROL"}],
-            }
-        }
-        result = acl_service.get_object_acl("bucket", "key", metadata)
-        assert result is not None
-        assert result.owner == "object-owner"
-
-    def test_get_object_acl_no_acl_in_metadata(self, acl_service):
-        metadata = {"Content-Type": "text/plain"}
-        result = acl_service.get_object_acl("bucket", "key", metadata)
-        assert result is None
-
-    def test_create_object_acl_metadata(self, acl_service):
-        acl = Acl(owner="obj-owner", grants=[])
-        result = acl_service.create_object_acl_metadata(acl)
-        assert "__acl__" in result
-        assert result["__acl__"]["owner"] == "obj-owner"
-
-    def test_evaluate_object_acl(self, acl_service):
-        metadata = {
-            "__acl__": {
-                "owner": "obj-owner",
-                "grants": [{"grantee": "*", "permission": "READ"}],
-            }
-        }
-        result = acl_service.evaluate_object_acl(metadata, None, "read", is_authenticated=False)
-        assert result is True
-
-        result = acl_service.evaluate_object_acl(metadata, None, "write", is_authenticated=False)
-        assert result is False

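Note: the deleted ACL tests encode a simple grant-evaluation rule: the owner implicitly holds FULL_CONTROL, "*" matches everyone, "authenticated" matches signed-in users, and grants accumulate. A standalone sketch of that rule, inferred from the assertions above — the real app.acl implementation may differ (the READ_ACP/WRITE_ACP actions in FULL_CONTROL are an assumption):

# Permission -> actions mapping consistent with the deleted tests: READ covers
# read+list, WRITE covers write+delete, FULL_CONTROL covers everything.
PERMISSION_TO_ACTIONS = {
    "READ": {"read", "list"},
    "WRITE": {"write", "delete"},
    "FULL_CONTROL": {"read", "list", "write", "delete", "read_acp", "write_acp"},
}

def get_allowed_actions(acl_owner, grants, requester, is_authenticated):
    # grants is a list of (grantee, permission) pairs
    if requester == acl_owner:
        return set(PERMISSION_TO_ACTIONS["FULL_CONTROL"])  # owner shortcut
    actions = set()
    for grantee, permission in grants:
        if (
            grantee == "*"
            or (grantee == "authenticated" and is_authenticated)
            or grantee == requester
        ):
            actions |= PERMISSION_TO_ACTIONS[permission]
    return actions

assert get_allowed_actions("owner", [("*", "READ")], None, False) == {"read", "list"}
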
@@ -8,6 +8,8 @@ def client(app):

 @pytest.fixture
 def auth_headers(app):
+    # Create a test user and return headers
+    # Using the user defined in conftest.py
     return {
         "X-Access-Key": "test",
         "X-Secret-Key": "secret"

@@ -74,15 +76,18 @@ def test_multipart_upload_flow(client, auth_headers):
 def test_abort_multipart_upload(client, auth_headers):
     client.put("/abort-bucket", headers=auth_headers)

+    # Initiate
     resp = client.post("/abort-bucket/file.txt?uploads", headers=auth_headers)
     upload_id = fromstring(resp.data).find("UploadId").text

+    # Abort
     resp = client.delete(f"/abort-bucket/file.txt?uploadId={upload_id}", headers=auth_headers)
     assert resp.status_code == 204

+    # Try to upload part (should fail)
     resp = client.put(
         f"/abort-bucket/file.txt?partNumber=1&uploadId={upload_id}",
         headers=auth_headers,
         data=b"data"
     )
-    assert resp.status_code == 404
+    assert resp.status_code == 404  # NoSuchUpload

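Note: the same initiate/abort/upload-part flow can be exercised through a standard S3 client. A hedged boto3 equivalent of the test above — the endpoint URL, credentials, and bucket are placeholders, not values from this repo:

# Hypothetical boto3 equivalent of the abort flow in the test above.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:5000",  # placeholder endpoint
    aws_access_key_id="test",
    aws_secret_access_key="secret",
)
upload = s3.create_multipart_upload(Bucket="abort-bucket", Key="file.txt")
s3.abort_multipart_upload(Bucket="abort-bucket", Key="file.txt", UploadId=upload["UploadId"])
try:
    s3.upload_part(Bucket="abort-bucket", Key="file.txt", PartNumber=1,
                   UploadId=upload["UploadId"], Body=b"data")
except ClientError as err:
    # Uploading to an aborted upload should fail with NoSuchUpload
    assert err.response["Error"]["Code"] == "NoSuchUpload"
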
@@ -38,7 +38,7 @@ def test_unicode_bucket_and_object_names(tmp_path: Path):
     assert storage.get_object_path("unicode-test", key).exists()

     # Verify listing
-    objects = storage.list_objects_all("unicode-test")
+    objects = storage.list_objects("unicode-test")
     assert any(o.key == key for o in objects)

 def test_special_characters_in_metadata(tmp_path: Path):

@@ -22,10 +22,11 @@ class TestLocalKeyEncryption:
         key_path = tmp_path / "keys" / "master.key"
         provider = LocalKeyEncryption(key_path)

+        # Access master key to trigger creation
         key = provider.master_key

         assert key_path.exists()
-        assert len(key) == 32
+        assert len(key) == 32  # 256-bit key

     def test_load_existing_master_key(self, tmp_path):
         """Test loading an existing master key."""

@@ -49,6 +50,7 @@ class TestLocalKeyEncryption:

         plaintext = b"Hello, World! This is a test message."

+        # Encrypt
         result = provider.encrypt(plaintext)

         assert result.ciphertext != plaintext

@@ -56,6 +58,7 @@ class TestLocalKeyEncryption:
         assert len(result.nonce) == 12
         assert len(result.encrypted_data_key) > 0

+        # Decrypt
         decrypted = provider.decrypt(
             result.ciphertext,
             result.nonce,

@@ -77,8 +80,11 @@ class TestLocalKeyEncryption:
         result1 = provider.encrypt(plaintext)
         result2 = provider.encrypt(plaintext)

+        # Different encrypted data keys
         assert result1.encrypted_data_key != result2.encrypted_data_key
+        # Different nonces
         assert result1.nonce != result2.nonce
+        # Different ciphertexts
         assert result1.ciphertext != result2.ciphertext

     def test_generate_data_key(self, tmp_path):

@@ -91,8 +97,9 @@ class TestLocalKeyEncryption:
         plaintext_key, encrypted_key = provider.generate_data_key()

         assert len(plaintext_key) == 32
-        assert len(encrypted_key) > 32
+        assert len(encrypted_key) > 32  # nonce + ciphertext + tag

+        # Verify we can decrypt the key
         decrypted_key = provider._decrypt_data_key(encrypted_key)
         assert decrypted_key == plaintext_key

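Note: the sizes asserted here and in the earlier hunks (32-byte data keys, 12-byte nonces, encrypted key blob larger than the key itself) match AES-256-GCM envelope encryption. A minimal standalone sketch of that scheme using the cryptography package — illustrative only, not app.encryption's actual implementation:

# Sketch of AES-256-GCM envelope encryption with key blob = nonce||ciphertext||tag.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def generate_data_key(master_key: bytes) -> tuple[bytes, bytes]:
    plaintext_key = os.urandom(32)                       # fresh 256-bit data key
    nonce = os.urandom(12)
    blob = nonce + AESGCM(master_key).encrypt(nonce, plaintext_key, None)
    return plaintext_key, blob                           # len(blob) > 32

def decrypt_data_key(master_key: bytes, blob: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(master_key).decrypt(nonce, ciphertext, None)

master = AESGCM.generate_key(bit_length=256)
key, blob = generate_data_key(master)
assert decrypt_data_key(master, blob) == key
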
@@ -100,15 +107,18 @@ class TestLocalKeyEncryption:
         """Test that decryption fails with wrong master key."""
         from app.encryption import LocalKeyEncryption, EncryptionError

+        # Create two providers with different keys
         key_path1 = tmp_path / "master1.key"
         key_path2 = tmp_path / "master2.key"

         provider1 = LocalKeyEncryption(key_path1)
         provider2 = LocalKeyEncryption(key_path2)

+        # Encrypt with provider1
         plaintext = b"Secret message"
         result = provider1.encrypt(plaintext)

+        # Try to decrypt with provider2
         with pytest.raises(EncryptionError):
             provider2.decrypt(
                 result.ciphertext,

@@ -186,15 +196,18 @@ class TestStreamingEncryptor:
         provider = LocalKeyEncryption(key_path)
         encryptor = StreamingEncryptor(provider, chunk_size=1024)

-        original_data = b"A" * 5000 + b"B" * 5000 + b"C" * 5000
+        # Create test data
+        original_data = b"A" * 5000 + b"B" * 5000 + b"C" * 5000  # 15KB
         stream = io.BytesIO(original_data)

+        # Encrypt
         encrypted_stream, metadata = encryptor.encrypt_stream(stream)
         encrypted_data = encrypted_stream.read()

         assert encrypted_data != original_data
         assert metadata.algorithm == "AES256"

+        # Decrypt
         encrypted_stream = io.BytesIO(encrypted_data)
         decrypted_stream = encryptor.decrypt_stream(encrypted_stream, metadata)
         decrypted_data = decrypted_stream.read()

@@ -306,6 +319,7 @@ class TestClientEncryptionHelper:
         assert key_info["algorithm"] == "AES-256-GCM"
         assert "created_at" in key_info

+        # Verify key is 256 bits
         key = base64.b64decode(key_info["key"])
         assert len(key) == 32

@@ -411,6 +425,7 @@ class TestKMSManager:
         assert key is not None
         assert key.key_id == "test-key"

+        # Non-existent key
         assert kms.get_key("non-existent") is None

     def test_enable_disable_key(self, tmp_path):

@@ -424,11 +439,14 @@ class TestKMSManager:

         kms.create_key("Test key", key_id="test-key")

+        # Initially enabled
         assert kms.get_key("test-key").enabled

+        # Disable
         kms.disable_key("test-key")
         assert not kms.get_key("test-key").enabled

+        # Enable
         kms.enable_key("test-key")
         assert kms.get_key("test-key").enabled

@@ -485,9 +503,11 @@ class TestKMSManager:

         ciphertext = kms.encrypt("test-key", plaintext, context)

+        # Decrypt with same context succeeds
         decrypted, _ = kms.decrypt(ciphertext, context)
         assert decrypted == plaintext

+        # Decrypt with different context fails
         with pytest.raises(EncryptionError):
             kms.decrypt(ciphertext, {"different": "context"})

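Note: the encryption context in this hunk behaves like associated data in AES-GCM — it must match at decrypt time but is not itself secret. A minimal sketch of binding a context dict as AAD; the canonical-JSON serialization is an assumption about how such a context could be bound, not the real KMSManager's wire format:

# Sketch: binding an encryption context as AES-GCM associated data, so a
# mismatched context fails authentication (InvalidTag) at decrypt time.
import json
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_with_context(key: bytes, plaintext: bytes, context: dict) -> bytes:
    aad = json.dumps(context, sort_keys=True).encode()   # canonical serialization (assumed)
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, plaintext, aad)

def decrypt_with_context(key: bytes, blob: bytes, context: dict) -> bytes:
    aad = json.dumps(context, sort_keys=True).encode()
    return AESGCM(key).decrypt(blob[:12], blob[12:], aad)  # raises on context mismatch
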
@@ -507,6 +527,7 @@ class TestKMSManager:
         assert len(plaintext_key) == 32
         assert len(encrypted_key) > 0

+        # Decrypt the encrypted key
         decrypted_key = kms.decrypt_data_key("test-key", encrypted_key)

         assert decrypted_key == plaintext_key

@@ -540,8 +561,13 @@ class TestKMSManager:

         plaintext = b"Data to re-encrypt"

+        # Encrypt with key-1
         ciphertext1 = kms.encrypt("key-1", plaintext)
+
+        # Re-encrypt with key-2
         ciphertext2 = kms.re_encrypt(ciphertext1, "key-2")
+
+        # Decrypt with key-2
         decrypted, key_id = kms.decrypt(ciphertext2)

         assert decrypted == plaintext

@@ -561,7 +587,7 @@ class TestKMSManager:

         assert len(random1) == 32
         assert len(random2) == 32
-        assert random1 != random2
+        assert random1 != random2  # Very unlikely to be equal

     def test_keys_persist_across_instances(self, tmp_path):
         """Test that keys persist and can be loaded by new instances."""

@@ -570,12 +596,14 @@ class TestKMSManager:
         keys_path = tmp_path / "kms_keys.json"
         master_key_path = tmp_path / "master.key"

+        # Create key with first instance
         kms1 = KMSManager(keys_path, master_key_path)
         kms1.create_key("Test key", key_id="test-key")

         plaintext = b"Persistent encryption test"
         ciphertext = kms1.encrypt("test-key", plaintext)

+        # Create new instance and verify key works
         kms2 = KMSManager(keys_path, master_key_path)

         decrypted, key_id = kms2.decrypt(ciphertext)

@@ -637,11 +665,13 @@ class TestEncryptedStorage:

         encrypted_storage = EncryptedObjectStorage(storage, encryption)

+        # Create bucket with encryption config
         storage.create_bucket("test-bucket")
         storage.set_bucket_encryption("test-bucket", {
             "Rules": [{"SSEAlgorithm": "AES256"}]
         })

+        # Put object
         original_data = b"This is secret data that should be encrypted"
         stream = io.BytesIO(original_data)

@@ -653,10 +683,12 @@ class TestEncryptedStorage:

         assert meta is not None

+        # Verify file on disk is encrypted (not plaintext)
         file_path = storage_root / "test-bucket" / "secret.txt"
         stored_data = file_path.read_bytes()
         assert stored_data != original_data

+        # Get object - should be decrypted
         data, metadata = encrypted_storage.get_object_data("test-bucket", "secret.txt")

         assert data == original_data

@@ -679,12 +711,14 @@ class TestEncryptedStorage:
         encrypted_storage = EncryptedObjectStorage(storage, encryption)

         storage.create_bucket("test-bucket")
+        # No encryption config

         original_data = b"Unencrypted data"
         stream = io.BytesIO(original_data)

         encrypted_storage.put_object("test-bucket", "plain.txt", stream)

+        # Verify file on disk is NOT encrypted
         file_path = storage_root / "test-bucket" / "plain.txt"
         stored_data = file_path.read_bytes()
         assert stored_data == original_data

@@ -711,6 +745,7 @@ class TestEncryptedStorage:
         original_data = b"Explicitly encrypted data"
         stream = io.BytesIO(original_data)

+        # Request encryption explicitly
         encrypted_storage.put_object(
             "test-bucket",
             "encrypted.txt",

@@ -718,9 +753,11 @@ class TestEncryptedStorage:
             server_side_encryption="AES256",
         )

+        # Verify file is encrypted
         file_path = storage_root / "test-bucket" / "encrypted.txt"
         stored_data = file_path.read_bytes()
         assert stored_data != original_data

+        # Get object - should be decrypted
         data, _ = encrypted_storage.get_object_data("test-bucket", "encrypted.txt")
         assert data == original_data

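Note: taken together, these hunks document the EncryptedObjectStorage contract — encryption applies when the bucket has an SSE config or when the caller passes server_side_encryption, and reads are transparently decrypted. A condensed usage sketch under those assumptions (the import path for EncryptedObjectStorage is a guess; only ObjectStorage's module is confirmed by the tests above):

# Condensed usage of the encrypted-storage contract exercised above.
import io
from pathlib import Path
from app.storage import ObjectStorage
from app.encryption import LocalKeyEncryption, EncryptedObjectStorage  # module path assumed

storage_root = Path("/tmp/myfsio-demo")
storage_root.mkdir(parents=True, exist_ok=True)
storage = ObjectStorage(storage_root)
encryption = LocalKeyEncryption(storage_root / "master.key")
encrypted = EncryptedObjectStorage(storage, encryption)

storage.create_bucket("demo")
storage.set_bucket_encryption("demo", {"Rules": [{"SSEAlgorithm": "AES256"}]})

encrypted.put_object("demo", "secret.txt", io.BytesIO(b"hello"))
data, metadata = encrypted.get_object_data("demo", "secret.txt")
assert data == b"hello"  # bytes on disk differ; reads are decrypted
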
@@ -24,6 +24,7 @@ def kms_client(tmp_path):
         "KMS_KEYS_PATH": str(tmp_path / "kms_keys.json"),
     })

+    # Create default IAM config with admin user
     iam_config = {
         "users": [
             {

@@ -82,6 +83,7 @@ class TestKMSKeyManagement:
|
|||||||
|
|
||||||
def test_list_keys(self, kms_client, auth_headers):
|
def test_list_keys(self, kms_client, auth_headers):
|
||||||
"""Test listing KMS keys."""
|
"""Test listing KMS keys."""
|
||||||
|
# Create some keys
|
||||||
kms_client.post("/kms/keys", json={"Description": "Key 1"}, headers=auth_headers)
|
kms_client.post("/kms/keys", json={"Description": "Key 1"}, headers=auth_headers)
|
||||||
kms_client.post("/kms/keys", json={"Description": "Key 2"}, headers=auth_headers)
|
kms_client.post("/kms/keys", json={"Description": "Key 2"}, headers=auth_headers)
|
||||||
|
|
||||||
@@ -95,6 +97,7 @@ class TestKMSKeyManagement:
|
|||||||
|
|
||||||
def test_get_key(self, kms_client, auth_headers):
|
def test_get_key(self, kms_client, auth_headers):
|
||||||
"""Test getting a specific key."""
|
"""Test getting a specific key."""
|
||||||
|
# Create a key
|
||||||
create_response = kms_client.post(
|
create_response = kms_client.post(
|
||||||
"/kms/keys",
|
"/kms/keys",
|
||||||
json={"KeyId": "test-key", "Description": "Test key"},
|
json={"KeyId": "test-key", "Description": "Test key"},
|
||||||
@@ -117,28 +120,36 @@ class TestKMSKeyManagement:
|
|||||||
|
|
||||||
def test_delete_key(self, kms_client, auth_headers):
|
def test_delete_key(self, kms_client, auth_headers):
|
||||||
"""Test deleting a key."""
|
"""Test deleting a key."""
|
||||||
|
# Create a key
|
||||||
kms_client.post("/kms/keys", json={"KeyId": "test-key"}, headers=auth_headers)
|
kms_client.post("/kms/keys", json={"KeyId": "test-key"}, headers=auth_headers)
|
||||||
|
|
||||||
|
# Delete it
|
||||||
response = kms_client.delete("/kms/keys/test-key", headers=auth_headers)
|
response = kms_client.delete("/kms/keys/test-key", headers=auth_headers)
|
||||||
|
|
||||||
assert response.status_code == 204
|
assert response.status_code == 204
|
||||||
|
|
||||||
|
# Verify it's gone
|
||||||
get_response = kms_client.get("/kms/keys/test-key", headers=auth_headers)
|
get_response = kms_client.get("/kms/keys/test-key", headers=auth_headers)
|
||||||
assert get_response.status_code == 404
|
assert get_response.status_code == 404
|
||||||
|
|
||||||
def test_enable_disable_key(self, kms_client, auth_headers):
|
def test_enable_disable_key(self, kms_client, auth_headers):
|
||||||
"""Test enabling and disabling a key."""
|
"""Test enabling and disabling a key."""
|
||||||
|
# Create a key
|
||||||
kms_client.post("/kms/keys", json={"KeyId": "test-key"}, headers=auth_headers)
|
kms_client.post("/kms/keys", json={"KeyId": "test-key"}, headers=auth_headers)
|
||||||
|
|
||||||
|
# Disable
|
||||||
response = kms_client.post("/kms/keys/test-key/disable", headers=auth_headers)
|
response = kms_client.post("/kms/keys/test-key/disable", headers=auth_headers)
|
||||||
assert response.status_code == 200
|
assert response.status_code == 200
|
||||||
|
|
||||||
|
# Verify disabled
|
||||||
get_response = kms_client.get("/kms/keys/test-key", headers=auth_headers)
|
get_response = kms_client.get("/kms/keys/test-key", headers=auth_headers)
|
||||||
assert get_response.get_json()["KeyMetadata"]["Enabled"] is False
|
assert get_response.get_json()["KeyMetadata"]["Enabled"] is False
|
||||||
|
|
||||||
|
# Enable
|
||||||
response = kms_client.post("/kms/keys/test-key/enable", headers=auth_headers)
|
response = kms_client.post("/kms/keys/test-key/enable", headers=auth_headers)
|
||||||
assert response.status_code == 200
|
assert response.status_code == 200
|
||||||
|
|
||||||
|
# Verify enabled
|
||||||
get_response = kms_client.get("/kms/keys/test-key", headers=auth_headers)
|
get_response = kms_client.get("/kms/keys/test-key", headers=auth_headers)
|
||||||
assert get_response.get_json()["KeyMetadata"]["Enabled"] is True
|
assert get_response.get_json()["KeyMetadata"]["Enabled"] is True
|
||||||
|
|
||||||
@@ -148,11 +159,13 @@ class TestKMSEncryption:
|
|||||||
|
|
||||||
def test_encrypt_decrypt(self, kms_client, auth_headers):
|
def test_encrypt_decrypt(self, kms_client, auth_headers):
|
||||||
"""Test encrypting and decrypting data."""
|
"""Test encrypting and decrypting data."""
|
||||||
|
# Create a key
|
||||||
kms_client.post("/kms/keys", json={"KeyId": "test-key"}, headers=auth_headers)
|
kms_client.post("/kms/keys", json={"KeyId": "test-key"}, headers=auth_headers)
|
||||||
|
|
||||||
plaintext = b"Hello, World!"
|
plaintext = b"Hello, World!"
|
||||||
plaintext_b64 = base64.b64encode(plaintext).decode()
|
plaintext_b64 = base64.b64encode(plaintext).decode()
|
||||||
|
|
||||||
|
# Encrypt
|
||||||
encrypt_response = kms_client.post(
|
encrypt_response = kms_client.post(
|
||||||
"/kms/encrypt",
|
"/kms/encrypt",
|
||||||
json={"KeyId": "test-key", "Plaintext": plaintext_b64},
|
json={"KeyId": "test-key", "Plaintext": plaintext_b64},
|
||||||
@@ -165,6 +178,7 @@ class TestKMSEncryption:
|
|||||||
assert "CiphertextBlob" in encrypt_data
|
assert "CiphertextBlob" in encrypt_data
|
||||||
assert encrypt_data["KeyId"] == "test-key"
|
assert encrypt_data["KeyId"] == "test-key"
|
||||||
|
|
||||||
|
# Decrypt
|
||||||
decrypt_response = kms_client.post(
|
decrypt_response = kms_client.post(
|
||||||
"/kms/decrypt",
|
"/kms/decrypt",
|
||||||
json={"CiphertextBlob": encrypt_data["CiphertextBlob"]},
|
json={"CiphertextBlob": encrypt_data["CiphertextBlob"]},
|
||||||
@@ -185,6 +199,7 @@ class TestKMSEncryption:
|
|||||||
plaintext_b64 = base64.b64encode(plaintext).decode()
|
plaintext_b64 = base64.b64encode(plaintext).decode()
|
||||||
context = {"purpose": "testing", "bucket": "my-bucket"}
|
context = {"purpose": "testing", "bucket": "my-bucket"}
|
||||||
|
|
||||||
|
# Encrypt with context
|
||||||
encrypt_response = kms_client.post(
|
encrypt_response = kms_client.post(
|
||||||
"/kms/encrypt",
|
"/kms/encrypt",
|
||||||
json={
|
json={
|
||||||
@@ -198,6 +213,7 @@ class TestKMSEncryption:
|
|||||||
assert encrypt_response.status_code == 200
|
assert encrypt_response.status_code == 200
|
||||||
ciphertext = encrypt_response.get_json()["CiphertextBlob"]
|
ciphertext = encrypt_response.get_json()["CiphertextBlob"]
|
||||||
|
|
||||||
|
# Decrypt with same context succeeds
|
||||||
decrypt_response = kms_client.post(
|
decrypt_response = kms_client.post(
|
||||||
"/kms/decrypt",
|
"/kms/decrypt",
|
||||||
json={
|
json={
|
||||||
@@ -209,6 +225,7 @@ class TestKMSEncryption:
|
|||||||
|
|
||||||
assert decrypt_response.status_code == 200
|
assert decrypt_response.status_code == 200
|
||||||
|
|
||||||
|
# Decrypt with wrong context fails
|
||||||
wrong_context_response = kms_client.post(
|
wrong_context_response = kms_client.post(
|
||||||
"/kms/decrypt",
|
"/kms/decrypt",
|
||||||
json={
|
json={
|
||||||
@@ -308,9 +325,11 @@ class TestKMSReEncrypt:
|
|||||||
|
|
||||||
def test_re_encrypt(self, kms_client, auth_headers):
|
def test_re_encrypt(self, kms_client, auth_headers):
|
||||||
"""Test re-encrypting data with a different key."""
|
"""Test re-encrypting data with a different key."""
|
||||||
|
# Create two keys
|
||||||
kms_client.post("/kms/keys", json={"KeyId": "key-1"}, headers=auth_headers)
|
kms_client.post("/kms/keys", json={"KeyId": "key-1"}, headers=auth_headers)
|
||||||
kms_client.post("/kms/keys", json={"KeyId": "key-2"}, headers=auth_headers)
|
kms_client.post("/kms/keys", json={"KeyId": "key-2"}, headers=auth_headers)
|
||||||
|
|
||||||
|
# Encrypt with key-1
|
||||||
plaintext = b"Data to re-encrypt"
|
plaintext = b"Data to re-encrypt"
|
||||||
encrypt_response = kms_client.post(
|
encrypt_response = kms_client.post(
|
||||||
"/kms/encrypt",
|
"/kms/encrypt",
|
||||||
@@ -323,6 +342,7 @@ class TestKMSReEncrypt:
|
|||||||
|
|
||||||
ciphertext = encrypt_response.get_json()["CiphertextBlob"]
|
ciphertext = encrypt_response.get_json()["CiphertextBlob"]
|
||||||
|
|
||||||
|
# Re-encrypt with key-2
|
||||||
re_encrypt_response = kms_client.post(
|
re_encrypt_response = kms_client.post(
|
||||||
"/kms/re-encrypt",
|
"/kms/re-encrypt",
|
||||||
json={
|
json={
|
||||||
@@ -338,6 +358,7 @@ class TestKMSReEncrypt:
|
|||||||
assert data["SourceKeyId"] == "key-1"
|
assert data["SourceKeyId"] == "key-1"
|
||||||
assert data["KeyId"] == "key-2"
|
assert data["KeyId"] == "key-2"
|
||||||
|
|
||||||
|
# Verify new ciphertext can be decrypted
|
||||||
decrypt_response = kms_client.post(
|
decrypt_response = kms_client.post(
|
||||||
"/kms/decrypt",
|
"/kms/decrypt",
|
||||||
json={"CiphertextBlob": data["CiphertextBlob"]},
|
json={"CiphertextBlob": data["CiphertextBlob"]},
|
||||||
@@ -377,7 +398,7 @@ class TestKMSRandom:
|
|||||||
data = response.get_json()
|
data = response.get_json()
|
||||||
|
|
||||||
random_bytes = base64.b64decode(data["Plaintext"])
|
random_bytes = base64.b64decode(data["Plaintext"])
|
||||||
assert len(random_bytes) == 32
|
assert len(random_bytes) == 32 # Default is 32 bytes
|
||||||
|
|
||||||
|
|
||||||
class TestClientSideEncryption:
|
class TestClientSideEncryption:
|
||||||
@@ -401,9 +422,11 @@ class TestClientSideEncryption:
|
|||||||
|
|
||||||
def test_client_encrypt_decrypt(self, kms_client, auth_headers):
|
def test_client_encrypt_decrypt(self, kms_client, auth_headers):
|
||||||
"""Test client-side encryption and decryption."""
|
"""Test client-side encryption and decryption."""
|
||||||
|
# Generate a key
|
||||||
key_response = kms_client.post("/kms/client/generate-key", headers=auth_headers)
|
key_response = kms_client.post("/kms/client/generate-key", headers=auth_headers)
|
||||||
key = key_response.get_json()["key"]
|
key = key_response.get_json()["key"]
|
||||||
|
|
||||||
|
# Encrypt
|
||||||
plaintext = b"Client-side encrypted data"
|
plaintext = b"Client-side encrypted data"
|
||||||
encrypt_response = kms_client.post(
|
encrypt_response = kms_client.post(
|
||||||
"/kms/client/encrypt",
|
"/kms/client/encrypt",
|
||||||
@@ -417,6 +440,7 @@ class TestClientSideEncryption:
|
|||||||
assert encrypt_response.status_code == 200
|
assert encrypt_response.status_code == 200
|
||||||
encrypted = encrypt_response.get_json()
|
encrypted = encrypt_response.get_json()
|
||||||
|
|
||||||
|
# Decrypt
|
||||||
decrypt_response = kms_client.post(
|
decrypt_response = kms_client.post(
|
||||||
"/kms/client/decrypt",
|
"/kms/client/decrypt",
|
||||||
json={
|
json={
|
||||||
@@ -437,6 +461,7 @@ class TestEncryptionMaterials:
|
|||||||
|
|
||||||
def test_get_encryption_materials(self, kms_client, auth_headers):
|
def test_get_encryption_materials(self, kms_client, auth_headers):
|
||||||
"""Test getting encryption materials for client-side S3 encryption."""
|
"""Test getting encryption materials for client-side S3 encryption."""
|
||||||
|
# Create a key
|
||||||
kms_client.post("/kms/keys", json={"KeyId": "s3-key"}, headers=auth_headers)
|
kms_client.post("/kms/keys", json={"KeyId": "s3-key"}, headers=auth_headers)
|
||||||
|
|
||||||
response = kms_client.post(
|
response = kms_client.post(
|
||||||
@@ -453,6 +478,7 @@ class TestEncryptionMaterials:
|
|||||||
assert data["KeyId"] == "s3-key"
|
assert data["KeyId"] == "s3-key"
|
||||||
assert data["Algorithm"] == "AES-256-GCM"
|
assert data["Algorithm"] == "AES-256-GCM"
|
||||||
|
|
||||||
|
# Verify key is 256 bits
|
||||||
key = base64.b64decode(data["PlaintextKey"])
|
key = base64.b64decode(data["PlaintextKey"])
|
||||||
assert len(key) == 32
|
assert len(key) == 32
|
||||||
|
|
||||||
@@ -464,6 +490,7 @@ class TestKMSAuthentication:
|
|||||||
"""Test that unauthenticated requests are rejected."""
|
"""Test that unauthenticated requests are rejected."""
|
||||||
response = kms_client.get("/kms/keys")
|
response = kms_client.get("/kms/keys")
|
||||||
|
|
||||||
|
# Should fail with 403 (no credentials)
|
||||||
assert response.status_code == 403
|
assert response.status_code == 403
|
||||||
|
|
||||||
def test_invalid_credentials_fail(self, kms_client):
|
def test_invalid_credentials_fail(self, kms_client):
|
||||||
|
|||||||
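The wrong-context decryption failure asserted above falls out naturally if the encryption context is bound to the ciphertext as AES-GCM associated data (AAD). A sketch under that assumption; `encrypt` and `decrypt` are hypothetical helpers, not the service's internals, and the `cryptography` package is assumed:

import json
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)

def encrypt(plaintext: bytes, context: dict) -> bytes:
    aad = json.dumps(context, sort_keys=True).encode()  # canonical context bytes
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, plaintext, aad)

def decrypt(blob: bytes, context: dict) -> bytes:
    aad = json.dumps(context, sort_keys=True).encode()
    # Raises cryptography.exceptions.InvalidTag when the context differs
    return AESGCM(key).decrypt(blob[:12], blob[12:], aad)

ct = encrypt(b"Hello, World!", {"purpose": "testing", "bucket": "my-bucket"})
assert decrypt(ct, {"purpose": "testing", "bucket": "my-bucket"}) == b"Hello, World!"

Sorting the keys before serializing matters: the same context must produce byte-identical AAD on both the encrypt and decrypt paths.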
@@ -1,238 +0,0 @@
-import io
-import time
-from datetime import datetime, timedelta, timezone
-from pathlib import Path
-from unittest.mock import MagicMock, patch
-
-import pytest
-
-from app.lifecycle import LifecycleManager, LifecycleResult
-from app.storage import ObjectStorage
-
-
-@pytest.fixture
-def storage(tmp_path: Path):
-    storage_root = tmp_path / "data"
-    storage_root.mkdir(parents=True)
-    return ObjectStorage(storage_root)
-
-
-@pytest.fixture
-def lifecycle_manager(storage):
-    manager = LifecycleManager(storage, interval_seconds=3600)
-    yield manager
-    manager.stop()
-
-
-class TestLifecycleResult:
-    def test_default_values(self):
-        result = LifecycleResult(bucket_name="test-bucket")
-        assert result.bucket_name == "test-bucket"
-        assert result.objects_deleted == 0
-        assert result.versions_deleted == 0
-        assert result.uploads_aborted == 0
-        assert result.errors == []
-        assert result.execution_time_seconds == 0.0
-
-
-class TestLifecycleManager:
-    def test_start_and_stop(self, lifecycle_manager):
-        lifecycle_manager.start()
-        assert lifecycle_manager._timer is not None
-        assert lifecycle_manager._shutdown is False
-
-        lifecycle_manager.stop()
-        assert lifecycle_manager._shutdown is True
-        assert lifecycle_manager._timer is None
-
-    def test_start_only_once(self, lifecycle_manager):
-        lifecycle_manager.start()
-        first_timer = lifecycle_manager._timer
-
-        lifecycle_manager.start()
-        assert lifecycle_manager._timer is first_timer
-
-    def test_enforce_rules_no_lifecycle(self, lifecycle_manager, storage):
-        storage.create_bucket("no-lifecycle-bucket")
-
-        result = lifecycle_manager.enforce_rules("no-lifecycle-bucket")
-        assert result.bucket_name == "no-lifecycle-bucket"
-        assert result.objects_deleted == 0
-
-    def test_enforce_rules_disabled_rule(self, lifecycle_manager, storage):
-        storage.create_bucket("disabled-bucket")
-        storage.set_bucket_lifecycle("disabled-bucket", [
-            {
-                "ID": "disabled-rule",
-                "Status": "Disabled",
-                "Prefix": "",
-                "Expiration": {"Days": 1},
-            }
-        ])
-
-        old_object = storage.put_object(
-            "disabled-bucket",
-            "old-file.txt",
-            io.BytesIO(b"old content"),
-        )
-
-        result = lifecycle_manager.enforce_rules("disabled-bucket")
-        assert result.objects_deleted == 0
-
-    def test_enforce_expiration_by_days(self, lifecycle_manager, storage):
-        storage.create_bucket("expire-bucket")
-        storage.set_bucket_lifecycle("expire-bucket", [
-            {
-                "ID": "expire-30-days",
-                "Status": "Enabled",
-                "Prefix": "",
-                "Expiration": {"Days": 30},
-            }
-        ])
-
-        storage.put_object(
-            "expire-bucket",
-            "recent-file.txt",
-            io.BytesIO(b"recent content"),
-        )
-
-        result = lifecycle_manager.enforce_rules("expire-bucket")
-        assert result.objects_deleted == 0
-
-    def test_enforce_expiration_with_prefix(self, lifecycle_manager, storage):
-        storage.create_bucket("prefix-bucket")
-        storage.set_bucket_lifecycle("prefix-bucket", [
-            {
-                "ID": "expire-logs",
-                "Status": "Enabled",
-                "Prefix": "logs/",
-                "Expiration": {"Days": 1},
-            }
-        ])
-
-        storage.put_object("prefix-bucket", "logs/old.log", io.BytesIO(b"log data"))
-        storage.put_object("prefix-bucket", "data/keep.txt", io.BytesIO(b"keep this"))
-
-        result = lifecycle_manager.enforce_rules("prefix-bucket")
-
-    def test_enforce_all_buckets(self, lifecycle_manager, storage):
-        storage.create_bucket("bucket1")
-        storage.create_bucket("bucket2")
-
-        results = lifecycle_manager.enforce_all_buckets()
-        assert isinstance(results, dict)
-
-    def test_run_now_single_bucket(self, lifecycle_manager, storage):
-        storage.create_bucket("run-now-bucket")
-
-        results = lifecycle_manager.run_now("run-now-bucket")
-        assert "run-now-bucket" in results
-
-    def test_run_now_all_buckets(self, lifecycle_manager, storage):
-        storage.create_bucket("all-bucket-1")
-        storage.create_bucket("all-bucket-2")
-
-        results = lifecycle_manager.run_now()
-        assert isinstance(results, dict)
-
-    def test_enforce_abort_multipart(self, lifecycle_manager, storage):
-        storage.create_bucket("multipart-bucket")
-        storage.set_bucket_lifecycle("multipart-bucket", [
-            {
-                "ID": "abort-old-uploads",
-                "Status": "Enabled",
-                "Prefix": "",
-                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
-            }
-        ])
-
-        upload_id = storage.initiate_multipart_upload("multipart-bucket", "large-file.bin")
-
-        result = lifecycle_manager.enforce_rules("multipart-bucket")
-        assert result.uploads_aborted == 0
-
-    def test_enforce_noncurrent_version_expiration(self, lifecycle_manager, storage):
-        storage.create_bucket("versioned-bucket")
-        storage.set_bucket_versioning("versioned-bucket", True)
-        storage.set_bucket_lifecycle("versioned-bucket", [
-            {
-                "ID": "expire-old-versions",
-                "Status": "Enabled",
-                "Prefix": "",
-                "NoncurrentVersionExpiration": {"NoncurrentDays": 30},
-            }
-        ])
-
-        storage.put_object("versioned-bucket", "file.txt", io.BytesIO(b"v1"))
-        storage.put_object("versioned-bucket", "file.txt", io.BytesIO(b"v2"))
-
-        result = lifecycle_manager.enforce_rules("versioned-bucket")
-        assert result.bucket_name == "versioned-bucket"
-
-    def test_execution_time_tracking(self, lifecycle_manager, storage):
-        storage.create_bucket("timed-bucket")
-        storage.set_bucket_lifecycle("timed-bucket", [
-            {
-                "ID": "timer-test",
-                "Status": "Enabled",
-                "Expiration": {"Days": 1},
-            }
-        ])
-
-        result = lifecycle_manager.enforce_rules("timed-bucket")
-        assert result.execution_time_seconds >= 0
-
-    def test_enforce_rules_with_error(self, lifecycle_manager, storage):
-        result = lifecycle_manager.enforce_rules("nonexistent-bucket")
-        assert len(result.errors) > 0 or result.objects_deleted == 0
-
-    def test_lifecycle_with_date_expiration(self, lifecycle_manager, storage):
-        storage.create_bucket("date-bucket")
-        past_date = (datetime.now(timezone.utc) - timedelta(days=1)).strftime("%Y-%m-%dT00:00:00Z")
-        storage.set_bucket_lifecycle("date-bucket", [
-            {
-                "ID": "expire-by-date",
-                "Status": "Enabled",
-                "Prefix": "",
-                "Expiration": {"Date": past_date},
-            }
-        ])
-
-        storage.put_object("date-bucket", "should-expire.txt", io.BytesIO(b"content"))
-
-        result = lifecycle_manager.enforce_rules("date-bucket")
-
-    def test_enforce_with_filter_prefix(self, lifecycle_manager, storage):
-        storage.create_bucket("filter-bucket")
-        storage.set_bucket_lifecycle("filter-bucket", [
-            {
-                "ID": "filter-prefix-rule",
-                "Status": "Enabled",
-                "Filter": {"Prefix": "archive/"},
-                "Expiration": {"Days": 1},
-            }
-        ])
-
-        result = lifecycle_manager.enforce_rules("filter-bucket")
-        assert result.bucket_name == "filter-bucket"
-
-
-class TestLifecycleManagerScheduling:
-    def test_schedule_next_respects_shutdown(self, storage):
-        manager = LifecycleManager(storage, interval_seconds=1)
-        manager._shutdown = True
-        manager._schedule_next()
-        assert manager._timer is None
-
-    @patch.object(LifecycleManager, "enforce_all_buckets")
-    def test_run_enforcement_catches_exceptions(self, mock_enforce, storage):
-        mock_enforce.side_effect = Exception("Test error")
-        manager = LifecycleManager(storage, interval_seconds=3600)
-        manager._shutdown = True
-        manager._run_enforcement()
-
-    def test_shutdown_flag_prevents_scheduling(self, storage):
-        manager = LifecycleManager(storage, interval_seconds=1)
-        manager.start()
-        manager.stop()
-        assert manager._shutdown is True
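The core decision the deleted lifecycle tests covered is small enough to state directly: a rule applies when it is Enabled and the key matches its prefix, and an object expires once it is older than Expiration.Days. A sketch with a hypothetical `is_expired` helper, not the LifecycleManager's actual code path:

from datetime import datetime, timedelta, timezone

def is_expired(rule: dict, key: str, last_modified: datetime) -> bool:
    if rule.get("Status") != "Enabled":
        return False
    # Prefix may live at the top level or under Filter, as in the rules above
    prefix = rule.get("Prefix") or rule.get("Filter", {}).get("Prefix", "")
    if not key.startswith(prefix):
        return False
    days = rule.get("Expiration", {}).get("Days")
    if days is None:
        return False
    return last_modified + timedelta(days=days) <= datetime.now(timezone.utc)

rule = {"Status": "Enabled", "Prefix": "logs/", "Expiration": {"Days": 1}}
two_days_ago = datetime.now(timezone.utc) - timedelta(days=2)
assert is_expired(rule, "logs/old.log", two_days_ago) is True
assert is_expired(rule, "data/keep.txt", two_days_ago) is False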
@@ -4,6 +4,7 @@ import pytest
 from xml.etree.ElementTree import fromstring


+# Helper to create file-like stream
 def _stream(data: bytes):
     return io.BytesIO(data)

@@ -18,11 +19,13 @@ class TestListObjectsV2:
     """Tests for ListObjectsV2 endpoint."""

     def test_list_objects_v2_basic(self, client, signer, storage):
+        # Create bucket and objects
         storage.create_bucket("v2-test")
         storage.put_object("v2-test", "file1.txt", _stream(b"hello"))
         storage.put_object("v2-test", "file2.txt", _stream(b"world"))
         storage.put_object("v2-test", "folder/file3.txt", _stream(b"nested"))

+        # ListObjectsV2 request
         headers = signer("GET", "/v2-test?list-type=2")
         resp = client.get("/v2-test", query_string={"list-type": "2"}, headers=headers)
         assert resp.status_code == 200
@@ -43,6 +46,7 @@ class TestListObjectsV2:
         storage.put_object("prefix-test", "photos/2024/mar.jpg", _stream(b"mar"))
         storage.put_object("prefix-test", "docs/readme.md", _stream(b"readme"))

+        # List with prefix and delimiter
         headers = signer("GET", "/prefix-test?list-type=2&prefix=photos/&delimiter=/")
         resp = client.get(
             "/prefix-test",
@@ -52,10 +56,11 @@ class TestListObjectsV2:
         assert resp.status_code == 200

         root = fromstring(resp.data)
+        # Should show common prefixes for 2023/ and 2024/
         prefixes = [el.find("Prefix").text for el in root.findall("CommonPrefixes")]
         assert "photos/2023/" in prefixes
         assert "photos/2024/" in prefixes
-        assert len(root.findall("Contents")) == 0
+        assert len(root.findall("Contents")) == 0  # No direct files under photos/


 class TestPutBucketVersioning:
@@ -73,6 +78,7 @@ class TestPutBucketVersioning:
         resp = client.put("/version-test", query_string={"versioning": ""}, data=payload, headers=headers)
         assert resp.status_code == 200

+        # Verify via GET
         headers = signer("GET", "/version-test?versioning")
         resp = client.get("/version-test", query_string={"versioning": ""}, headers=headers)
         root = fromstring(resp.data)
@@ -104,13 +110,15 @@ class TestDeleteBucketTagging:
         storage.create_bucket("tag-delete-test")
         storage.set_bucket_tags("tag-delete-test", [{"Key": "env", "Value": "test"}])

+        # Delete tags
         headers = signer("DELETE", "/tag-delete-test?tagging")
         resp = client.delete("/tag-delete-test", query_string={"tagging": ""}, headers=headers)
         assert resp.status_code == 204

+        # Verify tags are gone
         headers = signer("GET", "/tag-delete-test?tagging")
         resp = client.get("/tag-delete-test", query_string={"tagging": ""}, headers=headers)
-        assert resp.status_code == 404
+        assert resp.status_code == 404  # NoSuchTagSet


 class TestDeleteBucketCors:
@@ -122,13 +130,15 @@ class TestDeleteBucketCors:
             {"AllowedOrigins": ["*"], "AllowedMethods": ["GET"]}
         ])

+        # Delete CORS
         headers = signer("DELETE", "/cors-delete-test?cors")
         resp = client.delete("/cors-delete-test", query_string={"cors": ""}, headers=headers)
         assert resp.status_code == 204

+        # Verify CORS is gone
         headers = signer("GET", "/cors-delete-test?cors")
         resp = client.get("/cors-delete-test", query_string={"cors": ""}, headers=headers)
-        assert resp.status_code == 404
+        assert resp.status_code == 404  # NoSuchCORSConfiguration


 class TestGetBucketLocation:
@@ -163,6 +173,7 @@ class TestBucketAcl:
     def test_put_bucket_acl(self, client, signer, storage):
         storage.create_bucket("acl-put-test")

+        # PUT with canned ACL header
         headers = signer("PUT", "/acl-put-test?acl")
         headers["x-amz-acl"] = "public-read"
         resp = client.put("/acl-put-test", query_string={"acl": ""}, headers=headers)
@@ -177,6 +188,7 @@ class TestCopyObject:
         storage.create_bucket("copy-dst")
         storage.put_object("copy-src", "original.txt", _stream(b"original content"))

+        # Copy object
         headers = signer("PUT", "/copy-dst/copied.txt")
         headers["x-amz-copy-source"] = "/copy-src/original.txt"
         resp = client.put("/copy-dst/copied.txt", headers=headers)
@@ -187,6 +199,7 @@ class TestCopyObject:
         assert root.find("ETag") is not None
         assert root.find("LastModified") is not None

+        # Verify copy exists
         path = storage.get_object_path("copy-dst", "copied.txt")
         assert path.read_bytes() == b"original content"

@@ -195,6 +208,7 @@ class TestCopyObject:
         storage.create_bucket("meta-dst")
         storage.put_object("meta-src", "source.txt", _stream(b"data"), metadata={"old": "value"})

+        # Copy with REPLACE directive
         headers = signer("PUT", "/meta-dst/target.txt")
         headers["x-amz-copy-source"] = "/meta-src/source.txt"
         headers["x-amz-metadata-directive"] = "REPLACE"
@@ -202,6 +216,7 @@ class TestCopyObject:
         resp = client.put("/meta-dst/target.txt", headers=headers)
         assert resp.status_code == 200

+        # Verify new metadata (note: header keys are Title-Cased)
         meta = storage.get_object_metadata("meta-dst", "target.txt")
         assert "New" in meta or "new" in meta
         assert "old" not in meta and "Old" not in meta
@@ -214,6 +229,7 @@ class TestObjectTagging:
         storage.create_bucket("obj-tag-test")
         storage.put_object("obj-tag-test", "tagged.txt", _stream(b"content"))

+        # PUT tags
         payload = b"""<?xml version="1.0" encoding="UTF-8"?>
 <Tagging>
     <TagSet>
@@ -231,6 +247,7 @@ class TestObjectTagging:
         )
         assert resp.status_code == 204

+        # GET tags
         headers = signer("GET", "/obj-tag-test/tagged.txt?tagging")
         resp = client.get("/obj-tag-test/tagged.txt", query_string={"tagging": ""}, headers=headers)
         assert resp.status_code == 200
@@ -240,10 +257,12 @@ class TestObjectTagging:
         assert tags["project"] == "demo"
         assert tags["env"] == "test"

+        # DELETE tags
         headers = signer("DELETE", "/obj-tag-test/tagged.txt?tagging")
         resp = client.delete("/obj-tag-test/tagged.txt", query_string={"tagging": ""}, headers=headers)
         assert resp.status_code == 204

+        # Verify empty
         headers = signer("GET", "/obj-tag-test/tagged.txt?tagging")
         resp = client.get("/obj-tag-test/tagged.txt", query_string={"tagging": ""}, headers=headers)
         root = fromstring(resp.data)
@@ -253,6 +272,7 @@ class TestObjectTagging:
         storage.create_bucket("tag-limit")
         storage.put_object("tag-limit", "file.txt", _stream(b"x"))

+        # Try to set 11 tags (limit is 10)
         tags = "".join(f"<Tag><Key>key{i}</Key><Value>val{i}</Value></Tag>" for i in range(11))
         payload = f"<Tagging><TagSet>{tags}</TagSet></Tagging>".encode()

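For comparison, the prefix/delimiter semantics tested above are what a stock S3 client expects: sub-"directories" come back as CommonPrefixes and direct children as Contents. A boto3 sketch against a local endpoint; the URL and credentials are placeholders, not values from this repo:

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://127.0.0.1:5000",  # placeholder endpoint
    aws_access_key_id="EXAMPLE",           # placeholder credentials
    aws_secret_access_key="EXAMPLE",
)
resp = s3.list_objects_v2(Bucket="prefix-test", Prefix="photos/", Delimiter="/")
print([p["Prefix"] for p in resp.get("CommonPrefixes", [])])  # e.g. photos/2023/, photos/2024/
print([o["Key"] for o in resp.get("Contents", [])])           # no direct children here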
@@ -1,374 +0,0 @@
-import json
-import time
-from datetime import datetime, timezone
-from pathlib import Path
-from unittest.mock import MagicMock, patch
-
-import pytest
-
-from app.notifications import (
-    NotificationConfiguration,
-    NotificationEvent,
-    NotificationService,
-    WebhookDestination,
-)
-
-
-class TestNotificationEvent:
-    def test_default_values(self):
-        event = NotificationEvent(
-            event_name="s3:ObjectCreated:Put",
-            bucket_name="test-bucket",
-            object_key="test/key.txt",
-        )
-        assert event.event_name == "s3:ObjectCreated:Put"
-        assert event.bucket_name == "test-bucket"
-        assert event.object_key == "test/key.txt"
-        assert event.object_size == 0
-        assert event.etag == ""
-        assert event.version_id is None
-        assert event.request_id != ""
-
-    def test_to_s3_event(self):
-        event = NotificationEvent(
-            event_name="s3:ObjectCreated:Put",
-            bucket_name="my-bucket",
-            object_key="my/object.txt",
-            object_size=1024,
-            etag="abc123",
-            version_id="v1",
-            source_ip="192.168.1.1",
-            user_identity="user123",
-        )
-        result = event.to_s3_event()
-
-        assert "Records" in result
-        assert len(result["Records"]) == 1
-
-        record = result["Records"][0]
-        assert record["eventVersion"] == "2.1"
-        assert record["eventSource"] == "myfsio:s3"
-        assert record["eventName"] == "s3:ObjectCreated:Put"
-        assert record["s3"]["bucket"]["name"] == "my-bucket"
-        assert record["s3"]["object"]["key"] == "my/object.txt"
-        assert record["s3"]["object"]["size"] == 1024
-        assert record["s3"]["object"]["eTag"] == "abc123"
-        assert record["s3"]["object"]["versionId"] == "v1"
-        assert record["userIdentity"]["principalId"] == "user123"
-        assert record["requestParameters"]["sourceIPAddress"] == "192.168.1.1"
-
-
-class TestWebhookDestination:
-    def test_default_values(self):
-        dest = WebhookDestination(url="http://example.com/webhook")
-        assert dest.url == "http://example.com/webhook"
-        assert dest.headers == {}
-        assert dest.timeout_seconds == 30
-        assert dest.retry_count == 3
-        assert dest.retry_delay_seconds == 1
-
-    def test_to_dict(self):
-        dest = WebhookDestination(
-            url="http://example.com/webhook",
-            headers={"X-Custom": "value"},
-            timeout_seconds=60,
-            retry_count=5,
-            retry_delay_seconds=2,
-        )
-        result = dest.to_dict()
-        assert result["url"] == "http://example.com/webhook"
-        assert result["headers"] == {"X-Custom": "value"}
-        assert result["timeout_seconds"] == 60
-        assert result["retry_count"] == 5
-        assert result["retry_delay_seconds"] == 2
-
-    def test_from_dict(self):
-        data = {
-            "url": "http://hook.example.com",
-            "headers": {"Authorization": "Bearer token"},
-            "timeout_seconds": 45,
-            "retry_count": 2,
-            "retry_delay_seconds": 5,
-        }
-        dest = WebhookDestination.from_dict(data)
-        assert dest.url == "http://hook.example.com"
-        assert dest.headers == {"Authorization": "Bearer token"}
-        assert dest.timeout_seconds == 45
-        assert dest.retry_count == 2
-        assert dest.retry_delay_seconds == 5
-
-
-class TestNotificationConfiguration:
-    def test_matches_event_exact_match(self):
-        config = NotificationConfiguration(
-            id="config1",
-            events=["s3:ObjectCreated:Put"],
-            destination=WebhookDestination(url="http://example.com"),
-        )
-        assert config.matches_event("s3:ObjectCreated:Put", "any/key.txt") is True
-        assert config.matches_event("s3:ObjectCreated:Post", "any/key.txt") is False
-
-    def test_matches_event_wildcard(self):
-        config = NotificationConfiguration(
-            id="config1",
-            events=["s3:ObjectCreated:*"],
-            destination=WebhookDestination(url="http://example.com"),
-        )
-        assert config.matches_event("s3:ObjectCreated:Put", "key.txt") is True
-        assert config.matches_event("s3:ObjectCreated:Copy", "key.txt") is True
-        assert config.matches_event("s3:ObjectRemoved:Delete", "key.txt") is False
-
-    def test_matches_event_with_prefix_filter(self):
-        config = NotificationConfiguration(
-            id="config1",
-            events=["s3:ObjectCreated:*"],
-            destination=WebhookDestination(url="http://example.com"),
-            prefix_filter="logs/",
-        )
-        assert config.matches_event("s3:ObjectCreated:Put", "logs/app.log") is True
-        assert config.matches_event("s3:ObjectCreated:Put", "data/file.txt") is False
-
-    def test_matches_event_with_suffix_filter(self):
-        config = NotificationConfiguration(
-            id="config1",
-            events=["s3:ObjectCreated:*"],
-            destination=WebhookDestination(url="http://example.com"),
-            suffix_filter=".jpg",
-        )
-        assert config.matches_event("s3:ObjectCreated:Put", "photos/image.jpg") is True
-        assert config.matches_event("s3:ObjectCreated:Put", "photos/image.png") is False
-
-    def test_matches_event_with_both_filters(self):
-        config = NotificationConfiguration(
-            id="config1",
-            events=["s3:ObjectCreated:*"],
-            destination=WebhookDestination(url="http://example.com"),
-            prefix_filter="images/",
-            suffix_filter=".png",
-        )
-        assert config.matches_event("s3:ObjectCreated:Put", "images/photo.png") is True
-        assert config.matches_event("s3:ObjectCreated:Put", "images/photo.jpg") is False
-        assert config.matches_event("s3:ObjectCreated:Put", "documents/file.png") is False
-
-    def test_to_dict(self):
-        config = NotificationConfiguration(
-            id="my-config",
-            events=["s3:ObjectCreated:Put", "s3:ObjectRemoved:Delete"],
-            destination=WebhookDestination(url="http://example.com"),
-            prefix_filter="logs/",
-            suffix_filter=".log",
-        )
-        result = config.to_dict()
-        assert result["Id"] == "my-config"
-        assert result["Events"] == ["s3:ObjectCreated:Put", "s3:ObjectRemoved:Delete"]
-        assert "Destination" in result
-        assert result["Filter"]["Key"]["FilterRules"][0]["Value"] == "logs/"
-        assert result["Filter"]["Key"]["FilterRules"][1]["Value"] == ".log"
-
-    def test_from_dict(self):
-        data = {
-            "Id": "parsed-config",
-            "Events": ["s3:ObjectCreated:*"],
-            "Destination": {"url": "http://hook.example.com"},
-            "Filter": {
-                "Key": {
-                    "FilterRules": [
-                        {"Name": "prefix", "Value": "data/"},
-                        {"Name": "suffix", "Value": ".csv"},
-                    ]
-                }
-            },
-        }
-        config = NotificationConfiguration.from_dict(data)
-        assert config.id == "parsed-config"
-        assert config.events == ["s3:ObjectCreated:*"]
-        assert config.destination.url == "http://hook.example.com"
-        assert config.prefix_filter == "data/"
-        assert config.suffix_filter == ".csv"
-
-
-@pytest.fixture
-def notification_service(tmp_path: Path):
-    service = NotificationService(tmp_path, worker_count=1)
-    yield service
-    service.shutdown()
-
-
-class TestNotificationService:
-    def test_get_bucket_notifications_empty(self, notification_service):
-        result = notification_service.get_bucket_notifications("nonexistent-bucket")
-        assert result == []
-
-    def test_set_and_get_bucket_notifications(self, notification_service):
-        configs = [
-            NotificationConfiguration(
-                id="config1",
-                events=["s3:ObjectCreated:*"],
-                destination=WebhookDestination(url="http://example.com/webhook1"),
-            ),
-            NotificationConfiguration(
-                id="config2",
-                events=["s3:ObjectRemoved:*"],
-                destination=WebhookDestination(url="http://example.com/webhook2"),
-            ),
-        ]
-        notification_service.set_bucket_notifications("my-bucket", configs)
-
-        retrieved = notification_service.get_bucket_notifications("my-bucket")
-        assert len(retrieved) == 2
-        assert retrieved[0].id == "config1"
-        assert retrieved[1].id == "config2"
-
-    def test_delete_bucket_notifications(self, notification_service):
-        configs = [
-            NotificationConfiguration(
-                id="to-delete",
-                events=["s3:ObjectCreated:*"],
-                destination=WebhookDestination(url="http://example.com"),
-            ),
-        ]
-        notification_service.set_bucket_notifications("delete-bucket", configs)
-        assert len(notification_service.get_bucket_notifications("delete-bucket")) == 1
-
-        notification_service.delete_bucket_notifications("delete-bucket")
-        notification_service._configs.clear()
-        assert len(notification_service.get_bucket_notifications("delete-bucket")) == 0
-
-    def test_emit_event_no_config(self, notification_service):
-        event = NotificationEvent(
-            event_name="s3:ObjectCreated:Put",
-            bucket_name="no-config-bucket",
-            object_key="test.txt",
-        )
-        notification_service.emit_event(event)
-        assert notification_service._stats["events_queued"] == 0
-
-    def test_emit_event_matching_config(self, notification_service):
-        configs = [
-            NotificationConfiguration(
-                id="match-config",
-                events=["s3:ObjectCreated:*"],
-                destination=WebhookDestination(url="http://example.com/webhook"),
-            ),
-        ]
-        notification_service.set_bucket_notifications("event-bucket", configs)
-
-        event = NotificationEvent(
-            event_name="s3:ObjectCreated:Put",
-            bucket_name="event-bucket",
-            object_key="test.txt",
-        )
-        notification_service.emit_event(event)
-        assert notification_service._stats["events_queued"] == 1
-
-    def test_emit_event_non_matching_config(self, notification_service):
-        configs = [
-            NotificationConfiguration(
-                id="delete-only",
-                events=["s3:ObjectRemoved:*"],
-                destination=WebhookDestination(url="http://example.com/webhook"),
-            ),
-        ]
-        notification_service.set_bucket_notifications("delete-bucket", configs)
-
-        event = NotificationEvent(
-            event_name="s3:ObjectCreated:Put",
-            bucket_name="delete-bucket",
-            object_key="test.txt",
-        )
-        notification_service.emit_event(event)
-        assert notification_service._stats["events_queued"] == 0
-
-    def test_emit_object_created(self, notification_service):
-        configs = [
-            NotificationConfiguration(
-                id="create-config",
-                events=["s3:ObjectCreated:Put"],
-                destination=WebhookDestination(url="http://example.com/webhook"),
-            ),
-        ]
-        notification_service.set_bucket_notifications("create-bucket", configs)
-
-        notification_service.emit_object_created(
-            "create-bucket",
-            "new-file.txt",
-            size=1024,
-            etag="abc123",
-            operation="Put",
-        )
-        assert notification_service._stats["events_queued"] == 1
-
-    def test_emit_object_removed(self, notification_service):
-        configs = [
-            NotificationConfiguration(
-                id="remove-config",
-                events=["s3:ObjectRemoved:Delete"],
-                destination=WebhookDestination(url="http://example.com/webhook"),
-            ),
-        ]
-        notification_service.set_bucket_notifications("remove-bucket", configs)
-
-        notification_service.emit_object_removed(
-            "remove-bucket",
-            "deleted-file.txt",
-            operation="Delete",
-        )
-        assert notification_service._stats["events_queued"] == 1
-
-    def test_get_stats(self, notification_service):
-        stats = notification_service.get_stats()
-        assert "events_queued" in stats
-        assert "events_sent" in stats
-        assert "events_failed" in stats
-
-    @patch("app.notifications.requests.post")
-    def test_send_notification_success(self, mock_post, notification_service):
-        mock_response = MagicMock()
-        mock_response.status_code = 200
-        mock_post.return_value = mock_response
-
-        event = NotificationEvent(
-            event_name="s3:ObjectCreated:Put",
-            bucket_name="test-bucket",
-            object_key="test.txt",
-        )
-        destination = WebhookDestination(url="http://example.com/webhook")
-
-        notification_service._send_notification(event, destination)
-        mock_post.assert_called_once()
-
-    @patch("app.notifications.requests.post")
-    def test_send_notification_retry_on_failure(self, mock_post, notification_service):
-        mock_response = MagicMock()
-        mock_response.status_code = 500
-        mock_response.text = "Internal Server Error"
-        mock_post.return_value = mock_response
-
-        event = NotificationEvent(
-            event_name="s3:ObjectCreated:Put",
-            bucket_name="test-bucket",
-            object_key="test.txt",
-        )
-        destination = WebhookDestination(
-            url="http://example.com/webhook",
-            retry_count=2,
-            retry_delay_seconds=0,
-        )
-
-        with pytest.raises(RuntimeError) as exc_info:
-            notification_service._send_notification(event, destination)
-        assert "Failed after 2 attempts" in str(exc_info.value)
-        assert mock_post.call_count == 2
-
-    def test_notification_caching(self, notification_service):
-        configs = [
-            NotificationConfiguration(
-                id="cached-config",
-                events=["s3:ObjectCreated:*"],
-                destination=WebhookDestination(url="http://example.com"),
-            ),
-        ]
-        notification_service.set_bucket_notifications("cached-bucket", configs)
-
-        notification_service.get_bucket_notifications("cached-bucket")
-        assert "cached-bucket" in notification_service._configs
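The retry contract asserted in test_send_notification_retry_on_failure (N attempts, then a RuntimeError naming the attempt count) is captured by a loop like the following. A sketch only; `send_with_retry` is a hypothetical stand-in for NotificationService._send_notification:

import time

import requests

def send_with_retry(url: str, payload: dict, retry_count: int = 3,
                    retry_delay_seconds: float = 1.0, timeout: int = 30) -> None:
    last_error = ""
    for attempt in range(retry_count):
        response = requests.post(url, json=payload, timeout=timeout)
        if 200 <= response.status_code < 300:
            return  # delivered
        last_error = f"HTTP {response.status_code}"
        if attempt + 1 < retry_count:
            time.sleep(retry_delay_seconds)  # back off before retrying
    raise RuntimeError(f"Failed after {retry_count} attempts: {last_error}")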
@@ -1,332 +0,0 @@
-import json
-from datetime import datetime, timedelta, timezone
-from pathlib import Path
-
-import pytest
-
-from app.object_lock import (
-    ObjectLockConfig,
-    ObjectLockError,
-    ObjectLockRetention,
-    ObjectLockService,
-    RetentionMode,
-)
-
-
-class TestRetentionMode:
-    def test_governance_mode(self):
-        assert RetentionMode.GOVERNANCE.value == "GOVERNANCE"
-
-    def test_compliance_mode(self):
-        assert RetentionMode.COMPLIANCE.value == "COMPLIANCE"
-
-
-class TestObjectLockRetention:
-    def test_to_dict(self):
-        retain_until = datetime(2025, 12, 31, 23, 59, 59, tzinfo=timezone.utc)
-        retention = ObjectLockRetention(
-            mode=RetentionMode.GOVERNANCE,
-            retain_until_date=retain_until,
-        )
-        result = retention.to_dict()
-        assert result["Mode"] == "GOVERNANCE"
-        assert "2025-12-31" in result["RetainUntilDate"]
-
-    def test_from_dict(self):
-        data = {
-            "Mode": "COMPLIANCE",
-            "RetainUntilDate": "2030-06-15T12:00:00+00:00",
-        }
-        retention = ObjectLockRetention.from_dict(data)
-        assert retention is not None
-        assert retention.mode == RetentionMode.COMPLIANCE
-        assert retention.retain_until_date.year == 2030
-
-    def test_from_dict_empty(self):
-        result = ObjectLockRetention.from_dict({})
-        assert result is None
-
-    def test_from_dict_missing_mode(self):
-        data = {"RetainUntilDate": "2030-06-15T12:00:00+00:00"}
-        result = ObjectLockRetention.from_dict(data)
-        assert result is None
-
-    def test_from_dict_missing_date(self):
-        data = {"Mode": "GOVERNANCE"}
-        result = ObjectLockRetention.from_dict(data)
-        assert result is None
-
-    def test_is_expired_future_date(self):
-        future = datetime.now(timezone.utc) + timedelta(days=30)
-        retention = ObjectLockRetention(
-            mode=RetentionMode.GOVERNANCE,
-            retain_until_date=future,
-        )
-        assert retention.is_expired() is False
-
-    def test_is_expired_past_date(self):
-        past = datetime.now(timezone.utc) - timedelta(days=30)
-        retention = ObjectLockRetention(
-            mode=RetentionMode.GOVERNANCE,
-            retain_until_date=past,
-        )
-        assert retention.is_expired() is True
-
-
-class TestObjectLockConfig:
-    def test_to_dict_enabled(self):
-        config = ObjectLockConfig(enabled=True)
-        result = config.to_dict()
-        assert result["ObjectLockEnabled"] == "Enabled"
-
-    def test_to_dict_disabled(self):
-        config = ObjectLockConfig(enabled=False)
-        result = config.to_dict()
-        assert result["ObjectLockEnabled"] == "Disabled"
-
-    def test_from_dict_enabled(self):
-        data = {"ObjectLockEnabled": "Enabled"}
-        config = ObjectLockConfig.from_dict(data)
-        assert config.enabled is True
-
-    def test_from_dict_disabled(self):
-        data = {"ObjectLockEnabled": "Disabled"}
-        config = ObjectLockConfig.from_dict(data)
-        assert config.enabled is False
-
-    def test_from_dict_with_default_retention_days(self):
-        data = {
-            "ObjectLockEnabled": "Enabled",
-            "Rule": {
-                "DefaultRetention": {
-                    "Mode": "GOVERNANCE",
-                    "Days": 30,
-                }
-            },
-        }
-        config = ObjectLockConfig.from_dict(data)
-        assert config.enabled is True
-        assert config.default_retention is not None
-        assert config.default_retention.mode == RetentionMode.GOVERNANCE
-
-    def test_from_dict_with_default_retention_years(self):
-        data = {
-            "ObjectLockEnabled": "Enabled",
-            "Rule": {
-                "DefaultRetention": {
-                    "Mode": "COMPLIANCE",
-                    "Years": 1,
-                }
-            },
-        }
-        config = ObjectLockConfig.from_dict(data)
-        assert config.enabled is True
-        assert config.default_retention is not None
-        assert config.default_retention.mode == RetentionMode.COMPLIANCE
-
-
-@pytest.fixture
-def lock_service(tmp_path: Path):
-    return ObjectLockService(tmp_path)
-
-
-class TestObjectLockService:
-    def test_get_bucket_lock_config_default(self, lock_service):
-        config = lock_service.get_bucket_lock_config("nonexistent-bucket")
-        assert config.enabled is False
-        assert config.default_retention is None
-
-    def test_set_and_get_bucket_lock_config(self, lock_service):
-        config = ObjectLockConfig(enabled=True)
-        lock_service.set_bucket_lock_config("my-bucket", config)
-
-        retrieved = lock_service.get_bucket_lock_config("my-bucket")
-        assert retrieved.enabled is True
-
-    def test_enable_bucket_lock(self, lock_service):
-        lock_service.enable_bucket_lock("lock-bucket")
-
-        config = lock_service.get_bucket_lock_config("lock-bucket")
-        assert config.enabled is True
-
-    def test_is_bucket_lock_enabled(self, lock_service):
-        assert lock_service.is_bucket_lock_enabled("new-bucket") is False
-
-        lock_service.enable_bucket_lock("new-bucket")
-        assert lock_service.is_bucket_lock_enabled("new-bucket") is True
-
-    def test_get_object_retention_not_set(self, lock_service):
-        result = lock_service.get_object_retention("bucket", "key.txt")
-        assert result is None
-
-    def test_set_and_get_object_retention(self, lock_service):
-        future = datetime.now(timezone.utc) + timedelta(days=30)
-        retention = ObjectLockRetention(
-            mode=RetentionMode.GOVERNANCE,
-            retain_until_date=future,
-        )
-        lock_service.set_object_retention("bucket", "key.txt", retention)
-
-        retrieved = lock_service.get_object_retention("bucket", "key.txt")
-        assert retrieved is not None
-        assert retrieved.mode == RetentionMode.GOVERNANCE
-
-    def test_cannot_modify_compliance_retention(self, lock_service):
-        future = datetime.now(timezone.utc) + timedelta(days=30)
-        retention = ObjectLockRetention(
-            mode=RetentionMode.COMPLIANCE,
-            retain_until_date=future,
-        )
-        lock_service.set_object_retention("bucket", "locked.txt", retention)
-
-        new_retention = ObjectLockRetention(
-            mode=RetentionMode.GOVERNANCE,
-            retain_until_date=future + timedelta(days=10),
-        )
-        with pytest.raises(ObjectLockError) as exc_info:
-            lock_service.set_object_retention("bucket", "locked.txt", new_retention)
-        assert "COMPLIANCE" in str(exc_info.value)
-
-    def test_cannot_modify_governance_without_bypass(self, lock_service):
-        future = datetime.now(timezone.utc) + timedelta(days=30)
-        retention = ObjectLockRetention(
-            mode=RetentionMode.GOVERNANCE,
-            retain_until_date=future,
-        )
-        lock_service.set_object_retention("bucket", "gov.txt", retention)
-
-        new_retention = ObjectLockRetention(
-            mode=RetentionMode.GOVERNANCE,
-            retain_until_date=future + timedelta(days=10),
-        )
-        with pytest.raises(ObjectLockError) as exc_info:
-            lock_service.set_object_retention("bucket", "gov.txt", new_retention)
-        assert "GOVERNANCE" in str(exc_info.value)
-
-    def test_can_modify_governance_with_bypass(self, lock_service):
-        future = datetime.now(timezone.utc) + timedelta(days=30)
-        retention = ObjectLockRetention(
-            mode=RetentionMode.GOVERNANCE,
-            retain_until_date=future,
-        )
-        lock_service.set_object_retention("bucket", "bypassable.txt", retention)
-
-        new_retention = ObjectLockRetention(
-            mode=RetentionMode.GOVERNANCE,
-            retain_until_date=future + timedelta(days=10),
-        )
-        lock_service.set_object_retention("bucket", "bypassable.txt", new_retention, bypass_governance=True)
-        retrieved = lock_service.get_object_retention("bucket", "bypassable.txt")
-        assert retrieved.retain_until_date > future
-
-    def test_can_modify_expired_retention(self, lock_service):
-        past = datetime.now(timezone.utc) - timedelta(days=30)
-        retention = ObjectLockRetention(
-            mode=RetentionMode.COMPLIANCE,
-            retain_until_date=past,
-        )
-        lock_service.set_object_retention("bucket", "expired.txt", retention)
-
-        future = datetime.now(timezone.utc) + timedelta(days=30)
-        new_retention = ObjectLockRetention(
-            mode=RetentionMode.GOVERNANCE,
-            retain_until_date=future,
-        )
-        lock_service.set_object_retention("bucket", "expired.txt", new_retention)
-        retrieved = lock_service.get_object_retention("bucket", "expired.txt")
-        assert retrieved.mode == RetentionMode.GOVERNANCE
-
-    def test_get_legal_hold_not_set(self, lock_service):
-        result = lock_service.get_legal_hold("bucket", "key.txt")
-        assert result is False
-
-    def test_set_and_get_legal_hold(self, lock_service):
-        lock_service.set_legal_hold("bucket", "held.txt", True)
-        assert lock_service.get_legal_hold("bucket", "held.txt") is True
-
-        lock_service.set_legal_hold("bucket", "held.txt", False)
-        assert lock_service.get_legal_hold("bucket", "held.txt") is False
-
-    def test_can_delete_object_no_lock(self, lock_service):
-        can_delete, reason = lock_service.can_delete_object("bucket", "unlocked.txt")
-        assert can_delete is True
-        assert reason == ""
-
-    def test_cannot_delete_object_with_legal_hold(self, lock_service):
-        lock_service.set_legal_hold("bucket", "held.txt", True)
-
-        can_delete, reason = lock_service.can_delete_object("bucket", "held.txt")
-        assert can_delete is False
-        assert "legal hold" in reason.lower()
-
-    def test_cannot_delete_object_with_compliance_retention(self, lock_service):
-        future = datetime.now(timezone.utc) + timedelta(days=30)
-        retention = ObjectLockRetention(
-            mode=RetentionMode.COMPLIANCE,
-            retain_until_date=future,
-        )
-        lock_service.set_object_retention("bucket", "compliant.txt", retention)
-
-        can_delete, reason = lock_service.can_delete_object("bucket", "compliant.txt")
|
|
||||||
assert can_delete is False
|
|
||||||
assert "COMPLIANCE" in reason
|
|
||||||
|
|
||||||
def test_cannot_delete_governance_without_bypass(self, lock_service):
|
|
||||||
future = datetime.now(timezone.utc) + timedelta(days=30)
|
|
||||||
retention = ObjectLockRetention(
|
|
||||||
mode=RetentionMode.GOVERNANCE,
|
|
||||||
retain_until_date=future,
|
|
||||||
)
|
|
||||||
lock_service.set_object_retention("bucket", "governed.txt", retention)
|
|
||||||
|
|
||||||
can_delete, reason = lock_service.can_delete_object("bucket", "governed.txt")
|
|
||||||
assert can_delete is False
|
|
||||||
assert "GOVERNANCE" in reason
|
|
||||||
|
|
||||||
def test_can_delete_governance_with_bypass(self, lock_service):
|
|
||||||
future = datetime.now(timezone.utc) + timedelta(days=30)
|
|
||||||
retention = ObjectLockRetention(
|
|
||||||
mode=RetentionMode.GOVERNANCE,
|
|
||||||
retain_until_date=future,
|
|
||||||
)
|
|
||||||
lock_service.set_object_retention("bucket", "governed.txt", retention)
|
|
||||||
|
|
||||||
can_delete, reason = lock_service.can_delete_object("bucket", "governed.txt", bypass_governance=True)
|
|
||||||
assert can_delete is True
|
|
||||||
assert reason == ""
|
|
||||||
|
|
||||||
def test_can_delete_expired_retention(self, lock_service):
|
|
||||||
past = datetime.now(timezone.utc) - timedelta(days=30)
|
|
||||||
retention = ObjectLockRetention(
|
|
||||||
mode=RetentionMode.COMPLIANCE,
|
|
||||||
retain_until_date=past,
|
|
||||||
)
|
|
||||||
lock_service.set_object_retention("bucket", "expired.txt", retention)
|
|
||||||
|
|
||||||
can_delete, reason = lock_service.can_delete_object("bucket", "expired.txt")
|
|
||||||
assert can_delete is True
|
|
||||||
|
|
||||||
def test_can_overwrite_is_same_as_delete(self, lock_service):
|
|
||||||
future = datetime.now(timezone.utc) + timedelta(days=30)
|
|
||||||
retention = ObjectLockRetention(
|
|
||||||
mode=RetentionMode.GOVERNANCE,
|
|
||||||
retain_until_date=future,
|
|
||||||
)
|
|
||||||
lock_service.set_object_retention("bucket", "overwrite.txt", retention)
|
|
||||||
|
|
||||||
can_overwrite, _ = lock_service.can_overwrite_object("bucket", "overwrite.txt")
|
|
||||||
can_delete, _ = lock_service.can_delete_object("bucket", "overwrite.txt")
|
|
||||||
assert can_overwrite == can_delete
|
|
||||||
|
|
||||||
def test_delete_object_lock_metadata(self, lock_service):
|
|
||||||
lock_service.set_legal_hold("bucket", "cleanup.txt", True)
|
|
||||||
lock_service.delete_object_lock_metadata("bucket", "cleanup.txt")
|
|
||||||
|
|
||||||
assert lock_service.get_legal_hold("bucket", "cleanup.txt") is False
|
|
||||||
|
|
||||||
def test_config_caching(self, lock_service):
|
|
||||||
config = ObjectLockConfig(enabled=True)
|
|
||||||
lock_service.set_bucket_lock_config("cached-bucket", config)
|
|
||||||
|
|
||||||
lock_service.get_bucket_lock_config("cached-bucket")
|
|
||||||
assert "cached-bucket" in lock_service._config_cache
|
|
||||||
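Taken together, these tests pin down a small decision procedure: a legal hold always blocks deletion, COMPLIANCE retention blocks it until it expires, and GOVERNANCE retention can be overridden only with an explicit bypass. A minimal sketch of that rule follows; the import path and function name are assumptions for illustration, not the project's actual implementation:

from datetime import datetime, timezone

# The retention types come from the module under test; this path is assumed.
from app.object_lock import RetentionMode


def can_delete(retention, legal_hold, bypass_governance=False):
    """Hypothetical restatement of the deletion rule the tests above exercise."""
    if legal_hold:
        # A legal hold blocks deletion unconditionally, whatever the retention says.
        return False, "object is under legal hold"
    if retention is not None and retention.retain_until_date > datetime.now(timezone.utc):
        if retention.mode == RetentionMode.COMPLIANCE:
            # COMPLIANCE retention cannot be shortened or bypassed until it expires.
            return False, f"COMPLIANCE retention active until {retention.retain_until_date}"
        if not bypass_governance:
            # GOVERNANCE retention also protects, but a privileged bypass overrides it.
            return False, f"GOVERNANCE retention active until {retention.retain_until_date}"
    # No hold and no unexpired retention: deletion (and overwrite) is allowed.
    return True, ""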
@@ -1,287 +0,0 @@
import json
import time
from pathlib import Path
from unittest.mock import MagicMock, patch

import pytest

from app.connections import ConnectionStore, RemoteConnection
from app.replication import (
    ReplicationManager,
    ReplicationRule,
    ReplicationStats,
    REPLICATION_MODE_ALL,
    REPLICATION_MODE_NEW_ONLY,
    _create_s3_client,
)
from app.storage import ObjectStorage


@pytest.fixture
def storage(tmp_path: Path):
    storage_root = tmp_path / "data"
    storage_root.mkdir(parents=True)
    return ObjectStorage(storage_root)


@pytest.fixture
def connections(tmp_path: Path):
    connections_path = tmp_path / "connections.json"
    store = ConnectionStore(connections_path)
    conn = RemoteConnection(
        id="test-conn",
        name="Test Remote",
        endpoint_url="http://localhost:9000",
        access_key="remote-access",
        secret_key="remote-secret",
        region="us-east-1",
    )
    store.add(conn)
    return store


@pytest.fixture
def replication_manager(storage, connections, tmp_path):
    rules_path = tmp_path / "replication_rules.json"
    storage_root = tmp_path / "data"
    storage_root.mkdir(exist_ok=True)
    manager = ReplicationManager(storage, connections, rules_path, storage_root)
    yield manager
    manager.shutdown(wait=False)


class TestReplicationStats:
    def test_to_dict(self):
        stats = ReplicationStats(
            objects_synced=10,
            objects_pending=5,
            objects_orphaned=2,
            bytes_synced=1024,
            last_sync_at=1234567890.0,
            last_sync_key="test/key.txt",
        )
        result = stats.to_dict()
        assert result["objects_synced"] == 10
        assert result["objects_pending"] == 5
        assert result["objects_orphaned"] == 2
        assert result["bytes_synced"] == 1024
        assert result["last_sync_at"] == 1234567890.0
        assert result["last_sync_key"] == "test/key.txt"

    def test_from_dict(self):
        data = {
            "objects_synced": 15,
            "objects_pending": 3,
            "objects_orphaned": 1,
            "bytes_synced": 2048,
            "last_sync_at": 9876543210.0,
            "last_sync_key": "another/key.txt",
        }
        stats = ReplicationStats.from_dict(data)
        assert stats.objects_synced == 15
        assert stats.objects_pending == 3
        assert stats.objects_orphaned == 1
        assert stats.bytes_synced == 2048
        assert stats.last_sync_at == 9876543210.0
        assert stats.last_sync_key == "another/key.txt"

    def test_from_dict_with_defaults(self):
        stats = ReplicationStats.from_dict({})
        assert stats.objects_synced == 0
        assert stats.objects_pending == 0
        assert stats.objects_orphaned == 0
        assert stats.bytes_synced == 0
        assert stats.last_sync_at is None
        assert stats.last_sync_key is None


class TestReplicationRule:
    def test_to_dict(self):
        rule = ReplicationRule(
            bucket_name="source-bucket",
            target_connection_id="test-conn",
            target_bucket="dest-bucket",
            enabled=True,
            mode=REPLICATION_MODE_ALL,
            created_at=1234567890.0,
        )
        result = rule.to_dict()
        assert result["bucket_name"] == "source-bucket"
        assert result["target_connection_id"] == "test-conn"
        assert result["target_bucket"] == "dest-bucket"
        assert result["enabled"] is True
        assert result["mode"] == REPLICATION_MODE_ALL
        assert result["created_at"] == 1234567890.0
        assert "stats" in result

    def test_from_dict(self):
        data = {
            "bucket_name": "my-bucket",
            "target_connection_id": "conn-123",
            "target_bucket": "remote-bucket",
            "enabled": False,
            "mode": REPLICATION_MODE_NEW_ONLY,
            "created_at": 1111111111.0,
            "stats": {"objects_synced": 5},
        }
        rule = ReplicationRule.from_dict(data)
        assert rule.bucket_name == "my-bucket"
        assert rule.target_connection_id == "conn-123"
        assert rule.target_bucket == "remote-bucket"
        assert rule.enabled is False
        assert rule.mode == REPLICATION_MODE_NEW_ONLY
        assert rule.created_at == 1111111111.0
        assert rule.stats.objects_synced == 5

    def test_from_dict_defaults_mode(self):
        data = {
            "bucket_name": "my-bucket",
            "target_connection_id": "conn-123",
            "target_bucket": "remote-bucket",
        }
        rule = ReplicationRule.from_dict(data)
        assert rule.mode == REPLICATION_MODE_NEW_ONLY
        assert rule.created_at is None


class TestReplicationManager:
    def test_get_rule_not_exists(self, replication_manager):
        rule = replication_manager.get_rule("nonexistent-bucket")
        assert rule is None

    def test_set_and_get_rule(self, replication_manager):
        rule = ReplicationRule(
            bucket_name="my-bucket",
            target_connection_id="test-conn",
            target_bucket="remote-bucket",
            enabled=True,
            mode=REPLICATION_MODE_NEW_ONLY,
            created_at=time.time(),
        )
        replication_manager.set_rule(rule)

        retrieved = replication_manager.get_rule("my-bucket")
        assert retrieved is not None
        assert retrieved.bucket_name == "my-bucket"
        assert retrieved.target_connection_id == "test-conn"
        assert retrieved.target_bucket == "remote-bucket"

    def test_delete_rule(self, replication_manager):
        rule = ReplicationRule(
            bucket_name="to-delete",
            target_connection_id="test-conn",
            target_bucket="remote-bucket",
        )
        replication_manager.set_rule(rule)
        assert replication_manager.get_rule("to-delete") is not None

        replication_manager.delete_rule("to-delete")
        assert replication_manager.get_rule("to-delete") is None

    def test_save_and_reload_rules(self, replication_manager, tmp_path):
        rule = ReplicationRule(
            bucket_name="persistent-bucket",
            target_connection_id="test-conn",
            target_bucket="remote-bucket",
            enabled=True,
        )
        replication_manager.set_rule(rule)

        rules_path = tmp_path / "replication_rules.json"
        assert rules_path.exists()
        data = json.loads(rules_path.read_text())
        assert "persistent-bucket" in data

    @patch("app.replication._create_s3_client")
    def test_check_endpoint_health_success(self, mock_create_client, replication_manager, connections):
        mock_client = MagicMock()
        mock_client.list_buckets.return_value = {"Buckets": []}
        mock_create_client.return_value = mock_client

        conn = connections.get("test-conn")
        result = replication_manager.check_endpoint_health(conn)
        assert result is True
        mock_client.list_buckets.assert_called_once()

    @patch("app.replication._create_s3_client")
    def test_check_endpoint_health_failure(self, mock_create_client, replication_manager, connections):
        mock_client = MagicMock()
        mock_client.list_buckets.side_effect = Exception("Connection refused")
        mock_create_client.return_value = mock_client

        conn = connections.get("test-conn")
        result = replication_manager.check_endpoint_health(conn)
        assert result is False

    def test_trigger_replication_no_rule(self, replication_manager):
        replication_manager.trigger_replication("no-such-bucket", "test.txt", "write")

    def test_trigger_replication_disabled_rule(self, replication_manager):
        rule = ReplicationRule(
            bucket_name="disabled-bucket",
            target_connection_id="test-conn",
            target_bucket="remote-bucket",
            enabled=False,
        )
        replication_manager.set_rule(rule)
        replication_manager.trigger_replication("disabled-bucket", "test.txt", "write")

    def test_trigger_replication_missing_connection(self, replication_manager):
        rule = ReplicationRule(
            bucket_name="orphan-bucket",
            target_connection_id="missing-conn",
            target_bucket="remote-bucket",
            enabled=True,
        )
        replication_manager.set_rule(rule)
        replication_manager.trigger_replication("orphan-bucket", "test.txt", "write")

    def test_replicate_task_path_traversal_blocked(self, replication_manager, connections):
        rule = ReplicationRule(
            bucket_name="secure-bucket",
            target_connection_id="test-conn",
            target_bucket="remote-bucket",
            enabled=True,
        )
        replication_manager.set_rule(rule)
        conn = connections.get("test-conn")

        replication_manager._replicate_task("secure-bucket", "../../../etc/passwd", rule, conn, "write")
        replication_manager._replicate_task("secure-bucket", "/root/secret", rule, conn, "write")
        replication_manager._replicate_task("secure-bucket", "..\\..\\windows\\system32", rule, conn, "write")


class TestCreateS3Client:
    @patch("app.replication.boto3.client")
    def test_creates_client_with_correct_config(self, mock_boto_client):
        conn = RemoteConnection(
            id="test",
            name="Test",
            endpoint_url="http://localhost:9000",
            access_key="access",
            secret_key="secret",
            region="eu-west-1",
        )
        _create_s3_client(conn)

        mock_boto_client.assert_called_once()
        call_kwargs = mock_boto_client.call_args[1]
        assert call_kwargs["endpoint_url"] == "http://localhost:9000"
        assert call_kwargs["aws_access_key_id"] == "access"
        assert call_kwargs["aws_secret_access_key"] == "secret"
        assert call_kwargs["region_name"] == "eu-west-1"

    @patch("app.replication.boto3.client")
    def test_health_check_mode_minimal_retries(self, mock_boto_client):
        conn = RemoteConnection(
            id="test",
            name="Test",
            endpoint_url="http://localhost:9000",
            access_key="access",
            secret_key="secret",
        )
        _create_s3_client(conn, health_check=True)

        call_kwargs = mock_boto_client.call_args[1]
        config = call_kwargs["config"]
        assert config.retries["max_attempts"] == 1
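The last two tests fix the client factory's contract: credentials, endpoint, and region pass straight through to boto3, and health checks get a config with a single retry attempt so a dead endpoint fails fast. A plausible shape for such a factory, assuming botocore's standard retry config (the timeouts and the non-health-check retry count here are assumptions, not the project's values):

import boto3
from botocore.config import Config


def _create_s3_client(conn, health_check=False):
    """Sketch of a boto3 client factory matching the assertions above."""
    if health_check:
        # Fail fast when probing a possibly-dead endpoint: one attempt, short timeouts.
        config = Config(retries={"max_attempts": 1}, connect_timeout=5, read_timeout=5)
    else:
        config = Config(retries={"max_attempts": 3})
    return boto3.client(
        "s3",
        endpoint_url=conn.endpoint_url,
        aws_access_key_id=conn.access_key,
        aws_secret_access_key=conn.secret_key,
        region_name=conn.region or "us-east-1",  # default region is an assumption
        config=config,
    )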
@@ -220,7 +220,7 @@ def test_bucket_config_filename_allowed(tmp_path):
    storage.create_bucket("demo")
    storage.put_object("demo", ".bucket.json", io.BytesIO(b"{}"))

-   objects = storage.list_objects_all("demo")
+   objects = storage.list_objects("demo")
    assert any(meta.key == ".bucket.json" for meta in objects)
@@ -62,7 +62,7 @@ def test_bulk_delete_json_route(tmp_path: Path):
    assert set(payload["deleted"]) == {"first.txt", "missing.txt"}
    assert payload["errors"] == []

-   listing = storage.list_objects_all("demo")
+   listing = storage.list_objects("demo")
    assert {meta.key for meta in listing} == {"second.txt"}
@@ -92,5 +92,5 @@ def test_bulk_delete_validation(tmp_path: Path):
    assert limit_response.status_code == 400
    assert limit_response.get_json()["status"] == "error"

-   still_there = storage.list_objects_all("demo")
+   still_there = storage.list_objects("demo")
    assert {meta.key for meta in still_there} == {"keep.txt"}
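All three hunks above make the same rename: on the v0.1.3 side the storage layer lists a bucket with a single `list_objects(bucket)` call, while the newer base commit splits listing into a paginated `list_objects` and an exhaustive `list_objects_all`. If the paginated variant follows the continuation-token shape used by the UI endpoint later in this diff, `list_objects_all` is essentially a drain loop; a hypothetical sketch (all parameter and attribute names beyond the two method names are assumptions):

def list_objects_all(storage, bucket):
    """Drain a paginated listing into one list (illustrative, not the project's code)."""
    results, token = [], None
    while True:
        page = storage.list_objects(bucket, continuation_token=token)
        results.extend(page.objects)
        if not page.is_truncated:
            return results
        # Resume from where the previous page stopped.
        token = page.next_continuation_token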
@@ -67,6 +67,7 @@ class TestUIBucketEncryption:
        app = _make_encryption_app(tmp_path)
        client = app.test_client()

+       # Login first
        client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)

        response = client.get("/ui/buckets/test-bucket?tab=properties")
@@ -81,11 +82,14 @@ class TestUIBucketEncryption:
        app = _make_encryption_app(tmp_path)
        client = app.test_client()

+       # Login
        client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)

+       # Get CSRF token
        response = client.get("/ui/buckets/test-bucket?tab=properties")
        csrf_token = get_csrf_token(response)

+       # Enable AES-256 encryption
        response = client.post(
            "/ui/buckets/test-bucket/encryption",
            data={
@@ -98,6 +102,7 @@ class TestUIBucketEncryption:

        assert response.status_code == 200
        html = response.data.decode("utf-8")
+       # Should see success message or enabled state
        assert "AES-256" in html or "encryption enabled" in html.lower()

    def test_enable_kms_encryption(self, tmp_path):
@@ -105,6 +110,7 @@ class TestUIBucketEncryption:
        app = _make_encryption_app(tmp_path, kms_enabled=True)
        client = app.test_client()

+       # Create a KMS key first
        with app.app_context():
            kms = app.extensions.get("kms")
            if kms:
@@ -113,11 +119,14 @@ class TestUIBucketEncryption:
            else:
                pytest.skip("KMS not available")

+       # Login
        client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)

+       # Get CSRF token
        response = client.get("/ui/buckets/test-bucket?tab=properties")
        csrf_token = get_csrf_token(response)

+       # Enable KMS encryption
        response = client.post(
            "/ui/buckets/test-bucket/encryption",
            data={
@@ -138,8 +147,10 @@ class TestUIBucketEncryption:
        app = _make_encryption_app(tmp_path)
        client = app.test_client()

+       # Login
        client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)

+       # First enable encryption
        response = client.get("/ui/buckets/test-bucket?tab=properties")
        csrf_token = get_csrf_token(response)
@@ -152,6 +163,7 @@ class TestUIBucketEncryption:
            },
        )

+       # Now disable it
        response = client.get("/ui/buckets/test-bucket?tab=properties")
        csrf_token = get_csrf_token(response)
@@ -173,6 +185,7 @@ class TestUIBucketEncryption:
        app = _make_encryption_app(tmp_path)
        client = app.test_client()

+       # Login
        client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)

        response = client.get("/ui/buckets/test-bucket?tab=properties")
@@ -197,8 +210,10 @@ class TestUIBucketEncryption:
        app = _make_encryption_app(tmp_path)
        client = app.test_client()

+       # Login
        client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)

+       # Enable encryption
        response = client.get("/ui/buckets/test-bucket?tab=properties")
        csrf_token = get_csrf_token(response)
@@ -211,6 +226,7 @@ class TestUIBucketEncryption:
            },
        )

+       # Verify it's stored
        with app.app_context():
            storage = app.extensions["object_storage"]
            config = storage.get_bucket_encryption("test-bucket")
@@ -228,8 +244,10 @@ class TestUIEncryptionWithoutPermission:
        app = _make_encryption_app(tmp_path)
        client = app.test_client()

+       # Login as readonly user
        client.post("/ui/login", data={"access_key": "readonly", "secret_key": "secret"}, follow_redirects=True)

+       # This should fail or be rejected
        response = client.get("/ui/buckets/test-bucket?tab=properties")
        csrf_token = get_csrf_token(response)
@@ -243,6 +261,8 @@ class TestUIEncryptionWithoutPermission:
            follow_redirects=True,
        )

+       # Should either redirect with error or show permission denied
        assert response.status_code == 200
        html = response.data.decode("utf-8")
+       # Should contain error about permission denied
        assert "Access denied" in html or "permission" in html.lower() or "not authorized" in html.lower()
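The hunks above only add explanatory comments around a recurring pattern: log in, fetch the bucket's properties tab, scrape the CSRF token, then POST the encryption form. A typical `get_csrf_token` helper for Flask test responses looks roughly like this (a sketch; the project's actual helper may parse the page differently):

import re


def get_csrf_token(response):
    """Pull the hidden CSRF field out of a rendered form (illustrative only)."""
    html = response.data.decode("utf-8")
    match = re.search(r'name="csrf_token"[^>]*value="([^"]+)"', html)
    return match.group(1) if match else None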
@@ -1,188 +0,0 @@
"""Tests for UI pagination of bucket objects."""

import json
from io import BytesIO
from pathlib import Path

import pytest

from app import create_app


def _make_app(tmp_path: Path):
    """Create an app for testing."""
    storage_root = tmp_path / "data"
    iam_config = tmp_path / "iam.json"
    bucket_policies = tmp_path / "bucket_policies.json"
    iam_payload = {
        "users": [
            {
                "access_key": "test",
                "secret_key": "secret",
                "display_name": "Test User",
                "policies": [{"bucket": "*", "actions": ["list", "read", "write", "delete", "policy"]}],
            },
        ]
    }
    iam_config.write_text(json.dumps(iam_payload))

    flask_app = create_app(
        {
            "TESTING": True,
            "WTF_CSRF_ENABLED": False,
            "STORAGE_ROOT": storage_root,
            "IAM_CONFIG": iam_config,
            "BUCKET_POLICY_PATH": bucket_policies,
        }
    )
    return flask_app


class TestPaginatedObjectListing:
    """Test paginated object listing API."""

    def test_objects_api_returns_paginated_results(self, tmp_path):
        """Objects API should return paginated results."""
        app = _make_app(tmp_path)
        storage = app.extensions["object_storage"]
        storage.create_bucket("test-bucket")

        # Create 10 test objects
        for i in range(10):
            storage.put_object("test-bucket", f"file{i:02d}.txt", BytesIO(b"content"))

        with app.test_client() as client:
            # Login first
            client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)

            # Request first page of 3 objects
            resp = client.get("/ui/buckets/test-bucket/objects?max_keys=3")
            assert resp.status_code == 200

            data = resp.get_json()
            assert len(data["objects"]) == 3
            assert data["is_truncated"] is True
            assert data["next_continuation_token"] is not None
            assert data["total_count"] == 10

    def test_objects_api_pagination_continuation(self, tmp_path):
        """Objects API should support continuation tokens."""
        app = _make_app(tmp_path)
        storage = app.extensions["object_storage"]
        storage.create_bucket("test-bucket")

        # Create 5 test objects
        for i in range(5):
            storage.put_object("test-bucket", f"file{i:02d}.txt", BytesIO(b"content"))

        with app.test_client() as client:
            client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)

            # Get first page
            resp = client.get("/ui/buckets/test-bucket/objects?max_keys=2")
            assert resp.status_code == 200
            data = resp.get_json()

            first_page_keys = [obj["key"] for obj in data["objects"]]
            assert len(first_page_keys) == 2
            assert data["is_truncated"] is True

            # Get second page
            token = data["next_continuation_token"]
            resp = client.get(f"/ui/buckets/test-bucket/objects?max_keys=2&continuation_token={token}")
            assert resp.status_code == 200
            data = resp.get_json()

            second_page_keys = [obj["key"] for obj in data["objects"]]
            assert len(second_page_keys) == 2

            # No overlap between pages
            assert set(first_page_keys).isdisjoint(set(second_page_keys))

    def test_objects_api_prefix_filter(self, tmp_path):
        """Objects API should support prefix filtering."""
        app = _make_app(tmp_path)
        storage = app.extensions["object_storage"]
        storage.create_bucket("test-bucket")

        # Create objects with different prefixes
        storage.put_object("test-bucket", "logs/access.log", BytesIO(b"log"))
        storage.put_object("test-bucket", "logs/error.log", BytesIO(b"log"))
        storage.put_object("test-bucket", "data/file.txt", BytesIO(b"data"))

        with app.test_client() as client:
            client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)

            # Filter by prefix
            resp = client.get("/ui/buckets/test-bucket/objects?prefix=logs/")
            assert resp.status_code == 200
            data = resp.get_json()

            keys = [obj["key"] for obj in data["objects"]]
            assert all(k.startswith("logs/") for k in keys)
            assert len(keys) == 2

    def test_objects_api_requires_authentication(self, tmp_path):
        """Objects API should require login."""
        app = _make_app(tmp_path)
        storage = app.extensions["object_storage"]
        storage.create_bucket("test-bucket")

        with app.test_client() as client:
            # Don't login
            resp = client.get("/ui/buckets/test-bucket/objects")
            # Should redirect to login
            assert resp.status_code == 302
            assert "/ui/login" in resp.headers.get("Location", "")

    def test_objects_api_returns_object_metadata(self, tmp_path):
        """Objects API should return complete object metadata."""
        app = _make_app(tmp_path)
        storage = app.extensions["object_storage"]
        storage.create_bucket("test-bucket")
        storage.put_object("test-bucket", "test.txt", BytesIO(b"test content"))

        with app.test_client() as client:
            client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)

            resp = client.get("/ui/buckets/test-bucket/objects")
            assert resp.status_code == 200
            data = resp.get_json()

            assert len(data["objects"]) == 1
            obj = data["objects"][0]

            # Check all expected fields
            assert obj["key"] == "test.txt"
            assert obj["size"] == 12  # len("test content")
            assert "last_modified" in obj
            assert "last_modified_display" in obj
            assert "etag" in obj

            # URLs are now returned as templates (not per-object) for performance
            assert "url_templates" in data
            templates = data["url_templates"]
            assert "preview" in templates
            assert "download" in templates
            assert "delete" in templates
            assert "KEY_PLACEHOLDER" in templates["preview"]

    def test_bucket_detail_page_loads_without_objects(self, tmp_path):
        """Bucket detail page should load even with many objects."""
        app = _make_app(tmp_path)
        storage = app.extensions["object_storage"]
        storage.create_bucket("test-bucket")

        # Create many objects
        for i in range(100):
            storage.put_object("test-bucket", f"file{i:03d}.txt", BytesIO(b"x"))

        with app.test_client() as client:
            client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)

            # The page should load quickly (objects loaded via JS)
            resp = client.get("/ui/buckets/test-bucket")
            assert resp.status_code == 200

            html = resp.data.decode("utf-8")
            # Should have the JavaScript loading infrastructure (external JS file)
            assert "bucket-detail-main.js" in html
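These deleted tests describe the endpoint's contract precisely: `max_keys` bounds the page size, `is_truncated` plus `next_continuation_token` drive the next request, `prefix` filters keys, and `total_count` reports the full match count. One straightforward way to satisfy that contract over a sorted key list, using the last key of a page as an opaque cursor (an illustration, not the removed implementation):

def paginate_keys(keys, max_keys=1000, continuation_token=None, prefix=""):
    """Page through sorted keys; the token is the last key of the previous page."""
    matching = sorted(k for k in keys if k.startswith(prefix))
    total = len(matching)
    if continuation_token:
        # Resume strictly after the cursor key from the previous page.
        matching = [k for k in matching if k > continuation_token]
    page = matching[:max_keys]
    truncated = len(matching) > max_keys
    return {
        "objects": page,
        "total_count": total,
        "is_truncated": truncated,
        "next_continuation_token": page[-1] if truncated else None,
    }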
@@ -70,12 +70,8 @@ def test_ui_bucket_policy_enforcement_toggle(tmp_path: Path, enforce: bool):
        assert b"Access denied by bucket policy" in response.data
    else:
        assert response.status_code == 200
+       assert b"vid.mp4" in response.data
        assert b"Access denied by bucket policy" not in response.data
-       # Objects are now loaded via async API - check the objects endpoint
-       objects_response = client.get("/ui/buckets/testbucket/objects")
-       assert objects_response.status_code == 200
-       data = objects_response.get_json()
-       assert any(obj["key"] == "vid.mp4" for obj in data["objects"])


def test_ui_bucket_policy_disabled_by_default(tmp_path: Path):
@@ -113,9 +109,5 @@ def test_ui_bucket_policy_disabled_by_default(tmp_path: Path):
    client.post("/ui/login", data={"access_key": "test", "secret_key": "secret"}, follow_redirects=True)
    response = client.get("/ui/buckets/testbucket", follow_redirects=True)
    assert response.status_code == 200
+   assert b"vid.mp4" in response.data
    assert b"Access denied by bucket policy" not in response.data
-   # Objects are now loaded via async API - check the objects endpoint
-   objects_response = client.get("/ui/buckets/testbucket/objects")
-   assert objects_response.status_code == 200
-   data = objects_response.get_json()
-   assert any(obj["key"] == "vid.mp4" for obj in data["objects"])