Compare commits: caf01d6ada ... v0.2.2 (25 commits)

- bb6590fc5e
- 4de936cea9
- adb9017580
- 4adfcc4131
- ebc315c1cc
- 5ab62a00ff
- 9c3518de63
- a52657e684
- 53297abe1e
- a3b9db544c
- f5d2e1c488
- f04c6a9cdc
- 7a494abb96
- 956d17a649
- 5522f9ac04
- 3742f0228e
- ba694cb717
- 433d291b4b
- 899db3421b
- e3509e997f
- 1c30200db0
- 7ff422d4dc
- 546d51af9a
- 0d1fe05fd0
- c5d4b2f1cd
@@ -32,6 +32,6 @@ ENV APP_HOST=0.0.0.0 \
     FLASK_DEBUG=0
 
 HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
-    CMD python -c "import requests; requests.get('http://localhost:5000/healthz', timeout=2)"
+    CMD python -c "import requests; requests.get('http://localhost:5000/myfsio/health', timeout=2)"
 
 CMD ["./docker-entrypoint.sh"]
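The only change in this file is the HEALTHCHECK probe, which now targets `/myfsio/health` instead of `/healthz`. A quick way to exercise the renamed endpoint against a locally running container, assuming it is reachable on port 5000 as in the HEALTHCHECK command (a sketch, not part of the change):

```python
# Manual probe of the renamed health endpoint; mirrors the Dockerfile
# HEALTHCHECK command. Host and port are assumptions taken from that command.
import requests

resp = requests.get("http://localhost:5000/myfsio/health", timeout=2)
resp.raise_for_status()  # unlike the HEALTHCHECK CMD, also fail on HTTP error codes
print(resp.json())       # the route now returns {"status": "ok"} (see the app diff below)
```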
LICENSE (new file, 661 lines)

@@ -0,0 +1,661 @@
+                    GNU AFFERO GENERAL PUBLIC LICENSE
+                       Version 3, 19 November 2007
+
+ Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+[Remainder of the new file: the unmodified text of the GNU Affero General Public License, version 3 (Preamble, Terms and Conditions sections 0-17, and the "How to Apply These Terms to Your New Programs" appendix), ending with the pointer to <https://www.gnu.org/licenses/>.]
README.md

@@ -149,19 +149,13 @@ All endpoints require AWS Signature Version 4 authentication unless using presig
 | `POST` | `/<bucket>/<key>?uploadId=X` | Complete multipart upload |
 | `DELETE` | `/<bucket>/<key>?uploadId=X` | Abort multipart upload |
 
-### Presigned URLs
+### Bucket Policies (S3-compatible)
 
 | Method | Endpoint | Description |
 |--------|----------|-------------|
-| `POST` | `/presign/<bucket>/<key>` | Generate presigned URL |
-
-### Bucket Policies
-
-| Method | Endpoint | Description |
-|--------|----------|-------------|
-| `GET` | `/bucket-policy/<bucket>` | Get bucket policy |
-| `PUT` | `/bucket-policy/<bucket>` | Set bucket policy |
-| `DELETE` | `/bucket-policy/<bucket>` | Delete bucket policy |
+| `GET` | `/<bucket>?policy` | Get bucket policy |
+| `PUT` | `/<bucket>?policy` | Set bucket policy |
+| `DELETE` | `/<bucket>?policy` | Delete bucket policy |
 
 ### Versioning
 
@@ -175,7 +169,7 @@ All endpoints require AWS Signature Version 4 authentication unless using presig
 
 | Method | Endpoint | Description |
 |--------|----------|-------------|
-| `GET` | `/healthz` | Health check endpoint |
+| `GET` | `/myfsio/health` | Health check endpoint |
 
 ## IAM & Access Control
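The policy endpoints move from the custom `/bucket-policy/<bucket>` paths to the S3-style `?policy` subresource, so standard S3 tooling that already speaks that subresource is meant to work against the service. A hedged sketch with boto3 (endpoint URL and credentials are placeholders; whether a given client works end to end still depends on the server's SigV4 handling, which this change does not touch):

```python
# Sketch only: manage a bucket policy through the S3-compatible ?policy
# subresource. Endpoint and credentials below are placeholders.
import json
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:5000",   # assumed local deployment
    aws_access_key_id="EXAMPLE_KEY",
    aws_secret_access_key="EXAMPLE_SECRET",
)

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::demo-bucket/*",
    }],
}

s3.put_bucket_policy(Bucket="demo-bucket", Policy=json.dumps(policy))
print(s3.get_bucket_policy(Bucket="demo-bucket")["Policy"])
s3.delete_bucket_policy(Bucket="demo-bucket")
```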
@@ -16,6 +16,8 @@ from flask_wtf.csrf import CSRFError
 from werkzeug.middleware.proxy_fix import ProxyFix
 
 from .access_logging import AccessLoggingService
+from .operation_metrics import OperationMetricsCollector, classify_endpoint
+from .compression import GzipMiddleware
 from .acl import AclService
 from .bucket_policies import BucketPolicyStore
 from .config import AppConfig
@@ -89,13 +91,24 @@ def create_app(
     # Trust X-Forwarded-* headers from proxies
     app.wsgi_app = ProxyFix(app.wsgi_app, x_for=1, x_proto=1, x_host=1, x_prefix=1)
 
+    # Enable gzip compression for responses (10-20x smaller JSON payloads)
+    if app.config.get("ENABLE_GZIP", True):
+        app.wsgi_app = GzipMiddleware(app.wsgi_app, compression_level=6)
+
     _configure_cors(app)
     _configure_logging(app)
 
     limiter.init_app(app)
     csrf.init_app(app)
 
-    storage = ObjectStorage(Path(app.config["STORAGE_ROOT"]))
+    storage = ObjectStorage(
+        Path(app.config["STORAGE_ROOT"]),
+        cache_ttl=app.config.get("OBJECT_CACHE_TTL", 5),
+    )
+
+    if app.config.get("WARM_CACHE_ON_STARTUP", True) and not app.config.get("TESTING"):
+        storage.warm_cache_async()
+
     iam = IamService(
         Path(app.config["IAM_CONFIG"]),
         auth_max_attempts=app.config.get("AUTH_MAX_ATTEMPTS", 5),
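The gzip wiring wraps the WSGI app in a `GzipMiddleware` from the project's `.compression` module, which is not shown in this diff. The "10-20x" figure in the comment is plausible for the kind of repetitive JSON these APIs return; a rough, self-contained illustration:

```python
# Rough illustration of gzip's effect on a repetitive JSON listing.
# Level 6 matches the compression_level passed in create_app above.
import gzip
import json

objects = [
    {"Key": f"logs/2024/01/{i:05d}.json", "Size": 1024, "StorageClass": "STANDARD"}
    for i in range(5000)
]
raw = json.dumps({"Contents": objects}).encode()
packed = gzip.compress(raw, compresslevel=6)
print(f"{len(raw)} -> {len(packed)} bytes ({len(raw) / len(packed):.1f}x smaller)")
```

Responses only shrink for clients that advertise `Accept-Encoding: gzip`; presumably the middleware checks that header, but its implementation sits outside this diff.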
@@ -175,6 +188,15 @@ def create_app(
     app.extensions["notifications"] = notification_service
     app.extensions["access_logging"] = access_logging_service
 
+    operation_metrics_collector = None
+    if app.config.get("OPERATION_METRICS_ENABLED", False):
+        operation_metrics_collector = OperationMetricsCollector(
+            storage_root,
+            interval_minutes=app.config.get("OPERATION_METRICS_INTERVAL_MINUTES", 5),
+            retention_hours=app.config.get("OPERATION_METRICS_RETENTION_HOURS", 24),
+        )
+    app.extensions["operation_metrics"] = operation_metrics_collector
+
     @app.errorhandler(500)
     def internal_error(error):
         return render_template('500.html'), 500
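The collector is opt-in: it is only constructed when `OPERATION_METRICS_ENABLED` is set, and the interval and retention knobs fall back to the defaults passed to `app.config.get`. In config terms (key names taken straight from this hunk):

```python
# Config keys read by this hunk, with the fallback values it uses.
OPERATION_METRICS_CONFIG = {
    "OPERATION_METRICS_ENABLED": True,           # default False: collector not created
    "OPERATION_METRICS_INTERVAL_MINUTES": 5,
    "OPERATION_METRICS_RETENTION_HOURS": 24,
}
```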
@@ -215,6 +237,30 @@ def create_app(
         except (ValueError, OSError):
             return "Unknown"
 
+    @app.template_filter("format_datetime")
+    def format_datetime_filter(dt, include_tz: bool = True) -> str:
+        """Format datetime object as human-readable string in configured timezone."""
+        from datetime import datetime, timezone as dt_timezone
+        from zoneinfo import ZoneInfo
+        if not dt:
+            return ""
+        try:
+            display_tz = app.config.get("DISPLAY_TIMEZONE", "UTC")
+            if display_tz and display_tz != "UTC":
+                try:
+                    tz = ZoneInfo(display_tz)
+                    if dt.tzinfo is None:
+                        dt = dt.replace(tzinfo=dt_timezone.utc)
+                    dt = dt.astimezone(tz)
+                except (KeyError, ValueError):
+                    pass
+            tz_abbr = dt.strftime("%Z") or "UTC"
+            if include_tz:
+                return f"{dt.strftime('%b %d, %Y %H:%M')} ({tz_abbr})"
+            return dt.strftime("%b %d, %Y %H:%M")
+        except (ValueError, AttributeError):
+            return str(dt)
+
     if include_api:
         from .s3_api import s3_api_bp
         from .kms_api import kms_api_bp
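The filter is registered via `@app.template_filter("format_datetime")`, so templates can pipe datetimes through it; `DISPLAY_TIMEZONE` controls the zone conversion and `include_tz` toggles the suffix. A small usage sketch (the `obj.last_modified` name is a placeholder, not something defined in this diff):

```python
# Hypothetical Jinja fragments using the new filter; the rendered output
# format follows the strftime patterns in the filter above.
with_tz = "{{ obj.last_modified | format_datetime }}"         # e.g. "Jan 05, 2025 14:30 (UTC)"
no_tz   = "{{ obj.last_modified | format_datetime(False) }}"  # e.g. "Jan 05, 2025 14:30"
```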
@@ -242,9 +288,9 @@ def create_app(
             return render_template("404.html"), 404
         return error
 
-    @app.get("/healthz")
+    @app.get("/myfsio/health")
     def healthcheck() -> Dict[str, str]:
-        return {"status": "ok", "version": app.config.get("APP_VERSION", "unknown")}
+        return {"status": "ok"}
 
     return app
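Renaming the route is a breaking change for anything that still probes `/healthz`; the Dockerfile HEALTHCHECK and README above were updated in the same set of commits, and the response body also loses the version field. A hedged smoke test with Flask's test client; the import path is a placeholder because the package name does not appear in this diff, and it assumes `create_app()` can be called with defaults:

```python
# Placeholder import path -- substitute the real package name.
from myfsio_app import create_app

app = create_app()
client = app.test_client()

resp = client.get("/myfsio/health")
assert resp.status_code == 200
assert resp.get_json() == {"status": "ok"}        # version field was dropped in this change

assert client.get("/healthz").status_code == 404  # old path is no longer registered
```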
@@ -320,6 +366,7 @@ def _configure_logging(app: Flask) -> None:
     def _log_request_start() -> None:
         g.request_id = uuid.uuid4().hex
         g.request_started_at = time.perf_counter()
+        g.request_bytes_in = request.content_length or 0
         app.logger.info(
             "Request started",
             extra={"path": request.path, "method": request.method, "remote_addr": request.remote_addr},
@@ -341,4 +388,21 @@ def _configure_logging(app: Flask) -> None:
             },
         )
         response.headers["X-Request-Duration-ms"] = f"{duration_ms:.2f}"
+
+        operation_metrics = app.extensions.get("operation_metrics")
+        if operation_metrics:
+            bytes_in = getattr(g, "request_bytes_in", 0)
+            bytes_out = response.content_length or 0
+            error_code = getattr(g, "s3_error_code", None)
+            endpoint_type = classify_endpoint(request.path)
+            operation_metrics.record_request(
+                method=request.method,
+                endpoint_type=endpoint_type,
+                status_code=response.status_code,
+                latency_ms=duration_ms,
+                bytes_in=bytes_in,
+                bytes_out=bytes_out,
+                error_code=error_code,
+            )
+
         return response
@@ -1,9 +1,10 @@
 from __future__ import annotations
 
+import ipaddress
 import json
 import re
 import time
-from dataclasses import dataclass
+from dataclasses import dataclass, field
 from fnmatch import fnmatch, translate
 from pathlib import Path
 from typing import Any, Dict, Iterable, List, Optional, Pattern, Sequence, Tuple
@@ -11,14 +12,71 @@ from typing import Any, Dict, Iterable, List, Optional, Pattern, Sequence, Tuple
 
 RESOURCE_PREFIX = "arn:aws:s3:::"
 
+
+def _match_string_like(value: str, pattern: str) -> bool:
+    regex = translate(pattern)
+    return bool(re.match(regex, value, re.IGNORECASE))
+
+
+def _ip_in_cidr(ip_str: str, cidr: str) -> bool:
+    try:
+        ip = ipaddress.ip_address(ip_str)
+        network = ipaddress.ip_network(cidr, strict=False)
+        return ip in network
+    except ValueError:
+        return False
+
+
+def _evaluate_condition_operator(
+    operator: str,
+    condition_key: str,
+    condition_values: List[str],
+    context: Dict[str, Any],
+) -> bool:
+    context_value = context.get(condition_key)
+    op_lower = operator.lower()
+    if_exists = op_lower.endswith("ifexists")
+    if if_exists:
+        op_lower = op_lower[:-8]
+
+    if context_value is None:
+        return if_exists
+
+    context_value_str = str(context_value)
+    context_value_lower = context_value_str.lower()
+
+    if op_lower == "stringequals":
+        return context_value_str in condition_values
+    elif op_lower == "stringnotequals":
+        return context_value_str not in condition_values
+    elif op_lower == "stringequalsignorecase":
+        return context_value_lower in [v.lower() for v in condition_values]
+    elif op_lower == "stringnotequalsignorecase":
+        return context_value_lower not in [v.lower() for v in condition_values]
+    elif op_lower == "stringlike":
+        return any(_match_string_like(context_value_str, p) for p in condition_values)
+    elif op_lower == "stringnotlike":
+        return not any(_match_string_like(context_value_str, p) for p in condition_values)
+    elif op_lower == "ipaddress":
+        return any(_ip_in_cidr(context_value_str, cidr) for cidr in condition_values)
+    elif op_lower == "notipaddress":
+        return not any(_ip_in_cidr(context_value_str, cidr) for cidr in condition_values)
+    elif op_lower == "bool":
+        bool_val = context_value_lower in ("true", "1", "yes")
+        return str(bool_val).lower() in [v.lower() for v in condition_values]
+    elif op_lower == "null":
+        is_null = context_value is None or context_value == ""
+        expected_null = condition_values[0].lower() in ("true", "1", "yes") if condition_values else True
+        return is_null == expected_null
+
+    return True
+
+
 ACTION_ALIASES = {
-    # List actions
     "s3:listbucket": "list",
     "s3:listallmybuckets": "list",
     "s3:listbucketversions": "list",
     "s3:listmultipartuploads": "list",
     "s3:listparts": "list",
-    # Read actions
     "s3:getobject": "read",
     "s3:getobjectversion": "read",
     "s3:getobjecttagging": "read",
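The operator names mirror the AWS policy condition operators. A quick illustration of how the new evaluator behaves for the IpAddress and StringLike operators and the `...IfExists` suffix; the `aws:` keys here are examples, since what the server actually puts into the context dict is defined elsewhere:

```python
# Example inputs for _evaluate_condition_operator as defined above.
# "aws:SourceIp" and "aws:UserAgent" are illustrative condition keys.
context = {"aws:SourceIp": "10.1.2.3", "aws:UserAgent": "aws-cli/2.15"}

_evaluate_condition_operator("IpAddress", "aws:SourceIp", ["10.0.0.0/8"], context)          # True
_evaluate_condition_operator("NotIpAddress", "aws:SourceIp", ["192.168.0.0/16"], context)   # True
_evaluate_condition_operator("StringLike", "aws:UserAgent", ["aws-cli/*"], context)         # True

# A key missing from the context fails the plain operator but passes IfExists.
_evaluate_condition_operator("StringEquals", "aws:RequestedRegion", ["us-east-1"], context)          # False
_evaluate_condition_operator("StringEqualsIfExists", "aws:RequestedRegion", ["us-east-1"], context)  # True
```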
@@ -27,7 +85,6 @@ ACTION_ALIASES = {
     "s3:getbucketversioning": "read",
     "s3:headobject": "read",
     "s3:headbucket": "read",
-    # Write actions
     "s3:putobject": "write",
     "s3:createbucket": "write",
     "s3:putobjecttagging": "write",
@@ -37,26 +94,30 @@ ACTION_ALIASES = {
     "s3:completemultipartupload": "write",
     "s3:abortmultipartupload": "write",
     "s3:copyobject": "write",
-    # Delete actions
     "s3:deleteobject": "delete",
     "s3:deleteobjectversion": "delete",
     "s3:deletebucket": "delete",
     "s3:deleteobjecttagging": "delete",
-    # Share actions (ACL)
     "s3:putobjectacl": "share",
     "s3:putbucketacl": "share",
     "s3:getbucketacl": "share",
-    # Policy actions
     "s3:putbucketpolicy": "policy",
     "s3:getbucketpolicy": "policy",
     "s3:deletebucketpolicy": "policy",
-    # Replication actions
     "s3:getreplicationconfiguration": "replication",
     "s3:putreplicationconfiguration": "replication",
     "s3:deletereplicationconfiguration": "replication",
     "s3:replicateobject": "replication",
     "s3:replicatetags": "replication",
     "s3:replicatedelete": "replication",
+    "s3:getlifecycleconfiguration": "lifecycle",
+    "s3:putlifecycleconfiguration": "lifecycle",
+    "s3:deletelifecycleconfiguration": "lifecycle",
+    "s3:getbucketlifecycle": "lifecycle",
+    "s3:putbucketlifecycle": "lifecycle",
+    "s3:getbucketcors": "cors",
+    "s3:putbucketcors": "cors",
+    "s3:deletebucketcors": "cors",
 }
@@ -135,18 +196,16 @@ class BucketPolicyStatement:
|
|||||||
principals: List[str] | str
|
principals: List[str] | str
|
||||||
actions: List[str]
|
actions: List[str]
|
||||||
resources: List[Tuple[str | None, str | None]]
|
resources: List[Tuple[str | None, str | None]]
|
||||||
# Performance: Pre-compiled regex patterns for resource matching
|
conditions: Dict[str, Dict[str, List[str]]] = field(default_factory=dict)
|
||||||
_compiled_patterns: List[Tuple[str | None, Optional[Pattern[str]]]] | None = None
|
_compiled_patterns: List[Tuple[str | None, Optional[Pattern[str]]]] | None = None
|
||||||
|
|
||||||
def _get_compiled_patterns(self) -> List[Tuple[str | None, Optional[Pattern[str]]]]:
|
def _get_compiled_patterns(self) -> List[Tuple[str | None, Optional[Pattern[str]]]]:
|
||||||
"""Lazily compile fnmatch patterns to regex for faster matching."""
|
|
||||||
if self._compiled_patterns is None:
|
if self._compiled_patterns is None:
|
||||||
self._compiled_patterns = []
|
self._compiled_patterns = []
|
||||||
for resource_bucket, key_pattern in self.resources:
|
for resource_bucket, key_pattern in self.resources:
|
||||||
if key_pattern is None:
|
if key_pattern is None:
|
||||||
self._compiled_patterns.append((resource_bucket, None))
|
self._compiled_patterns.append((resource_bucket, None))
|
||||||
else:
|
else:
|
||||||
# Convert fnmatch pattern to regex
|
|
||||||
regex_pattern = translate(key_pattern)
|
regex_pattern = translate(key_pattern)
|
||||||
self._compiled_patterns.append((resource_bucket, re.compile(regex_pattern)))
|
self._compiled_patterns.append((resource_bucket, re.compile(regex_pattern)))
|
||||||
return self._compiled_patterns
|
return self._compiled_patterns
|
||||||
@@ -173,11 +232,21 @@ class BucketPolicyStatement:
|
|||||||
if not key:
|
if not key:
|
||||||
return True
|
return True
|
||||||
continue
|
continue
|
||||||
# Performance: Use pre-compiled regex instead of fnmatch
|
|
||||||
if compiled_pattern.match(key):
|
if compiled_pattern.match(key):
|
||||||
return True
|
return True
|
||||||
return False
|
return False
|
||||||
|
|
||||||
|
def matches_condition(self, context: Optional[Dict[str, Any]]) -> bool:
|
||||||
|
if not self.conditions:
|
||||||
|
return True
|
||||||
|
if context is None:
|
||||||
|
context = {}
|
||||||
|
for operator, key_values in self.conditions.items():
|
||||||
|
for condition_key, condition_values in key_values.items():
|
||||||
|
if not _evaluate_condition_operator(operator, condition_key, condition_values, context):
|
||||||
|
return False
|
||||||
|
return True
|
||||||
|
|
||||||
|
|
||||||
class BucketPolicyStore:
|
class BucketPolicyStore:
|
||||||
"""Loads bucket policies from disk and evaluates statements."""
|
"""Loads bucket policies from disk and evaluates statements."""
|
||||||
@@ -219,6 +288,7 @@ class BucketPolicyStore:
|
|||||||
bucket: Optional[str],
|
bucket: Optional[str],
|
||||||
object_key: Optional[str],
|
object_key: Optional[str],
|
||||||
action: str,
|
action: str,
|
||||||
|
context: Optional[Dict[str, Any]] = None,
|
||||||
) -> str | None:
|
) -> str | None:
|
||||||
bucket = (bucket or "").lower()
|
bucket = (bucket or "").lower()
|
||||||
statements = self._policies.get(bucket) or []
|
statements = self._policies.get(bucket) or []
|
||||||
@@ -230,6 +300,8 @@ class BucketPolicyStore:
|
|||||||
continue
|
continue
|
||||||
if not statement.matches_resource(bucket, object_key):
|
if not statement.matches_resource(bucket, object_key):
|
||||||
continue
|
continue
|
||||||
|
if not statement.matches_condition(context):
|
||||||
|
continue
|
||||||
if statement.effect == "deny":
|
if statement.effect == "deny":
|
||||||
return "deny"
|
return "deny"
|
||||||
decision = "allow"
|
decision = "allow"
|
||||||
@@ -294,6 +366,7 @@ class BucketPolicyStore:
|
|||||||
if not resources:
|
if not resources:
|
||||||
continue
|
continue
|
||||||
effect = statement.get("Effect", "Allow").lower()
|
effect = statement.get("Effect", "Allow").lower()
|
||||||
|
conditions = self._normalize_conditions(statement.get("Condition", {}))
|
||||||
statements.append(
|
statements.append(
|
||||||
BucketPolicyStatement(
|
BucketPolicyStatement(
|
||||||
sid=statement.get("Sid"),
|
sid=statement.get("Sid"),
|
||||||
@@ -301,6 +374,24 @@ class BucketPolicyStore:
|
|||||||
principals=principals,
|
principals=principals,
|
||||||
actions=actions or ["*"],
|
actions=actions or ["*"],
|
||||||
resources=resources,
|
resources=resources,
|
||||||
|
conditions=conditions,
|
||||||
)
|
)
|
||||||
)
|
)
|
||||||
return statements
|
return statements
|
||||||
|
|
||||||
|
def _normalize_conditions(self, condition_block: Dict[str, Any]) -> Dict[str, Dict[str, List[str]]]:
|
||||||
|
if not condition_block or not isinstance(condition_block, dict):
|
||||||
|
return {}
|
||||||
|
normalized: Dict[str, Dict[str, List[str]]] = {}
|
||||||
|
for operator, key_values in condition_block.items():
|
||||||
|
if not isinstance(key_values, dict):
|
||||||
|
continue
|
||||||
|
normalized[operator] = {}
|
||||||
|
for cond_key, cond_values in key_values.items():
|
||||||
|
if isinstance(cond_values, str):
|
||||||
|
normalized[operator][cond_key] = [cond_values]
|
||||||
|
elif isinstance(cond_values, list):
|
||||||
|
normalized[operator][cond_key] = [str(v) for v in cond_values]
|
||||||
|
else:
|
||||||
|
normalized[operator][cond_key] = [str(cond_values)]
|
||||||
|
return normalized
|
||||||
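For orientation, the sketch below shows the shape of policy Condition block the new code accepts and the kind of request context it is evaluated against. The JSON layout follows _normalize_conditions and the operators handled by _evaluate_condition_operator; the bucket name, CIDR, and access key are made-up example values, and the store construction is omitted because it is not part of this diff.

# Illustrative only: a statement whose Condition uses the newly supported operators.
statement_json = {
    "Sid": "AllowOfficeReads",
    "Effect": "Allow",
    "Principal": "*",
    "Action": ["s3:GetObject"],
    "Resource": ["arn:aws:s3:::demo-bucket/*"],
    "Condition": {
        "IpAddress": {"aws:SourceIp": "203.0.113.0/24"},   # CIDR checked via _ip_in_cidr
        "Bool": {"aws:SecureTransport": "true"},            # values normalized to lists of strings
    },
}

# Per-request context, as assembled by _build_policy_context() in app/s3_api.py.
request_context = {"aws:SourceIp": "203.0.113.7", "aws:SecureTransport": "true"}

# With a loaded BucketPolicyStore (construction not shown in this diff), evaluation
# now takes the context as the extra trailing argument:
#   decision = store.evaluate(access_key, "demo-bucket", "reports/q3.csv", "read", request_context)
#   # -> "allow", "deny", or None when no statement matches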
app/compression.py (new file, 94 lines)
@@ -0,0 +1,94 @@
from __future__ import annotations

import gzip
import io
from typing import Callable, Iterable, List, Tuple

COMPRESSIBLE_MIMES = frozenset([
    'application/json',
    'application/javascript',
    'application/xml',
    'text/html',
    'text/css',
    'text/plain',
    'text/xml',
    'text/javascript',
    'application/x-ndjson',
])

MIN_SIZE_FOR_COMPRESSION = 500


class GzipMiddleware:
    def __init__(self, app: Callable, compression_level: int = 6, min_size: int = MIN_SIZE_FOR_COMPRESSION):
        self.app = app
        self.compression_level = compression_level
        self.min_size = min_size

    def __call__(self, environ: dict, start_response: Callable) -> Iterable[bytes]:
        accept_encoding = environ.get('HTTP_ACCEPT_ENCODING', '')
        if 'gzip' not in accept_encoding.lower():
            return self.app(environ, start_response)

        response_started = False
        status_code = None
        response_headers: List[Tuple[str, str]] = []
        content_type = None
        content_length = None
        should_compress = False
        exc_info_holder = [None]

        def custom_start_response(status: str, headers: List[Tuple[str, str]], exc_info=None):
            nonlocal response_started, status_code, response_headers, content_type, content_length, should_compress
            response_started = True
            status_code = int(status.split(' ', 1)[0])
            response_headers = list(headers)
            exc_info_holder[0] = exc_info

            for name, value in headers:
                name_lower = name.lower()
                if name_lower == 'content-type':
                    content_type = value.split(';')[0].strip().lower()
                elif name_lower == 'content-length':
                    content_length = int(value)
                elif name_lower == 'content-encoding':
                    should_compress = False
                    return start_response(status, headers, exc_info)

            if content_type and content_type in COMPRESSIBLE_MIMES:
                if content_length is None or content_length >= self.min_size:
                    should_compress = True

            return None

        response_body = b''.join(self.app(environ, custom_start_response))

        if not response_started:
            return [response_body]

        if should_compress and len(response_body) >= self.min_size:
            buf = io.BytesIO()
            with gzip.GzipFile(fileobj=buf, mode='wb', compresslevel=self.compression_level) as gz:
                gz.write(response_body)
            compressed = buf.getvalue()

            if len(compressed) < len(response_body):
                response_body = compressed
                new_headers = []
                for name, value in response_headers:
                    if name.lower() not in ('content-length', 'content-encoding'):
                        new_headers.append((name, value))
                new_headers.append(('Content-Encoding', 'gzip'))
                new_headers.append(('Content-Length', str(len(response_body))))
                new_headers.append(('Vary', 'Accept-Encoding'))
                response_headers = new_headers

        status_str = f"{status_code} " + {
            200: "OK", 201: "Created", 204: "No Content", 206: "Partial Content",
            301: "Moved Permanently", 302: "Found", 304: "Not Modified",
            400: "Bad Request", 401: "Unauthorized", 403: "Forbidden", 404: "Not Found",
            405: "Method Not Allowed", 409: "Conflict", 500: "Internal Server Error",
        }.get(status_code, "Unknown")

        start_response(status_str, response_headers, exc_info_holder[0])
        return [response_body]
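The diff does not show where GzipMiddleware is mounted; the usual WSGI wiring would look roughly like the sketch below. The Flask app here is a stand-in, not code from this repository.

from flask import Flask, jsonify

from app.compression import GzipMiddleware  # module path as added above

app = Flask(__name__)
# Wrap the WSGI callable; responses with a compressible Content-Type and at least
# min_size bytes are gzipped when the client sends Accept-Encoding: gzip.
app.wsgi_app = GzipMiddleware(app.wsgi_app, compression_level=6, min_size=500)

@app.route("/demo")
def demo():
    return jsonify(payload="x" * 2048)  # application/json is in COMPRESSIBLE_MIMES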
@@ -1,6 +1,7 @@
 from __future__ import annotations

 import os
+import re
 import secrets
 import shutil
 import sys
@@ -9,6 +10,13 @@ from dataclasses import dataclass
 from pathlib import Path
 from typing import Any, Dict, Optional

+
+def _validate_rate_limit(value: str) -> str:
+    pattern = r"^\d+\s+per\s+(second|minute|hour|day)$"
+    if not re.match(pattern, value):
+        raise ValueError(f"Invalid rate limit format: {value}. Expected format: '200 per minute'")
+    return value
+
 if getattr(sys, "frozen", False):
     # Running in a PyInstaller bundle
     PROJECT_ROOT = Path(sys._MEIPASS)
@@ -67,6 +75,7 @@ class AppConfig:
     stream_chunk_size: int
     multipart_min_part_size: int
     bucket_stats_cache_ttl: int
+    object_cache_ttl: int
     encryption_enabled: bool
     encryption_master_key_path: Path
     kms_enabled: bool
@@ -75,6 +84,12 @@ class AppConfig:
     display_timezone: str
     lifecycle_enabled: bool
     lifecycle_interval_seconds: int
+    metrics_history_enabled: bool
+    metrics_history_retention_hours: int
+    metrics_history_interval_minutes: int
+    operation_metrics_enabled: bool
+    operation_metrics_interval_minutes: int
+    operation_metrics_retention_hours: int

     @classmethod
     def from_env(cls, overrides: Optional[Dict[str, Any]] = None) -> "AppConfig":
@@ -147,7 +162,7 @@ class AppConfig:
         log_path = log_dir / str(_get("LOG_FILE", "app.log"))
         log_max_bytes = int(_get("LOG_MAX_BYTES", 5 * 1024 * 1024))
         log_backup_count = int(_get("LOG_BACKUP_COUNT", 3))
-        ratelimit_default = str(_get("RATE_LIMIT_DEFAULT", "200 per minute"))
+        ratelimit_default = _validate_rate_limit(str(_get("RATE_LIMIT_DEFAULT", "200 per minute")))
         ratelimit_storage_uri = str(_get("RATE_LIMIT_STORAGE_URI", "memory://"))

         def _csv(value: str, default: list[str]) -> list[str]:
@@ -162,6 +177,7 @@ class AppConfig:
         cors_expose_headers = _csv(str(_get("CORS_EXPOSE_HEADERS", "*")), ["*"])
         session_lifetime_days = int(_get("SESSION_LIFETIME_DAYS", 30))
         bucket_stats_cache_ttl = int(_get("BUCKET_STATS_CACHE_TTL", 60))
+        object_cache_ttl = int(_get("OBJECT_CACHE_TTL", 5))

         encryption_enabled = str(_get("ENCRYPTION_ENABLED", "0")).lower() in {"1", "true", "yes", "on"}
         encryption_keys_dir = storage_root / ".myfsio.sys" / "keys"
@@ -170,6 +186,12 @@ class AppConfig:
         kms_keys_path = Path(_get("KMS_KEYS_PATH", encryption_keys_dir / "kms_keys.json")).resolve()
         default_encryption_algorithm = str(_get("DEFAULT_ENCRYPTION_ALGORITHM", "AES256"))
         display_timezone = str(_get("DISPLAY_TIMEZONE", "UTC"))
+        metrics_history_enabled = str(_get("METRICS_HISTORY_ENABLED", "0")).lower() in {"1", "true", "yes", "on"}
+        metrics_history_retention_hours = int(_get("METRICS_HISTORY_RETENTION_HOURS", 24))
+        metrics_history_interval_minutes = int(_get("METRICS_HISTORY_INTERVAL_MINUTES", 5))
+        operation_metrics_enabled = str(_get("OPERATION_METRICS_ENABLED", "0")).lower() in {"1", "true", "yes", "on"}
+        operation_metrics_interval_minutes = int(_get("OPERATION_METRICS_INTERVAL_MINUTES", 5))
+        operation_metrics_retention_hours = int(_get("OPERATION_METRICS_RETENTION_HOURS", 24))

         return cls(storage_root=storage_root,
                    max_upload_size=max_upload_size,
@@ -200,6 +222,7 @@ class AppConfig:
                    stream_chunk_size=stream_chunk_size,
                    multipart_min_part_size=multipart_min_part_size,
                    bucket_stats_cache_ttl=bucket_stats_cache_ttl,
+                   object_cache_ttl=object_cache_ttl,
                    encryption_enabled=encryption_enabled,
                    encryption_master_key_path=encryption_master_key_path,
                    kms_enabled=kms_enabled,
@@ -207,7 +230,13 @@ class AppConfig:
                    default_encryption_algorithm=default_encryption_algorithm,
                    display_timezone=display_timezone,
                    lifecycle_enabled=lifecycle_enabled,
-                   lifecycle_interval_seconds=lifecycle_interval_seconds)
+                   lifecycle_interval_seconds=lifecycle_interval_seconds,
+                   metrics_history_enabled=metrics_history_enabled,
+                   metrics_history_retention_hours=metrics_history_retention_hours,
+                   metrics_history_interval_minutes=metrics_history_interval_minutes,
+                   operation_metrics_enabled=operation_metrics_enabled,
+                   operation_metrics_interval_minutes=operation_metrics_interval_minutes,
+                   operation_metrics_retention_hours=operation_metrics_retention_hours)

     def validate_and_report(self) -> list[str]:
         """Validate configuration and return a list of warnings/issues.
@@ -315,6 +344,7 @@ class AppConfig:
             "STREAM_CHUNK_SIZE": self.stream_chunk_size,
             "MULTIPART_MIN_PART_SIZE": self.multipart_min_part_size,
             "BUCKET_STATS_CACHE_TTL": self.bucket_stats_cache_ttl,
+            "OBJECT_CACHE_TTL": self.object_cache_ttl,
             "LOG_LEVEL": self.log_level,
             "LOG_TO_FILE": self.log_to_file,
             "LOG_FILE": str(self.log_path),
@@ -333,4 +363,12 @@ class AppConfig:
             "KMS_KEYS_PATH": str(self.kms_keys_path),
             "DEFAULT_ENCRYPTION_ALGORITHM": self.default_encryption_algorithm,
             "DISPLAY_TIMEZONE": self.display_timezone,
+            "LIFECYCLE_ENABLED": self.lifecycle_enabled,
+            "LIFECYCLE_INTERVAL_SECONDS": self.lifecycle_interval_seconds,
+            "METRICS_HISTORY_ENABLED": self.metrics_history_enabled,
+            "METRICS_HISTORY_RETENTION_HOURS": self.metrics_history_retention_hours,
+            "METRICS_HISTORY_INTERVAL_MINUTES": self.metrics_history_interval_minutes,
+            "OPERATION_METRICS_ENABLED": self.operation_metrics_enabled,
+            "OPERATION_METRICS_INTERVAL_MINUTES": self.operation_metrics_interval_minutes,
+            "OPERATION_METRICS_RETENTION_HOURS": self.operation_metrics_retention_hours,
         }
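To make the new RATE_LIMIT_DEFAULT validation concrete, the calls below show what the regex accepts and rejects. The import path is an assumption based on the surrounding config module.

from app.config import _validate_rate_limit  # assumed module path

print(_validate_rate_limit("200 per minute"))   # accepted and returned unchanged
print(_validate_rate_limit("10 per second"))    # accepted

try:
    _validate_rate_limit("200/minute")          # wrong separator -> rejected at startup
except ValueError as exc:
    print(exc)  # Invalid rate limit format: 200/minute. Expected format: '200 per minute'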
app/iam.py (15 changes)
@@ -1,5 +1,6 @@
 from __future__ import annotations

+import hmac
 import json
 import math
 import secrets
@@ -15,7 +16,7 @@ class IamError(RuntimeError):
     """Raised when authentication or authorization fails."""


-S3_ACTIONS = {"list", "read", "write", "delete", "share", "policy", "replication"}
+S3_ACTIONS = {"list", "read", "write", "delete", "share", "policy", "replication", "lifecycle", "cors"}
 IAM_ACTIONS = {
     "iam:list_users",
     "iam:create_user",
@@ -71,6 +72,16 @@ ACTION_ALIASES = {
     "s3:replicateobject": "replication",
     "s3:replicatetags": "replication",
     "s3:replicatedelete": "replication",
+    "lifecycle": "lifecycle",
+    "s3:getlifecycleconfiguration": "lifecycle",
+    "s3:putlifecycleconfiguration": "lifecycle",
+    "s3:deletelifecycleconfiguration": "lifecycle",
+    "s3:getbucketlifecycle": "lifecycle",
+    "s3:putbucketlifecycle": "lifecycle",
+    "cors": "cors",
+    "s3:getbucketcors": "cors",
+    "s3:putbucketcors": "cors",
+    "s3:deletebucketcors": "cors",
     "iam:listusers": "iam:list_users",
     "iam:createuser": "iam:create_user",
     "iam:deleteuser": "iam:delete_user",
@@ -139,7 +150,7 @@ class IamService:
                 f"Access temporarily locked. Try again in {seconds} seconds."
             )
         record = self._users.get(access_key)
-        if not record or record["secret_key"] != secret_key:
+        if not record or not hmac.compare_digest(record["secret_key"], secret_key):
             self._record_failed_attempt(access_key)
             raise IamError("Invalid credentials")
         self._clear_failed_attempts(access_key)
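The switch from == to hmac.compare_digest makes the secret-key comparison constant-time, so response timing no longer hints at how much of a guessed secret was correct. A small self-contained illustration of the same call (the secret value is an example only):

import hmac

stored_secret = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"  # example value only

def secrets_match(candidate: str) -> bool:
    # Same comparison the updated IamService path uses; runtime does not depend
    # on where the first mismatching character occurs.
    return hmac.compare_digest(stored_secret, candidate)

print(secrets_match(stored_secret))   # True
print(secrets_match("wrong-secret"))  # False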
app/operation_metrics.py (new file, 271 lines)
@@ -0,0 +1,271 @@
from __future__ import annotations

import json
import logging
import threading
import time
from dataclasses import dataclass, field
from datetime import datetime, timezone
from pathlib import Path
from typing import Any, Dict, List, Optional

logger = logging.getLogger(__name__)


@dataclass
class OperationStats:
    count: int = 0
    success_count: int = 0
    error_count: int = 0
    latency_sum_ms: float = 0.0
    latency_min_ms: float = float("inf")
    latency_max_ms: float = 0.0
    bytes_in: int = 0
    bytes_out: int = 0

    def record(self, latency_ms: float, success: bool, bytes_in: int = 0, bytes_out: int = 0) -> None:
        self.count += 1
        if success:
            self.success_count += 1
        else:
            self.error_count += 1
        self.latency_sum_ms += latency_ms
        if latency_ms < self.latency_min_ms:
            self.latency_min_ms = latency_ms
        if latency_ms > self.latency_max_ms:
            self.latency_max_ms = latency_ms
        self.bytes_in += bytes_in
        self.bytes_out += bytes_out

    def to_dict(self) -> Dict[str, Any]:
        avg_latency = self.latency_sum_ms / self.count if self.count > 0 else 0.0
        min_latency = self.latency_min_ms if self.latency_min_ms != float("inf") else 0.0
        return {
            "count": self.count,
            "success_count": self.success_count,
            "error_count": self.error_count,
            "latency_avg_ms": round(avg_latency, 2),
            "latency_min_ms": round(min_latency, 2),
            "latency_max_ms": round(self.latency_max_ms, 2),
            "bytes_in": self.bytes_in,
            "bytes_out": self.bytes_out,
        }

    def merge(self, other: "OperationStats") -> None:
        self.count += other.count
        self.success_count += other.success_count
        self.error_count += other.error_count
        self.latency_sum_ms += other.latency_sum_ms
        if other.latency_min_ms < self.latency_min_ms:
            self.latency_min_ms = other.latency_min_ms
        if other.latency_max_ms > self.latency_max_ms:
            self.latency_max_ms = other.latency_max_ms
        self.bytes_in += other.bytes_in
        self.bytes_out += other.bytes_out


@dataclass
class MetricsSnapshot:
    timestamp: datetime
    window_seconds: int
    by_method: Dict[str, Dict[str, Any]]
    by_endpoint: Dict[str, Dict[str, Any]]
    by_status_class: Dict[str, int]
    error_codes: Dict[str, int]
    totals: Dict[str, Any]

    def to_dict(self) -> Dict[str, Any]:
        return {
            "timestamp": self.timestamp.isoformat(),
            "window_seconds": self.window_seconds,
            "by_method": self.by_method,
            "by_endpoint": self.by_endpoint,
            "by_status_class": self.by_status_class,
            "error_codes": self.error_codes,
            "totals": self.totals,
        }

    @classmethod
    def from_dict(cls, data: Dict[str, Any]) -> "MetricsSnapshot":
        return cls(
            timestamp=datetime.fromisoformat(data["timestamp"]),
            window_seconds=data.get("window_seconds", 300),
            by_method=data.get("by_method", {}),
            by_endpoint=data.get("by_endpoint", {}),
            by_status_class=data.get("by_status_class", {}),
            error_codes=data.get("error_codes", {}),
            totals=data.get("totals", {}),
        )


class OperationMetricsCollector:
    def __init__(
        self,
        storage_root: Path,
        interval_minutes: int = 5,
        retention_hours: int = 24,
    ):
        self.storage_root = storage_root
        self.interval_seconds = interval_minutes * 60
        self.retention_hours = retention_hours
        self._lock = threading.Lock()
        self._by_method: Dict[str, OperationStats] = {}
        self._by_endpoint: Dict[str, OperationStats] = {}
        self._by_status_class: Dict[str, int] = {}
        self._error_codes: Dict[str, int] = {}
        self._totals = OperationStats()
        self._window_start = time.time()
        self._shutdown = threading.Event()
        self._snapshots: List[MetricsSnapshot] = []

        self._load_history()

        self._snapshot_thread = threading.Thread(
            target=self._snapshot_loop, name="operation-metrics-snapshot", daemon=True
        )
        self._snapshot_thread.start()

    def _config_path(self) -> Path:
        return self.storage_root / ".myfsio.sys" / "config" / "operation_metrics.json"

    def _load_history(self) -> None:
        config_path = self._config_path()
        if not config_path.exists():
            return
        try:
            data = json.loads(config_path.read_text(encoding="utf-8"))
            snapshots_data = data.get("snapshots", [])
            self._snapshots = [MetricsSnapshot.from_dict(s) for s in snapshots_data]
            self._prune_old_snapshots()
        except (json.JSONDecodeError, OSError, KeyError) as e:
            logger.warning(f"Failed to load operation metrics history: {e}")

    def _save_history(self) -> None:
        config_path = self._config_path()
        config_path.parent.mkdir(parents=True, exist_ok=True)
        try:
            data = {"snapshots": [s.to_dict() for s in self._snapshots]}
            config_path.write_text(json.dumps(data, indent=2), encoding="utf-8")
        except OSError as e:
            logger.warning(f"Failed to save operation metrics history: {e}")

    def _prune_old_snapshots(self) -> None:
        if not self._snapshots:
            return
        cutoff = datetime.now(timezone.utc).timestamp() - (self.retention_hours * 3600)
        self._snapshots = [
            s for s in self._snapshots if s.timestamp.timestamp() > cutoff
        ]

    def _snapshot_loop(self) -> None:
        while not self._shutdown.is_set():
            self._shutdown.wait(timeout=self.interval_seconds)
            if not self._shutdown.is_set():
                self._take_snapshot()

    def _take_snapshot(self) -> None:
        with self._lock:
            now = datetime.now(timezone.utc)
            window_seconds = int(time.time() - self._window_start)

            snapshot = MetricsSnapshot(
                timestamp=now,
                window_seconds=window_seconds,
                by_method={k: v.to_dict() for k, v in self._by_method.items()},
                by_endpoint={k: v.to_dict() for k, v in self._by_endpoint.items()},
                by_status_class=dict(self._by_status_class),
                error_codes=dict(self._error_codes),
                totals=self._totals.to_dict(),
            )

            self._snapshots.append(snapshot)
            self._prune_old_snapshots()
            self._save_history()

            self._by_method.clear()
            self._by_endpoint.clear()
            self._by_status_class.clear()
            self._error_codes.clear()
            self._totals = OperationStats()
            self._window_start = time.time()

    def record_request(
        self,
        method: str,
        endpoint_type: str,
        status_code: int,
        latency_ms: float,
        bytes_in: int = 0,
        bytes_out: int = 0,
        error_code: Optional[str] = None,
    ) -> None:
        success = 200 <= status_code < 400
        status_class = f"{status_code // 100}xx"

        with self._lock:
            if method not in self._by_method:
                self._by_method[method] = OperationStats()
            self._by_method[method].record(latency_ms, success, bytes_in, bytes_out)

            if endpoint_type not in self._by_endpoint:
                self._by_endpoint[endpoint_type] = OperationStats()
            self._by_endpoint[endpoint_type].record(latency_ms, success, bytes_in, bytes_out)

            self._by_status_class[status_class] = self._by_status_class.get(status_class, 0) + 1

            if error_code:
                self._error_codes[error_code] = self._error_codes.get(error_code, 0) + 1

            self._totals.record(latency_ms, success, bytes_in, bytes_out)

    def get_current_stats(self) -> Dict[str, Any]:
        with self._lock:
            window_seconds = int(time.time() - self._window_start)
            return {
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "window_seconds": window_seconds,
                "by_method": {k: v.to_dict() for k, v in self._by_method.items()},
                "by_endpoint": {k: v.to_dict() for k, v in self._by_endpoint.items()},
                "by_status_class": dict(self._by_status_class),
                "error_codes": dict(self._error_codes),
                "totals": self._totals.to_dict(),
            }

    def get_history(self, hours: Optional[int] = None) -> List[Dict[str, Any]]:
        with self._lock:
            snapshots = list(self._snapshots)

        if hours:
            cutoff = datetime.now(timezone.utc).timestamp() - (hours * 3600)
            snapshots = [s for s in snapshots if s.timestamp.timestamp() > cutoff]

        return [s.to_dict() for s in snapshots]

    def shutdown(self) -> None:
        self._shutdown.set()
        self._take_snapshot()
        self._snapshot_thread.join(timeout=5.0)


def classify_endpoint(path: str) -> str:
    if not path or path == "/":
        return "service"

    path = path.rstrip("/")

    if path.startswith("/ui"):
        return "ui"

    if path.startswith("/kms"):
        return "kms"

    if path.startswith("/myfsio"):
        return "service"

    parts = path.lstrip("/").split("/")
    if len(parts) == 0:
        return "service"
    elif len(parts) == 1:
        return "bucket"
    else:
        return "object"
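A minimal sketch of driving the collector from application code. The collector and classify_endpoint come from the new module above; the storage path and the idea of feeding it from an after-request hook are assumptions, not part of this diff.

from pathlib import Path

from app.operation_metrics import OperationMetricsCollector, classify_endpoint

collector = OperationMetricsCollector(Path("./data"), interval_minutes=5, retention_hours=24)

# Typically fed from a response hook with real timings and sizes.
collector.record_request(
    method="GET",
    endpoint_type=classify_endpoint("/demo-bucket/reports/q3.csv"),  # -> "object"
    status_code=200,
    latency_ms=12.4,
    bytes_out=4096,
)

print(collector.get_current_stats()["totals"])  # stats for the current window
collector.shutdown()                            # writes a final snapshot and joins the thread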
app/s3_api.py (260 changes)
@@ -11,7 +11,8 @@ import uuid
 from datetime import datetime, timedelta, timezone
 from typing import Any, Dict, Optional
 from urllib.parse import quote, urlencode, urlparse, unquote
-from xml.etree.ElementTree import Element, SubElement, tostring, fromstring, ParseError
+from xml.etree.ElementTree import Element, SubElement, tostring, ParseError
+from defusedxml.ElementTree import fromstring

 from flask import Blueprint, Response, current_app, jsonify, request, g
 from werkzeug.http import http_date
@@ -29,6 +30,8 @@ from .storage import ObjectStorage, StorageError, QuotaExceededError, BucketNotF

 logger = logging.getLogger(__name__)

+S3_NS = "http://s3.amazonaws.com/doc/2006-03-01/"
+
 s3_api_bp = Blueprint("s3_api", __name__)

 def _storage() -> ObjectStorage:
@@ -53,6 +56,20 @@ def _bucket_policies() -> BucketPolicyStore:
     return store


+def _build_policy_context() -> Dict[str, Any]:
+    ctx: Dict[str, Any] = {}
+    if request.headers.get("Referer"):
+        ctx["aws:Referer"] = request.headers.get("Referer")
+    if request.access_route:
+        ctx["aws:SourceIp"] = request.access_route[0]
+    elif request.remote_addr:
+        ctx["aws:SourceIp"] = request.remote_addr
+    ctx["aws:SecureTransport"] = str(request.is_secure).lower()
+    if request.headers.get("User-Agent"):
+        ctx["aws:UserAgent"] = request.headers.get("User-Agent")
+    return ctx
+
+
 def _object_lock() -> ObjectLockService:
     return current_app.extensions["object_lock"]

@@ -71,6 +88,7 @@ def _xml_response(element: Element, status: int = 200) -> Response:


 def _error_response(code: str, message: str, status: int) -> Response:
+    g.s3_error_code = code
     error = Element("Error")
     SubElement(error, "Code").text = code
     SubElement(error, "Message").text = message
@@ -79,6 +97,13 @@ def _error_response(code: str, message: str, status: int) -> Response:
     return _xml_response(error, status)


+def _require_xml_content_type() -> Response | None:
+    ct = request.headers.get("Content-Type", "")
+    if ct and not ct.startswith(("application/xml", "text/xml")):
+        return _error_response("InvalidRequest", "Content-Type must be application/xml or text/xml", 400)
+    return None
+
+
 def _parse_range_header(range_header: str, file_size: int) -> list[tuple[int, int]] | None:
     if not range_header.startswith("bytes="):
         return None
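With the new _require_xml_content_type guard, configuration PUTs (versioning, tagging, CORS, lifecycle, and so on) must declare an XML body or they fail with InvalidRequest. A client-side sketch follows; the endpoint URL is a placeholder and request signing is omitted for brevity.

import requests  # any HTTP client works; shown for illustration only

body = (
    '<VersioningConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">'
    "<Status>Enabled</Status></VersioningConfiguration>"
)
resp = requests.put(
    "http://localhost:5000/demo-bucket?versioning",   # placeholder endpoint
    data=body,
    headers={"Content-Type": "application/xml"},      # required by the new guard
)
print(resp.status_code)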
@@ -218,16 +243,7 @@ def _verify_sigv4_header(req: Any, auth_header: str) -> Principal | None:

     if not hmac.compare_digest(calculated_signature, signature):
         if current_app.config.get("DEBUG_SIGV4"):
-            logger.warning(
-                "SigV4 signature mismatch",
-                extra={
-                    "path": req.path,
-                    "method": method,
-                    "signed_headers": signed_headers_str,
-                    "content_type": req.headers.get("Content-Type"),
-                    "content_length": req.headers.get("Content-Length"),
-                }
-            )
+            logger.warning("SigV4 signature mismatch for %s %s", method, req.path)
         raise IamError("SignatureDoesNotMatch")

     session_token = req.headers.get("X-Amz-Security-Token")
@@ -293,7 +309,7 @@ def _verify_sigv4_query(req: Any) -> Principal | None:
         if header.lower() == 'expect' and val == "":
             val = "100-continue"
         val = " ".join(val.split())
-        canonical_headers_parts.append(f"{header}:{val}\n")
+        canonical_headers_parts.append(f"{header.lower()}:{val}\n")
     canonical_headers = "".join(canonical_headers_parts)

     payload_hash = "UNSIGNED-PAYLOAD"
@@ -380,7 +396,8 @@ def _authorize_action(principal: Principal | None, bucket_name: str | None, acti
     policy_decision = None
     access_key = principal.access_key if principal else None
     if bucket_name:
-        policy_decision = _bucket_policies().evaluate(access_key, bucket_name, object_key, action)
+        policy_context = _build_policy_context()
+        policy_decision = _bucket_policies().evaluate(access_key, bucket_name, object_key, action, policy_context)
     if policy_decision == "deny":
         raise IamError("Access denied by bucket policy")

@@ -407,11 +424,13 @@ def _authorize_action(principal: Principal | None, bucket_name: str | None, acti
 def _enforce_bucket_policy(principal: Principal | None, bucket_name: str | None, object_key: str | None, action: str) -> None:
     if not bucket_name:
         return
+    policy_context = _build_policy_context()
     decision = _bucket_policies().evaluate(
         principal.access_key if principal else None,
         bucket_name,
         object_key,
         action,
+        policy_context,
     )
     if decision == "deny":
         raise IamError("Access denied by bucket policy")
@@ -572,6 +591,7 @@ def _generate_presigned_url(
     bucket_name: str,
     object_key: str,
     expires_in: int,
+    api_base_url: str | None = None,
 ) -> str:
     region = current_app.config["AWS_REGION"]
     service = current_app.config["AWS_SERVICE"]
@@ -592,7 +612,7 @@ def _generate_presigned_url(
     }
     canonical_query = _encode_query_params(query_params)

-    api_base = current_app.config.get("API_BASE_URL")
+    api_base = api_base_url or current_app.config.get("API_BASE_URL")
     if api_base:
         parsed = urlparse(api_base)
         host = parsed.netloc
@@ -644,11 +664,11 @@ def _strip_ns(tag: str | None) -> str:


 def _find_element(parent: Element, name: str) -> Optional[Element]:
-    """Find a child element by name, trying both namespaced and non-namespaced variants.
+    """Find a child element by name, trying S3 namespace then no namespace.

     This handles XML documents that may or may not include namespace prefixes.
     """
-    el = parent.find(f"{{*}}{name}")
+    el = parent.find(f"{{{S3_NS}}}{name}")
     if el is None:
         el = parent.find(name)
     return el
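The {*} wildcard accepted children in any namespace; the handlers now look up children in the S3 namespace explicitly and keep the bare-name fallback for un-namespaced documents. A small standalone illustration of the two lookups (stdlib ElementTree is enough for the example):

from xml.etree.ElementTree import fromstring

S3_NS = "http://s3.amazonaws.com/doc/2006-03-01/"

namespaced = fromstring(
    '<VersioningConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">'
    "<Status>Enabled</Status></VersioningConfiguration>"
)
plain = fromstring("<VersioningConfiguration><Status>Enabled</Status></VersioningConfiguration>")

print(namespaced.find(f"{{{S3_NS}}}Status").text)  # Enabled
print(plain.find(f"{{{S3_NS}}}Status"))            # None, which is why the code...
print(plain.find("Status").text)                   # ...falls back to a plain find(): Enabled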
@@ -672,7 +692,7 @@ def _parse_tagging_document(payload: bytes) -> list[dict[str, str]]:
|
|||||||
raise ValueError("Malformed XML") from exc
|
raise ValueError("Malformed XML") from exc
|
||||||
if _strip_ns(root.tag) != "Tagging":
|
if _strip_ns(root.tag) != "Tagging":
|
||||||
raise ValueError("Root element must be Tagging")
|
raise ValueError("Root element must be Tagging")
|
||||||
tagset = root.find(".//{*}TagSet")
|
tagset = root.find(".//{http://s3.amazonaws.com/doc/2006-03-01/}TagSet")
|
||||||
if tagset is None:
|
if tagset is None:
|
||||||
tagset = root.find("TagSet")
|
tagset = root.find("TagSet")
|
||||||
if tagset is None:
|
if tagset is None:
|
||||||
@@ -840,13 +860,13 @@ def _parse_encryption_document(payload: bytes) -> dict[str, Any]:
|
|||||||
bucket_key_el = child
|
bucket_key_el = child
|
||||||
if default_el is None:
|
if default_el is None:
|
||||||
continue
|
continue
|
||||||
algo_el = default_el.find("{*}SSEAlgorithm")
|
algo_el = default_el.find("{http://s3.amazonaws.com/doc/2006-03-01/}SSEAlgorithm")
|
||||||
if algo_el is None:
|
if algo_el is None:
|
||||||
algo_el = default_el.find("SSEAlgorithm")
|
algo_el = default_el.find("SSEAlgorithm")
|
||||||
if algo_el is None or not (algo_el.text or "").strip():
|
if algo_el is None or not (algo_el.text or "").strip():
|
||||||
raise ValueError("SSEAlgorithm is required")
|
raise ValueError("SSEAlgorithm is required")
|
||||||
rule: dict[str, Any] = {"SSEAlgorithm": algo_el.text.strip()}
|
rule: dict[str, Any] = {"SSEAlgorithm": algo_el.text.strip()}
|
||||||
kms_el = default_el.find("{*}KMSMasterKeyID")
|
kms_el = default_el.find("{http://s3.amazonaws.com/doc/2006-03-01/}KMSMasterKeyID")
|
||||||
if kms_el is None:
|
if kms_el is None:
|
||||||
kms_el = default_el.find("KMSMasterKeyID")
|
kms_el = default_el.find("KMSMasterKeyID")
|
||||||
if kms_el is not None and kms_el.text:
|
if kms_el is not None and kms_el.text:
|
||||||
@@ -921,6 +941,8 @@ def _maybe_handle_bucket_subresource(bucket_name: str) -> Response | None:
|
|||||||
"object-lock": _bucket_object_lock_handler,
|
"object-lock": _bucket_object_lock_handler,
|
||||||
"notification": _bucket_notification_handler,
|
"notification": _bucket_notification_handler,
|
||||||
"logging": _bucket_logging_handler,
|
"logging": _bucket_logging_handler,
|
||||||
|
"uploads": _bucket_uploads_handler,
|
||||||
|
"policy": _bucket_policy_handler,
|
||||||
}
|
}
|
||||||
requested = [key for key in handlers if key in request.args]
|
requested = [key for key in handlers if key in request.args]
|
||||||
if not requested:
|
if not requested:
|
||||||
@@ -948,6 +970,9 @@ def _bucket_versioning_handler(bucket_name: str) -> Response:
|
|||||||
storage = _storage()
|
storage = _storage()
|
||||||
|
|
||||||
if request.method == "PUT":
|
if request.method == "PUT":
|
||||||
|
ct_error = _require_xml_content_type()
|
||||||
|
if ct_error:
|
||||||
|
return ct_error
|
||||||
payload = request.get_data(cache=False) or b""
|
payload = request.get_data(cache=False) or b""
|
||||||
if not payload.strip():
|
if not payload.strip():
|
||||||
return _error_response("MalformedXML", "Request body is required", 400)
|
return _error_response("MalformedXML", "Request body is required", 400)
|
||||||
@@ -957,7 +982,7 @@ def _bucket_versioning_handler(bucket_name: str) -> Response:
|
|||||||
return _error_response("MalformedXML", "Unable to parse XML document", 400)
|
return _error_response("MalformedXML", "Unable to parse XML document", 400)
|
||||||
if _strip_ns(root.tag) != "VersioningConfiguration":
|
if _strip_ns(root.tag) != "VersioningConfiguration":
|
||||||
return _error_response("MalformedXML", "Root element must be VersioningConfiguration", 400)
|
return _error_response("MalformedXML", "Root element must be VersioningConfiguration", 400)
|
||||||
status_el = root.find("{*}Status")
|
status_el = root.find("{http://s3.amazonaws.com/doc/2006-03-01/}Status")
|
||||||
if status_el is None:
|
if status_el is None:
|
||||||
status_el = root.find("Status")
|
status_el = root.find("Status")
|
||||||
status = (status_el.text or "").strip() if status_el is not None else ""
|
status = (status_el.text or "").strip() if status_el is not None else ""
|
||||||
@@ -1006,6 +1031,9 @@ def _bucket_tagging_handler(bucket_name: str) -> Response:
|
|||||||
current_app.logger.info("Bucket tags deleted", extra={"bucket": bucket_name})
|
current_app.logger.info("Bucket tags deleted", extra={"bucket": bucket_name})
|
||||||
return Response(status=204)
|
return Response(status=204)
|
||||||
|
|
||||||
|
ct_error = _require_xml_content_type()
|
||||||
|
if ct_error:
|
||||||
|
return ct_error
|
||||||
payload = request.get_data(cache=False) or b""
|
payload = request.get_data(cache=False) or b""
|
||||||
try:
|
try:
|
||||||
tags = _parse_tagging_document(payload)
|
tags = _parse_tagging_document(payload)
|
||||||
@@ -1061,6 +1089,9 @@ def _object_tagging_handler(bucket_name: str, object_key: str) -> Response:
|
|||||||
current_app.logger.info("Object tags deleted", extra={"bucket": bucket_name, "key": object_key})
|
current_app.logger.info("Object tags deleted", extra={"bucket": bucket_name, "key": object_key})
|
||||||
return Response(status=204)
|
return Response(status=204)
|
||||||
|
|
||||||
|
ct_error = _require_xml_content_type()
|
||||||
|
if ct_error:
|
||||||
|
return ct_error
|
||||||
payload = request.get_data(cache=False) or b""
|
payload = request.get_data(cache=False) or b""
|
||||||
try:
|
try:
|
||||||
tags = _parse_tagging_document(payload)
|
tags = _parse_tagging_document(payload)
|
||||||
@@ -1130,6 +1161,9 @@ def _bucket_cors_handler(bucket_name: str) -> Response:
|
|||||||
current_app.logger.info("Bucket CORS deleted", extra={"bucket": bucket_name})
|
current_app.logger.info("Bucket CORS deleted", extra={"bucket": bucket_name})
|
||||||
return Response(status=204)
|
return Response(status=204)
|
||||||
|
|
||||||
|
ct_error = _require_xml_content_type()
|
||||||
|
if ct_error:
|
||||||
|
return ct_error
|
||||||
payload = request.get_data(cache=False) or b""
|
payload = request.get_data(cache=False) or b""
|
||||||
if not payload.strip():
|
if not payload.strip():
|
||||||
try:
|
try:
|
||||||
@@ -1176,6 +1210,9 @@ def _bucket_encryption_handler(bucket_name: str) -> Response:
|
|||||||
404,
|
404,
|
||||||
)
|
)
|
||||||
return _xml_response(_render_encryption_document(config))
|
return _xml_response(_render_encryption_document(config))
|
||||||
|
ct_error = _require_xml_content_type()
|
||||||
|
if ct_error:
|
||||||
|
return ct_error
|
||||||
payload = request.get_data(cache=False) or b""
|
payload = request.get_data(cache=False) or b""
|
||||||
if not payload.strip():
|
if not payload.strip():
|
||||||
try:
|
try:
|
||||||
@@ -1348,7 +1385,7 @@ def _bucket_list_versions_handler(bucket_name: str) -> Response:
|
|||||||
SubElement(ver_elem, "Key").text = obj.key
|
SubElement(ver_elem, "Key").text = obj.key
|
||||||
SubElement(ver_elem, "VersionId").text = v.get("version_id", "unknown")
|
SubElement(ver_elem, "VersionId").text = v.get("version_id", "unknown")
|
||||||
SubElement(ver_elem, "IsLatest").text = "false"
|
SubElement(ver_elem, "IsLatest").text = "false"
|
||||||
SubElement(ver_elem, "LastModified").text = v.get("archived_at", "")
|
SubElement(ver_elem, "LastModified").text = v.get("archived_at") or "1970-01-01T00:00:00Z"
|
||||||
SubElement(ver_elem, "ETag").text = f'"{v.get("etag", "")}"'
|
SubElement(ver_elem, "ETag").text = f'"{v.get("etag", "")}"'
|
||||||
SubElement(ver_elem, "Size").text = str(v.get("size", 0))
|
SubElement(ver_elem, "Size").text = str(v.get("size", 0))
|
||||||
SubElement(ver_elem, "StorageClass").text = "STANDARD"
|
SubElement(ver_elem, "StorageClass").text = "STANDARD"
|
||||||
@@ -1397,6 +1434,9 @@ def _bucket_lifecycle_handler(bucket_name: str) -> Response:
|
|||||||
current_app.logger.info("Bucket lifecycle deleted", extra={"bucket": bucket_name})
|
current_app.logger.info("Bucket lifecycle deleted", extra={"bucket": bucket_name})
|
||||||
return Response(status=204)
|
return Response(status=204)
|
||||||
|
|
||||||
|
ct_error = _require_xml_content_type()
|
||||||
|
if ct_error:
|
||||||
|
return ct_error
|
||||||
payload = request.get_data(cache=False) or b""
|
payload = request.get_data(cache=False) or b""
|
||||||
if not payload.strip():
|
if not payload.strip():
|
||||||
return _error_response("MalformedXML", "Request body is required", 400)
|
return _error_response("MalformedXML", "Request body is required", 400)
|
||||||
@@ -1461,49 +1501,49 @@ def _parse_lifecycle_config(payload: bytes) -> list:
|
|||||||
raise ValueError("Root element must be LifecycleConfiguration")
|
raise ValueError("Root element must be LifecycleConfiguration")
|
||||||
|
|
||||||
rules = []
|
rules = []
|
||||||
for rule_el in root.findall("{*}Rule") or root.findall("Rule"):
|
for rule_el in root.findall("{http://s3.amazonaws.com/doc/2006-03-01/}Rule") or root.findall("Rule"):
|
||||||
rule: dict = {}
|
rule: dict = {}
|
||||||
|
|
||||||
id_el = rule_el.find("{*}ID") or rule_el.find("ID")
|
id_el = rule_el.find("{http://s3.amazonaws.com/doc/2006-03-01/}ID") or rule_el.find("ID")
|
||||||
if id_el is not None and id_el.text:
|
if id_el is not None and id_el.text:
|
||||||
rule["ID"] = id_el.text.strip()
|
rule["ID"] = id_el.text.strip()
|
||||||
|
|
||||||
filter_el = rule_el.find("{*}Filter") or rule_el.find("Filter")
|
filter_el = rule_el.find("{http://s3.amazonaws.com/doc/2006-03-01/}Filter") or rule_el.find("Filter")
|
||||||
if filter_el is not None:
|
if filter_el is not None:
|
||||||
prefix_el = filter_el.find("{*}Prefix") or filter_el.find("Prefix")
|
prefix_el = filter_el.find("{http://s3.amazonaws.com/doc/2006-03-01/}Prefix") or filter_el.find("Prefix")
|
||||||
if prefix_el is not None and prefix_el.text:
|
if prefix_el is not None and prefix_el.text:
|
||||||
rule["Prefix"] = prefix_el.text
|
rule["Prefix"] = prefix_el.text
|
||||||
|
|
||||||
if "Prefix" not in rule:
|
if "Prefix" not in rule:
|
||||||
prefix_el = rule_el.find("{*}Prefix") or rule_el.find("Prefix")
|
prefix_el = rule_el.find("{http://s3.amazonaws.com/doc/2006-03-01/}Prefix") or rule_el.find("Prefix")
|
||||||
if prefix_el is not None:
|
if prefix_el is not None:
|
||||||
rule["Prefix"] = prefix_el.text or ""
|
rule["Prefix"] = prefix_el.text or ""
|
||||||
|
|
||||||
status_el = rule_el.find("{*}Status") or rule_el.find("Status")
|
status_el = rule_el.find("{http://s3.amazonaws.com/doc/2006-03-01/}Status") or rule_el.find("Status")
|
||||||
rule["Status"] = (status_el.text or "Enabled").strip() if status_el is not None else "Enabled"
|
rule["Status"] = (status_el.text or "Enabled").strip() if status_el is not None else "Enabled"
|
||||||
|
|
||||||
exp_el = rule_el.find("{*}Expiration") or rule_el.find("Expiration")
|
exp_el = rule_el.find("{http://s3.amazonaws.com/doc/2006-03-01/}Expiration") or rule_el.find("Expiration")
|
||||||
if exp_el is not None:
|
if exp_el is not None:
|
||||||
expiration: dict = {}
|
expiration: dict = {}
|
||||||
days_el = exp_el.find("{*}Days") or exp_el.find("Days")
|
days_el = exp_el.find("{http://s3.amazonaws.com/doc/2006-03-01/}Days") or exp_el.find("Days")
|
||||||
if days_el is not None and days_el.text:
|
if days_el is not None and days_el.text:
|
||||||
days_val = int(days_el.text.strip())
|
days_val = int(days_el.text.strip())
|
||||||
if days_val <= 0:
|
if days_val <= 0:
|
||||||
raise ValueError("Expiration Days must be a positive integer")
|
raise ValueError("Expiration Days must be a positive integer")
|
||||||
expiration["Days"] = days_val
|
expiration["Days"] = days_val
|
||||||
date_el = exp_el.find("{*}Date") or exp_el.find("Date")
|
date_el = exp_el.find("{http://s3.amazonaws.com/doc/2006-03-01/}Date") or exp_el.find("Date")
|
||||||
if date_el is not None and date_el.text:
|
if date_el is not None and date_el.text:
|
||||||
expiration["Date"] = date_el.text.strip()
|
expiration["Date"] = date_el.text.strip()
|
||||||
eodm_el = exp_el.find("{*}ExpiredObjectDeleteMarker") or exp_el.find("ExpiredObjectDeleteMarker")
|
eodm_el = exp_el.find("{http://s3.amazonaws.com/doc/2006-03-01/}ExpiredObjectDeleteMarker") or exp_el.find("ExpiredObjectDeleteMarker")
|
||||||
if eodm_el is not None and (eodm_el.text or "").strip().lower() in {"true", "1"}:
|
if eodm_el is not None and (eodm_el.text or "").strip().lower() in {"true", "1"}:
|
||||||
expiration["ExpiredObjectDeleteMarker"] = True
|
expiration["ExpiredObjectDeleteMarker"] = True
|
||||||
if expiration:
|
if expiration:
|
||||||
rule["Expiration"] = expiration
|
rule["Expiration"] = expiration
|
||||||
|
|
||||||
nve_el = rule_el.find("{*}NoncurrentVersionExpiration") or rule_el.find("NoncurrentVersionExpiration")
|
nve_el = rule_el.find("{http://s3.amazonaws.com/doc/2006-03-01/}NoncurrentVersionExpiration") or rule_el.find("NoncurrentVersionExpiration")
|
||||||
if nve_el is not None:
|
if nve_el is not None:
|
||||||
nve: dict = {}
|
nve: dict = {}
|
||||||
days_el = nve_el.find("{*}NoncurrentDays") or nve_el.find("NoncurrentDays")
|
days_el = nve_el.find("{http://s3.amazonaws.com/doc/2006-03-01/}NoncurrentDays") or nve_el.find("NoncurrentDays")
|
||||||
if days_el is not None and days_el.text:
|
if days_el is not None and days_el.text:
|
||||||
noncurrent_days = int(days_el.text.strip())
|
noncurrent_days = int(days_el.text.strip())
|
||||||
if noncurrent_days <= 0:
|
if noncurrent_days <= 0:
|
||||||
@@ -1512,10 +1552,10 @@ def _parse_lifecycle_config(payload: bytes) -> list:
|
|||||||
if nve:
|
if nve:
|
||||||
rule["NoncurrentVersionExpiration"] = nve
|
rule["NoncurrentVersionExpiration"] = nve
|
||||||
|
|
||||||
aimu_el = rule_el.find("{*}AbortIncompleteMultipartUpload") or rule_el.find("AbortIncompleteMultipartUpload")
|
aimu_el = rule_el.find("{http://s3.amazonaws.com/doc/2006-03-01/}AbortIncompleteMultipartUpload") or rule_el.find("AbortIncompleteMultipartUpload")
|
||||||
if aimu_el is not None:
|
if aimu_el is not None:
|
||||||
aimu: dict = {}
|
aimu: dict = {}
|
||||||
days_el = aimu_el.find("{*}DaysAfterInitiation") or aimu_el.find("DaysAfterInitiation")
|
days_el = aimu_el.find("{http://s3.amazonaws.com/doc/2006-03-01/}DaysAfterInitiation") or aimu_el.find("DaysAfterInitiation")
|
||||||
if days_el is not None and days_el.text:
|
if days_el is not None and days_el.text:
|
||||||
days_after = int(days_el.text.strip())
|
days_after = int(days_el.text.strip())
|
||||||
if days_after <= 0:
|
if days_after <= 0:
|
||||||
@@ -1631,6 +1671,9 @@ def _bucket_object_lock_handler(bucket_name: str) -> Response:
         SubElement(root, "ObjectLockEnabled").text = "Enabled" if config.enabled else "Disabled"
         return _xml_response(root)

+    ct_error = _require_xml_content_type()
+    if ct_error:
+        return ct_error
     payload = request.get_data(cache=False) or b""
     if not payload.strip():
         return _error_response("MalformedXML", "Request body is required", 400)
@@ -1640,7 +1683,7 @@ def _bucket_object_lock_handler(bucket_name: str) -> Response:
     except ParseError:
         return _error_response("MalformedXML", "Unable to parse XML document", 400)

-    enabled_el = root.find("{*}ObjectLockEnabled") or root.find("ObjectLockEnabled")
+    enabled_el = root.find("{http://s3.amazonaws.com/doc/2006-03-01/}ObjectLockEnabled") or root.find("ObjectLockEnabled")
     enabled = (enabled_el.text or "").strip() == "Enabled" if enabled_el is not None else False

     config = ObjectLockConfig(enabled=enabled)
@@ -1696,6 +1739,9 @@ def _bucket_notification_handler(bucket_name: str) -> Response:
         current_app.logger.info("Bucket notifications deleted", extra={"bucket": bucket_name})
         return Response(status=204)

+    ct_error = _require_xml_content_type()
+    if ct_error:
+        return ct_error
     payload = request.get_data(cache=False) or b""
     if not payload.strip():
         notification_service.delete_bucket_notifications(bucket_name)
@@ -1707,9 +1753,9 @@ def _bucket_notification_handler(bucket_name: str) -> Response:
         return _error_response("MalformedXML", "Unable to parse XML document", 400)

     configs: list[NotificationConfiguration] = []
-    for webhook_el in root.findall("{*}WebhookConfiguration") or root.findall("WebhookConfiguration"):
+    for webhook_el in root.findall("{http://s3.amazonaws.com/doc/2006-03-01/}WebhookConfiguration") or root.findall("WebhookConfiguration"):
         config_id = _find_element_text(webhook_el, "Id") or uuid.uuid4().hex
-        events = [el.text for el in webhook_el.findall("{*}Event") or webhook_el.findall("Event") if el.text]
+        events = [el.text for el in webhook_el.findall("{http://s3.amazonaws.com/doc/2006-03-01/}Event") or webhook_el.findall("Event") if el.text]

         dest_el = _find_element(webhook_el, "Destination")
         url = _find_element_text(dest_el, "Url") if dest_el else ""
@@ -1722,7 +1768,7 @@ def _bucket_notification_handler(bucket_name: str) -> Response:
         if filter_el:
             key_el = _find_element(filter_el, "S3Key")
             if key_el:
-                for rule_el in key_el.findall("{*}FilterRule") or key_el.findall("FilterRule"):
+                for rule_el in key_el.findall("{http://s3.amazonaws.com/doc/2006-03-01/}FilterRule") or key_el.findall("FilterRule"):
                     name = _find_element_text(rule_el, "Name")
                     value = _find_element_text(rule_el, "Value")
                     if name == "prefix":
@@ -1775,6 +1821,9 @@ def _bucket_logging_handler(bucket_name: str) -> Response:
         current_app.logger.info("Bucket logging deleted", extra={"bucket": bucket_name})
         return Response(status=204)

+    ct_error = _require_xml_content_type()
+    if ct_error:
+        return ct_error
     payload = request.get_data(cache=False) or b""
     if not payload.strip():
         logging_service.delete_bucket_logging(bucket_name)
@@ -1813,6 +1862,72 @@ def _bucket_logging_handler(bucket_name: str) -> Response:
     return Response(status=200)


+def _bucket_uploads_handler(bucket_name: str) -> Response:
+    if request.method != "GET":
+        return _method_not_allowed(["GET"])
+
+    principal, error = _require_principal()
+    if error:
+        return error
+    try:
+        _authorize_action(principal, bucket_name, "list")
+    except IamError as exc:
+        return _error_response("AccessDenied", str(exc), 403)
+
+    storage = _storage()
+    if not storage.bucket_exists(bucket_name):
+        return _error_response("NoSuchBucket", "Bucket does not exist", 404)
+
+    key_marker = request.args.get("key-marker", "")
+    upload_id_marker = request.args.get("upload-id-marker", "")
+    prefix = request.args.get("prefix", "")
+    delimiter = request.args.get("delimiter", "")
+    try:
+        max_uploads = max(1, min(int(request.args.get("max-uploads", 1000)), 1000))
+    except ValueError:
+        return _error_response("InvalidArgument", "max-uploads must be an integer", 400)
+
+    uploads = storage.list_multipart_uploads(bucket_name, include_orphaned=True)
+
+    if prefix:
+        uploads = [u for u in uploads if u["object_key"].startswith(prefix)]
+    if key_marker:
+        uploads = [u for u in uploads if u["object_key"] > key_marker or
+                   (u["object_key"] == key_marker and upload_id_marker and u["upload_id"] > upload_id_marker)]
+
+    uploads.sort(key=lambda u: (u["object_key"], u["upload_id"]))
+
+    is_truncated = len(uploads) > max_uploads
+    if is_truncated:
+        uploads = uploads[:max_uploads]
+
+    root = Element("ListMultipartUploadsResult", xmlns="http://s3.amazonaws.com/doc/2006-03-01/")
+    SubElement(root, "Bucket").text = bucket_name
+    SubElement(root, "KeyMarker").text = key_marker
+    SubElement(root, "UploadIdMarker").text = upload_id_marker
+    if prefix:
+        SubElement(root, "Prefix").text = prefix
+    if delimiter:
+        SubElement(root, "Delimiter").text = delimiter
+    SubElement(root, "MaxUploads").text = str(max_uploads)
+    SubElement(root, "IsTruncated").text = "true" if is_truncated else "false"
+
+    if is_truncated and uploads:
+        SubElement(root, "NextKeyMarker").text = uploads[-1]["object_key"]
+        SubElement(root, "NextUploadIdMarker").text = uploads[-1]["upload_id"]
+
+    for upload in uploads:
+        upload_el = SubElement(root, "Upload")
+        SubElement(upload_el, "Key").text = upload["object_key"]
+        SubElement(upload_el, "UploadId").text = upload["upload_id"]
+        if upload.get("created_at"):
+            SubElement(upload_el, "Initiated").text = upload["created_at"]
+        if upload.get("orphaned"):
+            SubElement(upload_el, "StorageClass").text = "ORPHANED"
+
+    return _xml_response(root)
+
+
 def _object_retention_handler(bucket_name: str, object_key: str) -> Response:
     if request.method not in {"GET", "PUT"}:
         return _method_not_allowed(["GET", "PUT"])
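A quick way to exercise the new handler from a client, as a minimal sketch: it assumes the handler above is wired to the standard S3 `GET /<bucket>?uploads` route (the routing is not shown in this diff) and that the keys below are placeholders for real credentials.

```python
# Hypothetical client-side check of in-flight and orphaned multipart uploads.
# Assumes the standard S3 ListMultipartUploads route; adjust endpoint_url/keys.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:5000",
    aws_access_key_id="...",
    aws_secret_access_key="...",
)

resp = s3.list_multipart_uploads(Bucket="my-bucket", MaxUploads=100)
for upload in resp.get("Uploads", []):
    # This server reports orphaned uploads with StorageClass "ORPHANED".
    print(upload["Key"], upload["UploadId"], upload.get("StorageClass", ""))
```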
@@ -1846,6 +1961,9 @@ def _object_retention_handler(bucket_name: str, object_key: str) -> Response:
         SubElement(root, "RetainUntilDate").text = retention.retain_until_date.strftime("%Y-%m-%dT%H:%M:%S.000Z")
         return _xml_response(root)

+    ct_error = _require_xml_content_type()
+    if ct_error:
+        return ct_error
     payload = request.get_data(cache=False) or b""
     if not payload.strip():
         return _error_response("MalformedXML", "Request body is required", 400)
@@ -1915,6 +2033,9 @@ def _object_legal_hold_handler(bucket_name: str, object_key: str) -> Response:
         SubElement(root, "Status").text = "ON" if enabled else "OFF"
         return _xml_response(root)

+    ct_error = _require_xml_content_type()
+    if ct_error:
+        return ct_error
     payload = request.get_data(cache=False) or b""
     if not payload.strip():
         return _error_response("MalformedXML", "Request body is required", 400)
@@ -1946,6 +2067,9 @@ def _bulk_delete_handler(bucket_name: str) -> Response:
     except IamError as exc:
         return _error_response("AccessDenied", str(exc), 403)

+    ct_error = _require_xml_content_type()
+    if ct_error:
+        return ct_error
     payload = request.get_data(cache=False) or b""
     if not payload.strip():
         return _error_response("MalformedXML", "Request body must include a Delete specification", 400)
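These XML-accepting endpoints now reject requests up front unless the body declares an XML content type. A minimal client-side sketch with `requests`; the `?delete` route, bucket name, and credentials are assumptions for illustration, not taken from this diff.

```python
# Sketch: always send an explicit XML Content-Type with XML request bodies,
# otherwise _require_xml_content_type() rejects the request before parsing.
import requests

BODY = "<Delete><Object><Key>old/report.csv</Key></Object></Delete>"
HEADERS = {"X-Access-Key": "...", "X-Secret-Key": "..."}

resp = requests.post(
    "http://localhost:5000/my-bucket?delete",  # assumed bulk-delete route
    data=BODY,
    headers={**HEADERS, "Content-Type": "application/xml"},
    timeout=10,
)
print(resp.status_code, resp.text[:200])
```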
@@ -2521,9 +2645,9 @@ def _list_parts(bucket_name: str, object_key: str) -> Response:
     return _xml_response(root)


-@s3_api_bp.route("/bucket-policy/<bucket_name>", methods=["GET", "PUT", "DELETE"])
-@limiter.limit("30 per minute")
-def bucket_policy_handler(bucket_name: str) -> Response:
+def _bucket_policy_handler(bucket_name: str) -> Response:
+    if request.method not in {"GET", "PUT", "DELETE"}:
+        return _method_not_allowed(["GET", "PUT", "DELETE"])
     principal, error = _require_principal()
     if error:
         return error
@@ -2555,51 +2679,6 @@ def bucket_policy_handler(bucket_name: str) -> Response:
     return Response(status=204)


-@s3_api_bp.post("/presign/<bucket_name>/<path:object_key>")
-@limiter.limit("45 per minute")
-def presign_object(bucket_name: str, object_key: str):
-    payload = request.get_json(silent=True) or {}
-    method = str(payload.get("method", "GET")).upper()
-    allowed_methods = {"GET", "PUT", "DELETE"}
-    if method not in allowed_methods:
-        return _error_response("InvalidRequest", "Method must be GET, PUT, or DELETE", 400)
-    try:
-        expires = int(payload.get("expires_in", 900))
-    except (TypeError, ValueError):
-        return _error_response("InvalidRequest", "expires_in must be an integer", 400)
-    expires = max(1, min(expires, 7 * 24 * 3600))
-    action = "read" if method == "GET" else ("delete" if method == "DELETE" else "write")
-    principal, error = _require_principal()
-    if error:
-        return error
-    try:
-        _authorize_action(principal, bucket_name, action, object_key=object_key)
-    except IamError as exc:
-        return _error_response("AccessDenied", str(exc), 403)
-    storage = _storage()
-    if not storage.bucket_exists(bucket_name):
-        return _error_response("NoSuchBucket", "Bucket does not exist", 404)
-    if action != "write":
-        try:
-            storage.get_object_path(bucket_name, object_key)
-        except StorageError:
-            return _error_response("NoSuchKey", "Object not found", 404)
-    secret = _iam().secret_for_key(principal.access_key)
-    url = _generate_presigned_url(
-        principal=principal,
-        secret_key=secret,
-        method=method,
-        bucket_name=bucket_name,
-        object_key=object_key,
-        expires_in=expires,
-    )
-    current_app.logger.info(
-        "Presigned URL generated",
-        extra={"bucket": bucket_name, "key": object_key, "method": method},
-    )
-    return jsonify({"url": url, "method": method, "expires_in": expires})
-
-
 @s3_api_bp.route("/<bucket_name>", methods=["HEAD"])
 @limiter.limit("100 per minute")
 def head_bucket(bucket_name: str) -> Response:
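With the `/presign` helper endpoint removed, the only server-side part of presigning left is policy enforcement when the URL is used; a SigV4 URL itself can be minted client-side. A minimal sketch with boto3 under that assumption (endpoint and keys are placeholders):

```python
# Sketch: client-side SigV4 presigning against the S3-compatible endpoint.
# The server still evaluates IAM and bucket policy when the URL is exercised.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:5000",
    aws_access_key_id="...",
    aws_secret_access_key="...",
)

url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-bucket", "Key": "reports/2024.csv"},
    ExpiresIn=900,  # seconds, within the documented 1..604800 range
)
print(url)
```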
@@ -2919,6 +2998,9 @@ def _complete_multipart_upload(bucket_name: str, object_key: str) -> Response:
     if not upload_id:
         return _error_response("InvalidArgument", "uploadId is required", 400)

+    ct_error = _require_xml_content_type()
+    if ct_error:
+        return ct_error
     payload = request.get_data(cache=False) or b""
     try:
         root = fromstring(payload)
@@ -2932,11 +3014,11 @@ def _complete_multipart_upload(bucket_name: str, object_key: str) -> Response:
     for part_el in list(root):
         if _strip_ns(part_el.tag) != "Part":
             continue
-        part_number_el = part_el.find("{*}PartNumber")
+        part_number_el = part_el.find("{http://s3.amazonaws.com/doc/2006-03-01/}PartNumber")
         if part_number_el is None:
             part_number_el = part_el.find("PartNumber")

-        etag_el = part_el.find("{*}ETag")
+        etag_el = part_el.find("{http://s3.amazonaws.com/doc/2006-03-01/}ETag")
         if etag_el is None:
             etag_el = part_el.find("ETag")

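The switch from the `{*}` wildcard to the explicit S3 namespace is easy to see with plain `xml.etree.ElementTree`. This small self-contained sketch (not taken from the repo) also shows why the un-namespaced fallback is kept for clients that omit `xmlns`:

```python
# Sketch: namespaced vs. un-namespaced CompleteMultipartUpload bodies.
from xml.etree.ElementTree import fromstring

NS = "{http://s3.amazonaws.com/doc/2006-03-01/}"

with_ns = fromstring(
    '<CompleteMultipartUpload xmlns="http://s3.amazonaws.com/doc/2006-03-01/">'
    "<Part><PartNumber>1</PartNumber><ETag>abc</ETag></Part>"
    "</CompleteMultipartUpload>"
)
without_ns = fromstring(
    "<CompleteMultipartUpload><Part><PartNumber>1</PartNumber>"
    "<ETag>abc</ETag></Part></CompleteMultipartUpload>"
)

for root in (with_ns, without_ns):
    part = list(root)[0]
    # Namespaced lookup first, then the plain tag name as a fallback.
    number_el = part.find(f"{NS}PartNumber")
    if number_el is None:
        number_el = part.find("PartNumber")
    print(number_el.text)  # "1" in both cases
```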
app/storage.py (123 changes)
@@ -137,10 +137,10 @@ class ObjectStorage:
     BUCKET_VERSIONS_DIR = "versions"
     MULTIPART_MANIFEST = "manifest.json"
     BUCKET_CONFIG_FILE = ".bucket.json"
-    KEY_INDEX_CACHE_TTL = 30
+    DEFAULT_CACHE_TTL = 5
     OBJECT_CACHE_MAX_SIZE = 100

-    def __init__(self, root: Path) -> None:
+    def __init__(self, root: Path, cache_ttl: int = DEFAULT_CACHE_TTL) -> None:
         self.root = Path(root)
         self.root.mkdir(parents=True, exist_ok=True)
         self._ensure_system_roots()
@@ -150,6 +150,7 @@ class ObjectStorage:
         self._cache_version: Dict[str, int] = {}
         self._bucket_config_cache: Dict[str, tuple[dict[str, Any], float]] = {}
         self._bucket_config_cache_ttl = 30.0
+        self._cache_ttl = cache_ttl

     def _get_bucket_lock(self, bucket_id: str) -> threading.Lock:
         """Get or create a lock for a specific bucket. Reduces global lock contention."""
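The listing-cache TTL is now injectable per instance instead of being a hard-coded class constant. A minimal sketch of how a caller might use that; the import path simply mirrors the file location `app/storage.py`, and the value chosen is illustrative rather than a recommendation from this diff.

```python
# Sketch: tune the listing-cache TTL when constructing the storage layer.
from pathlib import Path

from app.storage import ObjectStorage  # class shown in this diff

# Default keeps listings fresh within 5 seconds (DEFAULT_CACHE_TTL); a
# read-heavy deployment might trade freshness for fewer directory scans.
storage = ObjectStorage(Path("data"), cache_ttl=30)
```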
@@ -773,7 +774,7 @@ class ObjectStorage:
                 continue
             payload.setdefault("version_id", meta_file.stem)
             versions.append(payload)
-        versions.sort(key=lambda item: item.get("archived_at", ""), reverse=True)
+        versions.sort(key=lambda item: item.get("archived_at") or "1970-01-01T00:00:00Z", reverse=True)
         return versions

     def restore_object_version(self, bucket_name: str, object_key: str, version_id: str) -> ObjectMeta:
@@ -865,7 +866,7 @@ class ObjectStorage:
         except (OSError, json.JSONDecodeError):
             payload = {}
         version_id = payload.get("version_id") or meta_file.stem
-        archived_at = payload.get("archived_at") or ""
+        archived_at = payload.get("archived_at") or "1970-01-01T00:00:00Z"
         size = int(payload.get("size") or 0)
         reason = payload.get("reason") or "update"
         record = aggregated.setdefault(
@@ -1147,47 +1148,57 @@ class ObjectStorage:
         parts.sort(key=lambda x: x["PartNumber"])
         return parts

-    def list_multipart_uploads(self, bucket_name: str) -> List[Dict[str, Any]]:
-        """List all active multipart uploads for a bucket."""
+    def list_multipart_uploads(self, bucket_name: str, include_orphaned: bool = False) -> List[Dict[str, Any]]:
+        """List all active multipart uploads for a bucket.
+
+        Args:
+            bucket_name: The bucket to list uploads for.
+            include_orphaned: If True, also include upload directories that have
+                files but no valid manifest.json (orphaned/interrupted uploads).
+        """
         bucket_path = self._bucket_path(bucket_name)
         if not bucket_path.exists():
             raise BucketNotFoundError("Bucket does not exist")
         bucket_id = bucket_path.name
         uploads = []
-        multipart_root = self._multipart_bucket_root(bucket_id)
-        if multipart_root.exists():
-            for upload_dir in multipart_root.iterdir():
-                if not upload_dir.is_dir():
-                    continue
-                manifest_path = upload_dir / "manifest.json"
-                if not manifest_path.exists():
-                    continue
-                try:
-                    manifest = json.loads(manifest_path.read_text(encoding="utf-8"))
-                    uploads.append({
-                        "upload_id": manifest.get("upload_id", upload_dir.name),
-                        "object_key": manifest.get("object_key", ""),
-                        "created_at": manifest.get("created_at", ""),
-                    })
-                except (OSError, json.JSONDecodeError):
-                    continue
-        legacy_root = self._legacy_multipart_bucket_root(bucket_id)
-        if legacy_root.exists():
-            for upload_dir in legacy_root.iterdir():
-                if not upload_dir.is_dir():
-                    continue
-                manifest_path = upload_dir / "manifest.json"
-                if not manifest_path.exists():
-                    continue
-                try:
-                    manifest = json.loads(manifest_path.read_text(encoding="utf-8"))
-                    uploads.append({
-                        "upload_id": manifest.get("upload_id", upload_dir.name),
-                        "object_key": manifest.get("object_key", ""),
-                        "created_at": manifest.get("created_at", ""),
-                    })
-                except (OSError, json.JSONDecodeError):
-                    continue
+        for multipart_root in (
+            self._multipart_bucket_root(bucket_id),
+            self._legacy_multipart_bucket_root(bucket_id),
+        ):
+            if not multipart_root.exists():
+                continue
+            for upload_dir in multipart_root.iterdir():
+                if not upload_dir.is_dir():
+                    continue
+                manifest_path = upload_dir / "manifest.json"
+                if manifest_path.exists():
+                    try:
+                        manifest = json.loads(manifest_path.read_text(encoding="utf-8"))
+                        uploads.append({
+                            "upload_id": manifest.get("upload_id", upload_dir.name),
+                            "object_key": manifest.get("object_key", ""),
+                            "created_at": manifest.get("created_at", ""),
+                        })
+                    except (OSError, json.JSONDecodeError):
+                        if include_orphaned:
+                            has_files = any(upload_dir.rglob("*"))
+                            if has_files:
+                                uploads.append({
+                                    "upload_id": upload_dir.name,
+                                    "object_key": "(unknown)",
+                                    "created_at": "",
+                                    "orphaned": True,
+                                })
+                elif include_orphaned:
+                    has_files = any(f.is_file() for f in upload_dir.rglob("*"))
+                    if has_files:
+                        uploads.append({
+                            "upload_id": upload_dir.name,
+                            "object_key": "(unknown)",
+                            "created_at": "",
+                            "orphaned": True,
+                        })
         return uploads

     def _bucket_path(self, bucket_name: str) -> Path:
@@ -1398,7 +1409,7 @@ class ObjectStorage:
         cached = self._object_cache.get(bucket_id)
         if cached:
             objects, timestamp = cached
-            if now - timestamp < self.KEY_INDEX_CACHE_TTL:
+            if now - timestamp < self._cache_ttl:
                 self._object_cache.move_to_end(bucket_id)
                 return objects
         cache_version = self._cache_version.get(bucket_id, 0)
@@ -1409,7 +1420,7 @@ class ObjectStorage:
         cached = self._object_cache.get(bucket_id)
         if cached:
             objects, timestamp = cached
-            if now - timestamp < self.KEY_INDEX_CACHE_TTL:
+            if now - timestamp < self._cache_ttl:
                 self._object_cache.move_to_end(bucket_id)
                 return objects
         objects = self._build_object_cache(bucket_path)
@@ -1455,6 +1466,36 @@ class ObjectStorage:
             else:
                 objects[key] = meta

+    def warm_cache(self, bucket_names: Optional[List[str]] = None) -> None:
+        """Pre-warm the object cache for specified buckets or all buckets.
+
+        This is called on startup to ensure the first request is fast.
+        """
+        if bucket_names is None:
+            bucket_names = [b.name for b in self.list_buckets()]
+
+        for bucket_name in bucket_names:
+            try:
+                bucket_path = self._bucket_path(bucket_name)
+                if bucket_path.exists():
+                    self._get_object_cache(bucket_path.name, bucket_path)
+            except Exception:
+                pass
+
+    def warm_cache_async(self, bucket_names: Optional[List[str]] = None) -> threading.Thread:
+        """Start cache warming in a background thread.
+
+        Returns the thread object so caller can optionally wait for it.
+        """
+        thread = threading.Thread(
+            target=self.warm_cache,
+            args=(bucket_names,),
+            daemon=True,
+            name="cache-warmer",
+        )
+        thread.start()
+        return thread

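A sketch of how a deployment might call the new warm-up hooks at boot. Only the two methods shown in this hunk are assumed; the construction and root path below are illustrative, since the real app wires this up in its own factory.

```python
# Sketch: pre-warm object listings in the background so the first request is fast.
from pathlib import Path

from app.storage import ObjectStorage

storage = ObjectStorage(Path("data"))
warmer = storage.warm_cache_async()  # returns a daemon thread immediately
# ... continue startup; optionally block until warming finishes:
warmer.join(timeout=30)
```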
     def _ensure_system_roots(self) -> None:
         for path in (
             self._system_root_path(),
@@ -1732,11 +1773,9 @@ class ObjectStorage:
             raise StorageError("Object key contains null bytes")
         if object_key.startswith(("/", "\\")):
             raise StorageError("Object key cannot start with a slash")
-        normalized = unicodedata.normalize("NFC", object_key)
-        if normalized != object_key:
-            raise StorageError("Object key must use normalized Unicode")
+        object_key = unicodedata.normalize("NFC", object_key)

-        candidate = Path(normalized)
+        candidate = Path(object_key)
         if ".." in candidate.parts:
             raise StorageError("Object key contains parent directory references")

@@ -1,6 +1,6 @@
 from __future__ import annotations

-APP_VERSION = "0.2.0"
+APP_VERSION = "0.2.2"


 def get_version() -> str:
docs.md (392 changes)
@@ -122,7 +122,7 @@ With these volumes attached you can rebuild/restart the container without losing

 ### Versioning

-The repo now tracks a human-friendly release string inside `app/version.py` (see the `APP_VERSION` constant). Edit that value whenever you cut a release. The constant flows into Flask as `APP_VERSION` and is exposed via `GET /healthz`, so you can monitor deployments or surface it in UIs.
+The repo now tracks a human-friendly release string inside `app/version.py` (see the `APP_VERSION` constant). Edit that value whenever you cut a release. The constant flows into Flask as `APP_VERSION` and is exposed via `GET /myfsio/health`, so you can monitor deployments or surface it in UIs.

 ## 3. Configuration Reference

@@ -189,6 +189,52 @@ All configuration is done via environment variables. The table below lists every
 | `KMS_ENABLED` | `false` | Enable KMS key management for encryption. |
 | `KMS_KEYS_PATH` | `data/.myfsio.sys/keys/kms_keys.json` | Path to store KMS key metadata. |

+
+## Lifecycle Rules
+
+Lifecycle rules automate object management by scheduling deletions based on object age.
+
+### Enabling Lifecycle Enforcement
+
+By default, lifecycle enforcement is disabled. Enable it by setting the environment variable:
+
+```bash
+LIFECYCLE_ENABLED=true python run.py
+```
+
+Or in your `myfsio.env` file:
+```
+LIFECYCLE_ENABLED=true
+LIFECYCLE_INTERVAL_SECONDS=3600 # Check interval (default: 1 hour)
+```
+
+### Configuring Rules
+
+Once enabled, configure lifecycle rules via:
+- **Web UI:** Bucket Details → Lifecycle tab → Add Rule
+- **S3 API:** `PUT /<bucket>?lifecycle` with XML configuration
+
+### Available Actions
+
+| Action | Description |
+|--------|-------------|
+| **Expiration** | Delete current version objects after N days |
+| **NoncurrentVersionExpiration** | Delete old versions N days after becoming noncurrent (requires versioning) |
+| **AbortIncompleteMultipartUpload** | Clean up incomplete multipart uploads after N days |
+
+### Example Configuration (XML)
+
+```xml
+<LifecycleConfiguration>
+  <Rule>
+    <ID>DeleteOldLogs</ID>
+    <Status>Enabled</Status>
+    <Filter><Prefix>logs/</Prefix></Filter>
+    <Expiration><Days>30</Days></Expiration>
+  </Rule>
+</LifecycleConfiguration>
+```
+
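For completeness, the same rule can be applied programmatically. A minimal sketch using `requests` against the documented `PUT /<bucket>?lifecycle` route; bucket name and credentials are placeholders, and the XML content type is sent explicitly since the API now enforces it on several XML endpoints.

```python
# Sketch: apply the lifecycle rule above to a bucket via the S3 API.
import requests

LIFECYCLE_XML = """<LifecycleConfiguration>
  <Rule>
    <ID>DeleteOldLogs</ID>
    <Status>Enabled</Status>
    <Filter><Prefix>logs/</Prefix></Filter>
    <Expiration><Days>30</Days></Expiration>
  </Rule>
</LifecycleConfiguration>"""

resp = requests.put(
    "http://localhost:5000/my-bucket?lifecycle",
    data=LIFECYCLE_XML,
    headers={
        "X-Access-Key": "...",
        "X-Secret-Key": "...",
        "Content-Type": "application/xml",
    },
    timeout=10,
)
resp.raise_for_status()
```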
 ### Performance Tuning

 | Variable | Default | Notes |
@@ -231,14 +277,14 @@ The application automatically trusts these headers to generate correct presigned
 ### Version Checking

 The application version is tracked in `app/version.py` and exposed via:
-- **Health endpoint:** `GET /healthz` returns JSON with `version` field
+- **Health endpoint:** `GET /myfsio/health` returns JSON with `version` field
 - **Metrics dashboard:** Navigate to `/ui/metrics` to see the running version in the System Status card

 To check your current version:

 ```bash
 # API health endpoint
-curl http://localhost:5000/healthz
+curl http://localhost:5000/myfsio/health

 # Or inspect version.py directly
 cat app/version.py | grep APP_VERSION
@@ -331,7 +377,7 @@ docker run -d \
   myfsio:latest

 # 5. Verify health
-curl http://localhost:5000/healthz
+curl http://localhost:5000/myfsio/health
 ```

 ### Version Compatibility Checks
@@ -456,7 +502,7 @@ docker run -d \
   myfsio:0.1.3 # specify previous version tag

 # 3. Verify
-curl http://localhost:5000/healthz
+curl http://localhost:5000/myfsio/health
 ```

 #### Emergency Config Restore
@@ -482,7 +528,7 @@ For production environments requiring zero downtime:
 APP_PORT=5001 UI_PORT=5101 python run.py &

 # 2. Health check new instance
-curl http://localhost:5001/healthz
+curl http://localhost:5001/myfsio/health

 # 3. Update load balancer to route to new ports

@@ -498,7 +544,7 @@ After any update, verify functionality:

 ```bash
 # 1. Health check
-curl http://localhost:5000/healthz
+curl http://localhost:5000/myfsio/health

 # 2. Login to UI
 open http://localhost:5100/ui
@@ -542,7 +588,7 @@ APP_PID=$!

 # Wait and health check
 sleep 5
-if curl -f http://localhost:5000/healthz; then
+if curl -f http://localhost:5000/myfsio/health; then
   echo "Update successful!"
 else
   echo "Health check failed, rolling back..."
@@ -556,6 +602,10 @@ fi

 ## 4. Authentication & IAM

+MyFSIO implements a comprehensive Identity and Access Management (IAM) system that controls who can access your buckets and what operations they can perform. The system supports both simple action-based permissions and AWS-compatible policy syntax.
+
+### Getting Started
+
 1. On first boot, `data/.myfsio.sys/config/iam.json` is seeded with `localadmin / localadmin` that has wildcard access.
 2. Sign into the UI using those credentials, then open **IAM**:
    - **Create user**: supply a display name and optional JSON inline policy array.
@@ -563,48 +613,241 @@ fi
    - **Policy editor**: select a user, paste an array of objects (`{"bucket": "*", "actions": ["list", "read"]}`), and submit. Alias support includes AWS-style verbs (e.g., `s3:GetObject`).
 3. Wildcard action `iam:*` is supported for admin user definitions.

-The API expects every request to include `X-Access-Key` and `X-Secret-Key` headers. The UI persists them in the Flask session after login.
+### Authentication
+
+The API expects every request to include authentication headers. The UI persists them in the Flask session after login.
+
+| Header | Description |
+| --- | --- |
+| `X-Access-Key` | The user's access key identifier |
+| `X-Secret-Key` | The user's secret key for signing |
+
+**Security Features:**
+- **Lockout Protection**: After `AUTH_MAX_ATTEMPTS` (default: 5) failed login attempts, the account is locked for `AUTH_LOCKOUT_MINUTES` (default: 15 minutes).
+- **Session Management**: UI sessions remain valid for `SESSION_LIFETIME_DAYS` (default: 30 days).
+- **Hot Reload**: IAM configuration changes take effect immediately without restart.
+
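A minimal sketch of an authenticated API call with these headers, using `requests`; the keys are placeholders, and `GET /` is the list-buckets route from the API matrix later in the document.

```python
# Sketch: every API request carries the access/secret key headers.
import requests

HEADERS = {"X-Access-Key": "AKIA...", "X-Secret-Key": "..."}

resp = requests.get("http://localhost:5000/", headers=HEADERS, timeout=10)
resp.raise_for_status()
print(resp.text[:200])  # bucket listing
```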
+### Permission Model
+
+MyFSIO uses a two-layer permission model:
+
+1. **IAM User Policies** – Define what a user can do across the system (stored in `iam.json`)
+2. **Bucket Policies** – Define who can access a specific bucket (stored in `bucket_policies.json`)
+
+Both layers are evaluated for each request. A user must have permission in their IAM policy AND the bucket policy must allow the action (or have no explicit deny).
+
 ### Available IAM Actions

+#### S3 Actions (Bucket/Object Operations)
+
 | Action | Description | AWS Aliases |
 | --- | --- | --- |
 | `list` | List buckets and objects | `s3:ListBucket`, `s3:ListAllMyBuckets`, `s3:ListBucketVersions`, `s3:ListMultipartUploads`, `s3:ListParts` |
-| `read` | Download objects | `s3:GetObject`, `s3:GetObjectVersion`, `s3:GetObjectTagging`, `s3:HeadObject`, `s3:HeadBucket` |
+| `read` | Download objects, get metadata | `s3:GetObject`, `s3:GetObjectVersion`, `s3:GetObjectTagging`, `s3:GetObjectVersionTagging`, `s3:GetObjectAcl`, `s3:GetBucketVersioning`, `s3:HeadObject`, `s3:HeadBucket` |
-| `write` | Upload objects, create buckets | `s3:PutObject`, `s3:CreateBucket`, `s3:CreateMultipartUpload`, `s3:UploadPart`, `s3:CompleteMultipartUpload`, `s3:AbortMultipartUpload`, `s3:CopyObject` |
+| `write` | Upload objects, create buckets, manage tags | `s3:PutObject`, `s3:CreateBucket`, `s3:PutObjectTagging`, `s3:PutBucketVersioning`, `s3:CreateMultipartUpload`, `s3:UploadPart`, `s3:CompleteMultipartUpload`, `s3:AbortMultipartUpload`, `s3:CopyObject` |
-| `delete` | Remove objects and buckets | `s3:DeleteObject`, `s3:DeleteObjectVersion`, `s3:DeleteBucket` |
+| `delete` | Remove objects, versions, and buckets | `s3:DeleteObject`, `s3:DeleteObjectVersion`, `s3:DeleteBucket`, `s3:DeleteObjectTagging` |
-| `share` | Manage ACLs | `s3:PutObjectAcl`, `s3:PutBucketAcl`, `s3:GetBucketAcl` |
+| `share` | Manage Access Control Lists (ACLs) | `s3:PutObjectAcl`, `s3:PutBucketAcl`, `s3:GetBucketAcl` |
 | `policy` | Manage bucket policies | `s3:PutBucketPolicy`, `s3:GetBucketPolicy`, `s3:DeleteBucketPolicy` |
-| `replication` | Configure and manage replication | `s3:GetReplicationConfiguration`, `s3:PutReplicationConfiguration`, `s3:ReplicateObject`, `s3:ReplicateTags`, `s3:ReplicateDelete` |
-| `iam:list_users` | View IAM users | `iam:ListUsers` |
-| `iam:create_user` | Create IAM users | `iam:CreateUser` |
+| `lifecycle` | Manage lifecycle rules | `s3:GetLifecycleConfiguration`, `s3:PutLifecycleConfiguration`, `s3:DeleteLifecycleConfiguration`, `s3:GetBucketLifecycle`, `s3:PutBucketLifecycle` |
+| `cors` | Manage CORS configuration | `s3:GetBucketCors`, `s3:PutBucketCors`, `s3:DeleteBucketCors` |
+| `replication` | Configure and manage replication | `s3:GetReplicationConfiguration`, `s3:PutReplicationConfiguration`, `s3:DeleteReplicationConfiguration`, `s3:ReplicateObject`, `s3:ReplicateTags`, `s3:ReplicateDelete` |
+
+#### IAM Actions (User Management)
+
+| Action | Description | AWS Aliases |
+| --- | --- | --- |
+| `iam:list_users` | View all IAM users and their policies | `iam:ListUsers` |
+| `iam:create_user` | Create new IAM users | `iam:CreateUser` |
 | `iam:delete_user` | Delete IAM users | `iam:DeleteUser` |
-| `iam:rotate_key` | Rotate user secrets | `iam:RotateAccessKey` |
+| `iam:rotate_key` | Rotate user secret keys | `iam:RotateAccessKey` |
 | `iam:update_policy` | Modify user policies | `iam:PutUserPolicy` |
-| `iam:*` | All IAM actions (admin wildcard) | — |
+| `iam:*` | **Admin wildcard** – grants all IAM actions | — |

-### Example Policies
+#### Wildcards
+
+| Wildcard | Scope | Description |
+| --- | --- | --- |
+| `*` (in actions) | All S3 actions | Grants `list`, `read`, `write`, `delete`, `share`, `policy`, `lifecycle`, `cors`, `replication` |
+| `iam:*` | All IAM actions | Grants all `iam:*` actions for user management |
+| `*` (in bucket) | All buckets | Policy applies to every bucket |
+
+### IAM Policy Structure
+
+User policies are stored as a JSON array of policy objects. Each object specifies a bucket and the allowed actions:

-**Full Control (admin):**
 ```json
-[{"bucket": "*", "actions": ["list", "read", "write", "delete", "share", "policy", "replication", "iam:*"]}]
+[
+  {
+    "bucket": "<bucket-name-or-wildcard>",
+    "actions": ["<action1>", "<action2>", ...]
+  }
+]
 ```

-**Read-Only:**
+**Fields:**
+- `bucket`: The bucket name (case-insensitive) or `*` for all buckets
+- `actions`: Array of action strings (simple names or AWS aliases)
+
+### Example User Policies
+
+**Full Administrator (complete system access):**
+```json
+[{"bucket": "*", "actions": ["list", "read", "write", "delete", "share", "policy", "lifecycle", "cors", "replication", "iam:*"]}]
+```
+
+**Read-Only User (browse and download only):**
 ```json
 [{"bucket": "*", "actions": ["list", "read"]}]
 ```

-**Single Bucket Access (no listing other buckets):**
+**Single Bucket Full Access (no access to other buckets):**
 ```json
-[{"bucket": "user-bucket", "actions": ["read", "write", "delete"]}]
+[{"bucket": "user-bucket", "actions": ["list", "read", "write", "delete"]}]
 ```

-**Bucket Access with Replication:**
+**Multiple Bucket Access (different permissions per bucket):**
 ```json
-[{"bucket": "my-bucket", "actions": ["list", "read", "write", "delete", "replication"]}]
+[
+  {"bucket": "public-data", "actions": ["list", "read"]},
+  {"bucket": "my-uploads", "actions": ["list", "read", "write", "delete"]},
+  {"bucket": "team-shared", "actions": ["list", "read", "write"]}
+]
 ```

+**IAM Manager (manage users but no data access):**
+```json
+[{"bucket": "*", "actions": ["iam:list_users", "iam:create_user", "iam:delete_user", "iam:rotate_key", "iam:update_policy"]}]
+```
+
+**Replication Operator (manage replication only):**
+```json
+[{"bucket": "*", "actions": ["list", "read", "replication"]}]
+```
+
+**Lifecycle Manager (configure object expiration):**
+```json
+[{"bucket": "*", "actions": ["list", "lifecycle"]}]
+```
+
+**CORS Administrator (configure cross-origin access):**
+```json
+[{"bucket": "*", "actions": ["cors"]}]
+```
+
+**Bucket Administrator (full bucket config, no IAM access):**
+```json
+[{"bucket": "my-bucket", "actions": ["list", "read", "write", "delete", "policy", "lifecycle", "cors"]}]
+```
+
+**Upload-Only User (write but cannot read back):**
+```json
+[{"bucket": "drop-box", "actions": ["write"]}]
+```
+
+**Backup Operator (read, list, and replicate):**
+```json
+[{"bucket": "*", "actions": ["list", "read", "replication"]}]
+```
+
+### Using AWS-Style Action Names
+
+You can use AWS S3 action names instead of simple names. They are automatically normalized:
+
+```json
+[
+  {
+    "bucket": "my-bucket",
+    "actions": [
+      "s3:ListBucket",
+      "s3:GetObject",
+      "s3:PutObject",
+      "s3:DeleteObject"
+    ]
+  }
+]
+```
+
+This is equivalent to:
+```json
+[{"bucket": "my-bucket", "actions": ["list", "read", "write", "delete"]}]
+```
+
+### Managing Users via API
+
+```bash
+# List all users (requires iam:list_users)
+curl http://localhost:5000/iam/users \
+  -H "X-Access-Key: ..." -H "X-Secret-Key: ..."
+
+# Create a new user (requires iam:create_user)
+curl -X POST http://localhost:5000/iam/users \
+  -H "Content-Type: application/json" \
+  -H "X-Access-Key: ..." -H "X-Secret-Key: ..." \
+  -d '{
+    "display_name": "New User",
+    "policies": [{"bucket": "*", "actions": ["list", "read"]}]
+  }'
+
+# Rotate user secret (requires iam:rotate_key)
+curl -X POST http://localhost:5000/iam/users/<access-key>/rotate \
+  -H "X-Access-Key: ..." -H "X-Secret-Key: ..."
+
+# Update user policies (requires iam:update_policy)
+curl -X PUT http://localhost:5000/iam/users/<access-key>/policies \
+  -H "Content-Type: application/json" \
+  -H "X-Access-Key: ..." -H "X-Secret-Key: ..." \
+  -d '[{"bucket": "*", "actions": ["list", "read", "write"]}]'
+
+# Delete a user (requires iam:delete_user)
+curl -X DELETE http://localhost:5000/iam/users/<access-key> \
+  -H "X-Access-Key: ..." -H "X-Secret-Key: ..."
+```
+
+### Permission Precedence
+
+When a request is made, permissions are evaluated in this order:
+
+1. **Authentication** – Verify the access key and secret key are valid
+2. **Lockout Check** – Ensure the account is not locked due to failed attempts
+3. **IAM Policy Check** – Verify the user has the required action for the target bucket
+4. **Bucket Policy Check** – If a bucket policy exists, verify it allows the action
+
+A request is allowed only if:
+- The IAM policy grants the action, AND
+- The bucket policy allows the action (or no bucket policy exists)
+
+### Common Permission Scenarios
+
+| Scenario | Required Actions |
+| --- | --- |
+| Browse bucket contents | `list` |
+| Download a file | `read` |
+| Upload a file | `write` |
+| Delete a file | `delete` |
+| Generate presigned URL (GET) | `read` |
+| Generate presigned URL (PUT) | `write` |
+| Generate presigned URL (DELETE) | `delete` |
+| Enable versioning | `write` (includes `s3:PutBucketVersioning`) |
+| View bucket policy | `policy` |
+| Modify bucket policy | `policy` |
+| Configure lifecycle rules | `lifecycle` |
+| View lifecycle rules | `lifecycle` |
+| Configure CORS | `cors` |
+| View CORS rules | `cors` |
+| Configure replication | `replication` (admin-only for creation) |
+| Pause/resume replication | `replication` |
+| Manage other users | `iam:*` or specific `iam:` actions |
+| Set bucket quotas | `iam:*` or `iam:list_users` (admin feature) |
+
+### Security Best Practices
+
+1. **Principle of Least Privilege** – Grant only the permissions users need
+2. **Avoid Wildcards** – Use specific bucket names instead of `*` when possible
+3. **Rotate Secrets Regularly** – Use the rotate key feature periodically
+4. **Separate Admin Accounts** – Don't use admin accounts for daily operations
+5. **Monitor Failed Logins** – Check logs for repeated authentication failures
+6. **Use Bucket Policies for Fine-Grained Control** – Combine with IAM for defense in depth
+
 ## 5. Bucket Policies & Presets

 - **Storage**: Policies are persisted in `data/.myfsio.sys/config/bucket_policies.json` under `{"policies": {"bucket": {...}}}`.
@@ -617,7 +860,7 @@ The API expects every request to include `X-Access-Key` and `X-Secret-Key` heade
 ### Editing via CLI

 ```bash
-curl -X PUT http://127.0.0.1:5000/bucket-policy/test \
+curl -X PUT "http://127.0.0.1:5000/test?policy" \
   -H "Content-Type: application/json" \
   -H "X-Access-Key: ..." -H "X-Secret-Key: ..." \
   -d '{
@@ -680,9 +923,8 @@ Drag files directly onto the objects table to upload them to the current bucke
 ## 6. Presigned URLs

 - Trigger from the UI using the **Presign** button after selecting an object.
-- Or call `POST /presign/<bucket>/<key>` with JSON `{ "method": "GET", "expires_in": 900 }`.
 - Supported methods: `GET`, `PUT`, `DELETE`; expiration must be `1..604800` seconds.
-- The service signs requests using the caller’s IAM credentials and enforces bucket policies both when issuing and when the presigned URL is used.
+- The service signs requests using the caller's IAM credentials and enforces bucket policies both when issuing and when the presigned URL is used.
 - Legacy share links have been removed; presigned URLs now handle both private and public workflows.

 ### Multipart Upload Example
@@ -905,7 +1147,84 @@ curl -X PUT "http://localhost:5000/bucket/<bucket>?quota" \
 </Error>
 ```

-## 9. Site Replication
+## 9. Operation Metrics
+
+Operation metrics provide real-time visibility into API request statistics, including request counts, latency, error rates, and bandwidth usage.
+
+### Enabling Operation Metrics
+
+By default, operation metrics are disabled. Enable by setting the environment variable:
+
+```bash
+OPERATION_METRICS_ENABLED=true python run.py
+```
+
+Or in your `myfsio.env` file:
+```
+OPERATION_METRICS_ENABLED=true
+OPERATION_METRICS_INTERVAL_MINUTES=5
+OPERATION_METRICS_RETENTION_HOURS=24
+```
+
+### Configuration Options
+
+| Variable | Default | Description |
+|----------|---------|-------------|
+| `OPERATION_METRICS_ENABLED` | `false` | Enable/disable operation metrics |
+| `OPERATION_METRICS_INTERVAL_MINUTES` | `5` | Snapshot interval (minutes) |
+| `OPERATION_METRICS_RETENTION_HOURS` | `24` | History retention period (hours) |
+
+### What's Tracked
+
+**Request Statistics:**
+- Request counts by HTTP method (GET, PUT, POST, DELETE, HEAD, OPTIONS)
+- Response status codes grouped by class (2xx, 3xx, 4xx, 5xx)
+- Latency statistics (min, max, average)
+- Bytes transferred in/out
+
+**Endpoint Breakdown:**
+- `object` - Object operations (GET/PUT/DELETE objects)
+- `bucket` - Bucket operations (list, create, delete buckets)
+- `ui` - Web UI requests
+- `service` - Health checks, internal endpoints
+- `kms` - KMS API operations
+
+**S3 Error Codes:**
+Tracks API-specific error codes like `NoSuchKey`, `AccessDenied`, `BucketNotFound`. Note: These are separate from HTTP status codes - a 404 from the UI won't appear here, only S3 API errors.
+
+### API Endpoints
+
+```bash
+# Get current operation metrics
+curl http://localhost:5100/ui/metrics/operations \
+  -H "X-Access-Key: ..." -H "X-Secret-Key: ..."
+
+# Get operation metrics history
+curl http://localhost:5100/ui/metrics/operations/history \
+  -H "X-Access-Key: ..." -H "X-Secret-Key: ..."
+
+# Filter history by time range
+curl "http://localhost:5100/ui/metrics/operations/history?hours=6" \
+  -H "X-Access-Key: ..." -H "X-Secret-Key: ..."
+```
+
+### Storage Location
+
+Operation metrics data is stored at:
+```
+data/.myfsio.sys/config/operation_metrics.json
+```
+
+### UI Dashboard
+
+When enabled, the Metrics page (`/ui/metrics`) shows an "API Operations" section with:
+- Summary cards: Requests, Success Rate, Errors, Latency, Bytes In, Bytes Out
+- Charts: Requests by Method (doughnut), Requests by Status (bar), Requests by Endpoint (horizontal bar)
+- S3 Error Codes table with distribution
+
+Data refreshes every 5 seconds.

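A sketch of consuming the operations endpoint from a script instead of the dashboard. It reuses the same headers as the curl examples above; the JSON field names are not specified in this document, so the loop simply dumps the body.

```python
# Sketch: poll the operation-metrics endpoint a few times.
import time

import requests

HEADERS = {"X-Access-Key": "...", "X-Secret-Key": "..."}
URL = "http://localhost:5100/ui/metrics/operations"

for _ in range(3):
    resp = requests.get(URL, headers=HEADERS, timeout=10)
    resp.raise_for_status()
    print(resp.json())  # structure depends on the server version
    time.sleep(5)       # matches the dashboard refresh cadence noted above
```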
+## 10. Site Replication

 ### Permission Model

@@ -1042,7 +1361,7 @@ To set up two-way replication (Server A ↔ Server B):

 **Note**: Deleting a bucket will automatically remove its associated replication configuration.

-## 11. Running Tests
+## 12. Running Tests

 ```bash
 pytest -q
@@ -1052,7 +1371,7 @@ The suite now includes a boto3 integration test that spins up a live HTTP server

 The suite covers bucket CRUD, presigned downloads, bucket policy enforcement, and regression tests for anonymous reads when a Public policy is attached.

-## 12. Troubleshooting
+## 13. Troubleshooting

 | Symptom | Likely Cause | Fix |
 | --- | --- | --- |
@@ -1061,7 +1380,7 @@ The suite covers bucket CRUD, presigned downloads, bucket policy enforcement, an
 | Presign modal errors with 403 | IAM user lacks `read/write/delete` for target bucket or bucket policy denies | Update IAM inline policies or remove conflicting deny statements. |
 | Large upload rejected immediately | File exceeds `MAX_UPLOAD_SIZE` | Increase env var or shrink object. |

-## 13. API Matrix
+## 14. API Matrix

 ```
 GET /                          # List buckets
@@ -1071,10 +1390,9 @@ GET /<bucket>                  # List objects
 PUT /<bucket>/<key>            # Upload object
 GET /<bucket>/<key>            # Download object
 DELETE /<bucket>/<key>         # Delete object
-POST /presign/<bucket>/<key>   # Generate SigV4 URL
-GET /bucket-policy/<bucket>    # Fetch policy
-PUT /bucket-policy/<bucket>    # Upsert policy
-DELETE /bucket-policy/<bucket> # Delete policy
+GET /<bucket>?policy           # Fetch policy
+PUT /<bucket>?policy           # Upsert policy
+DELETE /<bucket>?policy        # Delete policy
 GET /<bucket>?quota            # Get bucket quota
 PUT /<bucket>?quota            # Set bucket quota (admin only)
 ```
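The policy routes now hang off the bucket itself as query-string verbs. A small sketch of fetching and removing a policy with `requests`; the bucket name and credentials are placeholders, and the PUT body format is unchanged from the CLI example in section 5, so it is omitted here.

```python
# Sketch: read and remove a bucket policy via the new ?policy routes.
import requests

HEADERS = {"X-Access-Key": "...", "X-Secret-Key": "..."}
BASE = "http://localhost:5000/my-bucket"

policy = requests.get(f"{BASE}?policy", headers=HEADERS, timeout=10)
print(policy.status_code, policy.text[:200])

# The handler returns 204 No Content when the policy is deleted.
deleted = requests.delete(f"{BASE}?policy", headers=HEADERS, timeout=10)
print(deleted.status_code)
```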
@@ -2,9 +2,11 @@ Flask>=3.1.2
 Flask-Limiter>=4.1.1
 Flask-Cors>=6.0.2
 Flask-WTF>=1.2.2
+python-dotenv>=1.2.1
 pytest>=9.0.2
 requests>=2.32.5
 boto3>=1.42.14
 waitress>=3.0.2
 psutil>=7.1.3
 cryptography>=46.0.3
+defusedxml>=0.7.1
run.py (11 changes)
@@ -6,6 +6,17 @@ import os
 import sys
 import warnings
 from multiprocessing import Process
+from pathlib import Path
+
+from dotenv import load_dotenv
+
+for _env_file in [
+    Path("/opt/myfsio/myfsio.env"),
+    Path.cwd() / ".env",
+    Path.cwd() / "myfsio.env",
+]:
+    if _env_file.exists():
+        load_dotenv(_env_file, override=True)

 from app import create_api_app, create_ui_app
 from app.config import AppConfig
static/js/bucket-detail-main.js (new file, 4238 lines; diff suppressed because it is too large)
static/js/connections-management.js (new file, 344 lines)
@@ -0,0 +1,344 @@
+window.ConnectionsManagement = (function() {
+    'use strict';
+
+    var endpoints = {};
+    var csrfToken = '';
+
+    function init(config) {
+        endpoints = config.endpoints || {};
+        csrfToken = config.csrfToken || '';
+
+        setupEventListeners();
+        checkAllConnectionHealth();
+    }
+
+    function togglePassword(id) {
+        var input = document.getElementById(id);
+        if (input) {
+            input.type = input.type === 'password' ? 'text' : 'password';
+        }
+    }
+
+    async function testConnection(formId, resultId) {
+        var form = document.getElementById(formId);
+        var resultDiv = document.getElementById(resultId);
+        if (!form || !resultDiv) return;
+
+        var formData = new FormData(form);
+        var data = {};
+        formData.forEach(function(value, key) {
+            if (key !== 'csrf_token') {
+                data[key] = value;
+            }
+        });
+
+        resultDiv.innerHTML = '<div class="text-info"><span class="spinner-border spinner-border-sm" role="status" aria-hidden="true"></span> Testing connection...</div>';
+
+        var controller = new AbortController();
+        var timeoutId = setTimeout(function() { controller.abort(); }, 20000);
+
+        try {
+            var response = await fetch(endpoints.test, {
+                method: 'POST',
+                headers: {
+                    'Content-Type': 'application/json',
+                    'X-CSRFToken': csrfToken
+                },
+                body: JSON.stringify(data),
+                signal: controller.signal
+            });
+            clearTimeout(timeoutId);
+
+            var result = await response.json();
+            if (response.ok) {
+                resultDiv.innerHTML = '<div class="text-success">' +
+                    '<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="me-1" viewBox="0 0 16 16">' +
+                    '<path d="M16 8A8 8 0 1 1 0 8a8 8 0 0 1 16 0zm-3.97-3.03a.75.75 0 0 0-1.08.022L7.477 9.417 5.384 7.323a.75.75 0 0 0-1.06 1.06L6.97 11.03a.75.75 0 0 0 1.079-.02l3.992-4.99a.75.75 0 0 0-.01-1.05z"/>' +
+                    '</svg>' + window.UICore.escapeHtml(result.message) + '</div>';
+            } else {
+                resultDiv.innerHTML = '<div class="text-danger">' +
+                    '<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="me-1" viewBox="0 0 16 16">' +
+                    '<path d="M16 8A8 8 0 1 1 0 8a8 8 0 0 1 16 0zM5.354 4.646a.5.5 0 1 0-.708.708L7.293 8l-2.647 2.646a.5.5 0 0 0 .708.708L8 8.707l2.646 2.647a.5.5 0 0 0 .708-.708L8.707 8l2.647-2.646a.5.5 0 0 0-.708-.708L8 7.293 5.354 4.646z"/>' +
+                    '</svg>' + window.UICore.escapeHtml(result.message) + '</div>';
+            }
+        } catch (error) {
+            clearTimeout(timeoutId);
+            var message = error.name === 'AbortError'
+                ? 'Connection test timed out - endpoint may be unreachable'
+                : 'Connection failed: Network error';
+            resultDiv.innerHTML = '<div class="text-danger">' +
+                '<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="me-1" viewBox="0 0 16 16">' +
+                '<path d="M16 8A8 8 0 1 1 0 8a8 8 0 0 1 16 0zM5.354 4.646a.5.5 0 1 0-.708.708L7.293 8l-2.647 2.646a.5.5 0 0 0 .708.708L8 8.707l2.646 2.647a.5.5 0 0 0 .708-.708L8.707 8l2.647-2.646a.5.5 0 0 0-.708-.708L8 7.293 5.354 4.646z"/>' +
+                '</svg>' + message + '</div>';
+        }
+    }
+
+    async function checkConnectionHealth(connectionId, statusEl) {
+        if (!statusEl) return;
+
+        try {
+            var controller = new AbortController();
+            var timeoutId = setTimeout(function() { controller.abort(); }, 15000);
+
+            var response = await fetch(endpoints.healthTemplate.replace('CONNECTION_ID', connectionId), {
|
||||||
|
signal: controller.signal
|
||||||
|
});
|
||||||
|
clearTimeout(timeoutId);
|
||||||
|
|
||||||
|
var data = await response.json();
|
||||||
|
if (data.healthy) {
|
||||||
|
statusEl.innerHTML = '<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="text-success" viewBox="0 0 16 16">' +
|
||||||
|
'<path d="M16 8A8 8 0 1 1 0 8a8 8 0 0 1 16 0zm-3.97-3.03a.75.75 0 0 0-1.08.022L7.477 9.417 5.384 7.323a.75.75 0 0 0-1.06 1.06L6.97 11.03a.75.75 0 0 0 1.079-.02l3.992-4.99a.75.75 0 0 0-.01-1.05z"/></svg>';
|
||||||
|
statusEl.setAttribute('data-status', 'healthy');
|
||||||
|
statusEl.setAttribute('title', 'Connected');
|
||||||
|
} else {
|
||||||
|
statusEl.innerHTML = '<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="text-danger" viewBox="0 0 16 16">' +
|
||||||
|
'<path d="M16 8A8 8 0 1 1 0 8a8 8 0 0 1 16 0zM5.354 4.646a.5.5 0 1 0-.708.708L7.293 8l-2.647 2.646a.5.5 0 0 0 .708.708L8 8.707l2.646 2.647a.5.5 0 0 0 .708-.708L8.707 8l2.647-2.646a.5.5 0 0 0-.708-.708L8 7.293 5.354 4.646z"/></svg>';
|
||||||
|
statusEl.setAttribute('data-status', 'unhealthy');
|
||||||
|
statusEl.setAttribute('title', data.error || 'Unreachable');
|
||||||
|
}
|
||||||
|
} catch (error) {
|
||||||
|
statusEl.innerHTML = '<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="text-warning" viewBox="0 0 16 16">' +
|
||||||
|
'<path d="M8.982 1.566a1.13 1.13 0 0 0-1.96 0L.165 13.233c-.457.778.091 1.767.98 1.767h13.713c.889 0 1.438-.99.98-1.767L8.982 1.566zM8 5c.535 0 .954.462.9.995l-.35 3.507a.552.552 0 0 1-1.1 0L7.1 5.995A.905.905 0 0 1 8 5zm.002 6a1 1 0 1 1 0 2 1 1 0 0 1 0-2z"/></svg>';
|
||||||
|
statusEl.setAttribute('data-status', 'unknown');
|
||||||
|
statusEl.setAttribute('title', 'Could not check status');
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
function checkAllConnectionHealth() {
|
||||||
|
var rows = document.querySelectorAll('tr[data-connection-id]');
|
||||||
|
rows.forEach(function(row, index) {
|
||||||
|
var connectionId = row.getAttribute('data-connection-id');
|
||||||
|
var statusEl = row.querySelector('.connection-status');
|
||||||
|
if (statusEl) {
|
||||||
|
setTimeout(function() {
|
||||||
|
checkConnectionHealth(connectionId, statusEl);
|
||||||
|
}, index * 200);
|
||||||
|
}
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
function updateConnectionCount() {
|
||||||
|
var countBadge = document.querySelector('.badge.bg-primary.bg-opacity-10.text-primary.fs-6');
|
||||||
|
if (countBadge) {
|
||||||
|
var remaining = document.querySelectorAll('tr[data-connection-id]').length;
|
||||||
|
countBadge.textContent = remaining + ' connection' + (remaining !== 1 ? 's' : '');
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
function createConnectionRowHtml(conn) {
|
||||||
|
var ak = conn.access_key || '';
|
||||||
|
var maskedKey = ak.length > 12 ? ak.slice(0, 8) + '...' + ak.slice(-4) : ak;
|
||||||
|
|
||||||
|
return '<tr data-connection-id="' + window.UICore.escapeHtml(conn.id) + '">' +
|
||||||
|
'<td class="text-center">' +
|
||||||
|
'<span class="connection-status" data-status="checking" title="Checking...">' +
|
||||||
|
'<span class="spinner-border spinner-border-sm text-muted" role="status" style="width: 12px; height: 12px;"></span>' +
|
||||||
|
'</span></td>' +
|
||||||
|
'<td><div class="d-flex align-items-center gap-2">' +
|
||||||
|
'<div class="connection-icon"><svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" viewBox="0 0 16 16">' +
|
||||||
|
'<path d="M4.406 3.342A5.53 5.53 0 0 1 8 2c2.69 0 4.923 2 5.166 4.579C14.758 6.804 16 8.137 16 9.773 16 11.569 14.502 13 12.687 13H3.781C1.708 13 0 11.366 0 9.318c0-1.763 1.266-3.223 2.942-3.593.143-.863.698-1.723 1.464-2.383z"/></svg></div>' +
|
||||||
|
'<span class="fw-medium">' + window.UICore.escapeHtml(conn.name) + '</span>' +
|
||||||
|
'</div></td>' +
|
||||||
|
'<td><span class="text-muted small text-truncate d-inline-block" style="max-width: 200px;" title="' + window.UICore.escapeHtml(conn.endpoint_url) + '">' + window.UICore.escapeHtml(conn.endpoint_url) + '</span></td>' +
|
||||||
|
'<td><span class="badge bg-primary bg-opacity-10 text-primary">' + window.UICore.escapeHtml(conn.region) + '</span></td>' +
|
||||||
|
'<td><code class="small">' + window.UICore.escapeHtml(maskedKey) + '</code></td>' +
|
||||||
|
'<td class="text-end"><div class="btn-group btn-group-sm" role="group">' +
|
||||||
|
'<button type="button" class="btn btn-outline-secondary" data-bs-toggle="modal" data-bs-target="#editConnectionModal" ' +
|
||||||
|
'data-id="' + window.UICore.escapeHtml(conn.id) + '" data-name="' + window.UICore.escapeHtml(conn.name) + '" ' +
|
||||||
|
'data-endpoint="' + window.UICore.escapeHtml(conn.endpoint_url) + '" data-region="' + window.UICore.escapeHtml(conn.region) + '" ' +
|
||||||
|
'data-access="' + window.UICore.escapeHtml(conn.access_key) + '" data-secret="' + window.UICore.escapeHtml(conn.secret_key || '') + '" title="Edit connection">' +
|
||||||
|
'<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" viewBox="0 0 16 16">' +
|
||||||
|
'<path d="M12.146.146a.5.5 0 0 1 .708 0l3 3a.5.5 0 0 1 0 .708l-10 10a.5.5 0 0 1-.168.11l-5 2a.5.5 0 0 1-.65-.65l2-5a.5.5 0 0 1 .11-.168l10-10zM11.207 2.5 13.5 4.793 14.793 3.5 12.5 1.207 11.207 2.5zm1.586 3L10.5 3.207 4 9.707V10h.5a.5.5 0 0 1 .5.5v.5h.5a.5.5 0 0 1 .5.5v.5h.293l6.5-6.5z"/></svg></button>' +
|
||||||
|
'<button type="button" class="btn btn-outline-danger" data-bs-toggle="modal" data-bs-target="#deleteConnectionModal" ' +
|
||||||
|
'data-id="' + window.UICore.escapeHtml(conn.id) + '" data-name="' + window.UICore.escapeHtml(conn.name) + '" title="Delete connection">' +
|
||||||
|
'<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" viewBox="0 0 16 16">' +
|
||||||
|
'<path d="M5.5 5.5A.5.5 0 0 1 6 6v6a.5.5 0 0 1-1 0V6a.5.5 0 0 1 .5-.5zm2.5 0a.5.5 0 0 1 .5.5v6a.5.5 0 0 1-1 0V6a.5.5 0 0 1 .5-.5zm3 .5a.5.5 0 0 0-1 0v6a.5.5 0 0 0 1 0V6z"/>' +
|
||||||
|
'<path fill-rule="evenodd" d="M14.5 3a1 1 0 0 1-1 1H13v9a2 2 0 0 1-2 2H5a2 2 0 0 1-2-2V4h-.5a1 1 0 0 1-1-1V2a1 1 0 0 1 1-1H6a1 1 0 0 1 1-1h2a1 1 0 0 1 1 1h3.5a1 1 0 0 1 1 1v1zM4.118 4 4 4.059V13a1 1 0 0 0 1 1h6a1 1 0 0 0 1-1V4.059L11.882 4H4.118zM2.5 3V2h11v1h-11z"/></svg></button>' +
|
||||||
|
'</div></td></tr>';
|
||||||
|
}
|
||||||
|
|
||||||
|
function setupEventListeners() {
|
||||||
|
var testBtn = document.getElementById('testConnectionBtn');
|
||||||
|
if (testBtn) {
|
||||||
|
testBtn.addEventListener('click', function() {
|
||||||
|
testConnection('createConnectionForm', 'testResult');
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
var editTestBtn = document.getElementById('editTestConnectionBtn');
|
||||||
|
if (editTestBtn) {
|
||||||
|
editTestBtn.addEventListener('click', function() {
|
||||||
|
testConnection('editConnectionForm', 'editTestResult');
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
var editModal = document.getElementById('editConnectionModal');
|
||||||
|
if (editModal) {
|
||||||
|
editModal.addEventListener('show.bs.modal', function(event) {
|
||||||
|
var button = event.relatedTarget;
|
||||||
|
if (!button) return;
|
||||||
|
|
||||||
|
var id = button.getAttribute('data-id');
|
||||||
|
|
||||||
|
document.getElementById('edit_name').value = button.getAttribute('data-name') || '';
|
||||||
|
document.getElementById('edit_endpoint_url').value = button.getAttribute('data-endpoint') || '';
|
||||||
|
document.getElementById('edit_region').value = button.getAttribute('data-region') || '';
|
||||||
|
document.getElementById('edit_access_key').value = button.getAttribute('data-access') || '';
|
||||||
|
document.getElementById('edit_secret_key').value = button.getAttribute('data-secret') || '';
|
||||||
|
document.getElementById('editTestResult').innerHTML = '';
|
||||||
|
|
||||||
|
var form = document.getElementById('editConnectionForm');
|
||||||
|
form.action = endpoints.updateTemplate.replace('CONNECTION_ID', id);
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
var deleteModal = document.getElementById('deleteConnectionModal');
|
||||||
|
if (deleteModal) {
|
||||||
|
deleteModal.addEventListener('show.bs.modal', function(event) {
|
||||||
|
var button = event.relatedTarget;
|
||||||
|
if (!button) return;
|
||||||
|
|
||||||
|
var id = button.getAttribute('data-id');
|
||||||
|
var name = button.getAttribute('data-name');
|
||||||
|
|
||||||
|
document.getElementById('deleteConnectionName').textContent = name;
|
||||||
|
var form = document.getElementById('deleteConnectionForm');
|
||||||
|
form.action = endpoints.deleteTemplate.replace('CONNECTION_ID', id);
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
var createForm = document.getElementById('createConnectionForm');
|
||||||
|
if (createForm) {
|
||||||
|
createForm.addEventListener('submit', function(e) {
|
||||||
|
e.preventDefault();
|
||||||
|
window.UICore.submitFormAjax(createForm, {
|
||||||
|
successMessage: 'Connection created',
|
||||||
|
onSuccess: function(data) {
|
||||||
|
createForm.reset();
|
||||||
|
document.getElementById('testResult').innerHTML = '';
|
||||||
|
|
||||||
|
if (data.connection) {
|
||||||
|
var emptyState = document.querySelector('.empty-state');
|
||||||
|
if (emptyState) {
|
||||||
|
var cardBody = emptyState.closest('.card-body');
|
||||||
|
if (cardBody) {
|
||||||
|
cardBody.innerHTML = '<div class="table-responsive"><table class="table table-hover align-middle mb-0">' +
|
||||||
|
'<thead class="table-light"><tr>' +
|
||||||
|
'<th scope="col" style="width: 50px;">Status</th>' +
|
||||||
|
'<th scope="col">Name</th><th scope="col">Endpoint</th>' +
|
||||||
|
'<th scope="col">Region</th><th scope="col">Access Key</th>' +
|
||||||
|
'<th scope="col" class="text-end">Actions</th></tr></thead>' +
|
||||||
|
'<tbody></tbody></table></div>';
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
var tbody = document.querySelector('table tbody');
|
||||||
|
if (tbody) {
|
||||||
|
tbody.insertAdjacentHTML('beforeend', createConnectionRowHtml(data.connection));
|
||||||
|
var newRow = tbody.lastElementChild;
|
||||||
|
var statusEl = newRow.querySelector('.connection-status');
|
||||||
|
if (statusEl) {
|
||||||
|
checkConnectionHealth(data.connection.id, statusEl);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
updateConnectionCount();
|
||||||
|
} else {
|
||||||
|
location.reload();
|
||||||
|
}
|
||||||
|
}
|
||||||
|
});
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
var editForm = document.getElementById('editConnectionForm');
|
||||||
|
if (editForm) {
|
||||||
|
editForm.addEventListener('submit', function(e) {
|
||||||
|
e.preventDefault();
|
||||||
|
window.UICore.submitFormAjax(editForm, {
|
||||||
|
successMessage: 'Connection updated',
|
||||||
|
onSuccess: function(data) {
|
||||||
|
var modal = bootstrap.Modal.getInstance(document.getElementById('editConnectionModal'));
|
||||||
|
if (modal) modal.hide();
|
||||||
|
|
||||||
|
var connId = editForm.action.split('/').slice(-2)[0];
|
||||||
|
var row = document.querySelector('tr[data-connection-id="' + connId + '"]');
|
||||||
|
if (row && data.connection) {
|
||||||
|
var nameCell = row.querySelector('.fw-medium');
|
||||||
|
if (nameCell) nameCell.textContent = data.connection.name;
|
||||||
|
|
||||||
|
var endpointCell = row.querySelector('.text-truncate');
|
||||||
|
if (endpointCell) {
|
||||||
|
endpointCell.textContent = data.connection.endpoint_url;
|
||||||
|
endpointCell.title = data.connection.endpoint_url;
|
||||||
|
}
|
||||||
|
|
||||||
|
var regionBadge = row.querySelector('.badge.bg-primary');
|
||||||
|
if (regionBadge) regionBadge.textContent = data.connection.region;
|
||||||
|
|
||||||
|
var accessCode = row.querySelector('code.small');
|
||||||
|
if (accessCode && data.connection.access_key) {
|
||||||
|
var ak = data.connection.access_key;
|
||||||
|
accessCode.textContent = ak.slice(0, 8) + '...' + ak.slice(-4);
|
||||||
|
}
|
||||||
|
|
||||||
|
var editBtn = row.querySelector('[data-bs-target="#editConnectionModal"]');
|
||||||
|
if (editBtn) {
|
||||||
|
editBtn.setAttribute('data-name', data.connection.name);
|
||||||
|
editBtn.setAttribute('data-endpoint', data.connection.endpoint_url);
|
||||||
|
editBtn.setAttribute('data-region', data.connection.region);
|
||||||
|
editBtn.setAttribute('data-access', data.connection.access_key);
|
||||||
|
if (data.connection.secret_key) {
|
||||||
|
editBtn.setAttribute('data-secret', data.connection.secret_key);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
var deleteBtn = row.querySelector('[data-bs-target="#deleteConnectionModal"]');
|
||||||
|
if (deleteBtn) {
|
||||||
|
deleteBtn.setAttribute('data-name', data.connection.name);
|
||||||
|
}
|
||||||
|
|
||||||
|
var statusEl = row.querySelector('.connection-status');
|
||||||
|
if (statusEl) {
|
||||||
|
checkConnectionHealth(connId, statusEl);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
});
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
var deleteForm = document.getElementById('deleteConnectionForm');
|
||||||
|
if (deleteForm) {
|
||||||
|
deleteForm.addEventListener('submit', function(e) {
|
||||||
|
e.preventDefault();
|
||||||
|
window.UICore.submitFormAjax(deleteForm, {
|
||||||
|
successMessage: 'Connection deleted',
|
||||||
|
onSuccess: function(data) {
|
||||||
|
var modal = bootstrap.Modal.getInstance(document.getElementById('deleteConnectionModal'));
|
||||||
|
if (modal) modal.hide();
|
||||||
|
|
||||||
|
var connId = deleteForm.action.split('/').slice(-2)[0];
|
||||||
|
var row = document.querySelector('tr[data-connection-id="' + connId + '"]');
|
||||||
|
if (row) {
|
||||||
|
row.remove();
|
||||||
|
}
|
||||||
|
|
||||||
|
updateConnectionCount();
|
||||||
|
|
||||||
|
if (document.querySelectorAll('tr[data-connection-id]').length === 0) {
|
||||||
|
location.reload();
|
||||||
|
}
|
||||||
|
}
|
||||||
|
});
|
||||||
|
});
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
return {
|
||||||
|
init: init,
|
||||||
|
togglePassword: togglePassword,
|
||||||
|
testConnection: testConnection,
|
||||||
|
checkConnectionHealth: checkConnectionHealth
|
||||||
|
};
|
||||||
|
})();
|
||||||
545
static/js/iam-management.js
Normal file
545
static/js/iam-management.js
Normal file
@@ -0,0 +1,545 @@
|
|||||||
|
window.IAMManagement = (function() {
|
||||||
|
'use strict';
|
||||||
|
|
||||||
|
var users = [];
|
||||||
|
var currentUserKey = null;
|
||||||
|
var endpoints = {};
|
||||||
|
var csrfToken = '';
|
||||||
|
var iamLocked = false;
|
||||||
|
|
||||||
|
var policyModal = null;
|
||||||
|
var editUserModal = null;
|
||||||
|
var deleteUserModal = null;
|
||||||
|
var rotateSecretModal = null;
|
||||||
|
var currentRotateKey = null;
|
||||||
|
var currentEditKey = null;
|
||||||
|
var currentDeleteKey = null;
|
||||||
|
|
||||||
|
var policyTemplates = {
|
||||||
|
full: [{ bucket: '*', actions: ['list', 'read', 'write', 'delete', 'share', 'policy', 'replication', 'iam:list_users', 'iam:*'] }],
|
||||||
|
readonly: [{ bucket: '*', actions: ['list', 'read'] }],
|
||||||
|
writer: [{ bucket: '*', actions: ['list', 'read', 'write'] }]
|
||||||
|
};
|
||||||
|
|
||||||
|
function init(config) {
|
||||||
|
users = config.users || [];
|
||||||
|
currentUserKey = config.currentUserKey || null;
|
||||||
|
endpoints = config.endpoints || {};
|
||||||
|
csrfToken = config.csrfToken || '';
|
||||||
|
iamLocked = config.iamLocked || false;
|
||||||
|
|
||||||
|
if (iamLocked) return;
|
||||||
|
|
||||||
|
initModals();
|
||||||
|
setupJsonAutoIndent();
|
||||||
|
setupCopyButtons();
|
||||||
|
setupPolicyEditor();
|
||||||
|
setupCreateUserModal();
|
||||||
|
setupEditUserModal();
|
||||||
|
setupDeleteUserModal();
|
||||||
|
setupRotateSecretModal();
|
||||||
|
setupFormHandlers();
|
||||||
|
}
|
||||||
|
|
||||||
|
function initModals() {
|
||||||
|
var policyModalEl = document.getElementById('policyEditorModal');
|
||||||
|
var editModalEl = document.getElementById('editUserModal');
|
||||||
|
var deleteModalEl = document.getElementById('deleteUserModal');
|
||||||
|
var rotateModalEl = document.getElementById('rotateSecretModal');
|
||||||
|
|
||||||
|
if (policyModalEl) policyModal = new bootstrap.Modal(policyModalEl);
|
||||||
|
if (editModalEl) editUserModal = new bootstrap.Modal(editModalEl);
|
||||||
|
if (deleteModalEl) deleteUserModal = new bootstrap.Modal(deleteModalEl);
|
||||||
|
if (rotateModalEl) rotateSecretModal = new bootstrap.Modal(rotateModalEl);
|
||||||
|
}
|
||||||
|
|
||||||
|
function setupJsonAutoIndent() {
|
||||||
|
window.UICore.setupJsonAutoIndent(document.getElementById('policyEditorDocument'));
|
||||||
|
window.UICore.setupJsonAutoIndent(document.getElementById('createUserPolicies'));
|
||||||
|
}
|
||||||
|
|
||||||
|
function setupCopyButtons() {
|
||||||
|
document.querySelectorAll('.config-copy').forEach(function(button) {
|
||||||
|
button.addEventListener('click', async function() {
|
||||||
|
var targetId = button.dataset.copyTarget;
|
||||||
|
var target = document.getElementById(targetId);
|
||||||
|
if (!target) return;
|
||||||
|
await window.UICore.copyToClipboard(target.innerText, button, 'Copy JSON');
|
||||||
|
});
|
||||||
|
});
|
||||||
|
|
||||||
|
var secretCopyButton = document.querySelector('[data-secret-copy]');
|
||||||
|
if (secretCopyButton) {
|
||||||
|
secretCopyButton.addEventListener('click', async function() {
|
||||||
|
var secretInput = document.getElementById('disclosedSecretValue');
|
||||||
|
if (!secretInput) return;
|
||||||
|
await window.UICore.copyToClipboard(secretInput.value, secretCopyButton, 'Copy');
|
||||||
|
});
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
function getUserPolicies(accessKey) {
|
||||||
|
var user = users.find(function(u) { return u.access_key === accessKey; });
|
||||||
|
return user ? JSON.stringify(user.policies, null, 2) : '';
|
||||||
|
}
|
||||||
|
|
||||||
|
function applyPolicyTemplate(name, textareaEl) {
|
||||||
|
if (policyTemplates[name] && textareaEl) {
|
||||||
|
textareaEl.value = JSON.stringify(policyTemplates[name], null, 2);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
function setupPolicyEditor() {
|
||||||
|
var userLabelEl = document.getElementById('policyEditorUserLabel');
|
||||||
|
var userInputEl = document.getElementById('policyEditorUser');
|
||||||
|
var textareaEl = document.getElementById('policyEditorDocument');
|
||||||
|
|
||||||
|
document.querySelectorAll('[data-policy-template]').forEach(function(button) {
|
||||||
|
button.addEventListener('click', function() {
|
||||||
|
applyPolicyTemplate(button.dataset.policyTemplate, textareaEl);
|
||||||
|
});
|
||||||
|
});
|
||||||
|
|
||||||
|
document.querySelectorAll('[data-policy-editor]').forEach(function(button) {
|
||||||
|
button.addEventListener('click', function() {
|
||||||
|
var key = button.getAttribute('data-access-key');
|
||||||
|
if (!key) return;
|
||||||
|
|
||||||
|
userLabelEl.textContent = key;
|
||||||
|
userInputEl.value = key;
|
||||||
|
textareaEl.value = getUserPolicies(key);
|
||||||
|
|
||||||
|
policyModal.show();
|
||||||
|
});
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
function setupCreateUserModal() {
|
||||||
|
var createUserPoliciesEl = document.getElementById('createUserPolicies');
|
||||||
|
|
||||||
|
document.querySelectorAll('[data-create-policy-template]').forEach(function(button) {
|
||||||
|
button.addEventListener('click', function() {
|
||||||
|
applyPolicyTemplate(button.dataset.createPolicyTemplate, createUserPoliciesEl);
|
||||||
|
});
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
function setupEditUserModal() {
|
||||||
|
var editUserForm = document.getElementById('editUserForm');
|
||||||
|
var editUserDisplayName = document.getElementById('editUserDisplayName');
|
||||||
|
|
||||||
|
document.querySelectorAll('[data-edit-user]').forEach(function(btn) {
|
||||||
|
btn.addEventListener('click', function() {
|
||||||
|
var key = btn.dataset.editUser;
|
||||||
|
var name = btn.dataset.displayName;
|
||||||
|
currentEditKey = key;
|
||||||
|
editUserDisplayName.value = name;
|
||||||
|
editUserForm.action = endpoints.updateUser.replace('ACCESS_KEY', key);
|
||||||
|
editUserModal.show();
|
||||||
|
});
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
function setupDeleteUserModal() {
|
||||||
|
var deleteUserForm = document.getElementById('deleteUserForm');
|
||||||
|
var deleteUserLabel = document.getElementById('deleteUserLabel');
|
||||||
|
var deleteSelfWarning = document.getElementById('deleteSelfWarning');
|
||||||
|
|
||||||
|
document.querySelectorAll('[data-delete-user]').forEach(function(btn) {
|
||||||
|
btn.addEventListener('click', function() {
|
||||||
|
var key = btn.dataset.deleteUser;
|
||||||
|
currentDeleteKey = key;
|
||||||
|
deleteUserLabel.textContent = key;
|
||||||
|
deleteUserForm.action = endpoints.deleteUser.replace('ACCESS_KEY', key);
|
||||||
|
|
||||||
|
if (key === currentUserKey) {
|
||||||
|
deleteSelfWarning.classList.remove('d-none');
|
||||||
|
} else {
|
||||||
|
deleteSelfWarning.classList.add('d-none');
|
||||||
|
}
|
||||||
|
|
||||||
|
deleteUserModal.show();
|
||||||
|
});
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
function setupRotateSecretModal() {
|
||||||
|
var rotateUserLabel = document.getElementById('rotateUserLabel');
|
||||||
|
var confirmRotateBtn = document.getElementById('confirmRotateBtn');
|
||||||
|
var rotateCancelBtn = document.getElementById('rotateCancelBtn');
|
||||||
|
var rotateDoneBtn = document.getElementById('rotateDoneBtn');
|
||||||
|
var rotateSecretConfirm = document.getElementById('rotateSecretConfirm');
|
||||||
|
var rotateSecretResult = document.getElementById('rotateSecretResult');
|
||||||
|
var newSecretKeyInput = document.getElementById('newSecretKey');
|
||||||
|
var copyNewSecretBtn = document.getElementById('copyNewSecret');
|
||||||
|
|
||||||
|
document.querySelectorAll('[data-rotate-user]').forEach(function(btn) {
|
||||||
|
btn.addEventListener('click', function() {
|
||||||
|
currentRotateKey = btn.dataset.rotateUser;
|
||||||
|
rotateUserLabel.textContent = currentRotateKey;
|
||||||
|
|
||||||
|
rotateSecretConfirm.classList.remove('d-none');
|
||||||
|
rotateSecretResult.classList.add('d-none');
|
||||||
|
confirmRotateBtn.classList.remove('d-none');
|
||||||
|
rotateCancelBtn.classList.remove('d-none');
|
||||||
|
rotateDoneBtn.classList.add('d-none');
|
||||||
|
|
||||||
|
rotateSecretModal.show();
|
||||||
|
});
|
||||||
|
});
|
||||||
|
|
||||||
|
if (confirmRotateBtn) {
|
||||||
|
confirmRotateBtn.addEventListener('click', async function() {
|
||||||
|
if (!currentRotateKey) return;
|
||||||
|
|
||||||
|
window.UICore.setButtonLoading(confirmRotateBtn, true, 'Rotating...');
|
||||||
|
|
||||||
|
try {
|
||||||
|
var url = endpoints.rotateSecret.replace('ACCESS_KEY', currentRotateKey);
|
||||||
|
var response = await fetch(url, {
|
||||||
|
method: 'POST',
|
||||||
|
headers: {
|
||||||
|
'Accept': 'application/json',
|
||||||
|
'X-CSRFToken': csrfToken
|
||||||
|
}
|
||||||
|
});
|
||||||
|
|
||||||
|
if (!response.ok) {
|
||||||
|
var data = await response.json();
|
||||||
|
throw new Error(data.error || 'Failed to rotate secret');
|
||||||
|
}
|
||||||
|
|
||||||
|
var data = await response.json();
|
||||||
|
newSecretKeyInput.value = data.secret_key;
|
||||||
|
|
||||||
|
rotateSecretConfirm.classList.add('d-none');
|
||||||
|
rotateSecretResult.classList.remove('d-none');
|
||||||
|
confirmRotateBtn.classList.add('d-none');
|
||||||
|
rotateCancelBtn.classList.add('d-none');
|
||||||
|
rotateDoneBtn.classList.remove('d-none');
|
||||||
|
|
||||||
|
} catch (err) {
|
||||||
|
if (window.showToast) {
|
||||||
|
window.showToast(err.message, 'Error', 'danger');
|
||||||
|
}
|
||||||
|
rotateSecretModal.hide();
|
||||||
|
} finally {
|
||||||
|
window.UICore.setButtonLoading(confirmRotateBtn, false);
|
||||||
|
}
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
if (copyNewSecretBtn) {
|
||||||
|
copyNewSecretBtn.addEventListener('click', async function() {
|
||||||
|
await window.UICore.copyToClipboard(newSecretKeyInput.value, copyNewSecretBtn, 'Copy');
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
if (rotateDoneBtn) {
|
||||||
|
rotateDoneBtn.addEventListener('click', function() {
|
||||||
|
window.location.reload();
|
||||||
|
});
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
function createUserCardHtml(accessKey, displayName, policies) {
|
||||||
|
var policyBadges = '';
|
||||||
|
if (policies && policies.length > 0) {
|
||||||
|
policyBadges = policies.map(function(p) {
|
||||||
|
var actionText = p.actions && p.actions.includes('*') ? 'full' : (p.actions ? p.actions.length : 0);
|
||||||
|
return '<span class="badge bg-primary bg-opacity-10 text-primary">' +
|
||||||
|
'<svg xmlns="http://www.w3.org/2000/svg" width="10" height="10" fill="currentColor" class="me-1" viewBox="0 0 16 16">' +
|
||||||
|
'<path d="M2.522 5H2a.5.5 0 0 0-.494.574l1.372 9.149A1.5 1.5 0 0 0 4.36 16h7.278a1.5 1.5 0 0 0 1.483-1.277l1.373-9.149A.5.5 0 0 0 14 5h-.522A5.5 5.5 0 0 0 2.522 5zm1.005 0a4.5 4.5 0 0 1 8.945 0H3.527z"/>' +
|
||||||
|
'</svg>' + window.UICore.escapeHtml(p.bucket) +
|
||||||
|
'<span class="opacity-75">(' + actionText + ')</span></span>';
|
||||||
|
}).join('');
|
||||||
|
} else {
|
||||||
|
policyBadges = '<span class="badge bg-secondary bg-opacity-10 text-secondary">No policies</span>';
|
||||||
|
}
|
||||||
|
|
||||||
|
return '<div class="col-md-6 col-xl-4">' +
|
||||||
|
'<div class="card h-100 iam-user-card">' +
|
||||||
|
'<div class="card-body">' +
|
||||||
|
'<div class="d-flex align-items-start justify-content-between mb-3">' +
|
||||||
|
'<div class="d-flex align-items-center gap-3 min-width-0 overflow-hidden">' +
|
||||||
|
'<div class="user-avatar user-avatar-lg flex-shrink-0">' +
|
||||||
|
'<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" fill="currentColor" viewBox="0 0 16 16">' +
|
||||||
|
'<path d="M8 8a3 3 0 1 0 0-6 3 3 0 0 0 0 6zm2-3a2 2 0 1 1-4 0 2 2 0 0 1 4 0zm4 8c0 1-1 1-1 1H3s-1 0-1-1 1-4 6-4 6 3 6 4zm-1-.004c-.001-.246-.154-.986-.832-1.664C11.516 10.68 10.289 10 8 10c-2.29 0-3.516.68-4.168 1.332-.678.678-.83 1.418-.832 1.664h10z"/>' +
|
||||||
|
'</svg></div>' +
|
||||||
|
'<div class="min-width-0">' +
|
||||||
|
'<h6 class="fw-semibold mb-0 text-truncate" title="' + window.UICore.escapeHtml(displayName) + '">' + window.UICore.escapeHtml(displayName) + '</h6>' +
|
||||||
|
'<code class="small text-muted d-block text-truncate" title="' + window.UICore.escapeHtml(accessKey) + '">' + window.UICore.escapeHtml(accessKey) + '</code>' +
|
||||||
|
'</div></div>' +
|
||||||
|
'<div class="dropdown flex-shrink-0">' +
|
||||||
|
'<button class="btn btn-sm btn-icon" type="button" data-bs-toggle="dropdown" aria-expanded="false">' +
|
||||||
|
'<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" viewBox="0 0 16 16">' +
|
||||||
|
'<path d="M9.5 13a1.5 1.5 0 1 1-3 0 1.5 1.5 0 0 1 3 0zm0-5a1.5 1.5 0 1 1-3 0 1.5 1.5 0 0 1 3 0zm0-5a1.5 1.5 0 1 1-3 0 1.5 1.5 0 0 1 3 0z"/>' +
|
||||||
|
'</svg></button>' +
|
||||||
|
'<ul class="dropdown-menu dropdown-menu-end">' +
|
||||||
|
'<li><button class="dropdown-item" type="button" data-edit-user="' + window.UICore.escapeHtml(accessKey) + '" data-display-name="' + window.UICore.escapeHtml(displayName) + '">' +
|
||||||
|
'<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-2" viewBox="0 0 16 16"><path d="M12.146.146a.5.5 0 0 1 .708 0l3 3a.5.5 0 0 1 0 .708l-10 10a.5.5 0 0 1-.168.11l-5 2a.5.5 0 0 1-.65-.65l2-5a.5.5 0 0 1 .11-.168l10-10zM11.207 2.5 13.5 4.793 14.793 3.5 12.5 1.207 11.207 2.5zm1.586 3L10.5 3.207 4 9.707V10h.5a.5.5 0 0 1 .5.5v.5h.5a.5.5 0 0 1 .5.5v.5h.293l6.5-6.5z"/></svg>Edit Name</button></li>' +
|
||||||
|
'<li><button class="dropdown-item" type="button" data-rotate-user="' + window.UICore.escapeHtml(accessKey) + '">' +
|
||||||
|
'<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-2" viewBox="0 0 16 16"><path d="M11.534 7h3.932a.25.25 0 0 1 .192.41l-1.966 2.36a.25.25 0 0 1-.384 0l-1.966-2.36a.25.25 0 0 1 .192-.41zm-11 2h3.932a.25.25 0 0 0 .192-.41L2.692 6.23a.25.25 0 0 0-.384 0L.342 8.59A.25.25 0 0 0 .534 9z"/><path fill-rule="evenodd" d="M8 3c-1.552 0-2.94.707-3.857 1.818a.5.5 0 1 1-.771-.636A6.002 6.002 0 0 1 13.917 7H12.9A5.002 5.002 0 0 0 8 3zM3.1 9a5.002 5.002 0 0 0 8.757 2.182.5.5 0 1 1 .771.636A6.002 6.002 0 0 1 2.083 9H3.1z"/></svg>Rotate Secret</button></li>' +
|
||||||
|
'<li><hr class="dropdown-divider"></li>' +
|
||||||
|
'<li><button class="dropdown-item text-danger" type="button" data-delete-user="' + window.UICore.escapeHtml(accessKey) + '">' +
|
||||||
|
'<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-2" viewBox="0 0 16 16"><path d="M5.5 5.5a.5.5 0 0 1 .5.5v6a.5.5 0 0 1-1 0v-6a.5.5 0 0 1 .5-.5zm2.5 0a.5.5 0 0 1 .5.5v6a.5.5 0 0 1-1 0v-6a.5.5 0 0 1 .5-.5zm3 .5v6a.5.5 0 0 1-1 0v-6a.5.5 0 0 1 1 0z"/><path fill-rule="evenodd" d="M14.5 3a1 1 0 0 1-1 1H13v9a2 2 0 0 1-2 2H5a2 2 0 0 1-2-2V4h-.5a1 1 0 0 1-1-1V2a1 1 0 0 1 1-1H6a1 1 0 0 1 1-1h2a1 1 0 0 1 1 1h3.5a1 1 0 0 1 1 1v1zM4.118 4 4 4.059V13a1 1 0 0 0 1 1h6a1 1 0 0 0 1-1V4.059L11.882 4H4.118zM2.5 3V2h11v1h-11z"/></svg>Delete User</button></li>' +
|
||||||
|
'</ul></div></div>' +
|
||||||
|
'<div class="mb-3">' +
|
||||||
|
'<div class="small text-muted mb-2">Bucket Permissions</div>' +
|
||||||
|
'<div class="d-flex flex-wrap gap-1">' + policyBadges + '</div></div>' +
|
||||||
|
'<button class="btn btn-outline-primary btn-sm w-100" type="button" data-policy-editor data-access-key="' + window.UICore.escapeHtml(accessKey) + '">' +
|
||||||
|
'<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="me-1" viewBox="0 0 16 16"><path d="M8 4.754a3.246 3.246 0 1 0 0 6.492 3.246 3.246 0 0 0 0-6.492zM5.754 8a2.246 2.246 0 1 1 4.492 0 2.246 2.246 0 0 1-4.492 0z"/><path d="M9.796 1.343c-.527-1.79-3.065-1.79-3.592 0l-.094.319a.873.873 0 0 1-1.255.52l-.292-.16c-1.64-.892-3.433.902-2.54 2.541l.159.292a.873.873 0 0 1-.52 1.255l-.319.094c-1.79.527-1.79 3.065 0 3.592l.319.094a.873.873 0 0 1 .52 1.255l-.16.292c-.892 1.64.901 3.434 2.541 2.54l.292-.159a.873.873 0 0 1 1.255.52l.094.319c.527 1.79 3.065 1.79 3.592 0l.094-.319a.873.873 0 0 1 1.255-.52l.292.16c1.64.893 3.434-.902 2.54-2.541l-.159-.292a.873.873 0 0 1 .52-1.255l.319-.094c1.79-.527 1.79-3.065 0-3.592l-.319-.094a.873.873 0 0 1-.52-1.255l.16-.292c.893-1.64-.902-3.433-2.541-2.54l-.292.159a.873.873 0 0 1-1.255-.52l-.094-.319z"/></svg>Manage Policies</button>' +
|
||||||
|
'</div></div></div>';
|
||||||
|
}
|
||||||
|
|
||||||
|
function attachUserCardHandlers(cardElement, accessKey, displayName) {
|
||||||
|
var editBtn = cardElement.querySelector('[data-edit-user]');
|
||||||
|
if (editBtn) {
|
||||||
|
editBtn.addEventListener('click', function() {
|
||||||
|
currentEditKey = accessKey;
|
||||||
|
document.getElementById('editUserDisplayName').value = displayName;
|
||||||
|
document.getElementById('editUserForm').action = endpoints.updateUser.replace('ACCESS_KEY', accessKey);
|
||||||
|
editUserModal.show();
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
var deleteBtn = cardElement.querySelector('[data-delete-user]');
|
||||||
|
if (deleteBtn) {
|
||||||
|
deleteBtn.addEventListener('click', function() {
|
||||||
|
currentDeleteKey = accessKey;
|
||||||
|
document.getElementById('deleteUserLabel').textContent = accessKey;
|
||||||
|
document.getElementById('deleteUserForm').action = endpoints.deleteUser.replace('ACCESS_KEY', accessKey);
|
||||||
|
var deleteSelfWarning = document.getElementById('deleteSelfWarning');
|
||||||
|
if (accessKey === currentUserKey) {
|
||||||
|
deleteSelfWarning.classList.remove('d-none');
|
||||||
|
} else {
|
||||||
|
deleteSelfWarning.classList.add('d-none');
|
||||||
|
}
|
||||||
|
deleteUserModal.show();
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
var rotateBtn = cardElement.querySelector('[data-rotate-user]');
|
||||||
|
if (rotateBtn) {
|
||||||
|
rotateBtn.addEventListener('click', function() {
|
||||||
|
currentRotateKey = accessKey;
|
||||||
|
document.getElementById('rotateUserLabel').textContent = accessKey;
|
||||||
|
document.getElementById('rotateSecretConfirm').classList.remove('d-none');
|
||||||
|
document.getElementById('rotateSecretResult').classList.add('d-none');
|
||||||
|
document.getElementById('confirmRotateBtn').classList.remove('d-none');
|
||||||
|
document.getElementById('rotateCancelBtn').classList.remove('d-none');
|
||||||
|
document.getElementById('rotateDoneBtn').classList.add('d-none');
|
||||||
|
rotateSecretModal.show();
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
var policyBtn = cardElement.querySelector('[data-policy-editor]');
|
||||||
|
if (policyBtn) {
|
||||||
|
policyBtn.addEventListener('click', function() {
|
||||||
|
document.getElementById('policyEditorUserLabel').textContent = accessKey;
|
||||||
|
document.getElementById('policyEditorUser').value = accessKey;
|
||||||
|
document.getElementById('policyEditorDocument').value = getUserPolicies(accessKey);
|
||||||
|
policyModal.show();
|
||||||
|
});
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
function updateUserCount() {
|
||||||
|
var countEl = document.querySelector('.card-header .text-muted.small');
|
||||||
|
if (countEl) {
|
||||||
|
var count = document.querySelectorAll('.iam-user-card').length;
|
||||||
|
countEl.textContent = count + ' user' + (count !== 1 ? 's' : '') + ' configured';
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
function setupFormHandlers() {
|
||||||
|
var createUserForm = document.querySelector('#createUserModal form');
|
||||||
|
if (createUserForm) {
|
||||||
|
createUserForm.addEventListener('submit', function(e) {
|
||||||
|
e.preventDefault();
|
||||||
|
window.UICore.submitFormAjax(createUserForm, {
|
||||||
|
successMessage: 'User created',
|
||||||
|
onSuccess: function(data) {
|
||||||
|
var modal = bootstrap.Modal.getInstance(document.getElementById('createUserModal'));
|
||||||
|
if (modal) modal.hide();
|
||||||
|
createUserForm.reset();
|
||||||
|
|
||||||
|
var existingAlert = document.querySelector('.alert.alert-info.border-0.shadow-sm');
|
||||||
|
if (existingAlert) existingAlert.remove();
|
||||||
|
|
||||||
|
if (data.secret_key) {
|
||||||
|
var alertHtml = '<div class="alert alert-info border-0 shadow-sm mb-4" role="alert" id="newUserSecretAlert">' +
|
||||||
|
'<div class="d-flex align-items-start gap-2 mb-2">' +
|
||||||
|
'<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" class="bi bi-key flex-shrink-0 mt-1" viewBox="0 0 16 16">' +
|
||||||
|
'<path d="M0 8a4 4 0 0 1 7.465-2H14a.5.5 0 0 1 .354.146l1.5 1.5a.5.5 0 0 1 0 .708l-1.5 1.5a.5.5 0 0 1-.708 0L13 9.207l-.646.647a.5.5 0 0 1-.708 0L11 9.207l-.646.647a.5.5 0 0 1-.708 0L9 9.207l-.646.647A.5.5 0 0 1 8 10h-.535A4 4 0 0 1 0 8zm4-3a3 3 0 1 0 2.712 4.285A.5.5 0 0 1 7.163 9h.63l.853-.854a.5.5 0 0 1 .708 0l.646.647.646-.647a.5.5 0 0 1 .708 0l.646.647.646-.647a.5.5 0 0 1 .708 0l.646.647.793-.793-1-1h-6.63a.5.5 0 0 1-.451-.285A3 3 0 0 0 4 5z"/><path d="M4 8a1 1 0 1 1-2 0 1 1 0 0 1 2 0z"/>' +
|
||||||
|
'</svg>' +
|
||||||
|
'<div class="flex-grow-1">' +
|
||||||
|
'<div class="fw-semibold">New user created: <code>' + window.UICore.escapeHtml(data.access_key) + '</code></div>' +
|
||||||
|
'<p class="mb-2 small">This secret is only shown once. Copy it now and store it securely.</p>' +
|
||||||
|
'</div>' +
|
||||||
|
'<button type="button" class="btn-close" data-bs-dismiss="alert" aria-label="Close"></button>' +
|
||||||
|
'</div>' +
|
||||||
|
'<div class="input-group">' +
|
||||||
|
'<span class="input-group-text"><strong>Secret key</strong></span>' +
|
||||||
|
'<input class="form-control font-monospace" type="text" value="' + window.UICore.escapeHtml(data.secret_key) + '" readonly id="newUserSecret" />' +
|
||||||
|
'<button class="btn btn-outline-primary" type="button" id="copyNewUserSecret">Copy</button>' +
|
||||||
|
'</div></div>';
|
||||||
|
var container = document.querySelector('.page-header');
|
||||||
|
if (container) {
|
||||||
|
container.insertAdjacentHTML('afterend', alertHtml);
|
||||||
|
document.getElementById('copyNewUserSecret').addEventListener('click', async function() {
|
||||||
|
await window.UICore.copyToClipboard(data.secret_key, this, 'Copy');
|
||||||
|
});
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
var usersGrid = document.querySelector('.row.g-3');
|
||||||
|
var emptyState = document.querySelector('.empty-state');
|
||||||
|
if (emptyState) {
|
||||||
|
var emptyCol = emptyState.closest('.col-12');
|
||||||
|
if (emptyCol) emptyCol.remove();
|
||||||
|
if (!usersGrid) {
|
||||||
|
var cardBody = document.querySelector('.card-body.px-4.pb-4');
|
||||||
|
if (cardBody) {
|
||||||
|
cardBody.innerHTML = '<div class="row g-3"></div>';
|
||||||
|
usersGrid = cardBody.querySelector('.row.g-3');
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if (usersGrid) {
|
||||||
|
var cardHtml = createUserCardHtml(data.access_key, data.display_name, data.policies);
|
||||||
|
usersGrid.insertAdjacentHTML('beforeend', cardHtml);
|
||||||
|
var newCard = usersGrid.lastElementChild;
|
||||||
|
attachUserCardHandlers(newCard, data.access_key, data.display_name);
|
||||||
|
users.push({
|
||||||
|
access_key: data.access_key,
|
||||||
|
display_name: data.display_name,
|
||||||
|
policies: data.policies || []
|
||||||
|
});
|
||||||
|
updateUserCount();
|
||||||
|
}
|
||||||
|
}
|
||||||
|
});
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
var policyEditorForm = document.getElementById('policyEditorForm');
|
||||||
|
if (policyEditorForm) {
|
||||||
|
policyEditorForm.addEventListener('submit', function(e) {
|
||||||
|
e.preventDefault();
|
||||||
|
var userInputEl = document.getElementById('policyEditorUser');
|
||||||
|
var key = userInputEl.value;
|
||||||
|
if (!key) return;
|
||||||
|
|
||||||
|
var template = policyEditorForm.dataset.actionTemplate;
|
||||||
|
policyEditorForm.action = template.replace('ACCESS_KEY_PLACEHOLDER', key);
|
||||||
|
|
||||||
|
window.UICore.submitFormAjax(policyEditorForm, {
|
||||||
|
successMessage: 'Policies updated',
|
||||||
|
onSuccess: function(data) {
|
||||||
|
policyModal.hide();
|
||||||
|
|
||||||
|
var userCard = document.querySelector('[data-access-key="' + key + '"]');
|
||||||
|
if (userCard) {
|
||||||
|
var badgeContainer = userCard.closest('.iam-user-card').querySelector('.d-flex.flex-wrap.gap-1');
|
||||||
|
if (badgeContainer && data.policies) {
|
||||||
|
var badges = data.policies.map(function(p) {
|
||||||
|
return '<span class="badge bg-primary bg-opacity-10 text-primary">' +
|
||||||
|
'<svg xmlns="http://www.w3.org/2000/svg" width="10" height="10" fill="currentColor" class="me-1" viewBox="0 0 16 16">' +
|
||||||
|
'<path d="M2.522 5H2a.5.5 0 0 0-.494.574l1.372 9.149A1.5 1.5 0 0 0 4.36 16h7.278a1.5 1.5 0 0 0 1.483-1.277l1.373-9.149A.5.5 0 0 0 14 5h-.522A5.5 5.5 0 0 0 2.522 5zm1.005 0a4.5 4.5 0 0 1 8.945 0H3.527z"/>' +
|
||||||
|
'</svg>' + window.UICore.escapeHtml(p.bucket) +
|
||||||
|
'<span class="opacity-75">(' + (p.actions.includes('*') ? 'full' : p.actions.length) + ')</span></span>';
|
||||||
|
}).join('');
|
||||||
|
badgeContainer.innerHTML = badges || '<span class="badge bg-secondary bg-opacity-10 text-secondary">No policies</span>';
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
var userIndex = users.findIndex(function(u) { return u.access_key === key; });
|
||||||
|
if (userIndex >= 0 && data.policies) {
|
||||||
|
users[userIndex].policies = data.policies;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
});
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
var editUserForm = document.getElementById('editUserForm');
|
||||||
|
if (editUserForm) {
|
||||||
|
editUserForm.addEventListener('submit', function(e) {
|
||||||
|
e.preventDefault();
|
||||||
|
var key = currentEditKey;
|
||||||
|
window.UICore.submitFormAjax(editUserForm, {
|
||||||
|
successMessage: 'User updated',
|
||||||
|
onSuccess: function(data) {
|
||||||
|
editUserModal.hide();
|
||||||
|
|
||||||
|
var newName = data.display_name || document.getElementById('editUserDisplayName').value;
|
||||||
|
var editBtn = document.querySelector('[data-edit-user="' + key + '"]');
|
||||||
|
if (editBtn) {
|
||||||
|
editBtn.setAttribute('data-display-name', newName);
|
||||||
|
var card = editBtn.closest('.iam-user-card');
|
||||||
|
if (card) {
|
||||||
|
var nameEl = card.querySelector('h6');
|
||||||
|
if (nameEl) {
|
||||||
|
nameEl.textContent = newName;
|
||||||
|
nameEl.title = newName;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
var userIndex = users.findIndex(function(u) { return u.access_key === key; });
|
||||||
|
if (userIndex >= 0) {
|
||||||
|
users[userIndex].display_name = newName;
|
||||||
|
}
|
||||||
|
|
||||||
|
if (key === currentUserKey) {
|
||||||
|
document.querySelectorAll('.sidebar-user .user-name').forEach(function(el) {
|
||||||
|
var truncated = newName.length > 16 ? newName.substring(0, 16) + '...' : newName;
|
||||||
|
el.textContent = truncated;
|
||||||
|
el.title = newName;
|
||||||
|
});
|
||||||
|
document.querySelectorAll('.sidebar-user[data-username]').forEach(function(el) {
|
||||||
|
el.setAttribute('data-username', newName);
|
||||||
|
});
|
||||||
|
}
|
||||||
|
}
|
||||||
|
});
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
var deleteUserForm = document.getElementById('deleteUserForm');
|
||||||
|
if (deleteUserForm) {
|
||||||
|
deleteUserForm.addEventListener('submit', function(e) {
|
||||||
|
e.preventDefault();
|
||||||
|
var key = currentDeleteKey;
|
||||||
|
window.UICore.submitFormAjax(deleteUserForm, {
|
||||||
|
successMessage: 'User deleted',
|
||||||
|
onSuccess: function(data) {
|
||||||
|
deleteUserModal.hide();
|
||||||
|
|
||||||
|
if (key === currentUserKey) {
|
||||||
|
window.location.href = '/ui/';
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
var deleteBtn = document.querySelector('[data-delete-user="' + key + '"]');
|
||||||
|
if (deleteBtn) {
|
||||||
|
var cardCol = deleteBtn.closest('[class*="col-"]');
|
||||||
|
if (cardCol) {
|
||||||
|
cardCol.remove();
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
users = users.filter(function(u) { return u.access_key !== key; });
|
||||||
|
updateUserCount();
|
||||||
|
}
|
||||||
|
});
|
||||||
|
});
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
return {
|
||||||
|
init: init
|
||||||
|
};
|
||||||
|
})();
|
||||||
324
static/js/ui-core.js
Normal file
324
static/js/ui-core.js
Normal file
@@ -0,0 +1,324 @@
|
|||||||
|
window.UICore = (function() {
|
||||||
|
'use strict';
|
||||||
|
|
||||||
|
function getCsrfToken() {
|
||||||
|
const meta = document.querySelector('meta[name="csrf-token"]');
|
||||||
|
return meta ? meta.getAttribute('content') : '';
|
||||||
|
}
|
||||||
|
|
||||||
|
function formatBytes(bytes) {
|
||||||
|
if (!Number.isFinite(bytes)) return bytes + ' bytes';
|
||||||
|
const units = ['bytes', 'KB', 'MB', 'GB', 'TB'];
|
||||||
|
let i = 0;
|
||||||
|
let size = bytes;
|
||||||
|
while (size >= 1024 && i < units.length - 1) {
|
||||||
|
size /= 1024;
|
||||||
|
i++;
|
||||||
|
}
|
||||||
|
return size.toFixed(i === 0 ? 0 : 1) + ' ' + units[i];
|
||||||
|
}
|
||||||
|
|
||||||
|
function escapeHtml(value) {
|
||||||
|
if (value === null || value === undefined) return '';
|
||||||
|
return String(value)
|
||||||
|
.replace(/&/g, '&')
|
||||||
|
.replace(/</g, '<')
|
||||||
|
.replace(/>/g, '>')
|
||||||
|
.replace(/"/g, '"')
|
||||||
|
.replace(/'/g, ''');
|
||||||
|
}
|
||||||
|
|
||||||
|
async function submitFormAjax(form, options) {
|
||||||
|
options = options || {};
|
||||||
|
var onSuccess = options.onSuccess || function() {};
|
||||||
|
var onError = options.onError || function() {};
|
||||||
|
var successMessage = options.successMessage || 'Operation completed';
|
||||||
|
|
||||||
|
var formData = new FormData(form);
|
||||||
|
var csrfToken = getCsrfToken();
|
||||||
|
var submitBtn = form.querySelector('[type="submit"]');
|
||||||
|
var originalHtml = submitBtn ? submitBtn.innerHTML : '';
|
||||||
|
|
||||||
|
try {
|
||||||
|
if (submitBtn) {
|
||||||
|
submitBtn.disabled = true;
|
||||||
|
submitBtn.innerHTML = '<span class="spinner-border spinner-border-sm me-1"></span>Saving...';
|
||||||
|
}
|
||||||
|
|
||||||
|
var formAction = form.getAttribute('action') || form.action;
|
||||||
|
var response = await fetch(formAction, {
|
||||||
|
method: form.getAttribute('method') || 'POST',
|
||||||
|
headers: {
|
||||||
|
'X-CSRFToken': csrfToken,
|
||||||
|
'Accept': 'application/json',
|
||||||
|
'X-Requested-With': 'XMLHttpRequest'
|
||||||
|
},
|
||||||
|
body: formData,
|
||||||
|
redirect: 'follow'
|
||||||
|
});
|
||||||
|
|
||||||
|
var contentType = response.headers.get('content-type') || '';
|
||||||
|
if (!contentType.includes('application/json')) {
|
||||||
|
throw new Error('Server returned an unexpected response. Please try again.');
|
||||||
|
}
|
||||||
|
|
||||||
|
var data = await response.json();
|
||||||
|
|
||||||
|
if (!response.ok) {
|
||||||
|
throw new Error(data.error || 'HTTP ' + response.status);
|
||||||
|
}
|
||||||
|
|
||||||
|
window.showToast(data.message || successMessage, 'Success', 'success');
|
||||||
|
onSuccess(data);
|
||||||
|
|
||||||
|
} catch (err) {
|
||||||
|
window.showToast(err.message, 'Error', 'error');
|
||||||
|
onError(err);
|
||||||
|
} finally {
|
||||||
|
if (submitBtn) {
|
||||||
|
submitBtn.disabled = false;
|
||||||
|
submitBtn.innerHTML = originalHtml;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
function PollingManager() {
|
||||||
|
this.intervals = {};
|
||||||
|
this.callbacks = {};
|
||||||
|
this.timers = {};
|
||||||
|
this.defaults = {
|
||||||
|
replication: 30000,
|
||||||
|
lifecycle: 60000,
|
||||||
|
connectionHealth: 60000,
|
||||||
|
bucketStats: 120000
|
||||||
|
};
|
||||||
|
this._loadSettings();
|
||||||
|
}
|
||||||
|
|
||||||
|
PollingManager.prototype._loadSettings = function() {
|
||||||
|
try {
|
||||||
|
var stored = localStorage.getItem('myfsio-polling-intervals');
|
||||||
|
if (stored) {
|
||||||
|
var settings = JSON.parse(stored);
|
||||||
|
for (var key in settings) {
|
||||||
|
if (settings.hasOwnProperty(key)) {
|
||||||
|
this.defaults[key] = settings[key];
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
} catch (e) {
|
||||||
|
console.warn('Failed to load polling settings:', e);
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
PollingManager.prototype.saveSettings = function(settings) {
|
||||||
|
try {
|
||||||
|
for (var key in settings) {
|
||||||
|
if (settings.hasOwnProperty(key)) {
|
||||||
|
this.defaults[key] = settings[key];
|
||||||
|
}
|
||||||
|
}
|
||||||
|
localStorage.setItem('myfsio-polling-intervals', JSON.stringify(this.defaults));
|
||||||
|
} catch (e) {
|
||||||
|
console.warn('Failed to save polling settings:', e);
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
PollingManager.prototype.start = function(key, callback, interval) {
|
||||||
|
this.stop(key);
|
||||||
|
var ms = interval !== undefined ? interval : (this.defaults[key] || 30000);
|
||||||
|
if (ms <= 0) return;
|
||||||
|
|
||||||
|
this.callbacks[key] = callback;
|
||||||
|
this.intervals[key] = ms;
|
||||||
|
|
||||||
|
callback();
|
||||||
|
|
||||||
|
var self = this;
|
||||||
|
this.timers[key] = setInterval(function() {
|
||||||
|
if (!document.hidden) {
|
||||||
|
callback();
|
||||||
|
}
|
||||||
|
}, ms);
|
||||||
|
};
|
||||||
|
|
||||||
|
PollingManager.prototype.stop = function(key) {
|
||||||
|
if (this.timers[key]) {
|
||||||
|
clearInterval(this.timers[key]);
|
||||||
|
delete this.timers[key];
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
PollingManager.prototype.stopAll = function() {
|
||||||
|
for (var key in this.timers) {
|
||||||
|
if (this.timers.hasOwnProperty(key)) {
|
||||||
|
clearInterval(this.timers[key]);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
this.timers = {};
|
||||||
|
};
|
||||||
|
|
||||||
|
PollingManager.prototype.updateInterval = function(key, newInterval) {
|
||||||
|
var callback = this.callbacks[key];
|
||||||
|
this.defaults[key] = newInterval;
|
||||||
|
this.saveSettings(this.defaults);
|
||||||
|
if (callback) {
|
||||||
|
this.start(key, callback, newInterval);
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
PollingManager.prototype.getSettings = function() {
|
||||||
|
var result = {};
|
||||||
|
for (var key in this.defaults) {
|
||||||
|
if (this.defaults.hasOwnProperty(key)) {
|
||||||
|
result[key] = this.defaults[key];
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return result;
|
||||||
|
};
|
||||||
|
|
||||||
|
var pollingManager = new PollingManager();
|
||||||
|
|
||||||
|
document.addEventListener('visibilitychange', function() {
|
||||||
|
if (document.hidden) {
|
||||||
|
pollingManager.stopAll();
|
||||||
|
} else {
|
||||||
|
for (var key in pollingManager.callbacks) {
|
||||||
|
if (pollingManager.callbacks.hasOwnProperty(key)) {
|
||||||
|
pollingManager.start(key, pollingManager.callbacks[key], pollingManager.intervals[key]);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
});
|
||||||
|
|
||||||
|
return {
|
||||||
|
getCsrfToken: getCsrfToken,
|
||||||
|
formatBytes: formatBytes,
|
||||||
|
escapeHtml: escapeHtml,
|
||||||
|
submitFormAjax: submitFormAjax,
|
||||||
|
PollingManager: PollingManager,
|
||||||
|
pollingManager: pollingManager
|
||||||
|
};
|
||||||
|
})();
|
||||||
|
|
||||||
|
window.pollingManager = window.UICore.pollingManager;
|
||||||
|
|
||||||
|
window.UICore.copyToClipboard = async function(text, button, originalText) {
  try {
    await navigator.clipboard.writeText(text);
    if (button) {
      var prevText = button.textContent;
      button.textContent = 'Copied!';
      setTimeout(function() {
        button.textContent = originalText || prevText;
      }, 1500);
    }
    return true;
  } catch (err) {
    console.error('Copy failed:', err);
    return false;
  }
};

window.UICore.setButtonLoading = function(button, isLoading, loadingText) {
  if (!button) return;
  if (isLoading) {
    button._originalHtml = button.innerHTML;
    button._originalDisabled = button.disabled;
    button.disabled = true;
    button.innerHTML = '<span class="spinner-border spinner-border-sm me-1"></span>' + (loadingText || 'Loading...');
  } else {
    button.disabled = button._originalDisabled || false;
    button.innerHTML = button._originalHtml || button.innerHTML;
  }
};

window.UICore.updateBadgeCount = function(selector, count, singular, plural) {
  var badge = document.querySelector(selector);
  if (badge) {
    var label = count === 1 ? (singular || '') : (plural || 's');
    badge.textContent = count + ' ' + label;
  }
};

window.UICore.setupJsonAutoIndent = function(textarea) {
  if (!textarea) return;

  textarea.addEventListener('keydown', function(e) {
    if (e.key === 'Enter') {
      e.preventDefault();

      var start = this.selectionStart;
      var end = this.selectionEnd;
      var value = this.value;

      var lineStart = value.lastIndexOf('\n', start - 1) + 1;
      var currentLine = value.substring(lineStart, start);

      var indentMatch = currentLine.match(/^(\s*)/);
      var indent = indentMatch ? indentMatch[1] : '';

      var trimmedLine = currentLine.trim();
      var lastChar = trimmedLine.slice(-1);

      var newIndent = indent;
      var insertAfter = '';

      if (lastChar === '{' || lastChar === '[') {
        newIndent = indent + '  ';

        var charAfterCursor = value.substring(start, start + 1).trim();
        if ((lastChar === '{' && charAfterCursor === '}') ||
            (lastChar === '[' && charAfterCursor === ']')) {
          insertAfter = '\n' + indent;
        }
      } else if (lastChar === ',' || lastChar === ':') {
        newIndent = indent;
      }

      var insertion = '\n' + newIndent + insertAfter;
      var newValue = value.substring(0, start) + insertion + value.substring(end);

      this.value = newValue;

      var newCursorPos = start + 1 + newIndent.length;
      this.selectionStart = this.selectionEnd = newCursorPos;

      this.dispatchEvent(new Event('input', { bubbles: true }));
    }

    if (e.key === 'Tab') {
      e.preventDefault();
      var start = this.selectionStart;
      var end = this.selectionEnd;

      if (e.shiftKey) {
        var lineStart = this.value.lastIndexOf('\n', start - 1) + 1;
        var lineContent = this.value.substring(lineStart, start);
        if (lineContent.startsWith('  ')) {
          this.value = this.value.substring(0, lineStart) +
            this.value.substring(lineStart + 2);
          this.selectionStart = this.selectionEnd = Math.max(lineStart, start - 2);
        }
      } else {
        this.value = this.value.substring(0, start) + '  ' + this.value.substring(end);
        this.selectionStart = this.selectionEnd = start + 2;
      }

      this.dispatchEvent(new Event('input', { bubbles: true }));
    }
  });
};

document.addEventListener('DOMContentLoaded', function() {
  var flashMessage = sessionStorage.getItem('flashMessage');
  if (flashMessage) {
    sessionStorage.removeItem('flashMessage');
    try {
      var msg = JSON.parse(flashMessage);
      if (window.showToast) {
        window.showToast(msg.body || msg.title, msg.title, msg.variant || 'info');
      }
    } catch (e) {}
  }
});
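Taken together, these UICore helpers are meant to be wired up from page scripts. A minimal usage sketch follows; the element ids and the `/ui/save` endpoint are hypothetical, used only for illustration.

// Minimal usage sketch for the UICore helpers above; ids and '/ui/save' are placeholders.
var copyBtn = document.getElementById('copyBtn');
if (copyBtn) {
  copyBtn.addEventListener('click', function () {
    // Copies the button's data-value attribute and flashes "Copied!" for 1.5s.
    window.UICore.copyToClipboard(copyBtn.dataset.value || '', copyBtn, 'Copy');
  });
}
var saveBtn = document.getElementById('saveBtn');
if (saveBtn) {
  saveBtn.addEventListener('click', function () {
    // Shows a spinner while a hypothetical async save runs, then restores the button.
    window.UICore.setButtonLoading(saveBtn, true, 'Saving...');
    fetch('/ui/save', { method: 'POST' }).finally(function () {
      window.UICore.setButtonLoading(saveBtn, false);
    });
  });
}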
@@ -393,6 +393,8 @@
  {% endwith %}
  })();
  </script>
+ <script src="{{ url_for('static', filename='js/ui-core.js') }}"></script>
  {% block extra_scripts %}{% endblock %}

  </body>
  </html>
File diff suppressed because it is too large
@@ -51,7 +51,7 @@
  </div>
  <div>
  <h5 class="bucket-name text-break">{{ bucket.meta.name }}</h5>
- <small class="text-muted">Created {{ bucket.meta.created_at.strftime('%b %d, %Y') }}</small>
+ <small class="text-muted">Created {{ bucket.meta.created_at | format_datetime }}</small>
  </div>
  </div>
  <span class="badge {{ bucket.access_badge }} bucket-access-badge">{{ bucket.access_label }}</span>
@@ -104,7 +104,7 @@
  </h1>
  <button type="button" class="btn-close" data-bs-dismiss="modal" aria-label="Close"></button>
  </div>
- <form method="post" action="{{ url_for('ui.create_bucket') }}">
+ <form method="post" action="{{ url_for('ui.create_bucket') }}" id="createBucketForm">
  <input type="hidden" name="csrf_token" value="{{ csrf_token() }}" />
  <div class="modal-body pt-0">
  <label class="form-label fw-medium">Bucket name</label>
@@ -205,6 +205,25 @@
  });
  row.style.cursor = 'pointer';
  });

+ var createForm = document.getElementById('createBucketForm');
+ if (createForm) {
+   createForm.addEventListener('submit', function(e) {
+     e.preventDefault();
+     window.UICore.submitFormAjax(createForm, {
+       successMessage: 'Bucket created',
+       onSuccess: function(data) {
+         var modal = bootstrap.Modal.getInstance(document.getElementById('createBucketModal'));
+         if (modal) modal.hide();
+         if (data.bucket_name) {
+           window.location.href = '{{ url_for("ui.bucket_detail", bucket_name="__BUCKET__") }}'.replace('__BUCKET__', data.bucket_name);
+         } else {
+           location.reload();
+         }
+       }
+     });
+   });
+ }
  })();
  </script>
  {% endblock %}
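The handler above relies on `window.UICore.submitFormAjax`, whose implementation is not part of this excerpt. A plausible minimal sketch, assuming it POSTs the form via fetch, expects a JSON response, and reports errors through the existing toast helper (all assumptions), could look like this:

// Hypothetical sketch of UICore.submitFormAjax as used above; not the project's actual implementation.
window.UICore.submitFormAjax = window.UICore.submitFormAjax || function (form, options) {
  options = options || {};
  var submitBtn = form.querySelector('[type="submit"]');
  window.UICore.setButtonLoading(submitBtn, true);
  fetch(form.action, { method: form.method || 'POST', body: new FormData(form) })
    .then(function (response) {
      return response.json().then(function (data) {
        if (!response.ok) throw new Error(data.error || 'Request failed');
        if (window.showToast && options.successMessage) {
          window.showToast(options.successMessage, 'Success', 'success');
        }
        if (options.onSuccess) options.onSuccess(data);
      });
    })
    .catch(function (err) {
      if (window.showToast) window.showToast(err.message, 'Error', 'danger');
    })
    .finally(function () {
      window.UICore.setButtonLoading(submitBtn, false);
    });
};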
@@ -57,7 +57,7 @@
  <label for="secret_key" class="form-label fw-medium">Secret Key</label>
  <div class="input-group">
  <input type="password" class="form-control font-monospace" id="secret_key" name="secret_key" required>
- <button class="btn btn-outline-secondary" type="button" onclick="togglePassword('secret_key')" title="Toggle visibility">
+ <button class="btn btn-outline-secondary" type="button" onclick="ConnectionsManagement.togglePassword('secret_key')" title="Toggle visibility">
  <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" viewBox="0 0 16 16">
  <path d="M16 8s-3-5.5-8-5.5S0 8 0 8s3 5.5 8 5.5S16 8 16 8zM1.173 8a13.133 13.133 0 0 1 1.66-2.043C4.12 4.668 5.88 3.5 8 3.5c2.12 0 3.879 1.168 5.168 2.457A13.133 13.133 0 0 1 14.828 8c-.058.087-.122.183-.195.288-.335.48-.83 1.12-1.465 1.755C11.879 11.332 10.119 12.5 8 12.5c-2.12 0-3.879-1.168-5.168-2.457A13.134 13.134 0 0 1 1.172 8z"/>
  <path d="M8 5.5a2.5 2.5 0 1 0 0 5 2.5 2.5 0 0 0 0-5zM4.5 8a3.5 3.5 0 1 1 7 0 3.5 3.5 0 0 1-7 0z"/>
@@ -220,7 +220,7 @@
  <label for="edit_secret_key" class="form-label fw-medium">Secret Key</label>
  <div class="input-group">
  <input type="password" class="form-control font-monospace" id="edit_secret_key" name="secret_key" required>
- <button class="btn btn-outline-secondary" type="button" onclick="togglePassword('edit_secret_key')">
+ <button class="btn btn-outline-secondary" type="button" onclick="ConnectionsManagement.togglePassword('edit_secret_key')">
  <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" viewBox="0 0 16 16">
  <path d="M16 8s-3-5.5-8-5.5S0 8 0 8s3 5.5 8 5.5S16 8 16 8zM1.173 8a13.133 13.133 0 0 1 1.66-2.043C4.12 4.668 5.88 3.5 8 3.5c2.12 0 3.879 1.168 5.168 2.457A13.133 13.133 0 0 1 14.828 8c-.058.087-.122.183-.195.288-.335.48-.83 1.12-1.465 1.755C11.879 11.332 10.119 12.5 8 12.5c-2.12 0-3.879-1.168-5.168-2.457A13.134 13.134 0 0 1 1.172 8z"/>
  <path d="M8 5.5a2.5 2.5 0 1 0 0 5 2.5 2.5 0 0 0 0-5zM4.5 8a3.5 3.5 0 1 1 7 0 3.5 3.5 0 0 1-7 0z"/>
@@ -289,153 +289,16 @@
|
|||||||
</div>
|
</div>
|
||||||
</div>
|
</div>
|
||||||
|
|
||||||
|
<script src="{{ url_for('static', filename='js/connections-management.js') }}"></script>
|
||||||
<script>
|
<script>
|
||||||
function togglePassword(id) {
|
ConnectionsManagement.init({
|
||||||
const input = document.getElementById(id);
|
csrfToken: "{{ csrf_token() }}",
|
||||||
if (input.type === "password") {
|
endpoints: {
|
||||||
input.type = "text";
|
test: "{{ url_for('ui.test_connection') }}",
|
||||||
} else {
|
updateTemplate: "{{ url_for('ui.update_connection', connection_id='CONNECTION_ID') }}",
|
||||||
input.type = "password";
|
deleteTemplate: "{{ url_for('ui.delete_connection', connection_id='CONNECTION_ID') }}",
|
||||||
}
|
healthTemplate: "/ui/connections/CONNECTION_ID/health"
|
||||||
}
|
}
|
||||||
|
});
|
||||||
async function testConnection(formId, resultId) {
|
|
||||||
const form = document.getElementById(formId);
|
|
||||||
const resultDiv = document.getElementById(resultId);
|
|
||||||
const formData = new FormData(form);
|
|
||||||
const data = Object.fromEntries(formData.entries());
|
|
||||||
|
|
||||||
resultDiv.innerHTML = '<div class="text-info"><span class="spinner-border spinner-border-sm" role="status" aria-hidden="true"></span> Testing connection...</div>';
|
|
||||||
|
|
||||||
const controller = new AbortController();
|
|
||||||
const timeoutId = setTimeout(() => controller.abort(), 20000);
|
|
||||||
|
|
||||||
try {
|
|
||||||
const response = await fetch("{{ url_for('ui.test_connection') }}", {
|
|
||||||
method: "POST",
|
|
||||||
headers: {
|
|
||||||
"Content-Type": "application/json",
|
|
||||||
"X-CSRFToken": "{{ csrf_token() }}"
|
|
||||||
},
|
|
||||||
body: JSON.stringify(data),
|
|
||||||
signal: controller.signal
|
|
||||||
});
|
|
||||||
clearTimeout(timeoutId);
|
|
||||||
|
|
||||||
const result = await response.json();
|
|
||||||
if (response.ok) {
|
|
||||||
resultDiv.innerHTML = `<div class="text-success">
|
|
||||||
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="me-1" viewBox="0 0 16 16">
|
|
||||||
<path d="M16 8A8 8 0 1 1 0 8a8 8 0 0 1 16 0zm-3.97-3.03a.75.75 0 0 0-1.08.022L7.477 9.417 5.384 7.323a.75.75 0 0 0-1.06 1.06L6.97 11.03a.75.75 0 0 0 1.079-.02l3.992-4.99a.75.75 0 0 0-.01-1.05z"/>
|
|
||||||
</svg>
|
|
||||||
${result.message}
|
|
||||||
</div>`;
|
|
||||||
} else {
|
|
||||||
resultDiv.innerHTML = `<div class="text-danger">
|
|
||||||
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="me-1" viewBox="0 0 16 16">
|
|
||||||
<path d="M16 8A8 8 0 1 1 0 8a8 8 0 0 1 16 0zM5.354 4.646a.5.5 0 1 0-.708.708L7.293 8l-2.647 2.646a.5.5 0 0 0 .708.708L8 8.707l2.646 2.647a.5.5 0 0 0 .708-.708L8.707 8l2.647-2.646a.5.5 0 0 0-.708-.708L8 7.293 5.354 4.646z"/>
|
|
||||||
</svg>
|
|
||||||
${result.message}
|
|
||||||
</div>`;
|
|
||||||
}
|
|
||||||
} catch (error) {
|
|
||||||
clearTimeout(timeoutId);
|
|
||||||
if (error.name === 'AbortError') {
|
|
||||||
resultDiv.innerHTML = `<div class="text-danger">
|
|
||||||
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="me-1" viewBox="0 0 16 16">
|
|
||||||
<path d="M16 8A8 8 0 1 1 0 8a8 8 0 0 1 16 0zM5.354 4.646a.5.5 0 1 0-.708.708L7.293 8l-2.647 2.646a.5.5 0 0 0 .708.708L8 8.707l2.646 2.647a.5.5 0 0 0 .708-.708L8.707 8l2.647-2.646a.5.5 0 0 0-.708-.708L8 7.293 5.354 4.646z"/>
|
|
||||||
</svg>
|
|
||||||
Connection test timed out - endpoint may be unreachable
|
|
||||||
</div>`;
|
|
||||||
} else {
|
|
||||||
resultDiv.innerHTML = `<div class="text-danger">
|
|
||||||
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="me-1" viewBox="0 0 16 16">
|
|
||||||
<path d="M16 8A8 8 0 1 1 0 8a8 8 0 0 1 16 0zM5.354 4.646a.5.5 0 1 0-.708.708L7.293 8l-2.647 2.646a.5.5 0 0 0 .708.708L8 8.707l2.646 2.647a.5.5 0 0 0 .708-.708L8.707 8l2.647-2.646a.5.5 0 0 0-.708-.708L8 7.293 5.354 4.646z"/>
|
|
||||||
</svg>
|
|
||||||
Connection failed: Network error
|
|
||||||
</div>`;
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
document.getElementById('testConnectionBtn').addEventListener('click', () => {
|
|
||||||
testConnection('createConnectionForm', 'testResult');
|
|
||||||
});
|
|
||||||
|
|
||||||
document.getElementById('editTestConnectionBtn').addEventListener('click', () => {
|
|
||||||
testConnection('editConnectionForm', 'editTestResult');
|
|
||||||
});
|
|
||||||
|
|
||||||
const editModal = document.getElementById('editConnectionModal');
|
|
||||||
editModal.addEventListener('show.bs.modal', event => {
|
|
||||||
const button = event.relatedTarget;
|
|
||||||
const id = button.getAttribute('data-id');
|
|
||||||
|
|
||||||
document.getElementById('edit_name').value = button.getAttribute('data-name');
|
|
||||||
document.getElementById('edit_endpoint_url').value = button.getAttribute('data-endpoint');
|
|
||||||
document.getElementById('edit_region').value = button.getAttribute('data-region');
|
|
||||||
document.getElementById('edit_access_key').value = button.getAttribute('data-access');
|
|
||||||
document.getElementById('edit_secret_key').value = button.getAttribute('data-secret');
|
|
||||||
document.getElementById('editTestResult').innerHTML = '';
|
|
||||||
|
|
||||||
const form = document.getElementById('editConnectionForm');
|
|
||||||
form.action = "{{ url_for('ui.update_connection', connection_id='CONN_ID') }}".replace('CONN_ID', id);
|
|
||||||
});
|
|
||||||
|
|
||||||
const deleteModal = document.getElementById('deleteConnectionModal');
|
|
||||||
deleteModal.addEventListener('show.bs.modal', event => {
|
|
||||||
const button = event.relatedTarget;
|
|
||||||
const id = button.getAttribute('data-id');
|
|
||||||
const name = button.getAttribute('data-name');
|
|
||||||
|
|
||||||
document.getElementById('deleteConnectionName').textContent = name;
|
|
||||||
const form = document.getElementById('deleteConnectionForm');
|
|
||||||
form.action = "{{ url_for('ui.delete_connection', connection_id='CONN_ID') }}".replace('CONN_ID', id);
|
|
||||||
});
|
|
||||||
|
|
||||||
async function checkConnectionHealth(connectionId, statusEl) {
|
|
||||||
try {
|
|
||||||
const controller = new AbortController();
|
|
||||||
const timeoutId = setTimeout(() => controller.abort(), 15000);
|
|
||||||
|
|
||||||
const response = await fetch(`/ui/connections/${connectionId}/health`, {
|
|
||||||
signal: controller.signal
|
|
||||||
});
|
|
||||||
clearTimeout(timeoutId);
|
|
||||||
|
|
||||||
const data = await response.json();
|
|
||||||
if (data.healthy) {
|
|
||||||
statusEl.innerHTML = `
|
|
||||||
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="text-success" viewBox="0 0 16 16">
|
|
||||||
<path d="M16 8A8 8 0 1 1 0 8a8 8 0 0 1 16 0zm-3.97-3.03a.75.75 0 0 0-1.08.022L7.477 9.417 5.384 7.323a.75.75 0 0 0-1.06 1.06L6.97 11.03a.75.75 0 0 0 1.079-.02l3.992-4.99a.75.75 0 0 0-.01-1.05z"/>
|
|
||||||
</svg>`;
|
|
||||||
statusEl.setAttribute('data-status', 'healthy');
|
|
||||||
statusEl.setAttribute('title', 'Connected');
|
|
||||||
} else {
|
|
||||||
statusEl.innerHTML = `
|
|
||||||
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="text-danger" viewBox="0 0 16 16">
|
|
||||||
<path d="M16 8A8 8 0 1 1 0 8a8 8 0 0 1 16 0zM5.354 4.646a.5.5 0 1 0-.708.708L7.293 8l-2.647 2.646a.5.5 0 0 0 .708.708L8 8.707l2.646 2.647a.5.5 0 0 0 .708-.708L8.707 8l2.647-2.646a.5.5 0 0 0-.708-.708L8 7.293 5.354 4.646z"/>
|
|
||||||
</svg>`;
|
|
||||||
statusEl.setAttribute('data-status', 'unhealthy');
|
|
||||||
statusEl.setAttribute('title', data.error || 'Unreachable');
|
|
||||||
}
|
|
||||||
} catch (error) {
|
|
||||||
statusEl.innerHTML = `
|
|
||||||
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="text-warning" viewBox="0 0 16 16">
|
|
||||||
<path d="M8.982 1.566a1.13 1.13 0 0 0-1.96 0L.165 13.233c-.457.778.091 1.767.98 1.767h13.713c.889 0 1.438-.99.98-1.767L8.982 1.566zM8 5c.535 0 .954.462.9.995l-.35 3.507a.552.552 0 0 1-1.1 0L7.1 5.995A.905.905 0 0 1 8 5zm.002 6a1 1 0 1 1 0 2 1 1 0 0 1 0-2z"/>
|
|
||||||
</svg>`;
|
|
||||||
statusEl.setAttribute('data-status', 'unknown');
|
|
||||||
statusEl.setAttribute('title', 'Could not check status');
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
const connectionRows = document.querySelectorAll('tr[data-connection-id]');
|
|
||||||
connectionRows.forEach((row, index) => {
|
|
||||||
const connectionId = row.getAttribute('data-connection-id');
|
|
||||||
const statusEl = row.querySelector('.connection-status');
|
|
||||||
if (statusEl) {
|
|
||||||
setTimeout(() => checkConnectionHealth(connectionId, statusEl), index * 200);
|
|
||||||
}
|
|
||||||
});
|
|
||||||
</script>
|
</script>
|
||||||
{% endblock %}
|
{% endblock %}
|
||||||
|
|||||||
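The connections page now delegates its behaviour to `ConnectionsManagement.init`, whose implementation lives in connections-management.js and is not shown in this diff. A rough outline implied by the init() configuration above, offered only as an assumption rather than the project's actual code:

// Hypothetical outline of ConnectionsManagement, inferred from the init() call in the diff above.
window.ConnectionsManagement = window.ConnectionsManagement || {
  init: function (config) {
    // config.csrfToken plus config.endpoints.{test, updateTemplate, deleteTemplate, healthTemplate}
    this.config = config;
  },
  togglePassword: function (id) {
    // Same behaviour the removed inline helper had: flip a password input to text and back.
    var input = document.getElementById(id);
    if (input) input.type = input.type === 'password' ? 'text' : 'password';
  },
  healthUrl: function (connectionId) {
    // The CONNECTION_ID placeholder in the template string is swapped for the real id.
    return this.config.endpoints.healthTemplate.replace('CONNECTION_ID', connectionId);
  }
};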
@@ -39,6 +39,8 @@
  <li><a href="#quotas">Bucket Quotas</a></li>
  <li><a href="#encryption">Encryption</a></li>
  <li><a href="#lifecycle">Lifecycle Rules</a></li>
+ <li><a href="#metrics">Metrics History</a></li>
+ <li><a href="#operation-metrics">Operation Metrics</a></li>
  <li><a href="#troubleshooting">Troubleshooting</a></li>
  </ul>
  </div>
@@ -181,6 +183,24 @@ python run.py --mode ui
  <td><code>true</code></td>
  <td>Enable file logging.</td>
  </tr>
+ <tr class="table-secondary">
+ <td colspan="3" class="fw-semibold">Metrics History Settings</td>
+ </tr>
+ <tr>
+ <td><code>METRICS_HISTORY_ENABLED</code></td>
+ <td><code>false</code></td>
+ <td>Enable metrics history recording and charts (opt-in).</td>
+ </tr>
+ <tr>
+ <td><code>METRICS_HISTORY_RETENTION_HOURS</code></td>
+ <td><code>24</code></td>
+ <td>How long to retain metrics history data.</td>
+ </tr>
+ <tr>
+ <td><code>METRICS_HISTORY_INTERVAL_MINUTES</code></td>
+ <td><code>5</code></td>
+ <td>Interval between history snapshots.</td>
+ </tr>
  </tbody>
  </table>
  </div>
@@ -356,11 +376,8 @@ curl -X PUT {{ api_base }}/demo/notes.txt \
  -H "X-Secret-Key: <secret_key>" \
  --data-binary @notes.txt

- curl -X POST {{ api_base }}/presign/demo/notes.txt \
-   -H "Content-Type: application/json" \
-   -H "X-Access-Key: <access_key>" \
-   -H "X-Secret-Key: <secret_key>" \
-   -d '{"method":"GET", "expires_in": 900}'
+ # Presigned URLs are generated via the UI
+ # Use the "Presign" button in the object browser
  </code></pre>
  </div>
  </div>
@@ -418,13 +435,8 @@ curl -X POST {{ api_base }}/presign/demo/notes.txt \
  </tr>
  <tr>
  <td>GET/PUT/DELETE</td>
- <td><code>/bucket-policy/<bucket></code></td>
- <td>Fetch, upsert, or remove a bucket policy.</td>
- </tr>
- <tr>
- <td>POST</td>
- <td><code>/presign/<bucket>/<key></code></td>
- <td>Generate SigV4 URLs for GET/PUT/DELETE with custom expiry.</td>
+ <td><code>/<bucket>?policy</code></td>
+ <td>Fetch, upsert, or remove a bucket policy (S3-compatible).</td>
  </tr>
  </tbody>
  </table>
@@ -523,17 +535,16 @@ s3.complete_multipart_upload(
  )</code></pre>

  <h3 class="h6 text-uppercase text-muted mt-4">Presigned URLs for Sharing</h3>
- <pre class="mb-0"><code class="language-bash"># Generate a download link valid for 15 minutes
- curl -X POST "{{ api_base }}/presign/mybucket/photo.jpg" \
-   -H "Content-Type: application/json" \
-   -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
-   -d '{"method": "GET", "expires_in": 900}'
-
- # Generate an upload link (PUT) valid for 1 hour
- curl -X POST "{{ api_base }}/presign/mybucket/upload.bin" \
-   -H "Content-Type: application/json" \
-   -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
-   -d '{"method": "PUT", "expires_in": 3600}'</code></pre>
+ <pre class="mb-0"><code class="language-text"># Generate presigned URLs via the UI:
+ # 1. Navigate to your bucket in the object browser
+ # 2. Select the object you want to share
+ # 3. Click the "Presign" button
+ # 4. Choose method (GET/PUT/DELETE) and expiration time
+ # 5. Copy the generated URL
+
+ # Supported options:
+ # - Method: GET (download), PUT (upload), DELETE (remove)
+ # - Expiration: 1 second to 7 days (604800 seconds)</code></pre>
  </div>
  </article>
  <article id="replication" class="card shadow-sm docs-section">
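Once a presigned URL has been copied from the UI, it can be used by any HTTP client without extra credentials, since the signature is embedded in the URL itself. A small sketch using fetch; the URL below is a placeholder, not a real endpoint:

// Uploading through a presigned PUT URL copied from the UI; the URL value is a placeholder.
const presignedPutUrl = 'https://myfsio.example.com/presigned-put-url';
async function uploadWithPresignedUrl(file) {
  // No auth headers are needed: the presigned URL already carries the SigV4 signature and expiry.
  const response = await fetch(presignedPutUrl, { method: 'PUT', body: file });
  if (!response.ok) {
    throw new Error('Upload failed with status ' + response.status);
  }
}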
@@ -976,10 +987,201 @@ curl "{{ api_base }}/<bucket>?lifecycle" \
  </div>
  </div>
  </article>
- <article id="troubleshooting" class="card shadow-sm docs-section">
+ <article id="metrics" class="card shadow-sm docs-section">
  <div class="card-body">
  <div class="d-flex align-items-center gap-2 mb-3">
  <span class="docs-section-kicker">13</span>
+ <h2 class="h4 mb-0">Metrics History</h2>
+ </div>
+ <p class="text-muted">Track CPU, memory, and disk usage over time with optional metrics history. Disabled by default to minimize overhead.</p>
+ <h3 class="h6 text-uppercase text-muted mt-4">Enabling Metrics History</h3>
+ <p class="small text-muted">Set the environment variable to opt-in:</p>
+ <pre class="mb-3"><code class="language-bash"># PowerShell
+ $env:METRICS_HISTORY_ENABLED = "true"
+ python run.py
+
+ # Bash
+ export METRICS_HISTORY_ENABLED=true
+ python run.py</code></pre>
+ <h3 class="h6 text-uppercase text-muted mt-4">Configuration Options</h3>
+ <div class="table-responsive mb-3">
+ <table class="table table-sm table-bordered small">
+ <thead class="table-light">
+ <tr>
+ <th>Variable</th>
+ <th>Default</th>
+ <th>Description</th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <td><code>METRICS_HISTORY_ENABLED</code></td>
+ <td><code>false</code></td>
+ <td>Enable/disable metrics history recording</td>
+ </tr>
+ <tr>
+ <td><code>METRICS_HISTORY_RETENTION_HOURS</code></td>
+ <td><code>24</code></td>
+ <td>How long to keep history data (hours)</td>
+ </tr>
+ <tr>
+ <td><code>METRICS_HISTORY_INTERVAL_MINUTES</code></td>
+ <td><code>5</code></td>
+ <td>Interval between snapshots (minutes)</td>
+ </tr>
+ </tbody>
+ </table>
+ </div>
+ <h3 class="h6 text-uppercase text-muted mt-4">API Endpoints</h3>
+ <pre class="mb-3"><code class="language-bash"># Get metrics history (last 24 hours by default)
+ curl "{{ api_base | replace('/api', '/ui') }}/metrics/history" \
+   -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"
+
+ # Get history for specific time range
+ curl "{{ api_base | replace('/api', '/ui') }}/metrics/history?hours=6" \
+   -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"
+
+ # Get current settings
+ curl "{{ api_base | replace('/api', '/ui') }}/metrics/settings" \
+   -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"
+
+ # Update settings at runtime
+ curl -X PUT "{{ api_base | replace('/api', '/ui') }}/metrics/settings" \
+   -H "Content-Type: application/json" \
+   -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>" \
+   -d '{"enabled": true, "retention_hours": 48, "interval_minutes": 10}'</code></pre>
+ <h3 class="h6 text-uppercase text-muted mt-4">Storage Location</h3>
+ <p class="small text-muted mb-3">History data is stored at:</p>
+ <code class="d-block mb-3">data/.myfsio.sys/config/metrics_history.json</code>
+ <div class="alert alert-light border mb-0">
+ <div class="d-flex gap-2">
+ <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-info-circle text-muted mt-1 flex-shrink-0" viewBox="0 0 16 16">
+ <path d="M8 15A7 7 0 1 1 8 1a7 7 0 0 1 0 14zm0 1A8 8 0 1 0 8 0a8 8 0 0 0 0 16z"/>
+ <path d="m8.93 6.588-2.29.287-.082.38.45.083c.294.07.352.176.288.469l-.738 3.468c-.194.897.105 1.319.808 1.319.545 0 1.178-.252 1.465-.598l.088-.416c-.2.176-.492.246-.686.246-.275 0-.375-.193-.304-.533L8.93 6.588zM9 4.5a1 1 0 1 1-2 0 1 1 0 0 1 2 0z"/>
+ </svg>
+ <div>
+ <strong>UI Charts:</strong> When enabled, the Metrics dashboard displays line charts showing CPU, memory, and disk usage trends with a time range selector (1h, 6h, 24h, 7d).
+ </div>
+ </div>
+ </div>
+ </div>
+ </article>
+ <article id="operation-metrics" class="card shadow-sm docs-section">
+ <div class="card-body">
+ <div class="d-flex align-items-center gap-2 mb-3">
+ <span class="docs-section-kicker">14</span>
+ <h2 class="h4 mb-0">Operation Metrics</h2>
+ </div>
+ <p class="text-muted">Track API request statistics including request counts, latency, error rates, and bandwidth usage. Provides real-time visibility into API operations.</p>
+ <h3 class="h6 text-uppercase text-muted mt-4">Enabling Operation Metrics</h3>
+ <p class="small text-muted">Set the environment variable to opt-in:</p>
+ <pre class="mb-3"><code class="language-bash"># PowerShell
+ $env:OPERATION_METRICS_ENABLED = "true"
+ python run.py
+
+ # Bash
+ export OPERATION_METRICS_ENABLED=true
+ python run.py</code></pre>
+ <h3 class="h6 text-uppercase text-muted mt-4">Configuration Options</h3>
+ <div class="table-responsive mb-3">
+ <table class="table table-sm table-bordered small">
+ <thead class="table-light">
+ <tr>
+ <th>Variable</th>
+ <th>Default</th>
+ <th>Description</th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <td><code>OPERATION_METRICS_ENABLED</code></td>
+ <td><code>false</code></td>
+ <td>Enable/disable operation metrics collection</td>
+ </tr>
+ <tr>
+ <td><code>OPERATION_METRICS_INTERVAL_MINUTES</code></td>
+ <td><code>5</code></td>
+ <td>Interval between snapshots (minutes)</td>
+ </tr>
+ <tr>
+ <td><code>OPERATION_METRICS_RETENTION_HOURS</code></td>
+ <td><code>24</code></td>
+ <td>How long to keep history data (hours)</td>
+ </tr>
+ </tbody>
+ </table>
+ </div>
+ <h3 class="h6 text-uppercase text-muted mt-4">What's Tracked</h3>
+ <div class="row g-3 mb-4">
+ <div class="col-md-6">
+ <div class="bg-light rounded p-3 h-100">
+ <h6 class="small fw-bold mb-2">Request Statistics</h6>
+ <ul class="small text-muted mb-0 ps-3">
+ <li>Request counts by HTTP method (GET, PUT, POST, DELETE)</li>
+ <li>Response status codes (2xx, 3xx, 4xx, 5xx)</li>
+ <li>Average, min, max latency</li>
+ <li>Bytes transferred in/out</li>
+ </ul>
+ </div>
+ </div>
+ <div class="col-md-6">
+ <div class="bg-light rounded p-3 h-100">
+ <h6 class="small fw-bold mb-2">Endpoint Breakdown</h6>
+ <ul class="small text-muted mb-0 ps-3">
+ <li><code>object</code> - Object operations (GET/PUT/DELETE)</li>
+ <li><code>bucket</code> - Bucket operations</li>
+ <li><code>ui</code> - Web UI requests</li>
+ <li><code>service</code> - Health checks, etc.</li>
+ </ul>
+ </div>
+ </div>
+ </div>
+ <h3 class="h6 text-uppercase text-muted mt-4">S3 Error Codes</h3>
+ <p class="small text-muted">The dashboard tracks S3 API-specific error codes like <code>NoSuchKey</code>, <code>AccessDenied</code>, <code>BucketNotFound</code>. These are separate from HTTP status codes – a 404 from the UI won't appear here, only S3 API errors.</p>
+ <h3 class="h6 text-uppercase text-muted mt-4">API Endpoints</h3>
+ <pre class="mb-3"><code class="language-bash"># Get current operation metrics
+ curl "{{ api_base | replace('/api', '/ui') }}/metrics/operations" \
+   -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"
+
+ # Get operation metrics history
+ curl "{{ api_base | replace('/api', '/ui') }}/metrics/operations/history" \
+   -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"
+
+ # Filter history by time range
+ curl "{{ api_base | replace('/api', '/ui') }}/metrics/operations/history?hours=6" \
+   -H "X-Access-Key: <key>" -H "X-Secret-Key: <secret>"</code></pre>
+ <h3 class="h6 text-uppercase text-muted mt-4">Storage Location</h3>
+ <p class="small text-muted mb-3">Operation metrics data is stored at:</p>
+ <code class="d-block mb-3">data/.myfsio.sys/config/operation_metrics.json</code>
+ <div class="alert alert-light border mb-0">
+ <div class="d-flex gap-2">
+ <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-info-circle text-muted mt-1 flex-shrink-0" viewBox="0 0 16 16">
+ <path d="M8 15A7 7 0 1 1 8 1a7 7 0 0 1 0 14zm0 1A8 8 0 1 0 8 0a8 8 0 0 0 0 16z"/>
+ <path d="m8.93 6.588-2.29.287-.082.38.45.083c.294.07.352.176.288.469l-.738 3.468c-.194.897.105 1.319.808 1.319.545 0 1.178-.252 1.465-.598l.088-.416c-.2.176-.492.246-.686.246-.275 0-.375-.193-.304-.533L8.93 6.588zM9 4.5a1 1 0 1 1-2 0 1 1 0 0 1 2 0z"/>
+ </svg>
+ <div>
+ <strong>UI Dashboard:</strong> When enabled, the Metrics page shows an "API Operations" section with summary cards, charts for requests by method/status/endpoint, and an S3 error codes table. Data refreshes every 5 seconds.
+ </div>
+ </div>
+ </div>
+ </div>
+ </article>
+ <article id="troubleshooting" class="card shadow-sm docs-section">
+ <div class="card-body">
+ <div class="d-flex align-items-center gap-2 mb-3">
+ <span class="docs-section-kicker">15</span>
  <h2 class="h4 mb-0">Troubleshooting & tips</h2>
  </div>
  <div class="table-responsive">
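The 5-second refresh described above implies the dashboard polls the operations endpoint in the background. A minimal polling sketch; the element ids match the summary cards added to the metrics template, while the `/ui/metrics/operations` path and the JSON field names are assumptions drawn from the curl examples rather than confirmed API details:

// Hedged polling sketch for the API Operations cards; response field names are assumed.
async function refreshOperationMetrics() {
  try {
    const response = await fetch('/ui/metrics/operations', { headers: { 'Accept': 'application/json' } });
    if (!response.ok) return;
    const data = await response.json();
    document.getElementById('opTotalRequests').textContent = data.total_requests;
    document.getElementById('opSuccessRate').textContent = data.success_rate + '%';
    document.getElementById('opErrorCount').textContent = data.error_count;
    document.getElementById('opAvgLatency').textContent = data.avg_latency_ms + 'ms';
  } catch (err) {
    console.error('Failed to refresh operation metrics', err);
  }
}
setInterval(refreshOperationMetrics, 5000); // matches the 5-second refresh described above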
@@ -1045,6 +1247,8 @@ curl "{{ api_base }}/<bucket>?lifecycle" \
  <li><a href="#quotas">Bucket Quotas</a></li>
  <li><a href="#encryption">Encryption</a></li>
  <li><a href="#lifecycle">Lifecycle Rules</a></li>
+ <li><a href="#metrics">Metrics History</a></li>
+ <li><a href="#operation-metrics">Operation Metrics</a></li>
  <li><a href="#troubleshooting">Troubleshooting</a></li>
  </ul>
  <div class="docs-sidebar-callouts">
@@ -116,8 +116,8 @@
  <div class="card h-100 iam-user-card">
  <div class="card-body">
  <div class="d-flex align-items-start justify-content-between mb-3">
- <div class="d-flex align-items-center gap-3">
- <div class="user-avatar user-avatar-lg">
+ <div class="d-flex align-items-center gap-3 min-width-0 overflow-hidden">
+ <div class="user-avatar user-avatar-lg flex-shrink-0">
  <svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" fill="currentColor" viewBox="0 0 16 16">
  <path d="M8 8a3 3 0 1 0 0-6 3 3 0 0 0 0 6zm2-3a2 2 0 1 1-4 0 2 2 0 0 1 4 0zm4 8c0 1-1 1-1 1H3s-1 0-1-1 1-4 6-4 6 3 6 4zm-1-.004c-.001-.246-.154-.986-.832-1.664C11.516 10.68 10.289 10 8 10c-2.29 0-3.516.68-4.168 1.332-.678.678-.83 1.418-.832 1.664h10z"/>
  </svg>
@@ -127,7 +127,7 @@
  <code class="small text-muted d-block text-truncate" title="{{ user.access_key }}">{{ user.access_key }}</code>
  </div>
  </div>
- <div class="dropdown">
+ <div class="dropdown flex-shrink-0">
  <button class="btn btn-sm btn-icon" type="button" data-bs-toggle="dropdown" aria-expanded="false">
  <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" viewBox="0 0 16 16">
  <path d="M9.5 13a1.5 1.5 0 1 1-3 0 1.5 1.5 0 0 1 3 0zm0-5a1.5 1.5 0 1 1-3 0 1.5 1.5 0 0 1 3 0zm0-5a1.5 1.5 0 1 1-3 0 1.5 1.5 0 0 1 3 0z"/>
@@ -454,339 +454,20 @@
|
|||||||
|
|
||||||
{% block extra_scripts %}
|
{% block extra_scripts %}
|
||||||
{{ super() }}
|
{{ super() }}
|
||||||
|
<script src="{{ url_for('static', filename='js/iam-management.js') }}"></script>
|
||||||
<script>
|
<script>
|
||||||
(function () {
|
IAMManagement.init({
|
||||||
function setupJsonAutoIndent(textarea) {
|
users: JSON.parse(document.getElementById('iamUsersJson').textContent || '[]'),
|
||||||
if (!textarea) return;
|
currentUserKey: {{ principal.access_key | tojson }},
|
||||||
|
iamLocked: {{ iam_locked | tojson }},
|
||||||
textarea.addEventListener('keydown', function(e) {
|
csrfToken: "{{ csrf_token() }}",
|
||||||
if (e.key === 'Enter') {
|
endpoints: {
|
||||||
e.preventDefault();
|
createUser: "{{ url_for('ui.create_iam_user') }}",
|
||||||
|
updateUser: "{{ url_for('ui.update_iam_user', access_key='ACCESS_KEY') }}",
|
||||||
const start = this.selectionStart;
|
deleteUser: "{{ url_for('ui.delete_iam_user', access_key='ACCESS_KEY') }}",
|
||||||
const end = this.selectionEnd;
|
updatePolicies: "{{ url_for('ui.update_iam_policies', access_key='ACCESS_KEY') }}",
|
||||||
const value = this.value;
|
rotateSecret: "{{ url_for('ui.rotate_iam_secret', access_key='ACCESS_KEY') }}"
|
||||||
|
|
||||||
const lineStart = value.lastIndexOf('\n', start - 1) + 1;
|
|
||||||
const currentLine = value.substring(lineStart, start);
|
|
||||||
|
|
||||||
const indentMatch = currentLine.match(/^(\s*)/);
|
|
||||||
let indent = indentMatch ? indentMatch[1] : '';
|
|
||||||
|
|
||||||
const trimmedLine = currentLine.trim();
|
|
||||||
const lastChar = trimmedLine.slice(-1);
|
|
||||||
|
|
||||||
const charBeforeCursor = value.substring(start - 1, start).trim();
|
|
||||||
|
|
||||||
let newIndent = indent;
|
|
||||||
let insertAfter = '';
|
|
||||||
|
|
||||||
if (lastChar === '{' || lastChar === '[') {
|
|
||||||
newIndent = indent + ' ';
|
|
||||||
|
|
||||||
const charAfterCursor = value.substring(start, start + 1).trim();
|
|
||||||
if ((lastChar === '{' && charAfterCursor === '}') ||
|
|
||||||
(lastChar === '[' && charAfterCursor === ']')) {
|
|
||||||
insertAfter = '\n' + indent;
|
|
||||||
}
|
|
||||||
} else if (lastChar === ',' || lastChar === ':') {
|
|
||||||
newIndent = indent;
|
|
||||||
}
|
|
||||||
|
|
||||||
const insertion = '\n' + newIndent + insertAfter;
|
|
||||||
const newValue = value.substring(0, start) + insertion + value.substring(end);
|
|
||||||
|
|
||||||
this.value = newValue;
|
|
||||||
|
|
||||||
const newCursorPos = start + 1 + newIndent.length;
|
|
||||||
this.selectionStart = this.selectionEnd = newCursorPos;
|
|
||||||
|
|
||||||
this.dispatchEvent(new Event('input', { bubbles: true }));
|
|
||||||
}
|
|
||||||
|
|
||||||
if (e.key === 'Tab') {
|
|
||||||
e.preventDefault();
|
|
||||||
const start = this.selectionStart;
|
|
||||||
const end = this.selectionEnd;
|
|
||||||
|
|
||||||
if (e.shiftKey) {
|
|
||||||
const lineStart = this.value.lastIndexOf('\n', start - 1) + 1;
|
|
||||||
const lineContent = this.value.substring(lineStart, start);
|
|
||||||
if (lineContent.startsWith(' ')) {
|
|
||||||
this.value = this.value.substring(0, lineStart) +
|
|
||||||
this.value.substring(lineStart + 2);
|
|
||||||
this.selectionStart = this.selectionEnd = Math.max(lineStart, start - 2);
|
|
||||||
}
|
|
||||||
} else {
|
|
||||||
this.value = this.value.substring(0, start) + ' ' + this.value.substring(end);
|
|
||||||
this.selectionStart = this.selectionEnd = start + 2;
|
|
||||||
}
|
|
||||||
|
|
||||||
this.dispatchEvent(new Event('input', { bubbles: true }));
|
|
||||||
}
|
|
||||||
});
|
|
||||||
}
|
}
|
||||||
|
});
|
||||||
setupJsonAutoIndent(document.getElementById('policyEditorDocument'));
|
|
||||||
setupJsonAutoIndent(document.getElementById('createUserPolicies'));
|
|
||||||
|
|
||||||
const currentUserKey = {{ principal.access_key | tojson }};
|
|
||||||
const configCopyButtons = document.querySelectorAll('.config-copy');
|
|
||||||
configCopyButtons.forEach((button) => {
|
|
||||||
button.addEventListener('click', async () => {
|
|
||||||
const targetId = button.dataset.copyTarget;
|
|
||||||
const target = document.getElementById(targetId);
|
|
||||||
if (!target) return;
|
|
||||||
const text = target.innerText;
|
|
||||||
try {
|
|
||||||
await navigator.clipboard.writeText(text);
|
|
||||||
button.textContent = 'Copied!';
|
|
||||||
setTimeout(() => {
|
|
||||||
button.textContent = 'Copy JSON';
|
|
||||||
}, 1500);
|
|
||||||
} catch (err) {
|
|
||||||
console.error('Unable to copy IAM config', err);
|
|
||||||
}
|
|
||||||
});
|
|
||||||
});
|
|
||||||
|
|
||||||
const secretCopyButton = document.querySelector('[data-secret-copy]');
|
|
||||||
if (secretCopyButton) {
|
|
||||||
secretCopyButton.addEventListener('click', async () => {
|
|
||||||
const secretInput = document.getElementById('disclosedSecretValue');
|
|
||||||
if (!secretInput) return;
|
|
||||||
try {
|
|
||||||
await navigator.clipboard.writeText(secretInput.value);
|
|
||||||
secretCopyButton.textContent = 'Copied!';
|
|
||||||
setTimeout(() => {
|
|
||||||
secretCopyButton.textContent = 'Copy';
|
|
||||||
}, 1500);
|
|
||||||
} catch (err) {
|
|
||||||
console.error('Unable to copy IAM secret', err);
|
|
||||||
}
|
|
||||||
});
|
|
||||||
}
|
|
||||||
|
|
||||||
const iamUsersData = document.getElementById('iamUsersJson');
|
|
||||||
const users = iamUsersData ? JSON.parse(iamUsersData.textContent || '[]') : [];
|
|
||||||
|
|
||||||
const policyModalEl = document.getElementById('policyEditorModal');
|
|
||||||
const policyModal = new bootstrap.Modal(policyModalEl);
|
|
||||||
const userLabelEl = document.getElementById('policyEditorUserLabel');
|
|
||||||
const userInputEl = document.getElementById('policyEditorUser');
|
|
||||||
const textareaEl = document.getElementById('policyEditorDocument');
|
|
||||||
const formEl = document.getElementById('policyEditorForm');
|
|
||||||
const templateButtons = document.querySelectorAll('[data-policy-template]');
|
|
||||||
const iamLocked = {{ iam_locked | tojson }};
|
|
||||||
|
|
||||||
if (iamLocked) return;
|
|
||||||
|
|
||||||
const userPolicies = (accessKey) => {
|
|
||||||
const target = users.find((user) => user.access_key === accessKey);
|
|
||||||
return target ? JSON.stringify(target.policies, null, 2) : '';
|
|
||||||
};
|
|
||||||
|
|
||||||
const applyTemplate = (name) => {
|
|
||||||
const templates = {
|
|
||||||
full: [
|
|
||||||
{
|
|
||||||
bucket: '*',
|
|
||||||
actions: ['list', 'read', 'write', 'delete', 'share', 'policy', 'replication', 'iam:list_users', 'iam:*'],
|
|
||||||
},
|
|
||||||
],
|
|
||||||
readonly: [
|
|
||||||
{
|
|
||||||
bucket: '*',
|
|
||||||
actions: ['list', 'read'],
|
|
||||||
},
|
|
||||||
],
|
|
||||||
writer: [
|
|
||||||
{
|
|
||||||
bucket: '*',
|
|
||||||
actions: ['list', 'read', 'write'],
|
|
||||||
},
|
|
||||||
],
|
|
||||||
};
|
|
||||||
if (templates[name]) {
|
|
||||||
textareaEl.value = JSON.stringify(templates[name], null, 2);
|
|
||||||
}
|
|
||||||
};
|
|
||||||
|
|
||||||
templateButtons.forEach((button) => {
|
|
||||||
button.addEventListener('click', () => applyTemplate(button.dataset.policyTemplate));
|
|
||||||
});
|
|
||||||
|
|
||||||
const createUserPoliciesEl = document.getElementById('createUserPolicies');
|
|
||||||
const createTemplateButtons = document.querySelectorAll('[data-create-policy-template]');
|
|
||||||
|
|
||||||
const applyCreateTemplate = (name) => {
|
|
||||||
const templates = {
|
|
||||||
full: [
|
|
||||||
{
|
|
||||||
bucket: '*',
|
|
||||||
actions: ['list', 'read', 'write', 'delete', 'share', 'policy', 'replication', 'iam:list_users', 'iam:*'],
|
|
||||||
},
|
|
||||||
],
|
|
||||||
readonly: [
|
|
||||||
{
|
|
||||||
bucket: '*',
|
|
||||||
actions: ['list', 'read'],
|
|
||||||
},
|
|
||||||
],
|
|
||||||
writer: [
|
|
||||||
{
|
|
||||||
bucket: '*',
|
|
||||||
actions: ['list', 'read', 'write'],
|
|
||||||
},
|
|
||||||
],
|
|
||||||
};
|
|
||||||
if (templates[name] && createUserPoliciesEl) {
|
|
||||||
createUserPoliciesEl.value = JSON.stringify(templates[name], null, 2);
|
|
||||||
}
|
|
||||||
};
|
|
||||||
|
|
||||||
createTemplateButtons.forEach((button) => {
|
|
||||||
button.addEventListener('click', () => applyCreateTemplate(button.dataset.createPolicyTemplate));
|
|
||||||
});
|
|
||||||
|
|
||||||
formEl?.addEventListener('submit', (event) => {
|
|
||||||
const key = userInputEl.value;
|
|
||||||
if (!key) {
|
|
||||||
event.preventDefault();
|
|
||||||
return;
|
|
||||||
}
|
|
||||||
const template = formEl.dataset.actionTemplate;
|
|
||||||
formEl.action = template.replace('ACCESS_KEY_PLACEHOLDER', key);
|
|
||||||
});
|
|
||||||
|
|
||||||
document.querySelectorAll('[data-policy-editor]').forEach((button) => {
|
|
||||||
button.addEventListener('click', () => {
|
|
||||||
const key = button.getAttribute('data-access-key');
|
|
||||||
if (!key) return;
|
|
||||||
|
|
||||||
userLabelEl.textContent = key;
|
|
||||||
userInputEl.value = key;
|
|
||||||
textareaEl.value = userPolicies(key);
|
|
||||||
|
|
||||||
policyModal.show();
|
|
||||||
});
|
|
||||||
});
|
|
||||||
|
|
||||||
const editUserModal = new bootstrap.Modal(document.getElementById('editUserModal'));
|
|
||||||
const editUserForm = document.getElementById('editUserForm');
|
|
||||||
const editUserDisplayName = document.getElementById('editUserDisplayName');
|
|
||||||
|
|
||||||
document.querySelectorAll('[data-edit-user]').forEach(btn => {
|
|
||||||
btn.addEventListener('click', () => {
|
|
||||||
const key = btn.dataset.editUser;
|
|
||||||
const name = btn.dataset.displayName;
|
|
||||||
editUserDisplayName.value = name;
|
|
||||||
editUserForm.action = "{{ url_for('ui.update_iam_user', access_key='ACCESS_KEY') }}".replace('ACCESS_KEY', key);
|
|
||||||
editUserModal.show();
|
|
||||||
});
|
|
||||||
});
|
|
||||||
|
|
||||||
const deleteUserModal = new bootstrap.Modal(document.getElementById('deleteUserModal'));
|
|
||||||
const deleteUserForm = document.getElementById('deleteUserForm');
|
|
||||||
const deleteUserLabel = document.getElementById('deleteUserLabel');
|
|
||||||
const deleteSelfWarning = document.getElementById('deleteSelfWarning');
|
|
||||||
|
|
||||||
document.querySelectorAll('[data-delete-user]').forEach(btn => {
|
|
||||||
btn.addEventListener('click', () => {
|
|
||||||
const key = btn.dataset.deleteUser;
|
|
||||||
deleteUserLabel.textContent = key;
|
|
||||||
deleteUserForm.action = "{{ url_for('ui.delete_iam_user', access_key='ACCESS_KEY') }}".replace('ACCESS_KEY', key);
|
|
||||||
|
|
||||||
if (key === currentUserKey) {
|
|
||||||
deleteSelfWarning.classList.remove('d-none');
|
|
||||||
} else {
|
|
||||||
deleteSelfWarning.classList.add('d-none');
|
|
||||||
}
|
|
||||||
|
|
||||||
deleteUserModal.show();
|
|
||||||
});
|
|
||||||
});
|
|
||||||
|
|
||||||
const rotateSecretModal = new bootstrap.Modal(document.getElementById('rotateSecretModal'));
|
|
||||||
const rotateUserLabel = document.getElementById('rotateUserLabel');
|
|
||||||
const confirmRotateBtn = document.getElementById('confirmRotateBtn');
|
|
||||||
const rotateCancelBtn = document.getElementById('rotateCancelBtn');
|
|
||||||
const rotateDoneBtn = document.getElementById('rotateDoneBtn');
|
|
||||||
const rotateSecretConfirm = document.getElementById('rotateSecretConfirm');
|
|
||||||
const rotateSecretResult = document.getElementById('rotateSecretResult');
|
|
||||||
const newSecretKeyInput = document.getElementById('newSecretKey');
|
|
||||||
const copyNewSecretBtn = document.getElementById('copyNewSecret');
|
|
||||||
let currentRotateKey = null;
|
|
||||||
|
|
||||||
document.querySelectorAll('[data-rotate-user]').forEach(btn => {
|
|
||||||
btn.addEventListener('click', () => {
|
|
||||||
currentRotateKey = btn.dataset.rotateUser;
|
|
||||||
rotateUserLabel.textContent = currentRotateKey;
|
|
||||||
|
|
||||||
rotateSecretConfirm.classList.remove('d-none');
|
|
||||||
rotateSecretResult.classList.add('d-none');
|
|
||||||
confirmRotateBtn.classList.remove('d-none');
|
|
||||||
rotateCancelBtn.classList.remove('d-none');
|
|
||||||
rotateDoneBtn.classList.add('d-none');
|
|
||||||
|
|
||||||
rotateSecretModal.show();
|
|
||||||
});
|
|
||||||
});
|
|
||||||
|
|
||||||
confirmRotateBtn.addEventListener('click', async () => {
|
|
||||||
if (!currentRotateKey) return;
|
|
||||||
|
|
||||||
confirmRotateBtn.disabled = true;
|
|
||||||
confirmRotateBtn.textContent = "Rotating...";
|
|
||||||
|
|
||||||
try {
|
|
||||||
const url = "{{ url_for('ui.rotate_iam_secret', access_key='ACCESS_KEY') }}".replace('ACCESS_KEY', currentRotateKey);
|
|
||||||
const response = await fetch(url, {
|
|
||||||
method: 'POST',
|
|
||||||
headers: {
|
|
||||||
'Accept': 'application/json',
|
|
||||||
'X-CSRFToken': "{{ csrf_token() }}"
|
|
||||||
}
|
|
||||||
});
|
|
||||||
|
|
||||||
if (!response.ok) {
|
|
||||||
const data = await response.json();
|
|
||||||
throw new Error(data.error || 'Failed to rotate secret');
|
|
||||||
}
|
|
||||||
|
|
||||||
const data = await response.json();
|
|
||||||
newSecretKeyInput.value = data.secret_key;
|
|
||||||
|
|
||||||
rotateSecretConfirm.classList.add('d-none');
|
|
||||||
rotateSecretResult.classList.remove('d-none');
|
|
||||||
confirmRotateBtn.classList.add('d-none');
|
|
||||||
rotateCancelBtn.classList.add('d-none');
|
|
||||||
rotateDoneBtn.classList.remove('d-none');
|
|
||||||
|
|
||||||
} catch (err) {
|
|
||||||
if (window.showToast) {
|
|
||||||
window.showToast(err.message, 'Error', 'danger');
|
|
||||||
}
|
|
||||||
rotateSecretModal.hide();
|
|
||||||
} finally {
|
|
||||||
confirmRotateBtn.disabled = false;
|
|
||||||
confirmRotateBtn.textContent = "Rotate Key";
|
|
||||||
}
|
|
||||||
});
|
|
||||||
|
|
||||||
copyNewSecretBtn.addEventListener('click', async () => {
|
|
||||||
try {
|
|
||||||
await navigator.clipboard.writeText(newSecretKeyInput.value);
|
|
||||||
copyNewSecretBtn.textContent = 'Copied!';
|
|
||||||
setTimeout(() => copyNewSecretBtn.textContent = 'Copy', 1500);
|
|
||||||
} catch (err) {
|
|
||||||
console.error('Failed to copy', err);
|
|
||||||
}
|
|
||||||
});
|
|
||||||
|
|
||||||
rotateDoneBtn.addEventListener('click', () => {
|
|
||||||
window.location.reload();
|
|
||||||
});
|
|
||||||
})();
|
|
||||||
</script>
|
</script>
|
||||||
{% endblock %}
|
{% endblock %}
|
||||||
|
|||||||
@@ -6,11 +6,11 @@
  <p class="text-muted mb-0">Real-time server performance and storage usage</p>
  </div>
  <div class="d-flex gap-2 align-items-center">
- <span class="d-flex align-items-center gap-2 text-muted small">
+ <span class="d-flex align-items-center gap-2 text-muted small" id="metricsLiveIndicator">
  <span class="live-indicator"></span>
- Live
+ Auto-refresh: <span id="refreshCountdown">5</span>s
  </span>
- <button class="btn btn-outline-secondary btn-sm" onclick="window.location.reload()">
+ <button class="btn btn-outline-secondary btn-sm" id="refreshMetricsBtn">
  <svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="bi bi-arrow-clockwise me-1" viewBox="0 0 16 16">
  <path fill-rule="evenodd" d="M8 3a5 5 0 1 0 4.546 2.914.5.5 0 0 1 .908-.417A6 6 0 1 1 8 2v1z"/>
  <path d="M8 4.466V.534a.25.25 0 0 1 .41-.192l2.36 1.966c.12.1.12.284 0 .384L8.41 4.658A.25.25 0 0 1 8 4.466z"/>
@@ -32,15 +32,13 @@
  </svg>
  </div>
  </div>
- <h2 class="display-6 fw-bold mb-2 stat-value">{{ cpu_percent }}<span class="fs-4 fw-normal text-muted">%</span></h2>
+ <h2 class="display-6 fw-bold mb-2 stat-value"><span data-metric="cpu_percent">{{ cpu_percent }}</span><span class="fs-4 fw-normal text-muted">%</span></h2>
  <div class="progress" style="height: 8px; border-radius: 4px;">
- <div class="progress-bar {% if cpu_percent > 80 %}bg-danger{% elif cpu_percent > 50 %}bg-warning{% else %}bg-primary{% endif %}" role="progressbar" style="width: {{ cpu_percent }}%"></div>
+ <div class="progress-bar bg-primary" data-metric="cpu_bar" role="progressbar" style="width: {{ cpu_percent }}%"></div>
  </div>
  <div class="mt-2 d-flex justify-content-between">
  <small class="text-muted">Current load</small>
- <small class="{% if cpu_percent > 80 %}text-danger{% elif cpu_percent > 50 %}text-warning{% else %}text-success{% endif %}">
- {% if cpu_percent > 80 %}High{% elif cpu_percent > 50 %}Medium{% else %}Normal{% endif %}
- </small>
+ <small data-metric="cpu_status" class="text-success">Normal</small>
  </div>
  </div>
  </div>
@@ -57,13 +55,13 @@
  </svg>
  </div>
  </div>
- <h2 class="display-6 fw-bold mb-2 stat-value">{{ memory.percent }}<span class="fs-4 fw-normal text-muted">%</span></h2>
+ <h2 class="display-6 fw-bold mb-2 stat-value"><span data-metric="memory_percent">{{ memory.percent }}</span><span class="fs-4 fw-normal text-muted">%</span></h2>
  <div class="progress" style="height: 8px; border-radius: 4px;">
- <div class="progress-bar bg-info" role="progressbar" style="width: {{ memory.percent }}%"></div>
+ <div class="progress-bar bg-info" data-metric="memory_bar" role="progressbar" style="width: {{ memory.percent }}%"></div>
  </div>
  <div class="mt-2 d-flex justify-content-between">
- <small class="text-muted">{{ memory.used }} used</small>
- <small class="text-muted">{{ memory.total }} total</small>
+ <small class="text-muted"><span data-metric="memory_used">{{ memory.used }}</span> used</small>
+ <small class="text-muted"><span data-metric="memory_total">{{ memory.total }}</span> total</small>
  </div>
  </div>
  </div>
@@ -81,13 +79,13 @@
  </svg>
  </div>
  </div>
- <h2 class="display-6 fw-bold mb-2 stat-value">{{ disk.percent }}<span class="fs-4 fw-normal text-muted">%</span></h2>
+ <h2 class="display-6 fw-bold mb-2 stat-value"><span data-metric="disk_percent">{{ disk.percent }}</span><span class="fs-4 fw-normal text-muted">%</span></h2>
  <div class="progress" style="height: 8px; border-radius: 4px;">
- <div class="progress-bar {% if disk.percent > 90 %}bg-danger{% elif disk.percent > 75 %}bg-warning{% else %}bg-warning{% endif %}" role="progressbar" style="width: {{ disk.percent }}%"></div>
+ <div class="progress-bar bg-warning" data-metric="disk_bar" role="progressbar" style="width: {{ disk.percent }}%"></div>
  </div>
  <div class="mt-2 d-flex justify-content-between">
- <small class="text-muted">{{ disk.free }} free</small>
- <small class="text-muted">{{ disk.total }} total</small>
+ <small class="text-muted"><span data-metric="disk_free">{{ disk.free }}</span> free</small>
+ <small class="text-muted"><span data-metric="disk_total">{{ disk.total }}</span> total</small>
  </div>
  </div>
  </div>
@@ -104,15 +102,15 @@
  </svg>
  </div>
  </div>
- <h2 class="display-6 fw-bold mb-2 stat-value">{{ app.storage_used }}</h2>
+ <h2 class="display-6 fw-bold mb-2 stat-value" data-metric="storage_used">{{ app.storage_used }}</h2>
  <div class="d-flex gap-3 mt-3">
  <div class="text-center flex-fill">
- <div class="h5 fw-bold mb-0">{{ app.buckets }}</div>
+ <div class="h5 fw-bold mb-0" data-metric="buckets_count">{{ app.buckets }}</div>
  <small class="text-muted">Buckets</small>
  </div>
  <div class="vr"></div>
  <div class="text-center flex-fill">
- <div class="h5 fw-bold mb-0">{{ app.objects }}</div>
+ <div class="h5 fw-bold mb-0" data-metric="objects_count">{{ app.objects }}</div>
  <small class="text-muted">Objects</small>
  </div>
  </div>
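The `data-metric` attributes and the `refreshCountdown` / `refreshMetricsBtn` ids added above suggest the page now updates these values in place instead of reloading. A hedged sketch of that wiring; the `/ui/metrics` JSON endpoint and its field names are assumptions, not part of this diff:

// Hedged sketch of the in-place refresh implied by the data-metric hooks above.
let countdown = 5;
const countdownEl = document.getElementById('refreshCountdown');
async function refreshMetrics() {
  // '/ui/metrics' is an assumed JSON endpoint whose keys mirror the data-metric names.
  const response = await fetch('/ui/metrics', { headers: { 'Accept': 'application/json' } });
  if (!response.ok) return;
  const data = await response.json();
  document.querySelectorAll('[data-metric]').forEach(function (el) {
    const key = el.getAttribute('data-metric');
    if (data[key] !== undefined) el.textContent = data[key];
  });
}
setInterval(function () {
  countdown -= 1;
  if (countdown <= 0) {
    countdown = 5;
    refreshMetrics();
  }
  if (countdownEl) countdownEl.textContent = countdown;
}, 1000);
const refreshBtn = document.getElementById('refreshMetricsBtn');
if (refreshBtn) refreshBtn.addEventListener('click', refreshMetrics);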
@@ -269,4 +267,629 @@
</div>
</div>
</div>

{% if operation_metrics_enabled %}
<div class="row g-4 mt-2">
<div class="col-12">
<div class="card shadow-sm border-0">
<div class="card-header bg-transparent border-0 pt-4 px-4 d-flex justify-content-between align-items-center">
<h5 class="card-title mb-0 fw-semibold">API Operations</h5>
<div class="d-flex align-items-center gap-3">
<span class="small text-muted" id="opStatus">Loading...</span>
<button class="btn btn-outline-secondary btn-sm" id="resetOpMetricsBtn" title="Reset current window">
<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" class="bi bi-arrow-counterclockwise" viewBox="0 0 16 16">
<path fill-rule="evenodd" d="M8 3a5 5 0 1 1-4.546 2.914.5.5 0 0 0-.908-.417A6 6 0 1 0 8 2v1z"/>
<path d="M8 4.466V.534a.25.25 0 0 0-.41-.192L5.23 2.308a.25.25 0 0 0 0 .384l2.36 1.966A.25.25 0 0 0 8 4.466z"/>
</svg>
</button>
</div>
</div>
<div class="card-body p-4">
<div class="row g-3 mb-4">
<div class="col-6 col-md-4 col-lg-2">
<div class="text-center p-3 bg-light rounded h-100">
<h4 class="fw-bold mb-1" id="opTotalRequests">0</h4>
<small class="text-muted">Requests</small>
</div>
</div>
<div class="col-6 col-md-4 col-lg-2">
<div class="text-center p-3 bg-light rounded h-100">
<h4 class="fw-bold mb-1 text-success" id="opSuccessRate">0%</h4>
<small class="text-muted">Success</small>
</div>
</div>
<div class="col-6 col-md-4 col-lg-2">
<div class="text-center p-3 bg-light rounded h-100">
<h4 class="fw-bold mb-1 text-danger" id="opErrorCount">0</h4>
<small class="text-muted">Errors</small>
</div>
</div>
<div class="col-6 col-md-4 col-lg-2">
<div class="text-center p-3 bg-light rounded h-100">
<h4 class="fw-bold mb-1 text-info" id="opAvgLatency">0ms</h4>
<small class="text-muted">Latency</small>
</div>
</div>
<div class="col-6 col-md-4 col-lg-2">
<div class="text-center p-3 bg-light rounded h-100">
<h4 class="fw-bold mb-1 text-primary" id="opBytesIn">0 B</h4>
<small class="text-muted">Bytes In</small>
</div>
</div>
<div class="col-6 col-md-4 col-lg-2">
<div class="text-center p-3 bg-light rounded h-100">
<h4 class="fw-bold mb-1 text-secondary" id="opBytesOut">0 B</h4>
<small class="text-muted">Bytes Out</small>
</div>
</div>
</div>
<div class="row g-4">
<div class="col-lg-6">
<div class="bg-light rounded p-3">
<h6 class="text-muted small fw-bold text-uppercase mb-3">Requests by Method</h6>
<div style="height: 220px; display: flex; align-items: center; justify-content: center;">
<canvas id="methodChart"></canvas>
</div>
</div>
</div>
<div class="col-lg-6">
<div class="bg-light rounded p-3">
<h6 class="text-muted small fw-bold text-uppercase mb-3">Requests by Status</h6>
<div style="height: 220px;">
<canvas id="statusChart"></canvas>
</div>
</div>
</div>
</div>
<div class="row g-4 mt-1">
<div class="col-lg-6">
<div class="bg-light rounded p-3">
<h6 class="text-muted small fw-bold text-uppercase mb-3">Requests by Endpoint</h6>
<div style="height: 180px;">
<canvas id="endpointChart"></canvas>
</div>
</div>
</div>
<div class="col-lg-6">
<div class="bg-light rounded p-3 h-100 d-flex flex-column">
<div class="d-flex justify-content-between align-items-start mb-3">
<h6 class="text-muted small fw-bold text-uppercase mb-0">S3 Error Codes</h6>
<span class="badge bg-secondary-subtle text-secondary" style="font-size: 0.65rem;" title="Tracks S3 API errors like NoSuchKey, AccessDenied, etc.">API Only</span>
</div>
<div class="flex-grow-1 d-flex flex-column" style="min-height: 150px;">
<div class="d-flex border-bottom pb-2 mb-2" style="font-size: 0.75rem;">
<div class="text-muted fw-semibold" style="flex: 1;">Code</div>
<div class="text-muted fw-semibold text-end" style="width: 60px;">Count</div>
<div class="text-muted fw-semibold text-end" style="width: 100px;">Distribution</div>
</div>
<div id="errorCodesContainer" class="flex-grow-1" style="overflow-y: auto;">
<div id="errorCodesBody">
<div class="text-muted small text-center py-4">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" fill="currentColor" class="bi bi-check-circle mb-2 text-success" viewBox="0 0 16 16">
<path d="M8 15A7 7 0 1 1 8 1a7 7 0 0 1 0 14zm0 1A8 8 0 1 0 8 0a8 8 0 0 0 0 16z"/>
<path d="M10.97 4.97a.235.235 0 0 0-.02.022L7.477 9.417 5.384 7.323a.75.75 0 0 0-1.06 1.06L6.97 11.03a.75.75 0 0 0 1.079-.02l3.992-4.99a.75.75 0 0 0-1.071-1.05z"/>
</svg>
<div>No S3 API errors</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
{% endif %}

{% if metrics_history_enabled %}
<div class="row g-4 mt-2">
<div class="col-12">
<div class="card shadow-sm border-0">
<div class="card-header bg-transparent border-0 pt-4 px-4 d-flex justify-content-between align-items-center">
<h5 class="card-title mb-0 fw-semibold">Metrics History</h5>
<div class="d-flex gap-2 align-items-center">
<select class="form-select form-select-sm" id="historyTimeRange" style="width: auto;">
<option value="1">Last 1 hour</option>
<option value="6">Last 6 hours</option>
<option value="24" selected>Last 24 hours</option>
<option value="168">Last 7 days</option>
</select>
</div>
</div>
<div class="card-body p-4">
<div class="row">
<div class="col-md-4 mb-4">
<h6 class="text-muted small fw-bold text-uppercase mb-3">CPU Usage</h6>
<canvas id="cpuHistoryChart" height="200"></canvas>
</div>
<div class="col-md-4 mb-4">
<h6 class="text-muted small fw-bold text-uppercase mb-3">Memory Usage</h6>
<canvas id="memoryHistoryChart" height="200"></canvas>
</div>
<div class="col-md-4 mb-4">
<h6 class="text-muted small fw-bold text-uppercase mb-3">Disk Usage</h6>
<canvas id="diskHistoryChart" height="200"></canvas>
</div>
</div>
<p class="text-muted small mb-0 text-center" id="historyStatus">Loading history data...</p>
</div>
</div>
</div>
</div>
{% endif %}
{% endblock %}

{% block extra_scripts %}
{% if metrics_history_enabled or operation_metrics_enabled %}
<script src="https://cdn.jsdelivr.net/npm/chart.js@4.4.1/dist/chart.umd.min.js"></script>
{% endif %}
<script>
(function() {
var refreshInterval = 5000;
var countdown = 5;
var countdownEl = document.getElementById('refreshCountdown');
var refreshBtn = document.getElementById('refreshMetricsBtn');
var countdownTimer = null;
var fetchTimer = null;

function updateMetrics() {
fetch('/ui/metrics/api')
.then(function(resp) { return resp.json(); })
.then(function(data) {
var el;
el = document.querySelector('[data-metric="cpu_percent"]');
if (el) el.textContent = data.cpu_percent.toFixed(2);
el = document.querySelector('[data-metric="cpu_bar"]');
if (el) {
el.style.width = data.cpu_percent + '%';
el.className = 'progress-bar ' + (data.cpu_percent > 80 ? 'bg-danger' : data.cpu_percent > 50 ? 'bg-warning' : 'bg-primary');
}
el = document.querySelector('[data-metric="cpu_status"]');
if (el) {
el.textContent = data.cpu_percent > 80 ? 'High' : data.cpu_percent > 50 ? 'Medium' : 'Normal';
el.className = data.cpu_percent > 80 ? 'text-danger' : data.cpu_percent > 50 ? 'text-warning' : 'text-success';
}

el = document.querySelector('[data-metric="memory_percent"]');
if (el) el.textContent = data.memory.percent.toFixed(2);
el = document.querySelector('[data-metric="memory_bar"]');
if (el) el.style.width = data.memory.percent + '%';
el = document.querySelector('[data-metric="memory_used"]');
if (el) el.textContent = data.memory.used;
el = document.querySelector('[data-metric="memory_total"]');
if (el) el.textContent = data.memory.total;

el = document.querySelector('[data-metric="disk_percent"]');
if (el) el.textContent = data.disk.percent.toFixed(2);
el = document.querySelector('[data-metric="disk_bar"]');
if (el) {
el.style.width = data.disk.percent + '%';
el.className = 'progress-bar ' + (data.disk.percent > 90 ? 'bg-danger' : 'bg-warning');
}
el = document.querySelector('[data-metric="disk_free"]');
if (el) el.textContent = data.disk.free;
el = document.querySelector('[data-metric="disk_total"]');
if (el) el.textContent = data.disk.total;

el = document.querySelector('[data-metric="storage_used"]');
if (el) el.textContent = data.app.storage_used;
el = document.querySelector('[data-metric="buckets_count"]');
if (el) el.textContent = data.app.buckets;
el = document.querySelector('[data-metric="objects_count"]');
if (el) el.textContent = data.app.objects;

countdown = 5;
})
.catch(function(err) {
console.error('Metrics fetch error:', err);
});
}

function startCountdown() {
if (countdownTimer) clearInterval(countdownTimer);
countdown = 5;
if (countdownEl) countdownEl.textContent = countdown;
countdownTimer = setInterval(function() {
countdown--;
if (countdownEl) countdownEl.textContent = countdown;
if (countdown <= 0) {
countdown = 5;
}
}, 1000);
}

function startPolling() {
if (fetchTimer) clearInterval(fetchTimer);
fetchTimer = setInterval(function() {
if (!document.hidden) {
updateMetrics();
}
}, refreshInterval);
startCountdown();
}

if (refreshBtn) {
refreshBtn.addEventListener('click', function() {
updateMetrics();
countdown = 5;
if (countdownEl) countdownEl.textContent = countdown;
});
}

document.addEventListener('visibilitychange', function() {
if (!document.hidden) {
updateMetrics();
startPolling();
}
});

startPolling();
})();

{% if operation_metrics_enabled %}
(function() {
var methodChart = null;
var statusChart = null;
var endpointChart = null;
var opStatus = document.getElementById('opStatus');
var opTimer = null;
var methodColors = {
'GET': '#0d6efd',
'PUT': '#198754',
'POST': '#ffc107',
'DELETE': '#dc3545',
'HEAD': '#6c757d',
'OPTIONS': '#0dcaf0'
};
var statusColors = {
'2xx': '#198754',
'3xx': '#0dcaf0',
'4xx': '#ffc107',
'5xx': '#dc3545'
};
var endpointColors = {
'object': '#0d6efd',
'bucket': '#198754',
'ui': '#6c757d',
'service': '#0dcaf0',
'kms': '#ffc107'
};

function formatBytes(bytes) {
if (bytes === 0) return '0 B';
var k = 1024;
var sizes = ['B', 'KB', 'MB', 'GB', 'TB'];
var i = Math.floor(Math.log(bytes) / Math.log(k));
return parseFloat((bytes / Math.pow(k, i)).toFixed(1)) + ' ' + sizes[i];
}

function initOpCharts() {
var methodCtx = document.getElementById('methodChart');
var statusCtx = document.getElementById('statusChart');
var endpointCtx = document.getElementById('endpointChart');

if (methodCtx) {
methodChart = new Chart(methodCtx, {
type: 'doughnut',
data: {
labels: [],
datasets: [{
data: [],
backgroundColor: []
}]
},
options: {
responsive: true,
maintainAspectRatio: false,
animation: false,
plugins: {
legend: { position: 'right', labels: { boxWidth: 12, font: { size: 11 } } }
}
}
});
}

if (statusCtx) {
statusChart = new Chart(statusCtx, {
type: 'bar',
data: {
labels: [],
datasets: [{
data: [],
backgroundColor: []
}]
},
options: {
responsive: true,
maintainAspectRatio: false,
animation: false,
plugins: { legend: { display: false } },
scales: {
y: { beginAtZero: true, ticks: { stepSize: 1 } }
}
}
});
}

if (endpointCtx) {
endpointChart = new Chart(endpointCtx, {
type: 'bar',
data: {
labels: [],
datasets: [{
data: [],
backgroundColor: []
}]
},
options: {
responsive: true,
maintainAspectRatio: false,
indexAxis: 'y',
animation: false,
plugins: { legend: { display: false } },
scales: {
x: { beginAtZero: true, ticks: { stepSize: 1 } }
}
}
});
}
}

function updateOpMetrics() {
if (document.hidden) return;
fetch('/ui/metrics/operations')
.then(function(r) { return r.json(); })
.then(function(data) {
if (!data.enabled || !data.stats) {
if (opStatus) opStatus.textContent = 'Operation metrics not available';
return;
}
var stats = data.stats;
var totals = stats.totals || {};

var totalEl = document.getElementById('opTotalRequests');
var successEl = document.getElementById('opSuccessRate');
var errorEl = document.getElementById('opErrorCount');
var latencyEl = document.getElementById('opAvgLatency');
var bytesInEl = document.getElementById('opBytesIn');
var bytesOutEl = document.getElementById('opBytesOut');

if (totalEl) totalEl.textContent = totals.count || 0;
if (successEl) {
var rate = totals.count > 0 ? ((totals.success_count / totals.count) * 100).toFixed(1) : 0;
successEl.textContent = rate + '%';
}
if (errorEl) errorEl.textContent = totals.error_count || 0;
if (latencyEl) latencyEl.textContent = (totals.latency_avg_ms || 0).toFixed(1) + 'ms';
if (bytesInEl) bytesInEl.textContent = formatBytes(totals.bytes_in || 0);
if (bytesOutEl) bytesOutEl.textContent = formatBytes(totals.bytes_out || 0);

if (methodChart && stats.by_method) {
var methods = Object.keys(stats.by_method);
var methodData = methods.map(function(m) { return stats.by_method[m].count; });
var methodBg = methods.map(function(m) { return methodColors[m] || '#6c757d'; });
methodChart.data.labels = methods;
methodChart.data.datasets[0].data = methodData;
methodChart.data.datasets[0].backgroundColor = methodBg;
methodChart.update('none');
}

if (statusChart && stats.by_status_class) {
var statuses = Object.keys(stats.by_status_class).sort();
var statusData = statuses.map(function(s) { return stats.by_status_class[s]; });
var statusBg = statuses.map(function(s) { return statusColors[s] || '#6c757d'; });
statusChart.data.labels = statuses;
statusChart.data.datasets[0].data = statusData;
statusChart.data.datasets[0].backgroundColor = statusBg;
statusChart.update('none');
}

if (endpointChart && stats.by_endpoint) {
var endpoints = Object.keys(stats.by_endpoint);
var endpointData = endpoints.map(function(e) { return stats.by_endpoint[e].count; });
var endpointBg = endpoints.map(function(e) { return endpointColors[e] || '#6c757d'; });
endpointChart.data.labels = endpoints;
endpointChart.data.datasets[0].data = endpointData;
endpointChart.data.datasets[0].backgroundColor = endpointBg;
endpointChart.update('none');
}

var errorBody = document.getElementById('errorCodesBody');
if (errorBody && stats.error_codes) {
var errorCodes = Object.entries(stats.error_codes);
errorCodes.sort(function(a, b) { return b[1] - a[1]; });
var totalErrors = errorCodes.reduce(function(sum, e) { return sum + e[1]; }, 0);
errorCodes = errorCodes.slice(0, 10);
if (errorCodes.length === 0) {
errorBody.innerHTML = '<div class="text-muted small text-center py-4">' +
'<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" fill="currentColor" class="bi bi-check-circle mb-2 text-success" viewBox="0 0 16 16">' +
'<path d="M8 15A7 7 0 1 1 8 1a7 7 0 0 1 0 14zm0 1A8 8 0 1 0 8 0a8 8 0 0 0 0 16z"/>' +
'<path d="M10.97 4.97a.235.235 0 0 0-.02.022L7.477 9.417 5.384 7.323a.75.75 0 0 0-1.06 1.06L6.97 11.03a.75.75 0 0 0 1.079-.02l3.992-4.99a.75.75 0 0 0-1.071-1.05z"/>' +
'</svg><div>No S3 API errors</div></div>';
} else {
errorBody.innerHTML = errorCodes.map(function(e) {
var pct = totalErrors > 0 ? ((e[1] / totalErrors) * 100).toFixed(0) : 0;
return '<div class="d-flex align-items-center py-1" style="font-size: 0.8rem;">' +
'<div style="flex: 1;"><code class="text-danger">' + e[0] + '</code></div>' +
'<div class="text-end fw-semibold" style="width: 60px;">' + e[1] + '</div>' +
'<div style="width: 100px; padding-left: 10px;"><div class="progress" style="height: 6px;"><div class="progress-bar bg-danger" style="width: ' + pct + '%"></div></div></div>' +
'</div>';
}).join('');
}
}

var windowMins = Math.floor(stats.window_seconds / 60);
var windowSecs = stats.window_seconds % 60;
var windowStr = windowMins > 0 ? windowMins + 'm ' + windowSecs + 's' : windowSecs + 's';
if (opStatus) opStatus.textContent = 'Window: ' + windowStr + ' | ' + new Date().toLocaleTimeString();
})
.catch(function(err) {
console.error('Operation metrics fetch error:', err);
if (opStatus) opStatus.textContent = 'Failed to load';
});
}

function startOpPolling() {
if (opTimer) clearInterval(opTimer);
opTimer = setInterval(updateOpMetrics, 5000);
}

var resetBtn = document.getElementById('resetOpMetricsBtn');
if (resetBtn) {
resetBtn.addEventListener('click', function() {
updateOpMetrics();
});
}

document.addEventListener('visibilitychange', function() {
if (document.hidden) {
if (opTimer) clearInterval(opTimer);
opTimer = null;
} else {
updateOpMetrics();
startOpPolling();
}
});

initOpCharts();
updateOpMetrics();
startOpPolling();
})();
{% endif %}

{% if metrics_history_enabled %}
(function() {
var cpuChart = null;
var memoryChart = null;
var diskChart = null;
var historyStatus = document.getElementById('historyStatus');
var timeRangeSelect = document.getElementById('historyTimeRange');
var historyTimer = null;
var MAX_DATA_POINTS = 500;

function createChart(ctx, label, color) {
return new Chart(ctx, {
type: 'line',
data: {
labels: [],
datasets: [{
label: label,
data: [],
borderColor: color,
backgroundColor: color + '20',
fill: true,
tension: 0.3,
pointRadius: 3,
pointHoverRadius: 6,
hitRadius: 10,
}]
},
options: {
responsive: true,
maintainAspectRatio: true,
animation: false,
plugins: {
legend: { display: false },
tooltip: {
callbacks: {
label: function(ctx) { return ctx.parsed.y.toFixed(2) + '%'; }
}
}
},
scales: {
x: {
display: true,
ticks: { maxTicksAuto: true, maxRotation: 0, font: { size: 10 }, autoSkip: true, maxTicksLimit: 10 }
},
y: {
display: true,
min: 0,
max: 100,
ticks: { callback: function(v) { return v + '%'; } }
}
}
}
});
}

function initCharts() {
var cpuCtx = document.getElementById('cpuHistoryChart');
var memCtx = document.getElementById('memoryHistoryChart');
var diskCtx = document.getElementById('diskHistoryChart');
if (cpuCtx) cpuChart = createChart(cpuCtx, 'CPU %', '#0d6efd');
if (memCtx) memoryChart = createChart(memCtx, 'Memory %', '#0dcaf0');
if (diskCtx) diskChart = createChart(diskCtx, 'Disk %', '#ffc107');
}

function formatTime(ts) {
var d = new Date(ts);
return d.toLocaleTimeString([], { hour: '2-digit', minute: '2-digit' });
}

function loadHistory() {
if (document.hidden) return;
var hours = timeRangeSelect ? timeRangeSelect.value : 24;
fetch('/ui/metrics/history?hours=' + hours)
.then(function(r) { return r.json(); })
.then(function(data) {
if (!data.enabled || !data.history || data.history.length === 0) {
if (historyStatus) historyStatus.textContent = 'No history data available yet. Data is recorded every ' + (data.interval_minutes || 5) + ' minutes.';
return;
}
var history = data.history.slice(-MAX_DATA_POINTS);
var labels = history.map(function(h) { return formatTime(h.timestamp); });
var cpuData = history.map(function(h) { return h.cpu_percent; });
var memData = history.map(function(h) { return h.memory_percent; });
var diskData = history.map(function(h) { return h.disk_percent; });

if (cpuChart) {
cpuChart.data.labels = labels;
cpuChart.data.datasets[0].data = cpuData;
cpuChart.update('none');
}
if (memoryChart) {
memoryChart.data.labels = labels;
memoryChart.data.datasets[0].data = memData;
memoryChart.update('none');
}
if (diskChart) {
diskChart.data.labels = labels;
diskChart.data.datasets[0].data = diskData;
diskChart.update('none');
}
if (historyStatus) historyStatus.textContent = 'Showing ' + history.length + ' data points';
})
.catch(function(err) {
console.error('History fetch error:', err);
if (historyStatus) historyStatus.textContent = 'Failed to load history data';
});
}

function startHistoryPolling() {
if (historyTimer) clearInterval(historyTimer);
historyTimer = setInterval(loadHistory, 60000);
}

if (timeRangeSelect) {
timeRangeSelect.addEventListener('change', loadHistory);
}

document.addEventListener('visibilitychange', function() {
if (document.hidden) {
if (historyTimer) clearInterval(historyTimer);
historyTimer = null;
} else {
loadHistory();
startHistoryPolling();
}
});

initCharts();
loadHistory();
startHistoryPolling();
})();
{% endif %}
</script>
{% endblock %}
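For reviewers tracing the polling script above: updateMetrics() only reads a fixed set of fields, so /ui/metrics/api is expected to return JSON roughly shaped like the sketch below. This is an illustrative payload inferred from the JavaScript, not a documented schema, and every value shown here is made up.

```python
# Hypothetical example of the JSON that GET /ui/metrics/api is assumed to return,
# inferred from the fields updateMetrics() reads; the real endpoint may add more.
EXAMPLE_METRICS_PAYLOAD = {
    "cpu_percent": 12.34,
    "memory": {"percent": 41.7, "used": "1.6 GB", "total": "4.0 GB"},
    "disk": {"percent": 63.0, "free": "14.8 GB", "total": "40.0 GB"},
    "app": {"storage_used": "3.1 GB", "buckets": 4, "objects": 128},
}
```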
@@ -35,6 +35,7 @@ def app(tmp_path: Path):
    flask_app = create_api_app(
        {
            "TESTING": True,
+            "SECRET_KEY": "testing",
            "STORAGE_ROOT": storage_root,
            "IAM_CONFIG": iam_config,
            "BUCKET_POLICY_PATH": bucket_policies,
@@ -1,6 +1,3 @@
-from urllib.parse import urlsplit
-
-
def test_bucket_and_object_lifecycle(client, signer):
    headers = signer("PUT", "/photos")
    response = client.put("/photos", headers=headers)
@@ -104,12 +101,12 @@ def test_request_id_header_present(client, signer):
    assert response.headers.get("X-Request-ID")


-def test_healthcheck_returns_version(client):
+def test_healthcheck_returns_status(client):
-    response = client.get("/healthz")
+    response = client.get("/myfsio/health")
    data = response.get_json()
    assert response.status_code == 200
    assert data["status"] == "ok"
-    assert "version" in data
+    assert "version" not in data


def test_missing_credentials_denied(client):
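The renamed test pins the health endpoint to /myfsio/health (matching the Dockerfile HEALTHCHECK change earlier in this compare) and asserts that the body no longer carries a version field. A minimal sketch of a handler that would satisfy it, assuming a plain Flask route; in the real app this is wired up inside create_api_app() rather than on a standalone app object:

```python
from flask import Flask, jsonify

app = Flask(__name__)  # illustrative only, not the project's app factory

@app.get("/myfsio/health")
def health():
    # The test only needs a 200 response whose JSON is {"status": "ok"} with no "version" key.
    return jsonify({"status": "ok"})
```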
@@ -117,36 +114,20 @@ def test_missing_credentials_denied(client):
    assert response.status_code == 403


-def test_presign_and_bucket_policies(client, signer):
+def test_bucket_policies_deny_reads(client, signer):
-    # Create bucket and object
+    import json

    headers = signer("PUT", "/docs")
    assert client.put("/docs", headers=headers).status_code == 200

    headers = signer("PUT", "/docs/readme.txt", body=b"content")
    assert client.put("/docs/readme.txt", headers=headers, data=b"content").status_code == 200

-    # Generate presigned GET URL and follow it
+    headers = signer("GET", "/docs/readme.txt")
-    json_body = {"method": "GET", "expires_in": 120}
+    response = client.get("/docs/readme.txt", headers=headers)
-    # Flask test client json parameter automatically sets Content-Type and serializes body
-    # But for signing we need the body bytes.
-    import json
-    body_bytes = json.dumps(json_body).encode("utf-8")
-    headers = signer("POST", "/presign/docs/readme.txt", headers={"Content-Type": "application/json"}, body=body_bytes)
-
-    response = client.post(
-        "/presign/docs/readme.txt",
-        headers=headers,
-        json=json_body,
-    )
    assert response.status_code == 200
-    presigned_url = response.get_json()["url"]
+    assert response.data == b"content"
-    parts = urlsplit(presigned_url)
-    presigned_path = f"{parts.path}?{parts.query}"
-    download = client.get(presigned_path)
-    assert download.status_code == 200
-    assert download.data == b"content"
-
-    # Attach a deny policy for GETs
    policy = {
        "Version": "2012-10-17",
        "Statement": [
@@ -160,29 +141,26 @@ def test_presign_and_bucket_policies(client, signer):
        ],
    }
    policy_bytes = json.dumps(policy).encode("utf-8")
-    headers = signer("PUT", "/bucket-policy/docs", headers={"Content-Type": "application/json"}, body=policy_bytes)
+    headers = signer("PUT", "/docs?policy", headers={"Content-Type": "application/json"}, body=policy_bytes)
-    assert client.put("/bucket-policy/docs", headers=headers, json=policy).status_code == 204
+    assert client.put("/docs?policy", headers=headers, json=policy).status_code == 204

-    headers = signer("GET", "/bucket-policy/docs")
+    headers = signer("GET", "/docs?policy")
-    fetched = client.get("/bucket-policy/docs", headers=headers)
+    fetched = client.get("/docs?policy", headers=headers)
    assert fetched.status_code == 200
    assert fetched.get_json()["Version"] == "2012-10-17"

-    # Reads are now denied by bucket policy
    headers = signer("GET", "/docs/readme.txt")
    denied = client.get("/docs/readme.txt", headers=headers)
    assert denied.status_code == 403

-    # Presign attempts are also denied
+    headers = signer("DELETE", "/docs?policy")
-    json_body = {"method": "GET", "expires_in": 60}
+    assert client.delete("/docs?policy", headers=headers).status_code == 204
-    body_bytes = json.dumps(json_body).encode("utf-8")
-    headers = signer("POST", "/presign/docs/readme.txt", headers={"Content-Type": "application/json"}, body=body_bytes)
+    headers = signer("DELETE", "/docs/readme.txt")
-    response = client.post(
+    assert client.delete("/docs/readme.txt", headers=headers).status_code == 204
-        "/presign/docs/readme.txt",
-        headers=headers,
+    headers = signer("DELETE", "/docs")
-        json=json_body,
+    assert client.delete("/docs", headers=headers).status_code == 204
-    )
-    assert response.status_code == 403


def test_trailing_slash_returns_xml(client):
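The rewritten tests manage bucket policies through an S3-style ?policy subresource on the bucket itself rather than the old /bucket-policy/<bucket> routes. A condensed sketch of that call pattern, reusing the client and signer fixtures from this module; put_bucket_policy is a hypothetical helper written for illustration, not part of the test suite:

```python
import json

def put_bucket_policy(client, signer, bucket, policy):
    # Sign the JSON body and PUT it to the bucket's ?policy subresource (new route convention).
    body = json.dumps(policy).encode("utf-8")
    headers = signer("PUT", f"/{bucket}?policy", headers={"Content-Type": "application/json"}, body=body)
    return client.put(f"/{bucket}?policy", headers=headers, json=policy)
```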
@@ -193,6 +171,8 @@ def test_trailing_slash_returns_xml(client):

def test_public_policy_allows_anonymous_list_and_read(client, signer):
+    import json

    headers = signer("PUT", "/public")
    assert client.put("/public", headers=headers).status_code == 200
@@ -221,10 +201,9 @@ def test_public_policy_allows_anonymous_list_and_read(client, signer):
            },
        ],
    }
-    import json
    policy_bytes = json.dumps(policy).encode("utf-8")
-    headers = signer("PUT", "/bucket-policy/public", headers={"Content-Type": "application/json"}, body=policy_bytes)
+    headers = signer("PUT", "/public?policy", headers={"Content-Type": "application/json"}, body=policy_bytes)
-    assert client.put("/bucket-policy/public", headers=headers, json=policy).status_code == 204
+    assert client.put("/public?policy", headers=headers, json=policy).status_code == 204

    list_response = client.get("/public")
    assert list_response.status_code == 200
@@ -237,14 +216,16 @@ def test_public_policy_allows_anonymous_list_and_read(client, signer):
    headers = signer("DELETE", "/public/hello.txt")
    assert client.delete("/public/hello.txt", headers=headers).status_code == 204

-    headers = signer("DELETE", "/bucket-policy/public")
+    headers = signer("DELETE", "/public?policy")
-    assert client.delete("/bucket-policy/public", headers=headers).status_code == 204
+    assert client.delete("/public?policy", headers=headers).status_code == 204

    headers = signer("DELETE", "/public")
    assert client.delete("/public", headers=headers).status_code == 204


def test_principal_dict_with_object_get_only(client, signer):
+    import json

    headers = signer("PUT", "/mixed")
    assert client.put("/mixed", headers=headers).status_code == 200
@@ -270,10 +251,9 @@ def test_principal_dict_with_object_get_only(client, signer):
            },
        ],
    }
-    import json
    policy_bytes = json.dumps(policy).encode("utf-8")
-    headers = signer("PUT", "/bucket-policy/mixed", headers={"Content-Type": "application/json"}, body=policy_bytes)
+    headers = signer("PUT", "/mixed?policy", headers={"Content-Type": "application/json"}, body=policy_bytes)
-    assert client.put("/bucket-policy/mixed", headers=headers, json=policy).status_code == 204
+    assert client.put("/mixed?policy", headers=headers, json=policy).status_code == 204

    assert client.get("/mixed").status_code == 403
    allowed = client.get("/mixed/only.txt")
@@ -283,14 +263,16 @@ def test_principal_dict_with_object_get_only(client, signer):
    headers = signer("DELETE", "/mixed/only.txt")
    assert client.delete("/mixed/only.txt", headers=headers).status_code == 204

-    headers = signer("DELETE", "/bucket-policy/mixed")
+    headers = signer("DELETE", "/mixed?policy")
-    assert client.delete("/bucket-policy/mixed", headers=headers).status_code == 204
+    assert client.delete("/mixed?policy", headers=headers).status_code == 204

    headers = signer("DELETE", "/mixed")
    assert client.delete("/mixed", headers=headers).status_code == 204


def test_bucket_policy_wildcard_resource_allows_object_get(client, signer):
+    import json

    headers = signer("PUT", "/test")
    assert client.put("/test", headers=headers).status_code == 200
@@ -314,10 +296,9 @@ def test_bucket_policy_wildcard_resource_allows_object_get(client, signer):
            },
        ],
    }
-    import json
    policy_bytes = json.dumps(policy).encode("utf-8")
-    headers = signer("PUT", "/bucket-policy/test", headers={"Content-Type": "application/json"}, body=policy_bytes)
+    headers = signer("PUT", "/test?policy", headers={"Content-Type": "application/json"}, body=policy_bytes)
-    assert client.put("/bucket-policy/test", headers=headers, json=policy).status_code == 204
+    assert client.put("/test?policy", headers=headers, json=policy).status_code == 204

    listing = client.get("/test")
    assert listing.status_code == 403
@@ -328,8 +309,8 @@ def test_bucket_policy_wildcard_resource_allows_object_get(client, signer):
    headers = signer("DELETE", "/test/vid.mp4")
    assert client.delete("/test/vid.mp4", headers=headers).status_code == 204

-    headers = signer("DELETE", "/bucket-policy/test")
+    headers = signer("DELETE", "/test?policy")
-    assert client.delete("/bucket-policy/test", headers=headers).status_code == 204
+    assert client.delete("/test?policy", headers=headers).status_code == 204

    headers = signer("DELETE", "/test")
    assert client.delete("/test", headers=headers).status_code == 204
@@ -15,6 +15,7 @@ def kms_client(tmp_path):
    app = create_app({
        "TESTING": True,
+        "SECRET_KEY": "testing",
        "STORAGE_ROOT": str(tmp_path / "storage"),
        "IAM_CONFIG": str(tmp_path / "iam.json"),
        "BUCKET_POLICY_PATH": str(tmp_path / "policies.json"),
297 tests/test_operation_metrics.py (Normal file)
@@ -0,0 +1,297 @@
import threading
import time
from pathlib import Path

import pytest

from app.operation_metrics import (
    OperationMetricsCollector,
    OperationStats,
    classify_endpoint,
)


class TestOperationStats:
    def test_initial_state(self):
        stats = OperationStats()
        assert stats.count == 0
        assert stats.success_count == 0
        assert stats.error_count == 0
        assert stats.latency_sum_ms == 0.0
        assert stats.bytes_in == 0
        assert stats.bytes_out == 0

    def test_record_success(self):
        stats = OperationStats()
        stats.record(latency_ms=50.0, success=True, bytes_in=100, bytes_out=200)

        assert stats.count == 1
        assert stats.success_count == 1
        assert stats.error_count == 0
        assert stats.latency_sum_ms == 50.0
        assert stats.latency_min_ms == 50.0
        assert stats.latency_max_ms == 50.0
        assert stats.bytes_in == 100
        assert stats.bytes_out == 200

    def test_record_error(self):
        stats = OperationStats()
        stats.record(latency_ms=100.0, success=False, bytes_in=50, bytes_out=0)

        assert stats.count == 1
        assert stats.success_count == 0
        assert stats.error_count == 1

    def test_latency_min_max(self):
        stats = OperationStats()
        stats.record(latency_ms=50.0, success=True)
        stats.record(latency_ms=10.0, success=True)
        stats.record(latency_ms=100.0, success=True)

        assert stats.latency_min_ms == 10.0
        assert stats.latency_max_ms == 100.0
        assert stats.latency_sum_ms == 160.0

    def test_to_dict(self):
        stats = OperationStats()
        stats.record(latency_ms=50.0, success=True, bytes_in=100, bytes_out=200)
        stats.record(latency_ms=100.0, success=False, bytes_in=50, bytes_out=0)

        result = stats.to_dict()
        assert result["count"] == 2
        assert result["success_count"] == 1
        assert result["error_count"] == 1
        assert result["latency_avg_ms"] == 75.0
        assert result["latency_min_ms"] == 50.0
        assert result["latency_max_ms"] == 100.0
        assert result["bytes_in"] == 150
        assert result["bytes_out"] == 200

    def test_to_dict_empty(self):
        stats = OperationStats()
        result = stats.to_dict()
        assert result["count"] == 0
        assert result["latency_avg_ms"] == 0.0
        assert result["latency_min_ms"] == 0.0

    def test_merge(self):
        stats1 = OperationStats()
        stats1.record(latency_ms=50.0, success=True, bytes_in=100, bytes_out=200)

        stats2 = OperationStats()
        stats2.record(latency_ms=10.0, success=True, bytes_in=50, bytes_out=100)
        stats2.record(latency_ms=100.0, success=False, bytes_in=25, bytes_out=50)

        stats1.merge(stats2)

        assert stats1.count == 3
        assert stats1.success_count == 2
        assert stats1.error_count == 1
        assert stats1.latency_min_ms == 10.0
        assert stats1.latency_max_ms == 100.0
        assert stats1.bytes_in == 175
        assert stats1.bytes_out == 350

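TestOperationStats above pins down the accumulator's observable behaviour: counts, min/max/average latency, byte totals and merge semantics. A minimal sketch consistent with those assertions follows; it is named OperationStatsSketch to make clear it is an illustration derived from the tests, not the shipped app.operation_metrics.OperationStats.

```python
from dataclasses import dataclass

@dataclass
class OperationStatsSketch:
    count: int = 0
    success_count: int = 0
    error_count: int = 0
    latency_sum_ms: float = 0.0
    latency_min_ms: float = 0.0
    latency_max_ms: float = 0.0
    bytes_in: int = 0
    bytes_out: int = 0

    def record(self, latency_ms, success, bytes_in=0, bytes_out=0):
        # The first sample initialises min/max; later samples tighten them.
        self.latency_min_ms = latency_ms if self.count == 0 else min(self.latency_min_ms, latency_ms)
        self.latency_max_ms = max(self.latency_max_ms, latency_ms)
        self.count += 1
        if success:
            self.success_count += 1
        else:
            self.error_count += 1
        self.latency_sum_ms += latency_ms
        self.bytes_in += bytes_in
        self.bytes_out += bytes_out

    def merge(self, other):
        # Fold another window into this one; min/max only change if the other side has samples.
        if other.count:
            self.latency_min_ms = other.latency_min_ms if self.count == 0 else min(self.latency_min_ms, other.latency_min_ms)
            self.latency_max_ms = max(self.latency_max_ms, other.latency_max_ms)
        self.count += other.count
        self.success_count += other.success_count
        self.error_count += other.error_count
        self.latency_sum_ms += other.latency_sum_ms
        self.bytes_in += other.bytes_in
        self.bytes_out += other.bytes_out

    def to_dict(self):
        return {
            "count": self.count,
            "success_count": self.success_count,
            "error_count": self.error_count,
            "latency_avg_ms": self.latency_sum_ms / self.count if self.count else 0.0,
            "latency_min_ms": self.latency_min_ms,
            "latency_max_ms": self.latency_max_ms,
            "bytes_in": self.bytes_in,
            "bytes_out": self.bytes_out,
        }
```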
class TestClassifyEndpoint:
|
||||||
|
def test_root_path(self):
|
||||||
|
assert classify_endpoint("/") == "service"
|
||||||
|
assert classify_endpoint("") == "service"
|
||||||
|
|
||||||
|
def test_ui_paths(self):
|
||||||
|
assert classify_endpoint("/ui") == "ui"
|
||||||
|
assert classify_endpoint("/ui/buckets") == "ui"
|
||||||
|
assert classify_endpoint("/ui/metrics") == "ui"
|
||||||
|
|
||||||
|
def test_kms_paths(self):
|
||||||
|
assert classify_endpoint("/kms") == "kms"
|
||||||
|
assert classify_endpoint("/kms/keys") == "kms"
|
||||||
|
|
||||||
|
def test_service_paths(self):
|
||||||
|
assert classify_endpoint("/myfsio/health") == "service"
|
||||||
|
|
||||||
|
def test_bucket_paths(self):
|
||||||
|
assert classify_endpoint("/mybucket") == "bucket"
|
||||||
|
assert classify_endpoint("/mybucket/") == "bucket"
|
||||||
|
|
||||||
|
def test_object_paths(self):
|
||||||
|
assert classify_endpoint("/mybucket/mykey") == "object"
|
||||||
|
assert classify_endpoint("/mybucket/folder/nested/key.txt") == "object"
|
||||||
|
|
||||||
|
|
||||||
|
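Likewise, a sketch of a path classifier that satisfies TestClassifyEndpoint: the rules (ui, kms and /myfsio/* count as non-S3 traffic, one remaining path segment is a bucket, more segments are an object) are read directly off the assertions, while the implementation details are assumptions.

```python
def classify_endpoint_sketch(path: str) -> str:
    # Drop any query string and surrounding slashes before inspecting the segments.
    path = path.split("?", 1)[0].strip("/")
    if not path:
        return "service"
    parts = path.split("/")
    head = parts[0]
    if head == "ui":
        return "ui"
    if head == "kms":
        return "kms"
    if head == "myfsio":
        return "service"
    # Everything else is S3 traffic: one segment addresses a bucket, more address an object.
    return "bucket" if len(parts) == 1 else "object"
```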
class TestOperationMetricsCollector:
|
||||||
|
def test_record_and_get_stats(self, tmp_path: Path):
|
||||||
|
collector = OperationMetricsCollector(
|
||||||
|
storage_root=tmp_path,
|
||||||
|
interval_minutes=60,
|
||||||
|
retention_hours=24,
|
||||||
|
)
|
||||||
|
|
||||||
|
try:
|
||||||
|
collector.record_request(
|
||||||
|
method="GET",
|
||||||
|
endpoint_type="bucket",
|
||||||
|
status_code=200,
|
||||||
|
latency_ms=50.0,
|
||||||
|
bytes_in=0,
|
||||||
|
bytes_out=1000,
|
||||||
|
)
|
||||||
|
|
||||||
|
collector.record_request(
|
||||||
|
method="PUT",
|
||||||
|
endpoint_type="object",
|
||||||
|
status_code=201,
|
||||||
|
latency_ms=100.0,
|
||||||
|
bytes_in=500,
|
||||||
|
bytes_out=0,
|
||||||
|
)
|
||||||
|
|
||||||
|
collector.record_request(
|
||||||
|
method="GET",
|
||||||
|
endpoint_type="object",
|
||||||
|
status_code=404,
|
||||||
|
latency_ms=25.0,
|
||||||
|
bytes_in=0,
|
||||||
|
bytes_out=0,
|
||||||
|
error_code="NoSuchKey",
|
||||||
|
)
|
||||||
|
|
||||||
|
stats = collector.get_current_stats()
|
||||||
|
|
||||||
|
assert stats["totals"]["count"] == 3
|
||||||
|
assert stats["totals"]["success_count"] == 2
|
||||||
|
assert stats["totals"]["error_count"] == 1
|
||||||
|
|
||||||
|
assert "GET" in stats["by_method"]
|
||||||
|
assert stats["by_method"]["GET"]["count"] == 2
|
||||||
|
assert "PUT" in stats["by_method"]
|
||||||
|
assert stats["by_method"]["PUT"]["count"] == 1
|
||||||
|
|
||||||
|
assert "bucket" in stats["by_endpoint"]
|
||||||
|
assert "object" in stats["by_endpoint"]
|
||||||
|
assert stats["by_endpoint"]["object"]["count"] == 2
|
||||||
|
|
||||||
|
assert stats["by_status_class"]["2xx"] == 2
|
||||||
|
assert stats["by_status_class"]["4xx"] == 1
|
||||||
|
|
||||||
|
assert stats["error_codes"]["NoSuchKey"] == 1
|
||||||
|
finally:
|
||||||
|
collector.shutdown()
|
||||||
|
|
||||||
|
def test_thread_safety(self, tmp_path: Path):
|
||||||
|
collector = OperationMetricsCollector(
|
||||||
|
storage_root=tmp_path,
|
||||||
|
interval_minutes=60,
|
||||||
|
retention_hours=24,
|
||||||
|
)
|
||||||
|
|
||||||
|
try:
|
||||||
|
num_threads = 5
|
||||||
|
requests_per_thread = 100
|
||||||
|
threads = []
|
||||||
|
|
||||||
|
def record_requests():
|
||||||
|
for _ in range(requests_per_thread):
|
||||||
|
collector.record_request(
|
||||||
|
method="GET",
|
||||||
|
endpoint_type="object",
|
||||||
|
status_code=200,
|
||||||
|
latency_ms=10.0,
|
||||||
|
)
|
||||||
|
|
||||||
|
for _ in range(num_threads):
|
||||||
|
t = threading.Thread(target=record_requests)
|
||||||
|
threads.append(t)
|
||||||
|
t.start()
|
||||||
|
|
||||||
|
for t in threads:
|
||||||
|
t.join()
|
||||||
|
|
||||||
|
stats = collector.get_current_stats()
|
||||||
|
assert stats["totals"]["count"] == num_threads * requests_per_thread
|
||||||
|
finally:
|
||||||
|
collector.shutdown()
|
||||||
|
|
||||||
|
def test_status_class_categorization(self, tmp_path: Path):
|
||||||
|
collector = OperationMetricsCollector(
|
||||||
|
storage_root=tmp_path,
|
||||||
|
interval_minutes=60,
|
||||||
|
retention_hours=24,
|
||||||
|
)
|
||||||
|
|
||||||
|
try:
|
||||||
|
collector.record_request("GET", "object", 200, 10.0)
|
||||||
|
collector.record_request("GET", "object", 204, 10.0)
|
||||||
|
collector.record_request("GET", "object", 301, 10.0)
|
||||||
|
collector.record_request("GET", "object", 304, 10.0)
|
||||||
|
collector.record_request("GET", "object", 400, 10.0)
|
||||||
|
collector.record_request("GET", "object", 403, 10.0)
|
||||||
|
collector.record_request("GET", "object", 404, 10.0)
|
||||||
|
collector.record_request("GET", "object", 500, 10.0)
|
||||||
|
collector.record_request("GET", "object", 503, 10.0)
|
||||||
|
|
||||||
|
stats = collector.get_current_stats()
|
||||||
|
assert stats["by_status_class"]["2xx"] == 2
|
||||||
|
assert stats["by_status_class"]["3xx"] == 2
|
||||||
|
assert stats["by_status_class"]["4xx"] == 3
|
||||||
|
assert stats["by_status_class"]["5xx"] == 2
|
||||||
|
finally:
|
||||||
|
collector.shutdown()
|
||||||
|
|
||||||
|
def test_error_code_tracking(self, tmp_path: Path):
|
||||||
|
collector = OperationMetricsCollector(
|
||||||
|
storage_root=tmp_path,
|
||||||
|
interval_minutes=60,
|
||||||
|
retention_hours=24,
|
||||||
|
)
|
||||||
|
|
||||||
|
try:
|
||||||
|
collector.record_request("GET", "object", 404, 10.0, error_code="NoSuchKey")
|
||||||
|
collector.record_request("GET", "object", 404, 10.0, error_code="NoSuchKey")
|
||||||
|
collector.record_request("GET", "bucket", 403, 10.0, error_code="AccessDenied")
|
||||||
|
collector.record_request("PUT", "object", 500, 10.0, error_code="InternalError")
|
||||||
|
|
||||||
|
stats = collector.get_current_stats()
|
||||||
|
assert stats["error_codes"]["NoSuchKey"] == 2
|
||||||
|
assert stats["error_codes"]["AccessDenied"] == 1
|
||||||
|
assert stats["error_codes"]["InternalError"] == 1
|
||||||
|
finally:
|
||||||
|
collector.shutdown()
|
||||||
|
|
||||||
|
def test_history_persistence(self, tmp_path: Path):
|
||||||
|
collector = OperationMetricsCollector(
|
||||||
|
storage_root=tmp_path,
|
||||||
|
interval_minutes=60,
|
||||||
|
retention_hours=24,
|
||||||
|
)
|
||||||
|
|
||||||
|
try:
|
||||||
|
collector.record_request("GET", "object", 200, 10.0)
|
||||||
|
collector._take_snapshot()
|
||||||
|
|
||||||
|
history = collector.get_history()
|
||||||
|
assert len(history) == 1
|
||||||
|
assert history[0]["totals"]["count"] == 1
|
||||||
|
|
||||||
|
config_path = tmp_path / ".myfsio.sys" / "config" / "operation_metrics.json"
|
||||||
|
assert config_path.exists()
|
||||||
|
finally:
|
||||||
|
collector.shutdown()
|
||||||
|
|
||||||
|
def test_get_history_with_hours_filter(self, tmp_path: Path):
|
||||||
|
collector = OperationMetricsCollector(
|
||||||
|
storage_root=tmp_path,
|
||||||
|
interval_minutes=60,
|
||||||
|
retention_hours=24,
|
||||||
|
)
|
||||||
|
|
||||||
|
try:
|
||||||
|
collector.record_request("GET", "object", 200, 10.0)
|
||||||
|
collector._take_snapshot()
|
||||||
|
|
||||||
|
history_all = collector.get_history()
|
||||||
|
history_recent = collector.get_history(hours=1)
|
||||||
|
|
||||||
|
assert len(history_all) >= len(history_recent)
|
||||||
|
finally:
|
||||||
|
collector.shutdown()
|
||||||
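The collector tests call record_request() directly; in the running app the collector presumably hangs off the request lifecycle. A hedged sketch of one way to wire it into Flask hooks; the hook-based design, the g attribute name and install_operation_metrics itself are assumptions for illustration, only record_request() and classify_endpoint() are taken from this diff.

```python
import time

from flask import Flask, g, request

from app.operation_metrics import classify_endpoint

def install_operation_metrics(app: Flask, collector) -> None:
    # Assumed wiring: time each request and feed the collector once the response is built.
    @app.before_request
    def _start_timer():
        g._op_started = time.perf_counter()

    @app.after_request
    def _record(response):
        started = getattr(g, "_op_started", time.perf_counter())
        latency_ms = (time.perf_counter() - started) * 1000.0
        collector.record_request(
            method=request.method,
            endpoint_type=classify_endpoint(request.path),
            status_code=response.status_code,
            latency_ms=latency_ms,
            bytes_in=request.content_length or 0,
            bytes_out=response.content_length or 0,
        )
        return response
```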
@@ -28,6 +28,7 @@ def _make_app(tmp_path: Path):
    flask_app = create_app(
        {
            "TESTING": True,
+            "SECRET_KEY": "testing",
            "WTF_CSRF_ENABLED": False,
            "STORAGE_ROOT": storage_root,
            "IAM_CONFIG": iam_config,
@@ -184,5 +185,5 @@ class TestPaginatedObjectListing:
        assert resp.status_code == 200

        html = resp.data.decode("utf-8")
-        # Should have the JavaScript loading infrastructure
+        # Should have the JavaScript loading infrastructure (external JS file)
-        assert "loadObjects" in html or "objectsApiUrl" in html
+        assert "bucket-detail-main.js" in html