
fix(volumes): increase volume count and decrease size to ensure available volumes for s3 buckets #15

Merged
andrewazores merged 3 commits into cryostatio:main from volume-tuning on Apr 16, 2024

Conversation

andrewazores
Member

@andrewazores andrewazores commented Apr 16, 2024

Welcome to Cryostat3! 👋

Before contributing, make sure you have:

  • Read the contributing guidelines
  • Linked a relevant issue which this PR resolves
  • Linked any other relevant issues, PRs, or documentation, if any
  • Resolved all conflicts, if any
  • Rebased your branch PR on top of the latest upstream main branch
  • Attached at least one of the following labels to the PR: [chore, ci, docs, feat, fix, test]
  • Signed all commits using a GPG signature

To recreate commits with a GPG signature: git fetch upstream && git rebase --force --gpg-sign upstream/main


Fixes: cryostatio/cryostat#370
Related to #14

Description of the change:

After further review of the SeaweedFS docs, it is expected that each S3 bucket will have multiple volumes allocated to it, even before those volumes are filled. Once the volumes fill, each bucket will claim more of the free volumes. In practice with the Cryostat 3 smoketest, I find that 28 volumes are needed for the archivedreports, archivedrecordings, and eventtemplates buckets after some light exercising: uploading a custom template, creating some active recordings, creating multiple copies of archived recordings, and generating reports for each of those recordings. The number of claimed volumes reaches 28 as soon as each bucket has a single file of content, and it seems to stay at 28 (presumably until the volumes become full). Therefore, 40 seems like a reasonable number of volumes to allocate.

The maximum size of each volume defaults to a relatively small 256MB here (SeaweedFS's default is 30GB), but this number should be tuned to suit the disk space actually allocated to storage. SeaweedFS's default strategy is to divide the available disk space by the volume size and assign that many volumes, but this does not work well with the S3 interface, which effectively imposes a minimum number of volumes. Instead, we should assign a minimum number of volumes, divide the desired total storage capacity by that number of volumes, and use the result as the maximum volume size. The tuning knobs to do so are therefore exposed as environment variables, so that the Operator (and Helm chart?) can do this math based on the PVC supplied to the storage container.
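The capacity-to-volume-size math described above can be sketched in a few lines of POSIX shell. The variable names here are illustrative, not necessarily the environment variable names the PR actually defines:

```shell
#!/bin/sh
# Hypothetical variable names for illustration; the PR exposes its own
# environment variables for these tuning knobs.
VOLUME_COUNT="${VOLUME_COUNT:-40}"               # minimum volume count claimed via the S3 interface
CAPACITY_BYTES="${CAPACITY_BYTES:-10737418240}"  # desired total storage capacity: 10GB

# Divide the desired total capacity evenly across the volumes to get the
# per-volume size limit handed to SeaweedFS (in MB).
VOLUME_SIZE_MB=$(( CAPACITY_BYTES / VOLUME_COUNT / 1024 / 1024 ))
echo "max volume size: ${VOLUME_SIZE_MB}MB"
```

With the defaults shown, 10GB split across 40 volumes yields a 256MB per-volume limit, matching the defaults discussed in this PR.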

The default selection of 40 volumes at 256MB per volume implies a total maximum storage capacity of 10GB. The container will happily run with less disk space available than that; it will just complain about running out of disk space if it fills up sooner. However, if the desired storage capacity is larger than 10GB (i.e. a larger PVC is being assigned), then it becomes important to tune the maximum volume size and increase it past 256MB. Otherwise, assigning a 50GB PVC will not help: the container will still only store and manage up to 10GB of content, leaving the rest of the disk unused and unavailable.

Question: should the "disk space / 40" calculation just be done in this entrypoint script instead of putting that work onto the Helm chart/Operator? Should we always assume that whatever disk space we see available to this container, should always be fully allocated to it? (this also has implications for cryostatio/cryostat-operator#727 , where a single PVC is assigned and shared between both the cryostat-storage container and a postgresql database).

@ebaron
Member

ebaron commented Apr 16, 2024

Question: should the "disk space / 40" calculation just be done in this entrypoint script instead of putting that work onto the Helm chart/Operator? Should we always assume that whatever disk space we see available to this container, should always be fully allocated to it? (this also has implications for cryostatio/cryostat-operator#727 , where a single PVC is assigned and shared between both the cryostat-storage container and a postgresql database).

What about an environment variable to specify the total size that should be used by the storage? Then the entrypoint can divide it up as it sees fit.

@andrewazores
Member Author

andrewazores commented Apr 16, 2024

There aren't a lot of tools available in the container for the entrypoint to use, so it isn't super robust, but the latest commit adds that and it seems to work reasonably well. You can specify the size as e.g. 10GB, 4G, or 2000M. 1000MiB doesn't work because it only knows how to handle units like K, M, G with an optional B suffix. You can also just pass a raw number of bytes, like 10737418240 for 10GB, and it will handle that too.
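The parsing behaviour described above (K/M/G units with an optional B suffix, or a raw byte count) could be sketched like this in plain POSIX shell. This is not the PR's actual implementation; the function name is illustrative:

```shell
#!/bin/sh
# Sketch of size parsing as described: accepts a raw byte count, or
# K/M/G with an optional trailing B (e.g. 10GB, 4G, 2000M).
# Illustrative only, not the entrypoint's actual code.
to_bytes() {
    num=$(echo "$1" | sed 's/[KMGkmg][Bb]\{0,1\}$//')     # strip the unit suffix
    unit=$(echo "$1" | sed 's/^[0-9]*//' | tr '[:lower:]' '[:upper:]')
    case "$unit" in
        K|KB) echo $(( num * 1024 )) ;;
        M|MB) echo $(( num * 1024 * 1024 )) ;;
        G|GB) echo $(( num * 1024 * 1024 * 1024 )) ;;
        "")   echo "$num" ;;                               # raw byte count
        *)    echo "unsupported unit: $unit" >&2; return 1 ;;
    esac
}

to_bytes 4G             # 4294967296
to_bytes 2000M          # 2097152000
to_bytes 10737418240    # passed through unchanged
```

Note that, as in the PR, a suffix like MiB is rejected rather than interpreted, since only the short unit letters are recognized.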

@andrewazores
Member Author

And now it defaults to detecting the free space available on disk at startup and dividing all of that across the 40 volumes. The 40-volume count can still be overridden (only to increase it), and the storage capacity can also be specified explicitly, either to limit storage to some smaller portion of what is available, or to set a larger limit because more space is expected to be freed or added later.
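The default behaviour described here, measuring free space at startup and spreading it across the volume count, can be sketched with standard `df`. The data directory and variable names are stand-ins, not necessarily what the entrypoint actually uses:

```shell
#!/bin/sh
# Sketch of the startup default: detect free disk space and divide it
# across the volumes. DATA_DIR uses the current directory here as a
# stand-in for the container's storage mount.
DATA_DIR="${DATA_DIR:-.}"
VOLUME_COUNT="${VOLUME_COUNT:-40}"

# POSIX `df -P` reports available space in 1024-byte blocks in the
# fourth column of the second output line.
avail_kb=$(df -P "$DATA_DIR" | awk 'NR==2 { print $4 }')
volume_size_mb=$(( avail_kb / VOLUME_COUNT / 1024 ))
echo "volumeSizeLimitMB=$volume_size_mb"
```

An explicitly specified capacity would simply replace the `df` measurement in this calculation, which covers both the "limit to less than available" and "allow more than currently available" cases mentioned above.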

Member

@ebaron ebaron left a comment


Looks good!

@andrewazores andrewazores merged commit 03a6445 into cryostatio:main Apr 16, 2024
1 check passed
@andrewazores andrewazores deleted the volume-tuning branch April 16, 2024 21:25

Successfully merging this pull request may close these issues.

[Bug] No volumes available for archivedreports