S3 Targets, e.g. Cloudflare R2 for Staging, S3 for AWS Deep Archive, Backblaze for lukewarm, Wasabi for lukewarm, etc. #7
Comments
What is the reason for waiting for lifecycle rules? These destinations are quite cheap as they are, no?
The biggest concern there for me is that it'll cost $15 to host 1 TB of data on R2. That blows my budget by quite a lot. I want to make sure Cloudflare has some safeguards that a guide can walk users through setting up, in case someone forgets to delete their staging area.
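For reference, a rough sketch of the arithmetic behind that $15 figure, assuming a storage rate of about $0.015 per GB-month (the rate is an assumption inferred from the comment above, not stated in the thread; check Cloudflare's current pricing):

```javascript
// Rough R2 storage cost estimate.
// Assumed rate: $0.015 per GB-month (not confirmed in this thread).
const pricePerGbMonth = 0.015; // USD, assumption
const stagedGb = 1024;         // 1 TiB left sitting in a staging bucket
const monthlyCost = stagedGb * pricePerGbMonth;
console.log(monthlyCost.toFixed(2)); // ≈ 15.36
```

Which is why a forgotten staging area keeps costing that much every month until something deletes it.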
I fleshed out the issue description a lot, @mderazon.
fwiw, this is what I was trying to do with Workers:
Hmm, that's such a weird usage of some APIs. You pass in a body which is just a
You're doing a lot more orchestration in the worker than I did in my approach as well. In the prototype GTR Azure Transload from Cloudflare Workers, where the worker itself does the transloading, a lot of the orchestration happens in the extension, where it isn't bound by the silly 10ms CPU limit. The worker (or the many worker instances) is really just given two fetches, with the response body from one stuck into the other; no fat libraries doing things like part sizing and queuing are used, so the worker stays very dumb.
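The "two fetches, one body" idea above can be sketched as a minimal transload function: a `fetch` response's `body` is a `ReadableStream`, and it can be handed straight to another `fetch` as the request body so bytes pass through without buffering. This is a sketch, not the project's actual code — the URLs are placeholders, real use would need auth headers on both sides, and the `fetchImpl` parameter exists only so the sketch can be exercised without a network:

```javascript
// Sketch: pipe a download directly into an upload without buffering in memory.
// sourceUrl / destUrl / fetchImpl are hypothetical, not from the thread.
async function transload(sourceUrl, destUrl, fetchImpl = fetch) {
  const src = await fetchImpl(sourceUrl);
  if (!src.ok) throw new Error(`GET failed: ${src.status}`);
  // src.body is a ReadableStream; pass it straight through as the PUT body.
  return fetchImpl(destUrl, {
    method: "PUT",
    body: src.body,
    // Some runtimes require this flag when streaming a request body.
    duplex: "half",
  });
}
```

The worker does no chunk bookkeeping at all; it just connects one stream to the other.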
On that note about fat libraries, if I do try to tackle this, I'll probably be using https://github.com/mhart/aws4fetch and maybe just the raw stuff in there. |
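For what it's worth, aws4fetch's documented surface is small: an `AwsClient` that SigV4-signs plain `fetch` calls. A sketch of what a signed PUT might look like with it (credentials and the endpoint here are placeholders, and this is untested against the project):

```javascript
// Sketch assuming aws4fetch's documented AwsClient API; values are placeholders.
import { AwsClient } from "aws4fetch";

const aws = new AwsClient({
  accessKeyId: "AKIA...",        // placeholder
  secretAccessKey: "...",        // placeholder
});

// Signs the request with SigV4 and sends it like a normal fetch.
const res = await aws.fetch("https://example-bucket.s3.amazonaws.com/key", {
  method: "PUT",
  body: someReadableStream,
});
```

No multipart helpers, no queues — which fits the "keep the worker dumb" approach above.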
I don't think the size of the library makes any difference; it could be a single line in the library that does some CPU work and that would be it. There's also this: I will try the lib you mentioned in my code to see if it makes a difference.
Just noting this down here: https://developers.cloudflare.com/workers/platform/limits/#simultaneous-open-connections There is a limit of 6 simultaneous open connections. Theoretically, that means I can do 3/10 the speed of the current Azure transloading from one worker call.
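Given that limit, any scheme that fans out part uploads inside a single worker invocation would have to cap its in-flight requests at 6. A generic sketch of such a cap (the runner itself is hypothetical; only the limit of 6 comes from the doc above):

```javascript
// Run async task functions with at most `limit` in flight at once.
// Sketch only; `runWithLimit` is a hypothetical helper, not project code.
async function runWithLimit(tasks, limit = 6) {
  const results = new Array(tasks.length);
  let next = 0;
  async function workerLoop() {
    while (next < tasks.length) {
      const i = next++; // claim the next task index
      results[i] = await tasks[i]();
    }
  }
  const workers = Array.from(
    { length: Math.min(limit, tasks.length) },
    workerLoop
  );
  await Promise.all(workers);
  return results;
}
```

Each "worker" here is just a loop pulling from a shared index, so at most six requests are ever open at once.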
https://developers.cloudflare.com/r2/buckets/object-lifecycles/ — lifecycle rules have been added.
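Since R2 exposes an S3-compatible API, a staging-cleanup rule like the one discussed earlier could presumably be expressed in the standard S3 lifecycle-configuration format. A sketch (the `staging/` prefix and the 7-day window are made-up example values, and this exact payload has not been tested against R2):

```xml
<LifecycleConfiguration>
  <Rule>
    <ID>expire-staging</ID>
    <Filter><Prefix>staging/</Prefix></Filter>
    <Status>Enabled</Status>
    <Expiration><Days>7</Days></Expiration>
  </Rule>
</LifecycleConfiguration>
```

That would act as the safeguard for a forgotten staging area: objects under the prefix get deleted automatically after the window expires.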
I'm keeping an eye on this project and wanted to ask: now that lifecycle rules have been added, is the last missing piece for sending it to any S3-compatible storage the remote-fetch feature that Azure Storage has?
The last missing piece is acceptable performance. The 100 MB POST limit inside Workers was extremely annoying. Is it still there? It cuts throughput to a top speed of 3/10 of Azure's and causes the request count to spike to the point where it smashes into the free account limit's ceiling. I haven't touched this issue in some time; I might resurrect it now that I've got a new 8 TB drive to back up to.
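To make that request-count spike concrete: with request bodies capped at 100 MB, staging a large backup multiplies into many uploads. A quick sketch of the arithmetic (the 1 TiB figure is just an example, not from the thread):

```javascript
// How many <=100 MiB uploads does a payload of a given size need?
// The per-request cap is taken from the comment above.
const PART_LIMIT = 100 * 1024 * 1024; // 100 MiB per request
const partsNeeded = (bytes) => Math.ceil(bytes / PART_LIMIT);

console.log(partsNeeded(1024 ** 4)); // parts for 1 TiB → 10486
```

Tens of thousands of requests per terabyte is what collides with a free account's request ceiling.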
General Issues to tackle:

- `await` on a fetch that was PUT-ing the response body object to Azure: it worked, but this was beyond 10ms. I suspect an S3 implementation would suffer the same. Anyway, my memory is really fuzzy on this; this may not be true.

Targets: