
cdn.dl.k8s.io doesn't seem to be cutting bandwidth to the backing GCS bucket #5726

Closed
BenTheElder opened this issue Aug 16, 2023 · 5 comments

Comments

@BenTheElder
Member

I suspect the objects are too large for the current cache configuration? I'm not an expert with Fastly.

cc @dims @ameukam

We don't have great visibility on the bucket end of things, just because I don't know anyone who currently has e.g. audit log access, but I can provide bandwidth graphs (there's a lot of noise), and we have graphs on the Fastly side.

As best I can tell from the Fastly UI and the GCS bandwidth graphs, we're basically operating in pull-through mode without caching.

We'll need to fix this before we can rotate to a kubernetes.io GCS bucket (or S3 on AWS or ...).
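One way to spot-check this from the outside would be something like the sketch below: issue two HEAD requests for the same object and compare Fastly's `X-Cache` / `Age` response headers. This isn't from the thread; the URL is just an illustrative object on cdn.dl.k8s.io, and whether this deployment exposes those headers is an assumption.

```python
# Hedged sketch: probe the same object twice and look at cache-related headers.
# URL and header names are assumptions, not taken from the Fastly config itself.
import urllib.request

URL = "https://cdn.dl.k8s.io/release/stable.txt"  # small, frequently fetched example object

def fetch_headers(url: str):
    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req) as resp:
        return resp.headers

for attempt in (1, 2):
    headers = fetch_headers(URL)
    print(f"attempt {attempt}: X-Cache={headers.get('X-Cache')} Age={headers.get('Age')}")
# If the second attempt still reports a MISS (or Age stays at 0), the CDN is
# behaving like a pull-through proxy rather than serving from cache.
```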

@BenTheElder
Member Author

Concretely: I think we need to improve our Fastly configuration to ensure binaries are actually cached. There was a Slack thread suggesting the default cacheable object size is under 20MB, but that would exclude most or all of the Kubernetes core binaries ...
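For a rough sense of which artifacts would fall outside that limit, a sketch like this could compare `Content-Length` against the threshold. The release version, binary names, URL layout, and the 20MB figure itself are assumptions taken from the discussion above, not verified Fastly limits.

```python
# Hedged sketch: flag release binaries larger than the ~20MB default mentioned above.
import urllib.request

THRESHOLD_BYTES = 20 * 1024 * 1024            # assumed default object-size limit
VERSION = "v1.28.0"                           # illustrative release
BINARIES = ["kubectl", "kubelet", "kube-apiserver"]

for name in BINARIES:
    url = f"https://cdn.dl.k8s.io/release/{VERSION}/bin/linux/amd64/{name}"
    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req) as resp:
        size = int(resp.headers.get("Content-Length", "0"))
    verdict = "over" if size > THRESHOLD_BYTES else "under"
    print(f"{name}: {size / (1024 * 1024):.0f} MiB ({verdict} the assumed 20 MB limit)")
```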

@BenTheElder
Member Author

For context: #5603 merged July 24th.

This may be working better than we thought, but we should still confirm that we have suitable caching parameters, etc.

Requests: [screenshot, 2023-08-23 11:59 AM]

Sent bytes: [screenshot, 2023-08-23 11:59 AM]

@BenTheElder
Member Author

BenTheElder commented Aug 24, 2023

I think we should replace this with new issues about revisiting the config; the bandwidth is definitely down.

@xmudrii mentioned a problem with slow downloads, but I don't think we have a tracking issue yet.

@xmudrii
Member

xmudrii commented Aug 24, 2023

I created #5755 to track the problem with slow downloads.

@xmudrii
Member

xmudrii commented Aug 25, 2023

I created #5757 to track increasing the cache TTL; this might further reduce bandwidth to the origin bucket.
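For reference, one way to see what TTL the CDN currently applies is to inspect the standard cache-related response headers on a cached object. The header names are standard HTTP/Fastly ones, but whether this service sets them (and the example URL) is an assumption.

```python
# Hedged sketch: print cache-related headers to estimate the effective TTL.
import urllib.request

URL = "https://cdn.dl.k8s.io/release/stable.txt"  # illustrative object
req = urllib.request.Request(URL, method="HEAD")
with urllib.request.urlopen(req) as resp:
    for name in ("Cache-Control", "Surrogate-Control", "Age", "X-Cache"):
        print(f"{name}: {resp.headers.get(name)}")
# A small max-age (or an Age that keeps resetting) means objects expire quickly
# and re-fetches go back to the origin bucket, which a longer TTL would avoid.
```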
