cdn.dl.k8s.io doesn't seem to be cutting bandwidth to the backing GCS bucket #5726
Concretely: I think we need to improve our Fastly configuration to ensure binaries are actually cached. There was a Slack thread suggesting the default cacheable object size is <20MB, but that would exclude most or all of the Kubernetes core binaries.
For context: #5603 merged July 24th. This may be working better than we thought, but we should still confirm that we have suitable caching parameters, etc.
I think we should replace this with new issues about revisiting the config; the bandwidth is definitely down. @xmudrii mentioned a problem with slow downloads, but I don't think we have a tracking issue for that yet.
I created #5755 to track the problem with slow downloads. |
I created #5757 to track increasing the cache TTL, which might further reduce bandwidth to the origin bucket.
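For what it's worth, raising the TTL on the Fastly side is typically a one-line VCL change. A minimal sketch, assuming custom VCL is in use; the 30-day value and the `vcl_fetch` placement are illustrative, not the project's actual config:

```vcl
sub vcl_fetch {
  #FASTLY fetch
  # Published release artifacts are effectively immutable, so a long
  # TTL is safe; 30d is an illustrative value, not a decision.
  if (beresp.status == 200) {
    set beresp.ttl = 30d;
  }
  return(deliver);
}
```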
I suspect the objects are too large for the current cache configuration? I'm not an expert with Fastly.
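If the ~20MB default mentioned above is the limit in play, Fastly's segmented caching is the usual workaround: it fetches and stores large objects in chunks so they become cacheable. A minimal sketch, again assuming custom VCL; enabling it unconditionally for every request is illustrative, not a recommendation:

```vcl
sub vcl_recv {
  #FASTLY recv
  # Without this, objects over the ~20MB limit bypass the cache
  # entirely; segmented caching splits large release binaries into
  # cacheable chunks.
  set req.enable_segmented_caching = true;
  return(lookup);
}
```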
cc @dims @ameukam
We don't have great visibility on the bucket end of things, because I don't know anyone who currently has e.g. audit log access, but I can provide bandwidth graphs (there's a lot of noise), and we have graphs on the Fastly side.
Best I can tell from the Fastly UI and the GCS bandwidth graphs, we're basically operating in pull-through mode without caching.
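One quick way to sanity-check this from the outside (assuming the service exposes Fastly's standard debug headers) is to request the same object twice and watch `X-Cache` and `Age`:

```console
$ curl -sI https://cdn.dl.k8s.io/release/stable.txt | grep -iE '^(x-cache|age):'
```

Repeated `X-Cache: MISS` with `Age: 0` on back-to-back requests would be consistent with the pull-through behavior described above.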
We'll need to fix this before we can rotate to a kubernetes.io GCS bucket (or S3 on AWS or ...).