Refactor snapshot for format k8s #450
Conversation
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: wzshiming

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
✅ Deploy Preview for k8s-kwok canceled.
Force-pushed from ffbd11c to c1cd7da.
Force-pushed from 1b05c31 to e90b7ba.
Thanks @wzshiming. I will review tomorrow.
Force-pushed from 0eccc73 to e3d6251.
It's a good start to implement `snapshot save --format=k8s` and `snapshot export`. Some perf optimizations and customizations can be implemented iteratively.
err = result.Visit(func(info *resource.Info, err error) error {

gvrs := make([]schema.GroupVersionResource, 0, len(resources))
for _, resource := range resources {
	mapping, err := mappingFor(restMapper, resource)
This line hits `the server has received too many requests and has asked us to try again later` every time I test it locally against a 160-node cluster.
I suppose the code is borrowed from cli-runtime, which seems to have a perf issue when iterating over mappings for a series of resources. We may want to construct the GVRs locally instead.
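For illustration only, here is a minimal sketch of the "construct the GVRs locally" idea, assuming a hard-coded table of known resources; the `known` map, the resource names, and the fallback behavior are made up for this sketch and are not kwok's actual code:

```go
// Sketch: build GroupVersionResources from a local table instead of resolving
// every resource string through the RESTMapper, which can trigger repeated
// discovery requests and client-side throttling.
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime/schema"
)

func main() {
	// Hypothetical local mapping from short resource names to GVRs.
	known := map[string]schema.GroupVersionResource{
		"pods":        {Group: "", Version: "v1", Resource: "pods"},
		"nodes":       {Group: "", Version: "v1", Resource: "nodes"},
		"deployments": {Group: "apps", Version: "v1", Resource: "deployments"},
	}

	resources := []string{"pods", "nodes", "deployments", "widgets"}

	gvrs := make([]schema.GroupVersionResource, 0, len(resources))
	for _, r := range resources {
		gvr, ok := known[r]
		if !ok {
			// Unknown resources could fall back to the RESTMapper or be
			// reported as an error; here we just note them.
			fmt.Printf("unknown resource %q, would fall back to the RESTMapper\n", r)
			continue
		}
		gvrs = append(gvrs, gvr)
	}

	fmt.Println(gvrs)
}
```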
Another topic: the log does not reveal where this error message comes from, which makes debugging a bit inconvenient.
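As a small, self-contained illustration of that point (not the project's actual code), wrapping the error with the resource being mapped makes the origin of the throttling message visible in the log; `lookupMapping` and `errTooManyRequests` below are stand-ins:

```go
package main

import (
	"errors"
	"fmt"
)

// Stand-in for the client-side throttling error seen above.
var errTooManyRequests = errors.New("the server has received too many requests and has asked us to try again later")

// lookupMapping is a placeholder for the real mapping call.
func lookupMapping(resource string) error {
	return errTooManyRequests
}

func main() {
	resource := "deployments"
	if err := lookupMapping(resource); err != nil {
		// %w keeps the original error while recording which call site and
		// resource produced it, so the log points at the failing step.
		wrapped := fmt.Errorf("mapping resource %q: %w", resource, err)
		fmt.Println(wrapped)
		fmt.Println(errors.Is(wrapped, errTooManyRequests)) // true
	}
}
```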
type flagpole struct {
	Path       string
	Kubeconfig string
	Filters    []string
In the future, we may seek a fine-grained interface via componentconfig (a rough sketch follows this list), so it can:
- adapt to lower-version Kubernetes clusters, which may employ different GVKs (e.g., v1beta1 RuntimeClass vs. v1)
- allow fine-grained control over field stripping (e.g., whether or not to strip a pod's nodeName, particular annotations, security contexts, controllerRef, finalizers, etc.)
- ......
Force-pushed from a3de6c9 to 7a75e60.
Force-pushed from c4512b4 to 36221da.
Force-pushed from 7f67af2 to 3c42b6b.
Force-pushed from 3c42b6b to 2099e55.
/lgtm Thanks @wzshiming!
Created #456 to track follow-ups.
What type of PR is this?
/kind cleanup
What this PR does / why we need it:
Which issue(s) this PR fixes:
Fixes #32
Refactor for #381 (comment)
Special notes for your reviewer:
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: