Introduce upload source and rukpakctl run #417
Conversation
Hmm... I wonder if the inclusion of the […] EDIT: Yeah, it sounds like upstream has a similar problem: https://github.com/kubernetes/kubernetes/blob/b7337cb171ab126cb892aacdab2816017a290841/pkg/controller/deployment/util/deployment_util.go#L598-L599
(force-pushed from 411d98e to 25c1069)
About the client tool: can it create the bundle instead of the bundle instance embedding a bundle? That way it doesn't need to wait for the bundle name to be determined. We can have another command to create the bundle instance that specifies the bundle.
(force-pushed from 216edec to b79f77a)
To actually run the bundle, we have to create a bundle instance that embeds the binary-type bundle. So at least in that case, I don't think we have any choice but to: […]

@joelanford Thanks! I didn't realize that the bundle instance has to embed the bundle. I thought it could still reference a bundle.
(force-pushed from b79f77a to 478e1d8)
@joelanford: PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
(force-pushed from 26d77f9 to 2ab3a02)
(title changed from "binary source" to "binary source and rukpakctl run")
(force-pushed from c2a9fbd to fcc02a5)
(force-pushed from 204883b to 9c9d013)
Is there anything I'm missing about what we'd need to check? The bundle […] And the upload manager HTTP server forbids re-uploads of already unpacked binary bundles, so I think that covers the immutability from that perspective.
Will do. They're all under the internal tree, so I somewhat glossed over that, but I agree nonetheless: docs > no docs, regardless of what/where.

This is my bad: I forgot the core webhook just ensures Bundle spec immutability, and we moved the union type validation logic into oneOf CRD schema validations. I was talking about the latter when making that comment.
(force-pushed from 11554ed to 69c0ae9)
(title changed from "binary source and rukpakctl run" to "upload source and rukpakctl run")
(force-pushed from dd26000 to 1c113c0)
LGTM. Nice job!
```go
cmd := &cobra.Command{
	Use: "run <bundleDeploymentName> <bundleDir>",
```
WDYT about making the `bundleDeploymentName` optional, and just using the name of the bundle directory if a BD name is not provided? It makes the UX even easier.
The main reason I chose that is to ensure that the pivot workflow works regardless of whether you use the same directory name. If you had `./my-bundle-v0.1.0` and `./my-bundle-v0.2.0`, it would be really easy to create 2 different BDs when what you really want is to re-use the same BD and pivot it from v0.1.0 to v0.2.0.
```go
// This function unsets user and group information in the tar archive so that readers
// of archives produced by this function do not need to account for differences in
```
This should help alleviate some of the issues we see when attempting to prune Red Hat catalog images that include root-permission directories. Maybe this is worth highlighting in the user-facing documentation, if it's not overly technical?
Yep, +1 on that recommendation.
It might be a little bit technical given the existing way most users would interact with this (`rukpakctl run`). That subcommand does all the necessary tar.gz stuff under the hood, and it is not exposed at all to users.

Regardless, I could definitely see value in documenting the upload manager service as a separate thing, because there may be in-cluster processes that can directly interact with the upload manager without requiring the extra port-forward stuff that `rukpakctl run` provides.
This is really nice work. Lots of complexity here, and I didn't play around with it locally so YMMV, but the packages that were introduced were pretty clean. Overall, I didn't have any blocking feedback, and I'm excited to see this finally get close to the finish line. I tried to avoid making a mass amount of comments, so I stopped midway through in hopes we can merge this before closing out this current release milestone.
Do we need any follow-up tickets around benchmarking this service? Does it make sense to call out in the user-facing documentation (if we haven't already) and/or the CRD schema documentation that this service can be used to handle arbitrarily sized bundles to avoid the etcd size limitation?
cmd/rukpakctl/cmd/run.go
Outdated
```go
dynCl, err := dynamic.NewForConfig(cfg)
if err != nil {
	return "", fmt.Errorf("build dynamic client: %v", err)
}
```
Not blocking: create the dynamic client at the driver level and pass it as a parameter vs. the rest config?
```go
// NOTE: AddMetricsExtraHandler isn't actually metrics-specific. We can run
// whatever handlers we want on the existing webserver that
// controller-runtime runs when MetricsBindAddress is configured on the
// manager.
```
👍
internal/rukpakctl/ca.go
Outdated
```go
// GetClusterCA returns an x509.CertPool by reading the contents of a Kubernetes Secret. It uses the provided
// client to get the requested secret and then loads the contents of the secret's "ca.crt" key into the cert pool.
func GetClusterCA(ctx context.Context, cl client.Reader, ns, secretName string) (*x509.CertPool, error) {
```
Would it be easier to pass a `NamespacedName` instead of individual ns/secretName string fields?
```go
podName := endpoints.Subsets[0].Addresses[0].TargetRef.Name
port := pf.port.IntVal
if port == 0 {
	for _, p := range endpoints.Subsets[0].Ports {
		if p.Name == pf.port.StrVal {
			port = p.Port
			break
		}
	}
}
if port == 0 {
	return fmt.Errorf("could not find port %q for service %q", pf.port.String(), pf.serviceName)
}

path := fmt.Sprintf("/api/v1/namespaces/%s/pods/%s/portforward", pf.serviceNamespace, podName)
host := strings.TrimLeft(pf.cfg.Host, "htps:/")
serverURL := url.URL{Scheme: "https", Path: path, Host: host}
```
I think we could modularize this logic, but this is fine for now.
```go
// NewDefaultUnpacker returns a new composite Source that unpacks bundles using
// a default source mapping with built-in implementations of all of the supported
// source types.
//
// TODO: refactor NewDefaultUnpacker due to growing parameter list
func NewDefaultUnpacker(mgr ctrl.Manager, namespace, provisionerName, unpackImage string) (Unpacker, error) {
```
👍
internal/source/unpacker.go
Outdated
```go
rukpakv1alpha1.SourceTypeUpload: &Upload{
	baseDownloadURL: baseUploadManagerURL,
	bearerToken:     mgr.GetConfig().BearerToken,
	client:          http.Client{Timeout: 10 * time.Second, Transport: httpTransport},
```
Any value in a constant variable for the timeout period?
Sure, why not!
test/e2e/plain_provisioner_test.go
Outdated
```go
if bundle.Status.Phase != rukpakv1alpha1.PhaseUnpacked {
	return errors.New("bundle is not unpacked")
}
return nil
```
Handle this through gomega matchers?
One additional thought I had: do we ensure that the kube client proxy to the service tears down gracefully after ending the upload? I have seen issues with […]
If I understand your question, I think the answer is yes. We start a goroutine in the error group whose sole purpose is to wait for the context to be canceled and then close the stop channel. Then at the very end of that function we call […]
(force-pushed from 01f7e6f to 8f0ada7)
I didn't call this out specifically in the "upload" source struct GoDoc because the "image" and "git" sources also support arbitrarily sized bundles. Maybe we should instead call out the […]
Yeah, that sounds reasonable to me. That can be done as a follow-up anyways from my perspective.
Signed-off-by: Joe Lanford <joe.lanford@gmail.com>
(force-pushed from 8f0ada7 to 060d165)
This PR introduces an `upload` source and a `rukpakctl run` command that makes use of the new `upload` source.

Closes #472

The `local` source is a great way to help us get up and running quickly with a mechanism that enables direct use of bundle contents in a Kubernetes cluster without requiring those bundles to be pushed to a git remote or an image registry. However, the `local` source's use of configmaps could only get us so far. Configmap size is restricted by etcd configuration (typically 2MB), so the `local` source could only support bundles up to 2MB in size. Some improvements for the `local` source are discussed in #464.

The `upload` source does not place this restriction on bundle size because it does not rely on etcd for storage of local bundle content. An upload source enables bundle developers or other clients to directly upload bundle tar.gz files to inject them into bundles.
This introduces an upload manager service that handles bundle content for bundles that use the `upload` source type.

This also introduces the client tooling necessary to actually exercise this functionality in a user-friendly way. It:

- adds `rukpakctl run` to create a BD with an upload source bundle, get the name of the generated bundle, and then upload the bundle contents to the upload manager, which then makes that bundle content available to the bundle provisioners.
- adds a `BundleUploader` type that is essentially a small API for actually doing the upload. This is used in `rukpakctl run` and also in the e2e tests.