
Integration: Expose rest.Config #37

Closed

Conversation

hoegaarden
Contributor

For now, only the Host field is populated. Going forward we can then add all-the-things clients would need to know for connecting to our ControlPlane/APIServer.

closes #23
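
For illustration, a minimal sketch of the shape this PR describes -- assuming a ControlPlane type that knows the URL of its running APIServer (the names below are illustrative, not necessarily this repo's actual API):

```go
package integration

import (
	"net/url"

	"k8s.io/client-go/rest"
)

// ControlPlane stands in for the framework's control-plane wrapper.
type ControlPlane struct {
	apiURL *url.URL // URL of the running APIServer
}

// RESTConfig exposes connection details as a rest.Config.
// For now, only the Host field is populated.
func (cp *ControlPlane) RESTConfig() *rest.Config {
	return &rest.Config{Host: cp.apiURL.String()}
}
```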

@hoegaarden hoegaarden requested a review from a team February 2, 2018 16:11
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: hoegaarden

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these OWNERS Files:

You can indicate your approval by writing /approve in a comment
You can cancel your approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added approved Indicates a PR has been approved by an approver from all required OWNERS files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. labels Feb 2, 2018
@marun
Contributor

marun commented Feb 2, 2018

How will a vendoring repo ensure compatibility between the version of apimachinery they are vendoring and the version required by this repo? Will it be necessary to maintain release branches of this repo that track release branches of apimachinery?

@marun
Contributor

marun commented Feb 2, 2018

/assign

@k8s-ci-robot
Contributor

@marun: GitHub didn't allow me to assign the following users: marun.

Note that only kubernetes-sig-testing members and repo collaborators can be assigned.

In response to this:

/assign

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@hoegaarden
Contributor Author

How will a vendoring repo ensure compatibility between the version of apimachinery they are vendoring and the version required by this repo?
Will it be necessary to maintain release branches of this repo that track release branches of apimachinery?

I think those questions should be answered by CI. We should test this framework against multiple versions of client-go and, by doing so, build up some sort of compatibility matrix -- maybe something like the one client-go itself has.

If possible, I'd like to avoid release branches. But if that's the way forward we will figure it out.

Alternatively, we could avoid the dependency on client-go's rest package and expose our configuration in a different way. Right now, the only thing we really need to expose is the URL. But in the future I expect there will be more configuration that needs to be handled (TLS, users, ...), and then rest.Config seems a natural way to do so. We could also expose our own struct (similar to rest.Config) and have users convert it to a rest.Config themselves if they need that. We could also look into returning a kubeconfig file (or its contents).
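
A rough sketch of that "own struct" alternative, with hypothetical names: the framework exposes a plain struct with no client-go dependency, and users who want a rest.Config write the conversion themselves against whichever client-go version they vendor.

```go
package example

import "k8s.io/client-go/rest"

// FrameworkConfig mimics a client-go-free struct the framework could
// expose instead of rest.Config.
type FrameworkConfig struct {
	Host string // e.g. "https://127.0.0.1:6443"
	// future fields: TLS material, users, ...
}

// toRESTConfig is the conversion a user would write on their own side.
func toRESTConfig(c FrameworkConfig) *rest.Config {
	return &rest.Config{Host: c.Host}
}
```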

@hoegaarden
Contributor Author

d5bfdd1 introduces tests against different versions of k8s.io/client-go (which brings in rest.Config).

@marun
Contributor

marun commented Feb 5, 2018

My question about 'ensuring compatibility' wasn't related to test compatibility so much as vendoring compatibility. For a repo like k8s.io/api that has release branches, any repo that tracks those branches in its own release branches (e.g. service catalog) will require that other dependencies for a given branch be compatible. Attempting to ensure compatibility via testing would be risky for even a single dependency (e.g. client-go), let alone multiple staging repos.

As you say, another option is to duplicate code where necessary to minimize vendoring of the staging repos and any others that use release branches. I think the cost of this is likely to be higher than release branches, though, since it will impose both a code maintenance burden and a cognitive burden on developers when discrepancies inevitably arise between the code in this repo and the code it is duplicating from one or more of the staging repos.

@hoegaarden
Contributor Author

My question about 'ensuring compatibility' wasn't related to test compatibility so much as vendoring compatibility. For a repo like k8s.io/api that has release branches, any repo that tracks those branches in its own release branches (e.g. service catalog) will require that other dependencies for a given branch be compatible. That would be risky even for a single dependency (e.g. client-go), let alone multiple staging repos.

I am not sure if I understand your concerns correctly.

Right now the CI tests prove that our framework is compatible with all (tested) versions of client-go. That means that -- as long as this stays true -- anyone who has already vendored any of those versions of client-go can also vendor our framework, right?

As soon as our code is not compatible with a specific version of client-go anymore, the CI will tell us. And then we can think about introducing release branches.

I feel I am missing your point though ... ?

@marun
Contributor

marun commented Feb 5, 2018

This PR adds dependencies on k8s.io/api and k8s.io/apimachinery, both of which have release branches. Where are the tests that ensure compatibility across the branches for those repos?

... and print log versions of some interesting packages.
@hoegaarden
Contributor Author

This PR adds dependencies on k8s.io/api and k8s.io/apimachinery, both of which have release branches. Where are the tests that ensure compatibility across the branches for those repos?

I thought the tests ran against specific versions of api and apimachinery, because I assumed godep would restore the correct versions of those. It seems that is not the case.

Maybe let me explain what I was trying to do:

  • Remove k8s.io/{client-go,api,apimachinery} from the ./vendor directory
  • get client-go in a specific version into the $GOPATH
  • use godep restore ./... to get the dependencies of client-go in their correct versions

Now there is a problem with that:
For older versions of client-go (v4.0.0 and lower) godep does not restore the correct versions of k8s.io/api and k8s.io/apimachinery:

  • k8s.io/api is always at 163903a761b32a51f7cb963cf8c64290e854a3a9, 2018-02-05T21:14:43-08:00
  • k8s.io/apimachinery is at caa3b27b0fdaa84adc020cd7663afd62777ec9bd, 2018-02-05T18:22:21-08:00 (for v2.0.0 of client-go; other versions may be correct)

How can I find out which versions of api and apimachinery should really be used for, say, client-go v2.0.0? I thought I could blindly trust godep ...


Versions restored by godep:

Setting client-go version to v2.0.0
found in /home/travis/gopath/src/k8s.io/client-go:
e121606b0d09b2e1c467183ee46217fa85a6b672, 2017-02-13T09:46:50-08:00,  (HEAD, tag: v2.0.0)
found in /home/travis/gopath/src/k8s.io/api:
163903a761b32a51f7cb963cf8c64290e854a3a9, 2018-02-05T21:14:43-08:00,  (HEAD -> master, origin/master, origin/HEAD)
found in /home/travis/gopath/src/k8s.io/apimachinery:
caa3b27b0fdaa84adc020cd7663afd62777ec9bd, 2018-02-05T18:22:21-08:00,  (HEAD -> master, origin/master, origin/HEAD)

Setting client-go version to v3.0.0
found in /home/travis/gopath/src/k8s.io/client-go:
21300e3e11c918b8e6a70fb7293b310683d6c046, 2017-07-05T23:56:09+00:00,  (HEAD, tag: v3.0.0)
found in /home/travis/gopath/src/k8s.io/api:
163903a761b32a51f7cb963cf8c64290e854a3a9, 2018-02-05T21:14:43-08:00,  (HEAD -> master, origin/master, origin/HEAD)
found in /home/travis/gopath/src/k8s.io/apimachinery:
85ace5365f33b16fc735c866a12e3c765b9700f2, 2017-06-13T20:37:36+00:00,  (HEAD)

Setting client-go version to v4.0.0
found in /home/travis/gopath/src/k8s.io/client-go:
d92e8497f71b7b4e0494e5bd204b48d34bd6f254, 2017-07-28T13:53:02+00:00,  (HEAD, tag: v4.0.0)
found in /home/travis/gopath/src/k8s.io/api:
163903a761b32a51f7cb963cf8c64290e854a3a9, 2018-02-05T21:14:43-08:00,  (HEAD -> master, origin/master, origin/HEAD)
found in /home/travis/gopath/src/k8s.io/apimachinery:
1fd2e63a9a370677308a42f24fd40c86438afddf, 2017-07-28T13:45:14+00:00,  (HEAD)

Setting client-go version to v5.0.0
found in /home/travis/gopath/src/k8s.io/client-go:
35874c597fed17ca62cd197e516d7d5ff9a2958c, 2017-09-28T15:13:57-07:00,  (HEAD, tag: v5.0.0, tag: kubernetes-1.8.0)
found in /home/travis/gopath/src/k8s.io/api:
fe29995db37613b9c5b2a647544cf627bfa8d299, 2017-09-21T16:17:36-07:00,  (HEAD, tag: kubernetes-1.8.1-beta.0, tag: kubernetes-1.8.0-rc.1, tag: kubernetes-1.8.0)
found in /home/travis/gopath/src/k8s.io/apimachinery:
019ae5ada31de202164b118aee88ee2d14075c31, 2017-09-25T16:41:55-07:00,  (HEAD, tag: kubernetes-1.8.3-beta.0, tag: kubernetes-1.8.2-beta.0, tag: kubernetes-1.8.2, tag: kubernetes-1.8.1-beta.0, tag: kubernetes-1.8.1, tag: kubernetes-1.8.0)

Setting client-go version to v5.0.1
found in /home/travis/gopath/src/k8s.io/client-go:
2ae454230481a7cb5544325e12ad7658ecccd19b, 2017-10-11T16:15:37-07:00,  (HEAD, tag: v5.0.1, tag: kubernetes-1.8.1)
found in /home/travis/gopath/src/k8s.io/api:
6c6dac0277229b9e9578c5ca3f74a4345d35cdc2, 2017-10-05T14:45:14-07:00,  (HEAD, tag: kubernetes-1.8.2-beta.0, tag: kubernetes-1.8.1)
found in /home/travis/gopath/src/k8s.io/apimachinery:
019ae5ada31de202164b118aee88ee2d14075c31, 2017-09-25T16:41:55-07:00,  (HEAD, tag: kubernetes-1.8.3-beta.0, tag: kubernetes-1.8.2-beta.0, tag: kubernetes-1.8.2, tag: kubernetes-1.8.1-beta.0, tag: kubernetes-1.8.1, tag: kubernetes-1.8.0)

Setting client-go version to v6.0.0
found in /home/travis/gopath/src/k8s.io/client-go:
78700dec6369ba22221b72770783300f143df150, 2017-12-06T10:53:11-08:00,  (HEAD, tag: v6.0.0, tag: kubernetes-1.9.1-beta.0, tag: kubernetes-1.9.0-beta.2, tag: kubernetes-1.9.0)
found in /home/travis/gopath/src/k8s.io/api:
11147472b7c934c474a2c484af3c0c5210b7a3af, 2017-12-01T15:45:52-08:00,  (HEAD, tag: kubernetes-1.9.0-beta.2)
found in /home/travis/gopath/src/k8s.io/apimachinery:
180eddb345a5be3a157cea1c624700ad5bd27b8f, 2017-11-27T09:38:05-08:00,  (HEAD, tag: kubernetes-1.9.1-beta.0, tag: kubernetes-1.9.0-beta.2, tag: kubernetes-1.9.0-beta.1, tag: kubernetes-1.9.0)

@marun
Contributor

marun commented Feb 6, 2018

I'm not sure why the version isn't being set correctly.

I'm still not clear on how a repo that is using release branches of the staging repos is intended to vendor this repo. Ok, compatibility is being tested. Is it possible to configure dep, when run against a repo that is vendoring this one, to ignore the staging-repo dependencies defined in this repo's Gopkg.toml (e.g. via overrides in the vendoring repo's Gopkg.toml)?
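
For reference, dep's overrides are designed for exactly this: [[override]] stanzas in the root project's Gopkg.toml supersede constraints declared by dependencies. A hypothetical example in the vendoring repo, pinning the staging repos to the branches it already tracks:

```toml
# Hypothetical Gopkg.toml overrides in the repo that vendors this one.
[[override]]
  name = "k8s.io/apimachinery"
  branch = "release-1.9"

[[override]]
  name = "k8s.io/client-go"
  version = "v6.0.0"
```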

@sttts

sttts commented Feb 6, 2018

I'm not sure why the version isn't being set correctly.

There was no k8s.io/api in 4.0.0, and no apimachinery in 2.0.0.

@hoegaarden
Contributor Author

I'm not sure why the version isn't being set correctly.

There was no k8s.io/api in 4.0.0, and no apimachinery in 2.0.0.

Interesting. godep still seems to pull those in.

@sttts

sttts commented Feb 6, 2018

Interesting. godep still seems to pull those in.

Probably because you reference them in your code. That's the kind of problem you will see when targeting multiple client-go versions in parallel. It asks for trouble in vendoring and compilation.

@hoegaarden
Contributor Author

Probably because you reference them in your code.

We only reference rest.Config directly / knowingly.

Anyway, I am looking into what introducing release branches would look like. But I am currently in a big fight with dep on that.

@hoegaarden
Contributor Author

Ok, we did some experiments on how to potentially vendor this repo (dep-managed) into k/k (godep-managed). What we found is:

  1. Any version constraints we have set up in the dep world will of course be ignored when vendoring with godep. Thus this package needs to be compatible with "all" versions of its dependencies.
  2. Generally, vendoring a dep-managed package into a godep-managed one is possible -- given you are OK with 1.
  3. Updating the vendor tree with godep update ... seems to be broken, so it is probably easier to remove all the vendoring stuff and start from scratch with godep save ...
  4. If we want to stick with dep, release branches won't help, as any version constraints configured via dep will be ignored by godep anyway.

For 1. we can have tests in CI, and we can document which versions of client-go and others we support / test against.

Regarding 3. we can also document that. But the problem actually arises in the package that vendors this one -- and therefore the documentation about this issue may well be overlooked.

So I see two ways how to address that:

  • Keep using dep and test against different versions of client-go in CI. k/k and others should be able to vendor this package, subject to the list of issues above. We do not need release branches, at least not right now.
  • Switch to godep instead of dep, and also introduce release branches tracking the versions of client-go.

Did I miss anything? Any input? Any preference?

@marun @sttts @totherme @pwittrock

@sttts

sttts commented Feb 12, 2018

@hoegaarden I am not sure why dep+godep is discussed. This is not the problem. There is a general rule:

We cannot vendor any packages depending on client-go and friends into Kubernetes.

This has nothing to do with the vendoring tool used. It has to do with the conflict between the client-go version in k/k's staging/ and a vendored-in test framework that must be compatible with the published k8s.io/client-go. If we change client-go in k/k, we might have to change the test framework outside of k/k. But the test framework outside of k/k cannot be changed, because then it would be incompatible with the published client-go version at k8s.io/client-go. There is no clean process to solve this.

Hence, repeating what @marun just wrote, "using client-go in the test-framework" + "using the test-framework inside k/k" unavoidably means that the test-framework must live inside k/k as a staging/ repo.

Btw, better not to make assumptions about the vendoring tool. We might switch to dep at some point (kubernetes/kubernetes#57795).

@hoegaarden
Contributor Author

hoegaarden commented Feb 12, 2018

Thanks. I think I am starting to understand all the problems with vendoring a staging repo now. Sorry about the confusion and thanks for the explanations and help.

So for now, that means this cannot be done, and I will probably close this PR and the related issue #23. It should be left to users to create a rest.Config if they need one. I think eventually we will want something like rest.Config, but we need to see what this might look like and how it can be achieved without a dependency on client-go.
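
A minimal sketch of that user-side step, assuming the framework only hands out the APIServer URL: the test builds its own rest.Config and client against its own vendored client-go.

```go
package example

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// newClient builds a clientset from nothing but the URL the framework
// exposes; the client-go dependency stays entirely on the user's side.
func newClient(apiServerURL string) (*kubernetes.Clientset, error) {
	cfg := &rest.Config{Host: apiServerURL}
	return kubernetes.NewForConfig(cfg)
}
```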

@sttts

sttts commented Feb 12, 2018

I see these options:

  • Accept staging: create a staging directory and update https://github.com/kubernetes/publishing-bot/blob/master/configs/kubernetes-rules-configmap.yaml. Your repo will be published 4 times a day.
  • Put your framework and all the tests, self-contained, into a Docker container and run that from k/k's tests. Alternatively, git-clone the test framework from the test script. Note though that you still cannot import any other k/k code; it must be completely self-contained.
  • Don't use client-go and friends from the test framework at all. This can mean:
    • to duplicate the functionality with your own code

    • to come up with an injection mechanism that is implemented once in k/k (using client-go from k/k) and once in an out-of-k/k repository against the published k8s.io/client-go (sketched below).

      In other words: have k8s.io/test-framework and k8s.io/test-framework-runner and a k8s.io/kubernetes/test/integration/runner.

      The first can be vendored because it does not have client-go imports. The second has all the client-go imports for standalone mode. The third is the runner implementation inside kube.
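
A rough sketch of that injection split, with hypothetical names: the framework package declares only an interface, so it has no client-go imports and stays vendorable into k/k; each runner then implements the interface against its own client-go.

```go
package framework

// Client abstracts the operations the framework needs from a Kubernetes
// client, so this package itself never imports client-go.
type Client interface {
	CreateNamespace(name string) error
	DeleteNamespace(name string) error
}

// SmokeTest exercises a control plane through whichever Client
// implementation the runner injects (the in-k/k one or the standalone one).
func SmokeTest(c Client) error {
	if err := c.CreateNamespace("framework-smoke"); err != nil {
		return err
	}
	return c.DeleteNamespace("framework-smoke")
}
```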

@hoegaarden
Contributor Author

Closing this PR because this implementation could not be vendored into k/k.
Thanks all for the feedback!
