
k8s.gcr.io VDF (Vanity Domain Flip): Move official container images to K8s Infra #270

Closed
11 of 12 tasks
jbeda opened this issue Mar 5, 2017 · 28 comments
Labels: area/release-eng, lifecycle/frozen, priority/critical-urgent, sig/release


@jbeda
Contributor

jbeda commented Mar 5, 2017

Google Infra --> K8s Infra tasks (ordered by priority):

  • Ensure that k8s-artifacts-prod GCR is a superset of google-containers (aka backfilling; this has already been done and is easy to check before VDF happens; see the sketch after this list)
  • Fix auditor's ability to handle child images in fat manifests (issue)
  • Optimize backup mechanism (issue)
  • Add CLI command to make editing the promoter manifests easier (issue)
  • Auditor alerts --> <somewhere>
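
For illustration, the superset check in the first task could be approximated with a short script along these lines. This is a rough sketch only: it compares tags rather than digests, the us.gcr.io/k8s-artifacts-prod path is an assumption about the prod registry, and it relies on the JSON output of `gcloud container images list` / `list-tags`.

```python
#!/usr/bin/env python3
# Rough superset check: every tag published under gcr.io/google-containers
# should also exist in the promoted prod registry before the flip.
# Sketch only: compares tags (not digests) and assumes the JSON fields
# returned by `gcloud container images list` / `list-tags`.
import json
import subprocess

SRC = "gcr.io/google-containers"      # legacy Google-owned registry
DST = "us.gcr.io/k8s-artifacts-prod"  # assumed prod registry path

def gcloud_json(*args):
    out = subprocess.check_output(["gcloud", *args, "--format=json"])
    return json.loads(out)

def tags(registry, image):
    try:
        entries = gcloud_json("container", "images", "list-tags",
                              f"{registry}/{image}")
    except subprocess.CalledProcessError:
        return set()  # image not present in this registry at all
    return {tag for entry in entries for tag in entry.get("tags", [])}

src_images = sorted(entry["name"].split("/", 2)[-1]
                    for entry in gcloud_json("container", "images", "list",
                                             f"--repository={SRC}"))

for image in src_images:
    missing = tags(SRC, image) - tags(DST, image)
    if missing:
        print(f"not backfilled: {image} is missing {sorted(missing)}")
```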

Release Engineering


Original note from @jbeda:

We should not be using a Google-only repo for official k8s builds.

@ixdy
Member

ixdy commented Mar 5, 2017

Yep, this is something that @timstclair and I are looking into.

@timstclair

We're expecting to have a proposal out in the next week or two that will address this.

@zmerlynn
Member

zmerlynn commented Mar 6, 2017

Please include me. We have internal infrastructure that handles the replication of google-containers to non-US regions.

@luxas
Member

luxas commented Mar 16, 2017

Please include me as well. I've been thinking about this for some time too, but haven't had the time to focus fully on trying to make a change here (not that I have the permissions to do so either, but 😄)

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with a /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 22, 2017
@david-mcmahon
Contributor

cc @javier-b-perez

@luxas
Member

luxas commented Dec 22, 2017

Progress: kubernetes/kubernetes#54174

/remove-lifecycle stale

cc @thockin @ixdy

What are the next steps here?
Should we start gradually moving images over to a specific Kubernetes suborg, and/or start updating references in all the other repos as well?

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 22, 2017
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 22, 2018
@stp-ip
Member

stp-ip commented Mar 22, 2018

/remove-lifecycle stale
/lifecycle frozen

@k8s-ci-robot k8s-ci-robot added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 22, 2018
marpaia pushed a commit to marpaia/release that referenced this issue Feb 21, 2019
Add features archives for past releases (1.3 - 1.11)
@listx

listx commented Apr 6, 2019

I really hope this will happen in the coming months. Check out this issue for high-level updates, and also the updates to the manifest.yaml files here, as this is the first batch of staging registries that will use the new non-Googler promotion process.

Or, please attend the wg-k8s-infra meetings, where updates on this work are currently on the agenda.
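
For context, the sketch below shows roughly what reading one of those promoter manifest.yaml files could look like. It is illustrative only: it assumes the registries/images/dmap (digest-to-tags) layout used by the image promoter, and field names may differ between promoter versions.

```python
#!/usr/bin/env python3
# Illustrative reader for a promoter manifest.yaml: prints the image:tag
# pairs that would be promoted from the staging registry to prod.
# Sketch only: assumes a registries/images/dmap layout; real manifests
# may carry additional fields.
import sys

import yaml  # pip install pyyaml

with open(sys.argv[1]) as f:
    manifest = yaml.safe_load(f)

# The source registry is marked src: true; the rest are destinations.
src = next(r["name"] for r in manifest["registries"] if r.get("src"))
dests = [r["name"] for r in manifest["registries"] if not r.get("src")]

for image in manifest.get("images", []):
    for digest, tags in image.get("dmap", {}).items():
        for tag in tags:
            for dest in dests:
                print(f"{src}/{image['name']}:{tag} ({digest}) "
                      f"-> {dest}/{image['name']}:{tag}")
```

Run it as `python3 read_manifest.py path/to/manifest.yaml` to list the image:tag pairs the promoter would copy from staging to prod.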

@justaugustus
Member

/assign @tpepper @justaugustus @timothysc @dims
/milestone v1.15
/priority critical-urgent
/area release-eng

@k8s-ci-robot k8s-ci-robot added the priority/critical-urgent Highest priority. Must be actively worked on as someone's top priority right now. label May 1, 2019
@k8s-ci-robot k8s-ci-robot added this to the v1.15 milestone May 1, 2019
@k8s-ci-robot k8s-ci-robot added the area/release-eng Issues or PRs related to the Release Engineering subproject label May 1, 2019
@dims dims removed their assignment Jul 8, 2019
@justaugustus justaugustus removed this from the v1.15 milestone Dec 4, 2019
@justaugustus
Member

@listx -- Would you mind providing an update on your side as we're nearing the date?

@listx

listx commented Mar 31, 2020

I'm manually performing the backups for now so that the automated jobs don't time out and cause a quota-exceeded error. The backup optimization implementation has hit a snag with Workload Identity (kubernetes/k8s.io#677), but that should be sorted out by EOD tomorrow.

The auditor has had fixes to child manifest detection, and these were rolled out earlier today.

There is an internal Google change that will start rolling out on Wednesday (April 1) morning for the flip itself. As k8s-artifacts-prod has been a superset of google-containers for a while now, the transition should be seamless. (However, I will perform another backfill tomorrow if necessary.)

The backup work is in a bit of a pinch, but I don't think it will take longer than tomorrow to sort out (in time for the flip on Wednesday morning).

@justaugustus
Member

@listx -- Thanks for the update! I think we're fine for day 1 as long as the backfill is set (which it is) and we've got a go-forward plan for the backups (which we do).

On my end, I've successfully tested the Releng changes here. I'm going to do a few spot-checks just in case, but we should be all good to go! :)
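
A spot-check along those lines could look roughly like the sketch below. It assumes the `crane` CLI (from go-containerregistry) is installed; the us.gcr.io/k8s-artifacts-prod backend path and the sample tags are illustrative, not a statement of how the Releng checks were actually run.

```python
#!/usr/bin/env python3
# Spot-check that the vanity domain serves the same image content as the
# prod registry by comparing manifest digests. Sketch only: assumes the
# `crane` CLI (go-containerregistry) is on PATH; the backend path and the
# sample tags below are illustrative.
import subprocess

VANITY = "k8s.gcr.io"
PROD = "us.gcr.io/k8s-artifacts-prod"               # assumed backend path
SAMPLES = ["pause:3.2", "kube-apiserver:v1.18.0"]   # illustrative tags

def digest(ref):
    return subprocess.check_output(["crane", "digest", ref],
                                   text=True).strip()

for image in SAMPLES:
    vanity_digest = digest(f"{VANITY}/{image}")
    prod_digest = digest(f"{PROD}/{image}")
    status = "OK" if vanity_digest == prod_digest else "MISMATCH"
    print(f"{status}  {image}  {vanity_digest}")
```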

@listx

listx commented Apr 2, 2020

The flip is in progress. Charts and updates have been posted here: https://groups.google.com/d/msg/kubernetes-sig-release/ew-k9PEBckQ/mSa7KGeUCAAJ

@listx

listx commented Apr 20, 2020

Forgot to update this thread, but TL;DR: the flip has been rolled back due to an unforeseen issue. Details here: https://groups.google.com/d/msg/kubernetes-sig-release/ew-k9PEBckQ/SZC2KeFYAwAJ

We will attempt the flip again in the coming weeks; I will update this thread again once we decide on the date.

@justaugustus
Member

Last I chatted w/ @listx, the ETA is in a few weeks.
They're still working on some things on the Google side.

@thockin
Member

thockin commented May 14, 2020 via email

@listx

listx commented Aug 26, 2020

I vote to close this as VDF was completed on July 24, 2020.
