
UPSTREAM: 70647: Always run untag when removing docker image #22500

Closed
wants to merge 1 commit into from

Conversation

@soukron (Author) commented Apr 8, 2019

K8S PR: kubernetes/kubernetes#70647
Backport of the commit applied to /pull/22154 in release-3.11

xref https://bugzilla.redhat.com/show_bug.cgi?id=1691333
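
For readers without the kubelet code handy, the gist of the upstream change: when removing an image, untag each of its repo tags and digests rather than relying on removal by image ID alone, since removal by ID fails for an image referenced in multiple repositories. Below is a minimal standalone sketch of that idea against the Docker Go client; it is not the dockershim implementation, and the function name and error handling are illustrative.

package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

// removeImage untags every named reference (repo tag or digest) before falling
// back to removal by image ID. Illustrative only; this is not the dockershim code.
func removeImage(ctx context.Context, cli *client.Client, imageRef string) error {
	inspect, _, err := cli.ImageInspectWithRaw(ctx, imageRef)
	if err != nil {
		return fmt.Errorf("inspect %s: %v", imageRef, err)
	}

	// Removing an image by ID fails when it is referenced from multiple
	// repositories, so untag every named reference first.
	refs := append(append([]string{}, inspect.RepoTags...), inspect.RepoDigests...)
	for _, ref := range refs {
		if _, err := cli.ImageRemove(ctx, ref, types.ImageRemoveOptions{PruneChildren: true}); err != nil {
			return fmt.Errorf("untag %s: %v", ref, err)
		}
	}

	// Only an image with no named references still needs removal by ID.
	if len(refs) == 0 {
		if _, err := cli.ImageRemove(ctx, inspect.ID, types.ImageRemoveOptions{PruneChildren: true}); err != nil {
			return fmt.Errorf("remove %s: %v", inspect.ID, err)
		}
	}
	return nil
}

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv)
	if err != nil {
		panic(err)
	}
	if err := removeImage(context.Background(), cli, "docker.io/library/busybox:latest"); err != nil {
		fmt.Println("remove failed:", err)
	}
}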

@openshift-ci-robot added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Apr 8, 2019
@soukron (Author) commented Apr 8, 2019

Can anyone confirm whether the 2 failed tests are due to this commit? They look unrelated to me: they seem to involve scheduling a pod with a volume, and none of the modified files appears in the traces.

@sferich888 (Contributor)

/retest

@sjenning (Contributor)

most recent unit failure is

=== RUN   TestServerRunWithSNI
coverage: 24.7% of statements
panic: test timed out after 2m0s

goroutine 174 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:1240 +0x146
created by time.goFunc
	/usr/local/go/src/time/sleep.go:172 +0x52

goroutine 1 [chan receive, 1 minutes]:
testing.(*T).Run(0xc42011a0f0, 0x1dceb9e, 0x14, 0x1e56808, 0xc42022fc01)
	/usr/local/go/src/testing/testing.go:825 +0x597
testing.runTests.func1(0xc42011a0f0)
	/usr/local/go/src/testing/testing.go:1063 +0xa5
testing.tRunner(0xc42011a0f0, 0xc42022fd48)
	/usr/local/go/src/testing/testing.go:777 +0x16e
testing.runTests(0xc42000c520, 0x2a661e0, 0x6, 0x6, 0xc420536400)
	/usr/local/go/src/testing/testing.go:1061 +0x4e2
testing.(*M).Run(0xc420536400, 0x0)
	/usr/local/go/src/testing/testing.go:978 +0x2ce
main.main()
	_testmain.go:122 +0x325

goroutine 20 [chan receive]:
github.com/openshift/origin/vendor/github.com/golang/glog.(*loggingT).flushDaemon(0x2a75600)
	/go/src/github.com/openshift/origin/vendor/github.com/golang/glog/glog.go:879 +0xac
created by github.com/openshift/origin/vendor/github.com/golang/glog.init.0
	/go/src/github.com/openshift/origin/vendor/github.com/golang/glog/glog.go:410 +0x231

goroutine 68 [syscall, 2 minutes]:
os/signal.signal_recv(0x488ff1)
	/usr/local/go/src/runtime/sigqueue.go:139 +0xa6
os/signal.loop()
	/usr/local/go/src/os/signal/signal_unix.go:22 +0x30
created by os/signal.init.0
	/usr/local/go/src/os/signal/signal_unix.go:28 +0x4f

goroutine 16 [runnable]:
math/big.nat.montgomery(0xc420464140, 0x10, 0x26, 0xc4204b6140, 0x10, 0x14, 0xc4204b6960, 0x10, 0x14, 0xc42034ac80, ...)
	/usr/local/go/src/math/big/nat.go:218 +0x262
math/big.nat.expNNMontgomery(0xc4204b6140, 0x10, 0x14, 0xc4204b60a0, 0x10, 0x14, 0xc4204f7f40, 0x10, 0x14, 0xc42034ac80, ...)
	/usr/local/go/src/math/big/nat.go:1163 +0x7ea
math/big.nat.expNN(0x0, 0x0, 0x0, 0xc4204b60a0, 0x10, 0x14, 0xc4204f7f40, 0x10, 0x14, 0xc42034ac80, ...)
	/usr/local/go/src/math/big/nat.go:973 +0xb38
math/big.nat.probablyPrimeMillerRabin(0xc42034ac80, 0x10, 0x14, 0x15, 0x15fdb2c2bb413901, 0x5)
	/usr/local/go/src/math/big/prime.go:106 +0x5f2
math/big.(*Int).ProbablyPrime(0xc4204fda60, 0x14, 0xc4206abc60)
	/usr/local/go/src/math/big/prime.go:78 +0x3d1
crypto/rand.Prime(0x1f78a60, 0xc42013b020, 0x400, 0xc42023a840, 0x2, 0x2)
	/usr/local/go/src/crypto/rand/util.go:99 +0x23a
crypto/rsa.GenerateMultiPrimeKey(0x1f78a60, 0xc42013b020, 0x2, 0x800, 0xc420282580, 0x0, 0x0)
	/usr/local/go/src/crypto/rsa/rsa.go:258 +0x1ab
crypto/rsa.GenerateKey(0x1f78a60, 0xc42013b020, 0x800, 0xc420282580, 0x0, 0x0)
	/usr/local/go/src/crypto/rsa/rsa.go:200 +0x5a
github.com/openshift/origin/vendor/k8s.io/client-go/util/cert.GenerateSelfSignedCertKey(0x1dd4282, 0x19, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x38, ...)
	/go/src/github.com/openshift/origin/vendor/k8s.io/client-go/util/cert/cert.go:169 +0x49a
github.com/openshift/origin/vendor/k8s.io/apiserver/pkg/server/options.(*SecureServingOptionsWithLoopback).ApplyTo(0xc420242108, 0xc420266000, 0xc42013aba0, 0xb)
	/go/src/github.com/openshift/origin/vendor/k8s.io/apiserver/pkg/server/options/serving_with_loopback.go:55 +0x204
github.com/openshift/origin/vendor/k8s.io/apiserver/pkg/server/options.TestServerRunWithSNI.func2(0xc420268180, 0xc42011a2d0, 0xc420246d20, 0x22, 0xc420246d80, 0x21, 0xc420385940, 0x1, 0x1, 0xc4206ad8b0, ...)
	/go/src/github.com/openshift/origin/vendor/k8s.io/apiserver/pkg/server/options/serving_test.go:495 +0x5f0
github.com/openshift/origin/vendor/k8s.io/apiserver/pkg/server/options.TestServerRunWithSNI(0xc42011a2d0)
	/go/src/github.com/openshift/origin/vendor/k8s.io/apiserver/pkg/server/options/serving_test.go:580 +0x2a71
testing.tRunner(0xc42011a2d0, 0x1e56808)
	/usr/local/go/src/testing/testing.go:777 +0x16e
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:824 +0x565

goroutine 103 [select]:
github.com/openshift/origin/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x1e56770, 0x3b9aca00, 0x0, 0x1, 0xc4202681e0)
	/go/src/github.com/openshift/origin/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:145 +0x189
github.com/openshift/origin/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0x1e56770, 0x3b9aca00, 0xc4202681e0)
	/go/src/github.com/openshift/origin/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x5b
github.com/openshift/origin/vendor/k8s.io/apimachinery/pkg/util/wait.Forever(0x1e56770, 0x3b9aca00)
	/go/src/github.com/openshift/origin/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:79 +0x5f
github.com/openshift/origin/vendor/k8s.io/apiserver/pkg/server/filters.startRecordingUsage.func1()
	/go/src/github.com/openshift/origin/vendor/k8s.io/apiserver/pkg/server/filters/maxinflight.go:81 +0x44
created by github.com/openshift/origin/vendor/k8s.io/apiserver/pkg/server/filters.startRecordingUsage
	/go/src/github.com/openshift/origin/vendor/k8s.io/apiserver/pkg/server/filters/maxinflight.go:80 +0x43
FAIL	github.com/openshift/origin/vendor/k8s.io/apiserver/pkg/server/options	120.422s

time before that was

=== RUN   TestSchedulerWithVolumeBinding
==================
WARNING: DATA RACE
Read at 0x0000022bf6c0 by goroutine 81:
  github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/scheduler/core.podFitsOnNode()
      /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/scheduler/core/generic_scheduler.go:471 +0x4fa
  github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/scheduler/core.findNodesThatFit.func1()
      /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/scheduler/core/generic_scheduler.go:324 +0x286
  github.com/openshift/origin/vendor/k8s.io/client-go/util/workqueue.Parallelize.func1()
      /go/src/github.com/openshift/origin/vendor/k8s.io/client-go/util/workqueue/parallelizer.go:47 +0xa3

Previous write at 0x0000022bf6c0 by goroutine 91:
  github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/scheduler.TestSchedulerWithVolumeBinding()
      /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/scheduler/algorithm/predicates/predicates.go:164 +0xb1
  testing.tRunner()
      /usr/local/go/src/testing/testing.go:777 +0x16d

Goroutine 81 (running) created at:
  github.com/openshift/origin/vendor/k8s.io/client-go/util/workqueue.Parallelize()
      /go/src/github.com/openshift/origin/vendor/k8s.io/client-go/util/workqueue/parallelizer.go:43 +0x139
  github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/scheduler/core.findNodesThatFit()
      /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/scheduler/core/generic_scheduler.go:348 +0xd31
  github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/scheduler/core.(*genericScheduler).Schedule()
      /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/scheduler/core/generic_scheduler.go:136 +0x47b
  github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/scheduler.(*Scheduler).schedule()
      /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/scheduler/scheduler.go:189 +0xe5
  github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/scheduler.(*Scheduler).scheduleOne()
      /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/scheduler/scheduler.go:443 +0x5e0
  github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/scheduler.(*Scheduler).(github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/scheduler.scheduleOne)-fm()
      /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/scheduler/scheduler.go:179 +0x41
  github.com/openshift/origin/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1()
      /go/src/github.com/openshift/origin/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x61
  github.com/openshift/origin/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil()
      /go/src/github.com/openshift/origin/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134 +0xcd
  github.com/openshift/origin/vendor/k8s.io/apimachinery/pkg/util/wait.Until()
      /go/src/github.com/openshift/origin/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x5a

Goroutine 91 (running) created at:
  testing.(*T).Run()
      /usr/local/go/src/testing/testing.go:824 +0x564
  testing.runTests.func1()
      /usr/local/go/src/testing/testing.go:1063 +0xa4
  testing.tRunner()
      /usr/local/go/src/testing/testing.go:777 +0x16d
  testing.runTests()
      /usr/local/go/src/testing/testing.go:1061 +0x4e1
  testing.(*M).Run()
      /usr/local/go/src/testing/testing.go:978 +0x2cd
  main.main()
      _testmain.go:100 +0x324
==================
==================
WARNING: DATA RACE
Read at 0x00c4203bc380 by goroutine 81:
  github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/scheduler/core.podFitsOnNode()
      /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/scheduler/core/generic_scheduler.go:471 +0x172
  github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/scheduler/core.findNodesThatFit.func1()
      /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/scheduler/core/generic_scheduler.go:324 +0x286
  github.com/openshift/origin/vendor/k8s.io/client-go/util/workqueue.Parallelize.func1()
      /go/src/github.com/openshift/origin/vendor/k8s.io/client-go/util/workqueue/parallelizer.go:47 +0xa3

Previous write at 0x00c4203bc380 by goroutine 91:
  github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/scheduler.TestSchedulerWithVolumeBinding()
      /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/scheduler/scheduler_test.go:662 +0x71
  testing.tRunner()
      /usr/local/go/src/testing/testing.go:777 +0x16d

Goroutine 81 (running) created at:
  github.com/openshift/origin/vendor/k8s.io/client-go/util/workqueue.Parallelize()
      /go/src/github.com/openshift/origin/vendor/k8s.io/client-go/util/workqueue/parallelizer.go:43 +0x139
  github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/scheduler/core.findNodesThatFit()
      /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/scheduler/core/generic_scheduler.go:348 +0xd31
  github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/scheduler/core.(*genericScheduler).Schedule()
      /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/scheduler/core/generic_scheduler.go:136 +0x47b
  github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/scheduler.(*Scheduler).schedule()
      /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/scheduler/scheduler.go:189 +0xe5
  github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/scheduler.(*Scheduler).scheduleOne()
      /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/scheduler/scheduler.go:443 +0x5e0
  github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/scheduler.(*Scheduler).(github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/scheduler.scheduleOne)-fm()
      /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/scheduler/scheduler.go:179 +0x41
  github.com/openshift/origin/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1()
      /go/src/github.com/openshift/origin/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x61
  github.com/openshift/origin/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil()
      /go/src/github.com/openshift/origin/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134 +0xcd
  github.com/openshift/origin/vendor/k8s.io/apimachinery/pkg/util/wait.Until()
      /go/src/github.com/openshift/origin/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x5a

Goroutine 91 (running) created at:
  testing.(*T).Run()
      /usr/local/go/src/testing/testing.go:824 +0x564
  testing.runTests.func1()
      /usr/local/go/src/testing/testing.go:1063 +0xa4
  testing.tRunner()
      /usr/local/go/src/testing/testing.go:777 +0x16d
  testing.runTests()
      /usr/local/go/src/testing/testing.go:1061 +0x4e1
  testing.(*M).Run()
      /usr/local/go/src/testing/testing.go:978 +0x2cd
  main.main()
      _testmain.go:100 +0x324
==================
--- FAIL: TestSchedulerWithVolumeBinding (18.13s)
	testing.go:730: race detected during execution of test
FAIL
coverage: 65.3% of statements
FAIL	github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/scheduler	29.657s

seems like two different flakes
/test unit

@sjenning (Contributor)

@droslean was going to rebuild that missing image. Let's see if it is there.
/test unit

@sjenning (Contributor)

nope

@droslean (Member)

/test unit

@soukron (Author) commented Apr 11, 2019

These still don't seem to be related to the PR itself... is there anything we can do to help?

@sferich888 (Contributor)

/retest

@sjenning (Contributor)

/lgtm
/approve

@openshift-ci-robot added the lgtm Indicates that a PR is ready to be merged. label Apr 11, 2019
@derekwaynecarr (Member)

/approve

@openshift-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Apr 11, 2019
@sjenning (Contributor)

gah, I thought Travis wasn't actually required...

@openshift-bot (Contributor)

/retest

Please review the full test history for this PR and help us cut down flakes.

12 similar comments

@openshift-bot (Contributor)

/retest

Please review the full test history for this PR and help us cut down flakes.

9 similar comments

@soukron (Author) commented Apr 15, 2019

@sjenning @derekwaynecarr is this happening in other PRs waiting to merge? The Travis error is related to hack/lib files.

@brenton (Contributor) commented Apr 15, 2019

The Travis CI error in question makes me wonder if this is somehow related to https://docs.travis-ci.com/user/customizing-the-build/#git-clone-depth

@brenton (Contributor) commented Apr 15, 2019

@soukron
You could try adding this to your .travis.yml to see if it prevents a shallow clone:

git:
  depth: false
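
(For context: Travis clones pull requests with a shallow depth of 50 by default, and depth: false simply makes it fetch the full history. Whether the failing hack/lib checks actually walk the git history is an assumption based on the discussion here, but a shallow clone is the usual reason such checks behave differently in CI.)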

@soukron (Author) commented Apr 15, 2019

@soukron
You could try adding this to your .travis.yml to see if it prevents a shallow clone :

git:
  depth: false

Let me try.

@openshift-ci-robot removed the lgtm Indicates that a PR is ready to be merged. label Apr 15, 2019
@openshift-ci-robot

New changes are detected. LGTM label has been removed.

@soukron (Author) commented Apr 15, 2019

@brenton please check whether the result is what you expected.

@brenton (Contributor) commented Apr 15, 2019

When I look at the "Files changed" tab in this PR I don't see any .travis.yml updates. In any case, the Travis CI job failed in the same way (and still cloned with a depth of 50).

@brenton (Contributor) commented Apr 15, 2019

@soukron, I talked with a few other devs on the team. This is very likely an issue with the clone depth. If you update your PR to include a custom .travis.yml it should go away.

@soukron (Author) commented Apr 23, 2019

@soukron, I talked with a few other devs on the team. This is highly likely a situation with clone depth. If you update your PR to include a custom .travis.yaml it should go away.

After including the custom .travis.yml with that option, the result is even worse...

@soukron mentioned this pull request Apr 23, 2019
@openshift-ci-robot

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: derekwaynecarr, sjenning, soukron

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@soukron (Author) commented Apr 23, 2019

I've undone the change in .travis.yml after confirming that an essentially empty commit (a newline in the README file) fails the same tests.

I've submitted issue #22639 to track the problem with those files, as I expected the code to pass the tests.

@sjenning (Contributor)

/retest
/hold
we also need to pick kubernetes/kubernetes#75367, which fixes an issue introduced by kubernetes/kubernetes#70647

@openshift-ci-robot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Jun 14, 2019
@sjenning mentioned this pull request Jun 14, 2019
@sjenning (Contributor)

superseded by #23183, which also contains an additional fix for an issue that the first commit introduced and that was fixed upstream

@sjenning closed this Jun 17, 2019