
Enable a large chunk of upstream e2e tests that were accidentally not being run #18816

Merged: 7 commits into openshift:master, Aug 31, 2018

Conversation

@smarterclayton (Contributor) commented Mar 2, 2018

While investigating why GCE PVs were failing, we realized that no e2e tests were running for PVs, which led us to realize that we weren't including a large chunk of the newer sig-specific tests added as subfolders of k8s/test/e2e in our test suite. This was due to how the e2e test upstream changed: it used to be a regular package that pulled in other tests, but it became a _test package, which caused our imports to silently stop pulling in those tests. This PR explicitly links those tests in and corrects the gaps (although in the future we need a better reflective check to validate that we aren't dropping them).
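
A minimal sketch of what linking the tests back in could look like, assuming Go blank imports of the upstream sig-specific packages (the package list here is illustrative, not the exact set this PR touches):

// Blank imports run each package's init(), which is where the
// ginkgo.Describe registrations happen, making these specs visible
// to our suite. A _test package cannot be imported from regular Go
// code, which is why the upstream switch silently dropped them.
package extended

import (
	_ "k8s.io/kubernetes/test/e2e/apps"
	_ "k8s.io/kubernetes/test/e2e/auth"
	_ "k8s.io/kubernetes/test/e2e/network"
	_ "k8s.io/kubernetes/test/e2e/node"
	_ "k8s.io/kubernetes/test/e2e/storage"
)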

The biggest gaps are alpha features, although quite a few upstream tests make bad assumptions about master nodes. We also went through and investigated the networking issues - GCP CI runs were switched to the openshift network policy plugin in order to get better coverage, and there was only one known failure (ingress port by name).

The _install job will likely need an exclusion rule for some of the tests until we can get the AWS cluster job up and running.

@openshift-ci-robot openshift-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Mar 2, 2018
@openshift-ci-robot openshift-ci-robot added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Mar 2, 2018
@openshift-merge-robot openshift-merge-robot added the vendor-update Touching vendor dir or related files label Mar 2, 2018
@smarterclayton (Contributor Author)

/test gcp

@smarterclayton (Contributor Author)

@openshift/sig-networking I opened an Ansible PR to default GCP clusters to network policy. Can you look at the other issues here and determine which ones are functional issues for 3.9? I.e., source IP preservation, IPv6, and any of the load balancer ones.

@danwinship (Contributor)

Hm. So our tests enable all [Feature:*] tags by default?

  • We need to disable [Feature:NetworkPolicy] because we don't guarantee that OpenShift SDN in a given OCP release implements all the features defined for NetworkPolicy in the corresponding upstream kubernetes release. In particular, k8s 1.9 has tests for NetworkPolicy egress and IPBlock support, but OCP 3.9 does not implement that. We currently have a forked copy of kubernetes/test/e2e/network/network_policy.go in origin/test/extended/networking/networkpolicy.go which uses a different tag ([Feature:OSNetworkPolicy]) so we'll still test the functionality that we do implement.
  • We need to disable [Feature:Networking-IPv6] because OpenShift SDN doesn't do IPv6.
  • [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services had previously been skipped by test/extended/networking.sh with the comment "Skipped due to origin returning 403 for some of the urls".
  • [sig-network] Services should preserve source pod IP for traffic thru service cluster IP had been skipped by networking.sh, although that will actually pass under ovs-networkpolicy, just not ovs-multitenant (which is Source pod ip is not preserved when contacting a cluster ip service #11042, but may not ever be fixed at this point).
  • [sig-network] Network should set TCP CLOSE_WAIT timeout is looking for a file in /proc/net that doesn't exist. Maybe an Ubuntu vs Fedora/RHEL kernel issue. Probably not hard to fix but will need to be disabled until then.
  • [sig-network] Services should be able to up and down services is failing because wget isn't installed. There seems to be a lot of code in the upstream tests that uses wget, although there don't seem to be any other tests failing, so maybe there's something else going on here too? Anyway, if we aren't currently installing wget, then the simple fix is to install wget.

I'm not sure yet what's up with [sig-network] Networking IPerf IPv4 [Experimental] [Feature:Networking-IPv4] [Slow] [Feature:Networking-Performance] should transfer ~ 1GB onto the service endpoint 1 servers (maximum of 1 clients), which got run in the extended-networking-minimal run, but not gcp or conformance-install. (Presumably because of either [Slow] or [Experimental].) (It probably shouldn't run in extended-networking-minimal either, but it probably should run in the full extended-networking.)
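
A sketch of what skip rules for the items above could look like, assuming the same regex-exclusion-list style as the diff later in this thread (entries and comments here are illustrative assumptions, not the exact rules that end up in the PR):

package extended

// Hypothetical exclusion entries; any test name matching one of
// these regexps would be skipped by the suite.
var networkingExclusions = []string{
	`\[Feature:NetworkPolicy\]`,   // OCP 3.9 lacks egress/IPBlock; forked [Feature:OSNetworkPolicy] tests cover what we do implement
	`\[Feature:Networking-IPv6\]`, // OpenShift SDN does not do IPv6
	`static URL paths for kubernetes api services`,                      // origin returns 403 for some of the URLs
	`should preserve source pod IP for traffic thru service cluster IP`, // fails under ovs-multitenant (#11042)
	`Network should set TCP CLOSE_WAIT timeout`,                         // /proc/net file missing; fixed upstream
}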

@danwinship (Contributor)

oh, the TCP CLOSE_WAIT thing is already fixed, I just typoed when I searched for it before. kubernetes/kubernetes#56765

jpeeler added a commit to jpeeler/origin that referenced this pull request Mar 5, 2018
This is extracted from openshift#18816 in order
to make hostpath tests pass.
@smarterclayton (Contributor Author)

Thanks, that was the info I needed.

/retest

gnufied pushed a commit to gnufied/origin that referenced this pull request Mar 6, 2018
This is extracted from openshift#18816 in order
to make hostpath tests pass.
@openshift-bot openshift-bot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Mar 8, 2018
@openshift-bot openshift-bot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Mar 9, 2018
@smarterclayton (Contributor Author)

/retest

@smarterclayton (Contributor Author)

I tested the network policy plugin with this PR, and https://openshift-gce-devel.appspot.com/build/origin-ci-test/pr-logs/pull/18816/test_pull_request_origin_extended_conformance_gce/17466/#sig-network-networkpolicy-networkpolicy-between-server-and-client-should-allow-ingress-access-on-one-named-port-featurenetworkpolicy-suiteopenshiftconformanceparallel-suitek8s failed ([sig-network] NetworkPolicy NetworkPolicy between server and client should allow ingress access on one named port [Feature:NetworkPolicy] [Suite:openshift/conformance/parallel] [Suite:k8s]).

@mrunalp (Member) commented Aug 29, 2018

@smarterclayton okay, I am pretty certain that this particular test was passing for us before.

@mrunalp (Member) commented Aug 30, 2018

Some of them were failing because the wrong oc binary was getting picked up. The image layer test is still failing, though. I have kicked off another full run.

@mrunalp (Member) commented Aug 30, 2018

Down to 6 failures this run:

Summarizing 6 Failures:

[Fail] [Conformance][templates] templateservicebroker end-to-end test [BeforeEach]  should pass an end-to-end test [Suite:openshift/conformance/parallel] 
/home/mrunalp/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateservicebroker_e2e.go:68

[Fail] [Feature:DeploymentConfig] deploymentconfigs with custom deployments [Conformance] [It] should run the custom deployment steps [Suite:openshift/conformance/parallel] 
/home/mrunalp/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:585

[Fail] [Feature:ImageLayers] Image layer subresource [It] should return layers from tagged images [Suite:openshift/conformance/parallel] 
/home/mrunalp/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/images/layers.go:74

[Fail] [Feature:DeploymentConfig] deploymentconfigs paused [Conformance] [It] should disable actions on deployments [Suite:openshift/conformance/parallel] 
/home/mrunalp/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:762

[Fail] [Conformance][templates] templateservicebroker security test [BeforeEach]  should pass security tests [Suite:openshift/conformance/parallel] 
/home/mrunalp/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateservicebroker_security.go:56

[Fail] [Conformance][templates] templateservicebroker bind test  [BeforeEach] should pass bind tests [Suite:openshift/conformance/parallel] 
/home/mrunalp/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateservicebroker_bind.go:46

Ran 235 of 455 Specs in 1046.486 seconds
FAIL! -- 229 Passed | 6 Failed | 0 Pending | 220 Skipped

@mrunalp (Member) commented Aug 30, 2018

Of these, the template service broker tests are failing similarly, and one of the dc tests is a flake we have seen before.

Failure in Spec Setup (BeforeEach) [16.830 seconds]
[Conformance][templates] templateservicebroker end-to-end test
/home/mrunalp/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateservicebroker_e2e.go:40
  [BeforeEach]
  /home/mrunalp/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateservicebroker_e2e.go:356
    should pass an end-to-end test [Suite:openshift/conformance/parallel]
    /home/mrunalp/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateservicebroker_e2e.go:370

    Expected error:
        <*errors.StatusError | 0xc421b41680>: {
            ErrStatus: {
                TypeMeta: {Kind: "", APIVersion: ""},
                ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""},
                Status: "Failure",
                Message: "services \"apiserver\" not found",
                Reason: "NotFound",
                Details: {Name: "apiserver", Group: "", Kind: "services", UID: "", Causes: nil, RetryAfterSeconds: 0},
                Code: 404,
            },
        }
        services "apiserver" not found
    not to have occurred

@smarterclayton (Contributor Author) commented Aug 30, 2018

Running locally I found a test namespace that isn't being cleaned up after a run:

$ oc status -n e2e-tests-kubectl-thzf8
In project e2e-tests-kubectl-thzf8 on server https://internal-api.claytondev4.origin-ci-int-gce.dev.rhcloud.com:8443

pod/run-test-3-vnvqg runs busybox

pod/nginx runs k8s.gcr.io/nginx-slim-amd64:0.20

and also

oc status -n e2e-tests-container-probe-g4n79
In project e2e-tests-container-probe-g4n79 on server https://internal-api.claytondev4.origin-ci-int-gce.dev.rhcloud.com:8443

pod/liveness-exec runs busybox

You have no services, deployment configs, or build configs.
Run 'oc new-app' to create an application.

I think that means that those tests were in the child that failed (the process timed out and was killed, which prevented them from being cleaned up).

The first one is k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:537, which is "should support inline execution and attach". Running it directly, it seems to hang here on cri-o but not docker:

Aug 29 23:52:06.249: INFO: stdout: "pod/nginx created\n"
Aug 29 23:52:06.249: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [nginx]
Aug 29 23:52:06.249: INFO: Waiting up to 5m0s for pod "nginx" in namespace "e2e-tests-kubectl-hrnt8" to be "running and ready"
Aug 29 23:52:06.280: INFO: Pod "nginx": Phase="Pending", Reason="", readiness=false. Elapsed: 30.380841ms
Aug 29 23:52:08.432: INFO: Pod "nginx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.182755924s
Aug 29 23:52:10.472: INFO: Pod "nginx": Phase="Running", Reason="", readiness=false. Elapsed: 4.222670823s
Aug 29 23:52:12.510: INFO: Pod "nginx": Phase="Running", Reason="", readiness=false. Elapsed: 6.261194601s
Aug 29 23:52:14.657: INFO: Pod "nginx": Phase="Running", Reason="", readiness=false. Elapsed: 8.407866695s
Aug 29 23:52:16.768: INFO: Pod "nginx": Phase="Running", Reason="", readiness=true. Elapsed: 10.518586582s
Aug 29 23:52:16.768: INFO: Pod "nginx" satisfied condition "running and ready"
Aug 29 23:52:16.768: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [nginx]
[It] should support inline execution and attach [Suite:openshift/conformance/parallel] [Suite:k8s]
  /Users/clayton/projects/origin/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:507
STEP: executing a command with run and attach with stdin
Aug 29 23:52:16.768: INFO: Running '/Users/clayton/projects/origin/src/github.com/openshift/origin/_output/local/bin/darwin/amd64/kubectl --server=https://internal-api.claytondev4.origin-ci-int-gce.dev.rhcloud.com:8443 --kubeconfig=/Users/clayton/projects/origin/src/github.com/openshift/release/cluster/test-deploy/gcp-crio/admin.kubeconfig --namespace=e2e-tests-kubectl-hrnt8 run run-test --image=busybox --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Aug 29 23:52:22.570: INFO: stderr: "If you don't see a command prompt, try pressing enter.\n"
Aug 29 23:52:22.570: INFO: stdout: "abcd1234stdin closed\n"
STEP: executing a command with run and attach without stdin
Aug 29 23:52:22.602: INFO: Running '/Users/clayton/projects/origin/src/github.com/openshift/origin/_output/local/bin/darwin/amd64/kubectl --server=https://internal-api.claytondev4.origin-ci-int-gce.dev.rhcloud.com:8443 --kubeconfig=/Users/clayton/projects/origin/src/github.com/openshift/release/cluster/test-deploy/gcp-crio/admin.kubeconfig --namespace=e2e-tests-kubectl-hrnt8 run run-test-2 --image=busybox --restart=OnFailure --attach=true --leave-stdin-open=true -- sh -c cat && echo 'stdin closed''
Aug 29 23:52:26.363: INFO: stderr: ""
Aug 29 23:52:26.363: INFO: stdout: "stdin closed\n"
STEP: executing a command with run and attach with stdin with open stdin should remain running
Aug 29 23:52:26.394: INFO: Running '/Users/clayton/projects/origin/src/github.com/openshift/origin/_output/local/bin/darwin/amd64/kubectl --server=https://internal-api.claytondev4.origin-ci-int-gce.dev.rhcloud.com:8443 --kubeconfig=/Users/clayton/projects/origin/src/github.com/openshift/release/cluster/test-deploy/gcp-crio/admin.kubeconfig --namespace=e2e-tests-kubectl-hrnt8 run run-test-3 --image=busybox --restart=OnFailure --attach=true --leave-stdin-open=true --stdin -- sh -c cat && echo 'stdin closed''

@smarterclayton (Contributor Author)

[k8s.io] Probing container
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
  /Users/clayton/projects/origin/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:684
[BeforeEach] [Top Level]
  /Users/clayton/projects/origin/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Probing container
  /Users/clayton/projects/origin/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Aug 30 00:01:24.732: INFO: >>> kubeConfig: /Users/clayton/projects/origin/src/github.com/openshift/release/cluster/test-deploy/gcp-crio/admin.kubeconfig
STEP: Building a namespace api object
Aug 30 00:01:26.126: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /Users/clayton/projects/origin/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
  /Users/clayton/projects/origin/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:684
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-gk7tf
Aug 30 00:01:32.633: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-gk7tf
STEP: checking the pod's current state and verifying that restartCount is present
Aug 30 00:01:32.663: INFO: Initial restart count of pod liveness-exec is 0

seems to hang on cri-o

@smarterclayton (Contributor Author)

It eventually completed, but probably hangs / doesn't work in very long runs:

[k8s.io] Probing container
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
  /Users/clayton/projects/origin/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:684
[BeforeEach] [Top Level]
  /Users/clayton/projects/origin/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Probing container
  /Users/clayton/projects/origin/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Aug 30 00:01:24.732: INFO: >>> kubeConfig: /Users/clayton/projects/origin/src/github.com/openshift/release/cluster/test-deploy/gcp-crio/admin.kubeconfig
STEP: Building a namespace api object
Aug 30 00:01:26.126: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /Users/clayton/projects/origin/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
  /Users/clayton/projects/origin/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:684
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-gk7tf
Aug 30 00:01:32.633: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-gk7tf
STEP: checking the pod's current state and verifying that restartCount is present
Aug 30 00:01:32.663: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /Users/clayton/projects/origin/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Aug 30 00:05:33.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-gk7tf" for this suite.
Aug 30 00:05:40.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 00:05:41.790: INFO: namespace: e2e-tests-container-probe-gk7tf, resource: bindings, ignored listing per whitelist
Aug 30 00:05:42.292: INFO: namespace e2e-tests-container-probe-gk7tf deletion completed in 8.358645532s

• [SLOW TEST:257.557 seconds]
[k8s.io] Probing container
/Users/clayton/projects/origin/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:679
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
  /Users/clayton/projects/origin/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:684

@mrunalp (Member) commented Aug 30, 2018

@smarterclayton we will take a look at the liveness exec test. The other inline attach test depends on a docker-specific behavior where docker just disconnects from attach even though it is asked to keep stdin alive. Can we temporarily disable that? I will look into modifying that test upstream.

@squeed (Contributor) commented Aug 30, 2018

/retest

@smarterclayton (Contributor Author)

Yes, we can temporarily disable it - do you have a bugzilla open already?

@smarterclayton (Contributor Author)

@mrunalp a problem with disabling that is that it is a conformance test, so you'll need to put extra attention into getting it fixed. It also means cri-o would not be considered conformant. Please give that extra focus.

@@ -332,6 +332,8 @@ var (
`SELinux relabeling`, // https://github.com/openshift/origin/issues/7287 still broken
`Volumes CephFS`, // permission denied, selinux?

`Probing container should \*not\* be restarted with a exec "cat /tmp/health" liveness probe`, // https://bugzilla.redhat.com/show_bug.cgi?id=1624041
Review comment (Member) on this diff:
It should be the other one - "should support inline execution and attach"
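
A sketch of what the entry the reviewer points at could look like, in the same exclusion-list style as the diff (the regex text is an assumption based on the test name quoted earlier in this thread):

package extended

// Hypothetical entry disabling the inline attach test: docker
// disconnects from attach even when asked to keep stdin open,
// while cri-o keeps the attach alive, so the test hangs there.
var crioExclusions = []string{
	`should support inline execution and attach`,
}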

@smarterclayton (Contributor Author)

/test gcp-crio

@smarterclayton (Contributor Author)

OK, looks like that got us past the hanging test. Two new failures this time, it looks like; maybe it's taking too long for crio.

/test gcp-crio
/test gcp

@openshift-ci-robot commented Aug 31, 2018

@smarterclayton: The following tests failed, say /retest to rerun them all:

Test name                                         Commit   Rerun command
ci/openshift-jenkins/extended_networking_minimal  168894d  /test extended_networking_minimal
ci/openshift-jenkins/gcp                          59afc26  /test gcp
ci/prow/gcp-crio                                  ada6020  /test gcp-crio


@smarterclayton (Contributor Author)

Seeing these fail on crio - most of them are consistent. If this is timeouts, we should bump them.

[sig-network] Services should be able to switch session affinity for NodePort service [Suite:openshift/conformance/parallel] [Suite:k8s] 3m49s
[sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2 [Suite:openshift/conformance/parallel] [Suite:k8s] 2m12s
[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2 [Suite:openshift/conformance/parallel] [Suite:k8s] 2m12s

Commits pushed:

  • The feature gate is not yet enabled and may not be for several releases. Pod team owns allowing this to be used.
  • The feature gate is off and will remain off for several releases. Pod team owns fixing this carry.
  • Remove use of the -suite parameter in favor of a temporary SUITE env var for old jobs. Newer jobs will call ginkgo directly. Remove the setup code from the extended tests.
  • The inline attach scenario is behaving differently between docker and crio. Temporarily disable it while this is fixed upstream.
@smarterclayton (Contributor Author)

I'm going to set crio to the minimal suite in ansible for now (.../minimal), so we should be able to merge this and let the rest get fixed incrementally.

@smarterclayton smarterclayton added the lgtm Indicates that a PR is ready to be merged. label Aug 31, 2018
@smarterclayton (Contributor Author)

Merging; once this is in, I'll switch over the release jobs in openshift/release#1339.

@openshift-merge-robot openshift-merge-robot merged commit 15be1fa into openshift:master Aug 31, 2018