Add support for 'osc stop|label' from upstream #1944

Merged: 1 commit merged into openshift:master from bundle-stop-and-label on May 19, 2015
Conversation

@0xmichalis
Contributor

[vagrant@openshiftdev sample-app]$ osc label svc/mysql2 app=sql
NAME      LABELS    SELECTOR                            IP               PORT(S)
mysql2    app=sql   deploymentconfig=mysql,name=hello   172.30.228.100   41/TCP
[vagrant@openshiftdev sample-app]$ osc stop svc/mysql2
services/mysql2

cc: @smarterclayton @bparees (https://trello.com/c/XxiMNEHv/553-add-support-for-osc-stop-label-from-upstream-codefreeze)

# Shut down all resources in the path/to/resources directory
$ %[1]s stop -f path/to/resources
`
cmd.Long = fmt.Sprintf(longDesc, fullName)
Contributor

Reaper needs to be implemented for DeploymentConfig

Contributor

You can do that in a follow-on pull, but I don't want to expose stop for osc until that works.

Contributor Author

No problem, I'll work on it here.

Contributor Author

@smarterclayton FYI, after the latest change in the stop code we can use osc stop as is, since if the passed resource doesn't implement the Reaper, stop will fall back to delete (https://github.com/GoogleCloudPlatform/kubernetes/blob/e1256c08027748c4a0dc8078f1f8db5afb673923/pkg/kubectl/cmd/delete.go#L112).
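
To make that fallback concrete, here is a paraphrased sketch of the logic at the linked delete.go line; the helper deleteResource and the exact signatures are assumptions about that revision, not the literal upstream code:

```go
// Paraphrased sketch (assumed names) of the fallback in the linked
// delete.go: stop uses a Reaper when one is registered for the kind,
// and otherwise degrades to a plain delete.
func stopOrDelete(f *cmdutil.Factory, mapping *meta.RESTMapping, namespace, name string) error {
	reaper, err := f.Reaper(mapping)
	if err != nil {
		if kubectl.IsNoSuchReaperError(err) {
			// No Reaper for this kind: fall back to a direct delete.
			return deleteResource(mapping, namespace, name)
		}
		return err
	}
	_, err = reaper.Stop(namespace, name)
	return err
}
```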

Regarding the implementation, DC first has to support the Resizer and then the Reaper. How are we going to implement those, since DC is an OpenShift-specific resource and we would need to make that change in Kubernetes?
https://github.com/GoogleCloudPlatform/kubernetes/blob/e1256c08027748c4a0dc8078f1f8db5afb673923/pkg/kubectl/stop.go#L51
https://github.com/GoogleCloudPlatform/kubernetes/blob/e1256c08027748c4a0dc8078f1f8db5afb673923/pkg/kubectl/resize.go#L92
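
For reference, the rough shape of the two interfaces those links point at (signatures paraphrased from that revision of pkg/kubectl; treat this as a sketch, details may differ):

```go
// Paraphrased from pkg/kubectl/resize.go and pkg/kubectl/stop.go at the
// linked revision.
type Resizer interface {
	// Resize sets the number of replicas on the named resource.
	Resize(namespace, name string, newSize uint) (string, error)
}

type Reaper interface {
	// Stop gracefully shuts down the named resource.
	Stop(namespace, name string) (string, error)
}
```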

Contributor Author

Well, maybe I'm wrong about the resizer but still...

Contributor

Resizer I believe has an abstraction as well that goes into the Factory. There are other issues with Resize though: currently deployments aren't preserving resize, and it's not clear whether the deployment config's value for replicas should be used. For now, the Resizer resizing the current dc's deployment would be most correct.
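
In other words, the Factory is the seam: origin can override the upstream Reaper/Resizer lookup for its own kinds and delegate everything else to Kubernetes. A minimal sketch, assuming a DC-specific reaper type (all names here are illustrative, not the merged code):

```go
// Illustrative only: an origin Factory handling its own kinds and
// falling through to the kube factory for everything else.
func (f *Factory) Reaper(mapping *meta.RESTMapping) (kubectl.Reaper, error) {
	if mapping.Kind == "DeploymentConfig" {
		// Hypothetical constructor for a DC-specific reaper.
		return NewDeploymentConfigReaper(f.OSClient, f.KubeClient), nil
	}
	// Everything else is handled by the upstream kubectl factory.
	return f.Factory.Reaper(mapping)
}
```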

@0xmichalis changed the title from "Add support for 'osc stop|label' from upstream" to "[WIP] Add support for 'osc stop|label' from upstream" on Apr 28, 2015
@0xmichalis
Contributor Author

Will hold on to this until the next Kube rebase. Then it can be merged w/o implementing the Reaper for dc.

@0xmichalis changed the title from "[WIP] Add support for 'osc stop|label' from upstream" to "Add support for 'osc stop|label' from upstream" on May 5, 2015
@0xmichalis
Contributor Author

This still needs to implement the Reaper, but for that, dcs first have to implement the Resizer; then resizing down to zero is all this will take.
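
A minimal sketch of that plan (resize down to zero, then delete the config), assuming a DC-aware Resizer already exists; the helper names are hypothetical, not the merged implementation:

```go
// Illustrative DeploymentConfig Reaper: resize the latest deployment to
// zero so pods terminate gracefully, then delete the config itself.
func (r *DeploymentConfigReaper) Stop(namespace, name string) (string, error) {
	if _, err := r.resizer.Resize(namespace, name, 0); err != nil {
		return "", err
	}
	if err := r.osClient.DeploymentConfigs(namespace).Delete(name); err != nil {
		return "", err
	}
	return name + " stopped", nil
}
```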

@smarterclayton
Contributor

For a DC, the resizer should find the most recent (highest deployment number) rc associated with that deployment, and apply resize to that deployment. Can you make that change?
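
What that amounts to, as a hedged sketch: the latest deployment is an RC named <dcName>-<latestVersion> (visible in the transcript above as ruby-hello-world-1), so the DC resizer can resolve that RC and delegate to the RC resizer. Names here are illustrative:

```go
// Illustrative DC resizer: resolve the most recent deployment (the RC
// named "<dcName>-<latestVersion>") and resize that RC.
func (r *DeploymentConfigResizer) Resize(namespace, name string, newSize uint) (string, error) {
	dc, err := r.osClient.DeploymentConfigs(namespace).Get(name)
	if err != nil {
		return "", err
	}
	rcName := fmt.Sprintf("%s-%d", dc.Name, dc.LatestVersion)
	return r.rcResizer.Resize(namespace, rcName, newSize)
}
```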


@0xmichalis
Contributor Author

> For a DC, the resizer should find the most recent (highest deployment number) rc associated with that deployment, and apply resize to that deployment. Can you make that change?

I will give it a go.

@0xmichalis
Contributor Author

I've updated this PR to work with the resizing PR. Merged both of them in a test branch and:

[vagrant@openshiftdev sample-app]$ osc get dc
NAME               TRIGGERS                    LATEST VERSION
php-55-centos7     ConfigChange, ImageChange   0
ruby-hello-world   ConfigChange, ImageChange   1

[vagrant@openshiftdev sample-app]$ osc resize dc ruby-hello-world --replicas=3
resized

[vagrant@openshiftdev sample-app]$ osc describe dc ruby-hello-world
Name:       ruby-hello-world
Created:    About an hour ago
Labels:     <none>
Latest Version: 1
Triggers:   Config, Image(ruby-hello-world@latest, auto=true)
Strategy:   Recreate
Template:
    Selector:   deploymentconfig=ruby-hello-world
    Replicas:   1
    Containers:
        NAME            IMAGE               ENV
        ruby-hello-world    library/ruby-hello-world:latest 
Deployment #1 (latest):
    Name:       ruby-hello-world-1
    Created:    about an hour ago
    Status:     Running
    Replicas:   3 current / 3 desired
    Selector:   deployment=ruby-hello-world-1,deploymentconfig=ruby-hello-world
    Labels:     deployment=ruby-hello-world-1,deploymentconfig=ruby-hello-world
    Pods Status:    3 Running / 0 Waiting / 0 Succeeded / 0 Failed
No events.

[vagrant@openshiftdev sample-app]$ osc get pods
POD                              IP        CONTAINER(S)       IMAGE(S)    HOST                           LABELS                                                            STATUS    CREATED            MESSAGE
deploy-ruby-hello-world-1wdk5w   1.2.3.4                                  openshiftdev.local/127.0.0.1   <none>                                                            Running   About an hour      
                                           deployment         testimage                                                                                                    Running   292.471209 years   
ruby-hello-world-1-3wuye         1.2.3.4                                  openshiftdev.local/127.0.0.1   deployment=ruby-hello-world-1,deploymentconfig=ruby-hello-world   Running   33 seconds         
                                           ruby-hello-world   testimage                                                                                                    Running   292.471209 years   
ruby-hello-world-1-oaj7s         1.2.3.4                                  openshiftdev.local/127.0.0.1   deployment=ruby-hello-world-1,deploymentconfig=ruby-hello-world   Running   33 seconds         
                                           ruby-hello-world   testimage                                                                                                    Running   292.471209 years   
ruby-hello-world-1-pkr4z         1.2.3.4                                  openshiftdev.local/127.0.0.1   deployment=ruby-hello-world-1,deploymentconfig=ruby-hello-world   Running   33 seconds         
                                           ruby-hello-world   testimage                                                                                                    Running   292.471209 years   

[vagrant@openshiftdev sample-app]$ osc stop dc ruby-hello-world
[vagrant@openshiftdev sample-app]$ osc get pods
POD                              IP        CONTAINER(S)   IMAGE(S)    HOST                           LABELS    STATUS    CREATED            MESSAGE
deploy-ruby-hello-world-1wdk5w   1.2.3.4                              openshiftdev.local/127.0.0.1   <none>    Running   About an hour      
                                           deployment     testimage                                            Running   292.471209 years   

[vagrant@openshiftdev sample-app]$ osc describe dc ruby-hello-world
Name:       ruby-hello-world
Created:    About an hour ago
Labels:     <none>
Latest Version: 1
Triggers:   Config, Image(ruby-hello-world@latest, auto=true)
Strategy:   Recreate
Template:
    Selector:   deploymentconfig=ruby-hello-world
    Replicas:   1
    Containers:
        NAME            IMAGE               ENV
        ruby-hello-world    library/ruby-hello-world:latest 
Deployment #1 (latest):
    Name:       ruby-hello-world-1
    Created:    about an hour ago
    Status:     Running
    Replicas:   0 current / 0 desired
    Selector:   deployment=ruby-hello-world-1,deploymentconfig=ruby-hello-world
    Labels:     deployment=ruby-hello-world-1,deploymentconfig=ruby-hello-world
    Pods Status:    0 Running / 0 Waiting / 0 Succeeded / 0 Failed
No events.

@@ -13,6 +13,64 @@ import (
)

const (
stop_long = `Gracefully shut down a resource by id or filename.
Contributor

stopLong and stopExample
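
That is, mixedCaps instead of underscores, per Go naming convention; something like this sketch (string contents elided):

```go
// Suggested naming (sketch): mixedCaps per Go convention, with the
// examples split out of the long description.
const (
	stopLong    = `Gracefully shut down a resource by id or filename. ...`
	stopExample = `// Shut down foo.
$ %[1]s stop replicationcontroller foo
...`
)
```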

@0xmichalis
Contributor Author

I didn't actually delete the dc. Fixed.

[ $(osc describe bc/ruby-helloworld-sample | grep acustom=label) ]
echo "label: ok"
osc stop svc/ruby-helloworld-sample
osc delete all -l app=dockerbuild
Contributor

Verify the dc was deleted in your test case

Contributor Author

Was about to do that

@smarterclayton
Contributor

Travis is failing

@0xmichalis
Contributor Author

[test]

@openshift-bot
Contributor

continuous-integration/openshift-jenkins/test SUCCESS (https://ci.openshift.redhat.com/jenkins/job/test_pull_requests_origin/2248/)

@0xmichalis
Contributor Author

re[test]

@smarterclayton
Contributor

Failing due to

deploymentConfigs/database
replicationControllers "database-1" not found
!!! Error in hack/test-cmd.sh:402
  'osc get rc/database-1' exited with status 1
Call stack:
  1: hack/test-cmd.sh:402 main(...)

might be a race introduced by your changes

@smarterclayton
Contributor

The race should be fixed by #2294

@0xmichalis
Contributor Author

The deployment pod isn't deleted when stopping a dc... Looking at it.

@smarterclayton
Contributor

That's the responsibility of the deployment controller.


@0xmichalis
Contributor Author

It seems like the deployment controller will delete the deployer pod only when the deployment is successful. Does the replication controller have a status at all when it's deleted by a CLI command? Should we change its status to complete before deleting it (can we even do that?) so that the deployer controller can delete the pod, or should we just go ahead and delete it manually?

@smarterclayton
Contributor

> It seems like the deployment controller will delete the deployer pod only when the deployment is successful. Does the replication controller have a status at all when it's deleted by a CLI command?

The deployment controller will try to start it again, which we don't want.

> Should we change its status to complete before deleting it (can we even do that?) so that the deployer controller can delete the pod, or should we just go ahead and delete it manually?

We shouldn't be changing anything on the RC - that's the responsibility of the deployment controller to manage, and it's likely we'd end up having code in two places. I'd prefer that we not be deleting it from the CLI.

@0xmichalis
Contributor Author

@smarterclayton this is ready for merging. The error on Travis is fixed here: #2308

@0xmichalis
Contributor Author

Re the deployer pod, I opened a separate issue to track it: #2316

cmds.AddCommand(cmd.NewCmdLogin(fullName, f, in, out))
cmds.AddCommand(cmd.NewCmdLogout("logout", fullName+" logout", fullName+" login", f, in, out))
cmds.AddCommand(cmd.NewCmdNewApplication(fullName, f, out))
Contributor

Why did you reorder these? The order is significant, and is intentional. Please revert the ordering.

Contributor Author

I am not sure of the previous order, but now the commands are alphabetically sorted in two groups: OpenShift-specific commands and Kubernetes wrappers.

Contributor

We don't want that. The order is intentional: most-used commands followed by lesser-used commands.


Contributor

Hm, for what it's worth (not much, I'm sure), I think alphabetic makes more sense. If I'm trying to scan for the usage of a command I know the name of, it's easier to find if the list is alphabetical. And if I don't know the name of the command I'm looking for, I still may have guesses, which are going to lead me to navigate the list in an alphabetic search.

I don't really see the value in having the most common ones at the top, given that the whole list is going to be displayed anyway (in fact, you could argue the most common ones should be at the bottom, since that's what's going to be front and center after someone runs osc).


Contributor

The organization of the main help is going to be like "git help": ordered by function and utility. The group boundaries will be based on the order.


Contributor

Can we sort within group boundaries? Please? :)

Contributor

No :) See "rhc help" for what you're going to get :)


This commit adds two commands from Kubernetes: stop and label. While label was rather trivial to import into OpenShift, for stop to be fully functional, deploymentConfigs first had to implement the kubectl.Resizer interface (work on the Resizer can be found in #2158), which is a prerequisite for another needed interface, kubectl.Reaper (work on the Reaper can be found here).
@smarterclayton
Contributor

LGTM [merge]

@openshift-bot
Contributor

continuous-integration/openshift-jenkins/merge SUCCESS (https://ci.openshift.redhat.com/jenkins/job/test_pull_requests_origin/2248/) (Image: devenv-fedora_1564)

@openshift-bot
Contributor

Evaluated for origin up to 633b1b9

openshift-bot pushed a commit that referenced this pull request May 19, 2015
@openshift-bot merged commit 10b4cfe into openshift:master on May 19, 2015
@0xmichalis deleted the bundle-stop-and-label branch on May 19, 2015 07:29