
[Kubernetes] stack services filters #1023

Merged: 2 commits into docker:master on May 24, 2018

Conversation

simonferquel
Contributor

This PR implements stack services filtering, to better align the docker stack services experience across orchestrators.

This also fixes an issue where global services were missing from the list in Kubernetes mode.

To do that, I leveraged Kubernetes label selectors (so everything is done server side) and translated the existing supported filters as closely as possible.

One behavior inconsistency is that k8s label selectors only support exact matches, while in Swarm mode name filters behave as a prefix (StartsWith) comparison.
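
For illustration, a minimal sketch of the kind of translation this implies (not the PR's actual code; the com.docker.stack.namespace label key appears in the tests discussed below, everything else, including the helper name, is made up):

// buildStackSelector is a hypothetical helper: it combines the stack namespace
// requirement with user-supplied label filters into a single selector string
// suitable for metav1.ListOptions.LabelSelector, so filtering happens server side.
// Assumes "fmt" and "strings" are imported.
func buildStackSelector(stackName string, labelFilters map[string][]string) string {
	parts := []string{"com.docker.stack.namespace=" + stackName}
	for key, values := range labelFilters {
		if len(values) == 0 {
			parts = append(parts, key) // --filter label=key : presence only
			continue
		}
		for _, value := range values {
			parts = append(parts, fmt.Sprintf("%s=%s", key, value)) // --filter label=key=value
		}
	}
	return strings.Join(parts, ",") // comma-separated requirements are ANDed by Kubernetes
}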

@@ -69,3 +69,8 @@ func (s *Factory) ReplicationControllers() corev1.ReplicationControllerInterface
func (s *Factory) ReplicaSets() typesappsv1beta2.ReplicaSetInterface {
return s.appsClientSet.ReplicaSets(s.namespace)
}

// DaemonSets return a client for kubernetes daemon sets
Contributor

Nit: "returns", i.e. the doc comment should read "// DaemonSets returns a client for kubernetes daemon sets".

var presentLabels []string
valuedLabels := map[string][]string{}

for _, rawLabel := range filters.Get("label") {
Contributor

Extract this to a function?

 presentLabels, valuedLabels := parseLabels(filters.Get("label"))


servicesList, err := client.Services().List(metav1.ListOptions{LabelSelector: labels.SelectorForStack(opts.Namespace)})
servicesList, err := client.Services().List(metav1.ListOptions{LabelSelector: labelSelector})
Contributor

You should extract all the labelSelector creation logic and add unit tests.
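
As a sketch of what such a unit test might assert once the logic is extracted (the expected string depends on the final implementation; generateLabelSelector and the filters package, github.com/docker/docker/api/types/filters, are the ones used elsewhere in this PR):

func TestGenerateLabelSelectorContainsStackNamespace(t *testing.T) {
	f := filters.NewArgs()
	f.Add("label", "mylabel=mat")

	selector := generateLabelSelector(f, "mystack")

	// Whatever the exact format, the selector must restrict results to the stack,
	// otherwise the API server would return objects from other stacks as well.
	if !strings.Contains(selector, "com.docker.stack.namespace=mystack") {
		t.Errorf("expected stack namespace requirement in selector, got %q", selector)
	}
}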

@@ -161,6 +162,45 @@ func replicasToServices(replicas *appsv1beta2.ReplicaSetList, services *apiv1.Se
Replicas: fmt.Sprintf("%d/%d", r.Status.AvailableReplicas, r.Status.Replicas),
}
}
for i, d := range daemons.Items {
Contributor

I think we could factor this out together with the code above and extract it to a function taking the labels and the containers as parameters (it seems these are the only dependencies on replicaSets and daemonSets).
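
A rough sketch of what such a helper could look like, under the assumption that only the labels and the pod-template containers are needed (names and label keys here are illustrative, not the code that was eventually merged):

// serviceNameAndImage extracts the two pieces of information that both the
// ReplicaSet and the DaemonSet loops derive from their items: the displayed
// service name (from the object labels) and the image (from the first pod
// template container). apiv1 is k8s.io/api/core/v1.
func serviceNameAndImage(objLabels map[string]string, containers []apiv1.Container) (string, string) {
	name := objLabels["com.docker.stack.namespace"] + "_" + objLabels["com.docker.service.name"]
	image := ""
	if len(containers) > 0 {
		image = containers[0].Image
	}
	return name, image
}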

@thaJeztah
Member

One behavior inconsistency is that k8s label selectors only support exact matches, while in Swarm mode name filters behave as a prefix (StartsWith) comparison

@simonferquel could you elaborate a bit? I think name filters work on full matches (but there was a bug at some point that incorrectly did partial matches: #72)

$ docker stack ls

NAME                SERVICES
foobar              1


$ docker stack ps foo
nothing found in stack: foo

$ docker stack ps foobar
ID                  NAME                IMAGE               NODE                    DESIRED STATE       CURRENT STATE        ERROR               PORTS
u542iioaug1g        foobar_web.1        nginx:alpine        linuxkit-025000000001   Running             Running 3 days ago                       


$ docker service ps foobar
no such service: foobar


$ docker service ps foobar_web
ID                  NAME                IMAGE               NODE                    DESIRED STATE       CURRENT STATE        ERROR               PORTS
u542iioaug1g        foobar_web.1        nginx:alpine        linuxkit-025000000001   Running             Running 3 days ago                       

@simonferquel
Contributor Author

@thaJeztah the problem is not with the stack name: when you use -f name=<something> with Swarm, the name filter acts as a prefix filter.
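
A small sketch of the semantic difference being discussed (illustrative only): Swarm's name filter is a prefix comparison, while a raw Kubernetes label selector equality is an exact comparison.

// Swarm-style name filtering keeps anything whose name starts with the filter value:
//   matchesSwarmNameFilter("foobar_web", "foobar") == true
//   matchesSwarmNameFilter("foobar_web", "web")    == false
// A plain Kubernetes selector such as com.docker.service.name=web would instead
// only match services whose label value is exactly "web".
func matchesSwarmNameFilter(serviceName, filter string) bool {
	return strings.HasPrefix(serviceName, filter)
}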

}
// name filtering is done client side for matching partials

for i := len(result) - 1; i >= 0; i-- {
Collaborator

This for loop is hard to read 😓

result := make([]swarm.Service, len(replicas.Items))
infos := make(map[string]formatter.ServiceListInfo, len(replicas.Items))
func replicasToServices(replicas *appsv1beta2.ReplicaSetList, daemons *appsv1beta2.DaemonSetList, services *apiv1.ServiceList) ([]swarm.Service, map[string]formatter.ServiceListInfo, error) {
result := make([]swarm.Service, len(replicas.Items)+len(daemons.Items))
Collaborator

put len(replicas.Items)+len(daemons.Items) in a var

@@ -55,7 +65,8 @@ func RunPS(dockerCli *KubeCli, options options.PS) error {
for i, pod := range pods {
tasks[i] = podToTask(pod)
}
return print(dockerCli, namespace, tasks, pods, nodeResolver, !options.NoTrunc, options.Quiet, format)

return print(dockerCli, stackName, tasks, pods, nodeResolver, !options.NoTrunc, options.Quiet, format)
Collaborator

This print function takes way too many arguments. We should at least pass it options and remove the last 3 (by moving the if len(format) … logic into print).
Related to that, I would tend to prefer if format == "": since we are checking whether the string is empty, it's easier to read.

if len(v) == 1 {
result = append(result, fmt.Sprintf("%s=%s", k, v[0]))
} else {
// make result more testable by sorting values
Collaborator

We shouldn't make a change "for the result to be more testable". Either we want to sort (and we always sort), or, when testing, we check for the existence of the element in the slice (and thus don't depend on the order).

func generateLabelSelector(f filters.Args, stackName string) string {
names := f.Get("name")

// make result more testable by sorting names
Collaborator

Same here

@mat007 force-pushed the k8s-stack-services-filters branch from ac5453c to 1d935a2 on May 5, 2018 06:51
@mat007
Member

mat007 commented May 5, 2018

@vdemeester I believe I have addressed your comments, would you please take another look?

@mat007 force-pushed the k8s-stack-services-filters branch from 1d935a2 to 646446e on May 5, 2018 07:05
@thaJeztah
Member

looks like this needs a rebase as well

} else {
result = []apiv1.Pod{}
for _, n := range nodes {
podsList, err := pods.List(metav1.ListOptions{LabelSelector: labelSelector, FieldSelector: "spec.nodeName=" + n})
Contributor

This might result in tons of pods.List() calls. Can't we do something like spec.nodeName in (node1,node2,...)? If not, we should probably just do one pods.List() call and post-filter the results (or remove the node arg altogether).

Member

I have simplified the code and now do the node filtering after querying for a full pods.List(); this is much clearer in the end. Thanks for suggesting this!
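
A minimal sketch of that shape of the code (assuming apiv1 is k8s.io/api/core/v1; this is not the exact merged implementation): list pods once with the label selector, then drop the ones that are not on a requested node.

// filterPodsByNode keeps only the pods scheduled on one of the requested nodes;
// with no node filter, every pod is kept.
func filterPodsByNode(pods []apiv1.Pod, nodes []string) []apiv1.Pod {
	if len(nodes) == 0 {
		return pods
	}
	wanted := make(map[string]bool, len(nodes))
	for _, n := range nodes {
		wanted[n] = true
	}
	var result []apiv1.Pod
	for _, pod := range pods {
		if wanted[pod.Spec.NodeName] {
			result = append(result, pod)
		}
	}
	return result
}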

return result
}

func parseLabelFilters(rawFilters []string) ([]string, map[string][]string) {
Contributor

Is there any code in the Kubernetes CLI we could reuse here?

Member

What do you mean by Kubernetes CLI? kubectl?

Contributor

Yeah, I'm wondering if there are any public functions in the kubernetes/kubernetes repo we could use here.

Contributor

I think this code transforms label filters (with Swarm syntax) into their Kubernetes counterpart. I don't think there is any function in kubernetes/kubernetes for that, as kubectl users write their filters directly in Kubernetes syntax, and those are resolved server side IIRC, so kubectl doesn't have to parse or interpret them.

}
result = podsList.Items
} else {
result = []apiv1.Pod{}
Member

Is this actually needed? I suspect var result []apiv1.Pod should be enough.

func fetchPods(stackName string, pods corev1.PodInterface, f filters.Args) ([]apiv1.Pod, error) {
services := f.Get("service")
nodes := f.Get("node")
// for existing script compatibility, support either <servicename> or <stackname>_<servicename> format
Member

What is the "existing script functionality"? Is that only in experimental versions? If so, we're ok to break old behavior, and we can remove this hack.

Member

"existing script functionality" here means existing user scripts that target Swarm

// name filtering is done client side for matching partials

for i := len(result) - 1; i >= 0; i-- {
fullName := fmt.Sprintf("%s_%s", stackName, result[i].Name)
Member

This could be

fullName := stackNamePrefix + result[i].Name


func print(dockerCli command.Cli, options options.PS, namespace string, client corev1.NodesGetter, pods []apiv1.Pod) error {
Member

My IDE was complaining that print collides with the builtin print() function; perhaps rename to printTasks()

// Fetch pods
if _, err := stacks.Get(namespace); err != nil {
return fmt.Errorf("nothing found in stack: %s", namespace)
filters := options.Filter.Value()
Member

Might want to move this more to the top; i.e. validate filters/input values before initialising the clients

for i, task := range tasks {
nodeValue, err := nodeResolver(pods[i].Spec.NodeName)
nodeValue, err := resolveNode(options.NoResolve, client.Nodes(), pods[i].Spec.NodeName)
Member

Should client.Nodes() be moved outside of the loop?

Also, the order of arguments is a bit odd; perhaps put name first.

if err != nil {
return "", err
}
if len(n.Items) != 1 {
Member

Could more than 1 be returned? (i.e., can node names be ambiguous?)

@mat007 (Member), May 11, 2018

In the docs it says "Kubernetes will create a node object internally (the representation), and validate the node by health checking based on the metadata.name field (we assume metadata.name can be resolved)."
and the example shows

"metadata": {
   "name": "10.240.79.157",

so I would assume that if the name is not unique, the duplicate node attempting to join the cluster will fail.

Member

Sorry, missed your comment: so should we skip this validation (as k8s already handles this), or keep it?


for _, n := range names {
if strings.HasPrefix(n, stackName+"_") {
// also accept an unprefixed service name (for compat with existing swarm scripts where service names are prefixed by stack names)


// RunServices is the kubernetes implementation of docker stack services
func RunServices(dockerCli *KubeCli, opts options.Services) error {
// Initialize clients
Member

Comment is a bit superfluous


replicasList, err := replicas.List(metav1.ListOptions{LabelSelector: labels.SelectorForStack(opts.Namespace)})
if err != nil {
filters := opts.Filter.Value()
Member

Best move this to the top; no need to initialise clients if filters are invalid

@mat007 force-pushed the k8s-stack-services-filters branch from 646446e to b759869 on May 11, 2018 14:51
@mat007
Member

mat007 commented May 11, 2018

@thaJeztah @wsong I believe I have incorporated your suggestions, thank you both for your very helpful reviews. Would you care to take another look? Cheers!

@mat007
Member

mat007 commented May 11, 2018

Oh, by the way, here is the test suite I have been using for regression testing.
I know it depends heavily on my environment; e2e tests are being worked on and should cover all this shortly.

+ ./build/docker.exe --orchestrator=kubernetes stack services dtcccccc
ID                  NAME                  MODE                REPLICAS            IMAGE                 PORTS
63662850-551        dtcccccc_backend      replicated          0/1                 docker-teaches-code   *:8081->8081/tcp
637c5e01-551        dtcccccc_cpp          replicated          0/1                 dtc-cpp               
638dd704-551        dtcccccc_dockerfile   replicated          0/1                 dtc-dockerfile        
63a3c53a-551        dtcccccc_golang       replicated          0/1                 dtc-golang            
63b2242a-551        dtcccccc_python       replicated          0/1                 dtc-python            
+ ./build/docker.exe --orchestrator=kubernetes stack services -f name=dtc dtcccccc
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
+ ./build/docker.exe --orchestrator=kubernetes stack services -f name=dtcccccc_py dtcccccc
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
+ ./build/docker.exe --orchestrator=kubernetes stack services -f name=dtcccccc_python dtcccccc
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
63b2242a-551        dtcccccc_python     replicated          0/1                 dtc-python          
+ ./build/docker.exe --orchestrator=kubernetes stack services -f name=python dtcccccc
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
63b2242a-551        dtcccccc_python     replicated          0/1                 dtc-python          
+ ./build/docker.exe --orchestrator=kubernetes stack services -f name=thon dtcccccc
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
+ ./build/docker.exe --orchestrator=kubernetes stack ps '--format=table {{.ID}}\t{{.Name}}\t{{.Node}}' dtcccccc
ID                  NAME                                   NODE
635e19bc-551        dtcccccc_backend-695899db65-tptbg      docker-for-desktop
6379fe84-551        dtcccccc_cpp-5b7bb8d649-sb8z4          docker-for-desktop
6389d833-551        dtcccccc_dockerfile-6f975bf945-bpxkl   docker-for-desktop
63a1c880-551        dtcccccc_golang-6d94cc9cd9-brx9r       docker-for-desktop
63b1ccdc-551        dtcccccc_python-57b458bb85-wqgzp       docker-for-desktop
+ ./build/docker.exe --orchestrator=kubernetes stack ps '--format=table {{.ID}}\t{{.Name}}\t{{.Node}}' -f name=dtc dtcccccc
ID                  NAME                                   NODE
635e19bc-551        dtcccccc_backend-695899db65-tptbg      docker-for-desktop
6379fe84-551        dtcccccc_cpp-5b7bb8d649-sb8z4          docker-for-desktop
6389d833-551        dtcccccc_dockerfile-6f975bf945-bpxkl   docker-for-desktop
63a1c880-551        dtcccccc_golang-6d94cc9cd9-brx9r       docker-for-desktop
63b1ccdc-551        dtcccccc_python-57b458bb85-wqgzp       docker-for-desktop
+ ./build/docker.exe --orchestrator=kubernetes stack ps '--format=table {{.ID}}\t{{.Name}}\t{{.Node}}' -f name=dtcccccc_cpp-5b7bb8d649-sb8z4 dtcccccc
ID                  NAME                            NODE
6379fe84-551        dtcccccc_cpp-5b7bb8d649-sb8z4   docker-for-desktop
+ ./build/docker.exe --orchestrator=kubernetes stack ps '--format=table {{.ID}}\t{{.Name}}\t{{.Node}}' -f name=python dtcccccc
nothing found in stack: dtcccccc
+ ./build/docker.exe --orchestrator=kubernetes stack ps '--format=table {{.ID}}\t{{.Name}}\t{{.Node}}' -f name=thon dtcccccc
nothing found in stack: dtcccccc
+ ./build/docker.exe --orchestrator=kubernetes stack ps '--format=table {{.ID}}\t{{.Name}}\t{{.Node}}' -f name=py -f name=cp dtcccccc
nothing found in stack: dtcccccc
+ ./build/docker.exe --orchestrator=kubernetes stack ps '--format=table {{.ID}}\t{{.Name}}\t{{.Node}}' -f node=bla dtcccccc
nothing found in stack: dtcccccc
+ ./build/docker.exe --orchestrator=kubernetes stack ps '--format=table {{.ID}}\t{{.Name}}\t{{.Node}}' -f node=docker dtcccccc
nothing found in stack: dtcccccc
+ ./build/docker.exe --orchestrator=kubernetes stack ps '--format=table {{.ID}}\t{{.Name}}\t{{.Node}}' -f node=desktop dtcccccc
nothing found in stack: dtcccccc
+ ./build/docker.exe --orchestrator=kubernetes stack ps '--format=table {{.ID}}\t{{.Name}}\t{{.Node}}' -f node=docker-for-desktop dtcccccc
ID                  NAME                                   NODE
635e19bc-551        dtcccccc_backend-695899db65-tptbg      docker-for-desktop
6379fe84-551        dtcccccc_cpp-5b7bb8d649-sb8z4          docker-for-desktop
6389d833-551        dtcccccc_dockerfile-6f975bf945-bpxkl   docker-for-desktop
63a1c880-551        dtcccccc_golang-6d94cc9cd9-brx9r       docker-for-desktop
63b1ccdc-551        dtcccccc_python-57b458bb85-wqgzp       docker-for-desktop

@mat007
Member

mat007 commented May 11, 2018

And similar tests for Swarm:

+ ./build/docker.exe --orchestrator=swarm stack services dtc
ID                  NAME                MODE                REPLICAS            IMAGE                        PORTS
7g4y3usaihb8        dtc_golang          replicated          0/1                 dtc-golang:latest            
87do8nsnv8dl        dtc_backend         replicated          0/1                 docker-teaches-code:latest   *:8081->8081/tcp
j7kd43n73q62        dtc_python          replicated          0/1                 dtc-python:latest            
rse2qpv9t3jo        dtc_cpp             replicated          0/1                 dtc-cpp:latest               
wt4okbue8nu6        dtc_dockerfile      replicated          0/1                 dtc-dockerfile:latest        
+ ./build/docker.exe --orchestrator=swarm stack services -f name=dtc dtc
ID                  NAME                MODE                REPLICAS            IMAGE                        PORTS
87do8nsnv8dl        dtc_backend         replicated          0/1                 docker-teaches-code:latest   *:8081->8081/tcp
rse2qpv9t3jo        dtc_cpp             replicated          0/1                 dtc-cpp:latest               
wt4okbue8nu6        dtc_dockerfile      replicated          0/1                 dtc-dockerfile:latest        
7g4y3usaihb8        dtc_golang          replicated          0/1                 dtc-golang:latest            
j7kd43n73q62        dtc_python          replicated          0/1                 dtc-python:latest            
+ ./build/docker.exe --orchestrator=swarm stack services -f name=dtc_py dtc
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
j7kd43n73q62        dtc_python          replicated          0/1                 dtc-python:latest   
+ ./build/docker.exe --orchestrator=swarm stack services -f name=dtc_python dtc
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
j7kd43n73q62        dtc_python          replicated          0/1                 dtc-python:latest   
+ ./build/docker.exe --orchestrator=swarm stack services -f name=python dtc
Nothing found in stack: dtc
+ ./build/docker.exe --orchestrator=swarm stack services -f name=thon dtc
Nothing found in stack: dtc
+ ./build/docker.exe --orchestrator=swarm stack ps '--format=table {{.ID}}\t{{.Name}}\t{{.Node}}' dtc
ID                  NAME                   NODE
yxb9ps6qn4rv        dtc_dockerfile.1       linuxkit-00155d010c1d
iyit6i1pxszh        dtc_golang.1           linuxkit-00155d010c1d
sdfn21yxykr6        dtc_backend.1          linuxkit-00155d010c1d
nhb9yz91qksp        dtc_cpp.1              linuxkit-00155d010c1d
kv16awmhy2dj        dtc_python.1           linuxkit-00155d010c1d
zv616ve50qvu        dtc_golang.1           linuxkit-00155d010c1d
zxad3atlgsdq        dtc_backend.1          linuxkit-00155d010c1d
zypkkte7difa         \_ dtc_backend.1      linuxkit-00155d010c1d
zx7v9fytv6bn        dtc_cpp.1              linuxkit-00155d010c1d
zwo3sus0wm6d        dtc_golang.1           linuxkit-00155d010c1d
zyjijpeis8kv        dtc_python.1           linuxkit-00155d010c1d
zwv7v2sqodk2        dtc_cpp.1              linuxkit-00155d010c1d
zzetzphqa72d        dtc_backend.1          linuxkit-00155d010c1d
zzs4ie7iwdbi        dtc_python.1           linuxkit-00155d010c1d
zyybc2d3yh36        dtc_dockerfile.1       linuxkit-00155d010c1d
zzcb139dl84o        dtc_python.1           linuxkit-00155d010c1d
zwtkroy79205        dtc_golang.1           linuxkit-00155d010c1d
zzavcgawbaed        dtc_dockerfile.1       linuxkit-00155d010c1d
zyi2fw033rvz         \_ dtc_dockerfile.1   linuxkit-00155d010c1d
zyeigdoihon2        dtc_python.1           linuxkit-00155d010c1d
zy4spdwf6dgc        dtc_golang.1           linuxkit-00155d010c1d
zy6vbgx3azd8        dtc_dockerfile.1       linuxkit-00155d010c1d
zwx3xh29qaet        dtc_cpp.1              linuxkit-00155d010c1d
zxzx6iejy2pi        dtc_backend.1          linuxkit-00155d010c1d
zx6ajt8bvoh0        dtc_cpp.1              linuxkit-00155d010c1d
+ ./build/docker.exe --orchestrator=swarm stack ps '--format=table {{.ID}}\t{{.Name}}\t{{.Node}}' -f name=dtc dtc
ID                  NAME                   NODE
lzvydksu418n        dtc_cpp.1              
ndq9mekv99lb        dtc_python.1           linuxkit-00155d010c1d
yxb9ps6qn4rv        dtc_dockerfile.1       linuxkit-00155d010c1d
iyit6i1pxszh        dtc_golang.1           linuxkit-00155d010c1d
sdfn21yxykr6        dtc_backend.1          linuxkit-00155d010c1d
nhb9yz91qksp        dtc_cpp.1              linuxkit-00155d010c1d
kv16awmhy2dj        dtc_python.1           linuxkit-00155d010c1d
zv616ve50qvu        dtc_golang.1           linuxkit-00155d010c1d
zxad3atlgsdq        dtc_backend.1          linuxkit-00155d010c1d
zypkkte7difa         \_ dtc_backend.1      linuxkit-00155d010c1d
zx7v9fytv6bn        dtc_cpp.1              linuxkit-00155d010c1d
zwo3sus0wm6d        dtc_golang.1           linuxkit-00155d010c1d
zyjijpeis8kv        dtc_python.1           linuxkit-00155d010c1d
zwv7v2sqodk2        dtc_cpp.1              linuxkit-00155d010c1d
zzetzphqa72d        dtc_backend.1          linuxkit-00155d010c1d
zzs4ie7iwdbi        dtc_python.1           linuxkit-00155d010c1d
zyybc2d3yh36        dtc_dockerfile.1       linuxkit-00155d010c1d
zzcb139dl84o        dtc_python.1           linuxkit-00155d010c1d
zwtkroy79205        dtc_golang.1           linuxkit-00155d010c1d
zzavcgawbaed        dtc_dockerfile.1       linuxkit-00155d010c1d
zyi2fw033rvz         \_ dtc_dockerfile.1   linuxkit-00155d010c1d
zyeigdoihon2        dtc_python.1           linuxkit-00155d010c1d
zy4spdwf6dgc        dtc_golang.1           linuxkit-00155d010c1d
zy6vbgx3azd8        dtc_dockerfile.1       linuxkit-00155d010c1d
zwx3xh29qaet        dtc_cpp.1              linuxkit-00155d010c1d
zxzx6iejy2pi        dtc_backend.1          linuxkit-00155d010c1d
zx6ajt8bvoh0        dtc_cpp.1              linuxkit-00155d010c1d
+ ./build/docker.exe --orchestrator=swarm stack ps '--format=table {{.ID}}\t{{.Name}}\t{{.Node}}' -f name=dtc_python.1 dtc
ID                  NAME                NODE
ndq9mekv99lb        dtc_python.1        linuxkit-00155d010c1d
kv16awmhy2dj         \_ dtc_python.1    linuxkit-00155d010c1d
zyjijpeis8kv         \_ dtc_python.1    linuxkit-00155d010c1d
zzs4ie7iwdbi         \_ dtc_python.1    linuxkit-00155d010c1d
zzcb139dl84o         \_ dtc_python.1    linuxkit-00155d010c1d
zyeigdoihon2         \_ dtc_python.1    linuxkit-00155d010c1d
+ ./build/docker.exe --orchestrator=swarm stack ps '--format=table {{.ID}}\t{{.Name}}\t{{.Node}}' -f name=dtc_py dtc
ID                  NAME                NODE
ndq9mekv99lb        dtc_python.1        linuxkit-00155d010c1d
kv16awmhy2dj         \_ dtc_python.1    linuxkit-00155d010c1d
zyjijpeis8kv         \_ dtc_python.1    linuxkit-00155d010c1d
zzs4ie7iwdbi         \_ dtc_python.1    linuxkit-00155d010c1d
zzcb139dl84o         \_ dtc_python.1    linuxkit-00155d010c1d
zyeigdoihon2         \_ dtc_python.1    linuxkit-00155d010c1d
+ ./build/docker.exe --orchestrator=swarm stack ps '--format=table {{.ID}}\t{{.Name}}\t{{.Node}}' -f name=python dtc
nothing found in stack: dtc
+ ./build/docker.exe --orchestrator=swarm stack ps '--format=table {{.ID}}\t{{.Name}}\t{{.Node}}' -f name=python.1 dtc
nothing found in stack: dtc
+ ./build/docker.exe --orchestrator=swarm stack ps '--format=table {{.ID}}\t{{.Name}}\t{{.Node}}' -f name=thon dtc
nothing found in stack: dtc
+ ./build/docker.exe --orchestrator=swarm stack ps '--format=table {{.ID}}\t{{.Name}}\t{{.Node}}' -f name=py -f name=cp dtc
nothing found in stack: dtc
+ ./build/docker.exe --orchestrator=swarm stack ps '--format=table {{.ID}}\t{{.Name}}\t{{.Node}}' -f node=bla dtc
Error response from daemon: node bla not found
+ ./build/docker.exe --orchestrator=swarm stack ps '--format=table {{.ID}}\t{{.Name}}\t{{.Node}}' -f node=linuxkit dtc
Error response from daemon: node linuxkit not found
+ ./build/docker.exe --orchestrator=swarm stack ps '--format=table {{.ID}}\t{{.Name}}\t{{.Node}}' -f node=kit dtc
Error response from daemon: node kit not found
+ ./build/docker.exe --orchestrator=swarm stack ps '--format=table {{.ID}}\t{{.Name}}\t{{.Node}}' -f node=linuxkit-00155d010c1d dtc
ID                  NAME                   NODE
u3znpk8tsskh        dtc_cpp.1              linuxkit-00155d010c1d
ndq9mekv99lb        dtc_python.1           linuxkit-00155d010c1d
yxb9ps6qn4rv        dtc_dockerfile.1       linuxkit-00155d010c1d
iyit6i1pxszh        dtc_golang.1           linuxkit-00155d010c1d
sdfn21yxykr6        dtc_backend.1          linuxkit-00155d010c1d
zv616ve50qvu        dtc_golang.1           linuxkit-00155d010c1d
zxad3atlgsdq        dtc_backend.1          linuxkit-00155d010c1d
zypkkte7difa         \_ dtc_backend.1      linuxkit-00155d010c1d
zx7v9fytv6bn        dtc_cpp.1              linuxkit-00155d010c1d
zwo3sus0wm6d        dtc_golang.1           linuxkit-00155d010c1d
zyjijpeis8kv        dtc_python.1           linuxkit-00155d010c1d
zwv7v2sqodk2        dtc_cpp.1              linuxkit-00155d010c1d
zzetzphqa72d        dtc_backend.1          linuxkit-00155d010c1d
zzs4ie7iwdbi        dtc_python.1           linuxkit-00155d010c1d
zyybc2d3yh36        dtc_dockerfile.1       linuxkit-00155d010c1d
zzcb139dl84o        dtc_python.1           linuxkit-00155d010c1d
zwtkroy79205        dtc_golang.1           linuxkit-00155d010c1d
zzavcgawbaed        dtc_dockerfile.1       linuxkit-00155d010c1d
zyi2fw033rvz         \_ dtc_dockerfile.1   linuxkit-00155d010c1d
zyeigdoihon2        dtc_python.1           linuxkit-00155d010c1d
zy4spdwf6dgc        dtc_golang.1           linuxkit-00155d010c1d
zy6vbgx3azd8        dtc_dockerfile.1       linuxkit-00155d010c1d
zwx3xh29qaet        dtc_cpp.1              linuxkit-00155d010c1d
zxzx6iejy2pi        dtc_backend.1          linuxkit-00155d010c1d
zx6ajt8bvoh0        dtc_cpp.1              linuxkit-00155d010c1d

for _, pod := range podsList.Items {
if filterPod(pod, nodes) &&
// name filter is done client side for matching partials
f.FuzzyMatch("name", stackNamePrefix+pod.Name) {
Contributor

So does this mean that we require the name field to be like --filter name=<stack name prefix> + <pod name>?

Member

This is a FuzzyMatch, meaning that with a stackNamePrefix of dtcccccc and a pod.Name of backend-695899db65-tptbg, all of these --filter values will match:

  • name=dtc
  • name=dtcccccc
  • name=dtcccccc-backend
  • name=dtcccccc-backend-695899db65-tptbg

However this one won't:

  • name=backend

This seems to be consistent with the behavior of --orchestrator=swarm; however, @simonferquel initially put Match here, thus allowing the following filters to match:

  • name=backend
  • name=ccc

etc…

I'm not sure which solution we want; the specification document was a bit vague and gave no example of expected outputs:

  • Best effort mapping of --filter flag
    • Best effort as some filters will not map perfectly
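
For reference, a small sketch of the fuzzy-match semantics described above, written out with plain strings (the real code uses filters.Args.FuzzyMatch; the names here are illustrative):

// fuzzyNameMatch reports whether a --filter name=<value> matches a task whose
// full name is <stack name> + "_" + <pod name>: exact match or prefix match.
func fuzzyNameMatch(stackName, podName, filter string) bool {
	fullName := stackName + "_" + podName
	return strings.HasPrefix(fullName, filter)
}

// With stackName "dtcccccc" and podName "backend-695899db65-tptbg":
//   fuzzyNameMatch(..., "dtc")                               -> true  (prefix)
//   fuzzyNameMatch(..., "dtcccccc_backend")                  -> true  (prefix)
//   fuzzyNameMatch(..., "dtcccccc_backend-695899db65-tptbg") -> true  (exact)
//   fuzzyNameMatch(..., "backend")                           -> false (not a prefix)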

Contributor

Wait, aren't you just concatenating the stackNamePrefix and pod.Name with stackNamePrefix+pod.Name? You said that dtcccccc-backend should match, implying there should be a hyphen between stackNamePrefix and pod.Name, but that doesn't seem to be the case.

Member

stackNamePrefix actually ends with a hyphen; it's defined a few lines before as

stackNamePrefix := stackName + "_"

Contributor

Not sure if this is intentional, but that's an underscore, not a hyphen.

Member

Sorry for the confusion, I made an error when writing the examples in #1023 (comment): the hyphens should have been underscores, i.e.

with a `stackNamePrefix` of `dtcccccc` and a `pod.Name` of `backend-695899db65-tptbg` all these `--filter` will match:
- `name=dtc`
- `name=dtcccccc`
- `name=dtcccccc_backend`
- `name=dtcccccc_backend-695899db65-tptbg`

@thaJeztah
Member

This needs a rebase

@mat007 force-pushed the k8s-stack-services-filters branch from b759869 to 1e6ea25 on May 15, 2018 19:19
@mat007
Member

mat007 commented May 15, 2018

This needs a rebase

Done!

@silvin-lubecki
Contributor

@vdemeester can you take another look at this PR? 👋

return "", fmt.Errorf("could not find node '%s'", name)
}
return string(n.Items[0].UID), nil
n, err := nodes.List(metav1.ListOptions{
Contributor

We're going to call this function for every single task, which is going to be super slow with large numbers of tasks. Is there any way we could just do a single nodes.List call?

Member

Ah yes, well spotted… I've changed the code to request the nodes only once and loop on the tasks afterwards.
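
Something along these lines (a sketch; apiv1 is k8s.io/api/core/v1, and the helper name is illustrative):

// resolveNodeID looks a node up by name in an already-fetched list, so the API
// server is queried once for all tasks instead of once per task.
func resolveNodeID(nodes []apiv1.Node, name string) (string, error) {
	for _, node := range nodes {
		if node.Name == name {
			return string(node.UID), nil
		}
	}
	return "", fmt.Errorf("could not find node '%s'", name)
}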

return "", err
}
if len(n.Items) != 1 {
return "", fmt.Errorf("could not find node '%s'", name)
Member

If we keep this, consider changing it to something that mentions we found nodes, but that the name is ambiguous, e.g.:

fmt.Errorf("node name is ambiguous: %q returned %d nodes", name, len(n.Items))

Member

Kubernetes manages everything by name, so there is no chance of two nodes having the same name.
Also, the code has changed now: the nodes are fetched once into a slice, which is then used to resolve the name with a clearer plain == comparison.

@mat007 force-pushed the k8s-stack-services-filters branch from de5009b to 249a3f3 on May 18, 2018 14:21
@mat007
Member

mat007 commented May 18, 2018

I squashed all commits. I also updated stack_services.md and would really like at least another pair of eyeballs to double-check it: https://github.com/docker/cli/pull/1023/files#diff-0a1345791b0bc60882c42c271f35e3d4

if err != nil {
return nil, nil, err
}
result[i+len(replicas.Items)] = *s
Member

Was there a reason for not doing:

result = append(result, *s)

result = append(result, fmt.Sprintf("%s=%s", k, v[0]))
} else {
sort.Strings(v)
result = append(result, fmt.Sprintf("%s in (%s)", k, strings.Join(v, ",")))
Member

Is the in (x) operator less efficient than x=y, or will the selector be smart enough to optimise if there's a single value?

If so, we could simplify to:

for k, v := range valuedLabels {
    sort.Strings(v)
    result = append(result, fmt.Sprintf("%s in (%s)", k, strings.Join(v, ",")))
}

Contributor

I honestly don't know if it's really less efficient (we are talking about 1 equality vs a list of 1...)

var presentLabels []string
valuedLabels := map[string][]string{}
for _, rawLabel := range rawFilters {
equalIndex := strings.Index(rawLabel, "=")
@thaJeztah (Member), May 18, 2018

This can be simplified by using strings.SplitN():

func parseLabelFilters(rawFilters []string) ([]string, map[string][]string) {
	var presentLabels []string
	valuedLabels := map[string][]string{}
	for _, rawLabel := range rawFilters {
		v := strings.SplitN(rawLabel, "=", 2)
		key := v[0]
		if len(v) > 1 {
			valuedLabels[key] = append(valuedLabels[key], v[1])
			continue
		}
		presentLabels = append(presentLabels, key)
	}
	return presentLabels, valuedLabels
}

However, filters can then appear both as a valuedLabel and as a presentLabel; is that expected? See https://play.golang.org/p/EuJk1d7b5Ts

If that's not the intent, we could treat presentLabels and valuedLabels as identical, a presentLabel just being a valuedLabel without a value, and change generateValuedLabelsSelector() to generateSelector(), something like:

func generateSelector(labels map[string][]string) []string {
	var result []string
	for k, v := range labels {
		switch len(v) {
		case 0:
			result = append(result, k)
		case 1:
			result = append(result, fmt.Sprintf("%s=%s", k, v[0]))
		default:
			sort.Strings(v)
			result = append(result, fmt.Sprintf("%s in (%s)", k, strings.Join(v, ",")))
		}
	}
	return result
}

Collaborator

However, filters can then appear both as a valuedLabel and as a presentLabel; is that expected? See https://play.golang.org/p/EuJk1d7b5Ts

Good point. On docker/swarm it's basically an AND, i.e. --filter=label=foo --filter=label=foo=bar is the same as --filter=label=foo=bar. I think we should do/keep the same, and thus presentLabels and valuedLabels shouldn't have anything in common 👼

So yeah, we should treat them as identical and update/fix generateSelector instead 👍
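
One way to do that (a sketch, names assumed): fold the presence-only labels into the valued map before generating the selector, so a key never shows up in both.

// mergeLabelFilters folds presence-only label filters into the valued-label map,
// so --filter label=foo --filter label=foo=bar collapses into the single
// requirement foo=bar, matching Swarm's AND semantics.
func mergeLabelFilters(presentLabels []string, valuedLabels map[string][]string) map[string][]string {
	merged := make(map[string][]string, len(valuedLabels))
	for key, values := range valuedLabels {
		merged[key] = append(merged[key], values...)
	}
	for _, key := range presentLabels {
		if _, ok := merged[key]; !ok {
			merged[key] = nil // keep as a presence-only requirement
		}
	}
	return merged
}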

return nil, nil, nil, err
}
} else {
replicasList = &appsv1beta2.ReplicaSetList{}
Member

Is this needed (or done already by var replicasList *appsv1beta2.ReplicaSetList ?)

return nil, nil, nil, err
}
} else {
daemonsList = &appsv1beta2.DaemonSetList{}
Member

Is this needed (or done already by var daemonsList *appsv1beta2.DaemonSetList?)

@thaJeztah
Member

Vendor check is somehow failing:

 D vendor/k8s.io/apimachinery/pkg/apimachinery/announced/announced.go
 D vendor/k8s.io/apimachinery/pkg/apimachinery/announced/group_factory.go
 D vendor/k8s.io/apimachinery/pkg/apimachinery/doc.go
 D vendor/k8s.io/apimachinery/pkg/apimachinery/registered/registered.go
 D vendor/k8s.io/apimachinery/pkg/apimachinery/types.go
 D vendor/k8s.io/kubernetes/pkg/api/annotation_key_constants.go
 D vendor/k8s.io/kubernetes/pkg/api/doc.go
 D vendor/k8s.io/kubernetes/pkg/api/field_constants.go
 D vendor/k8s.io/kubernetes/pkg/api/json.go
 D vendor/k8s.io/kubernetes/pkg/api/objectreference.go
 D vendor/k8s.io/kubernetes/pkg/api/register.go
 D vendor/k8s.io/kubernetes/pkg/api/resource.go
 D vendor/k8s.io/kubernetes/pkg/api/taint.go
 D vendor/k8s.io/kubernetes/pkg/api/toleration.go
 D vendor/k8s.io/kubernetes/pkg/api/types.go
 D vendor/k8s.io/kubernetes/pkg/api/zz_generated.deepcopy.go

@mat007
Member

mat007 commented May 18, 2018

Not sure what's going wrong with the vendoring; the PR does not change any file in vendor/

@wsong (Contributor) left a comment

I think you need to re-run vndr. You removed some imports in the code which means some packages in vendor/ are no longer necessary.

return "", fmt.Errorf("could not find node '%s'", name)
for _, node := range nodes.Items {
if node.Name == name {
return string(node.UID), nil
Contributor

Is node.UID really what we want? In Kube, as you've already pointed out, names are the IDs, so there isn't really an analogue to the ID-vs-hostname split of Swarmkit (in Swarmkit you can have two nodes with the same name, but that's not possible in Kube). As such, I think it'd make more sense to just ignore noResolve for Kube instead of using this UID, which isn't used anywhere else in the system.

Contributor

It is more about having the same UX as Swarm than being really useful... Sure, we can remove the noResolve flag and simplify the code, but in that case wouldn't we break some Swarm scripts which use this flag? From the start we chose to stick to the Swarm UX, even if it is not very relevant sometimes. But maybe we are wrong and we should simplify where we can, WDYT?

Contributor

Well, we can keep the flag, but I think when we translate Kube nodes to Swarm nodes, the ID field and the name field should contain the same value (i.e. the name of the node in Kube). There's no reason to expose the node.UID field in docker node ls; users will never need to know the value of node.UID for the purposes of docker node ls. It's not important that the ID field contains an opaque, alphanumeric string; what's important is that it's unique, and in Kube the name is unique.

The same goes for everywhere else we use UID in this PR.

Contributor

Thank you @wsong, but I think this is a different issue then. I opened a new issue #1075 to take care of it in a follow-up so we can move forward on this PR. Does that sound good to you?

@mat007 force-pushed the k8s-stack-services-filters branch 3 times, most recently from b7285d4 to af9f0d8 on May 22, 2018 07:42
),
expectedSelectorParts: []string{
"com.docker.stack.namespace=test",
"label1 in (test,test2)",
Collaborator

If we want to have similar behavior between Swarm and k8s on the filter, this test case is wrong.

  • --filter=label=foo=bar --filter=label=foo=baz translates to foo being equal to both bar and baz (which can't happen, but that's another story)
  • foo in (bar, baz) in k8s translates to foo being equal to bar or baz.

Contributor

Ok, good point. What do you suggest to fix this? As it seems the AND is impossible in Swarm, maybe we shouldn't be able to write it? My point is that the OR makes more sense, so maybe we should error out on the Swarm side and keep this valid for Kubernetes?

Member

We should have a look (separate from this PR) at improving the filter design, and have syntax for "and", "or", but also (regex) matching, negative matches and so on. (I was discussing that with @vdemeester yesterday, as it keeps coming up.)

My point is that the OR makes more sense

Depends how you look at it; the "filter" was designed to reduce results until you have a match, so the more filters you add, the fewer results you get.

For name, yes, I agree, there's likely not much use, but for labels (e.g.) definitely (show me all objects having both label-x and label-y). On the other hand, in what cases would I be looking at a stack named "foo" or "bar"?

@mat007 force-pushed the k8s-stack-services-filters branch from af9f0d8 to 20657db on May 23, 2018 14:15
@mat007
Member

mat007 commented May 23, 2018

Here is an update addressing the suggested refactorings (all comments to which I added a 👍).
For the record, my local tests are as follows, with two services: backend has a mylabel:mat label and toto a mylabel:mot label:

+ ./build/docker.exe stack services --filter label=bla dtc
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
+ ./build/docker.exe stack services --filter label=mat dtc
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
+ ./build/docker.exe stack services --filter label=mylabel dtc
ID                  NAME                MODE                REPLICAS            IMAGE                 PORTS
0543f8fe-5e8        dtc_backend         replicated          0/1                 docker-teaches-code   *:8081->8081/tcp
5970841e-5e8        dtc_toto            replicated          0/1                 docker-teaches-code   *:8082->8082/tcp
+ ./build/docker.exe stack services --filter label=mylabel=bla dtc
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
+ ./build/docker.exe stack services --filter label=mylabel=mat dtc
ID                  NAME                MODE                REPLICAS            IMAGE                 PORTS
0543f8fe-5e8        dtc_backend         replicated          0/1                 docker-teaches-code   *:8081->8081/tcp
+ ./build/docker.exe stack services --filter label=mylabel=mat --filter label=mylabel=mot dtc
ID                  NAME                MODE                REPLICAS            IMAGE                 PORTS
0543f8fe-5e8        dtc_backend         replicated          0/1                 docker-teaches-code   *:8081->8081/tcp
5970841e-5e8        dtc_toto            replicated          0/1                 docker-teaches-code   *:8082->8082/tcp
+ ./build/docker.exe stack services --filter label=mylabel --filter label=mylabel=mat dtc
ID                  NAME                MODE                REPLICAS            IMAGE                 PORTS
0543f8fe-5e8        dtc_backend         replicated          0/1                 docker-teaches-code   *:8081->8081/tcp
+ ./build/docker.exe stack services --filter label=mylabel=mat --filter label=mylabel dtc
ID                  NAME                MODE                REPLICAS            IMAGE                 PORTS
0543f8fe-5e8        dtc_backend         replicated          0/1                 docker-teaches-code   *:8081->8081/tcp

Thanks!

Signed-off-by: Mathieu Champlon <mathieu.champlon@docker.com>

@mat007
Member

mat007 commented May 24, 2018

@thaJeztah @vdemeester as suggested, I added a commit to match Swarm and Kubernetes behavior when combining filters, so we can move forward here.

`docker stack services --filter=label=foo=bar --filter=label=foo=baz my-stack` with Swarm gets handled as: filter on (a label named foo with value bar) AND (a label named foo with value baz).
This obviously yields an empty result set every time, but whether and how this should be changed is out of scope here, so simply align Kubernetes with Swarm for now.
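
Concretely, this means emitting one equality requirement per value and letting Kubernetes AND the comma-separated requirements; a sketch of the idea (not the merged code):

// andedLabelSelector produces one requirement per value. Kubernetes ANDs
// comma-separated requirements, so foo=bar,foo=baz (like the Swarm filter
// combination above) can never match anything, which is the aligned behavior.
// Assumes "fmt", "sort" and "strings" are imported.
func andedLabelSelector(key string, values []string) string {
	sort.Strings(values) // deterministic output
	requirements := make([]string, 0, len(values))
	for _, value := range values {
		requirements = append(requirements, fmt.Sprintf("%s=%s", key, value))
	}
	return strings.Join(requirements, ",")
}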

Signed-off-by: Mathieu Champlon <mathieu.champlon@docker.com>
@thaJeztah (Member) left a comment

LGTM, thanks everyone!

@vdemeester (Collaborator) left a comment

LGTM 🐯

@thaJeztah merged commit 0089f17 into docker:master on May 24, 2018
@GordonTheTurtle added this to the 18.06.0 milestone on May 24, 2018