
Use Docker to host most server components #19

Closed
1 of 7 tasks
jbeda opened this issue Jun 8, 2014 · 21 comments
Labels
area/build-release; area/docker; priority/awaiting-more-evidence (lowest priority: possibly useful, but not yet enough support to actually get it done); sig/node (categorizes an issue or PR as relevant to SIG Node)

Comments

@jbeda
Contributor

jbeda commented Jun 8, 2014

Right now we use salt to distribute and start most of the server components for the cluster.

Instead, we should do the following:

  • Only support building on a Linux machine with Docker installed. Perhaps support local development on a Mac with a local Linux VM.
  • Package each server component up as a Docker image, built with a Dockerfile.
    • Support uploading these Docker images to either the public index or a GCS-backed index with google/docker-registry. Or GCR?
    • Minimize Docker image size by switching to single-layer static Go binary images (see the sketch after this list).
    • Support docker save to generate tars of the Docker image(s) for dev/private development.
  • Use the kubelet to run and health-check the components. This means the kubelet will manage a set of static tasks on each machine (including the master) and a set of dynamic tasks.
  • The only task that shouldn't run under Docker is the kubelet itself. We may have to hack in something like (network mode = host) for the proxy.
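
A minimal sketch of such a single-layer image, assuming a statically linked binary (the binary name and paths here are hypothetical):

# Build the static binary first, e.g.: CGO_ENABLED=0 go build -o apiserver .
FROM scratch
ADD apiserver /apiserver
ENTRYPOINT ["/apiserver"]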
@proppy
Contributor

proppy commented Jun 8, 2014


Regarding local development on a Mac with a local Linux VM: we could link to the Docker instructions for boot2docker: http://docs.docker.io/installation/mac/


@monnand
Contributor

monnand commented Jun 11, 2014

I have some questions, just out of curiosity:

  • Does this mean that salt will not be used in the future?
  • About the Kubernetes Docker containers: will they contain only compiled binaries, or the source code as well? More precisely, are you going to put the build process into the Dockerfile?

Thank you!

@jbeda
Contributor Author

jbeda commented Jun 11, 2014

@monnand I imagine that we will continue to use salt to bootstrap stuff, but we'll be able to reduce some of the more complex salt config.

For example, we currently ship and compile the source everywhere it is run. If we start building Docker images, we can precompile the binaries before they are run.

I'm thinking that we'll follow the example of Docker itself and do the build process in Docker containers. This, with boot2docker, could lead to a good dev flow for Mac OS X.

@monnand
Contributor

monnand commented Jun 11, 2014

@jbeda Thank you!

You also mentioned that the kubelet should not run under Docker. Is that for a technical reason, or something else? I don't see any technical difficulty in running the kubelet in Docker. Or did I miss something?

@jbeda
Contributor Author

jbeda commented Jun 11, 2014

We may be able to run the kubelet under Docker, but most likely we'll want it to have a whole-machine view and expanded privileges. Running it under a cgroup container is totally doable. Namespaces? I'm not so sure we can make that happen.

Another way of looking at this is that I think of the kubelet as operating at the same level as Docker itself (and perhaps merging with Docker at some point?) and so it should run outside of Docker.

@monnand
Contributor

monnand commented Jun 11, 2014

@jbeda Correct me if I'm wrong. (I'm not saying that the kubelet should run inside a Docker container; I'm just trying to see what the technical difficulties are.)

As far as I know, the kubelet only needs to communicate with Docker through the Docker remote API, which is either a unix socket or a remote IP/port pair. Does it need to read/write cgroup's filesystem? In either case, it seems that we could mount /var/run into the container and run the kubelet inside that container.

We are currently doing this in cAdvisor, which runs inside a Docker container but can communicate with the Docker daemon and read information from the cgroup filesystem. The container still runs inside its own namespaces but communicates with the Docker daemon through the mounted volume. We use the following command to run cAdvisor inside a Docker container:

# The rw mount of /var/run exposes the Docker socket; the read-only mounts of
# the cgroup hierarchy and Docker's state directory let cAdvisor read
# per-container stats from the host.
sudo docker run \
  --volume=/var/run:/var/run:rw \
  --volume=/sys/fs/cgroup/:/sys/fs/cgroup:ro \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  --publish=8080:8080 \
  --detach=true \
  google/cadvisor

@jbeda
Contributor Author

jbeda commented Jun 11, 2014

@monnand Nice! We should try and make that work.

One thing I worry about is things like driving iptables rules. To solve #15 we'll have to be able to either muck with iptables rules or get a new networking mode into Docker proper.

@brendanburns
Contributor

There was just a discussion of this in the plumbers meeting. The Union Bay Networks folks want the ability to muck with the network from a container too.

@vmarmol
Contributor

vmarmol commented Jun 12, 2014

Today you should be able to get the host's network. +1 to @brendanburns's comment.

@proppy
Contributor

proppy commented Jun 12, 2014

Yes, --net host should do the trick.

Another interesting thing to do is -v /var/run/docker.sock:/var/run/docker.sock to access the Docker daemon from the container (or just have the Docker daemon listen on localhost with --net host).
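
For example, a sketch of a container that shares the host's network namespace and can reach the Docker daemon through the mounted socket (the image name is a placeholder):

docker run --net=host \
  -v /var/run/docker.sock:/var/run/docker.sock \
  example/agent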

@jbeda
Contributor Author

jbeda commented Jun 14, 2014

Notes of work in progress:

  • I'm starting out by moving our build process into Docker. A snapshot of a Dockerfile and a Makefile to automate some common stuff is here: jbeda@6c4a6a8
  • If we say everyone has to build on Linux, this is much easier. With boot2docker on a Mac, you essentially have a remote machine that you are talking to through a local TCP pipe. That means with any -v local-path:container-path, the local-path is really local to the boot2docker VM, not the local workstation.

This leaves us with two choices:

  1. Build in the boot2docker VM and copy the results back out, either through stdout from the docker run or via boot2docker ssh (sketched below).
  2. Build the final Docker images inside the boot2docker VM inside of a Docker container. Yup, this means running Docker in Docker. This is supported with dind (https://github.com/jpetazzo/dind), but it gets complicated.

Right now I'm leaning toward copying stuff in and out (option 1).
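
A sketch of option 1, streaming the build results back through stdout so nothing has to be copied off the boot2docker VM by hand (image and file names are hypothetical):

# The build image's ENTRYPOINT compiles the tree and tars the binaries to stdout.
docker build -t kube-build .
docker run --rm kube-build > binaries.tar
tar -xf binaries.tar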

@proppy
Contributor

proppy commented Jun 14, 2014

You could have a Dockerfile for each individual binary and have the resulting container image launch it as its ENTRYPOINT.

That way you could leverage the fact that the sources are sent from your workstation to the VM as the build payload (context) over the remote API (no need to copy them to the host), and the Docker image is your artifact.

@proppy
Contributor

proppy commented Jun 14, 2014

Note: if you just want a container to build the projects and get the binaries out, you can also set the ENTRYPOINT to the build command:

docker build ; docker run # to build
docker cp # to get the file out of the container
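
Concretely, that flow could look something like this (the image and container names and the binary path are placeholders, with the build command as the image's ENTRYPOINT):

docker build -t kube-builder .
docker run --name build1 kube-builder
docker cp build1:/build/apiserver .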

@proppy
Contributor

proppy commented Jun 14, 2014

You could also have a combination of the two.

Build and run kube on top of google/golang for development; for production, if the size of the google/debian base bothers you, rebase the binary on top of the busybox image.
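
The production rebase could be as small as this (assuming a statically linked binary; the name is a placeholder):

FROM busybox
ADD apiserver /apiserver
ENTRYPOINT ["/apiserver"]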

@jbeda
Contributor Author

jbeda commented Jun 14, 2014

Thanks for the comments @proppy.

I want the resultant container image to be minimal. I like the idea of layering it on the busybox image.

That means that the image used to build should be different from the image used at runtime. Doing docker cp to copy things around is one thing I'm looking to avoid; dind is one solution there. Rebasing will require either dind or docker cp. If we don't do this carefully we end up packaging up 60+MB every time we build the image. That takes too long :)
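
For the rebase onto a minimal base to work, the Go binaries need to be statically linked, roughly like this (a sketch; the output name and package path are hypothetical, and exact flags may vary by Go version):

CGO_ENABLED=0 go build -a -ldflags '-s' -o apiserver ./cmd/apiserver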

@proppy
Contributor

proppy commented Jun 15, 2014

FYI, I have a pending patch to Docker that could provide a hacky alternative: moby/moby#5715

This would allow something like:

docker build -t builder ; (docker run builder | docker build -t runner -)
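
The idea is that the builder image's ENTRYPOINT emits a tar stream containing a Dockerfile plus the compiled binary, and docker build -t runner - consumes that stream as its build context. A sketch of the builder's final step (paths are hypothetical):

# Inside the builder's entrypoint, after compiling:
tar -cf - -C /build Dockerfile apiserver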


@errordeveloper
Member

Isn't this mostly done already?

@bgrant0607
Member

Yes.

@maicohjf


Check to see how many nodes are ready (not including nodes tainted NoSchedule) and write the number to /opt/......

First use kubectl get node to see the total number of nodes, then use kubectl describe node to count the nodes that do not carry a NoSchedule taint.

From the Pod label name=cpu-utilizer, find pods running high CPU workloads and write the name of the Pod consuming most CPU to the file /opt/...... (which already exists)

Method 1: use kubectl get pod -l name=cpu-utilizer -o wide to see which node each pod is on, then kubectl describe node to see which pod requests the most CPU. (All three were 0, so I wrote them all in.)
Method 2: kubectl top pod -l name=cpu-utilizer
