Updates for Windows Server version 1709 with K8s v1.8 (#6180)
---
title: Windows Server Containers
---
**Note:** These instructions were recently updated based on Windows Server platform enhancements.
Kubernetes version 1.5 introduced support for Windows Server Containers based on the Windows Server 2016 OS. With the release of Windows Server version 1709 and Kubernetes v1.8, users can deploy a Kubernetes cluster either on-premises or in a private/public cloud using a number of different network topologies and CNI plugins. Platform improvements include:

- Improved support for pods! Shared network namespace (compartment) with multiple Windows Server containers (shared kernel)
- Reduced network complexity by using a single network endpoint per pod
- Kernel-based load balancing using the Virtual Filtering Platform (VFP) Hyper-V switch extension (analogous to Linux iptables)
The Kubernetes control plane (API Server, Scheduler, Controller Manager, etc.) continues to run on Linux, while the kubelet and kube-proxy can run on Windows Server version 1709.
**Note:** Windows Server Containers on Kubernetes is an Alpha feature in Kubernetes 1.8.
**Note:** There is one outstanding PR ([51063 Fixes to enable Windows CNI](https://github.com/kubernetes/kubernetes/pull/51063)) which has not been merged into v1.8 and is required for Windows CNI to work with the kubelet. Users will need to build a private kubelet binary to consume this change. Please refer to these [instructions](https://github.com/Microsoft/SDN/blob/master/Kubernetes/HOWTO-on-prem.md) for build instructions.
## Prerequisites
In Kubernetes version 1.8, Windows Server Containers for Kubernetes are supported using the following:

1. Kubernetes control plane running on existing Linux infrastructure (version 1.8 or later).
2. Kubenet network plugin set up on the Linux nodes.
3. Windows Server version 1709 (RTM version 10.0.16299.15 or later).
4. Docker version 17.06.1-ee-2 or later for Windows Server nodes (Linux nodes and the Kubernetes control plane can run any Kubernetes-supported Docker version).
## Networking
There are several supported network configurations with Windows Server version 1709 and Kubernetes v1.8, including both Layer-3 routed and overlay topologies using third-party network plugins:

1. Upstream L3 Routing - IP routes configured in upstream ToR
2. Host-Gateway - IP routes configured on each host
3. OVN & OVS with Overlay - OVS switch extension and OVN controller create a VXLAN overlay network
4. [Future] Overlay - VXLAN or IP-in-IP encapsulation using Flannel
5. [Future] Layer-3 Routing with BGP (Calico)
## CNI Plugins
Microsoft plans to publish code for two CNI plugins - win-l2bridge (host-gateway) and win-overlay (vxlan) - per this [issue](https://github.com/containernetworking/plugins/issues/80). These two CNI plugins can either be used directly by WinCNI.exe or with Flannel [PR 832](https://github.com/coreos/flannel/pull/832). We have an [outstanding informational PR](https://github.com/containernetworking/plugins/pull/85) needed to complete this work. The Windows Server platform work is complete.

The selection of which network configuration and topology to deploy depends on the physical network topology, a user's ability to configure routes, performance concerns with encapsulation, and the requirement to integrate with third-party network plugins.
### Linux
The above networking approach is already supported on Linux using a bridge interface, which essentially creates a private network local to the node. Similar to the Windows side, routes to all other pod CIDRs must be created in order to send packets via the "public" NIC.
### Windows
Windows supports the CNI network model and uses plugins to interface with the Windows Host Networking Service (HNS) to configure host networking and policy. An administrator creates a local host network using HNS PowerShell commands on each node, as documented in the **_Windows Host Setup_** section below.
#### Upstream L3 Routing Topology
In this topology, networking is achieved using L3 routing with static IP routes configured in an upstream Top of Rack (ToR) switch/router. Each cluster node is connected to the management network with a host IP. Additionally, each node uses a local 'l2bridge' network with a pod CIDR assigned. All pods on a given worker node will be connected to the pod CIDR subnet (the 'l2bridge' network). In order to enable network communication between pods running on different nodes, the upstream router has static routes configured with pod CIDR prefix => host IP.
Each Windows Server node should have the following configuration:

1. Two NICs (virtual networking adapters) are required on each Windows Server node - The two Windows container networking modes of interest (transparent and L2 bridge) use an external Hyper-V virtual switch. This means that one of the NICs is entirely allocated to the bridge, creating the need for the second NIC.
2. Transparent container network created - This is a manual configuration step and is shown in the **_Route Setup_** section below.
3. RRAS (Routing) Windows feature enabled - Allows routing between NICs on the box, and also "captures" packets that have the destination IP of a pod running on the node. To enable, open "Server Manager". Click on "Roles", "Add Roles". Click "Next". Select "Network Policy and Access Services". Click on "Routing and Remote Access Service" and the underlying checkboxes.
4. Routes defined pointing to the other pod CIDRs via the "public" NIC - These routes are added to the built-in routing table as shown in the **_Route Setup_** section below.

The following diagram illustrates the Windows Server networking setup for Kubernetes using the Upstream L3 Routing Setup:

![K8s Cluster using L3 Routing with ToR](UpstreamRouting.png)
#### Host-Gateway Topology
This topology is similar to the Upstream L3 Routing topology, with the only difference being that static IP routes are configured directly on each cluster node and not in the upstream ToR. Each node uses a local 'l2bridge' network with a pod CIDR assigned as before and has routing table entries for all other pod CIDR subnets assigned to the remote cluster nodes.
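With host-gateway, every node needs a persistent route for each *other* node's pod CIDR via that node's host IP. For more than a few nodes it can help to generate these commands rather than type them by hand. Below is a minimal illustrative sketch; the node names, host IPs, and pod CIDRs are assumed placeholders, not values from this page:

```python
import ipaddress

# Hypothetical node -> (host IP, pod CIDR) map; real values come from your cluster.
NODES = {
    "Lin01": ("10.124.24.196", "192.168.0.0/24"),
    "Win01": ("10.124.24.197", "192.168.1.0/24"),
    "Win02": ("10.124.24.198", "192.168.2.0/24"),
}

def windows_routes_for(node):
    """Emit Windows 'route add' commands for every other node's pod CIDR."""
    cmds = []
    for other, (host_ip, pod_cidr) in NODES.items():
        if other == node:
            continue  # a node does not need a route to its own pod subnet
        net = ipaddress.ip_network(pod_cidr)
        cmds.append(
            f"route add {net.network_address} mask {net.netmask} {host_ip} -p"
        )
    return cmds

for cmd in windows_routes_for("Win01"):
    print(cmd)
```

On Linux nodes the equivalent next-hop form is `ip route add <pod CIDR> via <host IP>`, as in the **_Route Setup_** section below.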
#### Overlay using OVN controller and OVS Switch Extension
In this topology, the OVS switch extension and OVN controller create a VXLAN overlay network between cluster nodes. See [ovn-kubernetes](https://github.com/openvswitch/ovn-kubernetes) and [kubernetes-ovn-heterogeneous-cluster](https://github.com/apprenda/kubernetes-ovn-heterogeneous-cluster) for setup details.
## Setting up Windows Server Containers on Kubernetes
To run Windows Server Containers on Kubernetes, you'll need to set up both your host machines and the Kubernetes node components for Windows. Depending on your network topology, you may also need to set up routes for pod communication between different nodes.
### Host Setup

**Linux Host Setup**

1. Linux hosts should be set up according to their respective distro documentation and the requirements of the Kubernetes version you will be using.
2. CNI network plugin installed.
### Component Setup

Requirements:

* Git
* Go 1.7.1+
* make (if using Linux or macOS)
* Important notes and other dependencies are listed [here](https://git.k8s.io/community/contributors/devel/development.md#building-kubernetes-on-a-local-osshell-environment)
**kubelet**

To build the *kubelet*, run:

1. `cd $GOPATH/src/k8s.io/kubernetes`
2. Build the *kubelet*
   1. Linux/macOS: `KUBE_BUILD_PLATFORMS=windows/amd64 make WHAT=cmd/kubelet`
   2. Windows: `go build cmd/kubelet/kubelet.go`
**kube-proxy**

To build *kube-proxy*, run:

1. `cd $GOPATH/src/k8s.io/kubernetes`
2. Build *kube-proxy*
   1. Linux/macOS: `KUBE_BUILD_PLATFORMS=windows/amd64 make WHAT=cmd/kube-proxy`
   2. Windows: `go build cmd/kube-proxy/proxy.go`
3. Configure the Linux master node using the steps [here](https://github.com/Microsoft/SDN/blob/master/Kubernetes/HOWTO-on-prem.md)
4. [Optional] CNI network plugin installed.
### Route Setup
The below example setup assumes one Linux node and two Windows Server nodes, with a cluster CIDR of 192.168.0.0/16.
| Hostname | Routable IP address | Pod CIDR |
| --- | --- | --- |
| Lin01 | `<IP of Lin01 host>` | 192.168.0.0/24 |
| Win01 | `<IP of Win01 host>` | 192.168.1.0/24 |
| Win02 | `<IP of Win02 host>` | 192.168.2.0/24 |
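The pod CIDRs in this table are simply consecutive /24 subnets carved out of the 192.168.0.0/16 cluster CIDR, one per node. A quick way to enumerate them, shown here as an illustrative sketch using Python's standard `ipaddress` module:

```python
import ipaddress

# Split the /16 cluster CIDR into per-node /24 pod CIDRs.
cluster_cidr = ipaddress.ip_network("192.168.0.0/16")
pod_cidrs = list(cluster_cidr.subnets(new_prefix=24))[:3]

# Pair each node with its pod CIDR, matching the table above.
for node, cidr in zip(["Lin01", "Win01", "Win02"], pod_cidrs):
    print(f"{node}: {cidr}")
# Lin01: 192.168.0.0/24, Win01: 192.168.1.0/24, Win02: 192.168.2.0/24
```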
**Lin01**

```
ip route add 192.168.1.0/24 via <IP of Win01 host>
ip route add 192.168.2.0/24 via <IP of Win02 host>
```
**Windows Host Setup**

1. Windows Server container host running Windows Server version 1709 and Docker v17.06 or later. Follow the setup instructions outlined by this help topic: https://docs.microsoft.com/en-us/virtualization/windowscontainers/quick-start/quick-start-windows-server.
2. Build or download kubelet.exe, kube-proxy.exe, and kubectl.exe using the instructions found [here](https://github.com/Microsoft/SDN/blob/master/Kubernetes/HOWTO-on-prem.md)
3. Copy the Node spec file (config) from the Linux master node, with X.509 keys
4. Create the HNS network
5. Ensure correct CNI network config
6. Start kubelet.exe using this script: [start-kubelet.ps1](https://github.com/Microsoft/SDN/blob/master/Kubernetes/windows/start-kubelet.ps1)
7. Start kube-proxy using this script: [start-kubeproxy.ps1](https://github.com/Microsoft/SDN/blob/master/Kubernetes/windows/start-kubeproxy.ps1)
8. [Optional] Add static routes on the Windows host, for example:

```
route add 192.168.0.0 mask 255.255.255.0 192.168.0.1 if <Interface Id of the Routable Ethernet Adapter> -p
route add 192.168.2.0 mask 255.255.255.0 192.168.2.1 if <Interface Id of the Routable Ethernet Adapter> -p
```
**Windows CNI Config Example**

Today, the Windows CNI plugin is based on wincni.exe code, with the following example configuration file.

Note: this file assumes that a user previously created 'l2bridge' host networks on each Windows node using `<Verb>-HNSNetwork` cmdlets, as shown in the start-kubelet.ps1 and start-kubeproxy.ps1 scripts linked above.
```
{
  "cniVersion": "0.2.0",
  "name": "l2bridge",
  "type": "wincni.exe",
  "master": "Ethernet",
  "ipam": {
    "environment": "azure",
    "subnet": "10.10.187.64/26",
    "routes": [
      {
        "GW": "10.10.187.66"
      }
    ]
  },
  "dns": {
    "Nameservers": [
      "11.0.0.10"
    ]
  },
  "AdditionalArgs": [
    {
      "Name": "EndpointPolicy",
      "Value": {
        "Type": "OutBoundNAT",
        "ExceptionList": [
          "11.0.0.0/8",
          "10.10.0.0/16",
          "10.127.132.128/25"
        ]
      }
    },
    {
      "Name": "EndpointPolicy",
      "Value": {
        "Type": "ROUTE",
        "DestinationPrefix": "11.0.0.0/8",
        "NeedEncap": true
      }
    },
    {
      "Name": "EndpointPolicy",
      "Value": {
        "Type": "ROUTE",
        "DestinationPrefix": "10.127.132.213/32",
        "NeedEncap": true
      }
    }
  ]
}
```
## Starting the Cluster
To start your cluster, you'll need to start both the Linux-based Kubernetes control plane and the Windows Server-based Kubernetes node components (kubelet and kube-proxy).

## Starting the Linux-based Control Plane
Use your preferred method to set up and start the Kubernetes cluster on Linux, or follow the directions given in this [link](https://github.com/Microsoft/SDN/blob/master/Kubernetes/HOWTO-on-prem.md). Please note that the cluster CIDR might need to be updated.
## Scheduling Pods on Windows
Because your cluster has both Linux and Windows nodes, you must explicitly set the `nodeSelector` constraint to be able to schedule pods to Windows nodes. You must set `nodeSelector` with the label `beta.kubernetes.io/os` to the value `windows`; see the following example:
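A minimal pod spec with this constraint might look like the following sketch; the pod name and container image here are illustrative placeholders, not values from this page:

```
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "iis",
    "labels": {
      "name": "iis"
    }
  },
  "spec": {
    "containers": [
      {
        "name": "iis",
        "image": "microsoft/iis"
      }
    ],
    "nodeSelector": {
      "beta.kubernetes.io/os": "windows"
    }
  }
}
```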