prepare for 1.4 release
oilbeater committed Sep 1, 2020
1 parent 7b77292 commit 0f973a5
Showing 16 changed files with 119 additions and 35 deletions.
23 changes: 23 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,28 @@
# CHANGELOG

## 1.4.0 -- 2020/9/1

### New Feature
* Integrate OVN-IC to support multi-cluster networking
* Enable ACL log to record NetworkPolicy drop packets
* Reserve source ip for NodePort service to local pod
* Support vlan subnet switch to underlay gateway

### Bugfix
* Add forward accept rules
* kubectl-ko cannot find nic
* Prevent vlan/subnet init error logs
* Subnet ACL might conflict if allSubnets and subnet cidr overlap
* Missing session lb

### Misc
* Update ovs to 2.14
* Update golang to 1.15
* Suppress logs
* Add psp rules
* Remove juju log dependency


## 1.3.0 -- 2020/7/31

### New Feature
65 changes: 63 additions & 2 deletions docs/cluster-interconnection.md
@@ -5,11 +5,11 @@ communicate directly using Pod IP. Kube-OVN uses tunnels to encapsulate traffic between clusters, and
only L3 connectivity for gateway nodes is required.

## Prerequisites
* Subnet CIDRs in different clusters *MUST NOT* overlap with each other, including the ovn-default and join subnet CIDRs.
* To use automatic route advertisement, subnet CIDRs in different clusters *MUST NOT* overlap with each other, including the ovn-default and join subnet CIDRs. Otherwise, you should disable auto route and add the routes manually.
* The Interconnection Controller *Should* be deployed in a region that every cluster can access by IP.
* Every cluster *Should* have at least one node (which will work as a gateway later) that can access the gateway nodes in other clusters by IP.
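The non-overlap requirement above can be checked mechanically. A minimal sketch (an assumption-laden helper, not part of Kube-OVN: IPv4 only, and each CIDR's address part must be its network address):

```shell
# Check whether two IPv4 CIDRs overlap (sketch; IPv4 only).
ip_to_int() {
  # Convert a dotted-quad address to a 32-bit integer.
  echo "$1" | awk -F. '{ print ($1 * 16777216) + ($2 * 65536) + ($3 * 256) + $4 }'
}
overlaps() {  # usage: overlaps CIDR1 CIDR2; exit status 0 => they overlap
  n1=$(ip_to_int "${1%/*}"); p1=${1#*/}
  n2=$(ip_to_int "${2%/*}"); p2=${2#*/}
  m1=$(( (0xFFFFFFFF << (32 - p1)) & 0xFFFFFFFF ))
  m2=$(( (0xFFFFFFFF << (32 - p2)) & 0xFFFFFFFF ))
  # Two CIDRs overlap iff either network contains the other's base address.
  [ $(( n1 & m2 )) -eq "$n2" ] || [ $(( n2 & m1 )) -eq "$n1" ]
}
overlaps 10.16.0.0/16 10.16.128.0/24 && echo "overlap" || echo "disjoint"  # prints: overlap
```

Run it for every pair of subnet CIDRs (including ovn-default and join) across the clusters you plan to interconnect.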

## Step
## Auto Route Steps
1. Run the Interconnection Controller in a region that can be accessed by other clusters
```bash
docker run --name=ovn-ic-db -d --network=host -v /etc/ovn/:/etc/ovn -v /var/run/ovn:/var/run/ovn -v /var/log/ovn:/var/log/ovn kubeovn/kube-ovn:v1.4.0 bash start-ic-db.sh
@@ -68,6 +68,67 @@ IPv4 Routes

If Pods cannot communicate with each other, please check the log of kube-ovn-controller.

For manually adding routes, you need to find the remote gateway address in each cluster first; the steps below describe how.

## Manual Route Steps
1. Same as Auto Route step 1, run the Interconnection Controller in a region that can be accessed by other clusters
```bash
docker run --name=ovn-ic-db -d --network=host -v /etc/ovn/:/etc/ovn -v /var/run/ovn:/var/run/ovn -v /var/log/ovn:/var/log/ovn kubeovn/kube-ovn:v1.4.0 bash start-ic-db.sh
```
2. Create the `ovn-ic-config` ConfigMap in each cluster by editing and applying the YAML below. Note that `auto-route` is set to `false`
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: ovn-ic-config
namespace: kube-system
data:
enable-ic: "true"
az-name: "az1" # AZ name for cluster, every cluster should be different
ic-db-host: "192.168.65.3" # The Interconnection Controller host IP address
ic-nb-port: "6645" # The ic-nb port, default 6645
ic-sb-port: "6646" # The ic-sb port, default 6646
gw-nodes: "az1-gw" # The node name which acts as the interconnection gateway
auto-route: "false" # Auto announce route to all clusters. If set false, you can select announced routes later manually
```
3. Find the remote gateway address in each cluster, and add routes to the remote cluster.
In az1
```bash
[root@az1 ~]# kubectl ko nbctl show
switch a391d3a1-14a0-4841-9836-4bd930c447fb (ts)
port ts-az1
type: router
router-port: az1-ts
port ts-az2
type: remote
addresses: ["00:00:00:4B:E2:9F 169.254.100.31/24"]
```
In az2
```bash
[root@az2 ~]# kubectl ko nbctl show
switch da6138b8-de81-4908-abf9-b2224ec4edf3 (ts)
port ts-az2
type: router
router-port: az2-ts
port ts-az1
type: remote
addresses: ["00:00:00:FB:2A:F7 169.254.100.79/24"]
```
Record the address of the remote logical switch port on the `ts` logical switch

4. Add a static route in each cluster

In az1
```bash
kubectl ko nbctl lr-route-add ovn-cluster 10.17.0.0/24 169.254.100.31
```
In az2
```bash
kubectl ko nbctl lr-route-add ovn-cluster 10.16.0.0/24 169.254.100.79
```
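Both commands follow the same pattern: route the peer cluster's pod CIDR to the address recorded for the peer's `ts` port in step 3. A small sketch (a hypothetical helper, with the pair values taken from the examples above) that prints the command needed in each cluster:

```shell
# Print the lr-route-add command for each "peer_pod_cidr peer_ts_port_ip" pair
# recorded from `kubectl ko nbctl show` (sketch; pairs below are from this doc).
print_ic_routes() {
  while read -r cidr gw; do
    [ -n "$cidr" ] || continue   # skip blank lines
    echo "kubectl ko nbctl lr-route-add ovn-cluster $cidr $gw"
  done
}
# In az1: az2's pod CIDR 10.17.0.0/24 is reached via ts-az2 (169.254.100.31).
print_ic_routes <<EOF
10.17.0.0/24 169.254.100.31
EOF
```

With more than two clusters, feed one line per peer AZ and run the printed commands in that cluster.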

## Gateway High Availability
Kube-OVN now supports Active-Backup mode gateway HA. You can add more node names to `gw-nodes` in the ConfigMap, separated by commas.
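For example, with three gateway candidates in az1 (node names are illustrative), the `gw-nodes` entry of the `ovn-ic-config` ConfigMap would read:

```yaml
gw-nodes: "az1-gw1,az1-gw2,az1-gw3"   # candidates for the active-backup gateway
```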

2 changes: 1 addition & 1 deletion docs/dpdk.md
@@ -62,7 +62,7 @@ dpdk-hugepage-dir=/dev/hugepages
## To Install

1. Download the installation script:
`wget https://raw.githubusercontent.com/alauda/kube-ovn/release-1.3/dist/images/install.sh`
`wget https://raw.githubusercontent.com/alauda/kube-ovn/release-1.4/dist/images/install.sh`

2. Use vim to edit the script variables to meet your requirement
```bash
2 changes: 1 addition & 1 deletion docs/high-available.md
@@ -8,7 +8,7 @@ Change the replicas to 3, and add NODE_IPS environment var points to node that h
replicas: 3
containers:
- name: ovn-central
image: "kubeovn/kube-ovn:v1.3.0"
image: "kubeovn/kube-ovn:v1.4.0"
imagePullPolicy: Always
env:
- name: POD_IP
2 changes: 1 addition & 1 deletion docs/hw-offload.md
@@ -168,7 +168,7 @@ Update ovs-ovn daemonset and set the HW_OFFLOAD env to true and delete exist pod
fieldPath: status.podIP
- name: HW_OFFLOAD
value: "true"
image: kubeovn/kube-ovn:v1.3.0
image: kubeovn/kube-ovn:v1.4.0
```
### Create Pod with SR-IOV
```yaml
16 changes: 8 additions & 8 deletions docs/install.md
@@ -22,10 +22,10 @@ Kube-OVN provides a one script install to easily install a high-available, produ
1. Download the installer scripts

For Kubernetes version>=1.16
`wget https://raw.githubusercontent.com/alauda/kube-ovn/release-1.3/dist/images/install.sh`
`wget https://raw.githubusercontent.com/alauda/kube-ovn/release-1.4/dist/images/install.sh`

For Kubernetes version<1.16
`wget https://raw.githubusercontent.com/alauda/kube-ovn/release-1.3/dist/images/install-pre-1.16.sh`
`wget https://raw.githubusercontent.com/alauda/kube-ovn/release-1.4/dist/images/install-pre-1.16.sh`

2. Use vim to edit the script variables to meet your requirement
```bash
@@ -36,7 +36,7 @@
JOIN_CIDR="100.64.0.0/16" # Do NOT overlap with NODE/POD/SVC CIDR
LABEL="node-role.kubernetes.io/master" # The node label to deploy OVN DB
IFACE="" # The nic to support container network, if empty will use the nic that the default route use
VERSION="v1.3.0"
VERSION="v1.4.0"
```

3. Execute the script
@@ -54,19 +54,19 @@ For Kubernetes version before 1.17 please use the following command to add the n
`kubectl label node <Node on which to deploy OVN DB> kube-ovn/role=master`
2. Install Kube-OVN related CRDs

`kubectl apply -f https://raw.githubusercontent.com/alauda/kube-ovn/release-1.3/yamls/crd.yaml`
`kubectl apply -f https://raw.githubusercontent.com/alauda/kube-ovn/release-1.4/yamls/crd.yaml`
3. Install native OVS and OVN components:

`kubectl apply -f https://raw.githubusercontent.com/alauda/kube-ovn/release-1.3/yamls/ovn.yaml`
`kubectl apply -f https://raw.githubusercontent.com/alauda/kube-ovn/release-1.4/yamls/ovn.yaml`
4. Install the Kube-OVN Controller and CNI plugins:

`kubectl apply -f https://raw.githubusercontent.com/alauda/kube-ovn/release-1.3/yamls/kube-ovn.yaml`
`kubectl apply -f https://raw.githubusercontent.com/alauda/kube-ovn/release-1.4/yamls/kube-ovn.yaml`

That's all! You can now create some pods and test connectivity.

For high-available ovn db, see [high available](high-available.md)

If you want to enable IPv6 on default subnet and node subnet, please apply https://raw.githubusercontent.com/alauda/kube-ovn/release-1.3/yamls/kube-ovn-ipv6.yaml on Step 3.
If you want to enable IPv6 on default subnet and node subnet, please apply https://raw.githubusercontent.com/alauda/kube-ovn/release-1.4/yamls/kube-ovn-ipv6.yaml on Step 3.

## More Configuration

@@ -153,7 +153,7 @@ You can use `--default-cidr` flags below to config default Pod CIDR or create a
1. Remove Kubernetes resources:
```bash
wget https://raw.githubusercontent.com/alauda/kube-ovn/release-1.3/dist/images/cleanup.sh
wget https://raw.githubusercontent.com/alauda/kube-ovn/release-1.4/dist/images/cleanup.sh
bash cleanup.sh
```
2 changes: 1 addition & 1 deletion docs/ipv6.md
@@ -2,4 +2,4 @@

Though Kube-OVN supports subnets of both protocols coexisting in a cluster, the Kubernetes control plane currently supports only one protocol. You will therefore lose some abilities, such as probes and service discovery, if you use a protocol other than the one the Kubernetes control plane uses. We recommend using the same IP protocol as the Kubernetes control plane.

To enable IPv6 support you need to modify the installation YAML to specify the default subnet and node subnet `cidrBlock` and `gateway` in IPv6 format. You can apply this [v6 version yaml](https://raw.githubusercontent.com/alauda/kube-ovn/release-1.3/yamls/kube-ovn-ipv6.yaml) at [installation step 3](install.md#to-install) for a quick start.
To enable IPv6 support you need to modify the installation YAML to specify the default subnet and node subnet `cidrBlock` and `gateway` in IPv6 format. You can apply this [v6 version yaml](https://raw.githubusercontent.com/alauda/kube-ovn/release-1.4/yamls/kube-ovn-ipv6.yaml) at [installation step 3](install.md#to-install) for a quick start.
2 changes: 1 addition & 1 deletion docs/vlan.md
@@ -16,7 +16,7 @@ We are working on combining the two networks in one cluster.

1. Get the installation script

`wget https://raw.githubusercontent.com/alauda/kube-ovn/release-1.3/dist/images/install.sh`
`wget https://raw.githubusercontent.com/alauda/kube-ovn/release-1.4/dist/images/install.sh`

2. Edit `install.sh`: set `NETWORK_TYPE` to `vlan` and `VLAN_INTERFACE_NAME` to the related host interface.
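The two edits in step 2 amount to changing these variables near the top of `install.sh` (the interface name is illustrative):

```bash
NETWORK_TYPE="vlan"          # switch from the default overlay network type
VLAN_INTERFACE_NAME="eth1"   # the host NIC attached to the VLAN network
```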

8 changes: 4 additions & 4 deletions yamls/kube-ovn-ipv6.yaml
@@ -39,7 +39,7 @@ spec:
hostNetwork: true
containers:
- name: kube-ovn-controller
image: "kubeovn/kube-ovn:v1.3.0"
image: "kubeovn/kube-ovn:v1.4.0"
imagePullPolicy: IfNotPresent
command:
- /kube-ovn/start-controller.sh
@@ -110,7 +110,7 @@ spec:
hostPID: true
initContainers:
- name: install-cni
image: "kubeovn/kube-ovn:v1.3.0"
image: "kubeovn/kube-ovn:v1.4.0"
imagePullPolicy: IfNotPresent
command: ["/kube-ovn/install-cni.sh"]
securityContext:
@@ -123,7 +123,7 @@ spec:
name: cni-bin
containers:
- name: cni-server
image: "kubeovn/kube-ovn:v1.3.0"
image: "kubeovn/kube-ovn:v1.4.0"
command: ["sh", "/kube-ovn/start-cniserver.sh"]
args:
- --enable-mirror=false
@@ -206,7 +206,7 @@ spec:
hostPID: true
containers:
- name: pinger
image: "kubeovn/kube-ovn:v1.3.0"
image: "kubeovn/kube-ovn:v1.4.0"
imagePullPolicy: IfNotPresent
command: ["/kube-ovn/kube-ovn-pinger"]
securityContext:
8 changes: 4 additions & 4 deletions yamls/kube-ovn-pre17.yaml
@@ -39,7 +39,7 @@ spec:
hostNetwork: true
containers:
- name: kube-ovn-controller
image: "kubeovn/kube-ovn:v1.3.0"
image: "kubeovn/kube-ovn:v1.4.0"
imagePullPolicy: IfNotPresent
command:
- /kube-ovn/start-controller.sh
@@ -108,7 +108,7 @@ spec:
hostPID: true
initContainers:
- name: install-cni
image: "kubeovn/kube-ovn:v1.3.0"
image: "kubeovn/kube-ovn:v1.4.0"
imagePullPolicy: IfNotPresent
command: ["/kube-ovn/install-cni.sh"]
securityContext:
@@ -121,7 +121,7 @@ spec:
name: cni-bin
containers:
- name: cni-server
image: "kubeovn/kube-ovn:v1.3.0"
image: "kubeovn/kube-ovn:v1.4.0"
imagePullPolicy: IfNotPresent
command:
- sh
@@ -216,7 +216,7 @@ spec:
hostPID: true
containers:
- name: pinger
image: "kubeovn/kube-ovn:v1.3.0"
image: "kubeovn/kube-ovn:v1.4.0"
command: ["/kube-ovn/kube-ovn-pinger", "--external-address=114.114.114.114"]
imagePullPolicy: IfNotPresent
securityContext:
8 changes: 4 additions & 4 deletions yamls/kube-ovn.yaml
@@ -39,7 +39,7 @@ spec:
hostNetwork: true
containers:
- name: kube-ovn-controller
image: "kubeovn/kube-ovn:v1.3.0"
image: "kubeovn/kube-ovn:v1.4.0"
imagePullPolicy: IfNotPresent
command:
- /kube-ovn/start-controller.sh
@@ -110,7 +110,7 @@ spec:
hostPID: true
initContainers:
- name: install-cni
image: "kubeovn/kube-ovn:v1.3.0"
image: "kubeovn/kube-ovn:v1.4.0"
imagePullPolicy: IfNotPresent
command: ["/kube-ovn/install-cni.sh"]
securityContext:
@@ -123,7 +123,7 @@ spec:
name: cni-bin
containers:
- name: cni-server
image: "kubeovn/kube-ovn:v1.3.0"
image: "kubeovn/kube-ovn:v1.4.0"
imagePullPolicy: IfNotPresent
command:
- sh
@@ -218,7 +218,7 @@ spec:
hostPID: true
containers:
- name: pinger
image: "kubeovn/kube-ovn:v1.3.0"
image: "kubeovn/kube-ovn:v1.4.0"
command: ["/kube-ovn/kube-ovn-pinger", "--external-address=114.114.114.114", "--external-dns=alauda.cn"]
imagePullPolicy: IfNotPresent
securityContext:
4 changes: 2 additions & 2 deletions yamls/ovn-ha.yaml
@@ -192,7 +192,7 @@ spec:
hostNetwork: true
containers:
- name: ovn-central
image: "kubeovn/kube-ovn:v1.3.0"
image: "kubeovn/kube-ovn:v1.4.0"
imagePullPolicy: IfNotPresent
command: ["/kube-ovn/start-db.sh"]
securityContext:
@@ -304,7 +304,7 @@ spec:
hostPID: true
containers:
- name: openvswitch
image: "kubeovn/kube-ovn:v1.3.0"
image: "kubeovn/kube-ovn:v1.4.0"
imagePullPolicy: IfNotPresent
command: ["/kube-ovn/start-ovs.sh"]
securityContext:
4 changes: 2 additions & 2 deletions yamls/ovn-pre17.yaml
@@ -156,7 +156,7 @@ spec:
hostNetwork: true
containers:
- name: ovn-central
image: "kubeovn/kube-ovn:v1.3.0"
image: "kubeovn/kube-ovn:v1.4.0"
imagePullPolicy: IfNotPresent
command: ["/kube-ovn/start-db.sh"]
securityContext:
@@ -261,7 +261,7 @@ spec:
hostPID: true
containers:
- name: openvswitch
image: "kubeovn/kube-ovn:v1.3.0"
image: "kubeovn/kube-ovn:v1.4.0"
imagePullPolicy: IfNotPresent
command: ["/kube-ovn/start-ovs.sh"]
securityContext:
4 changes: 2 additions & 2 deletions yamls/ovn.yaml
@@ -192,7 +192,7 @@ spec:
hostNetwork: true
containers:
- name: ovn-central
image: "kubeovn/kube-ovn:v1.3.0"
image: "kubeovn/kube-ovn:v1.4.0"
imagePullPolicy: IfNotPresent
command: ["/kube-ovn/start-db.sh"]
securityContext:
@@ -304,7 +304,7 @@ spec:
hostPID: true
containers:
- name: openvswitch
image: "kubeovn/kube-ovn:v1.3.0"
image: "kubeovn/kube-ovn:v1.4.0"
imagePullPolicy: IfNotPresent
command: ["/kube-ovn/start-ovs.sh"]
securityContext:
2 changes: 1 addition & 1 deletion yamls/speaker.yaml
@@ -35,7 +35,7 @@ spec:
hostNetwork: true
containers:
- name: ovn-central
image: "kubeovn/kube-ovn:v1.3.0"
image: "kubeovn/kube-ovn:v1.4.0"
imagePullPolicy: IfNotPresent
command:
- /kube-ovn/kube-ovn-speaker
2 changes: 1 addition & 1 deletion yamls/webhook.yaml
@@ -38,7 +38,7 @@ spec:
hostNetwork: true
containers:
- name: kube-ovn-webhook
image: "index.alauda.cn/alaudak8s/kube-ovn-webhook:v1.0.0"
image: "index.alauda.cn/alaudak8s/kube-ovn-webhook:v1.4.0"
imagePullPolicy: IfNotPresent
command:
- /kube-ovn/start-webhook.sh