Releases: labring/sealos
v2.0.0-alpha.6
Simplified Chinese, legacy version
Simplified Chinese, kubernetes v1.14.0+
Sealos 2.0
Supports kubernetes 1.14.0+. You no longer need keepalived or haproxy!
Build a production kubernetes cluster.
Quick Start
sealos init --master 192.168.0.2 --master 192.168.0.3 --master 192.168.0.4 --node 192.168.0.5 --user root --passwd your-server-password
That's all!
Architecture
+----------+                     +-------------+   virtual server: 127.0.0.1:6443
| master0  |<--------------------| ipvs nodes  |   real servers:
+----------+                     +-------------+       10.103.97.200:6443
                                        |              10.103.97.201:6443
+----------+                            |              10.103.97.202:6443
| master1  |<---------------------------+
+----------+                            |
                                        |
+----------+                            |
| master2  |<---------------------------+
+----------+
Every node configures an IPVS virtual server that load-balances across the masters.
LVScare then runs as a static pod (/etc/kubernetes/manifests/sealyun-lvscare.yaml) to check that each real server is still available.
LVScare
LVScare takes care of the IPVS rules for your masters.
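A minimal sketch of what that static pod manifest could look like. This is an assumption for illustration only: the image name and the lvscare flags below are not taken from this release, so check the file sealos actually writes to /etc/kubernetes/manifests on your nodes.

```yaml
# /etc/kubernetes/manifests/sealyun-lvscare.yaml -- illustrative sketch only;
# the image tag and flag names below are assumptions, not the shipped manifest.
apiVersion: v1
kind: Pod
metadata:
  name: sealyun-lvscare
  namespace: kube-system
spec:
  hostNetwork: true             # needs the node's network stack to manage IPVS
  containers:
  - name: lvscare
    image: fanux/lvscare        # assumed image name
    command: ["/usr/bin/lvscare"]
    args:                       # assumed flags: one virtual server, the masters as real servers
    - care
    - --vs=127.0.0.1:6443
    - --rs=10.103.97.200:6443
    - --rs=10.103.97.201:6443
    - --rs=10.103.97.202:6443
    securityContext:
      privileged: true          # required to program IPVS rules in the kernel
```

Because it is a static pod, the kubelet on each node runs it directly from the manifests directory, so the health checking survives even when the apiserver it fronts is down.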
v2.0.0-alpha.5
Simplified Chinese, legacy version
Simplified Chinese, kubernetes v1.14.0+
Sealos 2.0
Supports kubernetes 1.14.0+. You no longer need keepalived or haproxy!
Build a production kubernetes cluster.
Quick Start
sealos init --master 192.168.0.2 --master 192.168.0.3 --master 192.168.0.4 --node 192.168.0.5 --user root --passwd your-server-password
That's all!
Architecture
+----------+                     +-------------+   virtual server: 127.0.0.1:6443
| master0  |<--------------------| ipvs nodes  |   real servers:
+----------+                     +-------------+       10.103.97.200:6443
                                        |              10.103.97.201:6443
+----------+                            |              10.103.97.202:6443
| master1  |<---------------------------+
+----------+                            |
                                        |
+----------+                            |
| master2  |<---------------------------+
+----------+
Every node configures an IPVS virtual server that load-balances across the masters.
LVScare then runs as a static pod (/etc/kubernetes/manifests/sealyun-lvscare.yaml) to check that each real server is still available.
LVScare
LVScare takes care of the IPVS rules for your masters.
v2.0.0-alpha.3
Simplified Chinese, legacy version
Simplified Chinese, kubernetes v1.14.0+
Sealos 2.0
Supports kubernetes 1.14.0+. You no longer need keepalived or haproxy!
Build a production kubernetes cluster.
Quick Start
sealos init --master 192.168.0.2 --master 192.168.0.3 --master 192.168.0.4 --node 192.168.0.5 --user root --passwd your-server-password
That's all!
Architecture
+----------+                     +-------------+   virtual server: 127.0.0.1:6443
| master0  |<--------------------| ipvs nodes  |   real servers:
+----------+                     +-------------+       10.103.97.200:6443
                                        |              10.103.97.201:6443
+----------+                            |              10.103.97.202:6443
| master1  |<---------------------------+
+----------+                            |
                                        |
+----------+                            |
| master2  |<---------------------------+
+----------+
Every node configures an IPVS virtual server that load-balances across the masters.
LVScare then runs as a static pod (/etc/kubernetes/manifests/sealyun-lvscare.yaml) to check that each real server is still available.
LVScare
LVScare takes care of the IPVS rules for your masters.
v2.0.0-alpha.2
Simplified Chinese, legacy version
Simplified Chinese, kubernetes v1.14.0+
Sealos 2.0
Supports kubernetes 1.14.0+. You no longer need keepalived or haproxy!
Build a production kubernetes cluster.
Quick Start
sealos init --master 192.168.0.2 --master 192.168.0.3 --master 192.168.0.4 --node 192.168.0.5 --user root --passwd your-server-password
That's all!
Architecture
+----------+                     +-------------+   virtual server: 127.0.0.1:6443
| master0  |<--------------------| ipvs nodes  |   real servers:
+----------+                     +-------------+       10.103.97.200:6443
                                        |              10.103.97.201:6443
+----------+                            |              10.103.97.202:6443
| master1  |<---------------------------+
+----------+                            |
                                        |
+----------+                            |
| master2  |<---------------------------+
+----------+
Every node configures an IPVS virtual server that load-balances across the masters.
LVScare then runs as a static pod (/etc/kubernetes/manifests/sealyun-lvscare.yaml) to check that each real server is still available.
LVScare
LVScare takes care of the IPVS rules for your masters.
v2.0.0-alpha.10
Simplified Chinese, legacy version
Simplified Chinese, kubernetes v1.14.0+
Sealos 2.0
Supports kubernetes 1.14.0+. You no longer need keepalived or haproxy!
Build a production kubernetes cluster.
Quick Start
sealos init --master 192.168.0.2 --master 192.168.0.3 --master 192.168.0.4 --node 192.168.0.5 --user root --passwd your-server-password
That's all!
Architecture
+----------+                     +-------------+   virtual server: 127.0.0.1:6443
| master0  |<--------------------| ipvs nodes  |   real servers:
+----------+                     +-------------+       10.103.97.200:6443
                                        |              10.103.97.201:6443
+----------+                            |              10.103.97.202:6443
| master1  |<---------------------------+
+----------+                            |
                                        |
+----------+                            |
| master2  |<---------------------------+
+----------+
Every node configures an IPVS virtual server that load-balances across the masters.
LVScare then runs as a static pod (/etc/kubernetes/manifests/sealyun-lvscare.yaml) to check that each real server is still available.
LVScare
LVScare takes care of the IPVS rules for your masters.
v2.0.0-alpha.0
Simplified Chinese, legacy version
Simplified Chinese, kubernetes v1.14.0+
Sealos 2.0
Supports kubernetes 1.14.0+. You no longer need keepalived or haproxy!
Build a production kubernetes cluster.
Quick Start
sealos init --master 192.168.0.2 --master 192.168.0.3 --master 192.168.0.4 --node 192.168.0.5 --user root --passwd your-server-password
That's all!
Architecture
+----------+                     +-------------+   virtual server: 127.0.0.1:6443
| master0  |<--------------------| ipvs nodes  |   real servers:
+----------+                     +-------------+       10.103.97.200:6443
                                        |              10.103.97.201:6443
+----------+                            |              10.103.97.202:6443
| master1  |<---------------------------+
+----------+                            |
                                        |
+----------+                            |
| master2  |<---------------------------+
+----------+
Every node configures an IPVS virtual server that load-balances across the masters.
LVScare then runs as a static pod (/etc/kubernetes/manifests/sealyun-lvscare.yaml) to check that each real server is still available.
LVScare
LVScare takes care of the IPVS rules for your masters.
kubeadm1.12.2
Certificates valid for 99 years!
kubeadm rebuilt from the v1.12.2 source with the certificate lifetime changed from the default 1 year to 99 years; everything else works the same as stock kubeadm.
No need to clone this project; just download the bin file above:
chmod +x kubeadm && cp kubeadm /usr/bin
Regenerate the certificates using the kubeadm.yaml from your original install:
[root@dev-86-202 ~]# rm /etc/kubernetes/pki/ -rf
[root@dev-86-202 ~]# kubeadm alpha phase certs all --config kube/conf/kubeadm.yaml
Update the kubeconfig files:
[root@dev-86-202 ~]# rm -rf /etc/kubernetes/*conf
[root@dev-86-202 ~]# kubeadm alpha phase kubeconfig all --config ~/kube/conf/kubeadm.yaml
[root@dev-86-202 ~]# cp /etc/kubernetes/admin.conf ~/.kube/config
Verify:
$ cd /etc/kubernetes/pki
$ openssl x509 -in apiserver-etcd-client.crt -text -noout
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 4701787282062078235 (0x41401a9f34c2711b)
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN=etcd-ca
        Validity
            Not Before: Nov 22 11:58:50 2018 GMT
            Not After : Oct 29 11:58:51 2117 GMT
The other certificates can be verified the same way.
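To check every certificate's expiry in one pass instead of inspecting them one by one, a small loop over the pki directory works. This is a sketch, assuming the standard kubeadm locations; adjust the paths if your certs live elsewhere.

```shell
#!/bin/sh
# Print the expiry (Not After) date of each certificate in a directory.
print_cert_expiry() {
  dir="$1"
  for crt in "$dir"/*.crt; do
    [ -f "$crt" ] || continue          # skip if the glob matched nothing
    printf '%s: ' "$crt"
    openssl x509 -in "$crt" -noout -enddate
  done
}

# On a master, inspect the kubeadm-managed certs:
print_cert_expiry /etc/kubernetes/pki
print_cert_expiry /etc/kubernetes/pki/etcd
```

After regenerating with the rebuilt kubeadm, every line should show a notAfter date roughly 99 years out.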
v1.0.0-beta.0
Tested successfully on kubernetes 1.12.0 and 1.12.2; supports kubernetes 1.12.x.