Commit
* 'master' of https://github.com/kubernetes-sigs/kubespray:
  Documentation for Ingress (kubernetes-sigs#6378)
  Fix ansible-lint E301 for commands fetching data (kubernetes-sigs#6465)
  Fix shellcheck url (kubernetes-sigs#6462)
  Fix ansible-lint E305 (kubernetes-sigs#6459)
  Fix ansible-lint E404 (kubernetes-sigs#6417)
  Update README.md and openstack.md (kubernetes-sigs#6455)
  Add noqa and disable .ansible-lint global exclusions (kubernetes-sigs#6410)
  Move healthz check to secure ports (kubernetes-sigs#6446)
  Update multus version & crio conf (kubernetes-sigs#6444)
  Fix remove etcd broken with etcdctl_api 3 (kubernetes-sigs#6448)
  update cinder csi manifests (kubernetes-sigs#6434)
  Update docker package to 19.03.12 (kubernetes-sigs#6439)
  * add proxy_env definition to remove_node.yml resolving kubernetes-sigs#6430 (kubernetes-sigs#6431)
erulabs committed Jul 29, 2020
2 parents c021412 + 0fa5a25 commit 9a7dcdc
Showing 91 changed files with 349 additions and 213 deletions.
11 changes: 2 additions & 9 deletions .ansible-lint
@@ -2,15 +2,8 @@
parseable: true
skip_list:
# see https://docs.ansible.com/ansible-lint/rules/default_rules.html for a list of all default rules
# The following rules throw errors.
# These either still need to be corrected in the repository and the rules re-enabled or documented why they are skipped on purpose.
- '301'
- '302'
- '303'
- '305'
- '306'
- '404'
- '503'

# DO NOT add any other rules to this skip_list, instead use local `# noqa` with a comment explaining WHY it is necessary

# These rules are intentionally skipped:
#
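
For reference, the local `# noqa` pattern this policy calls for appears throughout this commit: the rule number is suppressed inline on the offending task, with a comment making the reason clear. A representative task from `contrib/azurerm/roles/generate-inventory/tasks/main.yml`:

```yaml
# Rule 301 flags commands that may change state; this task only fetches data,
# so the warning is suppressed inline rather than via the global skip_list.
- name: Query Azure VMs  # noqa 301
  command: azure vm list-ip-address --json {{ azure_resource_group }}
  register: vm_list_cmd
```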
2 changes: 1 addition & 1 deletion .gitlab-ci/shellcheck.yml
@@ -7,7 +7,7 @@ shellcheck:
SHELLCHECK_VERSION: v0.6.0
before_script:
- ./tests/scripts/rebase.sh
- curl --silent "https://storage.googleapis.com/shellcheck/shellcheck-"${SHELLCHECK_VERSION}".linux.x86_64.tar.xz" | tar -xJv
- curl --silent --location "https://github.com/koalaman/shellcheck/releases/download/"${SHELLCHECK_VERSION}"/shellcheck-"${SHELLCHECK_VERSION}".linux.x86_64.tar.xz" | tar -xJv
- cp shellcheck-"${SHELLCHECK_VERSION}"/shellcheck /usr/bin/
- shellcheck --version
script:
16 changes: 12 additions & 4 deletions README.md
@@ -5,7 +5,7 @@
If you have questions, check the documentation at [kubespray.io](https://kubespray.io) and join us on the [kubernetes slack](https://kubernetes.slack.com), channel **\#kubespray**.
You can get your invite [here](http://slack.k8s.io/)

- Can be deployed on **AWS, GCE, Azure, OpenStack, vSphere, Packet (bare metal), Oracle Cloud Infrastructure (Experimental), or Baremetal**
- Can be deployed on **[AWS](docs/aws.md), GCE, [Azure](docs/azure.md), [OpenStack](docs/openstack.md), [vSphere](docs/vsphere.md), [Packet](docs/packet.md) (bare metal), Oracle Cloud Infrastructure (Experimental), or Baremetal**
- **Highly available** cluster
- **Composable** (Choice of the network plugin for instance)
- Supports most popular **Linux distributions**
@@ -129,9 +129,10 @@ Note: Upstart/SysV init based OS types are not supported.
- [flanneld](https://github.com/coreos/flannel) v0.12.0
- [kube-ovn](https://github.com/alauda/kube-ovn) v1.2.1
- [kube-router](https://github.com/cloudnativelabs/kube-router) v1.0.0
- [multus](https://github.com/intel/multus-cni) v3.4.2
- [multus](https://github.com/intel/multus-cni) v3.6.0
- [weave](https://github.com/weaveworks/weave) v2.6.5
- Application
- [ambassador](https://github.com/datawire/ambassador): v1.5
- [cephfs-provisioner](https://github.com/kubernetes-incubator/external-storage) v2.1.0-k8s1.11
- [rbd-provisioner](https://github.com/kubernetes-incubator/external-storage) v2.1.1-k8s1.11
- [cert-manager](https://github.com/jetstack/cert-manager) v0.11.1
@@ -197,6 +198,12 @@ The choice is defined with the variable `kube_network_plugin`. There is also an
option to leverage built-in cloud provider networking instead.
See also [Network checker](docs/netcheck.md).

## Ingress Plugins

- [ambassador](docs/ambassador.md): the Ambassador Ingress Controller and API gateway.

- [nginx](https://kubernetes.github.io/ingress-nginx): the NGINX Ingress Controller.

## Community docs and resources

- [kubernetes.io/docs/setup/production-environment/tools/kubespray/](https://kubernetes.io/docs/setup/production-environment/tools/kubespray/)
@@ -211,7 +218,8 @@ See also [Network checker](docs/netcheck.md).

## CI Tests

[![Build graphs](https://gitlab.com/kargo-ci/kubernetes-sigs-kubespray/badges/master/build.svg)](https://gitlab.com/kargo-ci/kubernetes-sigs-kubespray/pipelines)
[![Build graphs](https://gitlab.com/kargo-ci/kubernetes-sigs-kubespray/badges/master/pipeline.svg)](https://gitlab.com/kargo-ci/kubernetes-sigs-kubespray/pipelines)

CI/end-to-end tests sponsored by: [CNCF](https://cncf.io), [Packet](https://www.packet.com/), [OVHcloud](https://www.ovhcloud.com/), [ELASTX](https://elastx.se/).

CI/end-to-end tests sponsored by Google (GCE)
See the [test matrix](docs/test_cases.md) for details.
2 changes: 1 addition & 1 deletion contrib/azurerm/roles/generate-inventory/tasks/main.yml
@@ -1,6 +1,6 @@
---

- name: Query Azure VMs
- name: Query Azure VMs # noqa 301
command: azure vm list-ip-address --json {{ azure_resource_group }}
register: vm_list_cmd

6 changes: 3 additions & 3 deletions contrib/azurerm/roles/generate-inventory_2/tasks/main.yml
@@ -1,14 +1,14 @@
---

- name: Query Azure VMs IPs
- name: Query Azure VMs IPs # noqa 301
command: az vm list-ip-addresses -o json --resource-group {{ azure_resource_group }}
register: vm_ip_list_cmd

- name: Query Azure VMs Roles
- name: Query Azure VMs Roles # noqa 301
command: az vm list -o json --resource-group {{ azure_resource_group }}
register: vm_list_cmd

- name: Query Azure Load Balancer Public IP
- name: Query Azure Load Balancer Public IP # noqa 301
command: az network public-ip show -o json -g {{ azure_resource_group }} -n kubernetes-api-pubip
register: lb_pubip_cmd

2 changes: 1 addition & 1 deletion contrib/dind/roles/dind-host/tasks/main.yaml
@@ -69,7 +69,7 @@

# Running systemd-machine-id-setup doesn't create a unique id for each node container on Debian,
# handle manually
- name: Re-create unique machine-id (as we may just get what comes in the docker image), needed by some CNIs for mac address seeding (notably weave)
- name: Re-create unique machine-id (as we may just get what comes in the docker image), needed by some CNIs for mac address seeding (notably weave) # noqa 301
raw: |
echo {{ item | hash('sha1') }} > /etc/machine-id.new
mv -b /etc/machine-id.new /etc/machine-id
@@ -7,7 +7,7 @@
register: glusterfs_ppa_added
when: glusterfs_ppa_use

- name: Ensure GlusterFS client will reinstall if the PPA was just added.
- name: Ensure GlusterFS client will reinstall if the PPA was just added. # noqa 503
apt:
name: "{{ item }}"
state: absent
@@ -7,7 +7,7 @@
register: glusterfs_ppa_added
when: glusterfs_ppa_use

- name: Ensure GlusterFS will reinstall if the PPA was just added.
- name: Ensure GlusterFS will reinstall if the PPA was just added. # noqa 503
apt:
name: "{{ item }}"
state: absent
@@ -6,7 +6,7 @@
- name: "Delete bootstrap Heketi."
command: "{{ bin_dir }}/kubectl delete all,service,jobs,deployment,secret --selector=\"deploy-heketi\""
when: "heketi_resources.stdout|from_json|json_query('items[*]')|length > 0"
- name: "Ensure there is nothing left over."
- name: "Ensure there is nothing left over." # noqa 301
command: "{{ bin_dir }}/kubectl get all,service,jobs,deployment,secret --selector=\"deploy-heketi\" -o=json"
register: "heketi_result"
until: "heketi_result.stdout|from_json|json_query('items[*]')|length == 0"
@@ -13,7 +13,7 @@
- name: "Copy topology configuration into container."
changed_when: false
command: "{{ bin_dir }}/kubectl cp {{ kube_config_dir }}/topology.json {{ initial_heketi_pod_name }}:/tmp/topology.json"
- name: "Load heketi topology."
- name: "Load heketi topology." # noqa 503
when: "render.changed"
command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology load --json=/tmp/topology.json"
register: "load_heketi"
@@ -18,7 +18,7 @@
- name: "Provision database volume."
command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} setup-openshift-heketi-storage"
when: "heketi_database_volume_exists is undefined"
- name: "Copy configuration from pod."
- name: "Copy configuration from pod." # noqa 301
become: true
command: "{{ bin_dir }}/kubectl cp {{ initial_heketi_pod_name }}:/heketi-storage.json {{ kube_config_dir }}/heketi-storage-bootstrap.json"
- name: "Get heketi volume ids."
@@ -10,10 +10,10 @@
template:
src: "topology.json.j2"
dest: "{{ kube_config_dir }}/topology.json"
- name: "Copy topology configuration into container."
- name: "Copy topology configuration into container." # noqa 503
when: "rendering.changed"
command: "{{ bin_dir }}/kubectl cp {{ kube_config_dir }}/topology.json {{ heketi_pod_name }}:/tmp/topology.json"
- name: "Load heketi topology."
- name: "Load heketi topology." # noqa 503
when: "rendering.changed"
command: "{{ bin_dir }}/kubectl exec {{ heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology load --json=/tmp/topology.json"
- name: "Get heketi topology."
@@ -22,15 +22,15 @@
ignore_errors: true
changed_when: false

- name: "Remove volume groups."
- name: "Remove volume groups." # noqa 301
environment:
PATH: "{{ ansible_env.PATH }}:/sbin" # Make sure we can workaround RH / CentOS conservative path management
become: true
command: "vgremove {{ volume_group }} --yes"
with_items: "{{ volume_groups.stdout_lines }}"
loop_control: { loop_var: "volume_group" }

- name: "Remove physical volume from cluster disks."
- name: "Remove physical volume from cluster disks." # noqa 301
environment:
PATH: "{{ ansible_env.PATH }}:/sbin" # Make sure we can workaround RH / CentOS conservative path management
become: true
22 changes: 11 additions & 11 deletions contrib/network-storage/heketi/roles/tear-down/tasks/main.yml
@@ -1,43 +1,43 @@
---
- name: "Remove storage class."
- name: "Remove storage class." # noqa 301
command: "{{ bin_dir }}/kubectl delete storageclass gluster"
ignore_errors: true
- name: "Tear down heketi."
- name: "Tear down heketi." # noqa 301
command: "{{ bin_dir }}/kubectl delete all,service,jobs,deployment,secret --selector=\"glusterfs=heketi-pod\""
ignore_errors: true
- name: "Tear down heketi."
- name: "Tear down heketi." # noqa 301
command: "{{ bin_dir }}/kubectl delete all,service,jobs,deployment,secret --selector=\"glusterfs=heketi-deployment\""
ignore_errors: true
- name: "Tear down bootstrap."
include_tasks: "../provision/tasks/bootstrap/tear-down.yml"
- name: "Ensure there is nothing left over."
- name: "Ensure there is nothing left over." # noqa 301
command: "{{ bin_dir }}/kubectl get all,service,jobs,deployment,secret --selector=\"glusterfs=heketi-pod\" -o=json"
register: "heketi_result"
until: "heketi_result.stdout|from_json|json_query('items[*]')|length == 0"
retries: 60
delay: 5
- name: "Ensure there is nothing left over."
- name: "Ensure there is nothing left over." # noqa 301
command: "{{ bin_dir }}/kubectl get all,service,jobs,deployment,secret --selector=\"glusterfs=heketi-deployment\" -o=json"
register: "heketi_result"
until: "heketi_result.stdout|from_json|json_query('items[*]')|length == 0"
retries: 60
delay: 5
- name: "Tear down glusterfs."
- name: "Tear down glusterfs." # noqa 301
command: "{{ bin_dir }}/kubectl delete daemonset.extensions/glusterfs"
ignore_errors: true
- name: "Remove heketi storage service."
- name: "Remove heketi storage service." # noqa 301
command: "{{ bin_dir }}/kubectl delete service heketi-storage-endpoints"
ignore_errors: true
- name: "Remove heketi gluster role binding"
- name: "Remove heketi gluster role binding" # noqa 301
command: "{{ bin_dir }}/kubectl delete clusterrolebinding heketi-gluster-admin"
ignore_errors: true
- name: "Remove heketi config secret"
- name: "Remove heketi config secret" # noqa 301
command: "{{ bin_dir }}/kubectl delete secret heketi-config-secret"
ignore_errors: true
- name: "Remove heketi db backup"
- name: "Remove heketi db backup" # noqa 301
command: "{{ bin_dir }}/kubectl delete secret heketi-db-backup"
ignore_errors: true
- name: "Remove heketi service account"
- name: "Remove heketi service account" # noqa 301
command: "{{ bin_dir }}/kubectl delete serviceaccount heketi-service-account"
ignore_errors: true
- name: "Get secrets"
4 changes: 3 additions & 1 deletion docs/_sidebar.md
@@ -17,6 +17,8 @@
* [Kube Router](docs/kube-router.md)
* [Weave](docs/weave.md)
* [Multus](docs/multus.md)
* Ingress
* [Ambassador](docs/ambassador.md)
* [Cloud providers](docs/cloud.md)
* [AWS](docs/aws.md)
* [Azure](docs/azure.md)
@@ -26,7 +28,7 @@
* Operating Systems
* [Debian](docs/debian.md)
* [Coreos](docs/coreos.md)
* [Fedora CoreOS](docs/fcos.md)
* [Fedora CoreOS](docs/fcos.md)
* [OpenSUSE](docs/opensuse.md)
* Advanced
* [Proxy](/docs/proxy.md)
86 changes: 86 additions & 0 deletions docs/ambassador.md
@@ -0,0 +1,86 @@

# Ambassador

The Ambassador API Gateway provides all the functionality of a traditional ingress controller
(e.g., path-based routing) while exposing many additional capabilities such as authentication,
URL rewriting, CORS, rate limiting, and automatic metrics collection.

## Installation

### Configuration

* `ingress_ambassador_namespace` (default: `ambassador`): namespace for installing Ambassador.
* `ingress_ambassador_update_window` (default: `0 0 * * SUN`): _crontab_-like expression
  for specifying when the Operator should try to update the Ambassador API Gateway.
* `ingress_ambassador_version` (default: `*`): SemVer rule for versions allowed for
  installation/updates.
* `ingress_ambassador_secure_port` (default: `443`): HTTPS port to listen on.
* `ingress_ambassador_insecure_port` (default: `80`): HTTP port to listen on.
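
These settings map onto inventory variables. A minimal sketch of overriding them in group vars — the `ingress_ambassador_enabled` flag and the file path are assumptions here, so check the sample inventory for the exact names:

```yaml
# inventory/sample/group_vars/k8s-cluster/addons.yml (path assumed)
ingress_ambassador_enabled: true        # assumed flag that turns the addon on
ingress_ambassador_namespace: "ambassador"
ingress_ambassador_update_window: "0 0 * * SUN"
ingress_ambassador_version: "*"
ingress_ambassador_secure_port: 443
ingress_ambassador_insecure_port: 80
```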

### Ambassador Operator

This Ambassador addon deploys the Ambassador Operator, which in turn will install Ambassador in
a Kubernetes cluster.

The Ambassador Operator is a Kubernetes Operator that controls Ambassador's complete lifecycle
in your cluster, automating many of the repeatable tasks you would otherwise have to perform
yourself. Once installed, the Operator will complete installations and seamlessly upgrade to new
versions of Ambassador as they become available.
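
For context, the Operator reconciles an `AmbassadorInstallation` custom resource, and the update window and version rule above end up in its spec. A sketch of what such a resource looks like (API group and field names taken from the Ambassador Operator docs, so treat this as illustrative):

```yaml
apiVersion: getambassador.io/v2
kind: AmbassadorInstallation
metadata:
  name: ambassador
  namespace: ambassador
spec:
  version: "*"                 # SemVer rule, mirrors ingress_ambassador_version
  updateWindow: "0 0 * * SUN"  # mirrors ingress_ambassador_update_window
```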

## Usage

The following example creates simple http-echo services and an `Ingress` object
to route to these services.

Note that the Ambassador API Gateway will automatically load balance `Ingress` resources
that include the annotation `kubernetes.io/ingress.class=ambassador`. All other
resources are ignored.

```yaml
kind: Pod
apiVersion: v1
metadata:
name: foo-app
labels:
app: foo
spec:
containers:
- name: foo-app
image: hashicorp/http-echo
args:
- "-text=foo"
---
kind: Service
apiVersion: v1
metadata:
name: foo-service
spec:
selector:
app: foo
ports:
# Default port used by the image
- port: 5678
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: example-ingress
annotations:
kubernetes.io/ingress.class: ambassador
spec:
rules:
- http:
paths:
- path: /foo
backend:
serviceName: foo-service
servicePort: 5678
```

Now you can test that the ingress is working with curl:

```console
$ export AMB_IP=$(kubectl get service ambassador -n ambassador -o 'go-template={{range .status.loadBalancer.ingress}}{{print .ip "\n"}}{{end}}')
$ curl $AMB_IP/foo
foo
```
28 changes: 22 additions & 6 deletions docs/openstack.md
@@ -1,8 +1,25 @@
OpenStack
=========

The in-tree cloud provider
--------------------------
# OpenStack

## Known compatible public clouds

Kubespray has been tested on a number of OpenStack Public Clouds including (in alphabetical order):

- [Auro](https://auro.io/)
- [Betacloud](https://www.betacloud.io/)
- [CityCloud](https://www.citycloud.com/)
- [DreamHost](https://www.dreamhost.com/cloud/computing/)
- [ELASTX](https://elastx.se/)
- [EnterCloudSuite](https://www.entercloudsuite.com/)
- [FugaCloud](https://fuga.cloud/)
- [Open Telekom Cloud](https://cloud.telekom.de/): requires setting the variable `wait_for_floatingip = "true"` in your cluster.tfvars
- [OVHcloud](https://www.ovhcloud.com/)
- [Rackspace](https://www.rackspace.com/)
- [Ultimum](https://ultimum.io/)
- [VexxHost](https://vexxhost.com/)
- [Zetta](https://www.zetta.io/)

## The in-tree cloud provider

To deploy Kubespray on [OpenStack](https://www.openstack.org/), uncomment the `cloud_provider` option in `group_vars/all/all.yml` and set it to `openstack`.
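
A minimal sketch of that edit (only this line needs to change; the rest of the file stays as shipped):

```yaml
# group_vars/all/all.yml
cloud_provider: openstack
```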

@@ -62,8 +79,7 @@ If all the VMs in the tenant correspond to Kubespray deployment, you can "sweep

Now you can finally run the playbook.

The external cloud provider
---------------------------
## The external cloud provider

The in-tree cloud provider is deprecated and will be removed in a future version of Kubernetes. The target release for removing all remaining in-tree cloud providers is set to 1.21.

4 changes: 2 additions & 2 deletions extra_playbooks/migrate_openstack_provider.yml
@@ -16,13 +16,13 @@
src: get_cinder_pvs.sh
dest: /tmp
mode: u+rwx
- name: Get PVs provisioned by in-tree cloud provider
- name: Get PVs provisioned by in-tree cloud provider # noqa 301
command: /tmp/get_cinder_pvs.sh
register: pvs
- name: Remove get_cinder_pvs.sh
file:
path: /tmp/get_cinder_pvs.sh
state: absent
- name: Rewrite the "pv.kubernetes.io/provisioned-by" annotation
- name: Rewrite the "pv.kubernetes.io/provisioned-by" annotation # noqa 301
command: "{{ bin_dir }}/kubectl annotate --overwrite pv {{ item }} pv.kubernetes.io/provisioned-by=cinder.csi.openstack.org"
loop: "{{ pvs.stdout_lines | list }}"