
Update k8s deployments and add cluster-wide configs #157

Merged (7 commits) on Apr 17, 2020
2 changes: 2 additions & 0 deletions .gitignore
@@ -1,3 +1,5 @@
**/.terraform
*.tfstate
*.tfstate.*
sql_credentials.json
key.json
14 changes: 9 additions & 5 deletions WCP-WS/deployment.yaml
@@ -17,7 +17,7 @@ spec:
spec:
containers:
- name: study-meta-data
image: gcr.io/heroes-hat-dev/study-meta-data:latest
image: gcr.io/heroes-hat-dev-apps/study-meta-data:latest
env:
- name: DB_USER
valueFrom:
@@ -35,17 +35,21 @@ spec:
name: cloudsql-db-credentials
key: dbname
- name: RESPONSE_SERVER
value: "35.196.119.71:60000"
value: "response-server-np:50000"
- name: USER_REGISTRATION_SERVER
value: "35.196.79.108:60000"
value: "user-registration-server-np:60000"
- name: STUDY_DESIGNER
value: "34.74.173.54:50000"
value: "study-designer-np:50000"
ports:
- containerPort: 8080
readinessProbe:
httpGet:
path: /StudyMetaData/ping
port: 8080
- name: cloudsql-proxy
image: gcr.io/cloudsql-docker/gce-proxy:latest
command: ["/cloud_sql_proxy",
"-instances=heroes-hat-dev:us-east1:my-studies2=tcp:3306",
"-instances=heroes-hat-dev-data:us-central1:my-studies-1=tcp:3306",
"-credential_file=/secrets/cloudsql/sql_credentials.json"]
volumeMounts:
- name: my-secrets-volume
10 changes: 6 additions & 4 deletions WCP-WS/service.yaml
@@ -1,11 +1,13 @@
# NodePort service for the study metadata server.
# See https://cloud.google.com/kubernetes-engine/docs/how-to/exposing-apps#creating_a_service_of_type_loadbalancer
apiVersion: v1
kind: Service
metadata:
name: study-meta-data-lb
name: study-meta-data-np
# Add Container Native Load Balancing.
# See https://cloud.google.com/kubernetes-engine/docs/how-to/container-native-load-balancing#create_service
annotations:
cloud.google.com/neg: '{"ingress": true}' # Creates a NEG after an Ingress is created
spec:
type: LoadBalancer
type: NodePort
selector:
app: study-meta-data
ports:
8 changes: 4 additions & 4 deletions WCP/deployment.yaml
@@ -16,7 +16,7 @@ spec:
spec:
containers:
- name: study-designer
image: gcr.io/heroes-hat-dev/study-designer:latest
image: gcr.io/heroes-hat-dev-apps/study-designer:latest
env:
- name: DB_USER
valueFrom:
@@ -34,15 +34,15 @@ spec:
name: cloudsql-db-credentials
key: dbname
- name: REGISTRATION_SERVER_URL
value: "35.196.79.108:60000"
value: "user-registration-server-np:60000"
- name: RESPONSE_SERVER_URL
value: "35.196.119.71:60000"
value: "response-server-np:50000"
ports:
- containerPort: 8080
- name: cloudsql-proxy
image: gcr.io/cloudsql-docker/gce-proxy:latest
command: ["/cloud_sql_proxy",
"-instances=heroes-hat-dev:us-east1:my-studies2=tcp:3306",
"-instances=heroes-hat-dev-data:us-central1:my-studies-1=tcp:3306",
Contributor:

This seems to be different from the previous instance; both the database name and the region have changed. Is that intentional?

Nit: it would be great if we could extract all these values out of the yaml files and make the deployment configurable.

Author:

They are different: these are for the project we've been deploying to, not the main heroes-hat-dev. These would presumably have to change before going to UNC, to point to their particular project(s).

I'll look into making this configurable, though I'm not very familiar with customizable Kubernetes configs. Maybe we need something like kustomize, but I haven't explored it yet.

@MartinPetkov (Author), Apr 17, 2020:

So I did a small literature review, and none of the options seem great. In particular, this is a great overview.

  1. kustomize
  • Seems very heavyweight and complex to use.
  • Would not be able to replace e.g. the path to the database in the cloud_sql_proxy command.
  2. yq
  • Easier to use; could probably be used to replace image paths.
  • Also cannot replace the path to the database.
  3. Helm charts and templated yaml files
  • Could replace fields freely.
  • Requires all files to be in a particular "chart" structure.
  • Files are not usable on their own, i.e. you can't kubectl apply -f an untemplated file.
  • You need to use Helm, which adds yet another tool and dependency.
  4. Bash scripting and sed
  • This honestly seems like the most straightforward and simple approach, albeit the least robust.
  • It would be two sed commands replacing e.g. gcr.io/heroes-hat-dev with gcr.io/, and likewise the -instances=heroes-hat-dev-data:us-central1:my-studies-1=tcp:3306 flags, in all the deployment.yaml files.
  5. Using the Terraform Engine
  • The Terraform Engine would be able to do this by generating the files from templates, but it's not ready at the moment.

All that said, I think it's simplest to just replace the values (either manually or via sed) before attempting to deploy in the final GCP project.

Let me know if you'd still like to discuss, or if I can resolve.
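For concreteness, the sed approach from option 4 could be sketched roughly as below. It runs against a scratch copy of a deployment.yaml so nothing real is touched; the target project and Cloud SQL instance names (unc-project-apps, unc-project-data:...) are hypothetical placeholders, not real values from this repo.

```shell
#!/usr/bin/env bash
# Sketch of the sed-based substitution discussed above. Runs on a scratch
# copy; NEW_PROJECT and NEW_INSTANCE are hypothetical placeholder values.
set -e

tmpdir="$(mktemp -d)"
cat > "${tmpdir}/deployment.yaml" <<'EOF'
image: gcr.io/heroes-hat-dev-apps/study-designer:latest
command: ["-instances=heroes-hat-dev-data:us-central1:my-studies-1=tcp:3306"]
EOF

NEW_PROJECT="unc-project-apps"                         # hypothetical
NEW_INSTANCE="unc-project-data:us-east1:my-studies-1"  # hypothetical

# Rewrite the image project and the Cloud SQL instance string in every
# deployment.yaml under the scratch directory.
find "${tmpdir}" -name "deployment.yaml" | while read -r f; do
  sed -i \
    -e "s|gcr.io/heroes-hat-dev-apps|gcr.io/${NEW_PROJECT}|g" \
    -e "s|heroes-hat-dev-data:us-central1:my-studies-1|${NEW_INSTANCE}|g" \
    "${f}"
done

cat "${tmpdir}/deployment.yaml"
```

Running the same two substitutions over the real deployment.yaml files (minus the scratch-copy setup) would be all the "configurability" needed before deploying to another project.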

"-credential_file=/secrets/cloudsql/sql_credentials.json"]
volumeMounts:
- name: my-secrets-volume
8 changes: 6 additions & 2 deletions WCP/service.yaml
@@ -3,9 +3,13 @@
apiVersion: v1
kind: Service
metadata:
name: study-designer-lb
name: study-designer-np
# Add Container Native Load Balancing.
# See https://cloud.google.com/kubernetes-engine/docs/how-to/container-native-load-balancing#create_service
annotations:
cloud.google.com/neg: '{"ingress": true}' # Creates a NEG after an Ingress is created
spec:
type: LoadBalancer
type: NodePort
selector:
app: study-designer
ports:
18 changes: 12 additions & 6 deletions auth-server-ws/deployment.yaml
@@ -16,31 +16,37 @@ spec:
spec:
containers:
- name: auth-server-ws
image: gcr.io/heroes-hat-dev/auth-server-ws:latest
image: gcr.io/heroes-hat-dev-apps/auth-server-ws:latest
env:
- name: DB_USER
valueFrom:
secretKeyRef:
name: auth-server-db-credentials
name: cloudsql-db-credentials
key: username
- name: DB_PASS
valueFrom:
secretKeyRef:
name: auth-server-db-credentials
name: cloudsql-db-credentials
key: password
- name: DB_NAME
valueFrom:
secretKeyRef:
name: auth-server-db-credentials
name: cloudsql-db-credentials
key: dbname
ports:
- containerPort: 8080
readinessProbe:
httpGet:
path: /AuthServer/healthCheck
port: 8080
- name: cloudsql-proxy
image: gcr.io/cloudsql-docker/gce-proxy:latest
command: ['/cloud_sql_proxy', '-instances=heroes-hat-dev:us-east1:my-studies2=tcp:3306', '-credential_file=/secrets/cloudsql/sql_credentials.json']
command: ['/cloud_sql_proxy', '-instances=heroes-hat-dev-data:us-central1:my-studies-1=tcp:3306', '-credential_file=/secrets/cloudsql/sql_credentials.json']
volumeMounts:
- name: my-secrets-volume
mountPath: /secrets/cloudsql
readOnly: true
volumes:
- name: my-secrets-volume
secret:
secretName: auth-server-instance-credentials
secretName: cloudsql-instance-credentials
9 changes: 7 additions & 2 deletions auth-server-ws/service.yaml
@@ -4,12 +4,17 @@
apiVersion: v1
kind: Service
metadata:
name: auth-server-lb
name: auth-server-np
# Add Container Native Load Balancing.
# See https://cloud.google.com/kubernetes-engine/docs/how-to/container-native-load-balancing#create_service
annotations:
cloud.google.com/neg: '{"ingress": true}' # Creates a NEG after an Ingress is created
spec:
type: LoadBalancer
type: NodePort
selector:
app: auth-server
ports:
- protocol: TCP
port: 50000
targetPort: 8080
name: http
7 changes: 7 additions & 0 deletions kubernetes/heroes-hat-cert.yaml
@@ -0,0 +1,7 @@
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
name: heroes-hat-cert
spec:
domains:
- heroes-hat.rocketturtle.net
MartinPetkov marked this conversation as resolved.
15 changes: 15 additions & 0 deletions kubernetes/ingress.yaml
@@ -0,0 +1,15 @@
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: heroes-hat-dev
annotations:
kubernetes.io/ingress.global-static-ip-name: heroes-hat
networking.gke.io/managed-certificates: heroes-hat-cert
spec:
rules:
- http:
paths:
- path: /AuthServer/*
backend:
serviceName: auth-server-np
servicePort: 50000
56 changes: 56 additions & 0 deletions kubernetes/kubeapply.sh
@@ -0,0 +1,56 @@
#!/usr/bin/env bash

# Short helper script to run repetitive commands for Kubernetes deployments.
# Args:
# kubeapply.sh <cluster>
#
# It does the following:
# * Activates the cluster for kubectl via `gcloud container clusters get-credentials`.
# * Applies the pod security policies.
# * Applies the heroes-hat cert configuration.
# * Applies all services from children of the parent folder.
# * Applies the ingress configuration.
#
# The services and deployments should be applied separately.
#
# Requires existence of files ./pod_security_policy{,-istio}.yaml.
# Currently hardcoded to use projects "heroes-hat-dev-{apps,data}".
#
# Run like:
# $ ./kubernetes/kubeapply.sh heroes-hat-cluster

if [ "$#" -ne 1 ]; then
echo 'Please provide exactly 1 argument: <cluster>'
exit 1
fi

cluster="${1}"
shift 1

set -e

serviceaccount="$(gcloud container clusters describe "${cluster}" --region="us-east1" --project="heroes-hat-dev-apps" --format='value(nodeConfig.serviceAccount)')"

echo "=== Switching kubectl to cluster ${cluster} ==="
read -p "Press enter to continue"
gcloud container clusters get-credentials "${cluster}" --region="us-east1" --project="heroes-hat-dev-apps"

for policy in $(find . -name "pod_security_policy*.yaml"); do
echo "=== Applying ${policy} ==="
read -p "Press enter to continue"
kubectl apply -f "${policy}"
done

echo '=== Applying heroes-hat-cert.yaml ==='
read -p "Press enter to continue"
kubectl apply -f ./heroes-hat-cert.yaml

for service in $(find .. -name "service.yaml"); do
echo "=== Applying service ${service} ==="
read -p "Press enter to continue"
kubectl apply -f "${service}"
done

echo '=== Applying ingress.yaml ==='
read -p "Press enter to continue"
kubectl apply -f ./ingress.yaml
60 changes: 60 additions & 0 deletions kubernetes/pod_security_policy-istio.yaml
@@ -0,0 +1,60 @@
# Taken from https://github.com/istio/istio/issues/6806#issuecomment-406230278

# Istio containers need to run as root with a pretty loose policy.
# See: https://github.com/istio/istio/issues/6806

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: pod-security-policy-istio
annotations:
seccomp.security.alpha.kubernetes.io/allowedProfileNames: 'docker/default,runtime/default'
seccomp.security.alpha.kubernetes.io/defaultProfileName: 'runtime/default'
spec:
privileged: false
allowedCapabilities:
- '*'
volumes:
- '*'
# Required to prevent escalations to root.
allowPrivilegeEscalation: true
runAsUser:
rule: 'RunAsAny'
seLinux:
rule: 'RunAsAny'
supplementalGroups:
rule: 'RunAsAny'
fsGroup:
rule: 'RunAsAny'
readOnlyRootFilesystem: false
---
# ClusterRole for reading the policy.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: pod-security-policy-clusterrole-istio
rules:
- apiGroups: ['policy']
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames:
- pod-security-policy-istio
---
# Binding for reading the policy via the role.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: pod-security-policy-rolebinding-istio
namespace: istio-system
roleRef:
kind: ClusterRole
name: pod-security-policy-clusterrole-istio
apiGroup: rbac.authorization.k8s.io
subjects:
# Authorize all service accounts in the Istio namespace.
- kind: Group
apiGroup: rbac.authorization.k8s.io
name: system:serviceaccounts
- kind: Group
apiGroup: rbac.authorization.k8s.io
name: system:authenticated
84 changes: 84 additions & 0 deletions kubernetes/pod_security_policy.yaml
@@ -0,0 +1,84 @@
# Taken from https://kubernetes.io/docs/concepts/policy/pod-security-policy/#example-policies

# This is an example of a restrictive policy that requires users to run as an
# unprivileged user, blocks possible escalations to root, and requires use of
# several security mechanisms.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: pod-security-policy
annotations:
seccomp.security.alpha.kubernetes.io/allowedProfileNames: 'docker/default,runtime/default'
apparmor.security.beta.kubernetes.io/allowedProfileNames: 'runtime/default'
seccomp.security.alpha.kubernetes.io/defaultProfileName: 'runtime/default'
apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default'
spec:
privileged: false
# Required to prevent escalations to root.
allowPrivilegeEscalation: false
# This is redundant with non-root + disallow privilege escalation,
# but we can provide it for defense in depth.
requiredDropCapabilities:
- ALL
# Allow core volume types.
volumes:
- 'configMap'
- 'emptyDir'
- 'projected'
- 'secret'
- 'downwardAPI'
# Assume that persistentVolumes set up by the cluster admin are safe to use.
- 'persistentVolumeClaim'
hostNetwork: false
hostIPC: false
hostPID: false
runAsUser:
# Require the container to run without root privileges.
# The FDA MyStudies containers run as root.
#rule: 'MustRunAsNonRoot'
rule: 'RunAsAny'
seLinux:
# This policy assumes the nodes are using AppArmor rather than SELinux.
rule: 'RunAsAny'
supplementalGroups:
rule: 'MustRunAs'
ranges:
# Forbid adding the root group.
- min: 1
max: 65535
fsGroup:
rule: 'MustRunAs'
ranges:
# Forbid adding the root group.
- min: 1
max: 65535
readOnlyRootFilesystem: false
---
# Role for reading the policy.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: pod-security-policy-clusterrole
namespace: default
rules:
- apiGroups: ['policy']
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames:
- pod-security-policy
---
# Binding for reading the policy via the role.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: pod-security-policy-rolebinding
namespace: default
roleRef:
kind: ClusterRole
name: pod-security-policy-clusterrole
apiGroup: rbac.authorization.k8s.io
subjects:
# Authorize all service accounts in the namespace.
- kind: Group
apiGroup: rbac.authorization.k8s.io
name: system:serviceaccounts