diff --git a/01-prerequisites.md b/01-prerequisites.md index 6327632c..373534a7 100644 --- a/01-prerequisites.md +++ b/01-prerequisites.md @@ -53,9 +53,11 @@ This is the starting point for the instructions on deploying the [AKS Baseline r > :twisted_rightwards_arrows: If you have forked this reference implementation repo, you'll be able to customize some of the files and commands for a more personalized and production-like experience; ensure references to this git repository mentioned throughout the walk-through are updated to use your own fork. + > Make sure you use HTTPS (and not SSH) to clone the repository. (The remote URL will later be used to configure GitOps using Flux which requires an HTTPS endpoint to work properly.) + ```bash - git clone https://github.com/mspnp/aks-secure-baseline.git - cd aks-secure-baseline + git clone https://github.com/mspnp/aks-baseline.git + cd aks-baseline ``` > :bulb: The steps shown here and elsewhere in the reference implementation use Bash shell commands. On Windows, you can use the [Windows Subsystem for Linux](https://docs.microsoft.com/windows/wsl/about) to run Bash. diff --git a/03-aad.md b/03-aad.md index 87949e8a..62485ba9 100644 --- a/03-aad.md +++ b/03-aad.md @@ -1,107 +1,119 @@ -# Prep for Azure Active Directory Integration - -In the prior step, you [generated the user-facing TLS certificate](./02-ca-certificates.md); now we'll prepare Azure AD for Kubernetes role-based access control (RBAC). This will ensure you have an Azure AD security group(s) and user(s) assigned for group-based Kubernetes control plane access. - -## Expected results - -Following the steps below you will result in an Azure AD configuration that will be used for Kubernetes control plane (Cluster API) authorization. - -| Object | Purpose | -|------------------------------------|---------------------------------------------------------| -| A Cluster Admin Security Group | Will be mapped to `cluster-admin` Kubernetes role. 
| -| A Cluster Admin User | Represents at least one break-glass cluster admin user. | -| Cluster Admin Group Membership | Association between the Cluster Admin User(s) and the Cluster Admin Security Group. | -| A Namespace Reader Security Group | Represents users that will have read-only access to a specific namespace in the cluster. | -| _Additional Security Groups_ | _Optional._ A security group (and its memberships) for the other built-in and custom Kubernetes roles you plan on using. | - -## Steps - -> :book: The Contoso Bicycle Azure AD team requires all admin access to AKS clusters be security-group based. This applies to the new AKS cluster that is being built for Application ID a0008 under the BU0001 business unit. Kubernetes RBAC will be AAD-backed and access granted based on users' AAD group membership(s). - -1. Query and save your Azure subscription's tenant id. - - ```bash - export TENANTID_AZURERBAC_AKS_BASELINE=$(az account show --query tenantId -o tsv) - echo TENANTID_AZURERBAC_AKS_BASELINE: $TENANTID_AZURERBAC_AKS_BASELINE - ``` - -1. Playing the role as the Contoso Bicycle Azure AD team, login into the tenant where Kubernetes Cluster API authorization will be associated with. - - ```bash - az login -t --allow-no-subscriptions - export TENANTID_K8SRBAC_AKS_BASELINE=$(az account show --query tenantId -o tsv) - echo TENANTID_K8SRBAC_AKS_BASELINE: $TENANTID_K8SRBAC_AKS_BASELINE - ``` - -1. Create/identify the Azure AD security group that is going to map to the [Kubernetes Cluster Admin](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles) role `cluster-admin`. - - If you already have a security group that is appropriate for your cluster's admin service accounts, use that group and skip this step. If using your own group or your Azure AD administrator created one for you to use; you will need to update the group name and ID throughout the reference implementation. 
- - ```bash - export AADOBJECTID_GROUP_CLUSTERADMIN_AKS_BASELINE=$(az ad group create --display-name 'cluster-admins-bu0001a000800' --mail-nickname 'cluster-admins-bu0001a000800' --description "Principals in this group are cluster admins in the bu0001a000800 cluster." --query objectId -o tsv) - echo AADOBJECTID_GROUP_CLUSTERADMIN_AKS_BASELINE: $AADOBJECTID_GROUP_CLUSTERADMIN_AKS_BASELINE - ``` - - This Azure AD group object ID will be used later while creating the cluster. This way, once the cluster gets deployed the new group will get the proper Cluster Role bindings in Kubernetes. - -1. Create a "break-glass" cluster administrator user for your AKS cluster. - - > :book: The organization knows the value of having a break-glass admin user for their critical infrastructure. The app team requests a cluster admin user and Azure AD Admin team proceeds with the creation of the user in Azure AD. - - ```bash - TENANTDOMAIN_K8SRBAC=$(az ad signed-in-user show --query 'userPrincipalName' -o tsv | cut -d '@' -f 2 | sed 's/\"//') - AADOBJECTNAME_USER_CLUSTERADMIN=bu0001a000800-admin - AADOBJECTID_USER_CLUSTERADMIN=$(az ad user create --display-name=${AADOBJECTNAME_USER_CLUSTERADMIN} --user-principal-name ${AADOBJECTNAME_USER_CLUSTERADMIN}@${TENANTDOMAIN_K8SRBAC} --force-change-password-next-login --password ChangeMebu0001a0008AdminChangeMe --query objectId -o tsv) - echo TENANTDOMAIN_K8SRBAC: $TENANTDOMAIN_K8SRBAC - echo AADOBJECTNAME_USER_CLUSTERADMIN: $AADOBJECTNAME_USER_CLUSTERADMIN - echo AADOBJECTID_USER_CLUSTERADMIN: $AADOBJECTID_USER_CLUSTERADMIN - ``` - -1. Add the cluster admin user(s) to the cluster admin security group. - - > :book: The recently created break-glass admin user is added to the Kubernetes Cluster Admin group from Azure AD. After this step the Azure AD Admin team will have finished the app team's request. - - ```bash - az ad group member add -g $AADOBJECTID_GROUP_CLUSTERADMIN_AKS_BASELINE --member-id $AADOBJECTID_USER_CLUSTERADMIN - ``` - -1. 
Create/identify the Azure AD security group that is going to be a namespace reader. - - ```bash - export AADOBJECTID_GROUP_A0008_READER_AKS_BASELINE=$(az ad group create --display-name 'cluster-ns-a0008-readers-bu0001a000800' --mail-nickname 'cluster-ns-a0008-readers-bu0001a000800' --description "Principals in this group are readers of namespace a0008 in the bu0001a000800 cluster." --query objectId -o tsv) - echo AADOBJECTID_GROUP_A0008_READER_AKS_BASELINE: $AADOBJECTID_GROUP_A0008_READER_AKS_BASELINE - ``` - -## Kubernetes RBAC backing store - -AKS supports backing Kubernetes with Azure AD in two different modalities. One is direct association between Azure AD and Kubernetes `ClusterRoleBindings`/`RoleBindings` in the cluster. This is possible no matter if the Azure AD tenant you wish to use to back your Kubernetes RBAC is the same or different than the Tenant backing your Azure resources. If however the tenant that is backing your Azure resources (Azure RBAC source) is the same tenant you plan on using to back your Kubernetes RBAC, then instead you can add a layer of indirection between Azure AD and your cluster by using Azure RBAC instead of direct cluster `RoleBinding` manipulation. When performing this walk-through, you may have had no choice but to associate the cluster with another tenant (due to the elevated permissions necessary in Azure AD to manage groups and users); but when you take this to production be sure you're using Azure RBAC as your Kubernetes RBAC backing store if the tenants are the same. Both cases still leverage integrated authentication between Azure AD and AKS, Azure RBAC simply elevates this control to Azure RBAC instead of direct yaml-based management within the cluster which usually will align better with your organization's governance strategy. 
- -### Azure RBAC _[Preferred]_ - -If you are using a single tenant for this walk-through, the cluster deployment step later will take care of the necessary role assignments for the groups created above. Specifically, in the above steps, you created the Azure AD security group `cluster-ns-a0008-readers-bu0001a000800` that is going to be a namespace reader in namespace `a0008` and the Azure AD security group `cluster-admins-bu0001a000800` is going to contain cluster admins. Those group Object IDs will be associated to the 'Azure Kubernetes Service RBAC Reader' and 'Azure Kubernetes Service RBAC Cluster Admin' RBAC role respectively, scoped to their proper level within the cluster. - -Using Azure RBAC as your authorization approach is ultimately preferred as it allows for the unified management and access control across Azure Resources, AKS, and Kubernetes resources. At the time of this writing there are four [Azure RBAC roles](https://docs.microsoft.com/azure/aks/manage-azure-rbac#create-role-assignments-for-users-to-access-cluster) that represent typical cluster access patterns. - -### Direct Kubernetes RBAC management _[Alternative]_ - -If you instead wish to not use Azure RBAC as your Kubernetes RBAC authorization mechanism, either due to the intentional use of disparate Azure AD tenants or another business justifications, you can then manage these RBAC assignments via direct `ClusterRoleBinding`/`RoleBinding` associations. This method is also useful when the four Azure RBAC roles are not granular enough for your desired permission model. - -1. Set up additional Kubernetes RBAC associations. _Optional, fork required._ - - > :book: The team knows there will be more than just cluster admins that need group-managed access to the cluster. Out of the box, Kubernetes has other roles like _admin_, _edit_, and _view_ which can also be mapped to Azure AD Groups for use both at namespace and at the cluster level. 
Likewise custom roles can be created which need to be mapped to Azure AD Groups. - - In the [`cluster-rbac.yaml` file](./cluster-manifests/cluster-rbac.yaml) and the various namespaced [`rbac.yaml files`](./cluster-manifests/cluster-baseline-settings/rbac.yaml), you can uncomment what you wish and replace the `` placeholders with corresponding new or existing Azure AD groups that map to their purpose for this cluster or namespace. **You do not need to perform this action for this walkthrough**; they are only here for your reference. - -### Save your work in-progress - -```bash -# run the saveenv.sh script at any time to save environment variables created above to aks_baseline.env -./saveenv.sh - -# if your terminal session gets reset, you can source the file to reload the environment variables -# source aks_baseline.env -``` - -### Next step - -:arrow_forward: [Deploy the hub-spoke network topology](./04-networking.md) +# Prep for Azure Active Directory Integration + +In the prior step, you [generated the user-facing TLS certificate](./02-ca-certificates.md); now we'll prepare Azure AD for Kubernetes role-based access control (RBAC). This will ensure you have an Azure AD security group(s) and user(s) assigned for group-based Kubernetes control plane access. + +## Expected results + +Following the steps below you will result in an Azure AD configuration that will be used for Kubernetes control plane (Cluster API) authorization. + +| Object | Purpose | +|------------------------------------|---------------------------------------------------------| +| A Cluster Admin Security Group | Will be mapped to `cluster-admin` Kubernetes role. | +| A Cluster Admin User | Represents at least one break-glass cluster admin user. | +| Cluster Admin Group Membership | Association between the Cluster Admin User(s) and the Cluster Admin Security Group. | +| A Namespace Reader Security Group | Represents users that will have read-only access to a specific namespace in the cluster. 
|
+| _Additional Security Groups_ | _Optional._ A security group (and its memberships) for the other built-in and custom Kubernetes roles you plan on using. |
+
+## Steps
+
+> :book: The Contoso Bicycle Azure AD team requires that all admin access to AKS clusters be security-group based. This applies to the new AKS cluster that is being built for Application ID a0008 under the BU0001 business unit. Kubernetes RBAC will be AAD-backed and access granted based on users' AAD group membership(s).
+
+1. Query and save your Azure subscription's tenant ID.
+
+   ```bash
+   export TENANTID_AZURERBAC_AKS_BASELINE=$(az account show --query tenantId -o tsv)
+   echo TENANTID_AZURERBAC_AKS_BASELINE: $TENANTID_AZURERBAC_AKS_BASELINE
+   ```
+
+1. Playing the role of the Contoso Bicycle Azure AD team, log in to the tenant that Kubernetes Cluster API authorization will be associated with.
+
+   > :bulb: Skip the `az login` command if you plan to use your current user account's Azure AD tenant for Kubernetes authorization.
+
+   ```bash
+   az login -t --allow-no-subscriptions
+   export TENANTID_K8SRBAC_AKS_BASELINE=$(az account show --query tenantId -o tsv)
+   echo TENANTID_K8SRBAC_AKS_BASELINE: $TENANTID_K8SRBAC_AKS_BASELINE
+   ```
+
+1. Create/identify the Azure AD security group that is going to map to the [Kubernetes Cluster Admin](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles) role `cluster-admin`.
+
+   If you already have a security group that is appropriate for your cluster's admin service accounts, use that group and don't create a new one. If you use your own group, or your Azure AD administrator created one for you, you will need to update the group name and ID throughout the reference implementation.
+
+   ```bash
+   export AADOBJECTID_GROUP_CLUSTERADMIN_AKS_BASELINE=[Paste your existing cluster admin group Object ID here.]
+   echo AADOBJECTID_GROUP_CLUSTERADMIN_AKS_BASELINE: $AADOBJECTID_GROUP_CLUSTERADMIN_AKS_BASELINE
+   ```
+
+   If you want to create a new one instead, you can use the following code:
+
+   ```bash
+   export AADOBJECTID_GROUP_CLUSTERADMIN_AKS_BASELINE=$(az ad group create --display-name 'cluster-admins-bu0001a000800' --mail-nickname 'cluster-admins-bu0001a000800' --description "Principals in this group are cluster admins in the bu0001a000800 cluster." --query objectId -o tsv)
+   echo AADOBJECTID_GROUP_CLUSTERADMIN_AKS_BASELINE: $AADOBJECTID_GROUP_CLUSTERADMIN_AKS_BASELINE
+   ```
+
+   This Azure AD group object ID will be used later while creating the cluster. This way, once the cluster gets deployed, the new group will get the proper Cluster Role bindings in Kubernetes.
+
+1. Create a "break-glass" cluster administrator user for your AKS cluster.
+
+   > :book: The organization knows the value of having a break-glass admin user for their critical infrastructure. The app team requests a cluster admin user and the Azure AD Admin team proceeds with the creation of the user in Azure AD.
+
+   You should skip this step if the group identified in the former step already has a cluster admin assigned as a member.
+
+   ```bash
+   TENANTDOMAIN_K8SRBAC=$(az ad signed-in-user show --query 'userPrincipalName' -o tsv | cut -d '@' -f 2 | sed 's/\"//')
+   AADOBJECTNAME_USER_CLUSTERADMIN=bu0001a000800-admin
+   AADOBJECTID_USER_CLUSTERADMIN=$(az ad user create --display-name=${AADOBJECTNAME_USER_CLUSTERADMIN} --user-principal-name ${AADOBJECTNAME_USER_CLUSTERADMIN}@${TENANTDOMAIN_K8SRBAC} --force-change-password-next-login --password ChangeMebu0001a0008AdminChangeMe --query objectId -o tsv)
+   echo TENANTDOMAIN_K8SRBAC: $TENANTDOMAIN_K8SRBAC
+   echo AADOBJECTNAME_USER_CLUSTERADMIN: $AADOBJECTNAME_USER_CLUSTERADMIN
+   echo AADOBJECTID_USER_CLUSTERADMIN: $AADOBJECTID_USER_CLUSTERADMIN
+   ```
+
+1. Add the cluster admin user(s) to the cluster admin security group.
+
+   > :book: The recently created break-glass admin user is added to the Kubernetes Cluster Admin group from Azure AD. After this step, the Azure AD Admin team will have finished the app team's request.
+
+   You should skip this step if the group identified in the former step already has a cluster admin assigned as a member.
+
+   ```bash
+   az ad group member add -g $AADOBJECTID_GROUP_CLUSTERADMIN_AKS_BASELINE --member-id $AADOBJECTID_USER_CLUSTERADMIN
+   ```
+
+1. Create/identify the Azure AD security group that is going to be a namespace reader. _Optional_
+
+   ```bash
+   export AADOBJECTID_GROUP_A0008_READER_AKS_BASELINE=$(az ad group create --display-name 'cluster-ns-a0008-readers-bu0001a000800' --mail-nickname 'cluster-ns-a0008-readers-bu0001a000800' --description "Principals in this group are readers of namespace a0008 in the bu0001a000800 cluster." --query objectId -o tsv)
+   echo AADOBJECTID_GROUP_A0008_READER_AKS_BASELINE: $AADOBJECTID_GROUP_A0008_READER_AKS_BASELINE
+   ```
+
+## Kubernetes RBAC backing store
+
+AKS supports backing Kubernetes with Azure AD in two different modalities. One is direct association between Azure AD and Kubernetes `ClusterRoleBindings`/`RoleBindings` in the cluster. This is possible whether the Azure AD tenant you wish to use to back your Kubernetes RBAC is the same as or different from the tenant backing your Azure resources. If, however, the tenant that is backing your Azure resources (the Azure RBAC source) is the same tenant you plan on using to back your Kubernetes RBAC, then you can instead add a layer of indirection between Azure AD and your cluster by using Azure RBAC rather than direct cluster `RoleBinding` manipulation.
When performing this walk-through, you may have had no choice but to associate the cluster with another tenant (due to the elevated permissions necessary in Azure AD to manage groups and users); but when you take this to production, be sure you're using Azure RBAC as your Kubernetes RBAC backing store if the tenants are the same. Both cases still leverage integrated authentication between Azure AD and AKS; Azure RBAC simply elevates this control to Azure instead of direct YAML-based management within the cluster, which usually aligns better with your organization's governance strategy.
+
+### Azure RBAC _[Preferred]_
+
+If you are using a single tenant for this walk-through, the cluster deployment step later will take care of the necessary role assignments for the groups created above. Specifically, in the above steps, you created the Azure AD security group `cluster-ns-a0008-readers-bu0001a000800` that is going to be a namespace reader in namespace `a0008`, and the Azure AD security group `cluster-admins-bu0001a000800` that is going to contain cluster admins. Those group Object IDs will be associated with the 'Azure Kubernetes Service RBAC Reader' and 'Azure Kubernetes Service RBAC Cluster Admin' roles respectively, scoped to their proper level within the cluster.
+
+Using Azure RBAC as your authorization approach is ultimately preferred, as it allows for unified management and access control across Azure resources, AKS, and Kubernetes resources. At the time of this writing there are four [Azure RBAC roles](https://docs.microsoft.com/azure/aks/manage-azure-rbac#create-role-assignments-for-users-to-access-cluster) that represent typical cluster access patterns.
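The assignment the deployment performs can be pictured as a small lookup from group to role. This sketch is purely illustrative — the mapping is taken from the paragraph above, and nothing here runs against Azure:

```shell
#!/usr/bin/env bash
# Illustrative only: which Azure RBAC role each Azure AD group receives at deployment time.
declare -A ROLE_FOR_GROUP=(
  ["cluster-admins-bu0001a000800"]="Azure Kubernetes Service RBAC Cluster Admin"    # cluster scope
  ["cluster-ns-a0008-readers-bu0001a000800"]="Azure Kubernetes Service RBAC Reader" # namespace a0008 scope
)

role_for() { echo "${ROLE_FOR_GROUP[$1]}"; }

role_for "cluster-ns-a0008-readers-bu0001a000800"
```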
+
+### Direct Kubernetes RBAC management _[Alternative]_
+
+If you instead do not wish to use Azure RBAC as your Kubernetes RBAC authorization mechanism, whether due to the intentional use of disparate Azure AD tenants or other business justifications, you can manage these RBAC assignments via direct `ClusterRoleBinding`/`RoleBinding` associations. This method is also useful when the four Azure RBAC roles are not granular enough for your desired permission model.
+
+1. Set up additional Kubernetes RBAC associations. _Optional, fork required._
+
+   > :book: The team knows there will be more than just cluster admins that need group-managed access to the cluster. Out of the box, Kubernetes has other roles like _admin_, _edit_, and _view_ which can also be mapped to Azure AD Groups for use both at namespace and at the cluster level. Likewise, custom roles can be created which need to be mapped to Azure AD Groups.
+
+   In the [`cluster-rbac.yaml` file](./cluster-manifests/cluster-rbac.yaml) and the various namespaced [`rbac.yaml files`](./cluster-manifests/cluster-baseline-settings/rbac.yaml), you can uncomment what you wish and replace the `` placeholders with corresponding new or existing Azure AD groups that map to their purpose for this cluster or namespace. **You do not need to perform this action for this walkthrough**; they are only here for your reference.
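For reference, an uncommented binding in those files takes roughly this shape. This is a sketch only: the group object ID below is a hypothetical placeholder, not one of the groups created in this walkthrough, and the manifest is not applied anywhere in these steps. Azure AD groups appear to Kubernetes as `Group` subjects named by their object ID.

```shell
# Sketch: a ClusterRoleBinding mapping a (placeholder) Azure AD group to the built-in 'view' role.
AADOBJECTID_GROUP_VIEWERS="00000000-0000-0000-0000-000000000000" # hypothetical group object ID

cat <<EOF > cluster-role-binding-viewers.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: crb-viewers
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: ${AADOBJECTID_GROUP_VIEWERS}
EOF

cat cluster-role-binding-viewers.yaml
```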
+ +### Save your work in-progress + +```bash +# run the saveenv.sh script at any time to save environment variables created above to aks_baseline.env +./saveenv.sh + +# if your terminal session gets reset, you can source the file to reload the environment variables +# source aks_baseline.env +``` + +### Next step + +:arrow_forward: [Deploy the hub-spoke network topology](./04-networking.md) diff --git a/04-networking.md b/04-networking.md index 0036247d..345a39a1 100644 --- a/04-networking.md +++ b/04-networking.md @@ -94,7 +94,7 @@ The following two resource groups will be created and populated with networking ```bash RESOURCEID_SUBNET_NODEPOOLS=$(az deployment group show -g rg-enterprise-networking-spokes -n spoke-BU0001A0008 --query properties.outputs.nodepoolSubnetResourceIds.value -o json) - echo RESOURCEID_VNET_HUB: $RESOURCEID_SUBNET_NODEPOOLS + echo RESOURCEID_SUBNET_NODEPOOLS: $RESOURCEID_SUBNET_NODEPOOLS # [This takes about ten minutes to run.] az deployment group create -g rg-enterprise-networking-hubs -f networking/hub-regionA.bicep -p location=eastus2 nodepoolSubnetResourceIds="${RESOURCEID_SUBNET_NODEPOOLS}" diff --git a/06-aks-cluster.md b/06-aks-cluster.md index eeb33739..2ebee237 100644 --- a/06-aks-cluster.md +++ b/06-aks-cluster.md @@ -11,6 +11,9 @@ Now that your [ACR instance is deployed and ready to support cluster bootstrappi ```bash GITOPS_REPOURL=$(git config --get remote.origin.url) echo GITOPS_REPOURL: $GITOPS_REPOURL + + GITOPS_CURRENT_BRANCH_NAME=$(git branch --show-current) + echo GITOPS_CURRENT_BRANCH_NAME: $GITOPS_CURRENT_BRANCH_NAME ``` 1. Deploy the cluster ARM template. @@ -20,7 +23,7 @@ Now that your [ACR instance is deployed and ready to support cluster bootstrappi ```bash # [This takes about 18 minutes.] 
-    az deployment group create -g rg-bu0001a0008 -f cluster-stamp.bicep -p targetVnetResourceId=${RESOURCEID_VNET_CLUSTERSPOKE_AKS_BASELINE} clusterAdminAadGroupObjectId=${AADOBJECTID_GROUP_CLUSTERADMIN_AKS_BASELINE} a0008NamespaceReaderAadGroupObjectId=${AADOBJECTID_GROUP_A0008_READER_AKS_BASELINE} k8sControlPlaneAuthorizationTenantId=${TENANTID_K8SRBAC_AKS_BASELINE} appGatewayListenerCertificate=${APP_GATEWAY_LISTENER_CERTIFICATE_AKS_BASELINE} aksIngressControllerCertificate=${AKS_INGRESS_CONTROLLER_CERTIFICATE_BASE64_AKS_BASELINE} domainName=${DOMAIN_NAME_AKS_BASELINE} gitOpsBootstrappingRepoHttpsUrl=${GITOPS_REPOURL}
+    az deployment group create -g rg-bu0001a0008 -f cluster-stamp.bicep -p targetVnetResourceId=${RESOURCEID_VNET_CLUSTERSPOKE_AKS_BASELINE} clusterAdminAadGroupObjectId=${AADOBJECTID_GROUP_CLUSTERADMIN_AKS_BASELINE} a0008NamespaceReaderAadGroupObjectId=${AADOBJECTID_GROUP_A0008_READER_AKS_BASELINE} k8sControlPlaneAuthorizationTenantId=${TENANTID_K8SRBAC_AKS_BASELINE} appGatewayListenerCertificate=${APP_GATEWAY_LISTENER_CERTIFICATE_AKS_BASELINE} aksIngressControllerCertificate=${AKS_INGRESS_CONTROLLER_CERTIFICATE_BASE64_AKS_BASELINE} domainName=${DOMAIN_NAME_AKS_BASELINE} gitOpsBootstrappingRepoHttpsUrl=${GITOPS_REPOURL} gitOpsBootstrappingRepoBranch=${GITOPS_CURRENT_BRANCH_NAME}
     ```

   > Alternatively, you could have updated the [`azuredeploy.parameters.prod.json`](./azuredeploy.parameters.prod.json) file and deployed as above, using `-p "@azuredeploy.parameters.prod.json"` instead of providing the individual key-value pairs.
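Because that command interpolates several environment variables, a quick non-empty check before launching an 18-minute deployment can save a failed run. A minimal sketch using bash indirect expansion — the exported values below are only demo stand-ins for the variables the walkthrough exports in earlier steps:

```shell
# Demo values: in the walkthrough these come from earlier steps.
export GITOPS_REPOURL="https://github.com/mspnp/aks-baseline.git"
export GITOPS_CURRENT_BRANCH_NAME=""  # pretend this step was skipped

missing=0
for v in GITOPS_REPOURL GITOPS_CURRENT_BRANCH_NAME; do
  if [ -z "${!v}" ]; then  # bash indirect expansion: the value of the variable named by $v
    echo "missing: $v"
    missing=$((missing + 1))
  fi
done
echo "$missing variable(s) missing"
```

Extending the list with the remaining `*_AKS_BASELINE` parameter names used above is straightforward.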
diff --git a/09-secret-management-and-ingress-controller.md b/09-secret-management-and-ingress-controller.md index f579e036..667fb0b7 100644 --- a/09-secret-management-and-ingress-controller.md +++ b/09-secret-management-and-ingress-controller.md @@ -97,7 +97,7 @@ Previously you have configured [workload prerequisites](./08-workload-prerequisi :warning: Deploying the traefik `traefik.yaml` file unmodified from this repo will be deploying your workload to take dependencies on a public container registry. This is generally okay for learning/testing, but not suitable for production. Before going to production, ensure _all_ image references are from _your_ container registry or another that you feel confident relying on. ```bash - kubectl create -f https://raw.githubusercontent.com/mspnp/aks-secure-baseline/main/workload/traefik.yaml + kubectl create -f https://raw.githubusercontent.com/mspnp/aks-baseline/main/workload/traefik.yaml ``` 1. Wait for Traefik to be ready. diff --git a/10-workload.md b/10-workload.md index 395bd25e..661016ad 100644 --- a/10-workload.md +++ b/10-workload.md @@ -46,10 +46,9 @@ The cluster now has an [Traefik configured with a TLS certificate](./08-secret-m > You should expect a `403` HTTP response from your ingress controller if you attempt to connect to it _without_ going through the App Gateway. Likewise, if any workload other than the ingress controller attempts to reach the workload, the traffic will be denied via network policies. 
```bash
-   kubectl run curl -n a0008 -i --tty --rm --image=mcr.microsoft.com/azure-cli --overrides='[{"op":"add","path":"/spec/containers/0/resources","value":{"limits":{"cpu":"200m","memory":"128Mi"}}}]' --override-type json
+   kubectl run curl -n a0008 -i --tty --rm --image=mcr.microsoft.com/azure-cli --overrides='[{"op":"add","path":"/spec/containers/0/resources","value":{"limits":{"cpu":"200m","memory":"128Mi"}}}]' --override-type json --env="DOMAIN_NAME=${DOMAIN_NAME_AKS_BASELINE}"
 
    # From within the open shell now running on a container inside your cluster
-   DOMAIN_NAME="contoso.com" # <-- Change to your custom domain value if a different one was used
    curl -kI https://bu0001a0008-00.aks-ingress.$DOMAIN_NAME -w '%{remote_ip}\n'
    exit
    ```
diff --git a/11-validation.md b/11-validation.md
index 45cf0ba7..f308f38e 100644
--- a/11-validation.md
+++ b/11-validation.md
@@ -26,7 +26,9 @@ This section will help you to validate the workload is exposed correctly and res
 1. Browse to the site (e.g. ).
 
-   > :bulb: A TLS warning will be present due to using a self-signed certificate. You can ignore it or import the self-signed cert (`appgw.pfx`) to your user's trusted root store.
+   > :bulb: Remember to include the protocol prefix `https://` in the URL you type in the address bar of your browser. A TLS warning will be present due to using a self-signed certificate. You can ignore it or import the self-signed cert (`appgw.pfx`) to your user's trusted root store.
+
+   Refresh the web page a couple of times and observe the `Host name` value displayed at the bottom of the page. As the Traefik Ingress Controller balances the requests between the two pods hosting the web page, the host name will change from one pod name to the other throughout your queries.
 
 ## Validate reader access to the a0008 namespace. _Optional._
 
@@ -48,7 +50,11 @@ Your workload is placed behind a Web Application Firewall (WAF), which has rules
 1.
Browse to the site with the following appended to the URL: `?sql=DELETE%20FROM` (e.g. ).
 1. Observe that your request was blocked by Application Gateway's WAF rules and your workload never saw this potentially dangerous request.
-1. Blocked requests (along with other gateway data) will be visible in the attached Log Analytics workspace. Execute the following query to show WAF logs, for example.
+1. Blocked requests (along with other gateway data) will be visible in the attached Log Analytics workspace.
+
+   Browse to the Application Gateway in the resource group `rg-bu0001a0008` and navigate to the _Logs_ blade. Execute the following query to show WAF logs and see that the request was rejected due to a _SQL Injection Attack_ (field _Message_).
+
+   > :warning: Note that it may take a couple of minutes until the logs are transferred from the Application Gateway to the Log Analytics Workspace. So be a little patient if the query does not immediately return results after sending the HTTPS request in the former step.
 
 ```
 AzureDiagnostics
@@ -77,15 +83,14 @@ Azure Monitor is configured to [scrape Prometheus metrics](https://docs.microsof
 - [Traefik](./workload/traefik.yaml) (in the `a0008` namespace)
 - [Kured](./cluster-baseline-settings/kured.yaml) (in the `cluster-baseline-settings` namespace)
 
+  :bulb: This reference implementation ships with two saved queries (_All collected Prometheus information_ and _Nodes reboot required by kured_) as an example of how you can write your own and manage them via ARM templates.
+
 ### Steps
 
 1. In the Azure Portal, navigate to your AKS cluster resource group (`rg-bu0001a0008`).
-1. Select your Log Analytic Workspace resource.
-1. Click _Saved Searches_.
-
-   :bulb: This reference implementation ships with some saved queries as an example of how you can write your own and manage them via ARM templates.
-
-1. Type _Prometheus_ in the filter.
+1. Select your Log Analytics Workspace resource and open the _Logs_ blade.
+1.
In the popup _Queries_ select _Legacy category_ in the drop-down field in the upper left corner.
+1. Select _Prometheus_ in the section list on the left.
 1. You are able to select and execute the saved query over the scraped metrics.
 
 ## Validate Workload Logs
 
@@ -95,7 +100,7 @@ The example workload uses the standard dotnet logger interface, which are captured in `ContainerLogs` in Azure Monitor.
 
 ### Steps
 
 1. In the Azure Portal, navigate to your AKS cluster resource group (`rg-bu0001a0008`).
-1. Select your Log Analytic Workspace resource.
+1. Select your Log Analytics Workspace resource and open the _Logs_ blade.
 1. Execute the following query
 
 ```
@@ -121,13 +126,13 @@ Azure will generate alerts on the health of your cluster and adjacent resources.
 
 An alert based on [Azure Monitor for containers information using a Kusto query](https://docs.microsoft.com/azure/azure-monitor/insights/container-insights-alerts) was configured in this reference implementation.
 
 1. In the Azure Portal, navigate to your AKS cluster resource group (`rg-bu0001a0008`).
-1. Select _Alerts_, then _Manage Rule Alerts_.
-1. There is an alert called "PodFailedScheduledQuery" that will be triggered based on the custom query response.
+1. Select _Alerts_, then _Alert Rules_.
+1. There is an alert titled "[your cluster name] Scheduled Query for Pod Failed Alert" that will be triggered based on the custom query response.
 
 An [Azure Advisor Alert](https://docs.microsoft.com/azure/advisor/advisor-overview) was configured as well in this reference implementation.
 
 1. In the Azure Portal, navigate to your AKS cluster resource group (`rg-bu0001a0008`).
-1. Select _Alerts_, then _Manage Rule Alerts_.
+1. Select _Alerts_, then _Alert Rules_.
 1. There is an alert called "AllAzureAdvisorAlert" that will be triggered based on new Azure Advisor alerts.
@@ -151,7 +156,7 @@ If you configured your third-party images to be pulled from your Azure Container
 | where OperationName == 'Pull'
 ```
 
-1. You should see logs for CSI, kured, memcached, and traefik. You'll see multiple for some as the image was pulled to multiple nodes to satisfy ReplicaSet/DaemonSet placement.
+1. You should see logs for kured. You'll see multiple entries, as the image was pulled to multiple nodes to satisfy DaemonSet placement.
 
 ## Next step
 
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 1e0f1ee1..ea8ecad5 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -43,13 +43,13 @@ If your issue appears to be a bug, and hasn't been reported, open a new issue. H
 * **Related Issues** - has a similar issue been reported before?
 * **Suggest a Fix** - if you can't fix the bug yourself, perhaps you can point to what might be causing the problem (line of code or commit)
 
-You can file new issues by providing the above information at the corresponding repository's issues link: https://github.com/mspnp/aks-secure-baseline/issues/new].
+You can file new issues by providing the above information at the corresponding repository's issues link: https://github.com/mspnp/aks-baseline/issues/new.
 
 ### Submitting a Pull Request (PR)
 
 Before you submit your Pull Request (PR) consider the following guidelines:
 
-* Search the repository () for an open or closed PR
+* Search the repository () for an open or closed PR that relates to your submission. You don't want to duplicate effort.
* Make your changes in a new git fork: diff --git a/cluster-stamp.bicep b/cluster-stamp.bicep index f973969b..0d5627a2 100644 --- a/cluster-stamp.bicep +++ b/cluster-stamp.bicep @@ -1495,7 +1495,7 @@ resource mcAadAdminGroupServiceClusterUserRole_roleAssignment 'Microsoft.Authori dependsOn: [] } -resource maAadA0008ReaderGroupClusterReaderRole_roleAssignment 'Microsoft.Authorization/roleAssignments@2020-10-01-preview' = if (isUsingAzureRBACasKubernetesRBAC && (!(a0008NamespaceReaderAadGroupObjectId == clusterAdminAadGroupObjectId))) { +resource maAadA0008ReaderGroupClusterReaderRole_roleAssignment 'Microsoft.Authorization/roleAssignments@2020-10-01-preview' = if (isUsingAzureRBACasKubernetesRBAC && !(empty(a0008NamespaceReaderAadGroupObjectId)) && (!(a0008NamespaceReaderAadGroupObjectId == clusterAdminAadGroupObjectId))) { scope: nsA0008 name: guid('aad-a0008-reader-group', mc.id, a0008NamespaceReaderAadGroupObjectId) properties: { @@ -1507,7 +1507,7 @@ resource maAadA0008ReaderGroupClusterReaderRole_roleAssignment 'Microsoft.Author dependsOn: [] } -resource maAadA0008ReaderGroupServiceClusterUserRole_roleAssignment 'Microsoft.Authorization/roleAssignments@2020-10-01-preview' = if (isUsingAzureRBACasKubernetesRBAC && (!(a0008NamespaceReaderAadGroupObjectId == clusterAdminAadGroupObjectId))) { +resource maAadA0008ReaderGroupServiceClusterUserRole_roleAssignment 'Microsoft.Authorization/roleAssignments@2020-10-01-preview' = if (isUsingAzureRBACasKubernetesRBAC && !(empty(a0008NamespaceReaderAadGroupObjectId)) && (!(a0008NamespaceReaderAadGroupObjectId == clusterAdminAadGroupObjectId))) { scope: mc name: guid('aad-a0008-reader-group-sc', mc.id, a0008NamespaceReaderAadGroupObjectId) properties: { diff --git a/inner-loop-scripts/shell/1-cluster-stamp.sh b/inner-loop-scripts/shell/1-cluster-stamp.sh index 1b68bcdd..5a9097e5 100755 --- a/inner-loop-scripts/shell/1-cluster-stamp.sh +++ b/inner-loop-scripts/shell/1-cluster-stamp.sh @@ -43,7 +43,7 @@ openssl req -x509 -nodes 
-days 365 -newkey rsa:2048 \
        -subj "/CN=*.aks-ingress.contoso.com/O=Contoso Aks Ingress"
 AKS_INGRESS_CONTROLLER_CERTIFICATE_BASE64=$(cat traefik-ingress-internal-aks-ingress-tls.crt | base64 | tr -d '\n')
 
-# WARNING: Below hasn't yet been updated for Azure Key Vault RBAC support that came in https://github.com/mspnp/aks-secure-baseline/releases/tag/v1.21.2.2
+# WARNING: Below hasn't yet been updated for Azure Key Vault RBAC support that came in https://github.com/mspnp/aks-baseline/releases/tag/v1.21.2.2
 # AKS Cluster Creation. Advance Networking. AAD identity integration. This might take about 10 minutes
 # Note: By default, this deployment will allow unrestricted access to your cluster's API Server.
@@ -82,7 +82,7 @@ echo ""
 echo "# Creating AAD Groups and users for the created cluster"
 echo ""
 
-# unset errexit as per https://github.com/mspnp/aks-secure-baseline/issues/69
+# unset errexit as per https://github.com/mspnp/aks-baseline/issues/69
 set +e
 echo $'Ensure Flux has created the following namespace and then press Ctrl-C'
 kubectl get ns a0008 --watch
diff --git a/saveenv.sh b/saveenv.sh
index 26a881e4..314ce99d 100755
--- a/saveenv.sh
+++ b/saveenv.sh
@@ -4,10 +4,12 @@
 # the page they are created on. Then a user can source this file to restore those environment
 # variables if their shell session is reset for some reason.
 
-cat > aks_baseline.env << EOF
+DIR_NAME=$(dirname "$0")
+
+cat > "$DIR_NAME"/aks_baseline.env << EOF
 #!/bin/bash
 $(env | sed -n "s/\(.*_AKS_BASELINE=\)\(.*\)/export \1'\2'/p" | sort)
 EOF
 
-cat aks_baseline.env
+cat "$DIR_NAME"/aks_baseline.env
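The `sed` expression in `saveenv.sh` captures each `*_AKS_BASELINE` line emitted by `env` and rewrites it as an `export` statement with the value single-quoted, so the generated file can later be `source`d. A minimal sketch of that transformation on a sample line:

```shell
# Apply the same capture/rewrite sed used by saveenv.sh to one sample env line.
sample='DOMAIN_NAME_AKS_BASELINE=contoso.com'
line=$(printf '%s\n' "$sample" | sed -n "s/\(.*_AKS_BASELINE=\)\(.*\)/export \1'\2'/p")
echo "$line"  # export DOMAIN_NAME_AKS_BASELINE='contoso.com'
```

Lines that do not match the `_AKS_BASELINE=` pattern produce no output at all, thanks to `sed -n` with the `p` flag, which is what keeps unrelated environment variables out of the saved file.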