Merge branch 'main' into consul-api-gateway-add-tolerations-support
sarahalsmiller authored Oct 31, 2022
2 parents 71f56d8 + 767340c commit 971845b
Showing 53 changed files with 1,113 additions and 2,517 deletions.
20 changes: 9 additions & 11 deletions .github/workflows/test.yml
@@ -4,9 +4,10 @@ on:

env:
TEST_RESULTS: /tmp/test-results # path to where test results are saved
CONSUL_VERSION: 1.13.1 # Consul's OSS version to use in tests
CONSUL_ENT_VERSION: 1.13.1+ent # Consul's enterprise version to use in tests
GOTESTSUM_VERSION: 1.8.2 # Environment variables can't be referenced from reusable workflows, so the gotestsum version is hardcoded in them as well.
# We use docker images to copy the consul binary for unit tests.
CONSUL_OSS_DOCKER_IMAGE: hashicorppreview/consul:1.14-dev # Consul's OSS version to use in tests
CONSUL_ENT_DOCKER_IMAGE: hashicorppreview/consul-enterprise:1.14-dev # Consul's enterprise version to use in tests

jobs:
get-go-version:
@@ -158,11 +159,9 @@ jobs:
working-directory: control-plane
run: |
mkdir -p $HOME/bin
wget https://releases.hashicorp.com/consul/${{env.CONSUL_VERSION}}/consul_${{env.CONSUL_VERSION}}_linux_amd64.zip && \
unzip consul_${{env.CONSUL_VERSION}}_linux_amd64.zip -d $HOME/bin && \
rm consul_${{env.CONSUL_VERSION}}_linux_amd64.zip
chmod +x $HOME/bin/consul
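# Create a stopped container from the image so the consul binary can be copied out of it, then clean the container up.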
container_id=$(docker create ${{env.CONSUL_OSS_DOCKER_IMAGE}})
docker cp "$container_id:/bin/consul" $HOME/bin/consul
docker rm "$container_id"
- name: Run go tests
working-directory: control-plane
run: |
@@ -207,10 +206,9 @@ jobs:
working-directory: control-plane
run: |
mkdir -p $HOME/bin
wget https://releases.hashicorp.com/consul/${{env.CONSUL_ENT_VERSION}}/consul_${{env.CONSUL_ENT_VERSION}}_linux_amd64.zip && \
unzip consul_${{env.CONSUL_ENT_VERSION}}_linux_amd64.zip -d $HOME/bin && \
rm consul_${{env.CONSUL_ENT_VERSION}}_linux_amd64.zip
chmod +x $HOME/bin/consul
container_id=$(docker create ${{env.CONSUL_ENT_DOCKER_IMAGE}})
docker cp "$container_id:/bin/consul" $HOME/bin/consul
docker rm "$container_id"
- name: Run go tests
working-directory: control-plane
33 changes: 26 additions & 7 deletions CHANGELOG.md
@@ -1,22 +1,43 @@
## UNRELEASED

BREAKING CHANGES:
* Helm:
  * Remove `global.consulSidecarContainer` from the values file, as there is no longer a Consul sidecar. [[GH-1635](https://github.com/hashicorp/consul-k8s/pull/1635)]
  * Consul snapshot-agent now runs as a sidecar with Consul servers. [[GH-1620](https://github.com/hashicorp/consul-k8s/pull/1620)]
    This results in the following changes to Helm values (see the sketch after this list):
    * Move `client.snapshotAgent` values to `server.snapshotAgent`, with the exception of the following values:
      * `client.snapshotAgent.replicas`
      * `client.snapshotAgent.serviceAccount`
    * Remove the `global.secretsBackend.vault.consulSnapshotAgentRole` value. Use `global.secretsBackend.vault.consulServerRole` instead for access to any Vault secrets.
* Peering:
  * Remove support for customizing the server addresses in peering token generation. Instead, mesh gateways should be used
    to establish peering connections if the server pods are not directly reachable. [[GH-1610](https://github.com/hashicorp/consul-k8s/pull/1610)]
  * Enabling peering now requires `tls.enabled` to be set to `true`. [[GH-1610](https://github.com/hashicorp/consul-k8s/pull/1610)]
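For illustration, a minimal before/after sketch of the snapshot-agent value move, written in the `map[string]string` Helm-value convention the acceptance tests in this diff use. The `enabled` and `configSecret.*` key names come from the test changes below; the secret name is a made-up example.

```go
package main

import "fmt"

func main() {
	// Before: the snapshot agent was configured as a client-side component.
	oldValues := map[string]string{
		"client.snapshotAgent.enabled":                 "true",
		"client.snapshotAgent.configSecret.secretName": "my-snapshot-agent-config", // example name
		"client.snapshotAgent.configSecret.secretKey":  "config",
	}

	// After: the same settings live under server.snapshotAgent.
	// client.snapshotAgent.replicas and client.snapshotAgent.serviceAccount
	// are not carried over, since the agent now runs as a server sidecar.
	newValues := map[string]string{
		"server.snapshotAgent.enabled":                 "true",
		"server.snapshotAgent.configSecret.secretName": "my-snapshot-agent-config",
		"server.snapshotAgent.configSecret.secretKey":  "config",
	}

	fmt.Println(oldValues, newValues)
}
```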

FEATURES:
* Consul-dataplane:
  * Support merged metrics with consul-dataplane. [[GH-1635](https://github.com/hashicorp/consul-k8s/pull/1635)]
  * Support transparent proxying when using consul-dataplane. [[GH-1478](https://github.com/hashicorp/consul-k8s/pull/1478),[GH-1632](https://github.com/hashicorp/consul-k8s/pull/1632)]

IMPROVEMENTS:
* CLI:
  * Update minimum Go version for the project to 1.19. [[GH-1633](https://github.com/hashicorp/consul-k8s/pull/1633)]
* Control Plane:
  * Update minimum Go version for the project to 1.19. [[GH-1633](https://github.com/hashicorp/consul-k8s/pull/1633)]
  * Remove unneeded `agent:read` ACL permissions from the mesh gateway policy. [[GH-1255](https://github.com/hashicorp/consul-k8s/pull/1255)]
  * Support merged metrics with consul-dataplane. [[GH-1635](https://github.com/hashicorp/consul-k8s/pull/1635)]
* Helm:
  * Remove the deprecated annotation `service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"` from the `server-service` template. [[GH-1619](https://github.com/hashicorp/consul-k8s/pull/1619)]
  * Support `minAvailable` on the connect injector `PodDisruptionBudget`. [[GH-1557](https://github.com/hashicorp/consul-k8s/pull/1557)]
  * Add `tolerations` and `nodeSelector` to server ACL init jobs and `nodeSelector` to the webhook cert manager. [[GH-1581](https://github.com/hashicorp/consul-k8s/pull/1581)]
  * API Gateway: Add `tolerations` to `apiGateway.managedGatewayClass` and `apiGateway.controller` (see the sketch after this list). [[GH-1650](https://github.com/hashicorp/consul-k8s/pull/1650)]
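As a rough sketch of the new API gateway knobs, again using the test suite's Helm-value map convention. It assumes `tolerations` is passed as a YAML string, as it is for other chart components; the toleration itself is a made-up example.

```go
package main

import "fmt"

func main() {
	// Hypothetical toleration, expressed as the YAML string the chart expects.
	tolerations := `- key: "dedicated"
  operator: "Equal"
  value: "gateway"
  effect: "NoSchedule"`

	helmValues := map[string]string{
		"apiGateway.managedGatewayClass.tolerations": tolerations,
		"apiGateway.controller.tolerations":          tolerations,
	}
	fmt.Println(helmValues)
}
```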

BREAKING_CHANGES:
* Helm:
  * Removal of `global.consulSidecarContainer` from values file as there is no longer a consul sidecar. [[GH-1635](https://github.com/hashicorp/consul-k8s/pull/1635)]

## 1.0.0-beta4 (October 28, 2022)

IMPROVEMENTS:

CLI:

* Update the demo charts and CLI command to no longer presume transparent proxy when using the HCP preset, and use the most recent version of HashiCups. [[GH-1657](https://github.com/hashicorp/consul-k8s/pull/1657)]

## 1.0.0-beta3 (October 12, 2022)

@@ -39,8 +60,6 @@ IMPROVEMENTS:
BREAKING CHANGES:
* Peering:
  * Rename `PeerName` to `Peer` in the ExportedServices CRD. [[GH-1596](https://github.com/hashicorp/consul-k8s/pull/1596)]
  * Remove support for customizing the server addresses in peering token generation. Instead, mesh gateways should be used
    to establish peering connections if the server pods are not directly reachable. [[GH-1610](https://github.com/hashicorp/consul-k8s/pull/1610)]
* Helm:
  * `server.replicas` now defaults to `1`. Formerly, this defaulted to `3`. [[GH-1551](https://github.com/hashicorp/consul-k8s/pull/1551)]
  * `connectInject.enabled` now defaults to `true`. [[GH-1551](https://github.com/hashicorp/consul-k8s/pull/1551)]
2 changes: 2 additions & 0 deletions acceptance/framework/connhelper/connect_helper.go
@@ -208,6 +208,8 @@ func (c *ConnectHelper) helmValues() map[string]string {
"connectInject.enabled": "true",
"global.tls.enabled": strconv.FormatBool(c.Secure),
"global.acls.manageSystemACLs": strconv.FormatBool(c.Secure),
"dns.enabled": "true",
"dns.enableRedirection": "true",
}

helpers.MergeMaps(helmValues, c.HelmValues)
7 changes: 3 additions & 4 deletions acceptance/tests/consul-dns/consul_dns_test.go
@@ -14,8 +14,6 @@ import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

const podName = "dns-pod"

func TestConsulDNS(t *testing.T) {
cfg := suite.Config()
if cfg.EnableCNI {
@@ -59,16 +57,17 @@ func TestConsulDNS(t *testing.T) {
serverIPs = append(serverIPs, serverPod.Status.PodIP)
}

dnsPodName := fmt.Sprintf("%s-dns-pod", releaseName)
dnsTestPodArgs := []string{
"run", "-i", podName, "--restart", "Never", "--image", "anubhavmishra/tiny-tools", "--", "dig", fmt.Sprintf("@%s-consul-dns", releaseName), "consul.service.consul",
"run", "-i", dnsPodName, "--restart", "Never", "--image", "anubhavmishra/tiny-tools", "--", "dig", fmt.Sprintf("@%s-consul-dns", releaseName), "consul.service.consul",
}

helpers.Cleanup(t, suite.Config().NoCleanupOnFailure, func() {
// Note: this delete command won't wait for pods to be fully terminated.
// This shouldn't cause any test pollution because the underlying
// objects are deployments, and so when other tests create these
// they should have different pod names.
k8s.RunKubectl(t, ctx.KubectlOptions(t), "delete", "pod", podName)
k8s.RunKubectl(t, ctx.KubectlOptions(t), "delete", "pod", dnsPodName)
})

retry.Run(t, func(r *retry.R) {
3 changes: 1 addition & 2 deletions acceptance/tests/partitions/main_test.go
@@ -13,8 +13,7 @@ var suite testsuite.Suite
func TestMain(m *testing.M) {
suite = testsuite.NewSuite(m)

// todo(agentless): Re-enable tproxy tests once we support it for multi-cluster.
if suite.Config().EnableMultiCluster && !suite.Config().EnableTransparentProxy {
if suite.Config().EnableMultiCluster {
os.Exit(suite.Run())
} else {
fmt.Println("Skipping partitions tests because -enable-multi-cluster is not set")
3 changes: 1 addition & 2 deletions acceptance/tests/peering/main_test.go
@@ -13,8 +13,7 @@ var suite testsuite.Suite
func TestMain(m *testing.M) {
suite = testsuite.NewSuite(m)

// todo(agentless): Re-enable tproxy tests once we support it for multi-cluster.
if suite.Config().EnableMultiCluster && !suite.Config().DisablePeering && !suite.Config().EnableTransparentProxy {
if suite.Config().EnableMultiCluster && !suite.Config().DisablePeering {
os.Exit(suite.Run())
} else {
fmt.Println("Skipping peering tests because either -enable-multi-cluster is not set or -disable-peering is set")
4 changes: 0 additions & 4 deletions acceptance/tests/peering/peering_connect_test.go
@@ -31,10 +31,6 @@ func TestPeering_Connect(t *testing.T) {
t.Skipf("skipping this test because peering is not supported in version %v", cfg.ConsulVersion.String())
}

if cfg.EnableTransparentProxy {
t.Skipf("skipping because no t-proxy support")
}

const staticServerPeer = "server"
const staticClientPeer = "client"
cases := []struct {
7 changes: 2 additions & 5 deletions acceptance/tests/snapshot-agent/main_test.go
@@ -1,7 +1,6 @@
package snapshotagent

import (
"fmt"
"os"
"testing"

@@ -11,8 +10,6 @@ import (
var suite testsuite.Suite

func TestMain(m *testing.M) {
fmt.Println("Skipping snapshot agent tests because it's not supported with agentless yet")
os.Exit(0)
//suite = testsuite.NewSuite(m)
//os.Exit(suite.Run())
suite = testsuite.NewSuite(m)
os.Exit(suite.Run())
}
123 changes: 58 additions & 65 deletions acceptance/tests/snapshot-agent/snapshot_agent_k8s_secret_test.go
@@ -5,7 +5,7 @@ import (
"context"
"encoding/json"
"fmt"
"strings"
"strconv"
"testing"
"time"

@@ -15,7 +15,7 @@ import (
"github.com/hashicorp/consul-k8s/acceptance/framework/helpers"
"github.com/hashicorp/consul-k8s/acceptance/framework/k8s"
"github.com/hashicorp/consul-k8s/acceptance/framework/logger"
"github.com/hashicorp/go-uuid"
"github.com/hashicorp/consul/sdk/testutil/retry"
"github.com/stretchr/testify/require"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
@@ -31,83 +31,76 @@ func TestSnapshotAgent_K8sSecret(t *testing.T) {
if cfg.EnableCNI {
t.Skipf("skipping because -enable-cni is set and snapshot agent is already tested with regular tproxy")
}
ctx := suite.Environment().DefaultContext(t)
kubectlOptions := ctx.KubectlOptions(t)
ns := kubectlOptions.Namespace
releaseName := helpers.RandomName()

// Generate a bootstrap token
bootstrapToken, err := uuid.GenerateUUID()
require.NoError(t, err)

bsSecretName := fmt.Sprintf("%s-acl-bootstrap-token", releaseName)
bsSecretKey := "token"
saSecretName := fmt.Sprintf("%s-snapshot-agent-config", releaseName)
saSecretKey := "token"

// Create cluster
helmValues := map[string]string{
"global.tls.enabled": "true",
"global.gossipEncryption.autoGenerate": "true",
"global.acls.manageSystemACLs": "true",
"global.acls.bootstrapToken.secretName": bsSecretName,
"global.acls.bootstrapToken.secretKey": bsSecretKey,
"client.snapshotAgent.enabled": "true",
"client.snapshotAgent.configSecret.secretName": saSecretName,
"client.snapshotAgent.configSecret.secretKey": saSecretKey,
cases := map[string]struct {
secure bool
}{
"non-secure": {secure: false},
"secure": {secure: true},
}

// Get new cluster
consulCluster := consul.NewHelmCluster(t, helmValues, suite.Environment().DefaultContext(t), cfg, releaseName)
client := environment.KubernetesClientFromOptions(t, kubectlOptions)
for name, c := range cases {
t.Run(name, func(t *testing.T) {
ctx := suite.Environment().DefaultContext(t)
kubectlOptions := ctx.KubectlOptions(t)
ns := kubectlOptions.Namespace
releaseName := helpers.RandomName()

// Add bootstrap token secret
logger.Log(t, "Storing bootstrap token as a k8s secret")
consul.CreateK8sSecret(t, client, cfg, ns, bsSecretName, bsSecretKey, bootstrapToken)
saSecretName := fmt.Sprintf("%s-snapshot-agent-config", releaseName)
saSecretKey := "config"

// Add snapshot agent config secret
logger.Log(t, "Storing snapshot agent config as a k8s secret")
config := generateSnapshotAgentConfig(t, bootstrapToken)
logger.Logf(t, "Snapshot agent config: %s", config)
consul.CreateK8sSecret(t, client, cfg, ns, saSecretName, saSecretKey, config)
// Create cluster
helmValues := map[string]string{
"global.tls.enabled": strconv.FormatBool(c.secure),
"global.gossipEncryption.autoGenerate": strconv.FormatBool(c.secure),
"global.acls.manageSystemACLs": strconv.FormatBool(c.secure),
"server.snapshotAgent.enabled": "true",
"server.snapshotAgent.configSecret.secretName": saSecretName,
"server.snapshotAgent.configSecret.secretKey": saSecretKey,
"connectInject.enabled": "false",
"controller.enabled": "false",
}

// Create cluster
consulCluster.Create(t)
// ----------------------------------
// Get new cluster
consulCluster := consul.NewHelmCluster(t, helmValues, suite.Environment().DefaultContext(t), cfg, releaseName)
client := environment.KubernetesClientFromOptions(t, kubectlOptions)

// Validate that consul snapshot agent is running correctly and is generating snapshot files
logger.Log(t, "Confirming that Consul Snapshot Agent is generating snapshot files")
// Create k8s client from kubectl options.
// Add snapshot agent config secret
logger.Log(t, "Storing snapshot agent config as a k8s secret")
config := generateSnapshotAgentConfig(t)
logger.Logf(t, "Snapshot agent config: %s", config)
consul.CreateK8sSecret(t, client, cfg, ns, saSecretName, saSecretKey, config)

podList, err := client.CoreV1().Pods(kubectlOptions.Namespace).List(context.Background(),
metav1.ListOptions{LabelSelector: fmt.Sprintf("app=consul,component=client-snapshot-agent,release=%s", releaseName)})
require.NoError(t, err)
require.True(t, len(podList.Items) > 0)
// Create cluster
consulCluster.Create(t)
// ----------------------------------

// Validate that consul snapshot agent is running correctly and is generating snapshot files
logger.Log(t, "Confirming that Consul Snapshot Agent is generating snapshot files")
// Create k8s client from kubectl options.

// Wait for 10 seconds to allow the snapshot to be written.
time.Sleep(10 * time.Second)
podList, err := client.CoreV1().Pods(kubectlOptions.Namespace).List(context.Background(),
metav1.ListOptions{LabelSelector: fmt.Sprintf("app=consul,component=server,release=%s", releaseName)})
require.NoError(t, err)
require.Len(t, podList.Items, 1, "expected to find only 1 consul server instance")

// Loop through snapshot agents. Only one will be the leader and have the snapshot files.
hasSnapshots := false
for _, pod := range podList.Items {
snapshotFileListOutput, err := k8s.RunKubectlAndGetOutputWithLoggerE(t, kubectlOptions, terratestLogger.Discard, "exec", pod.Name, "-c", "consul-snapshot-agent", "--", "ls", "/")
logger.Logf(t, "Snapshot: \n%s", snapshotFileListOutput)
require.NoError(t, err)
if strings.Contains(snapshotFileListOutput, ".snap") {
logger.Logf(t, "Agent pod contains snapshot files")
hasSnapshots = true
break
} else {
logger.Logf(t, "Agent pod does not contain snapshot files")
}
// We need to give some extra time for ACLs to finish bootstrapping and for servers to come up.
timer := &retry.Timer{Timeout: 1 * time.Minute, Wait: 1 * time.Second}
retry.RunWith(timer, t, func(r *retry.R) {
// Only one server instance is expected, and its snapshot-agent sidecar should hold the snapshot files.
pod := podList.Items[0]
snapshotFileListOutput, err := k8s.RunKubectlAndGetOutputWithLoggerE(t, kubectlOptions, terratestLogger.Discard, "exec", pod.Name, "-c", "consul-snapshot-agent", "--", "ls", "/tmp")
require.NoError(r, err)
logger.Logf(t, "Snapshot: \n%s", snapshotFileListOutput)
require.Contains(r, snapshotFileListOutput, ".snap", "Agent pod does not contain snapshot files")
})
})
}
require.True(t, hasSnapshots, ".snap")
}

func generateSnapshotAgentConfig(t *testing.T, token string) string {
func generateSnapshotAgentConfig(t *testing.T) string {
config := map[string]interface{}{
"snapshot_agent": map[string]interface{}{
"token": token,
"log": map[string]interface{}{
"level": "INFO",
"enable_syslog": false,
@@ -124,7 +117,7 @@ func generateSnapshotAgentConfig(t *testing.T, token string) string {
"local_scratch_path": "",
},
"local_storage": map[string]interface{}{
"path": ".",
"path": "/tmp",
},
},
}
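The rewritten test above replaces the fixed 10-second sleep with polling via the Consul SDK's `testutil/retry` package. A minimal standalone sketch of that pattern, with a hypothetical readiness check standing in for the `kubectl exec` assertion:

```go
package main

import (
	"errors"
	"testing"
	"time"

	"github.com/hashicorp/consul/sdk/testutil/retry"
)

// TestEventuallyReady re-runs the assertion function every Wait interval
// until it stops failing or Timeout elapses, instead of sleeping once and
// hoping the system is ready.
func TestEventuallyReady(t *testing.T) {
	start := time.Now()
	checkReady := func() error { // hypothetical condition to poll
		if time.Since(start) < 3*time.Second {
			return errors.New("not ready yet")
		}
		return nil
	}

	timer := &retry.Timer{Timeout: 1 * time.Minute, Wait: 1 * time.Second}
	retry.RunWith(timer, t, func(r *retry.R) {
		if err := checkReady(); err != nil {
			r.Fatalf("still waiting: %v", err)
		}
	})
}
```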