This repository has been archived by the owner on Sep 17, 2024. It is now read-only.

chore: backports for 7.13.x branch (#1178)
* Move kubernetes/kubectl/kind code to internal project layout (#1092)

This is mainly a cleanup to keep all internal, potentially reusable code in our
`internal` directory layout.

Next steps would be to take what's in `internal/kubectl` and merge with this code.

Signed-off-by: Adam Stokes <51892+adam-stokes@users.noreply.github.com>

* feat: bootstrap fleet-server for the deployment of regular elastic-agents (#1078)

* chore: provide a fleet-server base image based on centos/debian with systemd

* WIP

* fix: remove duplicated fields after merge conflicts

* fix: update method call after merge conflicts

* chore: extract service name calculation to a method

* chore: extract container name calculation to a method

* chore: refactor get container name method

* chore: refactor method even more

* chore: use installer state to retrieve container name

* chore: use installer when calculating service name

* fix: adapt service names for fleet server

* chore: enrich log when creating an installer

* fix: use fleet server host when creating fleet config

* fix: use https when connecting to fleet-server

It creates its own self-signed certs

* feat: bootstrap a fleet server before a regular agent is deployed to fleet

It will define the server host to be used when enrolling agents

* fix: use fleet policy for agents, not the server one

* fix: get different installers for fleet-server and agents

* fix: use the old step for deploying regular agents

* chore: rename variable with consistent name

* chore: rename fleet-server scenario

* fix: use proper container name for standalone mode

* chore: save two variables

* chore: rename standalone scenario for bootstrapping fleet-server

* chore: rename bootstrap methods

* chore: encapsulate bootstrap fleet-server logic

* Update fleet.go

* chore: remove Fleet Server CI parallel execution

* chore: remove feature file for fleet-server

* chore: bootstrap fleet server only once

We want to have it bootstrapped once for the entire test suite, not once per scenario (a sketch of the once-only bootstrap follows this change entry)

* fix: an agent was needed when adding integrations

Co-authored-by: Adam Stokes <51892+adam-stokes@users.noreply.github.com>
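
A minimal sketch of the bootstrap-once idea described above, assuming hypothetical helper names (`bootstrapFleetServer`, `beforeScenario`) rather than the project's actual API:

```go
package main

import (
	"fmt"
	"sync"
)

// bootstrapOnce guards the fleet-server bootstrap so it runs a single time
// for the whole test suite instead of once per scenario.
var bootstrapOnce sync.Once

// bootstrapFleetServer is a hypothetical stand-in for the real bootstrap logic
// (deploying the fleet-server container and waiting for a healthy status).
func bootstrapFleetServer() error {
	fmt.Println("bootstrapping fleet-server for the whole suite")
	return nil
}

// beforeScenario would be wired into the suite's before-scenario hook; every
// scenario calls it, but only the first call actually bootstraps fleet-server.
func beforeScenario() error {
	var err error
	bootstrapOnce.Do(func() {
		err = bootstrapFleetServer()
	})
	return err
}

func main() {
	for i := 0; i < 3; i++ {
		if err := beforeScenario(); err != nil {
			panic(err)
		}
	}
}
```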

* apm-server tests (#1083)

* some tests for apm-server
* clean op dir on init instead of after

* fix agent uninstall (#1111)

* Kubernetes Deployment (#1110)

* Kubernetes Deployment

Signed-off-by: Adam Stokes <51892+adam-stokes@users.noreply.github.com>

* Expose hostPort for kibana, elasticsearch, fleet without needing ingress

This is nice for local development, where you don't need an ingress and are
relatively sure that the host system has the required ports available to bind to
(a sketch of the hostPort idea follows this change entry).

Signed-off-by: Adam Stokes <51892+adam-stokes@users.noreply.github.com>
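
As a rough illustration of that hostPort approach (not the actual manifests in this change), a container port can be bound directly to the host using `k8s.io/api/core/v1` types; the image name and port numbers are illustrative and the snippet assumes the k8s.io/api module is available:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Binding the container port straight to the host removes the need for an
	// ingress during local development, as long as ports such as 5601, 9200
	// and 8220 are actually free on the host.
	kibana := corev1.Container{
		Name:  "kibana",
		Image: "docker.elastic.co/kibana/kibana:7.13.0-SNAPSHOT",
		Ports: []corev1.ContainerPort{
			{Name: "http", ContainerPort: 5601, HostPort: 5601},
		},
	}
	fmt.Printf("%s binds host port %d\n", kibana.Name, kibana.Ports[0].HostPort)
}
```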

* Auto bootstrap fleet during initialize scenario (#1116)

Signed-off-by: Adam Stokes <51892+adam-stokes@users.noreply.github.com>

Co-authored-by: Manuel de la Peña <mdelapenya@gmail.com>

* feat: support running k8s autodiscover suite for Beats PRs and local repositories (#1115)

* chore: add license

* chore: initialise configurations before test suite

* chore: use timeout_factor from env

* fix: tell kind to skip pulling beats images

* chore: add a method to load images into kind (see the sketch after this list)

* feat: support running k8s autodiscover for Beats PRs or local filesystem

* chore: add license header

* chore: expose logger and use it, simplifying initialisation

* fix: only run APM services for local APM environment

* Revert "chore: expose logger and use it, simplifying initialisation"

This reverts commit a89325c.

* chore: log scenario name

* fix: always cache beat version for podName

* chore: reduce log level

Co-authored-by: Adam Stokes <51892+adam-stokes@users.noreply.github.com>
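
A sketch of how local Beats images might be side-loaded into a kind cluster so the nodes don't try to pull them; it simply shells out to `kind load docker-image`, and the image and cluster names are placeholders:

```go
package main

import (
	"fmt"
	"os/exec"
)

// loadImageIntoKind side-loads a locally built Docker image into the nodes of
// a kind cluster, so the kubelet does not try to pull it from a registry.
func loadImageIntoKind(image, cluster string) error {
	cmd := exec.Command("kind", "load", "docker-image", image, "--name", cluster)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("loading %s into kind cluster %q: %w\n%s", image, cluster, err, out)
	}
	return nil
}

func main() {
	// Placeholder image name, e.g. one built from a Beats PR or a local checkout.
	if err := loadImageIntoKind("docker.elastic.co/beats/metricbeat:7.13.0-SNAPSHOT", "kind"); err != nil {
		panic(err)
	}
}
```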

* chore: initialise timeout factor next to the declaration (#1118)

* chore: initialise timeout factor in its own package

* chore: reuse timeout factor from common

* Unify fleet and stand-alone suites (#1112)

* fix agent uninstall

* unify fleet and stand alone suites

* move things around a bit more

* fix bad merge

* simplify some things

* chore: remove unused code (#1119)

* chore: remove unused code

* chore: remove all references to fleet server hostname

Because we assume it's a runtime dependency, provided by the initial
compose file, we do not need to calculate service names or URIs for the
fleet-server endpoint. Instead, we assume it's listening on port 8220 on
the "fleet-server" hostname, which is accessible from the network created
by docker-compose (a sketch of the resulting fixed endpoint follows this change entry).

* fix: use HTTP to connect to fleet-server

* chore: remove fleet server policy code

We do not need it anymore, as the fleet server is already bootstrapped

* chore: remove all policies but system and fleet_server

* Update policies.go

* Update fleet.go

* Update stand-alone.go

Co-authored-by: Adam Stokes <51892+adam-stokes@users.noreply.github.com>
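
A small sketch of that simplification: with fleet-server provided by the initial compose file, the enrollment URL can be a fixed value instead of being computed from service names. The constant and helper names here are illustrative, not the project's:

```go
package main

import "fmt"

// fleetServerBaseURL is the fixed endpoint assumed to be reachable on the
// docker-compose network, so no service name or URI needs to be calculated.
const fleetServerBaseURL = "http://fleet-server:8220"

// enrollmentArgs builds the arguments an agent could use to enroll against the
// already-bootstrapped fleet-server (illustrative helper, not the project's API).
func enrollmentArgs(enrollmentToken string) []string {
	return []string{
		"enroll",
		"--url", fleetServerBaseURL,
		"--enrollment-token", enrollmentToken,
		"--insecure",
	}
}

func main() {
	fmt.Println(enrollmentArgs("example-token"))
}
```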

* Support multiple deployment backends (#1130)

* Abstract out deployment

Provides the ability to plug in different deployment backends for use in testing.
The deployment backends currently supported are "docker" and "kubernetes" (a sketch
of such an abstraction follows this change entry).

* remove unused import
* remove unsetting of fleet server hostname as it's not needed
* add deployer support to stand-alone
* add elastic-agent to k8s deployment specs
* Update internal/docker/docker.go

Signed-off-by: Adam Stokes <51892+adam-stokes@users.noreply.github.com>
Co-authored-by: Manuel de la Peña <mdelapenya@gmail.com>
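
A minimal sketch of what a pluggable deployment backend could look like; the interface and method names are assumptions, and the real abstraction lives in the project's `internal/deploy` package and may differ:

```go
package main

import (
	"context"
	"fmt"
)

// Deployment is a hypothetical abstraction over the supported backends, so the
// test suites can target docker-compose or Kubernetes without changing steps.
type Deployment interface {
	Add(ctx context.Context, services []string) error
	Remove(ctx context.Context, services []string) error
}

type dockerDeployment struct{}

func (d *dockerDeployment) Add(ctx context.Context, services []string) error {
	fmt.Println("docker-compose up", services)
	return nil
}

func (d *dockerDeployment) Remove(ctx context.Context, services []string) error {
	fmt.Println("docker-compose rm", services)
	return nil
}

type kubernetesDeployment struct{}

func (k *kubernetesDeployment) Add(ctx context.Context, services []string) error {
	fmt.Println("kubectl apply", services)
	return nil
}

func (k *kubernetesDeployment) Remove(ctx context.Context, services []string) error {
	fmt.Println("kubectl delete", services)
	return nil
}

// New picks a backend by name, mirroring the "docker" and "kubernetes"
// providers mentioned in the change description.
func New(provider string) Deployment {
	if provider == "kubernetes" {
		return &kubernetesDeployment{}
	}
	return &dockerDeployment{}
}

func main() {
	deployer := New("docker")
	_ = deployer.Add(context.Background(), []string{"elastic-agent"})
}
```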

* fix: bump stale agent version to 7.12-snapshot

* chore: abstract process checks to the deployer (#1156)

* chore: abstract process checks to the deployer

* chore: rename variable in log entry

* docs: improve comment

* fix: go-fmt

* feat: simplify the initialisation of versions (#1159)

* chore: use fixed version in shell scripts

* chore: move retry to utils

We could move it to its own package, but at the moment it's very small (see the version-initialisation sketch after this list)

* chore: initialise stackVersion at one single place

* chore: initialise agent version base at one single place

* chore: initialise agent version at one single place

* chore: reduce the number of requests to Elastic's artifacts endpoint

* chore: rename AgentVersionBase variable to BeatVersionBase

* chore: rename AgentVersion variable to BeatVersion

* chore: use Beat version in metricbeat test suite

* chore: check if the version must use the fallback after coming from a Git SHA
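
A sketch of the once-only version initialisation with a tiny retry loop, along the lines described above; the helper names and the stubbed artifacts lookup are placeholders, not the project's actual utils:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

var (
	versionOnce     sync.Once
	beatVersionBase string
)

// retry runs fn up to attempts times, sleeping between attempts. The real
// helper lives in the project's utils; this is only the general shape.
func retry(attempts int, delay time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		time.Sleep(delay)
	}
	return err
}

// resolveVersion stands in for the single request made against Elastic's
// artifacts endpoint; here it just returns a fixed value.
func resolveVersion() (string, error) {
	return "7.13.0-SNAPSHOT", nil
}

// BeatVersionBase resolves the base version exactly once and caches it, so the
// artifacts endpoint is not queried repeatedly across the suite.
func BeatVersionBase() string {
	versionOnce.Do(func() {
		_ = retry(3, 2*time.Second, func() error {
			v, err := resolveVersion()
			if err != nil {
				return err
			}
			beatVersionBase = v
			return nil
		})
	})
	return beatVersionBase
}

func main() {
	fmt.Println(BeatVersionBase())
	fmt.Println(BeatVersionBase()) // cached, no second lookup
}
```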

* feat: support flavours in services, especially in the elastic-agent (#1162)

* chore: move compose to deploy package

* feat: use a ServiceRequest when adding services

* feat: add service flavour support (see the sketch after this list)

* chore: remove unused centos/debian services

* fixup: add service flavour

* chore: move docker client to the deploy package

We will need another abstraction to represent the Docker client operations,
so it's clear what is a deployment and what is an operation on the deployment.
Maybe a Client struct for each provider will help differentiate them.

* chore: use ServiceRequest everywhere

* chore: run agent commands with a ServiceRequest

* chore: use ServiceRequest in metricbeat test suite

* chore: pass flavours to installers

* chore: add a step to install the agent for the underlying OS

* chore: always add flavour

* fix: use installer for fleet_mode when removing services at the end of the scenario

* fix: update broken references in metricbeat test suite

* fix: update broken references in helm test suite

* fix: standalone does not have an installer

* fix: use service instead of image to get a service request for the agent

* feat: support for scaling services in compose

* fix: run second agent using compose scale option

* fix: update kibana's default Docker namespace
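
A rough sketch of a service request carrying a flavour, in the spirit of the `deploy.NewServiceRequest` calls visible in the diff below; the `WithFlavour` helper and field names are assumptions:

```go
package main

import "fmt"

// ServiceRequest describes a service to be deployed; the flavour selects the
// OS base (e.g. centos or debian) for services such as the elastic-agent.
type ServiceRequest struct {
	Name    string
	Flavour string
}

// NewServiceRequest mirrors the constructor used throughout the diff.
func NewServiceRequest(name string) ServiceRequest {
	return ServiceRequest{Name: name}
}

// WithFlavour is an assumed fluent helper for picking the service flavour.
func (s ServiceRequest) WithFlavour(flavour string) ServiceRequest {
	s.Flavour = flavour
	return s
}

func main() {
	agent := NewServiceRequest("elastic-agent").WithFlavour("debian")
	fmt.Printf("deploying %s (%s flavour)\n", agent.Name, agent.Flavour)
}
```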

* feat: make a stronger verification of fleet-server being bootstrapped (#1164)

* fix: resolve issues in k8s-autodiscover test suite (#1171)

* chore: use timeout factor when tagging docker images

* fix: resolve alias version in k8s-autodiscover test suite

* fix: use common versions for k8s-autodiscover

* fix: update background processes to 2 instances

Co-authored-by: Adam Stokes <51892+adam-stokes@users.noreply.github.com>
Co-authored-by: Juan Álvarez <juan.alvarez@elastic.co>
3 people authored May 17, 2021
1 parent 294db5c commit 1c90992
Showing 89 changed files with 2,694 additions and 1,085 deletions.
3 changes: 0 additions & 3 deletions .ci/.e2e-tests.yaml
@@ -17,9 +17,6 @@ SUITES:
- name: "Fleet"
pullRequestFilter: " && ~debian"
tags: "fleet_mode_agent"
- name: "Fleet Server"
pullRequestFilter: " && ~debian"
tags: "fleet_server"
- name: "Endpoint Integration"
pullRequestFilter: " && ~debian"
tags: "agent_endpoint_integration"
2 changes: 1 addition & 1 deletion .ci/scripts/clean-docker.sh
@@ -9,7 +9,7 @@ set -euxo pipefail
# Build and test the app using the install and test make goals.
#

readonly VERSION="7.13.0-SNAPSHOT"
readonly VERSION="$(cat $(pwd)/.stack-version)"

main() {
# refresh docker images
4 changes: 2 additions & 2 deletions .ci/scripts/fleet-test.sh
@@ -9,10 +9,10 @@ set -euxo pipefail
# Run the functional tests for fleets using the functional-test wrapper
#
# Parameters:
# - STACK_VERSION - that's the version of the stack to be tested. Default '7.13.0-SNAPSHOT'.
# - STACK_VERSION - that's the version of the stack to be tested. Default is stored in '.stack-version'.
#

STACK_VERSION=${1:-'7.13.0-SNAPSHOT'}
STACK_VERSION=${1:-"$(cat $(pwd)/.stack-version)"}
SUITE='fleet'

# Exclude the nightly tests in the CI.
10 changes: 6 additions & 4 deletions .ci/scripts/functional-test.sh
@@ -12,14 +12,16 @@ set -euxo pipefail
# Parameters:
# - SUITE - that's the suite to be tested. Default '' which means all of them.
# - TAGS - that's the tags to be tested. Default '' which means all of them.
# - STACK_VERSION - that's the version of the stack to be tested. Default '7.13.0-SNAPSHOT'.
# - BEAT_VERSION - that's the version of the metricbeat to be tested. Default '7.13.0-SNAPSHOT'.
# - STACK_VERSION - that's the version of the stack to be tested. Default is stored in '.stack-version'.
# - BEAT_VERSION - that's the version of the metricbeat to be tested. Default is stored in '.stack-version'.
#

BASE_VERSION="$(cat $(pwd)/.stack-version)"

SUITE=${1:-''}
TAGS=${2:-''}
STACK_VERSION=${3:-'7.13.0-SNAPSHOT'}
BEAT_VERSION=${4:-'7.13.0-SNAPSHOT'}
STACK_VERSION=${3:-"${BASE_VERSION}"}
BEAT_VERSION=${4:-"${BASE_VERSION}"}

## Install the required dependencies for the given SUITE
.ci/scripts/install-test-dependencies.sh "${SUITE}"
10 changes: 6 additions & 4 deletions .ci/scripts/metricbeat-test.sh
@@ -9,12 +9,14 @@ set -euxo pipefail
# Run the functional tests for metricbeat using the functional-test wrapper
#
# Parameters:
# - STACK_VERSION - that's the version of the stack to be tested. Default '7.13.0-SNAPSHOT'.
# - BEAT_VERSION - that's the version of the metricbeat to be tested. Default '7.13.0-SNAPSHOT'.
# - STACK_VERSION - that's the version of the stack to be tested. Default is stored in '.stack-version'.
# - BEAT_VERSION - that's the version of the metricbeat to be tested. Default is stored in '.stack-version'.
#

STACK_VERSION=${1:-'7.13.0-SNAPSHOT'}
BEAT_VERSION=${2:-'7.13.0-SNAPSHOT'}
BASE_VERSION="$(cat $(pwd)/.stack-version)"

STACK_VERSION=${1:-"${BASE_VERSION}"}
BEAT_VERSION=${2:-"${BASE_VERSION}"}
SUITE='metricbeat'

.ci/scripts/functional-test.sh "${SUITE}" "" "${STACK_VERSION}" "${BEAT_VERSION}"
2 changes: 2 additions & 0 deletions .pre-commit-config.yaml
@@ -8,6 +8,8 @@ repos:
exclude: ^notice/overrides.json
- id: check-merge-conflict
- id: check-yaml
exclude: >
(?x)^(cli/config/kubernetes.*)$
- id: check-xml
- id: end-of-file-fixer
exclude: >
1 change: 1 addition & 0 deletions .stack-version
@@ -0,0 +1 @@
7.13.0-SNAPSHOT
18 changes: 13 additions & 5 deletions cli/cmd/deploy.go
@@ -8,7 +8,7 @@ import (
"context"

"github.com/elastic/e2e-testing/cli/config"
"github.com/elastic/e2e-testing/internal/compose"
"github.com/elastic/e2e-testing/internal/deploy"
log "github.com/sirupsen/logrus"

"github.com/spf13/cobra"
@@ -63,12 +63,16 @@ func buildDeployServiceCommand(srv string) *cobra.Command {
Short: `Deploys a ` + srv + ` service`,
Long: `Deploys a ` + srv + ` service, adding it to a running profile, identified by its name`,
Run: func(cmd *cobra.Command, args []string) {
serviceManager := compose.NewServiceManager()
serviceManager := deploy.NewServiceManager()

env := map[string]string{}
env = config.PutServiceEnvironment(env, srv, versionToRun)

err := serviceManager.AddServicesToCompose(context.Background(), deployToProfile, []string{srv}, env)
err := serviceManager.AddServicesToCompose(
context.Background(),
deploy.NewServiceRequest(deployToProfile),
[]deploy.ServiceRequest{deploy.NewServiceRequest(srv)},
env)
if err != nil {
log.WithFields(log.Fields{
"profile": deployToProfile,
@@ -85,12 +89,16 @@ func buildUndeployServiceCommand(srv string) *cobra.Command {
Short: `Undeploys a ` + srv + ` service`,
Long: `Undeploys a ` + srv + ` service, removing it from a running profile, identified by its name`,
Run: func(cmd *cobra.Command, args []string) {
serviceManager := compose.NewServiceManager()
serviceManager := deploy.NewServiceManager()

env := map[string]string{}
env = config.PutServiceEnvironment(env, srv, versionToRun)

err := serviceManager.RemoveServicesFromCompose(context.Background(), deployToProfile, []string{srv}, env)
err := serviceManager.RemoveServicesFromCompose(
context.Background(),
deploy.NewServiceRequest(deployToProfile),
[]deploy.ServiceRequest{deploy.NewServiceRequest(srv)},
env)
if err != nil {
log.WithFields(log.Fields{
"profile": deployToProfile,
18 changes: 10 additions & 8 deletions cli/cmd/run.go
@@ -10,7 +10,7 @@ import (
"strings"

"github.com/elastic/e2e-testing/cli/config"
"github.com/elastic/e2e-testing/internal/compose"
"github.com/elastic/e2e-testing/internal/deploy"
log "github.com/sirupsen/logrus"

"github.com/spf13/cobra"
@@ -64,7 +64,7 @@ func buildRunServiceCommand(srv string) *cobra.Command {
Short: `Runs a ` + srv + ` service`,
Long: `Runs a ` + srv + ` service, spinning up a Docker container for it and exposing its internal configuration so that you are able to connect to it in an easy manner`,
Run: func(cmd *cobra.Command, args []string) {
serviceManager := compose.NewServiceManager()
serviceManager := deploy.NewServiceManager()

env := config.PutServiceEnvironment(map[string]string{}, srv, versionToRun)

@@ -76,7 +76,8 @@
env[k] = v
}

err := serviceManager.RunCompose(context.Background(), false, []string{srv}, env)
err := serviceManager.RunCompose(
context.Background(), false, []deploy.ServiceRequest{deploy.NewServiceRequest(srv)}, env)
if err != nil {
log.WithFields(log.Fields{
"service": srv,
@@ -96,7 +97,7 @@
go run main.go run profile fleet -s elastic-agent:7.13.0-SNAPSHOT
`,
Run: func(cmd *cobra.Command, args []string) {
serviceManager := compose.NewServiceManager()
serviceManager := deploy.NewServiceManager()

env := map[string]string{
"profileVersion": versionToRun,
@@ -110,14 +111,15 @@
env[k] = v
}

err := serviceManager.RunCompose(context.Background(), true, []string{key}, env)
err := serviceManager.RunCompose(
context.Background(), true, []deploy.ServiceRequest{deploy.NewServiceRequest(key)}, env)
if err != nil {
log.WithFields(log.Fields{
"profile": key,
}).Error("Could not run the profile.")
}

composeNames := []string{}
composeNames := []deploy.ServiceRequest{}
if len(servicesToRun) > 0 {
for _, srv := range servicesToRun {
arr := strings.Split(srv, ":")
@@ -137,10 +139,10 @@
}).Trace("Adding service")

env = config.PutServiceEnvironment(env, image, tag)
composeNames = append(composeNames, image)
composeNames = append(composeNames, deploy.NewServiceRequest(image))
}

err = serviceManager.AddServicesToCompose(context.Background(), key, composeNames, env)
err = serviceManager.AddServicesToCompose(context.Background(), deploy.NewServiceRequest(key), composeNames, env)
if err != nil {
log.WithFields(log.Fields{
"profile": key,
12 changes: 7 additions & 5 deletions cli/cmd/stop.go
@@ -8,7 +8,7 @@ import (
"context"

"github.com/elastic/e2e-testing/cli/config"
"github.com/elastic/e2e-testing/internal/compose"
"github.com/elastic/e2e-testing/internal/deploy"
log "github.com/sirupsen/logrus"

"github.com/spf13/cobra"
@@ -55,9 +55,10 @@ func buildStopServiceCommand(srv string) *cobra.Command {
Short: `Stops a ` + srv + ` service`,
Long: `Stops a ` + srv + ` service, stopping its Docker container`,
Run: func(cmd *cobra.Command, args []string) {
serviceManager := compose.NewServiceManager()
serviceManager := deploy.NewServiceManager()

err := serviceManager.StopCompose(context.Background(), false, []string{srv})
err := serviceManager.StopCompose(
context.Background(), false, []deploy.ServiceRequest{deploy.NewServiceRequest(srv)})
if err != nil {
log.WithFields(log.Fields{
"service": srv,
@@ -73,9 +74,10 @@ func buildStopProfileCommand(key string, profile config.Profile) *cobra.Command
Short: `Stops the ` + profile.Name + ` profile`,
Long: `Stops the ` + profile.Name + ` profile, stopping the Services that compound it`,
Run: func(cmd *cobra.Command, args []string) {
serviceManager := compose.NewServiceManager()
serviceManager := deploy.NewServiceManager()

err := serviceManager.StopCompose(context.Background(), true, []string{key})
err := serviceManager.StopCompose(
context.Background(), true, []deploy.ServiceRequest{deploy.NewServiceRequest(key)})
if err != nil {
log.WithFields(log.Fields{
"profile": key,
@@ -16,5 +16,5 @@ xpack.fleet.registryUrl: http://package-registry:8080
xpack.fleet.agents.enabled: true
xpack.fleet.agents.elasticsearch.host: http://elasticsearch:9200
xpack.fleet.agents.fleet_server.hosts:
- http://kibana:5601
- http://fleet-server:8220
xpack.fleet.agents.tlsCheckDisabled: true
21 changes: 20 additions & 1 deletion cli/config/compose/profiles/fleet/docker-compose.yml
@@ -29,7 +29,7 @@ services:
test: "curl -f http://localhost:5601/login | grep kbn-injected-metadata 2>&1 >/dev/null"
retries: 600
interval: 1s
image: "docker.elastic.co/${kibanaDockerNamespace:-beats}/kibana:${kibanaVersion:-7.13.0-SNAPSHOT}"
image: "docker.elastic.co/${kibanaDockerNamespace:-kibana}/kibana:${kibanaVersion:-7.13.0-SNAPSHOT}"
ports:
- "5601:5601"
volumes:
@@ -40,3 +40,22 @@ services:
test: ["CMD", "curl", "-f", "http://localhost:8080"]
retries: 300
interval: 1s

fleet-server:
image: "docker.elastic.co/beats/elastic-agent:${stackVersion:-8.0.0-SNAPSHOT}"
depends_on:
elasticsearch:
condition: service_healthy
kibana:
condition: service_healthy
healthcheck:
test: "curl -f http://127.0.0.1:8220/api/status | grep HEALTHY 2>&1 >/dev/null"
retries: 12
interval: 5s
environment:
- "FLEET_SERVER_ENABLE=1"
- "FLEET_SERVER_INSECURE_HTTP=1"
- "KIBANA_FLEET_SETUP=1"
- "KIBANA_FLEET_HOST=http://kibana:5601"
- "FLEET_SERVER_HOST=0.0.0.0"
- "FLEET_SERVER_PORT=8220"
6 changes: 0 additions & 6 deletions cli/config/compose/services/centos/docker-compose.yml

This file was deleted.

6 changes: 0 additions & 6 deletions cli/config/compose/services/debian/docker-compose.yml

This file was deleted.

@@ -0,0 +1,21 @@
monitoring.enabled: true
http.enabled: true
http.port: 5067
http.host: "0.0.0.0"
apm-server:
host: "0.0.0:8200"
secret_token: "1234"
# Enable APM Server Golang expvar support (https://golang.org/pkg/expvar/).
expvar:
enabled: true
url: "/debug/vars"
kibana:
# For APM Agent configuration in Kibana, enabled must be true.
enabled: true
host: "kibana"
username: "elastic"
password: "changeme"
output.elasticsearch:
hosts: ["http://elasticsearch:9200"]
username: "elastic"
password: "changeme"
@@ -0,0 +1,5 @@
capabilities:
- rule: allow
input: fleet-server
- rule: deny
input: "*"
@@ -0,0 +1,10 @@
fleet_server:
elasticsearch:
host: "elasticsearch"
username: "elastic"
password: "changeme"
kibana:
fleet:
host: "kibana"
username: "elastic"
password: "changeme"
@@ -0,0 +1,9 @@
fleet:
enroll: true
force: false
insecure: true
fleet_server:
enable: true
kibana:
fleet:
setup: true
@@ -1,6 +1,6 @@
version: '2.4'
services:
centos-systemd:
elastic-agent:
image: centos/systemd:${centos_systemdTag:-latest}
container_name: ${centos_systemdContainerName}
entrypoint: "/usr/sbin/init"
26 changes: 26 additions & 0 deletions cli/config/compose/services/elastic-agent/cloud/docker-compose.yml
@@ -0,0 +1,26 @@
version: '2.4'
services:
elastic-agent:
image: docker.elastic.co/${elasticAgentDockerNamespace:-beats}/elastic-agent${elasticAgentDockerImageSuffix}:${elasticAgentTag:-8.0.0-SNAPSHOT}
container_name: ${elasticAgentContainerName}
depends_on:
elasticsearch:
condition: service_healthy
kibana:
condition: service_healthy
environment:
- "FLEET_SERVER_ENABLE=1"
- "FLEET_SERVER_INSECURE_HTTP=1"
- "ELASTIC_AGENT_CLOUD=1"
- "APM_SERVER_PATH=/apm-legacy/apm-server/"
- "STATE_PATH=/apm-legacy/elastic-agent/"
- "CONFIG_PATH=/apm-legacy/config/"
- "DATA_PATH=/apm-legacy/data/"
- "LOGS_PATH=/apm-legacy/logs/"
- "HOME_PATH=/apm-legacy/"
volumes:
- "${apmVolume}:/apm-legacy"
ports:
- "127.0.0.1:8220:8220"
- "127.0.0.1:8200:8200"
- "127.0.0.1:5066:5066"
@@ -1,6 +1,6 @@
version: '2.4'
services:
debian-systemd:
elastic-agent:
image: alehaa/debian-systemd:${debian_systemdTag:-stretch}
container_name: ${debian_systemdContainerName}
entrypoint: "/sbin/init"
4 changes: 0 additions & 4 deletions cli/config/compose/services/elastic-agent/docker-compose.yml
@@ -9,12 +9,8 @@ services:
kibana:
condition: service_healthy
environment:
- "FLEET_SERVER_ELASTICSEARCH_HOST=http://${elasticsearchHost:-elasticsearch}:${elasticsearchPort:-9200}"
- "FLEET_SERVER_ENABLE=${fleetServerMode:-0}"
- "FLEET_SERVER_INSECURE_HTTP=${fleetServerMode:-0}"
- "FLEET_SERVER_HOST=0.0.0.0"
- "FLEET_SERVER_ELASTICSEARCH_USERNAME=elastic"
- "FLEET_SERVER_ELASTICSEARCH_PASSWORD=changeme"
platform: ${elasticAgentPlatform:-linux/amd64}
ports:
- "127.0.0.1:8220:8220"
@@ -6,4 +6,5 @@ services:
entrypoint: "/usr/sbin/init"
privileged: true
volumes:
- ${fleet_server_centosAgentBinarySrcPath:-.}:${fleet_server_centosAgentBinaryTargetPath:-/tmp}
- /sys/fs/cgroup:/sys/fs/cgroup:ro