
Merge branch 'dev' into boltdb #2553

Merged: 35 commits, Aug 4, 2020

Commits
a2e361d
Refactoring Conditional to BooleanDefaultTrue
yhlee-aws Jun 24, 2020
24c4c20
BooleanDefaultFalse implementation
yhlee-aws Jun 26, 2020
abd1617
Refactoring Conditional to BooleanDefaultTrue
yhlee-aws Jun 24, 2020
481a248
BooleanDefaultFalse implementation
yhlee-aws Jun 26, 2020
369a709
Merge branch 'dev' into boolean_config_fix
yhlee-aws Jul 9, 2020
1ccce07
Merge pull request #2516 from yunhee-l/boolean_config_fix
yhlee-aws Jul 9, 2020
da9a73e
migrating ENITrunkingEnabled (ECS_ENABLE_HIGH_DENSITY_ENI) to Boolean…
yhlee-aws Jul 13, 2020
17e5493
migrated ECS_CHECKPOINT (config.Checkpoint) to BooleanDefaultFalse
yhlee-aws Jul 14, 2020
e1674fc
config.UpdatesEnabled (ECS_UPDATES_ENABLED) migrated to BooleanDefaul…
yhlee-aws Jul 14, 2020
5d32f68
Migrating AppArmorCapable (ECS_APPARMOR_CAPABLE) to BooleanDefaultFalse
yhlee-aws Jul 17, 2020
315d47a
migrating SELinuxCapable (ECS_SELINUX_CAPABLE) to BooleanConfigFalse
yhlee-aws Jul 18, 2020
bd832a7
Migrating PrivilegedDisabled (ECS_DISABLE_PRIVILEGED) to BooleanDefau…
yhlee-aws Jul 18, 2020
b890d88
migrating PollMetrics (ECS_POLL_METRICS) to BooleanDefaultTrue
yhlee-aws Jul 18, 2020
90adf56
migrate more boolen
mythri-garaga Jul 21, 2020
25ba693
migrate DisableMetrics ("ECS_DISABLE_METRICS") to BooleanDefaultFalse
Jul 21, 2020
ace992b
Migrating ImageCleanupDisabled (ECS_DISABLE_IMAGE_CLEANUP) to Boolean…
Jul 21, 2020
211e2b3
Migrating TaskENIEnabled, TaskIAMRoleEnabled, AWSVPCBlockInstanceMeta…
Jul 22, 2020
3f56401
merge error fixed
Jul 22, 2020
1a88cff
Feature / fluentD
Jul 21, 2020
a250409
Release 1.42.0
yhlee-aws Jul 23, 2020
094a290
Add task cluster arn
shubham2892 Jul 21, 2020
83a8ee2
Merge branch 'v1.42.0-stage' into dev
amazon-ecs-bot Jul 24, 2020
6904342
Merge branch 'dev' into boolean_config_fix
yhlee-aws Jul 28, 2020
11a412a
Merge pull request #2543 from yunhee-l/boolean_config_fix
yhlee-aws Jul 28, 2020
b434746
Merge pull request #2544 from yunhee-l/dev
yhlee-aws Jul 29, 2020
4a72e6c
Calculate per-sec network metrics for RXBytes and TxBytes
ubhattacharjya Jun 16, 2020
53e483c
add containernetworking library
shubham2892 Jun 29, 2020
ba0a082
add support for awsvpc mode stats
shubham2892 Jun 29, 2020
a633191
Enable V4 task metadata endpoint to display network rate stats
ubhattacharjya Jul 7, 2020
155f185
change to pointer
ubhattacharjya Jul 9, 2020
7bf330f
Fix Telemetry model to include per second network stats metrics
ubhattacharjya Jul 14, 2020
2dd3798
Bug Fix: fix V4 metadata endpoint displaying network rate metrics for…
ubhattacharjya Jul 15, 2020
52bc1a9
Check task network device before querying network stats
shubham2892 Jul 20, 2020
76dd8ed
replace device list argument with task stats device
shubham2892 Jul 21, 2020
cbb6d2f
Merge branch 'dev' into boltdb
fenxiong Aug 3, 2020
5 changes: 5 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,10 @@
# Changelog

## 1.42.0
* Feature - Support for sub second precision in FluentD [#2538](https://github.com/aws/amazon-ecs-agent/pull/2538).
* Bug - Fixed a bug that caused configured values for ImageCleanupExclusionList
to be ignored in some situations [#2513](https://github.com/aws/amazon-ecs-agent/pull/2513)

## 1.41.1
* Bug - Fixed a bug [#2476](https://github.com/aws/amazon-ecs-agent/issues/2476) where HostPort is not present in ECS Task Metadata Endpoint response with bridge network type [#2495](https://github.com/aws/amazon-ecs-agent/pull/2495)

2 changes: 1 addition & 1 deletion VERSION
@@ -1 +1 @@
1.41.1
1.42.0
15 changes: 12 additions & 3 deletions agent/Gopkg.lock


5 changes: 4 additions & 1 deletion agent/Gopkg.toml
@@ -80,7 +80,6 @@ required = ["github.com/golang/mock/mockgen/model"]
[[constraint]]
name = "github.com/vishvananda/netlink"
revision ="fe3b5664d23a11b52ba59bece4ff29c52772a56b"

[[constraint]]
name = "github.com/didip/tollbooth"
version = "3.0.2"
@@ -96,3 +95,7 @@ required = ["github.com/golang/mock/mockgen/model"]
[[constraint]]
branch = "master"
name = "github.com/awslabs/go-config-generator-for-fluentd-and-fluentbit"

[[constraint]]
name = "github.com/containernetworking/plugins"
version = "0.8.6"
11 changes: 6 additions & 5 deletions agent/acs/handler/task_manifest_handler.go
@@ -52,7 +52,6 @@ func newTaskManifestHandler(ctx context.Context,

// Create a cancelable context from the parent context
derivedContext, cancel := context.WithCancel(ctx)

return taskManifestHandler{
messageBufferTaskManifest: make(chan *ecsacs.TaskManifestMessage),
messageBufferTaskManifestAck: make(chan string),
@@ -187,7 +186,7 @@ func (taskManifestHandler *taskManifestHandler) sendTaskStopVerificationMessage(

// compares the list of tasks received in the task manifest message and the tasks running on the instance.
// It returns all the tasks that are running on the instance but not present in the task manifest message's task list.
func compareTasks(receivedTaskList []*ecsacs.TaskIdentifier, runningTaskList []*apitask.Task) []*ecsacs.TaskIdentifier {
func compareTasks(receivedTaskList []*ecsacs.TaskIdentifier, runningTaskList []*apitask.Task, clusterARN string) []*ecsacs.TaskIdentifier {
tasksToBeKilled := make([]*ecsacs.TaskIdentifier, 0)
for _, runningTask := range runningTaskList {
// For every task running on the instance check if the task is present in receivedTaskList with the DesiredState
@@ -204,8 +203,9 @@ }
}
if !taskPresent {
tasksToBeKilled = append(tasksToBeKilled, &ecsacs.TaskIdentifier{
DesiredStatus: aws.String(apitaskstatus.TaskStoppedString),
TaskArn: aws.String(runningTask.Arn),
DesiredStatus: aws.String(apitaskstatus.TaskStoppedString),
TaskArn: aws.String(runningTask.Arn),
TaskClusterArn: aws.String(clusterARN),
})
}
}
@@ -240,6 +240,7 @@ func (taskManifestHandler *taskManifestHandler) handleTaskManifestSingleMessage(
message *ecsacs.TaskManifestMessage) error {
taskListManifestHandler := message.Tasks
seqNumberFromMessage := *message.Timeline
clusterARN := *message.ClusterArn
agentLatestSequenceNumber := *taskManifestHandler.latestSeqNumberTaskManifest

// Check if the sequence number of message received is more than the one stored in Agent
@@ -255,7 +256,7 @@ func (taskManifestHandler *taskManifestHandler) handleTaskManifestSingleMessage(
return err
}

tasksToKill := compareTasks(taskListManifestHandler, runningTasksOnInstance)
tasksToKill := compareTasks(taskListManifestHandler, runningTasksOnInstance, clusterARN)

// Update messageId so that it can be compared to the messageId in TaskStopVerificationAck message
taskManifestHandler.setMessageId(*message.MessageId)
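The change to compareTasks above threads the cluster ARN through to every STOPPED task identifier the agent sends back. The following is a minimal, self-contained sketch of that logic; the type names mirror the ecsacs/apitask types but are trimmed to only the fields the comparison needs, and the real function also honors the desired status carried in the manifest:

```go
package main

import "fmt"

// TaskIdentifier is a trimmed stand-in for ecsacs.TaskIdentifier.
type TaskIdentifier struct {
	TaskArn        string
	DesiredStatus  string
	TaskClusterArn string
}

// Task is a trimmed stand-in for apitask.Task.
type Task struct{ Arn string }

// compareTasks returns every running task that is absent from the manifest,
// marked STOPPED and, after this PR, tagged with the cluster ARN.
func compareTasks(received []TaskIdentifier, running []Task, clusterARN string) []TaskIdentifier {
	toKill := make([]TaskIdentifier, 0)
	for _, rt := range running {
		present := false
		for _, id := range received {
			if id.TaskArn == rt.Arn {
				present = true
				break
			}
		}
		if !present {
			toKill = append(toKill, TaskIdentifier{
				TaskArn:        rt.Arn,
				DesiredStatus:  "STOPPED",
				TaskClusterArn: clusterARN,
			})
		}
	}
	return toKill
}

func main() {
	manifest := []TaskIdentifier{{TaskArn: "arn1"}}
	running := []Task{{Arn: "arn1"}, {Arn: "arn2"}}
	fmt.Println(compareTasks(manifest, running, "cluster-arn")) // [{arn2 STOPPED cluster-arn}]
}
```

This matches the shape of the updated test assertions, where every expected identifier now carries TaskClusterArn alongside TaskArn and DesiredStatus.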
37 changes: 20 additions & 17 deletions agent/acs/handler/task_manifest_handler_test.go
@@ -68,8 +68,8 @@ func TestManifestHandlerKillAllTasks(t *testing.T) {

//Task that needs to be stopped, sent back by agent
taskIdentifierFinal := []*ecsacs.TaskIdentifier{
{DesiredStatus: aws.String(apitaskstatus.TaskStoppedString), TaskArn: aws.String("arn1")},
{DesiredStatus: aws.String(apitaskstatus.TaskStoppedString), TaskArn: aws.String("arn2")},
{DesiredStatus: aws.String(apitaskstatus.TaskStoppedString), TaskArn: aws.String("arn1"), TaskClusterArn: aws.String(cluster)},
{DesiredStatus: aws.String(apitaskstatus.TaskStoppedString), TaskArn: aws.String("arn2"), TaskClusterArn: aws.String(cluster)},
}

taskStopVerificationMessage := &ecsacs.TaskStopVerificationMessage{
@@ -107,7 +107,7 @@ func TestManifestHandlerKillAllTasks(t *testing.T) {
ClusterArn: aws.String(cluster),
ContainerInstanceArn: aws.String(containerInstanceArn),
Tasks: []*ecsacs.TaskIdentifier{
{DesiredStatus: aws.String("STOPPED"), TaskArn: aws.String("arn-long")},
{DesiredStatus: aws.String("STOPPED"), TaskArn: aws.String("arn-long"), TaskClusterArn: aws.String(cluster)},
},
Timeline: aws.Int64(testSeqNum),
}
@@ -164,8 +164,8 @@ func TestManifestHandlerKillFewTasks(t *testing.T) {

//Task that needs to be stopped, sent back by agent
taskIdentifierFinal := []*ecsacs.TaskIdentifier{
{DesiredStatus: aws.String(apitaskstatus.TaskStoppedString), TaskArn: aws.String("arn2")},
{DesiredStatus: aws.String(apitaskstatus.TaskStoppedString), TaskArn: aws.String("arn3")},
{DesiredStatus: aws.String(apitaskstatus.TaskStoppedString), TaskArn: aws.String("arn2"), TaskClusterArn: aws.String(cluster)},
{DesiredStatus: aws.String(apitaskstatus.TaskStoppedString), TaskArn: aws.String("arn3"), TaskClusterArn: aws.String(cluster)},
}

taskStopVerificationMessage := &ecsacs.TaskStopVerificationMessage{
@@ -202,12 +202,14 @@ func TestManifestHandlerKillFewTasks(t *testing.T) {
ContainerInstanceArn: aws.String(containerInstanceArn),
Tasks: []*ecsacs.TaskIdentifier{
{
DesiredStatus: aws.String(apitaskstatus.TaskRunningString),
TaskArn: aws.String("arn1"),
DesiredStatus: aws.String(apitaskstatus.TaskRunningString),
TaskArn: aws.String("arn1"),
TaskClusterArn: aws.String(cluster),
},
{
DesiredStatus: aws.String(apitaskstatus.TaskStoppedString),
TaskArn: aws.String("arn2"),
DesiredStatus: aws.String(apitaskstatus.TaskStoppedString),
TaskArn: aws.String("arn2"),
TaskClusterArn: aws.String(cluster),
},
},
Timeline: aws.Int64(testSeqNum),
@@ -355,20 +357,21 @@ func TestManifestHandlerDifferentTaskLists(t *testing.T) {

// tasks that are supposed to be running
taskIdentifierInitial := ecsacs.TaskIdentifier{
DesiredStatus: aws.String(apitaskstatus.TaskStoppedString),
TaskArn: aws.String("arn1"),
DesiredStatus: aws.String(apitaskstatus.TaskStoppedString),
TaskArn: aws.String("arn1"),
TaskClusterArn: aws.String(cluster),
}

//Task that needs to be stopped, sent back by agent
taskIdentifierAckFinal := []*ecsacs.TaskIdentifier{
{DesiredStatus: aws.String(apitaskstatus.TaskRunningString), TaskArn: aws.String("arn1")},
{DesiredStatus: aws.String(apitaskstatus.TaskStoppedString), TaskArn: aws.String("arn2")},
{DesiredStatus: aws.String(apitaskstatus.TaskRunningString), TaskArn: aws.String("arn1"), TaskClusterArn: aws.String(cluster)},
{DesiredStatus: aws.String(apitaskstatus.TaskStoppedString), TaskArn: aws.String("arn2"), TaskClusterArn: aws.String(cluster)},
}

//Task that needs to be stopped, sent back by agent
taskIdentifierMessage := []*ecsacs.TaskIdentifier{
{DesiredStatus: aws.String(apitaskstatus.TaskStoppedString), TaskArn: aws.String("arn1")},
{DesiredStatus: aws.String(apitaskstatus.TaskStoppedString), TaskArn: aws.String("arn2")},
{DesiredStatus: aws.String(apitaskstatus.TaskStoppedString), TaskArn: aws.String("arn1"), TaskClusterArn: aws.String(cluster)},
{DesiredStatus: aws.String(apitaskstatus.TaskStoppedString), TaskArn: aws.String("arn2"), TaskClusterArn: aws.String(cluster)},
}

taskStopVerificationMessage := &ecsacs.TaskStopVerificationMessage{
@@ -518,7 +521,7 @@ func TestCompareTasksDifferentTasks(t *testing.T) {
{Arn: "arn1", DesiredStatusUnsafe: apitaskstatus.TaskRunning},
}

compareTaskList := compareTasks(receivedTaskList, taskList)
compareTaskList := compareTasks(receivedTaskList, taskList, "test-cluster-arn")

assert.Equal(t, 2, len(compareTaskList))
}
@@ -540,7 +543,7 @@ func TestCompareTasksSameTasks(t *testing.T) {
{Arn: "arn1", DesiredStatusUnsafe: apitaskstatus.TaskRunning},
}

compareTaskList := compareTasks(receivedTaskList, taskList)
compareTaskList := compareTasks(receivedTaskList, taskList, "test-cluster-arn")

assert.Equal(t, 0, len(compareTaskList))
}
4 changes: 2 additions & 2 deletions agent/acs/update_handler/updater.go
@@ -106,7 +106,7 @@ func (u *updater) stageUpdateHandler() func(req *ecsacs.StageUpdateMessage) {
u.reset()
}

if !u.config.UpdatesEnabled {
if !u.config.UpdatesEnabled.Enabled() {
nack("Updates are disabled")
return
}
@@ -225,7 +225,7 @@ func (u *updater) performUpdateHandler(state dockerstate.TaskEngineState, dataCl

seelog.Debug("Got perform update request")

if !u.config.UpdatesEnabled {
if !u.config.UpdatesEnabled.Enabled() {
reason := "Updates are disabled"
seelog.Errorf("Nacking PerformUpdate; reason: %s", reason)
u.acs.MakeRequest(&ecsacs.NackRequest{
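The repeated switch from `!u.config.UpdatesEnabled` to `!u.config.UpdatesEnabled.Enabled()` reflects the PR's wider migration from plain bools to three-state config types. A hedged sketch of how `BooleanDefaultFalse` and `BooleanDefaultTrue` plausibly behave, assuming a Conditional-style underlying value as suggested by `config.ExplicitlyEnabled` / `config.ExplicitlyDisabled` in the updated tests (the real agent types may differ in detail):

```go
package main

import "fmt"

// Conditional mirrors a three-state config value: a boolean that can also
// be "not set", letting each field apply its own default.
type Conditional int

const (
	NotSet Conditional = iota
	ExplicitlyEnabled
	ExplicitlyDisabled
)

// BooleanDefaultFalse reads as false unless explicitly enabled.
type BooleanDefaultFalse struct{ Value Conditional }

func (b BooleanDefaultFalse) Enabled() bool { return b.Value == ExplicitlyEnabled }

// BooleanDefaultTrue reads as true unless explicitly disabled.
type BooleanDefaultTrue struct{ Value Conditional }

func (b BooleanDefaultTrue) Enabled() bool { return b.Value != ExplicitlyDisabled }

func main() {
	fmt.Println(BooleanDefaultFalse{}.Enabled())                         // false: default applies
	fmt.Println(BooleanDefaultTrue{}.Enabled())                          // true: default applies
	fmt.Println(BooleanDefaultFalse{Value: ExplicitlyEnabled}.Enabled()) // true
	fmt.Println(BooleanDefaultTrue{Value: ExplicitlyDisabled}.Enabled()) // false
}
```

The zero value is what distinguishes the two types: an unset `BooleanDefaultFalse` is disabled and an unset `BooleanDefaultTrue` is enabled, which is why call sites must go through `Enabled()` rather than compare the struct directly.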
6 changes: 3 additions & 3 deletions agent/acs/update_handler/updater_test.go
@@ -50,7 +50,7 @@ func ptr(i interface{}) interface{} {
func mocks(t *testing.T, cfg *config.Config) (*updater, *gomock.Controller, *config.Config, *mock_client.MockClientServer, *mock_http.MockRoundTripper) {
if cfg == nil {
cfg = &config.Config{
UpdatesEnabled: true,
UpdatesEnabled: config.BooleanDefaultFalse{Value: config.ExplicitlyEnabled},
UpdateDownloadDir: filepath.Clean("/tmp/test/"),
}
}
@@ -91,7 +91,7 @@ func mockOS() {
}
func TestStageUpdateWithUpdatesDisabled(t *testing.T) {
u, ctrl, _, mockacs, _ := mocks(t, &config.Config{
UpdatesEnabled: false,
UpdatesEnabled: config.BooleanDefaultFalse{Value: config.ExplicitlyDisabled},
})
defer ctrl.Finish()

@@ -116,7 +116,7 @@ func TestStageUpdateWithUpdatesDisabled(t *testing.T) {

func TestPerformUpdateWithUpdatesDisabled(t *testing.T) {
u, ctrl, cfg, mockacs, _ := mocks(t, &config.Config{
UpdatesEnabled: false,
UpdatesEnabled: config.BooleanDefaultFalse{Value: config.ExplicitlyDisabled},
})
defer ctrl.Finish()

2 changes: 1 addition & 1 deletion agent/api/task/task.go
@@ -298,7 +298,7 @@ func (task *Task) initializeVolumes(cfg *config.Config, dockerClient dockerapi.D
if err != nil {
return apierrors.NewResourceInitError(task.Arn, err)
}
err = task.initializeDockerVolumes(cfg.SharedVolumeMatchFullConfig, dockerClient, ctx)
err = task.initializeDockerVolumes(cfg.SharedVolumeMatchFullConfig.Enabled(), dockerClient, ctx)
if err != nil {
return apierrors.NewResourceInitError(task.Arn, err)
}
2 changes: 1 addition & 1 deletion agent/api/task/task_linux_test.go
@@ -568,7 +568,7 @@ func TestPostUnmarshalWithCPULimitsFail(t *testing.T) {
ResourcesMapUnsafe: make(map[string][]taskresource.TaskResource),
}
cfg := config.Config{
TaskCPUMemLimit: config.ExplicitlyEnabled,
TaskCPUMemLimit: config.BooleanDefaultTrue{Value: config.ExplicitlyEnabled},
}
assert.Error(t, task.PostUnmarshalTask(&cfg, nil, nil, nil, nil))
assert.Equal(t, 0, len(task.GetResources()))
8 changes: 4 additions & 4 deletions agent/api/task/task_windows.go
@@ -47,10 +47,10 @@ const (
type PlatformFields struct {
// CpuUnbounded determines whether a mix of unbounded and bounded CPU tasks
// are allowed to run in the instance
CpuUnbounded bool `json:"cpuUnbounded"`
CpuUnbounded config.BooleanDefaultFalse `json:"cpuUnbounded"`
// MemoryUnbounded determines whether a mix of unbounded and bounded Memory tasks
// are allowed to run in the instance
MemoryUnbounded bool `json:"memoryUnbounded"`
MemoryUnbounded config.BooleanDefaultFalse `json:"memoryUnbounded"`
}

var cpuShareScaleFactor = runtime.NumCPU() * cpuSharesPerCore
@@ -130,7 +130,7 @@ func (task *Task) platformHostConfigOverride(hostConfig *dockercontainer.HostCon
}
hostConfig.CPUShares = 0

if hostConfig.Memory <= 0 && task.PlatformFields.MemoryUnbounded {
if hostConfig.Memory <= 0 && task.PlatformFields.MemoryUnbounded.Enabled() {
// As of Docker version 17.06.2-ee-6, MemoryReservation is not supported on Windows. This ensures that
// this parameter is not passed, allowing a container to launch without a hard limit.
hostConfig.MemoryReservation = 0
@@ -144,7 +144,7 @@
// want. Instead, we convert 0 to 2 to be closer to expected behavior. The
// reason for 2 over 1 is that 1 is an invalid value (Linux's choice, not Docker's).
func (task *Task) dockerCPUShares(containerCPU uint) int64 {
if containerCPU <= 1 && !task.PlatformFields.CpuUnbounded {
if containerCPU <= 1 && !task.PlatformFields.CpuUnbounded.Enabled() {
seelog.Debugf(
"Converting CPU shares to allowed minimum of 2 for task arn: [%s] and cpu shares: %d",
task.Arn, containerCPU)
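The dockerCPUShares comment above describes mapping CPU shares of 0 or 1 to the minimum of 2, skipped when CpuUnbounded is set. A small sketch of that conversion, with the three-state CpuUnbounded field reduced to a plain bool for illustration:

```go
package main

import "fmt"

// dockerCPUShares maps a task-definition CPU value to what is passed to
// Docker. Docker treats 0 as "unlimited" and Linux rejects 1, so bounded
// tasks get the minimum valid value of 2 instead.
func dockerCPUShares(containerCPU uint, cpuUnbounded bool) int64 {
	if containerCPU <= 1 && !cpuUnbounded {
		return 2
	}
	return int64(containerCPU)
}

func main() {
	fmt.Println(dockerCPUShares(0, false)) // 2: clamped to the minimum
	fmt.Println(dockerCPUShares(0, true))  // 0: unbounded tasks pass through
	fmt.Println(dockerCPUShares(100, false)) // 100: values above 1 are unchanged
}
```

In the diff itself the flag is `task.PlatformFields.CpuUnbounded.Enabled()`, i.e. the same BooleanDefaultFalse migration applied to the Windows platform fields.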
22 changes: 11 additions & 11 deletions agent/api/task/task_windows_test.go
@@ -110,7 +110,7 @@ func TestPostUnmarshalWindowsCanonicalPaths(t *testing.T) {
seqNum := int64(42)
task, err := TaskFromACS(&taskFromAcs, &ecsacs.PayloadMessage{SeqNum: &seqNum})
assert.Nil(t, err, "Should be able to handle acs task")
cfg := config.Config{TaskCPUMemLimit: config.ExplicitlyDisabled}
cfg := config.Config{TaskCPUMemLimit: config.BooleanDefaultTrue{Value: config.ExplicitlyDisabled}}
task.PostUnmarshalTask(&cfg, nil, nil, nil, nil)

for _, container := range task.Containers { // remove v3 endpoint from each container because it's randomly generated
@@ -211,43 +211,43 @@ func TestCPUPercentBasedOnUnboundedEnabled(t *testing.T) {
cpuShareScaleFactor := runtime.NumCPU() * cpuSharesPerCore
testcases := []struct {
cpu int64
cpuUnbounded bool
cpuUnbounded config.BooleanDefaultFalse
cpuPercent int64
}{
{
cpu: 0,
cpuUnbounded: true,
cpuUnbounded: config.BooleanDefaultFalse{Value: config.ExplicitlyEnabled},
cpuPercent: 0,
},
{
cpu: 1,
cpuUnbounded: true,
cpuUnbounded: config.BooleanDefaultFalse{Value: config.ExplicitlyEnabled},
cpuPercent: 1,
},
{
cpu: 0,
cpuUnbounded: false,
cpuUnbounded: config.BooleanDefaultFalse{Value: config.ExplicitlyDisabled},
cpuPercent: 1,
},
{
cpu: 1,
cpuUnbounded: false,
cpuUnbounded: config.BooleanDefaultFalse{Value: config.ExplicitlyDisabled},
cpuPercent: 1,
},
{
cpu: 100,
cpuUnbounded: true,
cpuUnbounded: config.BooleanDefaultFalse{Value: config.ExplicitlyEnabled},
cpuPercent: 100 * percentageFactor / int64(cpuShareScaleFactor),
},
{
cpu: 100,
cpuUnbounded: false,
cpuUnbounded: config.BooleanDefaultFalse{Value: config.ExplicitlyDisabled},
cpuPercent: 100 * percentageFactor / int64(cpuShareScaleFactor),
},
}
for _, tc := range testcases {
t.Run(fmt.Sprintf("container cpu-%d,cpu unbounded tasks enabled- %t,expected cpu percent-%d",
tc.cpu, tc.cpuUnbounded, tc.cpuPercent), func(t *testing.T) {
tc.cpu, tc.cpuUnbounded.Enabled(), tc.cpuPercent), func(t *testing.T) {
testTask := &Task{
Containers: []*apicontainer.Container{
{
@@ -295,7 +295,7 @@ func TestWindowsMemoryReservationOption(t *testing.T) {
},
},
PlatformFields: PlatformFields{
MemoryUnbounded: false,
MemoryUnbounded: config.BooleanDefaultFalse{Value: config.ExplicitlyDisabled},
},
}

@@ -307,7 +307,7 @@ func TestWindowsMemoryReservationOption(t *testing.T) {
assert.EqualValues(t, nonZeroMemoryReservationValue, cfg.MemoryReservation)

// With MemoryUnbounded set to true, tasks with no memory hard limit will have their memory reservation set to zero
testTask.PlatformFields.MemoryUnbounded = true
testTask.PlatformFields.MemoryUnbounded = config.BooleanDefaultFalse{Value: config.ExplicitlyEnabled}
cfg, configErr = testTask.DockerHostConfig(testTask.Containers[0], dockerMap(testTask),
defaultDockerClientAPIVersion, &config.Config{})
