From 3da49eacc0f6add31df29c07b0d90da22a4ba3d9 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Pawe=C5=82=20Mudlaff?= <5096816+MudlaffP@users.noreply.github.com>
Date: Wed, 6 Dec 2023 14:16:11 +0000
Subject: [PATCH] Merge with upstream v20231116 (#30)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

* Update CHANGELOG.md for v20230703 AMI release (#1337)
  * Update CHANGELOG.md for v20230703 AMI release
  * Update CHANGELOG.md
    Co-authored-by: Carter
  * Update CHANGELOG.md
  ---------
  Co-authored-by: Carter
* Update CHANGELOG.md (#1338)
* Add logging for aws managed csi drivers (#1336)
* Update CHANGELOG.md latest AMI release notes to highlight this was last 1.22 AMI (#1342)
* Removing 1.22 from Makefile (#1343)
* Generate version info for cached images only when is active (#1341)
* Remove region names from us-iso/us-isob credential provider config (#1344)
* Amazon Linux 2023 proof-of-concept (#1340)
* Remove hardcoded pull_cni_from_github var (#1346)
* Remove sonobuoy_e2e_registry (#1249)
* Revert "avoid hard coding provisioner index array" (#1347)
  This reverts commit 6c167655de0e40bce46bc786c6e2ab2ae795e25a.
  Signed-off-by: Davanum Srinivas
* Update sync-eni-max-pods.yaml role ARN (#1350)
* Add CodeCommit sync action (#1351)
* update core CNI plugins version (#1308)
* Update internal build config (#1353)
* Update binary references (#1355)
* Update CHANGELOG.md for 20230711 AMI release (#1357)
* Enable discard_unpacked_layers by default (#1360)
* Mount bpffs on all supported Kubernetes versions (#1349)
* Cleanup /var/log/audit (#1363)
* Use GitHub bot user as committer/author (#1366)
* Update eni-max-pods.txt (#1365)
* Update CHANGELOG.md for 20230728 AMI release (#1371)
* Update eni-max-pods.txt (#1373)
  Co-authored-by: GitHub
* Install latest amazon-ssm-agent from S3 (#1370)
* Do not set KubeletCredentialProviders feature flag for 1.28+ (#1375)
* Fix bug in var doc gen (#1378)
* Generate docs for GitHub Pages (#1379)
* Add write permissions to deploy-docs workflow (#1381)
* Force-push docs to gh-pages (#1382)
* Cache IMDS tokens per-user (#1386)
* Install latest runc 1.1.* (#1384)
* Update eni-max-pods.txt (#1388)
* Update binary build dates (#1390)
* Fetch new IMDS token for every request (#1395)
* Update CHANGELOG for v20230816 (#1396)
* Update eni-max-pods.txt (#1397)
* Update Makefile with latest binaries (#1403)
* Add CI bot (#1402)
* Disable janitor in forks (#1407)
* Add note about bot authorization (#1406)
* noproxy for direct communication to apiserver and timeouts of 3 seconds (#1393)
* Update CHANGELOG.md for 20230825 AMI release (#1408)
  * Update CHANGELOG.md for 20230825 AMI release
  ---------
  Co-authored-by: Vela WU <50354807+FerrelWallis@users.noreply.github.com>
* Allow --reserved-cpus kubelet arg to be used (#1405)
* Install kernel-headers, kernel-devel (#1302)
* Handle eventually-consistent PrivateDnsName (#1383)
* Add .git-commit to archivebuild (#1411)
* Use archivebuild-wrapper system (#1413)
* Discover .git-commit from environment (#1418)
* Update eni-max-pods.txt (#1423)
  Co-authored-by: GitHub
* Update eni-max-pods.txt (#1424)
  Co-authored-by: GitHub
* Require builder instance to use IMDSv2 (#1422)
* Add release note config (#1426)
* Update eni-max-pods.txt (#1429)
  Co-authored-by: GitHub
* Use 2023-09-14 binaries, add 1.28 target (#1431)
* Update eni-max-pods.txt (#1432)
  Co-authored-by: GitHub
* Set pid_max to 4194304 (#1434)
* Install nerdctl (#1321)
* Update CHANGELOG.md for 20230919 AMI release (#1439)
  * Update CHANGELOG.md for 20230919 AMI release
    Co-authored-by: Carter
  ---------
  Co-authored-by: Carter
* bump latest Kubernetes build target version (#1440)
* fix: Tag cached image with the ECR URI for the target region (#1442)
* Add H100 into gpu clock (#1447)
* bug: incorrect region variable name (#1449)
  Co-authored-by: ljosyula
* Update eni-max-pods.txt (#1452)
  Co-authored-by: GitHub
* Update CHANGELOG.md for 20231002 AMI release (#1456)
  Co-authored-by: ljosyula
* Build with latest binaries by default (#1391)
* Fix region in cached image names (#1461)
* Add 1.28 to CI (#1464)
* Add optional FIPS support (#1458)
* Set remote_folder on all shell provisioners (#1462)
* Pull eksctl supported versions for CI (#1465)
  * remove kubernetes versions file and use eksctl supported version list
  * recognize compression
    Co-authored-by: Carter
  ---------
  Co-authored-by: Carter
* Add CHANGELOG entry placeholder (#1466)
* Add named arguments to bot commands (#1463)
* get-ecr-uri.sh falls back to use another region in partition if region unconfigured (#1468)
* Force delete CI clusters, don't wait for pod eviction (#1472)
* Add CHANGELOG workflow for new releases (#1467)
* Allow more flexible kernel_version (#1469)
* Add r7i to eni-max-pods.txt (#1473)
  Co-authored-by: GitHub
* Fix containerd slice configuration (#1437)
* Correctly tag cached images for us-gov-west-1 FIPS endpoint (#1476)
* Lint space errors (#1121)
* Ignore commit to address space errors (#1478)
* Collect more info about Amazon VPC CNI (#1245)
* Update eni-max-pods.txt (#1485)
  Co-authored-by: GitHub
* Fail fast if we cannot determine kubelet version (#1484)
  kubelet is likely to fail when there is a mismatch between the GLIBC in the image and the one golang used to build the kubelet. So fail the image build right away when this happens, as this specific kubelet binary will NOT work on any instance started from this image.
  ```
  2023-10-25T10:11:38-04:00: amazon-ebs: kubelet: /lib64/libc.so.6: version `GLIBC_2.32' not found (required by kubelet)
  2023-10-25T10:11:38-04:00: amazon-ebs: kubelet: /lib64/libc.so.6: version `GLIBC_2.34' not found (required by kubelet)
  ```
  Signed-off-by: Davanum Srinivas
* Persist CI version-info.json as artifact (#1493)
* Add new i4i sizes to eni-max-pods.txt (#1495)
  Co-authored-by: GitHub
* Update eni-max-pods.txt (#1497)
  Co-authored-by: GitHub
* Drop the FIPS related provisioners for al2023 (#1499)
  Signed-off-by: Davanum Srinivas
* Set nerdctl default namespace to k8s.io (#1488)
* Update CHANGELOG.md for release v20231027 (#1502)
  Co-authored-by: GitHub
* Skip installing amazon-ssm-agent if already present (#1501)
* Exclude automated eni-max-pods.txt PRs from release notes (#1498)
* Remove extraneous space character (#1505)
* Update CHANGELOG.md (#1507)
* Update CHANGELOG.md to fix docker version (#1511)
* Update docker to the latest 20.10 version (#1510)
* Changelog entry format tweaks (#1508)
* Document how to collect UserData (#1504)
* Update Fluence changelog
* Update which Kubernetes AMI will be built
---------
Signed-off-by: Davanum Srinivas
Co-authored-by: Xavier Ryan <108886506+xr1776@users.noreply.github.com>
Co-authored-by: Carter
Co-authored-by: jacobwolfaws <113703057+jacobwolfaws@users.noreply.github.com>
Co-authored-by: Prasad Shende
Co-authored-by: camrakin <113552683+camrakin@users.noreply.github.com>
Co-authored-by: Davanum Srinivas
Co-authored-by: Jeffrey Nelson
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Sichaow
Co-authored-by: GitHub
Co-authored-by: Vincent Marguerie <24724195+vincentmrg@users.noreply.github.com>
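The fail-fast rationale behind #1484 above can be sketched in shell. This is a minimal illustration, not the literal validate.sh change; `check_version` is a hypothetical helper, and `bash` stands in for `kubelet` so the sketch runs anywhere:

```shell
#!/usr/bin/env bash
# Hypothetical fail-fast helper (illustrative only): abort early when a
# binary cannot even report its version, e.g. due to a GLIBC mismatch
# like the `GLIBC_2.32' not found` errors in the log excerpt above.
check_version() {
  local bin="$1" out
  if ! out="$("$bin" --version 2>&1)"; then
    echo "ERROR: unable to determine ${bin} version: ${out}" >&2
    return 1
  fi
  # keep only the first line of the (possibly multi-line) version output
  printf '%s: %s\n' "$bin" "$(printf '%s\n' "$out" | head -n 1)"
}

# On the AMI this would be 'kubelet'; bash is used here so the sketch runs.
check_version bash
```

The point is to fail the Packer build at validation time rather than ship an AMI whose kubelet cannot run against the image's libc.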
Co-authored-by: Andrew Johnstone
Co-authored-by: Vela WU <50354807+wwvela@users.noreply.github.com>
Co-authored-by: Vela WU <50354807+FerrelWallis@users.noreply.github.com>
Co-authored-by: Raghvendra Singh <90425886+raghs-aws@users.noreply.github.com>
Co-authored-by: Matthew Wong
Co-authored-by: Nick Baker
Co-authored-by: ddl-retornam <56278673+ddl-retornam@users.noreply.github.com>
Co-authored-by: Carter
Co-authored-by: Bryant Biggs
Co-authored-by: Laxmi Soumya Josyula <42261978+ljosyula@users.noreply.github.com>
Co-authored-by: ljosyula
Co-authored-by: Alex Schultz
Co-authored-by: Julien Baladier
Co-authored-by: Matt
Co-authored-by: Zoltán Reegn
Co-authored-by: donovanrost
Co-authored-by: guessi
Co-authored-by: pjaudiomv <34245618+pjaudiomv@users.noreply.github.com>
Co-authored-by: Edmond Ceausu
---
 .circleci/config.yml | 2 +-
 .git-blame-ignore-revs | 3 +-
 .github/actions/bot/.gitignore | 1 +
 .github/actions/bot/README.md | 21 +
 .github/actions/bot/action.yaml | 13 +
 .github/actions/bot/index.js | 213 +++
 .github/actions/bot/package-lock.json | 430 ++++++++++
 .github/actions/bot/package.json | 13 +
 .github/actions/ci/build/action.yaml | 33 +
 .github/actions/ci/launch/action.yaml | 52 ++
 .github/actions/ci/sonobuoy/action.yaml | 15 +
 .../actions/janitor/ami-sweeper/action.yaml | 13 +
 .github/actions/janitor/ami-sweeper/script.sh | 41 +
 .../janitor/cluster-sweeper/action.yaml | 13 +
 .../actions/janitor/cluster-sweeper/script.sh | 37 +
 .github/release.yaml | 5 +
 .github/workflows/alas-issues.yaml | 26 -
 .github/workflows/bot-trigger.yaml | 14 +
 .github/workflows/{ci.yaml => ci-auto.yaml} | 8 +-
 .github/workflows/ci-manual.yaml | 186 ++++
 .github/workflows/deploy-docs.yaml | 15 +
 .github/workflows/janitor.yaml | 38 +
 .github/workflows/sync-eni-max-pods.yaml | 6 +-
 .github/workflows/sync-to-codecommit.yaml | 29 +
 .github/workflows/update-changelog.yaml | 60 ++
 .gitignore | 2 +
 ArchiveBuildConfig.yaml | 1 +
 CHANGELOG.md | 796 +++++++++++++++++-
 CHANGELOG_FLUENCE.md
| 4 + Config | 4 +- Makefile | 53 +- README.md | 2 +- build-tools/bin/archivebuild-wrapper | 13 + doc/CHANGELOG.md | 1 + doc/CODE_OF_CONDUCT.md | 4 +- doc/CONTRIBUTING.md | 16 +- doc/README.md | 1 + doc/USER_GUIDE.md | 165 ++-- eks-worker-al2-variables.json | 9 +- eks-worker-al2.json | 40 +- files/bin/imds | 52 +- files/bin/private-dns-name | 44 + files/bootstrap.sh | 36 +- files/containerd-config.toml | 1 + files/ecr-credential-provider-config.json | 4 +- files/eni-max-pods.txt | 119 ++- files/get-ecr-uri.sh | 57 +- files/kubelet-config.json | 3 +- files/sonobuoy-e2e-registry-config | 5 - hack/generate-template-variable-doc.py | 4 +- hack/latest-binaries.sh | 26 + hack/lint-docs.sh | 10 + hack/lint-space-errors.sh | 8 + hack/mkdocs.Dockerfile | 4 + hack/mkdocs.sh | 14 + hack/transform-al2-to-al2023.sh | 34 + log-collector-script/linux/README.md | 9 + .../linux/eks-log-collector.sh | 46 +- log-collector-script/windows/README.md | 9 + .../windows/eks-log-collector.ps1 | 34 +- mkdocs.yaml | 19 + scripts/cleanup.sh | 2 +- scripts/cleanup_additional_repos.sh | 10 +- scripts/enable-fips.sh | 10 + scripts/generate-version-info.sh | 22 +- scripts/install-worker.sh | 80 +- scripts/install_additional_repos.sh | 10 +- scripts/upgrade_kernel.sh | 12 +- scripts/validate.sh | 57 +- test/README.md | 8 +- test/cases/get-ecr-uri.sh | 85 ++ test/cases/imds-token-refresh.sh | 69 -- test/cases/mount-bpf-fs.sh | 7 +- test/cases/private-dns-name.sh | 31 + test/cases/reserved-cpus-kubelet-arg.sh | 73 ++ test/mocks/aws | 19 +- .../i-1234567890abcdef0.json | 154 ++++ 77 files changed, 3189 insertions(+), 396 deletions(-) create mode 100644 .github/actions/bot/.gitignore create mode 100644 .github/actions/bot/README.md create mode 100644 .github/actions/bot/action.yaml create mode 100644 .github/actions/bot/index.js create mode 100644 .github/actions/bot/package-lock.json create mode 100644 .github/actions/bot/package.json create mode 100644 .github/actions/ci/build/action.yaml create mode 
100644 .github/actions/ci/launch/action.yaml create mode 100644 .github/actions/ci/sonobuoy/action.yaml create mode 100644 .github/actions/janitor/ami-sweeper/action.yaml create mode 100755 .github/actions/janitor/ami-sweeper/script.sh create mode 100644 .github/actions/janitor/cluster-sweeper/action.yaml create mode 100755 .github/actions/janitor/cluster-sweeper/script.sh create mode 100644 .github/release.yaml delete mode 100644 .github/workflows/alas-issues.yaml create mode 100644 .github/workflows/bot-trigger.yaml rename .github/workflows/{ci.yaml => ci-auto.yaml} (83%) create mode 100644 .github/workflows/ci-manual.yaml create mode 100644 .github/workflows/deploy-docs.yaml create mode 100644 .github/workflows/janitor.yaml create mode 100644 .github/workflows/sync-to-codecommit.yaml create mode 100644 .github/workflows/update-changelog.yaml create mode 100755 build-tools/bin/archivebuild-wrapper create mode 120000 doc/CHANGELOG.md create mode 120000 doc/README.md create mode 100755 files/bin/private-dns-name delete mode 100644 files/sonobuoy-e2e-registry-config create mode 100755 hack/latest-binaries.sh create mode 100755 hack/lint-docs.sh create mode 100755 hack/lint-space-errors.sh create mode 100644 hack/mkdocs.Dockerfile create mode 100755 hack/mkdocs.sh create mode 100755 hack/transform-al2-to-al2023.sh create mode 100644 mkdocs.yaml create mode 100755 scripts/enable-fips.sh create mode 100755 test/cases/get-ecr-uri.sh delete mode 100755 test/cases/imds-token-refresh.sh create mode 100755 test/cases/private-dns-name.sh create mode 100755 test/cases/reserved-cpus-kubelet-arg.sh create mode 100644 test/mocks/describe-instances/i-1234567890abcdef0.json diff --git a/.circleci/config.yml b/.circleci/config.yml index fb9711121..41a0245f1 100644 --- a/.circleci/config.yml +++ b/.circleci/config.yml @@ -31,7 +31,7 @@ jobs: # By default, Circle CI have 10 minutes timeout without response, # so we must extend that timeout limit to be sure that ami build can pass 
no_output_timeout: 60m - command: cd aws_ecr_ami && PACKER_VARIABLE_FILE=fluence-eks-worker-al2-variables.json make -e -j2 1.23 + command: cd aws_ecr_ami && PACKER_VARIABLE_FILE=fluence-eks-worker-al2-variables.json make -e -j2 1.24 workflows: aws_eks_ami: jobs: diff --git a/.git-blame-ignore-revs b/.git-blame-ignore-revs index b78d5db21..d04cc330c 100644 --- a/.git-blame-ignore-revs +++ b/.git-blame-ignore-revs @@ -1,3 +1,4 @@ # Applied code style rules to shell files 6014c4e6872a23f82ca295afa93b033207042876 - +# Addressed space errors +bde408b340d992aad39e13de1aaf929f358f4338 diff --git a/.github/actions/bot/.gitignore b/.github/actions/bot/.gitignore new file mode 100644 index 000000000..c2658d7d1 --- /dev/null +++ b/.github/actions/bot/.gitignore @@ -0,0 +1 @@ +node_modules/ diff --git a/.github/actions/bot/README.md b/.github/actions/bot/README.md new file mode 100644 index 000000000..526846d91 --- /dev/null +++ b/.github/actions/bot/README.md @@ -0,0 +1,21 @@ +# bot + +This GitHub Action parses commands from pull request comments and executes them. + +Only authorized users (members and owners of this repository) are able to execute commands. + +Commands look like `/COMMAND ARGS`, for example: +``` +/echo hello world +``` + +Multiple commands can be included in a comment, one per line; but each command must be unique. + +Some commands accept additional, named arguments specified on subsequent lines. +Named arguments look like `+NAME ARGS`, for example: +``` +/ci launch ++build cache_container_images=true +``` + +Multiple named arguments can be specified. 
\ No newline at end of file
diff --git a/.github/actions/bot/action.yaml b/.github/actions/bot/action.yaml
new file mode 100644
index 000000000..dfb471a30
--- /dev/null
+++ b/.github/actions/bot/action.yaml
@@ -0,0 +1,13 @@
+name: "Bot"
+description: "🤖 beep boop"
+runs:
+  using: "composite"
+  steps:
+    - uses: "actions/checkout@v3"
+    - uses: "actions/github-script@v6"
+      with:
+        script: |
+          const crypto = require('crypto');
+          const uuid = crypto.randomUUID();
+          const bot = require('./.github/actions/bot/index.js');
+          await bot(core, github, context, uuid);
\ No newline at end of file
diff --git a/.github/actions/bot/index.js b/.github/actions/bot/index.js
new file mode 100644
index 000000000..c24398f6d
--- /dev/null
+++ b/.github/actions/bot/index.js
@@ -0,0 +1,213 @@
+// this script cannot require/import, because it's called by actions/github-script.
+// any dependencies must be passed in the inline script in action.yaml
+
+async function bot(core, github, context, uuid) {
+    const payload = context.payload;
+
+    if (!payload.comment) {
+        console.log("No comment found in payload");
+        return;
+    }
+    console.log("Comment found in payload");
+
+    // user's org membership must be public for the author_association to be MEMBER
+    // go to the org's member page, find yourself, and set the visibility to public
+    const author = payload.comment.user.login;
+    const authorized = ["OWNER", "MEMBER"].includes(payload.comment.author_association);
+    if (!authorized) {
+        console.log(`Comment author is not authorized: ${author}`);
+        return;
+    }
+    console.log(`Comment author is authorized: ${author}`);
+
+    let commands;
+    try {
+        commands = parseCommands(uuid, payload, payload.comment.body);
+    } catch (error) {
+        console.log(error);
+        const reply = `@${author} I didn't understand [that](${payload.comment.html_url})! 🤔\n\nTake a look at my [logs](${getBotWorkflowURL(payload, context)}).`;
+        replyToCommand(github, payload, reply);
+        return;
+    }
+    if (commands.length === 0) {
+        console.log("No commands found in comment body");
+        return;
+    }
+    // compare by constructor name so duplicate commands are detected
+    // ('typeof command' would be "object" for every command)
+    const uniqueCommands = [...new Set(commands.map(command => command.constructor.name))];
+    if (uniqueCommands.length != commands.length) {
+        replyToCommand(github, payload, `@${author} you can't use the same command more than once! 🙅`);
+        return;
+    }
+    console.log(commands.length + " command(s) found in comment body");
+
+    for (const command of commands) {
+        const reply = await command.run(author, github);
+        if (typeof reply === 'string') {
+            replyToCommand(github, payload, reply);
+        } else if (reply) {
+            console.log(`Command returned: ${reply}`);
+        } else {
+            console.log("Command did not return a reply");
+        }
+    }
+}
+
+// replyToCommand creates a comment on the same PR that triggered this workflow
+function replyToCommand(github, payload, reply) {
+    github.rest.issues.createComment({
+        owner: payload.repository.owner.login,
+        repo: payload.repository.name,
+        issue_number: payload.issue.number,
+        body: reply
+    });
+}
+
+// getBotWorkflowURL returns an HTML URL for this workflow execution of the bot
+function getBotWorkflowURL(payload, context) {
+    return `https://github.com/${payload.repository.owner.login}/${payload.repository.name}/actions/runs/${context.runId}`;
+}
+
+// parseCommands splits the comment body into lines and parses each line as a command or named arguments to the previous command.
+function parseCommands(uuid, payload, commentBody) {
+    const commands = [];
+    if (!commentBody) {
+        return commands;
+    }
+    const lines = commentBody.split(/\r?\n/);
+    for (const line of lines) {
+        console.log(`Parsing line: ${line}`);
+        const command = parseCommand(uuid, payload, line);
+        if (command) {
+            commands.push(command);
+        } else {
+            const namedArguments = parseNamedArguments(line);
+            if (namedArguments) {
+                const previousCommand = commands.at(-1);
+                if (previousCommand) {
+                    if (typeof previousCommand.addNamedArguments === 'function') {
+                        previousCommand.addNamedArguments(namedArguments.name, namedArguments.args);
+                    } else {
+                        throw new Error(`Parsed named arguments but previous command (${previousCommand.constructor.name}) does not support arguments: ${JSON.stringify(namedArguments)}`);
+                    }
+                } else {
+                    // don't treat this as an error, because the named argument syntax might just be someone '+1'-ing.
+                    console.log(`Parsed named arguments with no previous command: ${JSON.stringify(namedArguments)}`);
+                }
+            }
+        }
+    }
+    return commands;
+}
+
+// parseCommand parses a line as a command.
+// The format of a command is `/NAME ARGS...`.
+// Leading and trailing spaces are ignored.
+function parseCommand(uuid, payload, line) {
+    const command = line.trim().match(/^\/([a-z\-]+)(?:\s+(.+))?$/);
+    if (command) {
+        return buildCommand(uuid, payload, command[1], command[2]);
+    }
+    return null;
+}
+
+// buildCommand builds a command from a name and arguments.
+function buildCommand(uuid, payload, name, args) {
+    switch (name) {
+        case "echo":
+            return new EchoCommand(uuid, payload, args);
+        case "ci":
+            return new CICommand(uuid, payload, args);
+        default:
+            console.log(`Unknown command: ${name}`);
+            return null;
+    }
+}
+
+// parseNamedArguments parses a line as named arguments.
+// The format of a named argument is `+NAME ARGS...`.
+// Leading and trailing spaces are ignored.
+function parseNamedArguments(line) { + const parsed = line.trim().match(/^\+([a-z\-]+)(?:\s+(.+))?$/); + if (parsed) { + return { + name: parsed[1], + args: parsed[2] + } + } + return null; +} + +class EchoCommand { + constructor(uuid, payload, args) { + this.phrase = args ? args : "echo"; + } + + run(author) { + return `@${author} *${this.phrase}*`; + } +} + +class CICommand { + constructor(uuid, payload, args) { + this.repository_owner = payload.repository.owner.login; + this.repository_name = payload.repository.name; + this.pr_number = payload.issue.number; + this.comment_url = payload.comment.html_url; + this.uuid = uuid; + this.goal = "test"; + // "test" goal, which executes all CI stages, is the default when no goal is specified + if (args != null && args != "") { + this.goal = args; + } + this.goal_args = {}; + } + + addNamedArguments(goal, args) { + this.goal_args[goal] = args; + } + + async run(author, github) { + const pr = await github.rest.pulls.get({ + owner: this.repository_owner, + repo: this.repository_name, + pull_number: this.pr_number + }); + const mergeable = pr.data.mergeable; + switch (mergeable) { + case true: + break; + case false: + case null: + return `@${author} this PR is not currently mergeable, you'll need to rebase it first.`; + default: + throw new Error(`Unknown mergeable value: ${mergeable}`); + } + const inputs = { + uuid: this.uuid, + pr_number: this.pr_number.toString(), + git_sha: pr.data.merge_commit_sha, + goal: this.goal, + requester: author, + comment_url: this.comment_url + }; + for (const [goal, args] of Object.entries(this.goal_args)) { + inputs[`${goal}_arguments`] = args; + } + console.log(`Dispatching workflow with inputs: ${JSON.stringify(inputs)}`); + await github.rest.actions.createWorkflowDispatch({ + owner: this.repository_owner, + repo: this.repository_name, + workflow_id: 'ci-manual.yaml', + ref: 'master', + inputs: inputs + }); + return null; + } +} + + +module.exports = async (core, github, context, uuid) => 
{ + bot(core, github, context, uuid).catch((error) => { + core.setFailed(error); + }); +} diff --git a/.github/actions/bot/package-lock.json b/.github/actions/bot/package-lock.json new file mode 100644 index 000000000..333a0db57 --- /dev/null +++ b/.github/actions/bot/package-lock.json @@ -0,0 +1,430 @@ +{ + "name": "bot", + "version": "1.0.0", + "lockfileVersion": 2, + "requires": true, + "packages": { + "": { + "name": "bot", + "version": "1.0.0", + "dependencies": { + "@actions/core": "^1.10.0", + "@actions/github": "^5.1.1" + } + }, + "node_modules/@actions/core": { + "version": "1.10.0", + "resolved": "https://registry.npmjs.org/@actions/core/-/core-1.10.0.tgz", + "integrity": "sha512-2aZDDa3zrrZbP5ZYg159sNoLRb61nQ7awl5pSvIq5Qpj81vwDzdMRKzkWJGJuwVvWpvZKx7vspJALyvaaIQyug==", + "dependencies": { + "@actions/http-client": "^2.0.1", + "uuid": "^8.3.2" + } + }, + "node_modules/@actions/github": { + "version": "5.1.1", + "resolved": "https://registry.npmjs.org/@actions/github/-/github-5.1.1.tgz", + "integrity": "sha512-Nk59rMDoJaV+mHCOJPXuvB1zIbomlKS0dmSIqPGxd0enAXBnOfn4VWF+CGtRCwXZG9Epa54tZA7VIRlJDS8A6g==", + "dependencies": { + "@actions/http-client": "^2.0.1", + "@octokit/core": "^3.6.0", + "@octokit/plugin-paginate-rest": "^2.17.0", + "@octokit/plugin-rest-endpoint-methods": "^5.13.0" + } + }, + "node_modules/@actions/http-client": { + "version": "2.1.1", + "resolved": "https://registry.npmjs.org/@actions/http-client/-/http-client-2.1.1.tgz", + "integrity": "sha512-qhrkRMB40bbbLo7gF+0vu+X+UawOvQQqNAA/5Unx774RS8poaOhThDOG6BGmxvAnxhQnDp2BG/ZUm65xZILTpw==", + "dependencies": { + "tunnel": "^0.0.6" + } + }, + "node_modules/@octokit/auth-token": { + "version": "2.5.0", + "resolved": "https://registry.npmjs.org/@octokit/auth-token/-/auth-token-2.5.0.tgz", + "integrity": "sha512-r5FVUJCOLl19AxiuZD2VRZ/ORjp/4IN98Of6YJoJOkY75CIBuYfmiNHGrDwXr+aLGG55igl9QrxX3hbiXlLb+g==", + "dependencies": { + "@octokit/types": "^6.0.3" + } + }, + "node_modules/@octokit/core": { + 
"version": "3.6.0", + "resolved": "https://registry.npmjs.org/@octokit/core/-/core-3.6.0.tgz", + "integrity": "sha512-7RKRKuA4xTjMhY+eG3jthb3hlZCsOwg3rztWh75Xc+ShDWOfDDATWbeZpAHBNRpm4Tv9WgBMOy1zEJYXG6NJ7Q==", + "dependencies": { + "@octokit/auth-token": "^2.4.4", + "@octokit/graphql": "^4.5.8", + "@octokit/request": "^5.6.3", + "@octokit/request-error": "^2.0.5", + "@octokit/types": "^6.0.3", + "before-after-hook": "^2.2.0", + "universal-user-agent": "^6.0.0" + } + }, + "node_modules/@octokit/endpoint": { + "version": "6.0.12", + "resolved": "https://registry.npmjs.org/@octokit/endpoint/-/endpoint-6.0.12.tgz", + "integrity": "sha512-lF3puPwkQWGfkMClXb4k/eUT/nZKQfxinRWJrdZaJO85Dqwo/G0yOC434Jr2ojwafWJMYqFGFa5ms4jJUgujdA==", + "dependencies": { + "@octokit/types": "^6.0.3", + "is-plain-object": "^5.0.0", + "universal-user-agent": "^6.0.0" + } + }, + "node_modules/@octokit/graphql": { + "version": "4.8.0", + "resolved": "https://registry.npmjs.org/@octokit/graphql/-/graphql-4.8.0.tgz", + "integrity": "sha512-0gv+qLSBLKF0z8TKaSKTsS39scVKF9dbMxJpj3U0vC7wjNWFuIpL/z76Qe2fiuCbDRcJSavkXsVtMS6/dtQQsg==", + "dependencies": { + "@octokit/request": "^5.6.0", + "@octokit/types": "^6.0.3", + "universal-user-agent": "^6.0.0" + } + }, + "node_modules/@octokit/openapi-types": { + "version": "12.11.0", + "resolved": "https://registry.npmjs.org/@octokit/openapi-types/-/openapi-types-12.11.0.tgz", + "integrity": "sha512-VsXyi8peyRq9PqIz/tpqiL2w3w80OgVMwBHltTml3LmVvXiphgeqmY9mvBw9Wu7e0QWk/fqD37ux8yP5uVekyQ==" + }, + "node_modules/@octokit/plugin-paginate-rest": { + "version": "2.21.3", + "resolved": "https://registry.npmjs.org/@octokit/plugin-paginate-rest/-/plugin-paginate-rest-2.21.3.tgz", + "integrity": "sha512-aCZTEf0y2h3OLbrgKkrfFdjRL6eSOo8komneVQJnYecAxIej7Bafor2xhuDJOIFau4pk0i/P28/XgtbyPF0ZHw==", + "dependencies": { + "@octokit/types": "^6.40.0" + }, + "peerDependencies": { + "@octokit/core": ">=2" + } + }, + "node_modules/@octokit/plugin-rest-endpoint-methods": { + "version": 
"5.16.2", + "resolved": "https://registry.npmjs.org/@octokit/plugin-rest-endpoint-methods/-/plugin-rest-endpoint-methods-5.16.2.tgz", + "integrity": "sha512-8QFz29Fg5jDuTPXVtey05BLm7OB+M8fnvE64RNegzX7U+5NUXcOcnpTIK0YfSHBg8gYd0oxIq3IZTe9SfPZiRw==", + "dependencies": { + "@octokit/types": "^6.39.0", + "deprecation": "^2.3.1" + }, + "peerDependencies": { + "@octokit/core": ">=3" + } + }, + "node_modules/@octokit/request": { + "version": "5.6.3", + "resolved": "https://registry.npmjs.org/@octokit/request/-/request-5.6.3.tgz", + "integrity": "sha512-bFJl0I1KVc9jYTe9tdGGpAMPy32dLBXXo1dS/YwSCTL/2nd9XeHsY616RE3HPXDVk+a+dBuzyz5YdlXwcDTr2A==", + "dependencies": { + "@octokit/endpoint": "^6.0.1", + "@octokit/request-error": "^2.1.0", + "@octokit/types": "^6.16.1", + "is-plain-object": "^5.0.0", + "node-fetch": "^2.6.7", + "universal-user-agent": "^6.0.0" + } + }, + "node_modules/@octokit/request-error": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/@octokit/request-error/-/request-error-2.1.0.tgz", + "integrity": "sha512-1VIvgXxs9WHSjicsRwq8PlR2LR2x6DwsJAaFgzdi0JfJoGSO8mYI/cHJQ+9FbN21aa+DrgNLnwObmyeSC8Rmpg==", + "dependencies": { + "@octokit/types": "^6.0.3", + "deprecation": "^2.0.0", + "once": "^1.4.0" + } + }, + "node_modules/@octokit/types": { + "version": "6.41.0", + "resolved": "https://registry.npmjs.org/@octokit/types/-/types-6.41.0.tgz", + "integrity": "sha512-eJ2jbzjdijiL3B4PrSQaSjuF2sPEQPVCPzBvTHJD9Nz+9dw2SGH4K4xeQJ77YfTq5bRQ+bD8wT11JbeDPmxmGg==", + "dependencies": { + "@octokit/openapi-types": "^12.11.0" + } + }, + "node_modules/before-after-hook": { + "version": "2.2.3", + "resolved": "https://registry.npmjs.org/before-after-hook/-/before-after-hook-2.2.3.tgz", + "integrity": "sha512-NzUnlZexiaH/46WDhANlyR2bXRopNg4F/zuSA3OpZnllCUgRaOF2znDioDWrmbNVsuZk6l9pMquQB38cfBZwkQ==" + }, + "node_modules/deprecation": { + "version": "2.3.1", + "resolved": "https://registry.npmjs.org/deprecation/-/deprecation-2.3.1.tgz", + "integrity": 
"sha512-xmHIy4F3scKVwMsQ4WnVaS8bHOx0DmVwRywosKhaILI0ywMDWPtBSku2HNxRvF7jtwDRsoEwYQSfbxj8b7RlJQ==" + }, + "node_modules/is-plain-object": { + "version": "5.0.0", + "resolved": "https://registry.npmjs.org/is-plain-object/-/is-plain-object-5.0.0.tgz", + "integrity": "sha512-VRSzKkbMm5jMDoKLbltAkFQ5Qr7VDiTFGXxYFXXowVj387GeGNOCsOH6Msy00SGZ3Fp84b1Naa1psqgcCIEP5Q==", + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/node-fetch": { + "version": "2.6.13", + "resolved": "https://registry.npmjs.org/node-fetch/-/node-fetch-2.6.13.tgz", + "integrity": "sha512-StxNAxh15zr77QvvkmveSQ8uCQ4+v5FkvNTj0OESmiHu+VRi/gXArXtkWMElOsOUNLtUEvI4yS+rdtOHZTwlQA==", + "dependencies": { + "whatwg-url": "^5.0.0" + }, + "engines": { + "node": "4.x || >=6.0.0" + }, + "peerDependencies": { + "encoding": "^0.1.0" + }, + "peerDependenciesMeta": { + "encoding": { + "optional": true + } + } + }, + "node_modules/once": { + "version": "1.4.0", + "resolved": "https://registry.npmjs.org/once/-/once-1.4.0.tgz", + "integrity": "sha512-lNaJgI+2Q5URQBkccEKHTQOPaXdUxnZZElQTZY0MFUAuaEqe1E+Nyvgdz/aIyNi6Z9MzO5dv1H8n58/GELp3+w==", + "dependencies": { + "wrappy": "1" + } + }, + "node_modules/tr46": { + "version": "0.0.3", + "resolved": "https://registry.npmjs.org/tr46/-/tr46-0.0.3.tgz", + "integrity": "sha512-N3WMsuqV66lT30CrXNbEjx4GEwlow3v6rr4mCcv6prnfwhS01rkgyFdjPNBYd9br7LpXV1+Emh01fHnq2Gdgrw==" + }, + "node_modules/tunnel": { + "version": "0.0.6", + "resolved": "https://registry.npmjs.org/tunnel/-/tunnel-0.0.6.tgz", + "integrity": "sha512-1h/Lnq9yajKY2PEbBadPXj3VxsDDu844OnaAo52UVmIzIvwwtBPIuNvkjuzBlTWpfJyUbG3ez0KSBibQkj4ojg==", + "engines": { + "node": ">=0.6.11 <=0.7.0 || >=0.7.3" + } + }, + "node_modules/universal-user-agent": { + "version": "6.0.0", + "resolved": "https://registry.npmjs.org/universal-user-agent/-/universal-user-agent-6.0.0.tgz", + "integrity": "sha512-isyNax3wXoKaulPDZWHQqbmIx1k2tb9fb3GGDBRxCscfYV2Ch7WxPArBsFEG8s/safwXTT7H4QGhaIkTp9447w==" + }, + "node_modules/uuid": { + "version": 
"8.3.2", + "resolved": "https://registry.npmjs.org/uuid/-/uuid-8.3.2.tgz", + "integrity": "sha512-+NYs2QeMWy+GWFOEm9xnn6HCDp0l7QBD7ml8zLUmJ+93Q5NF0NocErnwkTkXVFNiX3/fpC6afS8Dhb/gz7R7eg==", + "bin": { + "uuid": "dist/bin/uuid" + } + }, + "node_modules/webidl-conversions": { + "version": "3.0.1", + "resolved": "https://registry.npmjs.org/webidl-conversions/-/webidl-conversions-3.0.1.tgz", + "integrity": "sha512-2JAn3z8AR6rjK8Sm8orRC0h/bcl/DqL7tRPdGZ4I1CjdF+EaMLmYxBHyXuKL849eucPFhvBoxMsflfOb8kxaeQ==" + }, + "node_modules/whatwg-url": { + "version": "5.0.0", + "resolved": "https://registry.npmjs.org/whatwg-url/-/whatwg-url-5.0.0.tgz", + "integrity": "sha512-saE57nupxk6v3HY35+jzBwYa0rKSy0XR8JSxZPwgLr7ys0IBzhGviA1/TUGJLmSVqs8pb9AnvICXEuOHLprYTw==", + "dependencies": { + "tr46": "~0.0.3", + "webidl-conversions": "^3.0.0" + } + }, + "node_modules/wrappy": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/wrappy/-/wrappy-1.0.2.tgz", + "integrity": "sha512-l4Sp/DRseor9wL6EvV2+TuQn63dMkPjZ/sp9XkghTEbV9KlPS1xUsZ3u7/IQO4wxtcFB4bgpQPRcR3QCvezPcQ==" + } + }, + "dependencies": { + "@actions/core": { + "version": "1.10.0", + "resolved": "https://registry.npmjs.org/@actions/core/-/core-1.10.0.tgz", + "integrity": "sha512-2aZDDa3zrrZbP5ZYg159sNoLRb61nQ7awl5pSvIq5Qpj81vwDzdMRKzkWJGJuwVvWpvZKx7vspJALyvaaIQyug==", + "requires": { + "@actions/http-client": "^2.0.1", + "uuid": "^8.3.2" + } + }, + "@actions/github": { + "version": "5.1.1", + "resolved": "https://registry.npmjs.org/@actions/github/-/github-5.1.1.tgz", + "integrity": "sha512-Nk59rMDoJaV+mHCOJPXuvB1zIbomlKS0dmSIqPGxd0enAXBnOfn4VWF+CGtRCwXZG9Epa54tZA7VIRlJDS8A6g==", + "requires": { + "@actions/http-client": "^2.0.1", + "@octokit/core": "^3.6.0", + "@octokit/plugin-paginate-rest": "^2.17.0", + "@octokit/plugin-rest-endpoint-methods": "^5.13.0" + } + }, + "@actions/http-client": { + "version": "2.1.1", + "resolved": "https://registry.npmjs.org/@actions/http-client/-/http-client-2.1.1.tgz", + "integrity": 
"sha512-qhrkRMB40bbbLo7gF+0vu+X+UawOvQQqNAA/5Unx774RS8poaOhThDOG6BGmxvAnxhQnDp2BG/ZUm65xZILTpw==", + "requires": { + "tunnel": "^0.0.6" + } + }, + "@octokit/auth-token": { + "version": "2.5.0", + "resolved": "https://registry.npmjs.org/@octokit/auth-token/-/auth-token-2.5.0.tgz", + "integrity": "sha512-r5FVUJCOLl19AxiuZD2VRZ/ORjp/4IN98Of6YJoJOkY75CIBuYfmiNHGrDwXr+aLGG55igl9QrxX3hbiXlLb+g==", + "requires": { + "@octokit/types": "^6.0.3" + } + }, + "@octokit/core": { + "version": "3.6.0", + "resolved": "https://registry.npmjs.org/@octokit/core/-/core-3.6.0.tgz", + "integrity": "sha512-7RKRKuA4xTjMhY+eG3jthb3hlZCsOwg3rztWh75Xc+ShDWOfDDATWbeZpAHBNRpm4Tv9WgBMOy1zEJYXG6NJ7Q==", + "requires": { + "@octokit/auth-token": "^2.4.4", + "@octokit/graphql": "^4.5.8", + "@octokit/request": "^5.6.3", + "@octokit/request-error": "^2.0.5", + "@octokit/types": "^6.0.3", + "before-after-hook": "^2.2.0", + "universal-user-agent": "^6.0.0" + } + }, + "@octokit/endpoint": { + "version": "6.0.12", + "resolved": "https://registry.npmjs.org/@octokit/endpoint/-/endpoint-6.0.12.tgz", + "integrity": "sha512-lF3puPwkQWGfkMClXb4k/eUT/nZKQfxinRWJrdZaJO85Dqwo/G0yOC434Jr2ojwafWJMYqFGFa5ms4jJUgujdA==", + "requires": { + "@octokit/types": "^6.0.3", + "is-plain-object": "^5.0.0", + "universal-user-agent": "^6.0.0" + } + }, + "@octokit/graphql": { + "version": "4.8.0", + "resolved": "https://registry.npmjs.org/@octokit/graphql/-/graphql-4.8.0.tgz", + "integrity": "sha512-0gv+qLSBLKF0z8TKaSKTsS39scVKF9dbMxJpj3U0vC7wjNWFuIpL/z76Qe2fiuCbDRcJSavkXsVtMS6/dtQQsg==", + "requires": { + "@octokit/request": "^5.6.0", + "@octokit/types": "^6.0.3", + "universal-user-agent": "^6.0.0" + } + }, + "@octokit/openapi-types": { + "version": "12.11.0", + "resolved": "https://registry.npmjs.org/@octokit/openapi-types/-/openapi-types-12.11.0.tgz", + "integrity": "sha512-VsXyi8peyRq9PqIz/tpqiL2w3w80OgVMwBHltTml3LmVvXiphgeqmY9mvBw9Wu7e0QWk/fqD37ux8yP5uVekyQ==" + }, + "@octokit/plugin-paginate-rest": { + "version": "2.21.3", + 
"resolved": "https://registry.npmjs.org/@octokit/plugin-paginate-rest/-/plugin-paginate-rest-2.21.3.tgz", + "integrity": "sha512-aCZTEf0y2h3OLbrgKkrfFdjRL6eSOo8komneVQJnYecAxIej7Bafor2xhuDJOIFau4pk0i/P28/XgtbyPF0ZHw==", + "requires": { + "@octokit/types": "^6.40.0" + } + }, + "@octokit/plugin-rest-endpoint-methods": { + "version": "5.16.2", + "resolved": "https://registry.npmjs.org/@octokit/plugin-rest-endpoint-methods/-/plugin-rest-endpoint-methods-5.16.2.tgz", + "integrity": "sha512-8QFz29Fg5jDuTPXVtey05BLm7OB+M8fnvE64RNegzX7U+5NUXcOcnpTIK0YfSHBg8gYd0oxIq3IZTe9SfPZiRw==", + "requires": { + "@octokit/types": "^6.39.0", + "deprecation": "^2.3.1" + } + }, + "@octokit/request": { + "version": "5.6.3", + "resolved": "https://registry.npmjs.org/@octokit/request/-/request-5.6.3.tgz", + "integrity": "sha512-bFJl0I1KVc9jYTe9tdGGpAMPy32dLBXXo1dS/YwSCTL/2nd9XeHsY616RE3HPXDVk+a+dBuzyz5YdlXwcDTr2A==", + "requires": { + "@octokit/endpoint": "^6.0.1", + "@octokit/request-error": "^2.1.0", + "@octokit/types": "^6.16.1", + "is-plain-object": "^5.0.0", + "node-fetch": "^2.6.7", + "universal-user-agent": "^6.0.0" + } + }, + "@octokit/request-error": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/@octokit/request-error/-/request-error-2.1.0.tgz", + "integrity": "sha512-1VIvgXxs9WHSjicsRwq8PlR2LR2x6DwsJAaFgzdi0JfJoGSO8mYI/cHJQ+9FbN21aa+DrgNLnwObmyeSC8Rmpg==", + "requires": { + "@octokit/types": "^6.0.3", + "deprecation": "^2.0.0", + "once": "^1.4.0" + } + }, + "@octokit/types": { + "version": "6.41.0", + "resolved": "https://registry.npmjs.org/@octokit/types/-/types-6.41.0.tgz", + "integrity": "sha512-eJ2jbzjdijiL3B4PrSQaSjuF2sPEQPVCPzBvTHJD9Nz+9dw2SGH4K4xeQJ77YfTq5bRQ+bD8wT11JbeDPmxmGg==", + "requires": { + "@octokit/openapi-types": "^12.11.0" + } + }, + "before-after-hook": { + "version": "2.2.3", + "resolved": "https://registry.npmjs.org/before-after-hook/-/before-after-hook-2.2.3.tgz", + "integrity": 
"sha512-NzUnlZexiaH/46WDhANlyR2bXRopNg4F/zuSA3OpZnllCUgRaOF2znDioDWrmbNVsuZk6l9pMquQB38cfBZwkQ==" + }, + "deprecation": { + "version": "2.3.1", + "resolved": "https://registry.npmjs.org/deprecation/-/deprecation-2.3.1.tgz", + "integrity": "sha512-xmHIy4F3scKVwMsQ4WnVaS8bHOx0DmVwRywosKhaILI0ywMDWPtBSku2HNxRvF7jtwDRsoEwYQSfbxj8b7RlJQ==" + }, + "is-plain-object": { + "version": "5.0.0", + "resolved": "https://registry.npmjs.org/is-plain-object/-/is-plain-object-5.0.0.tgz", + "integrity": "sha512-VRSzKkbMm5jMDoKLbltAkFQ5Qr7VDiTFGXxYFXXowVj387GeGNOCsOH6Msy00SGZ3Fp84b1Naa1psqgcCIEP5Q==" + }, + "node-fetch": { + "version": "2.6.13", + "resolved": "https://registry.npmjs.org/node-fetch/-/node-fetch-2.6.13.tgz", + "integrity": "sha512-StxNAxh15zr77QvvkmveSQ8uCQ4+v5FkvNTj0OESmiHu+VRi/gXArXtkWMElOsOUNLtUEvI4yS+rdtOHZTwlQA==", + "requires": { + "whatwg-url": "^5.0.0" + } + }, + "once": { + "version": "1.4.0", + "resolved": "https://registry.npmjs.org/once/-/once-1.4.0.tgz", + "integrity": "sha512-lNaJgI+2Q5URQBkccEKHTQOPaXdUxnZZElQTZY0MFUAuaEqe1E+Nyvgdz/aIyNi6Z9MzO5dv1H8n58/GELp3+w==", + "requires": { + "wrappy": "1" + } + }, + "tr46": { + "version": "0.0.3", + "resolved": "https://registry.npmjs.org/tr46/-/tr46-0.0.3.tgz", + "integrity": "sha512-N3WMsuqV66lT30CrXNbEjx4GEwlow3v6rr4mCcv6prnfwhS01rkgyFdjPNBYd9br7LpXV1+Emh01fHnq2Gdgrw==" + }, + "tunnel": { + "version": "0.0.6", + "resolved": "https://registry.npmjs.org/tunnel/-/tunnel-0.0.6.tgz", + "integrity": "sha512-1h/Lnq9yajKY2PEbBadPXj3VxsDDu844OnaAo52UVmIzIvwwtBPIuNvkjuzBlTWpfJyUbG3ez0KSBibQkj4ojg==" + }, + "universal-user-agent": { + "version": "6.0.0", + "resolved": "https://registry.npmjs.org/universal-user-agent/-/universal-user-agent-6.0.0.tgz", + "integrity": "sha512-isyNax3wXoKaulPDZWHQqbmIx1k2tb9fb3GGDBRxCscfYV2Ch7WxPArBsFEG8s/safwXTT7H4QGhaIkTp9447w==" + }, + "uuid": { + "version": "8.3.2", + "resolved": "https://registry.npmjs.org/uuid/-/uuid-8.3.2.tgz", + "integrity": 
"sha512-+NYs2QeMWy+GWFOEm9xnn6HCDp0l7QBD7ml8zLUmJ+93Q5NF0NocErnwkTkXVFNiX3/fpC6afS8Dhb/gz7R7eg==" + }, + "webidl-conversions": { + "version": "3.0.1", + "resolved": "https://registry.npmjs.org/webidl-conversions/-/webidl-conversions-3.0.1.tgz", + "integrity": "sha512-2JAn3z8AR6rjK8Sm8orRC0h/bcl/DqL7tRPdGZ4I1CjdF+EaMLmYxBHyXuKL849eucPFhvBoxMsflfOb8kxaeQ==" + }, + "whatwg-url": { + "version": "5.0.0", + "resolved": "https://registry.npmjs.org/whatwg-url/-/whatwg-url-5.0.0.tgz", + "integrity": "sha512-saE57nupxk6v3HY35+jzBwYa0rKSy0XR8JSxZPwgLr7ys0IBzhGviA1/TUGJLmSVqs8pb9AnvICXEuOHLprYTw==", + "requires": { + "tr46": "~0.0.3", + "webidl-conversions": "^3.0.0" + } + }, + "wrappy": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/wrappy/-/wrappy-1.0.2.tgz", + "integrity": "sha512-l4Sp/DRseor9wL6EvV2+TuQn63dMkPjZ/sp9XkghTEbV9KlPS1xUsZ3u7/IQO4wxtcFB4bgpQPRcR3QCvezPcQ==" + } + } +} diff --git a/.github/actions/bot/package.json b/.github/actions/bot/package.json new file mode 100644 index 000000000..0c3a320e9 --- /dev/null +++ b/.github/actions/bot/package.json @@ -0,0 +1,13 @@ +{ + "name": "bot", + "version": "1.0.0", + "description": "", + "main": "index.js", + "scripts": { + "command": "./local-harness.js $@" + }, + "dependencies": { + "@actions/core": "^1.10.0", + "@actions/github": "^5.1.1" + } +} diff --git a/.github/actions/ci/build/action.yaml b/.github/actions/ci/build/action.yaml new file mode 100644 index 000000000..822617abb --- /dev/null +++ b/.github/actions/ci/build/action.yaml @@ -0,0 +1,33 @@ +name: "[CI] Build" +inputs: + git_sha: + required: true + type: string + build_id: + required: true + type: string + k8s_version: + required: true + type: string + additional_arguments: + required: false + type: string +outputs: + ami_id: + value: ${{ steps.build.outputs.ami_id }} +runs: + using: "composite" + steps: + - uses: actions/checkout@v3 + with: + ref: ${{ inputs.git_sha }} + - id: build + shell: bash + run: | + AMI_NAME="amazon-eks-node-${{ 
inputs.k8s_version }}-${{ inputs.build_id }}" + make ${{ inputs.k8s_version }} ami_name=${AMI_NAME} ${{ inputs.additional_arguments }} + echo "ami_id=$(jq -r .builds[0].artifact_id "${AMI_NAME}-manifest.json" | cut -d ':' -f 2)" >> $GITHUB_OUTPUT + - uses: actions/upload-artifact@v3 + with: + name: version-info + path: "*-version-info.json" diff --git a/.github/actions/ci/launch/action.yaml b/.github/actions/ci/launch/action.yaml new file mode 100644 index 000000000..c5e6303b8 --- /dev/null +++ b/.github/actions/ci/launch/action.yaml @@ -0,0 +1,52 @@ +name: '[CI] Integration test / Launch' +inputs: + build_id: + required: true + type: string + ami_id: + required: true + type: string + k8s_version: + required: true + type: string + aws_region: + required: true + type: string +outputs: + cluster_name: + value: ${{ steps.launch.outputs.cluster_name }} +runs: + using: "composite" + steps: + - id: launch + shell: bash + run: | + wget --no-verbose -O eksctl.tar.gz "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_Linux_amd64.tar.gz" + tar xf eksctl.tar.gz && chmod +x ./eksctl + + SANITIZED_K8S_VERSION=$(echo ${{ inputs.k8s_version }} | tr -d '.') + CLUSTER_NAME="$SANITIZED_K8S_VERSION-${{ inputs.build_id }}" + + echo '--- + apiVersion: eksctl.io/v1alpha5 + kind: ClusterConfig + metadata: + name: "'$CLUSTER_NAME'" + region: "${{ inputs.aws_region }}" + version: "${{ inputs.k8s_version }}" + nodeGroups: + - name: "${{ inputs.build_id }}" + instanceType: m5.large + minSize: 3 + maxSize: 3 + desiredCapacity: 3 + ami: "${{ inputs.ami_id }}" + amiFamily: AmazonLinux2 + overrideBootstrapCommand: | + #!/bin/bash + source /var/lib/cloud/scripts/eksctl/bootstrap.helper.sh + /etc/eks/bootstrap.sh "'$CLUSTER_NAME'" --kubelet-extra-args "--node-labels=${NODE_LABELS}"' >> cluster.yaml + cat cluster.yaml + + ./eksctl create cluster --config-file cluster.yaml + echo "cluster_name=$CLUSTER_NAME" >> $GITHUB_OUTPUT diff --git a/.github/actions/ci/sonobuoy/action.yaml 
b/.github/actions/ci/sonobuoy/action.yaml new file mode 100644 index 000000000..e829719b9 --- /dev/null +++ b/.github/actions/ci/sonobuoy/action.yaml @@ -0,0 +1,15 @@ +name: '[CI] Integration test / Sonobuoy' +inputs: + cluster_name: + required: true + type: string +runs: + using: "composite" + steps: + - shell: bash + run: | + aws eks update-kubeconfig --name ${{ inputs.cluster_name }} + wget --no-verbose -O sonobuoy.tar.gz "https://github.com/vmware-tanzu/sonobuoy/releases/download/v0.56.11/sonobuoy_0.56.11_linux_amd64.tar.gz" + tar xf sonobuoy.tar.gz && chmod +x ./sonobuoy + ./sonobuoy run --wait + ./sonobuoy results $(./sonobuoy retrieve) diff --git a/.github/actions/janitor/ami-sweeper/action.yaml b/.github/actions/janitor/ami-sweeper/action.yaml new file mode 100644 index 000000000..e7735cc32 --- /dev/null +++ b/.github/actions/janitor/ami-sweeper/action.yaml @@ -0,0 +1,13 @@ +name: "[Janitor] AMI sweeper" +description: "๐Ÿ—‘๏ธ Deletes CI AMI's when they're no longer needed" +inputs: + max_age_seconds: + description: "Number of seconds after creation when an AMI becomes eligible for deletion" + required: true +runs: + using: "composite" + steps: + - run: ${{ github.action_path }}/script.sh + shell: bash + env: + MAX_AGE_SECONDS: ${{ inputs.max_age_seconds }} diff --git a/.github/actions/janitor/ami-sweeper/script.sh b/.github/actions/janitor/ami-sweeper/script.sh new file mode 100755 index 000000000..f20e6005a --- /dev/null +++ b/.github/actions/janitor/ami-sweeper/script.sh @@ -0,0 +1,41 @@ +#!/usr/bin/env bash + +set -o errexit +set -o pipefail + +MAX_AGE_SECONDS=${MAX_AGE_SECONDS:-$1} +if [ -z "${MAX_AGE_SECONDS}" ]; then + echo "usage: $0 MAX_AGE_SECONDS" + exit 1 +fi + +set -o nounset + +# https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-retries.html +AWS_RETRY_MODE=standard +AWS_MAX_ATTEMPTS=5 + +function jqb64() { + if [ "$#" -lt 2 ]; then + echo "usage: jqb64 BASE64_JSON JQ_ARGS..." 
+ exit 1 + fi + BASE64_JSON="$1" + shift + echo "$BASE64_JSON" | base64 --decode | jq "$@" +} +for IMAGE_DETAILS in $(aws ec2 describe-images --owners self --output json | jq -r '.Images[] | @base64'); do + NAME=$(jqb64 "$IMAGE_DETAILS" -r '.Name') + IMAGE_ID=$(jqb64 "$IMAGE_DETAILS" -r '.ImageId') + CREATION_DATE=$(jqb64 "$IMAGE_DETAILS" -r '.CreationDate') + CREATION_DATE_SECONDS=$(date -d "$CREATION_DATE" '+%s') + CURRENT_TIME_SECONDS=$(date '+%s') + MIN_CREATION_DATE_SECONDS=$(($CURRENT_TIME_SECONDS - $MAX_AGE_SECONDS)) + if [ "$CREATION_DATE_SECONDS" -lt "$MIN_CREATION_DATE_SECONDS" ]; then + aws ec2 deregister-image --image-id "$IMAGE_ID" + for SNAPSHOT_ID in $(jqb64 "$IMAGE_DETAILS" -r '.BlockDeviceMappings[].Ebs.SnapshotId'); do + aws ec2 delete-snapshot --snapshot-id "$SNAPSHOT_ID" + done + echo "Deleted $IMAGE_ID: $NAME" + fi +done diff --git a/.github/actions/janitor/cluster-sweeper/action.yaml b/.github/actions/janitor/cluster-sweeper/action.yaml new file mode 100644 index 000000000..e53de27d1 --- /dev/null +++ b/.github/actions/janitor/cluster-sweeper/action.yaml @@ -0,0 +1,13 @@ +name: "[Janitor] Cluster sweeper" +description: "๐Ÿ—‘๏ธ Deletes CI clusters when they're no longer needed" +inputs: + max_age_seconds: + description: "Number of seconds after creation when a cluster becomes eligible for deletion" + required: true +runs: + using: "composite" + steps: + - run: ${{ github.action_path }}/script.sh + shell: bash + env: + MAX_AGE_SECONDS: ${{ inputs.max_age_seconds }} diff --git a/.github/actions/janitor/cluster-sweeper/script.sh b/.github/actions/janitor/cluster-sweeper/script.sh new file mode 100755 index 000000000..57a20759d --- /dev/null +++ b/.github/actions/janitor/cluster-sweeper/script.sh @@ -0,0 +1,37 @@ +#!/usr/bin/env bash + +set -o errexit +set -o pipefail + +MAX_AGE_SECONDS=${MAX_AGE_SECONDS:-$1} +if [ -z "${MAX_AGE_SECONDS}" ]; then + echo "usage: $0 MAX_AGE_SECONDS" + exit 1 +fi + +set -o nounset + +# 
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-retries.html +AWS_RETRY_MODE=standard +AWS_MAX_ATTEMPTS=5 + +function iso8601_is_eligible_for_deletion() { + local TIME_IN_ISO8601="$1" + local TIME_IN_SECONDS=$(date -d "$TIME_IN_ISO8601" '+%s') + local CURRENT_TIME_IN_SECONDS=$(date '+%s') + MIN_TIME_SECONDS=$(($CURRENT_TIME_IN_SECONDS - $MAX_AGE_SECONDS)) + [ "$TIME_IN_SECONDS" -lt "$MIN_TIME_SECONDS" ] +} +function cluster_is_eligible_for_deletion() { + local CLUSTER_NAME="$1" + local CREATED_AT_ISO8601=$(aws eks describe-cluster --name $CLUSTER_NAME --query 'cluster.createdAt' --output text) + iso8601_is_eligible_for_deletion "$CREATED_AT_ISO8601" +} +wget --no-verbose -O eksctl.tar.gz "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_Linux_amd64.tar.gz" +tar xf eksctl.tar.gz && chmod +x ./eksctl +for CLUSTER in $(aws eks list-clusters --query 'clusters[]' --output text); do + if cluster_is_eligible_for_deletion $CLUSTER; then + echo "Deleting cluster $CLUSTER" + ./eksctl delete cluster --name "$CLUSTER" --force --disable-nodegroup-eviction + fi +done diff --git a/.github/release.yaml b/.github/release.yaml new file mode 100644 index 000000000..5fbdeeba5 --- /dev/null +++ b/.github/release.yaml @@ -0,0 +1,5 @@ +--- +changelog: + exclude: + labels: + - "changelog/exclude" diff --git a/.github/workflows/alas-issues.yaml b/.github/workflows/alas-issues.yaml deleted file mode 100644 index d71611bdc..000000000 --- a/.github/workflows/alas-issues.yaml +++ /dev/null @@ -1,26 +0,0 @@ ---- -name: "[ALAS] Open issues for new bulletins" -on: - workflow_dispatch: - inputs: - window: - description: "Only consider bulletins published within this relative time window (golang Duration)" - default: "24h" - required: true - schedule: - # once an hour, at the top of hour - - cron: "0 * * * *" -permissions: - issues: write -jobs: - alas-al2-bulletins: - runs-on: ubuntu-latest - steps: - - uses: guilhem/rss-issues-action@0.5.2 - with: - repo-token: 
"${{ secrets.GITHUB_TOKEN }}" - feed: "https://alas.aws.amazon.com/AL2/alas.rss" - dry-run: "true" - lastTime: "${{ github.event.inputs.window || '24h' }}" - labels: "alas,alas/al2" - titleFilter: "(medium|low)" diff --git a/.github/workflows/bot-trigger.yaml b/.github/workflows/bot-trigger.yaml new file mode 100644 index 000000000..d728d4f10 --- /dev/null +++ b/.github/workflows/bot-trigger.yaml @@ -0,0 +1,14 @@ +name: Bot +run-name: ๐Ÿค– beep boop +on: + issue_comment: + types: + - created +jobs: + bot: + if: ${{ github.event.issue.pull_request }} + runs-on: ubuntu-latest + permissions: write-all + steps: + - uses: actions/checkout@v3 + - uses: ./.github/actions/bot diff --git a/.github/workflows/ci.yaml b/.github/workflows/ci-auto.yaml similarity index 83% rename from .github/workflows/ci.yaml rename to .github/workflows/ci-auto.yaml index 7f780e683..879ba2bb3 100644 --- a/.github/workflows/ci.yaml +++ b/.github/workflows/ci-auto.yaml @@ -1,9 +1,5 @@ -name: CI +name: "[CI] Auto" on: - workflow_dispatch: - push: - branches: - - 'master' pull_request: types: - opened @@ -17,7 +13,7 @@ jobs: - run: echo "$(go env GOPATH)/bin" >> $GITHUB_PATH - run: go install mvdan.cc/sh/v3/cmd/shfmt@latest - run: make lint - test: + unit-test: runs-on: ubuntu-latest steps: - uses: actions/checkout@v3 diff --git a/.github/workflows/ci-manual.yaml b/.github/workflows/ci-manual.yaml new file mode 100644 index 000000000..2860b75c7 --- /dev/null +++ b/.github/workflows/ci-manual.yaml @@ -0,0 +1,186 @@ +name: '[CI] Manual' +run-name: "#${{ inputs.pr_number }} - ${{ inputs.uuid }}" +on: + workflow_dispatch: + inputs: + requester: + required: true + type: string + comment_url: + required: true + type: string + uuid: + required: true + type: string + pr_number: + required: true + type: string + git_sha: + required: true + type: string + goal: + required: true + type: choice + default: "test" + options: + - "build" + - "launch" + - "test" + build_arguments: + required: false + type: string 
+jobs: + setup: + runs-on: ubuntu-latest + outputs: + git_sha_short: ${{ steps.variables.outputs.git_sha_short }} + workflow_run_url: ${{ steps.variables.outputs.workflow_run_url }} + kubernetes_versions: ${{ steps.variables.outputs.kubernetes_versions }} + build_id: ${{ steps.variables.outputs.build_id }} + ci_step_name_prefix: ${{ steps.variables.outputs.ci_step_name_prefix }} + steps: + - id: variables + run: | + GIT_SHA_SHORT=$(echo ${{ inputs.git_sha }} | rev | cut -c-7 | rev) + echo "git_sha_short=$GIT_SHA_SHORT" >> $GITHUB_OUTPUT + echo "workflow_run_url=https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}" >> $GITHUB_OUTPUT + # grab supported versions directly from eksctl + wget --no-verbose -O eksctl.tar.gz "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_Linux_amd64.tar.gz" + tar xzf eksctl.tar.gz && chmod +x ./eksctl + echo "kubernetes_versions=$(./eksctl version --output json | jq -c .EKSServerSupportedVersions)" >> $GITHUB_OUTPUT + echo "build_id=ci-${{ inputs.pr_number }}-$GIT_SHA_SHORT-${{ inputs.uuid }}" >> $GITHUB_OUTPUT + echo 'ci_step_name_prefix=CI:' >> $GITHUB_OUTPUT + + notify-start: + runs-on: ubuntu-latest + needs: + - setup + steps: + - uses: actions/github-script@v6 + with: + script: | + github.rest.issues.createComment({ + owner: context.repo.owner, + repo: context.repo.repo, + issue_number: ${{ inputs.pr_number }}, + body: `@${{ inputs.requester }} roger [that](${{ inputs.comment_url }})! I've dispatched a [workflow](${{ needs.setup.outputs.workflow_run_url }}).
๐Ÿ‘` + }); + kubernetes-versions: + runs-on: ubuntu-latest + name: ${{ matrix.k8s_version }} + needs: + - setup + - notify-start + permissions: + id-token: write + contents: read + strategy: + # don't bail out of all sub-tasks if one fails + fail-fast: false + matrix: + k8s_version: ${{ fromJson(needs.setup.outputs.kubernetes_versions) }} + steps: + - uses: actions/checkout@v3 + with: + ref: 'master' + - uses: aws-actions/configure-aws-credentials@v2 + with: + aws-region: ${{ secrets.AWS_REGION }} + role-to-assume: ${{ secrets.AWS_ROLE_ARN_CI }} + # 2.5 hours (job usually completes within 2 hours) + role-duration-seconds: 9000 + - name: "${{ needs.setup.outputs.ci_step_name_prefix }} Build" + id: build + uses: ./.github/actions/ci/build + with: + git_sha: ${{ inputs.git_sha }} + k8s_version: ${{ matrix.k8s_version }} + build_id: ${{ needs.setup.outputs.build_id }} + additional_arguments: ${{ inputs.build_arguments }} + - if: ${{ inputs.goal == 'launch' || inputs.goal == 'test' }} + name: "${{ needs.setup.outputs.ci_step_name_prefix }} Launch" + id: launch + uses: ./.github/actions/ci/launch + with: + ami_id: ${{ steps.build.outputs.ami_id }} + k8s_version: ${{ matrix.k8s_version }} + build_id: ${{ needs.setup.outputs.build_id }} + aws_region: ${{ secrets.AWS_REGION }} + - if: ${{ inputs.goal == 'test' }} + name: "${{ needs.setup.outputs.ci_step_name_prefix }} Test" + id: sonobuoy + uses: ./.github/actions/ci/sonobuoy + with: + cluster_name: ${{ steps.launch.outputs.cluster_name }} + notify-outcome: + if: ${{ always() }} + runs-on: ubuntu-latest + needs: + - setup + - kubernetes-versions + steps: + - uses: actions/github-script@v6 + with: + script: | + const { data } = await github.rest.actions.listJobsForWorkflowRun({ + owner: context.repo.owner, + repo: context.repo.repo, + run_id: context.runId + }); + const conclusionEmojis = { + "success": "โœ…", + "skipped": "โญ๏ธ", + "failure": "โŒ", + "cancelled": "๐Ÿšฎ" + }; + const uniqueStepNames = new Set(); + const 
stepConclusionsByK8sVersion = new Map(); + const ciStepNamePrefix = "${{ needs.setup.outputs.ci_step_name_prefix }}"; + for (const job of data.jobs) { + if (/\d+\.\d+/.test(job.name)) { + const k8sVersion = job.name; + for (const step of job.steps) { + if (step.name.startsWith(ciStepNamePrefix)) { + const stepName = step.name.substring(ciStepNamePrefix.length).trim(); + let stepConclusions = stepConclusionsByK8sVersion.get(k8sVersion); + if (!stepConclusions) { + stepConclusions = new Map(); + stepConclusionsByK8sVersion.set(k8sVersion, stepConclusions); + } + stepConclusions.set(stepName, step.conclusion); + uniqueStepNames.add(stepName); + } + } + } + } + const headers = [{ + data: 'Kubernetes version', + header: true + }]; + for (const stepName of uniqueStepNames.values()) { + headers.push({ + data: stepName, + header: true + }); + } + const rows = []; + for (const stepConclusionsForK8sVersion of [...stepConclusionsByK8sVersion.entries()].sort()) { + const k8sVersion = stepConclusionsForK8sVersion[0]; + const row = [k8sVersion]; + for (const step of stepConclusionsForK8sVersion[1].entries()) { + row.push(`${step[1]} ${conclusionEmojis[step[1]]}`); + } + rows.push(row); + } + const commentBody = core.summary + .addRaw("@${{ inputs.requester }} the workflow that you requested has completed. 
๐ŸŽ‰") + .addTable([ + headers, + ...rows, + ]) + .stringify(); + github.rest.issues.createComment({ + owner: context.repo.owner, + repo: context.repo.repo, + issue_number: ${{ inputs.pr_number }}, + body: commentBody + }); diff --git a/.github/workflows/deploy-docs.yaml b/.github/workflows/deploy-docs.yaml new file mode 100644 index 000000000..30328b76a --- /dev/null +++ b/.github/workflows/deploy-docs.yaml @@ -0,0 +1,15 @@ +name: Deploy documentation +on: + workflow_dispatch: + push: + branches: + - 'master' +jobs: + mkdocs: + permissions: + contents: write + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v3 + - run: pip install mkdocs mkdocs-material + - run: mkdocs gh-deploy --strict --no-history --force diff --git a/.github/workflows/janitor.yaml b/.github/workflows/janitor.yaml new file mode 100644 index 000000000..47fec1059 --- /dev/null +++ b/.github/workflows/janitor.yaml @@ -0,0 +1,38 @@ +name: "Janitor" +on: + workflow_dispatch: + schedule: + # hourly at the top of the hour + - cron: "0 * * * *" +permissions: + id-token: write + contents: read +jobs: + cluster-sweeper: + # disable in forks + if: github.repository == 'awslabs/amazon-eks-ami' + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v3 + - uses: aws-actions/configure-aws-credentials@v2 + with: + aws-region: ${{ secrets.AWS_REGION }} + role-to-assume: ${{ secrets.AWS_ROLE_ARN_JANITOR }} + - uses: ./.github/actions/janitor/cluster-sweeper + with: + # 3 hours + max_age_seconds: 10800 + ami-sweeper: + # disable in forks + if: github.repository == 'awslabs/amazon-eks-ami' + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v3 + - uses: aws-actions/configure-aws-credentials@v2 + with: + aws-region: ${{ secrets.AWS_REGION }} + role-to-assume: ${{ secrets.AWS_ROLE_ARN_JANITOR }} + - uses: ./.github/actions/janitor/ami-sweeper + with: + # 3 days + max_age_seconds: 259200 diff --git a/.github/workflows/sync-eni-max-pods.yaml b/.github/workflows/sync-eni-max-pods.yaml index 
9bb3275bc..2affd7873 100644 --- a/.github/workflows/sync-eni-max-pods.yaml +++ b/.github/workflows/sync-eni-max-pods.yaml @@ -17,7 +17,7 @@ jobs: - uses: aws-actions/configure-aws-credentials@v2 with: aws-region: ${{ secrets.AWS_REGION }} - role-to-assume: ${{ secrets.AWS_ROLE_ARN }} + role-to-assume: ${{ secrets.AWS_ROLE_ARN_SYNC_ENI_MAX_PODS }} - uses: actions/checkout@v3 with: repository: awslabs/amazon-eks-ami @@ -40,6 +40,10 @@ jobs: path: amazon-eks-ami/ add-paths: files/eni-max-pods.txt commit-message: "Update eni-max-pods.txt" + committer: "GitHub " + author: "GitHub " + labels: | + changelog/exclude title: "Update eni-max-pods.txt" body: | Generated by [aws/amazon-vpc-cni-k8s](https://github.com/aws/amazon-vpc-cni-k8s): diff --git a/.github/workflows/sync-to-codecommit.yaml b/.github/workflows/sync-to-codecommit.yaml new file mode 100644 index 000000000..ebed3203c --- /dev/null +++ b/.github/workflows/sync-to-codecommit.yaml @@ -0,0 +1,29 @@ +name: '[Sync] Push to CodeCommit' + +on: + schedule: + # twice an hour, at :00 and :30 + - cron: '0,30 * * * *' + +jobs: + mirror: + if: github.repository == 'awslabs/amazon-eks-ami' + runs-on: ubuntu-latest + # These permissions are needed to interact with GitHub's OIDC Token endpoint. 
+ permissions: + id-token: write + contents: read + steps: + - uses: actions/checkout@v2 + with: + # fetch complete history + fetch-depth: 0 + - uses: aws-actions/configure-aws-credentials@v1 + with: + aws-region: ${{ secrets.AWS_REGION }} + role-to-assume: ${{ secrets.AWS_ROLE_ARN_SYNC_TO_CODECOMMIT }} + - run: git config credential.helper '!aws codecommit credential-helper $@' + - run: git config credential.UseHttpPath true + - run: git remote add codecommit ${{ secrets.AWS_CODECOMMIT_REPO_URL }} + - run: git checkout master + - run: git push codecommit master diff --git a/.github/workflows/update-changelog.yaml b/.github/workflows/update-changelog.yaml new file mode 100644 index 000000000..aaffcc5d8 --- /dev/null +++ b/.github/workflows/update-changelog.yaml @@ -0,0 +1,60 @@ +name: "[Release] Update CHANGELOG.md" +on: + release: + types: [released] +permissions: + contents: write + pull-requests: write +jobs: + setup: + # this workflow will always fail in forks; bail if this isn't running in the upstream + if: github.repository == 'awslabs/amazon-eks-ami' + runs-on: ubuntu-latest + outputs: + tag_name: ${{ steps.variables.outputs.tag_name }} + steps: + - id: variables + run: | + echo "tag_name=$(echo ${{ github.ref }} | cut -d/ -f3)" >> $GITHUB_OUTPUT + update-changelog: + runs-on: ubuntu-latest + needs: + - setup + steps: + - uses: actions/checkout@v3 + with: + repository: awslabs/amazon-eks-ami + ref: refs/heads/master + path: amazon-eks-ami/ + - uses: actions/github-script@v6 + with: + script: | + const fs = require('fs'); + const changelogPath = './amazon-eks-ami/CHANGELOG.md'; + const placeholder = ''; + const tagName = '${{ needs.setup.outputs.tag_name }}'; + const release = await github.rest.repos.getReleaseByTag({ + tag: tagName, + owner: context.repo.owner, + repo: context.repo.repo, + }); + const changelog = fs.readFileSync(changelogPath, 'utf8'); + if (changelog.includes(release.data.name)) { + throw new Error(`changelog already includes 
${release.data.name}`); + } + const newEntry = `# ${release.data.name}\n${release.data.body}`; + const updatedChangelog = changelog.replace(placeholder, placeholder + '\n\n' + newEntry + '\n---\n'); + fs.writeFileSync(changelogPath, updatedChangelog); + - uses: peter-evans/create-pull-request@v4 + with: + branch: update-changelog + path: amazon-eks-ami/ + add-paths: CHANGELOG.md + commit-message: "Update CHANGELOG.md for release ${{ needs.setup.outputs.tag_name }}" + committer: "GitHub " + author: "GitHub " + title: "Update CHANGELOG.md" + labels: | + changelog/exclude + body: | + Adds CHANGELOG.md entry for release [${{ needs.setup.outputs.tag_name }}](https://github.com/awslabs/amazon-eks-ami/releases/tag/${{ needs.setup.outputs.tag_name }}). diff --git a/.gitignore b/.gitignore index 2d9cb419a..12527754f 100644 --- a/.gitignore +++ b/.gitignore @@ -3,3 +3,5 @@ .idea *version-info.json .DS_Store +site/ +.git-commit diff --git a/ArchiveBuildConfig.yaml b/ArchiveBuildConfig.yaml index ba146715d..d7a4de238 100644 --- a/ArchiveBuildConfig.yaml +++ b/ArchiveBuildConfig.yaml @@ -14,6 +14,7 @@ dependencies: - src: Makefile - src: eks-worker-al2.json - src: eks-worker-al2-variables.json + - src: .git-commit archive: name: amazon-eks-ami.tar.gz type: tgz diff --git a/CHANGELOG.md b/CHANGELOG.md index 9532c7419..a3047068a 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,5 +1,785 @@ # Changelog + + +# AMI Release v20231106 +## What's Changed +* Add new i4i sizes to eni-max-pods.txt by @github-actions in https://github.com/awslabs/amazon-eks-ami/pull/1495 +* Set nerdctl default namespace to k8s.io by @reegnz in https://github.com/awslabs/amazon-eks-ami/pull/1488 +* Skip installing amazon-ssm-agent if already present by @pjaudiomv in https://github.com/awslabs/amazon-eks-ami/pull/1501 + +## New Contributors +* @pjaudiomv made their first contribution in https://github.com/awslabs/amazon-eks-ami/pull/1501 + +**Full Changelog**: 
https://github.com/awslabs/amazon-eks-ami/compare/v20231027...v20231106 + +--- + +

+## AMI Details
+### Kubernetes 1.28
+
+| AMI names | Release version | Included artifacts |
+| --- | --- | --- |
+| amazon-eks-node-1.28-v20231106 | 1.28.3-20231106 | s3://amazon-eks/1.28.3/2023-11-02/ |
+| amazon-eks-gpu-node-1.28-v20231106 | | |
+| amazon-eks-arm64-node-1.28-v20231106 | | |
+
+| Package | Version |
+| --- | --- |
+| amazon-ssm-agent | 3.2.1705.0-1 |
+| containerd | 1.6.19-1.amzn2.0.5 |
+| cuda | 12.2.0-1 |
+| kernel | 5.10.198-187.748.amzn2 |
+| nvidia-driver-latest-dkms | 535.54.03-1.el7 |
+| runc | 1.1.7-4.amzn2 |
+### Kubernetes 1.27
+
+| AMI names | Release version | Included artifacts |
+| --- | --- | --- |
+| amazon-eks-node-1.27-v20231106 | 1.27.7-20231106 | s3://amazon-eks/1.27.7/2023-11-02/ |
+| amazon-eks-gpu-node-1.27-v20231106 | | |
+| amazon-eks-arm64-node-1.27-v20231106 | | |
+
+| Package | Version |
+| --- | --- |
+| amazon-ssm-agent | 3.2.1705.0-1 |
+| containerd | 1.6.19-1.amzn2.0.5 |
+| cuda | 11.4.0-1 |
+| kernel | 5.10.198-187.748.amzn2 |
+| nvidia-driver-latest-dkms | 470.182.03-1.el7 |
+| runc | 1.1.7-4.amzn2 |
+### Kubernetes 1.26
+
+| AMI names | Release version | Included artifacts |
+| --- | --- | --- |
+| amazon-eks-node-1.26-v20231106 | 1.26.10-20231106 | s3://amazon-eks/1.26.10/2023-11-02/ |
+| amazon-eks-gpu-node-1.26-v20231106 | | |
+| amazon-eks-arm64-node-1.26-v20231106 | | |
+
+| Package | Version |
+| --- | --- |
+| amazon-ssm-agent | 3.2.1705.0-1 |
+| containerd | 1.6.19-1.amzn2.0.5 |
+| cuda | 11.4.0-1 |
+| kernel | 5.10.198-187.748.amzn2 |
+| nvidia-driver-latest-dkms | 470.182.03-1.el7 |
+| runc | 1.1.7-4.amzn2 |
+### Kubernetes 1.25
+
+| AMI names | Release version | Included artifacts |
+| --- | --- | --- |
+| amazon-eks-node-1.25-v20231106 | 1.25.15-20231106 | s3://amazon-eks/1.25.15/2023-11-02/ |
+| amazon-eks-gpu-node-1.25-v20231106 | | |
+| amazon-eks-arm64-node-1.25-v20231106 | | |
+
+| Package | Version |
+| --- | --- |
+| amazon-ssm-agent | 3.2.1705.0-1 |
+| containerd | 1.6.19-1.amzn2.0.5 |
+| cuda | 11.4.0-1 |
+| kernel | 5.10.198-187.748.amzn2 |
+| nvidia-driver-latest-dkms | 470.182.03-1.el7 |
+| runc | 1.1.7-4.amzn2 |
+### Kubernetes 1.24
+
+| AMI names | Release version | Included artifacts |
+| --- | --- | --- |
+| amazon-eks-node-1.24-v20231106 | 1.24.17-20231106 | s3://amazon-eks/1.24.17/2023-11-02/ |
+| amazon-eks-gpu-node-1.24-v20231106 | | |
+| amazon-eks-arm64-node-1.24-v20231106 | | |
+
+| Package | Version |
+| --- | --- |
+| amazon-ssm-agent | 3.2.1705.0-1 |
+| containerd | 1.6.19-1.amzn2.0.5 |
+| cuda | 11.4.0-1 |
+| docker | 20.10.23-1.amzn2.0.1 |
+| kernel | 5.10.198-187.748.amzn2 |
+| nvidia-driver-latest-dkms | 470.182.03-1.el7 |
+| runc | 1.1.7-4.amzn2 |
+### Kubernetes 1.23
+
+| AMI names | Release version | Included artifacts |
+| --- | --- | --- |
+| amazon-eks-node-1.23-v20231106 | 1.23.17-20231106 | s3://amazon-eks/1.23.17/2023-11-02/ |
+| amazon-eks-gpu-node-1.23-v20231106 | | |
+| amazon-eks-arm64-node-1.23-v20231106 | | |
+
+| Package | Version |
+| --- | --- |
+| amazon-ssm-agent | 3.2.1705.0-1 |
+| containerd | 1.6.19-1.amzn2.0.5 |
+| cuda | 11.4.0-1 |
+| docker | 20.10.23-1.amzn2.0.1 |
+| kernel | 5.4.258-171.360.amzn2 |
+| nvidia-driver-latest-dkms | 470.182.03-1.el7 |
+| runc | 1.1.7-4.amzn2 |
+
+
+> **Note**
+> A recent change in the Linux kernel caused the EFA and NVIDIA drivers to be incompatible. More information is available in #1494.
+> To prevent unexpected failures, the kernel in the GPU AMI will remain at the following versions until we have determined a solution:
+> - Kubernetes 1.27 and below: `5.4.254-170.358.amzn2`
+> - Kubernetes 1.28 and above: `5.10.192-183.736.amzn2`
+
+---
+
+### AMI Release v20231027
+* amazon-eks-gpu-node-1.28-v20231027
+* amazon-eks-gpu-node-1.27-v20231027
+* amazon-eks-gpu-node-1.26-v20231027
+* amazon-eks-gpu-node-1.25-v20231027
+* amazon-eks-gpu-node-1.24-v20231027
+* amazon-eks-gpu-node-1.23-v20231027
+* amazon-eks-arm64-node-1.28-v20231027
+* amazon-eks-arm64-node-1.27-v20231027
+* amazon-eks-arm64-node-1.26-v20231027
+* amazon-eks-arm64-node-1.25-v20231027
+* amazon-eks-arm64-node-1.24-v20231027
+* amazon-eks-arm64-node-1.23-v20231027
+* amazon-eks-node-1.28-v20231027
+* amazon-eks-node-1.27-v20231027
+* amazon-eks-node-1.26-v20231027
+* amazon-eks-node-1.25-v20231027
+* amazon-eks-node-1.24-v20231027
+* amazon-eks-node-1.23-v20231027
+
+[Release versions](https://docs.aws.amazon.com/eks/latest/userguide/eks-linux-ami-versions.html) for these AMIs:
+* `1.28.2-20231027`
+* `1.27.6-20231027`
+* `1.26.9-20231027`
+* `1.25.14-20231027`
+* `1.24.17-20231027`
+* `1.23.17-20231027`
+
+Binaries used to build these AMIs are published:
+* s3://amazon-eks/1.28.2/2023-10-17/
+* s3://amazon-eks/1.27.6/2023-10-17/
+* s3://amazon-eks/1.26.9/2023-10-17/
+* s3://amazon-eks/1.25.14/2023-10-17/
+* s3://amazon-eks/1.24.17/2023-10-17/
+* s3://amazon-eks/1.23.17/2023-10-17/
+
+AMI details:
+* `kernel`:
+  * Kubernetes 1.23 and below: 5.4.257-170.359.amzn2
+  * Kubernetes 1.24 and above: 5.10.197-186.748.amzn2
+  * ⚠️ **Note: A recent change in the Linux kernel caused the EFA and NVIDIA drivers to be incompatible.** More information is available in https://github.com/awslabs/amazon-eks-ami/issues/1494. To prevent unexpected failures, the kernel in the GPU AMI will remain at the following versions until we have determined a solution:
+    * Kubernetes 1.27 and below: 5.4.254-170.358.amzn2
+    * Kubernetes 1.28 and above: 5.10.192-183.736.amzn2
+* `dockerd`: 20.10.23-1.amzn2.0.1
+  * **Note** that Docker is not installed on AMIs with Kubernetes 1.25+.
+* `containerd`: 1.6.19-1.amzn2.0.5
+* `runc`: 1.1.7-4.amzn2
+* `cuda`: 12.2.0-1
+* `nvidia-container-runtime-hook`: 1.4.0-1.amzn2
+* `amazon-ssm-agent`: 3.2.1705.0-1
+
+Notable changes:
+- Add optional FIPS support ([#1458](https://github.com/awslabs/amazon-eks-ami/pull/1458))
+- Fix region in cached image names ([#1461](https://github.com/awslabs/amazon-eks-ami/pull/1461))
+- Update curl for [ALAS-2023-2287](https://alas.aws.amazon.com/AL2/ALAS-2023-2287.html)
+- Update kernel for [ALASKERNEL-5.10-2023-039](https://alas.aws.amazon.com/AL2/ALASKERNEL-5.10-2023-039.html)
+
+Minor changes:
+- Add r7i to eni-max-pods.txt ([#1473](https://github.com/awslabs/amazon-eks-ami/pull/1473))
+- Correctly tag cached images for us-gov-west-1 FIPS endpoint ([#1476](https://github.com/awslabs/amazon-eks-ami/pull/1476))
+- Add new i4i sizes to eni-max-pods.txt ([#1495](https://github.com/awslabs/amazon-eks-ami/pull/1495))
+
+### AMI Release v20231002
+* amazon-eks-gpu-node-1.28-v20231002
+* amazon-eks-gpu-node-1.27-v20231002
+* amazon-eks-gpu-node-1.26-v20231002
+* amazon-eks-gpu-node-1.25-v20231002
+* amazon-eks-gpu-node-1.24-v20231002
+* amazon-eks-gpu-node-1.23-v20231002
+* amazon-eks-arm64-node-1.28-v20231002
+* amazon-eks-arm64-node-1.27-v20231002
+* amazon-eks-arm64-node-1.26-v20231002
+* amazon-eks-arm64-node-1.25-v20231002
+* amazon-eks-arm64-node-1.24-v20231002
+* amazon-eks-arm64-node-1.23-v20231002
+* amazon-eks-node-1.28-v20231002
+* amazon-eks-node-1.27-v20231002
+* amazon-eks-node-1.26-v20231002
+* amazon-eks-node-1.25-v20231002
+* amazon-eks-node-1.24-v20231002
+* amazon-eks-node-1.23-v20231002
+
+[Release versions](https://docs.aws.amazon.com/eks/latest/userguide/eks-linux-ami-versions.html) for these AMIs:
+* `1.28.1-20231002`
+* `1.27.5-20231002`
+* `1.26.8-20231002`
+* `1.25.13-20231002`
+* `1.24.17-20231002`
+* `1.23.17-20231002`
+
+Binaries used to build these AMIs are published:
+* s3://amazon-eks/1.28.1/20230914/
+* s3://amazon-eks/1.27.5/20230914/
+* s3://amazon-eks/1.26.8/20230914/
+* s3://amazon-eks/1.25.13/20230914/
+* s3://amazon-eks/1.24.17/20230914/
+* s3://amazon-eks/1.23.17/20230914/
+
+AMI details:
+* `kernel`:
+  * Kubernetes 1.23 and below: 5.4.254-170.358.amzn2
+  * Kubernetes 1.24 and above: 5.10.192-183.736.amzn2
+  * **Note** that the GPU AMI on Kubernetes 1.27 and below will continue to use kernel-5.4 as we work to address a [compatibility issue](https://github.com/awslabs/amazon-eks-ami/issues/1222) with `nvidia-driver-latest-dkms`.
+* `dockerd`: 20.10.23-1.amzn2.0.1
+  * **Note** that Docker is not installed on AMIs with Kubernetes 1.25+.
+* `containerd`: 1.6.19-1.amzn2.0.3
+* `runc`: 1.1.7-3.amzn2
+* `cuda`: 12.2.0-1
+* `nvidia-container-runtime-hook`: 1.4.0-1.amzn2
+* `amazon-ssm-agent`: 3.2.1630.0-1
+
+Notable changes:
+ - SSM agent upgraded to `3.2.1630.0-1`
+ - Update `libssh2` for [ALAS-2023-2257](https://alas.aws.amazon.com/AL2/ALAS-2023-2257.html)
+
+### AMI Release v20230919
+* amazon-eks-gpu-node-1.28-v20230919
+* amazon-eks-gpu-node-1.27-v20230919
+* amazon-eks-gpu-node-1.26-v20230919
+* amazon-eks-gpu-node-1.25-v20230919
+* amazon-eks-gpu-node-1.24-v20230919
+* amazon-eks-gpu-node-1.23-v20230919
+* amazon-eks-arm64-node-1.28-v20230919
+* amazon-eks-arm64-node-1.27-v20230919
+* amazon-eks-arm64-node-1.26-v20230919
+* amazon-eks-arm64-node-1.25-v20230919
+* amazon-eks-arm64-node-1.24-v20230919
+* amazon-eks-arm64-node-1.23-v20230919
+* amazon-eks-node-1.28-v20230919
+* amazon-eks-node-1.27-v20230919
+* amazon-eks-node-1.26-v20230919
+* amazon-eks-node-1.25-v20230919
+* amazon-eks-node-1.24-v20230919
+* amazon-eks-node-1.23-v20230919
+
+[Release versions](https://docs.aws.amazon.com/eks/latest/userguide/eks-linux-ami-versions.html) for these AMIs:
+* `1.28.1-20230919`
+* `1.27.5-20230919`
+* `1.26.8-20230919`
+* `1.25.13-20230919`
+* `1.24.17-20230919`
+* `1.23.17-20230919`
+
+Binaries used to build these AMIs are published:
+* s3://amazon-eks/1.28.1/20230914/
+* s3://amazon-eks/1.27.5/20230914/
+* s3://amazon-eks/1.26.8/20230914/
+* s3://amazon-eks/1.25.13/20230914/
+* s3://amazon-eks/1.24.17/20230914/
+* s3://amazon-eks/1.23.17/20230914/
+
+AMI details:
+* `kernel`:
+  * Kubernetes 1.23 and below: 5.4.254-170.358.amzn2
+  * Kubernetes 1.24 and above: 5.10.192-183.736.amzn2
+  * **Note** that the GPU AMI on Kubernetes 1.27 and below will continue to use kernel-5.4 due to a [compatibility issue](https://github.com/awslabs/amazon-eks-ami/issues/1222) with `nvidia-driver-latest-dkms`.
+* `dockerd`: 20.10.23-1.amzn2.0.1
+  * **Note** that Docker is not installed on AMIs with Kubernetes 1.25+.
+* `containerd`: 1.6.19-1.amzn2.0.3
+* `runc`: 1.1.7-3.amzn2
+* `cuda`: 12.2.0-1
+* `nvidia-container-runtime-hook`: 1.4.0-1.amzn2
+* `amazon-ssm-agent`: 3.2.1542.0-1
+
+Notable changes:
+ - kernel-5.10 updated to address:
+   - [ALAS2KERNEL-5.10-2023-039](https://alas.aws.amazon.com/AL2/ALASKERNEL-5.10-2023-039.html)
+ - Add support for Kubernetes 1.28 ([#1431](https://github.com/awslabs/amazon-eks-ami/pull/1431))
+ - GPU AMI:
+   - Released with [Neuron version 2.14.0](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/release-notes/index.html#neuron-2-14-0-09-15-2023)
+   - GPU AMIs on Kubernetes 1.28 and above:
+     - Upgraded `kernel` to 5.10
+     - Upgraded `cuda` version to 12.2
+     - Upgraded Nvidia driver to 535.54.03-1
+     - [Installed EFA version 1.26.1](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/efa-start.html#efa-start-enable)
+     - Limited deeper [sleep states](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/processor_state_control.html)
+
+### AMI Release v20230825
+* amazon-eks-gpu-node-1.27-v20230825
+* amazon-eks-gpu-node-1.26-v20230825
+* amazon-eks-gpu-node-1.25-v20230825
+* amazon-eks-gpu-node-1.24-v20230825
+* amazon-eks-gpu-node-1.23-v20230825
+* amazon-eks-arm64-node-1.27-v20230825
+* amazon-eks-arm64-node-1.26-v20230825
+* amazon-eks-arm64-node-1.25-v20230825
+* amazon-eks-arm64-node-1.24-v20230825
+* amazon-eks-arm64-node-1.23-v20230825
+* amazon-eks-node-1.27-v20230825
+* amazon-eks-node-1.26-v20230825
+* amazon-eks-node-1.25-v20230825
+* amazon-eks-node-1.24-v20230825
+* amazon-eks-node-1.23-v20230825
+
+[Release versions](https://docs.aws.amazon.com/eks/latest/userguide/eks-linux-ami-versions.html) for these AMIs:
+* `1.27.4-20230825`
+* `1.26.7-20230825`
+* `1.25.12-20230825`
+* `1.24.16-20230825`
+* `1.23.17-20230825`
+
+Binaries used to build these AMIs are published:
+* s3://amazon-eks/1.27.4/2023-08-16/
+* s3://amazon-eks/1.26.7/2023-08-16/
+* s3://amazon-eks/1.25.12/2023-08-16/
+* s3://amazon-eks/1.24.16/2023-08-16/
+* s3://amazon-eks/1.23.17/2023-08-16/
+
+AMI details:
+* `kernel`:
+  * Kubernetes 1.23 and below: 5.4.253-167.359.amzn2
+  * Kubernetes 1.24 and above: 5.10.186-179.751.amzn2
+  * **Note** that the GPU AMI will continue to use kernel-5.4 as we work to address a [compatibility issue](https://github.com/awslabs/amazon-eks-ami/issues/1222) with `nvidia-driver-latest-dkms`.
+* `dockerd`: 20.10.23-1.amzn2.0.1
+  * **Note** that Docker is not installed on AMIs with Kubernetes 1.25+.
+* `containerd`: 1.6.19-1.amzn2.0.3
+* `runc`: 1.1.7-3.amzn2
+* `cuda`: 11.4.0-1
+* `nvidia-container-runtime-hook`: 1.4.0-1.amzn2
+* `amazon-ssm-agent`: 3.2.1478.0-1
+
+Notable changes:
+ - containerd updated to address:
+   - [ALAS2DOCKER-2023-029](https://alas.aws.amazon.com/AL2/ALASDOCKER-2023-029.html)
+ - runc updated to address:
+   - [ALAS2DOCKER-2023-028](https://alas.aws.amazon.com/AL2/ALASDOCKER-2023-028.html)
+ - Fetch new IMDS token for every request. ([#1395](https://github.com/awslabs/amazon-eks-ami/pull/1395))
+
+### AMI Release v20230816
+* amazon-eks-gpu-node-1.27-v20230816
+* amazon-eks-gpu-node-1.26-v20230816
+* amazon-eks-gpu-node-1.25-v20230816
+* amazon-eks-gpu-node-1.24-v20230816
+* amazon-eks-gpu-node-1.23-v20230816
+* amazon-eks-arm64-node-1.27-v20230816
+* amazon-eks-arm64-node-1.26-v20230816
+* amazon-eks-arm64-node-1.25-v20230816
+* amazon-eks-arm64-node-1.24-v20230816
+* amazon-eks-arm64-node-1.23-v20230816
+* amazon-eks-node-1.27-v20230816
+* amazon-eks-node-1.26-v20230816
+* amazon-eks-node-1.25-v20230816
+* amazon-eks-node-1.24-v20230816
+* amazon-eks-node-1.23-v20230816
+
+[Release versions](https://docs.aws.amazon.com/eks/latest/userguide/eks-linux-ami-versions.html) for these AMIs:
+* `1.27.3-20230816`
+* `1.26.6-20230816`
+* `1.25.11-20230816`
+* `1.24.15-20230816`
+* `1.23.17-20230816`
+
+Binaries used to build these AMIs are published:
+* s3://amazon-eks/1.27.3/2023-08-14/
+* s3://amazon-eks/1.26.6/2023-08-14/
+* s3://amazon-eks/1.25.11/2023-08-14/
+* s3://amazon-eks/1.24.15/2023-08-14/
+* s3://amazon-eks/1.23.17/2023-08-15/
+
+AMI details:
+* `kernel`:
+  * Kubernetes 1.23 and below: 5.4.250-166.369.amzn2
+  * Kubernetes 1.24 and above: 5.10.186-179.751.amzn2
+* `dockerd`: 20.10.23-1.amzn2.0.1
+  * **Note** that Docker is not installed on AMIs with Kubernetes 1.25+.
+* `containerd`: 1.6.19-1.amzn2.0.1
+* `runc`: 1.1.7-1.amzn2
+* `cuda`: 11.4.0-1
+* `nvidia-container-runtime-hook`: 1.4.0-1.amzn2
+* `amazon-ssm-agent`: 3.2.1377.0-1
+
+Notable changes:
+- Install latest runc `1.1.*` ([#1384](https://github.com/awslabs/amazon-eks-ami/pull/1384)).
+- Install latest amazon-ssm-agent from S3 ([#1370](https://github.com/awslabs/amazon-eks-ami/pull/1370)).
+- `kernel` updated to address:
+  - [ALASKERNEL-5.4-2023-050](https://alas.aws.amazon.com/AL2/ALASKERNEL-5.4-2023-050.html)
+  - [ALASKERNEL-5.10-2023-038](https://alas.aws.amazon.com/AL2/ALASKERNEL-5.10-2023-038.html)
+
+Other changes:
+- Do not set `KubeletCredentialProviders` feature flag for 1.28+ ([#1375](https://github.com/awslabs/amazon-eks-ami/pull/1375))
+- Cache IMDS tokens per-user ([#1386](https://github.com/awslabs/amazon-eks-ami/pull/1386))
+
+### AMI Release v20230728
+* amazon-eks-gpu-node-1.27-v20230728
+* amazon-eks-gpu-node-1.26-v20230728
+* amazon-eks-gpu-node-1.25-v20230728
+* amazon-eks-gpu-node-1.24-v20230728
+* amazon-eks-gpu-node-1.23-v20230728
+* amazon-eks-arm64-node-1.27-v20230728
+* amazon-eks-arm64-node-1.26-v20230728
+* amazon-eks-arm64-node-1.25-v20230728
+* amazon-eks-arm64-node-1.24-v20230728
+* amazon-eks-arm64-node-1.23-v20230728
+* amazon-eks-node-1.27-v20230728
+* amazon-eks-node-1.26-v20230728
+* amazon-eks-node-1.25-v20230728
+* amazon-eks-node-1.24-v20230728
+* amazon-eks-node-1.23-v20230728
+
+[Release versions](https://docs.aws.amazon.com/eks/latest/userguide/eks-linux-ami-versions.html) for these AMIs:
+* `1.27.3-20230728`
+* `1.26.6-20230728`
+* `1.25.11-20230728`
+* `1.24.15-20230728`
+* `1.23.17-20230728`
+
+Binaries used to build these AMIs are published:
+* s3://amazon-eks/1.27.3/2023-06-30/
+* s3://amazon-eks/1.26.6/2023-06-30/
+* s3://amazon-eks/1.25.11/2023-06-30/
+* s3://amazon-eks/1.24.15/2023-06-30/
+* s3://amazon-eks/1.23.17/2023-06-30/
+
+AMI details:
+* `kernel`:
+  * Kubernetes 1.23 and below: 5.4.249-163.359.amzn2
+  * Kubernetes 1.24 and above: 5.10.184-175.749.amzn2
+* `dockerd`: 20.10.23-1.amzn2.0.1
+  * **Note** that Docker is not installed on AMIs with Kubernetes 1.25+.
+* `containerd`: 1.6.19-1.amzn2.0.1
+* `runc`: 1.1.5-1.amzn2
+* `cuda`: 11.4.0-1
+* `nvidia-container-runtime-hook`: 1.4.0-1.amzn2
+* `amazon-ssm-agent`: 3.1.1732.0-1.amzn2
+
+Notable changes:
+- Kernel fix for `CVE-2023-3117` and `CVE-2023-35001` with new versions: [5.10 kernel](https://alas.aws.amazon.com/AL2/ALASKERNEL-5.10-2023-037.html) and [5.4 kernel](https://alas.aws.amazon.com/AL2/ALASKERNEL-5.4-2023-049.html)
+- Mount bpffs on all supported Kubernetes versions. ([#1349](https://github.com/awslabs/amazon-eks-ami/pull/1349))
+- Enable discard_unpacked_layers by default to clean up compressed image layers in containerd's content store. ([#1360](https://github.com/awslabs/amazon-eks-ami/pull/1360))
+
+### AMI Release v20230711
+* amazon-eks-gpu-node-1.27-v20230711
+* amazon-eks-gpu-node-1.26-v20230711
+* amazon-eks-gpu-node-1.25-v20230711
+* amazon-eks-gpu-node-1.24-v20230711
+* amazon-eks-gpu-node-1.23-v20230711
+* amazon-eks-arm64-node-1.27-v20230711
+* amazon-eks-arm64-node-1.26-v20230711
+* amazon-eks-arm64-node-1.25-v20230711
+* amazon-eks-arm64-node-1.24-v20230711
+* amazon-eks-arm64-node-1.23-v20230711
+* amazon-eks-node-1.27-v20230711
+* amazon-eks-node-1.26-v20230711
+* amazon-eks-node-1.25-v20230711
+* amazon-eks-node-1.24-v20230711
+* amazon-eks-node-1.23-v20230711
+
+[Release versions](https://docs.aws.amazon.com/eks/latest/userguide/eks-linux-ami-versions.html) for these AMIs:
+* `1.27.3-20230711`
+* `1.26.6-20230711`
+* `1.25.11-20230711`
+* `1.24.15-20230711`
+* `1.23.17-20230711`
+
+Binaries used to build these AMIs are published:
+* s3://amazon-eks/1.27.3/2023-06-30/
+* s3://amazon-eks/1.26.6/2023-06-30/
+* s3://amazon-eks/1.25.11/2023-06-30/
+* s3://amazon-eks/1.24.15/2023-06-30/
+* s3://amazon-eks/1.23.17/2023-06-30/
+
+AMI details:
+* `kernel`:
+  * Kubernetes 1.23 and below: 5.4.247-162.350.amzn2
+  * Kubernetes 1.24 and above: 5.10.184-175.731.amzn2
+* `dockerd`: 20.10.23-1.amzn2.0.1
+  * **Note** that Docker is not installed on AMIs with Kubernetes 1.25+.
+* `containerd`: 1.6.19-1.amzn2.0.1
+* `runc`: 1.1.5-1.amzn2
+* `cuda`: 11.4.0-1
+* `nvidia-container-runtime-hook`: 1.4.0-1.amzn2
+* `amazon-ssm-agent`: 3.1.1732.0-1.amzn2
+
+Notable changes:
+- Kubelet versions bumped for Kubernetes versions 1.23-1.27 to address a [bug](https://github.com/kubernetes/kubernetes/issues/116847#issuecomment-1552938714)
+- Source VPC CNI plugin version bumped from 0.8.0 to 1.2.0
+
+### AMI Release v20230703
+* amazon-eks-gpu-node-1.27-v20230703
+* amazon-eks-gpu-node-1.26-v20230703
+* amazon-eks-gpu-node-1.25-v20230703
+* amazon-eks-gpu-node-1.24-v20230703
+* amazon-eks-gpu-node-1.23-v20230703
+* amazon-eks-gpu-node-1.22-v20230703
+* amazon-eks-arm64-node-1.27-v20230703
+* amazon-eks-arm64-node-1.26-v20230703
+* amazon-eks-arm64-node-1.25-v20230703
+* amazon-eks-arm64-node-1.24-v20230703
+* amazon-eks-arm64-node-1.23-v20230703
+* amazon-eks-arm64-node-1.22-v20230703
+* amazon-eks-node-1.27-v20230703
+* amazon-eks-node-1.26-v20230703
+* amazon-eks-node-1.25-v20230703
+* amazon-eks-node-1.24-v20230703
+* amazon-eks-node-1.23-v20230703
+* amazon-eks-node-1.22-v20230703
+
+[Release versions](https://docs.aws.amazon.com/eks/latest/userguide/eks-linux-ami-versions.html) for these AMIs:
+* `1.27.1-20230703`
+* `1.26.4-20230703`
+* `1.25.9-20230703`
+* `1.24.13-20230703`
+* `1.23.17-20230703`
+* `1.22.17-20230703`
+
+Binaries used to build these AMIs are published:
+* s3://amazon-eks/1.27.1/2023-04-19/
+* s3://amazon-eks/1.26.4/2023-05-11/
+* s3://amazon-eks/1.25.9/2023-05-11/
+* s3://amazon-eks/1.24.13/2023-05-11/
+* s3://amazon-eks/1.23.17/2023-05-11/
+* s3://amazon-eks/1.22.17/2023-05-11/
+
+AMI details:
+* `kernel`:
+  * Kubernetes 1.23 and below: 5.4.247-162.350.amzn2
+  * Kubernetes 1.24 and above: 5.10.184-175.731.amzn2
+* `dockerd`: 20.10.23-1.amzn2.0.1
+  * **Note** that Docker is not installed on AMIs with Kubernetes 1.25+.
+* `containerd`: 1.6.19-1.amzn2.0.1
+* `runc`: 1.1.5-1.amzn2
+* `cuda`: 11.4.0-1
+* `nvidia-container-runtime-hook`: 1.4.0-1.amzn2
+* `amazon-ssm-agent`: 3.1.1732.0-1.amzn2
+
+Notable changes:
+- This is the last AMI release for Kubernetes 1.22
+- Update Kernel to 5.4.247-162.350.amzn2 to address [ALASKERNEL-5.4-2023-048](https://alas.aws.amazon.com/AL2/ALASKERNEL-5.4-2023-048.html), [CVE-2023-1206](https://alas.aws.amazon.com/cve/html/CVE-2023-1206.html)
+- Update Kernel to 5.10.184-175.731.amzn2 to address [ALASKERNEL-5.10-2023-035](https://alas.aws.amazon.com/AL2/ALASKERNEL-5.10-2023-035.html), [CVE-2023-1206](https://alas.aws.amazon.com/cve/html/CVE-2023-1206.html)
+- Use recommended clocksources ([#1328](https://github.com/awslabs/amazon-eks-ami/pull/1328))
+- Add configurable working directory ([#1231](https://github.com/awslabs/amazon-eks-ami/pull/1231))
+- Update eni-max-pods.txt ([#1330](https://github.com/awslabs/amazon-eks-ami/pull/1330))
+- Mount bpffs by default on 1.25+ ([#1320](https://github.com/awslabs/amazon-eks-ami/pull/1320))
+
 ### AMI Release v20230607
 * amazon-eks-gpu-node-1.27-v20230607
 * amazon-eks-gpu-node-1.26-v20230607
@@ -891,8 +1671,8 @@ AMI details:
 
 Notable changes:
 * Pin Kernel 5.4 to 5.4.209-116.367 to prevent nodes from going into Unready [#1072](https://github.com/awslabs/amazon-eks-ami/pull/1072)
-* Increase the kube-api-server QPS from 5/10 to 10/20 [#1030](https://github.com/awslabs/amazon-eks-ami/pull/1030)
-* Update docker and containerd for [ALASDOCKER-2022-021](https://alas.aws.amazon.com/AL2/ALASDOCKER-2022-021.html) [#1056](https://github.com/awslabs/amazon-eks-ami/pull/1056)
+* Increase the kube-api-server QPS from 5/10 to 10/20 [#1030](https://github.com/awslabs/amazon-eks-ami/pull/1030)
+* Update docker and containerd for [ALASDOCKER-2022-021](https://alas.aws.amazon.com/AL2/ALASDOCKER-2022-021.html) [#1056](https://github.com/awslabs/amazon-eks-ami/pull/1056)
 * runc version is updated to 1.1.3-1.amzn2.0.2 to include ALAS2DOCKER-2022-020 [#1055](https://github.com/awslabs/amazon-eks-ami/pull/1055)
 * Release AMI in me-central-1 with version 1.21, 1.22, 1.23. 1.20 is not supported in this region since it will be deprecated soon.
 * Fixes an issue with Docker daemon configuration on the GPU AMI (#351).
@@ -1052,9 +1832,9 @@ Binaries used to build these AMIs are published:
 
 AMI details:
 * kernel: 5.4.209-116.363.amzn2
-* dockerd: 20.10.17-1.amzn2
-* containerd: 1.6.6-1.amzn2
-* runc: 1.1.3-1.amzn2-1.amzn2
+* dockerd: 20.10.17-1.amzn2
+* containerd: 1.6.6-1.amzn2
+* runc: 1.1.3-1.amzn2-1.amzn2
 * cuda: 470.57.02-1
 * nvidia-container-runtime-hook: 1.4.0-1.amzn2
 * SSM agent: 3.1.1575.0-1.amzn2
@@ -1236,7 +2016,7 @@ AMI details:
 
 Notable changes:
 * Update kubelet binaries for 1.20
 * Support packer's ami_regions feature
-* Increase /var/log/messages limit to 100M
+* Increase /var/log/messages limit to 100M
 * Support local cluster in Outposts
 * Adding c6id, m6id, r6id to eni-max-pods.txt
 
@@ -2743,7 +3523,7 @@ Notable changes:
 - Fix Makefile indentation for 1.19 (#616)
 - Increase fs.inotify.max_user_instances to 8192 from the default of 128 (#614)
 - use dynamic lookup of docker gid (#622)
-- bump docker version to 19.03.13ce-1 (#624)
+- bump docker version to 19.03.13ce-1 (#624)
 
 ### AMI Release v20210208
 * amazon-eks-gpu-node-1.19-v20210208
 
@@ -2794,7 +3574,7 @@ Binaries used to build these AMIs are published :
 * s3://amazon-eks/1.15.12/2020-11-02/
 
 Notable changes :
-* ARM AMIs built with m6g.large instance type (#601)
+* ARM AMIs built with m6g.large instance type (#601)
 * Add Support for c6gn instance type (#597)
 * Patch for CVE-2021-3156 (https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-3156)
 
diff --git a/CHANGELOG_FLUENCE.md b/CHANGELOG_FLUENCE.md
index 6499c5eb2..d05224f81 100644
--- a/CHANGELOG_FLUENCE.md
+++ b/CHANGELOG_FLUENCE.md
@@ -1,5 +1,9 @@
 # Changelog
 
+### 2023-12-06
+
+* Upgrade repository with upstream repo: `awslabs/amazon-eks-ami`, with tag `v20231116`
+
 ### 2023-08-07
 
 * Upgrade repository with upstream repo: `awslabs/amazon-eks-ami`, with tag `v20230607`
 
diff --git a/Config b/Config
index 97041313e..42acc08d9 100644
--- a/Config
+++ b/Config
@@ -3,7 +3,7 @@
 # Copyright 2019 Amazon.com, Inc. or its affiliates.
 # SPDX-License-Identifier: Apache-2.0
 
-package.Amazon-eks-ami = {
+package.Amazon-eks-ami-mirror = {
     interfaces = (1.0);
 
     deploy = {
@@ -15,7 +15,7 @@ package.Amazon-eks-ami = {
         network-access = blocked;
     };
 
-    build-system = archivebuild;
+    build-system = archivebuild-wrapper;
     build-tools = {
         1.0 = {
             ArchiveBuild = 1.0;
diff --git a/Makefile b/Makefile
index d2b9d92f2..1a10456bb 100644
--- a/Makefile
+++ b/Makefile
@@ -26,14 +26,22 @@ ifeq ($(call vercmp,$(kubernetes_version),gteq,1.25.0), true)
 	ami_component_description ?= (k8s: {{ user `kubernetes_version` }}, containerd: {{ user `containerd_version` }})
 endif
 
+AMI_VERSION ?= v$(shell date '+%Y%m%d')
+AMI_VARIANT ?= amazon-eks
+ifneq (,$(findstring al2023, $(PACKER_TEMPLATE_FILE)))
+	AMI_VARIANT := $(AMI_VARIANT)-al2023
+endif
 arch ?= x86_64
 ifeq ($(arch), arm64)
 	instance_type ?= m6g.large
-	ami_name ?= amazon-eks-arm64-node-$(K8S_VERSION_MINOR)-v$(shell date +'%Y%m%d%H%M%S')
+	AMI_VARIANT := $(AMI_VARIANT)-arm64
 else
 	instance_type ?= m5.large
-	ami_name ?= amazon-eks-node-$(K8S_VERSION_MINOR)-v$(shell date +'%Y%m%d%H%M%S')
 endif
+ifeq ($(enable_fips), true)
+	AMI_VARIANT := $(AMI_VARIANT)-fips
+endif
+ami_name ?= $(AMI_VARIANT)-node-$(K8S_VERSION_MINOR)-$(AMI_VERSION)
 
 ifeq ($(aws_region), cn-northwest-1)
 	source_ami_owners ?= 141808717104
@@ -48,8 +56,12 @@
 T_GREEN := \e[0;32m
 T_YELLOW := \e[0;33m
 T_RESET := \e[0m
 
-.PHONY: latest
-latest: 1.27 ## Build EKS Optimized AL2 AMI with the latest supported version of Kubernetes
+# default to the latest supported Kubernetes version
+k8s=1.28
+
+.PHONY: build
+build: ## Build EKS Optimized AL2 AMI
+	$(MAKE) k8s $(shell hack/latest-binaries.sh $(k8s))
 
 # ensure that these flags are equivalent to the rules in the .editorconfig
 SHFMT_FLAGS := --list \
@@ -74,10 +86,17 @@ ifeq (, $(SHELLCHECK_COMMAND))
 endif
 
 SHELL_FILES := $(shell find $(MAKEFILE_DIR) -type f -name '*.sh')
 
+.PHONY: transform-al2-to-al2023
+transform-al2-to-al2023:
+	PACKER_TEMPLATE_FILE=$(PACKER_TEMPLATE_FILE) \
+	PACKER_DEFAULT_VARIABLE_FILE=$(PACKER_DEFAULT_VARIABLE_FILE) \
+	hack/transform-al2-to-al2023.sh
+
 .PHONY: lint
-lint: ## Check the source files for syntax and format issues
+lint: lint-docs ## Check the source files for syntax and format issues
 	$(SHFMT_COMMAND) $(SHFMT_FLAGS) --diff $(MAKEFILE_DIR)
 	$(SHELLCHECK_COMMAND) --format gcc --severity error $(SHELL_FILES)
+	hack/lint-space-errors.sh
 
 .PHONY: test
 test: ## run the test-harness
@@ -98,31 +117,33 @@ k8s: validate ## Build default K8s version of EKS Optimized AL2 AMI
 	@echo "$(T_GREEN)Building AMI for version $(T_YELLOW)$(kubernetes_version)$(T_GREEN) on $(T_YELLOW)$(arch)$(T_RESET)"
 	$(PACKER_BINARY) build -timestamp-ui -color=false $(PACKER_VAR_FLAGS) $(PACKER_TEMPLATE_FILE)
 
-# Build dates and versions taken from https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html
-
-.PHONY: 1.22
-1.22: ## Build EKS Optimized AL2 AMI - K8s 1.22
-	$(MAKE) k8s kubernetes_version=1.22.17 kubernetes_build_date=2023-05-11 pull_cni_from_github=true
-
 .PHONY: 1.23
 1.23: ## Build EKS Optimized AL2 AMI - K8s 1.23
-	$(MAKE) k8s kubernetes_version=1.23.17 kubernetes_build_date=2023-05-11 pull_cni_from_github=true
+	$(MAKE) k8s $(shell hack/latest-binaries.sh 1.23)
 
 .PHONY: 1.24
 1.24: ## Build EKS Optimized AL2 AMI - K8s 1.24
-	$(MAKE) k8s kubernetes_version=1.24.13 kubernetes_build_date=2023-05-11 pull_cni_from_github=true
+	$(MAKE) k8s $(shell hack/latest-binaries.sh 1.24)
 
 .PHONY: 1.25
1.25: ## Build EKS Optimized AL2 AMI - K8s 1.25
-	$(MAKE) k8s kubernetes_version=1.25.9 kubernetes_build_date=2023-05-11 pull_cni_from_github=true
+	$(MAKE) k8s $(shell hack/latest-binaries.sh 1.25)
 
 .PHONY: 1.26
 1.26: ## Build EKS Optimized AL2 AMI - K8s 1.26
-	$(MAKE) k8s kubernetes_version=1.26.4 kubernetes_build_date=2023-05-11 pull_cni_from_github=true
+	$(MAKE) k8s $(shell hack/latest-binaries.sh 1.26)
 
 .PHONY: 1.27
 1.27: ## Build EKS Optimized AL2 AMI - K8s 1.27
-	$(MAKE) k8s kubernetes_version=1.27.1 kubernetes_build_date=2023-04-19 pull_cni_from_github=true
+	$(MAKE) k8s $(shell hack/latest-binaries.sh 1.27)
+
+.PHONY: 1.28
+1.28: ## Build EKS Optimized AL2 AMI - K8s 1.28
+	$(MAKE) k8s $(shell hack/latest-binaries.sh 1.28)
+
+.PHONY: lint-docs
+lint-docs: ## Lint the docs
+	hack/lint-docs.sh
 
 .PHONY: clean
 clean:
diff --git a/README.md b/README.md
index 758fb9868..49eb62c26 100644
--- a/README.md
+++ b/README.md
@@ -44,7 +44,7 @@ The Makefile chooses a particular kubelet binary to use per Kubernetes version w
 
 ## 👩‍💻 Using the AMI
 
-The [AMI user guide](doc/USER_GUIDE.md) has details about the AMI's internals, and the [EKS user guide](https://docs.aws.amazon.com/eks/latest/userguide/launch-templates.html#launch-template-custom-ami) explains how to use a custom AMI in a managed node group.
+The [AMI user guide](https://awslabs.github.io/amazon-eks-ami/USER_GUIDE/) has details about the AMI's internals, and the [EKS user guide](https://docs.aws.amazon.com/eks/latest/userguide/launch-templates.html#launch-template-custom-ami) explains how to use a custom AMI in a managed node group.
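Several release notes in this patch track updates to `eni-max-pods.txt`. The per-instance-type limits in that file follow the standard ENI-based formula: max pods = ENIs × (IPv4 addresses per ENI − 1) + 2. A minimal sketch of that calculation — the ENI and IP-per-ENI figures below are illustrative values for two instance types, not fetched from the EC2 API as the real generator does:

```python
# Sketch of the formula behind the per-instance-type limits in
# eni-max-pods.txt: each ENI's primary IPv4 address is reserved for the
# node itself, and two extra slots cover host-networked pods.
def max_pods(num_enis: int, ipv4_per_eni: int) -> int:
    return num_enis * (ipv4_per_eni - 1) + 2

# Illustrative ENI limits (hard-coded here for the sketch).
limits = {"m5.large": (3, 10), "t3.micro": (2, 2)}

for instance_type, (enis, ips) in limits.items():
    print(instance_type, max_pods(enis, ips))
# m5.large -> 29 and t3.micro -> 4, matching the entries in eni-max-pods.txt
```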
## ๐Ÿ”’ Security diff --git a/build-tools/bin/archivebuild-wrapper b/build-tools/bin/archivebuild-wrapper new file mode 100755 index 000000000..ba86f6f0f --- /dev/null +++ b/build-tools/bin/archivebuild-wrapper @@ -0,0 +1,13 @@ +#!/usr/bin/env bash + +# This file is for Amazon internal build processes + +HEAD_COMMIT="${BRAZIL_PACKAGE_CHANGE_ID:-$(git rev-parse HEAD)}" + +if [ "${HEAD_COMMIT}" = "" ]; then + echo >&2 "could not determine HEAD commit" + exit 1 +fi + +echo "${HEAD_COMMIT}" > .git-commit +archivebuild diff --git a/doc/CHANGELOG.md b/doc/CHANGELOG.md new file mode 120000 index 000000000..04c99a55c --- /dev/null +++ b/doc/CHANGELOG.md @@ -0,0 +1 @@ +../CHANGELOG.md \ No newline at end of file diff --git a/doc/CODE_OF_CONDUCT.md b/doc/CODE_OF_CONDUCT.md index 3b6446687..5b627cfa6 100644 --- a/doc/CODE_OF_CONDUCT.md +++ b/doc/CODE_OF_CONDUCT.md @@ -1,4 +1,4 @@ ## Code of Conduct -This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct). -For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact +This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct). +For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact opensource-codeofconduct@amazon.com with any additional questions or comments. diff --git a/doc/CONTRIBUTING.md b/doc/CONTRIBUTING.md index 2d6946816..b7cdc25ab 100644 --- a/doc/CONTRIBUTING.md +++ b/doc/CONTRIBUTING.md @@ -1,9 +1,9 @@ # Contributing Guidelines -Thank you for your interest in contributing to our project. Whether it's a bug report, new feature, correction, or additional +Thank you for your interest in contributing to our project. Whether it's a bug report, new feature, correction, or additional documentation, we greatly value feedback and contributions from our community. 
-Please read through this document before submitting any issues or pull requests to ensure we have all the necessary +Please read through this document before submitting any issues or pull requests to ensure we have all the necessary information to effectively respond to your bug report or contribution. @@ -11,7 +11,7 @@ information to effectively respond to your bug report or contribution. We welcome you to use the GitHub issue tracker to report bugs or suggest features. -When filing an issue, please check [existing open](https://github.com/aws-samples/amazon-eks-ami/issues), or [recently closed](https://github.com/aws-samples/amazon-eks-ami/issues?utf8=%E2%9C%93&q=is%3Aissue%20is%3Aclosed%20), issues to make sure somebody else hasn't already +When filing an issue, please check [existing open](https://github.com/aws-samples/amazon-eks-ami/issues), or [recently closed](https://github.com/aws-samples/amazon-eks-ami/issues?utf8=%E2%9C%93&q=is%3Aissue%20is%3Aclosed%20), issues to make sure somebody else hasn't already reported the issue. Please try to include as much information as you can. Details like these are incredibly useful: * A reproducible test case or series of steps @@ -37,7 +37,7 @@ To send us a pull request, please: 6. Send us a pull request, answering any default questions in the pull request interface. 7. Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation. -GitHub provides additional document on [forking a repository](https://help.github.com/articles/fork-a-repo/) and +GitHub provides additional document on [forking a repository](https://help.github.com/articles/fork-a-repo/) and [creating a pull request](https://help.github.com/articles/creating-a-pull-request/). ### Testing Changes @@ -46,7 +46,7 @@ When submitting PRs, we want to verify that there are no regressions in the AMI **Test #1: Verify that the unit tests pass** -Please add a test case for your changes, if possible. 
See the [unit test README](test/README.md) for more information. These tests will be run automatically for every pull request. +Please add a test case for your changes, if possible. See the [unit test README](https://github.com/awslabs/amazon-eks-ami/tree/master/test#readme) for more information. These tests will be run automatically for every pull request. ``` make test @@ -131,12 +131,12 @@ The issue is discussed in [this StackExchange post](https://unix.stackexchange.c On OSX, running `brew install coreutils` resolves the issue. ## Finding contributions to work on -Looking at the existing issues is a great way to find something to contribute on. As our projects, by default, use the default GitHub issue labels ((enhancement/bug/duplicate/help wanted/invalid/question/wontfix), looking at any ['help wanted'](https://github.com/aws-samples/amazon-eks-ami/labels/help%20wanted) issues is a great place to start. +Looking at the existing issues is a great way to find something to contribute on. As our projects, by default, use the default GitHub issue labels ((enhancement/bug/duplicate/help wanted/invalid/question/wontfix), looking at any ['help wanted'](https://github.com/aws-samples/amazon-eks-ami/labels/help%20wanted) issues is a great place to start. ## Code of Conduct -This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct). -For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact +This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct). +For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact opensource-codeofconduct@amazon.com with any additional questions or comments. 
diff --git a/doc/README.md b/doc/README.md new file mode 120000 index 000000000..32d46ee88 --- /dev/null +++ b/doc/README.md @@ -0,0 +1 @@ +../README.md \ No newline at end of file diff --git a/doc/USER_GUIDE.md b/doc/USER_GUIDE.md index c8f79a5bf..f96c046ea 100644 --- a/doc/USER_GUIDE.md +++ b/doc/USER_GUIDE.md @@ -2,28 +2,16 @@ This document includes details about using the AMI template and the resulting AMIs. -1. [AMI template variables](#ami-template-variables) -1. [Building against other versions of Kubernetes binaries](#building-against-other-versions-of-kubernetes-binaries) -1. [Providing your own Kubernetes binaries](#providing-your-own-kubernetes-binaries) -1. [Container image caching](#container-image-caching) -1. [IAM permissions](#iam-permissions) -1. [Customizing kubelet config](#customizing-kubelet-config) -1. [AL2 and Linux kernel information](#al2-and-linux-kernel-information) -1. [Updating known instance types](#updating-known-instance-types) -1. [Version-locked packages](#version-locked-packages) -1. [Image credential provider plugins](#image-credential-provider-plugins) -1. [Ephemeral Storage](#ephemeral-storage) - --- ## AMI template variables -Default values for most variables are defined in [a default variable file](eks-worker-al2-variables.json). +Default values for most variables are defined in [a default variable file](https://github.com/awslabs/amazon-eks-ami/blob/master/eks-worker-al2-variables.json). Users have the following options for specifying their own values: 1. Provide a variable file with the `PACKER_VARIABLE_FILE` argument to `make`. Values in this file will override values in the default variable file. Your variable file does not need to include all possible variables, as it will be merged with the default variable file. -2. Pass a key-value pair for any template variable to `make`. These values will override any values that were specified with the first method. 
In the table below, these variables have a default value of "None".
+2. Pass a key-value pair for any template variable to `make`. These values will override any values that were specified with the first method. In the table below, these variables have a default value of *None*.
 
 > **Note**
 > Some variables (such as `arch` and `kubernetes_version`) do not have a sensible, static default, and are satisfied by the Makefile.
 
@@ -34,94 +22,113 @@ Users have the following options for specifying their own values:
 
 | Variable | Default value | Description |
 | - | - | - |
 | `additional_yum_repos` | `""` | |
-| `ami_component_description` | ```{{user `remote_folder`}}/worker``` | |
-| `ami_description` | ```{{user `remote_folder`}}/worker``` | |
-| `ami_name` | None | |
+| `ami_component_description` | ```(k8s: {{ user `kubernetes_version` }}, docker: {{ user `docker_version` }}, containerd: {{ user `containerd_version` }})``` | |
+| `ami_description` | ```EKS Kubernetes Worker AMI with AmazonLinux2 image``` | |
+| `ami_name` | *None* | |
 | `ami_regions` | `""` | |
 | `ami_users` | `""` | |
-| `arch` | None | |
+| `arch` | *None* | |
 | `associate_public_ip_address` | `""` | |
-| `aws_access_key_id` | ```{{user `remote_folder`}}/worker``` | |
-| `aws_region` | ```{{user `remote_folder`}}/worker``` | |
-| `aws_secret_access_key` | ```{{user `remote_folder`}}/worker``` | |
-| `aws_session_token` | ```{{user `remote_folder`}}/worker``` | |
-| `binary_bucket_name` | ```{{user `remote_folder`}}/worker``` | |
-| `binary_bucket_region` | ```{{user `remote_folder`}}/worker``` | |
-| `cache_container_images` | ```{{user `remote_folder`}}/worker``` | |
-| `cni_plugin_version` | ```{{user `remote_folder`}}/worker``` | |
-| `containerd_version` | ```{{user `remote_folder`}}/worker``` | |
-| `creator` | ```{{user `remote_folder`}}/worker``` | |
-| `docker_version` | ```{{user `remote_folder`}}/worker``` | |
-| `encrypted` | ```{{user `remote_folder`}}/worker``` | |
-| `instance_type` | None
| |
+| `aws_access_key_id` | ```{{env `AWS_ACCESS_KEY_ID`}}``` | |
+| `aws_region` | ```us-west-2``` | |
+| `aws_secret_access_key` | ```{{env `AWS_SECRET_ACCESS_KEY`}}``` | |
+| `aws_session_token` | ```{{env `AWS_SESSION_TOKEN`}}``` | |
+| `binary_bucket_name` | ```amazon-eks``` | |
+| `binary_bucket_region` | ```us-west-2``` | |
+| `cache_container_images` | ```false``` | |
+| `cni_plugin_version` | ```v1.2.0``` | |
+| `containerd_version` | ```1.6.*``` | |
+| `creator` | ```{{env `USER`}}``` | |
+| `docker_version` | ```20.10.*``` | |
+| `encrypted` | ```false``` | |
+| `enable_fips` | ```false``` | Install OpenSSL and enable FIPS-related kernel parameters |
+| `instance_type` | *None* | |
 | `kernel_version` | `""` | |
 | `kms_key_id` | `""` | |
-| `kubernetes_build_date` | None | |
-| `kubernetes_version` | None | |
-| `launch_block_device_mappings_volume_size` | ```{{user `remote_folder`}}/worker``` | |
-| `pause_container_version` | ```{{user `remote_folder`}}/worker``` | |
-| `pull_cni_from_github` | ```{{user `remote_folder`}}/worker``` | |
-| `remote_folder` | ```{{user `remote_folder`}}/worker``` | Directory path for shell provisioner scripts on the builder instance |
-| `runc_version` | ```{{user `remote_folder`}}/worker``` | |
+| `kubernetes_build_date` | *None* | |
+| `kubernetes_version` | *None* | |
+| `launch_block_device_mappings_volume_size` | ```4``` | |
+| `pause_container_version` | ```3.5``` | |
+| `pull_cni_from_github` | ```true``` | |
+| `remote_folder` | ```/tmp``` | Directory path for shell provisioner scripts on the builder instance |
+| `runc_version` | ```1.1.*``` | |
 | `security_group_id` | `""` | |
-| `sonobuoy_e2e_registry` | `""` | |
-| `source_ami_filter_name` | ```{{user `remote_folder`}}/worker``` | |
+| `source_ami_filter_name` | ```amzn2-ami-minimal-hvm-*``` | |
 | `source_ami_id` | `""` | |
-| `source_ami_owners` | ```{{user `remote_folder`}}/worker``` | |
+| `source_ami_owners` | ```137112412989``` | |
 | `ssh_interface` |
`""` | |
-| `ssh_username` | ```{{user `remote_folder`}}/worker``` | |
+| `ssh_username` | ```ec2-user``` | |
+| `ssm_agent_version` | ```latest``` | |
 | `subnet_id` | `""` | |
 | `temporary_security_group_source_cidrs` | `""` | |
-| `volume_type` | ```{{user `remote_folder`}}/worker``` | |
+| `volume_type` | ```gp2``` | |
 | `working_dir` | ```{{user `remote_folder`}}/worker``` | Directory path for ephemeral resources on the builder instance |
 
 ---
 
-## Building against other versions of Kubernetes binaries
-To build an Amazon EKS Worker AMI with other versions of Kubernetes that are not listed above run the following AWS Command
-Line Interface (AWS CLI) commands to obtain values for KUBERNETES_VERSION, KUBERNETES_BUILD_DATE, PLATFORM, ARCH from S3
+## Choosing Kubernetes binaries
+
+When building the AMI, binaries such as `kubelet`, `aws-iam-authenticator`, and `ecr-credential-provider` are installed.
+
+### Using the latest binaries
+
+Using the latest available binaries is recommended, as they may contain important fixes for bugs or security issues.
+The latest binaries can be discovered with the following script:
 ```bash
-#List of all avalable Kuberenets Versions:
-aws s3 ls s3://amazon-eks
-KUBERNETES_VERSION=1.23.9 # Chose a version and set the variable
+hack/latest-binaries.sh $KUBERNETES_MINOR_VERSION
+```
+This script will return the values for the binary-related AMI template variables, for example:
+```bash
+> hack/latest-binaries.sh 1.28
 
-#List of all builds for the specified Kubernetes Version:
-aws s3 ls s3://amazon-eks/$KUBERNETES_VERSION/
-KUBERNETES_BUILD_DATE=2022-07-27 # Chose a date and set the variable
+kubernetes_version=1.28.1 kubernetes_build_date=2023-10-01
+```
 
-#List of all platforms available for the selected Kubernetes Version and build date
-aws s3 ls s3://amazon-eks/$KUBERNETES_VERSION/$KUBERNETES_BUILD_DATE/bin/
-PLATFORM=linux # Chose a platform and set the variable
+### Using a specific version of the binaries
 
-#List of all architectures for the selected Kubernetes Version, build date and platform
-aws s3 ls s3://amazon-eks/$KUBERNETES_VERSION/$KUBERNETES_BUILD_DATE/bin/linux/
-ARCH=x86_64 #Chose an architecture and set the variable
+Use the following commands to obtain values for the binary-related AMI template variables:
+```bash
+# List Kubernetes versions
+aws s3 ls s3://amazon-eks
+
+# List build dates
+aws s3 ls s3://amazon-eks/1.23.9/
+
+# List platforms
+aws s3 ls s3://amazon-eks/1.23.9/2022-07-27/bin/
+
+# List architectures
+aws s3 ls s3://amazon-eks/1.23.9/2022-07-27/bin/linux/
+
+# List binaries
+aws s3 ls s3://amazon-eks/1.23.9/2022-07-27/bin/linux/x86_64/
 ```
-Run the following command to build an Amazon EKS Worker AMI based on the chosen parameters in the previous step
+
+To build using the example binaries above:
 ```bash
 make k8s \
- kubernetes_version=$KUBERNETES_VERSION \
- kubernetes_build_date=$KUBERNETES_BUILD_DATE \
- arch=$ARCH
+ kubernetes_version=1.23.9 \
+ kubernetes_build_date=2022-07-27 \
+ arch=x86_64
 ```
 
----
+### Providing your own binaries
 
-## 
Providing your own Kubernetes Binaries
 
+By default, binaries are downloaded from the public S3 bucket `amazon-eks` in `us-west-2`.
+You can instead provide your own version of Kubernetes binaries.
 
-By default, binaries are downloaded from the Amazon EKS public Amazon Simple Storage Service (Amazon S3)
-bucket amazon-eks in us-west-2. You can instead choose to provide your own version of Kubernetes binaries to be used. To use your own binaries
+To use your own binaries:
 
-1. Copy the binaries to your own S3 bucket using the AWS CLI. Here is an example that uses Kubelet binary
+1. Copy all of the necessary binaries to your own S3 bucket using the AWS CLI. For example:
 ```bash
- aws s3 cp kubelet s3://my-custom-bucket/kubernetes_version/kubernetes_build_date/bin/linux/arch/kubelet
+ aws s3 cp kubelet s3://$BUCKET/$KUBERNETES_VERSION/$KUBERNETES_BUILD_DATE/bin/linux/$ARCH/kubelet
 ```
-**Note**: Replace my-custom-bucket, amazon-eks, kubernetes_version, kubernetes_build_date, and arch with your values.
-**Important**: You must provide all the binaries listed in the default amazon-eks bucket for a specific kubernetes_version, kubernetes_build_date, and arch combination. These binaries must be accessible through AWS Identity and Access Management (IAM) credentials configured in the Install and configure HashiCorp Packer section.
+**Important**: You must provide all the binaries present in the default `amazon-eks` bucket for a specific `KUBERNETES_VERSION`, `KUBERNETES_BUILD_DATE`, and `ARCH` combination.
+These binaries must be accessible using the credentials on the Packer builder EC2 instance.
 
-2. Run the following command to start the build process to use your own Kubernetes binaries
+2.
Run the following command to start the build process using your own Kubernetes binaries:
 ```bash
 make k8s \
 binary_bucket_name=my-custom-bucket \
@@ -131,19 +138,11 @@ make k8s \
 ```
 **Note**: Confirm that the binary_bucket_name, binary_bucket_region, kubernetes_version, and kubernetes_build_date parameters match the path to your binaries in Amazon S3.
 
-The Makefile runs Packer with the `eks-worker-al2.json` build specification
-template and the [amazon-ebs](https://www.packer.io/docs/builders/amazon-ebs.html)
-builder. An instance is launched and the Packer [Shell
-Provisioner](https://www.packer.io/docs/provisioners/shell.html) runs the
-`install-worker.sh` script on the instance to install software and perform other
-necessary configuration tasks. Then, Packer creates an AMI from the instance
-and terminates the instance after the AMI is created.
-
 ---
 
 ## Container Image Caching
 
-Optionally, some container images can be cached during the AMI build process in order to reduce the latency of the node getting to a `Ready` state when launched.
+Optionally, some container images can be cached during the AMI build process to reduce the latency of the node reaching a `Ready` state when launched.
 
 To turn on container image caching:
 
@@ -160,7 +159,7 @@ When container image caching is enabled, the following images are cached:
 
 The account ID can be different depending on the region and partition you are building the AMI in. See [here](https://docs.aws.amazon.com/eks/latest/userguide/add-ons-images.html) for more details.
 
-Since the VPC CNI is not versioned with K8s itself, the latest version of the VPC CNI and the default version, based on the response from the EKS DescribeAddonVersions at the time of the AMI build, will be cached.
+Since the VPC CNI is not versioned with K8s itself, the latest version of the VPC CNI and the default version, based on the response from the EKS DescribeAddonVersions API at the time of the AMI build, will be cached.
The images listed above are also tagged with each region in the partition the AMI is built in, since images are often built in one region and copied to others within the same partition. Images that are available to pull from an ECR FIPS endpoint are also tagged as such (i.e. `602401143452.dkr.ecr-fips.us-east-1.amazonaws.com/eks/pause:3.5`).
 
@@ -310,7 +309,7 @@ If `kernel_version` is not set:
 - For Kubernetes 1.23 and below, `5.4` is used.
 - For Kubernetes 1.24 and above, `5.10` is used.
 
-The [upgrade_kernel.sh script](../scripts/upgrade_kernel.sh) contains the logic for updating and upgrading the kernel.
+The [upgrade_kernel.sh script](https://github.com/awslabs/amazon-eks-ami/blob/master/scripts/upgrade_kernel.sh) contains the logic for updating and upgrading the kernel.
 
 ---
 
@@ -378,7 +377,7 @@ For more information about image credential provider plugins, refer to the [Kube
 
 Some instance types launch with ephemeral NVMe instance storage (i3, i4i, c5d, c6id, etc). There are two main ways of utilizing this storage within Kubernetes: a single RAID-0 array for use by kubelet and containerd or mounting the individual disks for pod usage.
 
-The EKS Optimized AMI includes a utility script to configure ephemeral storage. The script can be invoked by passing the `--local-disks ` flag to the `/etc/eks/bootstrap.sh` script or the script can be invoked directly at `/bin/setup-local-disks`. All disks are formatted with an XFS file system.
+The EKS Optimized AMI includes a utility script to configure ephemeral storage. The script can be invoked by passing the `--local-disks ` flag to the `/etc/eks/bootstrap.sh` script, or it can be invoked directly at `/bin/setup-local-disks`. All disks are formatted with an XFS file system.
Below are details on the two disk setup options: diff --git a/eks-worker-al2-variables.json b/eks-worker-al2-variables.json index 1f30250c4..43b60748c 100644 --- a/eks-worker-al2-variables.json +++ b/eks-worker-al2-variables.json @@ -12,10 +12,11 @@ "binary_bucket_name": "amazon-eks", "binary_bucket_region": "us-west-2", "cache_container_images": "false", - "cni_plugin_version": "v0.8.6", + "cni_plugin_version": "v1.2.0", "containerd_version": "1.6.*", "creator": "{{env `USER`}}", - "docker_version": "20.10.23-1.amzn2.0.1", + "docker_version": "20.10.*", + "enable_fips": "false", "encrypted": "false", "kernel_version": "", "kms_key_id": "", @@ -23,14 +24,14 @@ "pause_container_version": "3.5", "pull_cni_from_github": "true", "remote_folder": "/tmp", - "runc_version": "1.1.5-1.amzn2", + "runc_version": "1.1.*", "security_group_id": "", - "sonobuoy_e2e_registry": "", "source_ami_filter_name": "amzn2-ami-minimal-hvm-*", "source_ami_id": "", "source_ami_owners": "137112412989", "ssh_interface": "", "ssh_username": "ec2-user", + "ssm_agent_version": "latest", "subnet_id": "", "temporary_security_group_source_cidrs": "", "volume_type": "gp2", diff --git a/eks-worker-al2.json b/eks-worker-al2.json index 0d74b6f34..c301c1eca 100644 --- a/eks-worker-al2.json +++ b/eks-worker-al2.json @@ -21,6 +21,7 @@ "creator": null, "docker_version": null, "encrypted": null, + "enable_fips": null, "instance_type": null, "kernel_version": null, "kms_key_id": null, @@ -32,12 +33,12 @@ "remote_folder": null, "runc_version": null, "security_group_id": null, - "sonobuoy_e2e_registry": null, "source_ami_filter_name": null, "source_ami_id": null, "source_ami_owners": null, "ssh_interface": null, "ssh_username": null, + "ssm_agent_version": null, "subnet_id": null, "temporary_security_group_source_cidrs": null, "volume_type": null, @@ -106,15 +107,20 @@ "docker_version": "{{ user `docker_version`}}", "containerd_version": "{{ user `containerd_version`}}", "kubernetes": "{{ user 
`kubernetes_version`}}/{{ user `kubernetes_build_date` }}/bin/linux/{{ user `arch` }}", - "cni_plugin_version": "{{ user `cni_plugin_version`}}" + "cni_plugin_version": "{{ user `cni_plugin_version`}}", + "ssm_agent_version": "{{ user `ssm_agent_version`}}" }, "ami_name": "{{user `ami_name`}}", - "ami_description": "{{ user `ami_description` }}, {{ user `ami_component_description` }}" + "ami_description": "{{ user `ami_description` }}, {{ user `ami_component_description` }}", + "metadata_options": { + "http_tokens": "required" + } } ], "provisioners": [ { "type": "shell", + "remote_folder": "{{ user `remote_folder`}}", "inline": [ "mkdir -p {{user `working_dir`}}", "mkdir -p {{user `working_dir`}}/log-collector-script" @@ -140,6 +146,7 @@ }, { "type": "shell", + "remote_folder": "{{ user `remote_folder`}}", "inline": [ "sudo chmod -R a+x {{user `working_dir`}}/bin/", "sudo mv {{user `working_dir`}}/bin/* /usr/bin/" @@ -148,14 +155,27 @@ { "type": "shell", "remote_folder": "{{ user `remote_folder`}}", - "expect_disconnect": true, - "pause_after": "90s", "script": "{{template_dir}}/scripts/upgrade_kernel.sh", "environment_vars": [ "KUBERNETES_VERSION={{user `kubernetes_version`}}", "KERNEL_VERSION={{user `kernel_version`}}" ] }, + { + "type": "shell", + "remote_folder": "{{ user `remote_folder`}}", + "script": "{{template_dir}}/scripts/enable-fips.sh", + "environment_vars": [ + "ENABLE_FIPS={{user `enable_fips`}}" + ] + }, + { + "type": "shell", + "remote_folder": "{{ user `remote_folder`}}", + "inline": ["sudo reboot"], + "expect_disconnect": true, + "pause_after": "90s" + }, { "type": "shell", "remote_folder": "{{ user `remote_folder`}}", @@ -173,10 +193,10 @@ "AWS_ACCESS_KEY_ID={{user `aws_access_key_id`}}", "AWS_SECRET_ACCESS_KEY={{user `aws_secret_access_key`}}", "AWS_SESSION_TOKEN={{user `aws_session_token`}}", - "SONOBUOY_E2E_REGISTRY={{user `sonobuoy_e2e_registry`}}", "PAUSE_CONTAINER_VERSION={{user `pause_container_version`}}", "CACHE_CONTAINER_IMAGES={{user 
`cache_container_images`}}", - "WORKING_DIR={{user `working_dir`}}" + "WORKING_DIR={{user `working_dir`}}", + "SSM_AGENT_VERSION={{user `ssm_agent_version`}}" ] }, { @@ -204,7 +224,10 @@ "type": "shell", "remote_folder": "{{ user `remote_folder`}}", "script": "{{template_dir}}/scripts/generate-version-info.sh", - "execute_command": "chmod +x {{ .Path }}; {{ .Path }} {{user `working_dir`}}/version-info.json" + "execute_command": "chmod +x {{ .Path }}; {{ .Path }} {{user `working_dir`}}/version-info.json", + "environment_vars": [ + "CACHE_CONTAINER_IMAGES={{user `cache_container_images`}}" + ] }, { "type": "file", @@ -214,6 +237,7 @@ }, { "type": "shell", + "remote_folder": "{{ user `remote_folder`}}", "inline": [ "rm -rf {{user `working_dir`}}" ] diff --git a/files/bin/imds b/files/bin/imds index 2d23801ba..2e87c00d8 100755 --- a/files/bin/imds +++ b/files/bin/imds @@ -5,20 +5,13 @@ set -o pipefail set -o nounset if [ "$#" -ne 1 ]; then - echo >&2 "usage: imds API_PATH" + echo >&2 "usage: imds token|API_PATH" exit 1 fi -# leading slashes will be removed -API_PATH="${1#/}" - -CURRENT_TIME=$(date '+%s') - IMDS_DEBUG="${IMDS_DEBUG:-false}" # default ttl is 15 minutes IMDS_TOKEN_TTL_SECONDS=${IMDS_TOKEN_TTL_SECONDS:-900} -# max ttl is 6 hours, see: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configuring-instance-metadata-service.html -IMDS_MAX_TOKEN_TTL_SECONDS=${IMDS_MAX_TOKEN_TTL_SECONDS:-21600} IMDS_RETRIES=${IMDS_RETRIES:-10} IMDS_RETRY_DELAY_SECONDS=${IMDS_RETRY_DELAY_SECONDS:-1} IMDS_ENDPOINT=${IMDS_ENDPOINT:-169.254.169.254} @@ -49,44 +42,25 @@ function imdscurl() { } function get-token() { - local TOKEN_DIR=/tmp/imds-tokens - mkdir -p -m a+wrx $TOKEN_DIR - - # cleanup expired tokens - local DELETED_TOKENS=0 - for TOKEN_FILE in $(ls $TOKEN_DIR | awk '$0 < '$(($CURRENT_TIME - $IMDS_MAX_TOKEN_TTL_SECONDS))); do - rm $TOKEN_DIR/$TOKEN_FILE - DELETED_TOKENS=$(($DELETED_TOKENS + 1)) - done - if [ "$DELETED_TOKENS" -gt 0 ]; then - log "๐Ÿ—‘๏ธ Deleted 
$DELETED_TOKENS expired IMDS token(s)." - fi - - local TOKEN_FILE=$(ls $TOKEN_DIR | awk '$0 > '$CURRENT_TIME | sort -n -r | head -n 1) - - if [ "$TOKEN_FILE" = "" ]; then - TOKEN_FILE=$(($CURRENT_TIME + $IMDS_TOKEN_TTL_SECONDS)) - local TOKEN=$(imdscurl \ - -H "X-aws-ec2-metadata-token-ttl-seconds: $IMDS_TOKEN_TTL_SECONDS" \ - -X PUT \ - "http://$IMDS_ENDPOINT/latest/api/token") - echo "$TOKEN" > "$TOKEN_DIR/$TOKEN_FILE" - # make sure any user can utilize (and clean up) these tokens - chmod a+rwx $TOKEN_DIR/$TOKEN_FILE - log "๐Ÿ”‘ Retrieved a fresh IMDS token that will expire in $IMDS_TOKEN_TTL_SECONDS seconds." - else - log "โ„น๏ธ Using cached IMDS token that expires in $(($TOKEN_FILE - $CURRENT_TIME)) seconds." - fi - cat "$TOKEN_DIR/$TOKEN_FILE" + imdscurl \ + -H "X-aws-ec2-metadata-token-ttl-seconds: $IMDS_TOKEN_TTL_SECONDS" \ + -X PUT \ + "http://$IMDS_ENDPOINT/latest/api/token" } function get-with-token() { local API_PATH="$1" imdscurl \ - -H "X-aws-ec2-metadata-token: $(get-token)" \ + -H "X-aws-ec2-metadata-token: ${IMDS_TOKEN:-$(get-token)}" \ "http://$IMDS_ENDPOINT/$API_PATH" } log "โ„น๏ธ Talking to IMDS at $IMDS_ENDPOINT" -get-with-token "$API_PATH" +if [ "$1" = "token" ]; then + get-token +else + # leading slashes will be removed + API_PATH="${1#/}" + get-with-token "$API_PATH" +fi diff --git a/files/bin/private-dns-name b/files/bin/private-dns-name new file mode 100755 index 000000000..f8ce371d8 --- /dev/null +++ b/files/bin/private-dns-name @@ -0,0 +1,44 @@ +#!/usr/bin/env bash + +set -o errexit +set -o nounset +set -o xtrace + +# Retrieves the PrivateDnsName from EC2 for this instance, waiting until +# it is available if necessary (due to eventual consistency). 
+ +function log { + echo >&2 "$(date '+%Y-%m-%dT%H:%M:%S%z')" "[private-dns-name]" "$@" +} + +INSTANCE_ID=$(imds /latest/meta-data/instance-id) + +# the AWS CLI currently constructs the wrong endpoint URL on localzones (the availability zone group will be used instead of the parent region) +# more info: https://github.com/aws/aws-cli/issues/7043 +REGION=$(imds /latest/meta-data/placement/region) + +# by default, wait for 120 seconds +PRIVATE_DNS_NAME_MAX_ATTEMPTS=${PRIVATE_DNS_NAME_MAX_ATTEMPTS:-20} +PRIVATE_DNS_NAME_ATTEMPT_INTERVAL=${PRIVATE_DNS_NAME_ATTEMPT_INTERVAL:-6} + +log "will make up to ${PRIVATE_DNS_NAME_MAX_ATTEMPTS} attempt(s) every ${PRIVATE_DNS_NAME_ATTEMPT_INTERVAL} second(s)" + +ATTEMPT=0 +while true; do + PRIVATE_DNS_NAME=$(aws ec2 describe-instances --region $REGION --instance-ids $INSTANCE_ID | jq -r '.Reservations[].Instances[].PrivateDnsName') + if [ ! "${PRIVATE_DNS_NAME}" = "" ] || [ ${ATTEMPT} -ge ${PRIVATE_DNS_NAME_MAX_ATTEMPTS} ]; then + break + fi + ATTEMPT=$((ATTEMPT + 1)) + log "WARN: PrivateDnsName is not available, waiting for ${PRIVATE_DNS_NAME_ATTEMPT_INTERVAL} seconds..." + sleep ${PRIVATE_DNS_NAME_ATTEMPT_INTERVAL} +done + +if [ "${PRIVATE_DNS_NAME}" = "" ]; then + log "ERROR: failed to retrieve PrivateDnsName after ${ATTEMPT} attempts!" + exit 1 +else + log "INFO: retrieved PrivateDnsName: ${PRIVATE_DNS_NAME}" + echo "${PRIVATE_DNS_NAME}" + exit 0 +fi diff --git a/files/bootstrap.sh b/files/bootstrap.sh index 8937784bb..36f47d9c3 100755 --- a/files/bootstrap.sh +++ b/files/bootstrap.sh @@ -33,7 +33,7 @@ function print_help { echo "--ip-family Specify ip family of the cluster" echo "--kubelet-extra-args Extra arguments to add to the kubelet. Useful for adding labels or taints." 
echo "--local-disks Setup instance storage NVMe disks in raid0 or mount the individual disks for use by pods [mount | raid0]" - echo "--mount-bpf-fs Mount a bpffs at /sys/fs/bpf (default: true, for Kubernetes 1.25+; false otherwise)" + echo "--mount-bpf-fs Mount a bpffs at /sys/fs/bpf (default: true)" echo "--pause-container-account The AWS account (number) to pull the pause container from" echo "--pause-container-version The tag of the pause container" echo "--service-ipv6-cidr ipv6 cidr range of the cluster" @@ -175,6 +175,8 @@ set -- "${POSITIONAL[@]}" # restore positional parameters CLUSTER_NAME="$1" set -u +export IMDS_TOKEN=$(imds token) + KUBELET_VERSION=$(kubelet --version | grep -Eo '[0-9]\.[0-9]+\.[0-9]+') log "INFO: Using kubelet version $KUBELET_VERSION" @@ -220,15 +222,18 @@ ENABLE_LOCAL_OUTPOST="${ENABLE_LOCAL_OUTPOST:-}" CLUSTER_ID="${CLUSTER_ID:-}" LOCAL_DISKS="${LOCAL_DISKS:-}" +##allow --reserved-cpus options via kubelet arg directly. Disable default reserved cgroup option in such cases +USE_RESERVED_CGROUPS=true +if [[ ${KUBELET_EXTRA_ARGS} == *'--reserved-cpus'* ]]; then + USE_RESERVED_CGROUPS=false + log "INFO: --kubelet-extra-args includes --reserved-cpus, so kube/system-reserved cgroups will not be used." +fi + if [[ ! -z ${LOCAL_DISKS} ]]; then setup-local-disks "${LOCAL_DISKS}" fi -DEFAULT_MOUNT_BPF_FS="true" -if vercmp "$KUBELET_VERSION" lt "1.25.0"; then - DEFAULT_MOUNT_BPF_FS="false" -fi -MOUNT_BPF_FS="${MOUNT_BPF_FS:-$DEFAULT_MOUNT_BPF_FS}" +MOUNT_BPF_FS="${MOUNT_BPF_FS:-true}" # Helper function which calculates the amount of the given resource (either CPU or memory) # to reserve in a given resource range, specified by a start and end of the range and a percentage @@ -533,12 +538,7 @@ else # If the VPC has a custom `domain-name` in its DHCP options set, and the VPC has `enableDnsHostnames` set to `true`, # then /etc/hostname is not the same as EC2's PrivateDnsName. 
# The name of the Node object must be equal to EC2's PrivateDnsName for the aws-iam-authenticator to allow this kubelet to manage it. - INSTANCE_ID=$(imds /latest/meta-data/instance-id) - # the AWS CLI currently constructs the wrong endpoint URL on localzones (the availability zone group will be used instead of the parent region) - # more info: https://github.com/aws/aws-cli/issues/7043 - REGION=$(imds /latest/meta-data/placement/region) - PRIVATE_DNS_NAME=$(AWS_RETRY_MODE=standard AWS_MAX_ATTEMPTS=10 aws ec2 describe-instances --region $REGION --instance-ids $INSTANCE_ID --query 'Reservations[].Instances[].PrivateDnsName' --output text) - KUBELET_ARGS="$KUBELET_ARGS --hostname-override=$PRIVATE_DNS_NAME" + KUBELET_ARGS="$KUBELET_ARGS --hostname-override=$(private-dns-name)" fi KUBELET_ARGS="$KUBELET_ARGS --cloud-provider=$KUBELET_CLOUD_PROVIDER" @@ -557,9 +557,6 @@ if [[ "$CONTAINER_RUNTIME" = "containerd" ]]; then sudo mkdir -p /etc/containerd sudo mkdir -p /etc/cni/net.d - sudo mkdir -p /etc/systemd/system/containerd.service.d - printf '[Service]\nSlice=runtime.slice\n' | sudo tee /etc/systemd/system/containerd.service.d/00-runtime-slice.conf - if [[ -n "${CONTAINERD_CONFIG_FILE}" ]]; then sudo cp -v "${CONTAINERD_CONFIG_FILE}" /etc/eks/containerd/containerd-config.toml fi @@ -567,8 +564,11 @@ if [[ "$CONTAINER_RUNTIME" = "containerd" ]]; then sudo sed -i s,SANDBOX_IMAGE,$PAUSE_CONTAINER,g /etc/eks/containerd/containerd-config.toml echo "$(jq '.cgroupDriver="systemd"' "${KUBELET_CONFIG}")" > "${KUBELET_CONFIG}" - echo "$(jq '.systemReservedCgroup="/system"' "${KUBELET_CONFIG}")" > "${KUBELET_CONFIG}" - echo "$(jq '.kubeReservedCgroup="/runtime"' "${KUBELET_CONFIG}")" > "${KUBELET_CONFIG}" + ##allow --reserved-cpus options via kubelet arg directly. 
Disable default reserved cgroup option in such cases + if [[ "${USE_RESERVED_CGROUPS}" = true ]]; then + echo "$(jq '.systemReservedCgroup="/system"' "${KUBELET_CONFIG}")" > "${KUBELET_CONFIG}" + echo "$(jq '.kubeReservedCgroup="/runtime"' "${KUBELET_CONFIG}")" > "${KUBELET_CONFIG}" + fi # Check if the containerd config file is the same as the one used in the image build. # If different, then restart containerd w/ proper config @@ -655,6 +655,8 @@ if command -v nvidia-smi &> /dev/null; then nvidia-smi -ac 5001,1590 elif [[ $GPUNAME == *"M60"* ]]; then nvidia-smi -ac 2505,1177 + elif [[ $GPUNAME == *"H100"* ]]; then + nvidia-smi -ac 2619,1980 else echo "unsupported gpu" fi diff --git a/files/containerd-config.toml b/files/containerd-config.toml index 1cddeb2f6..42458568f 100644 --- a/files/containerd-config.toml +++ b/files/containerd-config.toml @@ -7,6 +7,7 @@ address = "/run/containerd/containerd.sock" [plugins."io.containerd.grpc.v1.cri".containerd] default_runtime_name = "runc" +discard_unpacked_layers = true [plugins."io.containerd.grpc.v1.cri"] sandbox_image = "SANDBOX_IMAGE" diff --git a/files/ecr-credential-provider-config.json b/files/ecr-credential-provider-config.json index 21581c4e9..6b251d69c 100644 --- a/files/ecr-credential-provider-config.json +++ b/files/ecr-credential-provider-config.json @@ -8,8 +8,8 @@ "*.dkr.ecr.*.amazonaws.com", "*.dkr.ecr.*.amazonaws.com.cn", "*.dkr.ecr-fips.*.amazonaws.com", - "*.dkr.ecr.us-iso-east-1.c2s.ic.gov", - "*.dkr.ecr.us-isob-east-1.sc2s.sgov.gov" + "*.dkr.ecr.*.c2s.ic.gov", + "*.dkr.ecr.*.sc2s.sgov.gov" ], "defaultCacheDuration": "12h", "apiVersion": "credentialprovider.kubelet.k8s.io/v1" diff --git a/files/eni-max-pods.txt b/files/eni-max-pods.txt index f82b87d9f..0d5e473f0 100644 --- a/files/eni-max-pods.txt +++ b/files/eni-max-pods.txt @@ -167,6 +167,18 @@ c6in.8xlarge 234 c6in.large 29 c6in.metal 345 c6in.xlarge 58 +c7a.12xlarge 234 +c7a.16xlarge 737 +c7a.24xlarge 737 +c7a.2xlarge 58 +c7a.32xlarge 737 
+c7a.48xlarge 737 +c7a.4xlarge 234 +c7a.8xlarge 234 +c7a.large 29 +c7a.medium 8 +c7a.metal-48xl 737 +c7a.xlarge 58 c7g.12xlarge 234 c7g.16xlarge 737 c7g.2xlarge 58 @@ -176,6 +188,14 @@ c7g.large 29 c7g.medium 8 c7g.metal 737 c7g.xlarge 58 +c7gd.12xlarge 234 +c7gd.16xlarge 737 +c7gd.2xlarge 58 +c7gd.4xlarge 234 +c7gd.8xlarge 234 +c7gd.large 29 +c7gd.medium 8 +c7gd.xlarge 58 c7gn.12xlarge 234 c7gn.16xlarge 737 c7gn.2xlarge 58 @@ -184,6 +204,17 @@ c7gn.8xlarge 234 c7gn.large 29 c7gn.medium 8 c7gn.xlarge 58 +c7i.12xlarge 234 +c7i.16xlarge 737 +c7i.24xlarge 737 +c7i.2xlarge 58 +c7i.48xlarge 737 +c7i.4xlarge 234 +c7i.8xlarge 234 +c7i.large 29 +c7i.metal-24xl 737 +c7i.metal-48xl 737 +c7i.xlarge 58 cr1.8xlarge 234 d2.2xlarge 58 d2.4xlarge 234 @@ -203,8 +234,6 @@ dl1.24xlarge 737 f1.16xlarge 394 f1.2xlarge 58 f1.4xlarge 234 -g2.2xlarge 58 -g2.8xlarge 234 g3.16xlarge 737 g3.4xlarge 234 g3.8xlarge 234 @@ -241,6 +270,10 @@ h1.4xlarge 234 h1.8xlarge 234 hpc6a.48xlarge 100 hpc6id.32xlarge 51 +hpc7a.12xlarge 100 +hpc7a.24xlarge 100 +hpc7a.48xlarge 100 +hpc7a.96xlarge 100 hpc7g.16xlarge 198 hpc7g.4xlarge 198 hpc7g.8xlarge 198 @@ -270,7 +303,9 @@ i4g.4xlarge 234 i4g.8xlarge 234 i4g.large 29 i4g.xlarge 58 +i4i.12xlarge 234 i4i.16xlarge 737 +i4i.24xlarge 437 i4i.2xlarge 58 i4i.32xlarge 737 i4i.4xlarge 234 @@ -443,6 +478,18 @@ m6in.8xlarge 234 m6in.large 29 m6in.metal 345 m6in.xlarge 58 +m7a.12xlarge 234 +m7a.16xlarge 737 +m7a.24xlarge 737 +m7a.2xlarge 58 +m7a.32xlarge 737 +m7a.48xlarge 737 +m7a.4xlarge 234 +m7a.8xlarge 234 +m7a.large 29 +m7a.medium 8 +m7a.metal-48xl 737 +m7a.xlarge 58 m7g.12xlarge 234 m7g.16xlarge 737 m7g.2xlarge 58 @@ -452,7 +499,33 @@ m7g.large 29 m7g.medium 8 m7g.metal 737 m7g.xlarge 58 +m7gd.12xlarge 234 +m7gd.16xlarge 737 +m7gd.2xlarge 58 +m7gd.4xlarge 234 +m7gd.8xlarge 234 +m7gd.large 29 +m7gd.medium 8 +m7gd.xlarge 58 +m7i-flex.2xlarge 58 +m7i-flex.4xlarge 234 +m7i-flex.8xlarge 234 +m7i-flex.large 29 +m7i-flex.xlarge 58 +m7i.12xlarge 234 +m7i.16xlarge 737 
+m7i.24xlarge 737 +m7i.2xlarge 58 +m7i.48xlarge 737 +m7i.4xlarge 234 +m7i.8xlarge 234 +m7i.large 29 +m7i.metal-24xl 737 +m7i.metal-48xl 737 +m7i.xlarge 58 mac1.metal 234 +mac2-m2.metal 234 +mac2-m2pro.metal 234 mac2.metal 234 p2.16xlarge 234 p2.8xlarge 234 @@ -463,6 +536,7 @@ p3.8xlarge 234 p3dn.24xlarge 737 p4d.24xlarge 737 p4de.24xlarge 737 +p5.48xlarge 100 r3.2xlarge 58 r3.4xlarge 234 r3.8xlarge 234 @@ -604,6 +678,18 @@ r6in.8xlarge 234 r6in.large 29 r6in.metal 345 r6in.xlarge 58 +r7a.12xlarge 234 +r7a.16xlarge 737 +r7a.24xlarge 737 +r7a.2xlarge 58 +r7a.32xlarge 737 +r7a.48xlarge 737 +r7a.4xlarge 234 +r7a.8xlarge 234 +r7a.large 29 +r7a.medium 8 +r7a.metal-48xl 737 +r7a.xlarge 58 r7g.12xlarge 234 r7g.16xlarge 737 r7g.2xlarge 58 @@ -613,6 +699,35 @@ r7g.large 29 r7g.medium 8 r7g.metal 737 r7g.xlarge 58 +r7gd.12xlarge 234 +r7gd.16xlarge 737 +r7gd.2xlarge 58 +r7gd.4xlarge 234 +r7gd.8xlarge 234 +r7gd.large 29 +r7gd.medium 8 +r7gd.xlarge 58 +r7i.12xlarge 234 +r7i.16xlarge 737 +r7i.24xlarge 737 +r7i.2xlarge 58 +r7i.48xlarge 737 +r7i.4xlarge 234 +r7i.8xlarge 234 +r7i.large 29 +r7i.metal-24xl 737 +r7i.metal-48xl 737 +r7i.xlarge 58 +r7iz.12xlarge 234 +r7iz.16xlarge 737 +r7iz.2xlarge 58 +r7iz.32xlarge 737 +r7iz.4xlarge 234 +r7iz.8xlarge 234 +r7iz.large 29 +r7iz.metal-16xl 737 +r7iz.metal-32xl 737 +r7iz.xlarge 58 t1.micro 4 t2.2xlarge 44 t2.large 35 diff --git a/files/get-ecr-uri.sh b/files/get-ecr-uri.sh index ba719ac06..a160cebcb 100755 --- a/files/get-ecr-uri.sh +++ b/files/get-ecr-uri.sh @@ -39,15 +39,15 @@ else af-south-1) acct="877085696533" ;; - eu-south-1) - acct="590381155156" - ;; ap-southeast-3) acct="296578399912" ;; me-central-1) acct="759879836304" ;; + eu-south-1) + acct="590381155156" + ;; eu-south-2) acct="455263428931" ;; @@ -63,10 +63,57 @@ else il-central-1) acct="066635153087" ;; + # This sections includes all commercial non-opt-in regions, which use + # the same account for ECR pause container images, but still have in-region + # registries. 
+ ap-northeast-1 | \ + ap-northeast-2 | \ + ap-northeast-3 | \ + ap-south-1 | \ + ap-southeast-1 | \ + ap-southeast-2 | \ + ca-central-1 | \ + eu-central-1 | \ + eu-north-1 | \ + eu-west-1 | \ + eu-west-2 | \ + eu-west-3 | \ + sa-east-1 | \ + us-east-1 | \ + us-east-2 | \ + us-west-1 | \ + us-west-2) + acct="602401143452" + ;; + # If the region is not mapped to an account, let's try to choose another region + # in that partition. + us-gov-*) + acct="013241004608" + region="us-gov-west-1" + ;; + cn-*) + acct="961992271922" + region="cn-northwest-1" + ;; + us-iso-*) + acct="725322719131" + region="us-iso-east-1" + ;; + us-isob-*) + acct="187977181151" + region="us-isob-east-1" + ;; *) acct="602401143452" + region="us-west-2" ;; - esac + esac # end region check +fi + +AWS_ECR_SUBDOMAIN="ecr" +# if FIPS is enabled on the machine, use the FIPS endpoint. +if [[ "$(sysctl -n crypto.fips_enabled)" == 1 ]]; then + AWS_ECR_SUBDOMAIN="ecr-fips" fi -echo "${acct}.dkr.ecr.${region}.${aws_domain}" +echo "${acct}.dkr.${AWS_ECR_SUBDOMAIN}.${region}.${aws_domain}" diff --git a/files/kubelet-config.json b/files/kubelet-config.json index 666350e2b..b78510c6a 100644 --- a/files/kubelet-config.json +++ b/files/kubelet-config.json @@ -27,8 +27,7 @@ "cgroupDriver": "cgroupfs", "cgroupRoot": "/", "featureGates": { - "RotateKubeletServerCertificate": true, - "KubeletCredentialProviders": true + "RotateKubeletServerCertificate": true }, "protectKernelDefaults": true, "serializeImagePulls": false, diff --git a/files/sonobuoy-e2e-registry-config b/files/sonobuoy-e2e-registry-config deleted file mode 100644 index be3813d86..000000000 --- a/files/sonobuoy-e2e-registry-config +++ /dev/null @@ -1,5 +0,0 @@ -dockerLibraryRegistry: SONOBUOY_E2E_REGISTRY/library -e2eRegistry: SONOBUOY_E2E_REGISTRY/kubernetes-e2e-test-images -gcRegistry: SONOBUOY_E2E_REGISTRY -googleContainerRegistry: SONOBUOY_E2E_REGISTRY/google-containers -sampleRegistry: SONOBUOY_E2E_REGISTRY/google-samples \ No newline at end of 
file diff --git a/hack/generate-template-variable-doc.py b/hack/generate-template-variable-doc.py index 35cdde476..3f08fcb7a 100755 --- a/hack/generate-template-variable-doc.py +++ b/hack/generate-template-variable-doc.py @@ -47,7 +47,9 @@ if val == "": val = f"`\"\"`" else: - val = f"```{default_val}```" + val = f"```{val}```" + else: + val = "*None*" description = "" if var in existing_descriptions: description = existing_descriptions[var] diff --git a/hack/latest-binaries.sh b/hack/latest-binaries.sh new file mode 100755 index 000000000..246fc8dd8 --- /dev/null +++ b/hack/latest-binaries.sh @@ -0,0 +1,26 @@ +#!/usr/bin/env bash + +set -o errexit +set -o pipefail +set -o nounset + +if [ "$#" -ne 1 ]; then + echo "usage: $0 KUBERNETES_MINOR_VERSION" + exit 1 +fi + +MINOR_VERSION="${1}" + +# retrieve the available "VERSION/BUILD_DATE" prefixes (e.g. "1.28.1/2023-09-14") +# from the binary object keys, sorted in descending semver order, and pick the first one +LATEST_BINARIES=$(aws s3api list-objects-v2 --bucket amazon-eks --prefix "${MINOR_VERSION}" --query 'Contents[*].[Key]' --output text | cut -d'/' -f-2 | sort -Vru | head -n1) + +if [ "${LATEST_BINARIES}" == "None" ]; then + echo >&2 "No binaries available for minor version: ${MINOR_VERSION}" + exit 1 +fi + +LATEST_VERSION=$(echo "${LATEST_BINARIES}" | cut -d'/' -f1) +LATEST_BUILD_DATE=$(echo "${LATEST_BINARIES}" | cut -d'/' -f2) + +echo "kubernetes_version=${LATEST_VERSION} kubernetes_build_date=${LATEST_BUILD_DATE}" diff --git a/hack/lint-docs.sh b/hack/lint-docs.sh new file mode 100755 index 000000000..24ef64720 --- /dev/null +++ b/hack/lint-docs.sh @@ -0,0 +1,10 @@ +#!/usr/bin/env bash + +set -o errexit +cd $(dirname $0) +./generate-template-variable-doc.py +if ! git diff --exit-code ../doc/USER_GUIDE.md; then + echo "ERROR: doc/USER_GUIDE.md is out of date. Please run hack/generate-template-variable-doc.py and commit the changes." 
+ exit 1 +fi +./mkdocs.sh build --strict diff --git a/hack/lint-space-errors.sh b/hack/lint-space-errors.sh new file mode 100755 index 000000000..6c0f84a73 --- /dev/null +++ b/hack/lint-space-errors.sh @@ -0,0 +1,8 @@ +#!/usr/bin/env bash + +cd $(dirname $0)/.. + +# `git apply|diff` can check for space errors, with the core implementation being `git diff-tree` +# this tool compares two trees, generally used to find errors in proposed changes +# we want to check the entire existing tree, so we compare HEAD against an empty tree +git diff-tree --check $(git hash-object -t tree /dev/null) HEAD diff --git a/hack/mkdocs.Dockerfile b/hack/mkdocs.Dockerfile new file mode 100644 index 000000000..0f02dedce --- /dev/null +++ b/hack/mkdocs.Dockerfile @@ -0,0 +1,4 @@ +FROM python:3.9 +RUN pip install mkdocs mkdocs-material +WORKDIR /workdir +ENTRYPOINT ["mkdocs"] \ No newline at end of file diff --git a/hack/mkdocs.sh b/hack/mkdocs.sh new file mode 100755 index 000000000..4f7c93b95 --- /dev/null +++ b/hack/mkdocs.sh @@ -0,0 +1,14 @@ +#!/usr/bin/env bash + +set -o errexit + +cd $(dirname $0) + +IMAGE_ID=$(docker build --file mkdocs.Dockerfile --quiet .) +cd .. + +if [[ "$*" =~ "serve" ]]; then + EXTRA_ARGS="${EXTRA_ARGS} -a 0.0.0.0:8000" +fi + +docker run --rm -v ${PWD}:/workdir -p 8000:8000 ${IMAGE_ID} "${@}" ${EXTRA_ARGS} diff --git a/hack/transform-al2-to-al2023.sh b/hack/transform-al2-to-al2023.sh new file mode 100755 index 000000000..7a5c0bb69 --- /dev/null +++ b/hack/transform-al2-to-al2023.sh @@ -0,0 +1,34 @@ +#!/usr/bin/env bash + +set -o pipefail +set -o nounset +set -o errexit + +if [[ -z "${PACKER_TEMPLATE_FILE:-}" ]]; then + echo "PACKER_TEMPLATE_FILE must be set." >&2 + exit 1 +fi +if [[ -z "${PACKER_DEFAULT_VARIABLE_FILE:-}" ]]; then + echo "PACKER_DEFAULT_VARIABLE_FILE must be set." 
>&2 + exit 1 +fi + +# rsa keys are not supported in al2023, switch to ed25519 +# delete the upgrade kernel provisioner as we don't need it for al2023 +cat "${PACKER_TEMPLATE_FILE}" \ + | jq '._comment = "All template variables are enumerated here; and most variables have a default value defined in eks-worker-al2023-variables.json"' \ + | jq '.variables.temporary_key_pair_type = "ed25519"' \ + | jq 'del(.provisioners[5])' \ + | jq 'del(.provisioners[5])' \ + | jq 'del(.provisioners[5])' \ + > "${PACKER_TEMPLATE_FILE/al2/al2023}" + +# use newer versions of containerd and runc, do not install docker +# use al2023 6.1 minimal image +cat "${PACKER_DEFAULT_VARIABLE_FILE}" \ + | jq '.ami_component_description = "(k8s: {{ user `kubernetes_version` }}, containerd: {{ user `containerd_version` }})"' \ + | jq '.ami_description = "EKS-optimized Kubernetes node based on Amazon Linux 2023"' \ + | jq '.containerd_version = "*" | .runc_version = "*" | .docker_version = "" ' \ + | jq '.source_ami_filter_name = "al2023-ami-minimal-2023.*-kernel-6.1-x86_64"' \ + | jq '.volume_type = "gp3"' \ + > "${PACKER_DEFAULT_VARIABLE_FILE/al2/al2023}" diff --git a/log-collector-script/linux/README.md b/log-collector-script/linux/README.md index 4119e4410..9bdad98bd 100644 --- a/log-collector-script/linux/README.md +++ b/log-collector-script/linux/README.md @@ -129,3 +129,12 @@ aws ssm get-command-invocation \ ``` 4. Once the above command is executed successfully, the logs should be present in the S3 bucket specified in the previous step. 
+ +### Collect User Data + +If collecting user data is required as part of troubleshooting, please use the commands below to retrieve data via IMDSv2: + +``` +TOKEN=`curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600"` \ +&& curl -H "X-aws-ec2-metadata-token: $TOKEN" -v http://169.254.169.254/latest/user-data +``` diff --git a/log-collector-script/linux/eks-log-collector.sh b/log-collector-script/linux/eks-log-collector.sh index 75eada625..ee03b46ac 100644 --- a/log-collector-script/linux/eks-log-collector.sh +++ b/log-collector-script/linux/eks-log-collector.sh @@ -20,7 +20,7 @@ export LANG="C" export LC_ALL="C" # Global options -readonly PROGRAM_VERSION="0.7.5" +readonly PROGRAM_VERSION="0.7.6" readonly PROGRAM_SOURCE="https://github.com/awslabs/amazon-eks-ami/blob/master/log-collector-script/" readonly PROGRAM_NAME="$(basename "$0" .sh)" readonly PROGRAM_DIR="/opt/log-collector" @@ -50,6 +50,7 @@ REQUIRED_UTILS=( COMMON_DIRECTORIES=( kernel + modinfo system docker containerd @@ -263,6 +264,7 @@ collect() { get_region get_common_logs get_kernel_info + get_modinfo get_mounts_info get_selinux_info get_iptables_info @@ -276,6 +278,7 @@ collect() { get_sysctls_info get_networking_info get_cni_config + get_cni_configuration_variables get_docker_logs get_sandboxImage_info get_cpu_throttled_processes @@ -354,6 +357,7 @@ get_common_logs() { cp --force --dereference --recursive /var/log/containers/ebs-csi* "${COLLECT_DIR}"/var_log/ 2> /dev/null cp --force --dereference --recursive /var/log/containers/efs-csi* "${COLLECT_DIR}"/var_log/ 2> /dev/null cp --force --dereference --recursive /var/log/containers/fsx-csi* "${COLLECT_DIR}"/var_log/ 2> /dev/null + cp --force --dereference --recursive /var/log/containers/fsx-openzfs-csi* "${COLLECT_DIR}"/var_log/ 2> /dev/null cp --force --dereference --recursive /var/log/containers/file-cache-csi* "${COLLECT_DIR}"/var_log/ 2> /dev/null continue fi @@ -364,6 +368,9 @@ get_common_logs()
{ cp --force --dereference --recursive /var/log/pods/kube-system_kube-proxy* "${COLLECT_DIR}"/var_log/ 2> /dev/null cp --force --dereference --recursive /var/log/pods/kube-system_ebs-csi-* "${COLLECT_DIR}"/var_log/ 2> /dev/null cp --force --dereference --recursive /var/log/pods/kube-system_efs-csi-* "${COLLECT_DIR}"/var_log/ 2> /dev/null + cp --force --dereference --recursive /var/log/pods/kube-system_fsx-csi-* "${COLLECT_DIR}"/var_log/ 2> /dev/null + cp --force --dereference --recursive /var/log/pods/kube-system_fsx-openzfs-csi-* "${COLLECT_DIR}"/var_log/ 2> /dev/null + cp --force --dereference --recursive /var/log/pods/kube-system_file-cache-csi-* "${COLLECT_DIR}"/var_log/ 2> /dev/null continue fi cp --force --recursive --dereference /var/log/"${entry}" "${COLLECT_DIR}"/var_log/ 2> /dev/null @@ -386,6 +393,12 @@ get_kernel_info() { ok } +# collect modinfo on specific modules for debugging purposes +get_modinfo() { + try "collect modinfo" + modinfo lustre > "${COLLECT_DIR}/modinfo/lustre" +} + get_docker_logs() { try "collect Docker daemon logs" @@ -522,7 +535,7 @@ get_networking_info() { CA_CRT=$(grep certificate-authority: "${COLLECT_DIR}"/kubelet/kubeconfig.yaml | sed 's/.*certificate-authority: //') for i in $(seq 5); do echo -e "curling ${API_SERVER} ($i of 5) $(date --utc +%FT%T.%3N%Z)\n\n" >> ${COLLECT_DIR}"/networking/curl_api_server.txt" - timeout 75 curl -v --cacert "${CA_CRT}" "${API_SERVER}"/livez?verbose >> ${COLLECT_DIR}"/networking/curl_api_server.txt" 2>&1 + timeout 75 curl -v --connect-timeout 3 --max-time 10 --noproxy '*' --cacert "${CA_CRT}" "${API_SERVER}"/livez?verbose >> ${COLLECT_DIR}"/networking/curl_api_server.txt" 2>&1 done fi @@ -548,6 +561,35 @@ get_cni_config() { ok } +get_cni_configuration_variables() { + # To get cni configuration variables, gather from the main container "amazon-k8s-cni" + # - https://github.com/aws/amazon-vpc-cni-k8s#cni-configuration-variables + try "collect CNI Configuration Variables from Docker" + + # "docker 
container list" will only show "RUNNING" containers. + # "docker container inspect" will generate plain text output. + if [[ "$(pgrep -o dockerd)" -ne 0 ]]; then + timeout 75 docker container list | awk '/amazon-k8s-cni/{print$NF}' | xargs -n 1 docker container inspect > "${COLLECT_DIR}"/cni/cni-configuration-variables-dockerd.txt 2>&1 || echo -e "\tTimed out, ignoring \"cni configuration variables output \" " + else + warning "The Docker daemon is not running." + fi + + try "collect CNI Configuration Variables from Containerd" + + # "ctr container list" will list down all containers, including stopped ones. + # "ctr container info" will generate JSON format output. + if ! command -v ctr > /dev/null 2>&1; then + warning "ctr not installed" + else + # "ctr --namespace k8s.io container list" will return two containers + # - amazon-k8s-cni:v1.xx.yy + # - amazon-k8s-cni-init:v1.xx.yy + timeout 75 ctr --namespace k8s.io container list | awk '/amazon-k8s-cni:v/{print$1}' | xargs -n 1 ctr --namespace k8s.io container info > "${COLLECT_DIR}"/cni/cni-configuration-variables-containerd.json 2>&1 || echo -e "\tTimed out, ignoring \"cni configuration variables output \" " + fi + + ok +} + get_pkgtype() { if [[ "$(command -v rpm)" ]]; then PACKAGE_TYPE=rpm diff --git a/log-collector-script/windows/README.md b/log-collector-script/windows/README.md index 945211c14..1bff2287b 100644 --- a/log-collector-script/windows/README.md +++ b/log-collector-script/windows/README.md @@ -121,3 +121,12 @@ aws ssm get-command-invocation \ ``` 4. Once the above command is executed successfully, the logs should be present in the S3 bucket specified in the previous step. 
+ +### Collect User Data + +If collecting user data is required as part of troubleshooting, please use the commands below to retrieve data via IMDSv2: + +``` +[string]$token = Invoke-RestMethod -Headers @{"X-aws-ec2-metadata-token-ttl-seconds" = "21600"} -Method PUT -Uri http://169.254.169.254/latest/api/token +Invoke-RestMethod -Headers @{"X-aws-ec2-metadata-token" = $token} -Method GET -Uri http://169.254.169.254/latest/user-data +``` diff --git a/log-collector-script/windows/eks-log-collector.ps1 b/log-collector-script/windows/eks-log-collector.ps1 index 31fa84ba2..4bb1e454e 100644 --- a/log-collector-script/windows/eks-log-collector.ps1 +++ b/log-collector-script/windows/eks-log-collector.ps1 @@ -1,27 +1,27 @@ -<# +<# Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance with the License. A copy of the License is located at http://aws.amazon.com/apache2.0/ - or in the "license" file accompanying this file. + or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -.SYNOPSIS +.SYNOPSIS Collects EKS Logs -.DESCRIPTION - Run the script to gather basic operating system, Docker daemon, and kubelet logs. +.DESCRIPTION + Run the script to gather basic operating system, Docker daemon, and kubelet logs. .NOTES You need to run this script with Elevated permissions to allow for the collection of the installed applications list -.EXAMPLE +.EXAMPLE eks-log-collector.ps1 - Gather basic operating system, Docker daemon, and kubelet logs. + Gather basic operating system, Docker daemon, and kubelet logs.
#> param( - [Parameter(Mandatory=$False)][string]$RunMode = "Collect" + [Parameter(Mandatory=$False)][string]$RunMode = "Collect" ) # Common options @@ -111,10 +111,10 @@ Function get_sysinfo{ Write-Host "OK" -ForegroundColor "green" } catch { - Write-Error "Unable to collect system information" + Write-Error "Unable to collect system information" Break - } - + } + } Function is_diskfull{ @@ -127,11 +127,11 @@ Function is_diskfull{ Write-Host "OK" -ForegroundColor "green" } catch { - Write-Error "Unable to Determine Free Disk Space" + Write-Error "Unable to Determine Free Disk Space" Break } if ($percent -lt $threshold){ - Write-Error "C: drive only has $percent% free space, please ensure there is at least $threshold% free disk space to collect and store the log files" + Write-Error "C: drive only has $percent% free space, please ensure there is at least $threshold% free disk space to collect and store the log files" Break } } @@ -328,7 +328,7 @@ Function get_containerd_logs{ Function get_network_info{ try { - Write-Host "Collecting network Information" + Write-Host "Collecting network Information" Get-HnsNetwork | Select Name, Type, Id, AddressPrefix > $info_system\network\hns\network.txt Get-hnsnetwork | Convertto-json -Depth 20 >> $info_system\network\hns\network.txt Get-hnsnetwork | % { Get-HnsNetwork -Id $_.ID -Detailed } | Convertto-json -Depth 20 >> $info_system\network\hns\networkdetailed.txt @@ -373,7 +373,7 @@ Function init{ create_working_dir get_sysinfo } - + Function collect{ init is_diskfull @@ -395,11 +395,11 @@ Function collect{ #-------------------------- #Main-function -Function main { +Function main { Write-Host "Running Default(Collect) Mode" -foregroundcolor "blue" cleanup collect - pack + pack } #Entry point diff --git a/mkdocs.yaml b/mkdocs.yaml new file mode 100644 index 000000000..56ec4c37e --- /dev/null +++ b/mkdocs.yaml @@ -0,0 +1,19 @@ +site_name: Amazon EKS AMI +docs_dir: doc/ +site_description: Build template and runtime resources for 
the Amazon EKS AMI +repo_name: awslabs/amazon-eks-ami +repo_url: https://github.com/awslabs/amazon-eks-ami +nav: + - 'Overview': README.md + - 'User Guide': USER_GUIDE.md + - 'Changelog': CHANGELOG.md + - 'Community': + - 'Contribution guidelines': CONTRIBUTING.md + - 'Code of Conduct': CODE_OF_CONDUCT.md + +theme: + name: material + palette: + primary: black + features: + - navigation.sections \ No newline at end of file diff --git a/scripts/cleanup.sh b/scripts/cleanup.sh index f99893412..61c399fee 100644 --- a/scripts/cleanup.sh +++ b/scripts/cleanup.sh @@ -24,6 +24,6 @@ sudo rm -rf \ /var/log/secure \ /var/log/wtmp \ /var/log/messages \ - /tmp/imds-tokens + /var/log/audit/* sudo touch /etc/machine-id diff --git a/scripts/cleanup_additional_repos.sh b/scripts/cleanup_additional_repos.sh index 79179d674..c9cb20f07 100644 --- a/scripts/cleanup_additional_repos.sh +++ b/scripts/cleanup_additional_repos.sh @@ -12,13 +12,13 @@ fi AWK_CMD=' BEGIN {RS=";";FS=","} { - delete vars; - for(i = 1; i <= NF; ++i) { - n = index($i, "="); - if(n) { + delete vars; + for(i = 1; i <= NF; ++i) { + n = index($i, "="); + if(n) { vars[substr($i, 1, n-1)] = substr($i, n + 1) } - } + } Repo = "/etc/yum.repos.d/"vars["repo"]".repo" } {cmd="rm -f " Repo; system(cmd)} diff --git a/scripts/enable-fips.sh b/scripts/enable-fips.sh new file mode 100755 index 000000000..399ab6b26 --- /dev/null +++ b/scripts/enable-fips.sh @@ -0,0 +1,10 @@ +#!/bin/bash +# https://aws.amazon.com/blogs/publicsector/enabling-fips-mode-amazon-linux-2/ +if [[ "$ENABLE_FIPS" == "true" ]]; then + # install and enable fips modules + sudo yum install -y dracut-fips openssl + sudo dracut -f + + # enable fips in the boot command + sudo /sbin/grubby --update-kernel=ALL --args="fips=1" +fi diff --git a/scripts/generate-version-info.sh b/scripts/generate-version-info.sh index 9a52f42ce..94ded309c 100644 --- a/scripts/generate-version-info.sh +++ b/scripts/generate-version-info.sh @@ -16,8 +16,24 @@ OUTPUT_FILE="$1" sudo rpm 
--query --all --queryformat '\{"%{NAME}": "%{VERSION}-%{RELEASE}"\}\n' | jq --slurp --sort-keys 'add | {packages:(.)}' > "$OUTPUT_FILE" # binaries -echo $(jq ".binaries.kubelet = \"$(kubelet --version | awk '{print $2}')\"" $OUTPUT_FILE) > $OUTPUT_FILE -echo $(jq ".binaries.awscli = \"$(aws --version | awk '{print $1}' | cut -d '/' -f 2)\"" $OUTPUT_FILE) > $OUTPUT_FILE +KUBELET_VERSION=$(kubelet --version | awk '{print $2}') +if [ "$?" != 0 ]; then + echo "unable to get kubelet version" + exit 1 +fi +echo $(jq ".binaries.kubelet = \"$KUBELET_VERSION\"" $OUTPUT_FILE) > $OUTPUT_FILE + +CLI_VERSION=$(aws --version | awk '{print $1}' | cut -d '/' -f 2) +if [ "$?" != 0 ]; then + echo "unable to get aws cli version" + exit 1 +fi +echo $(jq ".binaries.awscli = \"$CLI_VERSION\"" $OUTPUT_FILE) > $OUTPUT_FILE # cached images -echo $(jq ".images = [ $(sudo ctr -n k8s.io image ls -q | cut -d'/' -f2- | sort | uniq | grep -v 'sha256' | xargs -r printf "\"%s\"," | sed 's/,$//') ]" $OUTPUT_FILE) > $OUTPUT_FILE +if systemctl is-active --quiet containerd; then + echo $(jq ".images = [ $(sudo ctr -n k8s.io image ls -q | cut -d'/' -f2- | sort | uniq | grep -v 'sha256' | xargs -r printf "\"%s\"," | sed 's/,$//') ]" $OUTPUT_FILE) > $OUTPUT_FILE +elif [ "${CACHE_CONTAINER_IMAGES}" = "true" ]; then + echo "containerd must be active to generate version info for cached images" + exit 1 +fi diff --git a/scripts/install-worker.sh b/scripts/install-worker.sh index ba5a69422..a664485d3 100644 --- a/scripts/install-worker.sh +++ b/scripts/install-worker.sh @@ -32,6 +32,7 @@ validate_env_set PULL_CNI_FROM_GITHUB validate_env_set PAUSE_CONTAINER_VERSION validate_env_set CACHE_CONTAINER_IMAGES validate_env_set WORKING_DIR +validate_env_set SSM_AGENT_VERSION ################################################################################ ### Machine Architecture ####################################################### @@ -51,6 +52,13 @@ fi ### Packages 
################################################################### ################################################################################ +sudo yum install -y \ + yum-utils \ + yum-plugin-versionlock + +# lock the version of the kernel and associated packages before we yum update +sudo yum versionlock kernel-$(uname -r) kernel-headers-$(uname -r) kernel-devel-$(uname -r) + # Update the OS to begin with to catch up to the latest packages. sudo yum update -y @@ -59,7 +67,6 @@ sudo yum install -y \ aws-cfn-bootstrap \ chrony \ conntrack \ - curl \ ec2-instance-connect \ ethtool \ ipvsadm \ @@ -68,15 +75,23 @@ sudo yum install -y \ socat \ unzip \ wget \ - yum-utils \ - yum-plugin-versionlock \ mdadm \ pigz -# Remove any old kernel versions. `--count=1` here means "only leave 1 kernel version installed" -sudo package-cleanup --oldkernels --count=1 -y +# skip kernel version cleanup on al2023 +if ! cat /etc/*release | grep "al2023" > /dev/null 2>&1; then + # Remove any old kernel versions. `--count=1` here means "only leave 1 kernel version installed" + sudo package-cleanup --oldkernels --count=1 -y +fi -sudo yum versionlock kernel-$(uname -r) +# packages that need special handling +if cat /etc/*release | grep "al2023" > /dev/null 2>&1; then + # exists in al2023 only (needed by kubelet) + sudo yum install -y iptables-legacy +else + # curl-minimal already exists in al2023 so install curl only on al2 + sudo yum install -y curl +fi # Remove the ec2-net-utils package, if it's installed. This package interferes with the route setup on the instance. 
if yum list installed | grep ec2-net-utils; then sudo yum remove ec2-net-utils -y -q; fi @@ -165,8 +180,15 @@ sudo mv $WORKING_DIR/pull-sandbox-image.sh /etc/eks/containerd/pull-sandbox-imag sudo mv $WORKING_DIR/pull-image.sh /etc/eks/containerd/pull-image.sh sudo chmod +x /etc/eks/containerd/pull-sandbox-image.sh sudo chmod +x /etc/eks/containerd/pull-image.sh - sudo mkdir -p /etc/systemd/system/containerd.service.d +CONFIGURE_CONTAINERD_SLICE=$(vercmp "$KUBERNETES_VERSION" gteq "1.24.0" || true) +if [ "$CONFIGURE_CONTAINERD_SLICE" == "true" ]; then + cat << EOF | sudo tee /etc/systemd/system/containerd.service.d/00-runtime-slice.conf +[Service] +Slice=runtime.slice +EOF +fi + cat << EOF | sudo tee /etc/systemd/system/containerd.service.d/10-compat-symlink.conf [Service] ExecStartPre=/bin/ln -sf /run/containerd/containerd.sock /run/dockershim.sock @@ -183,6 +205,16 @@ net.bridge.bridge-nf-call-iptables = 1 net.ipv4.ip_forward = 1 EOF +############################################################################### +### Nerdctl setup ############################################################# +############################################################################### + +sudo yum install -y nerdctl +sudo mkdir /etc/nerdctl +cat << EOF | sudo tee -a /etc/nerdctl/nerdctl.toml +namespace = "k8s.io" +EOF + ################################################################################ ### Docker ##################################################################### ################################################################################ @@ -316,6 +348,13 @@ if [[ $KUBERNETES_VERSION == "1.20"* ]]; then echo $KUBELET_CONFIG_WITH_CSI_SERVICE_ACCOUNT_TOKEN_ENABLED > $WORKING_DIR/kubelet-config.json fi +# Enable Feature Gate for KubeletCredentialProviders in versions less than 1.28 since this feature flag was removed in 1.28. 
+# TODO: Remove this during 1.27 EOL +if vercmp $KUBERNETES_VERSION lt "1.28"; then + KUBELET_CONFIG_WITH_KUBELET_CREDENTIAL_PROVIDER_FEATURE_GATE_ENABLED=$(cat $WORKING_DIR/kubelet-config.json | jq '.featureGates += {KubeletCredentialProviders: true}') + echo $KUBELET_CONFIG_WITH_KUBELET_CREDENTIAL_PROVIDER_FEATURE_GATE_ENABLED > $WORKING_DIR/kubelet-config.json +fi + sudo mv $WORKING_DIR/kubelet.service /etc/systemd/system/kubelet.service sudo chown root:root /etc/systemd/system/kubelet.service sudo mv $WORKING_DIR/kubelet-config.json /etc/kubernetes/kubelet/kubelet-config.json @@ -338,12 +377,6 @@ sudo chmod +x /etc/eks/bootstrap.sh sudo mv $WORKING_DIR/max-pods-calculator.sh /etc/eks/max-pods-calculator.sh sudo chmod +x /etc/eks/max-pods-calculator.sh -SONOBUOY_E2E_REGISTRY="${SONOBUOY_E2E_REGISTRY:-}" -if [[ -n "$SONOBUOY_E2E_REGISTRY" ]]; then - sudo mv $WORKING_DIR/sonobuoy-e2e-registry-config /etc/eks/sonobuoy-e2e-registry-config - sudo sed -i s,SONOBUOY_E2E_REGISTRY,$SONOBUOY_E2E_REGISTRY,g /etc/eks/sonobuoy-e2e-registry-config -fi - ################################################################################ ### ECR CREDENTIAL PROVIDER #################################################### ################################################################################ @@ -428,6 +461,7 @@ if [[ "$CACHE_CONTAINER_IMAGES" == "true" ]] && ! [[ ${ISOLATED_REGIONS} =~ $BIN ${VPC_CNI_IMGS[@]+"${VPC_CNI_IMGS[@]}"} ) PULLED_IMGS=() + REGIONS=$(aws ec2 describe-regions --all-regions --output text --query 'Regions[].[RegionName]') for img in "${CACHE_IMGS[@]}"; do ## only kube-proxy-minimal is vended for K8s 1.24+ @@ -452,12 +486,13 @@ if [[ "$CACHE_CONTAINER_IMAGES" == "true" ]] && ! 
[[ ${ISOLATED_REGIONS} =~ $BIN done #### Tag the pulled down image for all other regions in the partition - for region in $(aws ec2 describe-regions --all-regions | jq -r '.Regions[] .RegionName'); do + for region in ${REGIONS[*]}; do for img in "${PULLED_IMGS[@]}"; do - regional_img="${img/$BINARY_BUCKET_REGION/$region}" + region_uri=$(/etc/eks/get-ecr-uri.sh "${region}" "${AWS_DOMAIN}") + regional_img="${img/$ECR_URI/$region_uri}" sudo ctr -n k8s.io image tag "${img}" "${regional_img}" || : ## Tag ECR fips endpoint for supported regions - if [[ "${region}" =~ (us-east-1|us-east-2|us-west-1|us-west-2|us-gov-east-1|us-gov-east-2) ]]; then + if [[ "${region}" =~ (us-east-1|us-east-2|us-west-1|us-west-2|us-gov-east-1|us-gov-west-1) ]]; then regional_fips_img="${regional_img/.ecr./.ecr-fips.}" sudo ctr -n k8s.io image tag "${img}" "${regional_fips_img}" || : sudo ctr -n k8s.io image tag "${img}" "${regional_fips_img/-eksbuild.1/}" || : @@ -474,8 +509,16 @@ fi ### SSM Agent ################################################################## ################################################################################ -sudo yum install -y amazon-ssm-agent -sudo systemctl enable amazon-ssm-agent +if yum list installed | grep amazon-ssm-agent; then + echo "amazon-ssm-agent already present - skipping install" +else + echo "Installing amazon-ssm-agent" + if ! 
[[ ${ISOLATED_REGIONS} =~ $BINARY_BUCKET_REGION ]]; then + sudo yum install -y https://s3.${BINARY_BUCKET_REGION}.${S3_DOMAIN}/amazon-ssm-${BINARY_BUCKET_REGION}/${SSM_AGENT_VERSION}/linux_${ARCH}/amazon-ssm-agent.rpm + else + sudo yum install -y amazon-ssm-agent + fi +fi ################################################################################ ### AMI Metadata ############################################################### @@ -508,6 +551,7 @@ EOF echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf echo fs.inotify.max_user_instances=8192 | sudo tee -a /etc/sysctl.conf echo vm.max_map_count=524288 | sudo tee -a /etc/sysctl.conf +echo 'kernel.pid_max=4194304' | sudo tee -a /etc/sysctl.conf ################################################################################ ### adding log-collector-script ################################################ diff --git a/scripts/install_additional_repos.sh b/scripts/install_additional_repos.sh index caabbca4d..dd1862743 100644 --- a/scripts/install_additional_repos.sh +++ b/scripts/install_additional_repos.sh @@ -19,13 +19,13 @@ fi AWK_CMD=' BEGIN {RS=";";FS=","} { - delete vars; - for(i = 1; i <= NF; ++i) { - n = index($i, "="); - if(n) { + delete vars; + for(i = 1; i <= NF; ++i) { + n = index($i, "="); + if(n) { vars[substr($i, 1, n-1)] = substr($i, n + 1) } - } + } Repo = "/etc/yum.repos.d/"vars["repo"]".repo" } {print "["vars["repo"]"]" > Repo} diff --git a/scripts/upgrade_kernel.sh b/scripts/upgrade_kernel.sh index 52d696056..9b13a18bb 100755 --- a/scripts/upgrade_kernel.sh +++ b/scripts/upgrade_kernel.sh @@ -13,12 +13,16 @@ if [[ -z "$KERNEL_VERSION" ]]; then echo "kernel_version is unset. Setting to $KERNEL_VERSION based on Kubernetes version $KUBERNETES_VERSION." 
fi -if [[ $KERNEL_VERSION == "4.14" ]]; then - sudo yum update -y kernel +if [[ $KERNEL_VERSION == 4.14* ]]; then + sudo yum install -y "kernel-${KERNEL_VERSION}*" else - sudo amazon-linux-extras install -y "kernel-${KERNEL_VERSION}" + KERNEL_MINOR_VERSION=$(echo ${KERNEL_VERSION} | cut -d. -f-2) + sudo amazon-linux-extras enable "kernel-${KERNEL_MINOR_VERSION}" + sudo yum install -y "kernel-${KERNEL_VERSION}*" fi +sudo yum install -y "kernel-headers-${KERNEL_VERSION}*" "kernel-devel-${KERNEL_VERSION}*" + # enable pressure stall information sudo grubby \ --update-kernel=ALL \ @@ -29,5 +33,3 @@ sudo grubby \ sudo grubby \ --update-kernel=ALL \ --args="clocksource=tsc tsc=reliable" - -sudo reboot diff --git a/scripts/validate.sh b/scripts/validate.sh index 0b007e386..42da83266 100644 --- a/scripts/validate.sh +++ b/scripts/validate.sh @@ -1,13 +1,9 @@ #!/usr/bin/env bash -# -# Do basic validation of the generated AMI -# Validates that a file or blob doesn't exist -# -# Arguments: -# a file name or blob -# Returns: -# 1 if a file exists, after printing an error +set -o nounset +set -o errexit +set -o pipefail + validate_file_nonexists() { local file_blob=$1 for f in $file_blob; do @@ -45,8 +41,6 @@ else exit 1 fi -echo "Verifying that the package versionlocks are correct..." - function versionlock-entries() { # the format of this output is EPOCH:NAME-VERSION-RELEASE.ARCH # more info in yum-versionlock(1) @@ -58,21 +52,29 @@ function versionlock-packages() { versionlock-entries | xargs -I '{}' rpm --query '{}' --queryformat '%{NAME}\n' } -for ENTRY in $(versionlock-entries); do - if ! rpm --query "$ENTRY" &> /dev/null; then - echo "There is no package matching the versionlock entry: '$ENTRY'" - exit 1 +function verify-versionlocks() { + for ENTRY in $(versionlock-entries); do + if ! 
rpm --query "$ENTRY" &> /dev/null; then + echo "There is no package matching the versionlock entry: '$ENTRY'" + exit 1 + fi + done + + LOCKED_PACKAGES=$(versionlock-packages | wc -l) + UNIQUE_LOCKED_PACKAGES=$(versionlock-packages | sort -u | wc -l) + if [ $LOCKED_PACKAGES -ne $UNIQUE_LOCKED_PACKAGES ]; then + echo "Package(s) have multiple version locks!" + versionlock-entries fi -done -LOCKED_PACKAGES=$(versionlock-packages | wc -l) -UNIQUE_LOCKED_PACKAGES=$(versionlock-packages | sort -u | wc -l) -if [ $LOCKED_PACKAGES -ne $UNIQUE_LOCKED_PACKAGES ]; then - echo "Package(s) have multiple version locks!" - versionlock-entries -fi + echo "Package versionlocks are correct!" +} -echo "Package versionlocks are correct!" +# run verify-versionlocks on al2 only, as it is not needed on al2023 +if ! cat /etc/*release | grep "al2023" > /dev/null 2>&1; then + echo "Verifying that the package versionlocks are correct..." + verify-versionlocks +fi REQUIRED_COMMANDS=(unpigz) @@ -84,3 +86,14 @@ for ENTRY in "${REQUIRED_COMMANDS[@]}"; do done echo "Required commands were found: ${REQUIRED_COMMANDS[*]}" + +REQUIRED_FREE_MEBIBYTES=1024 +TOTAL_MEBIBYTES=$(df -m / | tail -n1 | awk '{print $2}') +FREE_MEBIBYTES=$(df -m / | tail -n1 | awk '{print $4}') +echo "Disk space in mebibytes (required/free/total): ${REQUIRED_FREE_MEBIBYTES}/${FREE_MEBIBYTES}/${TOTAL_MEBIBYTES}" +if [ ${FREE_MEBIBYTES} -lt ${REQUIRED_FREE_MEBIBYTES} ]; then + echo "Disk space requirements not met!" + exit 1 +else + echo "Disk space requirements were met." +fi diff --git a/test/README.md b/test/README.md index e688ca945..6d9f58a2f 100644 --- a/test/README.md +++ b/test/README.md @@ -1,10 +1,10 @@ ## Tests -This directory contains a Dockerfile that is able to be used locally to test the `/etc/eks/boostrap.sh` script without having to use a real AL2 EC2 instance for a quick dev-loop. It is still necessary to test the bootstrap script on a real instance since the Docker image is not a fully accurate representation. 
+This directory contains a Dockerfile that can be used locally to test the `/etc/eks/bootstrap.sh` script without having to use a real AL2 EC2 instance for a quick dev-loop. It is still necessary to test the bootstrap script on a real instance since the Docker image is not a fully accurate representation. ## AL2 EKS Optimized AMI Docker Image -The image is built using the official AL2 image `public.ecr.aws/amazonlinux/amazonlinux:2`. It has several mocks installed including the [ec2-metadata-mock](https://github.com/aws/amazon-ec2-metadata-mock). Mocks are installed into `/sbin`, so adding addditional ones as necessary should be as simple as dropping a bash script in the `mocks` dir named as the command you would like to mock out. +The image is built using the official AL2 image `public.ecr.aws/amazonlinux/amazonlinux:2`. It has several mocks installed, including the [ec2-metadata-mock](https://github.com/aws/amazon-ec2-metadata-mock). Mocks are installed into `/sbin`, so adding additional ones as necessary should be as simple as dropping a bash script in the `mocks` dir named as the command you would like to mock out. ## Usage @@ -16,7 +16,7 @@ docker build -t eks-optimized-ami -f Dockerfile ../ docker run -it eks-optimized-ami /etc/eks/bootstrap.sh --b64-cluster-ca dGVzdA== --apiserver-endpoint http://my-api-endpoint test ``` -The `test-harness.sh` script wraps a build and runs test script in the `cases` dir. Tests scripts within the `cases` dir are invoked by the `test-harness.sh` script and have access to the `run` function. The `run` function accepts a temporary directory as an argument in order to mount as a volume in the container so that test scripts can check files within the `/etc/kubernetes/` directory after a bootstrap run. The remaining arguments to the `run` function are a path to a script within the AL2 EKS Optimized AMI Docker Container. +The `test-harness.sh` script wraps a build and runs test scripts in the `cases` dir.
Test scripts within the `cases` dir are invoked by the `test-harness.sh` script and have access to the `run` function. The `run` function accepts a temporary directory as an argument, which is mounted as a volume in the container so that test scripts can check files within the `/etc/kubernetes/` directory after a bootstrap run. The remaining arguments to the `run` function are the path to a script within the AL2 EKS Optimized AMI Docker Container. Here's an example `run` call: @@ -31,7 +31,7 @@ run ${TEMP_DIR} /etc/eks/bootstrap.sh \ ## ECR Public -You may need to logout of ECR public or reauthenticate if your credentials are expired: +You may need to log out of ECR Public or reauthenticate if your credentials are expired: ```bash docker logout public.ecr.aws diff --git a/test/cases/get-ecr-uri.sh b/test/cases/get-ecr-uri.sh new file mode 100755 index 000000000..5b4dd3209 --- /dev/null +++ b/test/cases/get-ecr-uri.sh @@ -0,0 +1,85 @@ +#!/usr/bin/env bash + +set -o nounset +set -o errexit +set -o pipefail + +echo "--> Should use specified account when passed in" +EXPECTED_ECR_URI="999999999999.dkr.ecr.mars-west-1.amazonaws.com.mars" +REGION="mars-west-1" +DOMAIN="amazonaws.com.mars" +ECR_URI=$(/etc/eks/get-ecr-uri.sh "${REGION}" "${DOMAIN}" "999999999999") +if [ ! "$ECR_URI" = "$EXPECTED_ECR_URI" ]; then + echo "❌ Test Failed: expected ecr-uri=$EXPECTED_ECR_URI but got '${ECR_URI}'" + exit 1 +fi + +echo "--> Should use account mapped to the region when set" +EXPECTED_ECR_URI="590381155156.dkr.ecr.eu-south-1.amazonaws.com" +REGION="eu-south-1" +DOMAIN="amazonaws.com" +ECR_URI=$(/etc/eks/get-ecr-uri.sh "${REGION}" "${DOMAIN}") +if [ ! 
"$ECR_URI" = "$EXPECTED_ECR_URI" ]; then + echo "โŒ Test Failed: expected ecr-uri=$EXPECTED_ECR_URI but got '${ECR_URI}'" + exit 1 +fi + +echo "--> Should use non-opt-in account when not opt-in-region" +EXPECTED_ECR_URI="602401143452.dkr.ecr.us-east-2.amazonaws.com" +REGION="us-east-2" +DOMAIN="amazonaws.com" +ECR_URI=$(/etc/eks/get-ecr-uri.sh "${REGION}" "${DOMAIN}") +if [ ! "$ECR_URI" = "$EXPECTED_ECR_URI" ]; then + echo "โŒ Test Failed: expected ecr-uri=$EXPECTED_ECR_URI but got '${ECR_URI}'" + exit 1 +fi + +echo "--> Should use us-west-2 account and region when opt-in-region" +EXPECTED_ECR_URI="602401143452.dkr.ecr.us-west-2.amazonaws.com" +REGION="eu-south-100" +DOMAIN="amazonaws.com" +ECR_URI=$(/etc/eks/get-ecr-uri.sh "${REGION}" "${DOMAIN}") +if [ ! "$ECR_URI" = "$EXPECTED_ECR_URI" ]; then + echo "โŒ Test Failed: expected ecr-uri=$EXPECTED_ECR_URI but got '${ECR_URI}'" + exit 1 +fi + +echo "--> Should default us-gov-west-1 when unknown amazonaws.com.us-gov region" +EXPECTED_ECR_URI="013241004608.dkr.ecr.us-gov-west-1.amazonaws.com.us-gov" +REGION="us-gov-east-100" +DOMAIN="amazonaws.com.us-gov" +ECR_URI=$(/etc/eks/get-ecr-uri.sh "${REGION}" "${DOMAIN}") +if [ ! "$ECR_URI" = "$EXPECTED_ECR_URI" ]; then + echo "โŒ Test Failed: expected ecr-uri=$EXPECTED_ECR_URI but got '${ECR_URI}'" + exit 1 +fi + +echo "--> Should default cn-northwest-1 when unknown amazonaws.com.cn region" +EXPECTED_ECR_URI="961992271922.dkr.ecr.cn-northwest-1.amazonaws.com.cn" +REGION="cn-north-100" +DOMAIN="amazonaws.com.cn" +ECR_URI=$(/etc/eks/get-ecr-uri.sh "${REGION}" "${DOMAIN}") +if [ ! 
"$ECR_URI" = "$EXPECTED_ECR_URI" ]; then + echo "โŒ Test Failed: expected ecr-uri=$EXPECTED_ECR_URI but got '${ECR_URI}'" + exit 1 +fi + +echo "--> Should default us-iso-east-1 when unknown amazonaws.com.iso region" +EXPECTED_ECR_URI="725322719131.dkr.ecr.us-iso-east-1.amazonaws.com.iso" +REGION="us-iso-west-100" +DOMAIN="amazonaws.com.iso" +ECR_URI=$(/etc/eks/get-ecr-uri.sh "${REGION}" "${DOMAIN}") +if [ ! "$ECR_URI" = "$EXPECTED_ECR_URI" ]; then + echo "โŒ Test Failed: expected ecr-uri=$EXPECTED_ECR_URI but got '${ECR_URI}'" + exit 1 +fi + +echo "--> Should default us-isob-east-1 when unknown amazonaws.com.isob region" +EXPECTED_ECR_URI="187977181151.dkr.ecr.us-isob-east-1.amazonaws.com.isob" +REGION="us-isob-west-100" +DOMAIN="amazonaws.com.isob" +ECR_URI=$(/etc/eks/get-ecr-uri.sh "${REGION}" "${DOMAIN}") +if [ ! "$ECR_URI" = "$EXPECTED_ECR_URI" ]; then + echo "โŒ Test Failed: expected ecr-uri=$EXPECTED_ECR_URI but got '${ECR_URI}'" + exit 1 +fi diff --git a/test/cases/imds-token-refresh.sh b/test/cases/imds-token-refresh.sh deleted file mode 100755 index 1f4ca7039..000000000 --- a/test/cases/imds-token-refresh.sh +++ /dev/null @@ -1,69 +0,0 @@ -#!/usr/bin/env bash - -set -o nounset -set -o errexit -set -o pipefail - -echo "--> Should refresh IMDS token on configured interval" -exit_code=0 -TOKEN_DIR=/tmp/imds-tokens -TTL=5 -export IMDS_TOKEN_TTL_SECONDS=$TTL -export IMDS_DEBUG=true -imds /latest/meta-data/instance-id || exit_code=$? - -if [[ ${exit_code} -ne 0 ]]; then - echo "โŒ Test Failed: expected a non-zero exit code but got '${exit_code}'" - exit 1 -elif [[ $(ls $TOKEN_DIR | wc -l) -ne 1 ]]; then - echo "โŒ Test Failed: expected one token to be present after first IMDS call but got '$(ls $TOKEN_DIR)'" - exit 1 -fi - -imds /latest/meta-data/instance-id || exit_code=$? 
- -if [[ ${exit_code} -ne 0 ]]; then - echo "❌ Test Failed: expected a non-zero exit code but got '${exit_code}'" - exit 1 -elif [[ $(ls $TOKEN_DIR | wc -l) -ne 1 ]]; then - echo "❌ Test Failed: expected one token to be present after second IMDS call but got '$(ls $TOKEN_DIR)'" - exit 1 -fi - -sleep $(($TTL + 1)) - -imds /latest/meta-data/instance-id || exit_code=$? - -if [[ ${exit_code} -ne 0 ]]; then - echo "❌ Test Failed: expected a non-zero exit code but got '${exit_code}'" - exit 1 -elif [[ $(ls $TOKEN_DIR | wc -l) -ne 2 ]]; then - echo "❌ Test Failed: expected two tokens to be present after third IMDS call but got '$(ls $TOKEN_DIR)'" - exit 1 -fi - -sleep $(($TTL + 1)) - -# both tokens are now expired, but only one should be garbage-collected with a window of $TTL - -IMDS_MAX_TOKEN_TTL_SECONDS=$TTL imds /latest/meta-data/instance-id || exit_code=$? - -if [[ ${exit_code} -ne 0 ]]; then - echo "❌ Test Failed: expected a non-zero exit code but got '${exit_code}'" - exit 1 -elif [[ $(ls $TOKEN_DIR | wc -l) -ne 2 ]]; then - echo "❌ Test Failed: expected two tokens to be present after first garbage-collection but got '$(ls $TOKEN_DIR)'" - exit 1 -fi - -# the other expired token should be removed with a window of 0 - -IMDS_MAX_TOKEN_TTL_SECONDS=0 imds /latest/meta-data/instance-id || exit_code=$? 
- -if [[ ${exit_code} -ne 0 ]]; then - echo "❌ Test Failed: expected a non-zero exit code but got '${exit_code}'" - exit 1 -elif [[ $(ls $TOKEN_DIR | wc -l) -ne 1 ]]; then - echo "❌ Test Failed: expected one token to be present after second garbage-collection but got '$(ls $TOKEN_DIR)'" - exit 1 -fi diff --git a/test/cases/mount-bpf-fs.sh b/test/cases/mount-bpf-fs.sh index c5281d4e2..fe6e45907 100755 --- a/test/cases/mount-bpf-fs.sh +++ b/test/cases/mount-bpf-fs.sh @@ -49,7 +49,7 @@ fi export -nf mount rm $SYSTEMD_UNIT -echo "--> Should default to true on 1.27+" +echo "--> Should default to true" export KUBELET_VERSION=v1.27.0-eks-ba74326 MOUNT_BPF_FS_MOCK=$(mktemp) function mount-bpf-fs() { @@ -72,8 +72,8 @@ if [ ! "$(cat $MOUNT_BPF_FS_MOCK)" = "called" ]; then fi export -nf mount-bpf-fs -echo "--> Should default to false on 1.24-" -export KUBELET_VERSION=v1.24.0-eks-ba74326 +echo "--> Should be disabled by flag" +export KUBELET_VERSION=v1.27.0-eks-ba74326 MOUNT_BPF_FS_MOCK=$(mktemp) function mount-bpf-fs() { echo "called" >> $MOUNT_BPF_FS_MOCK @@ -84,6 +84,7 @@ EXIT_CODE=0 /etc/eks/bootstrap.sh \ --b64-cluster-ca dGVzdA== \ --apiserver-endpoint http://my-api-endpoint \ + --mount-bpf-fs false \ test || EXIT_CODE=$? if [[ ${EXIT_CODE} -ne 0 ]]; then echo "❌ Test Failed: expected a zero exit code but got '${EXIT_CODE}'" diff --git a/test/cases/private-dns-name.sh b/test/cases/private-dns-name.sh new file mode 100755 index 000000000..c49246b49 --- /dev/null +++ b/test/cases/private-dns-name.sh @@ -0,0 +1,31 @@ +#!/usr/bin/env bash + +set -o nounset +set -o errexit +set -o pipefail + +echo "--> Should fetch PrivateDnsName correctly" +EXPECTED_PRIVATE_DNS_NAME="ip-10-0-0-157.us-east-2.compute.internal" +PRIVATE_DNS_NAME=$(private-dns-name) +if [ ! 
"$PRIVATE_DNS_NAME" = "$EXPECTED_PRIVATE_DNS_NAME" ]; then + echo "โŒ Test Failed: expected private-dns-name=$EXPECTED_PRIVATE_DNS_NAME but got '${PRIVATE_DNS_NAME}'" + exit 1 +fi + +echo "--> Should try to fetch PrivateDnsName until timeout is reached" +export PRIVATE_DNS_NAME_ATTEMPT_INTERVAL=3 +export PRIVATE_DNS_NAME_MAX_ATTEMPTS=2 +export AWS_MOCK_FAIL=true +START_TIME=$(date '+%s') +EXIT_CODE=0 +private-dns-name || EXIT_CODE=$? +STOP_TIME=$(date '+%s') +if [[ ${EXIT_CODE} -eq 0 ]]; then + echo "โŒ Test Failed: expected a non-zero exit code" + exit 1 +fi +ELAPSED_TIME=$((STOP_TIME - START_TIME)) +if [[ "$ELAPSED_TIME" -lt 6 ]]; then + echo "โŒ Test Failed: expected 6 seconds to elapse, but got: $ELAPSED_TIME" + exit 1 +fi diff --git a/test/cases/reserved-cpus-kubelet-arg.sh b/test/cases/reserved-cpus-kubelet-arg.sh new file mode 100755 index 000000000..2002b7060 --- /dev/null +++ b/test/cases/reserved-cpus-kubelet-arg.sh @@ -0,0 +1,73 @@ +#!/usr/bin/env bash +set -euo pipefail + +echo "-> Should not set systemReservedCgroup and kubeReservedCgroup when --reserved-cpus is set with containerd" +exit_code=0 +export KUBELET_VERSION=v1.24.15-eks-ba74326 +/etc/eks/bootstrap.sh \ + --b64-cluster-ca dGVzdA== \ + --apiserver-endpoint http://my-api-endpoint \ + --kubelet-extra-args '--node-labels=cnf=cnf1 --reserved-cpus=0-3 --cpu-manager-policy=static' \ + test || exit_code=$? 
+ +if [[ ${exit_code} -ne 0 ]]; then + echo "❌ Test Failed: expected a zero exit code but got '${exit_code}'" + exit 1 +fi + +KUBELET_CONFIG=/etc/kubernetes/kubelet/kubelet-config.json +if grep -q systemReservedCgroup ${KUBELET_CONFIG}; then + echo "❌ Test Failed: expected systemReservedCgroup to be absent in ${KUBELET_CONFIG}. Found: $(grep systemReservedCgroup ${KUBELET_CONFIG})" + exit 1 +fi + +if grep -q kubeReservedCgroup ${KUBELET_CONFIG}; then + echo "❌ Test Failed: expected kubeReservedCgroup to be absent in ${KUBELET_CONFIG}. Found: $(grep kubeReservedCgroup ${KUBELET_CONFIG})" + exit 1 +fi + +echo "-> Should set systemReservedCgroup and kubeReservedCgroup when --reserved-cpus is not set with containerd" +exit_code=0 +export KUBELET_VERSION=v1.24.15-eks-ba74326 +/etc/eks/bootstrap.sh \ + --b64-cluster-ca dGVzdA== \ + --apiserver-endpoint http://my-api-endpoint \ + test || exit_code=$? + +if [[ ${exit_code} -ne 0 ]]; then + echo "❌ Test Failed: expected a zero exit code but got '${exit_code}'" + exit 1 +fi + +if ! grep -q systemReservedCgroup ${KUBELET_CONFIG}; then + echo "❌ Test Failed: expected systemReservedCgroup to be present in ${KUBELET_CONFIG}. Found: $(grep systemReservedCgroup ${KUBELET_CONFIG})" + exit 1 +fi + +if ! grep -q kubeReservedCgroup ${KUBELET_CONFIG}; then + echo "❌ Test Failed: expected kubeReservedCgroup to be present in ${KUBELET_CONFIG}. Found: $(grep kubeReservedCgroup ${KUBELET_CONFIG})" + exit 1 +fi + +echo "-> Should set systemReservedCgroup and kubeReservedCgroup when --reserved-cpus is set with dockerd" +exit_code=0 +export KUBELET_VERSION=v1.23.15-eks-ba74326 +/etc/eks/bootstrap.sh \ + --b64-cluster-ca dGVzdA== \ + --apiserver-endpoint http://my-api-endpoint \ + test || exit_code=$? + +if [[ ${exit_code} -ne 0 ]]; then + echo "❌ Test Failed: expected a zero exit code but got '${exit_code}'" + exit 1 +fi + +if ! 
grep -q systemReservedCgroup ${KUBELET_CONFIG}; then + echo "❌ Test Failed: expected systemReservedCgroup to be present in ${KUBELET_CONFIG}. Found: $(grep systemReservedCgroup ${KUBELET_CONFIG})" + exit 1 +fi + +if ! grep -q kubeReservedCgroup ${KUBELET_CONFIG}; then + echo "❌ Test Failed: expected kubeReservedCgroup to be present in ${KUBELET_CONFIG}. Found: $(grep kubeReservedCgroup ${KUBELET_CONFIG})" + exit 1 +fi diff --git a/test/mocks/aws b/test/mocks/aws index da5f00b50..78126330d 100755 --- a/test/mocks/aws +++ b/test/mocks/aws @@ -7,15 +7,30 @@ SCRIPTPATH="$( echo >&2 "mocking 'aws $@'" -if [[ $1 == "ec2" ]]; then +AWS_MOCK_FAIL=${AWS_MOCK_FAIL:-false} +if [ "$AWS_MOCK_FAIL" = "true" ]; then + echo >&2 "failing mocked 'aws $@'" + exit 1 +fi +if [[ $1 == "ec2" ]]; then if [[ $2 == "describe-instance-types" ]]; then instance_type=$(echo "${@}" | grep -o '[a-z]\+[0-9]\+[a-z]*\.[0-9a-z]\+' | tr '.' '-') if [[ -f "${SCRIPTPATH}/describe-instance-types/${instance_type}.json" ]]; then cat "${SCRIPTPATH}/describe-instance-types/${instance_type}.json" exit 0 fi - echo "instance type not found" + echo >&2 "instance type not found" + exit 1 + fi + if [[ $2 == "describe-instances" ]]; then + instance_id=$(echo "${@}" | grep -o 'i\-[a-z0-9]\+') + echo >&2 "instance-id: $instance_id" + if [[ -f "${SCRIPTPATH}/describe-instances/${instance_id}.json" ]]; then + cat "${SCRIPTPATH}/describe-instances/${instance_id}.json" + exit 0 + fi + echo >&2 "instance not found" exit 1 fi fi diff --git a/test/mocks/describe-instances/i-1234567890abcdef0.json b/test/mocks/describe-instances/i-1234567890abcdef0.json new file mode 100644 index 000000000..da64601da --- /dev/null +++ b/test/mocks/describe-instances/i-1234567890abcdef0.json @@ -0,0 +1,154 @@ +{ + "Reservations": [ + { + "Groups": [], + "Instances": [ + { + "AmiLaunchIndex": 0, + "ImageId": "ami-0abcdef1234567890", + "InstanceId": "i-1234567890abcdef0", + "InstanceType": "t3.nano", + "KeyName": "my-key-pair", + 
"LaunchTime": "2022-11-15T10:48:59+00:00", + "Monitoring": { + "State": "disabled" + }, + "Placement": { + "AvailabilityZone": "us-east-2a", + "GroupName": "", + "Tenancy": "default" + }, + "PrivateDnsName": "ip-10-0-0-157.us-east-2.compute.internal", + "PrivateIpAddress": "10-0-0-157", + "ProductCodes": [], + "PublicDnsName": "ec2-34-253-223-13.us-east-2.compute.amazonaws.com", + "PublicIpAddress": "34.253.223.13", + "State": { + "Code": 16, + "Name": "running" + }, + "StateTransitionReason": "", + "SubnetId": "subnet-04a636d18e83cfacb", + "VpcId": "vpc-1234567890abcdef0", + "Architecture": "x86_64", + "BlockDeviceMappings": [ + { + "DeviceName": "/dev/xvda", + "Ebs": { + "AttachTime": "2022-11-15T10:49:00+00:00", + "DeleteOnTermination": true, + "Status": "attached", + "VolumeId": "vol-02e6ccdca7de29cf2" + } + } + ], + "ClientToken": "1234abcd-1234-abcd-1234-d46a8903e9bc", + "EbsOptimized": true, + "EnaSupport": true, + "Hypervisor": "xen", + "IamInstanceProfile": { + "Arn": "arn:aws:iam::111111111111:instance-profile/AmazonSSMRoleForInstancesQuickSetup", + "Id": "111111111111111111111" + }, + "NetworkInterfaces": [ + { + "Association": { + "IpOwnerId": "amazon", + "PublicDnsName": "ec2-34-253-223-13.us-east-2.compute.amazonaws.com", + "PublicIp": "34.253.223.13" + }, + "Attachment": { + "AttachTime": "2022-11-15T10:48:59+00:00", + "AttachmentId": "eni-attach-1234567890abcdefg", + "DeleteOnTermination": true, + "DeviceIndex": 0, + "Status": "attached", + "NetworkCardIndex": 0 + }, + "Description": "", + "Groups": [ + { + "GroupName": "launch-wizard-146", + "GroupId": "sg-1234567890abcdefg" + } + ], + "Ipv6Addresses": [], + "MacAddress": "00:11:22:33:44:55", + "NetworkInterfaceId": "eni-1234567890abcdefg", + "OwnerId": "104024344472", + "PrivateDnsName": "ip-10-0-0-157.us-east-2.compute.internal", + "PrivateIpAddress": "10-0-0-157", + "PrivateIpAddresses": [ + { + "Association": { + "IpOwnerId": "amazon", + "PublicDnsName": 
"ec2-34-253-223-13.us-east-2.compute.amazonaws.com", + "PublicIp": "34.253.223.13" + }, + "Primary": true, + "PrivateDnsName": "ip-10-0-0-157.us-east-2.compute.internal", + "PrivateIpAddress": "10-0-0-157" + } + ], + "SourceDestCheck": true, + "Status": "in-use", + "SubnetId": "subnet-1234567890abcdefg", + "VpcId": "vpc-1234567890abcdefg", + "InterfaceType": "interface" + } + ], + "RootDeviceName": "/dev/xvda", + "RootDeviceType": "ebs", + "SecurityGroups": [ + { + "GroupName": "launch-wizard-146", + "GroupId": "sg-1234567890abcdefg" + } + ], + "SourceDestCheck": true, + "Tags": [ + { + "Key": "Name", + "Value": "my-instance" + } + ], + "VirtualizationType": "hvm", + "CpuOptions": { + "CoreCount": 1, + "ThreadsPerCore": 2 + }, + "CapacityReservationSpecification": { + "CapacityReservationPreference": "open" + }, + "HibernationOptions": { + "Configured": false + }, + "MetadataOptions": { + "State": "applied", + "HttpTokens": "optional", + "HttpPutResponseHopLimit": 1, + "HttpEndpoint": "enabled", + "HttpProtocolIpv6": "disabled", + "InstanceMetadataTags": "enabled" + }, + "EnclaveOptions": { + "Enabled": false + }, + "PlatformDetails": "Linux/UNIX", + "UsageOperation": "RunInstances", + "UsageOperationUpdateTime": "2022-11-15T10:48:59+00:00", + "PrivateDnsNameOptions": { + "HostnameType": "ip-name", + "EnableResourceNameDnsARecord": true, + "EnableResourceNameDnsAAAARecord": false + }, + "MaintenanceOptions": { + "AutoRecovery": "default" + } + } + ], + "OwnerId": "111111111111", + "ReservationId": "r-1234567890abcdefg" + } + ] +}