
feat: [DC-761] Update nomad v0.10.4 #11

Merged
merged 1,130 commits into master from update_nomad_v0.10.4 on Mar 17, 2020

Conversation

@spavell commented Mar 16, 2020

Update master to 0.10.4

Mahmood Ali and others added 30 commits January 15, 2020 09:02
alloc.Job may be stale as well, and we need to migrate it. It does cost
extra cycles, but they should be negligible.
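A minimal Go sketch of the guard this describes, assuming Nomad's `structs` package; the helper name and call site are hypothetical, not the actual change:

```go
package sketch

import "github.com/hashicorp/nomad/nomad/structs"

// ensureFreshJob is a hypothetical helper for the fix described above:
// alloc.Job may be a stale copy, so swap in the current job before use.
// The extra comparison and pointer assignment cost little per alloc.
func ensureFreshJob(alloc *structs.Allocation, job *structs.Job) {
	if alloc.Job == nil || alloc.Job.Version != job.Version {
		alloc.Job = job
	}
}
```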
This changeset is part of the work to improve our E2E provisioning
process to allow our upgrade tests:

* Move more of the setup into the AMI image creation so it's a little
 more obvious to provisioning config authors which bits are essential
 to deploying a specific version of Nomad.

* Make the service file update do a systemd daemon-reload so that we
  can update an already-running cluster with the same script we use to
  deploy it initially.
This builds on API changes in hashicorp#6017 and hashicorp#6021 to conditionally turn off the
“Run Job” button based on the current token’s capabilities, or the capabilities
of the anonymous policy if no token is present.

If you try to visit the job-run route directly, it redirects to the job list.
Group service checks cannot interpolate task fields, because the task
fields are not available at the time the script check hook is created
for the group service. When f31482a was merged this e2e test began
failing because we are now correctly matching the script check ID to
the service ID, which revealed this jobspec was invalid.
I originally planned to add component documentation, but as this dragged on and I found that JSDoc-to-Markdown sometimes needed hand-tuning, I decided to skip it and focus on replicating what was already present in Freestyle. Adding documentation is a finite task that can be revisited in the future.

My goal was to migrate everything from Freestyle with as few changes as possible. Some adaptations that I found necessary:
• the DelayedArray and DelayedTruth utilities that delay component rendering until slightly after initial render because without them:
  ◦ charts were rendering with zero width
  ◦ the JSON viewer was rendering with empty content
• Storybook in Ember renders components in a routerless/controllerless context by default, so some component stories needed changes:
  ◦ table pagination/sorting stories need access to query params, which necessitates reaching into Ember internals to start routing and dynamically generate a Storybook route/controller to render components into
  ◦ some stories have a faux controller as part of their Storybook context that hosts setInterval-linked dynamic computed properties
• some jiggery-pokery with anchor tags
  ◦ inert href='#' had to become href='javascript:;'
  ◦ links that are actually meant to navigate need target='_parent' so they don’t navigate inside the Storybook iframe

Maybe some of these could be addressed by fixes in ember-cli-storybook but I’m wary of digging around in there any more than I already have, as I’ve lost a lot of time to Storybook confusion and frustrations already 😞

The STORYBOOK=true environment variable tweaks some environment settings to get things working as expected in the Storybook context.

I chose to:
• use angle bracket invocation within stories rather than having to migrate them soon after moving to Storybook
• keep Freestyle around for now for its palette and typeface components
The e2e framework instantiates clients for Nomad/Consul but the
provisioning of the actual Nomad cluster is left to Terraform. The
Terraform provisioning process uses `remote-exec` to deploy specific
versions of Nomad so that we don't have to bake an AMI every time we
want to test a new version. But Terraform treats the resulting
instances as immutable, so we can't use the same tooling to update the
version of Nomad in-place. This is a prerequisite for upgrade testing.

This changeset extends the e2e framework to provide the option of
deploying Nomad (and, in the future, Consul/Vault) with specific
versions to running infrastructure. This initial implementation is
focused on deploying to a single cluster via `ssh` (because that's our
current need), but provides interfaces to hook the test run at the
start of the run, the start of each suite, or the start of a given
test case.
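A rough Go sketch of what those hook points could look like; these interface and method names are invented for illustration, not the framework's actual API:

```go
package provisioning

// Hypothetical hook interfaces for deploying a given Nomad version to
// running infrastructure at different points in a test run: once per
// run, once per suite, or once per test case.
type RunHook interface {
	BeforeRun(nomadVersion string) error
}

type SuiteHook interface {
	BeforeSuite(suite string, nomadVersion string) error
}

type CaseHook interface {
	BeforeCase(suite, testCase string, nomadVersion string) error
}
```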

Terraform work includes:
* provides Terraform output, written as JSON, that the framework uses
  to configure provisioning via `terraform output provisioning` (see
  the sketch after this list).
* provides Terraform output that can be used by test operators to
  configure their shell via `$(terraform output environment)`
* drops `remote-exec` provisioning steps from Terraform
* makes changes to the deployment scripts to ensure they can be run
  multiple times w/ different versions against the same host.
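For the first bullet above, a minimal sketch of consuming that output from Go. `terraform output -json <name>` is a real Terraform invocation; the `ProvisioningConfig` shape is assumed for illustration:

```go
package provisioning

import (
	"encoding/json"
	"os/exec"
)

// ProvisioningConfig is a placeholder for whatever shape the framework
// actually reads from `terraform output provisioning`.
type ProvisioningConfig struct {
	Servers []string `json:"servers"`
	Clients []string `json:"clients"`
}

// loadFromTerraform shells out to Terraform and decodes the named output
// as JSON, mirroring the flow the commit message describes.
func loadFromTerraform() (*ProvisioningConfig, error) {
	out, err := exec.Command("terraform", "output", "-json", "provisioning").Output()
	if err != nil {
		return nil, err
	}
	var cfg ProvisioningConfig
	if err := json.Unmarshal(out, &cfg); err != nil {
		return nil, err
	}
	return &cfg, nil
}
```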
Fixes a bug introduced in 0aa58b9 where we're writing a test file to
a taskdir-interpolated location, which works when we `alloc exec` but
not in the jobspec for a group script check.

This changeset also makes the test safe to run multiple times by
namespacing the file with the alloc ID, which has the added bonus of
exercising our alloc interpolation code for group script checks.
If an existing system allocation is running and the node it's running on
is marked as ineligible, subsequent plan/applies return an RPC error
instead of a more helpful plan result.

This change logs the error and appends a failedTGAlloc for the
placement.
…ineligible

Return FailedTGAlloc metric instead of no node err
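A hedged Go sketch of the fix's shape: Nomad schedulers do track failed placements per task group, but the function below and its names are simplified stand-ins, not the actual change:

```go
package scheduler

import (
	"github.com/hashicorp/go-hclog"
	"github.com/hashicorp/nomad/nomad/structs"
)

// recordIneligiblePlacement sketches the described behavior: log the RPC
// error and record a failed task-group allocation so the plan result
// explains the placement failure instead of surfacing a bare error.
func recordIneligiblePlacement(logger hclog.Logger,
	failedTGAllocs map[string]*structs.AllocMetric,
	tgName string, metrics *structs.AllocMetric, err error) {

	logger.Error("failed to place system alloc on ineligible node", "error", err)
	failedTGAllocs[tgName] = metrics
}
```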
Fixes a deadlock in leadership handling if leadership flapped.

Raft propagates leadership transitions to Nomad through a NotifyCh channel.
Raft blocks when writing to this channel, so the channel must be buffered or
aggressively consumed[1]. Otherwise, Raft blocks indefinitely in `raft.runLeader`
until the channel is consumed[2] and does not move on to executing the
follower-related logic (in `raft.runFollower`).

While Raft's `runLeader` defer function blocks, raft cannot process any other
raft operations. For example, the `run{Leader|Follower}` methods consume
`raft.applyCh`, and while the runLeader defer is blocked, all raft log
applications or config lookups block indefinitely.

Sadly, `leaderLoop` and `establishLeader` make a few Raft calls!
`establishLeader` attempts to auto-create the autopilot/scheduler config [3], and
`leaderLoop` attempts to check the raft configuration [4]. All of these calls
occur without a timeout.

Thus, if leadership flaps quickly while `leaderLoop`/`establishLeadership` is
invoked and hits any of these Raft calls, the Raft handler _deadlocks_ forever.

Depending on how many times it flapped and where exactly we get stuck, I suspect
it's possible to end up in the following state:

* Agent metrics/stats http and RPC calls hang as they check raft.Configurations
* raft.State remains in Leader state, and server attempts to handle RPC calls
  (e.g. node/alloc updates) and these hang as well

As we create goroutines per RPC call, the number of goroutines grows over time
and may trigger out-of-memory errors in addition to missed updates.

[1] https://github.com/hashicorp/raft/blob/d90d6d6bdacf1b35d66940b07be515b074d89e88/config.go#L190-L193
[2] https://github.com/hashicorp/raft/blob/d90d6d6bdacf1b35d66940b07be515b074d89e88/raft.go#L425-L436
[3] https://github.com/hashicorp/nomad/blob/2a89e477465adbe6a88987f0dcb9fe80145d7b2f/nomad/leader.go#L198-L202
[4] https://github.com/hashicorp/nomad/blob/2a89e477465adbe6a88987f0dcb9fe80145d7b2f/nomad/leader.go#L877
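A minimal sketch of the aggressive-consumer pattern [1] calls for, in plain Go; the wiring below is illustrative, not Nomad's actual leadership loop:

```go
package sketch

// drainNotifyCh consumes raft's NotifyCh promptly so raft.runLeader never
// blocks on the send, coalescing rapid flaps down to the newest observed
// state, which a separate loop then acts on at its own pace.
func drainNotifyCh(notifyCh <-chan bool, apply func(isLeader bool)) {
	latest := make(chan bool, 1) // buffer of one lets us coalesce flaps

	go func() {
		defer close(latest)
		for isLeader := range notifyCh {
			// Drop any stale, unconsumed value; keep only the newest.
			select {
			case <-latest:
			default:
			}
			latest <- isLeader
		}
	}()

	go func() {
		for isLeader := range latest {
			apply(isLeader) // may take time; NotifyCh stays drained above
		}
	}()
}
```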
website: add ‘intro to nomad’ video to /intro
drewbailey and others added 24 commits February 4, 2020 13:59
e2e: wait 2m rather than 10s after disabling consul acls
…ver-buttons

UI: Explicit transparent bg on popover actions
…ot-full-width

UI: Override the max-width on mobile to avoid losing space due to non-existent gutter menu
Note that as of 0.10.4, Nomad Windows binaries will be signed.

[ci skip]
changelog windows binaries being signed
The Consul CLI uses CONSUL_HTTP_TOKEN, so Nomad should use the same.
Note that consul-template uses CONSUL_TOKEN, which Nomad also uses,
so be careful to preserve any reference to that in the consul-template
context.
nomad: unset consul token on job register
command: use consistent CONSUL_HTTP_TOKEN name
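A hedged sketch of that precedence when reading the environment; the helper name is invented for illustration:

```go
package sketch

import "os"

// consulTokenFromEnv illustrates the naming change: prefer
// CONSUL_HTTP_TOKEN (what the Consul CLI reads), while still honoring
// CONSUL_TOKEN for compatibility with the consul-template context.
func consulTokenFromEnv() string {
	if tok := os.Getenv("CONSUL_HTTP_TOKEN"); tok != "" {
		return tok
	}
	return os.Getenv("CONSUL_TOKEN")
}
```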
@spavell spavell requested a review from Manicqin March 16, 2020 11:09
@Manicqin

2794 files changed... I will start now and finish when the Corona passes.

@spavell spavell changed the title Update nomad v0.10.4 feat: [DC-761] Update nomad v0.10.4 Mar 16, 2020
@spavell spavell merged commit becd650 into master Mar 17, 2020
@spavell spavell deleted the update_nomad_v0.10.4 branch March 17, 2020 22:38