GH-524 staging environment #532
Conversation
Split out the Docker setup so it can potentially be used on other nodes. Custom vars are placed in an extra-vars file to be referenced on the CLI while executing the script.
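For reference, the extra-vars pattern described here typically looks like the sketch below (the file name and the env_name variable are illustrative, not taken from this PR; ipfs_path matches the diff later in this conversation):

```yaml
# extra-vars.yml (hypothetical example file)
ipfs_path: /opt/local/ipfs
env_name: staging

# Referenced on the CLI when executing the playbook, e.g.:
#   ansible-playbook provision.yml --extra-vars "@extra-vars.yml"
```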
# slow_start = 60
# TODO: need to figure out healthcheck for IPFS
Yeah, this is a rabbit hole I started down at some point.
This is looking great; it's certainly a more complete setup than we had before.
Couple of notes:
- This does not set up a receptor instance for staging. I think that's ok for now, but I think as soon as the receptor starts doing fancier things (like actually rejecting jobs based on criteria) we will want to add a receptor for testing purposes.
- I'm a little worried about the requester instances having an auto-scaling group, mostly because I don't really know what it means for two Bacalhau requester nodes (that are not peered together) to share the same set of compute nodes. What would happen if one requester node accepts a job and hands it off to a compute node, and the CLI later requests the job status from a different requester node? The requester node is not doing any computation, so I don't anticipate ever really needing more than one, unless the idea is to never have n > 1 for this ASG.
- This still won't automatically run the Ansible provision scripts when an instance launches, right? For now we're still running ansible-playbook from the command line?
@@ -1,109 +1,67 @@
- name: Provision Bacalhau Compute Instance
  remote_user: ubuntu
  hosts: tag_Type_compute_only:&tag_Env_prod
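For readers unfamiliar with the hosts pattern in this hunk: the :& is Ansible's host-pattern intersection operator, so the play only targets hosts that appear in both tag-derived inventory groups. A minimal sketch, with group names taken from the diff above:

```yaml
# Targets only hosts that are in BOTH groups:
# tagged Type=compute_only AND tagged Env=prod.
- name: Provision Bacalhau Compute Instance
  hosts: tag_Type_compute_only:&tag_Env_prod
  remote_user: ubuntu
  tasks: []   # provisioning tasks elided
```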
Are you limiting to specific Envs when running ansible-playbook? Or is there some magic I am missing?
Yeah, using --limit tag_Env_staging while executing the ansible-playbook command.
Yes, the receptor will definitely come later.
Against incorrect environment
Looks good to me, but I'll defer to @hevans66 on the final approval. I do think the large strategic decision coming up is how we organize the infrastructure code for the private versus public clusters.
@@ -4,6 +4,12 @@
  vars:
    ipfs_path: /opt/local/ipfs
  tasks:
    # Must provide limit flag to ensure running against current environment
    - fail:
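For anyone finding this later, the guard being discussed can be sketched as below. This is an assumption about the implementation, not the exact code from the PR; it relies on the ansible_limit magic variable, which only exists in Ansible 2.8+ and is only defined when a limit was supplied:

```yaml
# Abort the play unless the operator passed -l/--limit on the CLI, e.g.:
#   ansible-playbook provision.yml --limit tag_Env_staging
# ('provision.yml' is a placeholder name.)
- fail:
    msg: "Must provide --limit to target a single environment (e.g. --limit tag_Env_staging)."
  when: ansible_limit is not defined
  run_once: true
```

Running the playbook without --limit then fails immediately instead of provisioning every environment at once.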
Cool, didn't know about this trick
Fixes #524