data/aws: use azs for master set in manifests #1121
Conversation
data/data/aws/master/outputs.tf (outdated)

```hcl
output "subnet_ids" {
  value = "${var.subnet_ids}"
}

output "cluster_id" {
  value = "${var.cluster_id}"
}
```
Like `subnet_ids` above, this is also a useless "tell them what they told you" output. Looks like we've been dragging them around since e5c8b41 (platform/aws: add bootstrap node and step for joining it, 2018-02-13, coreos/tectonic-installer#2924).
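As a sketch of the pattern being flagged (illustrative, not the exact installer code): a module output that only echoes an input variable gives callers nothing they did not already have.

```hcl
variable "cluster_id" {
  type = "string"
}

# Pass-through output: callers of this module already supplied
# var.cluster_id, so re-exporting it adds no information.
output "cluster_id" {
  value = "${var.cluster_id}"
}
```

The calling code can simply keep referencing the value it passed in, instead of reading it back through the module's outputs.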
/retest
Rebased on top of #1045 in preparation for that merging.

Made a WIP to add support for multiple masters in an availability zone.

I have removed the code that enforces the restriction that an availability zone cannot have multiple masters. No changes to the installer were needed to support multiple masters in a zone. The changes were made in cf3ff39 to 0a9ef3b.

/retest
testing locally ...

```yaml
controlPlane:
  name: master
  platform:
    aws:
      zones:
      - us-east-1b
      - us-east-1d
      - us-east-1f
```

... looks like the bootstrap instance is still in the first availability zone ...

```console
$ AWS_PROFILE=openshift-dev aws ec2 describe-instances --filters Name=tag-key,Values=kubernetes.io/cluster/adahiya-0-88bqx | jq '.Reservations[].Instances[] | (.Tags[] | select(.Key == "Name") | .Value) + " " + .Placement.AvailabilityZone'
"adahiya-0-88bqx-master-1 us-east-1d"
"adahiya-0-88bqx-bootstrap us-east-1a"
"adahiya-0-88bqx-master-0 us-east-1b"
"adahiya-0-88bqx-master-2 us-east-1f"
```

@staebler is it possible to create the bootstrap machine in the 0th AZ given for the masters?
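A minimal Terraform sketch of the placement being asked about (the variable and resource names here are illustrative, not the installer's actual ones): pin the bootstrap node to element 0 of the ordered master AZ list instead of letting it default to the region's first AZ.

```hcl
# Hypothetical inputs: az_list holds the availability zones chosen for
# the masters, in order; az_to_subnet_id maps each AZ to its subnet.
variable "az_list" {
  type = "list"
}

variable "az_to_subnet_id" {
  type = "map"
}

resource "aws_instance" "bootstrap" {
  ami           = "${var.ami}"           # assumed input
  instance_type = "m4.large"

  # Same AZ as the first master: look up the subnet for az_list[0].
  subnet_id = "${lookup(var.az_to_subnet_id, element(var.az_list, 0))}"
}
```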
data/data/aws/vpc/common.tf (outdated)

```hcl
public_subnet_ids       = "${aws_subnet.public_subnet.*.id}"
private_subnet_count    = "${local.new_az_count}"
public_subnet_count     = "${local.new_az_count}"
az_to_private_subnet_id = "${zipmap(local.new_subnet_azs, local.private_subnet_ids)}"
```
nit: This seems to be used only in an output, so why not do the calculation there itself ...
^^ totally ignorable 😇
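The nit above suggests computing the map where it is consumed rather than staging it in a local; a sketch of what that would look like (names follow the diff, but this is illustrative, not the merged code):

```hcl
# Instead of storing the zipmap result in a local that is read exactly
# once, build it directly in the output block that exposes it:
output "az_to_private_subnet_id" {
  value = "${zipmap(local.new_subnet_azs, local.private_subnet_ids)}"
}
```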
Rate limiting. /retest

/lgtm

/retest

Conflict with #1296.

/retest Please review the full test history for this PR and help us cut down flakes.
These changes pass the availability zones to use for the masters set in 99_openshift-cluster-api_master-machines.yaml through to terraform. Prior to these changes the masters were always placed in the first 3 availability zones. There is no validation done on the availability zones to verify that they are valid for the region. Fix for https://bugzilla.redhat.com/show_bug.cgi?id=1662119.
The bootstrap node was being placed in the first availability zone in the region. Now, place the bootstrap node in the same availability zone as the first master. Remove the local az_to_private_subnet_id variable from the vpc module as it is only used as an output from the module. The output value is now calculated at the place where the output value is defined. Remove the cluster_id output value from the vpc module as it is unused.
/test e2e-aws
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: abhinavdahiya, staebler

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
Rate limiting. /retest

/retest
/retest |
openshift@afa0b59 had moved the bootstrap node to a private subnet based on openshift#1121 (comment), but we need the bootstrap node in a public subnet to be able to ssh. The bootstrap node is accessible over ssh again.

```console
$ ush core@18.215.154.240
Warning: Permanently added '18.215.154.240' (ECDSA) to the list of known hosts.
Red Hat CoreOS 4.0 Beta
WARNING: Direct SSH access to machines is not recommended.
This node has been annotated with machineconfiguration.openshift.io/ssh=accessed
---
This is the bootstrap node; it will be destroyed when the master is fully up.
The primary service is "bootkube.service". To watch its status, run e.g.
journalctl -b -f -u bootkube.service
[core@ip-10-0-8-165 ~]$
```