
data/aws/vpc: Add an S3 endpoint to new VPCs #745

Merged Dec 14, 2018 (2 commits)

Commits on Dec 13, 2018

  1. data/aws/vpc: Add an S3 endpoint to new VPCs

    As suggested by Stephen Cuppett, this allows registry <-> S3 transfers
    to bypass the (NAT) gateways.  Traffic over the NAT gateways costs
    money, so the new endpoint should make S3 access from the cluster
    cheaper (and possibly more reliable).  This also allows for additional
    security policy flexibility, although I'm not taking advantage of that
    in this commit.  Docs for VPC endpoints are in [1,2,3,4].
    
    Endpoints do not currently support cross-region requests [1].  And
    based on discussion with Stephen, adding an endpoint may *break*
    access to S3 on other regions.  But I can't find docs to back that up,
    and [3] has:
    
      We use the most specific route that matches the traffic to determine
      how to route the traffic (longest prefix match).  If you have an
      existing route in your route table for all internet traffic
      (0.0.0.0/0) that points to an internet gateway, the endpoint route
      takes precedence for all traffic destined for the service, because
      the IP address range for the service is more specific than
      0.0.0.0/0.  All other internet traffic goes to your internet
      gateway, including traffic that's destined for the service in other
      regions.
    
    which suggests that access to S3 on other regions may be unaffected.
    In any case, our registry buckets, and likely any other buckets
    associated with the cluster, will be living in the same region.
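
    For illustration (the IDs below are placeholders, not values from
    this PR), a subnet route table after this change would look roughly
    like:

      Destination         Target
      0.0.0.0/0           igw-12345678   # or a NAT gateway, for general internet traffic
      pl-xxxxxxxx (S3)    vpce-abcdef01  # S3 prefix list added by the endpoint

    The prefix-list route is more specific than 0.0.0.0/0, so in-region
    S3 traffic goes through the endpoint while all other traffic keeps
    using the gateway, matching the longest-prefix-match behavior quoted
    above.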
    
    concat is documented in [5].  The wrapping brackets avoid [6]:
    
      level=error msg="Error: module.vpc.aws_vpc_endpoint.s3: route_table_ids: should be a list"
    
    although I think that's a Terraform bug.  See also 8a37f72
    (modules/aws/bootstrap: Pull AWS bootstrap setup into a module,
    2018-09-05, openshift#217), which talks about this same issue.
    
    [1]: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints-s3.html
    [2]: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints.html
    [3]: https://docs.aws.amazon.com/vpc/latest/userguide/vpce-gateway.html
    [4]: https://www.terraform.io/docs/providers/aws/r/vpc_endpoint.html
    [5]: https://www.terraform.io/docs/configuration/interpolation.html#concat-list1-list2-
    [6]: https://storage.googleapis.com/origin-ci-test/pr-logs/pull/openshift_installer/745/pull-ci-openshift-installer-master-e2e-aws/1673/build-log.txt
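
    A rough sketch of the resulting Terraform, in the 0.11-era syntax
    this PR predates HCL2 (resource and variable names here are
    illustrative, not necessarily the installer's actual ones):

      resource "aws_vpc_endpoint" "s3" {
        vpc_id       = "${aws_vpc.new_vpc.id}"
        service_name = "com.amazonaws.${var.region}.s3"

        # The wrapping brackets work around the "should be a list"
        # error quoted above.
        route_table_ids = ["${concat(aws_route_table.private_routes.*.id, list(aws_route_table.default.id))}"]
      }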
    wking committed Dec 13, 2018 (commit 4a09672)
  2. vendor: Bump hive to 802db5420da

    Pulling in openshift/hive@b7d71518 (remove VPC endpoints, 2018-12-03,
    openshift/hive#122).
    
    Generated with:
    
      $ sed -i s/2349f175d3e4fc6542dec79add881a59f2d7b1b8/802db5420da6a88f034fc2501081e2ab12e8463e/ Gopkg.toml
      $ dep ensure
    
    using:
    
      $ dep version
      dep:
       version     : v0.5.0
       build date  :
       git hash    : 22125cf
       go version  : go1.10.3
       go compiler : gc
       platform    : linux/amd64
       features    : ImportDuringSolve=false
    wking committed Dec 13, 2018 (commit cdcaeb2)