kube-lego managed Ingress resources #182

Closed
jrnt30 opened this issue Apr 28, 2017 · 7 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. size/M Denotes a PR that changes 30-99 lines, ignoring generated files.

Comments

@jrnt30
Contributor

jrnt30 commented Apr 28, 2017

I have been attempting to fully automate the provisioning and binding of Ingress resources on AWS using nginx-ingress, kube-lego and external-dns. Behind the scenes, kube-lego creates a shadow Ingress resource in the kube-system namespace to manage part of the TLS challenge process.

The issue I am running into is that external-dns' Ingress source assumes that if the external-dns.alpha.kubernetes.io/controller annotation is missing, it should make a DNS change for the resource. Ultimately this has led to a situation where the shadow resource results in multiple, identical changes being added to the batch.
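
In pseudocode, the behavior boils down to something like this (a minimal sketch for illustration, not the actual external-dns source; `shouldProcess` and `controllerName` are made-up names):

```go
package main

import "fmt"

const controllerAnnotation = "external-dns.alpha.kubernetes.io/controller"

// shouldProcess reports whether an Ingress's annotations mark it as managed
// by this controller. A missing annotation counts as a match, which is why
// kube-lego's shadow Ingress in kube-system is picked up as well.
func shouldProcess(annotations map[string]string, controllerName string) bool {
	value, ok := annotations[controllerAnnotation]
	if !ok {
		return true // missing annotation: assumed to belong to us
	}
	return value == controllerName
}

func main() {
	// kube-lego's shadow Ingress carries no controller annotation,
	// yet it is still selected for DNS changes.
	fmt.Println(shouldProcess(map[string]string{}, "dns-controller")) // true
}
```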

A few ways this could be approached would be:

  • Ensuring the set of endpoints ultimately being added to the change set is unique
  • Providing a "strict" flag that ensures the controller annotation is not only equal to the expected value when present, but that it is actually set

As an additional scenario, I can see a situation (especially with path-based Ingress) where one may want to represent the same Host attribute in multiple Ingress resources with different paths, which leans me heavily towards de-duplicating to unique records prior to the call to ChangeResourceRecordSets.
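
To sketch the de-duplication idea (a hypothetical stand-in, not external-dns's actual endpoint type; `endpoint` and `dedupEndpoints` are illustrative names):

```go
package main

import "fmt"

// endpoint is a hypothetical stand-in for external-dns's endpoint type,
// with just enough fields to show the de-duplication idea.
type endpoint struct {
	DNSName    string
	Target     string
	RecordType string
}

// dedupEndpoints drops exact duplicates (same name, target, and record type)
// so that several Ingress resources exposing the same host contribute only
// one change to the Route 53 batch.
func dedupEndpoints(endpoints []endpoint) []endpoint {
	seen := make(map[endpoint]struct{}, len(endpoints))
	unique := make([]endpoint, 0, len(endpoints))
	for _, ep := range endpoints {
		if _, ok := seen[ep]; ok {
			continue
		}
		seen[ep] = struct{}{}
		unique = append(unique, ep)
	}
	return unique
}

func main() {
	// The real Ingress and kube-lego's shadow Ingress yield identical endpoints.
	eps := []endpoint{
		{"app.example.org", "elb.amazonaws.com", "CNAME"},
		{"app.example.org", "elb.amazonaws.com", "CNAME"},
	}
	fmt.Println(len(dedupEndpoints(eps))) // 1
}
```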

@hjacobs
Contributor

hjacobs commented Apr 29, 2017

👍 the de-duplication approach sounds reasonable.

@ideahitme ideahitme added the kind/feature Categorizes issue or PR as related to a new feature. label May 2, 2017
@linki
Member

linki commented May 5, 2017

Ensuring the set of endpoints ultimately being added to the change set is unique

This is fixed in v0.3.0-beta.1, which makes it work with kube-lego.

The --strict flag still sounds reasonable.
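
For reference, strict matching would simply flip the missing-annotation case from the earlier sketch (again a hypothetical illustration; no such flag exists at the time of writing):

```go
package main

import "fmt"

// strictShouldProcess only matches when the controller annotation is present
// and equal to the expected value; a missing annotation no longer counts as
// ownership. Hypothetical sketch of the proposed --strict behavior.
func strictShouldProcess(annotations map[string]string, controllerName string) bool {
	value, ok := annotations["external-dns.alpha.kubernetes.io/controller"]
	return ok && value == controllerName
}

func main() {
	// Under strict matching, kube-lego's unannotated shadow Ingress is skipped.
	fmt.Println(strictShouldProcess(map[string]string{}, "dns-controller")) // false
}
```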

@linki linki modified the milestone: v0.4 Jun 12, 2017
@linki
Member

linki commented Jun 30, 2017

Moving this to v0.5.

@linki linki modified the milestones: v0.5, v0.4 Jun 30, 2017
@linki linki added size/M Denotes a PR that changes 30-99 lines, ignoring generated files. and removed size/medium labels Jan 2, 2018
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 22, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 22, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
