
A unique hostPort is not generated when using extraPortMappings without specifying hostPort. #3301

Closed
prahaladramji opened this issue Jul 8, 2023 · 9 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.


@prahaladramji

What happened:

Creating a kind cluster without a hostPort defined in the extraPortMappings should assign a random but unique hostPort to each mapping. Instead, the same port is assigned to every mapping, so validation fails before cluster creation. This was working as expected with kind v0.18.0.

There is a workaround when creating only one cluster: manually add a hostPort and pick a unique port for each mapping (see the sketch below). However, this does not scale when we need to run multiple kind clusters (e.g. for demonstrating or testing a multi-cluster setup to replicate multi-region behaviour).
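For illustration, a minimal sketch of that workaround (the host port numbers here are arbitrary; they just need to be free and unique on the host):

# config.yaml (workaround): pin each mapping to an explicit, unique hostPort
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
    extraPortMappings:
      - containerPort: 8080
        hostPort: 18080   # arbitrary free host port, must differ per cluster
      - containerPort: 8443
        hostPort: 18443   # arbitrary free host port, must differ per cluster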

What you expected to happen:
The kind cluster to be created successfully

How to reproduce it (as minimally and precisely as possible):
Given the config file below, creating a cluster fails with the following error.

ERROR: failed to create cluster: invalid configuration for node 1: invalid portMapping: port mapping with same listen address, port and protocol already configured: <nil>:0/

command run: kind create cluster --config config.yaml

# config.yaml

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
    extraPortMappings:
      - containerPort: 8080
      - containerPort: 8443
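For comparison, on kind v0.18.0 the same config succeeds and each containerPort is published on a randomly assigned host port. Assuming the default cluster name (so the worker node container is named kind-worker), the assigned ports can be inspected afterwards:

# after a successful create on kind v0.18.0
docker port kind-worker 8080   # prints e.g. 0.0.0.0:54321
docker port kind-worker 8443   # prints e.g. 0.0.0.0:54873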

Anything else we need to know?:
This error only occurs after #3175, so it affects all versions from v0.19.0 onwards.
v0.18.0 continues to work as expected.

Environment:

  • kind version: (use kind version): v0.20.0
  • Runtime info: (use docker info or podman info): Docker engine 24.0.2
  • OS (e.g. from /etc/os-release): macOS 13.4
  • Kubernetes version: (use kubectl version): 1.24.15
  • Any proxies or other special environment settings?: nil
@prahaladramji prahaladramji added the kind/bug Categorizes issue or PR as related to a bug. label Jul 8, 2023
@aojea
Contributor

aojea commented Jul 8, 2023

/assign @aroradaman

@aojea
Contributor

aojea commented Jul 8, 2023

> Creating a kind cluster without a hostPort defined in the extraPortMappings should assign a random but unique hostPort to each mapping.

I'm curious about this use case: how do you know which port is assigned to the container, then?

@prahaladramji
Author

> I'm curious about this use case: how do you know which port is assigned to the container, then?

For the use case I have, I don't actually want or need to expose anything on a host port. What I do is run an independent Envoy proxy container on the same kind network (--network kind), which is created as part of kind create cluster. This allows all containers to talk to one another via the exposed containerPort.
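For illustration, a minimal sketch of that setup, assuming a hypothetical envoy.yaml in the current directory (the image tag is just an example):

# run an Envoy container on the Docker network created by kind, so it can
# reach the node containers directly on their containerPorts
docker run -d --name envoy \
  --network kind \
  -v "$PWD/envoy.yaml:/etc/envoy/envoy.yaml" \
  envoyproxy/envoy:v1.26-latest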

The previous behaviour I've seen is that kind v0.18.0 selects a random hostPort, so when I raised this issue I was essentially looking for the same behaviour; otherwise a few of my test envs break. I suppose an option to not expose a hostPort at all also sounds like a good feature and would serve this use case.

@aojea
Contributor

aojea commented Jul 10, 2023

> The previous behaviour I've seen is that kind v0.18.0 selects a random hostPort, so when I raised this issue I was essentially looking for the same behaviour; otherwise a few of my test envs break. I suppose an option to not expose a hostPort at all also sounds like a good feature and would serve this use case.

No no, don't misunderstand me, the bug is legit and the behavior should be recovered; it's just that I was curious to understand the use case.

@prahaladramji
Author

> No no, don't misunderstand me, the bug is legit and the behavior should be recovered; it's just that I was curious to understand the use case.

All good, no misunderstanding here. I hope I've clarified the use case. Essentially it's about being able to create complex multi-container environments that network with the kind cluster.

In my case I'm testing Istio in a multi-cluster setup, along with additional containers pretending to be VMs in a cloud environment that are networked with Kubernetes.
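As a sketch of why unique ports matter here, two clusters created from the same config (the cluster names are arbitrary); with the manual-hostPort workaround, the second create would collide unless every hostPort is edited to stay unique:

# two clusters from one config; identical explicit hostPorts would clash
kind create cluster --name cluster-a --config config.yaml
kind create cluster --name cluster-b --config config.yaml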

@prahaladramji
Author

Following up on this issue, are there any updates or a decision on how this will move forward? I see there's an open PR, but it has been blocked. If that's just an implementation detail, that would be good to know; if this is the intended behaviour and there is no intention to change it back, then I'll look at investing in some other sort of workaround.

Either way it would be good to know the path forward.

@BenTheElder
Member

#3302 has outstanding review comments. If you'd like to help, someone could try picking it up again.

@aroradaman
Member

@prahaladramji #3513 fixes the issue
/close

@k8s-ci-robot
Contributor

@aroradaman: Closing this issue.

In response to this:

> @prahaladramji #3513 fixes the issue
> /close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
