Update docs of containers.conf configs affecting /etc/hosts #2184

Merged: 1 commit merged into containers:main on Oct 4, 2024

Conversation

@PhrozenByte (Contributor)

Follow-up to containers/podman#24043 and containers/podman#24122

This just clarifies a few minor things about these configs after we've updated the docs of Podman's --add-host, --no-hosts, et al. CLI options.

@Luap99: Is it true that host_containers_internal_ip has no effect at all with podman machine, or is it just not possible to disable the host.containers.internal and host.docker.internal hostnames?

packit-as-a-service (bot)

We were not able to find or create Copr project packit/containers-common-2184 specified in the config with the following error:

Cannot create a new Copr project (owner=packit project=containers-common-2184 chroots=['fedora-eln-x86_64']): Copr: 'packit/containers-common-2184' already exists. Copr HTTP response is 400 BAD REQUEST.

Unless the HTTP status code above is >= 500, please check your configuration for:

  1. typos in owner and project name (groups need to be prefixed with @)
  2. whether the project name doesn't contain not allowed characters (only letters, digits, underscores, dashes and dots must be used)
  3. whether the project itself exists (Packit creates projects only in its own namespace)
  4. whether Packit is allowed to build in your Copr project
  5. whether your Copr project/group is not private

@Luap99 (Member) commented Oct 1, 2024

> @Luap99: Is it true that host_containers_internal_ip has no effect at all with podman machine, or is it just not possible to disable the host.containers.internal and host.docker.internal hostnames?

This is the code

```go
switch opts.Conf.Containers.HostContainersInternalIP {
case "":
    // if empty (default) we will automatically choose one below
    // if machine using gvproxy we let the gvproxy dns server handle the dns name so do not add it
    if machine.IsGvProxyBased() {
        return ""
    }
case "none":
    return ""
default:
    return opts.Conf.Containers.HostContainersInternalIP
}
```

So yes, you just cannot disable it because the DNS name is resolved by gvproxy; if you set a specific IP, it should add it correctly.
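For illustration, the three branches of that switch roughly correspond to the following containers.conf sketch (the placeholder IP is an example value, not taken from this thread):

```toml
[containers]
# empty (the default): Podman picks a host IP automatically; with a
# gvproxy-based machine the /etc/hosts entry is skipped and gvproxy
# resolves the name via DNS instead
#host_containers_internal_ip = ""

# "none": do not write the host.containers.internal entry at all
#host_containers_internal_ip = "none"

# explicit IP (placeholder value): written as-is into the container's /etc/hosts
#host_containers_internal_ip = "192.0.2.10"
```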

Comment on lines 213 to 216
> Note: This config has no effect with `podman machine`, because Podman isn't
> modifying the guest's `/etc/hosts` file. The `host.containers.internal` and
> `host.docker.internal` hostnames are instead resolved by the gvproxy DNS
> resolver. Therefore it is not possible to disable the hostnames in this case.
@Luap99 (Member)

So yes, this should say that setting this to none has no effect. Notably, with podman machine we do not add this as a host entry but rather let gvproxy resolve it as a DNS name, which is different.

Also for reference this doesn't work today: containers/podman#21681

@PhrozenByte (Contributor, Author)

I see... Just to get the expected behaviour straight (I've never used podman machine myself):

  • containers/podman#21681 is indeed a bug that will be resolved at some point, i.e. host-gateway not working with podman machine is not intended. Therefore we must not mention this in the docs of --add-host. Correct?

  • Just to clarify, Podman's --add-host is always modifying the /etc/hosts file, even with podman machine (i.e. gvproxy isn't involved with --add-host), correct?

  • host_containers_internal_ip="none" not working with podman machine is the intended behaviour and there are no plans to support it (i.e. it's not a bug that will be resolved at some point)? I don't know the code (like, at all), so please excuse me if I'm asking stupid questions, but since gvproxy already reads host_containers_internal_ip, why can't it interpret the "none" value? It seems a bit odd from a user's perspective. There might be good technical reasons though (so please don't feel obligated to explain them in detail; just "it's intended" fully satisfies me and I'll update the docs accordingly).

@Luap99 (Member)

> containers/podman#21681 is indeed a bug that will be resolved at some point, i.e. host-gateway not working with podman machine is not intended. Therefore we must not mention this in the docs of --add-host. Correct?

Yes that should be fixed at some point.

> Just to clarify, Podman's --add-host is always modifying the /etc/hosts file, even with podman machine (i.e. gvproxy isn't involved with --add-host), correct?

Correct

> host_containers_internal_ip="none" not working with podman machine is the intended behaviour and there are no plans to support it (i.e. it's not a bug that will be resolved at some point)? I don't know the code (like, at all), so please excuse me if I'm asking stupid questions, but since gvproxy already reads host_containers_internal_ip, why can't it interpret the "none" value?

gvproxy runs on the host and proxies the network between the host and the VM. It doesn't read containers.conf at all, so it has no idea about this setting, as it has no process inside the VM.
With machine we try to ensure that host means the actual VM host OS (i.e. macOS), so if users try to make a connection they actually reach the services running on the host OS.
Now, because the podman in the VM has no idea what the real host IP addresses are, I decided to simply skip this entry in that case and let gvproxy (which at that point already resolved this DNS name) handle it, as that was the easiest way to go about it.

I am not sure I would call this intended, but I do not see a way to avoid it. "none" means to not write the entry, and because machine already does not write it and depends on DNS, there is no difference in practice between the two with podman machine.

@PhrozenByte (Contributor, Author)

Ah, now I get it 💡

I had the misconception that gvproxy is reading host_containers_internal_ip from containers.conf. I got this false impression from "if you set a specific ip it should add it correctly", because I assumed that "it" meant gvproxy since you were talking about gvproxy there, but you actually meant /etc/hosts.

So, to summarize: gvproxy isn't reading host_containers_internal_ip at all. However, setting host_containers_internal_ip to an IP address still causes Podman to write that IP address to /etc/hosts. This /etc/hosts entry then causes most software to bypass gvproxy; however, if one were to query gvproxy directly (e.g. with nslookup host.containers.internal), gvproxy would still yield a different IP address. host_containers_internal_ip="none" simply has no effect because not writing anything to /etc/hosts can't possibly bypass gvproxy. Correct?
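As a rough way to see that distinction on a podman machine setup, one could compare the container's /etc/hosts with a direct DNS lookup (a sketch only; the alpine image and the availability of nslookup in it are assumptions, not something from this PR):

```sh
# Does Podman write a host.containers.internal entry into the container's /etc/hosts?
podman run --rm alpine cat /etc/hosts | grep host.containers.internal

# Resolve the name via DNS; with the default (empty) setting on podman machine,
# this answer comes from the gvproxy DNS resolver rather than from /etc/hosts.
podman run --rm alpine nslookup host.containers.internal
```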

I've updated the docs accordingly.

I intentionally refrained from mentioning host_containers_internal_ip="none" in this context. The reason is that nothing special is happening there; it's just the consequence of gvproxy not reading host_containers_internal_ip and of the fact that only writing something to /etc/hosts can bypass gvproxy. I furthermore assume that very few users actually want to specifically disable the internal hostnames.

@Luap99 (Member) left a comment

Thanks, content LGTM, but can you update the comments in pkg/config/containers.conf as well so they match?

Signed-off-by: Daniel Rudolf <github.com@daniel-rudolf.de>
@PhrozenByte (Contributor, Author)

Done 👍

@Luap99 (Member) left a comment

LGTM

openshift-ci bot commented Oct 4, 2024

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: Luap99, PhrozenByte

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

openshift-ci bot added the approved label Oct 4, 2024
@rhatdan (Member) commented Oct 4, 2024

Thanks @PhrozenByte
/lgtm

openshift-ci bot added the lgtm label Oct 4, 2024
openshift-merge-bot merged commit cd4f09c into containers:main on Oct 4, 2024
16 checks passed