[Pi-hole] Only a single client #709
Well, that's not what I see. To be honest, I can't understand how Pi-hole would ever actually see 172.X.0.1 in incoming query packets.
Please see if you can spot a hole (no pun intended) in my reasoning. Make these assumptions: the client is 192.168.0.100, the Raspberry Pi is 192.168.0.60 on the LAN, and the Pi-hole container is 172.16.0.4 on Docker's internal bridged network. Then:

1. When the client issues a DNS query, the packet leaves with a source IP of 192.168.0.100 and a destination IP of 192.168.0.60.
2. When the packet arrives at the Pi, iptables rules set up by Docker rewrite (destination NAT) the destination address to 172.16.0.4, and the packet is then routed to Pi-hole. The source IP isn't changed, so Pi-hole sees 192.168.0.100 and can log that against the query.
3. The reply to the query has a source IP of 172.16.0.4 and a destination IP of 192.168.0.100.
4. The reply routes out of the internal bridged network to the Pi, where iptables rules rewrite (source NAT) the source address to 192.168.0.60, and the packet is then forwarded back to the client.

It's fairly easy to verify this:
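As a minimal sketch of the kind of capture that shows it (assuming `tcpdump` is installed on the Pi, that `eth0` is the Pi's LAN interface, and that the compose-created network's bridge is something like `br-xxxxxxxx` - check the real name with `ip link`):

```console
# On the Pi's physical interface: queries arrive addressed to 192.168.0.60,
# with the client's real source address (192.168.0.100) intact.
$ sudo tcpdump -ni eth0 udp port 53

# On the Docker bridge for the compose network: the destination has been
# rewritten to 172.16.0.4 but the source is still 192.168.0.100.
$ sudo tcpdump -ni br-xxxxxxxx udp port 53
```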
I have yet to spot the IP address of the default gateway on the internal bridged network (i.e. 172.16.0.1). All incoming queries originating from devices on my home network have source IP addresses matching 192.168.132.x. Outside container-space, you can also check from the host side:
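For example, as a sketch (assuming Docker's standard iptables-based NAT; chain names and rule layout vary a little between Docker versions), listing the NAT table shows both rewrites: the DNAT rule that maps port 53 on the host to the container, and the MASQUERADE rule applied to traffic leaving the bridged network:

```console
$ sudo iptables -t nat -L DOCKER -n -v        # DNAT: host port 53 -> 172.16.0.4:53
$ sudo iptables -t nat -L POSTROUTING -n -v   # MASQUERADE for the bridged subnet
```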
I see plenty of queries and replies - some answered directly by Pi-hole, others relayed to BIND9 or 8.8.8.8, etc. - but no sign of 172.16.x.x. It's possible that I see all my local clients because of how I have Pi-hole configured, so here's my service definition:
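A representative sketch of an IOTstack-style Pi-hole service definition looks something like this (the image tag, port mappings, volume paths and all the values shown are illustrative rather than the exact file; `FTLCONF_MAXDBDAYS`, `PIHOLE_DNS_` and the `REV_SERVER*` variables are genuine Pi-hole container environment variables):

```yaml
  pihole:
    container_name: pihole
    image: pihole/pihole:latest
    restart: unless-stopped
    environment:
      - TZ=Etc/UTC
      - WEBPASSWORD=changeme              # illustrative value only
      - FTLCONF_MAXDBDAYS=365
      - PIHOLE_DNS_=192.168.132.9         # upstream resolver (e.g. local BIND9); illustrative
      - REV_SERVER=true                   # conditional forwarding for reverse lookups
      - REV_SERVER_TARGET=192.168.132.9   # illustrative
      - REV_SERVER_CIDR=192.168.132.0/24  # illustrative
      - REV_SERVER_DOMAIN=home.arpa       # illustrative
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "8089:80/tcp"
    volumes:
      - ./volumes/pihole/etc-pihole:/etc/pihole
      - ./volumes/pihole/etc-dnsmasq.d:/etc/dnsmasq.d
```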
The way I use Pi-hole is like this. DHCP divvies-up clients into two buckets:
BIND9 is authoritative for my local domain. When a client directs a query to Pi-hole, there are three possible outcomes:
Because BIND9 can answer reverse queries, Pi-hole can translate local IP addresses to local domain names, which then show up in the GUI (the example above). From the Discourse post:
The "problem" of Docker allocating the subnet dynamically when it creates the default network can be "solved" like this:
Fairly obviously, you can pick any subnet and prefix length you like. Some people would rather let Docker make the choice (which is why I put "problem" in quotes) but I prefer predictability. |
By the way, the 10.244.124.118 in the client list is a remote device communicating via ZeroTier Cloud. My BIND9 instance does define a domain name for that but Pi-hole can't reverse resolve the IP to the name because the |
Thank you Phill for taking the time to give such a thorough answer! At the same time I must apologize for my low-quality issue report; a bit embarrassing really. For one, I did not mention that my IOTstack runs on a Mac. Your feedback, and that of the author of the issue I linked to, seem to confirm this is not an issue on the Pi ("running Pi-hole in a Raspberry Pi...without any issues"). So, I guess that other than maybe adding a hint to the IOTstack Pi-hole documentation for Mac users, nothing needs changing. |
Ah! Gotcha. My bad. But there's no need for any apologies, mate. Any question that results in new knowledge is worthwhile.

TL;DR: I agree that:
Now the TL bit. I did run Docker Desktop for Mac for a while but the darn thing kept giving me the screaming rhymes-with-fits. It would either not start, or not shut down cleanly, or crash, or would threaten me that this, that or the other would break if I didn't stand on one leg and whistle Waltzing Matilda in E minor. Eventually, I nuked it. That was about a year ago.

When I was running Docker Desktop for Mac, I mostly stuck with MING. I'm pretty sure I never tried any containers that needed privileged ports (e.g. Pi-hole) because it was my understanding it wouldn't work. So, read all that as some limited experience with the environment, but nothing directly related to the problem at hand. That said, I just installed it on my M2 MacBook Pro.
I agree that Pi-hole records the IP address of the default gateway for all clients. In the following, the querying host is 192.168.132.60 but, in the first line of
I think I see why. This is from inside the container:
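As a sketch of the kind of check (assuming a shell in the container; the stock Pi-hole image may need `traceroute` installed first, so treat the package step and the target address as illustrative):

```console
$ docker exec -it pihole bash
# apt-get update && apt-get install -y traceroute   # if not already present
# traceroute -n 192.168.132.60
```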
The traceroute back to the querying host has three hops. On a Pi, there are only two hops. The 192.168.65.0/24 subnet shows up in the Docker Desktop Settings»Resources»Network as "the resources network". Now there's a phrase to launch a thousand IP datagrams! My guess is that it represents another level of NAT. How the .5 fits in is a mystery.

Earlier, I said I had believed that privileged ports (anything less than 1024) wouldn't work. Because I never tried it before nuking everything, I can't say whether (a) that was true then and this additional level of NAT has come along since as the fix, or (b) this extra level of NAT was always present, there never was any restriction on containers using privileged ports, and my belief was misplaced. But this whole thing is really weird. Inside Pi-hole:
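As a sketch of the sort of look-around (assuming `iproute2` is present in the image, or installed the same way as above; run from a shell inside the container):

```console
# ip addr show    # interfaces: expect lo plus eth0 on the 172.16.x network
# ip route show   # expect a default route via 172.16.0.1
```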
No sign of anything other than the internal bridged network. If you've ever tried to run this sort of thing to earth on macOS, you'll know what comes next.
All in all, some seriously hinky things are going on here. When I searched for the PID associated with port 53, it arrived at:
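The search itself is along these lines (a sketch, assuming stock macOS `lsof` and `ps`; `<PID>` is whatever the first command reports):

```console
$ sudo lsof -nP -iTCP:53 -iUDP:53   # which PIDs have port 53 open?
$ ps -p <PID> -o pid,comm           # what process is behind that PID?
```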
The logical conclusion is that all this jiggery-pokery is going on inside Docker Desktop and it's all completely independent of macOS networking. I. Hate. Opaque. If this were a Pi, you could work around extra NAT shenanigans by placing the container into "host mode". But that doesn't work for containers in Docker Desktop for Mac. You can enable host mode. The containers come up. Neither Docker nor the containers report any errors or warnings. But the containers just aren't reachable. I. Hate. Silent. Fails.
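For reference, the compose change for host mode is a one-liner; a sketch (remembering that the `ports:` mappings must be removed because they're meaningless in host mode):

```yaml
  pihole:
    network_mode: host
    # no "ports:" section - the container binds directly to the host's interfaces
```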
|
I ran through the same set of commands as you did (thanks for those) and I see mostly identical results - but not 100%.
-> The traceroute back to the querying host 192.168.0.36 has two(!) hops. Without understanding that in detail I conclude that the behavior on Mac (and most likely Windows as well) is due to the fact that Docker doesn't run "natively" on those platforms but in a Linux VM instead. |
That traceroute is interesting. So, my tests were on an M2 MacBook Pro with Docker Desktop freshly downloaded yesterday, a clean clone of IOTstack, and the compose file fragments borrowed from the Pi. I haven't repeated the test on my Intel iMac. Are you doing this on Intel or Apple silicon? |
In terms of hardware/software versions our two setups are very different. I run the IOTstack on a late 2012 Mac Mini, Catalina, Docker Desktop 4.15. |
Hmmm. Well, mine says version 4.20.1. I've just repeated everything on my Intel iMac and I see three hops. I could just - barely - make sense of the observed behaviour (Pi-hole logging all clients as the default gateway) if I hypothesised extra NAT. And, based on that assumption, the 192.168.65.5 made sense as the vehicle for NAT. But now ... ? 🤷‍♂️

To answer your earlier (implied) question, traceroute works by issuing packets, starting with a time-to-live (aka "hop count") of 1, with each successive packet incrementing the initial hop count. As packets arrive at devices, the hop count is decremented. If the count reaches zero, the packet is returned to sender ("bounced"). It's the bounces that constitute the output. The actual purpose of the time-to-live is to break forwarding loops when routers are misconfigured; traceroute just takes advantage of that behaviour.

A device receiving a packet is either the intended recipient or is expected to be configured as a router. If it's neither, it's an error and the packet is dropped as undeliverable. If it's a router, the packet is forwarded to the next hop according to the device's routing table (in most cases that means following the default route). So any line in a traceroute, other than the last, tells you a couple of things: "here be a router", and the near-side (ingress) IP address of that router - but not the far-side (egress) IP address.

Plenty can go wrong with traceroutes because a lot of sites think they're improving security by blocking the packets, so you often see holes in the output. Parallel paths can also confuse traceroute a bit, particularly if two paths to a destination involve different hop counts. |
No question about traceroute but thanks anyway 😄 My old Docker version is dictated by the old hardware: old Mini -> old OS -> old Docker. |
- Applies changes recommended by `yamllint`.
- Adds `FTLCONF_MAXDBDAYS` to the template with the existing default of 365 days, and adds words to the documentation explaining its use.
- Adds `PIHOLE_DNS_` with the existing defaults of 8.8.8.8 and 8.8.4.4, and adds words and a screenshot to the documentation explaining its use.
- Explains the issue with Docker Desktop for macOS raised in SensorsIot#709, where all client activity is logged against the default gateway on the internal bridged network.
- Tidies-up problematic code-fence language specs.
- Shortens some subheadings so they fit on a single line in the Wiki TOC.

Fixes SensorsIot#709

Signed-off-by: Phill Kelley <34226495+Paraphraser@users.noreply.github.com>
Just in case anyone follows this all the way down here, here's a related Pi-hole issue: pi-hole/docker-pi-hole#135 (goes back to 2017). |
And, on this same theme (people making it all the way to the end of this issue), the TL;DR bit of my post from June 2023 said it would be a good idea to document the behaviour. I submitted a PR to do that. This is what you see here: |
Since Pi-hole runs in a Docker container on the default Docker network, it identifies all clients in its logs as 172.X.0.1.
This is discussed in a number of places e.g. https://discourse.pi-hole.net/t/pi-hole-in-docker-on-macos-has-only-one-client/55875