FAQ
The imq device has two common use cases:
• With Linux, only egress shaping is possible (the ingress qdisc can only do rate limiting/policing). IMQ enables you to use egress qdiscs for real ingress shaping (see the sketch after this list).
• Qdiscs get attached to devices, so one qdisc can only handle traffic going through the interface it is attached to. Sometimes it is desirable to have global limits spanning multiple interfaces. With IMQ you can use iptables to specify which packets the qdisc sees, so global limits can be placed.
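For instance, a minimal ingress-shaping sketch (the interface name eth0, the 2mbit rate, and the HTB handles are illustrative assumptions, not from the FAQ):

# Bring up the imq device and attach an egress qdisc to it
ip link set imq0 up
tc qdisc add dev imq0 root handle 1: htb default 10
tc class add dev imq0 parent 1: classid 1:10 htb rate 2mbit
# Divert traffic arriving on eth0 to imq0, where the qdisc shapes it
iptables -t mangle -A PREROUTING -i eth0 -j IMQ --todev 0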
It seems to be pretty stable; a lot of people are using it without problems. There is one case which is not entirely clear at this time: enqueueing packets going to a GRE tunnel and also enqueueing the encapsulated packets to the same imq device results in the kernel assuming the GRE device is deadlooped.
Another thing to note is that touching locally generated traffic may cause problems.
The imq device registers NF_IP_PRE_ROUTING (for ingress) and NF_IP_POST_ROUTING (egress) netfilter hooks. These hooks are also registered by iptables. Hooks can be registered with different priorities which determine the order in which the registered functions will be called. Packet delivery to the imq device in NF_IP_PRE_ROUTING happens directly after the mangle table has been passed (not in the table itself!). In NF_IP_POST_ROUTING packets reach the device after ALL tables have been passed. This means you will be able to use netfilter marks for classifying incoming and outgoing packets.
Packets seen in NF_IP_PRE_ROUTING include the ones that will be dropped by packet filtering later (since they already occupied bandwidth), in NF_IP_POST_ROUTING only packets which already passed packet filtering are seen.
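As a hedged sketch of mark-based classification (continuing the HTB setup from the sketch above; the mark value, class ID, rate, and port are assumptions):

# Mark incoming web traffic in the mangle table, before delivery to imq0
iptables -t mangle -A PREROUTING -i eth0 -p tcp --sport 80 -j MARK --set-mark 2
iptables -t mangle -A PREROUTING -i eth0 -j IMQ --todev 0
# A class and an fw filter on imq0 that classify by the netfilter mark
tc class add dev imq0 parent 1: classid 1:20 htb rate 512kbit
tc filter add dev imq0 parent 1: protocol ip prio 1 handle 2 fw classid 1:20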
You get these messages in your kernel log:
kernel: ip_queue: initialisation failed: unable to create queue
kernel: ip6_queue: initialisation failed: unable to create queue
The imq device feeds itself packets through the netfilter queueing mechanism. At the moment there can only be one netfilter queue per protocol family, so this means imq registered first and ip(6)_queue cannot register as the PF_INET(6) netfilter queue.
You get lots of these messages in your kernel log:
kernel: nf_hook: Verdict = QUEUE.
You have compiled your kernel with CONFIG_NETFILTER_DEBUG=y. Turn it off to get rid of these messages.
iptables fails with:
iptables v1.2.6a: Couldn't load target IMQ:/usr/local/lib/iptables/libipt_IMQ.so: cannot open shared object file: No such file or directory
You haven't patched/rebuilt/installed iptables correctly. The iptables IMQ target shared libraries are only built if your kernel tree has first been patched (using patch-o-matic) to include the IMQ target. If you took the precompiled shared libraries, you haven't copied them to the right place.
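To check whether the shared object is where iptables looks for it (the path is the one from the error message above):

ls -l /usr/local/lib/iptables/libipt_IMQ.so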
You can safely use IMQ as a module:
Under 2.4, compiling (and using) IMQ as a dynamically loadable kernel module is perfectly fine and heavily tested.
Under 2.6, if you use patch linux-2.6.7-imq1 or a newer one, you are OK; modules work fine.
If you are still using older patches, there were these problems:
• compiling the IMQ device driver (imq.o) as a kernel module caused the kernel compilation to stop with an error; this issue was solved ages ago
• unloading (rmmod'ing) the IMQ device driver module (imq.o) caused a kernel panic; this issue has been solved as well
• when used as a kernel module, the IMQ driver did nothing; this issue came from the previous one (the kernel panic when unloading), and has been solved as well
We recommend not using any IMQ patch earlier than linux-2.6.7-imq1 (for 2.6 kernels) or linux-2.4.26-imq.diff (for 2.4 kernels). (These patches also apply to earlier kernel versions than their names suggest.)
Note that if you ever need more than 16 devices, there is no other option than to dive into the source code, because there is a hard-coded limit: #define IMQ_MAX_DEVS 16.
Remember, the number of imq devices (imq0, imq1, ...) can only be set before the IMQ device driver is initialized.
That means that if you use IMQ as
• compiled into the kernel, you can pass (via your bootloader) the kernel parameter imq.numdevs=n (without spaces around the equals sign), where n is the number of devices you want
• a loadable kernel module, you can pass modprobe the parameter numdevs=n (without spaces around the equals sign), where n is the number of devices you want; remember that you first have to unload the module to load it with a different numdevs parameter
You can safely initialize IMQ with more devices than are actually used; the only disadvantage is a few dozen bytes of kernel memory allocated per device. See the example below.
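For example (the device count of 4 is an arbitrary choice):

# As a module: unload first, then reload with the new device count
rmmod imq
modprobe imq numdevs=4
# Compiled into the kernel: append imq.numdevs=4 to the kernel command line
# in your bootloader configuration instead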
The default behaviour is that in PREROUTING, IMQ sees the packets before NAT, and in POSTROUTING, IMQ sees packets after NAT.
With the default behaviour, on a NATing (masquerading) machine you should set up your IMQ devices as follows, so that QoS u32 filters see packets before NAT (that is, they can classify according to private IP addresses).
If eth1 is the interface facing the network with private addresses, then you should say:
iptables -t mangle -A PREROUTING -i eth1 -j IMQ --todev 0
iptables -t mangle -A POSTROUTING -o eth1 -j IMQ --todev 1
Now QoS u32 filters attached to imq0 and imq1 see the private IP addresses of packets originating from, or going to, your private network, respectively.
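Building on the two rules above, a hedged sketch of a u32 filter classifying by private addresses (the 192.168.0.0/24 network, host, rate, and class IDs are assumptions):

# imq1 carries traffic going out eth1 to the private network (POSTROUTING)
ip link set imq1 up
tc qdisc add dev imq1 root handle 1: htb default 20
tc class add dev imq1 parent 1: classid 1:10 htb rate 1mbit
# Classify downloads to one host by its private destination address
tc filter add dev imq1 parent 1: protocol ip prio 1 u32 match ip dst 192.168.0.10 flowid 1:10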
8. Can I change the way IMQ hooks in netfilter? (Can I make IMQ see packets in PREROUTING/POSTROUTING before/after (de)NAT?)
Yes, you can.
With 2.6 series kernels, you can easily change it at kernel configuration time (look under Device Drivers / Networking support / IMQ behavior (PRE/POSTROUTING)).
With 2.4 series kernels, currently the only way is to edit the source, and recompile your kernel.
To hook after NAT in PREROUTING: find the priority member (the last member) of the imq_ingress_ipv4 structure in linux/drivers/net/imq.c, and change it from NF_IP_PRI_MANGLE + 1 to NF_IP_PRI_NAT_DST + 1. Similarly, change the priority member (also the last member) of imq_ingress_ipv6 from NF_IP6_PRI_MANGLE + 1 to NF_IP6_PRI_NAT_DST + 1.
To hook before NAT in POSTROUTING: find the priority member (the last member) of the imq_egress_ipv4 structure in linux/drivers/net/imq.c, and change it from NF_IP_PRI_LAST to NF_IP_PRI_NAT_SRC - 1. Similarly, change the priority member (also the last member) of imq_egress_ipv6 from NF_IP6_PRI_LAST to NF_IP6_PRI_NAT_SRC - 1.
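The same edits can be scripted; this is only a hedged sketch that assumes the constants appear exactly once each in imq.c with this exact spacing, so inspect the file first:

# PREROUTING: hook after NAT instead of after mangle (IPv4 and IPv6)
sed -i 's/NF_IP_PRI_MANGLE + 1/NF_IP_PRI_NAT_DST + 1/' linux/drivers/net/imq.c
sed -i 's/NF_IP6_PRI_MANGLE + 1/NF_IP6_PRI_NAT_DST + 1/' linux/drivers/net/imq.c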
Make sure you do not send locally generated traffic (traffic generated by userspace programs or the kernel itself, e.g. GRE or IPsec tunnels) into the IMQ device; there is a known bug affecting 2.6-series kernels.
Make sure you use the latest stable patches and a sensibly recent kernel, and that the patch matches the kernel.
Kernel compilation (make) ends with:
CC net/ipv4/netfilter/ipt_IMQ.o
net/ipv4/netfilter/ipt_IMQ.c: In function `imq_target':
net/ipv4/netfilter/ipt_IMQ.c:19: error: structure has no member named `imq_flags'
make[3]: *** [net/ipv4/netfilter/ipt_IMQ.o] Error 1
make[2]: *** [net/ipv4/netfilter] Error 2
make[1]: *** [net/ipv4] Error 2
make: *** [net] Error 2
Solution: enable CONFIG_IMQ in your kernel .config.
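You can verify the option before rebuilding (run from the top of your kernel source tree):

grep CONFIG_IMQ .config
# Expect CONFIG_IMQ=y (built in) or CONFIG_IMQ=m (module)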
The patch does not actively dispatch packets to separate CPUs; it depends on either RPS or a multi-IRQ NIC to get packets running through multiple CPUs. Multiple queues on IMQ then try to avoid serialization on a single qdisc lock.
This should work best combined with the new RPS & RFS features of 2.6.35.
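For example, RPS can be enabled per RX queue through sysfs (the device name and CPU mask here are assumptions):

# Let CPUs 0-3 process packets from eth0's first RX queue
echo f > /sys/class/net/eth0/queues/rx-0/rps_cpus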
Script for multi-queue setups: https://github.com/imq/linuximq/blob/master/kernel/v2.6-multiqueue/load-imq-multiqueue.sh