ipt_NETFLOW linux 2.6.x-5.x kernel module by <abc@openwall.com> -- 2008-2021.
High performance NetFlow v5, v9, IPFIX flow data export module for Linux
kernel. Created to be useful for linux routers in high-throughput networks.
It is designed to be used as an iptables target.
=========================
= Detailed Feature List =
=========================
* High performance and scalability. For the highest performance the module can
be run without conntrack enabled in the kernel. It has been reported to handle
10Gbit traffic at more than 1,500,000 pps with negligible server load (on an
S5500BC).
* NetFlow v5, v9, and IPFIX are fully supported.
Support of v9/IPFIX adds flexibility to flow data export plus greater
visibility of traffic, allowing many additional fields to be exported beyond
what was possible in the v5 era. Such as
* IPv6 option headers, IPv4 options, TCP options, Ethernet type, dot1q
service and customer VLAN ids, MAC addresses, and
* Full IPv6 support.
* NAT translation events (from conntrack) using NetFlow Event Logging (NEL).
This is the standardized way for v9/IPFIX, but the module exports such events
even for v5 collectors via specially crafted pseudo-records.
* Deterministic (systematic count-based), random and hash Flow Sampling.
With appropriate differences in support of v5, v9, and IPFIX.
* SNMP agent (for net-snmp) for remote management and monitoring.
* Options Templates (v9/IPFIX) allow exporting useful statistical,
configurational, and informational records to the collector,
such as metering, exporting, and sampling statistics, reliability statistics,
sampling configuration, and the network devices' ifName and ifDescr lists.
* Tested to compile and work out of the box on CentOS 6 and 7, Debian, and
Ubuntu. Many vanilla Linux kernels from 2.6.18 up to the latest (3.19 as of
this writing) are supported and tested.
* Module load time and run-time (via sysctl) configuration.
* Flexibility in enabling features via the ./configure script. This will let you
disable features you don't need, which increases compatibility with custom
kernels and improves performance.
* SNMP-index translation rules let you convert meaningless and unstable
interface indexes (ifIndex) into a more meaningful numbering scheme.
* Easy support for catching mirrored traffic with the promisc option, which
also supports optional MPLS decapsulation and MPLS-aware NetFlow.
============================
= OBTAINING LATEST VERSION =
============================
$ git clone git://github.com/aabc/ipt-netflow.git ipt-netflow
$ cd ipt-netflow
================
= INSTALLATION =
================
Five easy steps.
** 1. Prepare Kernel source
If you have a package system, install the kernel-devel package; otherwise
install raw kernel source from http://kernel.org matching _exactly_ the
version of your installed kernel.
a) What to do for Centos:
~# yum install kernel-devel
b) What to do for Debian and Ubuntu:
~# apt-get install module-assistant
~# m-a prepare
c) Otherwise, if you downloaded raw kernel sources, don't forget to create
.config by copying it from your distribution's kernel. A copy may reside
in /boot or sometimes in /proc, for example:
kernel-src-dir/# cp /boot/config-`uname -r` .config
or
kernel-src-dir/# zcat /proc/config.gz > .config
Assuming you unpacked kernel source into `kernel-src-dir/' directory.
Then run:
kernel-src-dir/# make oldconfig
After that you'll need to prepare kernel for modules build:
kernel-src-dir/# make prepare modules_prepare
Note: Don't try to `make prepare' in Centos kernel-devel package directory
(which is usually something like /usr/src/kernels/2.6.32-431.el6.x86_64)
as this is wrong and meaningless.
** 2. Prepare Iptables
Before this step it would also be useful to install pkg-config if you don't
already have it.
If you have a package system just install the iptables-devel package (or, on
Debian, Ubuntu and derivatives, libxtables-dev if available, otherwise
iptables-dev); otherwise install iptables source matching the version of your
installation from ftp://ftp.netfilter.org/pub/iptables/
a) What to do for Centos:
# yum install iptables-devel
b) What to do for Debian or Ubuntu:
# apt-get install iptables-dev pkg-config
c) Otherwise, for raw iptables source build it and make install.
** 3. Prepare net-snmp (optional)
In case you want to manage or monitor module performance via SNMP you
may install net-snmp. If you want to skip this step run configure
with --disable-snmp-agent option.
a) For Centos:
# yum install net-snmp net-snmp-devel
b) For Debian or Ubuntu:
# apt-get install snmpd libsnmp-dev
c) Otherwise install net-snmp from www.net-snmp.org
** 4. Now, to actually build the module run:
~/ipt-netflow# ./configure
~/ipt-netflow# make all install
~/ipt-netflow# depmod
This will install kernel module and iptables specific library.
Troubleshooting:
a) Sometimes you will want to add CC=gcc-3 to the make command.
Example: make CC=gcc-3.3
b) Compile the module against kernel source that has actually been built.
I.e. first compile the kernel and boot into it, and only then compile the module.
If you are using kernel-devel package check that its version matches
your kernel package.
c) If you have sources in non-standard places or configure isn't able to
find something run ./configure --help to see how to specify paths manually.
d) To run irqtop on Debian 8 you may need to install:
# apt-get install ruby ruby-dev ncurses-dev
# gem install curses
z) If all else fails, create a ticket at
https://github.com/aabc/ipt-netflow/issues
** 5. After this point you should be able to load module and
use -j NETFLOW target in your iptables. See next section.
=====================
= Configure Options =
=====================
The configure script allows you to enable or disable optional features:
--enable-natevents
enables natevents (NEL) support (this option requires conntrack
support to be enabled in the kernel and the conntrack module
(nf_conntrack) to be loaded before ipt_NETFLOW. Usually this is
done automatically because of `depmod', but if you don't do `make
install' you'll need to load nf_conntrack manually).
Read below for an explanation of natevents.
--enable-sampler
enables flow sampler. Read below for explanation of its configuration
option.
--enable-sampler=hash
additionally enables 'hash' sampler.
--disable-snmp-agent
disables building net-snmp agent module, which is enabled by default.
--enable-snmp-rules
enables SNMP-index conversion rules. Read below for explanation
of snmp-rules.
--enable-macaddress
enables exporting of src and dst MAC addresses for every flow
in v9/IPFIX. A difference in either MAC address will be accounted
as a different flow, i.e. MAC addresses become part of the flow key.
--enable-vlan
enables exporting of dot1q VLAN Ids and Priorities for every flow
in v9/IPFIX. It supports outer and second dot1q tags if present.
Either of the two previous options will also enable exporting of the
Ethernet Packet Type, ethernetType(256).
--enable-direction
enables exporting of flowDirection(61) Element for v9/IPFIX.
Packets captured in PREROUTING and INPUT chains will be accounted as
ingress flows(0), in OUTPUT and POSTROUTING as egress flows(1), and
in FORWARD will have flowDirection set to undefined value 255.
--enable-aggregation
enables aggregation rules. Read below for explanation of aggregation.
--disable-dkms
disable creating dkms.conf and auto-install module into DKMS tree.
--disable-dkms-install
only disable auto-install into DKMS, but still create dkms.conf, in
case you will want to install it manually.
--enable-physdev
Export ingressPhysicalInterface(252) and egressPhysicalInterface(253)
(relevant for bridges) in V9 and IPFIX. If your collector does not
support these Elements but you still need physdevs then use
--enable-physdev-override, in that case physdevs will override normal
interface numbers ingressInterface(10) and egressInterface(14).
--enable-promisc
Enables capturing of promiscuous packets into raw/PREROUTING chain.
See README.promisc Solution 1 for usage details and example.
--promisc-mpls
Enables MPLS label stack decapsulation for promiscuous packets. (For
IPv4 and IPv6 packets only). This also enables MPLS-aware NetFlow (v9
and IPFIX); you may wish to specify with --promisc-mpls=n how many MPLS
labels you want to be recorded and exported (default is 3, maximum is
10, set to 0 to not report anything).
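For example, a configure invocation combining several of the optional features
described above (pick only the ones you actually need) might look like this:
~/ipt-netflow# ./configure --enable-natevents --enable-vlan --enable-sampler=hash
~/ipt-netflow# make all install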
===========
= RUNNING =
===========
1. You can load module directly by insmod like this:
# insmod ipt_NETFLOW.ko destination=127.0.0.1:2055 debug=1
Or if properly installed (make install; depmod) by this:
# modprobe ipt_NETFLOW destination=127.0.0.1:2055
As you can see, you may add options on the insmod/modprobe command line, or
put them in /etc/modprobe.conf or /etc/modprobe.d/ipt_NETFLOW.conf
like this:
options ipt_NETFLOW destination=127.0.0.1:2055 protocol=9 natevents=1
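To quickly verify that the module is actually loaded you can use standard
tools, for example:
# lsmod | grep ipt_NETFLOW
# dmesg | tail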
2. Statistics are in /proc/net/stat/ipt_netflow
Machine-readable statistics are in /proc/net/stat/ipt_netflow_snmp
To view boring slab statistics: grep ipt_netflow /proc/slabinfo
A dump of all flows is in /proc/net/stat/ipt_netflow_flows
3. You can view parameters and control them via sysctl, example:
# sysctl net.netflow
# sysctl net.netflow.hashsize=32768
Note: For after-reboot configuration I recommend storing module parameters
in modprobe configs instead of /etc/sysctl.conf, as it's
less clear when the init process will apply sysctl.conf, before or after
the module is loaded.
4. Example of directing all IPv4 traffic into the module:
# iptables -I FORWARD -j NETFLOW
# iptables -I INPUT -j NETFLOW
# iptables -I OUTPUT -j NETFLOW
Note: It is preferable (because easier to understand) to _insert_
NETFLOW target at the top of the chain, otherwise not all traffic may
reach NETFLOW if your iptables configuration is complicated and some
other rule inadvertently consumes the traffic (dropping or accepting it before
NETFLOW is reached). It's always good to test your configuration.
Use iptables -L -nvx to check pkts/bytes counters on the rules.
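You don't have to send all traffic to the module; NETFLOW can be combined with
ordinary iptables matches. A purely illustrative example that meters only
traffic to and from the 192.0.2.0/24 subnet:
# iptables -I FORWARD -s 192.0.2.0/24 -j NETFLOW
# iptables -I FORWARD -d 192.0.2.0/24 -j NETFLOW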
5. If you want to account IPv6 traffic you should use protocol 9 or 10.
Example of directing all IPv6 traffic into the module:
# sysctl net.netflow.protocol=10
# ip6tables -I FORWARD -j NETFLOW
# ip6tables -I INPUT -j NETFLOW
# ip6tables -I OUTPUT -j NETFLOW
Note: First enable the right protocol version and only then add the ip6tables
rules, otherwise you will get errors in dmesg.
6. If you want to account NAT events (NEL):
# sysctl net.netflow.natevents=1
Note that the natevents feature is completely independent from traffic accounting
(it uses so-called conntrack events), thus you don't need to set or change
any iptables rules to use it. You may need to enable the kernel config option
CONFIG_NF_CONNTRACK_EVENTS though (if it isn't already enabled).
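To check whether your running kernel has conntrack events support you can look
at its config, for example (the config path may differ between distributions):
# grep CONFIG_NF_CONNTRACK_EVENTS /boot/config-`uname -r`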
For details on how they are exported for different protocol versions see
below.
7. For SNMP support you will need to add this command into snmpd.conf to
enable IPT-NETFLOW-MIB in SNMP agent:
dlmod netflow /usr/lib/snmp/dlmod/snmp_NETFLOW.so
Restart snmpd for changes to take effect. Don't forget to properly configure
access control. The simplest example configuration may look like this (note that
this is the whole /etc/snmp/snmpd.conf):
rocommunity public 127.0.0.1
dlmod netflow /usr/lib/snmp/dlmod/snmp_NETFLOW.so
Note that this config will also allow _full_ read-only access to the whole
Linux MIB. To install IPT-NETFLOW-MIB locally, copy the file IPT-NETFLOW-MIB.my
into ~/.snmp/mibs/
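For example, from the module's source directory:
$ mkdir -p ~/.snmp/mibs
$ cp IPT-NETFLOW-MIB.my ~/.snmp/mibs/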
A detailed example of SNMP configuration is here:
https://github.com/aabc/ipt-netflow/wiki/Configuring-SNMP-access
To check that the MIB is installed properly you may issue:
$ snmptranslate -m IPT-NETFLOW-MIB -IR -Tp iptNetflowMIB
This should output IPT-NETFLOW-MIB in tree form.
To check that snmp agent is working well issue:
$ snmpwalk -v 1 -c public 127.0.0.1 -m IPT-NETFLOW-MIB iptNetflowMIB
Should output full MIB. If MIB is not installed try:
$ snmpget -v 1 -c public 127.0.0.1 .1.3.6.1.4.1.37476.9000.10.1.1.1.1.0
Which should output STRING: "ipt_NETFLOW".
The MIB provides access to statistics very similar to those in
/proc/net/stat/ipt_netflow; you can read the description of the objects in the
text file IPT-NETFLOW-MIB.my.
If you want to access SNMP stats in machine-readable form for your
scripts, there is the file /proc/net/stat/ipt_netflow_snmp.
Note: Using SNMP v2c or v3 is mandatory for most tables, because
this MIB uses 64-bit counters (Counter64), which are not supported in old
SNMP v1. You should understand that a 32-bit counter will wrap on 10Gbit
traffic in just 3.4 seconds! So, always pass option `-v2c' or `-v3'
to the net-snmp utils. Or, for example, configure the option `defVersion 2c'
in ~/.snmp/snmp.conf. You can also have `defCommunity public' or v3
auth parameters (defSecurityName, defSecurityLevel, defPassphrase)
set there (man snmp.conf).
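As an illustration, a minimal ~/.snmp/snmp.conf with these defaults could
contain just:
defVersion 2c
defCommunity public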
Examples for dumping typical IPT-NETFLOW-MIB objects:
- Module info (similar to modinfo, SNMPv1 is ok for following two objects):
$ snmpwalk -v 1 -c public 127.0.0.1 -m IPT-NETFLOW-MIB iptNetflowModule
- Read-write sysctl-like parameters (yes, they are writable via snmpset, you
may need to configure write access to snmpd, though):
$ snmpwalk -v 1 -c public 127.0.0.1 -m IPT-NETFLOW-MIB iptNetflowSysctl
- Global performance stat of the module (note -v2c, because rest of the
objects require SNMP v2c or SNMP v3):
$ snmpwalk -v2c -c public 127.0.0.1 -m IPT-NETFLOW-MIB iptNetflowTotals
- Per-CPU (metering) and per-socket (exporting) statistics in table format:
$ snmptable -v2c -c public 127.0.0.1 -m IPT-NETFLOW-MIB iptNetflowCpuTable
$ snmptable -v2c -c public 127.0.0.1 -m IPT-NETFLOW-MIB iptNetflowSockTable
===========
= OPTIONS =
===========
Options can be passed as parameters to module or changed dynamically
via sysctl net.netflow or IPT-NETFLOW-MIB::iptNetflowSysctl
protocol=5
- what version of NetFlow protocol to use. Default is 5.
You can choose from 5, 9, or 10 (where 10 is IPFIX). If you plan
to account IPv6 traffic you should use protocol 9 or 10 (IPFIX),
because NetFlow v5 isn't compatible with IPv6.
destination=127.0.0.1:2055
- the IP address to export netflow to. Port is optional, default
is 2055. You will see this connection in netstat like this:
udp 0 0 127.0.0.1:32772 127.0.0.1:2055 ESTABLISHED
destination=[2001:db8::1]:2055
- export target using IPv6 address. Brackets are optional, but otherwise
you should delimit port with 'p' or '#' character.
destination=127.0.0.1:2055,192.0.0.1:2055
- mirror flows to two (or more) addresses; separate addresses
with commas.
destination=127.0.0.1:2055@127.0.0.2
- bind socket to address (127.0.0.2).
destination=127.0.0.1:2055%eth0
- bind socket to interface (eth0). May be useful for multi-homed boxes.
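Since destination is also available as a sysctl (net.netflow.destination),
the export target can be changed at runtime without reloading the module;
a purely illustrative example:
# sysctl net.netflow.destination=192.0.2.10:2055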
sampler=deterministic:123
sampler=random:123
sampler=hash:123
- enables Flow Sampling. To disable set to the empty value or to `0'.
Note, that this is flow sampling (as of RFC 7014), not packet
sampling (PSAMP).
There are three sampling modes:
deterministic: select each N-th observed flow; in IPFIX this mode
is called Systematic count-based Sampling;
random: select randomly one out of N flows.
hash: select hash-randomly one out of N flows.
Number after colon is population size N, with valid values 2-16383.
(This 16383 limit is for compatibility with NetFlow v5.)
Using 'deterministic' or 'random' sampling will not reduce resource
usage caused by the module, because flows are sampled late in the exporting
process. It does reduce the amount of flows which go to the collector,
thus reducing load on the collector.
On the other hand, using 'hash' sampling will reduce CPU and memory
load caused by the module, because flows are discarded early in the
processing chain. They are discarded almost as in the random sampler,
except that the pseudo-random value depends on the Flow Key hash of each
packet.
All required NetFlow/IPFIX information to signal use of sampling is
also sent to the collector. 'Hash' sampling will be presented as 'random'
sampling to the collector, because of their similarity.
Note, that Flow Sampling is compatible with NetFlow v5, v9, and IPFIX.
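Assuming the module was built with --enable-sampler, sampling can also be
switched at runtime via sysctl; for example, to select one of 100 flows
randomly, and then to turn sampling off again:
# sysctl net.netflow.sampler=random:100
# sysctl net.netflow.sampler=0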
natevents=1
- Collect and send NAT translation events as NetFlow Event Logging (NEL)
for NetFlow v9/IPFIX, or as dummy flows compatible with NetFlow v5.
Default is 0 (don't send).
For the NetFlow v5 protocol the meaning of fields in dummy flows is as follows:
Src IP, Src Port is Pre-nat source address.
Dst IP, Dst Port is Post-nat destination address.
- These two fields are made to match the data flows caught in the FORWARD chain.
Nexthop, Src AS is Post-nat source address for SNAT. Or,
Nexthop, Dst AS is Pre-nat destination address for DNAT.
TCP Flags is SYN+ACK for the start event, RST+FIN for the stop event.
Pkt/Traffic size is 0 (zero), so it won't interfere with accounting.
Natevents support is disabled at compile time by default; to enable it you will
need to add the --enable-natevents option to the ./configure script.
For technical description of NAT Events see:
http://tools.ietf.org/html/draft-ietf-behave-ipfix-nat-logging-04
inactive_timeout=15
- export flow after it's inactive for 15 seconds. Default value is 15.
active_timeout=1800
- export flow after it's active for 1800 seconds (30 minutes). Default
value is 1800.
refresh-rate=20
- for NetFlow v9 and IPFIX, how frequently to re-send templates (counted
in packets). You probably don't need to change the default (which is 20).
timeout-rate=30
- for NetFlow v9 and IPFIX, how often to re-send old templates (in
minutes). No need to change it.
debug=0
- debug level (none).
sndbuf=number
- size of the output socket buffer in bytes. I recommend a higher
value if you experience netflow packet drops (visible in statistics
as the 'sock: fail' number).
Default value is system default.
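For instance, a larger buffer could be requested at load time via the usual
modprobe config (the 4 MB figure here is purely illustrative):
options ipt_NETFLOW destination=127.0.0.1:2055 sndbuf=4194304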
hashsize=number
- Hash table bucket size. Used for performance tuning.
Abstractly speaking, it should be at least twice the number of flows
you usually have, but it doesn't need to be.
The default is a reasonably small value that depends on system memory.
maxflows=2000000
- Maximum number of flows to account. It's here to prevent DOS attacks.
After this limit is reached new flows will not be accounted. Default is
2000000, zero is unlimited.
aggregation=string..
- A list of aggregation rules.
The buffer for the aggregation string is 1024 bytes, and sysctl limits it
to ~700 bytes, so don't put too much there.
Rules are processed in definition order for each packet, which is another
reason not to define too many.
Rules are applied to both directions (dst and src).
Rules are tried until the first match, but netmask and port
aggregations are matched separately.
Delimit rules with commas.
Rules are of two kinds: netmask aggregation
and port aggregation:
a) Netmask aggregation example: 192.0.0.0/8=16
This means: strip addresses matching subnet 192.0.0.0/8 to /16.
b) Port aggregation example: 80-89=80
This means: replace ports from 80 to 89 with 80.
Full example:
aggregation=192.0.0.0/8=16,10.0.0.0/8=16,80-89=80,3128=80
Aggregation support is enabled by default; if you feel you don't need it
you may add --disable-aggregation to the ./configure script.
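Aggregation rules can also be set at runtime via sysctl; an illustrative
example:
# sysctl net.netflow.aggregation=192.0.0.0/8=16,80-89=80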
snmp-rules=string...
- A list of SNMP-index conversion rules similar to fprobe-ulog's.
Quoting man fprobe-ulog:
"Comma separated list of interface name to SNMP-index conversion
rules. Each rule consists of interface base name and SNMP-index
base separated by colon (e.g. ppp:200). Final SNMP-index is sum
of corresponding SNMP-index base and interface number.
In the above example SNMP-index of interface ppp11 is 211.
If interface name did not fit to any of conversion rules then
SNMP-index will be taken from kernel."
This implementation isn't optimized for performance (no rule caching
or hashing), but should be fast if the rules list is short.
Rules are parsed in order from first to last until first match.
snmp-rules support is disabled at compile time by default; to enable it you
will need to add the --enable-snmp-rules option to the ./configure script.
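As an illustration, a hypothetical ruleset numbering ppp interfaces from 200
and eth interfaces from 100 could be passed at load time like this:
options ipt_NETFLOW snmp-rules=ppp:200,eth:100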
scan-min=1
- Minimal interval between flow export scans. It can sometimes be useful
to reduce load on the exporting CPU by increasing this interval. The value is
in kernel jiffies (which is x/HZ seconds).
promisc=1
- Enables promisc hack. See README.promisc Solution 1 for details.
exportcpu=number
- Lock the exporter to a single CPU. This may be useful for fine control of
CPU load. Common use case: with smp_affinity and RSS you spread packet
processing over all CPUs except one, and lock that one to the exporter. While
the exporter CPU load generally is not high, some may find it undesirable
to combine it with packet processing on very highly loaded routers.
This option could be changed at runtime with:
# echo number > /sys/module/ipt_NETFLOW/parameters/exportcpu
engine_id=number
- Observation Domain ID (on IPFIX, Source Id on NetFlow v9, or Engine Id
on NetFlow v5) value to be exported. This may help your collector to
distinguish between multiple exporters. On NetFlow v9 and IPFIX this
value is 32-bit; on NetFlow v5 only the 8 low bits are significant.
Default value is 0.
This option could be changed at runtime with:
# echo number > /sys/module/ipt_NETFLOW/parameters/engine_id
====================
= HOW TO READ STAT =
====================
Statistics are your friend for fine-tuning and understanding netflow module
performance.
To see stat in human readable form:
# cat /proc/net/stat/ipt_netflow
How to interpret the data:
> ipt_NETFLOW version v1.8-122-gfae9d59-dirty, srcversion 6141961152BE0DFA6A21EF4; aggr mac vlan
This line helps to identify the actual source your module is built from.
Please always supply it in bug reports.
v1.8-122: 1.8 is release, 122 is commit number after release;
-gfae9d59: fae9d59 is short git commit id;
-dirty: if present, meaning that git detected that sources are changed since
last git commit, you may wish to do `git diff' to view changes;
srcversion 6141961152BE0DFA6A21EF4: binary version of module, you can
compare this with data from `modinfo ./ipt_NETFLOW.ko' to identify
actual binary loaded;
aggr mac vlan: tags to identify compile time options that are enabled.
> Protocol version 10 (ipfix), refresh-rate 20, timeout-rate 30, (templates 2, active 2). Timeouts: active 5, inactive 15. Maxflows 2000000
Protocol version currently in use. Refresh-rate and timeout-rate
for v9 and IPFIX. Total templates generated and currently active.
Timeout: active X: how many seconds to wait before exporting an active flow.
- same as the sysctl net.netflow.active_timeout variable.
inactive X: how many seconds to wait before exporting an inactive flow.
- same as the sysctl net.netflow.inactive_timeout variable.
Maxflows 2000000: maxflows limit.
- all flows above the maxflows limit are dropped.
- you can control the maxflows limit with the sysctl net.netflow.maxflows variable.
> Promisc hack is disabled (observed 0 packets, discarded 0).
observed n: To see that promisc hack is really working.
> Natevents disabled, count start 0, stop 0.
- Natevents mode disabled or enabled, and how many start and stop events
have been reported.
> Flows: active 5187 (peak 83905 reached 0d0h1m ago), mem 283K, worker delay 100/1000 (37 ms, 0 us, 4:0 0 [3]).
active X: currently active flows in memory cache.
- for optimum CPU performance it is recommended to set the hash table size to
at least twice the average of this value, or higher.
peak X reached Y ago: peak value of active flows.
mem XK: how many kilobytes of memory are currently taken by active flows.
- one active flow takes 56 bytes of memory.
- there is a system limit on cache size too.
worker delay X/HZ: how frequently the exporter scans the flow table (expressed
as a fraction of a second).
Rest is boring debug info.
> Hash: size 8192 (mem 32K), metric 1.00, [1.00, 1.00, 1.00]. InHash: 1420 pkt, 364 K, InPDU 28, 6716.
Hash: size X: current hash size/limit.
- you can control this with the sysctl net.netflow.hashsize variable.
- increasing this value can significantly reduce CPU load.
- the default value is not optimal for performance.
- the optimal value is twice the average number of active flows.
mem XK: how much memory is occupied by the hash table.
- the hash table is fixed size by nature, taking 4 bytes per entry.
metric X, [X, X, X]: how optimally your hash table is being used.
- a lower value means more optimal hash table use, the minimum is 1.0.
- the last three numbers in square brackets are moving averages (EWMA) of hash
table accesses divided by the match rate (searches / matches) for 4 seconds,
and 1, 5, and 15 minutes. Sort of a hash table load average. The first value is
instantaneous. You can try to increase hashsize if the averages are more than 1
(increase it certainly if >= 2).
InHash: X pkt, X K: how much traffic is accounted for flows in the hash table.
InPDU X, X: how much traffic is in flows being prepared for export.
> Rate: 202448 bits/sec, 83 packets/sec; 1 min: 668463 bps, 930 pps; 5 min: 329039 bps, 483 pps
- Module throughput values for 1 second, 1 minute, and 5 minutes.
> cpu# pps; <search found new [metric], trunc frag alloc maxflows>, traffic: <pkt, bytes>, drop: <pkt, bytes>
> cpu0 123; 980540 10473 180600 [1.03], 0 0 0 0, traffic: 188765, 14 MB, drop: 27863, 1142 K
cpu#: this is Total and per CPU statistics for:
pps: packets per second on this CPU. It's useful to debug load imbalance.
<search found new, trunc frag alloc maxflows>: internal stat for:
search found new: hash table searched, found, and not found counters.
[metric]: one minute (ewma) average hash metric per cpu.
trunc: how many truncated packets were ignored
- for example if packets don't have a valid IP header.
- these are also accounted in the drop packets counter, but not in drop bytes.
frag: how many fragmented packets have been seen.
- the kernel defragments INPUT/OUTPUT chains for us if the nf_defrag_ipv[46]
module is loaded.
- these packets are not ignored but not reassembled either, so:
- if there is not enough data in the fragment (e.g. TCP ports) it is considered
to be zero.
alloc: how many cache memory allocations have failed.
- such packets are ignored and accounted in the traffic drop stat.
- probably increase system memory if this ever happens.
maxflows: how many packets were ignored because maxflows (maximum active flows) was reached.
- such packets are ignored and accounted in the traffic drop stat.
- you can control the maxflows limit with the sysctl net.netflow.maxflows variable.
traffic: <pkt, bytes>: how much traffic is accounted.
pkt, bytes: sum of packets/megabytes accounted by the module.
- flows that failed to be exported (on socket error) are accounted here too.
drop: <pkt, bytes>: how much traffic is not accounted.
pkt, bytes: sum of packets/kilobytes dropped by the metering process.
- reasons for drops accounted here:
truncated/fragmented packets,
packet is for new flow but failed to allocate memory for it,
packet is for new flow but maxflows is already reached.
Traffic lost due to socket errors is not accounted here. Look below
about export and socket errors.
> Export: Rate 0 bytes/s; Total 2 pkts, 0 MB, 18 flows; Errors 0 pkts; Traffic lost 0 pkts, 0 Kbytes, 0 flows.
Rate X bytes/s: traffic rate generated by the exporter itself.
Total X pkts, X MB: total amount of traffic generated by the exporter.
X flows: how many data flows have been exported.
Errors X pkts: how many packets were not sent due to socket errors.
Traffic lost 0 pkts, 0 Kbytes, 0 flows: how much metered traffic is lost
due to socket errors.
Note that `cberr' errors are not accounted here due to their asynchronous
nature. Read below about `cberr' errors.
> sock0: 10.0.0.2:2055 unconnected (1 attempts).
If a socket is unconnected (for example if the module was loaded before the
interfaces were up) it shows how many connection attempts have failed. It will
keep trying to connect until it succeeds.
> sock0: 10.0.0.2:2055, sndbuf 106496, filled 0, peak 106848; err: sndbuf reached 928, connect 0, cberr 0, other 0
sockX: per destination stats for:
X.X.X.X:Y: destination ip address and port.
- controlled by sysctl net.netflow.destination variable.
sndbuf X: how much data the socket can hold in its buffers.
- controlled by the sysctl net.netflow.sndbuf variable.
- if you have packet drops due to sndbuf reached (error -11) increase this
value.
filled X: how much data is in the socket buffers right now.
peak X: peak amount of data that has been in the socket buffers.
- you will want to keep it below the sndbuf value.
err: how many packets were dropped due to errors.
- all flows from them will be accounted in the drop stat.
sndbuf reached X: how many packets were dropped due to sndbuf being too small
(error -11).
connect X: how many connection attempts have failed.
cberr X: how many connection-refused ICMP errors we got from the export target.
- probably you have not started collector software on the destination,
- or specified a wrong destination address.
- flows lost in this fashion cannot be accounted in the drop stat.
- these are ICMP errors, and would look like this in tcpdump:
05:04:09.281247 IP alice.19440 > bob.2055: UDP, length 120
05:04:09.281405 IP bob > alice: ICMP bob udp port 2055 unreachable, length 156
other X: dropped due to other possible errors.
> aggr0: ...
aggrX: aggregation rulesets.
- controlled by sysctl net.netflow.aggregation variable.
==========================
= NetFlow considerations =
==========================
List of all IPFIX Elements http://www.iana.org/assignments/ipfix/ipfix.xhtml
Flow Keys are Elements that distinguish flows. Quoting RFC: "If a Flow
Record for a specific Flow Key value already exists, the Flow Record is
updated; otherwise, a new Flow Record is created."
In this implementation the following Elements are treated as Flow Keys:
IPv4 source address: sourceIPv4Address(8),
IPv6 source address: sourceIPv6Address(27),
IPv4 destination address: destinationIPv4Address(12),
IPv6 destination address: destinationIPv6Address(28),
TCP/UDP source port: sourceTransportPort(7),
TCP/UDP destination port: destinationTransportPort(11),
input interface: ingressInterface(10),
IP protocol: protocolIdentifier(4),
IP TOS: ipClassOfService(5),
and address family (IP or IPv6).
Additional Flow Keys if VLAN exporting is enabled:
First (outer) dot1q VLAN tag: dot1qVlanId(243) and
dot1qPriority(244) for IPFIX,
or vlanId(243) for NetFlow v9.
Second (customer) dot1q VLAN tag: dot1qCustomerVlanId(245)
and dot1qCustomerPriority(246).
Additional Flow Keys if MAC address exporting is enabled:
Destination MAC address: destinationMacAddress(80),
Source MAC address: sourceMacAddress(56).
Additional Flow Keys if MPLS-aware NetFlow is enabled:
Captured MPLS stack is fully treated as flow key (including TTL values),
which is Elements from mplsTopLabelStackSection(70) to
mplsLabelStackSection10(79), and, if present, mplsTopLabelTTL(200).
Other Elements are not Flow Keys. Note that outer interface, which is
egressInterface(14), is not regarded as Flow Key. Quoting RFC 7012: "For
Information Elements ... for which the value may change from packet to packet
within a single Flow, the exported value of an Information Element is by
default determined by the first packet observed for the corresponding Flow".
Note that NetFlow and IPFIX modes of operation may have slightly different
Elements being used and different statistics sent via Options Templates.
=========
= VOILA =
=========