pmacct [IP traffic accounting : BGP : BMP : RPKI : IGP : Streaming Telemetry]
pmacct is Copyright (C) 2003-2020 by Paolo Lucente
TABLE OF CONTENTS:
I. Daemons and plugins included with pmacct distribution
II. Configuring pmacct for compilation and installing
III. Brief SQL (MySQL, PostgreSQL, SQLite 3.x) setup examples
IV. Running the libpcap-based daemon (pmacctd)
V. Running the NetFlow/IPFIX and sFlow daemons (nfacctd/sfacctd)
VI. Running the NFLOG-based daemon (uacctd)
VII. Running the pmacct IMT client (pmacct)
VIII. Running the RabbitMQ/AMQP plugin
IX. Running the Kafka plugin
X. Internal buffering and queueing
XI. Quickstart guide to packet classification
XII. Quickstart guide to setup a NetFlow/IPFIX agent/probe
XIII. Quickstart guide to setup a sFlow agent/probe
XIV. Quickstart guide to setup the BGP daemon
XV. Quickstart guide to setup a NetFlow/IPFIX/sFlow replicator
XVI. Quickstart guide to setup the IS-IS daemon
XVII. Quickstart guide to setup the BMP daemon
XVIII. Quickstart guide to setup Streaming Telemetry collection
XIX. Running the print plugin to write to flat-files
XX. Quickstart guide to setup GeoIP lookups
XXI. Using pmacct as traffic/event logger
XXII. Connecting pmacct to a Redis cache
XXIII. Miscellaneous notes and troubleshooting tips
I. Daemons and plugins included with pmacct distribution
All traffic accounting daemons can print statistics to stdout, keep them in
memory tables, store them persistently to open-source RDBMS (MySQL, PostgreSQL,
SQLite 3) or to noSQL databases (ie. BerkeleyDB) and to flat-files, and
publish to AMQP and Kafka brokers (typically to insert in ElasticSearch,
InfluxDB, Druid, ClickHouse and, more generally, all backends which are not
natively supported by pmacct). BGP, BMP and Streaming Telemetry daemons can
publish control and infrastructure planes to AMQP and Kafka brokers. This is
a list of the daemons included in the pmacct distribution:
pmacctd libpcap-based accounting daemon: it captures packets from one
or multiple interfaces it is bound to. Other than acting as a
collector, this daemon can also export statistics via NetFlow,
IPFIX and sFlow protocols.
nfacctd NetFlow/IPFIX accounting daemon: it listens for NetFlow v5/v9
and IPFIX packets on one or more interfaces (IPv4 and IPv6).
Other than acting as a collector, this daemon can also
replicate to 3rd party collectors.
sfacctd sFlow accounting daemon; it listens for sFlow packets v2, v4
and v5 on one or more interfaces (both IPv4 and IPv6). Other
than acting as a collector, this daemon can also replicate to
3rd party collectors.
uacctd Linux Netlink NFLOG accounting daemon; it captures packets by
leveraging a NFLOG multicast group - and works only on Linux.
Other than acting as a collector, this daemon can also export
statistics via NetFlow, IPFIX and sFlow protocols.
pmtelemetryd Standalone Streaming Telemetry collector daemon; listens for
telemetry data binding to a TCP or UDP port and logs real-time
and/or dumps at regular time-intervals to configured backends.
pmbgpd Standalone BGP collector daemon; acts as a passive iBGP or
eBGP neighbor and maintains per-peer RIBs; can log real-time
and/or dump at regular time-intervals BGP data to configured
backends.
pmbmpd Standalone BMP collector daemon; can log real-time and/or dump
at regular time-intervals BMP/BGP data to configured backends.
pmacct commandline pmacct client; it allows retrieving data from a
memory table plugin; it can perform queries over data or do
bulk data retrieval. Output is in formatted, CSV or JSON format,
suitable for data injection in 3rd party tools like RRDtool,
Gnuplot or an SNMP server, among others.
Given its open and pluggable architecture, pmacct is easily extensible with new
plugins. Here is a list of traffic accounting plugins included in the official
pmacct distribution:
memory data is stored in a memory table and can be fetched via the
pmacct command-line client tool, 'pmacct'. This plugin also
implements a push model and makes it easy to inject data into
3rd party tools. The plugin is recommended for prototyping and
smaller-scale environments and is compiled in by default.
mysql a working MySQL/MariaDB installation can be used for data
storage. This plugin can be compiled using the --enable-mysql
switch.
pgsql a working PostgreSQL installation can be used for data storage.
This plugin can be compiled using the --enable-pgsql switch.
sqlite3 a working SQLite 3.x or BerkeleyDB 5.x (compiled in with the
SQLite API) installation can be used for data storage. This
plugin can be compiled using the --enable-sqlite3 switch.
print data is printed at regular intervals to flat-files or standard
output in tab-spaced, CSV and JSON formats. This plugin is
compiled in by default.
amqp data is sent to a RabbitMQ broker, running AMQP protocol, for
delivery to consumer applications or tools. Popular consumers
are ElasticSearch, InfluxDB, Druid and ClickHouse. This plugin
can be compiled using the --enable-rabbitmq switch.
kafka data is sent to a Kafka broker for delivery to consumer
applications or tools. Popular consumers are ElasticSearch,
InfluxDB, Druid and ClickHouse. This plugin can be compiled
using the --enable-kafka switch.
tee applies to nfacctd and sfacctd daemons only. It's a featureful
packet replicator for NetFlow/IPFIX/sFlow data. This plugin is
compiled in by default.
nfprobe applies to pmacctd and uacctd daemons only. Exports collected
data via NetFlow v5/v9 or IPFIX. This plugin is compiled in by
default.
sfprobe applies to pmacctd and uacctd daemons only. Exports collected
data via sFlow v5. This plugin is compiled in by default.
II. Configuring pmacct for compilation and installing
The simplest way to configure the package for compilation is to download the
latest stable released tarball from http://www.pmacct.net/ and let the configure
script probe default headers and libraries for you. The only dependency that
pmacct brings is the libpcap library and headers: libpcap-dev on Debian/Ubuntu,
libpcap-devel on CentOS/RHEL (note: this may need enabling extra yum repos!) or
a (self-compiled) equivalent must be installed on the system. A first round of
guessing is done via pkg-config; then, for some libraries, "typical" default
locations are checked, ie. /usr/local/lib. Switches one likely wants are already
enabled, ie. 64-bit counters and multi-threading (a pre-requisite for the BGP,
BMP, and IGP daemon codes); the full list of switches enabled by default is
marked as 'default: yes' in the "./configure --help" output. SQL plugins, AMQP
and Kafka support are instead all disabled by default. A few examples will follow;
to get the list of available switches, you can use the following command-line:
shell> ./configure --help
Examples on how to enable support for (1) MySQL, (2) PostgreSQL and (3) SQLite:
(1) libmysqlclient-dev package or (self-compiled) equivalent being installed:
shell> ./configure --enable-mysql
(2) libpq-dev package or (self-compiled) equivalent being installed:
shell> ./configure --enable-pgsql
(3) libsqlite3-dev package or (self-compiled) equivalent being installed:
shell> ./configure --enable-sqlite3
If cloning the GitHub repository ( https://github.com/pmacct/pmacct ) instead,
the configure script has to be generated first, adding one extra step to the
process just described. Please refer to the Building section of the README.md
document for instructions on cloning the repo and generating the configure
script, along with the required packages to install.
Then compile and install by simply typing:
shell> make; make install
Should you want, for example, to compile pmacct with PostgreSQL support and
have installed PostgreSQL in /usr/local/postgresql and pkg-config is unable
to help, you can supply this non-default location as follows (assuming you
are running the bash shell):
shell> export PGSQL_LIBS="-L/usr/local/postgresql/lib -lpq"
shell> export PGSQL_CFLAGS="-I/usr/local/postgresql/include"
shell> ./configure --enable-pgsql
If the library does actually support pkg-config but the .pc pkg-config file
is in some non-standard location, this can be supplied as follows:
shell> export PKG_CONFIG_PATH=/usr/local/postgresql/pkgconfig/
shell> ./configure --enable-pgsql
A special case is compiling pmacct with MySQL support when MySQL is installed
in some non-default location. MySQL brings the mysql_config tool that works
similarly to pkg-config. Make sure the tool is on the path so that it can be
executed by the configure script, ie.:
shell> export PATH=$PATH:/usr/local/mysql/bin
shell> ./configure --enable-mysql
By default all tools - flow, BGP, BMP and Streaming Telemetry - are compiled.
Specific tool sets can be disabled. For example, to compile only flow tools
(ie. no pmbgpd, pmbmpd, pmtelemetryd) the following command-line can be used:
shell> ./configure --disable-bgp-bins --disable-bmp-bins --disable-st-bins
Once daemons are installed you can check:
* Basic instrumenting of each daemon via its help page, ie.:
shell> pmacctd -h
* Review daemon version and build details, ie.:
shell> sfacctd -V
* Check supported traffic aggregation primitives and their description, ie.:
shell> nfacctd -a
IIa. Compiling pmacct with JSON support
JSON encoding is supported via the Jansson library (http://www.digip.org/jansson/
and https://github.com/akheron/jansson); a library version >= 2.5 is required. To
compile pmacct with JSON support simply do:
shell> ./configure --enable-jansson
However, should you have installed Jansson in the /usr/local/jansson directory
and pkg-config is unable to help, you can supply this non-default location as
follows (assuming you are running the bash shell):
shell> export JANSSON_LIBS="-L/usr/local/jansson/lib -ljansson"
shell> export JANSSON_CFLAGS="-I/usr/local/jansson/include"
shell> ./configure --enable-jansson
IIb. Compiling pmacct with Apache Avro support
Apache Avro encoding is supported via the libavro library (http://avro.apache.org/
and https://avro.apache.org/docs/1.9.1/api/c/index.html); Avro depends on the
Jansson JSON parser version 2.3 or higher, so please review the previous section
"Compiling pmacct with JSON support"; then, to compile pmacct with Apache Avro
support simply do:
shell> ./configure --enable-avro
However, should you have installed libavro in the /usr/local/avro directory
and pkg-config is unable to help, you can supply this non-default location as
follows (assuming you are running the bash shell):
shell> export AVRO_LIBS="-L/usr/local/avro/lib -lavro"
shell> export AVRO_CFLAGS="-I/usr/local/avro/include"
shell> ./configure --enable-kafka --enable-avro
IIc. Compiling pmacct against a custom libpcap library
Compiling against a downloaded libpcap library may be wanted for several
reasons, including: the version packaged with the Operating System is too
old, a custom libpcap library needs to be compiled (ie. with support for
PF_RING), or static linking is wanted.
Once libpcap is downloaded, if static linking is wanted (ideal, for example,
for distributing pmacct without external dependencies), the library can be
configured for compiling:
shell> ./configure --disable-shared
pmacct should then be pointed to the custom libpcap library when configuring
for compiling:
shell> ./configure --with-pcap-libs=/path/to/libpcap-x.y.z --with-pcap-includes=/path/to/libpcap-x.y.z
Once pmacct is compiled, it can be confirmed that the right library was
picked by doing, for example, a 'pmacctd -V' and seeing the version of
libpcap matches with the supplied version.
A use-case for a PF_RING-enabled libpcap is that by hashing and balancing
collected traffic over multiple NIC queues (ie. if using Intel X520) it
is possible to scale pmacctd horizontally, with one pmacctd instance
reading from one or multiple queues. The queues can be managed via the
'ethtool' tool (ie. 'ethtool -l enp1s0f0' to list, 'ethtool -L enp1s0f0
combined 16' to set 16 queues, etc.) and pmacctd can be bound to a
single queue, ie. 'pmacctd -i enp1s0f0@0', or multiple ones via a
pcap_interfaces_map, ie.:
ifname=enp1s0f0@0 ifindex=100
ifname=enp1s0f0@1 ifindex=101
ifname=enp1s0f0@2 ifindex=102
ifname=enp1s0f0@3 ifindex=103
III. Brief SQL and noSQL setup examples
RDBMS require a table schema to store data. pmacct offers two options: use one
of the few pre-determined table schemas available (sections IIIa, b and c) or
compose a custom schema to fit your needs (section IIId). If you are unfamiliar
with SQL the former approach is recommended, although it can pose scalability
issues in larger deployments; if you know some SQL the latter is definitely the
way to go. Scripts for setting up RDBMS are located in the 'sql/' tree of the
pmacct distribution tarball. For further guidance read the relevant README files
in that directory. One of the crucial concepts to deal with, when using default
table schemas, is table versioning: please read more about this topic in the
FAQS document (Q17).
IIIa. MySQL examples
shell> cd sql/
- To create v1 tables:
shell> mysql -u root -p < pmacct-create-db_v1.mysql
shell> mysql -u root -p < pmacct-grant-db.mysql
Data will be available in 'acct' table of 'pmacct' DB.
- To create v2 tables:
shell> mysql -u root -p < pmacct-create-db_v2.mysql
shell> mysql -u root -p < pmacct-grant-db.mysql
Data will be available in 'acct_v2' table of 'pmacct' DB.
... And so on for the newer versions.
IIIb. PostgreSQL examples
Which user has to execute the following two scripts and how to authenticate with the
PostgreSQL server depends upon your current configuration. Keep in mind that both
scripts need postgres superuser permissions to execute some commands successfully:
shell> cp -p *.pgsql /tmp
shell> su - postgres
To create v1 tables:
shell> psql -d template1 -f /tmp/pmacct-create-db.pgsql
shell> psql -d pmacct -f /tmp/pmacct-create-table_v1.pgsql
To create v2 tables:
shell> psql -d template1 -f /tmp/pmacct-create-db.pgsql
shell> psql -d pmacct -f /tmp/pmacct-create-table_v2.pgsql
... And so on for the newer versions.
A few tables will be created into 'pmacct' DB. 'acct' ('acct_v2' or 'acct_v3') table is
the default table where data will be written when in 'typed' mode (see 'sql_data' option
in CONFIG-KEYS document; default value is 'typed'); 'acct_uni' ('acct_uni_v2' or
'acct_uni_v3') is the default table where data will be written when in 'unified' mode.
Since v6, PostgreSQL tables are greatly simplified: unified mode is no longer supported
and a single table ('acct_v6', for example) is created instead.
IIIc. SQLite examples
shell> cd sql/
- To create v1 tables:
shell> sqlite3 /tmp/pmacct.db < pmacct-create-table.sqlite3
Data will be available in 'acct' table of '/tmp/pmacct.db' DB. Of course, you can change
the database filename based on your preferences.
- To create v2 tables:
shell> sqlite3 /tmp/pmacct.db < pmacct-create-table_v2.sqlite3
Data will be available in 'acct_v2' table of '/tmp/pmacct.db' DB.
... And so on for the newer versions.
IIId. Custom SQL tables
Custom tables can be built by creating your own SQL schema and indexes. This
allows you to mix-and-match the primitives relevant to your accounting scenario.
To flag the intention to build a custom table, the sql_optimize_clauses directive
must be set to true, ie.:
sql_optimize_clauses: true
sql_table: <table name>
aggregate: <aggregation primitives list>
How to build the custom schema? Let's say the aggregation method of choice
(aggregate directive) is "vlan, in_iface, out_iface, etype", the table name is
"acct" and the database of choice is MySQL. The SQL schema is composed of four
main parts, explained below:
1) A fixed skeleton needed by pmacct logics:
CREATE TABLE <table_name> (
packets INT UNSIGNED NOT NULL,
bytes BIGINT UNSIGNED NOT NULL,
stamp_inserted DATETIME NOT NULL,
stamp_updated DATETIME
);
2) Indexing: primary key (of your choice, this is only an example) plus
any additional index you may find relevant.
3) Primitives enabled in pmacct, in this specific example the ones below; should
one need more/others, these can be looked up in the sql/README.mysql file in
the section named "Aggregation primitives to SQL schema mapping":
vlan INT(2) UNSIGNED NOT NULL,
iface_in INT(4) UNSIGNED NOT NULL,
iface_out INT(4) UNSIGNED NOT NULL,
etype INT(2) UNSIGNED NOT NULL,
4) Any additional fields, ignored by pmacct, that can be of use: these can be
for lookup purposes, auto-increment, etc. and can of course also be part of
the indexing you might choose.
Putting the pieces together, the resulting SQL schema is below along with the
required statements to create the database:
DROP DATABASE IF EXISTS pmacct;
CREATE DATABASE pmacct;
USE pmacct;
DROP TABLE IF EXISTS acct;
CREATE TABLE acct (
vlan INT(2) UNSIGNED NOT NULL,
iface_in INT(4) UNSIGNED NOT NULL,
iface_out INT(4) UNSIGNED NOT NULL,
etype INT(2) UNSIGNED NOT NULL,
packets INT UNSIGNED NOT NULL,
bytes BIGINT UNSIGNED NOT NULL,
stamp_inserted DATETIME NOT NULL,
stamp_updated DATETIME,
PRIMARY KEY (vlan, iface_in, iface_out, etype, stamp_inserted)
);
To grant default pmacct user permission to write into the database look at the
file sql/pmacct-grant-db.mysql
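For reference, the essence of that script is a grant statement along these
lines (a minimal sketch; the user name, host and password here are assumptions
to adapt to your setup, the shipped file remains the authoritative source):
CREATE USER 'pmacct'@'localhost' IDENTIFIED BY '<password>';
GRANT ALL PRIVILEGES ON pmacct.* TO 'pmacct'@'localhost';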
IIIe. Historical accounting
Enabling historical accounting allows aggregating data into time-bins (ie. 5 mins, hour,
day, etc.) in a flexible and fully configurable way. Two timestamps are available: the
'stamp_inserted' field, representing the basetime of the time-bin, and 'stamp_updated',
the last time the time-bin was updated. The following is a pretty standard config fragment
to slice data into nicely aligned (or rounded-off) 5 minutes time-bins:
sql_history: 5m
sql_history_roundoff: m
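With such a configuration, the traffic of a given 5-minute bin can then be
retrieved by filtering on its basetime, for example (a sketch only, assuming
the default v1 'acct' table with its ip_src/ip_dst fields and a sample bin):
SELECT ip_src, ip_dst, packets, bytes
FROM acct
WHERE stamp_inserted = '2020-01-01 00:05:00';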
IIIf. INSERTs-only
UPDATE queries are expensive; this is why, even if they are supported by pmacct, a
savvy approach would be to cache data for longer times in memory and write them out
once per time-bin (sql_history): this results in a much lighter INSERTs-only setup.
This is an example based on 5 minutes time-bins:
sql_refresh_time: 300
sql_history: 5m
sql_history_roundoff: m
sql_dont_try_update: true
Note that sql_refresh_time is always expressed in seconds. An alternative approach
for cases where sql_refresh_time must be kept shorter than sql_history (for example
because a) of long sql_history periods, ie. hours or days, and/or because b) a near
real-time data feed is a requirement) is to set up a synthetic auto-increment 'id'
field: it successfully prevents duplicates but comes at the expense of GROUP BYs
when querying data.
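In such an 'id'-based setup multiple rows may exist for the same aggregate and
time-bin, so counters get merged at query time, for example (a sketch only,
again assuming an 'acct' table aggregating on ip_src and ip_dst):
SELECT ip_src, ip_dst, stamp_inserted, SUM(packets), SUM(bytes)
FROM acct
GROUP BY ip_src, ip_dst, stamp_inserted;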
IV. Running the libpcap-based daemon (pmacctd)
All daemons including pmacctd can be run with commandline options, using a
config file or a mix of the two. Sample configuration files are in the examples/
tree. Note also that most of the new features are available only as config
directives. To be aware of the existing configuration directives, please
read the CONFIG-KEYS document.
Show all available pmacctd commandline switches:
shell> pmacctd -h
Run pmacctd reading configuration from a specified file (see the examples/ tree
for a brief list of some commonly used keys; divert your eyes to CONFIG-KEYS
for the full list). This example applies to all daemons:
shell> pmacctd -f pmacctd.conf
Daemonize the process; listen on eth0; aggregate data by src_host/dst_host;
write to a MySQL server; filter in only traffic with source prefix 10.0.0.0/16;
note that filters work the same as tcpdump so you can refer to libpcap/tcpdump
man pages for examples and further reading about the supported filtering syntax.
shell> pmacctd -D -c src_host,dst_host -i eth0 -P mysql src net 10.0.0.0/16
Or written the configuration way:
!
daemonize: true
plugins: mysql
aggregate: src_host, dst_host
pcap_interface: eth0
pcap_filter: src net 10.0.0.0/16
! ...
Print collected traffic data aggregated by src_host/dst_host over the screen;
refresh data every 30 seconds and listen on eth0.
shell> pmacctd -P print -r 30 -i eth0 -c src_host,dst_host
Or written the configuration way:
!
plugins: print
print_refresh_time: 30
aggregate: src_host, dst_host
pcap_interface: eth0
! ...
Print collected traffic data aggregated by src_host/dst_host over the screen;
refresh data every 30 seconds and listen on eth0 and eth1, listed in the file
pointed by pcap_interfaces_map (see 'examples/pcap_interfaces.map.example' for
more advanced uses of the map):
!
plugins: print
print_refresh_time: 30
aggregate: src_host, dst_host
pcap_interfaces_map: /path/to/pcap_interfaces.map
! ...
Then in /path/to/pcap_interfaces.map:
!
ifindex=100 ifname=eth0
ifindex=200 ifname=eth1
! ...
Daemonize the process; let pmacct aggregate traffic in order to show in vs out
traffic for network 192.168.0.0/16; send data to a PostgreSQL server. This
configuration is not possible via commandline switches; the corresponding
configuration follows:
!
daemonize: true
plugins: pgsql[in], pgsql[out]
aggregate[in]: dst_host
aggregate[out]: src_host
aggregate_filter[in]: dst net 192.168.0.0/16
aggregate_filter[out]: src net 192.168.0.0/16
sql_table[in]: acct_in
sql_table[out]: acct_out
! ...
And now enabling historical accounting. Split traffic by hour and write
to the database every 60 seconds:
!
daemonize: true
plugins: pgsql[in], pgsql[out]
aggregate[in]: dst_host
aggregate[out]: src_host
aggregate_filter[in]: dst net 192.168.0.0/16
aggregate_filter[out]: src net 192.168.0.0/16
sql_table[in]: acct_in
sql_table[out]: acct_out
sql_refresh_time: 60
sql_history: 1h
sql_history_roundoff: h
! ...
Let's now translate the same example in the memory plugin world. One of
the use-cases for this plugin is when feeding 3rd party tools with bytes/
packets/flows counters. Examples how to query the memory table with the
'pmacct' client tool will follow later in this document. Now, note that
each memory table needs its own pipe file in order to get queried by the
client:
!
daemonize: true
plugins: memory[in], memory[out]
aggregate[in]: dst_host
aggregate[out]: src_host
aggregate_filter[in]: dst net 192.168.0.0/16
aggregate_filter[out]: src net 192.168.0.0/16
imt_path[in]: /tmp/pmacct_in.pipe
imt_path[out]: /tmp/pmacct_out.pipe
! ...
As a further note, check the CONFIG-KEYS document about more imt_* directives,
as they will assist in the task of fine-tuning the size and boundaries
of memory tables, if default values are not OK for your setup.
Now, fire multiple instances of pmacctd, each on a different interface;
again, because each instance will have its own memory table, it will
require its own pipe file for client queries as well (as explained in the
previous examples):
shell> pmacctd -D -i eth0 -m 8 -s 65535 -p /tmp/pipe.eth0
shell> pmacctd -D -i ppp0 -m 0 -s 32768 -p /tmp/pipe.ppp0
Run pmacctd logging what happens to syslog and using "local2" facility:
shell> pmacctd -c src_host,dst_host -S local2
NOTE: superuser privileges are needed to execute pmacctd correctly.
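On Linux, an alternative to running the daemon as superuser is granting the
binary packet-capture capabilities (a sketch only, not part of the original
packaging instructions; the install path is an assumption to adapt):
shell> setcap cap_net_raw,cap_net_admin+eip /usr/local/sbin/pmacctd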
V. Running the NetFlow/IPFIX and sFlow daemons (nfacctd/sfacctd)
All examples about pmacctd are also valid for nfacctd and sfacctd with the exception
of directives that apply exclusively to libpcap. If you have skipped examples in the
previous section, please read them before continuing. All config keys available are
in the CONFIG-KEYS document. Some examples:
Run nfacctd reading configuration from a specified file:
shell> nfacctd -f nfacctd.conf
Daemonize the process; aggregate data by sum_host (by host, summing inbound + outbound
traffic); write to a local MySQL server. Listen on port 5678 for incoming NetFlow
datagrams (from one or multiple NetFlow agents):
shell> nfacctd -D -c sum_host -P mysql -l 5678
Let's now configure pmacct to insert data in MySQL every two minutes, enable historical
accounting with 10 minutes time-bins and make use of a SQL table version 4:
!
daemonize: true
plugins: mysql
aggregate: sum_host
nfacctd_port: 5678
sql_refresh_time: 120
sql_history: 10m
sql_history_roundoff: mh
sql_table_version: 4
! ...
Va. NetFlow daemon & accounting NetFlow v9/IPFIX options
NetFlow v9/IPFIX can send option records other than flow ones, typically used to send
to a collector mappings among interface SNMP ifIndexes and interface names, or VRF IDs
and VRF names, or extra sampling information. nfacctd_account_options enables accounting
of option records; these should then be split from regular flow records. Below is a
sample config:
nfacctd_time_new: true
nfacctd_account_options: true
!
plugins: print[data], print[option_vrf], print[option_if], print[option_sampling]
!
pre_tag_filter[data]: 100
aggregate[data]: peer_src_ip, in_iface, out_iface, tos, vrf_id_ingress, vrf_id_egress
print_refresh_time[data]: 300
print_history[data]: 300
print_history_roundoff[data]: m
print_output_file_append[data]: true
print_output_file[data]: /path/to/flow_%s
print_output[data]: csv
!
pre_tag_filter[option_vrf]: 200
aggregate[option_vrf]: peer_src_ip, vrf_id_ingress, vrf_name
print_refresh_time[option_vrf]: 300
print_history[option_vrf]: 300
print_history_roundoff[option_vrf]: m
print_output_file_append[option_vrf]: true
print_output_file[option_vrf]: /path/to/option_vrf_%s
print_output[option_vrf]: event_csv
!
pre_tag_filter[option_if]: 200
aggregate[option_if]: peer_src_ip, in_iface, int_descr
print_refresh_time[option_if]: 300
print_history[option_if]: 300
print_history_roundoff[option_if]: m
print_output_file_append[option_if]: true
print_output_file[option_if]: /path/to/option_if_%s
print_output[option_if]: event_csv
!
pre_tag_filter[option_sampling]: 200
aggregate[option_sampling]: peer_src_ip, sampler_id, sampler_interval
print_refresh_time[option_sampling]: 300
print_history[option_sampling]: 300
print_history_roundoff[option_sampling]: m
print_output_file_append[option_sampling]: true
print_output_file[option_sampling]: /path/to/option_sampling_%s
print_output[option_sampling]: event_csv
!
aggregate_primitives: /path/to/primitives.lst
pre_tag_map: /path/to/pretag.map
maps_refresh: true
Below is the referenced pretag.map:
set_tag=100 ip=0.0.0.0/0 sample_type=flow
set_tag=200 ip=0.0.0.0/0 sample_type=option
Below is the referenced primitives.lst:
name=vrf_id_ingress field_type=234 len=4 semantics=u_int
name=vrf_id_egress field_type=235 len=4 semantics=u_int
name=vrf_name field_type=236 len=32 semantics=str
!
name=int_descr field_type=83 len=64 semantics=str
!
name=sampler_interval field_type=50 len=4 semantics=u_int
name=sampler_id field_type=48 len=2 semantics=u_int
Vb. Examples configuring NetFlow v9/IPFIX export
Example to configure NetFlow v9 export on a Cisco running IOS/IOS-XE:
ip flow-cache timeout active 1
ip flow-cache mpls label-positions 1
!
ip flow-export source Loopback0
ip flow-export version 9 bgp-nexthop
ip flow-export template timeout-rate 1
ip flow-export template refresh-rate 4
ip flow-export destination X.X.X.X 2100
!
interface GigabitEthernet0/0
ip address Y.Y.Y.Y Z.Z.Z.Z
ip flow ingress
Example to configure NetFlow v9 export on a Cisco running IOS-XR:
sampler-map NFACCTD-SMP
random 1 out-of 10
!
flow monitor-map NFACCTD-MON
record ipv4
exporter NFACCTD-EXP
!
flow exporter-map NFACCTD-EXP
version v9
transport udp 2100
destination X.X.X.X
!
interface GigabitEthernet0/0/0/1
ipv4 address Y.Y.Y.Y Z.Z.Z.Z
flow ipv4 monitor NFACCTD-MON sampler NFACCTD-SMP ingress
Example to configure IPFIX export on a Juniper:
services {
flow-monitoring {
version-ipfix {
template ipv4 {
flow-active-timeout 60;
flow-inactive-timeout 70;
template-refresh-rate seconds 30;
option-refresh-rate seconds 30;
ipv4-template;
}
}
}
}
chassis {
fpc 0 {
sampling-instance s1;
}
}
forwarding-options {
sampling {
instance {
s1 {
input {
rate 10;
}
family inet {
output {
flow-server X.X.X.X {
port 2100;
version-ipfix {
template {
ipv4;
}
}
}
inline-jflow {
source-address Y.Y.Y.Y;
}
}
}
}
}
}
}
Example to configure NetFlow v9 export on a Huawei:
ip netstream timeout active 1
ip netstream timeout inactive 5
ip netstream mpls-aware label-and-ip
ip netstream export version 9 origin-as bgp-nexthop
ip netstream export index-switch 32
ip netstream export template timeout-rate 1
ip netstream sampler fix-packets 1000 inbound
ip netstream export source Y.Y.Y.Y
ip netstream export host X.X.X.X 2100
ipv6 netstream timeout active 1
ipv6 netstream timeout inactive 5
ipv6 netstream mpls-aware label-and-ip
ipv6 netstream export version 9 origin-as bgp-nexthop
ipv6 netstream export index-switch 32
ipv6 netstream export template timeout-rate 1
ipv6 netstream sampler fix-packets 1000 inbound
interface Eth-Trunk1.100
ip netstream inbound
ipv6 netstream inbound
Contribution of further configuration examples for Cisco and Juniper devices
and/or other relevant vendors is more than welcome.
VI. Running the NFLOG-based daemon (uacctd)
All examples about pmacctd are also valid for uacctd with the exception of directives
that apply exclusively to libpcap. If you've skipped examples in the "Running the
libpcap-based daemon (pmacctd)" section, please read them before continuing. All
configuration keys available are in the CONFIG-KEYS document.
The daemon depends on the package libnetfilter-log-dev in Debian/Ubuntu,
libnetfilter_log in CentOS/RHEL (or equivalent package in the preferred Linux
distribution). The support for NFLOG is disabled by default and should be enabled
as follows:
shell> ./configure --enable-nflog
NFLOG_CFLAGS and NFLOG_LIBS can be used if includes and library are not in default
locations. The Linux NFLOG infrastructure requires a couple of parameters in order to
work properly: the NFLOG multicast group (uacctd_group) to which captured packets
have to be sent and the Netlink buffer size (uacctd_nl_size). The default buffer
setting (128KB) typically works OK for small environments. The traffic is captured
with an iptables rule, for example in one of the following ways:
* iptables -t mangle -I POSTROUTING -j NFLOG --nflog-group 5
* iptables -t raw -I PREROUTING -j NFLOG --nflog-group 5
Apart from determining how and what traffic to capture with iptables, which is a topic
outside the scope of this document, the most relevant point is that the "--nflog-group"
iptables setting has to match the "uacctd_group" one in uacctd. To review the packet
flow in iptables: https://commons.wikimedia.org/wiki/File:Netfilter-packet-flow.svg
A couple examples follow:
Run uacctd reading configuration from a specified file.
shell> uacctd -f uacctd.conf
Daemonize the process; aggregate data by sum_host (by host, summing inbound + outbound
traffic); write to a local MySQL server. Listen on NFLOG multicast group #5. Let's make
pmacct divide data into historical time-bins of 5 minutes. Let's disable UPDATE queries
and hence align refresh time with the timeslot length. Finally, let's make use of a SQL
table, version 4:
!
uacctd_group: 5
daemonize: true
plugins: mysql
aggregate: sum_host
sql_refresh_time: 300
sql_history: 5m
sql_history_roundoff: mh
sql_table_version: 4
sql_dont_try_update: true
! ...
VII. Running the pmacct IMT client (pmacct)
The 'pmacct' client tool allows querying memory tables. Messaging happens over a UNIX
pipe file: authorization is strictly connected to permissions of the pipe file. Note:
while writing queries on the commandline, you may need chars with a special meaning
for the shell itself (ie. ; or *). Mind to either escape them ( \; or \* ) or put
them in quotes ( " ).
Show all available pmacct client commandline switches:
shell> pmacct -h
Fetch data stored in the memory table:
shell> pmacct -s
Fetch data stored in the memory table using JSON output (and nicely format it with
the 'jq' tool):
shell> pmacct -s -O json | jq
Match data between source IP 192.168.0.10 and destination IP 192.168.0.3 and return
a formatted output; display all fields (-a); this way the output is easy to parse
with tools like awk/sed; each unused field will be zero-filled:
shell> pmacct -c src_host,dst_host -M 192.168.0.10,192.168.0.3 -a
Similar to the previous example; it is requested to reset data for matched entries;
the server will return the actual counters to the client, then will reset them:
shell> pmacct -c src_host,dst_host -M 192.168.0.10,192.168.0.3 -r
Fetch data for IP address dst_host 10.0.1.200; we also ask for a 'counter only' output
('-N') suitable, this time, for injecting data in tools like MRTG or RRDtool (sample
scripts are in the examples/ tree). The bytes counter will be returned (the '-n' switch
also allows selecting which counter to display). If multiple entries match the request (ie.
because the query is based on dst_host but the daemon is actually aggregating traffic
as "src_host, dst_host") their counters will be summed:
shell> pmacct -c dst_host -N 10.0.1.200
Query the memory table available via pipe file /tmp/pipe.eth0:
shell> pmacct -c sum_port -N 80 -p /tmp/pipe.eth0
Find all data matching host 192.168.84.133 as either its source or destination address.
In particular, this example shows how to use wildcards and how to spawn multiple queries
(each separated by the ';' symbol). Take care to follow the same order when specifying
the primitive name (-c) and its actual value ('-M' or '-N'):
shell> pmacct -c src_host,dst_host -N "192.168.84.133,*;*,192.168.84.133"
Find all web and smtp traffic; we are interested in just the total of such traffic
(for example, to split legal network usage from the total); the output will be a unique
counter, the sum of the partial values coming from each query.
shell> pmacct -c src_port,dst_port -N "25,*;*,25;80,*;*,80" -S
Show traffic between the specified hosts; this aims to be a simple example of a batch
query; note that both the '-N' and '-M' switches accept a value like
'file:/home/paolo/queries.list': actual values will be read from the specified file
(they need to be written into it, one per line) instead of the commandline:
shell> pmacct -c src_host,dst_host -N "10.0.0.10,10.0.0.1;10.0.0.9,10.0.0.1;10.0.0.8,10.0.0.1"
shell> pmacct -c src_host,dst_host -N "file:/home/paolo/queries.list"
VIII. Running the RabbitMQ/AMQP plugin
The Advanced Message Queuing Protocol (AMQP) is an open standard for passing business
messages between applications. RabbitMQ is a messaging broker, an intermediary for
messaging, which implements AMQP. The pmacct RabbitMQ/AMQP plugin is designed to send
aggregated network traffic data, in JSON or Avro format, through a RabbitMQ server
to 3rd party applications (typically, but not limited to, noSQL databases like
ElasticSearch, InfluxDB, etc.). Requirements to use the plugin are:
* A working RabbitMQ server: http://www.rabbitmq.com/
* RabbitMQ C API, rabbitmq-c: https://github.com/alanxz/rabbitmq-c/
* Libjansson to cook JSON objects: http://www.digip.org/jansson/
Additionally, the Apache Avro C library (http://avro.apache.org/) needs to be
installed to be able to send messages packed using Avro (you will also need to
pass --enable-avro to the configuration script).
Once these elements are installed, pmacct can be configured for compiling. pmacct
makes use of pkg-config for finding library and header locations and checks some
"typical" default locations, ie. /usr/local/lib and /usr/local/include. So all
you should do is just:
shell> ./configure --enable-rabbitmq --enable-jansson
But, for example, should you have installed RabbitMQ in /usr/local/rabbitmq and
pkg-config is unable to help, you can supply this non-default location as follows
(assuming you are running the bash shell):
shell> export RABBITMQ_LIBS="-L/usr/local/rabbitmq/lib -lrabbitmq"
shell> export RABBITMQ_CFLAGS="-I/usr/local/rabbitmq/include"
shell> ./configure --enable-rabbitmq --enable-jansson
You can check further information on how to compile pmacct with JSON/libjansson
support in the section "Compiling pmacct with JSON support" of this document.
You can check further information on how to compile pmacct with Avro support in
the section "Compiling pmacct with Apache Avro support" of this document.
Then "make; make install" as usual. Following a configuration snippet showing a
basic RabbitMQ/AMQP plugin configuration (assumes: RabbitMQ server is available
at localhost; look all configurable directives up in the CONFIG-KEYS document):
! ..
plugins: amqp
!
aggregate: src_host, dst_host, src_port, dst_port, proto, tos
amqp_output: json
amqp_exchange: pmacct
amqp_routing_key: acct
amqp_refresh_time: 300
amqp_history: 5m
amqp_history_roundoff: m
! ..
pmacct will only declare a message exchange and provide a routing key, ie. it
will not get involved with queues at all. A basic consumer script, in Python,
is provided as a sample to: declare a queue, bind the queue to the exchange and
show consumed data on the screen or post to a REST API. The script is located
in the pmacct default distribution tarball in 'examples/amqp/amqp_receiver.py'
and requires the 'pika' Python module installed. Should this not be available,
installation instructions are available at the following page:
http://www.rabbitmq.com/tutorials/tutorial-one-python.html
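For orientation, the core of such a consumer boils down to the following
pika-based sketch (an illustration only, not the bundled script; it assumes a
broker on localhost, a direct exchange and the amqp_exchange / amqp_routing_key
values from the configuration snippet above):
import pika
# Connect to a RabbitMQ broker running on localhost (assumption)
connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()
# Declare the exchange pmacct publishes to (amqp_exchange: pmacct)
channel.exchange_declare(exchange='pmacct', exchange_type='direct')
# Declare a private queue and bind it via the routing key (amqp_routing_key: acct)
result = channel.queue_declare(queue='', exclusive=True)
channel.queue_bind(exchange='pmacct', queue=result.method.queue, routing_key='acct')
# Print every consumed JSON message to the screen
def callback(ch, method, properties, body):
    print(body)
channel.basic_consume(queue=result.method.queue, on_message_callback=callback, auto_ack=True)
channel.start_consuming()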
IX. Running the Kafka plugin
Apache Kafka is a distributed streaming platform. Its qualities are being fast,
scalable, durable and distributed by design. The pmacct Kafka plugin is designed
to send aggregated network traffic data, in JSON or Avro format, through a
Kafka broker to 3rd party applications (typically, but not limited to, noSQL
databases like ElasticSearch, InfluxDB, etc.). Requirements to use the plugin
are:
* A working Kafka broker (and ZooKeeper server): http://kafka.apache.org/
* Librdkafka: https://github.com/edenhill/librdkafka/
* Libjansson to cook JSON objects: http://www.digip.org/jansson/
Additionally, the Apache Avro C library (http://avro.apache.org/) needs to be
installed to be able to send messages packed using Avro (you will also need to
pass --enable-avro to the configuration script).
Once these elements are installed, pmacct can be configured for compiling.
pmacct makes use of pkg-config for finding library and header locations and
checks some default locations, ie. /usr/local/lib and /usr/local/include. If
this is satisfactory, all you should do is just:
shell> ./configure --enable-kafka --enable-jansson
But, for example, should you have installed Kafka in /usr/local/kafka and pkg-
config is unable to help, you can supply this non-default location as follows
(assuming you are running the bash shell):
shell> export KAFKA_LIBS="-L/usr/local/kafka/lib -lrdkafka"
shell> export KAFKA_CFLAGS="-I/usr/local/kafka/include"
shell> ./configure --enable-kafka --enable-jansson
You can check further information on how to compile pmacct with JSON/libjansson
support in the section "Compiling pmacct with JSON support" of this document.
As a proof-of-concept, once some data is produced in JSON format to a Kafka topic,
it can be consumed with the kafka-console-consumer tool, part of the standard Kafka
distribution, as:
kafka-console-consumer --bootstrap-server <Kafka broker host> \
--topic <topic>
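Equivalently, a minimal Python consumer can be sketched with the
confluent-kafka library (an illustration only; the broker address, group id and
<topic> placeholder are assumptions to adapt to your setup):
from confluent_kafka import Consumer
# Minimal consumer configuration (assumed broker address and group id)
consumer = Consumer({
    'bootstrap.servers': 'localhost:9092',
    'group.id': 'pmacct-quickstart',
    'auto.offset.reset': 'earliest'
})
consumer.subscribe(['<topic>'])
# Poll for messages and print each JSON payload to the screen
try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None:
            continue
        if msg.error():
            continue
        print(msg.value().decode('utf-8'))
finally:
    consumer.close()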
You can check further information on how to compile pmacct with Avro support in
the section "Compiling pmacct with Apache Avro support" of this document. Also,
if using the Confluent Platform, a Schema Registry component is included with it and
allows for seamless interchange of Avro schemas among producers and consumers
of a Kafka broker; to make use of this, pmacct should be compiled with libserdes
support:
shell> ./configure --enable-kafka --enable-jansson --enable-avro --enable-serdes
For further info on Confluent Schema Registry and libserdes: