Edit: Turns out someone on the Unix & Linux Stack Exchange is smarter than me. You can do time-based rotation, but you must limit your filename to include only the part of the timestamp that you are rotating on. So only include %H if you are doing -G 3600 to get hourly files. When the day rolls over, the file for hour 00 gets overwritten. Thanks dfernan.
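One way to sanity-check that trick is to expand the same strftime pattern with GNU date (the dates and hours below are arbitrary examples, not anything tcpdump produces itself):

```shell
# Preview the filenames that
#   tcpdump -G 3600 -w 'trace_%H.pcap'
# would produce. %H is the only varying field, so after 24 hours the
# pattern wraps back to trace_00.pcap and tcpdump overwrites the old
# file, leaving a rolling 24-hour window of captures.
for h in 00 12 23; do
    date -u -d "2024-01-01 $h:00" +'trace_%H.pcap'
done
# trace_00.pcap
# trace_12.pcap
# trace_23.pcap
```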
So if you are trying to catch an intermittent networking issue, sometimes you need to leave tcpdump running for a while and then grab the logs after something goes wrong.
Want to create a new log file every X seconds and timestamp them? Use -G and supply a formatted filename. The command below will log one hour's worth of data to a file named after the start time of that file. See strftime(3) for formatting details.
tcpdump -w 'trace__%Y-%m-%d_%H:%M:%S.pcap' -G 3600
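Since tcpdump hands the -w pattern to strftime, you can preview what a file will be named by expanding the same pattern with date (the timestamp here is an arbitrary example):

```shell
# Expand the -w pattern from the tcpdump command above to see
# what a capture started at this moment would be called.
date -u -d '2024-01-01 13:00:00' +'trace__%Y-%m-%d_%H:%M:%S.pcap'
# trace__2024-01-01_13:00:00.pcap
```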
This will happily run forever and eat all of your disk space, though. If you want to limit it to a set number of files, use
tcpdump -w 'trace__%Y-%m-%d_%H:%M:%S.pcap' -G 3600 -W 5
This will cause tcpdump to exit after writing 5 files, i.e. after 5 hours.
But what if we want rotating files? Like with logs, so we can run forever and not fill up the disk. Well, we need to abandon the fancy naming of our files to get it to work: specify a maximum file size and drop the time constraint.
tcpdump -w trace -W 5 -C 1024
This will capture into 5 files named trace0, trace1, trace2, and so on. Each will hold roughly one gigabyte of data (-C counts in units of 1,000,000 bytes). Once 5 files exist, the oldest will be overwritten with new data.
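The ring-buffer behavior boils down to modulo arithmetic over the numeric suffix. A sketch (the file names match the tcpdump command above; the loop itself is just an illustration, not anything tcpdump runs):

```shell
# With -W 5, the numeric suffix cycles modulo 5, so the 6th file
# tcpdump opens lands on trace0 again, overwriting the oldest data.
for i in $(seq 0 6); do
    echo "file $((i + 1)) is written to trace$((i % 5))"
done
# file 1 is written to trace0
# ...
# file 6 is written to trace0
# file 7 is written to trace1
```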
If you try to use -G and -C at the same time, weird things happen. If you use a basic filename, tcpdump will go back to file 0 when either it runs out of space in the 5th file (a gig of data) OR the timer runs out. So if you capture less than a gig per hour, you'll only ever have one file, holding just the current timer's data.
If you use a formatted filename, then -W is just ignored and you get infinite files of either an hour or a gig of data each, whichever comes first.
So to get log-style rotation, you can't use -G or formatted filenames; stick to a plain filename with -C and -W.