How to get tcpdump to capture continuously?

frankhelk asked:
I have a SLES 12.2 server where I need to monitor certain network traffic to diagnose a problem that occurs every now and then. My plan is to record the traffic with tcpdump; when the problem arises, I can dissect the corresponding network traffic with Wireshark.

I've set up a main script which contains
tcpdump -iany -G $((30*60)) -n -w -z ./ net or net > tcpdump.statistics

and a helper script for some postprocessing:
gzip *.pcap
find . -maxdepth 0 -mmin +$((12*60)) -name '*.pcap.gz' -delete

I'd expect that script to run indefinitely, creating capture files containing 30 minutes of data each, until I stop tcpdump with, e.g., [CTRL-C] or kill. The postprocessing script, called after stopping (and whenever a new capture file is created), zips the finished capture files and limits the backlog of capture files to 12 hours.
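For reference, a complete rotating invocation might look like the following sketch. The interface, filter networks, and file names here are illustrative assumptions, since the original command redacts them:

```shell
# Hypothetical complete invocation -- the networks and file names are
# placeholders, not the original poster's values.
tcpdump -i any \
        -G $((30*60)) \
        -n \
        -w 'capture_%Y-%m-%d_%H%M%S.pcap' \
        -z ./postprocess.sh \
        'net 192.168.1.0/24 or net 10.0.0.0/8' \
        > tcpdump.statistics 2>&1
```

Note that when -G is used, the -w file name should contain strftime(3) escapes (as above); without them, tcpdump overwrites the same file on each rotation instead of producing a series of files.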

So far, so good. Now to the problem:

tcpdump stops capturing data in the middle of the second file and exits (without error, as far as I could see).

What have I missed?
Addendum, just for your information: Output of tcpdump --help to document the available options:
       tcpdump [ -AbdDefhHIJKlLnNOpqStuUvxX# ] [ -B buffer_size ]
               [ -c count ]
               [ -C file_size ] [ -G rotate_seconds ] [ -F file ]
               [ -i interface ] [ -j tstamp_type ] [ -m module ] [ -M secret ]
               [ --number ] [ -Q in|out|inout ]
               [ -r file ] [ -V file ] [ -s snaplen ] [ -T type ] [ -w file ]
               [ -W filecount ]
               [ -E spi@ipaddr algo:secret,...  ]
               [ -y datalinktype ] [ -z postrotate-command ] [ -Z user ]
               [ --time-stamp-precision=tstamp_precision ]
               [ --immediate-mode ] [ --version ]
               [ expression ]

noci, Software Engineer
Distinguished Expert 2018

Is it possible the disk was full when tcpdump stopped?
Using echo $? you can get its exit status; can you tell which one it was? It might be the only message given.
Did it terminate with an error (i.e., not exit 0)?
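A minimal illustration of capturing an exit status in the shell (a stand-in function is used here, since a real tcpdump run needs root privileges and a live interface):

```shell
# Stand-in for a long-running capture command; a real run would look
# something like: tcpdump -i any -w trace.pcap
run_capture() { return 3; }   # pretend the command failed with status 3

run_capture
status=$?                     # $? holds the exit status of the last command
echo "capture exited with status $status"
```

A status of 0 means a clean exit; anything else indicates an error, even if no message was printed.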

Instead of tcpdump, you may want to look at tshark, the command-line companion to Wireshark.
It can do more or less the same. IMHO it has better dissectors. Also, do not run tcpdump/tshark/wireshark as root while dissecting protocols:
the dissectors and other modules have not been scrutinized for buffer overflows etc.
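If you do try tshark, it has a built-in ring buffer that sidesteps external rotation scripts entirely. The interface, durations, and file counts below are illustrative assumptions:

```shell
# Hypothetical tshark ring-buffer capture: rotate every 30 minutes and
# keep at most 24 files (12 hours of backlog), overwriting the oldest.
tshark -i any \
       -b duration:$((30*60)) \
       -b files:24 \
       -w /var/tmp/capture.pcap
```

With -b files:N, the oldest file is deleted automatically once the limit is reached, so no separate cleanup pass is needed.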
Ok ... I've investigated a bit further. First, the version info of tcpdump is
tcpdump version 4.9.0
libpcap version 1.8.1
OpenSSL 1.0.2j-fips  26 Sep 2016
SMI-library: 0.4.8

I've changed the call to tcpdump to cycle every 10 minutes and ran it again. Looks like I didn't look closely enough at its behaviour before.

The first generated capture file is built correctly, with packets in the expected time frame.

But every file created afterwards seems to be completely empty (size = 0 bytes, with a zipped size of 54 bytes ...).

Any hints?

P.S.: No trace of problems in /var/log/messages or journalctl (both show no signs of tcpdump)

David Favor, Fractional CTO
Distinguished Expert 2018

As noci suggested, once tcpdump starts it continues and never just stops, unless some catastrophic error occurs, like a full disk.

Add echo $? as noci suggested and report what exit status you get.
Just to clarify, as I've seen now: tcpdump DOESN'T ABORT. I have to stop it with CTRL-C or kill.

The first file is created correctly; all subsequent capture files come up empty. Looks like cycling the output file breaks something without stopping tcpdump. After half an hour the directory looks like this:
total 56
-rw-r--r-- 1 root root 32121 Jan 24 15:04
-rw-r--r-- 1 root root    54 Jan 24 15:04
-rw-r--r-- 1 root root    54 Jan 24 15:14
-rw-r--r-- 1 root root    54 Jan 24 15:24
-rw-r--r-- 1 root root    54 Jan 24 15:34
-rw-r--r-- 1 root root     0 Jan 24 14:54 tcpdump.statistics
-rwxrwxrwx 1 root root   512 Jan 24 14:53
-rwxrwxrwx 1 root root   193 Jan 23 17:23

Unzipping such a 54-byte zip file results in
-rw-r--r-- 1 root root     0 Jan 24 15:04

Thanks to both of you, I found it out by myself ... looks like the gzip command in my postprocessing script not only zipped the finished file but also zipped away the newly created next capture file. In doing so, it apparently unlinked the file tcpdump was writing to, so I ended up with empty files being zipped.

I modified my scripts (this time with capture slices of 5 minutes); the postprocessing script now leaves alone all capture files aged 15 minutes or less:

# Capture
#   - all traffic on the nets and
#   - into files with timestamps like YYYY-MM-DD_hhmmss in the name
#   - cycling to a new file every 5*60 seconds ( = 5 minutes )
# and call the postprocessing script whenever starting a new capture file
tcpdump -iany -G $((5*60)) -n -w -z ./ net or net > tcpdump.statistics

# compress capture files aged more than 15 minutes
find . -maxdepth 1 -mmin +$((1*15)) -name '*.pcap' -exec gzip '{}' \;
# delete zipped capture files aged more than 12 hours
find . -maxdepth 1 -mmin +$((12*60)) -name '*.pcap.gz' -delete

Again, thanks to noci and David ... they gave me the mental nudge in the right direction.
noci, Software Engineer
Distinguished Expert 2018

Indeed, a find will pick up anything, including the next new file...
Then again, why use find at all? $1 contains the correct name of the file that has just been produced...

gzip $1

Should be sufficient. Probably running tcpdump with -z "$(which gzip)" would work as well.

From the tcpdump manpage:

       -z postrotate-command
              Used in conjunction with the -C or -G options, this will make tcpdump run " postrotate-command file " where file is the savefile being closed after  each  rotation.
              For example, specifying -z gzip or -z bzip2 will compress each savefile using gzip or bzip2.

              Note that tcpdump will run the command in parallel to the capture, using the lowest priority so that this doesn't disturb the capture process.

              And  in case you would like to use a command that itself takes flags or different arguments, you can always write a shell script that will take the savefile name as
              the only argument, make the flags & arguments arrangements and execute the command that you want.
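Following the manpage's suggestion of a wrapper script, the combined compress-and-prune logic could be sketched as a small shell function (the function body would be saved as a standalone script for use with -z, since -z wants an executable file; the 15-minute/12-hour figures mirror the scripts above):

```shell
# Sketch of a post-rotate hook: tcpdump -z runs it with the just-closed
# savefile as the first argument. Save the body as e.g. ./postrotate.sh.
postrotate() {
    f="$1"
    # Compress only the file tcpdump just closed; a wildcard here could
    # also grab the capture file tcpdump is currently writing.
    gzip "$f"
    # Prune compressed captures older than 12 hours.
    find "$(dirname "$f")" -maxdepth 1 -mmin +$((12*60)) \
         -name '*.pcap.gz' -delete
}
```

Because the hook receives exactly one file name, there is no race against the capture file tcpdump has already opened next.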
Yup - you're right.

In the meantime I've changed the first "find" command to "gzip $1" in the postprocessing script.

I still use a postprocessing script because the dump needs to run for a long time and the length of the backlog has to be limited (the second find command). For convenience I now shove tcpdump into the background, too.
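Backgrounding the capture so it survives a logout might look like the following sketch; the script name and log path are hypothetical:

```shell
# Hypothetical way to keep the capture running after logging out:
# capture.sh would contain the tcpdump invocation shown above.
nohup ./capture.sh > capture.log 2>&1 &
echo "capture running as PID $!"
```

The recorded PID can later be used to stop the capture cleanly with kill, which lets tcpdump close the current savefile properly.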

Thanks for mentioning that ...
