Who’s eating my swap?

Oh noes! Your monitoring just alerted you that a server is running low on swap. How can you find out what is using the most swap space? Well, top will tell you which processes are using the most memory, but not which process is using the most swap.

And with things like Docker and Kubernetes using cgroups to limit RAM usage, you might have plenty of free memory and still run into swap issues. (Protip: disable swap if you are running Kubernetes.) So how can you find out? Like this:

for file in /proc/*/status; do
  echo -n "$file "
  awk '/VmSwap|Name/{printf $2 " " $3} END{print ""}' "$file"
done | sort -k 3 -n -r | less

This will dig through all the processes on the system and sort them by VmSwap usage, making it easy to find the pid and process binary eating all your swap space.
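For a related quick check, you can also total up swap use across every process with a single awk. This is a sketch assuming the standard Linux /proc layout, where VmSwap is reported in kB:

```shell
# Sum the VmSwap line from every process's status file; values are in kB
awk '/^VmSwap/{ total += $2 } END { print total " kB of swap in use" }' /proc/*/status
```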


Rotating tcpdump

Edit: Turns out someone on the Unix & Linux Stack Exchange is smarter than me. You can do time-based rotation, but you must limit your filename to include only the part of the timestamp that you are rotating on. So only include %H if you are doing -G 3600 to get hourly files; when the day rolls over, the file for hour 00 will get overwritten. Thanks dfernan.

So if you are trying to catch an intermittent networking issue, sometimes you need to leave tcpdump running for a while and then grab the logs after something goes wrong.

Want to create a new log file every X seconds and timestamp them? Use -G and supply a strftime-formatted filename. The command below logs one hour's worth of data to a file named after the start time of that capture. See strftime(3) for formatting details.

tcpdump -w 'trace__%Y-%m-%d_%H:%M:%S.pcap' -G 3600
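If you want to preview what that pattern expands to before leaving tcpdump running, date(1) understands the same strftime specifiers (the trace__ prefix is just the example name from above):

```shell
# Render the same strftime pattern tcpdump will use for the capture filename
date '+trace__%Y-%m-%d_%H:%M:%S.pcap'
```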

This will happily run forever and eat all of your disk space, though. If you want to limit the capture to a set number of files, use -W.

tcpdump -w 'trace__%Y-%m-%d_%H:%M:%S.pcap' -G 3600 -W 5

This will cause tcpdump to stop after 5 hours.

But what if we want rotating files, like with logs, so we can run forever and not fill the disk? Well, we need to abandon the fancy naming of our files to get it to work: specify a maximum file size and drop the time constraint.

tcpdump -w trace -W 5 -C 1024

This will capture 5 files named trace0, trace1, trace2, and so on, each holding roughly one gigabyte of data (-C counts in units of 1,000,000 bytes). Once 5 files exist, the oldest is overwritten with new data.

If you try to use -G and -C at the same time, weird things happen. With a basic filename, tcpdump goes back to file 0 when either the 5th file fills up (a gig of data) OR the timer runs out. So if you capture less than a gig per hour, you'll only ever have one file, holding the current interval's data.

If you use a formatted filename, then -W is simply ignored and you get an infinite number of files, each holding an hour or a gig of data, whichever comes first.

So to get log style rotation, you need to use -W and -C without -G or formatted filenames.
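One extra trick that combines nicely with this: tcpdump's -z flag runs a command on each file as it is rotated out, so you can compress finished captures on the fly. A sketch (check that your tcpdump build supports -z; it needs root and a live interface to run):

```shell
# 5 rotating files of roughly 1 GB each; gzip each capture as tcpdump closes it
tcpdump -w trace -W 5 -C 1024 -z gzip
```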

mysqldump and DEFINER troubles

If you are using mysqldump to transfer a database between servers as a non-super user, and your dump contains VIEWs or FUNCTIONs that have a DEFINER (possibly even your own user), then you will get an error during import due to not being super enough.

ERROR 1227 (42000) at line XXXX: Access denied; you need (at least one of) the SUPER privilege(s) for this operation

This is what you typically get for a VIEW:

/*!50013 DEFINER=`slashterix`@`%` SQL SECURITY DEFINER */
/*!50001 VIEW `neater_data` AS (select `neat_data`.`foo` AS `foo`,`neat_data`.`bla` AS `bla` from `neat_data`) */;

And you will get one of these for a FUNCTION, depending on your MySQL version:

/*!50003 CREATE*/ /*!50020 DEFINER=`slashterix`@`%`*/ /*!50003 FUNCTION `myCoolFunc` (
CREATE DEFINER=`slashterix`@`%` FUNCTION `myCoolFunc` (

For the VIEW and the second FUNCTION style, a simple sed will work:

sed -r 's/DEFINER=`?\w+`?@`[^`]+`//'

The -r turns on extended regex support. For the first FUNCTION style, though, this strips out the definer and leaves an empty versioned comment that some MySQL versions don't like:

/*!50020 */

Use this one first to fix those cases:

sed -r 's/\/\*![0-9]+ DEFINER=`?\w+`?@`[^`]+`\*\/ //'

I string both seds together for maximum compatibility with all versions of MySQL.
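Wrapped up as a tiny filter, the chained seds look like this (the mysqldump/mysql endpoints in the usage note below are placeholders):

```shell
# strip_definers: remove DEFINER clauses from a mysqldump stream.
# The first sed drops versioned-comment definers, the second drops plain ones.
strip_definers() {
  sed -r 's/\/\*![0-9]+ DEFINER=`?\w+`?@`[^`]+`\*\/ //' \
    | sed -r 's/DEFINER=`?\w+`?@`[^`]+`//'
}

# Sample line from a dump, run through the filter:
echo '/*!50003 CREATE*/ /*!50020 DEFINER=`slashterix`@`%`*/ /*!50003 FUNCTION `myCoolFunc` (' \
  | strip_definers
```

In practice you would pipe the whole dump through it, e.g. mysqldump mydb | strip_definers | mysql -h new-server mydb.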

Parallel SSH

The knife command can use a lot of memory when doing a search, so knife ssh "name:web-*" can fail after eating all your RAMs and getting OOM killed.

parallel -j 10 -i \
  ssh -o StrictHostKeyChecking=no \
      -o UserKnownHostsFile=/dev/null \
      -i private.key \
      user@{}.example.com \
      'hostname; sudo chef-client;' \
  -- $( knife node list | grep '^web-' )

Different versions of parallel have different options; this one was used on CentOS 7.

  • -j number of jobs to run in parallel
  • -i enable use of {} to insert the strings
  • -- end of args; what follows is the list of things
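If parallel isn't available, GNU xargs can do the same fan-out: -P sets the job count and -I enables {} substitution. A sketch with a hard-coded host list and echo standing in for the real ssh, so you can dry-run the expansion (drop the echo and swap in knife node list to run for real):

```shell
# Stand-in host list; replace with: knife node list | grep '^web-'
printf 'web-01\nweb-02\nweb-03\n' \
  | xargs -P 10 -I {} \
      echo ssh -i private.key user@{}.example.com 'hostname; sudo chef-client'
```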