Perl is an incredibly powerful language, and the immense number of additional modules available from CPAN augments its usefulness to such an extent that we sometimes forget what is going on under the hood. Case in point: a smallish check program for Nagios/Icinga written in Perl. The program itself is rather simple: it retrieves a bit of data, does some date manipulation (using a few of said modules) and prints a result in typical Nagios fashion, as a one-liner, before exiting with an appropriate code indicating success or failure.

I was a bit surprised that the program seemed a bit sluggish; nothing specific, just that feeling that I sometimes get. The script gave the impression of taking longer to run than I'd imagined, so I rewrote it in C.

The C program is obviously faster. Much faster. I wasn't really surprised at the result, but as the program is so trivial, I took a few minutes to inspect what was going on. First the elapsed "wall clock" time. The diagram speaks for itself.

But where is that relatively high elapsed time in the Perl version coming from? A large amount of resources are being consumed by the system calls that Perl invokes to run the Perl version. A closer look at the system calls themselves reveals what is happening: over 1300 stat() and open() system calls are being invoked to find module files and their dependencies! 400+ read()s and 250 lseek()s do the rest. The "sluggishness" I was experiencing is explained.

I'm not saying you should run off and port all your Perl programs to C; that would be idiotic. What I'm saying is that if you have a program that runs very frequently, it may well be worth looking at its runtime a bit more closely, particularly if you have systems with a highish average load.
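For the curious, here is a minimal sketch of what a check program in the Nagios/Icinga plugin style looks like, written as a shell script. The load-average metric and the thresholds are my own illustration, not the logic of the original program; what matters is the convention it follows: print a single status line and exit with 0 for OK, 1 for WARNING, 2 for CRITICAL and 3 for UNKNOWN.

```shell
#!/bin/sh
# Illustrative Nagios-style check: the metric (1-minute load average)
# and the WARN/CRIT thresholds are made up for this example.
WARN=4 CRIT=8

# Fetch the metric; bail out with UNKNOWN (3) if we cannot read it.
load=$(awk '{print $1}' /proc/loadavg 2>/dev/null) || {
    echo "UNKNOWN - cannot read /proc/loadavg"
    exit 3
}

# Map the value onto a plugin state: 0=OK, 1=WARNING, 2=CRITICAL.
state=$(awk -v l="$load" -v w="$WARN" -v c="$CRIT" \
    'BEGIN { if (l >= c) print 2; else if (l >= w) print 1; else print 0 }')

# One line of output, then exit with the matching code.
case $state in
    0) echo "OK - load average is $load" ;;
    1) echo "WARNING - load average is $load" ;;
    2) echo "CRITICAL - load average is $load" ;;
esac
exit "$state"
```

The one-liner-plus-exit-code contract is the whole interface: the monitoring daemon reads the first line of stdout as the status text and the exit code as the state.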
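If you want to reproduce this kind of measurement yourself, the two standard tools are time(1) for the wall-clock comparison and strace(1) for the syscall breakdown. The check names below are hypothetical stand-ins for the Perl and C versions:

```shell
# Wall-clock ("real") versus CPU time for each implementation.
# check_example.pl and check_example are placeholder names.
time ./check_example.pl
time ./check_example

# strace -c runs the program and prints a summary table of syscall
# counts and time afterwards; -f follows child processes. The
# stat()/open() storm from Perl's module search shows up here.
strace -c -f ./check_example.pl > /dev/null
```

On the Perl side, most of those stat() and open() calls come from the interpreter probing every directory in @INC for each `use`d module and its dependencies, which is exactly the pattern the counts above revealed.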