I wrote multiple scripts in Perl and shell and I have compared the real execution time. In all the cases, the Perl script was more than 10 times faster than the shell script.
So I wondered: is it possible to write a shell script that is faster than the equivalent Perl script? And why is Perl faster than the shell even though I use the system function in my Perl script?
There are a few ways to make your shell scripts (e.g., Bash) execute faster.
Avoid external commands where the shell can do the work for you, e.g., excessive use of sed, grep, and awk for simple string/text manipulation.
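As an illustration of doing string work in the shell itself, Bash parameter expansion can replace a call to an external tool and avoid spawning a process (the file name and variables here are made up):

```shell
#!/usr/bin/env bash
# Hypothetical example: strip a ".log" suffix and replace dashes.

name="web-server-01.log"

# Spawns two extra processes (subshell + sed) -- slow inside a loop:
base=$(echo "$name" | sed 's/\.log$//')

# Bash built-in parameter expansion -- no extra process:
base="${name%.log}"          # strip the ".log" suffix
host="${base//-/_}"          # replace every "-" with "_"

echo "$base"   # web-server-01
echo "$host"   # web_server_01
```

The difference is invisible for one call but dominates when the substitution runs thousands of times in a loop.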
Use awk. If you are manipulating really BIG files, you can use grep to search for the patterns you want and then pass the matches to awk to "edit" them. grep's searching algorithm is very good and fast. If you only want the front or the end of a file, use head and tail. Most of what sed, grep, and cut do can be done with one awk script, or with Bash internals if the task is not complicated. Therefore, try to cut down the use of these tools, which overlap in their functions.
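A sketch of the grep-then-awk pattern (the log file name and its field layout are assumptions for illustration):

```shell
#!/usr/bin/env bash
# Let grep do the fast filtering, then let awk do the "editing".
# access.log and its whitespace-separated fields are hypothetical.

printf '%s\n' \
  'GET /index.html 200' \
  'POST /login 500' \
  'GET /about.html 200' > access.log

# grep narrows millions of lines down to the matches cheaply;
# awk only sees (and reformats) the lines that survive.
grep ' 200$' access.log | awk '{print $2}'
# Prints:
# /index.html
# /about.html

rm -f access.log
```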
Unix pipes/chaining are excellent, but using too many of them, e.g.

command | grep | grep | cut | sed

makes your code slow: each pipe adds overhead. For this example, just one awk does it all:

command | awk '{do everything here}'

The closest tool that can match Perl's speed for certain tasks, e.g., string manipulation or maths, is awk. Here's a fun benchmark for this solution; there are around 9 million numbers in the file.
Output
For each try, awk is faster than Perl.
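The pipe-collapsing advice above can be made concrete with a sketch (the user data and field layout here are invented):

```shell
#!/usr/bin/env bash
# Hypothetical data: user records as "name:role:shell".
printf '%s\n' \
  'alice:admin:/bin/bash' \
  'bob:user:/bin/sh' \
  'carol:admin:/bin/zsh' > users.txt

# Four processes and three pipes:
grep admin users.txt | grep -v bob | cut -d: -f1

# One awk process does the same filtering and extraction:
awk -F: '$2 == "admin" && $1 != "bob" {print $1}' users.txt
# Both print:
# alice
# carol

rm -f users.txt
```

Every stage in the first pipeline is a separate process plus a kernel pipe buffer; the single awk pays that startup and copying cost once.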
Lastly, try to learn what awk can do beyond one-liners.
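As a taste of awk beyond one-liners, here is a small standalone awk program using an associative array and an END block (the item data is invented):

```shell
#!/usr/bin/env bash
# Hypothetical data: "category item quantity" records.
printf '%s\n' \
  'fruit apple 3' \
  'fruit pear 2' \
  'veg carrot 5' > items.txt

awk '
  # Accumulate the third column per category (column one).
  { total[$1] += $3 }
  END {
    for (cat in total)
      printf "%s %d\n", cat, total[cat]
  }
' items.txt | sort
# Prints:
# fruit 5
# veg 7

rm -f items.txt
```

With user-defined functions, multiple input files, and getline, awk scripts can grow into real programs, which is where it starts competing with Perl for text-processing jobs.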