Consider the following shell script:
gzip -dc in.gz | sed -e 's/@/_at_/g' | gzip -c > out.gz
This has three processes working in parallel to decompress a stream, modify it, and re-compress it. Running it under time, I can see that user time is about twice real time, which indicates the pipeline is effectively working in parallel.
I’ve attempted to create the same program in Java by placing each task in its own thread. Unfortunately, the multithreaded Java program is only about 30% faster than the single-threaded version on the sample above. I’ve tried using both an Exchanger and a ConcurrentLinkedQueue. The ConcurrentLinkedQueue causes a lot of contention, although all three threads are generally kept busy. The Exchanger has lower contention, but is more complicated, and it doesn’t seem to keep the slowest worker running 100% of the time.
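For reference, the Exchanger approach can be reduced to a minimal sketch: two threads meet at an exchange point and swap buffers, so the producer hands a filled buffer to the consumer in one rendezvous. This is an illustrative simplification (class and method names are my own, and it performs only a single exchange), not the code from the repository:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Exchanger;

public class ExchangerSketch {
    // One exchange point: the producer fills a buffer and swaps it for an
    // empty one; the consumer (the calling thread) receives the filled buffer.
    public static List<String> run(List<String> input) throws InterruptedException {
        Exchanger<List<String>> ex = new Exchanger<>();

        Thread producer = new Thread(() -> {
            try {
                List<String> buf = new ArrayList<>();
                for (String s : input) buf.add(s.replace("@", "_at_"));
                ex.exchange(buf); // blocks until the consumer arrives
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();

        List<String> filled = ex.exchange(new ArrayList<>());
        producer.join();
        return filled;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run(List.of("a@b", "c@d"))); // [a_at_b, c_at_d]
    }
}
```

In the real pipeline the two sides would loop, repeatedly swapping a filled buffer for an empty one so allocations are amortized; the single rendezvous above is just the core mechanism.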
I’m trying to find a pure Java solution to this problem, without resorting to one of the bytecode-weaving frameworks or a JNI-based MPI.
Most of the concurrency research and APIs concern themselves with divide-and-conquer algorithms, giving each node work that is orthogonal and independent of prior calculations. Another approach to concurrency is the pipeline approach, where each worker does some work and passes the data on to the next worker.
I’m not trying to find the most efficient way to sed a gzip’d file, but rather I’m looking at how to efficiently break down tasks in a pipeline, in order to reduce the runtime to that of the slowest task.
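A third hand-off mechanism worth comparing against the Exchanger and ConcurrentLinkedQueue is a bounded BlockingQueue between stages: the bounded capacity applies back-pressure so no stage races ahead, and the slowest stage naturally sets the pace. Here is a minimal sketch of that idea (class name, queue sizes, and the string sentinel are my own choices, not from the question's sources):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PipelineSketch {
    // Sentinel compared by identity, so it can never collide with input data.
    private static final String EOF = new String("EOF");

    public static List<String> run(List<String> input) throws InterruptedException {
        // Bounded queues apply back-pressure between the three stages.
        BlockingQueue<String> q1 = new ArrayBlockingQueue<>(1024);
        BlockingQueue<String> q2 = new ArrayBlockingQueue<>(1024);
        List<String> output = new ArrayList<>();

        Thread reader = new Thread(() -> {
            try {
                for (String line : input) q1.put(line);
                q1.put(EOF);
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        Thread worker = new Thread(() -> {
            try {
                for (String line; (line = q1.take()) != EOF; )
                    q2.put(line.replace("@", "_at_"));
                q2.put(EOF);
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        Thread writer = new Thread(() -> {
            try {
                for (String line; (line = q2.take()) != EOF; )
                    output.add(line);
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        reader.start(); worker.start(); writer.start();
        reader.join(); worker.join(); writer.join();
        return output;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run(List.of("a@b", "c@d"))); // [a_at_b, c_at_d]
    }
}
```

Per-line hand-offs like this still pay one queue operation per line, so in practice batching many lines (or a char[] chunk) per queue element cuts the synchronization cost considerably.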
Current timings for a 10m line file are as follows:
Testing via shell
real 0m31.848s
user 0m58.946s
sys 0m1.694s
Testing SerialTest
real 0m59.997s
user 0m59.263s
sys 0m1.121s
Testing ParallelExchangerTest
real 0m41.573s
user 1m3.436s
sys 0m1.830s
Testing ConcurrentQueueTest
real 0m44.626s
user 1m24.231s
sys 0m10.856s
I’m offering a bounty for a 10% improvement in Java, as measured by real time on a four core system with 10m rows of test data. Current sources are available on Bitbucket.
I individually verified the time taken; it seems that reading takes less than 10% of the total time, and reading plus processing takes less than 30%.
So I took ParallelExchangerTest (the best performer among your code) and modified it to have just two threads: the first thread does the reading and replacing, and the second thread does the writing.
Here are the figures to compare (on my machine: an Intel dual core (not Core 2) running Ubuntu with 1 GB of RAM).
I knew that the string processing takes the longest, so I replaced line.replace with matcher.replaceAll, and got these figures.
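The likely reason replaceAll on a reused Matcher helps is that String.replace historically compiled a fresh Pattern on every call, whereas a precompiled Pattern plus a reset Matcher avoids that per-line cost. A minimal sketch of the pattern-reuse idea (names are mine, not from the repository; note a shared Matcher is not thread-safe, so each worker thread needs its own):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ReplaceSketch {
    // Compile the pattern once and reuse one Matcher per thread, instead of
    // letting String.replace build a new Pattern on every call.
    private static final Pattern AT = Pattern.compile("@", Pattern.LITERAL);
    private static final Matcher M = AT.matcher(""); // single-threaded use only

    static String replaceAt(String line) {
        return M.reset(line).replaceAll("_at_");
    }

    public static void main(String[] args) {
        System.out.println(replaceAt("user@example.com")); // user_at_example.com
    }
}
```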
Then I went a step further: instead of reading one line at a time, I read char[] buffers of various sizes and timed each (keeping the regexp search/replace), and got these figures.
It looks like 500 bytes is the optimal buffer size for this data.
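The chunked-read approach can be sketched as below. Note this works for this particular pattern because "@" is a single character and so can never straddle a chunk boundary; a multi-character pattern would need carry-over handling between chunks. Class and method names here are my own illustration, not the forked code:

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.io.StringWriter;
import java.io.Writer;
import java.util.regex.Pattern;

public class ChunkedReplace {
    // Copy in fixed-size char[] chunks, applying the replacement per chunk.
    // Safe here: the single-char pattern "@" cannot span two chunks.
    static void copy(Reader in, Writer out, int bufSize) throws IOException {
        Pattern at = Pattern.compile("@");
        char[] buf = new char[bufSize];
        for (int n; (n = in.read(buf)) != -1; ) {
            out.write(at.matcher(new String(buf, 0, n)).replaceAll("_at_"));
        }
    }

    public static void main(String[] args) throws IOException {
        StringWriter sw = new StringWriter();
        copy(new StringReader("a@b@c"), sw, 500); // 500-char buffer, as timed above
        System.out.println(sw); // a_at_b_at_c
    }
}
```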
I forked the repository, and a copy of my changes is here:
https://bitbucket.org/chinmaya/java-concurrent_response/