Challenge:
Perform a bitwise XOR on two equal-sized buffers. The buffers are required to be the Python str type, since that is traditionally the type for data buffers in Python. Return the result as a str. Do this as fast as possible.
The inputs are two 1 megabyte (2**20 byte) strings.
The challenge is to substantially beat my inefficient algorithm using Python or existing third-party Python modules (relaxed rules: or create your own module). Marginal increases are useless.
```python
from os import urandom
from numpy import frombuffer, bitwise_xor, byte

def slow_xor(aa, bb):
    a = frombuffer(aa, dtype=byte)
    b = frombuffer(bb, dtype=byte)
    c = bitwise_xor(a, b)
    r = c.tostring()
    return r

aa = urandom(2**20)
bb = urandom(2**20)

def test_it():
    for x in xrange(1000):
        slow_xor(aa, bb)
```
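The loop above can be timed per-call with a small harness; the sketch below uses Python 3 spellings (`tobytes` in place of the Python-2-era `tostring`) and fewer iterations than the post's 1000, but measures the same baseline:

```python
import os
import timeit

import numpy as np

def slow_xor(aa, bb):
    # Baseline from the question, with tobytes() replacing the
    # deprecated tostring().
    a = np.frombuffer(aa, dtype=np.byte)
    b = np.frombuffer(bb, dtype=np.byte)
    return np.bitwise_xor(a, b).tobytes()

aa = os.urandom(2**20)
bb = os.urandom(2**20)

# The post times 1000 runs; 10 keeps this sketch quick.
elapsed = timeit.timeit(lambda: slow_xor(aa, bb), number=10)
```

Dividing `elapsed` by the iteration count gives the per-call cost to compare candidate implementations against.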
First Try

Using `scipy.weave` and SSE2 intrinsics gives a marginal improvement. The first invocation is a bit slower, since the code needs to be loaded from disk and cached; subsequent invocations are faster.

Second Try
Taking the comments into account, I revisited the code to find out whether the copying could be avoided. It turns out I had read the documentation of the string object wrong, so here goes my second try.
The difference is that the string is allocated inside the C code. It’s impossible to have it aligned at a 16-byte-boundary as required by the SSE2 instructions, therefore the unaligned memory regions at the beginning and the end are copied using byte-wise access.
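Why the unaligned head and tail need byte-wise handling can be seen from Python: numpy's array interface exposes the buffer's start address, and for a bytes/str object CPython makes no 16-byte alignment promise. A small check (`is_16_byte_aligned` is a hypothetical helper, not from the original post):

```python
import os

import numpy as np

def is_16_byte_aligned(buf):
    # Address of the first byte of the buffer as seen by numpy.
    # CPython allocates string objects with no particular
    # alignment, so SSE2's 16-byte load requirement is only
    # met by luck.
    arr = np.frombuffer(buf, dtype=np.uint8)
    addr = arr.__array_interface__['data'][0]
    return addr % 16 == 0
```

Running this over freshly allocated buffers shows alignment varies from allocation to allocation, which is exactly why the C code falls back to byte-wise access at the edges.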
The input data is handed in using numpy arrays anyway, because `weave` insists on copying Python `str` objects to `std::string`s. `frombuffer` doesn't copy, so this is fine, but the memory is not aligned on a 16-byte boundary, so we need to use `_mm_loadu_si128` instead of the faster `_mm_load_si128`.

Instead of using `_mm_store_si128`, we use `_mm_stream_si128`, which makes sure that any writes are streamed to main memory as soon as possible. This way, the output array does not use up valuable cache lines.

Timings
As for the timings: the `slow_xor` entry in the first edit referred to my improved version (inline bitwise xor, `uint64`); I have removed that confusion. `slow_xor` now refers to the code from the original question. All timings are for 1000 runs.

- `slow_xor`: 1.85s (1x)
- `faster_slow_xor`: 1.25s (1.48x)
- `inline_xor`: 0.95s (1.95x)
- `inline_xor_nocopy`: 0.32s (5.78x)

The code was compiled using gcc 4.4.3, and I've verified that the compiler actually uses the SSE instructions.
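The `faster_slow_xor` entry above is described as the inline-bitwise-xor/`uint64` variant. A sketch of that idea (Python 3 spelling, `tobytes` instead of `tostring`; the exact original code is not reproduced in this post):

```python
import numpy as np

def faster_slow_xor(aa, bb):
    # Reinterpret the buffers as 64-bit words, so each numpy
    # element operation XORs 8 bytes instead of 1. The 1 MiB
    # inputs are a multiple of 8 bytes, so no tail handling
    # is needed here. XOR is endianness-agnostic, so the
    # uint64 view produces the same bytes as the byte view.
    a = np.frombuffer(aa, dtype=np.uint64)
    b = np.frombuffer(bb, dtype=np.uint64)
    return np.bitwise_xor(a, b).tobytes()
```

The 1.48x speedup over the byte-wise baseline comes from cutting the number of numpy element operations by a factor of eight.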