I’m working on a program where I store some data in an integer and process it bitwise. For example, I might receive the number 48, which I will process bit-by-bit. In general the endianness of integers depends on the machine representation of integers, but does Python do anything to guarantee that the ints will always be little-endian? Or do I need to check endianness like I would in C and then write separate code for the two cases?
I ask because my code runs on a Sun machine and, although the one it’s running on now uses Intel processors, I might have to switch to a machine with Sun processors in the future, which I know are big-endian.
Python’s `int` is an abstract numeric type, so bitwise operations on it (shifting, masking) behave identically on every platform: bit 0 is always the least significant bit, regardless of the processor’s endianness. You don’t need separate code paths the way you might in C. Endianness only matters when you convert between ints and raw bytes, and the `struct` module lets you do that conversion (and vice versa, and for some other data types too) in native, little-endian, or big-endian order, depending on the format string you choose: start the format with `@` or no byte-order character for native endianness (and native sizes — all the other prefixes use standard sizes), `=` for native order with standard sizes, `<` for little-endian, `>` or `!` for big-endian. This is byte-by-byte, not bit-by-bit; I’m not sure exactly what you mean by bit-by-bit processing in this context, but I assume it can be accommodated similarly.
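A small sketch, using your example value 48, of both points — `struct` byte-order prefixes, and the fact that bitwise processing of the int itself is the same everywhere:

```python
import struct

value = 48

# Pack the same 32-bit unsigned int in both byte orders.
little = struct.pack('<I', value)   # b'0\x00\x00\x00'
big = struct.pack('>I', value)      # b'\x00\x00\x000'

# The byte blobs differ, but unpacking with the matching
# format string recovers the same integer on any machine.
assert struct.unpack('<I', little)[0] == value
assert struct.unpack('>I', big)[0] == value

# Bitwise processing of the int itself is endianness-independent:
# bit 0 is always the least significant bit, everywhere.
bits = [(value >> i) & 1 for i in range(8)]
print(bits)  # [0, 0, 0, 0, 1, 1, 0, 0]
```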
For fast “bulk” processing in simple cases, consider also the `array` module — its `frombytes` and `tobytes` methods (named `fromstring` and `tostring` before Python 3.2) can convert a large number of bytes speedily, and the `byteswap` method can get you the “other” endianness (native to non-native or vice versa), again rapidly and for a large number of items (the whole array).
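A quick sketch of the `array` approach — the values here are just illustrative:

```python
from array import array

# Three 16-bit unsigned integers in native byte order.
a = array('H', [1, 256, 48])

native = a.tobytes()   # raw bytes in the machine's own order
a.byteswap()           # swap the bytes within every item, in place
swapped = a.tobytes()  # the same data in the "other" endianness

# byteswap reverses the two bytes of each 'H' item:
# 1 -> 256, 256 -> 1, 48 -> 48 * 256 = 12288
print(list(a))  # [256, 1, 12288]
```

This is handy when a whole buffer of fixed-size integers arrives in the wrong endianness: one `byteswap` call fixes the entire array, with no per-item Python loop.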