Assumption:
Converting a byte[] from Little Endian to Big Endian means inverting the order of the bits in each byte of the byte[].
Assuming this is correct, I tried the following to check my understanding:
    byte[] data = new byte[] { 1, 2, 3, 4, 5, 15, 24 };
    byte[] inverted = ToBig(data);

    var little = new BitArray(data);
    var big = new BitArray(inverted);

    // Print the bits of the original data, eight per group.
    // (BitArray enumerates the least significant bit of each byte first.)
    int i = 1;
    foreach (bool b in little)
    {
        Console.Write(b ? '1' : '0');
        if (i == 8) { i = 0; Console.Write(' '); }
        i++;
    }
    Console.WriteLine();

    // Same, for the converted bytes.
    i = 1;
    foreach (bool b in big)
    {
        Console.Write(b ? '1' : '0');
        if (i == 8) { i = 0; Console.Write(' '); }
        i++;
    }
    Console.WriteLine();

    // Print both arrays as hex, then as decimal values.
    Console.WriteLine(BitConverter.ToString(data));
    Console.WriteLine(BitConverter.ToString(ToBig(data)));

    foreach (byte b in data)
    {
        Console.Write("{0} ", b);
    }
    Console.WriteLine();

    foreach (byte b in inverted)
    {
        Console.Write("{0} ", b);
    }
The convert method:
    private static byte[] ToBig(byte[] data)
    {
        byte[] inverted = new byte[data.Length];

        for (int i = 0; i < data.Length; i++)
        {
            // Read the bits of this byte and write them out in reverse order.
            var bits = new BitArray(new byte[] { data[i] });
            var invertedBits = new BitArray(bits.Count);

            int x = 0;
            for (int p = bits.Count - 1; p >= 0; p--)
            {
                invertedBits[x] = bits[p];
                x++;
            }

            invertedBits.CopyTo(data, i);
        }

        return data;
    }
The output of this little application is different from what I expected:
    00000001 00000010 00000011 00000100 00000101 00001111 00011000
    00000001 00000010 00000011 00000100 00000101 00001111 00011000
    80-40-C0-20-A0-F0-18
    01-02-03-04-05-0F-18
    1 2 3 4 5 15 24
    1 2 3 4 5 15 24
For some reason the data remains the same, unless printed using BitConverter.
What am I not understanding?
Update
The corrected code (copying into inverted and returning it instead of data, as pointed out in the answer below) produces the following output:
    10000000 01000000 11000000 00100000 10100000 11110000 00011000
    00000001 00000010 00000011 00000100 00000101 00001111 00011000
    01-02-03-04-05-0F-18
    80-40-C0-20-A0-F0-18
    1 2 3 4 5 15 24
    128 64 192 32 160 240 24
But as I have now been told, my method is incorrect anyway, because I should be inverting the bytes and not the bits?
The hardware developer I'm working with told me to invert the bits because he cannot read the data.
Context where I’m using this
The application that will use this does not really work with numbers.
I'm supposed to save a stream of bits to a file, where 1 = white and 0 = black. They represent the pixels of a 256×64 bitmap: bytes 0 to 31 are the first row of pixels, bytes 32 to 63 the second row, and so on.
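To illustrate, this is roughly how I build one row, assuming the leftmost pixel belongs in the most significant bit of each byte (the names here are made up for the example, not my actual code):

    // Pack 256 pixels (true = white, false = black) into 32 bytes,
    // leftmost pixel in the most significant bit of each byte.
    static byte[] PackRowMsbFirst(bool[] pixels)
    {
        byte[] row = new byte[pixels.Length / 8];
        for (int i = 0; i < pixels.Length; i++)
        {
            if (pixels[i])
                row[i / 8] |= (byte)(0x80 >> (i % 8));
        }
        return row;
    }

If the hardware instead wants the leftmost pixel in the least significant bit, the mask becomes (byte)(1 << (i % 8)), which is exactly a per-byte bit reversal of the output above.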
I have code that outputs these bits… but the developer is telling me they are in the wrong order… He says the bytes are fine but the bits are not.
So I’m left confused :p
Your method may be correct at this point. There are different meanings of endianness, and it depends on the hardware.
Typically, it's used for converting between computing platforms. Most CPU vendors now use the same bit ordering but different byte ordering for different chipsets. This means that if you are passing a 2-byte int from one system to another, you leave the bits alone but swap bytes 1 and 2; e.g. 0x1234 is stored as the bytes 12 34 on a big-endian system and as 34 12 on a little-endian one.
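As a minimal sketch of that swap for a single 16-bit value (a standalone example, not code from the question):

    // Swap the two bytes of a 16-bit value: 0x1234 becomes 0x3412.
    static ushort SwapBytes(ushort value)
    {
        return (ushort)((value << 8) | (value >> 8));
    }

Note that the bits inside each byte are untouched; only the byte positions change.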
However, this isn't always true. Some hardware still uses inverted BIT ordering, so what you have may be correct. You'll need to either trust your hardware developer or look into it further.
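If it does turn out your hardware wants the bits reversed, here is a sketch of a plain per-byte bit reversal, equivalent to what your BitArray loop does but without the intermediate arrays:

    // Reverse the bit order within one byte: 0x01 becomes 0x80.
    static byte ReverseBits(byte b)
    {
        byte result = 0;
        for (int i = 0; i < 8; i++)
        {
            result = (byte)((result << 1) | (b & 1));
            b >>= 1;
        }
        return result;
    }

Applied to each element of your array, this produces the same 80-40-C0-20-A0-F0-18 output you saw from BitConverter.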
I recommend reading up on this on Wikipedia – always a great source of info:
http://en.wikipedia.org/wiki/Endianness
Your ToBig method has a bug.
At the end:

    invertedBits.CopyTo(data, i);
    ...
    return data;
You need to change that to:

    invertedBits.CopyTo(inverted, i);
    ...
    return inverted;
You're overwriting your input data, so you're getting back both arrays as the same inverted array. The problem is that arrays are reference types, so modifying the parameter inside the method modifies the caller's original data.
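To see the reference-type behavior in isolation, here is a minimal standalone example (the method name Touch is just for illustration):

    static byte[] Touch(byte[] input)
    {
        input[0] = 99;   // writes through to the caller's array
        return input;    // returns the same reference, not a copy
    }

    byte[] data = { 1, 2, 3 };
    byte[] other = Touch(data);
    Console.WriteLine(data[0]);                      // 99 - the original was modified
    Console.WriteLine(ReferenceEquals(data, other)); // True - one array, two variables

Since ToBig returned data itself, data and inverted were two names for one array, which is why the decimal printouts matched.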