I have a function that writes a 32-bit value through the pointer it's given, and a uint64_t on the stack. Is the following code a sane way to store the value into it?
uint64_t size = 0;
// ...
getBytes((uint32_t*)&size+0x1);
I’m assuming that this would be the canonical, safe style:
uint64_t size = 0;
// ...
uint32_t smallSize;
getBytes(&smallSize);
size = smallSize;
No. It works correctly only on big-endian machines: there the most-significant 32 bits come first in memory, so `(uint32_t*)&size + 1` points at the low-order half of `size`. On a little-endian machine the same pointer addresses the high-order half, and the stored value ends up shifted left by 32 bits. Assuming a particular byte order – without even checking for it – is not sane.
Even if you are sure that your program runs only on big-endian machines right now, you never know whether it might have to run on a little-endian machine in the future. (I'm writing this on a computer made by a company which used big-endian processors for decades, then switched to little-endian processors a couple of years ago, and is now also quite successful with bi-endian processors in certain devices ;-))