I’m trying to convert a struct to a char array to send over the network. However, I get some weird output from the char array when I do.
#include <stdio.h>

struct x { int x; } __attribute__((packed));

int main(void)
{
    struct x a;
    a.x = 127;
    char *b = (char *)&a;
    int i;
    for (i = 0; i < 4; i++)
        printf("%02x ", b[i]);
    printf("\n");
    for (i = 0; i < 4; i++)
        printf("%d ", b[i]);
    printf("\n");
    return 0;
}
Here is the output for various values of a.x (on an X86 using gcc):
127:
7f 00 00 00
127 0 0 0
128:
ffffff80 00 00 00
-128 0 0 0
255:
ffffffff 00 00 00
-1 0 0 0
256:
00 01 00 00
0 1 0 0
I understand the values for 127 and 256, but why do the numbers change when going to 128? Why wouldn’t it just be:
80 00 00 00
128 0 0 0
Am I forgetting to do something in the conversion process or am I forgetting something about integer representation?
*Note: This is just a small test program. In a real program I have more in the struct, better variable names, and I convert to little-endian.
The x format specifier by itself says that the argument is an int, and since the number is negative, printf requires eight characters to show all four non-zero bytes of the int-sized value. The 0 flag tells printf to pad the output with zeros, and the 2 says that the minimum field width is two characters. As far as I can tell, printf doesn’t provide a way to specify a maximum width, except for strings.

Now then, you’re only passing a char, but because of default argument promotion for "..." parameters it gets promoted to an int, and a bare x tells the function to use that full int. Since char is signed on x86, the byte 0x80 is sign-extended to 0xffffff80 during the promotion, which is where the extra ff bytes come from. Try the hh length modifier to tell the function to treat the argument as just a char instead: