Consider this program:

#include <stdio.h>

int main()
{
    float f = 11.22;
    double d = 44.55;
    int i, j;

    i = f; // implicit conversion from float to int
    j = d; // implicit conversion from double to int

    printf("i = %d, j = %d, f = %d, d = %d\n", i, j, f, d);
    // This prints the following:
    // i = 11, j = 44, f = -536870912, d = 1076261027
    return 0;
}
Can someone explain why the conversion from double/float to int works correctly in the first case (the assignments), but not when done inside the printf call?

This program was compiled with gcc-4.1.2 on a 32-bit Linux machine.
EDIT:
Zach’s answer seems logical, i.e. the use of format specifiers to figure out what to pop off the stack. However, then consider this follow-up question:
#include <stdio.h>

int main()
{
    char c = 'd'; // sizeof c is 1; however, a character literal such as
                  // 'd' has type int (sizeof 'd' == sizeof(int)) in ANSI C

    printf("lit = %c, lit = %d, c = %c, c = %d\n", 'd', 'd', c, c);
    // This prints: lit = d, lit = 100, c = d, c = 100
    // How does printf pop off the right number of bytes here, even though
    // the sizes implied by the format specifiers don't match the sizes of
    // the passed arguments (char: 1 byte, character literal: 4 bytes)?
    return 0;
}
How does this work?
The printf function uses the format specifiers to figure out what to pop off the stack. So when it sees %d, it pops off 4 bytes and interprets them as an int, which is wrong (the binary representation of (float)3.0 is not the same as that of (int)3).

You’ll need to either use the %f format specifier or cast the arguments to int. If you’re using a new enough version of gcc, turning on stronger warnings (e.g. -Wall, which includes -Wformat) catches this sort of error.

Response to the edited part of the question:
C’s integer promotion rules say that all types smaller than int get promoted to int when passed as a vararg. So in your case, the 'd' is getting promoted to an int, then printf is popping off an int and casting it to a char. The best reference I could find for this behavior was this blog entry.