I was going through K&R and had trouble understanding the following lines on page 197 (section A6):
Integral conversions: any integer is converted to a given unsigned type by finding the smallest non-negative value that is congruent to that integer, modulo one more than the largest value that can be represented in the unsigned type.
Can anybody explain this in a bit more detail?
Thanks
Let's take this bit by bit, working backwards from the end:
Now, the why part: the Standard does not mandate any particular bit representation, hence the abstract modular-arithmetic wording. In a two's complement representation (by far the most commonly used), this conversion does not change the bit pattern at all (unless there is a truncation, of course).
Refer to Integral Conversions from the Standard for further details.