I was working with bit shift operators (see my question Bit Array Equality) and an SO user pointed out a bug in my calculation of the shift operand: I was computing a range of [1, 32] instead of [0, 31] for an int. (Hurrah for the SO community!)
In fixing the problem, I was surprised to find the following behavior:
-1 << 32 == -1
In fact, it would seem that n << s is compiled (or interpreted by the CLR; I didn't check the IL) as n << (s % bs(n)), where bs(n) is the size, in bits, of n.
I would have expected:
-1 << 32 == 0
It would seem that the compiler realizes you are shifting beyond the size of the target and corrects your mistake.
This is purely an academic question, but does anyone know whether this is defined in the spec (I could not find anything at 7.8 Shift operators), whether it is just a fortuitous fact of undefined behavior, or whether there is a case where this might produce a bug?
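The behavior I'm seeing can be modeled outside C#. Here is a small Python sketch (Python ints are arbitrary-precision, so the 32-bit truncation and signed reinterpretation must be emulated explicitly; the helper name shl32 is mine, not anything from the spec):

```python
def shl32(n: int, s: int) -> int:
    """Model C#'s << on a 32-bit int: only the low five bits of the count are used."""
    s &= 0x1F                      # 32 & 0x1F == 0, so a 32-bit shift is a no-op
    r = (n << s) & 0xFFFFFFFF      # keep only the low 32 bits
    # Reinterpret the unsigned 32-bit result as a signed int
    return r - (1 << 32) if r >= (1 << 31) else r

print(shl32(-1, 32))  # -1: matches the surprising result above
print(shl32(-1, 33))  # -2: treated as a 1-bit shift
print(shl32(-1, 31))  # -2147483648: only the sign bit remains
```

Under this model, -1 << 32 leaves the value untouched because the effective shift count is 32 & 0x1F == 0.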
I believe that the relevant part of the spec is here. For an int, the shift count is computed from count & 0x1F (only the low-order five bits of the right operand are used), and:

If the resulting shift count is zero, the shift operators simply return the value of x.
The value 32 is 0x20. The expression 0x20 & 0x1F evaluates to 0. Therefore, the shift count is zero, and no shift is done; the expression -1 << 32 (or any x << 32) just returns the original value.