This is a follow-up question. Java stores integers in two’s complement, so you can do the following:
int ALPHA_MASK = 0xff000000;
In C# this requires an unsigned integer, uint, because C# interprets the literal as 4278190080 instead of -16777216.
My question: how do you declare negative values in hexadecimal notation in C#, and how exactly are integers represented internally? What are the differences from Java here?
C# (rather, .NET) also uses two’s complement, but it supports both signed and unsigned integer types (which Java doesn’t). A bit mask is more naturally an unsigned thing – why should one bit be different from all the other bits?
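For comparison, here is a small sketch of the Java side of the question: the same bit pattern fits directly into a signed int literal, and its unsigned interpretation can be recovered with Integer.toUnsignedLong:

```java
public class TwosComplementDemo {
    public static void main(String[] args) {
        // In Java, 0xff000000 is a legal int literal; the high bit set
        // makes it negative under two's complement.
        int alphaMask = 0xff000000;

        System.out.println(alphaMask);                          // -16777216

        // The same 32 bits, read as an unsigned quantity:
        System.out.println(Integer.toUnsignedLong(alphaMask));  // 4278190080
    }
}
```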
In this specific case, it is safe to use an unchecked cast:

int ALPHA_MASK = unchecked((int)0xFF000000);
To ‘directly’ represent this number as a signed value, you write:

int ALPHA_MASK = -0x1000000; // == -16777216
Hexadecimal is not (or should not be) any different from decimal: to represent a negative number, you write a negative sign followed by the digits representing the absolute value.
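The same rule holds in Java, so it can be checked there: the minus sign plus the hex digits of the absolute value (16777216 = 0x1000000) reproduce exactly the mask’s bit pattern:

```java
public class NegativeHexDemo {
    public static void main(String[] args) {
        // Negative sign + hex digits of the absolute value:
        int alphaMask = -0x1000000;

        System.out.println(alphaMask);                      // -16777216

        // The underlying 32-bit pattern is the original mask:
        System.out.println(Integer.toHexString(alphaMask)); // ff000000
    }
}
```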