Consider the following code:
void f(byte x) {print("byte");}
void f(short x) {print("short");}
void f(int x) {print("int");}
void main() {
    byte b1, b2;
    short s1, s2;
    f(b1 + b2); // byte + byte = int
    f(s1 + s2); // short + short = int
}
In C++, C#, D, and Java, both function calls resolve to the "int" overload. I already realize this behavior is "in the specs", but why are languages designed this way? I'm looking for a deeper reason.
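For anyone who wants to reproduce this, here is the example above transcribed into compilable Java (one of the languages named in the question); the class name and initial values are just illustrative:

```java
// Both calls resolve to the int overload, because Java's binary numeric
// promotion widens byte and short operands to int before the addition.
public class Promotion {
    static void f(byte x)  { System.out.println("byte"); }
    static void f(short x) { System.out.println("short"); }
    static void f(int x)   { System.out.println("int"); }

    public static void main(String[] args) {
        byte b1 = 1, b2 = 2;
        short s1 = 3, s2 = 4;
        f(b1 + b2); // byte + byte has type int -> prints "int"
        f(s1 + s2); // short + short has type int -> prints "int"
    }
}
```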
To me, it makes sense for the result to be the smallest type able to represent all possible values of both operands, for example:
byte + byte --> byte
sbyte + sbyte --> sbyte
byte + sbyte --> short
short + short --> short
ushort + ushort --> ushort
short + ushort --> int
// etc...
This would eliminate inconvenient code such as short s3 = (short)(s1 + s2), and IMO it would be far more intuitive and easier to understand.
Is this a left-over legacy from the days of C, or are there better reasons for the current behavior?
Quoted from this MSDN blog post:
Also, it’s worth noting that adding in these casts only means extra typing, and nothing more. Once the JIT (or possibly the static compiler itself) reduces the arithmetic operation to a basic processor instruction, there’s nothing clever going on – it’s just whether the number gets treated as an int or a byte.

This is a good question, however, and not at all an obvious one. Hope that makes the reasons clear for you now.
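To make the quoted point concrete, here is a small Java sketch (values are illustrative) showing what the "extra typing" actually does: the addition yields an int, and the explicit cast simply keeps the low 16 bits, wrapping modulo 2^16:

```java
public class CastCost {
    public static void main(String[] args) {
        short s1 = 30000, s2 = 30000;
        // short s3 = s1 + s2;     // does not compile: the result is int
        int sum = s1 + s2;         // 60000, carried as an int
        short s3 = (short) sum;    // cast truncates to the low 16 bits
        System.out.println(sum);   // prints 60000
        System.out.println(s3);    // prints -5536 (60000 - 65536)
    }
}
```

The cast emits no extra arithmetic at the machine level; it only tells the compiler to reinterpret the value as 16 bits, which is why the language can afford to make you write it explicitly.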