Apparently, the JIT-compiled native code for a generic type is shared across instantiations with reference type arguments, but not across instantiations with value type arguments.
Why is that? Could someone explain the in-depth details?
To make this more concrete:

class MyGenericType<T>
{
}

MyGenericType<string> and MyGenericType<Shape> will share a single body of generated native code, whereas MyGenericType<int> and MyGenericType<long> will NOT. That raises the question of whether using reference types is more efficient, e.g.

MyGenericType<int> vs. MyGenericType<SomeIntegerWrapper>
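Here SomeIntegerWrapper stands for a hypothetical reference type wrapping an int, something like:

```csharp
// Hypothetical reference-type wrapper around an int,
// used only to force the reference-type code path.
class SomeIntegerWrapper
{
    public int Value;
}
```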
Thanks
First, to correct a fallacy in the question: int and System.Int32 are synonymous, so MyGenericType<int> and MyGenericType<Int32> are exactly the same type.

Secondly, to address the question (and slightly expand on Mehrdad's answer): consider what the CLR needs to know about a type. It includes:

- How much memory a value of that type occupies
- Whether a value of that type contains references that the garbage collector needs to follow
For all reference types, the answers to these questions are the same. The size is just the size of a pointer, and the value is always just a reference (so if the variable is considered a root, the GC needs to recursively descend into it).
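A rough illustration of the size point, using System.Runtime.CompilerServices.Unsafe (built into .NET Core, or available as a NuGet package on .NET Framework); the exact numbers depend on the platform's pointer width:

```csharp
using System;
using System.Runtime.CompilerServices;

class SizeDemo
{
    static void Main()
    {
        // A reference-type variable is always just a pointer,
        // regardless of which reference type it is.
        Console.WriteLine(Unsafe.SizeOf<string>()); // IntPtr.Size (4 or 8)
        Console.WriteLine(Unsafe.SizeOf<object>()); // IntPtr.Size (4 or 8)

        // Value-type variables hold the value itself, so sizes differ.
        Console.WriteLine(Unsafe.SizeOf<int>());    // 4
        Console.WriteLine(Unsafe.SizeOf<long>());   // 8
    }
}
```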
For value types, the answers can vary significantly. For instance, consider:
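The two structs referred to below aren't shown in full here, but from the field names they would look something like this: the same total size, yet with the reference sitting in a different position in each layout.

```csharp
// Same overall size, different layout: in First the reference is
// the second field, in Second it is the first field. The GC must
// follow the object fields but ignore the int fields.
public struct First
{
    int x;      // plain value - no reference to follow
    object y;   // reference - GC must trace this
}

public struct Second
{
    object a;   // reference - GC must trace this
    int b;      // plain value - no reference to follow
}
```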
When the GC looks at some memory, it needs to know the difference between First and Second so that it can recurse into y and a but not into x and b. I believe this information is generated by the JIT. Now consider the information for List<First> and List<Second>: it differs, so the JIT needs to treat the two types differently.

Apologies if this isn't as clear as it might be; this is somewhat deep stuff, and I'm not as hot on CLR details as I might be.