What exactly happens, at the compiler or runtime level, when I cast between reference types and between value types, and when boxing and unboxing occur?
Can anybody explain the four cases below? Please feel free to add cases if I've missed any.
1. Stream stream = new MemoryStream();
MemoryStream memoryStream = (MemoryStream) stream;
2. double k = 10.0;
   int l = (int)k;
3. object k = 20;
   int l = (int)k;
4. int k = 23;
   double m = k;
There are three types of conversions going on here:
Conversion 1 is a reference conversion. The CLR will check that the value of `stream` is actually a reference to a `MemoryStream` (or a subtype) and then simply copy the reference into `memoryStream`. No new objects are created or anything like that. Afterwards, both `stream` and `memoryStream` refer to the same object; the values of the two variables are exactly the same in memory.

Conversions 2 and 4 are numeric conversions – they change the value from one numeric form to another, basically an FPU-type operation. Conversion 2 is explicit (because it may lose information) whereas conversion 4 is implicit, but fundamentally they're similar in that both change the actual representation of the value.
Conversion 3 is an unboxing operation: the CLR checks that the value of `k` is a reference to a boxed `int` (or a compatible type, such as an enum with an underlying type of `int`), and copies the value out of that box into `l`.

You can see the IL generated for all of this by compiling the code and then using ildasm or Reflector, of course.
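A sketch of the unboxing rule above, including the common pitfall that a boxed `int` can only be unboxed as an `int` (or a compatible type), not directly as some other numeric type:

```csharp
using System;

class UnboxingDemo
{
    static void Main()
    {
        object k = 20;        // boxing: copies the int into a heap object
        int l = (int)k;       // unboxing: checks the box holds an int, copies it out
        Console.WriteLine(l); // 20

        // Unboxing requires the exact value type; a boxed int is not a boxed long.
        try
        {
            long wrong = (long)k;
        }
        catch (InvalidCastException)
        {
            Console.WriteLine("a boxed int cannot be unboxed as long");
        }

        // The correct two-step form: unbox to int, then apply a numeric conversion.
        long right = (long)(int)k;
        Console.WriteLine(right); // 20
    }
}
```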
Eric Lippert has a blog post about all of this, which you may find useful.