How is the decimal type implemented?
Update
- It’s a 128-bit value type (16 bytes)
- 1 sign bit
- 96 bits (12 bytes) for the integer mantissa
- 8 bits for the exponent, a base-10 scale factor from 0 to 28 (the value is mantissa / 10^exponent)
- the remaining 23 bits are unused and always set to 0
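The layout above can be sketched as a pair of pack/unpack helpers. Python is used here purely as a runnable illustration of the bit layout; the field positions (96-bit mantissa, scale in bits 16–23 of the flags word, sign in bit 31) follow the documented `System.Decimal`/`decimal.GetBits` layout, but this is a sketch, not the actual .NET implementation.

```python
from decimal import Decimal

def encode_decimal(mantissa: int, scale: int, negative: bool) -> bytes:
    """Pack a value into the 16-byte layout described above:
    a 96-bit unsigned integer mantissa, then a 32-bit flags word
    holding the 8-bit scale (bits 16-23) and the sign (bit 31),
    with the remaining 23 bits set to 0."""
    assert 0 <= mantissa < 2**96
    assert 0 <= scale <= 28
    flags = (scale << 16) | (0x8000_0000 if negative else 0)
    return mantissa.to_bytes(12, "little") + flags.to_bytes(4, "little")

def decode_decimal(raw: bytes) -> Decimal:
    """Reverse of encode_decimal: value = ±mantissa / 10**scale."""
    mantissa = int.from_bytes(raw[:12], "little")
    flags = int.from_bytes(raw[12:], "little")
    scale = (flags >> 16) & 0xFF
    sign = -1 if flags >> 31 else 1
    return sign * Decimal(mantissa) / (10 ** scale)
```

For example, 1.5 is stored as mantissa 15 with scale 1 (15 / 10^1), which is why `decimal` can represent short base-10 fractions exactly where binary floating point cannot.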
Thanks! I’m going to stick with a 64-bit long and my own implied scale.
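The commenter’s alternative, a plain 64-bit integer with an implied decimal scale, might look like the following minimal sketch. The scale of 2 (whole cents) and the half-up rounding in multiplication are illustrative choices, not anything stated in the thread.

```python
SCALE = 100  # implied scale: two decimal places, e.g. whole cents

def from_str(s: str) -> int:
    """Parse '10.50' -> 1050; assumes at most two fractional digits."""
    sign = -1 if s.startswith("-") else 1
    whole, _, frac = s.lstrip("+-").partition(".")
    return sign * (int(whole or 0) * SCALE + int((frac + "00")[:2]))

def to_str(x: int) -> str:
    sign = "-" if x < 0 else ""
    q, r = divmod(abs(x), SCALE)
    return f"{sign}{q}.{r:02d}"

def add(a: int, b: int) -> int:
    # Addition needs no rescaling: both operands share the same scale.
    return a + b

def mul(a: int, b: int) -> int:
    # The raw product carries twice the scale, so divide it back out;
    # adding SCALE // 2 first rounds half up for non-negative operands.
    return (a * b + SCALE // 2) // SCALE
```

The trade-off versus a 128-bit decimal is range and precision (about 18 significant digits at a fixed scale), in exchange for cheap, exact integer arithmetic.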
See the Decimal floating point article on Wikipedia, which links specifically to this article about
System.Decimal.