I am having to interface some C# code with SQL Server, and I want exactly as much precision when storing a value in the database as I have in my C# code. I use .NET’s decimal type for the value. What datatype and precision should I use for this value in SQL Server?
I am aware that the SQL Server decimal type is the one that most likely fits my needs. My question is what precision and scale I should use so that it matches .NET’s decimal type.
You can use the SQL Server decimal type (documentation here). It lets you specify a precision and scale for the numbers in the database, which you can set to match your C# code.

Your question, however, is really about mapping the .NET decimal type to the SQL Server decimal type, and unfortunately that is not straightforward. The SQL Server decimal, at maximum precision, covers the range from -10^38 + 1 to 10^38 - 1. The .NET decimal, however, can represent values anywhere from ±1.0 × 10^-28 to ±7.9 × 10^28, depending on the number.
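As a sketch, a column declaration that fixes precision and scale might look like the following (the table and column names are made up; pick the precision and scale that suit your data):

```sql
-- decimal(28, 10) keeps 28 significant digits, 10 of them after the
-- decimal point. Values with more than 10 fractional digits will be
-- rounded by SQL Server on insert.
CREATE TABLE Orders (
    Price decimal(28, 10) NOT NULL
);
```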
The reason is that, internally, .NET treats decimals much like floats: each value is stored as a pair of numbers, a mantissa and a scaling exponent. If you need an especially large mantissa, you lose some precision at the low end. Essentially, .NET lets you mix numbers with both high and low scales in the same variable; SQL Server requires you to choose the scale for a given column ahead of time.
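You can see this representation directly with decimal.GetBits, which returns the 96-bit mantissa (spread across three ints) and the scale (packed into the fourth int). A minimal sketch:

```csharp
using System;

class Program
{
    static void Main()
    {
        // 123.4500m is stored as mantissa 1234500 with scale 4;
        // 123.45m would be mantissa 12345 with scale 2. Same value,
        // different representations -- the scale travels with the number.
        decimal d = 123.4500m;
        int[] bits = decimal.GetBits(d);
        int scale = (bits[3] >> 16) & 0xFF;
        Console.WriteLine($"low mantissa bits: {bits[0]}, scale: {scale}");
    }
}
```

In SQL Server, by contrast, the scale is a property of the column, not of each individual value.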
In short, you need to decide what the valid range of decimal values is for your purposes, and ensure both your program and the database can handle it. You have to do that for your program anyway: if you are mixing values like 10^-28 with 7.9 × 10^28, you cannot rely on the decimal type to hold them both exactly.
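Once you have chosen a range, a simple guard before writing to the database can catch values the column cannot hold. This is only a sketch; decimal(19, 4) and the bound below are example choices, not anything prescribed:

```csharp
using System;

class RangeGuard
{
    // Largest magnitude a hypothetical decimal(19, 4) column can hold:
    // 15 digits before the decimal point, 4 after. Substitute the bound
    // for whatever precision/scale you actually declare.
    const decimal Max = 999_999_999_999_999.9999m;

    static decimal Validate(decimal value)
    {
        if (Math.Abs(value) > Max)
            throw new ArgumentOutOfRangeException(nameof(value),
                "value does not fit in decimal(19, 4)");
        return value;
    }

    static void Main()
    {
        Console.WriteLine(Validate(12345.6789m)); // within range
    }
}
```

Note that values with more fractional digits than the column's scale are not rejected by a check like this; SQL Server will round them on insert, so handle that case according to your requirements.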