If someone could explain to me the difference between Decimal and decimal in C# that would be great.
More generally, what is the difference between the lower-case type names like decimal, int, string and the upper-case names Decimal, Int32, String?
Is the only difference that the upper-case names also expose static members (like Decimal.Divide())?
They are the same type. The keyword decimal is simply a C# alias for System.Decimal, so the compiler treats the two names identically; the same goes for int/System.Int32, string/System.String, and the other built-in aliases. (Note that Decimal and Int32 are structs, not classes, while String is a class.) Which name you use comes down to preference, but most C# developers write int, string, and decimal because the keywords are shorter, always available without a using System; directive, and more familiar to programmers coming from C++.
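You can verify the alias relationship in a few lines (a minimal sketch; the typeof comparison and Decimal.Divide are standard .NET, and static members work through either spelling):

```csharp
using System;

class Program
{
    static void Main()
    {
        // decimal is just the C# keyword alias for System.Decimal,
        // so both names refer to the exact same runtime type.
        Console.WriteLine(typeof(decimal) == typeof(Decimal)); // True

        // Static members are reachable through either name.
        decimal a = Decimal.Divide(10m, 4m); // 2.5
        Decimal b = decimal.Divide(10m, 4m); // 2.5
        Console.WriteLine(a == b);           // True
    }
}
```

The same holds for the other aliases, e.g. typeof(int) == typeof(Int32) and int.Parse("42") is the same call as Int32.Parse("42").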