In real terms, what is the difference between a “string” and a “string option”?
Aside from minor syntax issues, the only difference I have seen is that you can pass a "null" to a string while a string option expects a "None".
I don’t particularly like the answer I’ve typed up below, because I think the reader will either see it as ‘preaching to the choir’ or as ‘some complex nonsense’, but I’ve decided to post it anyway, in case it invites fruitful comment-discussion.
First off, it may be worth understanding that `Some null` (a value of type `string option`) is a valid value: it indicates the presence (rather than absence) of a value (Some rather than None), but the value itself is null. (The meaning of such a value would depend on context.)
If you’re looking for what I see as the ‘right’ mental model, it goes something like this…
The whole notion that "all reference types admit a 'null' value" is one of the biggest and most costly mistakes of .Net and the CLR. If the platform were redesigned from scratch today, I think most folks would agree that references should be non-nullable by default, with an explicit mechanism to opt in to null. As it stands today, there are hundreds, if not thousands, of APIs that take e.g. "string foo" and do not want a null (and would throw ArgumentNullException if you passed one). Clearly this is something better handled by a type system. Ideally, 'string' would mean 'non-null', and for the minority of APIs that do want to accept null, you would spell that out, e.g. "Nullable<string> foo" or "Option<string> foo" or whatever. So it's the existing .Net platform that's the 'oddball' here.
Many functional languages (such as ML, one of the main influences of F#) have known this forever, and so designed their type systems 'right': if you want to admit a 'null' value, you use a generic type constructor to explicitly signal data that intentionally can have 'absence of a value' as a legal value. In ML, this is done with the "'t option" type – 'option' is a fine, general-purpose solution to this issue. F#'s core is compatible (cross-compiles) with OCaml, an ML dialect, and thus F# inherits this type from its ML ancestry.
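A minimal sketch of that style, for the concrete case of `string option` (the function name here is invented for illustration):

```fsharp
// 'string option' makes possible absence part of the type itself.
// tryFindGreeting is a hypothetical example function.
let tryFindGreeting (id : int) : string option =
    if id = 1 then Some "hello" else None

// Callers cannot silently ignore the None case; the compiler's
// pattern-match completeness check warns if it is left unhandled.
match tryFindGreeting 2 with
| Some g -> printfn "%s" g
| None -> printfn "no greeting found"
```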
But F# also needs to integrate with the CLR, and in the CLR, all references can be null. F# attempts to walk a somewhat fine line: you can define new class types in F#, and for those types, F# will behave ML-like and will not easily admit null as a value.
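The code sample at this point did not survive the copy; a minimal sketch of the kind of F#-defined type the author means (the type name is made up):

```fsharp
// A class type defined in F# does not admit null as a proper value:
type MyClass() =
    member this.Greet() = "hello"

// The following line would not compile:
// let bad : MyClass = null
//   error: The type 'MyClass' does not have 'null' as a proper value
```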
However, the same type defined in a C# assembly will admit null as a proper value in F#. (And F# still allows a back-door to effectively get around the normal F# type system and access the CLR directly.)
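The back-door snippet was also lost here; one such escape hatch in F# is `Unchecked.defaultof`, sketched below (the type is again made up, and redefined so the example is self-contained):

```fsharp
type MyClass() =
    member this.Greet() = "hello"

// Unchecked.defaultof produces the CLR default value (null for
// reference types), side-stepping F#'s nullability restriction:
let sneaky : MyClass = Unchecked.defaultof<MyClass>
// sneaky is null at runtime, despite MyClass being an F# type,
// so sneaky.Greet() would throw NullReferenceException
```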
This is all starting to sound complicated, but the upshot is that F# lets you program large amounts of code in the 'ML' style, where you never need to worry about null or NullReferenceExceptions, because the type system prevents you from doing the wrong things. But F# also has to integrate nicely with .Net, so all types that originate from non-F# code (like 'string') still admit null values, and when programming with those types you still have to program as defensively as you normally do on the CLR. With regard to null, F# effectively provides a haven where it is easy to do 'programming without null, the way God intended', while still interoperating with the rest of .Net.
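To make the contrast concrete, here is a sketch of the two styles the paragraph describes (function names invented for illustration):

```fsharp
// CLR style: 'string' can be null, so defend at runtime.
let shout (s : string) =
    if isNull s then invalidArg "s" "s must not be null"
    else s.ToUpper()

// ML style: 'string option' moves the check into the type system;
// there is no null to defend against.
let shoutOption (s : string option) =
    match s with
    | Some v -> v.ToUpper()
    | None   -> "(nothing to shout)"
```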
I haven't really answered your question, but if you follow my logic, then you would not ask the question (or would un-ask it, a la Joshu's MU from "Gödel, Escher, Bach").