I have a column of type Bit (called BBoolVal in this example).
I have a LINQ to SQL statement like this:
List<MyClass> myList = _ctx.DBList
    .Where(x => x.AGuidID == paramID)
    .Where(x => x.BBoolVal == false)
    .ToList();
When I look at the generated SQL, it ends up like this (I added the spacing and changed the names):
SELECT [t0].[Id], [t0].[AGuidID], [t0].[OtherIDID],
[t0].[TimeColumn], [t0].[BBoolVal], [t0].[MoreID]
FROM [dbo].[MyTable] AS [t0]
WHERE (NOT ([t0].[BBoolVal] = 1)) AND ([t0].[AGuidID] = @p0)
Because my x.BBoolVal == false seems to translate to [BBoolVal] = 1, I gather that false = 1 (and thus true = 0).
I am asking because this seems a bit backwards to me. I am fine to accept it, I just want to be sure.
Notice it says WHERE (NOT ([t0].[BBoolVal] = 1)) AND ([t0].[AGuidID] = @p0) -- the NOT negates the comparison, so false still maps to 0 and true to 1.
While it is really odd that it generated the query that way, it is effectively the same as [t0].[BBoolVal] = 0.
EDIT
Giving it some thought: even if BBoolVal is nullable, NOT ([t0].[BBoolVal] = 1) will not match NULL rows, because NULL = 1 evaluates to UNKNOWN in SQL's three-valued logic, and NOT UNKNOWN is still UNKNOWN. So the generated filter behaves the same as BBoolVal = 0, and NULL rows are excluded either way.
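To convince myself of the NULL behavior, here is a minimal sketch using Python's sqlite3 (table and column names are the placeholders from the question; SQLite is used only for illustration, but SQL Server applies the same three-valued logic to comparisons):

```python
import sqlite3

# In-memory table with a bit-like column: one false row, one true row, one NULL row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyTable (Id INTEGER, BBoolVal INTEGER)")
conn.executemany("INSERT INTO MyTable VALUES (?, ?)",
                 [(1, 0), (2, 1), (3, None)])

# NOT (BBoolVal = 1): for the NULL row, NULL = 1 is UNKNOWN and
# NOT UNKNOWN stays UNKNOWN, so that row is filtered out.
not_form = conn.execute(
    "SELECT Id FROM MyTable WHERE NOT (BBoolVal = 1)").fetchall()
print(not_form)  # [(1,)]

# Plain BBoolVal = 0 returns exactly the same rows.
eq_form = conn.execute(
    "SELECT Id FROM MyTable WHERE BBoolVal = 0").fetchall()
print(eq_form)  # [(1,)]
```

Both forms return only the row where BBoolVal is 0, which matches the C# semantics of comparing a nullable bool: (bool?)null == false is false.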