I’m running into an odd bug using datetime fields in SQL Server 2005. The datetime field shows up with millisecond-level accuracy, but it looks like the milliseconds are not always used. Here’s my test query:
SELECT col1, YEAR(col1) AS yr, MONTH(col1) AS mn, DAY(col1) AS dy
FROM mytable
WHERE col1 >= '2009-12-31 00:00:00.0' AND col1 <= '2009-12-31 23:59:59.999'
ORDER BY col1
In my results I get:
col1                    | yr   | mn | dy
------------------------+------+----+----
2009-12-31 00:00:00.000 | 2009 | 12 | 31
2010-01-01 00:00:00.000 | 2010 | 1  | 1
The problem is that the 2010-01-01 row is returned, even though it shouldn’t be less than or equal to “2009-12-31 23:59:59.999”. But if I change the upper bound to “2009-12-31 23:59:59.998” the query works OK (no 2010 datetimes are returned).
Is this a bug, or is this just how SQL Server works? If this is how it works, is there some reason for it? I ran into this migrating some queries from MySQL, where this works as expected (even though MySQL doesn’t even store the milliseconds!).
SQL Server stores the time part of a DATETIME as the number of 1/300-second ticks since midnight. 23:59:59.999 gets rounded to the nearest tick, which happens to be 00:00:00.000 of the next day.
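Casting a DATETIME to BINARY(8) exposes this layout: the first four bytes hold the day number and the last four hold the tick count. The following is a sketch of how the three values discussed next can be produced; the date 2009-01-12 is an assumption, chosen because the cited 0x9B8F (day 39823 counted from January 1st, 1900) works out to that date, and the hex results in the comments are what SQL Server 2005 should return for these literals:

-- midnight: day part set, tick part zero
SELECT CAST(CAST('2009-01-12 00:00:00.000' AS DATETIME) AS BINARY(8));
-- 0x00009B8F00000000

-- .998 rounds down to the last tick of the day
SELECT CAST(CAST('2009-01-12 23:59:59.998' AS DATETIME) AS BINARY(8));
-- 0x00009B8F018B81FF

-- .999 rounds up and rolls over to the next day
SELECT CAST(CAST('2009-01-12 23:59:59.999' AS DATETIME) AS BINARY(8));
-- 0x00009B9000000000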
In the first value, the date part, 0x9B8F (39823), is the number of days since January 1st, 1900, and the time part, 0, is the number of ticks since midnight. In the second value, 0x018B81FF (25919999, or 24 * 60 * 60 * 300 - 1) is the maximum possible number of ticks since midnight: 23:59:59.998 rounds down to the last tick of the day (displayed as 23:59:59.997), which is why the .998 version of the query behaves as expected. Finally, the third value has 0 in the time part and the date part increased by one: 23:59:59.999 rounds up past the last tick and rolls over to midnight of the following day.
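To see the same rounding with the literals from your query (a quick check, assuming SQL Server 2005's default DATETIME behavior; the column aliases are only for illustration):

SELECT CAST('2009-12-31 23:59:59.999' AS DATETIME) AS upper_bound,
       CAST('2009-12-31 23:59:59.998' AS DATETIME) AS adjusted_bound;
-- upper_bound:    2010-01-01 00:00:00.000
-- adjusted_bound: 2009-12-31 23:59:59.997

The .999 literal silently becomes midnight of January 1st, which is why the 2010-01-01 row satisfies the <= condition.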