I have a table structured like this:
CREATE TABLE [TESTTABLE] (
    [ID] [int] IDENTITY(1,1) NOT NULL,
    [DateField] [datetime] NULL,
    [StringField] [varchar](50),
    [IntField] [int] NULL,
    [BitField] [bit] NULL
)
I execute the following code:
BEGIN
    INSERT INTO TESTTABLE (IntField, BitField, StringField, DateField)
    VALUES ('1', 1, 'hello', {ts '2009-04-03 15:41:27.378'});
    SELECT SCOPE_IDENTITY()
END
And then
select * from testtable with (NOLOCK)
and my result shows:
2009-04-03 15:41:27.*377*
for the DateField column.
Any ideas why I seem to be losing a millisecond?
SQL Server's datetime type only stores time to an accuracy of roughly 1/300th of a second (about 3.33 ms). Values are rounded to the nearest such increment, so the milliseconds always end in 0, 3 or 7. Counting up from zero in the smallest increment:
00:00:00.000
00:00:00.003
00:00:00.007
00:00:00.010
00:00:00.013
…
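You can reproduce this rounding directly, without a table, by casting the literal from the question to datetime:

```sql
-- datetime rounds to the nearest 1/300-second tick, so .378 becomes .377
SELECT CAST('2009-04-03 15:41:27.378' AS datetime);
-- returns 2009-04-03 15:41:27.377
```

The nearest representable ticks to .378 are .377 and .380; .377 is closer, so that is what gets stored.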
If you need true millisecond accuracy, there's no pleasant way around it. The best options I've seen are to store the components in custom numeric fields and rebuild the value every time you fetch it, or to store it as a string in a known format. You can then (optionally) also store an 'approximate' date in the native datetime type for the sake of query speed, but that introduces a conceptual complexity that often isn't wanted.
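As a sketch of the first workaround, assuming a hypothetical companion column `ExactMs` holding the true millisecond value (the column name is illustrative, not part of the original schema):

```sql
-- On write, keep the real milliseconds alongside the rounded datetime:
-- ALTER TABLE TESTTABLE ADD ExactMs int NULL;  -- 0 through 999

-- On read, rebuild the precise timestamp as text
-- (style 120 formats DateField as 'yyyy-mm-dd hh:mi:ss'):
SELECT CONVERT(varchar(19), DateField, 120) + '.'
       + RIGHT('00' + CAST(ExactMs AS varchar(3)), 3) AS ExactTimestamp
FROM TESTTABLE;
```

This keeps the native column usable for range queries while the exact value is recoverable on demand.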