This SQL (called from C#) occasionally results in a deadlock. The server is not under much load, so the approach taken was to lock as much as possible.
    -- Lock to prevent race conditions when multiple instances of an application call this SQL:
    BEGIN TRANSACTION

    -- Check that no one has inserted the rows in T1 before me,
    -- and that T2 is in a valid state (Test1 != null)
    IF NOT EXISTS (SELECT TOP 1 1 FROM T1 WITH (HOLDLOCK, TABLOCKX) WHERE FKId IN {0})
       AND NOT EXISTS (SELECT TOP 1 1 FROM T2 WITH (HOLDLOCK, TABLOCKX) WHERE DbId IN {0} AND Test1 IS NOT NULL)
    BEGIN
        -- Great! I'm the first - go insert the row in T1 and update T2 accordingly.
        -- Finally, write a log entry to T3.
        INSERT INTO T1 (FKId, Status)
            SELECT DbId, {1} FROM T2 WHERE DbId IN {0};

        UPDATE T2
            SET LastChangedBy = {2}, LastChangedAt = GETDATE()
            WHERE DbId IN {0};

        INSERT INTO T3 (F1, FKId, F3)
            SELECT {2}, DbId, GETDATE() FROM T2 WHERE DbId IN {0};
    END;

    -- Select status on the rows so the program can evaluate what just happened
    SELECT FKId, Status FROM T1 WHERE FKId IN {0};

    COMMIT TRANSACTION
I believe the problem is that multiple tables need to be locked.
I'm a bit unsure when the tables are actually exclusively locked (X-locked) - when each table is first touched, or are all tables locked at once at BEGIN TRANSACTION?
Locks are acquired when you execute a statement that takes a lock (for example, a SELECT with a lock hint), not at BEGIN TRANSACTION, and they are released on COMMIT or ROLLBACK.
You could get a deadlock if another procedure locks T3 first and then T1 or T2 afterwards. At that point, two transactions are each waiting for a resource while holding the lock the other one needs.
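Sketched with the question's table names (the competing procedure and its statements are hypothetical, for illustration only), the deadlock cycle looks like this:

```sql
-- Transaction A (the question's SQL):
BEGIN TRANSACTION;
SELECT TOP 1 1 FROM T1 WITH (HOLDLOCK, TABLOCKX) WHERE FKId IN (1);  -- holds T1
-- ... later needs T3:
INSERT INTO T3 (F1, FKId, F3) VALUES (1, 1, GETDATE());              -- waits for B

-- Transaction B (hypothetical other procedure, opposite order):
BEGIN TRANSACTION;
UPDATE T3 SET F3 = GETDATE() WHERE FKId = 1;                         -- holds T3
SELECT TOP 1 1 FROM T1 WHERE FKId = 1;                               -- waits for A

-- A holds T1 and waits on T3; B holds T3 and waits on T1 -> deadlock.
```

The standard fix is to make every transaction touch the tables in the same order (e.g. always T1, then T2, then T3), so a cycle like this cannot form.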
You could also avoid the table locks entirely and use isolation level SERIALIZABLE instead.
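A sketch of the same logic under SERIALIZABLE, without the TABLOCKX hints (assuming SQL Server; the {0}-{2} placeholders are filled in by the C# code as in the question):

```sql
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;

IF NOT EXISTS (SELECT TOP 1 1 FROM T1 WHERE FKId IN {0})
   AND NOT EXISTS (SELECT TOP 1 1 FROM T2 WHERE DbId IN {0} AND Test1 IS NOT NULL)
BEGIN
    -- same INSERT into T1, UPDATE of T2, and INSERT into T3 as in the question
    INSERT INTO T1 (FKId, Status)
        SELECT DbId, {1} FROM T2 WHERE DbId IN {0};
    -- ...
END;

SELECT FKId, Status FROM T1 WHERE FKId IN {0};
COMMIT TRANSACTION;
```

One caveat: SERIALIZABLE takes shared key-range locks on the existence checks, so two concurrent callers can both pass the IF and then deadlock trying to insert. A common refinement is to add WITH (UPDLOCK) to the two SELECTs in the IF so only one caller at a time can hold the checked range.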