We are using C# and Linq2SQL to get data from a database for some reports. In some cases this takes a while: more than 30 seconds, which is the default CommandTimeout.
So, I guess I have to increase the CommandTimeout. But the question is, by how much? Is it bad to just set it very high? Wouldn't it be bad if a customer was trying to do something and, just because he happened to have a lot more data in his database than the average customer, he couldn't get his reports out because of timeouts? But how can I know how much time it could potentially take? Is there some way to set it to infinity? Or is that considered very bad?
And where should I set it? I have a static database class which generates a new DataContext for me when I need it. Could I just create a constant and set it whenever I create a new DataContext? Or should it be set to different values depending on the use case? Is it bad to have a high timeout for something that won't take much time at all? Or doesn't it really matter?
A CommandTimeout that is too high can of course be annoying in its own way. But is there a case where a user/customer would actually want something to time out? Can the SQL Server freeze so that a command never finishes?
CommandTimeout etc. should indeed only be increased on a per-specific-scenario basis. This avoids unexpectedly long blocking scenarios (or worse: the undetected deadlock scenario). As for how high: how long does the query take? Add some headroom and you have your answer.

The other thing to do, of course, is to reduce the time the query takes. This might mean hand-optimising some T-SQL in a sproc, usually in combination with checking the indexing strategy, and perhaps bigger changes such as denormalisation or other schema changes. It might also involve a data-warehousing strategy, so you can shift load to a separate database (away from the transactional data) with a schema optimised for reporting, maybe a star schema.
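To make the "per-specific-scenario" idea concrete, here is a minimal sketch of a static factory that only opts in to a longer timeout for the reporting path. `DataContext.CommandTimeout` is the real LINQ to SQL property (in seconds); the class name `Database`, the `forReporting` flag, and the timeout values are illustrative assumptions, not anything from the original post:

```csharp
using System.Data.Linq;

static class Database
{
    private const int DefaultTimeoutSeconds = 30;  // the framework default
    private const int ReportTimeoutSeconds = 180;  // measured report duration plus headroom

    // Only the reporting code path asks for the longer timeout;
    // everything else keeps the default so genuine problems still surface quickly.
    public static DataContext Create(string connectionString, bool forReporting = false)
    {
        var dc = new DataContext(connectionString);
        dc.CommandTimeout = forReporting ? ReportTimeoutSeconds : DefaultTimeoutSeconds;
        return dc;
    }
}
```

The point of keeping the default low everywhere else is that a short timeout acts as a safety net: if an ordinary OLTP query suddenly takes minutes, you want an exception, not a silently hung request.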
I wouldn’t set it to infinity… I don’t expect it to take forever to run a report. Pick a number that makes sense for the report.
Yes, SQL Server can freeze so that a command never finishes. An open blocking transaction would be the simplest… get two and you can deadlock. Usually the system will detect a local deadlock – but not always, especially if DTC is involved (i.e. non-local locks).
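Since a blocked command will eventually surface as a timeout on the client, it is worth handling that case gracefully rather than letting the report page hang or crash. A sketch, assuming a hypothetical `RunReport` method and connection string; the `SqlException.Number == -2` check is the documented client-side timeout code for `System.Data.SqlClient`:

```csharp
using System;
using System.Data.Linq;
using System.Data.SqlClient;

try
{
    using (var dc = new DataContext(connectionString)) // connectionString assumed defined elsewhere
    {
        dc.CommandTimeout = 180;  // reporting scenario: measured time plus headroom
        RunReport(dc);            // hypothetical method that executes the report queries
    }
}
catch (SqlException ex) when (ex.Number == -2) // -2 = client-side command timeout
{
    // Degrade gracefully instead of showing a raw exception:
    // suggest narrowing the report, and log it so you notice growing data volumes.
    Console.Error.WriteLine("The report timed out; try a narrower date range.");
}
```

This way a customer with an unusually large database gets an actionable message, and the timeout itself becomes a signal that the query or schema needs attention.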