I’ll go first.
I’m 100% in the set-operations camp. But what happens when the set logic over the entire desired input domain leads to such a large retrieval that the query slows down significantly, grinds to a crawl, or effectively takes infinite time?
That’s one case where I’ll use an itty-bitty cursor (or a WHILE loop) over perhaps a few dozen rows at most (as opposed to the millions I’m targeting). That way I’m still working in (partitioned sub)sets, but each retrieval runs faster.
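As a rough sketch of that pattern, here’s one way it might look in T-SQL. The table and column names (`dbo.Orders`, `RegionId`, `Processed`) are hypothetical, just to illustrate the shape: a cursor iterates over partition keys while the real work stays set-based within each partition.

```sql
-- Hypothetical example: process millions of Orders rows one RegionId at a time,
-- so each set-based statement only touches a small slice of the input domain.
DECLARE @RegionId int;

DECLARE region_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT DISTINCT RegionId FROM dbo.Orders;

OPEN region_cursor;
FETCH NEXT FROM region_cursor INTO @RegionId;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Still a set-based statement, just scoped to one partitioned subset
    UPDATE dbo.Orders
    SET    Processed = 1
    WHERE  RegionId  = @RegionId
      AND  Processed = 0;

    FETCH NEXT FROM region_cursor INTO @RegionId;
END

CLOSE region_cursor;
DEALLOCATE region_cursor;
```

The cursor here only carries the handful of partition key values, not the millions of target rows, which is why it stays cheap.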
Of course, an even faster solution would be to call the partitioned input domains in parallel from outside, but that introduces an interaction with an external system, and when ‘good enough’ speed can be achieved by looping in serial, it just may not be worth it (especially during development).
Sure, there are a number of places where cursors might be better than set-based operations.
One is if you’re updating a lot of data in a table (for example, a SQL Agent job that pre-computes data on a schedule). There you might use a cursor or loop to do the work in multiple small sets rather than one large one, to reduce the amount of concurrent locking and thus the chance of lock contention and/or deadlocks with other processes accessing the data.
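A minimal sketch of that batching idea, with a hypothetical `dbo.DailyStats` table and batch size chosen arbitrarily for illustration:

```sql
-- Hypothetical example: recompute a column in batches of 5,000 rows so each
-- statement holds its locks briefly, reducing contention with other sessions.
DECLARE @BatchSize int = 5000;

WHILE 1 = 1
BEGIN
    UPDATE TOP (@BatchSize) dbo.DailyStats
    SET    Total = Quantity * Price
    WHERE  Total IS NULL;        -- unprocessed rows only

    IF @@ROWCOUNT = 0 BREAK;     -- nothing left to update
END
```

Each `UPDATE TOP (…)` runs in its own short transaction by default, so other processes can interleave between batches instead of waiting on one long-running statement.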
Another is if you want to take application-level locks using the sp_getapplock stored procedure, which is useful when you want to ensure rows that are being polled for by multiple processes are retrieved exactly once (example here).

In general though, I’d agree that it’s best to start with set-based operations if possible, and only move to cursors if required either for functionality or for performance reasons (with evidence to back the latter up).