At work someone said:
When we design and optimize the stored procedures, we have to take into account the fact that they will be running on large database servers.
I’m confused by that statement in a number of respects:
- Does “large database server” imply a large volume of data and, if so, how does that impact the design of the stored procedure?
- Is the design and optimization of a stored procedure the same as the design and optimization of regular SQL?
- Is “design and optimize” redundant? In other words, if you design a stored procedure well, wouldn’t it automatically be optimized for performance?
Large DB servers typically differ from small ones in several ways: more data, more CPUs, more RAM, and bigger and faster disks or a SAN. Some queries behave differently in that environment than on small machines. For example, complex joins against large tables might run reasonably fast there but be prohibitively slow on a smaller machine. There are also caching and memory-management approaches that make sense on large machines but aren’t nearly as useful on smaller ones.
Not entirely. For example, when you’re working on a stored procedure, you are also taking batch boundaries and possible multiple result sets into account. SPs can also have security-related issues that don’t exist with dynamic SQL or parameterized queries.
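To make the multiple-result-set point concrete, here is a hedged T-SQL sketch; the procedure, table, and column names are invented for illustration. A single call returns two result sets, and the caller has to be prepared to consume both in order:

```sql
-- Hypothetical procedure; schema and names are illustrative only.
CREATE PROCEDURE dbo.GetOrderSummary
    @CustomerId INT
AS
BEGIN
    SET NOCOUNT ON;  -- suppress row-count messages that confuse some clients

    -- First result set: the customer's individual orders
    SELECT OrderId, OrderDate, TotalAmount
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId;

    -- Second result set: an aggregate over the same orders
    SELECT COUNT(*) AS OrderCount, SUM(TotalAmount) AS GrandTotal
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId;
END
GO  -- batch separator: CREATE PROCEDURE must be alone in its batch
```

The `GO` line is the batch-boundary concern: it isn’t T-SQL at all, just a separator the client tool interprets. And a caller reading this procedure (for example, through a data-access API that exposes a “next result set” operation) has to advance explicitly to the second result set, a concern that a single dynamic SQL statement never raises.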
No. Design means to build something that works correctly and meets business requirements. Optimize relates to speed or scalability or both. An SP can be slow, but still do what it’s supposed to do. The usual best practice (though one I don’t always agree with) is to get it working correctly first, then optimize if it turns out to be necessary.
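As a sketch of “correct but slow,” compare two hypothetical versions of the same task (table and column names are invented). Both apply the same price increase, so both are designed correctly; only the second is optimized:

```sql
-- Correct but slow: row-by-row cursor processing (names are hypothetical)
DECLARE @Id INT;
DECLARE price_cursor CURSOR FOR
    SELECT ProductId FROM dbo.Products WHERE Discontinued = 0;
OPEN price_cursor;
FETCH NEXT FROM price_cursor INTO @Id;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- One UPDATE per row: correct, but pays per-statement overhead N times
    UPDATE dbo.Products SET Price = Price * 1.10 WHERE ProductId = @Id;
    FETCH NEXT FROM price_cursor INTO @Id;
END
CLOSE price_cursor;
DEALLOCATE price_cursor;

-- Same result, optimized: a single set-based statement
UPDATE dbo.Products
SET Price = Price * 1.10
WHERE Discontinued = 0;
```

Both versions meet the business requirement; the difference only shows up at scale, which is why getting it working first and optimizing afterward is a workable (if debatable) practice.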