A friend of mine claims that in a typical database, using (for example) nvarchar(256) will give marginally better performance than nvarchar(200) or nvarchar(250) because of the granularity of page allocations.
Is there any truth to this whatsoever?
Thanks!
This is not true. Tables are allocated on disk in 8 KB pages. When a table is read from disk, the entire page is read in one IO operation and stored in memory, so the declared length of a column has no effect on alignment with page boundaries. In fact, with fixed-length data types, shorter is definitely better: an nchar(200) column allows more rows per page than an nchar(256) column. More rows per page means more rows read per physical IO, which can have a dramatic effect on database performance.
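The rows-per-page difference is easy to see with a back-of-the-envelope calculation. This is a simplified sketch: it assumes SQL Server's roughly 8,060 usable bytes of in-row data per 8 KB page and a single nchar column per row, and it ignores per-row header and slot-array overhead, which would lower both counts slightly.

```python
USABLE_BYTES = 8060  # approx. in-row data space on an 8 KB SQL Server page

def rows_per_page(row_bytes):
    # How many fixed-size rows fit in one page's data area
    return USABLE_BYTES // row_bytes

# nchar stores 2 bytes per character (UTF-16 code units)
print(rows_per_page(200 * 2))  # nchar(200) -> 20 rows per page
print(rows_per_page(256 * 2))  # nchar(256) -> 15 rows per page
```

So the narrower column packs about a third more rows into each page, and each physical read brings back that many more rows.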