Quick question: I was curious whether there is any difference between a database table that was defined in one shot and one that has had columns added to it over time. Does the one with added columns suffer in performance somehow?
Anyway, the database vendor isn't too relevant here unless there are vendor-specific differences for this case.
Thanks
There are vendor differences here. The SQL language is defined by a standard, but storage details are left to each vendor to implement.
For example, in MySQL (at least with its traditional table-copy ALTER algorithm), when you add a column the database engine copies the entire table, making space for the new column in each row. Once it has copied all the rows, it drops the old copy of the table. The new table is therefore stored exactly as if it had been defined with the new column from its inception. The trade-off is that you have to wait for the copy to finish before ALTER TABLE returns, which can take a long time if the table is huge.

In another brand of database (although I don't have one in mind), adding a column might instead store the extra column(s) separately from the original table. This makes adding new columns very quick, but retrieving data could suffer because each row is now split across disconnected sections of storage. Presumably you could later tell the database engine to 'defragment' the table and bring all the columns back together.
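As one concrete illustration (my own addition, not something the answer names): SQLite is a vendor whose ALTER TABLE ... ADD COLUMN is a metadata-only change. Existing rows are not rewritten on disk; the missing value is filled in from the column's default at read time. A minimal sketch using Python's built-in sqlite3 module:

```python
import sqlite3

# In-memory database for illustration. SQLite adds a column by updating the
# table definition only; it does not copy or rewrite the existing rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO t (name) VALUES ('alice'), ('bob')")

# This returns almost instantly regardless of table size, because old rows
# are left untouched; the default is supplied when those rows are read.
conn.execute("ALTER TABLE t ADD COLUMN score INTEGER DEFAULT 0")

rows = conn.execute("SELECT id, name, score FROM t ORDER BY id").fetchall()
print(rows)  # pre-existing rows report the default for the new column
```

So the same ALTER TABLE statement can mean "rewrite every row now" in one product and "just record the new column and fill it in lazily" in another, which is exactly why the vendor matters for this question.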