We have an Oracle database here that’s been around for about 10 years. It’s passed through a lot of hands. In the course of those years, it’s grown quite large, and there are some interesting anomalies in its design that have me perplexed.
Now, I’m historically a SQL Server developer. I used to steam and fume about the differences between The Microsoft Way(tm) and The Oracle Way(R). Now, I realize, they’re just different. I also used to yank my hair out and slam my head against the desk thinking that the people who came before me were blind, deaf mutes jacked up on Jolt and Red Bull, who wrote code in Tourette’s.NET.
(Yes, I’m going somewhere.)
As time passed, I realized that neither database platform was inherently better than the other. They’re just different. Further, I also realized that the developers who came before me often had compelling reasons for designing and writing things the way they did. Just because I wasn’t privy to it didn’t make it untrue. Sure, the documentation could have been better, but still.
So here’s where all this leads me:
- We have a few tables in the database that have two separate owners. Both owners define identical primary key constraints on the table. This has me perplexed. Why would a table have multiple owners? And why would each owner define separate yet identical primary keys?
- These guys designed a pretty well-laid-out database with lots of primary keys, but they didn't make much use of indexes. When they did use indexes, they tended to create one large composite index instead of several distinct indexes. Is there some compelling performance gain to be had from that?
- We also avoided foreign key constraints like the plague. Not sure why we would have done that. Is there a reason to avoid them in Oracle? I can see a lot of reasons to use them to enforce data integrity between tables, and we're just not using them. I'm assuming that there's a compelling reason, and I'm just not privy to it.
- Finally, is there a compelling reason to avoid the use of triggers (aside from the obvious pitfall that lies in performance hits)? We don't seem to be using those much either.
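For what it's worth, one way to poke at the dual-owner oddity is to query the standard ALL_CONSTRAINTS dictionary view (available in 9i) for tables whose primary key exists under more than one owner. This is only a diagnostic sketch; your results will depend on which schemas your account can see:

```sql
-- Diagnostic sketch: list primary-key constraints defined under more
-- than one owner for the same table name, using the standard
-- ALL_CONSTRAINTS data dictionary view.
SELECT table_name, owner, constraint_name
  FROM all_constraints
 WHERE constraint_type = 'P'
   AND table_name IN (SELECT table_name
                        FROM all_constraints
                       WHERE constraint_type = 'P'
                       GROUP BY table_name
                      HAVING COUNT(DISTINCT owner) > 1)
 ORDER BY table_name, owner;
```

If the "two owners" turn out to be two schemas each holding their own copy of the table, that would explain the two identical primary keys: each schema's table carries its own constraint.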
For the record, we’re still using Oracle 9i.
Again, thanks for your patience, everyone. I’m an old Microsoft hand, so bending my brain around the Oracle Way is challenging at times. It’s a big beast, with tons to learn, and sometimes, finding that information on the Web is a chore.
Thank His Noodliness for StackOverflow.
Salient Post-Post Points
- Historically, we haven’t used sequences, except in very rare cases.
- Historically, we haven’t used stored procedures or functions, except in very rare cases.
- There are some references in very old documents to ERwin, the data-modeling tool. (Thanks to the poster below for jogging my memory.) Chances are, the bulk of the design was generated from a modeling tool, and the design flowed naturally from that.
- The vast majority of the SQL appears hard-coded in the application, and there’s a lot of it.
- I’m doing everything in my power to move us away from hard-coded SQL, and to get the SQL into the database where it belongs. But I’m trying to do that in a way that makes sense, is practical, and doesn’t break the business in the process. (Read: On new software only.)
You cannot define two PRIMARY KEY constraints on one table in Oracle. You can define one PRIMARY KEY and one UNIQUE key on the same column set. I can see no point in such a design.

In Oracle, an index cannot be used for RANGE SCANs on anything that doesn't constitute a leftmost prefix of that index. A composite index on (col1, col2, col3) cannot be used to do a plain RANGE SCAN on col2 alone or col3 alone.

If you make all interaction with the database go through a set of well-defined procedures, a MERGE statement can yield far better performance than a FOREIGN KEY with ON DELETE CASCADE. You should, though, be very careful and get used to this programming paradigm.

I personally don't use triggers at all. Not every business rule can be expressed in terms of cascading inserts or updates, and any two-pass DML operation will lead to mutating tables. If all interaction with the database is done via stored procedures (or packages), triggers become useless. Using triggers means, in effect, running SQL statements inside CURSOR loops, which every SQL cheechako knows to be a bad thing. You don't want to be seen using cursors instead of set-based operations, do you?
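To make the leftmost-prefix point concrete, here is a hypothetical sketch (the table, column, and index names are invented for illustration):

```sql
-- Invented example: one large composite index.
CREATE INDEX orders_big_ix ON orders (customer_id, order_date, status);

-- Can use the index for a range scan: the predicates cover a
-- leftmost prefix (customer_id, order_date).
SELECT * FROM orders
 WHERE customer_id = 42
   AND order_date > DATE '2009-01-01';

-- Cannot range-scan the index: order_date alone is not a leftmost
-- prefix, so this falls back to a full table scan (or, at best, an
-- index skip scan).
SELECT * FROM orders
 WHERE order_date > DATE '2009-01-01';
```

This is why one big index is usually not a substitute for several distinct indexes: it only helps queries that constrain its leading columns.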
FOREIGN KEYs are not as bad as triggers (as long as you don't define CASCADE operations on them), since they just stop you from doing wrong things, at the expense of some performance loss.

But when your database grows large, you will notice that the rules for integrity checking are far more complex than merely verifying that the values being inserted into one table exist in another. You will have to check newly inserted values against aggregates, complex joins, and so on; all of these checks still imply having a corresponding value in another table, and failing them compromises your database integrity just as badly as violating a FOREIGN KEY.

So it will turn out that those FOREIGN KEYs are double- and triple-checked anyway, and there is no point in keeping data-integrity rules scattered all around the database rather than in one place (a stored procedure that is always used for updating the data).
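As an illustration of keeping the rules in one place, here is a minimal PL/SQL sketch (all table, column, and procedure names are invented) of a single procedure that owns inserts into a child table and enforces a rule no FOREIGN KEY could express:

```sql
-- Invented sketch: every insert into ORDER_ITEMS goes through this
-- procedure, so all integrity rules for it live in one place.
CREATE OR REPLACE PROCEDURE add_order_item (
    p_order_id IN NUMBER,
    p_product  IN VARCHAR2,
    p_qty      IN NUMBER
) AS
    v_status orders.status%TYPE;
BEGIN
    -- Existence check (what a FOREIGN KEY would do: raises
    -- NO_DATA_FOUND if the parent row is missing), plus a business
    -- rule a FOREIGN KEY cannot express. FOR UPDATE serializes
    -- against concurrent changes to the parent row.
    SELECT status INTO v_status
      FROM orders
     WHERE order_id = p_order_id
       FOR UPDATE;

    IF v_status <> 'OPEN' THEN
        RAISE_APPLICATION_ERROR(-20001, 'Order is not open');
    END IF;

    INSERT INTO order_items (order_id, product, qty)
    VALUES (p_order_id, p_product, p_qty);
END;
```

The point is not that this is better in every case, but that once checks like the status test exist anyway, the plain existence check comes along for free in the same place.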