MySQL has a special table type, MyISAM, that does not support transactions. Does Oracle have something like this? I’d like to create a write-only database (for logging) that needs to be very fast (it will store a lot of data) and doesn’t need transactions.
Transactions are key to SQL database operations. They are certainly fundamental in Oracle. There is no way to write permanently to Oracle tables without issuing a commit, and lo! there is the transaction.
Oracle allows us to specify tables to be NOLOGGING, which means they do not generate redo. This is only meant for bulk loading (using the INSERT /*+ APPEND */ hint), with the advice to switch back to LOGGING and take a backup as soon as possible, because data which is not logged is not recoverable. And if you don’t want to recover it, why bother writing it in the first place?
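For illustration, that style of direct-path load might look something like the following sketch (the STAGING_LOG table and the ALL_OBJECTS source are made up for the example):

    -- Illustrative only: a NOLOGGING table and a direct-path (APPEND) load.
    create table staging_log (
        ts       timestamp
      , log_text varchar2(4000)
    ) nologging;

    insert /*+ APPEND */ into staging_log (ts, log_text)
    select systimestamp, object_name
    from   all_objects;

    commit;

    -- The advice is then to switch back to LOGGING and take a backup.
    alter table staging_log logging;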
An alternative approach is to batch up the writes in memory and then use bulk inserts to write them out. This is pretty fast.
Here is a simple log table and a proof of concept package:
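(The code isn’t reproduced in this copy of the answer; below is a minimal sketch consistent with the description that follows. LOG_PKG, the LOG_TABLE columns and the 1,000-record flush threshold are assumptions, not the original values.)

    create table log_table (
        ts       timestamp
      , log_text varchar2(4000)
    );

    create or replace package log_pkg as
        procedure init;
        procedure flush;
        procedure write (p_text in varchar2);
    end log_pkg;
    /

    create or replace package body log_pkg as

        -- in-memory buffer: a PL/SQL collection with session scope
        type log_nt is table of log_table%rowtype;
        g_buffer log_nt := log_nt();

        -- flush threshold: an assumed value, tune to taste
        c_limit constant pls_integer := 1000;

        procedure init is
        begin
            g_buffer := log_nt();
        end init;

        procedure flush is
            pragma autonomous_transaction;
        begin
            -- PUT_LINE just makes it easy to watch the flushes happen
            dbms_output.put_line('flushing ' || g_buffer.count() || ' records');
            forall i in 1 .. g_buffer.count()
                insert into log_table values g_buffer(i);
            commit;
            init;
        end flush;

        procedure write (p_text in varchar2) is
        begin
            g_buffer.extend();
            g_buffer(g_buffer.last()).ts       := systimestamp;
            g_buffer(g_buffer.last()).log_text := p_text;
            if g_buffer.count() >= c_limit then
                flush;
            end if;
        end write;

    end log_pkg;
    /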
The log records are kept in a PL/SQL collection, which is an in-memory structure with session scope. The INIT() procedure initialises the buffer. The FLUSH() procedure writes the contents of the buffer to LOG_TABLE. The WRITE() procedure inserts an entry into the buffer and, if the buffer has reached the requisite number of entries, calls FLUSH().
The write to LOG_TABLE uses the AUTONOMOUS_TRANSACTION pragma, so the COMMIT occurs without affecting the surrounding transaction that triggered the flush.
The call to DBMS_OUTPUT.PUT_LINE() is there to make it easy to monitor progress. So, let’s see how fast it goes…
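A driver for such a test might look something like this (a hypothetical harness, not the original one; ALL_OBJECTS is just a convenient source of rows):

    set timing on

    begin
        log_pkg.init;
        for r in ( select object_name from all_objects ) loop
            log_pkg.write(r.object_name);
        end loop;
        log_pkg.flush;   -- round up any records still sitting in the buffer
    end;
    /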
Hmmm, 3456 records in 0.12 seconds, that’s not too shabby. The main problem with this approach is the need to flush the buffer to round up loose records; this is a pain e.g. at the end of a session. If something causes the server to crash, unflushed records are lost. The other problem with doing stuff in-memory is that it consumes memory (durrrr), so we cannot make the cache too big.
For the sake of comparison I added a procedure to the package which inserts a single record directly into LOG_TABLE each time it is called, again using an autonomous transaction:
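Such a procedure might look something like this sketch (the name WRITE_ONE and the column names are mine, not from the original package):

    -- Sketch: one INSERT and COMMIT per call, each in its own
    -- autonomous transaction (procedure and column names are illustrative).
    procedure write_one (p_text in varchar2) is
        pragma autonomous_transaction;
    begin
        insert into log_table (ts, log_text)
        values (systimestamp, p_text);
        commit;
    end write_one;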
Here are its timings:
Wall clock timings are notoriously unreliable, but the batched approach is 2-3 times faster than the single-record approach. Even so, I could execute well over three thousand discrete transactions in less than half a second, on a (far from top-of-the-range) laptop. So the question is: how much of a bottleneck is logging?
To avoid any misunderstanding:
@JulesLt had posted his answer while I was working on my PoC. Although there are similarities in our views, I think the difference in the suggested workaround merits posting this.
My timings suggest something slightly different. Replacing a COMMIT per write with a single COMMIT at the end roughly halves the elapsed time. Still slower than the bulked approach, but not by nearly as much.
The key thing here is benchmarking. My proof of concept is running about six times faster than Jules’s test (my table has one index). There are all sorts of reasons why this might be – machine spec, database version (I’m using Oracle 11gR1), table structure, etc. In other words, YMMV.
So the teaching is: first decide what the right thing to do is for your application, then benchmark it in your environment. Only consider a different approach if your benchmark suggests a serious performance problem. Knuth’s warning about premature optimization applies.