
The Archive Base


Editorial Team
Asked: May 15, 2026

MySQL has a special table type, MyISAM, that does not support transactions. Does Oracle have something like this?

MySQL has a special table type, MyISAM, that does not support transactions. Does Oracle have something like this? I’d like to create a write-only database (for logging) that needs to be very fast (it will store a lot of data) and doesn’t need transactions.



1 Answer

  1. Editorial Team
     Added an answer on May 15, 2026 at 7:06 pm

    Transactions are key to SQL database operations. They are certainly fundamental in Oracle. There is no way to write permanently to Oracle tables without issuing a commit, and lo! there is the transaction.

    Oracle allows us to mark tables as NOLOGGING, which suppresses redo generation for direct-path operations. This is only meant for bulk loading (using the INSERT /*+ APPEND */ hint), with the advice to switch back to LOGGING and take a backup as soon as possible, because data which is not logged is not recoverable. And if you don’t want to recover it, why bother writing it in the first place?
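    As an illustrative sketch (the table and column names here are hypothetical, not from the question), a NOLOGGING load might look like this:

    ```sql
    -- Hypothetical example: a NOLOGGING table loaded with a direct-path insert.
    create table log_staging
    ( ts          timestamp(6)
    , short_text  varchar2(128)
    ) nologging;

    -- The APPEND hint requests a direct-path insert, which can skip redo
    -- generation for a NOLOGGING table.
    insert /*+ APPEND */ into log_staging
    select systimestamp, object_name
    from   all_objects;

    commit;

    -- Because the loaded data generated no redo, it cannot be recovered
    -- from the redo stream; take a backup if you need to keep it.
    alter table log_staging logging;
    ```

    Note that a conventional (non-direct-path) insert into a NOLOGGING table still generates redo; the hint is what makes the difference.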

    An alternative approach is to batch up the writes in memory, and then use bulk inserts to write them. This is pretty fast.

    Here is a simple log table and a proof of concept package:

    create table log_table
    (ts timestamp(6)
     , short_text varchar2(128)
     , long_text varchar2(4000)
     )
     /
    
    create or replace package fast_log is
         procedure init;
         procedure flush;
         procedure write (p_short log_table.short_text%type
                          , p_long log_table.long_text%type);
    end fast_log;
    /
    

    The log records are kept in a PL/SQL collection, an in-memory structure with session scope. The INIT() procedure initialises the buffer, the FLUSH() procedure writes the contents of the buffer to LOG_TABLE, and the WRITE() procedure inserts an entry into the buffer and, if the buffer has the requisite number of entries, calls FLUSH().

    create or replace package body fast_log is
    
        type log_buffer is table of log_table%rowtype;
        session_log log_buffer;
    
        write_limit constant pls_integer := 1000;
        write_count pls_integer;
    
         procedure init
         is
         begin
            session_log := log_buffer();
            session_log.extend(write_limit);
            write_count := 0;
         end init;
    
         procedure flush
         is
         begin
            dbms_output.put_line('FLUSH::'||to_char(systimestamp,'HH24:MI:SS.FF6')||'::'||to_char(write_count));
            forall i in 1..write_count
                insert into log_table
                    values session_log(i);
            init;
         end flush;
    
         procedure write (p_short log_table.short_text%type
                          , p_long log_table.long_text%type)
    
         is
            pragma autonomous_transaction;
         begin
            write_count := write_count+1;
            session_log(write_count).ts := systimestamp;
            session_log(write_count).short_text := p_short;
            session_log(write_count).long_text := p_long;
    
            if write_count = write_limit
            then
                flush;
            end if;
    
            commit;
    
         end write;
    
    begin
        init;
    end fast_log;
    /
    

    The write to the log table uses the AUTONOMOUS_TRANSACTION pragma, so the COMMIT occurs without affecting the surrounding transaction that triggered the flush.
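    To illustrate the semantics in isolation (a hedged sketch; the table and procedure names are hypothetical), rows written by an autonomous transaction survive a rollback in the caller:

    ```sql
    -- Hypothetical demonstration of AUTONOMOUS_TRANSACTION semantics.
    create table demo_log ( msg varchar2(100) );

    create or replace procedure log_autonomous (p_msg varchar2)
    is
        pragma autonomous_transaction;
    begin
        insert into demo_log values (p_msg);
        commit;  -- commits only this autonomous transaction
    end;
    /

    begin
        log_autonomous('kept');                   -- committed independently
        insert into demo_log values ('discarded');
        rollback;                                 -- undoes only the caller's insert
    end;
    /
    -- demo_log now contains just the row 'kept'.
    ```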

    The call to DBMS_OUTPUT.PUT_LINE() is there to make it easy to monitor progress. So, let’s see how fast it goes….

    SQL> begin
      2      fast_log.flush;
      3      for r in 1..3456 loop
      4          fast_log.write('SOME TEXT', 'blah blah blah '||to_char(r));
      5      end loop;
      6      fast_log.flush;
      7  end;
      8  /
    FLUSH::12:32:22.640000::0
    FLUSH::12:32:22.671000::1000
    FLUSH::12:32:22.718000::1000
    FLUSH::12:32:22.749000::1000
    FLUSH::12:32:22.781000::456
    
    PL/SQL procedure successfully completed.
    
    SQL>
    

    Hmmm, 3456 records in 0.12 seconds; that’s not too shabby. The main problem with this approach is the need to flush the buffer to round up loose records, which is a pain, e.g. at the end of a session. If something causes the server to crash, unflushed records are lost. The other problem with doing stuff in-memory is that it consumes memory (durrrr), so we cannot make the cache too big.

    For the sake of comparison I added a procedure to the package which inserts a single record directly into LOG_TABLE each time it is called, again using an autonomous transaction:

     procedure write_each (p_short log_table.short_text%type
                      , p_long log_table.long_text%type)
    
     is
        pragma autonomous_transaction;
     begin
        insert into log_table values ( systimestamp, p_short, p_long );
    
        commit;
    
     end write_each;
    

    Here are its timings:

    SQL> begin
      2      fast_log.flush;
      3      for r in 1..3456 loop
      4          fast_log.write_each('SOME TEXT', 'blah blah blah '||to_char(r));
      5      end loop;
      6      fast_log.flush;
      7  end;
      8  /
    FLUSH::12:32:44.157000::0
    FLUSH::12:32:44.610000::0
    
    PL/SQL procedure successfully completed.
    
    SQL>
    

    Wall clock timings are notoriously unreliable, but the batched approach is 2-3 times faster than the single-record approach. Even so, I could execute well over three thousand discrete transactions in less than half a second, on a (far from top-of-the-range) laptop. So the question is: how much of a bottleneck is logging?


    To avoid any misunderstanding:

    @JulesLt had posted his answer while I was working on my PoC. Although there are similarities in our views, I think the differences in the suggested workarounds merit posting this.


    “What’s the timing for write_each without the autonomous but a single commit at the end? My timings suggest it is not significant – that bulking the insert is the big win”

    My timings suggest something slightly different. Replacing a COMMIT per write with a single COMMIT at the end roughly halves the elapsed time. Still slower than the bulked approach, but not by nearly as much.
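    For reference, the variant being discussed (a sketch only; the procedure name is hypothetical, and I have not re-run the timings here) simply drops the pragma and the per-call COMMIT, leaving the caller to commit once at the end:

    ```sql
    -- Hypothetical variant of write_each: no autonomous transaction and
    -- no per-row COMMIT; the caller commits once after the loop.
    create or replace procedure write_each_nocommit
        (p_short log_table.short_text%type
        , p_long log_table.long_text%type)
    is
    begin
        insert into log_table values ( systimestamp, p_short, p_long );
    end write_each_nocommit;
    /

    begin
        for r in 1..3456 loop
            write_each_nocommit('SOME TEXT', 'blah blah blah '||to_char(r));
        end loop;
        commit;  -- single commit covering all 3456 rows
    end;
    /
    ```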

    The key thing here is benchmarking. My proof of concept is running about six times faster than Jules’s test (my table has one index). There are all sorts of reasons why this might be – machine spec, database version (I’m using Oracle 11gR1), table structure, etc. In other words, YMMV.

    So the lesson is: first decide what the right thing to do is for your application, then benchmark it in your environment. Only consider a different approach if your benchmark reveals a serious performance problem. Knuth’s warning about premature optimization applies.



© 2021 The Archive Base. All Rights Reserved
