I have an Oracle global temporary table which is “ON COMMIT DELETE ROWS”.
I have a loop in which I:
- Insert to global temporary table
- Select from global temporary table (post-processing)
- Commit, so that the table is purged before next iteration of the loop
Insertion is done with a call to oci_execute($stmt, OCI_DEFAULT). Retrieval is done with a call to oci_fetch_all($stmt, $result, 0, -1, OCI_FETCHSTATEMENT_BY_ROW | OCI_ASSOC). After that, a commit is made with oci_commit().
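To make the flow concrete, here is a minimal sketch of the loop (the table, column, and variable names are made up for illustration; $conn is an open OCI8 connection):

```php
<?php
// Sketch of the loop described above (my_gtt is the global temporary table).
$insert = oci_parse($conn, 'INSERT INTO my_gtt (id, payload) VALUES (:id, :payload)');
$select = oci_parse($conn, 'SELECT id, payload FROM my_gtt');

foreach ($chunks as $chunk) {
    foreach ($chunk as $row) {
        oci_bind_by_name($insert, ':id', $row['id']);
        oci_bind_by_name($insert, ':payload', $row['payload']);
        oci_execute($insert, OCI_DEFAULT);   // OCI_DEFAULT: no auto-commit
    }

    oci_execute($select, OCI_DEFAULT);
    oci_fetch_all($select, $result, 0, -1, OCI_FETCHSTATEMENT_BY_ROW | OCI_ASSOC);
    // ... post-processing of $result ...

    oci_commit($conn);   // ON COMMIT DELETE ROWS purges the table here
}
```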
The problem is that retrieval sometimes works, but sometimes I get one of the following errors:
- ORA-08103: object no longer exists
- ORA-01410: invalid ROWID
As if the session cannot “see” the records that it previously inserted.
Do you have any idea what could be causing this?
Thanks.
Are you using connection pooling? If so, it could be that different calls are executing in separate sessions. Rows in a global temporary table are visible only to the session that inserted them, so if the insert and the select run on different pooled connections, the select sees an empty table — and the temporary segment it expects may already have been released, which could explain the ORA-08103.
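One way to check this (a sketch, assuming the same $conn variable used for the insert and the select): print Oracle's session ID at each step. If the value changes between the insert and the select, pooling is the culprit.

```php
<?php
// Print the Oracle session ID for the current connection.
// Run this before the insert and again before the select:
// if the two values differ, the calls ran in different sessions.
$stmt = oci_parse($conn, "SELECT SYS_CONTEXT('USERENV', 'SID') AS sid FROM dual");
oci_execute($stmt);
$row = oci_fetch_assoc($stmt);
echo 'Session ID: ' . $row['SID'] . PHP_EOL;
```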
A better solution would be to have a single PL/SQL procedure which populates the temporary table and returns a result set in a single call. Which then suggests an even better solution: do away with the temporary table altogether.
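A sketch of that single-call approach (procedure, table, and column names are made up): the procedure does the insert and hands back a ref cursor, so everything happens in one round trip inside one session.

```sql
-- Hypothetical procedure: populate the temporary table and return the rows.
CREATE OR REPLACE PROCEDURE get_processed_rows (
    p_source_id IN  NUMBER,
    p_rows      OUT SYS_REFCURSOR
) AS
BEGIN
    INSERT INTO my_gtt (id, payload)
        SELECT id, payload
          FROM source_table
         WHERE source_id = p_source_id;

    OPEN p_rows FOR
        SELECT id, payload FROM my_gtt;
END;
/
```

On the PHP side the cursor is bound with OCI_B_CURSOR and executed before fetching:

```php
<?php
$stmt   = oci_parse($conn, 'BEGIN get_processed_rows(:src, :rc); END;');
$cursor = oci_new_cursor($conn);
oci_bind_by_name($stmt, ':src', $sourceId);
oci_bind_by_name($stmt, ':rc', $cursor, -1, OCI_B_CURSOR);
oci_execute($stmt);
oci_execute($cursor);   // the returned cursor must be executed before fetching
oci_fetch_all($cursor, $result, 0, -1, OCI_FETCHSTATEMENT_BY_ROW | OCI_ASSOC);
```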
There are few situations in Oracle which demand the use of temporary tables. Most things are solvable with pure SQL or perhaps bulk collecting into nested tables. What actual manipulation of the data in the temporary table do you undertake between the insert and the subsequent select?
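For instance, bulk collecting into a nested table does the staging entirely in session memory, with no temporary table involved (again, table and column names are made up):

```sql
-- Sketch: stage the rows in a PL/SQL collection instead of a temporary table.
DECLARE
    TYPE t_payload_tab IS TABLE OF source_table.payload%TYPE;
    l_payloads t_payload_tab;
BEGIN
    SELECT payload
      BULK COLLECT INTO l_payloads
      FROM source_table
     WHERE source_id = :src;

    FOR i IN 1 .. l_payloads.COUNT LOOP
        -- ... post-processing on l_payloads(i) ...
        NULL;
    END LOOP;
END;
/
```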
edit
Temporary tables have a performance hit – the rows are written to disk, in the temporary tablespace. PL/SQL collections remain in (session) memory and so are faster. Of course, because they are in session memory they won’t solve the problem you have with connection pooling.
Is the reason you need to chunk up the data because you don’t want to pass 200,000 rows to your PHP in one fell swoop? I think I need a little more context if I am to help you any further.
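If the chunking is only about limiting how much PHP holds in memory at once, you can fetch in batches from a single open cursor and skip the staging table entirely. A sketch (the query and the process_batch() function are hypothetical):

```php
<?php
// Process a large result set in fixed-size batches without a temporary table.
$stmt = oci_parse($conn, 'SELECT id, payload FROM source_table');
oci_execute($stmt);

$batch = [];
while (($row = oci_fetch_assoc($stmt)) !== false) {
    $batch[] = $row;
    if (count($batch) === 1000) {    // process 1,000 rows at a time
        process_batch($batch);       // hypothetical post-processing function
        $batch = [];
    }
}
if ($batch) {
    process_batch($batch);           // leftover rows from the final batch
}
```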