I have this query:
select distinct id,name from table1
For a given ID, the name will always be the same. Both fields are indexed. There’s no separate table that maps the id to the name. The table is very large (tens of millions of rows), so the query could take some time.
This query is very fast, since it’s indexed:
select distinct name from table1
Likewise for this query:
select distinct id from table1
Assuming I can’t get the database structure changed (a very safe assumption), what’s a better way to structure the first query for performance?
Edit to add a sanitized desc of the table:
Name                           Null?    Type
------------------------------ -------- ----------------------------
KEY                            NOT NULL NUMBER
COL1                           NOT NULL NUMBER
COL2                           NOT NULL VARCHAR2(4000 CHAR)
COL3                                    VARCHAR2(1000 CHAR)
COL4                                    VARCHAR2(4000 CHAR)
COL5                                    VARCHAR2(60 CHAR)
COL6                                    VARCHAR2(150 CHAR)
COL7                                    VARCHAR2(50 CHAR)
COL8                                    VARCHAR2(3 CHAR)
COL9                                    VARCHAR2(3 CHAR)
COLA                                    VARCHAR2(50 CHAR)
COLB                           NOT NULL DATE
COLC                           NOT NULL DATE
COLD                           NOT NULL VARCHAR2(1 CHAR)
COLE                           NOT NULL NUMBER
COLF                           NOT NULL NUMBER
COLG                                    VARCHAR2(600 CHAR)
ID                                      NUMBER
NAME                                    VARCHAR2(50 CHAR)
COLH                                    VARCHAR2(3 CHAR)

20 rows selected
[LATEST EDIT]
My ORIGINAL ANSWER regarding creating the appropriate index on (name,id) to replace the index on (name) is below. (That wasn’t an answer to the original question, which disallowed any database changes.)
Here are statements that I have not yet tested. There’s probably some obvious reason they won’t work. I’d never actually suggest writing statements like this (at the risk of being thoroughly drummed out for such a ridiculous suggestion).

If these queries even return result sets, the result set will only resemble the result set from the OP query almost by accident, taking advantage of a quirky guarantee about the data that Don has provided us. These statements are NOT equivalent to the original SQL; they are designed for the special case as described by Don.
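A sketch of the kind of statement I mean (untested; it pairs each id with its name through the minimum ROWID on which each appears, and relies entirely on the guarantee about the data discussed below):

```sql
select i.id
     , n.name
  from ( select id, min(rowid) as min_row_id
           from table1
          group by id ) i
  join ( select name, min(rowid) as min_row_id
           from table1
          group by name ) n
    on n.min_row_id = i.min_row_id
```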
Let’s unpack that:
This pairs id with name. Someone else suggested the idea of an index merge. I had previously dismissed that idea: an optimizer plan to match tens of millions of rowids without eliminating any of them.
With sufficiently low cardinality for id and name, and with the right optimizer plan:
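the same ROWID-matching idea could be written with hints to encourage the use of the two existing single-column indexes (untested, and the index names here are invented):

```sql
select i.id
     , n.name
  from ( select /*+ INDEX(t1 table1_ix_id) */
                id, min(rowid) as min_row_id
           from table1 t1
          group by id ) i
  join ( select /*+ INDEX(t2 table1_ix_name) */
                name, min(rowid) as min_row_id
           from table1 t2
          group by name ) n
    on n.min_row_id = i.min_row_id
```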
Let’s unpack that
IMPORTANT NOTE
These statements are FUNDAMENTALLY different from the OP query. They are designed to return a DIFFERENT result set than the OP query. They happen to return the desired result set because of a quirky guarantee about the data. Don has told us that NAME is determined by ID. (Is the converse true? Is ID determined by NAME? Do we have a STATED GUARANTEE, not necessarily enforced by the database, but a guarantee that we can take advantage of?) For any ID value, every row with that ID value will have the same NAME value. (And are we also guaranteed the converse, that for any NAME value, every row with that NAME value will have the same ID value?)

If so, maybe we can make use of that information. If ID and NAME appear in distinct pairs, we only need to find one particular row. The “pair” is going to have a matching ROWID, which conveniently happens to be available from each of the existing indexes. What if we get the minimum ROWID for each ID, and the minimum ROWID for each NAME? Couldn’t we then match the ID to the NAME based on the ROWID that contains the pair? I think it might work, given a low enough cardinality. (That is, if we’re dealing with only hundreds of ROWIDs rather than tens of millions.)

[/LATEST EDIT]
[EDIT]
The question is now updated with information concerning the table; it shows that the ID column and the NAME column both allow NULL values. If Don can live without any NULLs returned in the result set, then adding an IS NOT NULL predicate on both of those columns may enable an index to be used. (NOTE: in an Oracle (B-Tree) index, NULL values do NOT appear in the index.)

[/EDIT]
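That is, something along these lines (untested; it excludes any rows where id or name is NULL):

```sql
select distinct id, name
  from table1
 where id is not null
   and name is not null
```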
ORIGINAL ANSWER:
create an appropriate index
Okay, that’s not the answer to the question you asked, but it’s the right answer to fixing the performance problem. (You specified no changes to the database, but in this case, changing the database is the right answer.)
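Something like this (the index name here is made up):

```sql
create index table1_ix_name_id on table1 (name, id);
```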
Note that if you have an index defined on (name, id), then you (very likely) don’t need an index on (name), since the optimizer will consider the leading name column in the other index.

(UPDATE: as someone more astute than I pointed out, I hadn’t even considered the possibility that the existing indexes were bitmap indexes and not B-tree indexes…)
Re-evaluate your need for the result set… do you need to return id, or would returning name be sufficient? For a particular name, you could submit a second query to get the associated id, if and when you needed it…

If you really need the specified result set, you can try some alternative formulations of the query (a GROUP BY in place of the DISTINCT, for example) to see if the performance is any better. I don’t hold out much hope for any of them.
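Untested examples of the kind of rewrites I have in mind, a GROUP BY in place of DISTINCT and a correlated lookup of name by id:

```sql
select id, name
  from table1
 group by id, name
```

or

```sql
select t1.id
     , ( select max(t2.name)
           from table1 t2
          where t2.id = t1.id ) as name
  from table1 t1
 group by t1.id
```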
UPDATE: as others have astutely pointed out, with this approach we’re testing and comparing performance of alternative queries, which is a sort of hit or miss approach. (I don’t agree that it’s random, but I would agree that it’s hit or miss.)
UPDATE: Tom suggests the ALL_ROWS hint. I hadn’t considered that, because I was really focused on getting a query plan using an INDEX. I suspect the OP query is doing a full table scan, and it’s probably not the scan that’s taking the time, it’s the sort unique operation (&lt;10g) or hash operation (10gR2+) that takes the time. (Absent timed statistics and an event 10046 trace, I’m just guessing here.) But then again, maybe it is the scan; who knows, the high water mark on the table could be way out in a vast expanse of empty blocks.
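The hinted version would look something like this (untested):

```sql
select /*+ ALL_ROWS */ distinct id, name
  from table1
```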
It almost goes without saying that the statistics on the table should be up-to-date, and we should be using SQL*Plus AUTOTRACE, or at least EXPLAIN PLAN to look at the query plans.
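For example, from SQL*Plus (standard Oracle tooling; the DBMS_XPLAN call assumes 9iR2 or later):

```sql
set autotrace traceonly explain
select distinct id, name from table1;

-- or, without AUTOTRACE:
explain plan for select distinct id, name from table1;
select * from table(dbms_xplan.display);
```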
But none of the suggested alternative queries really address the performance issue.
It’s possible that hints will influence the optimizer to choose a different plan, basically satisfying the ORDER BY from an index, but I’m not holding out much hope for that. (I don’t think the FIRST_ROWS hint works with GROUP BY; the INDEX hint may.) I can see the potential for such an approach in a scenario where there are gobs of data blocks that are empty and sparsely populated, and by accessing the data blocks via an index, it could actually be significantly fewer data blocks pulled into memory… but that scenario would be the exception rather than the norm.
UPDATE: As Rob van Wijk points out, making use of the Oracle trace facility is the most effective approach to identifying and resolving performance issues.
Without the output of an EXPLAIN PLAN or SQL*Plus AUTOTRACE output, I’m just guessing here.
I suspect the performance problem you have right now is that the table data blocks have to be referenced to get the specified result set.
There’s no getting around it: the query cannot be satisfied from just an index, since there isn’t an index that contains both the NAME and ID columns with either the ID or NAME column as the leading column. The other two “fast” OP queries can be satisfied from an index without needing to reference the row (data blocks).

Even if the optimizer plan for the query were to use one of the indexes, it would still have to retrieve the associated row from the data block, in order to get the value for the other column. And with no predicate (no WHERE clause), the optimizer is likely opting for a full table scan, and likely doing a sort operation (&lt;10g). (Again, an EXPLAIN PLAN would show the optimizer plan, as would AUTOTRACE.)
I’m also assuming here (big assumption) that both columns are defined as NOT NULL.
You might also consider defining the table as an index organized table (IOT), especially if these are the only two columns in the table. (An IOT isn’t a panacea; it comes with its own set of performance issues.)
You can try re-writing the query (unless that’s a database change that is also verboten). (In our database environments, we consider a query to be as much a part of the database as the tables and indexes.)
Again, without a predicate, the optimizer will likely not use an index. There’s a chance you could get the query plan to use one of the existing indexes to get the first rows returned quickly, by adding a hint; test a combination of hints.
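For example (untested; the index names here are invented):

```sql
select /*+ FIRST_ROWS */ distinct id, name
  from table1;

select /*+ INDEX(t table1_ix_id) */ distinct id, name
  from table1 t;

select /*+ FIRST_ROWS INDEX(t table1_ix_name) */ distinct id, name
  from table1 t;
```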
With a hint, you may be able to influence the optimizer to use an index, and that may avoid the sort operation, but overall, it may take more time to return the entire result set.
(UPDATE: someone else pointed out that the optimizer might choose to merge two indexes based on ROWID. That’s a possibility, but without a predicate to eliminate some rows, that’s likely going to be a much more expensive approach (matching tens of millions of ROWIDs) from two indexes, especially when none of the rows are going to be excluded on the basis of the match.)
But all that theorizing doesn’t amount to squat without some performance statistics.
Absent altering anything else in the database, the only other hope (I can think of) of speeding up the query is to make sure the sort operation is tuned so that the (required) sort operation can be performed in memory, rather than on disk. But that’s not really the right answer. The optimizer may not be doing a sort operation at all; it may be doing a hash operation (10gR2+) instead, in which case, that should be tuned. (The sort operation is just a guess on my part, based on past experience with Oracle 7.3, 8, 8i, and 9i.)
A serious DBA is going to have more issue with you futzing with the SORT_AREA_SIZE and/or HASH_AREA_SIZE parameters for your session(s) than he will with creating the correct indexes. (And those session parameters are “old school” for versions prior to 10g automatic memory management magic.)

Show your DBA the specification for the result set, and let the DBA tune it.
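If you do go down that road, the old-school session-level settings look like this (pre-10g style; the sizes are example values, in bytes, not recommendations):

```sql
alter session set workarea_size_policy = manual;
alter session set sort_area_size = 104857600;   -- 100 MB for this session
alter session set hash_area_size = 104857600;   -- 100 MB for this session
```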