Not really. It depends on what data Oracle is going after. If Oracle is going after a small % of the rows in the table with a predicate, it is probably better off using the access strategy for the predicate, which is the use of an index. If Oracle is going after a large % of the rows in the table (more than 20% usually), Oracle is better off using the filter access strategy, where it goes directly against the table and filters out the rows it needs.
The access predicate is either going to be an index operation or a join operation. It is simply a more "direct" way to "access" the data by retrieving only the rows in the table that meet one or more of the conditions in the predicate, or that match the columns joining two tables.
Filter predicates aren't "bad". They are simply less exact in how they retrieve the data. When a filter predicate is applied, all rows in the current row source must be checked to see if they pass the filter. On the other hand, an access predicate will only place rows into a resulting row source when there is an exact match. Thus, access predicates are thought of as more efficient since they will typically only gather rows that match the conditions vs. hitting all rows and throwing away what doesn't match.
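To make the distinction concrete, here is a minimal Python sketch (not Oracle internals; the tiny table and "index" are made up for illustration) contrasting the two predicate styles: the filter touches every row and discards non-matches, while the access path goes straight to the matching rows.

```python
# A tiny "table": each row is (id, color).
table = [(1, "red"), (2, "blue"), (3, "red"), (4, "green"), (5, "red")]

# An "index" on color: maps each value directly to its matching rows.
index = {}
for row in table:
    index.setdefault(row[1], []).append(row)

def filter_scan(rows, color):
    """Filter predicate: examine every row, keep only the matches."""
    return [r for r in rows if r[1] == color]

def index_access(idx, color):
    """Access predicate: retrieve only the matching rows directly."""
    return idx.get(color, [])

# Both return the same result, but filter_scan examined all 5 rows
# while index_access never touched the non-matching ones.
print(filter_scan(table, "red"))
print(index_access(index, "red"))
```

Both calls return the same three rows; the difference is how many rows had to be inspected along the way, which is exactly the "less exact" behavior described above.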
I'll make one small change to what Mahesh stated in his reply. While % of rows is partly involved in Oracle's decision of whether or not to use an index, it is actually more correct to say that Oracle will typically use an index when the number of blocks (not rows) that must be accessed to get the rows you need is low.
For example, if you have a table with 100,000 rows that exists in 100 blocks, and you have an index on a column you are using in the WHERE clause to retrieve only 1000 rows, you'd think that Oracle "should" use an index since 1000 rows is only 1% of the total rows in the table. However, it is entirely possible that Oracle would choose to do a full table scan and filter the rows instead. Why? Well, if the 1000 rows were uniformly distributed across all 100 blocks in the table (i.e. rows matching what you're looking for in every block in the table), Oracle might decide that doing a full table scan - which would access only 100 blocks - would be more efficient than using an index to get the same rows but having to hit more blocks to do it. If an index were used to get 1000 rows, you'd likely see that more than 1000 block accesses would be required to get the same rows. So, 100 block accesses vs. 1000 (or more) block accesses might make Oracle decide to full table scan and filter. That would mean that Oracle would read all 100,000 rows in the table and throw away all of them except for 1000.
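The block-count arithmetic above can be sketched as follows. This is a rough back-of-the-envelope model, not the real optimizer cost formula; the figures simply mirror the example in the text.

```python
total_rows = 100_000
total_blocks = 100
rows_wanted = 1_000

# Full table scan: every block is read once, regardless of how many
# rows ultimately survive the filter.
full_scan_block_reads = total_blocks

# Index access with the 1,000 matching rows spread uniformly across
# all 100 blocks: each row fetched via the index can land on a
# different block visit, so the table-block visits alone approach one
# per row (index-block reads are ignored here, so this is a floor).
index_block_reads = rows_wanted

print(full_scan_block_reads)  # 100
print(index_block_reads)      # 1000
```

Even though the full scan reads 100,000 rows and throws away 99,000 of them, it wins on block reads (100 vs. roughly 1000 or more), which is the cost the optimizer actually weighs.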
Which is more efficient? In truth, the only way to know is to test the execution both ways and prove it out for yourself. All I wanted to caution against was falling into the trap of looking only at % of rows as the primary indicator of whether or not Oracle will use an index, and also to suggest that filter predicates are not "bad"...just different from access predicates in how they are applied.
Saturday, January 05, 2008
This topic is archived. No further replies will be accepted.