Method R Discussion Group

Method R helps developers, DBAs, and decision-makers optimize Oracle-based software in every phase of the software life cycle.

Correlated Delete with NOT EXISTS

Hello,

I have a question about a correlated delete. We are using the NOT EXISTS operator to perform a correlated delete on multiple tables. The transaction is taking pretty long, 7-10 seconds.
After generating the trace file, I noticed that the outer table is getting a full table scan and the inner table an index fast full scan. Is there anything we can do to speed things up?
delete from conversation_history ch
where not exists (select 'x' from docswitchbpo db where ch.bpid = db.vtoid)
and rownum <= 100;

This is the plan:
Rows
      0  DELETE CONVERSATION_HISTORY (cr=21,083 pr=11,313 pw=0 time=6.5530s)
      0   COUNT STOPKEY (cr=21,083 pr=11,313 pw=0 time=6.5530s)
      0    HASH JOIN RIGHT ANTI (cr=21,083 pr=11,313 pw=0 time=6.5530s)
575,690     INDEX FAST FULL SCAN SYS_C001236 (cr=10,336 pr=572 pw=0 time=0.0319s)
573,787     TABLE ACCESS FULL CONVERSATION_HISTORY (cr=10,747 pr=10,741 pw=0 time=3.4752s)
Thank you,
Diana
Diana Robete
Wednesday, October 31, 2007
 
 
It might help to list the table layout and any indexes that are on the table. Then people can see, based on the query, if it "should" be using an index of any kind.
Doug Gault
Thursday, November 1, 2007
 
 
Sorry, I should have added that info in the first place.

Here are the indexes on the conversation_history table:

INDEX_NAME                     COLUMN_NAME
------------------------------ ------------------------------
CONVERSATION_HISTORY           ACTION
CONVERSATION_HISTORY           ORDERIDENTIFIER
CONVERSATION_HISTORY           ORDERIDENTIFIER2
PK_CONVERSATION_HISTORY        CNVSTN_HISTORY_ID
CONVERSATION_HISTORY_ID        BPID
CONVERSATION_HISTORY_OID2_IDX  ORDERIDENTIFIER2

desc conversation_history

CNVSTN_HISTORY_ID   NOT NULL NUMBER
CONV_IDSQN                   NUMBER
CV_ID_DOC_TYPE               NUMBER
DIRECTION                    CHAR(1)
ORDERIDENTIFIER              VARCHAR2(100)
BPID                         VARCHAR2(100)
ORDERIDENTIFIER2             VARCHAR2(100)
CONV_REL_ID         NOT NULL NUMBER
ACTION                       CHAR(1)
CID                          NUMBER


Here are the indexes on docswitchbpo table:
INDEX_NAME                     COLUMN_NAME
------------------------------ ------------------------------
SYS_C001236                    VTOID

Each table has just over 600K rows.

Thank you,
Diana
Diana Robete
Thursday, November 1, 2007
 
 
In your DELETE statement, you use ROWNUM <= 100. Can you provide a bit more background on what you're expecting to happen with this DELETE? It appears that you want only 100 rows to be deleted no matter how many might actually qualify, but I'm not sure that's really true. On the other hand, you may be trying to delete only the qualifying rows among the first 100 found in the conversation_history table.

I've got a couple of ideas about rewrites, but I'd like an explanation of exactly what you're trying to accomplish with this query so I'll know which way to suggest.

As written, the delete is probably operating about as fast as could be expected. The question is: are you asking the optimizer correctly for what you really want?
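
To make the distinction concrete: as posted, the statement deletes at most 100 of the qualifying rows, wherever they fall in the table. Deleting only the qualifying rows among the first 100 rows examined would require pushing the ROWNUM predicate into a subquery, along these lines (just a sketch built from your posted table and column names, not tested):

delete from conversation_history
where rowid in (
  select t.rid
  from (select rowid rid, bpid
        from conversation_history
        where rownum <= 100) t
  where not exists (select 'x' from docswitchbpo db where t.bpid = db.vtoid));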
Karen Morton
Wednesday, November 7, 2007
 
 
Hi Karen,

In the delete, docswitchbpo is the parent table and conversation_history is the child. Currently there is no FK/PK relationship in the db, so we delete from the parent table first and then delete the children that have no parents. That is basically what we are trying to accomplish. I suggested removing the children first, but that is not an option.

The rownum <= 100 can be changed; that was just for testing. We want to commit every x number of rows to keep the transaction smaller, which is why we have the rownum condition.

Thank you for your help,
Diana
Diana Robete
Wednesday, November 14, 2007
 
 
Given your description of what the DELETE is intended to accomplish, I don't believe there is a better way to write it to improve performance. The anti-join being performed appears logical and appropriate for what you want to do. But if you want an alternative to try (just for fun), try this:

delete from conversation_history
where rowid in (
  select ch.rowid
  from conversation_history ch, docswitchbpo db
  where ch.bpid = db.vtoid(+)
  and db.vtoid is null );

However, regardless of whether the above alternative provides any better response time, I'd suggest that you lose the ROWNUM. You really just need to make sure you have a large enough rollback/undo segment to accommodate the delete. With the table sizes being what they are, the tables are relatively small: even if every row had to be deleted, you'd only be talking about 10,000 blocks or so, and that's not really that big. Also, ROWNUM adds potentially confusing information that the optimizer has to deal with when determining the plan. I'd suggest it's just better practice *not* to try to control transaction size that way. If you insist you need to control transaction size, a likely better alternative to ROWNUM would be some kind of bulk delete with a FORALL using a LIMIT clause.
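
A rough sketch of what that could look like, built from the table names in your posts (untested, and the LIMIT of 1,000 is an arbitrary batch size):

declare
  cursor orphans is
    select ch.rowid
    from conversation_history ch
    where not exists (select 'x' from docswitchbpo db where ch.bpid = db.vtoid);
  type rowid_tab is table of rowid index by pls_integer;
  l_rowids rowid_tab;
begin
  open orphans;
  loop
    -- fetch the next batch of orphaned-row rowids
    fetch orphans bulk collect into l_rowids limit 1000;
    exit when l_rowids.count = 0;
    -- delete the whole batch with one bulk operation
    forall i in 1 .. l_rowids.count
      delete from conversation_history where rowid = l_rowids(i);
    commit; -- commit per batch to keep each transaction small
  end loop;
  close orphans;
end;
/

One caveat with any commit-inside-a-loop approach: the driving cursor stays open across the commits, so an undersized undo area can still produce ORA-01555, which is one more reason that simply sizing undo for a single delete is the cleaner option.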
Karen Morton
Friday, November 16, 2007
 
 
Thank you, Karen, for your input. We will investigate the FORALL option (I didn't even know it existed).

Diana
Diana Robete
Tuesday, November 20, 2007
 
 

This topic is archived. No further replies will be accepted.
