Method R Discussion Group
Method R helps developers, DBAs, and decision-makers optimize Oracle-based software in every phase of the software life cycle.
We have a SQL script that executes numerous times during the day. It usually runs in about 2 to 3 minutes. About three or four months ago the script started taking 2 to 4 hours, depending on when you started it. For example, you could start it at 8:00 am and it would run in 2 to 3 minutes. If you started another SQL*Plus session at 8:30 am and executed it, it would run in 2 to 4 hours. If you started another SQL*Plus session at 8:45 am and executed it, it would run in about 2 to 3 minutes. This is the same script with the same parameters. Our DBAs had the PGA set at 10 MB; we have since increased it to 34 MB, but that has not helped. Our profile showed that db file sequential read took 88.8% of the run time. Any ideas? What else could we do to get back to a consistent 2 to 3 minutes?
This is my first time using the profiler and posting to this discussion group.
Douglas,
The easiest way to solve this would be to get a trace when the process is running normally and compare it to a trace from a slow run. It may well be that the 3-minute run also spends 88% of its time on db file sequential read. With the profiles for the two runs side by side, you can compare the execution plans involved and see whether there is a difference. As a side note, a 34 MB PGA sounds on the small side! Remember that this 34 MB is "the target aggregate PGA memory available to all server processes attached to the instance". (See http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams157.htm#CHDBJCDE ). Unless this is a *very* small system, 34 MB is undersized. You can check v$pgastat for "maximum PGA allocated" to see whether Oracle has exceeded that 34 MB limit...
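For reference, here is one way to capture such a trace and check the PGA high-water mark. This is a sketch assuming Oracle 10g and sufficient privileges; extended SQL trace (event 10046 at level 8) includes the wait events a profiler needs:

```sql
-- Turn on extended SQL trace (with wait events) for the current session,
-- run the script, then turn tracing off. The trace file lands in
-- user_dump_dest and can be fed to a profiler.
ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
-- ... run the problem script here ...
ALTER SESSION SET EVENTS '10046 trace name context off';

-- Compare the PGA target to the most Oracle has ever actually allocated;
-- if the maximum exceeds the target, the target is too small.
SELECT name, value, unit
FROM   v$pgastat
WHERE  name IN ('aggregate PGA target parameter',
                'maximum PGA allocated');
```

Do the slow run and a fast run with identical tracing so the two profiles are directly comparable.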
Another thing that situations like this indicate is the likelihood of contention. There may be something else executing on your system at the times when it runs slow and it is that other process that you need to investigate.
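To see what else is running at the moment the script is slow, a quick look at v$session (a sketch; column availability assumes Oracle 10g) shows the other active sessions and what each one is currently waiting on:

```sql
-- Other sessions doing work right now, with their current wait event.
-- Run this from a separate session while the script is slow.
SELECT sid, serial#, username, sql_id, event, seconds_in_wait
FROM   v$session
WHERE  status = 'ACTIVE'
  AND  username IS NOT NULL
  AND  sid <> SYS_CONTEXT('USERENV', 'SID');
```

If several sessions show db file sequential read against the same disks at the same time, that points to the contention described above.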
I want to thank each of you for your comments. I believe both are correct: we don't have enough in the PGA. We are increasing our SGA from 8.8 GB to 14 GB this weekend, and will then try to tweak the PGA. I also believe there is contention going on. What tools would you use to see the contention (other jobs) executing on the database? Thanks again!
We have a case study which explains this behavior. A program ran for 25 minutes during the day, when there was a lot of activity, and about 10 minutes at night, when there was not. A trace file and the Hotsos Profiler for the two runs showed the following:

  Run           Event                      # of Calls   Avg Time per Call (s)
  Fast (night)  db file sequential read        71,414                0.005385
  Slow (day)    db file sequential read       119,156                0.007283

As you can see, during the day there were many more calls to db file sequential read, and each read took longer due to competition for resources at the SAN level. The customer was thinking that the problem was in the SAN. We found several high-LIO SQL statements and optimized them; reducing LIO demand freed up memory so other buffers stayed cached longer, avoiding many of these db file sequential reads. If you would like to see it, send us an e-mail and we can look at your case and show you our case study as well.
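A starting point for finding high-LIO statements like the ones mentioned above is v$sql, ranked by buffer gets. A sketch (the ROWNUM-over-subquery idiom works on Oracle 10g):

```sql
-- Top 10 statements by logical I/O (buffer gets), with gets per execution.
-- High gets_per_exec values are the usual tuning candidates.
SELECT *
FROM  (SELECT sql_id,
              executions,
              buffer_gets,
              ROUND(buffer_gets / NULLIF(executions, 0)) AS gets_per_exec
       FROM   v$sql
       ORDER  BY buffer_gets DESC)
WHERE ROWNUM <= 10;
```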