Method R Discussion Group

Method R helps developers, DBAs, and decision-makers optimize Oracle-based software in every phase of the software life cycle.


Multi-Piece Rows - i.e., more than 255 columns

We have a problem with pieces of rows being migrated to different blocks, which causes more blocks to be visited to return the data for each piece of the row. We have been working on possible solutions to this problem, but what I was wondering is: how does Oracle decide how to store the two pieces? We have done some block dumps, and sometimes we see piece number one with 255 columns and piece number two with the rest, and we have also seen piece number two with 255 columns and piece number one with the rest. Also, other than working with PCTFREE and PCTUSED, are there any other tricks or solutions that could, if not eliminate the problem, at least slow it down?

What we are attempting to do is to load some dummy data into a field past the last field in the row that is actually used. Our theory is that by doing this we will have two full pieces created on the first insert. Does this make any sense? In our limited testing it does seem to dramatically slow down the inter-block row chaining.
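For reference, here is roughly how we have been producing the block dumps mentioned above (T_WIDE and ID are stand-ins for our real table and key column names):

-- Find the relative file number and block number for one row, then dump
-- that block to the session's trace file (user_dump_dest) for inspection.
SELECT dbms_rowid.rowid_relative_fno(rowid) AS file_no,
       dbms_rowid.rowid_block_number(rowid) AS block_no
  FROM t_wide
 WHERE id = 1;

-- Substitute the FILE_NO and BLOCK_NO values returned above.
ALTER SYSTEM DUMP DATAFILE 7 BLOCK 12345;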
Brian Teigland
Friday, May 11, 2007
 
 
When Oracle added the ability to have more than 255 columns in a row back in the 8.x release, the way it chose to handle it was to chain the data beyond the 255th column within the same block. This allowed them to keep the basic block structure (with single-byte addressing) while still allowing more than 255 columns.

Now, this kind of chaining is not the same as inter-block chaining, where an additional block or blocks contain the rest of the data. The intent was to have all the data contained in the same block. So, as long as the row will actually fit in a single block, additional I/O should not be seen.
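One way to check that (a sketch; the statistic name is real, but you'd want to capture it in the same session immediately before and after fetching the rows of interest) is to watch the session's 'table fetch continued row' statistic. If both pieces really are in the same block, the counter shouldn't grow:

-- Session-level count of fetches that had to follow a row piece onward;
-- compare the value before and after your query.
SELECT s.value
  FROM v$mystat s, v$statname n
 WHERE s.statistic# = n.statistic#
   AND n.name = 'table fetch continued row';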

However, if you have numerous trailing columns (in the columns beyond 255) that are NULL originally and then later are populated (even with a single character), all the NULL columns between the last column that was populated originally and this new column will now take up space. Initially, the trailing NULL columns didn't take up any space in the row. So, the row will have to be split into two row pieces and the new row piece is migrated to a new block... and that's when the row actually becomes chained.
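A minimal sketch of that scenario (the table and column names are made up, and whether the new piece actually leaves the block depends on how much free space that block has):

-- Build a 300-column test table, insert a row that leaves the trailing
-- columns NULL, then populate a column past the 255th.
DECLARE
  l_sql VARCHAR2(32767) := 'CREATE TABLE t_wide (id NUMBER';
BEGIN
  FOR i IN 1 .. 300 LOOP
    l_sql := l_sql || ', c' || TO_CHAR(i, 'FM000') || ' VARCHAR2(30)';
  END LOOP;
  EXECUTE IMMEDIATE l_sql || ')';
END;
/

INSERT INTO t_wide (id, c001) VALUES (1, 'a');   -- c002..c300 stay NULL
COMMIT;

UPDATE t_wide SET c290 = 'x' WHERE id = 1;       -- NULLs before c290 now take space
COMMIT;

-- Requires the CHAINED_ROWS table created by utlchain.sql; the row should
-- show up here once it has more than one piece.
ANALYZE TABLE t_wide LIST CHAINED ROWS INTO chained_rows;
SELECT count(*) FROM chained_rows WHERE table_name = 'T_WIDE';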

If, as you theorize, you initially populate a column at the end of the row with some dummy data, this should prevent the migration when the other NULL columns are later updated with values. Again, this will depend on whether the entire row - including the extraneous dummy data that you're now adding - will actually fit into a single block. If you've got really long rows, or rows that contain LOBs, or if this table happens to be an index-organized table, you've got more (and different) issues to deal with. However, if this is just a situation with a table containing > 255 columns, then populating the last column with some dummy value - or adding a dummy column at the end and populating that - should fix the majority of your problems.
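A sketch of that workaround, using the same made-up names as above (note that a DEFAULT added to an existing column only affects rows inserted afterward):

-- Give the last column a default so every new insert creates both row
-- pieces up front...
ALTER TABLE t_wide MODIFY (c300 DEFAULT 'X');

-- ...or populate it explicitly at insert time.
INSERT INTO t_wide (id, c001, c300) VALUES (2, 'a', 'X');
COMMIT;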
Karen Morton
Friday, May 11, 2007
 
 
I need to clarify something. I think you're saying that sometimes, for a two piece row, the first piece has most of the columns and other times the second piece has most of the columns. And so, you're wondering why it varies?

Additionally, you're asking how to keep the Oracle kernel from migrating or chaining a row?

I looked closely at your test method and I can't tell whether you are inserting a row and then updating it later.

I think a complete test method (i.e., one that would reveal the answer to your first question) has to include, at least, the following variables:

1. Are the data added with one statement?
2. If #1's answer is 'no', then are they added in the same transaction?
3. How full is the block before the data are added?

If you account for all of these variables, then you'll probably be able to answer your first question. There really is no constant answer because Oracle can change the way their kernel behaves with each release. So, you have to consistently be a good tester if you want truth.
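For example (a sketch only, reusing the made-up T_WIDE table from the earlier replies), you'd want to compare at least these cases, dumping the affected block after each one to see which piece holds which columns:

-- Case 1: all values supplied in a single INSERT.
INSERT INTO t_wide (id, c001, c290) VALUES (10, 'a', 'x');
COMMIT;

-- Case 2: the same data added in two statements within one transaction.
INSERT INTO t_wide (id, c001) VALUES (11, 'a');
UPDATE t_wide SET c290 = 'x' WHERE id = 11;
COMMIT;

-- Case 3: two statements in two transactions, ideally against a block that
-- is already fairly full (load some filler rows first).
INSERT INTO t_wide (id, c001) VALUES (12, 'a');
COMMIT;
UPDATE t_wide SET c290 = 'x' WHERE id = 12;
COMMIT;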

Your second question (i.e., how to prevent it), which I assume is the more important one, has two answers: traditional and revolutionary.

The traditional answer is you have to do two things:
a. make the block size big enough to hold the largest row
b. never insert a row into a block that doesn't or won't have enough free space to hold a larger version of that row
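In DDL terms, that might look something like this (a sketch; the file name, sizes, and PCTFREE value are placeholders, a 16K block size also requires db_16k_cache_size to be set, and T_WIDE is the made-up table from earlier):

-- (a) A tablespace whose block size can hold the longest fully populated row.
CREATE TABLESPACE wide_ts
  DATAFILE '/u01/oradata/ORCL/wide_ts01.dbf' SIZE 100M
  BLOCKSIZE 16K;

-- (b) Reserve enough free space per block for rows to grow after insert,
--     then move the table so existing rows pick up the new settings too.
ALTER TABLE t_wide PCTFREE 40;
ALTER TABLE t_wide MOVE TABLESPACE wide_ts;
-- (Indexes on t_wide must be rebuilt after the move.)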

The revolutionary answer really isn't that revolutionary because the concept is extremely old: you have to properly measure skew.

When there's skew, not all rows get updated equally. The problem is the most difficult to solve when there is uniform distribution in the resulting updated row lengths. In other words, if each row is updated so that it has a different length than all the other rows, then ideal allocation of rows to blocks is very difficult to accomplish. It's almost exactly like the problem that Oracle addressed when they started allocating space in the shared pool in uniform sizes.

If most or all rows are updated equally (i.e., the same columns are updated and the new values are of similar size), then there is little or no skew. This is good because you can solve most or all of the problem quite easily.

Knowing how the rows are updated is the only problem to solve. You can do that partially by looking at the source code. But the more reliable way is to trace a statistically sufficient number of executions so you can see the input values.
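On 10g, one way to do that (the SID and serial# below are placeholders; look them up in v$session) is extended SQL trace with bind values included:

BEGIN
  dbms_monitor.session_trace_enable(
    session_id => 123,
    serial_num => 4567,
    waits      => TRUE,
    binds      => TRUE);   -- bind values show which columns change and how big the new values are
END;
/
-- ...let a representative workload run, then:
BEGIN
  dbms_monitor.session_trace_disable(session_id => 123, serial_num => 4567);
END;
/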

You might be able to eliminate the need to trace the application if the segment contains enough data before and after one or more rounds of changes. If it does, then you can periodically query the segment to reveal how the rows are changing.
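For instance (a sketch; ANALYZE is used here because dbms_stats doesn't populate chain_cnt), recording these numbers periodically shows how the rows are growing and chaining over time:

ANALYZE TABLE t_wide COMPUTE STATISTICS;

SELECT num_rows, blocks, avg_row_len, chain_cnt
  FROM user_tables
 WHERE table_name = 'T_WIDE';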
Jeff Holt
Friday, May 11, 2007
 
 
