Category: Oracle

2010-11-23 18:35:08

Oracle Performance Tuning Tips
Performing large physical reads improves performance and reduces load because it is more efficient for the hardware to process one large read request than several equivalent smaller ones. This difference can be explained in terms of the component costs of an I/O operation.

Consider reading 64K of contiguous data from a single disk. What are the differences between performing one 64K read or eight separate 8K reads? The total transfer time would be the same. But the single multiblock read would require only one seek and one rotational latency, whereas the eight separate reads might require extra seeks and rotational latencies depending on whether the disk had to service other requests in the interim. Also, the CPU usage of the eight separate reads would include eight times more overhead than the one multiblock read. In terms of resource usage, the only benefit of the eight separate reads is that they can conceivably all use the same 8K memory buffer, whereas the multiblock read might require a 64K memory buffer. The only other potentially significant difference is that the multiblock read would take longer to return the first block of data to the application for it to begin processing.
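To make this concrete, here is a minimal sketch of the arithmetic, assuming an 8 ms average seek, 4 ms rotational latency and a 64 MB/s transfer rate (these figures are illustrative assumptions, not measurements), and taking the worst case in which each of the eight small reads pays a full seek and rotational latency:

# Worked example of one 64K read versus eight 8K reads.
# The disk figures below are assumptions chosen only for illustration.
SEEK_MS = 8.0          # assumed average seek time
ROT_LATENCY_MS = 4.0   # assumed average rotational latency
TRANSFER_MB_S = 64.0   # assumed sustained transfer rate

def service_time_ms(size_kb, requests):
    """Total time for `requests` reads of `size_kb` each, charging every
    request a full seek and rotational latency (worst case)."""
    transfer_ms = (size_kb / 1024.0) / TRANSFER_MB_S * 1000.0
    return requests * (SEEK_MS + ROT_LATENCY_MS + transfer_ms)

print("one 64K read  : %.2f ms" % service_time_ms(64, 1))   # ~13 ms
print("eight 8K reads: %.2f ms" % service_time_ms(8, 8))    # ~97 ms
# The transfer time is identical in both cases (about 1 ms in total);
# the gap comes entirely from the extra seeks and rotational latencies.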

So, a large multiblock read size improves raw and direct I/O throughput significantly and reduces CPU usage, at the expense of a tiny increase in memory usage and a slight reduction in the "first rows" response times. This response time impact is of little concern. Multiblock reads are only performed for full table scans and fast full index scans, and these execution plans should not normally be used if response times are more important than throughput. So the multiblock read size should be as large as possible if you use raw or direct I/O. Unfortunately, finding and using the maximum effective multiblock read count for your database is not always easy.
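As a minimal sketch of the sizing arithmetic, the following assumes that the binding limit is the operating system's maximum physical I/O size; in practice the effective limit may instead come from the port-specific Oracle maximum or from the volume manager, and the 1M and 8K figures are assumptions, not recommendations:

# Candidate maximum multiblock read count, under assumed limits.
DB_BLOCK_SIZE = 8 * 1024        # assumed db_block_size of 8K
OS_MAX_IO_SIZE = 1024 * 1024    # assumed OS maximum physical I/O size of 1M

# The largest db_file_multiblock_read_count that still fits in one physical I/O.
candidate_mbrc = OS_MAX_IO_SIZE // DB_BLOCK_SIZE
print("candidate db_file_multiblock_read_count =", candidate_mbrc)   # 128
# Whether reads of this size are actually issued should be confirmed by
# tracing the I/O calls at the operating system level, not assumed from
# the parameter setting alone.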

Of course, the foregoing only applies to unbuffered I/O. If you use buffered I/O the operating system will service multiblock reads as a series of small I/O operations anyway and so there will be almost no benefit in using a large multiblock read size, only a small saving in CPU usage. On the other hand, there is no point in using a small multiblock read size either, because the file system read ahead mechanism will be triggered anyway. So for buffered I/O, it is best to match the multiblock read size to the file system read ahead size exactly, in order to make optimum use of track buffering and file system read ahead.
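For example, if the file system read ahead size were 128K against an 8K block size (both figures are assumptions for illustration), the matching setting would be worked out as follows:

# Matching the multiblock read size to the file system read ahead size.
DB_BLOCK_SIZE = 8 * 1024        # assumed db_block_size of 8K
FS_READ_AHEAD = 128 * 1024      # assumed file system read ahead size of 128K

assert FS_READ_AHEAD % DB_BLOCK_SIZE == 0, "read ahead is not a whole number of blocks"

mbrc = FS_READ_AHEAD // DB_BLOCK_SIZE
print("db_file_multiblock_read_count =", mbrc,
      "(multiblock read size = %dK)" % (mbrc * DB_BLOCK_SIZE // 1024))   # 16, 128K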

You will notice that we are speaking of the multiblock read size as a constant, even though it is tunable - indeed it is session-modifiable. However, it is best to regard its setting as fixed from database creation because it is a factor in determining the optimal extent sizes for your database, and it is also a key factor in choosing the stripe element size for disk striping. Once these things have been fixed, they are not readily changed and so the multiblock read size should not be changed either.
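The sketch below illustrates one way these sizes can be tied together; treating the stripe element size as equal to the multiblock read size, and the extent size as a whole multiple of it, is an assumption made here for illustration rather than a rule taken from the text above:

# Deriving related layout sizes from a fixed multiblock read size.
DB_BLOCK_SIZE = 8 * 1024     # assumed db_block_size of 8K
MBRC = 64                    # assumed db_file_multiblock_read_count

multiblock_read_size = DB_BLOCK_SIZE * MBRC      # 512K with these assumptions

# Assumed conventions: one multiblock read falls within one stripe element,
# and extents are a whole multiple of the read size so that full table scans
# do not issue short reads at extent boundaries.
stripe_element_size = multiblock_read_size
extent_size = 16 * multiblock_read_size

print("multiblock read size :", multiblock_read_size // 1024, "K")
print("stripe element size  :", stripe_element_size // 1024, "K")
print("extent size          :", extent_size // 1024, "K")   # 8192K = 8M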

Of the other read count parameters, there is no reason to use anything less than the maximum effective value for the _db_file_direct_io_count parameter. This parameter dictates the size of the multiblock I/O operations performed for all Oracle direct I/O except sorts and hash joins. The sizes of the multiblock I/O operations for sorts and hash joins are set by the _sort_multiblock_read_count and _hash_multiblock_io_count parameters. Although the principle that "large reads are efficient" also applies to these parameters, there are other factors that need to be considered, and so much greater care is needed here. Setting the sort and hash multiblock read sizes is covered separately.
