
Category: Oracle

2012-01-18 14:28:05

    In earlier releases of Oracle, there was a single block buffer cache, and all blocks from any segment went into this single area. Starting with Oracle 8.0, we had three places to store cached blocks from individual segments in the SGA:
 
* Default pool: The location where all segment blocks are normally cached. This is the original (and previously only) buffer pool.
 
* Keep pool: An alternate buffer pool where, by convention, you would assign segments that are accessed fairly frequently but still get aged out of the default buffer pool because other segments need the space (see the sizing sketch after this list).
 
* Recycle pool: An alternate buffer pool where by convention you would assign large segments that you access very randomly, and which would therefore cause excessive buffer flushing but would offer no benefit because by the time you wanted the block again it would have been aged out of the cache. You would separate these segments out from the segments in the default and keep pools so that they would not cause those blocks to age out of the cache.
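 
Before anything can be assigned to the keep or recycle pool, those pools have to be given memory of their own; they are sized separately from the default pool. A minimal sketch, assuming the Oracle 9i-and-later parameter names (the sizes shown are arbitrary examples, not recommendations):

-- carve out dedicated memory for the alternate pools; this memory is
-- in addition to the default pool sized by db_cache_size
-- (64m / 32m are arbitrary example sizes)
alter system set db_keep_cache_size    = 64m;
alter system set db_recycle_cache_size = 32m;

-- confirm the pools now exist and see their current sizes
select name, block_size, current_size
  from v$buffer_pool;

In Oracle 8.0/8i the corresponding parameters were BUFFER_POOL_KEEP and BUFFER_POOL_RECYCLE; the idea is the same.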
 
    Note that in the keep and recycle pool descriptions I used the phrase 'by convention.' There is nothing in place to ensure that you use either the keep pool or the recycle pool in the fashion described. In fact, the three pools manage blocks in a mostly identical fashion; they do not have radically different algorithms for aging or caching blocks. The goal here was to give the DBA the ability to segregate segments into hot, warm, and 'do not care to cache' areas. The theory was that objects in the default pool would be hot enough (i.e., used enough) to warrant staying in the cache all by themselves. The cache would keep them in memory since they were very popular blocks. You might have had some segments that were fairly popular, but not really hot; these would be considered the warm blocks. These segments' blocks could get flushed from the cache to make room for blocks you used infrequently (the 'do not care to cache' blocks). To keep these warm segments' blocks cached, you could do one of the following:
 
* Assign these segments to the keep pool, in an attempt to let the warm blocks stay in the buffer cache longer.
 
* Assign the 'do not care to cache' segments to the recycle pool, keeping the recycle pool fairly small so that blocks come into and leave the cache rapidly (decreasing the overhead of managing them all). A sketch of assigning segments to both pools follows this list.
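 
As a sketch of that convention - the table and index names here are made up purely for illustration - segments are placed into (or moved between) the pools with the BUFFER_POOL storage attribute:

-- a fairly popular 'warm' table you would like to keep around longer
alter table warm_lookup storage (buffer_pool keep);

-- a big, randomly accessed table you do not care to cache
alter table big_random storage (buffer_pool recycle);

-- indexes take the same storage attribute
alter index big_random_pk storage (buffer_pool recycle);

-- check where each segment is currently assigned
select segment_name, segment_type, buffer_pool
  from user_segments
 where segment_name in ('WARM_LOOKUP', 'BIG_RANDOM', 'BIG_RANDOM_PK');

The default pool is what you get whenever no BUFFER_POOL clause is specified.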
 
    This increased the management work the DBA had to perform, as there were three caches to think about, size, and assign objects to. Remember also that there is no sharing between them, so if the keep pool has lots of unused space, it won't give it to the overworked default or recycle pool. All in all, these pools were generally regarded as a very fine, low-level tuning device, only to be used after most other tuning alternatives had been considered (if I could rewrite a query to do one-tenth the I/O rather than set up multiple buffer pools, that would be my choice!).
 
Re-read the answer above: the keep, recycle and default pools are just three separate buffer pools - they do not really manage blocks *any differently*. They are just three different caches.
 
    How about this - instead of calling them default, keep and recycle - just call them pool1, pool2, pool3. Pool1 will not share blocks with pool2 and pool3 and vice versa. They are just three separate areas that do not overlap, do not share.
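 
One way to convince yourself that they really are three separate, non-overlapping areas is to look at the per-pool statistics the database keeps - each pool reports its own gets and physical reads, and a block cached in one pool is invisible to the other two. A sketch (it assumes you have SELECT access on the V$ views):

-- one row per configured pool (DEFAULT, KEEP, RECYCLE)
select name, db_block_gets, consistent_gets, physical_reads
  from v$buffer_pool_statistics;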

    alter table cache - that changes the way blocks read into the pool as the result of a full table scan of a large table (one bigger than about 2% of the buffer cache) are treated - period.
 
    alter table cache - does not cause a table to be cached. I wish they had not called it that at all, very misleading.
 
    Normally, if a table is larger than about 2% of the buffer cache in size, when you full scan it - the blocks you read in are aged out very rapidly - so that you do not "blow out" the cache by reading a big table. You would use and reuse the same small set of buffers over and over - instead of wiping out the entire cache. So, when you full scan a large table - it will typically result in the data NOT remaining in the cache afterward.
 
    Unless you "alter table cache" - then the blocks are read into the cache as if they were single-block I/O reads, and are cached using the normal least recently used algorithm.
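 
For an existing table the attribute is simply toggled on and off; the demo below sets it at creation time with CREATE TABLE ... CACHE instead. A sketch, using a hypothetical table name:

-- mark the table so its full-scan blocks are treated like single-block reads
alter table big_table cache;

-- revert to the normal "age out quickly after a large full scan" behaviour
alter table big_table nocache;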

ops$tkyte%ORA10GR2> create table t1
  2  as
  3  select level id, rpad('*',4000,'*') data, rpad('*',2000,'*') data2
  4    from dual
  5   connect by level <= 15000;
Table created.
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create table t2
  2  CACHE
  3  as
  4  select level id, rpad('*',4000,'*') data, rpad('*',2000,'*') data2
  5    from dual
  6   connect by level <= 15000;
Table created.
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select count(*) from t1;
  COUNT(*)
----------
     15000
ops$tkyte%ORA10GR2> select count(*) from t2;
  COUNT(*)
----------
     15000
ops$tkyte%ORA10GR2> set autotrace traceonly statistics;
ops$tkyte%ORA10GR2> select count(*) from t1;

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
      15011  consistent gets
       7672  physical reads
          0  redo size
        412  bytes sent via SQL*Net to client
        396  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
ops$tkyte%ORA10GR2> select count(*) from t1;

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
      15011  consistent gets
      14936  physical reads
          0  redo size
        412  bytes sent via SQL*Net to client
        396  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
ops$tkyte%ORA10GR2> select count(*) from t2;

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
      15011  consistent gets
          1  physical reads
          0  redo size
        412  bytes sent via SQL*Net to client
        396  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
ops$tkyte%ORA10GR2> select count(*) from t2;

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
      15011  consistent gets
          0  physical reads
          0  redo size
        412  bytes sent via SQL*Net to client
        396  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
ops$tkyte%ORA10GR2> set autotrace off
 

Note the differences in physical I/O requests - the large table T2 is cached as if it were the size of DUAL; table T1, on the other hand, is read in a way that tries NOT to wipe out the entire buffer cache.
Remember, this will vary with your cache size - so if you run this test on your system, you might see different numbers depending on your block size (mine was 8k, one row per block) and cache size (sga_target was 512m on my system).
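 
If you want to double-check which of the two demo tables actually carries the attribute, the CACHE column of USER_TABLES shows it (a sketch; the column is blank-padded, hence the TRIM - you would expect N for T1 and Y for T2):

select table_name, trim(cache) cache
  from user_tables
 where table_name in ('T1', 'T2');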
 