
Category: Oracle

2011-09-06 21:15:29

1. Sequence setup

Different combinations of the CACHE and ORDER options are available when creating a sequence.

a. CACHE + NOORDER

This setting has the least performance impact on RAC, and it is the default when a sequence is created without options. Each instance caches a distinct range of numbers in its shared pool. Sequence numbers will not be globally ordered, and, as on single-instance databases, gaps will occur when the cached values are lost (e.g. at instance shutdown or on a shared pool flush).
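As a minimal sketch (the sequence name seq_demo is made up for illustration), a sequence created without options picks up these defaults, which can be verified in USER_SEQUENCES:

-- create a sequence with no options: defaults to CACHE 20 NOORDER
CREATE SEQUENCE seq_demo;

-- check the defaults
SELECT sequence_name, cache_size, order_flag, last_number
  FROM user_sequences
 WHERE sequence_name = 'SEQ_DEMO';
-- expected: CACHE_SIZE = 20, ORDER_FLAG = 'N'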

b. CACHE + ORDER

Each instance caches the same set of numbers. Ordering of sequence numbers is guaranteed, but gaps will still occur. Performance is better than with NOCACHE sequences. Gaps in the numbering appear whenever the sequence cache is lost, e.g. on any shared pool flush or instance shutdown, just as on single-instance databases (a DDL sketch for this and the remaining combinations follows option d).

c. NOCACHE + NOORDER

Use this setting when, for example, government regulations or laws require sequence numbers without cache-related gaps. Ordering is not guaranteed. Performance is better than with NOCACHE + ORDER.

d. NOCACHE + ORDER

Use this setting when ordered sequences with no cache-related gaps are required: no cached values can be lost and ordering is guaranteed. This setting, however, has the most negative performance impact on RAC.
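The explicit DDL for combinations b through d could look like this (the sequence names are hypothetical; CACHE 20 simply spells out the default cache size):

-- b. CACHE + ORDER: cached and globally ordered across instances
CREATE SEQUENCE seq_cache_order  CACHE 20 ORDER;

-- c. NOCACHE + NOORDER
CREATE SEQUENCE seq_nocache      NOCACHE NOORDER;

-- d. NOCACHE + ORDER: the most expensive combination on RAC
CREATE SEQUENCE seq_strict       NOCACHE ORDER;

-- an existing sequence can be switched with ALTER SEQUENCE, e.g.
ALTER SEQUENCE seq_demo NOCACHE ORDER;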

2. Gaps in sequences

Gaps in sequences are always possible, even on non-RAC databases, e.g.

a. when any kind of failure (like an ORA-600) or a transaction rollback occurs:

1. user A requests a nextval and gets 111
2. user B requests a nextval and gets 112
3. user B commits
4. user A's session fails and/or A's transaction is rolled back
5. user C requests a nextval and gets 113; value 111 is lost and becomes a gap
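A single-session sketch of the same effect (the table and sequence names are made up): nextval is never rolled back, so the value taken before the rollback is simply lost:

CREATE TABLE t (id NUMBER);
CREATE SEQUENCE seq_gap NOCACHE;               -- NOCACHE, so the gap is not cache-related

INSERT INTO t (id) VALUES (seq_gap.NEXTVAL);   -- uses value 1
ROLLBACK;                                      -- the insert is undone, the sequence is not

INSERT INTO t (id) VALUES (seq_gap.NEXTVAL);   -- uses value 2
COMMIT;

SELECT id FROM t;                              -- only 2 remains; 1 is a permanent gap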

b. when sequence caching is used and the cached values are flushed from the shared pool. The same happens in RAC as in single-instance databases; on RAC, a flush of the shared pool on any instance is enough to invalidate that instance's cached values.
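A hedged illustration of a cache-related gap (assuming a freshly created sequence with the default CACHE 20, and the ALTER SYSTEM privilege to flush the shared pool):

CREATE SEQUENCE seq_cached CACHE 20;

SELECT seq_cached.NEXTVAL FROM dual;   -- returns 1; values up to 20 are cached

ALTER SYSTEM FLUSH SHARED_POOL;        -- the unused cached values are lost

SELECT seq_cached.NEXTVAL FROM dual;   -- typically returns 21; 2-20 become a gap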

3. Performance impact when using CACHE or NOCACHE

When caching is used, the dictionary cache (the row cache) is updated only once per cache range, with the new high-water mark. E.g. with a cache of 20, when a nextval is requested for the first time, the row cache entry of the sequence is updated in the dictionary cache and increased by 20, and the LAST_NUMBER column of DBA_SEQUENCES increases by the cache value, i.e. by 20. The 20 extracted values, stored in the shared pool, are handed out to the sessions requesting nextval. The view v$_sequences shows whether the sequence is currently cached in the shared pool or not.
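A sketch of the high-water mark moving by the cache size (the sequence name is made up; v$_sequences is an undocumented view, so its columns may vary between versions):

CREATE SEQUENCE seq_hwm CACHE 20;

SELECT last_number FROM user_sequences WHERE sequence_name = 'SEQ_HWM';   -- 1

SELECT seq_hwm.NEXTVAL FROM dual;                                         -- 1

SELECT last_number FROM user_sequences WHERE sequence_name = 'SEQ_HWM';   -- 21, moved by the cache size

-- undocumented view listing sequences currently cached in the shared pool
SELECT * FROM v$_sequences WHERE sequence_name = 'SEQ_HWM';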

When no caching is used, the dictionary cache has to be updated for every nextval request, which means the row cache entry has to be locked and updated on each request. Multiple sessions requesting nextval will therefore block on the 'row cache lock' wait event.
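To check whether sessions are hitting this, a diagnostic sketch can look at the current waits and at the dc_sequences entry of the row cache (standard views; interpreting the numbers is of course workload-dependent):

-- sessions currently waiting on the dictionary cache
SELECT sid, event, p1
  FROM v$session_wait
 WHERE event = 'row cache lock';

-- activity on the sequence entries of the dictionary cache
SELECT parameter, gets, getmisses, modifications
  FROM v$rowcache
 WHERE parameter = 'dc_sequences';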

When caching + ordering is used and RAC is enabled (cluster_database = true), a session that wants the NEXTVAL of a sequence needs to take an exclusive instance SV lock before inserting or updating the sequence values in the shared pool. When multiple sessions want the nextval of the same sequence, some sessions will wait on the 'DFS lock handle' wait event with id1 equal to the object number of the sequence.
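A hedged way to see which sequence such a wait is about: assuming id1 is exposed as p2 of the wait event, it can be joined to DBA_OBJECTS to get the sequence name:

-- sessions waiting on 'DFS lock handle' and the object behind the wait
SELECT w.sid, w.event, w.p2 AS object_id, o.owner, o.object_name, o.object_type
  FROM v$session_wait w, dba_objects o
 WHERE w.event = 'DFS lock handle'
   AND o.object_id = w.p2;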

As a consequence of these serialization mechanisms, the sequence throughput, i.e. the maximum speed at which a sequence can be incremented, does not scale with the number of RAC nodes when the sequence is not cached and/or is ordered. The throughput of cached sequences is always better than that of non-cached sequences.

In summary, due to the serialization mechanisms, the throughput of non-cached or ordered sequences does not scale with the number of RAC nodes, i.e. it does not increase. It even decreases slightly as nodes are added, since, e.g., it takes longer to obtain the row cache lock or the SQ enqueue when more nodes are involved. This decrease is clearly visible when going from 1 to 2 nodes, is still perceptible to a lesser extent when going from 2 to 3 nodes, and flattens out with more nodes.
