A simple algorithm for the estimate (the example below assumes 5,000,000 rows are to be deleted from table BIG, owned by user A):

--First gather the table's statistics (this step can be skipped)
EXEC DBMS_STATS.GATHER_TABLE_STATS('A','BIG')

--Multiply the average row length (in bytes) by the number of rows to be deleted
select round(AVG_ROW_LEN*5000000/1024/1024*1.4) arch_size_mb
from dba_tables where table_name='BIG';
A factor of 1.4 is used here, i.e. roughly 40% of extra space on top of the raw row data. This factor is not universally valid; if the table carries many indexes, it may need to be raised, perhaps as high as 5!
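As a quick sanity check of the arithmetic, here is a hypothetical worked example (the AVG_ROW_LEN of 100 bytes is an assumed value, not taken from a real table):

--Assumed: AVG_ROW_LEN = 100 bytes, 5,000,000 rows to delete, factor 1.4
--100 * 5000000 / 1024 / 1024 * 1.4 = 667.57..., so roughly 668 MB of archive
select round(100*5000000/1024/1024*1.4) arch_size_mb from dual;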
Large volumes of deletes and updates usually leave database blocks fragmented, which hurts performance and wastes space.
Oracle is designed this way:
This is best illustrated with an example: Consider a transaction that updates a million row table. This obviously visits a large number of database blocks to make the change to the data. When the user commits the transaction Oracle does NOT go back and revisit these blocks to make the change permanent. It is left for the next transaction that visits any block affected by the update to 'tidy up' the block (hence the term 'delayed block cleanout')
At the end of the day, it comes down to performance.
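A minimal sketch to watch this happen, assuming a large test table BIG built from dba_objects (the column name object_name is an assumption) and a session with access to v$mystat/v$statname; the 'redo size' statistic grows even during the plain SELECT, because the read performs the deferred cleanout:

update big set object_name = object_name;   --dirty a large number of blocks
commit;                                      --the commit does NOT revisit those blocks

select s.value redo_before
  from v$mystat s, v$statname n
 where s.statistic# = n.statistic# and n.name = 'redo size';

select count(*) from big;                    --this read triggers the delayed block cleanout

select s.value redo_after
  from v$mystat s, v$statname n
 where s.statistic# = n.statistic# and n.name = 'redo size';  --larger than redo_before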
To assess which tables in the database are fragmented:
col frag format 9999.99
col owner format a30
col table_name format a30

SELECT *
  FROM
  (SELECT a.owner,
          a.table_name,
          a.num_rows,
          round(a.avg_row_len * a.num_rows/1024/1024) esti_size_mb,
          round(sum(b.bytes/1024/1024)) real_size_mb,
          (a.avg_row_len * a.num_rows) / sum(b.bytes) frag
     FROM dba_tables a, dba_segments b
    WHERE a.table_name = b.segment_name
      AND a.owner = b.owner
      AND b.bytes/1024/1024 > 100
      AND a.owner IN
          (SELECT username
             FROM dba_users
            WHERE ORACLE_MAINTAINED='N')
    GROUP BY a.owner, a.table_name, a.avg_row_len, a.num_rows
   HAVING a.avg_row_len * a.num_rows / sum(b.bytes) < 0.7
    ORDER BY sum(b.bytes) desc)
 WHERE rownum <= 30;
The query only estimates tables larger than 100 MB. The frag column is the ratio of the estimated data size to the actual allocated size, so a value below 0.7 means the segment occupies more than about 40% extra space. For precise numbers, refer to the show_space output; see the earlier post:
http://blog.chinaunix.net/uid-20687159-id-5848525.html
If tables with heavy fragmentation are found, options for eliminating it include:
1. alter table XXX move;  --indexes must be rebuilt afterwards (see the sketch below)
2. create table NEW as select * from OLD;  --suitable when relatively little data is to be retained
3. Online redefinition (the DBMS_REDEFINITION package)
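A minimal sketch of method 1, assuming a hypothetical table T with a single index T_IX (substitute your own names):

alter table t move;              --rewrites the table into compact blocks
--the move leaves every index on the table UNUSABLE, so rebuild each one:
alter index t_ix rebuild;
--finally, re-gather optimizer statistics for the reorganized table:
EXEC DBMS_STATS.GATHER_TABLE_STATS(user,'T')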