First, the concepts:
- Row migration is usually caused by UPDATE operations.
- Row chaining is usually caused by INSERT operations.
- SQL statements that create or query these chained/migrated rows perform extra I/O and therefore run slower.
- To diagnose chained/migrated rows, use the ANALYZE command and query the V$SYSSTAT view and USER_TABLES.
- To eliminate chained/migrated rows, raise PCTFREE, use ALTER TABLE MOVE, or exp/imp.
- In most cases chaining is unavoidable, especially for tables with large columns such as LONGs and LOBs. If many different tables contain lots of chained rows and their average row length is not that large, consider rebuilding the database with a larger block size (or moving those tables into a larger-block tablespace; see the sketch after this list).
- Tables with more than 255 columns may force chaining.
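As an alternative to rebuilding the whole database, the affected tables can be moved into a tablespace with a non-default block size. A minimal sketch, assuming a 32K target block size; the cache size, datafile path, and table name wide_table are placeholders for your environment:

-- a buffer cache for the non-default block size must exist before the tablespace
ALTER SYSTEM SET db_32k_cache_size = 64M;
-- create the 32K tablespace (path and size are illustrative)
CREATE TABLESPACE ts_32k DATAFILE '/u01/oradata/ts_32k_01.dbf' SIZE 1G BLOCKSIZE 32K;
-- move the chained table into the 32K tablespace (rebuild its indexes afterwards)
ALTER TABLE wide_table MOVE TABLESPACE ts_32k;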
Now the remedies:
- ALTER TABLE MOVE (note: indexes and materialized views become unusable and must be rebuilt; see the sketch after this list)
- Raise PCTFREE (only affects newly formatted blocks; or see the fine-grained method below)
- Export and re-import (to keep the data consistent, ideally stop the application first)
- Avoid tables with more than 255 columns
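A minimal sketch of the move-and-rebuild remedy on the TEST table used below; the index name test_idx is hypothetical:

alter table test move;
-- every index on the moved table is now UNUSABLE and must be rebuilt
select index_name, status from user_indexes where table_name = 'TEST';
alter index test_idx rebuild;
-- raising PCTFREE reserves more room in each block for future updates,
-- but only blocks formatted after the change are affected
alter table test pctfree 30;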
--row migration simulation
create table test (c1 varchar2(20));
begin
  for i in 1..1000 loop
    insert into test values (null);
  end loop;
end;
/
commit;
analyze table test compute statistics;
select pct_free,pct_used,avg_row_len,chain_cnt,blocks from user_tables where table_name='TEST';
--CHAIN_CNT stays empty until you run ANALYZE; DBMS_STATS does not populate it
SELECT chain_cnt,
round(chain_cnt/num_rows*100,2) pct_chained,
avg_row_len, pct_free , pct_used, blocks
FROM user_tables
WHERE table_name = 'TEST';
--produce row migration (the UPDATE lengthens every row)
update test set c1='hello row move here';
analyze table test compute statistics;
select pct_free,pct_used,avg_row_len,chain_cnt,blocks from user_tables where table_name='TEST';
--confirm with LIST CHAINED ROWS
@?/rdbms/admin/utlchain.sql
analyze table test LIST CHAINED ROWS;
select count(*) from chained_rows;
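The CHAINED_ROWS table created by utlchain.sql also records the addresses of the affected rows, so a few can be listed directly, e.g.:

select head_rowid from chained_rows where table_name = 'TEST' and rownum <= 10;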
--fix
alter table test move; --or export and re-import; note CTAS may fail on LONG columns
analyze table test compute statistics;
select pct_free,pct_used,avg_row_len,chain_cnt,blocks from user_tables where table_name='TEST';
SELECT name, value FROM v$sysstat WHERE name = 'table fetch continued row';
In an AWR report, check the 'table fetch continued row' statistic.
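To trend that statistic across AWR snapshots, a sketch against DBA_HIST_SYSSTAT; values are cumulative since instance startup, so LAG yields the per-snapshot delta (in RAC also filter on dbid/instance_number; deltas spanning a restart are meaningless):

select snap_id,
       value - lag(value) over (order by snap_id) as continued_row_delta
  from dba_hist_sysstat
 where stat_name = 'table fetch continued row'
 order by snap_id;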
Fine-grained approach
If only some blocks contain migrated rows, there is no need to move the whole table; fix just the rows that need it:
-- Get the name of the table with migrated rows:
ACCEPT table_name PROMPT 'Enter the name of the table with migrated rows: '
-- Clean up from last execution
set echo off
DROP TABLE migrated_rows;
DROP TABLE chained_rows;
-- Create the CHAINED_ROWS table
@$ORACLE_HOME/rdbms/admin/utlchain.sql
set echo on
spool fix_mig
-- List the chained and migrated rows
ANALYZE TABLE &table_name LIST CHAINED ROWS;
-- Copy the chained/migrated rows to another table
create table migrated_rows as
SELECT orig.*
FROM &table_name orig, chained_rows cr
WHERE orig.rowid = cr.head_rowid
AND cr.table_name = upper('&table_name');
-- Delete the chained/migrated rows from the original table
DELETE FROM &table_name WHERE rowid IN (SELECT head_rowid FROM chained_rows);
-- Copy the chained/migrated rows back into the original table
INSERT INTO &table_name SELECT * FROM migrated_rows;
spool off
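One caveat to this script: if the table is referenced by enabled foreign keys or has triggers, disable them around the DELETE/INSERT and re-enable them afterwards; the constraint and table names here are hypothetical:

alter table child_table disable constraint fk_child_parent;
-- ... run the DELETE / INSERT steps above ...
alter table child_table enable constraint fk_child_parent;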
Sometimes a query against a table with few rows feels slow, or index creation waits on db file sequential read; that may be row migration/chaining (judge from P1, see the addendum below). The simplest approach is the SQL shown above.
References:
Row Chaining and Row Migration (Doc ID 122020.1)
How to Identify, Avoid and Eliminate Chained and Migrated Rows? (Doc ID 746778.1)
Addendum:
When the wait ought to be db file scattered read, why do we see db file sequential read instead?
select p1, count(*)
  from dba_hist_active_sess_history
 where to_char(sample_time,'yyyymmdd hh24:mi')
       between '20220722 08:30' and '20220722 09:30'
   and event = 'db file sequential read'
-- and sql_id=&sqlid
-- and user_id=(select user_id from dba_users where username='&username')
 group by p1;
P1: the absolute file number
P2: the block being read (first block#)
P3: the number of blocks (should be 1)
So by checking which tablespace the file identified by P1 belongs to, you can make a first judgment: an application tablespace points to row chaining/migration, while the undo tablespace means another session modified the data and the query had to fetch the pre-commit image from undo.
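A sketch of that lookup, substituting the P1 value from the query above:

select file_id, tablespace_name
  from dba_data_files
 where file_id = &p1;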