Category: Oracle

2015-07-16 20:13:49

As everyone knows, statistics directly affect the execution plans the Oracle optimizer finally chooses, so gathering statistics regularly is a routine DBA task. For some large tables, though, say with tens of millions of rows, re-analyzing the table can actually make the execution plans of some application SQL worse, for example producing a lot of full table scans and seriously hurting database performance. If that happens, one workaround is to pick a small table involved in the affected query, run grant select on table_name to public, and then check whether the plan has improved; sometimes this has to be tried several times. If nothing works, the only option left is to import the previous statistics back, which requires having backed them up before the analysis. The backup procedure is described below.
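Before deciding to fall back to old statistics, it is worth confirming that the plan really did change. A minimal check, assuming you already know the statement's SQL_ID (the value below is a placeholder, not from the original post):

-- '0abcdefghij12' is a hypothetical SQL_ID; take the real one from v$sql or AWR
select * from table(dbms_xplan.display_cursor('0abcdefghij12', null, 'TYPICAL'));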

 

conn perfstat/perfstat

Create a backup table for the statistics:

exec dbms_stats.CREATE_STAT_TABLE(OWNNAME=>'PERFSTAT',STATTAB=>'BK_STAT_200809',TBLSPACE=>'TOOLS');

The owner of the table is PERFSTAT, the table that stores the statistics is named BK_STAT_200809, and it is created in TOOLS, PERFSTAT's default tablespace.
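As a quick sanity check (not part of the original procedure), you can verify that the backup table landed in the expected tablespace:

select owner, table_name, tablespace_name
  from dba_tables
 where owner = 'PERFSTAT'
   and table_name = 'BK_STAT_200809';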

Create a public synonym for this table so that other users can access it easily:

create public synonym BK_STAT_200809 for perfstat.BK_STAT_200809;

Grant PERFSTAT the privilege it needs; without it the statistics cannot be backed up:

conn /as sysdba

grant analyze any to perfstat;

Generate the SQL that backs up the statistics of the tables to be analyzed. The filter conditions differ from case to case; here I restrict it to tables under a few specific schemas that were last analyzed before January 2008.

conn perfstat/perfstat

set line 250

set pages 10000

set timing on

set time on

col script_for_export_stat for a250

spool script_for_export_stat.sql

select 'exec dbms_stats.EXPORT_TABLE_STATS(OWNNAME=>' || '''' ||

       upper(owner) || ''',statown=>''PERFSTAT'',TABNAME=>' || '''' || upper(table_name) ||

       ''',STATTAB=>''BK_STAT_200809'',STATID=>''BK_STAT_20080918'');' as script_for_export_stat

 from dba_tables

 where last_analyzed is not null

   and owner in ('MOCSCARD','MOCSACCT')

   and last_analyzed < to_date('20080101', 'yyyymmdd')

   order by owner;

spool off

This generates SQL like the following:

exec dbms_stats.EXPORT_TABLE_STATS(OWNNAME=>'MOCSACCT',statown=>'PERFSTAT',TABNAME=>'TPAYORDER',STATTAB=>'BK_STAT_200809',STATID=>'BK_STAT_20080918');

exec dbms_stats.EXPORT_TABLE_STATS(OWNNAME=>'MOCSACCT',statown=>'PERFSTAT',TABNAME=>'T_ACCTWATER',STATTAB=>'BK_STAT_200809',STATID=>'BK_STAT_20080918');

exec dbms_stats.EXPORT_TABLE_STATS(OWNNAME=>'MOCSACCT',statown=>'PERFSTAT',TABNAME=>'T_BALANCELOG',STATTAB=>'BK_STAT_200809',STATID=>'BK_STAT_20080918');

exec dbms_stats.EXPORT_TABLE_STATS(OWNNAME=>'MOCSACCT',statown=>'PERFSTAT',TABNAME=>'T_DEDUCTIONINFO',STATTAB=>'BK_STAT_200809',STATID=>'BK_STAT_20080918');

exec dbms_stats.EXPORT_TABLE_STATS(OWNNAME=>'MOCSACCT',statown=>'PERFSTAT',TABNAME=>'T_BOSSUSERSTATUSNOTIFY',STATTAB=>'BK_STAT_200809',STATID=>'BK_STAT_20080918');
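After trimming the SQL*Plus noise (the spooled SELECT itself, column headings, the "rows selected" line) out of script_for_export_stat.sql, run it from the PERFSTAT session to take the backup, and then, if you like, confirm that rows arrived in the backup table (STATID is a standard column of the user stat table):

@script_for_export_stat.sql

select statid, count(*)
  from perfstat.bk_stat_200809
 group by statid;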

You can also check the sizes of the tables that need to be analyzed:

select a.owner, a.segment_name, a.segment_type, a.tablespace_name, round(a.bytes/1024/1024/1024,2) tablesize_gb, b.last_analyzed
  from dba_segments a, dba_tables b
 where a.owner = b.owner
   and a.segment_name = b.table_name
   and b.owner in ('MOCSACCT','MOCSCARD')
   and b.last_analyzed < to_date('20080101', 'yyyymmdd');
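If you also want a rough idea of the total volume to be analyzed, which helps size the maintenance window, the same join can be aggregated per owner (a small sketch along the lines of the query above):

select b.owner, round(sum(a.bytes)/1024/1024/1024,2) total_gb
  from dba_segments a, dba_tables b
 where a.owner = b.owner
   and a.segment_name = b.table_name
   and b.owner in ('MOCSACCT','MOCSCARD')
   and b.last_analyzed < to_date('20080101', 'yyyymmdd')
 group by b.owner;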

 

Tables can be analyzed either with the ANALYZE statement or with the DBMS_STATS package. From Oracle 9i onwards DBMS_STATS is the recommended method, and ANALYZE does not handle partitioned tables very well: after analyzing with ANALYZE, the dba_tables.last_analyzed information can be inaccurate even though dba_part_tables.last_analyzed is correct; this bug exists. So use DBMS_STATS when gathering statistics. -- note from 小荷
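For a partitioned table, DBMS_STATS also lets you say explicitly which level of statistics to gather, something ANALYZE cannot do cleanly. A minimal sketch (the table name here is hypothetical):

begin
  dbms_stats.gather_table_stats(
    ownname          => 'MOCSACCT',
    tabname          => 'T_SOME_PART_TABLE',  -- hypothetical partitioned table
    granularity      => 'ALL',                -- table, partition and subpartition stats
    estimate_percent => 10,
    cascade          => TRUE);
end;
/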

Use a different sampling percentage depending on the size of the table:

For tables smaller than 500 MB, analyze with an estimate_percent of 100%; tables between 500 MB and 1 GB, 50%; between 1 GB and 5 GB, 10%; between 5 GB and 10 GB, 3%; between 10 GB and 20 GB, 1%. Tables larger than 20 GB should not be analyzed lightly. Of course, this also depends on the characteristics of each project and system.
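Applied to a single table, the tiering above only changes the estimate_percent value of the call; for example, for a hypothetical table of roughly 8 GB the 3% tier would be used:

exec dbms_stats.gather_table_stats(OWNNAME=>'MOCSACCT',tabname=>'T_BIG_TABLE',cascade=>TRUE,estimate_percent=> 3);

On 10g and later, estimate_percent => dbms_stats.auto_sample_size is another option if you prefer to let Oracle pick the sample size.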

 

The following statement generates the gather commands directly:

set line 250

col script_for_gather_stat for a250

spool script_for_gather_stat.sql

select 'exec dbms_stats.gather_table_stats(OWNNAME=>'||''''||upper(owner)||''',tabname=>'||''''||

upper(table_name)||''',cascade=>TRUE);' as script_for_gather_stat

from (

select owner,segment_name table_name, sum(bytes) / 1024 / 1024 size_m

 from dba_segments

 where owner in ('MOCSCARD','MOCSACCT')

   and segment_name in

       (select table_name

          from dba_tables

         where last_analyzed is not null

           and owner in ('MOCSCARD','MOCSACCT')

           and table_name not in

               ('SUBSCRIPTION_HISTORY', 'SUBSCRIBER_HISTORY')

           and last_analyzed < to_date('20080101', 'yyyymmdd'))

           group by segment_name,owner

 order by size_m) a

 where a.size_m<=500

 union all

 select 'exec dbms_stats.gather_table_stats(OWNNAME=>'||''''||upper(owner)||''',tabname=>'||''''||

upper(table_name)||''',cascade=>TRUE,estimate_percent=> 50);' as script_for_gather_stat

from (

select owner,segment_name table_name, sum(bytes) / 1024 / 1024 size_m

 from dba_segments

 where owner in ('MOCSCARD','MOCSACCT')

   and segment_name in

       (select table_name

          from dba_tables

         where last_analyzed is not null

           and owner in ('MOCSCARD','MOCSACCT')

           and table_name not in

               ('SUBSCRIPTION_HISTORY', 'SUBSCRIBER_HISTORY')

           and last_analyzed < to_date('20080101', 'yyyymmdd'))

           group by segment_name,owner

 order by size_m) a

 where a.size_m>500 and a.size_m<=1024

 union all

 select 'exec dbms_stats.gather_table_stats(OWNNAME=>'||''''||upper(owner)||''',tabname=>'||''''||

upper(table_name)||''',cascade=>TRUE,estimate_percent=> 10);' as script_for_gather_stat

from (

select owner,segment_name table_name, sum(bytes) / 1024 / 1024 size_m

 from dba_segments

 where owner in ('MOCSCARD','MOCSACCT')

   and segment_name in

       (select table_name

          from dba_tables

         where last_analyzed is not null

           and owner in ('MOCSCARD','MOCSACCT')

           and table_name not in

               ('SUBSCRIPTION_HISTORY', 'SUBSCRIBER_HISTORY')

           and last_analyzed < to_date('20080101', 'yyyymmdd'))

           group by segment_name,owner

 order by size_m) a

 where a.size_m>1024 and a.size_m<=5120

 union all

 select 'exec dbms_stats.gather_table_stats(OWNNAME=>'||''''||upper(owner)||''',tabname=>'||''''||

upper(table_name)||''',cascade=>TRUE,estimate_percent=> 3);' as script_for_gather_stat

from (

select owner,segment_name table_name, sum(bytes) / 1024 / 1024 size_m

 from dba_segments

 where owner in ('MOCSCARD','MOCSACCT')

   and segment_name in

       (select table_name

          from dba_tables

         where last_analyzed is not null

           and owner in ('MOCSCARD','MOCSACCT')

           and table_name not in

               ('SUBSCRIPTION_HISTORY', 'SUBSCRIBER_HISTORY')

           and last_analyzed < to_date('20080101', 'yyyymmdd'))

           group by segment_name,owner

 order by size_m) a

 where a.size_m>5120 and a.size_m<=10240

   union all

 select 'exec dbms_stats.gather_table_stats(OWNNAME=>'||''''||upper(owner)||''',tabname=>'||''''||

upper(table_name)||''',cascade=>TRUE,estimate_percent=> 1);' as script_for_gather_stat

from (

select owner,segment_name table_name, sum(bytes) / 1024 / 1024 size_m

 from dba_segments

 where owner in ('MOCSCARD','MOCSACCT')

   and segment_name in

       (select table_name

          from dba_tables

         where last_analyzed is not null

           and owner in ('MOCSCARD','MOCSACCT')

           and table_name not in

               ('SUBSCRIPTION_HISTORY', 'SUBSCRIBER_HISTORY')

           and last_analyzed < to_date('20080101', 'yyyymmdd'))

           group by segment_name,owner

 order by size_m) a

 where a.size_m>10240 and a.size_m<=20480

    union all

select '--NOT ANALYZING TABLE LARGER THAN 20G: '||table_name from (

select owner,segment_name table_name, sum(bytes) / 1024 / 1024 size_m

 from dba_segments

 where owner in ('MOCSCARD','MOCSACCT')

   and segment_name in

       (select table_name

          from dba_tables

         where last_analyzed is not null

           and owner in ('MOCSCARD','MOCSACCT')

           and table_name not in

               ('SUBSCRIPTION_HISTORY', 'SUBSCRIBER_HISTORY')

           and last_analyzed < to_date('20080101', 'yyyymmdd'))

           group by segment_name,owner

 order by size_m) a

 where a.size_m>20480;

spool off

Running this, I got the following output:

exec dbms_stats.gather_table_stats(OWNNAME=>'MOCSCARD',tabname=>'T_CARDCODE',cascade=>TRUE);

exec dbms_stats.gather_table_stats(OWNNAME=>'MOCSACCT',tabname=>'SERVICEINFO',cascade=>TRUE);

exec dbms_stats.gather_table_stats(OWNNAME=>'MOCSCARD',tabname=>'SERVICEINFO',cascade=>TRUE);

exec dbms_stats.gather_table_stats(OWNNAME=>'MOCSACCT',tabname=>'TPAYAPPINFO',cascade=>TRUE);

exec dbms_stats.gather_table_stats(OWNNAME=>'MOCSACCT',tabname=>'T_USERACCT',cascade=>TRUE,estimate_percent=> 10);

exec dbms_stats.gather_table_stats(OWNNAME=>'MOCSACCT',tabname=>'T_ACCTINFO',cascade=>TRUE,estimate_percent=> 10);

exec dbms_stats.gather_table_stats(OWNNAME=>'MOCSACCT',tabname=>'T_ACCTBOOK',cascade=>TRUE,estimate_percent=> 10);

exec dbms_stats.gather_table_stats(OWNNAME=>'MOCSCARD',tabname=>'T_CARDWATER',cascade=>TRUE,estimate_percent=> 10);

exec dbms_stats.gather_table_stats(OWNNAME=>'MOCSCARD',tabname=>'T_CARDINFO',cascade=>TRUE,estimate_percent=> 10);

exec dbms_stats.gather_table_stats(OWNNAME=>'MOCSACCT',tabname=>'T_ACCTINFO_HIS',cascade=>TRUE,estimate_percent=> 3);

exec dbms_stats.gather_table_stats(OWNNAME=>'MOCSACCT',tabname=>'T_TEMP_ACCTWATER',cascade=>TRUE,estimate_percent=> 3);

 

Now the gather statements above can be executed. It is best to run them when the system is idle, for example as a crontab scheduled job.
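If you would rather schedule the work inside the database than with crontab, one option is a DBMS_SCHEDULER job that fires in an off-peak window; a sketch, with a hypothetical job name and a single gather call as the payload:

begin
  dbms_scheduler.create_job(
    job_name   => 'GATHER_BIG_TABLE_STATS',    -- hypothetical job name
    job_type   => 'PLSQL_BLOCK',
    job_action => 'begin dbms_stats.gather_table_stats(ownname => ''MOCSACCT'', tabname => ''T_USERACCT'', cascade => TRUE, estimate_percent => 10); end;',
    start_date => trunc(sysdate) + 1 + 2/24,   -- tomorrow at 02:00
    enabled    => TRUE);
end;
/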

 

If anything goes wrong, the original statistics can be imported back:

col script_for_import_stat for a250

spool script_for_import_stat.sql

select 'exec dbms_stats.IMPORT_TABLE_STATS(OWNNAME=>' || '''' ||

       upper(owner) || ''',statown=>''PERFSTAT'',TABNAME=>' || '''' || upper(table_name) ||

       ''',STATTAB=>''BK_STAT_200809'',STATID=>''BK_STAT_20080918'');' as script_for_import_stat

 from dba_tables

 where last_analyzed is not null

   and owner in ('MOCSCARD','MOCSACCT')

   and table_name not in ('SUBSCRIPTION_HISTORY', 'SUBSCRIBER_HISTORY')

   and last_analyzed < to_date('20080101', 'yyyymmdd');

 

spool off
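To actually roll back, run the spooled import script from the PERFSTAT session (again after trimming the SQL*Plus noise). Flushing the shared pool afterwards forces the affected statements to be re-parsed against the restored statistics, but only do that if you can live with the resulting hard-parse spike:

@script_for_import_stat.sql

-- optional and intrusive on a busy system:
-- alter system flush shared_pool;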

 
--------------------------------------------
 
Flushing the cache

In day-to-day work there is often a need to force a statement to be re-parsed. For example, with bind peeking, if the first execution happens to peek a bind value that produces a particularly bad plan, subsequent executions with bind variables are not re-parsed and keep reusing that worst-case plan. In that situation we want a fresh parse so as to get a reasonably good plan. The common methods are:

a. alter system flush shared_pool;

b. perform a DDL operation on one of the objects referenced by the statement;

c. re-gather statistics on those objects.

However, all of these have a fairly broad impact, so 10.2.0.4 introduced dbms_shared_pool.purge, which removes a single SQL statement's shared cursor from the shared pool and therefore affects only that one statement.
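Before the walk-throughs below, a small helper along these lines saves looking up the address and hash_value by hand. It is only a sketch: the SQL text filter is a placeholder, and it assumes DBMS_SHARED_POOL has been installed (dbmspool.sql) and that the caller can read v$sqlarea and has EXECUTE on sys.dbms_shared_pool.

declare
  v_name varchar2(100);
begin
  select address || ',' || hash_value
    into v_name
    from v$sqlarea
   where sql_text like 'select * from test%'   -- placeholder filter
     and rownum = 1;
  sys.dbms_shared_pool.purge(v_name, 'C');     -- 'C' = purge a cursor
end;
/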

 

Example:

 

10.2.0.4:

 

SQL> create table test(id int);

Table created.

SQL> select * from test;

no rows selected

SQL> select address,hash_value,executions,parse_calls from v$sql where sql_TEXT like 'select * from test%';

ADDRESS          HASH_VALUE EXECUTIONS PARSE_CALLS

---------------- ---------- ---------- -----------
0000040229F039E0 1689401402          1           1


SQL> exec dbms_shared_pool.purge('0000040229F039E0,1689401402','C');

PL/SQL procedure successfully completed.

SQL> select address,hash_value,executions,parse_calls from v$sql where sql_TEXT like 'select * from test%';

ADDRESS          HASH_VALUE EXECUTIONS PARSE_CALLS
---------------- ---------- ---------- -----------
0000040229F039E0 1689401402          1           1

 

As you can see, the purge did not take effect. To confirm, run the query once more:

 

SQL> select * from test;

no rows selected

SQL> select address,hash_value,executions,parse_calls from v$sql where sql_TEXT like 'select * from test%';

ADDRESS          HASH_VALUE EXECUTIONS PARSE_CALLS
---------------- ---------- ---------- -----------
0000040229F039E0 1689401402          2           2

 

Both executions and parse_calls increased, which shows that the earlier purge indeed did not take effect.

 

SQL> alter session set events '5614566 trace name context forever';

Session altered.

SQL> exec dbms_shared_pool.purge('0000040229F039E0,1689401402','C');

PL/SQL procedure successfully completed.

SQL> select address,hash_value,executions,parse_calls from v$sql where sql_TEXT like 'select * from test%';

no rows selected

 

See MetaLink Doc ID 751876.1: in 10.2.0.4 the purge functionality is not enabled by default; it has to be activated with event 5614566 or with patch 5614566.
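According to that note, the event can also be set instance-wide rather than per session; a sketch based on the session-level command above (check the MOS note for your exact patch level before relying on it):

alter system set events '5614566 trace name context forever';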

 

11g:

 

11g>create table test_purge(id int);

 

11g>select * from test_purge;

 

11g>select address,hash_value from v$sql where sql_text like 'select * from test%';

ADDRESS          HASH_VALUE
---------------- ----------
215E2F78         3683003671

11g>exec dbms_shared_pool.purge('215E2F78,3683003671','C');

PL/SQL procedure successfully completed.

11g>select address,hash_value from v$sql where sql_text like 'select * from test%';


 

 
 