Category: Oracle
2009-09-14 19:56:46
statspack
Environment:
OS: SUSE 9
Oracle: 9i
I. Installing Statspack
The Statspack installation scripts live in @?/rdbms/admin/:
spcreate.sql   installs Statspack
spdrop.sql     uninstalls Statspack
spauto.sql     creates the automatic snapshot collection job
spreport.sql   generates the analysis report
spuexp.sql     export parameter file
sppurge.sql    purges snapshot data that is no longer needed
sptrunc.sql    truncates all snapshot data
sprepsql.sql   generates a SQL report for a given SQL hash value
To run Statspack smoothly, set the following parameters:
job_queue_processes > 0 (can be left unset if snapshots are not collected automatically via dbms_job)
alter system set job_queue_processes=6 scope=both;
timed_statistics = true (so that timed statistics are gathered and the report contains meaningful timing data)
alter system set timed_statistics=true;
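To verify the current settings from SQL*Plus (a quick check, nothing Statspack-specific):
SQL> show parameter job_queue_processes
SQL> show parameter timed_statistics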
Installation
1. Create a dedicated tablespace for Statspack, or place it in the TOOLS tablespace (allow more than 100MB):
$ sqlplus "/ as sysdba"
SQL> create tablespace perfstat
  2  datafile '/oracle/oradata/perfstat.dbf'
  3  size <size, 100M or more>;
2. Create the PERFSTAT user and the Statspack tables:
SQL> @?/rdbms/admin/spcreate.sql
The script prompts for the default tablespace and the temporary tablespace.
When it finishes, the following log files are created in the current directory:
spauto.lis
spcpkg.lis
spctab.lis
spcusr.lis
spdtab.lis
spdusr.lis
Run grep -i "ora-" *.lis to check whether any errors occurred.
3. Drop the Statspack objects (to uninstall):
SQL> @?/rdbms/admin/spdrop.sql
4. Test Statspack
Take a snapshot:
SQL> exec statspack.snap
Wait a few minutes, then take a second snapshot:
SQL> exec statspack.snap
Generate a report:
SQL> @?/rdbms/admin/spreport.sql
5. Schedule the automatic snapshot job
Once Statspack is installed correctly, you can set up a scheduled job to start collecting data; spauto.sql defines this job.
The key line of spauto.sql:
dbms_job.submit(:jobno, 'statspack.snap;', trunc(sysdate+1/24,'HH'), 'trunc(SYSDATE+1/24,''HH'')', TRUE, :instno);
This job definition determines the collection interval:
A day has 24 hours (1440 minutes), so:
1/24, 'HH'   once per hour
1/48, 'MI'   every 30 minutes
1/144, 'MI'  every 10 minutes
1/288, 'MI'  every 5 minutes
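If you want to see what such an interval expression actually evaluates to, a throwaway query like the following (illustrative only, not part of spauto.sql) works in SQL*Plus:
SQL> select sysdate, trunc(sysdate+1/24,'HH') next_hour, trunc(sysdate+1/48,'MI') in_30_minutes from dual;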
We can modify spauto.sql to change the interval, for example:
dbms_job.submit(:jobno, 'statspack.snap;', trunc(sysdate+1/48,'MI'), 'trunc(SYSDATE+1/48,''MI'')', TRUE, :instno);
Then run spauto; this creates a job that takes a snapshot every 30 minutes, and the output is written to spauto.lis:
SQL> @spauto
As for the sampling interval, one hour is usually recommended. Environments with special needs can use a shorter interval, such as 30 minutes, but anything shorter is not recommended: running Statspack itself consumes resources, and on a busy production system sampling too frequently has a noticeable impact on performance.
6. Generate an analysis report
Run spreport.sql to generate the report:
SQL> @spreport
Enter begin_snap:
Enter end_snap:
Enter report_name:
If the instance was shut down between the two snapshots, you may get this error:
ERROR: snapshots chosen span an instance shutdown: results are invalid.
A Statspack report cannot span an instance shutdown, but the snapshots taken in the continuous interval before or after the shutdown are still valid; choose a continuous range of snapshots on one side of the shutdown to generate the report.
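To pick a valid range it helps to list the available snapshots first. A minimal query against the PERFSTAT schema (column names as in the 9i stats$snapshot table; verify in your own environment):
SQL> select snap_id, snap_time, startup_time from stats$snapshot order by snap_id;
Snapshots that share the same startup_time belong to the same instance lifetime and can be combined in one report.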
7. Remove the scheduled snapshot job
SQL> select job, log_user, priv_user, last_date, next_date, interval from user_jobs;
SQL> execute dbms_job.remove(<job>);
Pass dbms_job.remove the JOB number returned by the query above. Once you have finished with a sampling exercise, remove the job promptly: in a production environment a forgotten, unattended job is dangerous. If Statspack keeps running for a week, the volume of collected data is enormous, and production systems have gone down because of this kind of oversight.
8. Delete historical snapshot data
Delete the corresponding rows from stats$snapshot; the rows in the other Statspack tables are removed by the cascading constraints.
SQL> select max(snap_id) from stats$snapshot;
SQL> delete from stats$snapshot where snap_id <= xxx;
If there is a large amount of data, a direct DELETE is very slow; consider running sptrunc.sql to truncate the related tables instead.
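sppurge.sql, listed at the top of this article, is a middle ground: it prompts for a low and a high snap_id and purges only that range. It is typically run while connected as PERFSTAT:
SQL> @?/rdbms/admin/sppurge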
9. Other useful scripts
9.1 Exporting the data. When diagnosing a problem you may need to hand the raw data to a specialist; in that case the Statspack tables can be exported, using the parameter file spuexp.par.
Contents of spuexp.par:
file=spuexp.dmp log=spuexp.log compress=y grants=y indexes=y rows=y constraints=y owner=PERFSTAT consistent=y
Export with exp as follows:
exp userid=perfstat/xxx parfile=spuexp.par
9.2 Display the current snap id
To take a snapshot and display its snap_id, call the STATSPACK.SNAP function from an anonymous PL/SQL block in SQL*Plus:
SQL> variable snap number;
SQL> begin :snap := statspack.snap; end;
2 /
PL/SQL procedure successfully completed.
SQL> print snap
SNAP
----------
12
9.3 sprepsql.sql generates a SQL report for a given SQL hash value, but you must supply the hash value of the SQL statement yourself.
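The hash value can be taken from the "SQL ordered by ..." sections of a normal Statspack report, or looked up while the statement is still in the shared pool. A sketch using v$sql (9i column names; adjust the ordering or filter to your case):
SQL> select hash_value, buffer_gets, substr(sql_text,1,60) sql_text
  2  from v$sql
  3  order by buffer_gets desc;
SQL> @?/rdbms/admin/sprepsql
sprepsql should then prompt for the begin and end snap ids, the hash value, and the report name.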
10. Adjust the Statspack snapshot level and thresholds
Both the snapshot level and the thresholds affect the amount of data Statspack captures. The snapshot level and the threshold values used by the package are stored in the stats$statspack_parameter table.
Level: controls which types of data are collected.
Threshold: sets the value a statistic must reach before the corresponding item is collected.
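To see the values currently in effect, the parameter table can be queried directly; a minimal sketch assuming the usual 9i column names (snap_level plus the *_th threshold columns):
SQL> select snap_level, executions_th, disk_reads_th, buffer_gets_th
  2  from stats$statspack_parameter;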
Changing the default values for the snapshot level and the SQL thresholds:
To change a parameter temporarily, for the current snapshot only, pass it to statspack.snap:
SQL> execute statspack.snap(i_ucomment=>'this is a temporary comment');
Or change a saved parameter without taking a snapshot, using modify_statspack_parameter:
SQL> execute statspack.modify_statspack_parameter(i_ucomment=>'this is a comment that is saved');
To change parameters permanently, saving the new values in stats$statspack_parameter as the defaults for all subsequent snapshots, take a snapshot with i_modify_parameter=>'true':
SQL> execute statspack.snap(i_snap_level=>10, i_modify_parameter=>'true');
Or change the defaults without taking a snapshot:
SQL> execute statspack.modify_statspack_parameter(i_snap_level=>10, i_buffer_gets_th=>1000, i_disk_reads_th=>1000);
11. Snapshot levels
Level >= 0: General performance statistics, such as wait statistics, system events, system statistics, rollback segment data, row cache, SGA, background events, session events, lock statistics, buffer pool statistics, and parent latch statistics.
Level >= 5: Additional data: SQL statements. This level includes all statistics gathered at the lower levels, plus performance data on SQL statements with high resource usage.
Level >= 6: Additional data: SQL plans and plan usage. In addition to the lower-level statistics, this level captures the SQL plan and plan usage data for each of the high-resource SQL statements.
Level >= 7: Additional data: segment-level statistics. Besides all statistics gathered at the lower levels, this level collects performance data on heavily used segments. RAC-specific segment-level statistics are also captured at level 7; in a RAC environment this information makes it easy to spot the segments responsible for much of the cross-instance traffic.
Level >= 10: Additional statistics: parent and child latches. This level includes all statistics gathered at the lower levels, plus parent and child latch information.
At snapshot level 5 or higher, the time required for the snapshot to complete depends on SHARED_POOL_SIZE and on the number of SQL statements in the shared pool at the time of the snapshot: the larger the shared pool, the longer the snapshot takes.
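As a rough gauge of how heavy a level 5 (or higher) snapshot will be, you can count the cursors currently cached in the shared pool (illustrative only):
SQL> select count(*) from v$sql;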