
-- continued from the previous post --
 
4.2. Running the instance report when there are multiple instances
  spreport.sql assumes you are connected to the database you wish to report
  on.  There are certain situations where this assumption may not be
  valid:
   - In a clustered database environment, you may be connected to
     an instance which is not the instance you wish to report on
   - If you are archiving baseline Statspack data in a separate database
     from your production database, or when importing Statspack data
     (e.g. in the case of Oracle support)
  In these situations, you would not be able to produce the Statspack
  instance report using spreport.sql, as the instance assumed may be
  unavailable, possibly on a totally different host.
  To circumvent this problem, you should run the sprepins.sql report
  instead.  The sprepins.sql report output is identical to the
  spreport.sql output, as spreport.sql simply calls sprepins.sql, first
  defaulting the Instance Number and DBId of the database you are
  currently connected to.
  If you run sprepins.sql directly, you are prompted for the DBId and
  Instance Number for the instance you wish to report on, in addition
  to the begin_snap and end_snap Ids and report output name (i.e. the
  current DBId and Instance Number are not defaulted).
  You will be prompted for:
    1. The DBId
    2. The Instance Number
    3. The beginning snapshot Id
    4. The ending    snapshot Id
    5. The name of the report text file to be created
    Example output:
    SQL>  connect perfstat/perfstat_password
    Connected.
    SQL>  @sprepins
 
Instances in this Statspack schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   DB Id    Inst Num DB Name      Instance     Host
----------- -------- ------------ ------------ ------------
  590400074        1 CON90        con90        dlsun525
 4290976145        1 MAIL         MAIL         mailhost

Enter value for dbid: 4290976145
Using 4290976145 for database Id
Enter value for inst_num: 1

  Then, as with spreport, the available snapshots are displayed,
  and you are prompted for the begin and end snaps and the report name.
      Batch mode report generation
      ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      To run the sprepins.sql report without being prompted, assign values
      to the SQL*Plus variables which specify the dbid, instance number,
      begin snap id, end snap id, and report name before running
      sprepins.
      The variables are:
        dbid         -> specifies the dbid
        inst_num     -> specifies the instance number
        begin_snap   -> specifies the begin Snapshot Id
        end_snap     -> specifies the end   Snapshot Id
        report_name  -> specifies the Report output name
      e.g.
          SQL>  connect perfstat/perfstat_password
          SQL>  define dbid=4290976145
          SQL>  define inst_num=1
          SQL>  define begin_snap=1
          SQL>  define end_snap=2
          SQL>  define report_name=batch_run
          SQL>  @?/rdbms/admin/sprepins
      sprepins will no longer prompt for the above information.

4.3.  Running the SQL report
  Once the instance report has been analyzed, often there are high-load SQL
  statements which should be examined to determine if they are causing
  unnecessary load.
  The SQL report, sprepsql.sql, displays statistics, the complete SQL text,
  and (if a level 6 snapshot has been taken) information on any SQL Plan(s)
  associated with that statement.
  The SQL statement to be reported on is identified by the statement's Hash
  Value (which is a numerical representation of the statement's SQL text).
  The Hash Value for each statement is displayed in the high-load SQL
  sections of the instance report.
  The sprepsql.sql file is run while connected as the PERFSTAT user, and is
  located in the rdbms/admin directory of the Oracle Home.
  Note:  To run sprepsql.sql in a Cluster environment, you must connect
         to the instance you wish to report on.
  You will be prompted for:
    1. The beginning snapshot Id
    2. The ending    snapshot Id
    3. The Hash Value for the SQL statement
    4. The name of the report text file to be created
    Example output:
    SQL>  connect perfstat/perfstat_password
    Connected.
    SQL>  @sprepsql
 
   DB Id    DB Name      Inst Num Instance
----------- ------------ -------- ------------
 2618106428 PRD1                1 prd1

Completed Snapshots
                           Snap                    Snap
Instance     DB Name         Id   Snap Started    Level Comment
------------ ------------ ----- ----------------- ----- ----------------------
prd1         PRD1            37 02 Mar 2001 11:01     6
                             38 02 Mar 2001 12:01     6
                             39 08 Mar 2001 09:01     5
                             40 08 Mar 2001 10:02     5

Specify the Begin and End Snapshot Ids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for begin_snap: 39
Begin Snapshot Id specified: 39
Enter value for end_snap: 40
End   Snapshot Id specified: 40
Specify the Hash Value
~~~~~~~~~~~~~~~~~~~~~~
Enter value for hash_value: 1988538571
Hash Value specified is: 1988538571

Specify the Report Name
~~~~~~~~~~~~~~~~~~~~~~~
The default report file name is sp_39_40_1988538571.  To use this name,
press <return> to continue, otherwise enter an alternative.
Enter value for report_name:
Using the report name sp_39_40_1988538571
  The report will scroll past, and also be written to the file
  specified (e.g. sp_39_40_1988538571.lis).

      Batch mode report generation
      ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      Similarly to spreport.sql, the SQL report can be run in batch mode.
      To run a report without being prompted, assign values to the
      SQL*Plus variables which specify the begin snap id, the end snap id,
      the SQL hash value, and the report name before running sprepsql.
      The variables are:
        begin_snap   -> specifies the begin Snapshot Id
        end_snap     -> specifies the end   Snapshot Id
        hash_value   -> specifies the Hash Value
        report_name  -> specifies the Report output name
      e.g.
          SQL>  connect perfstat/perfstat_password
          SQL>  define begin_snap=39
          SQL>  define end_snap=40
          SQL>  define hash_value=1988538571
          SQL>  define report_name=batch_sql_run
          SQL>  @sprepsql
      sprepsql will no longer prompt for the above information.

4.4. Gathering Optimizer statistics on the PERFSTAT schema
  For best performance when running the performance reports, collect
  optimizer statistics for tables and indexes owned by PERFSTAT.  This
  should be performed whenever there is a significant change in data volumes
  in PERFSTAT's tables.  To do this, use either dbms_stats or dbms_utility,
  and specify the PERFSTAT user:
     execute dbms_stats.gather_schema_stats(ownname=>'PERFSTAT',cascade=>true);
       or
     execute dbms_utility.analyze_schema('PERFSTAT','COMPUTE');
 
5.  Configuring the amount of data captured
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Both the snapshot level, and the thresholds specified will affect the amount
of data Statspack captures.
5.1. Snapshot Level
  It is possible to change the amount of information gathered by the package
  by specifying a different snapshot 'level'.  In other words, the level
  chosen (or defaulted) determines the amount of data collected.
  The higher the snapshot level, the more data is gathered.  The default
  level set by the installation is level 5.
  For typical usage, a level 5 snapshot is effective at most sites.  There
  are certain situations when using a level 6 snapshot is beneficial, such
  as when taking a baseline.
  The events listed below are a subset of events which should prompt
  taking a new baseline, using level 6:
  - when taking the first snapshots
  - when a new application is installed, or an application is modified/upgraded
  - after gathering optimizer statistics
  - before and after upgrading
  The various levels are explained in detail in section 5.4 below.

5.2. Snapshot SQL thresholds
  There are other parameters which can be configured in addition to the
  snapshot level.
  These parameters are used as thresholds when collecting data on SQL
  statements; data will be captured on any SQL statements that breach
  the specified thresholds.
  Snapshot level and threshold information used by the package is stored
  in the stats$statspack_parameter table.
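  e.g. As a minimal sketch, the saved values can be inspected while connected
  as PERFSTAT (the column names below are assumed to mirror the i_ parameter
  names listed in section 5.6 without the prefix; confirm them with DESCRIBE
  on your schema):
    SQL>  describe stats$statspack_parameter
    SQL>  select snap_level, executions_th, buffer_gets_th
            from stats$statspack_parameter;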

5.3. Changing the default values for Snapshot Level and SQL Thresholds
  If you wish to, you can change the default parameters used for taking
  snapshots, so that they are tailored to the instance's workload.
  The full list of parameters which can be passed into the
  modify_statspack_parameter procedure are the same as those for the
  snap procedure.  These are listed in section 5.6. below.

  Temporarily using new values
  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  To temporarily use a snapshot level or threshold which is different to
  the instance's default snapshot values, simply specify the required
  threshold or snapshot level when taking the snapshot.  This value will
  only be used for the immediate snapshot taken - the new value will
  not be saved as the default.
  e.g. Take a single level 6 snapshot (do not save level 6 as the default):
    SQL>  execute statspack.snap(i_snap_level=>6);

  Saving new defaults
  ~~~~~~~~~~~~~~~~~~~~
  If you wish to save the new value as the instance's default, you can do
  this either by:
  o  Taking a snapshot, and specifying the new defaults to be saved to the
     database (using statspack.snap, and using the i_modify_parameter
     input variable).
     SQL>  execute statspack.snap -
           (i_snap_level=>10, i_modify_parameter=>'true');
     Setting the i_modify_parameter value to true will save the new
     thresholds in the stats$statspack_parameter table; these thresholds
     will be used for all subsequent snapshots.
     If i_modify_parameter is set to false, or is omitted, the new
     parameter values are not saved.  Only the snapshot taken at that
     point uses the specified values; any subsequent snapshots use the
     preexisting values in the stats$statspack_parameter table.
  o  Changing the defaults immediately without taking a snapshot, using the
     statspack.modify_statspack_parameter procedure.  For example to change
     the snapshot level to 10, and the SQL thresholds for buffer_gets and
     disk_reads, the following statement can be issued:
     SQL>  execute statspack.modify_statspack_parameter -
            (i_snap_level=>10, i_buffer_gets_th=>10000, i_disk_reads_th=>1000);
     This procedure changes the values permanently, but does not
     take a snapshot.

5.4 Snapshot Levels - details
 Levels  >= 0   General performance statistics
    Statistics gathered:
    This level and any level greater than 0 collects general
    performance statistics, such as: wait statistics, system events,
    system statistics, rollback segment data, row cache, SGA, background
    events, session events, lock statistics, buffer pool statistics,
    latch statistics, resource limit, enqueue statistics, and statistics
    for each of the following, if enabled: automatic undo management,
    buffer cache advisory data, auto PGA memory management, Cluster DB
    statistics.
 Levels  >= 5  Additional data:  SQL Statements
    This level includes all statistics gathered in the lower level(s),
    and additionally gathers the performance data on high resource
    usage SQL statements.
    In a level 5 snapshot (or above), note that the time required for the
    snapshot to complete is dependent on the shared_pool_size and on the
    number of SQL statements in the shared pool at the time the snapshot
    is taken: the larger the shared pool, the longer the time taken to
    complete the snapshot.
    SQL 'Thresholds'
       The SQL statements gathered by Statspack are those which exceed one of
       six predefined threshold parameters:
        - number of executions of the SQL statement            (default 100)
        - number of disk reads performed by the SQL statement  (default 1,000)
        - number of parse calls performed by the SQL statement (default 1,000)
        - number of buffer gets performed by the SQL statement (default 10,000)
        - size of sharable memory used by the SQL statement    (default 1m)
        - version count for the SQL statement                  (default 20)
       The values of each of these threshold parameters are used when
       deciding which SQL statements to collect - if a SQL statement's
       resource usage exceeds any one of the above threshold values, it
       is captured during the snapshot.
 
       The SQL threshold levels used are either those stored in the table
        stats$statspack_parameter, or the thresholds specified when
       the snapshot is taken.
 Levels  >= 6  Additional data:  SQL Plans and SQL Plan usage
    This level includes all statistics gathered in the lower level(s),
    and additionally gathers optimizer execution plans, and plan usage
    data for each of the high resource usage SQL statements captured.
    A level 6 snapshot gathers information which is invaluable when
    determining whether the execution plan used for a SQL statement
    has changed.  Therefore level 6 snapshots should be used
    whenever there is the possibility a plan may change, such as
    after large data loads, or after gathering new optimizer
    statistics.
    To capture the plan for a SQL statement, the statement must be in the
    shared pool at the time the snapshot is taken, and must exceed one of
    the SQL thresholds.  To gather plans for all statements in the
    shared pool, you can temporarily specify the executions threshold
    (i_executions_th) to be zero (0) for those snapshots.  For information
    on how to do this, see section 5.3. above.
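     e.g. As a sketch, a single level 6 snapshot which temporarily sets the
     executions threshold to zero (neither value is saved as a default):
     SQL>  execute statspack.snap(i_snap_level=>6, i_executions_th=>0);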
 Levels  >= 7  Additional data:  Segment level statistics
    This level includes all statistics gathered in the lower level(s),
    and additionally gathers the performance data on highly used segments.
     A level 7 snapshot gathers information which helps determine which
     segments are the most heavily accessed and contended for.
    With this information, you can decide to modify the physical layout
    of some segments or of the tablespaces they reside in.
     For example, to better spread the segment I/O load, you can add files
     residing on different disks to a tablespace storing a heavily accessed
     segment, or you can (re)partition a segment.  This information can also
     help when deciding whether to change segment attributes such as PCTFREE
     and/or INITRANS.  In a RAC environment, this information makes it easy to
     spot the segments responsible for much of the cross-instance traffic.
 
    Segment statistics are:
       - logical reads
       - db block changes
       - physical reads
       - physical writes
       - physical reads direct
       - physical writes direct
       - global cache cr blocks served  (RAC specific)
       - global cache current blocks served  (RAC specific)
       - buffer busy waits
       - ITL waits
       - row lock waits
     Although Statspack captures all segment statistics, it reports only the
     following statistics:
       - logical reads
       - physical reads
       - buffer busy waits
       - ITL waits
       - row lock waits
       - global cache cr blocks served  (RAC only)
       - global cache current blocks served  (RAC only)
    Segment statistics 'Thresholds'
       The segments gathered by Statspack are those which exceed one of
       seven predefined threshold parameters:
        - number of logical reads on the segment               (default 10,000)
        - number of physical reads on the segment              (default 1,000)
        - number of buffer busy waits on the segment           (default 100)
        - number of row lock waits on the segment              (default 100)
        - number of ITL waits on the segment                   (default 100)
         - number of global cache Consistent Read blocks served (RAC)  (default 1,000)
         - number of global cache CUrrent blocks served (RAC)          (default 1,000)
       The values of each of these threshold parameters are used when
       deciding which segment statistics to collect - if a segment's statistic
       exceeds any one of the above threshold values, all statistics
       regarding this segment are captured during the snapshot.
       The threshold levels used are either those stored in the table
        stats$statspack_parameter, or the thresholds specified when
       the snapshot is taken.
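        e.g. As a sketch, to permanently raise the segment physical reads
        threshold for future snapshots, using the i_seg_phy_reads_th
        parameter listed in section 5.6:
        SQL>  execute statspack.modify_statspack_parameter -
               (i_seg_phy_reads_th=>10000);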
 
 Levels  >= 10 Additional statistics:  Parent and Child latches
    This level includes all statistics gathered in the lower levels, and
    additionally gathers Parent and Child Latch information.  Data
    gathered at this level can sometimes cause the snapshot to take longer
     to complete, i.e. this level can be resource intensive, and should
    only be used when advised by Oracle personnel.

5.5.  Specifying a Session Id
  If you would like to gather session statistics and wait events for a
  particular session (in addition to the instance statistics and wait events),
  it is possible to specify the session id in the call to Statspack.  The
  statistics gathered for the session will include session statistics,
  session events and lock activity.  The default behaviour is not to
  gather session level statistics.
    SQL>  execute statspack.snap(i_session_id=>3);

5.6.  Input Parameters for the SNAP and MODIFY_STATSPACK_PARAMETER procedures
   Parameters able to be passed in to the statspack.snap and
   statspack.modify_statspack_parameter procedures are as follows:
                    Range of      Default
Parameter Name      Valid Values  Value    Meaning
------------------  ------------  -------  -----------------------------------
i_snap_level        0,5,6,7,10    5        Snapshot Level
i_ucomment          Text          Blank    Comment to be stored with Snapshot
i_executions_th     Integer >=0   100      SQL Threshold: number of times
                                           the statement was executed
i_disk_reads_th     Integer >=0   1,000    SQL Threshold: number of disk reads
                                           the statement made
i_parse_calls_th    Integer >=0   1,000    SQL Threshold: number of parse
                                           calls the statement made
i_buffer_gets_th    Integer >=0   10,000   SQL Threshold: number of buffer
                                           gets the statement made
i_sharable_mem_th   Integer >=0   1048576  SQL Threshold: amount of sharable
                                           memory
i_version_count_th  Integer >=0   20       SQL Threshold: number of versions
                                           of a SQL statement
i_seg_phy_reads_th  Integer >=0   1,000    Segment statistic Threshold: number
                                           of physical reads on a segment.
i_seg_log_reads_th  Integer >=0   10,000   Segment statistic Threshold: number
                                           of logical reads on a segment.
i_seg_buff_busy_th  Integer >=0   100      Segment statistic Threshold: number
                                           of buffer busy waits for a segment.
i_seg_rowlock_w_th  Integer >=0   100      Segment statistic Threshold: number
                                           of row lock waits for a segment.
i_seg_itl_waits_th  Integer >=0   100      Segment statistic Threshold: number
                                           of ITL waits for a segment.
i_seg_cr_bks_sd_th  Integer >=0   1000     Segment statistic Threshold: number
                                           of Consistent Reads blocks served by
                                           the instance for the segment (RAC).
i_seg_cu_bks_sd_th  Integer >=0   1000     Segment statistic Threshold: number
                                           of CUrrent blocks served by the
                                           instance for the segment (RAC).
i_session_id        Valid sid     0 (no    Session Id of the Oracle Session
                    from          session) to capture session granular
                    v$session              statistics for
i_modify_parameter  True, False   False    Save the parameters specified for
                                           future snapshots?
 
6.  Time Units used for Performance Statistics
----------------------------------------------
Oracle now supports capturing certain performance data with millisecond and
microsecond granularity. 
Views which include microsecond timing include:
  - v$session_wait, v$system_event, v$session_event (time_waited_micro column)
  - v$sql, v$sqlarea (cpu_time, elapsed_time columns)
  - v$latch, v$latch_parent, v$latch_children (wait_time column)
  - v$sql_workarea, v$sql_workarea_active (active_time column)
Views which include millisecond timings include:
  - v$enqueue_stat (cum_wait_time)
Note that existing columns in other views continue to capture centi-second
times.
As centi-second and microsecond timing may not be appropriate for rolled
up data such as that displayed by Statspack, Statspack displays most
cumulative times in seconds, and average times in milliseconds (for easier
comparison with Operating System monitoring utilities which often report
timings in milliseconds).
For clarity, the time units used are specified in the column headings of
each timed column in the Statspack report.  The convention used is:
      (s)  - a second
      (cs) - a centisecond - which is       100th of a second
      (ms) - a millisecond - which is     1,000th of a second
      (us) - a microsecond - which is 1,000,000th of a second

7.  Event Timings
-----------------
If timings are available, the Statspack report will order wait events by time
(in the Top-5 and background and foreground Wait Events sections).
If timed_statistics is false for the instance, but a subset of users or
programs sets timed_statistics to true dynamically, the Statspack report
output may look inconsistent: some events have timings (those which the
individual programs/users waited for), while the remaining events do not.
The Top-5 section will also look unusual in this situation.
Optimally, timed_statistics should be set to true at the instance level for
ease of diagnosing performance problems.
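e.g. A minimal sketch of enabling instance-wide timing dynamically from
SQL*Plus (set timed_statistics=true in the initialization parameter file to
make the change persistent across restarts):
  SQL>  alter system set timed_statistics = true;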
 
8.  Managing and Sharing performance data
-----------------------------------------
8.1. Sharing data via export
  If you wish to share data with other sites (for example if Oracle
  Support requires the raw statistics), it is possible to export
  the PERFSTAT user.
  An export parameter file (spuexp.par) has been supplied for this
  purpose.  To use this file, supply the export command with the
  userid parameter, along with the export parameter file name.
  e.g.
    exp userid=perfstat/perfstat_password parfile=spuexp.par
  This will create a file called spuexp.dmp and the log file spuexp.log.
  If you wish to load the data into another database, use the import
  command.  For more information on using export and import, please
  see the Oracle Utilities manual.
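  e.g. As a sketch, the dump file created above could be loaded into another
  database's PERFSTAT schema using import (the log file name spuimp.log is
  illustrative only; adjust the parameters to your environment):
    imp userid=perfstat/perfstat_password file=spuexp.dmp fromuser=perfstat touser=perfstat log=spuimp.log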

8.2. Purging/removing unnecessary data
  It is possible to purge unnecessary data from the PERFSTAT schema using
  sppurge.sql.  This script deletes snapshots which fall within
  the begin and end Snapshot Id range specified.
  NOTE:
    It is recommended you export the schema as a backup before running this
    script, either using your own export parameters, or those provided in
    spuexp.par
    Purging may require the use of a large rollback segment, as all data
    relating to each Snapshot Id to be purged will be deleted.
    To avoid rollback segment extension errors, explicitly use a large
    rollback segment.  This can be done by executing the 'set transaction
    use rollback segment..' command before running the sppurge.sql script
    (for more information on the set transaction command see the SQL reference
    manual).  Alternatively, to avoid rollback segment extension errors
    specify a smaller range of Snapshot Id's to purge.
  When sppurge is run, the instance currently connected to and the
  available snapshots are displayed.  The DBA is then prompted for the
  low Snap Id and high Snap Id.  All snapshots which fall within this
  range will be purged.
  e.g. Purging data - connect to PERFSTAT using SQL*Plus, then run the
       script - an example output appears below.
   SQL>  connect perfstat/perfstat_password
   SQL>  set transaction use rollback segment rbig;
   SQL>  @sppurge
   Database Instance currently connected to
   ========================================
                                   Instance
      DB Id    DB Name    Inst Num Name
   ----------- ---------- -------- ----------
     720559826 PERF              1 perf

   Snapshots for this database instance
   ====================================
               Snap
      Snap Id Level Snapshot Started      Host            Comment
   ---------- ----- --------------------- --------------- -------------------
            1     5  30 Feb 2000 10:00:01 perfhost
            2     5  30 Feb 2000 12:00:06 perfhost
            3     5  01 Mar 2000 02:00:01 perfhost
            4     5  01 Mar 2000 06:00:01 perfhost
   WARNING
   =======
   sppurge.sql deletes all snapshots ranging between the lower and
   upper bound Snapshot Id's specified, for the database instance
   connected to.
   You may wish to export this data before continuing.
   Specify the Lo Snap Id and Hi Snap Id range to purge
   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   Enter value for losnapid: 1
   Using 1 for lower bound.
   Enter value for hisnapid: 2
   Using 2 for upper bound.
   Deleting snapshots 1 - 2
   Purge of specified Snapshot range complete.  If you wish to ROLLBACK
   the purge, it is still possible to do so.  Exiting from SQL*Plus will
   automatically commit the purge.
   SQL> -- end of example output
 
      Batch mode purging
      ~~~~~~~~~~~~~~~~~~
      To purge in batch mode, you must assign values to the SQL*Plus
      variables which specify the low and high snapshot Ids to purge.
      The variables are:
        losnapid   -> Begin Snapshot Id
        hisnapid   -> End Snapshot Id
      e.g.
          SQL>  connect perfstat/perfstat_password
          SQL>  define losnapid=1
          SQL>  define hisnapid=2
          SQL>  @sppurge
      sppurge will no longer prompt for the above information.

8.3. Truncating all data
  If you wish to truncate all performance data indiscriminately, it is
  possible to do this using sptrunc.sql.  This script truncates all
  statistics data gathered.
  NOTE:
  It is recommended you export the schema as a backup before running this
  script, either using your own export parameters or those provided in
  spuexp.par.
  e.g. Truncating all data - connect to PERFSTAT using SQL*Plus, and run
       the script - an example is below
    SQL>  connect perfstat/perfstat_password
    SQL>  @sptrunc
    About to Truncate Statspack Tables
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    NOTE:
    Running sptrunc.sql removes ALL data from Statspack tables
    You may wish to export the data before continuing
     If you would like to continue, enter any string, followed by <return>.
 
    Enter value for anystring:
    entered - starting truncate operation
    Table truncated.
   
    Truncate operation complete
 
9.  New and Changed Features
----------------------------
9.1.  Changes between 9.0 and 9.2
Changes on the Summary Page of the Instance Report (spreport.sql)
  o  The Top 5 Wait Events has been changed to be the Top 5 Timed Events.
     What was previously the Top 5 Wait Events has been expanded to give the
     Top 5 time usage within the instance: i.e. in addition to including Wait
     events, this section can now include the 'CPU used by this session'
     statistic.  This statistic will appear in the Top 5 only if its value
     is one of the Top 5 users of time for the snapshot interval.
         Note that the name of the statistic 'CPU used by this session' will
         actually appear in the Top 5 section as 'CPU Time'.  The statistic
         name is masked in the Top 5 to avoid the confusion of the suffix
         'by this session'.
         The statistic will continue to appear in the System Statistics
         (SYSSTAT) section of the report as 'CPU used by this session'.
     Additionally, instead of the percentage calculation being the % Total
     Wait Time (which is time for each wait event divided by the total wait
     time for this snapshot interval), the percentage calculation is now
     % Total Elapsed Time (which is time for each timed event divided by
     the total elapsed time).
     i.e.
       previously:  time for each wait event  / total wait time for all events
       now:         time for each timed event / total elapsed time
     The total elapsed time is computed using the total wait time for all
     events added to the total CPU time used for the interval
     i.e.
       total elapsed time = total wait time + total CPU time
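      As an illustration (with made-up numbers): if the total wait time for
      all events in the interval is 300 seconds and the total CPU time is
      200 seconds, the total elapsed time is 500 seconds, so a wait event
      accounting for 150 seconds is reported as 30% of total elapsed time
      (rather than 50% of total wait time, as previously).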
     Purpose
     ~~~~~~~
     The purpose for including CPU time with wait events:
      When tuning a system, the first step is to identify where most of the
     time is spent, in order to identify where the most productive tuning
     effort should be concentrated.
     The majority of time could be spent in waiting for events to complete
      (and so be identifiable in the wait event data), or the system could be
     consuming much CPU (for which Operating System statistics, and the Oracle
     CPU statistic 'CPU used by this session' in SYSSTAT are examined).
     Having the CPU Time co-located with the wait events in the Top 5 section
     of the instance report makes it easier to compare the relative values
     and to identify whether the most productive investigation would occur
     by drilling down the wait events, or in reducing Oracle CPU usage
     (e.g. by tuning SQL).
Changes on the Top SQL sections of the Report (spreport.sql)
  o  When specified by the application, the MODULE information is reported
     just before the SQL statement itself.
     This information is preceded by the text "Module: ".
New columns added to
  - stats$db_cache_advice
    size_factor: compares the estimated cache size with the current cache size
  - stats$sql_plan
    search_columns: the number of index columns with matching predicates.
    access_predicates: predicates used to locate rows in an access structure.
        For example, start and/or stop predicates for an index range scan.
    filter_predicates: predicates used to filter rows before producing them.
  - stats$sql_summary
    child_latch: the library cache child latch number which protects this
        SQL statement (join to v$latch_children.child#). A parent SQL
        statement, and all its children are protected by the same library
        cache child latch.
    fetches: the number of fetches performed for this SQL statement
New Scripts
  o  spup90.sql  - Upgrades a 9.0 Statspack schema to the 9.2 format
New Data captured/reported on - Level 1
  - Shared Pool Advisory
  - PGA statistics including PGA Advisory, PGA Histogram usage
New Data captured/reported on - Level 7
  - Segment level Statistics
Cluster Features
  o  Cluster Statistics page (page 2 of a clustered database report) has
     been significantly modified to add new ratios and remove ratios
     deemed less-useful.
  o  RAC specific segment level statistics are captured with level 7
SQL Plan Usage capture changed
  o  The logic for capturing SQL Plan Usage data (level 6) has been modified
     significantly.  Instead of capturing a Plan's Usage once the first time
     the plan is used and never again thereafter, the algorithm now captures
      the plans used in each snapshot.  This allows tracking whether multiple
     plans are in use concurrently, or whether a plan has reverted back to
     an older plan.
     Note that plan usage data is only captured for high-load SQL (this is
     unchanged between 9.0 and 9.2).
     Due to the significant change in data capture, it is not possible to
     convert existing data.  Instead, any pre-existing data will be
     archived into the table STATS$SQL_PLAN_USAGE_90 (this allows querying
     the archived data, should this be necessary).

sprepsql.sql
  o  'All Optimizer Plan(s) for this Hash Value' change:
     Instead of showing the first time a plan was seen for a specific hash
     value, this section now shows each time the Optimizer Plan
     changed since the SQL statement was first seen e.g. if the SQL statement
     had the following plan changes:
       snap ids      plan hash value
       --------      ---------------
         1 ->  12    AAAAAAA
        13 -> 134    BBBBBBB
       145 -> 299    CCCCCCC
       300 -> 410    AAAAAAA
    Then this section of the report will now show:
       snap id       plan hash value
       --------      ---------------
              1      AAAAAAA
             13      BBBBBBB
            145      CCCCCCC
            300      AAAAAAA
      Previously, only the rows with snap_ids 1, 13 and 145 would have been
      displayed, as these were the first snap Ids at which these plans were
      found.  However, this data could not show that plan AAAAAAA was in use
      again at snap_id 300.
     The new output format makes it easier to see when an older plan is again
     in use.  This is possible due to the change in the SQL Plan Usage
     capture (described above).

9.2.  Changes between 8.1.7 and 9.0
Timing data
  o columns with cumulative times are now displayed in seconds.
Changes on the Summary Page
  o  All cache sizes are now reported in M or K
New Statistics on the Summary page
  o  open cursors per session values for the begin and end snapshot
  o  comments specified when taking a snapshot are displayed for the
     begin and end snapshots
Latches
  o The Latch Activity, Child and Parent Latch sections have the following
   additional column:
   -  wait_time: cumulative time spent waiting for the latch
New Scripts
  o  spup817.sql  - Upgrades an 8.1.7 Statspack schema to the 9.0 format
  o  sprepsql.sql - Reports on a single hash_value, including
                    the SQL statistics for the snapshot, the complete SQL
                    text and optimizer execution plan information.
  o  sprepins.sql - A report which can be run to query performance data
                    for any instance which the PERFSTAT schema contains.
                    The report will prompt for a dbid, instance_number and
                    begin and end snap id's.
                    This report can be used when importing data from another
                    instance, or in a Real Application Clusters environment
                    to report on an instance which you are not directly
                    connected to.
New Data captured/reported on - Level 1
  - Data from v$resource_limit
  - If the instance is a Cluster instance, v$dlm_misc data
  - Additional columns are now captured in stats$enqueue_stat
  - Automatic Undo Management statistics
  - Buffer Cache advisory data
  - New Auto-PGA memory management data
  - Support for multiple sized-block buffer pools
  - Support for resizable buffer pool and shared pool
  - Data from v$instance_recovery
New Snapshot Level - Level 6
  - New SQL plans and SQL Plan usage information for high-load SQL
    statements are captured.
Cluster Features
 o  There is additional derived data and statistics which are now included
    in the Statspack report for a clustered database.  For more information,
    see section 11.3. Cluster Specific Data
New SNAP function
  o the call to take a snapshot can also be a PL/SQL function call which
    returns the snapshot Id of the snapshot taken.  Using the function rather
    than the procedure is useful in situations where you wish to know the
    snap_id immediately, such as when running Statspack reports in batch
    mode, or during benchmark runs.
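    e.g. A minimal sketch of calling the function from SQL*Plus and capturing
    the returned snapshot Id in a bind variable:
      SQL>  variable snap_id number
      SQL>  begin  :snap_id := statspack.snap;  end;
      SQL>  /
      SQL>  print snap_id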
Installation
  o  The installation script will no longer accept the SYSTEM tablespace for
     the PERFSTAT user's DEFAULT or TEMPORARY tablespace.  If SYSTEM is
     specified, the installation will error.
SQL
  o  Each SQL report has two new columns CPU Time and Elapsed Time.  These
     show the cumulative CPU time and Elapsed time for all executions of
     that SQL statement for the snapshot period.  If cumulative CPU and
     Elapsed times are not shown, the CPU and Elapsed times per execute
     are shown.
Changed
  o  The SGA Breakdown difference section of the statspack report now
     shows the difference between begin and end values as a percentage
     of the begin value, rather than in bytes.
  o  The data in the Dictionary Cache Stats and Library Cache Activity
     section are only printed if the number of gets is greater than zero.

9.3.  Changes between 8.1.6 and 8.1.7
New Statistics on the Summary page
  o  connections at the begin snapshot and connections at the end snapshot
  Load Profile
  o  executes per transaction and per second
  o  logons per transaction and per second
  Instance Efficiency
  o  % Non-Parse CPU: which is the parse time CPU / CPU used by this session
  o  Parse CPU to Parse Elapsd%: which is the parse time CPU / parse time
     elapsed
  o  Execute to Parse %:  The ratio of executions to parses
  Instance Efficiency - Shared Pool Statistics are shown for the begin and
  end snapshots.
  o  Memory Usage %:  The percentage of the shared pool which is used.
  o  % SQL with executions>1:  The percentage of reused SQL (i.e. the
     percentage of SQL statements with more than one execution).
  o  % Memory for SQL w/exec>1:  The percentage of memory used for SQL
     statements with more than one execution.
  This data is newly gathered by the 8.1.7 Statspack for level 5 snapshots
  and above, and so will not be evident if the report is run against older
  data captured using the 8.1.6 Statspack.
Tablespace and File IO
  o  Tempfile statistics are now captured.  The statistics for tempfiles are
     shown in the same sections with statistics for datafiles and tablespaces.
  o  The tablespace and File IO reports have been modified to include reads/s
     and writes/s.
Latches
  o  The report has been modified to include parent and child latch
     sections, which only appear in the report when a level 10 snapshot
     is taken.
New Scripts
  o  sppurge.sql - Purges a range of Snapshot Ids
  o  sptrunc.sql - Deletes all data
  o  spup816.sql - Upgrades an 8.1.6 Statspack to the 8.1.7 schema
Batch Mode execution
  o  The installation, reporting and purge scripts (spcreate.sql, spreport.sql
     and sppurge.sql) have been modified so they can be run in batch mode, if
     the appropriate SQL*Plus variables are defined before the scripts are run.
SQL
  o  Two new SQL thresholds (and sections in the report) have been added: 
     sharable_mem and version_count
  o  The report which was previously ordered by rows processed has been
     changed to be ordered by executions
  o  The full text of a SQL statement is now captured (previously only the
     first 1000 bytes of the text was captured); the text is captured once
     only.  Previously, Statspack gathered all SQL related information,
      including all the SQL text for each snapshot.  The new strategy
      results in less space usage.
  o  The first 5 lines of a SQL statement are shown in each SQL report
     (rather than the first line)
File Rename
  o  The Statspack files have been renamed, with all files now beginning
     with the prefix sp.
     The new and old file names are given below.  For more information on
     the purpose of each file, please see the Supplied Scripts Overview
     section.
     New Name       Old Name
     ------------   -------------
     spdoc.txt      statspack.doc
     spcreate.sql   statscre.sql
     spreport.sql   statsrep.sql
     spauto.sql     statsauto.sql
     spuexp.par     statsuexp.par
     sppurge.sql    - new file -
     sptrunc.sql    - new file -
     spup816.sql    - new file -
     spdrop.sql     statsdrp.sql
     spcpkg.sql     statspack.sql
     spctab.sql     statsctab.sql
     spcusr.sql     statscusr.sql
     spdtab.sql     statsdtab.sql
     spdusr.sql     statsdusr.sql
  o  The default Statspack report output file name prefix has been modified
     to sp_ (was st_) to be consistent with the new script names.
 
10.  Compatibility and Upgrading from previous releases
-------------------------------------------------------
10.1  Compatibility Matrix
    Database      - Statspack Release -
    Release      9.2    9.0  8.1.7  8.1.6
    --------   -----  -----  -----  -----
    9.2           Y       -      -      -
    9.0           -       Y      -      -
    8.1.7         -       -      Y      -
    8.1.6         -       -      -      Y
In summary, it is best to use the Statspack release shipped with
the version of the database you are using.
If you are already using an earlier release of Statspack and must use
a newer Statspack release (e.g. because you are upgrading the database),
it is possible to upgrade an existing Statspack schema, and so
keep previously captured data.  See the section 10.2. below which
describes Upgrading an existing Statspack schema to a newer release.

10.1.1. Using Statspack shipped with 9.2
The Statspack scripts shipped with 9.2 can not be used with any release
earlier than 9.2, as Statspack uses new v$views (and new columns added to
existing v$views) introduced in this server release.

10.1.2. Using Statspack shipped with 9.0
The Statspack scripts shipped with 9.0 can not be used with any release
earlier than 9.0, as the 9.0 release uses new v$views (and new columns added
to existing v$views) introduced in this server release.

10.1.3. Using Statspack shipped with 8.1.7 on 9i releases
It is not possible to use the Statspack shipped with 8.1.7 with any 9i
instance, because the definition of an undocumented view used by the 8i
Statspack changed between Oracle8i and Oracle9i.  Attempting to use the 8.1
Statspack on an instance running 9i will result in package compilation errors.

10.2.  Upgrading an existing Statspack schema to a newer release
Scripts are provided which convert performance data in an existing
Statspack schema running an older Statspack release, to the newer schema
format.
Although data conversion is not a supported activity, these scripts have been
provided as a convenient way of keeping previously captured Statspack data.
Due to the differences in schema layout, minor irregularities may appear in
statistics captured before the conversion.  An example of this is the
Enqueue statistics data migration: do not compare Enqueue statistics data
collected pre-9.0 to the Enqueue statistics data captured in 9.0 (for more
details, see section 10.2.2. Upgrading the Statspack schema from 8.1.7 to 9.0).
Note: There is no downgrade script.  Backup the PERFSTAT schema using
      export BEFORE attempting the upgrade, in case the upgrade fails.
      The only method of downgrading, or re-running the upgrade is to
      de-install Statspack, and import a previously made export.
Before running the upgrade script, export the Statspack schema (for a
backup), then disable any scripts which use Statspack, as these will
interfere with the upgrade.  For example, if you use a dbms_job to
gather statistics, disable this job for the duration of the upgrade.
If there is a large volume of data in the Statspack schema (i.e. a large
number of snapshots), to avoid a long upgrade time or avoid an unsuccessful
upgrade:
 - ensure there is enough free space in PERFSTAT's default tablespace
   before starting the upgrade (each individual upgrade section will
   describe how to estimate the required disk space)
 - if you do not use Automatic Undo Management, ensure you specify a large
   rollback segment when prompted
 - if you do not use Automatic Memory Management, ensure you specify a large
   sort_area_size (e.g. 1048576) when prompted
The upgrade script will prompt you for the rollback segment and sort_area_size.
If you do not need to specify a rollback segment or sort_area_size, then
simply press return, and ignore the following errors appearing in the
upgrade log file:
  alter session set sort_area_size =
                                  *
  ERROR at line 1:
  ORA-02017: integer value required

  set transaction use rollback segment
                                    *
  ERROR at line 1:
  ORA-02245: invalid ROLLBACK SEGMENT name
 
10.2.1. Upgrading the Statspack schema from 9.0   to 9.2
   Follow the general instructions in section 10.2. 'Upgrading an existing
   Statspack schema to a newer release' above.
   This release creates new tables and indexes, and requires approx.
   20 extra MB.
   To upgrade:
     - ensure you have sufficient free space in the tablespace
     - disable any programs which use Statspack
     - backup the Statspack schema (e.g. using export)
     - run the upgrade by connecting as a user with SYSDBA privilege:
    SQL>  connect / as sysdba
    SQL>  @spup90
Once the upgrade script completes, check the log files (spup90a.lis and
spup90b.lis) for errors.  If errors are evident, determine and rectify the
cause.  If no errors are evident, re-enable any Statspack data
collection or reporting scripts which were previously disabled.

  SQL Plan Usage Data Upgrade note:
    If there is more than one database in a single Statspack schema (i.e.
    there are multiple distinct dbid's), AND if Level 6 snapshots have
    been taken using the 9.0 release Statspack, then the SQL plan usage
    data will be saved, but will not be queried by the sprepsql.sql
    SQL report (this is because during the data conversion, it will not
    be possible to identify which database first identified a plan
    usage).
    For more details see 'SQL Plan Usage capture changed' in section
    9.1. Changes between 9.0 and 9.2.

10.2.2. Upgrading the Statspack schema from 8.1.7 to 9.0
   Follow the general instructions in section 10.2. 'Upgrading an existing
   Statspack schema to a newer release' above.
   Then, to estimate whether you have sufficient free space to run this
   upgrade, execute the following SQL statement while connected as PERFSTAT in
   SQL*Plus:
     select 10 + (2*sum(bytes)/1024/1024) est_space_mb
       from dba_segments
      where segment_name in ('STATS$ENQUEUESTAT');
   The est_space_mb column will give you a guesstimate as to the required
   free space, in megabytes.
   To upgrade:
     - ensure you have sufficient free space in the tablespace
     - disable any programs which use Statspack
     - backup the Statspack schema (e.g. using export)
     - run the upgrade by connecting as a user with SYSDBA privilege:
    SQL>  connect / as sysdba
    SQL>  @spup817
Once the upgrade script completes, check the log files (spup817a.lis and
spup817b.lis) for errors.   If errors are evident, determine and rectify
the cause before proceeding.  If no errors are evident, and you are upgrading
to 9.2, you may proceed with the upgrade.

  Data Compatibility
  ~~~~~~~~~~~~~~~~~~
  Prior to release 9.0, the STATS$ENQUEUESTAT table gathered data based on
  an X$ table, rather than a V$view.  In 9.0, the column data within the
  underlying X$ table has been considerably improved, and the data
  externalised via the V$ENQUEUE_STAT view.
  The Statspack upgrade script spup817.sql migrates the data captured from
  prior releases into the new format, in order to avoid losing historical data.
  Note however, that the column names and data contained within the columns
  has changed considerably between the two releases: the STATS$ENQUEUE_STAT
  columns in 9.0 capture different data from the columns which existed in the
  STATS$ENQUEUESTAT table in the 8.1 Statspack releases.
  The column data migration performed by spup817.sql is as follows:
      8.1 STATS$ENQUEUESTAT   9.0 STATS$ENQUEUE_STAT
      ---------------------   ----------------------
      GETS                    TOTAL_REQ#
      WAITS                   TOTAL_WAIT#

  To further emphasise the difference, the column definitions appear below:
      STATS$ENQUEUESTAT.GETS         - 8.1
      Reflected the number of enqueue gets, excluding enqueue conversions.
      This statistic was incremented at the end of a get.
      STATS$ENQUEUE_STAT.TOTAL_REQ#  - 9.0
      Is the total number of requests for an enqueue + the number of
      enqueue conversions.  This statistic is incremented at the beginning
      of a get request.
      STATS$ENQUEUESTAT.WAITS        - 8.1
      Reflected the number of times a session waited for at least 3
      seconds for an enqueue operation (get or convert).  The statistic
      was incremented at the end of the wait (either if the enqueue was
      successfully gotten or if the request timed out).  If a session waited
      for less than 3 seconds, this statistic was not incremented.
      STATS$ENQUEUE_STAT.TOTAL_WAIT# - 9.0
      Is the total number of times a session waited for any enqueue operation.
      This statistic is incremented at the beginning of the wait.
  For these reasons it is not valid to compare Enqueue statistics data
  collected pre-9.0, to Enqueue statistics data captured in Oracle9i.

10.2.3. Upgrading the Statspack schema from 8.1.6 to 8.1.7
   Follow the general instructions in section 10.2. 'Upgrading an existing
   Statspack schema to a newer release' above.
   Then, to estimate whether you have sufficient free space to run this
   upgrade, execute the following SQL statement while connected as PERFSTAT in
   SQL*Plus:
     select 1.3*sum(bytes)/1024/1024 est_space_mb
       from dba_segments
      where segment_name in ('STATS$SQL_SUMMARY','STATS$SQL_SUMMARY_PK');
   The est_space_mb column will give you a guesstimate as to the required
   free space, in megabytes.
   The larger the SQL statements in the sql_summary table, the more space will
   be released after the upgrade is complete.
   To upgrade:
     - ensure you have sufficient free space in the tablespace
     - disable any programs which use Statspack
     - backup the Statspack schema (e.g. using export)
     - run the upgrade by connecting as a user with SYSDBA privilege:
    SQL>  connect / as sysdba
    SQL>  @spup816
Once the upgrade script completes, check the log files (spup816a.lis and
spup816b.lis) for errors.  If errors are evident, determine and rectify
the cause before proceeding.  If no errors are evident, and you are upgrading
to 9.0, you may proceed with the upgrade.

10.2.4. Upgrading the Statspack schema from 8.1.6 to 9.2
If you are running 8.1.6 Statspack and wish to upgrade to 9.2 Statspack, you
must follow the upgrade steps - in the following order:
  - 10.2.3. Upgrading the Statspack schema from 8.1.6 to 8.1.7
  - 10.2.2. Upgrading the Statspack schema from 8.1.7 to 9.0
  - 10.2.1. Upgrading the Statspack schema from 9.0   to 9.2

10.2.5. Upgrading the Statspack schema from 8.1.6 to 9.0
If you are running 8.1.6 Statspack and wish to upgrade to 9.0 Statspack, you
must follow the upgrade steps - in the following order:
  - 10.2.3. Upgrading the Statspack schema from 8.1.6 to 8.1.7
  - 10.2.2. Upgrading the Statspack schema from 8.1.7 to 9.0

10.2.6. Upgrading the Statspack schema from 8.1.7 to 9.2
If you are running 8.1.7 Statspack and wish to upgrade to 9.2 Statspack, you
must follow the upgrade steps - in the following order:
  - 10.2.2. Upgrading the Statspack schema from 8.1.7 to 9.0
  - 10.2.1. Upgrading the Statspack schema from 9.0   to 9.2
 
11. Oracle Real Application Clusters specific considerations
------------------------------------------------------------
11.1. Changing Instance Numbers
The unique identifier for a database instance used by Statspack is the
dbid and the instance_number.  When in a Real Application Clusters environment,
it is possible the instance_number may change between startups (either
because the instance_number initialization parameter is set, or
because the instances are started in a different order).
In this case, as Statspack uses the instance_number and the dbid to identify
the instance's snapshot preferences, it is important to note that this may
inadvertently result in a different set of levels or thresholds being
used when snapshotting an instance.
There are three conditions which must be met for this to occur:
 - the instance numbers must have switched between startups
 - the DBA must have modified the default Statspack parameters used for
   at least one of the instances
 - the parameters used (e.g. thresholds and snapshot level) must not be
   the same on all instances
Note that the only way the parameters will differ is if the parameters
have been explicitly modified by the DBA after installation, either by
saving the specified values or by using the modify_statspack_parameter
procedure.
It is easy to check whether any of the Statspack snapshot parameters are
different for the instances by querying the STATS$STATSPACK_PARAMETER table.
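e.g. A minimal sketch of such a check (the column names are assumed to mirror
the i_ parameter names listed in section 5.6 without the prefix; verify them
against your schema):
  SQL>  select instance_number, snap_level, executions_th, buffer_gets_th
          from stats$statspack_parameter
         order by instance_number;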
  NOTE:
  If you have changed the default Statspack parameters, you can avoid
  encountering this problem by hard-coding the instance_number
  initialization parameter for each of the instances of a Clustered
  database.
  For recommendations and issues with setting the instance_number
  initialization parameter, please see the Real Application Clusters
  documentation.

11.2. Cluster Specific Report
sprepins.sql can be run to query performance data for any instance which the
PERFSTAT schema contains.  The report will prompt for a dbid, instance_number
and begin and end snap id's.
This report can be used when importing data from another instance, or in a
Real Application Clusters environment to report on an instance which you are
not connected to.
For more information on sprepins.sql, see section 4.2.

11.3 Cluster Specific Data
New Cluster Specific data displayed in Statspack instance report:
- Page 2 of the Statspack report for a clustered instance displays cluster
  specific derived statistics.
- Page 3 of a clustered instance Statspack report shows data from
  v$dlm_misc.
- Cluster-specific data for Library Cache and Dictionary Cache
- RAC segment statistics

12.  Conflicts and differences compared to UTLBSTAT/UTLESTAT
------------------------------------------------------------
12.1. Running BSTAT/ESTAT in conjunction to Statspack
If you choose to run BSTAT/ESTAT in conjunction with Statspack, do not
run both as the same user, as there is a table name conflict - this table
is stats$waitstat.

12.2. Differences between Statspack and BSTAT/ESTAT
Statspack considers a transaction to either finish with a commit or a
rollback, and so calculates the number of transactions thus:
  'user commits' + 'user rollbacks'
BSTAT/ESTAT considers a transaction to complete with a commit only, and
so assumes that transactions = 'user commits'.
For this reason, comparing per transaction statistics between Statspack and
BSTAT/ESTAT may result in significantly different per transaction ratios.
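As an illustration (with made-up numbers): in an interval with 100 user
commits and 25 user rollbacks, Statspack divides per-transaction statistics
by 125 transactions while BSTAT/ESTAT divides by 100, so the same workload
yields per-transaction values which are 20% lower in the Statspack report.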
 
13.  Removing the package
-------------------------
To deinstall the package, connect as a user with SYSDBA privilege and run
the following script from SQL*Plus:  spdrop
  e.g.
      SQL>  connect / as sysdba
      SQL>  @spdrop
This script actually calls 2 other scripts:
      1.  spdtab  ->  Drops tables and public synonyms
      2.  spdusr  ->  Drops the user
Check each of the two output files produced (spdtab.lis, spdusr.lis)
to ensure the package was completely deinstalled.
 
14.  Supplied Scripts Overview
------------------------------
Installation
  Must be run as a user with SYSDBA privilege
 spcreate.sql ->  Creates entire Statspack environment (calls
          spcusr.sql, spctab.sql, spcpkg.sql)
   spdrop.sql ->  Drops entire Statspack environment (calls
       spdtab.sql, spdusr.sql)
  Are run as a user with SYSDBA priv by the calling scripts (above)
 spdtab.sql ->  Drops Statspack tables
   spdusr.sql ->  Drops the Statspack user (PERFSTAT)
  Are run as PERFSTAT by the calling scripts (above)
   spcusr.sql ->  Creates the Statspack user (PERFSTAT)
   spctab.sql ->  Creates Statspack tables
 spcpkg.sql ->  Creates the Statspack package

Reporting and Automation
  Must be run as PERFSTAT
 spreport.sql ->  Generates a Statspack report
        sprepins.sql    ->  Generates a Statspack report for the database and
                            instance specified
        sprepsql.sql    ->  Generates a Statspack SQL report for the specific
                            SQL Hash Value specified
 spauto.sql ->  Automates Statspack statistics collection
       (using dbms_job)

Upgrading
  Must be run as SYSDBA
       spup90.sql       ->  Converts data from the 9.0 schema to the
                            newer 9.2 schema.  Backup the existing schema
                            before running the upgrade.  If upgrading from
                            Statspack 8.1.6, spup816.sql must be run, then
                            spup817.sql, then spup90.sql
       spup817.sql      ->  Converts data from the 8.1.7 schema to the
                            newer 9.0 schema.  Backup the existing schema
                            before running the upgrade.   If upgrading from
                            Statspack 8.1.6, spup816.sql must be run, then
                            spup817.sql
       spup816.sql      ->  Converts data from the 8.1.6 schema to the
                            8.1.7 schema.  Backup the existing schema
                            before running the upgrade.

Performance Data Maintenance
  Must be run as PERFSTAT
        sppurge.sql     ->  Purges a limited range of Snapshot Id's for
                            a given database instance.
        sptrunc.sql     ->  Truncates all Performance data in Statspack tables
                            WARNING - Do not use unless you wish to remove
                                      all data in the schema you are using.
                                      You may choose to export the data
                                      as a backup before using this script.
        spuexp.par ->  An export parameter file supplied for exporting
       the whole PERFSTAT user.

Documentation
  Should be read by the DBA running the scripts
 spdoc.txt ->  This file contains instructions and
       documentation on the STATSPACK package.
 
15.  Limitations and Modifications
----------------------------------
15.1.  Limitations
  As the Statspack schema is updated to reflect the features in the
  latest Oracle releases, the schema may change; backward compatibility
  is not guaranteed.

15.2.  Modifications
  All Statspack code is Oracle proprietary and must not be modified.  Any
  modifications made to Statspack software will render the code and
  data captured thereafter unsupported; unsupported changes may result in
  errors in data capture or reporting.  Instead, please request
  enhancements.

-------------------------------------------------------------------------