Monday 19 July 2010

Oracle 11g OCM - Oracle Database 11g Certified Master Exam

Exam Topics:
The Oracle Database 11g OCM exam covers eight scenarios over a two-day exam, testing the following topics:
Server Configuration
[  ] Create the database
[  ] Determine and set sizing parameters for database structures
[  ] Create and manage temporary, permanent, and undo tablespaces
[  ] Stripe data files across multiple physical devices and locations
[  ] Configure the database environment to support optimal data access performance
[  ] Create and manage database configuration files
[  ] Create and manage bigfile tablespaces
[  ] Create and Manage a tablespace that uses NFS mounted file system file
[  ] Create and manage multiple network configuration files
[  ] Create and configure a listener
[  ] Configure the database instance to support shared server connections
[  ] Set up network tracing
[  ] Manage Oracle network processes
[  ] Configure the network environment to allow connections to multiple databases
[  ] Use configurationless connections
[  ] Use Grid Infrastructure to manage Oracle databases and other resources

Enterprise Manager Grid Control
[  ] Install and Patch Enterprise Manager Grid Control software
[  ] Configure the Enterprise Manager repository
[  ] Create Enterprise Manager Grid Control users
[  ] Use Enterprise Manager to modify a database configuration
[  ] Use Enterprise Manager to modify database availability
[  ] Create and manage jobs
[  ] Create and monitor alerts
[  ] Create notifications
[  ] Implement Grid Control and Database Control
[  ] Choose the appropriate tablespace type for the intended use
[  ] Create Scheduler jobs
[  ] Create schedules
[  ] Assign jobs to windows
[  ] Create programs
[  ] Create job classes
[  ] Install the Enterprise Manager Grid Control infrastructure
[  ] Deploy Enterprise Manager Grid Control agents
[  ] Configure Grid Control for business requirements 


Managing Database Availability
[  ] Maintain recovery catalogs
[  ] Configure Recovery Manager
[  ] Use Recovery Manager to perform database backups
[  ] Use Recovery Manager to perform complete database restore and recovery operations
[  ] Configure RMAN
[  ] Create different types of RMAN backups to cater for different performance and retention requirements
[  ] Set Flashback Database parameters
[  ] Configure a Fast Recovery Area
[  ] Perform various recovery operations using Flashback technology 

Data Management
[  ] Manage Materialized Views to improve rewrite and refresh performance
[  ] Configure and manage distributed materialized views
[  ] Create and Manage encrypted tablespaces
[  ] Manage Transport of tablespaces across platforms
[  ] Configure a schema to support a star transformation query
[  ] Administer external tables
[  ] Implement Data Pump export and import jobs for data transfer
[  ] Implement Data Pump to and from remote databases
[  ] Configure and use parallel execution for queries
[  ] Use SQL*Loader
[  ] Administer, manage and tune parallel execution

Data Warehouse Management
[  ] Administer partitioned tables and indexes using appropriate methods and keys
[  ] Perform partition maintenance operations
[  ] Maintain indexes on a partitioned table
[  ] Implement securefile LOB
[  ] Create and manage LOB segments
[  ] Implement fine-grained access control
[  ] Create and manage contexts
[  ] Administer flashback data archive and schema evolution
[  ] Configure and Manage Streams for Capture, Propagation and Apply

Performance Management
[  ] Administer Resource Manager
[  ] Use Result Cache
[  ] Use multi column statistics
[  ] Gather statistics on a specific table without invalidating cursors
[  ] Use partitioned indexes
[  ] Administer and tune schema object to support various access methods
[  ] Interpret execution plan
[  ] Use SQL tuning tools and features
[  ] Use SQL Tuning Advisor
[  ] Use SQL Access Advisor
[  ] Use SQL Performance Analyzer
[  ] Configure baseline templates
[  ] Use SQL Plan Management feature
[  ] Implement instance caging

Real Application Clusters
[  ] Install Oracle Grid Infrastructure
[  ] Create ASM Disk Groups
[  ] Create and manage an ASM instance
[  ] Create an ACFS file system
[  ] Start, stop, configure, and administer Oracle Grid Infrastructure
[  ] Install the Oracle Database 11gR2 software
[  ] Configure ASM for the shared disks and create a clustered database
[  ] Implement ASM failure groups
[  ] Configure archiving
[  ] Configure Services using both Manual and Policy Managed Methods


Data Guard
[  ] Create Physical Standby Database with real-time apply
[  ] Configure the Data Guard environment to reduce overheads of fast incremental backups on the primary database
[  ] Configure the Observer
[  ] Switchover and switch back
[  ] Configure connect-time failover
[  ] Convert the standby to a snapshot standby
[  ] Configure archivelog deletion policy for the Data Guard configuration

http://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=41&p_org_id=&lang=&p_exam_id=11gOCM

Oracle Database 11g Certified Master Upgrade Exam

Exam Topics :

The Oracle 11g OCM upgrade exam covers four tasks, testing the following:
Database, RMAN, EM and Network Configuration
[  ] Configure server-side network
[  ] Configure client-side network
[  ] Create and Manage encrypted tablespaces
[  ] Create and Manage a tablespace that uses NFS mounted file system file
[  ] Set up ADR file based repository
[  ] Perform cold database backup
[  ] Manage user accounts and use case sensitive passwords
[  ] Use OPatch to install a patch
[  ] Install and configure EM Agent
[  ] Create Base Recovery Catalog
[  ] Configure RMAN
[  ] Perform multisection backup of a datafile
[  ] Create an Archival Backup

Data Guard
[  ] Create Physical Standby Database with real-time apply
[  ] Configure the Data Guard environment to reduce overheads of fast incremental backups on the primary database
[  ] Configure the Observer
[  ] Switchover and switch back
[  ] Convert the standby to a snapshot standby
[  ] Configure archivelog deletion policy for the Data Guard configuration

Data and Data Warehouse Management
[  ] Troubleshoot materialized views for fast refresh and query rewrite
[  ] Add a tablespace by using the Transportable Tablespace feature of Data Pump Import (cross-platform transportable tablespace)
[  ] Configure a schema to support a star transformation query
[  ] Configure and use parallel execution for queries
[  ] Use and access SecureFile LOBs
[  ] Create partitioned tables (includes reference and interval partitioning)
[  ] Configure Flashback Data Archive
[  ] Use Oracle Streams to capture and propagate changes in a table

Performance Management
[  ] Set up and configure Resource Manager to control active sessions, number of I/Os, execution time, etc.
[  ] Use Result Cache
[  ] Use multi-column statistics
[  ] Gather statistics on a specific table without invalidating cursors
[  ] Use partitioned indexes
[  ] Use SQL Tuning Advisor
[  ] Use SQL Access Advisor
[  ] Configure baseline templates
[  ] Use SQL Plan Management feature
[  ] Replay a captured workload


Use Result Cache

  • Systems with large amounts of memory can take advantage of the result cache to improve response times of repetitive queries.
  • The result cache stores the results of SQL queries and PL/SQL functions in an area called the Result Cache Memory in the shared pool. When these queries and functions are executed repeatedly, the results are retrieved directly from the cache, giving a faster response time. Cached results become invalid when data in the dependent database objects is modified. Use of the result cache is a database-wide decision; the cache itself is instance-specific and can be sized differently on different instances.
  • The documentation states "To disable the result cache in a cluster, you must explicitly set the RESULT_CACHE_MAX_SIZE initialization parameter to 0 during every instance startup", but that is not the whole story: in 11.2.0.1, setting result_cache_max_size=0 with scope=both and sid='*' disables it permanently.
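
As a sketch of the sizing point above (the 32M value is purely illustrative), the cache can be inspected and resized dynamically:

```sql
-- Check the current result cache status and size
SELECT dbms_result_cache.status FROM dual;
SHOW PARAMETER result_cache_max_size

-- Resize the cache on all instances (illustrative value)
ALTER SYSTEM SET result_cache_max_size = 32M SCOPE=BOTH SID='*';

-- Disable it entirely; as noted above, this persists across restarts in 11.2.0.1
ALTER SYSTEM SET result_cache_max_size = 0 SCOPE=BOTH SID='*';
```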


Using the Result Cache Mode

You can control the result cache mechanism using the RESULT_CACHE_MODE parameter in the server initialization (init.ora) file. This parameter can be set at the system, session, or table level to the following values:
  • MANUAL
    In MANUAL mode, the results of a query can only be stored in the result cache by using the /*+ result_cache */ hint.
  • FORCE
    In FORCE mode, all results are stored in the result cache. When set to FORCE, all results use the cache, if possible. You can use the no_result_cache hint to bypass the cache when using the FORCE mode.
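
Both modes can be set dynamically; as a sketch (using the emp demo table for the table-level annotation):

```sql
-- Session level: only hinted queries are cached
ALTER SESSION SET result_cache_mode = MANUAL;

-- System wide: cache all eligible results unless /*+ no_result_cache */ is used
ALTER SYSTEM SET result_cache_mode = FORCE;

-- Table-level annotation (11.2): queries on this table default to caching
ALTER TABLE emp RESULT_CACHE (MODE FORCE);
```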

If the RESULT_CACHE_MODE parameter is set to MANUAL, and you want to store the results of a query in the result cache, then you must explicitly use the /*+ result_cache */ hint in your query, as shown in the following example:

SELECT /*+ result_cache */ deptno, avg(sal) FROM emp GROUP BY deptno;

When you execute the query, the database checks the result cache to determine whether the result for this query already exists in the cache. If it exists, the result is retrieved directly from the cache. Otherwise, the query is executed on the database, and the result is returned as output and stored in the result cache.
If the RESULT_CACHE_MODE parameter is set to FORCE, and you do not want to include the result of the query in the result cache, then you must use the /*+ no_result_cache */ hint in your query, as shown in the following example:

SELECT /*+ no_result_cache */ deptno, avg(sal) FROM emp GROUP BY deptno;

Restrictions on Using the SQL Query Result Cache
You cannot cache results when you use the following database objects or functions in your SQL query:
  • Dictionary and temporary tables
  • Sequence CURRVAL and NEXTVAL pseudo columns
  • SQL functions current_date, current_timestamp, local_timestamp, userenv/sys_context (with non-constant variables), sys_guid, sysdate, and sys_timestamp
  • Non-deterministic PL/SQL functions
Cached results are parameterized with the parameter values if any of the following constructs are used in the query:
  • Bind variables.
  • The following SQL functions: dbtimezone, sessiontimezone, userenv/sys_context (with constant variables), uid, and user.
  • NLS parameters.
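
For instance, with a bind variable the cache keeps a separate result per bind value (a sketch against the emp demo table):

```sql
VARIABLE dept NUMBER
EXEC :dept := 10

-- A distinct cached result is created for each value of :dept
SELECT /*+ result_cache */ avg(sal) FROM emp WHERE deptno = :dept;
```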


SQL> set linesize 180
SQL> set timing on
SQL> set autot on explain stat

SQL> SELECT
  2  PRODUCT_ID,sum(QUANTITY),avg(sales_amount)
  3  from sh.order_items
  4  group by product_id;

PRODUCT_ID SUM(QUANTITY) AVG(SALES_AMOUNT)
---------- ------------- -----------------
        52            80               100
        72             6             72000
        32             6               700
         1           134             14700
         6             2                70
         2            45        2066.66667
        21             1               900
        90            18             16400
        87             4              6200
        12            40             12000
        74            12              8000

PRODUCT_ID SUM(QUANTITY) AVG(SALES_AMOUNT)
---------- ------------- -----------------
         8            20              8400
        45             2                80

13 rows selected.

Elapsed: 00:00:26.37

Execution Plan
----------------------------------------------------------
Plan hash value: 2831044826

-------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                | Name        | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
-------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |             |    13 |   130 |     2   (0)| 00:00:01 |       |       |        |      |            |
|   1 |  PX COORDINATOR          |             |       |       |            |          |       |       |        |      |            |
|   2 |   PX SEND QC (RANDOM)    | :TQ10001    |    13 |   130 |     2   (0)| 00:00:01 |       |       |  Q1,01 | P->S | QC (RAND)  |
|   3 |    HASH GROUP BY         |             |    13 |   130 |     2   (0)| 00:00:01 |       |       |  Q1,01 | PCWP |            |
|   4 |     PX RECEIVE           |             |    13 |   130 |     2   (0)| 00:00:01 |       |       |  Q1,01 | PCWP |            |
|   5 |      PX SEND HASH        | :TQ10000    |    13 |   130 |     2   (0)| 00:00:01 |       |       |  Q1,00 | P->P | HASH       |
|   6 |       HASH GROUP BY      |             |    13 |   130 |     2   (0)| 00:00:01 |       |       |  Q1,00 | PCWP |            |
|   7 |        PX BLOCK ITERATOR |             |    19 |   190 |     2   (0)| 00:00:01 |     1 |    13 |  Q1,00 | PCWC |            |
|   8 |         TABLE ACCESS FULL| ORDER_ITEMS |    19 |   190 |     2   (0)| 00:00:01 |     1 |    13 |  Q1,00 | PCWP |            |
-------------------------------------------------------------------------------------------------------------------------------------


Statistics
----------------------------------------------------------
        963  recursive calls
          0  db block gets
         69  consistent gets
          0  physical reads
          0  redo size
        923  bytes sent via SQL*Net to client
        524  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
         13  rows processed


SQL>

=====================================With Result Cache Hint =========================

select /*+ result_cache */
PRODUCT_ID,sum(QUANTITY),avg(sales_amount) from sh.order_items group by product_id;
SQL>   2
PRODUCT_ID SUM(QUANTITY) AVG(SALES_AMOUNT)
---------- ------------- -----------------
        32             6               700
         1           134             14700
         2            45        2066.66667
         6             2                70
        87             4              6200
        52            80               100
        72             6             72000
        21             1               900
         8            20              8400
        45             2                80
        90            18             16400

PRODUCT_ID SUM(QUANTITY) AVG(SALES_AMOUNT)
---------- ------------- -----------------
        74            12              8000
        12            40             12000

13 rows selected.

Elapsed: 00:00:00.00

Execution Plan
----------------------------------------------------------
Plan hash value: 2831044826

-----------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                 | Name                       | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |                            |    13 |   130 |     2   (0)| 00:00:01 |       |       |        |      |            |
|   1 |  RESULT CACHE             | 7mk3vfypbbvp90axh2xw10pzja |       |       |            |          |       |       |        |      |            |
|   2 |   PX COORDINATOR          |                            |       |       |            |          |       |       |        |      |            |
|   3 |    PX SEND QC (RANDOM)    | :TQ10001                   |    13 |   130 |     2   (0)| 00:00:01 |       |       |  Q1,01 | P->S | QC (RAND)  |
|   4 |     HASH GROUP BY         |                            |    13 |   130 |     2   (0)| 00:00:01 |       |       |  Q1,01 | PCWP |            |
|   5 |      PX RECEIVE           |                            |    13 |   130 |     2   (0)| 00:00:01 |       |       |  Q1,01 | PCWP |            |
|   6 |       PX SEND HASH        | :TQ10000                   |    13 |   130 |     2   (0)| 00:00:01 |       |       |  Q1,00 | P->P | HASH       |
|   7 |        HASH GROUP BY      |                            |    13 |   130 |     2   (0)| 00:00:01 |       |       |  Q1,00 | PCWP |            |
|   8 |         PX BLOCK ITERATOR |                            |    19 |   190 |     2   (0)| 00:00:01 |     1 |    13 |  Q1,00 | PCWC |            |
|   9 |          TABLE ACCESS FULL| ORDER_ITEMS                |    19 |   190 |     2   (0)| 00:00:01 |     1 |    13 |  Q1,00 | PCWP |            |
-----------------------------------------------------------------------------------------------------------------------------------------------------

Result Cache Information (identified by operation id):
------------------------------------------------------

   1 - column-count=3; dependencies=(SH.ORDER_ITEMS); name="select /*+ result_cache */
PRODUCT_ID,sum(QUANTITY),avg(sales_amount) from sh.order_items group by product_id"



Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
          0  consistent gets
          0  physical reads
          0  redo size
        926  bytes sent via SQL*Net to client
        524  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
         13  rows processed




====================
session 2 - update -

update sh.order_items set QUANTITY=100 where ORDER_ID=5 and PRODUCT_ID=1;

1 row updated.

SQL> commit;


=====================
Session 1:
SQL> /

PRODUCT_ID SUM(QUANTITY) AVG(SALES_AMOUNT)
---------- ------------- -----------------
        32             6               700
        52            80               100
         1           194             14700
        72             6             72000
         6             2                70
         2            45        2066.66667
        87             4              6200
        90            18             16400
        21             1               900
        45             2                80
        12            40             12000

PRODUCT_ID SUM(QUANTITY) AVG(SALES_AMOUNT)
---------- ------------- -----------------
        74            12              8000
         8            20              8400

13 rows selected.

Elapsed: 00:00:00.01

Execution Plan
----------------------------------------------------------
Plan hash value: 2831044826

-----------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                 | Name                       | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |                            |    13 |   130 |     2   (0)| 00:00:01 |       |       |        |      |            |
|   1 |  RESULT CACHE             | 7mk3vfypbbvp90axh2xw10pzja |       |       |            |          |       |       |        |      |            |
|   2 |   PX COORDINATOR          |                            |       |       |            |          |       |       |        |      |            |
|   3 |    PX SEND QC (RANDOM)    | :TQ10001                   |    13 |   130 |     2   (0)| 00:00:01 |       |       |  Q1,01 | P->S | QC (RAND)  |
|   4 |     HASH GROUP BY         |                            |    13 |   130 |     2   (0)| 00:00:01 |       |       |  Q1,01 | PCWP |            |
|   5 |      PX RECEIVE           |                            |    13 |   130 |     2   (0)| 00:00:01 |       |       |  Q1,01 | PCWP |            |
|   6 |       PX SEND HASH        | :TQ10000                   |    13 |   130 |     2   (0)| 00:00:01 |       |       |  Q1,00 | P->P | HASH       |
|   7 |        HASH GROUP BY      |                            |    13 |   130 |     2   (0)| 00:00:01 |       |       |  Q1,00 | PCWP |            |
|   8 |         PX BLOCK ITERATOR |                            |    19 |   190 |     2   (0)| 00:00:01 |     1 |    13 |  Q1,00 | PCWC |            |
|   9 |          TABLE ACCESS FULL| ORDER_ITEMS                |    19 |   190 |     2   (0)| 00:00:01 |     1 |    13 |  Q1,00 | PCWP |            |
-----------------------------------------------------------------------------------------------------------------------------------------------------

Result Cache Information (identified by operation id):
------------------------------------------------------

   1 - column-count=3; dependencies=(SH.ORDER_ITEMS); name="select /*+ result_cache */
PRODUCT_ID,sum(QUANTITY),avg(sales_amount) from sh.order_items group by product_id"



Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          0  consistent gets
          0  physical reads
          0  redo size
        926  bytes sent via SQL*Net to client
        524  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
         13  rows processed

SQL> /

PRODUCT_ID SUM(QUANTITY) AVG(SALES_AMOUNT)
---------- ------------- -----------------
         1           134             14700
        32             6               700
        72             6             72000
        52            80               100
         6             2                70
         2            45        2066.66667
         8            20              8400
        21             1               900
        45             2                80
        90            18             16400
        87             4              6200

PRODUCT_ID SUM(QUANTITY) AVG(SALES_AMOUNT)
---------- ------------- -----------------
        12            40             12000
        74            12              8000

13 rows selected.

Elapsed: 00:00:06.29

Execution Plan
----------------------------------------------------------
Plan hash value: 2831044826

-----------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                 | Name                       | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |                            |    13 |   130 |     2   (0)| 00:00:01 |       |       |        |      |            |
|   1 |  RESULT CACHE             | 7mk3vfypbbvp90axh2xw10pzja |       |       |            |          |       |       |        |      |            |
|   2 |   PX COORDINATOR          |                            |       |       |            |          |       |       |        |      |            |
|   3 |    PX SEND QC (RANDOM)    | :TQ10001                   |    13 |   130 |     2   (0)| 00:00:01 |       |       |  Q1,01 | P->S | QC (RAND)  |
|   4 |     HASH GROUP BY         |                            |    13 |   130 |     2   (0)| 00:00:01 |       |       |  Q1,01 | PCWP |            |
|   5 |      PX RECEIVE           |                            |    13 |   130 |     2   (0)| 00:00:01 |       |       |  Q1,01 | PCWP |            |
|   6 |       PX SEND HASH        | :TQ10000                   |    13 |   130 |     2   (0)| 00:00:01 |       |       |  Q1,00 | P->P | HASH       |
|   7 |        HASH GROUP BY      |                            |    13 |   130 |     2   (0)| 00:00:01 |       |       |  Q1,00 | PCWP |            |
|   8 |         PX BLOCK ITERATOR |                            |    19 |   190 |     2   (0)| 00:00:01 |     1 |    13 |  Q1,00 | PCWC |            |
|   9 |          TABLE ACCESS FULL| ORDER_ITEMS                |    19 |   190 |     2   (0)| 00:00:01 |     1 |    13 |  Q1,00 | PCWP |            |
-----------------------------------------------------------------------------------------------------------------------------------------------------

Result Cache Information (identified by operation id):
------------------------------------------------------

   1 - column-count=3; dependencies=(SH.ORDER_ITEMS); name="select /*+ result_cache */
PRODUCT_ID,sum(QUANTITY),avg(sales_amount) from sh.order_items group by product_id"



Statistics
----------------------------------------------------------
        576  recursive calls
          0  db block gets
         69  consistent gets
          0  physical reads
          0  redo size
        923  bytes sent via SQL*Net to client
        524  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
         13  rows processed

-----

SQL> /

PRODUCT_ID SUM(QUANTITY) AVG(SALES_AMOUNT)
---------- ------------- -----------------
         1           134             14700
        32             6               700
        72             6             72000
        52            80               100
         6             2                70
         2            45        2066.66667
         8            20              8400
        21             1               900
        45             2                80
        90            18             16400
        87             4              6200

PRODUCT_ID SUM(QUANTITY) AVG(SALES_AMOUNT)
---------- ------------- -----------------
        12            40             12000
        74            12              8000

13 rows selected.

Elapsed: 00:00:00.05

Execution Plan
----------------------------------------------------------
Plan hash value: 2831044826

-----------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                 | Name                       | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |                            |    13 |   130 |     2   (0)| 00:00:01 |       |       |        |      |            |
|   1 |  RESULT CACHE             | 7mk3vfypbbvp90axh2xw10pzja |       |       |            |          |       |       |        |      |            |
|   2 |   PX COORDINATOR          |                            |       |       |            |          |       |       |        |      |            |
|   3 |    PX SEND QC (RANDOM)    | :TQ10001                   |    13 |   130 |     2   (0)| 00:00:01 |       |       |  Q1,01 | P->S | QC (RAND)  |
|   4 |     HASH GROUP BY         |                            |    13 |   130 |     2   (0)| 00:00:01 |       |       |  Q1,01 | PCWP |            |
|   5 |      PX RECEIVE           |                            |    13 |   130 |     2   (0)| 00:00:01 |       |       |  Q1,01 | PCWP |            |
|   6 |       PX SEND HASH        | :TQ10000                   |    13 |   130 |     2   (0)| 00:00:01 |       |       |  Q1,00 | P->P | HASH       |
|   7 |        HASH GROUP BY      |                            |    13 |   130 |     2   (0)| 00:00:01 |       |       |  Q1,00 | PCWP |            |
|   8 |         PX BLOCK ITERATOR |                            |    19 |   190 |     2   (0)| 00:00:01 |     1 |    13 |  Q1,00 | PCWC |            |
|   9 |          TABLE ACCESS FULL| ORDER_ITEMS                |    19 |   190 |     2   (0)| 00:00:01 |     1 |    13 |  Q1,00 | PCWP |            |
-----------------------------------------------------------------------------------------------------------------------------------------------------

Result Cache Information (identified by operation id):
------------------------------------------------------

   1 - column-count=3; dependencies=(SH.ORDER_ITEMS); name="select /*+ result_cache */
PRODUCT_ID,sum(QUANTITY),avg(sales_amount) from sh.order_items group by product_id"



Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          0  consistent gets
          0  physical reads
          0  redo size
        923  bytes sent via SQL*Net to client
        524  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
         13  rows processed




SQL> execute dbms_result_cache.memory_report
R e s u l t   C a c h e   M e m o r y   R e p o r t
[Parameters]
Block Size          = 1K bytes
Maximum Cache Size  = 10496K bytes (10496 blocks)
Maximum Result Size = 524K bytes (524 blocks)
[Memory]
Total Memory = 176680 bytes [0.015% of the Shared Pool]
... Fixed Memory = 12704 bytes [0.001% of the Shared Pool]
... Dynamic Memory = 163976 bytes [0.014% of the Shared Pool]
....... Overhead = 131208 bytes
....... Cache Memory = 32K bytes (32 blocks)
........... Unused Memory = 26 blocks
........... Used Memory = 6 blocks
............... Dependencies = 2 blocks (2 count)
............... Results = 4 blocks
................... SQL     = 2 blocks (2 count)
................... Invalid = 2 blocks (2 count)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.19
SQL> begin
  2    dbms_result_cache.flush;
  3  end;
  4  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00
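
After a flush or an invalidation, the cache counters can be checked with a query such as this sketch:

```sql
SELECT name, value
FROM   v$result_cache_statistics
WHERE  name IN ('Create Count Success', 'Find Count', 'Invalidation Count');
```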


-- Invalidate an individual table/object rather than flushing the full result cache.
SQL> begin
  2    dbms_result_cache.invalidate('ARUP','CUSTOMERS');
  3  end;
  4  /

The two arguments are the object owner and the object name.
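
To verify which cached results were affected, the cache objects view can be queried (a sketch):

```sql
SELECT type, status, name
FROM   v$result_cache_objects
WHERE  status = 'Invalid';
```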

Sunday 18 July 2010

Troubleshoot/Tune Materialized Views for Fast Refresh and Query Rewrite

Several DBMS_MVIEW procedures can help you create materialized views that are optimized for fast refresh and query rewrite. The EXPLAIN_MVIEW procedure tells you whether a materialized view is fast refreshable or eligible for general query rewrite, and EXPLAIN_REWRITE tells you whether query rewrite will occur. However, neither tells you how to achieve fast refresh or query rewrite.
To further facilitate the use of materialized views, the TUNE_MVIEW procedure shows you how to optimize your CREATE MATERIALIZED VIEW statement and how to meet other requirements, such as a materialized view log and a rewrite equivalence relationship, for fast refresh and general query rewrite. TUNE_MVIEW analyzes and processes the CREATE MATERIALIZED VIEW statement and generates two sets of output results: one for the materialized view implementation and the other for undoing the CREATE MATERIALIZED VIEW operations. The two sets of output can be accessed through views or stored in external script files created by the SQL Access Advisor; these script files are ready to execute to implement the materialized view.
With the TUNE_MVIEW procedure, you no longer need a detailed understanding of materialized views to create one in an application, because the materialized view and its required components (such as a materialized view log) will be created correctly through the procedure.
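
A minimal sketch of the workflow (the task name and materialized view definition here are hypothetical):

```sql
VARIABLE task VARCHAR2(30)
EXEC :task := 'MV_TUNE_TASK'

BEGIN
  dbms_advisor.tune_mview(
    :task,
    'CREATE MATERIALIZED VIEW sales_mv
       REFRESH FAST
       ENABLE QUERY REWRITE AS
     SELECT product_id, SUM(sales_amount)
     FROM   sh.order_items
     GROUP BY product_id');
END;
/

-- The IMPLEMENTATION and UNDO scripts are exposed through USER_TUNE_MVIEW
SELECT script_type, statement
FROM   user_tune_mview
WHERE  task_name = :task
ORDER  BY action_id;
```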

Friday 16 July 2010

Useful AWR comparison stats script

set lines 130 pages 1000
col stat_name for a25
col BEGIN_INTERVAL_TIME for a25
col END_INTERVAL_TIME for a25
--col redo_size for '999,999,990.99'
--col sess_l_reads for '999,999,990.99'
--col blk_change for '999,999,990.99'
--col phy_reads for '999,999,990.99'
--col phy_writes for '999,999,990.99'
--col user_calls for '999,999,990.99'
--col parse_count_tot for '999,999,990.99'
--col parse_count_hard for '999,999,990.99'
--col sort_disk for '999,999,990.99'
--col logons for '999,999,990.99'
--col execute_count for '999,999,990.99'
--col trans for '999,999,990.99'

set pages 0
set feedback off

spool load_profile.txt
select 'date_time, redo_size, sess_l_reads, blk_change, phy_reads, phy_writes, user_calls, parse_count_tot, parse_count_hard, sort_disk, logons, execute_count, trans' from dual;

select
date_time ||','||
sum(case WHEN stat_name='redo size' then round((e_val - b_val)/sec,2) else null end) ||','||
sum(case WHEN stat_name='session logical reads' then round((e_val - b_val)/sec,2) else null end)  ||','||
sum(case WHEN stat_name='db block changes' then round((e_val - b_val)/sec,2) else null end) ||','||
sum(case WHEN stat_name='physical reads' then round((e_val - b_val)/sec,2) else null end)  ||','||
sum(case WHEN stat_name='physical writes' then round((e_val - b_val)/sec,2) else null end) ||','||
sum(case WHEN stat_name='user calls' then round((e_val - b_val)/sec,2) else null end) ||','||
sum(case WHEN stat_name='parse count (total)' then round((e_val - b_val)/sec,2) else null end) ||','||
sum(case WHEN stat_name='parse count (hard)' then round((e_val - b_val)/sec,2) else null end) ||','||
sum(case WHEN stat_name='sorts (disk)' then round((e_val - b_val)/sec,2) else null end) ||','||
sum(case WHEN stat_name='logons cumulative' then round((e_val - b_val)/sec,2) else null end) ||','||
sum(case WHEN stat_name='execute count' then round((e_val - b_val)/sec,2) else null end)  ||','||
round((sum(case WHEN stat_name='user commits' then (e_val - b_val)/sec else null end) +
sum(case WHEN stat_name='user rollbacks' then (e_val - b_val)/sec else null end)),2) trans
from
(
select
to_char(sn.BEGIN_INTERVAL_TIME,'mm/dd/yy_hh24_mi')|| to_char(sn.END_INTERVAL_TIME,'_hh24_mi') Date_Time,
b.stat_name stat_name,
e.value e_val,
b.value b_val,
(extract( day from (end_interval_time-begin_interval_time) )*24*60*60+
extract( hour from (end_interval_time-begin_interval_time) )*60*60+
extract( minute from (end_interval_time-begin_interval_time) )*60+
extract( second from (end_interval_time-begin_interval_time)) ) sec
FROM
dba_hist_sysstat b,
dba_hist_sysstat e,
dba_hist_snapshot sn
where
trunc(sn.begin_interval_time) between sysdate -100 and sysdate
and
b.snap_id(+) = e.snap_id-1
and e.snap_id = sn.snap_id
and b.dbid(+) = e.dbid
and e.dbid = (select dbid from v$database)
and sn.dbid = (select dbid from v$database)
and b.instance_number(+) = e.instance_number
and e.instance_number = (select instance_number from v$instance)
and sn.instance_number = (select instance_number from v$instance)
and b.stat_name = e.stat_name
and b.stat_name in (
'redo size',
'session logical reads',
'db block changes',
'physical reads',
'physical writes',
'user calls',
'parse count (total)',
'parse count (hard)',
'sorts (disk)',
'logons cumulative',
'execute count',
'transactions',
'user commits',
'user rollbacks'
)
)
group by
date_time
Order by date_time
;

spool off


-------------------------
event_response.sql
alter session set nls_date_format='dd-mm-yy';

Accept From_date prompt "Enter the Start date in DD-mm-YY Format:"
Accept To_date prompt "Enter the End date in DD-mm-YY Format:"

set pages 0
set linesize 300

Accept Spoolfilename prompt "Enter the spool file name:"
Spool &Spoolfilename
select
        to_char(sn.BEGIN_INTERVAL_TIME,'mm/dd/yy_hh24_mi')|| to_char(sn.END_INTERVAL_TIME,'_hh24_mi') ||','||
        e.event_name ||','||
        nvl(e.total_waits - nvl(b.total_waits,0),0)   ||','||
        (e.total_waits - nvl(b.total_waits,0)) /
        (extract( day from (sn.end_interval_time-sn.begin_interval_time) )*24*60*60+
        extract( hour from (sn.end_interval_time-sn.begin_interval_time) )*60*60+
        extract( minute from (sn.end_interval_time-sn.begin_interval_time) )*60+
        extract( second from (sn.end_interval_time-sn.begin_interval_time))) ||','||
        100 * (e.total_timeouts - nvl(b.total_timeouts,0)) /
        (e.total_waits - nvl(b.total_waits,0)) ||','||
        (e.time_waited_micro -  nvl(b.time_waited_micro,0)) / 1000000 ||','||
        ((e.time_waited_micro -  nvl(b.time_waited_micro,0)) / 1000) / (e.total_waits - nvl(b.total_waits,0)) ||','||
        e.wait_class                 ||','
        from dba_hist_system_event b,
        dba_hist_system_event e,
        dba_hist_snapshot sn
        where
                trunc(sn.begin_interval_time)   between '&From_Date' and '&To_Date'
                and b.snap_id(+)                = e.snap_id-1
                and e.snap_id                   = sn.snap_id
                and b.dbid(+)                   = e.dbid
                and e.dbid                      = (select dbid from v$database)
                and sn.dbid                     = (select dbid from v$database)
                and b.instance_number(+)        = e.instance_number
                and e.instance_number           = (select instance_number from v$instance)
                and sn.instance_number          = (select instance_number from v$instance)
                and b.event_id(+)               = e.event_id
                and e.total_waits               > nvl(b.total_waits,0)
                and e.wait_class                != 'Idle'
--                and e.event_name='Backup: sbtwrite2'
--                and e.event_name='read by other session'
--                and e.event_name='db file scattered read'
                and e.event_name='ASM file metadata operation'
        order by to_char(sn.BEGIN_INTERVAL_TIME,'mm/dd/yy_hh24_mi')|| to_char(sn.END_INTERVAL_TIME,'_hh24_mi')
;
spool off


Configure a schema to support a star transformation query

What is Star Schema:
The star schema is perhaps the simplest data warehouse schema. It is called a star schema because the entity-relationship diagram of this schema resembles a star, with points radiating from a central table. The center of the star consists of a large fact table and the points of the star are the dimension tables.

[Figure 19-2: a star schema, with a central fact table and dimension tables at the points of the star]
Star Queries :-

A star query is a join between a fact table and a number of dimension tables. Each dimension table is joined to the fact table using a primary key to foreign key join, but the dimension tables are not joined to each other. The optimizer recognizes star queries and generates efficient execution plans for them. It is not mandatory to have any foreign keys on the fact table for star transformation to take effect.
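A minimal sketch of such a schema and a star query against it (all table and column names here are illustrative):

```sql
-- Dimension tables, each with a primary key
CREATE TABLE times     (time_id NUMBER PRIMARY KEY, calendar_quarter_desc VARCHAR2(10));
CREATE TABLE customers (cust_id NUMBER PRIMARY KEY, cust_state_province VARCHAR2(40));
CREATE TABLE products  (prod_id NUMBER PRIMARY KEY, prod_category VARCHAR2(40));

-- Fact table at the center of the star; the foreign keys are
-- typical but, as noted above, not required for the transformation
CREATE TABLE sales (
  time_id     NUMBER REFERENCES times,
  cust_id     NUMBER REFERENCES customers,
  prod_id     NUMBER REFERENCES products,
  amount_sold NUMBER
);

-- A star query: the fact table joined to each dimension,
-- dimensions not joined to each other
SELECT t.calendar_quarter_desc, p.prod_category, SUM(s.amount_sold)
  FROM sales s, times t, customers c, products p
 WHERE s.time_id = t.time_id
   AND s.cust_id = c.cust_id
   AND s.prod_id = p.prod_id
   AND c.cust_state_province = 'CA'
 GROUP BY t.calendar_quarter_desc, p.prod_category;
```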


Tuning Star Queries

To get the best possible performance for star queries, it is important to follow some basic guidelines:




  • A bitmap index should be built on each of the foreign key columns of the fact table or tables.
  • The initialization parameter STAR_TRANSFORMATION_ENABLED should be set to TRUE. This enables an important optimizer feature for star queries. It is set to FALSE by default for backward compatibility.

Once these conditions are satisfied, the majority of star queries in the warehouse will use star transformation, which provides very efficient query performance.
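The two guidelines can be sketched as follows (the table and index names are illustrative):

```sql
-- Enable the optimizer feature; session level shown, but it can
-- also be set system-wide or in the spfile. The parameter also
-- accepts TEMP_DISABLE, which allows the transformation but
-- prevents the use of temporary tables.
ALTER SESSION SET star_transformation_enabled = TRUE;

-- A bitmap index on each foreign key column of the fact table
CREATE BITMAP INDEX sales_time_bix ON sales (time_id);
CREATE BITMAP INDEX sales_cust_bix ON sales (cust_id);
CREATE BITMAP INDEX sales_prod_bix ON sales (prod_id);
```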


http://download.oracle.com/docs/cd/B28359_01/server.111/b28313/schemas.htm#sthref1026



How Oracle Chooses to Use Star Transformation

The optimizer generates and saves the best plan it can produce without the transformation. If the transformation is enabled, the optimizer then tries to apply it to the query and, if applicable, generates the best plan using the transformed query. Based on a comparison of the cost estimates between the best plans for the two versions of the query, the optimizer will then decide whether to use the best plan for the transformed or untransformed version.

If the query requires accessing a large percentage of the rows in the fact table, it might be better to use a full table scan and not use the transformations. However, if the constraining predicates on the dimension tables are sufficiently selective that only a small portion of the fact table needs to be retrieved, the plan based on the transformation will probably be superior.

Note that the optimizer generates a subquery for a dimension table only if it decides that it is reasonable to do so based on a number of criteria. There is no guarantee that subqueries will be generated for all dimension tables. The optimizer may also decide, based on the properties of the tables and the query, that the transformation does not merit being applied to a particular query. In this case the best regular plan will be used.
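To confirm which plan the optimizer actually chose for a given query, check its execution plan for bitmap row sources (names below are illustrative):

```sql
EXPLAIN PLAN FOR
SELECT t.calendar_quarter_desc, SUM(s.amount_sold)
  FROM sales s, times t, customers c
 WHERE s.time_id = t.time_id
   AND s.cust_id = c.cust_id
   AND c.cust_state_province = 'CA'
 GROUP BY t.calendar_quarter_desc;

-- Operations such as BITMAP KEY ITERATION and BITMAP MERGE in the
-- output indicate that star transformation was applied
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```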

Star Transformation Restrictions


Star transformation is not supported for queries or tables with any of the following characteristics:

  • Queries with a table hint that is incompatible with a bitmap access path

  • Queries that contain bind variables

  • Tables with too few bitmap indexes. There must be a bitmap index on a fact table column for the optimizer to generate a subquery for it.

  • Remote fact tables. However, remote dimension tables are allowed in the subqueries that are generated.

  • Anti-joined tables

  • Tables that are already used as a dimension table in a subquery

  • Tables that are really unmerged views, which are not view partitions

The star transformation may not be chosen by the optimizer in the following cases:

  • Tables that have a good single-table access path

  • Tables that are too small for the transformation to be worthwhile

In addition, temporary tables will not be used by star transformation under the following conditions:

  • The database is in read-only mode

  • The star query is part of a transaction that is in serializable mode