
Tuesday, 23 November 2010

Oracle cluvfy failed with error PRVF-2415 on HP-UX

Oracle Cluster Verification Utility (cluvfy) may fail with the following error on HP-UX:

--------cluvfy log --------------------------
File "/etc/resolv.conf" is consistent across nodes

Time zone consistency check passed
Checking settings of device file "/dev/async"
Check for settings of device file "/dev/async" failed.
ERROR:
PRVF-2415 : The minor number of device file "/dev/async" is incorrect on the nodes:
Check failed on nodes:
server3,server1

Starting check for The SSH LoginGraceTime setting ...

Problem: By default, the /dev/async device file on HP-UX 11.31 is created with an incorrect device ID, and it needs to be recreated with the major and minor device numbers Oracle expects.

Solution :
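A sketch of the usual fix, run as root on each failing node (verify the exact major/minor numbers against the Oracle installation guide or MOS note for your HP-UX release before recreating the device; 101 and 0x104 are the commonly documented values):

# recreate /dev/async with the major/minor numbers Oracle expects
rm /dev/async
/sbin/mknod /dev/async c 101 0x104
chown oracle:dba /dev/async
chmod 660 /dev/async

Then re-run cluvfy to confirm the check passes on all nodes.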

Monday, 25 October 2010

Oracle Database 11g: Capture & Replay

Oracle Database 11g : Database Capture and Database Replay Feature.
A.1 Capture the workload from the production database.

BEGIN
  DBMS_WORKLOAD_CAPTURE.START_CAPTURE (name => 'TEST_CAPTURE01',
                           dir => 'DUMPDIR',
                           duration => 60);
END;
/
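The capture above stops automatically once the specified duration elapses; to end it earlier, the capture can be finished explicitly (an aside, not part of the original steps):

BEGIN
  DBMS_WORKLOAD_CAPTURE.FINISH_CAPTURE ();
END;
/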

B.1 Process the workload. You must process the captured workload before it can be replayed. It is recommended to perform this step on the test system to avoid any adverse impact on production database performance.

BEGIN
  DBMS_WORKLOAD_REPLAY.PROCESS_CAPTURE (capture_dir => 'DUMPDIR');
END;
/


The following steps are required to replay a captured workload on the test system.
1) Initialize the captured workload you would like to replay.
Execute   DBMS_WORKLOAD_REPLAY.INITIALIZE_REPLAY (replay_name => 'REPLAY001', replay_dir => 'DUMPDIR');


2) Query DBA_WORKLOAD_CONNECTION_MAP and make sure you are happy with the capture-to-replay connection mapping. Remap connections if necessary using the procedure below.

Execute   DBMS_WORKLOAD_REPLAY.REMAP_CONNECTION (connection_id => 101,replay_connection => 'dlsun244:3434/bjava21');


3) Prepare the replay, setting any replay parameters that need non-default values.
EXECUTE  DBMS_WORKLOAD_REPLAY.PREPARE_REPLAY (synchronization => TRUE);


4) Connect to the host you want to use as the replay client and cd to the location where the captured and processed files are stored.
cd /u01/app/oracle/dumpdir

5) Run the replay client.
$ORACLE_HOME/bin/wrc username/password


6) Return to the previous session and start the replay.
 EXECUTE  DBMS_WORKLOAD_REPLAY.START_REPLAY ();

7) Use Oracle data dictionary views or Enterprise Manager to review progress, as shown below.
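For example, assuming the standard 11g dictionary views, the state of captures and replays can be checked with:

-- status of workload captures and replays
SELECT id, name, status FROM dba_workload_captures;
SELECT id, name, status FROM dba_workload_replays;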









Tuesday, 12 October 2010

Gather table statistics without invalidating cursors

Gather statistics for the EMP table in the SCOTT schema without invalidating the cursors and execution plans stored in memory.

execute DBMS_STATS.GATHER_TABLE_STATS('SCOTT','EMP',NO_INVALIDATE=>TRUE);

PL/SQL procedure successfully completed.

Setting NO_INVALIDATE to TRUE ensures that dependent cursors and plans are not invalidated after the statistics are collected.
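For comparison, the default value of NO_INVALIDATE is DBMS_STATS.AUTO_INVALIDATE, which lets Oracle invalidate dependent cursors gradually over a period of time rather than keeping them valid:

execute DBMS_STATS.GATHER_TABLE_STATS('SCOTT','EMP',NO_INVALIDATE=>DBMS_STATS.AUTO_INVALIDATE);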


Monday, 11 October 2010

Initializing and Tuning Parameters for Parallel Execution

PARALLEL_MAX_SERVERS
  Default: CPU_COUNT x PARALLEL_THREADS_PER_CPU x 2 x 5
  Specifies the maximum number of parallel execution processes and parallel recovery processes for an instance.

PARALLEL_EXECUTION_MESSAGE_SIZE
  Default: 2 KB (port specific)
  Increase to 4 KB or 8 KB to improve parallel execution performance if sufficient SGA memory exists.

PARALLEL_ADAPTIVE_MULTI_USER
  Default: TRUE
  Causes parallel execution SQL to throttle DOP requests to prevent system overload.

The following example shows a statement that sets the DOP to 4 on a table:

ALTER TABLE orders PARALLEL 4;
This next example sets the DOP on an index to 4:

ALTER INDEX iorders PARALLEL 4;

This last example sets a hint to 4 on a query:

SELECT /*+ PARALLEL(orders, 4) */ COUNT(*) FROM orders;


The initialization parameter PARALLEL_MIN_PERCENT specifies the desired minimum percentage of requested parallel execution servers. This parameter affects DML and DDL operations as well as queries.
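For example, the following session setting requires at least half of the requested parallel servers to be available; otherwise the statement fails with ORA-12827 instead of running with a reduced DOP:

ALTER SESSION SET PARALLEL_MIN_PERCENT = 50;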

Thursday, 30 September 2010

Cross-Platform Transportable Tablespaces

Transporting tablespaces across platforms using the Transportable Tablespaces feature.


Steps for a cross-platform transportable tablespace (the read-only and platform-compatibility checks usually done first are sketched after step 6):

0.1 mkdir /u01/app/oracle/dumpdir
0.2 CREATE DIRECTORY dumpdir AS '/u01/app/oracle/dumpdir';


1. expdp  TRANSPORT_TABLESPACES=TEST_EXP DUMPFILE=expdp_TEST_EXP.dmp  DIRECTORY=dumpdir TRANSPORT_FULL_CHECK=y

2. RMAN> CONVERT TABLESPACE sales_1,sales_2 
         TO PLATFORM 'Microsoft Windows NT'
         FORMAT '/temp/%U';

3. Copy to Destination Server/Host

4. RMAN> CONVERT DATAFILE 
         '/hq/finance/work/tru/tbs_31.f',
         '/hq/finance/work/tru/tbs_32.f',
         '/hq/finance/work/tru/tbs_41.f'
         TO PLATFORM="Solaris[tm] OE (32-bit)"
         FROM PLATFORM="HP TRu64 UNIX"
         DB_FILE_NAME_CONVERT=
         "/hq/finance/work/tru/", "/hq/finance/dbs/tru"
         PARALLELISM=5;

5. IMPDP system/password DUMPFILE=expdat.dmp DIRECTORY=dpump_dir
   TRANSPORT_DATAFILES=
   /salesdb/sales_101.dbf,
   /salesdb/sales_201.dbf
   REMAP_SCHEMA=(dcranney:smith) REMAP_SCHEMA=(jfee:williams)

6. ALTER TABLESPACE sales_1 READ WRITE;
   ALTER TABLESPACE sales_2 READ WRITE;
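As noted above, two preliminaries are usually required before the export in step 1 (a sketch; the tablespace names follow the RMAN example):

-- the tablespaces being transported must be read only during the export
ALTER TABLESPACE sales_1 READ ONLY;
ALTER TABLESPACE sales_2 READ ONLY;

-- confirm the source and destination platforms are supported and note their endian format
SELECT platform_name, endian_format FROM v$transportable_platform;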

Oracle 11g: Use Direct NFS for Datafiles

Oracle 11g (11.1): Use Direct NFS for Datafiles

With Oracle Database 11g release 1 (11.1), instead of using the operating system kernel NFS client, you can configure Oracle Database to access NFS V3 servers directly using an Oracle internal Direct NFS client.

To enable Oracle Database to use Direct NFS, the NFS file systems must be mounted and available over regular NFS mounts. The mount options used are not relevant, as Direct NFS manages settings after installation.

You can use an optional oranfstab file (in either /etc or $ORACLE_HOME/dbs) to specify additional paths for a mount point.

Direct NFS determines the mount point settings for NFS storage devices based on the configuration in /etc/mtab, which is updated when the file systems listed in /etc/fstab are mounted.
Direct NFS searches for mount entries in the following order:

  1. $ORACLE_HOME/dbs/oranfstab
  2. /etc/oranfstab
  3. /etc/mtab
Direct NFS uses the first matching entry found.

Direct NFS can use up to four network paths defined in the oranfstab file for an NFS server. The Direct NFS client performs load balancing across all specified paths. If a specified path fails, then Direct NFS reissues I/O commands over any remaining paths.

Complete the following procedure to enable Direct NFS:

1. Create an oranfstab file
2. Use the ODM library libnfsodm11.so to enable Direct NFS

1. Sample oranfstab file

server:  MyDataServer1
path:  132.34.35.12
path:  132.34.35.13
export: /vol/oradata1 mount: /mnt/oradata1
 
server: MyDataServer2
path:  NfsPath1
path:  NfsPath2
path:  NfsPath3
path:  NfsPath4
export: /vol/oradata2 mount: /mnt/oradata2
export: /vol/oradata3 mount: /mnt/oradata3
export: /vol/oradata4 mount: /mnt/oradata4
export: /vol/oradata5 mount: /mnt/oradata5

2. Steps to use ODM NFS Library.
  1. Change directory to $ORACLE_HOME/lib.
  2. Enter the following commands:
    mv libodm11.so libodm11.so_stub
    ln -s libnfsodm11.so libodm11.so
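After restarting the database with the ODM library in place, you can check whether Direct NFS is actually being used by querying the dNFS dynamic views (11g; rows appear once datafiles are accessed through Direct NFS):

SELECT * FROM v$dnfs_servers;
SELECT * FROM v$dnfs_files;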

Wednesday, 22 September 2010

TNS-12502: TNS:listener received no CONNECT_DATA from client

TNS-12502: "TNS:listener received no CONNECT_DATA from client"

*Cause:  No CONNECT_DATA was passed to the listener.

*Action: Check that the service name resolved from TNSNAMES.ORA has the CONNECT_DATA component of the connect descriptor.

If you are getting this error while registering database services with the listener using LOCAL_LISTENER or REMOTE_LISTENER, the problem might be that the current instance has not picked up the sqlnet.ora and tnsnames.ora settings; restarting the instance after setting the TNS_ADMIN parameter may resolve the problem.

Case Study: Implementing TCPS authentication
1. Once you have created all the configuration and the listener uses a TCPS connection, and if the listener is running on a non-default port, you have to set the LOCAL_LISTENER parameter to register your database with the new (TCPS) listener. When you create the TNS alias for LOCAL_LISTENER, you do not specify the CONNECT_DATA parameter, as it is not required for service registration; however, service registration will fail with TNS-12502 in listener.log until the database instance is bounced.
Bouncing/restarting the instance re-reads sqlnet.ora and tnsnames.ora with the new security configuration, and the listener is then able to register the new service on the TCPS port.
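A minimal sketch of the configuration described above; the alias, host, and port below are placeholders, not values from the original setup:

# tnsnames.ora -- alias for the non-default TCPS listener (no CONNECT_DATA needed)
LISTENER_TCPS =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCPS)(HOST = server1.example.com)(PORT = 2484)))

-- point the instance at the new listener, then bounce it so the new config is read
ALTER SYSTEM SET LOCAL_LISTENER='LISTENER_TCPS' SCOPE=BOTH;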



Monday, 20 September 2010

Install Weblogic Server 10.3.2.0 for Grid Control 11gR1

To install WebLogic Server 10.3.2 on Linux x64, download wls1032_generic.jar from oracle.com:


$lynx http://download.oracle.com/otn/nt/middleware/11g/wls/wls1032_generic.jar

$ java -jar wls1032_generic.jar


Extracting 0%....................................................................................................100%

Unable to instantiate GUI, defaulting to console mode. - Weblogic Install

$ ./wls1032_linux32.bin
Extracting 0%....................................................................................................100%


Unable to instantiate GUI, defaulting to console mode.
<---------------------------------------------------------- Oracle Installer - WebLogic 10.3.2.0 --------------------------------------------------------->
Welcome:
--------
This installer will guide you through the installation of WebLogic 10.3.2.0. Type "Next" or enter to proceed to the next prompt. If you want to change data entered previously, type "Previous". You may quit the installer at any time by typing "Exit".
Enter [Exit][Next]>

Solution:

The required Java is missing on the server, or the WebLogic installer failed to find it.
Install Sun JDK 1.6.0_18 or higher from the Sun/Oracle website:

http://www.oracle.com/technetwork/java/javase/downloads/jdk6-jsp-136632.html

If the problem persists, set JAVA_HOME and verify the Java version:

$ java -fullversion

java full version "1.6.0_21-b06"

and try installing again.
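A quick sketch of setting JAVA_HOME for the installer session (the JDK path below is a placeholder for wherever the JDK was installed):

export JAVA_HOME=/usr/java/jdk1.6.0_21
export PATH=$JAVA_HOME/bin:$PATH
java -jar wls1032_generic.jar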

Wednesday, 11 August 2010

Oracle Grid Control 11g Installation

Oracle Grid Control 11g no longer runs on Oracle Application Server; you have to install the WebLogic Server 10.3.2 middleware infrastructure before installing Grid Control 11g.

We can use the following sequence to install the Grid Control environment:

1. Install Oracle 11.2.0.1 database software and patches.
2. Create the database to be used for the Grid repository.
3. Install WebLogic Server 10.3.2 and patches.
4. Install the Grid Control software.
5. Install the Grid agent on all target servers.



1.1 Install RDBMS Software 11.2.0.1
1.2 Install the required patch:
NON RAC -  9654983  - Opatch apply - Oracle user
GI + RAC - 9655006  - Opatch auto  - root user

1.3 Create a listener on port 1521 (or another port of your choice).


2.1 Create the database using DBCA or the SQL*Plus command line.


2.2 If you have created the database with default parameters, change the following parameters to ensure the database meets the minimum requirements for OMS.

sqlplus "/ as sysdba"
alter system set session_cached_cursors=210 scope=spfile;
alter system set processes=550 scope=spfile;

Ensure the undo tablespace is at least 200 MB in size.

-- bounce the database so the parameters take effect
startup force;

2.3 Deconfigure the existing EM repository to avoid any failure later:

$ORACLE_HOME/bin/emca -deconfig dbcontrol db -repos drop [-SYS_PWD <sys password> -SYSMAN_PWD <sysman password>]

Enter the following Information :
Database SID:
Listener port number :
Password for SYS user:

3.1 WLS 10.3.2
Install WLS and JDK 1.6

3.2 Apply patches using $BASE/Middleware/utils/bsu:

$./bsu.sh

8990616, 9100465, and 9221722

4.0 Install the Management Server (Grid Control server).

4.1 Unzip the Grid Control installer.
4.2 Run the installer:

./runInstaller


Complete the install by running allroot.sh from Middleware/oms11g/allroot.sh.



5. Install Grid Agent on all the Servers.

cd /u03/app/product
chmod +x agentDownload.linux


./agentDownload.linux -b /u02/agent11gr2 -m servername.domain. -r 4489

64-bit Linux:

chmod +x agentDownload.linux_x64

./agentDownload.linux_x64 -b /u02/agent11gr2 -m servername.domain. -r 4489


Note: You might need to use the -y option so that you are prompted for the agent registration password.


Monday, 19 July 2010

Oracle 11g OCM - Oracle Database 11g Certified Master Exam

Exam Topics:
The Oracle Database 11g OCM exam covers eight scenarios over a two-day exam, spanning the following topics:
Server Configuration
[  ] Create the database
[  ] Determine and set sizing parameters for database structures
[  ] Create and manage temporary, permanent, and undo tablespaces
[  ] Stripe data files across multiple physical devices and locations
[  ] Configure the database environment to support optimal data access performance
[  ] Create and manage database configuration files
[  ] Create and manage bigfile tablespaces
[  ] Create and Manage a tablespace that uses NFS mounted file system file
[  ] Create and manage multiple network configuration files
[  ] Create and configure a listener
[  ] Configure the database instance to support shared server connections
[  ] Set up network tracing
[  ] Manage Oracle network processes
[  ] Configure the network environment to allow connections to multiple databases
[  ] Use configurationless connections
[  ] Use Grid Infrastructure to manage oracle databases and other resources.

Enterprise Manager Grid Control
[  ] Install and Patch Enterprise Manager Grid Control software
[  ] Configure the Enterprise Manager repository
[  ] Create Enterprise Manager Grid Control users
[  ] Use Enterprise Manager to modify a database configuration
[  ] Configure Enterprise Manager to modify a database availability
[  ] Create and manage jobs
[  ] Create and monitor alerts
[  ] Create notifications
[  ] Implement Grid Control and Database Control
[  ] Choose the appropriate tablespace type for the intended use
[  ] Create Scheduler jobs
[  ] Create schedules
[  ] Assign jobs to windows
[  ] Create programs
[  ] Create job classes
[  ] Install the Enterprise Manager Grid Control infrastructure
[  ] Deploy Enterprise Manager Grid Control agents
[  ] Configure Grid Control for business requirements 



Managing Database Availability
[  ] Maintain recovery catalogs
[  ] Configure Recovery Manager
[  ] Use Recovery Manager to perform database backups
[  ] Use Recovery Manager to perform complete database restore and recovery operations
[  ] Configure RMAN
[  ] Create different types of RMAN backups to cater for different performance and retention requirements
[  ] Set Flashback Database parameters
[  ] Configure a Fast Recovery Area
[  ] Perform various recovery operations using Flashback technology 

Data Management
[  ] Manage Materialized Views to improve rewrite and refresh performance
[  ] Configure and manage distributed materialized views
[  ] Create and Manage encrypted tablespaces
[  ] Manage Transport of tablespaces across platforms
[  ] Configure a schema to support a star transformation query
[  ] Administer external tables
[  ] Implement Data Pump export and import jobs for data transfer
[  ] Implement Data Pump to and from remote databases
[  ] Configure and use parallel execution for queries
[  ] Use SQL*Loader
[  ] Administer, manage and tune parallel execution

Data Warehouse Management
[  ] Administer partitioned tables and indexes using appropriate methods and keys
[  ] Perform partition maintenance operations
[  ] Maintain indexes on a partitioned table
[  ] Implement securefile LOB
[  ] Create and manage LOB segments
[  ] Implement fine-grained access control
[  ] Create and manage contexts
[  ] Administer flashback data archive and schema evolution
[  ] Configure and Manage Streams for Capture, Propagation and Apply.
Performance Management
[  ] Administer Resource Manager
[  ] Use Result Cache
[  ] Use multi column statistics
[  ] Gather statistics on a specific table without invalidating cursors
[  ] Use partitioned indexes
[  ] Administer and tune schema object to support various access methods
[  ] Interpret execution plan
[  ] Use SQL tuning tools and features
[  ] Use SQL Tuning Advisor
[  ] Use SQL Access Advisor
[  ] Use SQL Performance Analyzer
[  ] Configure baseline templates
[  ] Use SQL Plan Management feature
[  ] Implement instance caging

Real Application Clusters
[  ] Install Oracle Grid Infrastructure
[  ] Create ASM Disk Groups
[  ] Create and manage an ASM instance
[  ] Creating ACFS File System
[  ] Start, stop, configure, and administer Oracle Grid Infrastructure
[  ] Install the Oracle Database 11gR2 software
[  ] Configure ASM for the shared disks and create a clustered database
[  ] Implement ASM failure groups
[  ] Configure archiving
[  ] Configure Services using both Manual and Policy Managed Methods


Data Guard
[  ] Create Physical Standby Database with real-time apply.
[  ] Configure the data guard environment to reduce overheads of fast incremental backups on the primary database
[  ] Configure the Observer
[  ] Switchover and switch back
[  ] configure connect time failover
[  ] Convert the standby to a snapshot standby
[  ] Configure archivelog deletion policy for the dataguard configuration

http://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=41&p_org_id=&lang=&p_exam_id=11gOCM

Oracle Database 11g Certified Master Upgrade Exam

Exam Topics :

The Oracle 11g OCM upgrade exam covers four tasks, covering the following topics.
Database, RMAN, EM and Network Configuration
[  ]Configure server-side network
[  ]Configure client-side network
[  ]Create and Manage encrypted tablespaces
[  ]Create and Manage a tablespace that uses NFS mounted file system file
[  ]Set up ADR file based repository
[  ]Perform cold database backup
[  ]Manage user accounts and use case sensitive passwords
[  ]Use OPatch to install a patch
[  ]Install and configure EM Agent
[  ]Create Base Recovery Catalog
[  ]Configure RMAN
[  ]Perform multisection backup of a datafile
[  ]Create an Archival Backup

Data Guard
[  ]Create Physical Standby Database with real-time apply.
[  ]Configure the data guard environment to reduce overheads of fast incremental backups on the primary database
[  ]Configure the Observer
[  ]Switchover and switch back
[  ]Convert the standby to a snapshot standby
[  ]Configure archivelog deletion policy for the dataguard configuration

Data and Data Warehouse Management
[  ]Troubleshoot fast materialized views to fast refresh and query rewrite
[  ]Add a tablespace by using Transportable Tablespace Feature of Data Pump Import (cross platform transportable tablespace)
[  ]Configure a schema to support a star transformation query
[  ]Configure and use parallel execution for queries
[  ]Use and access SecureFile LOBS
[  ]Create partitioned tables (includes reference and interval partitioning)
[  ]Configure Flashback Data Archive
[  ]Use Oracle Streams to capture and propagate changes in a table

Performance Management
[  ]Set up and configure Resource Manager to control active sessions, number of I/Os, execution time, etc.
[  ]Use Result Cache
[  ]Use multi column statistics
[  ]Gather statistics on a specific table without invalidating cursors
[  ]Use partitioned indexes
[  ]Use SQL Tuning Advisor
[  ]Use SQL Access Advisor
[  ]Configure baseline templates
[  ]Use SQL Plan Management feature
[  ]Replay a captured workload


Use Result Cache

  • Systems with large amounts of memory can take advantage of the result cache to improve response times of repetitive queries.
  • The result cache stores the results of SQL queries and PL/SQL functions in an area called the Result Cache Memory in the shared pool. When these queries and functions are executed repeatedly, the results are retrieved directly from the cache memory, which gives a faster response time. The cached results become invalid when data in the dependent database objects is modified. The use of the result cache is a database-wide decision; the result cache itself is instance-specific and can be sized differently on different instances.
  • The documentation states "To disable the result cache in a cluster, you must explicitly set the RESULT_CACHE_MAX_SIZE initialization parameter to 0 during every instance startup", but that is not quite the case: on 11.2.0.1, setting result_cache_max_size=0 with scope=both and sid='*' disables it permanently.
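A sketch of the statement described in the last bullet:

ALTER SYSTEM SET result_cache_max_size = 0 SCOPE=BOTH SID='*';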


Using the Result Cache Mode

You can control the result cache mechanism using the RESULT_CACHE_MODE parameter in the server initialization (init.ora) file. This parameter can be set at the system, session, or table level to the following values:
  • MANUAL
    In MANUAL mode, the results of a query can only be stored in the result cache by using the /*+ result_cache */ hint.
  • FORCE
    In FORCE mode, all results are stored in the result cache. When set to FORCE, all results use the cache, if possible. You can use the no_result_cache hint to bypass the cache when using the FORCE mode.
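For example, the mode can be switched at the session or system level (and, in 11.2, per table with a RESULT_CACHE annotation):

ALTER SESSION SET RESULT_CACHE_MODE = FORCE;
ALTER SYSTEM  SET RESULT_CACHE_MODE = MANUAL;
ALTER TABLE sh.order_items RESULT_CACHE (MODE FORCE);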






If the RESULT_CACHE_MODE parameter is set to MANUAL, and you want to store the results of a query in the result cache, then you must explicitly use the /*+ result_cache */ hint in your query, as shown in the following example:








SELECT /*+ result_cache */ deptno, avg(sal) FROM emp GROUP BY deptno;

When you execute the query, the database looks up the result cache to see whether the result for the query already exists in the cache. If it exists, the result is retrieved directly from the cache. Otherwise, the query is executed, and the result is returned as output and stored in the result cache.
If the RESULT_CACHE_MODE parameter is set to FORCE, and you do not want to include the result of the query in the result cache, then you must use the /*+ no_result_cache */ hint in your query, as shown in the following example:

SELECT /*+ no_result_cache */ deptno, avg(sal) FROM emp GROUP BY deptno;



Restrictions on Using the SQL Query Result Cache
You cannot cache results when you use the following database objects or functions in your SQL query:
  • Dictionary and temporary tables
  • Sequence CURRVAL and NEXTVAL pseudo columns
  • The SQL functions CURRENT_DATE, CURRENT_TIMESTAMP, LOCAL_TIMESTAMP, USERENV/SYS_CONTEXT (with non-constant variables), SYS_GUID, SYSDATE, and SYS_TIMESTAMP
  • Non-deterministic PL/SQL functions
Cached results are parameterized with the parameter values if any of the following constructs are used in the query:
  • Bind variables.
  • The following SQL functions: DBTIMEZONE, SESSIONTIMEZONE, USERENV/SYS_CONTEXT (with constant variables), UID, and USER.
  • NLS parameters.


SQL> set linesize 180
SQL> set timing on
SQL> set autot on explain stat

SQL> SELECT
  2  PRODUCT_ID,sum(QUANTITY),avg(sales_amount)
  3  from sh.order_items
  4  group by product_id;

PRODUCT_ID SUM(QUANTITY) AVG(SALES_AMOUNT)
---------- ------------- -----------------
        52            80               100
        72             6             72000
        32             6               700
         1           134             14700
         6             2                70
         2            45        2066.66667
        21             1               900
        90            18             16400
        87             4              6200
        12            40             12000
        74            12              8000

PRODUCT_ID SUM(QUANTITY) AVG(SALES_AMOUNT)
---------- ------------- -----------------
         8            20              8400
        45             2                80

13 rows selected.

Elapsed: 00:00:26.37

Execution Plan
----------------------------------------------------------
Plan hash value: 2831044826

-------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                | Name        | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
-------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |             |    13 |   130 |     2   (0)| 00:00:01 |       |       |        |      |            |
|   1 |  PX COORDINATOR          |             |       |       |            |          |       |       |        |      |            |
|   2 |   PX SEND QC (RANDOM)    | :TQ10001    |    13 |   130 |     2   (0)| 00:00:01 |       |       |  Q1,01 | P->S | QC (RAND)  |
|   3 |    HASH GROUP BY         |             |    13 |   130 |     2   (0)| 00:00:01 |       |       |  Q1,01 | PCWP |            |
|   4 |     PX RECEIVE           |             |    13 |   130 |     2   (0)| 00:00:01 |       |       |  Q1,01 | PCWP |            |
|   5 |      PX SEND HASH        | :TQ10000    |    13 |   130 |     2   (0)| 00:00:01 |       |       |  Q1,00 | P->P | HASH       |
|   6 |       HASH GROUP BY      |             |    13 |   130 |     2   (0)| 00:00:01 |       |       |  Q1,00 | PCWP |            |
|   7 |        PX BLOCK ITERATOR |             |    19 |   190 |     2   (0)| 00:00:01 |     1 |    13 |  Q1,00 | PCWC |            |
|   8 |         TABLE ACCESS FULL| ORDER_ITEMS |    19 |   190 |     2   (0)| 00:00:01 |     1 |    13 |  Q1,00 | PCWP |            |
-------------------------------------------------------------------------------------------------------------------------------------


Statistics
----------------------------------------------------------
        963  recursive calls
          0  db block gets
         69  consistent gets
          0  physical reads
          0  redo size
        923  bytes sent via SQL*Net to client
        524  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
         13  rows processed


SQL>

=====================================With Result Cache Hint =========================

SQL> select /*+ result_cache */
  2  PRODUCT_ID,sum(QUANTITY),avg(sales_amount) from sh.order_items group by product_id;
PRODUCT_ID SUM(QUANTITY) AVG(SALES_AMOUNT)
---------- ------------- -----------------
        32             6               700
         1           134             14700
         2            45        2066.66667
         6             2                70
        87             4              6200
        52            80               100
        72             6             72000
        21             1               900
         8            20              8400
        45             2                80
        90            18             16400

PRODUCT_ID SUM(QUANTITY) AVG(SALES_AMOUNT)
---------- ------------- -----------------
        74            12              8000
        12            40             12000

13 rows selected.

Elapsed: 00:00:00.00

Execution Plan
----------------------------------------------------------
Plan hash value: 2831044826

-----------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                 | Name                       | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |                            |    13 |   130 |     2   (0)| 00:00:01 |       |       |        |      |            |
|   1 |  RESULT CACHE             | 7mk3vfypbbvp90axh2xw10pzja |       |       |            |          |       |       |        |      |            |
|   2 |   PX COORDINATOR          |                            |       |       |            |          |       |       |        |      |            |
|   3 |    PX SEND QC (RANDOM)    | :TQ10001                   |    13 |   130 |     2   (0)| 00:00:01 |       |       |  Q1,01 | P->S | QC (RAND)  |
|   4 |     HASH GROUP BY         |                            |    13 |   130 |     2   (0)| 00:00:01 |       |       |  Q1,01 | PCWP |            |
|   5 |      PX RECEIVE           |                            |    13 |   130 |     2   (0)| 00:00:01 |       |       |  Q1,01 | PCWP |            |
|   6 |       PX SEND HASH        | :TQ10000                   |    13 |   130 |     2   (0)| 00:00:01 |       |       |  Q1,00 | P->P | HASH       |
|   7 |        HASH GROUP BY      |                            |    13 |   130 |     2   (0)| 00:00:01 |       |       |  Q1,00 | PCWP |            |
|   8 |         PX BLOCK ITERATOR |                            |    19 |   190 |     2   (0)| 00:00:01 |     1 |    13 |  Q1,00 | PCWC |            |
|   9 |          TABLE ACCESS FULL| ORDER_ITEMS                |    19 |   190 |     2   (0)| 00:00:01 |     1 |    13 |  Q1,00 | PCWP |            |
-----------------------------------------------------------------------------------------------------------------------------------------------------

Result Cache Information (identified by operation id):
------------------------------------------------------

   1 - column-count=3; dependencies=(SH.ORDER_ITEMS); name="select /*+ result_cache */
PRODUCT_ID,sum(QUANTITY),avg(sales_amount) from sh.order_items group by product_id"



Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
          0  consistent gets
          0  physical reads
          0  redo size
        926  bytes sent via SQL*Net to client
        524  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
         13  rows processed




====================
session 2 - update -

update sh.order_items set QUANTITY=100 where ORDER_ID=5 and PRODUCT_ID=1;

1 row updated.

SQL> commit;


=====================
Session 1:
SQL> /

PRODUCT_ID SUM(QUANTITY) AVG(SALES_AMOUNT)
---------- ------------- -----------------
        32             6               700
        52            80               100
         1           194             14700
        72             6             72000
         6             2                70
         2            45        2066.66667
        87             4              6200
        90            18             16400
        21             1               900
        45             2                80
        12            40             12000

PRODUCT_ID SUM(QUANTITY) AVG(SALES_AMOUNT)
---------- ------------- -----------------
        74            12              8000
         8            20              8400

13 rows selected.

Elapsed: 00:00:00.01

Execution Plan
----------------------------------------------------------
Plan hash value: 2831044826

-----------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                 | Name                       | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |                            |    13 |   130 |     2   (0)| 00:00:01 |       |       |        |      |            |
|   1 |  RESULT CACHE             | 7mk3vfypbbvp90axh2xw10pzja |       |       |            |          |       |       |        |      |            |
|   2 |   PX COORDINATOR          |                            |       |       |            |          |       |       |        |      |            |
|   3 |    PX SEND QC (RANDOM)    | :TQ10001                   |    13 |   130 |     2   (0)| 00:00:01 |       |       |  Q1,01 | P->S | QC (RAND)  |
|   4 |     HASH GROUP BY         |                            |    13 |   130 |     2   (0)| 00:00:01 |       |       |  Q1,01 | PCWP |            |
|   5 |      PX RECEIVE           |                            |    13 |   130 |     2   (0)| 00:00:01 |       |       |  Q1,01 | PCWP |            |
|   6 |       PX SEND HASH        | :TQ10000                   |    13 |   130 |     2   (0)| 00:00:01 |       |       |  Q1,00 | P->P | HASH       |
|   7 |        HASH GROUP BY      |                            |    13 |   130 |     2   (0)| 00:00:01 |       |       |  Q1,00 | PCWP |            |
|   8 |         PX BLOCK ITERATOR |                            |    19 |   190 |     2   (0)| 00:00:01 |     1 |    13 |  Q1,00 | PCWC |            |
|   9 |          TABLE ACCESS FULL| ORDER_ITEMS                |    19 |   190 |     2   (0)| 00:00:01 |     1 |    13 |  Q1,00 | PCWP |            |
-----------------------------------------------------------------------------------------------------------------------------------------------------

Result Cache Information (identified by operation id):
------------------------------------------------------

   1 - column-count=3; dependencies=(SH.ORDER_ITEMS); name="select /*+ result_cache */
PRODUCT_ID,sum(QUANTITY),avg(sales_amount) from sh.order_items group by product_id"



Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          0  consistent gets
          0  physical reads
          0  redo size
        926  bytes sent via SQL*Net to client
        524  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
         13  rows processed

SQL> /

PRODUCT_ID SUM(QUANTITY) AVG(SALES_AMOUNT)
---------- ------------- -----------------
         1           134             14700
        32             6               700
        72             6             72000
        52            80               100
         6             2                70
         2            45        2066.66667
         8            20              8400
        21             1               900
        45             2                80
        90            18             16400
        87             4              6200

PRODUCT_ID SUM(QUANTITY) AVG(SALES_AMOUNT)
---------- ------------- -----------------
        12            40             12000
        74            12              8000

13 rows selected.

Elapsed: 00:00:06.29

Execution Plan
----------------------------------------------------------
Plan hash value: 2831044826

-----------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                 | Name                       | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |                            |    13 |   130 |     2   (0)| 00:00:01 |       |       |        |      |            |
|   1 |  RESULT CACHE             | 7mk3vfypbbvp90axh2xw10pzja |       |       |            |          |       |       |        |      |            |
|   2 |   PX COORDINATOR          |                            |       |       |            |          |       |       |        |      |            |
|   3 |    PX SEND QC (RANDOM)    | :TQ10001                   |    13 |   130 |     2   (0)| 00:00:01 |       |       |  Q1,01 | P->S | QC (RAND)  |
|   4 |     HASH GROUP BY         |                            |    13 |   130 |     2   (0)| 00:00:01 |       |       |  Q1,01 | PCWP |            |
|   5 |      PX RECEIVE           |                            |    13 |   130 |     2   (0)| 00:00:01 |       |       |  Q1,01 | PCWP |            |
|   6 |       PX SEND HASH        | :TQ10000                   |    13 |   130 |     2   (0)| 00:00:01 |       |       |  Q1,00 | P->P | HASH       |
|   7 |        HASH GROUP BY      |                            |    13 |   130 |     2   (0)| 00:00:01 |       |       |  Q1,00 | PCWP |            |
|   8 |         PX BLOCK ITERATOR |                            |    19 |   190 |     2   (0)| 00:00:01 |     1 |    13 |  Q1,00 | PCWC |            |
|   9 |          TABLE ACCESS FULL| ORDER_ITEMS                |    19 |   190 |     2   (0)| 00:00:01 |     1 |    13 |  Q1,00 | PCWP |            |
-----------------------------------------------------------------------------------------------------------------------------------------------------

Result Cache Information (identified by operation id):
------------------------------------------------------

   1 - column-count=3; dependencies=(SH.ORDER_ITEMS); name="select /*+ result_cache */
PRODUCT_ID,sum(QUANTITY),avg(sales_amount) from sh.order_items group by product_id"



Statistics
----------------------------------------------------------
        576  recursive calls
          0  db block gets
         69  consistent gets
          0  physical reads
          0  redo size
        923  bytes sent via SQL*Net to client
        524  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
         13  rows processed

-----

SQL> /

PRODUCT_ID SUM(QUANTITY) AVG(SALES_AMOUNT)
---------- ------------- -----------------
         1           134             14700
        32             6               700
        72             6             72000
        52            80               100
         6             2                70
         2            45        2066.66667
         8            20              8400
        21             1               900
        45             2                80
        90            18             16400
        87             4              6200

PRODUCT_ID SUM(QUANTITY) AVG(SALES_AMOUNT)
---------- ------------- -----------------
        12            40             12000
        74            12              8000

13 rows selected.

Elapsed: 00:00:00.05

Execution Plan
----------------------------------------------------------
Plan hash value: 2831044826

-----------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                 | Name                       | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |                            |    13 |   130 |     2   (0)| 00:00:01 |       |       |        |      |            |
|   1 |  RESULT CACHE             | 7mk3vfypbbvp90axh2xw10pzja |       |       |            |          |       |       |        |      |            |
|   2 |   PX COORDINATOR          |                            |       |       |            |          |       |       |        |      |            |
|   3 |    PX SEND QC (RANDOM)    | :TQ10001                   |    13 |   130 |     2   (0)| 00:00:01 |       |       |  Q1,01 | P->S | QC (RAND)  |
|   4 |     HASH GROUP BY         |                            |    13 |   130 |     2   (0)| 00:00:01 |       |       |  Q1,01 | PCWP |            |
|   5 |      PX RECEIVE           |                            |    13 |   130 |     2   (0)| 00:00:01 |       |       |  Q1,01 | PCWP |            |
|   6 |       PX SEND HASH        | :TQ10000                   |    13 |   130 |     2   (0)| 00:00:01 |       |       |  Q1,00 | P->P | HASH       |
|   7 |        HASH GROUP BY      |                            |    13 |   130 |     2   (0)| 00:00:01 |       |       |  Q1,00 | PCWP |            |
|   8 |         PX BLOCK ITERATOR |                            |    19 |   190 |     2   (0)| 00:00:01 |     1 |    13 |  Q1,00 | PCWC |            |
|   9 |          TABLE ACCESS FULL| ORDER_ITEMS                |    19 |   190 |     2   (0)| 00:00:01 |     1 |    13 |  Q1,00 | PCWP |            |
-----------------------------------------------------------------------------------------------------------------------------------------------------

Result Cache Information (identified by operation id):
------------------------------------------------------

   1 - column-count=3; dependencies=(SH.ORDER_ITEMS); name="select /*+ result_cache */
PRODUCT_ID,sum(QUANTITY),avg(sales_amount) from sh.order_items group by product_id"



Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          0  consistent gets
          0  physical reads
          0  redo size
        923  bytes sent via SQL*Net to client
        524  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
         13  rows processed




SQL> execute dbms_result_cache.memory_report
R e s u l t   C a c h e   M e m o r y   R e p o r t
[Parameters]
Block Size          = 1K bytes
Maximum Cache Size  = 10496K bytes (10496 blocks)
Maximum Result Size = 524K bytes (524 blocks)
[Memory]
Total Memory = 176680 bytes [0.015% of the Shared Pool]
... Fixed Memory = 12704 bytes [0.001% of the Shared Pool]
... Dynamic Memory = 163976 bytes [0.014% of the Shared Pool]
....... Overhead = 131208 bytes
....... Cache Memory = 32K bytes (32 blocks)
........... Unused Memory = 26 blocks
........... Used Memory = 6 blocks
............... Dependencies = 2 blocks (2 count)
............... Results = 4 blocks
................... SQL     = 2 blocks (2 count)
................... Invalid = 2 blocks (2 count)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.19
SQL> begin
  2    dbms_result_cache.flush;
  3  end;
  4  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00


--- Invalidate an individual table/object rather than the full result cache
SQL> begin
  2    dbms_result_cache.invalidate('ARUP','CUSTOMERS');
  3  end;
  4  /

The procedure takes the object owner and object name as its arguments.
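To see what is currently in the cache and whether entries are still valid, the cache contents can be listed from the dynamic view (a quick check, trimmed to the useful columns):

SELECT type, status, name FROM v$result_cache_objects;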