Archive for July, 2007

Oracle Critical Patch Update – July 2007

Last Friday, on a test box, I installed the July 2007 Oracle Critical Patch Update.

The target database is a 10.2.0.3 database running on Oracle Enterprise Linux 4. No CPU had yet been installed on this database.

I ran into the following error while installing the CPU:

[oracle@orcltest 6079591]$ opatch napply ./6079591 -skip_subset -skip_duplicate
Invoking OPatch 10.2.0.3.0

Oracle interim Patch Installer version 10.2.0.3.0
Copyright (c) 2005, Oracle Corporation. All rights reserved.

Syntax Error... Unrecognized Command or Option: failed to parse arguments "unknown option '-skip_subset'"
Please use the option 'opatch -help' to get correct syntax

OPatch failed with error code 14

It turns out I read through the readme file a bit too fast, and skipped the following section:

  • 3.2 OPatch Utility Information

Section 3.2 specifies that OPatch utility release 10.2.0.3.2 or later must be used.

OPatch 10.2.0.3.2 can be downloaded from Oracle Metalink as patch 4898608.
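Before applying the CPU, it is worth confirming which OPatch release is actually in place; a quick check (assuming OPatch lives under $ORACLE_HOME/OPatch):

[oracle@orcltest ~]$ $ORACLE_HOME/OPatch/opatch version

If the reported release is lower than 10.2.0.3.2, install patch 4898608 into the Oracle home first.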

I strongly suggest reading the July 2007 CPU readme file carefully, since it also introduces new requirements, such as:

If you have already installed CPUJan2007, CPUApr2007, or a CPU merge patch, the patch needs to be rolled back first.
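Rolling back a previously installed CPU is also done with OPatch; a minimal sketch, where the patch ID must be taken from your own inventory listing rather than from this example:

[oracle@orcltest ~]$ cd $ORACLE_HOME/OPatch
[oracle@orcltest OPatch]$ ./opatch lsinventory
[oracle@orcltest OPatch]$ ./opatch rollback -id <patch_id>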

Also, a new database CPU patch format, napply CPU (pronounced "n apply CPU"), is introduced with the July 2007 CPU.

This means no more:

opatch apply

Instead, use:

opatch napply ./6079591 -skip_subset -skip_duplicate

(-skip_duplicate tells OPatch to skip patches that are already installed in the Oracle home, and -skip_subset to skip patches that are subsets of patches already installed.)

Useful links for July’s CPU:

Oracle Critical Patch Update – July 2007

Critical Patch Update Availability Information for Oracle Server and Middleware Products

Critical Patch Update – Introduction to Database Napply CPU Patches

Using Enterprise Manager to achieve grid automation with deployment procedures

Nice-to-have links for Oracle CPUs:

Security Technology Center

RSS feed for Oracle Security Alerts

DataCenter Automation and Configuration Management


SQL Tuning Advisor

Even though Oracle Enterprise Manager 10g Grid Control is a very handy tool, the SQL Tuning Advisor can also be used from SQL*Plus. So if you don't have access to OEM, or if you are simply a command-line fan, here is the alternative to OEM for running the SQL Tuning Advisor.

Below is the execution output of a SQL*Plus script; the script (available below) simply prompts for the query to be tuned.

SQL> @tune

Enter value for query: select a.empno, a.ename, b.dname from emp a, dept b where a.deptno = b.deptno and a.ename = ''SMITH''

This script executes the SQL Tuning Advisor and provides the same recommendations returned by OEM.

Here is the output:

TUNING_TASK
-------------------------------------------------------------------------------
GENERAL INFORMATION SECTION
-------------------------------------------------------------------------------
Tuning Task Name         : TASK_33842
Tuning Task Owner        : STEEVE
Scope                    : COMPREHENSIVE
Time Limit(seconds)      : 1800
Completion Status        : COMPLETED
Started at               : 07/24/2007 09:33:08
Completed at             : 07/24/2007 09:33:08
Number of Index Findings : 1

-------------------------------------------------------------------------------
Schema Name: STEEVE
SQL ID     : 8uu071up6vk3q
SQL Text   : select a.empno, a.ename, b.dname from emp a, dept b where
             a.deptno = b.deptno and a.ename = 'SMITH'

-------------------------------------------------------------------------------
FINDINGS SECTION (1 finding)
-------------------------------------------------------------------------------

1- Index Finding (see explain plans section below)
--------------------------------------------------
The execution plan of this statement can be improved by creating one or more
indices.

Recommendation (estimated benefit: 100%)
----------------------------------------
- Consider running the Access Advisor to improve the physical schema design
  or creating the recommended index.
  create index STEEVE.IDX$$_84320001 on STEEVE.EMP("ENAME");

Rationale
---------
Creating the recommended indices significantly improves the execution plan
of this statement. However, it might be preferable to run "Access Advisor"
using a representative SQL workload as opposed to a single statement. This
will allow to get comprehensive index recommendations which takes into
account index maintenance overhead and additional space consumption.

-------------------------------------------------------------------------------
EXPLAIN PLANS SECTION
-------------------------------------------------------------------------------

1- Original
-----------
Plan hash value: 3487251775

----------------------------------------------------------------------------------------
| Id  | Operation                    | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |         |     1 |    26 |     5   (0)| 00:00:01 |
|   1 |  NESTED LOOPS                |         |     1 |    26 |     5   (0)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL          | EMP     |     1 |    13 |     4   (0)| 00:00:01 |
|   3 |   TABLE ACCESS BY INDEX ROWID| DEPT    |     1 |    13 |     1   (0)| 00:00:01 |
|*  4 |    INDEX UNIQUE SCAN         | DEPT_PK |     1 |       |     0   (0)| 00:00:01 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("A"."ENAME"='SMITH')
   4 - access("A"."DEPTNO"="B"."DEPTNO")

2- Using New Indices
--------------------
Plan hash value: 2308654808

-----------------------------------------------------------------------------------------------
| Id  | Operation                    | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                |     1 |    26 |     3   (0)| 00:00:01 |
|   1 |  NESTED LOOPS                |                |     1 |    26 |     3   (0)| 00:00:01 |
|   2 |   TABLE ACCESS BY INDEX ROWID| EMP            |     1 |    13 |     2   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN          | IDX$$_84320001 |     1 |       |     1   (0)| 00:00:01 |
|   4 |   TABLE ACCESS BY INDEX ROWID| DEPT           |     1 |    13 |     1   (0)| 00:00:01 |
|*  5 |    INDEX UNIQUE SCAN         | DEPT_PK        |     1 |       |     0   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("A"."ENAME"='SMITH')
   5 - access("A"."DEPTNO"="B"."DEPTNO")

-------------------------------------------------------------------------------

The scripts can be downloaded here: tune_advisor.txt and tune.txt
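In case the links go stale, here is a minimal sketch of the same idea using the DBMS_SQLTUNE API; the SQL*Plus settings and task options below are my assumptions, not necessarily the exact contents of tune.txt:

-- Sketch of a tune.sql wrapper: prompt for a query, run the SQL Tuning
-- Advisor on it, and print the report. Single quotes inside the query must
-- be doubled ('') since it is substituted into a string literal, as in the
-- prompt example above.
set long 1000000 linesize 100 pagesize 200

accept sql_query prompt 'Enter value for query: '

variable task_name varchar2(64)

begin
  :task_name := dbms_sqltune.create_tuning_task(
                   sql_text   => '&sql_query',
                   scope      => dbms_sqltune.scope_comprehensive,
                   time_limit => 1800);
  dbms_sqltune.execute_tuning_task(:task_name);
end;
/

select dbms_sqltune.report_tuning_task(:task_name) tuning_task from dual;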

Note: The user running the script needs the ADVISOR privilege.
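Granting it is a one-liner when run as a DBA (the grantee below is just the user from this example):

SQL> grant advisor to steeve;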

References:

SQL Tuning Advisor

Using SQL Tuning Advisor APIs

Required License


Job Notification Package

One interesting feature of the Oracle Scheduler is the job notification package.

The job notification package can be downloaded here: Oracle Scheduler

It is very easy to install: simply run the job_notification.sql script and provide the outgoing e-mail SMTP server and port number.
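A minimal install session, assuming a DBA connection (the SMTP server and port prompts come from the script itself):

SQL> @job_notification.sql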

In order to get notified about a job’s completion, I ran the following command:

EXEC add_job_email_notification('RUN_SERCICE_120DAYS_JOB','my_address@myspace.com','JOB_SUCCEEDED');

Unfortunately, this command returned an error, stating that the job name was too long for the character buffer:

*
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error: character string buffer too small
ORA-06512: at "SYS.ADD_JOB_EMAIL_NOTIFICATION", line 248

Since the job name is already in place and used across all of the environments involved, I wanted to keep it as is. The solution is the following:

Simply increase the size of the "notifier_job_name" variable from VARCHAR2(30) to VARCHAR2(60) in the "add_job_email_notification" procedure. That simple change did the trick!
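The error makes sense given that the procedure derives an internal notifier job name from the job name passed in, so even a job name shorter than 30 characters can overflow the VARCHAR2(30) buffer once decorated. The edit itself is a one-line change inside job_notification.sql (the declaration layout below is paraphrased, not a verbatim excerpt):

-- in procedure add_job_email_notification, change:
--   notifier_job_name VARCHAR2(30);
-- to:
notifier_job_name VARCHAR2(60);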

There is an OTN Thread about this same issue.

Control File Autobackup Not Copied to the Flash Recovery Area (FRA)

During a migration exercise, a database was upgraded from 10.2.0.1 to 10.2.0.3, and the OS was moved from Fedora Core 4 (32-bit) to Oracle Enterprise Linux 4 (64-bit).

The original database didn't have a Flash Recovery Area configured because of the limited disk space at the time; on the new box, disk space is no longer an issue, so the FRA has been enabled.

After the migration and the completion of the initial manual backup, it was time to test the RMAN backups. I was surprised to realize that the controlfile was not copied into the FRA backup structure along with the backupset, even though controlfile autobackup was configured.

In order to troubleshoot, I issued the following RMAN command:

RMAN> backup current controlfile;

The controlfile has been copied to the following directory:
/opt/oracle/app/oracle/product/10.2.0/db_1/dbs/

Also, there was no indication of any wrong value in the output of the RMAN SHOW ALL; command.

So I did some quick research on Metalink and found the following solution in Note 302153.1.

Simply running the following RMAN command solved the problem:

CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK CLEAR;
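The underlying cause, per the note: once a controlfile autobackup format has been explicitly configured, even when it matches the default '%F', it overrides the FRA as the destination and the autobackup lands under $ORACLE_HOME/dbs; CLEAR removes the explicit setting so the FRA is used again. The current value can be checked straight from the RMAN prompt:

RMAN> SHOW CONTROLFILE AUTOBACKUP FORMAT;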

On the next RMAN backup, the controlfile autobackup was written to the right directory:

Starting Control File and SPFILE Autobackup at 08-JUL-07
piece handle=/OraArch1/ORCL/autobackup/2007_07_08/o1_mf_s_627424692_3930vocb_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 08-JUL-07


Data Pump and Streams Pool

Last week, while exporting a couple of tables for a BI project using the Data Pump utility, the following error occurred:

ORA-31626: job does not exist
ORA-31637: cannot create job SYS_EXPORT_SCHEMA_01 for user SCOTT
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPV$FT_INT", line 600
ORA-39080: failed to create queues "KUPC$C_1_20070604114151" and "KUPC$S_1_20070604114151" for Data Pump job
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPC$QUE_INT", line 1580
ORA-00832: no streams pool created and cannot automatically create one

This database is not set up with the Automatic Shared Memory Management (ASMM) feature; all of the memory parameters are set manually.

You might wonder why. The reason this database does not use ASMM is that it resides on a 32-bit OS and uses ramfs and HugePages to enable a 10 GB SGA. When using ramfs and HugePages, each memory parameter must be set manually; I will discuss this setup in detail next week.

Also, the "streams_pool_size" parameter was not set, since Oracle Streams is not used on this database, and I was surprised to see that the Data Pump utility needs a Streams pool to work! (The KUPC$ queues in the error stack are the Advanced Queuing queues Data Pump creates to coordinate its master and worker processes, and it is those queues that require the Streams pool.)

The Oracle documentation (Streams_pool_size Documentation) explains the following Oracle Streams behavior: "If both the STREAMS_POOL_SIZE and the SGA_TARGET initialization parameters are set to 0 (zero), then, by default, the first use of Streams in a database transfers an amount of memory equal to 10% of the shared pool from the buffer cache to the Streams pool."

As we can see, Oracle did its homework when it comes to the use of Streams with no explicitly allocated memory (streams_pool_size). I wonder why the Data Pump utility doesn't work by the same logic: if no Streams pool is set up, have Data Pump automatically grab the memory it needs from the shared_pool or the buffer_cache. Maybe a new feature in 11g?

Once the streams_pool_size parameter was configured, Data Pump worked just fine.
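For the record, the fix amounts to something like the following; the 48M figure is purely illustrative and should be sized to your own Data Pump workload, and since every memory parameter on this box is fixed, the change goes to the spfile and takes effect after a restart:

SQL> alter system set streams_pool_size = 48M scope=spfile;
SQL> shutdown immediate
SQL> startup

The allocation can then be verified with:

SQL> select pool, round(sum(bytes)/1024/1024) mb from v$sgastat where pool = 'streams pool' group by pool;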
