Steeve Bisson DBA


Oracle OpenWorld, here I come



ORA-01555: snapshot too old when using flashback query

Last week, a fellow DBA needed help with an ORA-01555 error while using the flashback query feature in Oracle 10g. On his database, the automatic undo management feature was enabled.

He wanted to retrieve a table’s data from 12 hours earlier because of a user’s mistake.
The source of the issue was that the UNDO tablespace was inappropriately sized and that the UNDO_RETENTION parameter was left at its default value of 900 (15 minutes). UNDO_RETENTION Reference
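For context, the kind of flashback query he was attempting looks roughly like this (table, column, and value are illustrative, not from the actual incident):

```sql
-- Hypothetical example: read a table as it stood 12 hours ago.
SELECT *
FROM   scott.emp AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '12' HOUR)
WHERE  ename = 'SMITH';

-- Or rewind the rows in place (requires row movement):
ALTER TABLE scott.emp ENABLE ROW MOVEMENT;
FLASHBACK TABLE scott.emp TO TIMESTAMP (SYSTIMESTAMP - INTERVAL '12' HOUR);
```

Both forms depend entirely on the undo data still being available, which is exactly where this incident went wrong.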

Since the database is an “OLTP” system with hundreds of transactions per minute, the undo tablespace did not have the data needed to flash the table back to the desired time.
The only solution was to restore the whole database from the last valid backup onto another server and then export the desired information. The process took an afternoon, but could have taken 5 minutes if the proper settings were in place.

The ORA-01555 error can occur when the UNDO_RETENTION parameter is improperly set, and the UNDO_RETENTION parameter is only honored if the current undo tablespace has enough space. ORA-01555 Using Automatic Undo Management – Causes and Solutions

Here is a script estimate_undo.txt to help you size the undo tablespace appropriately with the desired UNDO_RETENTION parameter value.

sys@ORCL> @estimate_undo

Current UNDO_RETENTION value

Specify the desired UNDO_RETENTION in seconds. Ex: 86400=24 hours
Enter value in seconds: 86400

Using 86400 seconds of UNDO_RETENTION

Megs needed for UNDO

Current max Undo Size in MB
--------------------------- --------------------------------------
                       2500 /OraData1/oradata/orcl/undotbs01.dbf
                       2500 /OraData2/oradata/orcl/undotbs02.dbf

In this case, the undo tablespace needs to be enlarged by 4.5 GB.
The undo_retention parameter needs to be set to 86400.

Alter database datafile '/OraData1/oradata/orcl/undotbs01.dbf' resize 4750m;
Alter database datafile '/OraData2/oradata/orcl/undotbs02.dbf' resize 4750m;
Alter system set undo_retention = 86400 scope=both;
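For the curious, the core of such a sizing script is typically the well-known formula: undo space = UNDO_RETENTION × undo blocks generated per second × block size. Here is a minimal sketch of that calculation (my own approximation, not necessarily identical to estimate_undo.txt):

```sql
-- Estimate the undo space (in MB) needed to honor a desired retention.
-- &undo_retention is the target in seconds (e.g. 86400 for 24 hours).
SELECT (&undo_retention * ups.undo_blocks_per_sec * blk.block_size)
         / 1024 / 1024 AS megs_needed_for_undo
FROM   (SELECT MAX(undoblks / ((end_time - begin_time) * 86400))
               AS undo_blocks_per_sec
        FROM   v$undostat) ups,
       (SELECT TO_NUMBER(value) AS block_size
        FROM   v$parameter
        WHERE  name = 'db_block_size') blk;
```

Using the peak rate from v$undostat rather than the average is deliberately conservative: it sizes for the busiest interval observed.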

Flashback operations can even be 100% guaranteed by specifying the RETENTION GUARANTEE clause on the undo tablespace. Be aware that enabling retention guarantee can cause DML operations to fail when the undo tablespace fills up. Use with caution.
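If you do want that guarantee, the clause is set on the undo tablespace itself (the tablespace name here is illustrative):

```sql
-- Guarantee that unexpired undo is never overwritten, even at the
-- cost of failing DML when the undo tablespace fills up.
ALTER TABLESPACE undotbs1 RETENTION GUARANTEE;

-- Revert to the default best-effort behavior:
ALTER TABLESPACE undotbs1 RETENTION NOGUARANTEE;
```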

The UNDO Advisor can also help with the undo retention settings and with the sizing of the UNDO tablespace. Chris Foot has more info about the Undo Advisor here; his article also explains how Oracle maintains read consistency, the mechanism behind the ORA-01555 error.

No additional Oracle license is required to use the UNDO Advisor, but I recommend you read this Metalink Note.

In Oracle 11g, there is a new feature called “Flashback Data Archive” that captures historical data that can be retained for as long as the business demands.
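Based on the 11g documentation, using it looks roughly like this (the archive, tablespace, and table names are illustrative):

```sql
-- Create an archive that keeps row history for five years.
CREATE FLASHBACK ARCHIVE emp_archive
  TABLESPACE fba_ts
  QUOTA 10G
  RETENTION 5 YEAR;

-- Start tracking history for a table.
ALTER TABLE scott.emp FLASHBACK ARCHIVE emp_archive;
```

Once a table is tracked, flashback queries against it are served from the archive instead of from undo, so they no longer depend on UNDO_RETENTION at all.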

Yuri van Buren has a good example of this new feature on his blog: Blogging about 11g – Part 3 – Flashback Data Archive a.k.a. Total Recall

More info on Flashback Data Archive:

Flashback Data Archive Whitepaper

Total Recall


How To Size UNDO Tablespace For Automatic Undo Management

Recovering Table data Using the Flashback Table Feature

ORA-01555 “Snapshot too old” – Detailed Explanation


Oracle Critical Patch Update – July 2007

Last Friday, on a test box, I installed the July 2007 – Oracle Critical Patch Update.

The target database runs on Oracle Enterprise Linux 4. No CPU has yet been installed on this database.

I went through the following error while installing the CPU:

[oracle@orcltest 6079591]$ opatch napply ./6079591 -skip_subset -skip_duplicate
Invoking OPatch

Oracle interim Patch Installer version
Copyright (c) 2005, Oracle Corporation. All rights reserved.

Syntax Error... Unrecognized Command or Option: failed to parse arguments "unknown option '-skip_subset'"
Please use the option 'opatch -help' to get correct syntax

OPatch failed with error code 14

It turns out I read through the readme file a bit too fast, and skipped the following section:

  • 3.2 OPatch Utility Information

Section 3.2 specifies the minimum OPatch utility release required for this CPU.

OPatch can be downloaded from Oracle Metalink as patch 4898608.

I strongly suggest reading the July 2007 CPU readme file carefully, since it also contains new information, such as:

If you have already installed CPUJan2007, CPUApr2007, or a CPUApr2007 CPU merge patch, it needs to be rolled back first.

Also, a new database CPU patch format, napply CPU (pronounced “n apply CPU”), is being introduced with the July 2007 CPU.

This means no more:

opatch apply

but instead:

opatch napply ./6079591 -skip_subset -skip_duplicate

Useful links for July’s CPU:

Oracle Critical Patch Update – July 2007

Critical Patch Update Availability Information for Oracle Server and Middleware Products

Critical Patch Update – Introduction to Database Napply CPU Patches

Using Enterprise Manager to achieve grid automation with deployment procedures

Nice-to-have links for Oracle CPUs:

Security Technology Center

RSS feed for Oracle Security Alerts

DataCenter Automation and Configuration Management


SQL Tuning Advisor

Even though Oracle Enterprise Manager 10g Grid Control is a very handy tool, it is also possible to use the SQL Tuning Advisor via SQL*Plus. So if you don’t have access to OEM, or if you are simply a command-line fan, here is the alternative to using OEM for the SQL Tuning Advisor.

Below is the output of a SQL*Plus script execution; the script (available below) simply requires you to paste in the query to be tuned.

SQL> @tune

Enter value for query: select a.empno, a.ename, b.dname from emp a, dept b where a.deptno = b.deptno and a.ename = ''SMITH''

This script will execute the SQL Tuning Advisor and will provide the same recommendations returned by OEM.

Here is the output:

Tuning Task Name : TASK_33842
Tuning Task Owner : STEEVE
Time Limit(seconds) : 1800
Completion Status : COMPLETED
Started at : 07/24/2007 09:33:08
Completed at : 07/24/2007 09:33:08
Number of Index Findings : 1

Schema Name: STEEVE
SQL ID : 8uu071up6vk3q
SQL Text : select a.empno, a.ename, b.dname from emp a, dept b where
a.deptno = b.deptno and a.ename = 'SMITH'


1- Index Finding (see explain plans section below)
The execution plan of this statement can be improved by creating one or more
indices.

Recommendation (estimated benefit: 100%)
– Consider running the Access Advisor to improve the physical schema design
or creating the recommended index.
create index STEEVE.IDX$$_84320001 on STEEVE.EMP("ENAME");

Creating the recommended indices significantly improves the execution plan
of this statement. However, it might be preferable to run “Access Advisor”
using a representative SQL workload as opposed to a single statement. This
will allow to get comprehensive index recommendations which takes into
account index maintenance overhead and additional space consumption.


1- Original
Plan hash value: 3487251775

| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 26 | 5 (0)| 00:00:01 |
| 1 | NESTED LOOPS | | 1 | 26 | 5 (0)| 00:00:01 |
|* 2 | TABLE ACCESS FULL | EMP | 1 | 13 | 4 (0)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID| DEPT | 1 | 13 | 1 (0)| 00:00:01 |
|* 4 | INDEX UNIQUE SCAN | DEPT_PK | 1 | | 0 (0)| 00:00:01 |

Predicate Information (identified by operation id):

2 - filter("A"."ENAME"='SMITH')
4 - access("A"."DEPTNO"="B"."DEPTNO")

2- Using New Indices
Plan hash value: 2308654808

| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 26 | 3 (0)| 00:00:01 |
| 1 | NESTED LOOPS | | 1 | 26 | 3 (0)| 00:00:01 |
| 2 | TABLE ACCESS BY INDEX ROWID| EMP | 1 | 13 | 2 (0)| 00:00:01 |
|* 3 | INDEX RANGE SCAN | IDX$$_84320001 | 1 | | 1 (0)| 00:00:01 |
| 4 | TABLE ACCESS BY INDEX ROWID| DEPT | 1 | 13 | 1 (0)| 00:00:01 |
|* 5 | INDEX UNIQUE SCAN | DEPT_PK | 1 | | 0 (0)| 00:00:01 |

Predicate Information (identified by operation id):

3 - access("A"."ENAME"='SMITH')
5 - access("A"."DEPTNO"="B"."DEPTNO")


The scripts can be downloaded here: tune_advisor.txt and tune.txt

Note: The user running the script needs the Advisor role.
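Under the hood, such a script presumably drives the DBMS_SQLTUNE API; a minimal sketch of what tune.txt likely does (the task name is illustrative, not from the actual script):

```sql
-- Create and execute a tuning task for a single statement,
-- then fetch the advisor's report.
DECLARE
  l_task_name VARCHAR2(30);
BEGIN
  l_task_name := DBMS_SQLTUNE.CREATE_TUNING_TASK(
                   sql_text   => 'select a.empno, a.ename, b.dname ' ||
                                 'from emp a, dept b ' ||
                                 'where a.deptno = b.deptno ' ||
                                 'and a.ename = ''SMITH''',
                   task_name  => 'emp_tuning_task',
                   time_limit => 1800);
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => l_task_name);
END;
/

SET LONG 100000
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('emp_tuning_task') FROM dual;
```

The report returned by REPORT_TUNING_TASK contains the same findings, recommendations, and explain plans that OEM displays.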


SQL Tuning Advisor

Using SQL Tuning Advisor APIs

Required License


Job Notification Package

One interesting feature of the Oracle Scheduler is the job notification package.

The job notification package can be downloaded here: Oracle Scheduler

It is very easy to install, simply run the job_notification.sql script and provide the outgoing e-mail SMTP server and port number.

In order to get notified about a job’s completion, I ran the following command:

EXEC add_job_email_notification('RUN_SERCICE_120DAYS_JOB', '');

Unfortunately, this command returned an error, stating that the job name was too long for the character buffer:

ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error: character string buffer too small

Since the job name is already in place and used across all of the environments involved, I wanted to keep the job name as is. In order to do so, the solution is the following:

Simply increase the size of the “notifier_job_name” variable from VARCHAR2(30) to VARCHAR2(60) in the “add_job_email_notification” procedure. That simple change did the trick!
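In other words, the edit inside job_notification.sql boils down to something like this (surrounding procedure code omitted; only the declaration changes):

```sql
-- Before (in the add_job_email_notification procedure):
--   notifier_job_name  VARCHAR2(30);
-- After, to accommodate longer generated notifier job names:
notifier_job_name  VARCHAR2(60);
```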

There is an OTN Thread about this same issue.




Control File Autobackup Not Copied to the Flash Recovery Area (FRA)

During a migration exercise, a database was upgraded from to, and the OS was also ported from Fedora Core 4 (32-bit) to Oracle Enterprise Linux 4 (64-bit).

The original database didn’t have a Flash Recovery Area configured because of limited disk space; on the new box, disk space is no longer an issue, so the FRA has been enabled.

After the migration and the completion of the initial manual backup, it was time to test the RMAN backups.
I was surprised to realize that the controlfile was not copied to the FRA backup structure along with the backup set (even though controlfile autobackup was set up).

In order to troubleshoot, I issued the following RMAN command:

RMAN> backup current controlfile;

The controlfile has been copied to the following directory:

Also, nothing looked wrong when running RMAN’s show all; command.

So I did a quick search on Metalink and found the following solution:

By simply running the following RMAN command:
The problem was solved.

While running a new RMAN backup, the controlfile is now copied to the right directory:

Starting Control File and SPFILE Autobackup at 08-JUL-07
piece handle=/OraArch1/ORCL/autobackup/2007_07_08/o1_mf_s_627424692_3930vocb_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 08-JUL-07


Data Pump and Streams Pool

Last week, while exporting a couple of tables for a BI project using the Data Pump utility, the following error occurred:

ORA-31626: job does not exist
ORA-31637: cannot create job SYS_EXPORT_SCHEMA_01 for user SCOTT
ORA-06512: at “SYS.DBMS_SYS_ERROR”, line 95
ORA-06512: at “SYS.KUPV$FT_INT”, line 600
ORA-39080: failed to create queues “KUPC$C_1_20070604114151” and “KUPC$S_1_20070604114151” for Data Pump job
ORA-06512: at “SYS.DBMS_SYS_ERROR”, line 95
ORA-06512: at “SYS.KUPC$QUE_INT”, line 1580
ORA-00832: no streams pool created and cannot automatically create one

This database is not set up with the “Automatic Shared Memory Management” (ASMM) feature; all of the memory parameters are set manually.

You might wonder why? Well, the reason this database was not set up to use ASMM is that it resides on a 32-bit OS and uses RAMFS and HugePages to enable the use of 10 GB of SGA. When using RAMFS and HugePages, each memory parameter must be set manually. I will discuss this feature in detail next week.

Also, the “streams_pool_size” parameter was not set, since Oracle Streams is not used on this database, and I was surprised to see that the Data Pump utility actually needs a Streams pool to work!

The Oracle documentation explains the following Streams behavior (Streams_pool_size Documentation): “If both the STREAMS_POOL_SIZE and the SGA_TARGET initialization parameters are set to 0 (zero), then, by default, the first use of Streams in a database transfers an amount of memory equal to 10% of the shared pool from the buffer cache to the Streams pool”.

As we can see, Oracle did its homework for the case where Streams is used with no memory explicitly allocated (streams_pool_size). I wonder why the Data Pump utility doesn’t follow the same logic (if no Streams pool is set up, have Data Pump automatically grab the memory it needs from the shared_pool or the buffer_cache). Maybe a new feature in 11g?

Once the streams_pool_size parameter was configured, Data Pump worked just fine.
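With manually managed memory, the fix is simply to reserve a Streams pool explicitly (the 48M figure below is illustrative; size it to your Data Pump workload):

```sql
-- Reserve a small Streams pool so Data Pump can create its queues.
ALTER SYSTEM SET streams_pool_size = 48M SCOPE=BOTH;
```

On a fully manual configuration you may need to carve this memory out of another pool first (for example by reducing shared_pool_size accordingly), since the total SGA is fixed.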
