AOL and Sysadmin Interview Questions
Profile Option values control the behavior of Oracle Apps, in other words they determine how Oracle Apps
should run. The value for a profile option can be changed any time.
For example, we have a profile option called MO: Operating Unit. Assigning a value to this profile option
determines which operating unit is used when a user logs into a particular responsibility.
We have two types of Profile options – System and Personal profile options depending on to whom they are
visible and who can update their values.
System Profile options are visible and can be updated only in System Administrator responsibility. In short
they will be maintained by a System Administrator only.
User Profile Options are visible and can be updated by any end user of Oracle Apps.
Profile Options can be set at different levels, Site being the lowest and User being the highest in the
hierarchy. If a profile option is assigned at two levels, the value assigned at the higher (more specific) level
takes precedence.
Levels:- Site, Application, Responsibility, User.
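As a sketch, the value a session actually resolves can be checked with FND_PROFILE; the internal name used below (ORG_ID, the internal name of "MO: Operating Unit") is given for illustration — substitute the internal name of the option you care about:

```sql
-- Hedged sketch: read a profile option value as resolved for the
-- current session, walking the Site -> Application -> Responsibility
-- -> User hierarchy.
SELECT fnd_profile.value('ORG_ID') AS operating_unit_id
FROM dual;
```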
No! As per the Oracle standard process, the application will not allow file extensions. It will also not allow any special
characters or spaces.
The US folder is simply a language-specific folder. Oracle Apps uses American English by default, hence the US
folder.
Multiple language folders can exist, depending on the languages that are installed.
What is the difference between API and Interface?
API – validates the record(s) and inserts them directly into the base tables, without your writing a program to
validate or insert into interface tables (if any error occurs, it is simply raised at run time).
Interface – loads the data through interface table(s); the error history of failed record(s) is available in the same
interface tables or in the provided error tables.
Here is the SQL query to find out the UNIX physical path of the Application Tops:
/*********************************************************
*PURPOSE: To list physical path of APPL_TOPs in unix *
* and its Parameters *
*AUTHOR: Shailender Thallam *
**********************************************************/
SELECT variable_name,
VALUE
FROM fnd_env_context
WHERE variable_name LIKE '%\_TOP' ESCAPE '\'
AND concurrent_process_id =
(SELECT MAX(concurrent_process_id) FROM fnd_env_context
)
ORDER BY 1;
How do you create a table validated value set dependent on another value set?
We can use $FLEX$.<parent value set name> in the WHERE clause of the target value set to restrict the LOV values based on
the parent value set.
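A minimal sketch of such a WHERE clause, assuming a hypothetical parent value set XX_STATE_VSET and a hypothetical validation table XX_DISTRICTS:

```sql
-- Fragment for the target value set's WHERE clause (not standalone SQL):
-- restrict the district LOV to the state chosen in the parent value set.
WHERE xx_districts.state_code = :$FLEX$.XX_STATE_VSET
```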
How to ensure that only one instance of a concurrent program runs at a time?
The ‘Run Alone’ check box should be enabled in the concurrent program definition to make it run only one
request at a time.
How to find out base path of a Custom Application top only from front end ?
We can find Custom Application top physical path from Application Developer responsibility:
navigate to Application Developer –> Application –> Register. Query for the custom application
name. The value in the Basepath field is the OS environment variable that stores the actual directory info.
Navigate to:
Profile->system
Query on the Profile Option: “Concurrent: Allow Debugging”
If it isn’t set, then set it, then logout and bounce the APPS services.
The ‘Debug Options’ button on the concurrent program will now be enabled.
Step3: Get the request id from the below query, e.g. Request id = 11111111
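A hedged sketch of such a query, pulling the latest request ids for the current user from FND_CONCURRENT_REQUESTS:

```sql
-- Latest concurrent requests submitted by the current user;
-- the top row is usually the request you just submitted.
SELECT request_id, phase_code, status_code, requested_start_date
FROM fnd_concurrent_requests
WHERE requested_by = fnd_global.user_id
ORDER BY request_id DESC;
```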
Write a PL/SQL script to submit concurrent request using FND_REQUEST.Submit() API in a file and
execute that file in shell script using SQLPLUS
Example
sqlplus -s apps/apps @XX_SUBMIT_CP.sql
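A minimal sketch of what XX_SUBMIT_CP.sql might contain; the user/responsibility ids and the program/application short names below are hypothetical placeholders:

```sql
-- XX_SUBMIT_CP.sql (illustrative): submit a concurrent program via FND_REQUEST
SET SERVEROUTPUT ON
DECLARE
  l_request_id NUMBER;
BEGIN
  -- hypothetical ids; look them up in FND_USER / FND_RESPONSIBILITY
  fnd_global.apps_initialize(user_id => 1318, resp_id => 20420, resp_appl_id => 101);
  l_request_id := fnd_request.submit_request(
                    application => 'XXCUST',      -- hypothetical application short name
                    program     => 'XX_MY_PROG',  -- hypothetical program short name
                    argument1   => 'PARAM1');
  COMMIT;  -- the request becomes visible to the manager only after commit
  dbms_output.put_line('Request id: ' || l_request_id);
END;
/
EXIT;
```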
You can also use the CONCSUB utility to submit a concurrent program from UNIX.
Syntax:
CONCSUB <APPS username>/<APPS password> \
<responsibility application short name> \
<responsibility name> \
<Oracle Applications username> \
[WAIT=N|Y|<n seconds>] \
CONCURRENT \
<program application short name> \
<program name> \
[PROGRAM_NAME="<description>"] \
[REPEAT_TIME=<resubmission time>] \
[REPEAT_INTERVAL= <number>] \
[REPEAT_INTERVAL_UNIT=< resubmission unit>] \
[REPEAT_INTERVAL_TYPE=< resubmission type>] \
[REPEAT_END=<resubmission end date and time>] \
[START=<date>] \
[IMPLICIT=<type of concurrent request>] \
100 is the limit for the number of parameters for a concurrent program
FlexRpt
The execution file is written using the FlexReport API.
FlexSql
The execution file is written using the FlexSql API.
Host
The execution file is a host script.
Oracle Reports
The execution file is an Oracle Reports file.
SQL*Loader
The execution file is a SQL*Loader control file.
SQL*Plus
The execution file is a SQL*Plus script.
Spawned
The execution file is a C or Pro*C program.
Immediate
The execution file is a program written to run as a subroutine of the concurrent manager. We recommend
against defining new immediate concurrent programs, and suggest you use either a PL/SQL Stored
Procedure or a Spawned C Program instead.
There are two types of program incompatibilities, “Global” incompatibilities, and “Domain-specific”
incompatibilities.
You can define a concurrent program to be globally incompatible with another program — that is, the two
programs cannot be run simultaneously at all; or you can define a concurrent program to be incompatible
with another program in a Conflict Domain. Conflict domains are abstract representations of groups of data.
They can correspond to other group identifiers, such as sets of books, or they can be arbitrary.
What is Application Top ? What are the different types of Application Tops?
Application Top is a physical folder on server which holds all the executable, UI and support files.
1. Product Top
2. Custom Top
Product Top:
A product top is a default top built by the manufacturer. It is usually called APPL_TOP and stores
all components provided by Oracle.
Custom Top:
A custom top is created exclusively for the customer. Any number of custom tops can be created, according to
the requirements of the client. A custom top is used to store components that are developed
or customized. When Oracle applies patches, every module other than the custom top is
overwritten.
An Autonomous Transaction is a transaction that is independent of the main transaction. It lets you suspend
the main transaction, perform SQL operations, and commit or roll back those operations on their own.
Autonomous transactions do not share resources, locks, or commit dependencies with the main
transaction.
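A minimal PL/SQL sketch (the log table XX_DEBUG_LOG is a hypothetical example):

```sql
CREATE OR REPLACE PROCEDURE xx_log_message (p_msg IN VARCHAR2) IS
  PRAGMA AUTONOMOUS_TRANSACTION;  -- runs independently of the caller's transaction
BEGIN
  INSERT INTO xx_debug_log (msg, logged_on) VALUES (p_msg, SYSDATE);
  COMMIT;  -- commits only this insert; the caller's uncommitted work is untouched
END xx_log_message;
/
```

This pattern is commonly used for debug logging: the log row survives even if the calling transaction later rolls back.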
Though FND_GLOBAL and FND_PROFILE may give the same result, they work in different ways.
FND_GLOBAL is a server-side package which returns the values of system globals, such as the
login/signon or “session” type of values. Where as FND_PROFILE uses user profile routines to
manipulate the option values stored in client and server user profile caches.
From this we can understand that FND_GLOBAL works on server side and FND_PROFILE works on
client side.
On the client, a single user profile cache is shared by multiple form sessions. Thus, when Form A and Form
B are both running on a single client, any changes Form A makes to the client’s user profile cache affect
Form B’s run-time environment, and vice versa.
On the server, each form session has its own user profile cache. Thus, even if Form A and Form B are
running on the same client, they have separate server profile caches. Server profile values changed from
Form A’s session do not affect Form B’s session, and vice versa.
A token is used for transferring values to Report Builder; it maps a concurrent program parameter to an Oracle
Reports user parameter. Tokens are not case-sensitive.
_ALL tables hold the information for all operating units in a Multi-Org environment. You can
also set the client info to a specific operating unit to see only that operating unit's data.
Check out the article -> MOAC – Oracle Apps ORG_ID, Multi Org Concept
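For illustration, in R12 the operating-unit context of a session can be set with MO_GLOBAL (the org_id 204 below is a hypothetical value):

```sql
BEGIN
  -- 'S' = single operating unit; 204 is a hypothetical org_id
  mo_global.set_policy_context('S', 204);
END;
/
-- Queries through the Multi-Org views/synonyms now see only that operating unit
```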
fnd_profile.value(‘USER_ID’)
or
fnd_global.USER_ID
What is P_CONC_REQUEST_ID
No, it is not mandatory. Reports can be submitted in Oracle Apps without p_conc_request_id as well.
It is needed if we want to work with profiles such as
org_id, user_id, resp_id, etc. By using p_conc_request_id we
can capture resources such as profile values for the
concurrent program we are going to submit in the SRS
window.
FND_CONCURRENT_QUEUE_SIZE:
Stores information about the number of requests a concurrent manager can process at once, according to its
work shift.
FND_CONCURRENT_REQUESTS:
Stores information about individual concurrent requests.
FND_CONCURRENT_REQUEST_CLASS:
Stores information about concurrent request types.
FND_CONC_REQ_OUTPUTS:
This table stores output files created by Concurrent Request.
FND_CURRENCIES:
Stores information about currencies.
FND_DATABASES:
It tracks the databases employed by the eBusiness suite. This table stores information about the database that
is not instance specific.
FND_DATABASE_INSTANCES:
Stores instance-specific information. Every database has one or more instances.
FND_DESCRIPTIVE_FLEXS:
Stores setup information about descriptive flexfields.
FND_DESCRIPTIVE_FLEXS_TL:
Stores translated setup information about descriptive flexfields.
FND_DOCUMENTS:
Stores language-independent information about a document.
FND_EXECUTABLES:
Stores information about concurrent program executables.
FND_FLEX_VALUES:
Stores valid values for key and descriptive flexfield segments.
FND_FLEX_VALUE_SETS:
Stores information about the value sets used by both key and descriptive flexfields.
FND_LANGUAGES:
Stores information regarding languages and dialects.
FND_MENUS:
It lists the menus that appear in the Navigate Window, as determined by the System Administrator when
defining responsibilities for function security.
FND_MENUS_TL:
Stores translated information about the menus in FND_MENUS.
FND_MENU_ENTRIES:
Stores information about individual entries in the menus in FND_MENUS.
FND_PROFILE_OPTIONS:
Stores information about user profile options.
FND_REQUEST_GROUPS:
Stores information about report security groups.
FND_REQUEST_SETS:
Stores information about report sets.
FND_RESPONSIBILITY:
Stores information about responsibilities. Each row includes the name and description of the responsibility,
the application it belongs to, and values that identify the main menu, and the first form that it uses.
FND_RESPONSIBILITY_TL:
Stores translated information about responsibilities.
FND_RESP_FUNCTIONS:
Stores security exclusion rules for function security menus. Security exclusion rules are lists of functions
and menus inaccessible to a particular responsibility.
FND_SECURITY_GROUPS:
Stores information about security groups used to partition data in a Service Bureau architecture.
FND_SEQUENCES:
Stores information about the registered sequences in your applications.
FND_TABLES:
Stores information about the registered tables in your applications.
FND_TERRITORIES:
Stores information for countries, alternatively known as territories.
FND_USER:
Stores information about application users.
FND_VIEWS:
Stores information about the registered views in your applications.
1. Accounting key flexfield: by using the accounting key flexfield we can design our organization structure.
2. Reporting attributes key flexfield: this key flexfield is used for reporting purposes.
3. GL Ledger Name key flexfield: this is a mirror of the Accounting key flexfield. Whatever we enter in the accounting key
flexfield is copied to the GL Ledger flexfield, which adds one more segment; this is useful when
we are doing FSG and Mass Allocations.
Migrate AOL objects from one environment to another environment using FNDLOAD
The Generic Loader (FNDLOAD) is a concurrent program that can transfer Oracle Application entity data between
database and text file. The loader reads a configuration file to determine which entity to access. In simple words
FNDLOAD is used to transfer entity data from one instance/database to other. For example if you want to move a
concurrent program/menu/value sets developed in DEVELOPMENT instance to PRODUCTION instance you can use
this command.
## For this you will be firstly required to download the request set definition.
## Next you will be required to download the Sets Linkage definition
## Well, let's be clear here: the above sequence is more important while uploading
FNDLOAD apps/$CLIENT_APPS_PWD 0 Y DOWNLOAD $FND_TOP/patch/115/import/afcprset.lct
XX_GL_MY_INTERFACE_SET.ldt REQ_SET REQUEST_SET_NAME="FNDRSSUB4610101_Will_look_like_this"
FNDLOAD apps/$CLIENT_APPS_PWD 0 Y DOWNLOAD $FND_TOP/patch/115/import/afcprset.lct
XX_GL_MY_INTERFACE_SET_LINK.ldt REQ_SET_LINKS
REQUEST_SET_NAME="FNDRSSUB4610101_Will_look_like_this"
## Note that FNDRSSUB4610101 can be found by doing an examine on the
########----->select request_set_name from fnd_request_sets_vl
########----->where user_request_set_name = 'User visible name for the request set here'
## Now for uploading the request set, execute the below commands
FNDLOAD apps/$CLIENT_APPS_PWD 0 Y UPLOAD $FND_TOP/patch/115/import/afcprset.lct
XX_GL_MY_INTERFACE_SET.ldt
FNDLOAD apps/$CLIENT_APPS_PWD 0 Y UPLOAD $FND_TOP/patch/115/import/afcprset.lct
XX_GL_MY_INTERFACE_SET_LINK.ldt
To FNDLOAD responsibility
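A hedged sketch of the download/upload commands, assuming the usual afscursp.lct configuration file and a hypothetical responsibility key XX_CUSTOM_RESP:

```shell
## Download the responsibility definition (responsibility key is hypothetical)
FNDLOAD apps/$CLIENT_APPS_PWD 0 Y DOWNLOAD $FND_TOP/patch/115/import/afscursp.lct \
  XX_CUSTOM_RESP.ldt FND_RESPONSIBILITY RESP_KEY="XX_CUSTOM_RESP"

## Upload it on the target instance
FNDLOAD apps/$CLIENT_APPS_PWD 0 Y UPLOAD $FND_TOP/patch/115/import/afscursp.lct \
  XX_CUSTOM_RESP.ldt
```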
Notes :
This sample control file (loader.ctl) will load an external data file containing delimited data
(the table and column names here are illustrative):
load data
infile 'c:\data\mydata.csv'
into table emp
fields terminated by "," optionally enclosed by '"'
( empno, empname, sal, deptno )
Another sample control file, with in-line data formatted as fixed-length records. The trick is to specify "*" as the name of
the data file, and use BEGINDATA to start the data section in the control file (the table name is illustrative):
load data
infile *
replace
into table departments
( dept     position(01:04) char(4),
  deptname position(06:25) char(20) )
begindata
MATH MATHEMATICS
set echo off newpage 0 space 0 pagesize 0 feed off head off trimspool on
spool oradata.txt
select col1 || ',' || col2 || ',' || col3
from tab1;
spool off
Alternatively use the UTL_FILE PL/SQL package:
declare
fp utl_file.file_type;
begin
fp := utl_file.fopen('c:\oradata','tab1.txt','w');
utl_file.putf(fp, '%s, %s\n', 'TextField', 55);
utl_file.fclose(fp);
end;
You might also want to investigate third party tools like TOAD or ManageIT Fast Unloader from CA to help you
unload data from Oracle.
LOAD DATA
INFILE *
INTO TABLE load_delimited_data
FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
( data1, data2 )
BEGINDATA
11111,AAAAAAAAAA
22222,"A,B,C,D,"
If you need to load positional data (fixed length), look at the following control file example: LOAD DATA
INFILE *
INTO TABLE load_positional_data
( data1 POSITION(1:5),
data2 POSITION(6:15) )
BEGINDATA
11111AAAAAAAAAA
22222BBBBBBBBBB
LOAD DATA
INFILE *
INTO TABLE load_positional_data
SKIP 5
( data1 POSITION(1:5),
data2 POSITION(6:15) )
BEGINDATA
11111AAAAAAAAAA
22222BBBBBBBBBB
LOAD DATA
INFILE *
INTO TABLE load_positional_data
( rec_no "my_db_sequence.nextval",
data1 POSITION(1:5),
data2 POSITION(6:15),
data3 POSITION(16:21) "to_date(:data3, 'YYMMDD')" )
BEGINDATA
11111AAAAAAAAAA991201
22222BBBBBBBBBB990112
LOAD DATA
INFILE 'mail_orders.txt'
BADFILE 'bad_orders.txt'
APPEND
INTO TABLE mailing_list
FIELDS TERMINATED BY ","
( addr,
city,
state,
zipcode,
mailing_state )
Can one selectively load only the records that one needs?
Look at this example; (01) is the first character, (30:37) are characters 30 to 37:
LOAD DATA
INFILE 'mydata.dat'
BADFILE 'mydata.bad'
DISCARDFILE 'mydata.dis'
APPEND
INTO TABLE my_selective_table
WHEN (01) <> 'H' and (01) <> 'T' and (30:37) = '19991217'
LOAD DATA
TRUNCATE
INTO TABLE T1
FIELDS TERMINATED BY ','
( field1,
field2 FILLER,
field3 )
CONCATENATE: - use when SQL*Loader should combine the same number of physical records together to form one
logical record.
CONTINUEIF - use if a condition indicates that multiple records should be treated as one, e.g. by having a '#' character
in column 1.
How can one get SQL*Loader to COMMIT only at the end of the load file?
One cannot, but by setting the ROWS= parameter to a large value, committing can be reduced. Make sure you have
big rollback segments ready when you use a high value for ROWS=.
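For example, a large bind array can be requested on the command line (the control file name here is illustrative):

```shell
sqlldr userid=scott/tiger control=emp.ctl log=emp.log rows=100000
```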
Turn off database logging by specifying the UNRECOVERABLE option. This option can only be used with direct data
loads.
What is the difference between the conventional and direct path loader?
The conventional path loader essentially loads the data by using standard INSERT statements. The direct path loader
(DIRECT=TRUE) bypasses much of the logic involved with that, and loads directly into the Oracle data files. More
information about the restrictions of direct path loading can be obtained from the Utilities Users Guide.
SQL LOADER
SQL LOADER utility is used to load data from other data source into Oracle. For example, if you have a table in
FOXPRO, ACCESS or SYBASE or any other third party database, you can use SQL Loader to load the data into Oracle
Tables. SQL*Loader will only read data from flat files. So if you want to load data from FoxPro or any other
database, you have to first convert that data into a delimited-format or fixed-length-format flat file, and then
use SQL*Loader to load the data into Oracle.
Following is procedure to load the data from Third Party Database into Oracle using SQL Loader.
Convert the Data into Flat file using third party database command.
Write a Control File, describing how to interpret the flat file and options to load the data.
Execute SQL Loader utility specifying the control file in the command line argument
EMPNO  INTEGER
NAME   TEXT(50)
SAL    CURRENCY
JDATE  DATE
This table contains some 10,000 rows. Now you want to load the data from this table into an Oracle Table. Oracle
Database is running in LINUX O/S.
Solution
Steps
Start MS-Access and convert the table into a comma-delimited flat file (popularly known as CSV), by clicking on the
File/Save As menu. Let the delimited file name be emp.csv
At the command prompt type FTP followed by IP address of the server running Oracle.
FTP will then prompt you for username and password to connect to the Linux Server. Supply a valid username and
password of Oracle User in Linux
For example:-
C:\>ftp 200.200.100.111
Name: oracle
Password:oracle
FTP>
Now give PUT command to transfer file from current Windows machine to Linux machine.
FTP>put
Local file:C:\>emp.csv
remote-file:/u01/oracle/emp.csv
FTP>
Now after the file is transferred quit the FTP utility by typing bye command.
FTP>bye
Good-Bye
Now go to the Linux machine and create a table in Oracle with the same structure as in MS-Access, using
appropriate datatypes. For example, create a table like this:
$ sqlplus scott/tiger
CREATE TABLE emp (empno number(5),
name varchar2(50),
sal number(10,2),
jdate date);
After creating the table, you have to write a control file describing the actions which SQL Loader should do. You can
use any text editor to write the control file. Now let us write a controlfile for our case study
$vi emp.ctl
1) LOAD DATA
2) INFILE '/u01/oracle/emp.csv'
3) BADFILE '/u01/oracle/emp.bad'
4) DISCARDFILE '/u01/oracle/emp.dsc'
5) INSERT INTO TABLE emp
6) FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"' TRAILING NULLCOLS
7) (empno, name, sal, jdate date 'mm/dd/yyyy')
Notes:
(Do not write the line numbers, they are meant for explanation purpose)
3. Specifying BADFILE is optional. If you specify it, bad records found during loading will be stored in this file.
4. Specifying DISCARDFILE is optional. If you specify it, records which do not meet a WHEN condition will be
written to this file.
5. INSERT loads rows into an empty table. You can use REPLACE instead, which first deletes all the rows in the
existing table and then loads rows.
6. This line indicates how the fields are separated in the input file. Since in our case the fields are separated by "," we
have specified "," as the terminating character for fields. You can replace this with any character used to terminate
fields. Some popularly used terminating characters are semicolon ";", colon ":", pipe "|" etc.
TRAILING NULLCOLS means that missing trailing columns are treated as NULL; otherwise, SQL*Loader will treat the
record as bad if the last column is missing.
7. In this line we specify the columns of the target table. Note how the format for date columns is specified.
After you have written the control file, save it and then call the SQL*Loader utility by typing the following command:
$ sqlldr userid=scott/tiger control=emp.ctl log=emp.log
After you have executed the above command, SQL*Loader will show you output describing how many rows it has
loaded.
The LOG option of sqlldr specifies where the log file of this SQL*Loader session should be created. The log file contains
all actions which SQL*Loader has performed, i.e. how many rows were loaded, how many were rejected, how
much time was taken to load the rows, etc. You should view this file for any errors encountered while running
SQL*Loader.
CASE STUDY (Loading Data from Fixed Length file into Oracle)
Suppose we have a fixed-length format file containing employee data, as shown below, and want to load this data
into an Oracle table.
7782 CLARK      MANAGER   7839  2572.50          10
7839 KING       PRESIDENT       5500.00          10
7934 MILLER     CLERK     7782   920.00          10
7566 JONES      MANAGER   7839  3123.75          20
7499 ALLEN      SALESMAN  7698  1600.00   300.00 30
7654 MARTIN     SALESMAN  7698  1312.50  1400.00 30
7658 CHAN       ANALYST   7566  3450.00          20
7654 MARTIN     SALESMAN  7698  1312.50  1400.00 30
SOLUTION:
Steps :-
1.First Open the file in a text editor and count the length of fields, for example in our fixed length file, employee
number is from 1st position to 4th position, employee name is from 6th position to 15th position, Job name is from
17th position to 25th position. Similarly other columns are also located.
2. Create a table in Oracle, by any name, whose columns match those specified in the fixed-length file. In our case, give the
following command to create the table:
CREATE TABLE emp (
empno  NUMBER(5),
name   VARCHAR2(20),
job    VARCHAR2(10),
mgr    NUMBER(5),
sal    NUMBER(10,2),
comm   NUMBER(10,2),
deptno NUMBER(3) );
3.After creating the table, now write a control file by using any text editor
$vi empfix.ctl
1) LOAD DATA
2) INFILE '/u01/oracle/fix.dat'
3) INTO TABLE emp
4) (empno  POSITION(01:04) INTEGER EXTERNAL,
   name   POSITION(06:15) CHAR,
   job    POSITION(17:25) CHAR,
   mgr    POSITION(27:30) INTEGER EXTERNAL,
   sal    POSITION(32:39) DECIMAL EXTERNAL,
   comm   POSITION(41:48) DECIMAL EXTERNAL,
5) deptno POSITION(50:51) INTEGER EXTERNAL)
Notes:
(Do not write the line numbers, they are meant for explanation purpose)
1. The LOAD DATA statement is required at the beginning of the control file.
2. The name of the file containing data follows the INFILE parameter.
3. The INTO TABLE statement is required to identify the table to be loaded into.
4. Lines 4 and 5 identify a column name and the location of the data in the datafile to be loaded into that column.
empno, name, job, and so on are names of columns in table emp. The datatypes (INTEGER EXTERNAL, CHAR,
DECIMAL EXTERNAL) identify the datatype of data fields in the file, not of corresponding columns in the emp table.
4. After saving the control file, start the SQL*Loader utility by typing the following command:
$ sqlldr userid=scott/tiger control=empfix.ctl log=empfix.log
After you have executed the above command, SQL*Loader will show you output describing how many rows it has
loaded.
7782 CLARK      MANAGER   7839  2572.50          10
7839 KING       PRESIDENT       5500.00          10
7934 MILLER     CLERK     7782   920.00          10
7566 JONES      MANAGER   7839  3123.75          20
7499 ALLEN      SALESMAN  7698  1600.00   300.00 30
7654 MARTIN     SALESMAN  7698  1312.50  1400.00 30
7658 CHAN       ANALYST   7566  3450.00          20
7654 MARTIN     SALESMAN  7698  1312.50  1400.00 30
Now we want to load all the employees whose deptno is 10 into emp1 table and those employees whose deptno is
not equal to 10 in emp2 table. To do this first create the tables emp1 and emp2 by taking appropriate columns and
datatypes. Then, write a control file as shown below
$vi emp_multi.ctl
LOAD DATA
INFILE '/u01/oracle/empfix.dat'
INTO TABLE emp1
WHEN (deptno='10')
(empno  POSITION(01:04) INTEGER EXTERNAL,
 name   POSITION(06:15) CHAR,
 job    POSITION(17:25) CHAR,
 mgr    POSITION(27:30) INTEGER EXTERNAL,
 sal    POSITION(32:39) DECIMAL EXTERNAL,
 comm   POSITION(41:48) DECIMAL EXTERNAL,
 deptno POSITION(50:51) INTEGER EXTERNAL)
INTO TABLE emp2
WHEN (deptno<>'10')
(empno  POSITION(01:04) INTEGER EXTERNAL,
 name   POSITION(06:15) CHAR,
 job    POSITION(17:25) CHAR,
 mgr    POSITION(27:30) INTEGER EXTERNAL,
 sal    POSITION(32:39) DECIMAL EXTERNAL,
 comm   POSITION(41:48) DECIMAL EXTERNAL,
 deptno POSITION(50:51) INTEGER EXTERNAL)
SQL*Loader can load the data into the Oracle database using the conventional path method or the direct path method. You can
specify the method by using the DIRECT command line option. If you give DIRECT=TRUE then SQL*Loader will use direct
path loading; otherwise, if you omit this option or specify DIRECT=FALSE, SQL*Loader will use the conventional path
loading method.
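For example (the control file name is illustrative):

```shell
# Direct path load
sqlldr userid=scott/tiger control=emp.ctl direct=true
# Conventional path load (the default)
sqlldr userid=scott/tiger control=emp.ctl direct=false
```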
Conventional Path
Conventional path load (the default) uses the SQL INSERT statement and a bind array buffer to load data into
database tables.
When SQL*Loader performs a conventional path load, it competes equally with all other processes for buffer
resources. This can slow the load significantly. Extra overhead is added as SQL statements are generated, passed to
Oracle, and executed.
The Oracle database looks for partially filled blocks and attempts to fill them on each insert. Although appropriate
during normal use, this can slow bulk loads dramatically.
Direct Path
In Direct Path Loading, Oracle will not use SQL INSERT statement for loading rows. Instead it directly writes the rows,
into fresh blocks beyond High Water Mark, in datafiles i.e. it does not scan for free blocks before high water mark.
Direct Path load is very fast because
Partial blocks are not used, so no reads are needed to find them, and fewer writes are performed.
SQL*Loader need not execute any SQL INSERT statements; therefore, the processing load on the Oracle database is
reduced.
A direct path load calls on Oracle to lock tables and indexes at the start of the load and releases them when the load
is finished. A conventional path load calls Oracle once for each array of rows to process a SQL INSERT statement.
A direct path load uses multiblock asynchronous I/O for writes to the database files.
During a direct path load, processes perform their own write I/O, instead of using Oracle's buffer cache. This
minimizes contention with other Oracle users.
What is Autoinvoice?
AutoInvoice is a concurrent program in Oracle Receivables that performs invoice processing at both the order and line
levels. Once an order or line or set of lines is eligible for invoicing, the Invoice Interface workflow activity interfaces
the data to Receivables. Oracle Order Management inserts records into the following interface tables:
RA_INTERFACE_LINES and RA_INTERFACE_SALES_CREDITS.
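As a hedged sketch, the interfaced lines (and any AutoInvoice validation errors) can be inspected in the interface tables; 'ORDER ENTRY' is the line context typically used for lines coming from Order Management:

```sql
-- Lines waiting for (or failed by) AutoInvoice
SELECT l.interface_line_id, l.line_type, l.interface_status
FROM ra_interface_lines_all l
WHERE l.interface_line_context = 'ORDER ENTRY';

-- Errors raised by the AutoInvoice import program
SELECT e.interface_line_id, e.message_text
FROM ra_interface_errors_all e;
```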
MOAC Architecture
STEP -1
Navigation: Login into Oracle Applications –> Go to System Administrator Responsibility –> Concurrent –>
Executable
FIELDS:
Executable: This is User Understandable Name
Short Name: This is Unique and for system reference
Application: Under which application you want to register this CONA_PO Program
Description: Description
Execution Method: Based on this field, your file has to be placed in respective directory or database.
Execution File Name: This is the actual Report file name.
Action: Save
STEP -2
Create a new concurrent program Purchase Order Report that will call the CONA_PO executable declared above.
Make sure that output format is placed as XML.
Note: Output Format should be 'XML' for registering the report in XML.
STEP -3
Make sure that the report parameter name and the token name are same.
STEP -4
STEP -5
Next process is to attach the designed rtf file with the XML code.
In order to attach the rtf file the user should have the responsibility XML Publisher Administrator assigned to him.
First provide the concurrent program short name as Data Definition name in the template manager and register the
template using the data definition created.
Note: Make sure the code of the data definition is the same as the short name of the concurrent program we
registered for the procedure, so that the concurrent manager can retrieve the templates associated with the
concurrent program.
STEP -6
LOAD DATA
INFILE 'C:\oracle\VIS\apps\apps_st\appl\gl\12.0.0\bin\GL_FLATFILE.csv'
INSERT
INTO TABLE GL_INTERFACE
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
( STATUS
,LEDGER_NAME
,ACCOUNTING_DATE
,CURRENCY_CODE
,DATE_CREATED
,CREATED_BY
,ACTUAL_FLAG
,USER_JE_CATEGORY_NAME
,USER_JE_SOURCE_NAME
,SEGMENT1
,SEGMENT2
,SEGMENT3
,SEGMENT4
,SEGMENT5
,ENTERED_DR
,ENTERED_CR
,ACCOUNTED_DR
,ACCOUNTED_CR
,LEDGER_ID
,ERROR_FLAG
,ERROR_MESSAGE
,ERROR_STATUS )
How To Develop reports (RDF):
Step1: Develop the report (RDF).
Step2: Move the report to the server-specific path.
Step3: Create the concurrent executable.
Step4: Create the concurrent program.
Step5: Attach the concurrent program to the request group.
Step6: Submit the concurrent program.
Interface and Conversion are used to migrate the system from Legacy system to Oracle system. Below are few
differences between Interface and Conversion.
Interface :
-------------------
1. Interface is a periodic activity; it may run daily, weekly, monthly, quarterly or yearly.
2. Both the legacy system and the Oracle system remain active.
Conversion:
----------------
1. Conversion is a one-time activity.
2. All the data is transferred into the Oracle system at one time.
3. The legacy system will not be used after the conversion process.
Most of the views are dynamic in nature... you will find data only when you execute them.
Most probably you need to initialize the environment prior trying to select from those tables/views
Try this
order by 2
begin
end ;
begin
fnd_global.apps_initialize(:P_USER_ID, :P_RESP_ID, :P_RESP_APPL_ID);
/*
P_resp_appl_id is the application id (which you get from the first query itself
*/
end;
begin
MO_GLOBAL.INIT('SQLAP'); -- Payables; use 'ONT' for Order Management, etc.
--MO_GLOBAL.INIT('PO');
end;
Once the above bits are executed, you should get the data through a simple select statement
Register table
Say you have a custom table called “ERPS_EMPLOYEE” with columns EMP_ID, EMP_NAME and EMP_TYPE in your
database. You need to create a TABLE type value set that pulls information from this table as an LOV. If you enter
the custom table name in the "TABLE NAME" field of the "Validation Table Information" form, Oracle Apps will not
recognize it and you will get the below error saying the table does not exist.
So to make your custom table visible in front end ( while creating Value Set or in Alerts or Audits etc), you have to
register it in Oracle Apps.
Let’s now see how to register a custom table. You will need API named AD_DD for this.
1. First you register the table:
begin
ad_dd.register_table
(p_appl_short_name => 'CUSTOM',
p_tab_name => 'ERPS_EMPLOYEE',
p_tab_type => 'T');
end;
/
commit;
2. Next you register each column (the column types and widths below are illustrative):
begin
ad_dd.register_column
(p_appl_short_name => 'CUSTOM',
p_tab_name => 'ERPS_EMPLOYEE',
p_col_name => 'EMP_ID',
p_col_seq => 1,
p_col_type => 'NUMBER',
p_col_width => 15,
p_nullable => 'N',
p_translate => 'N');
ad_dd.register_column
(p_appl_short_name => 'CUSTOM',
p_tab_name => 'ERPS_EMPLOYEE',
p_col_name => 'EMP_NAME',
p_col_seq => 2,
p_col_type => 'VARCHAR2',
p_col_width => 100,
p_nullable => 'Y',
p_translate => 'N');
ad_dd.register_column
(p_appl_short_name => 'CUSTOM',
p_tab_name => 'ERPS_EMPLOYEE',
p_col_name => 'EMP_TYPE',
p_col_seq => 3,
p_col_type => 'VARCHAR2',
p_col_width => 30,
p_nullable => 'Y',
p_translate => 'N');
end;
/
commit;
3. Thirdly you register the primary key, if the table has any, using the below code snippet:
begin
ad_dd.register_primary_key
(p_appl_short_name => 'CUSTOM',
p_key_name => 'ERPS_EMPLOYEE_PK',
p_tab_name => 'ERPS_EMPLOYEE',
p_description => 'Primary key on EMP_ID',
p_key_type => 'S',
p_audit_flag => 'N',
p_enabled_flag => 'Y');
end;
/
commit;
4. Finally you register the primary key column if your table has a primary key:
begin
ad_dd.register_primary_key_column
(p_appl_short_name => 'CUSTOM',
p_key_name => 'ERPS_EMPLOYEE_PK',
p_tab_name => 'ERPS_EMPLOYEE',
p_col_name => 'EMP_ID',
p_col_sequence => 1);
end;
/
commit;
Navigate to Application Developer responsibility > Application > Database > Table
Query for the table name that we have registered – “ERPS_EMPLOYEE”. Please note that you cannot register your
table using this form in the front end. You will have to use API. This form is only meant for viewing the information.
Check for the primary key information by clicking on the Primary Key button
Now in your Value set, you will be able to use the table ERPS_EMPLOYEE without any errors.
To delete the registered Tables and its columns, use the below API:
AD_DD.DELETE_COLUMN(appl_short_name,
table_name,
column_name);
New Tables
Table Name
Feature Area
CE_BANK_ACCOUNTS
CE_BANK_ACCT_USES_ALL
CE_GL_ACCOUNTS_CCID
CE_INTEREST_BALANCE_RANGES
CE_INTEREST_RATES
CE_INTEREST_SCHEDULES
CE_BANK_ACCT_BALANCES
Balances and Interest Calculation
CE_PROJECTED_BALANCES
CE_INT_CALC_DETAILS_TMP
CE_CASHFLOWS
CE_CASHFLOW_ACCT_H
CE_PAYMENT_TRANSACTIONS
CE_PAYMENT_TEMPLATES
CE_TRXNS_SUBTYPE_CODES
CE_XLA_EXT_HEADERS
Subledger Accounting
CE_CONTACT_ASSIGNMENTS
CE_AP_PM_DOC_CATEGORIES
CE_PAYMENT_DOCUMENTS
CE_SECURITY_PROFILES_GT
CE_CHECKBOOKS
Changed Tables
Table Name
Feature Area
Brief Description of Change
CE_AVAILABLE_TRANSACTIONS_TMP
CE_STATEMENT_RECONCILS_ALL
Add LEGAL_ENTITY_ID
CE_ARCH_RECONCILIATIONS_ALL
Add LEGAL_ENTITY_ID
CE_SYSTEM_PARAMETERS_ALL
System Parameters
CE_ARCH_HEADERS
Bank Statement
Drop ORG_ID
CE_ARCH_INTERFACE_HEADERS
Bank Statement
CE_ARCH_INTRA_HEADERS
Bank Statement
Drop ORG_ID
CE_INTRA_STMT_HEADERS
Bank Statement
Drop ORG_ID
CE_STATEMENT_HEADERS
Bank Statement
Drop ORG_ID
CE_STATEMENT_HEADERS_INTERFACE
Bank Statement
Drop ORG_ID; Add more balance columns
CE_CASHPOOLS
Cash Leveling
Add LEGAL_ENTITY_ID
CE_PROPOSED_TRANSFERS
Cash Leveling
CE_LEVELING_MESSAGE
Cash Leveling
CE_TRANSACTIONS_CODES
Bank Statement
CE_STATEMENT_LINES
Bank Statement
Add CASHFLOW_ID
Obsolete Tables
Table Name
Feature Area
Replaced By
CE_ARCH_HEADERS_ALL
Bank Statement
CE_ARCH_HEADERS
CE_ARCH_INTERFACE_HEADERS_ALL
Bank Statement
CE_ARCH_INTERFACE_HEADERS
CE_ARCH_INTRA_HEADERS_ALL
Bank Statement
CE_ARCH_INTRA_HEADERS
CE_INTRA_STMT_HEADERS_ALL
Bank Statement
CE_INTRA_STMT_HEADERS
CE_STATEMENT_HEADERS_ALL
Bank Statement
CE_STATEMENT_HEADERS
CE_STATEMENT_HEADERS_INT_ALL
Bank Statement
CE_STATEMENT_HEADERS_INTERFACE
New Views
View Name
Feature Area
CE_SECURITY_PROFILES_V
CE_LE_BG_OU_VS_V
CE_BANK_ACCOUNTS_V
CE_BANK_BRANCHES_V
CE_BANK_ACCT_USES
CE_BANK_ACCTS_GT_V
CE_BANK_ACCT_USES_BG_V
CE_BANK_ACCT_USES_LE_V
CE_BANK_ACCT_USES_OU_V
CE_BANK_ACCTS_CALC_V
Balances and Interests Calculation
CE_INTEREST_RATES_V
CE_260_CF_RECONCILED_V
CE_260_CF_TRANSACTIONS_V
CE_260_CF_REVERSAL_V
CE_INTERNAL_BANK_ACCTS_GT_V
Cash Positioning
CE_XLA_EXT_HEADERS_V
Subledger Accounting
CE_INTERNAL_BANK_ACCTS_V
CE_BANKS_V
CE_BANK_ACCTS_SEARCH_GT_V
CE_XLA_TRANSACTIONS_V
Subledger Accounting
CEFV_BANK_ACCOUNTS
CEBV_BANK_ACCOUNTS
CEFV_BANK_BRANCHES
CEBV_BANK_BRANCHES
Changed Views
View Name
Feature Area
CE_101_RECONCILED_V
CE_101_TRANSACTIONS_V
CE_185_RECONCILED_V
CE_185_TRANSACTIONS_V
Enhancement related to MOAC and Subledger Accounting and Bank Account Model features
CE_200_RECONCILED_V
CE_200_BATCHES_V
CE_200_REVERSAL_V
CE_200_TRANSACTIONS_V
CE_222_RECONCILED_V
CE_222_REVERSAL_V
CE_222_TRANSACTIONS_V
CE_222_TXN_FOR_BATCH_V
CE_260_RECONCILED_V
CE_260_TRANSACTIONS_V
CE_801_RECONCILED_V
CE_801_TRANSACTIONS_V
CE_801_EFT_RECONCILED_V
CE_801_EFT_TRANSACTIONS_V
CE_999_RECONCILED_V
CE_999_REVERSAL_V
CE_ALL_STATEMENTS_V
CE_ARCH_RECONCILIATIONS
CE_AVAIL_STATEMENTS_V
CE_AVAILABLE_BATCHES_V
CE_AVAILABLE_TRANSACTIONS_V
CE_BANK_TRX_CODES_V
CE_INTERNAL_BANK_ACCOUNTS_V
CE_MISC_TRANSACTIONS_V
CE_RECEIVABLE_ACTIVITIES_V
CE_RECONCILED_TRANSACTIONS_V
CE_REVERSAL_TRANSACTIONS_V
CE_STAT_HDRS_INF_V
Enhancement related to MOAC, Balances and Interests Calculation and Bank Account Model feature
CE_STATEMENT_HEADERS_V
Enhancement related to MOAC, Balances and Interests Calculation and Bank Account Model features
CE_STATEMENT_LINES_V
CE_STATEMENT_RECONCILIATIONS
CE_SYSTEM_PARAMETERS
CE_TRANSACTION_CODES_V
CEBV_CASH_FORECAST_CELLS
BIS Views
CEBV_ECT
BIS Views
CEFV_BANK_STATEMENTS
BIS Views
CEFV_CASH_FORECAST_CELLS
BIS Views
CEFV_ECT
BIS Views
CE_TRANSACTION_CODES_V
CE_CP_XTR_BANK_ACCOUNTS_V
Cash Positioning
CE_XTR_CASHFLOWS_V
Cash Positioning
Cash Forecasting
CE_AR_FC_RECEIPTS_V
Cash Forecasting
CE_AR_FC_INVOICES_V
Cash Forecasting
CE_SO_FC_ORDERS_V
Cash Forecasting
CE_SO_FC_ORDERS_NO_TERMS_V
Cash Forecasting
CE_SO_FC_ORDERS_TERMS_V
Cash Forecasting
CE_FORECAST_ROWS_V
Cash Forecasting
CE_CP_BANK_ACCOUNTS_V
Cash Positioning
CE_CP_WS_BA_V
Cash Positioning
CE_CP_DISC_OPEN_V
Cash Positioning
Cash Positioning
CE_CP_XTO_V
Cash Positioning
CE_CP_SUB_OPEN_BAL_V
Cash Positioning
CE_CP_WS_BA_DISC_V
Cash Positioning
CE_P_BA_SIGNATORY_HIST_V
BIS View
CE_FC_ARI_DISC_V
Cash Forecasting
Obsoleted Views
View Name
Feature Area
Replaced By
CE_ARCH_HEADERS
Bank Statement
CE_ARCH_INTERFACE_HEADERS
Bank Statement
CE_ARCH_INTRA_HEADERS
Bank Statement
Table with the same name
CE_INTRA_STMT_HEADERS
Bank Statement
CE_STATEMENT_HEADERS
Bank Statement
CE_STATEMENT_HEADERS_INTERFACE
Bank Statement
What are the major differences between Oracle 11i and R12?
Ans :
Ø 11i is a Forms-based application only, whereas R12 delivers both Forms and HTML (OA Framework) pages.
Ø 11i works on a per-responsibility, per-operating-unit basis, whereas R12 works across multiple operating units.
Ø In 11i, the MRC reporting-level set of books was called a reporting set of books, but in R12 reporting ledgers are called
reporting currencies. Banks are used at the single operating unit level in 11i but at the ledger level in R12.
Differences between R12 & 11i.5.10: Upgrade to R12 Pros and Cons
Pros:
oSub-ledger Accounting – The new Oracle Sub-ledger Accounting (SLA) architecture allows users to customize the
standard Oracle accounting entries. As a result of Cost Management's uptake of this new architecture, users can
customize their accounting for Receiving, Inventory and Manufacturing transactions.
oEnhanced Reporting Currency (MRC) Functionality – Multiple Reporting Currencies functionality is enhanced to
support all journal sources. Reporting sets of books in R12 are now simply reporting currencies. Every journal that is
posted in the primary currency of a ledger can be automatically converted into one or more reporting currencies.
oDeferred COGS and Revenue Matching – R12 provides the ability to automatically maintain the same recognition
rules for COGS and revenue for each sales order line for each period. Deferred COGS and Deferred Revenue are thus
kept in synch.
Cons:
oIntegration with customized applications – In a customized environment, all the extensions & interfaces need to be
analyzed because of the architectural changes in R12.
oMulti-Organization Access Control (MOAC) - Multi-Org Access Control enables users to access multiple operating
units' data from a single responsibility. Users can access reports, concurrent programs and all setup screens of multiple
operating units from a single responsibility without switching responsibilities.
oUnified Inventory - R12 merges the Oracle Process Manufacturing (OPM) Inventory and Oracle Inventory applications
into a single version, so OPM users can leverage functionality such as consigned & VMI inventory and center-led
procurement, which was available only to discrete inventory in 11i.5.10.
oInventory Valuation Reports - There are a significant number of reports which have been enhanced in the area of
Inventory Value Reporting.
oInventory Genealogy - Enhanced genealogy tracking with simplified, complete access at the component level to
critical lot and serial information for material throughout production.
oFixed Component Usage Materials Enhancement – Enhanced BOM setup screen and WIP Material Requirement
screen that support materials having a fixed usage irrespective of the job size for WIP Jobs, OSFM lot-based jobs, or
Flow Manufacturing
oComponent Yield Enhancements - New functionality that provides the flexibility to control the value of component
yield factors at the WIP job level. This feature allows the user to include or exclude the yield factor when calculating
backflush transactions.
oPeriodic Average Cost Absorption Enhancements – Enhanced functionality for WIP Final Completion, WIP Scrap
Absorption, PAC WIP Value Report, Material Overhead Absorption Rule, EAM work order, and PAC EAM Work Order
Cost Estimate Processor Report.
oComponent Yield benefits - With the component yield functionality, users have the flexibility to control the value of
component yield factors and use those factors for backflush transactions. If the yield factor is not used,
yield losses can still be accounted for using a manual component issue transaction.
New Features:
oProfessional Buyer’s Work Center – To speed up buyers’ daily purchasing tasks – view & act upon requisition
demand, create & manage orders and agreements, run negotiation events, manage supplier information.
oFreight and Miscellaneous Charges – New page for viewing acquisition cost to track freight & miscellaneous
delivery cost components while receiving. Actual delivery costs are tracked during invoice matching.
oComplex Contract Payments – Support for payments for services related procurement including progress payments,
recoupment of advances, and retainage.
oUnified Inventory – Support for the converged inventory between Oracle Process Manufacturing – OPM Inventory
& Oracle Inventory.
oDocument Publishing Enhancements – Support for RTF & PDF layouts and publish contracts using user specified
layouts
oSupport for Contractor Purchasing Users – Support for contingent workers to create & maintain requisitions and
purchase orders, and to conduct negotiations.
New Features:
oMulti-Organization Access Control (MOAC) - Multi-Org Access Control enables users to access multiple operating
units' data from a single responsibility. Users can access reports, concurrent programs and all setup screens of multiple
operating units from a single responsibility without switching responsibilities. They can also use Order Import to bring
in orders for different operating units from within a single responsibility. The same applies to the Oracle Order
Management public Application Program Interfaces (APIs).
oPost Booking Item Substitution - Item Substitution functionality support has been extended to post-Booking
through Scheduling/re-scheduling in Sales Order, Quick Sales Order, and Scheduling Order Organizer forms. Item
Substitution functionality is also supported from Planner’s Workbench (loop-back functionality) till the line is pick-
released.
oItem Orderability - Businesses need the ability to define which customers are allowed to order which products, and
the ability to apply the business logic when the order is created.
oMass Scheduling Enhancements – Mass Scheduling can now schedule lines that have never been scheduled or those that
have failed manual scheduling. Mass Scheduling also supports unscheduling and rescheduling.
oException Management Enhancements – Improved visibility to workflow errors and eases the process of retrying
workflows that have experienced processing errors
oSales Order Reservation for Lot-Based Jobs – Lot-Based Jobs as a Source of Supply to Reserve Against Sales
Order(s). OSFM Displays Sales Order Information on Reserved Jobs
oCascading Attributes – Cascading means that if the Order header attributes change, the corresponding line
attributes change
oCustomer Credit Check Hold Source Support across Operating Units - Order Management honors credit holds
placed on customers from AR across operating Units. When Receivables places a customer on credit hold a hold
source will be created in all operating units which have:
New Features:
§Pick Release enhancements - Enhancements will be made to the Release Sales Order Form and the Release Rules
Form to support planned crossdocking and task priority for Oracle Warehouse Management (WMS) organizations.
Pick release will allow a user to specify location methods and if crossdocking is required, a cross-dock rule. The task
priority will be able to be set for each task in a sales order picking wave when that wave is pick released. The priority
indicated at pick release will be defaulted to every Oracle WMS task created
§Parallel Pick Release Submission - This new feature will allow users to run multiple pick release processes in parallel
to improve overall performance. By distributing the workload across multiple processors, users can reduce the
overall time required for a single pick release run.
oWorkflow Shipping Transaction Enhancement – Oracle has enabled Workflow in the Shipping process for: workflow,
process workflow, activity and notification workflow, and business event
oSupport for Miscellaneous Shipping Transactions - Oracle Shipping Execution users will now be able to create a
delivery for a shipment that is not tied to a sales order via XML (XML-equivalent of EDI 940 IN). Once this delivery has
been created, users will be able to print shipping documents, plan, rate, tender, audit and record the issuance out of
inventory. Additionally, an XML Shipment Advice (XML- equivalent of EDI 945 OUT) will be supported to record the
outbound transactions.
oFlexible Documents: With this new feature ,Shipping Execution users will be able to create template-based, easy-to-
use formats to quickly produce and easily maintain shipping documents unique to their business. Additional
attributes will be added to the XML templates for each report for added flexibility
oEnhanced LPN Support - Oracle Shipping Execution users will now have improved visibility to the Oracle WMS
packing hierarchy at Pick Confirmation. The packing hierarchy, including the License Plate Number (LPN), will be
visible in the Shipping Transactions form as well as in the Quick Ship user interface.
New Features:
oCrossdock Execution – WMS allow you to determine final staging lane, merge with existing delivery or create a new
delivery, synchronize inbound operation plan with outbound consolidation plan, enhance outbound consolidation
plans and manage crossdock tasks.
oLabor Management – WMS provides labor analysis. It gives the warehouse manager increased visibility to resource
requirements. Detailed information for the productivity of individual employees and warehouses is provided
oUser Extensible Label Fields – WMS, users are now able to add their own variables without customizing the
application, by simply defining in SQL the way to get to that data element
oMaterial Consolidation across deliveries – WMS allows you to consolidate material across deliveries in a staging
lane
New Features:
oLot and Serial Controlled Assembly– Lot controlled job can now be associated with serial numbers to track and
trace serialized lot/item during shop floor transactions as well as post manufacturing and beyond
oFixed Component Usage Support for Lot Based Jobs – OSFM now supports fixed component usage defined in the
Bill of Material of an end item.
oSupport for Partial Move Transactions – Users are able to execute movement of a partial job quantity
interoperation
oEnhanced BOM to Capture Inverse Usage – Users can now capture the inverse component usage through the new
inverse usage field in BOM UI
oSupport for RosettaNet Transactions - comprising 7B1 (work in process) and 7B5 (manufacturing work order).
New Features:
oThe latest RCDs (Release Content Documents) can be accessed from Metalink note 404152.1 (requires a user
name & password).
oThe TOI (Transfer Of Information) sessions released by Oracle Learning can be accessed from its portal at
http://www.oracle.com/education/oukc/ebs.html
oOracle White papers – Extending the value of Your Oracle E-Business Suite 11i.10 Investment & Application
Upgrades and Service Oriented Architecture
An Explain Plan is a tool that you can use to have Oracle explain to you how it plans on executing your query.
This is useful in tuning queries to the database to get them to perform better. Once you know how Oracle plans on
executing your query, you can change your environment to run the query faster.
Before you can use the EXPLAIN PLAN command, you need to have a PLAN_TABLE installed. This can be done by
simply running the $ORACLE_HOME/rdbms/admin/utlxplan.sql script in your schema. It creates the table for you.
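From SQL*Plus the script can be run directly; the `?` shorthand expands to $ORACLE_HOME:

```sql
SQL> @?/rdbms/admin/utlxplan.sql

Table created.
```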
After you have the PLAN_TABLE created, you issue an EXPLAIN PLAN for the query you are interested in tuning. The
command is of the form:
SQL> explain plan set statement_id = 'q1' for
  2  select object_name from test where object_name like 'T%';

Explained.
I used 'q1' for my statement id (short for query 1). But you can use anything you want. My SQL statement is the
second line. Now I query the PLAN_TABLE to see how this statement is executed. This is done with the following
query:
SQL> SELECT LPAD(' ',2*(level-1)) || operation || ' ' || options || ' ' ||
  2         object_name || ' ' || DECODE(id,0,'Cost = ' || position) AS "Query Plan", other
  3    FROM plan_table
  4   START WITH id = 0
  5     AND statement_id = 'q1'
  6  CONNECT BY PRIOR id = parent_id
  7     AND statement_id = 'q1';

Query Plan                                         OTHER
-------------------------------------------------- --------------------------------------------------
SELECT STATEMENT   Cost =
  TABLE ACCESS FULL TEST
This tells me that my SQL statement will perform a FULL table scan on the TEST table (TABLE ACCESS FULL TEST).
Now let's add an index on that table!
SQL> create index test_name_idx on test(object_name);

Index created.

SQL> truncate table plan_table;

Table truncated.

SQL> explain plan set statement_id = 'q1' for
  2  select object_name from test where object_name like 'T%';

Explained.

SQL> SELECT LPAD(' ',2*(level-1)) || operation || ' ' || options || ' ' ||
  2         object_name || ' ' || DECODE(id,0,'Cost = ' || position) AS "Query Plan", other
  3    FROM plan_table
  4   START WITH id = 0
  5     AND statement_id = 'q1'
  6  CONNECT BY PRIOR id = parent_id
  7     AND statement_id = 'q1';

Query Plan                                         OTHER
-------------------------------------------------- --------------------------------------------------
SELECT STATEMENT   Cost =
  INDEX RANGE SCAN TEST_NAME_IDX
I added an index to the table. Before I issue another EXPLAIN PLAN, I truncate the contents of my PLAN_TABLE to
prepare for the new plan. Then I query the PLAN_TABLE. Notice that this time I'm using an index (TEST_NAME_IDX)
that I created! Hopefully, this query will run faster now that it has an index to use. But this may not always be the
case.
Introduction
In this paper we’ll discuss an overview of the EXPLAIN PLAN and TKPROF functions built into the Oracle 8i server and
learn how developers and DBAs use these tools to get the best performance out of their applications. We’ll look at
how to invoke these tools both from the command line and from graphical development tools. In the remainder of
the paper we’ll discuss how to read and interpret Oracle 8i execution plans and TKPROF reports. We’ll look at lots of
examples so that you’ll come away with as much practical knowledge as possible.
In this section we’ll take a high-level look at the EXPLAIN PLAN and TKPROF facilities: what they are, prerequisites
for using them, and how to invoke them. We will also look at how these facilities help you tune your applications.
Before the database server can execute a SQL statement, Oracle must first parse the statement and develop an
execution plan. The execution plan is a task list of sorts that decomposes a potentially complex SQL operation into a
series of basic data access operations. For example, a query against the dept table might have an execution plan that
consists of an index lookup on the deptno index, followed by a table access by ROWID.
The EXPLAIN PLAN statement allows you to submit a SQL statement to Oracle and have the database prepare the
execution plan for the statement without actually executing it. The execution plan is made available to you in the
form of rows inserted into a special table called a plan table. You may query the rows in the plan table using ordinary
SELECT statements in order to see the steps of the execution plan for the statement you explained. You may keep
multiple execution plans in the plan table by assigning each a unique statement_id. Or you may choose to delete the
rows from the plan table after you are finished looking at the execution plan. You can also roll back an EXPLAIN PLAN
statement in order to remove the execution plan from the plan table.
The EXPLAIN PLAN statement runs very quickly, even if the statement being explained is a query that might run for
hours. This is because the statement is simply parsed and its execution plan saved into the plan table. The actual
statement is never executed by EXPLAIN PLAN. Along these same lines, if the statement being explained includes
bind variables, the variables never need to actually be bound. The values that would be bound are not relevant since
the statement is not actually executed.
You don’t need any special system privileges in order to use the EXPLAIN PLAN statement. However, you do need to
have INSERT privileges on the plan table, and you must have sufficient privileges to execute the statement you are
trying to explain. The one difference is that in order to explain a statement that involves views, you must have
privileges on all of the tables that make up the view. If you don’t, you’ll get an “ORA-01039: insufficient privileges on
underlying objects of the view” error.
The plan table created by utlxplan.sql contains the following columns:
STATEMENT_ID VARCHAR2(30)
TIMESTAMP DATE
REMARKS VARCHAR2(80)
OPERATION VARCHAR2(30)
OPTIONS VARCHAR2(30)
OBJECT_NODE VARCHAR2(128)
OBJECT_OWNER VARCHAR2(30)
OBJECT_NAME VARCHAR2(30)
OBJECT_INSTANCE NUMBER(38)
OBJECT_TYPE VARCHAR2(30)
OPTIMIZER VARCHAR2(255)
SEARCH_COLUMNS NUMBER
ID NUMBER(38)
PARENT_ID NUMBER(38)
POSITION NUMBER(38)
COST NUMBER(38)
CARDINALITY NUMBER(38)
BYTES NUMBER(38)
OTHER_TAG VARCHAR2(255)
PARTITION_START VARCHAR2(255)
PARTITION_STOP VARCHAR2(255)
PARTITION_ID NUMBER(38)
OTHER LONG
DISTRIBUTION VARCHAR2(30)
There are other ways to view execution plans besides issuing the EXPLAIN PLAN statement and querying the plan
table. SQL*Plus can automatically display an execution plan after each statement is executed. Also, there are many
GUI tools available that allow you to click on a SQL statement in the shared pool and view its execution plan. In
addition, TKPROF can optionally include execution plans in its reports as well.
TKPROF is a utility that you invoke at the operating system level in order to analyze SQL trace files and generate
reports that present the trace information in a readable form. Although the details of how you invoke TKPROF vary
from one platform to the next, Oracle Corporation provides TKPROF with all releases of the database and the basic
functionality is the same on all platforms.
The term trace file may be a bit confusing. More recent releases of the database offer a product called Oracle Trace
Collection Services. Also, Net8 is capable of generating trace files. SQL trace files are entirely different. SQL trace is a
facility that you enable or disable for individual database sessions or for the entire instance as a whole. When SQL
trace is enabled for a database session, the Oracle server process handling that session writes detailed information
about all database calls and operations to a trace file. Special database events may be set in order to cause Oracle to
write even more specific information—such as the values of bind variables—into the trace file.
SQL trace files are text files that, strictly speaking, are human readable. However, they are extremely verbose,
repetitive, and cryptic. For example, if an application opens a cursor and fetches 1000 rows from the cursor one row
at a time, there will be over 1000 separate entries in the trace file.
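Enabling the facility itself is a one-line operation. A sketch of the common ways to switch SQL trace on for a session follows; the SID and serial# values in the DBMS_SYSTEM call are placeholders for a real target session:

```sql
-- Trace your own session:
ALTER SESSION SET sql_trace = TRUE;

-- Or set event 10046 at level 12 to capture bind variable values and waits as well:
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

-- Trace another user's session (SID 12, serial# 345 are placeholders):
EXECUTE dbms_system.set_sql_trace_in_session(12, 345, TRUE);

-- Turn tracing off again for your session:
ALTER SESSION SET sql_trace = FALSE;
```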
TKPROF is a program that you invoke at the operating system command prompt in order to reformat the trace file
into a format that is much easier to comprehend. Each SQL statement is displayed in the report, along with counts of
how many times it was parsed, executed, and fetched. CPU time, elapsed time, logical reads, physical reads, and
rows processed are also reported, along with information about recursion level and misses in the library cache.
TKPROF can also optionally include the execution plan for each SQL statement in the report, along with counts of
how many rows were processed at each step of the execution plan.
The SQL statements can be listed in a TKPROF report in the order of how much resource they used, if desired. Also,
recursive SQL statements issued by the SYS user to manage the data dictionary can be included or excluded, and
TKPROF can write SQL statements from the traced session into a spool file.
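A typical TKPROF invocation using the options just described might look like the following; the file names are placeholders, and the sort keys order statements by elapsed time spent in parse, execute, and fetch:

```shell
# ora_12345.trc is the raw SQL trace file; report.prf is the formatted report.
#   sort=...  lists the most resource-intensive statements first
#   sys=no    excludes recursive SYS data-dictionary statements
#   record=   spools the traced SQL statements to a file
tkprof ora_12345.trc report.prf sort=prsela,exeela,fchela sys=no record=statements.sql
```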
How EXPLAIN PLAN and TKPROF Aid in the Application Tuning Process
EXPLAIN PLAN and TKPROF are valuable tools in the tuning process. Tuning at the application level typically yields
the most dramatic results, and these two tools can help with the tuning in many different ways.
EXPLAIN PLAN and TKPROF allow you to proactively tune an application while it is in development. It is relatively
easy to enable SQL trace, run an application in a test environment, run TKPROF on the trace file, and review the
output to determine if application or schema changes are called for. EXPLAIN PLAN is handy for evaluating individual
SQL statements.
By reviewing execution plans, you can also validate the scalability of an application. If the database operations are
dependent upon full table scans of tables that could grow quite large, then there may be scalability problems ahead.
On the other hand, if large tables are accessed via selective indexes, then scalability may not be a problem.
EXPLAIN PLAN and TKPROF may also be used in an existing production environment in order to zero in on resource
intensive operations and get insights into how the code may be optimized. TKPROF can further be used to quantify
the resources required by specific database operations or application functions.
EXPLAIN PLAN is also handy for estimating resource requirements in advance. Suppose you have an ad hoc reporting
request against a very large database. Running queries through EXPLAIN PLAN will let you determine in advance if
the queries are feasible or if they will be resource intensive and will take unacceptably long to run.
In this section we will discuss the details of how to generate execution plans (both with the EXPLAIN PLAN
statement and other methods) and how to generate SQL trace files and create TKPROF reports.
Before you can use the EXPLAIN PLAN statement, you must have INSERT privileges on a plan table. The plan table
can have any name you like, but the names and data types of the columns are not flexible. You will find a script
called utlxplan.sql in $ORACLE_HOME/rdbms/admin that creates a plan table with the name plan_table in the local
schema. If you use this script to create your plan table, you can be assured that the table will have the right
definition for use with EXPLAIN PLAN.
Once you have access to a plan table, you are ready to run the EXPLAIN PLAN statement. The syntax is as follows:
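In general form, where the SET STATEMENT_ID and INTO clauses are optional:

```sql
EXPLAIN PLAN
    [SET STATEMENT_ID = 'text']
    [INTO [schema.]plan_table_name]
FOR
    sql_statement;
```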
The EXPLAIN PLAN statement runs quickly because all Oracle has to do is parse the SQL statement being explained
and store the execution plan in the plan table. The SQL statement can include bind variables, although the variables
will not get bound and the values of the bind variables will be irrelevant.
If you issue the EXPLAIN PLAN statement from SQL*Plus, you will get back the feedback message “Explained.” At this
point the execution plan for the explained SQL statement has been inserted into the plan table, and you can now
query the plan table to examine the execution plan.
Execution plans are a hierarchical arrangement of simple data access operations. Because of the hierarchy, you need
to use a CONNECT BY clause in your query from the plan table. Using the LPAD function, you can cause the output to
be formatted in such a way that the indenting helps you traverse the hierarchy. There are many different ways to
format the data retrieved from the plan table. No one query is the best, because the plan table holds a lot of
detailed information. Different DBAs will find different aspects more useful in different situations.
A simple SQL*Plus script to retrieve an execution plan from the plan table is as follows:
REM
REM explain.sql
REM
SELECT id, parent_id,
       LPAD (' ', LEVEL - 1) || operation || ' ' ||
       options || ' ' || object_name AS operation
FROM   plan_table
START  WITH id = 0
CONNECT BY PRIOR id = parent_id;
The explain.sql SQL*Plus script above displays the execution plan for the invoice item query as follows:
0 SELECT STATEMENT
1 0 NESTED LOOPS
2 1 NESTED LOOPS
The execution plan shows that Oracle is using nested loops joins to join three tables, and that accesses from all three
tables are by unique index lookup. This is probably a very efficient query. We will look at how to read execution plans
in greater detail in a later section.
The explain.sql script for displaying an execution plan is very basic in that it does not display a lot of the information
contained in the plan table. Things left off of the display include optimizer estimated cost, cardinality, partition
information (only relevant when accessing partitioned tables), and parallelism information (only relevant when
executing parallel queries or parallel DML).
If you are using Oracle 8.1.5 or later, you can find two plan query scripts in $ORACLE_HOME/rdbms/admin.
utlxpls.sql is intended for displaying execution plans of statements that do not involve parallel processing, while
utlxplp.sql shows additional information pertaining to parallel processing. The output of the latter script is more
confusing, so only use it when parallel query or DML come into play. The output from utlxpls.sql for the invoice item
query is as follows:
Plan Table
--------------------------------------------------------------------------------
| Operation                 |  Name    |  Rows | Bytes|  Cost  | Pstart| Pstop |
--------------------------------------------------------------------------------
| SELECT STATEMENT          |          |     1 |   39 |      4 |       |       |
|  NESTED LOOPS             |          |     1 |   39 |      4 |       |       |
|   NESTED LOOPS            |          |     1 |   27 |      3 |       |       |
--------------------------------------------------------------------------------
When you no longer need an execution plan, you should delete it from the plan table. You can do this by rolling back
the EXPLAIN PLAN statement (if you have not committed yet) or by deleting rows from the plan table. If you have
multiple execution plans in the plan table, then you should delete selectively by statement_id. Note that if you
explain two SQL statements and assign both the same statement_id, you will get an ugly cartesian product when you
query the plan table!
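For example, clearing a single statement's plan from a shared plan table might look like:

```sql
-- Remove only the rows for statement_id 'q1'; other saved plans stay intact.
DELETE FROM plan_table
 WHERE statement_id = 'q1';
COMMIT;
```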
SQL*Plus has an autotrace feature which allows you to automatically display execution plans and helpful statistics
for each statement executed in a SQL*Plus session without having to use the EXPLAIN PLAN statement or query the
plan table. You turn this feature on and off with the following SQL*Plus command:
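The relevant SQL*Plus command forms are:

```sql
SET AUTOTRACE OFF                 -- no autotrace report (the default)
SET AUTOTRACE ON                  -- query results, execution plan, and statistics
SET AUTOTRACE ON EXPLAIN          -- results plus execution plan only
SET AUTOTRACE ON STATISTICS       -- results plus statistics only
SET AUTOTRACE TRACEONLY           -- plan and statistics, query results suppressed
```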
When you turn on autotrace in SQL*Plus, the default behavior is for SQL*Plus to execute each statement and display
the results in the normal fashion, followed by an execution plan listing and a listing of various server-side resources
used to execute the statement. By using the TRACEONLY keyword, you can have SQL*Plus suppress the query
results. By using the EXPLAIN or STATISTICS keywords, you can have SQL*Plus display just the execution plan without
the resource statistics or just the statistics without the execution plan.
In order to have SQL*Plus display execution plans, you must have privileges on a plan table by the name of
plan_table. In order to have SQL*Plus display the resource statistics, you must have SELECT privileges on v$sesstat,
v$statname, and v$session. There is a script in $ORACLE_HOME/sqlplus/admin called plustrce.sql which creates a
role with these three privileges in it, but this script is not run automatically by the Oracle installer.
The autotrace feature of SQL*Plus makes it extremely easy to generate and view execution plans, with resource
statistics as an added bonus. One key drawback, however, is that the statement being explained must actually be
executed by the database server before SQL*Plus will display the execution plan. This makes the tool unusable in the
situation where you would like to predict how long an operation might take to complete.
A sample output from SQL*Plus for the invoice item query is as follows:
Execution Plan
----------------------------------------------------------
(execution plan steps, each annotated with the optimizer's estimates:
Cost, Card, and Bytes for every operation)
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
8 consistent gets
0 physical reads
0 redo size
0 sorts (memory)
0 sorts (disk)
1 rows processed
Although we haven’t discussed how to read an execution plan yet, you can see that the output from SQL*Plus
provides the same basic information, with several additional details in the form of estimates from the query
optimizer.
There are many GUI tools available that allow you to view execution plans for SQL statements you specify or for
statements already sitting in the shared pool of the database instance. Any comprehensive database management
tool will offer this capability, but there are several free tools available for download on the internet that have this
feature as well.
One tool in particular that I really like is TOAD (the Tool for Oracle Application Developers). Although TOAD was
originally developed as a free tool, Quest Software now owns TOAD and it is available in both a free version (limited
functionality) and an enhanced version that may be purchased (full feature set). You may download TOAD from
Quest Software at http://www.toadsoft.com/downld.html. TOAD has lots of handy features. The one relevant to us
here is the ability to click on any SQL statement in the shared pool and instantly view its execution plan.
As with the EXPLAIN PLAN statement and the autotrace facility in SQL*Plus, you will need to have access to a plan
table. Here is TOAD’s rendition of the execution plan for the invoice item query we’ve been using:
You can see that the information displayed is almost identical to that from the autotrace facility in SQL*Plus. One
nice feature of TOAD’s execution plan viewer is that you can collapse and expand the individual operations that
make up the execution plan. Also, the vertical and horizontal lines connecting different steps help you keep track of
the nesting and which child operations go with which parent operations in the hierarchy. The benefits of these
features become more apparent when working with extremely complicated execution plans.
Unfortunately, when looking at execution plans for SQL statements that involve database links or parallelism, TOAD
leaves out critical information that is present in the plan table and is reported by the autotrace feature of SQL*Plus.
Perhaps this deficiency only exists in the free version of TOAD; I would like to think that if you pay for the full version
of TOAD, you’ll get complete execution plans.
SQL trace may be enabled at the instance or session level. To enable SQL trace at the instance level, add the
following parameter setting to the instance parameter file and restart the database instance:
sql_trace = true
When an Oracle instance starts up with the above parameter setting, every database session will run in SQL trace
mode, meaning that all SQL operations for every database session will be written to trace files. Even the daemon
processes like PMON and SMON will be traced! In practice, enabling SQL trace at the instance level is usually not
very useful. It can be overpowering, sort of like using a fire hose to pour yourself a glass of water.
It is more typical to enable SQL trace in a specific session. You can turn SQL trace on and off as desired in order to
trace just the operations that you wish to trace. If you have access to the database session you wish to trace, then
use the ALTER SESSION statement as follows to enable and disable SQL trace:
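    ALTER SESSION SET sql_trace = TRUE;
    ALTER SESSION SET sql_trace = FALSE;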
In situations where you cannot invoke an ALTER SESSION command from the session you wish to trace—as with
prepackaged applications, for example—you can connect to the database as a DBA user and invoke the
dbms_system built-in package in order to turn on or off SQL trace in another session. You do this by querying
v$session to find the SID and serial number of the session you wish to trace and then invoking the dbms_system
package with a command of the form:
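    EXECUTE SYS.dbms_system.set_sql_trace_in_session (<sid>, <serial#>, TRUE);

Passing TRUE as the third argument enables SQL trace in the target session; passing FALSE turns it back off.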
When you enable SQL trace in a session for the first time, the Oracle server process handling that session will create
a trace file in the directory on the database server designated by the user_dump_dest initialization parameter. As
the server is called by the application to perform database operations, the server process will append to the trace
file.
Note that tracing a database session that is using multi-threaded server (MTS) is a bit complicated because each
database request from the application could get picked up by a different server process. In this situation, each server
process will create a trace file containing trace information about the operations performed by that process only.
This means that you will potentially have to combine multiple trace files together to get the full picture of how the
application interacted with the database. Furthermore, if multiple sessions are being traced at once, it will be hard to
tell which operations in the trace file belong to which session. For these reasons, you should use dedicated server
mode when tracing a database session with SQL trace.
SQL trace files contain detailed timing information. By default, Oracle does not track timing, so all timing figures in
trace files will show as zero. If you would like to see legitimate timing information, then you need to enable timed
statistics. You can do this at the instance level by setting the following parameter in the instance parameter file and
restarting the instance:
timed_statistics = true
You can also dynamically enable or disable timed statistics collection at either the instance or the session level with
the following commands:
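    ALTER SYSTEM SET timed_statistics = TRUE;
    ALTER SESSION SET timed_statistics = TRUE;

Substitute FALSE to disable timed statistics collection at the respective level.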
There is no known way to enable timed statistics collection for an individual session from another session (akin to
the SYS.dbms_system.set_sql_trace_in_session built-in).
There is very high overhead associated with enabling SQL trace. Some DBAs believe the performance penalty could
be over 25%. Another concern is that enabling SQL trace causes the generation of potentially large trace files. For
these reasons, you should use SQL trace sparingly. Only trace what you need to trace and think very carefully before
enabling SQL trace at the instance level.
On the other hand, there is little, if any, measurable performance penalty in enabling timed statistics collection.
Many DBAs run production databases with timed statistics collection enabled at the system level so that various
system statistics (more than just SQL trace files) will include detailed timing information. Note that Oracle 8.1.5 had
some serious memory corruption bugs associated with enabling timed statistics collection at the instance level, but
these seem to have been fixed in Oracle 8.1.6.
On Unix platforms, Oracle will typically set permissions so that only the oracle user and members of the dba Unix
group can read the trace files. If you want anybody with a Unix login to be able to read the trace files, then you
should set the following undocumented (but supported) initialization parameter in the parameter file:
_trace_files_public = true
If you trace a database session that makes a large number of calls to the database server, the trace file can get quite
large. The initialization parameter max_dump_file_size allows you to set a maximum trace file size. On Unix
platforms, this parameter is specified in units of 512-byte blocks. Thus a setting of 10240 will limit trace files to 5 MB
apiece. When a SQL trace file reaches the maximum size, the database server process stops writing trace information
to the trace file. On Unix platforms there will be no limit on trace file size if you do not explicitly set the
max_dump_file_size parameter.
If you are tracing a session and realize that the trace file is about to reach the limit set by max_dump_file_size, you
can eliminate the limit dynamically so that you don’t lose trace information. To do this, query the PID column in
v$process to find the Oracle PID of the process writing the trace file. Then execute the following statements in
SQL*Plus:
CONNECT / AS SYSDBA
ORADEBUG SETORAPID <pid>
ORADEBUG UNLIMIT
Before you can use TKPROF, you need to generate a trace file and locate it. Oracle writes trace files on the database
server to the directory specified by the user_dump_dest initialization parameter. (Daemon processes such as PMON
write their trace files to the directory specified by background_dump_dest.) On Unix platforms, the trace file will
have a name that incorporates the operating system PID of the server process writing the trace file.
If there are a lot of trace files in the user_dump_dest directory, it could be tricky to find the one you want. One
tactic is to examine the timestamps on the files. Another technique is to embed a comment in a SQL statement in
the application that will make its way into the trace file. An example of this is as follows:
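For example, the application might issue a throwaway query containing a distinctive comment that is easy to search for in the trace files (the comment text here is arbitrary):

    SELECT /* TRACE MARKER: nightly billing run */ COUNT(*) FROM dual;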
Because TKPROF is a utility you invoke from the operating system and not from within a database session, there will
naturally be some variation in the user interface from one operating system platform to another. On Unix platforms,
you run TKPROF from the operating system prompt with a syntax as follows:
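The general form is (bracketed arguments are optional):

    tkprof <trace file> <output file> [explain=<username>/<password>] [sys=n]
           [insert=<script file>] [record=<script file>] [sort=<keyword>]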
If you invoke TKPROF with no arguments at all, you will get a help screen listing all of the options. This is especially
helpful because TKPROF offers many sort capabilities, but you select the desired sort by specifying a cryptic keyword.
The help screen identifies all of the sort keywords.
In its simplest form, you run TKPROF specifying the name of a SQL trace file and an output filename. TKPROF will
read the trace file and generate a report file with the output filename you specified. TKPROF will not connect to the
database, and the report will not include execution plans for the SQL statements. SQL statements that were
executed by the SYS user recursively (to dynamically allocate an extent in a dictionary-managed tablespace, for
example) will be included in the report, and the statements will appear in the report approximately in the order in
which they were executed in the database session that was traced.
If you include the explain keyword, TKPROF will connect to the database and execute an EXPLAIN PLAN statement
for each SQL statement found in the trace file. The execution plan results will be included in the report file. As we
will see later, TKPROF merges valuable information from the trace file into the execution plan display, making this
just about the most valuable way to display an execution plan. Note that the username you specify when running
TKPROF should be the same as the username connected in the database session that was traced. You do not need to
have a plan table in order to use the explain keyword—TKPROF will create and drop its own plan table if needed.
If you specify sys=n, TKPROF will exclude from the report SQL statements initiated by Oracle as the SYS user. This
will make your report look tidier because it will only contain statements actually issued by your application. The
theory is that Oracle internal SQL has already been fully optimized by the kernel developers at Oracle Corporation, so
you should not have to deal with it. However, using sys=n will exclude potentially valuable information from the
TKPROF report. Suppose the SGA is not properly sized on the instance and Oracle is spending a lot of time resolving
dictionary cache misses. This would manifest itself in lots of time spent on recursive SQL statements initiated by the
SYS user. Using sys=n would exclude this information from the report.
If you specify the insert keyword, TKPROF will generate a SQL script in addition to the regular report. This SQL script
creates a table called tkprof_table and inserts one row for each SQL statement displayed on the report. The row will
contain the text of the SQL statement traced and all of the statistics displayed in the report. You could use this
feature to effectively load the TKPROF report into the database and use SQL to analyze and manipulate the statistics.
I’ve never needed to use this feature, but I suppose it could be helpful in some situations.
If you specify the record keyword, TKPROF will generate another type of SQL script in addition to the regular report.
This SQL script will contain a copy of each SQL statement issued by the application while tracing was enabled. You
could get this same information from the TKPROF report itself, but this way could save some cutting and pasting.
The sort keyword is extremely useful. Typically, a TKPROF report may include hundreds of SQL statements, but you
may only be interested in a few resource intensive queries. The sort keyword allows you to order the listing of the
SQL statements so that you don’t have to scan the entire file looking for resource hogs. In some ways, the sort
feature is too powerful for its own good. For example, you cannot sort statements by CPU time consumed—instead
you sort by CPU time spent parsing, CPU time spent executing, or CPU time spent fetching.
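For example, to bring the statements that consumed the most CPU time during the fetch and execute phases to the top of the report, you could run (the file names here are illustrative):

    tkprof ora_12345.trc report.prf sort=fchcpu,execpu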
A sample TKPROF report for the invoice item query we’ve been using so far is as follows:
[The sample TKPROF report did not survive in this copy. It contained the report header, the traced invoice item query with its parse/execute/fetch call statistics, a row source operation listing and execution plan built from two NESTED LOOPS joins with unique index scans involving the INVOICE_ITEMS and INVOICES tables, and the closing summary statistics ("1 session in tracefile.", plan table RSCHRAG.prof$plan_table).]
You can see that there is a lot going on in a TKPROF report. We will talk about how to read the report and interpret
the different statistics in the next section.
In this section we will discuss how to read and interpret execution plans and TKPROF reports. While generating an
execution plan listing or creating a TKPROF report file is usually a straightforward process, analyzing the data and
reaching the correct conclusions can be more of an art. We’ll look at lots of examples along the way.
An execution plan is a hierarchical structure somewhat like an inverted tree. The SQL statement being examined can
be thought of as the root of the tree. This will be the first line on an execution plan listing, the line that is least
indented. This statement can be thought of as the result of one or more subordinate operations. Each of these
subordinate operations can possibly be decomposed further. This decomposition process continues repeatedly until
eventually even the most complex SQL statement is broken down into a set of basic data access operations.
SELECT customer_id, customer_name
FROM customers
WHERE UPPER(customer_name) LIKE 'ACME%'
ORDER BY customer_name;

0     SELECT STATEMENT
1   0   SORT ORDER BY
2   1     TABLE ACCESS (FULL) OF 'CUSTOMERS'
The root operation—that which we explained—is a SELECT statement. The output of the statement will be the
results of a sort operation (for the purposes of satisfying the ORDER BY clause). The input to the sort will be the
results of a full table scan of the customers table. Stated more clearly, the database server will execute this query by
checking every row in the customers table for a criteria match and sorting the results. Perhaps the developer
expected Oracle to use an index on the customer_name column to avoid a full table scan, but the use of the UPPER
function defeated the index. (A function-based index could be deployed to make this query more efficient.)
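For instance, a function-based index (available in Oracle 8i and later) on the uppercased column would allow an index scan; the index name here is just an illustration:

    CREATE INDEX customers_upper_name ON customers (UPPER(customer_name));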
0     SELECT STATEMENT
1   0   NESTED LOOPS
2   1     TABLE ACCESS (BY INDEX ROWID) OF 'INVOICES'
3   2       INDEX (RANGE SCAN) OF 'INVOICES_DATE' (NON-UNIQUE)
4   1     TABLE ACCESS (BY INDEX ROWID) OF 'CUSTOMERS'
5   4       INDEX (UNIQUE SCAN) OF 'CUSTOMERS_PK' (UNIQUE)
Again, the root operation is a SELECT statement. This time, the SELECT statement gets its input from the results of a
nested loops join operation. The nested loops operation takes as input the results of accesses to the invoices and
customers tables. (You can tell from the indenting that accesses to both tables feed directly into the nested loops
operation.) The invoices table is accessed by a range scan of the invoices_date index, while the customers table is
accessed by a unique scan of the customers_pk index.
In plainer language, here is how Oracle will execute this query: Oracle will perform a range scan on the
invoices_date index to find the ROWIDs of all rows in the invoices table that have an invoice date matching the query
criteria. For each ROWID found, Oracle will fetch the corresponding row from the invoices table, look up the
customer_id from the invoices record in the customers_pk index, and use the ROWID found in the customers_pk
index entry to fetch the correct customer record. This, in effect, joins the rows fetched from the invoices table with
their corresponding matches in the customers table. The results of the nested loops join operation are returned as
the query results.
GROUP BY a.customer_name;
0     SELECT STATEMENT
1   0   SORT GROUP BY
2   1     NESTED LOOPS (OUTER)
3   2       HASH JOIN
4   3         TABLE ACCESS (BY INDEX ROWID) OF 'INVOICES'
5   4           INDEX (RANGE SCAN) OF 'INVOICES_STATUS' (NON-UNIQUE)
6   3         TABLE ACCESS (FULL) OF 'CUSTOMERS'
7   2     INDEX (UNIQUE SCAN) OF 'INVOICE_ITEMS_PK' (UNIQUE)
This execution plan is more complex than the previous two, and here you can start to get a feel for the way in which
complex operations get broken down into simpler subordinate operations. To execute this query, the database
server will do the following: First Oracle will perform a range scan on the invoices_status index to get the ROWIDs of
all rows in the invoices table with the desired status. For each ROWID found, the record from the invoices table will
be fetched.
This set of invoice records will be set aside for a moment while the focus turns to the customers table. Here, Oracle
will fetch all customers records with a full table scan. To perform a hash join between the invoices and customers
tables, Oracle will build a hash from the customer records and use the invoice records to probe the customer hash.
Next, a nested loops join will be performed between the results of the hash join and the invoice_items_pk index. For
each row resulting from the hash join, Oracle will perform a unique scan of the invoice_items_pk index to find index
entries for matching invoice items. Note that Oracle gets everything it needs from the index and doesn’t even need
to access the invoice_items table at all. Also note that the nested loops operation is an outer join. A sort operation
for the purposes of grouping is performed on the results of the nested loops operation in order to complete the
SELECT statement.
It is interesting to note that Oracle chose to use a hash join and a full table scan on the customers table instead of
the more traditional nested loops join. In this database there are many invoices and a relatively small number of
customers, making a full table scan of the customers table less expensive than repeated index lookups on the
customers_pk index. But suppose the customers table was enormous and the relative number of invoices was quite
small. In that scenario a nested loops join might be better than a hash join. Examining the execution plan allows you
to see which join method Oracle is using. You could then apply optimizer hints to coerce Oracle to use alternate
methods and compare the performance.
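For example, hints along the following lines would coerce a nested loops join with customers as the inner table, for comparison against the hash join plan (the exact query text and aliases are sketched here for illustration):

    SELECT /*+ ORDERED USE_NL(a) */ a.customer_name, COUNT(*)
    FROM   invoices b, customers a
    WHERE  a.customer_id = b.customer_id
    GROUP BY a.customer_name;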
You may wonder how I got that whole detailed explanation out of the eight line execution plan listing shown above.
Did I read anything into the execution plan? No! It’s all there! Understanding the standard inputs and outputs of
each type of operation and coupling this with the indenting is key to reading an execution plan.
A nested loops join operation always takes two inputs: For every row coming from the first input, the second input
is executed once to find matching rows. A hash join operation also takes two inputs: The second input is read
completely once and used to build a hash. For each row coming from the first input, one probe is performed against
this hash. Sorting operations, meanwhile, take in one input. When the entire input has been read, the rows are
sorted and output in the desired order.
SELECT customer_name
FROM customers a
WHERE EXISTS
(
SELECT 1
FROM invoices_view b
WHERE b.customer_id = a.customer_id
AND b.number_of_lines > 100
)
ORDER BY customer_name;

0     SELECT STATEMENT
1   0   SORT ORDER BY
2   1     FILTER
3   2       TABLE ACCESS (FULL) OF 'CUSTOMERS'
4   2       VIEW INVOICES_VIEW
5   4         FILTER
6   5           SORT GROUP BY
7   6             NESTED LOOPS
8   7               TABLE ACCESS (BY INDEX ROWID) OF 'INVOICES'
9   8                 INDEX (RANGE SCAN) OF 'INVOICES_CUSTOMER_ID' (NON-UNIQUE)
10  7               INDEX (RANGE SCAN) OF 'INVOICE_ITEMS_PK' (UNIQUE)
This execution plan is somewhat complex because the query includes a subquery that the optimizer could not
rewrite as a simple join, and a view whose definition could not be merged into the query. The definition of the
invoices_view view is as follows:
CREATE OR REPLACE VIEW invoices_view
AS
SELECT   a.invoice_id, a.customer_id,
         COUNT(*) number_of_lines
FROM     invoices a, invoice_items b
WHERE    b.invoice_id = a.invoice_id
GROUP BY a.invoice_id, a.customer_id;
Oracle will assemble the view by performing an index range scan on the invoices_customer_id index and fetching
the rows from the invoices table containing one specific customer_id. For each invoice record found, the
invoice_items_pk index will be range scanned to get a nested loops join of invoices to their invoice_items records.
The results of the join are sorted for grouping, and then groups with 100 or fewer invoice_items records are filtered
out.
What is left at the step with ID 4 is a list of invoices for one specific customer that have more than 100 invoice_items
records associated. If at least one such invoice exists, then the customer passes the filter at the step with ID 2.
Finally, all customer records passing this filter are sorted for correct ordering and the results are complete.
Note that queries involving simple views will not result in a “view” operation in the execution plan. This is because
Oracle can often merge a view definition into the query referencing the view so that the table accesses required to
implement the view just become part of the regular execution plan. In this example, the GROUP BY clause
embedded in the view foiled Oracle’s ability to merge the view into the query, making a separate “view” operation
necessary in order to execute the query.
Also note that the filter operation can take on a few different forms. In general, a filter operation is where Oracle
looks at a set of candidate rows and eliminates some based on certain criteria. This criteria could involve a simple
test such as number_of_lines > 100 or it could be an elaborate subquery.
In this example, the filter at step ID 5 takes only one input. Here Oracle evaluates each row from the input one at a
time and either adds the row to the output or discards it as appropriate. Meanwhile, the filter at step ID 2 takes two
inputs. When a filter takes two inputs, Oracle reads the rows from the first input one at a time and executes the
second input once for each row. Based on the results of the second input, the row from the first input is either
added to the output or discarded.
Oracle is able to perform simple filtering operations while performing a full table scan. Therefore, a separate filter
operation will not appear in the execution plan when Oracle performs a full table scan and throws out rows that
don’t satisfy a WHERE clause. Filter operations with one input commonly appear in queries with view operations or
HAVING clauses, while filter operations with multiple inputs will appear in queries with EXISTS clauses.
An important note about execution plans and subqueries: When a SQL statement involves subqueries, Oracle tries
to merge the subquery into the main statement by using a join. If this is not feasible and the subquery does not have
any dependencies or references to the main query, then Oracle will treat the subquery as a completely separate
statement from the standpoint of developing an execution plan—almost as if two separate SQL statements were
sent to the database server. When you generate an execution plan for a statement that includes a fully autonomous
subquery, the execution plan may not include the operations for the subquery. In this situation, you need to
generate an execution plan for the subquery separately.
Although the plan table contains 24 columns, so far we have only been using six of them in our execution plan
listings. These six will get you very far in the tuning process, but some of the other columns can be mildly interesting
at times. Still other columns can be very relevant in specific situations.
The optimizer column in the plan table shows the mode (such as RULE or CHOOSE) used by the optimizer to
generate the execution plan. The timestamp column shows the date and time that the execution plan was
generated. The remarks column is an 80 byte field where you may put your own comments about each step of the
execution plan. You can populate the remarks column by using an ordinary UPDATE statement against the plan
table.
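For example (the comment text is arbitrary):

    UPDATE plan_table
    SET    remarks = 'Full table scan here is the bottleneck'
    WHERE  id = 2;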
The object_owner, object_node, and object_instance columns can help you further distinguish the database object
involved in the operation. You might look at the object_owner column, for example, if objects in multiple schemas
have the same name and you are not sure which one is being referenced in the execution plan. The object_node is
relevant in distributed queries or transactions. It indicates the database link name to the object if the object resides
in a remote database. The object_instance column is helpful in situations such as a self-join where multiple instances
of the same object are used in one SQL statement.
The partition_start, partition_stop, and partition_id columns offer additional information when a partitioned table is
involved in the execution plan. The distribution column gives information about how the multiple Oracle processes
involved in a parallel query or parallel DML operation interact with each other.
The cost, cardinality, and bytes columns show estimates made by the cost-based optimizer as to how expensive an
operation will be. Remember that the execution plan is inserted into the plan table without actually executing the
SQL statement. Therefore, these columns reflect Oracle’s estimates and not the actual resources used. While it can
be amusing to look at the optimizer’s predictions, sometimes you need to take them with a grain of salt. Later we’ll
see that TKPROF reports can include specific information about actual resources used at each step of the execution
plan.
The “other” column in the plan table is a wild card where Oracle can store any sort of textual information about
each step of an execution plan. The other_tag column gives an indication of what has been placed in the “other”
column. This column will contain valuable information during parallel queries and distributed operations.
Consider the following distributed query and output from the SQL*Plus autotrace facility:
Execution Plan
----------------------------------------------------------
0     SELECT STATEMENT Optimizer=HINT: RULE
1   0   SORT (ORDER BY)
2   1     MERGE JOIN
3   2       SORT (JOIN)
4   3         REMOTE* SALES.ACME.COM
5   2       SORT (JOIN)
6   5         TABLE ACCESS (FULL) OF 'CUSTOMERS'
Here is how to read this execution plan: Oracle observed a hint and used the RULE optimizer mode in order to
develop the execution plan. First, a remote query will be sent to sales.acme.com to fetch the contact_ids and names
from a remote table. These fetched rows will be sorted for joining purposes and temporarily set aside. Next, Oracle
will fetch all records from the customers table with a full table scan and sort them for joining purposes. Next, the set
of contacts and the set of customers will be joined using the merge join algorithm. Finally, the results of the merge
join will be sorted for proper ordering and the results will be returned.
The merge join operation always takes two inputs, with the prerequisite that each input has already been sorted on
the join column or columns. The merge join operation reads both inputs in their entirety at one time and outputs the
results of the join. Merge joins and hash joins are usually more efficient than nested loops joins when remote tables
are involved, because these types of joins will almost always involve fewer network roundtrips. Hash joins are not
supported when rule-based optimization is used. Because of the RULE hint, Oracle chose a merge join.
Every TKPROF report starts with a header that lists the TKPROF version, the date and time the report was generated,
the name of the trace file, the sort option used, and a brief definition of the column headings in the report. Every
report ends with a series of summary statistics. You can see the heading and summary statistics on the sample
TKPROF report shown earlier in this paper.
The main body of the TKPROF report consists of one entry for each distinct SQL statement that was executed by the
database server while SQL trace was enabled. There are a few subtleties at play in the previous sentence. If an
application queries the customers table 50 times, each time specifying a different customer_id as a literal, then there
will be 50 separate entries in the TKPROF report. If however, the application specifies the customer_id as a bind
variable, then there will be only one entry in the report with an indication that the statement was executed 50 times.
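For instance, literal-laden queries like these would each get their own entry in the report (table and literals shown for illustration):

    SELECT customer_name FROM customers WHERE customer_id = 1001;
    SELECT customer_name FROM customers WHERE customer_id = 1002;

while the bind variable form would appear once, with an execution count of 50:

    SELECT customer_name FROM customers WHERE customer_id = :custid;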
Furthermore, the report will include SQL statements initiated by the database server itself in order to perform
so-called "recursive operations," such as managing the data dictionary and dictionary cache.
The entries for each SQL statement in the TKPROF report are separated by a row of asterisks. The first part of each
entry lists the SQL statement and statistics pertaining to the parsing, execution, and fetching of the SQL statement.
Consider the following example:
********************************************************************************
SELECT table_name
FROM user_tables
ORDER BY table_name

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.02          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch       14      0.59       0.99          0      33633          0         194
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       16      0.60       1.01          0      33633          0         194

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: nn  (RSCHRAG)   [recursive depth: 0]
This may not seem like a useful example because it is simply a query against a dictionary view and does not involve
application tables. However, this query actually serves the purpose well from the standpoint of highlighting the
elements of a TKPROF report.
Reading across, we see that while SQL trace was enabled, the application called on the database server to parse this
statement once. 0.01 CPU seconds over a period of 0.02 elapsed seconds were used on the parse call, although no
physical disk I/Os or even any buffer gets were required. (We can infer that all dictionary data required to parse the
statement were already in the dictionary cache in the SGA.)
The next line shows that the application called on Oracle to execute the query once, with less than 0.01 seconds of
CPU time and elapsed time being used on the execute call. Again, no physical disk I/Os or buffer gets were required.
The fact that almost no resources were used on the execute call might seem strange, but it makes perfect sense
when you consider that Oracle defers all work on most SELECT statements until the first row is fetched.
The next line indicates that the application performed 14 fetch calls, retrieving a total of 194 rows. The 14 calls used
a total of 0.59 CPU seconds and 0.99 seconds of elapsed time. Although no physical disk I/Os were performed,
33,633 buffers were gotten in consistent mode (consistent gets). In other words, there were 33,633 hits in the buffer
cache and no misses. I ran this query from SQL*Plus, and we can see here that SQL*Plus uses an array interface to
fetch multiple rows on one fetch call. We can also see that, although no disk I/Os were necessary, it took quite a bit
of processing to complete this query.
The remaining lines on the first part of the entry for this SQL statement show that there was a miss in the library
cache (the SQL statement was not already in the shared pool), the CHOOSE optimizer goal was used to develop the
execution plan, and the parsing was performed in the RSCHRAG schema.
Notice the text in square brackets concerning recursive depth. This did not actually appear on the report—I added it
for effect. The fact that the report did not mention recursive depth for this statement indicates that it was executed
at the top level. In other words, the application issued this statement directly to the database server. When
recursion is involved, the TKPROF report will indicate the depth of the recursion next to the parsing user.
There are two primary ways in which recursion occurs. Data dictionary operations can cause recursive SQL
operations. When a query references a schema object that is missing from the dictionary cache, a recursive query is
executed in order to fetch the object definition into the dictionary cache. For example, a query from a view whose
definition is not in the dictionary cache will cause a recursive query against view$ to be parsed in the SYS schema.
Also, dynamic space allocations in dictionary-managed tablespaces will cause recursive updates against uet$ and
fet$ in the SYS schema.
Use of database triggers and stored procedures can also cause recursion. Suppose an application inserts a row into a
table that has a database trigger. When the trigger fires, its statements run at a recursion depth of one. If the trigger
invokes a stored procedure, the recursion depth could increase to two. This could continue through any number of
levels.
So far we have been looking at the top part of the SQL statement entry in the TKPROF report. The remainder of the
entry consists of a row source operation list and optionally an execution plan display. (If the explain keyword was not
used when the TKPROF report was generated, then the execution plan display will be omitted.) Consider the
following example, which is the rest of the entry shown above:
(row source operation listing and execution plan display omitted)
The row source operation listing looks very much like an execution plan. It is based on data collected from the SQL
trace file and can be thought of as a “poor man’s execution plan”. It is close, but not complete.
The execution plan shows the same basic information you could get from the autotrace facility of SQL*Plus or by
querying the plan table after an EXPLAIN PLAN statement—with one key difference. The rows column along the left
side of the execution plan contains a count of how many rows of data Oracle processed at each step during the
execution of the statement. This is not an estimate from the optimizer, but rather actual counts based on the
contents of the SQL trace file.
Although the query in this example goes against a dictionary view and is not terribly interesting, you can see that
Oracle did a lot of work to get the 194 rows in the result: 11,146 range scans were performed against the i_obj2
index, followed by 11,146 accesses on the obj$ table. This led to 12,665 non-unique lookups on the i_obj# index,
11,339 accesses on the tab$ table, and so on.
In situations where it is feasible to actually execute the SQL statement you wish to explain (as opposed to merely
parsing it as with the EXPLAIN PLAN statement), I believe TKPROF offers the best execution plan display. GUI tools
such as TOAD will give you results with much less effort, but the display you get from TOAD is not 100% complete
and in certain situations critical information is missing. (Again, my experience is with the free version!) Meanwhile,
simple plan table query scripts like my explain.sql presented earlier in this paper or utlxpls.sql display very
incomplete information. TKPROF gives the most relevant detail, and the actual row counts on each operation can be
very useful in diagnosing performance problems. Autotrace in SQL*Plus gives you most of the information and is
easy to use, so I give it a close second place.
The information displayed in a TKPROF report can be extremely valuable in the application tuning process. Of course
the execution plan listing will give you insights into how Oracle executes the SQL statements that make up the
application, and ways to potentially improve performance. However, the other elements of the TKPROF report can
be helpful as well.
Looking at the repetition of SQL statements and the library cache miss statistics, you can determine if the
application is making appropriate use of Oracle’s shared SQL facility. Are bind variables being used, or is every query
a unique statement that must be parsed from scratch?
From the counts of parse, execute, and fetch calls, you can see if applications are making appropriate use of Oracle’s
APIs. Is the application fetching rows one at a time? Is the application reparsing the same cursor thousands of times
instead of holding it open and avoiding subsequent parses? Is the application submitting large numbers of simple
SQL statements instead of bulking them into PL/SQL blocks or perhaps using array binds?
Looking at the CPU and I/O statistics, you can see which statements consume the most system resources. Could
some statements be tuned so as to be less CPU intensive or less I/O intensive? Would shaving just a few buffer gets
off of a statement’s execution plan have a big impact because the statement gets executed so frequently?
The row counts on the individual operations in an execution plan display can help identify inefficiencies. Are tables
being joined in the wrong order, causing large numbers of rows to be joined and eliminated only at the very end?
Are large numbers of duplicate rows being fed into sorts for uniqueness when perhaps the duplicates could have
been weeded out earlier on?
TKPROF reports may seem long and complicated, but nothing in the report is without purpose. (Well, okay, the row
source operation listing sometimes isn’t very useful!) You can learn volumes about how your application interacts
with the database server by generating and reading a TKPROF report.
Conclusion
In this paper we have discussed how to generate execution plans and TKPROF reports, and how to interpret them.
We’ve walked through several examples in order to clarify the techniques presented. When you have a firm
understanding of how the Oracle database server executes your SQL statements and what resources are required
each step of the way, you have the ability to find bottlenecks and tune your applications for peak performance.
EXPLAIN PLAN and TKPROF give you the information you need for this process.
When is a full table scan better than an index range scan? When is a nested loops join better than a hash join? In
which order should tables be joined? These are all questions without universal answers. In reality, there are many
factors that contribute to determining which join method is better or which join order is optimal.
In this paper we have looked at the tools that give you the information you need to make tuning decisions. How to
translate an execution plan or TKPROF report into an action plan to achieve better performance is not something
that can be taught in one paper. You will need to read several papers or books in order to give yourself some
background on the subject, and then you will need to try potential solutions in a test environment and evaluate
them. If you do enough application tuning, you will develop an intuition for spotting performance problems and
potential solutions. This intuition comes from lots of experience, and you can’t gain it solely from reading papers or
books.
For more information about the EXPLAIN PLAN facility, execution plans in general, and TKPROF, consult the Oracle
manual entitled Oracle8i Designing and Tuning for Performance. To learn more about application tuning techniques,
I suggest you pick up Richard Niemiec’s tome on the subject, Oracle Performance Tuning Tips & Techniques,
available from Oracle Press.
Roger Schrag has been an Oracle DBA and application architect for over eleven years, starting out at Oracle
Corporation on the Oracle Financials development team. He is the founder of Database Specialists, Inc., a consulting
group specializing in business solutions based on Oracle technology. You can visit Database Specialists on the web at
http://www.dbspecialists.com, and you can reach Roger by calling +1.415.344.0500.
Regular Trace
- Enable the trace option by selecting the Enable Trace Checkbox in the concurrent program definition.
- Disable the trace option by unchecking the Enable Trace Checkbox in the concurrent program definition.
Note: If it is a custom report, there are mandatory XML tags that the developers must add. Without them, the
above trace will not work.
- Navigation:
- Log in to Self Service as the same user that was used to set the profile value.
- Click the Diagnostics icon at the top of the page. Two options are displayed: Show Log and Set Trace Level.
- Select Set Trace Level and click Go. The available trace levels are listed:
Disable Trace
Trace (regular)
Trace with binds
Trace with waits
Trace with binds and waits
- Select the desired level and click Save.
- Perform the transaction to be traced, then return to the Diagnostics page, select Disable Trace, and click Go.
- Note the trace id numbers - there will be more than one - and exit the Application.
Enable the SQL Trace facility for the desired session, and run the application.
Run TKPROF to translate the trace file created in Step 2 into a readable output file. This step can optionally create a
SQL script that can be used to store the statistics in a database.
Interpret the output file created in Step 3.

Formatting Trace File with TKPROF
Run TKPROF procedure on the raw trace files. For example, Doyen_ora_18190.trc is the name of the raw trace file
and trace1.txt is the name of the TKPROF file.
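Using those file names, the command line would look like the following sketch (the explain and sort arguments are optional, and the apps password is a placeholder):

```shell
# translate the raw trace into a readable report, sorted by elapsed time
tkprof Doyen_ora_18190.trc trace1.txt explain=apps/<apps_password> sort=prsela,exeela,fchela
```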
To enable trace for an API when executed from a SQL script outside of Oracle Applications
-- Enable trace with binds and waits (level 12)
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';
-- Set the trace file identifier, to locate the file on the server
ALTER SESSION SET TRACEFILE_IDENTIFIER = '<identifier>';
-- Execute the API from the SQL script, in the same session.
EXEC <procedure name>;
-- Disable trace when done
ALTER SESSION SET EVENTS '10046 trace name context off';
2. Start the debug session with the SPID of the process that needs to be traced.
The oradebug command below will enable the maximum tracing possible:
3. Obtain the trace file name. The oradebug facility provides an easy way to obtain the file name:
c:\oracle9i\admin\ORCL92\udump\mooracle_ora_2280.trc
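The oradebug commands themselves were not reproduced above; a typical sequence, run from SQL*Plus as SYSDBA, looks like the following sketch (the SPID 2280 matches the trace file name shown above):

```sql
-- attach to the target server process by its OS process id (SPID)
oradebug setospid 2280
-- remove the trace file size limit
oradebug unlimit
-- enable level 12 (binds + waits) tracing, the maximum 10046 tracing
oradebug event 10046 trace name context forever, level 12
-- ... reproduce the problem, then stop tracing:
oradebug event 10046 trace name context off
-- display the trace file name
oradebug tracefile_name
```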
Now we can use the TKPROF utility to get the readable format.
For example, to enable level 1 trace in a session with SID 9 and serial number 29, use
dbms_support.start_trace_in_session (9,29);
and to stop tracing in the same session, use
dbms_support.stop_trace_in_session (9,29);
Reference:
How To Use SQL Trace And TKPROF For Performance Issues with EBusiness Suite [ID 980711.1]
How to Enable Trace or Debug for APIs executed as SQL Script Outside of the Applications ? [Video] [ID 869386.1]
Tracing is a very handy tool for any DBA, and often the first thing to turn to when diagnosing any performance-related
issue.
Tracing one’s own session– many times while working on performance issue, I need to take the trace of my own
database session. Steps which I follow
alter session set statistics_level = ALL; (the default from Oracle 9.2 onward is TYPICAL; ALL gathers additional timed and row-source statistics)
alter session set tracefile_identifier = '<identifier>';
alter session set sql_trace = true; --> this enables a basic level 1 trace OR
alter session set events '10046 trace name context forever, level <x>';
Go to the user dump location (user_dump_dest) on the database server and look for the file whose name contains
the tracefile_identifier.
Tracing other user’s database session(You MUST know the user session details – SID and Serial#). Different ways are
EXECUTE sys.dbms_system.set_ev(<SID>, <Serial#>, 10046, <level>, '');
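From Oracle 10g onward, DBMS_MONITOR is the supported way to trace another user's session; a minimal sketch (the SID 9 and serial# 29 are placeholders):

```sql
-- enable a 10046-style trace with binds and waits for another session
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 9, serial_num => 29, waits => TRUE, binds => TRUE);
-- ... reproduce the problem, then:
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 9, serial_num => 29);
```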
User is complaining that it takes a lot of time while saving the transaction.
In order to troubleshoot or isolate the issue, we MUST know the query that runs in the database when the user
clicks the 'SAVE' button, and hence should enable the trace before the 'SAVE' button is hit.
Navigate to the required form (You need to enable the trace immediately before the ‘Save’ button is hit. This is done
to ensure ONLY the offending SQLs are captured in the trace file)
Once on the form, click Help --> Diagnostics --> Trace and select any of the five options listed there. This enables
the trace, and Oracle will prompt you with the trace file identifier and its location as well.
Perform the problematic transaction and, once done, disable the trace (follow the same navigation and select 'No Trace').
In such scenarios, we need to trace the complete navigation or user activity to identify the offending SQL. This is
achieved by enabling the trace using "profile level tracing".
Profile –> System –> Check the ‘USER’ check box –> put the username whose transaction is being traced (NEVER DO
THIS at SITE level) –> Query for the profile “Initialization SQL Statement – Custom”
Update this value for the USER with:
BEGIN FND_CTL.FND_SESS_CTL('', '', 'TRUE', 'TRUE', '', 'ALTER SESSION SET
TRACEFILE_IDENTIFIER=''<any_identifier>'' MAX_DUMP_FILE_SIZE=''UNLIMITED'' EVENTS=''10046 TRACE NAME
CONTEXT FOREVER, LEVEL <level_no>'''); END;
CAUTION: Be very careful in updating the above value, since any wrong entry can prevent the specific user from
logging in to the Oracle application.
Save this profile setting and ask the user to do the transaction. And once done, again query the profile for the
specific user and remove the entry
Performance Issue for any concurrent program
This is normally done through enabling “Enable Trace” check box after querying the concurrent program
For enabling Level 12 trace for any concurrent program, either it can be done from database side (as described
above under section “Tracing User session” OR can be done from Oracle application front end. How to do it from
Front end–
Set the profile “Concurrent: Allow Debugging” to YES (ALWAYS DO this at USER level)
Login to application using the same responsibility as used to run the concurrent program
On the request submission screen, “Debug option” would be enabled (otherwise it is disabled)
Click on the “Debug Option” button and select the check box “SQL Trace”
You need to set the profile "FND: Diagnostics" to YES (ALWAYS do at USER level)
You will see the "Diagnostics" link on the top right corner of the page
Click on GO
The next page will give you the option to select the kind of trace you want; once selected, the trace file identifier
will be displayed on the screen. You need to make a note of this.
Now do the required transactions and once done, Disable the trace (Diagnostic–>set trace level –> Disable Trace)
( Navigate to:
Profile -> System
Query the profile "Concurrent: Allow Debugging". If it isn't set, then set it, log out, and bounce the APPS services.
The 'Debug Options' button on the concurrent program will now be enabled. )
select fcr.request_id "Request Id"
, fcp.user_concurrent_program_name "Program"
, decode(fcr.phase_code,'R','Running')||'-'||decode(fcr.status_code,'R','Normal') "Status"
, p1.value||'/'||lower(p2.value)||'_ora_'||fcr.oracle_process_id||'.trc' "Trace File"
from fnd_concurrent_requests fcr
, v$parameter p1
, v$parameter p2
, fnd_concurrent_programs_vl fcp
, fnd_executables fe
where p1.name='user_dump_dest'
and p2.name='db_name'
and fcr.concurrent_program_id=fcp.concurrent_program_id
and fcr.program_application_id=fcp.application_id
and fcp.application_id=fe.application_id
and fcp.executable_id=fe.executable_id
and fcr.request_id=&request_id;
8. Upload the raw trace file from step 4 above, and the TKPROF'd trace file as well.
Tablespaces
A database is divided into one or more logical storage units called tablespaces. A database administrator can use
tablespaces to do the following:
- Control disk space allocation for database data
- Assign specific space quotas for database users
- Control availability of data by taking individual tablespaces online or offline
- Perform partial database backup or recovery operations
- Allocate data storage across devices to improve performance
Every Oracle database contains a tablespace named SYSTEM that Oracle creates automatically when the database is
created. The SYSTEM tablespace always contains the data dictionary tables for the entire database.
A small database might need only the SYSTEM tablespace; however, it is recommended that you create at least one
additional tablespace to store user data separate from data dictionary information. This allows you more flexibility in
various database administration operations and can reduce contention among dictionary objects and schema objects
for the same datafiles.
Note: The SYSTEM tablespace must always be kept online. See "Online and Offline Tablespaces" .
All data stored on behalf of stored PL/SQL program units (procedures, functions, packages and triggers) resides in
the SYSTEM tablespace. If you create many of these PL/SQL objects, the database administrator needs to plan for the
space in the SYSTEM tablespace that these objects use. For more information about these objects and the space that
they require, see Chapter 14, "Procedures and Packages", and Chapter 15, "Database Triggers".
To enlarge a database, you have three options. You can add another datafile to one of its existing tablespaces,
thereby increasing the amount of disk space allocated for the corresponding tablespace. Figure 4 - 2 illustrates this
kind of space increase.
Alternatively, a database administrator can create a new tablespace (defined by an additional datafile) to increase
the size of a database. Figure 4 - 3 illustrates this.
The size of a tablespace is the size of the datafile(s) that constitute the tablespace, and the size of a database is the
collective size of the tablespaces that constitute the database.
The third option is to change a datafile's size or allow datafiles in existing tablespaces to grow dynamically as more
space is needed. You accomplish this by altering existing files or by adding files with dynamic extension properties.
Figure 4 - 4 illustrates this.
For more information about increasing the amount of space in your database, see the Oracle7 Server
Administrator's Guide.
Online and Offline Tablespaces
A database administrator can bring any tablespace (except the SYSTEM tablespace) in an Oracle database online
(accessible) or offline (not accessible) whenever the database is open.
Note: The SYSTEM tablespace must always be online because the data dictionary must always be available to Oracle.
A tablespace is normally online so that the data contained within it is available to database users. However, the
database administrator might take a tablespace offline for any of the following reasons:
to make a portion of the database unavailable, while allowing normal access to the remainder of the database
to perform an offline tablespace backup (although a tablespace can be backed up while online and in use)
to make an application and its group of tables temporarily unavailable while updating or maintaining the application
When a tablespace goes offline, Oracle does not permit any subsequent SQL statements to reference objects
contained in the tablespace. Active transactions with completed statements that refer to data in a tablespace that
has been taken offline are not affected at the transaction level. Oracle saves rollback data corresponding to
statements that affect data in the offline tablespace in a deferred rollback segment (in the SYSTEM tablespace).
When the tablespace is brought back online, Oracle applies the rollback data to the tablespace, if needed.
You cannot take a tablespace offline if it contains any rollback segments that are in use.
When a tablespace goes offline or comes back online, it is recorded in the data dictionary in the SYSTEM tablespace.
If a tablespace was offline when you shut down a database, the tablespace remains offline when the database is
subsequently mounted and reopened.
You can bring a tablespace online only in the database in which it was created because the necessary data dictionary
information is maintained in the SYSTEM tablespace of that database. An offline tablespace cannot be read or edited
by any utility other than Oracle. Thus, tablespaces cannot be transferred from database to database (transfer of
Oracle data can be achieved with tools described in Oracle7 Server Utilities).
Oracle automatically changes a tablespace from online to offline when certain errors are encountered (for example,
when the database writer process, DBWR, fails in several attempts to write to a datafile of the tablespace). Users
trying to access tables in the tablespace with the problem receive an error. If the problem that causes this disk I/O to
fail is media failure, the tablespace must be recovered after you correct the hardware problem.
By using multiple tablespaces to separate different types of data, the database administrator can also take specific
tablespaces offline for certain procedures, while other tablespaces remain online and the information in them is still
available for use. However, special circumstances can occur when tablespaces are taken offline. For example, if two
tablespaces are used to separate table data from index data, the following is true:
If the tablespace containing the indexes is offline, queries can still access table data because queries do not require
an index to access the table data.
If the tablespace containing the tables is offline, the table data in the database is not accessible because the tables
are required to access the data.
In summary, if Oracle determines that it has enough information in the online tablespaces to execute a statement, it
will do so. If it needs data in an offline tablespace, then it causes the statement to fail.
Read-Only Tablespaces
The primary purpose of read-only tablespaces is to eliminate the need to perform backup and recovery of large,
static portions of a database. Oracle never updates the files of a read-only tablespace, and therefore the files can
reside on read-only media, such as CD ROMs or WORM drives.
Note: Because you can only bring a tablespace online in the database in which it was created, read-only tablespaces
are not meant to satisfy archiving or data publishing requirements.
Whenever you create a new tablespace, it is always created as read-write. The READ ONLY option of the ALTER
TABLESPACE command allows you to change the tablespace to read-only, making all of its associated datafiles read-
only as well. You can then use the READ WRITE option to make a read-only tablespace writeable again.
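For example, the two commands described above look like this (the tablespace name is a placeholder):

```sql
-- make an existing tablespace and all of its datafiles read-only
ALTER TABLESPACE history_data READ ONLY;
-- make the tablespace writeable again
ALTER TABLESPACE history_data READ WRITE;
```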
Read-only tablespaces cannot be modified. Therefore, they do not need repeated backup. Also, should you need to
recover your database, you do not need to recover any read-only tablespaces, because they could not have been
modified.
You can drop items, such as tables and indexes, from a read-only tablespace, just as you can drop items from an
offline tablespace. However, you cannot create or alter objects in a read-only tablespace.
Use the SQL command ALTER TABLESPACE to change a tablespace to read-only. For information on the ALTER
TABLESPACE command, see the Oracle7 Server SQL Reference.
Making a tablespace read-only does not change its offline or online status.
Offline datafiles cannot be accessed. Bringing a datafile in a read-only tablespace online makes the file readable. The
file cannot be written to unless its associated tablespace is returned to the read-write state. The files of a read-only
tablespace can independently be taken online or offline using the DATAFILE option of the ALTER DATABASE
command.
You cannot add datafiles to a tablespace that is read-only, even if you take the tablespace offline. When you add a
datafile, Oracle must update the file header, and this write operation is not allowed.
To update a read-only tablespace, you must first make the tablespace writeable. After updating the tablespace, you
can then reset it to be read-only.
Read-only tablespaces have several implications upon instance or media recovery. See Chapter 24, "Database
Recovery", for more information about recovery.
Temporary Tablespaces
Space management for sort operations is performed more efficiently using temporary tablespaces designated
exclusively for sorts. This scheme effectively eliminates serialization of space management operations involved in the
allocation and deallocation of sort space. All operations that use sorts, including joins, index builds, ordering (ORDER
BY), the computation of aggregates (GROUP BY), and the ANALYZE command to collect optimizer statistics, benefit
from temporary tablespaces. The performance gains are significant in parallel server environments.
A temporary tablespace is a tablespace that can only be used for sort segments. No permanent objects can reside in
a temporary tablespace. Sort segments are used when a segment is shared by multiple sort operations. One sort
segment exists in every instance that performs a sort operation in a given tablespace.
Temporary tablespaces provide performance improvements when you have multiple sorts that are too large to fit
into memory. The sort segment of a given temporary tablespace is created at the time of the first sort operation. The
sort segment grows by allocating extents until the segment size is equal to or greater than the total storage demands
of all of the active sorts running on that instance.
You can also alter a tablespace from PERMANENT to TEMPORARY or vice versa using the following syntax:
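The syntax takes the following form (the tablespace name is a placeholder):

```sql
-- dedicate the tablespace to sort segments only
ALTER TABLESPACE temp_ts TEMPORARY;
-- allow permanent objects in the tablespace again
ALTER TABLESPACE temp_ts PERMANENT;
```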
For more information on the CREATE TABLESPACE and ALTER TABLESPACE Commands, see Chapter 4 of Oracle7
Server SQL Reference.
Datafiles
A tablespace in an Oracle database consists of one or more physical datafiles. A datafile can be associated with only
one tablespace, and only one database.
When a datafile is created for a tablespace, Oracle creates the file by allocating the specified amount of disk space
plus the overhead required for the file header. When a datafile is created, the operating system is responsible for
clearing old information and authorizations from a file before allocating it to Oracle. If the file is large, this process
might take a significant amount of time.
Additional Information: For information on the amount of space required for the file header of datafiles on your
operating system, see your Oracle operating system specific documentation.
Since the first tablespace in any database is always the SYSTEM tablespace, Oracle automatically allocates the first
datafiles of any database for the SYSTEM tablespace during database creation.
Datafile Contents
After a datafile is initially created, the allocated disk space does not contain any data; however, Oracle reserves the
space to hold only the data for future segments of the associated tablespace -- it cannot store any other program's
data. As a segment (such as the data segment for a table) is created and grows in a tablespace, Oracle uses the free
space in the associated datafiles to allocate extents for the segment.
The data in the segments of objects (data segments, index segments, rollback segments, and so on) in a tablespace
are physically stored in one or more of the datafiles that constitute the tablespace. Note that a schema object does
not correspond to a specific datafile; rather, a datafile is a repository for the data of any object within a specific
tablespace. Oracle allocates the extents of a single segment in one or more datafiles of a tablespace; therefore, an
object can "span" one or more datafiles. Unless table "striping" is used, the database administrator and end-users
cannot control which datafile stores an object.
Size of Datafiles
You can alter the size of a datafile after its creation or you can specify that a datafile should dynamically grow as
objects in the tablespace grow. This functionality allows you to have fewer datafiles per tablespace and can simplify
administration of datafiles.
For more information about resizing datafiles, see the Oracle7 Server Administrator's Guide.
Offline Datafiles
You can take tablespaces offline (make unavailable) or bring them online (make available) at any time. Therefore, all
datafiles making up a tablespace are taken offline or brought online as a unit when you take the tablespace offline or
bring it online, respectively. You can take individual datafiles offline; however, this is normally done only during
certain database recovery procedures.
Pseudocolumns
A pseudocolumn behaves like a table column, but is not actually stored in the table. You can select from
pseudocolumns, but you cannot insert, update, or delete their values
Pseudo-column
A pseudo-column is an Oracle assigned value (pseudo-field) used in the same context as an Oracle Database column,
but not stored on disk. SQL and PL/SQL recognize the following SQL pseudocolumns, which return specific data
items: SYSDATE, SYSTIMESTAMP, ROWID, ROWNUM, UID, USER, LEVEL, CURRVAL, NEXTVAL, ORA_ROWSCN, etc.
Pseudocolumns are not actual columns in a table but they behave like columns. For example, you can select values
from a pseudocolumn. However, you cannot insert into, update, or delete from a pseudocolumn. Also note that
pseudocolumns are allowed in SQL statements, but not in procedural statements.
For example:

SELECT SYSDATE, SYSTIMESTAMP FROM dual;

SYSDATE   SYSTIMESTAMP
--------- ----------------------------------------

SELECT UID, USER FROM dual;

       UID USER
---------- ------------------------------
        50 MICHEL
A sequence is a schema object that generates sequential numbers. When you create a sequence, you can specify its
initial value and an increment. CURRVAL returns the current value in a specified sequence.
Before you can reference CURRVAL in a session, you must use NEXTVAL to generate a number. A reference to
NEXTVAL stores the current sequence number in CURRVAL. NEXTVAL increments the sequence and returns the next
value. To obtain the current or next value in a sequence, you must use dot notation, as follows:
sequence_name.CURRVAL
sequence_name.NEXTVAL
After creating a sequence, you can use it to generate unique sequence numbers for transaction processing.
However, you can use CURRVAL and NEXTVAL only in a SELECT list, the VALUES clause, and the SET clause. In the
following example, you use a sequence to insert the same employee number into two tables:
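The example below is a sketch of that pattern; the table, column, and sequence names are illustrative:

```sql
-- NEXTVAL generates the new employee number;
-- CURRVAL reuses the same number later in the session
INSERT INTO emp (empno, ename)
  VALUES (emp_seq.NEXTVAL, 'SMITH');
INSERT INTO emp_audit (empno, action)
  VALUES (emp_seq.CURRVAL, 'INSERT');
```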
If a transaction generates a sequence number, the sequence is incremented immediately whether you commit or roll
back the transaction.
LEVEL
You use LEVEL with the SELECT CONNECT BY statement to organize rows from a database table into a tree structure.
LEVEL returns the level number of a node in a tree structure. The root is level 1, children of the root are level 2,
grandchildren are level 3, and so on.
In the START WITH clause, you specify a condition that identifies the root of the tree. You specify the direction in
which the query walks the tree (down from the root or up from the branches) with the PRIOR operator.
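As a sketch against the classic emp table (the empno and mgr columns are assumed):

```sql
-- walk the employee hierarchy down from the employee with no manager;
-- LEVEL is 1 for the root, 2 for direct reports, and so on
SELECT LEVEL, ename
FROM emp
START WITH mgr IS NULL
CONNECT BY PRIOR empno = mgr;
```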
ROWID
ROWID returns the rowid (binary address) of a row in a database table. You can use variables of type UROWID to
store rowids in a readable format. In the following example, you declare a variable named row_id for that purpose:
DECLARE row_id UROWID;
When you select or fetch a physical rowid into a UROWID variable, you can use the function ROWIDTOCHAR, which
converts the binary value to an 18-byte character string. Then, you can compare the UROWID variable to the ROWID
pseudocolumn in the WHERE clause of an UPDATE or DELETE statement to identify the latest row fetched from a
cursor.
ROWNUM
ROWNUM returns a number indicating the order in which a row was selected from a table. The first row selected
has a ROWNUM of 1, the second row has a ROWNUM of 2, and so on. If a SELECT statement includes an ORDER BY
clause, ROWNUMs are assigned to the retrieved rows before the sort is done.
You can use ROWNUM in an UPDATE statement to assign unique values to each row in a table. Also, you can use
ROWNUM in the WHERE clause of a SELECT statement to limit the number of rows retrieved, as follows:
DECLARE
   CURSOR c1 IS
      SELECT empno, sal FROM emp
      WHERE sal > 2000 AND ROWNUM < 10; -- returns 9 rows
The value of ROWNUM increases only when a row is retrieved, so the only meaningful uses of ROWNUM in a WHERE
clause are ROWNUM < positive_constant and ROWNUM <= positive_constant.
ORA_ROWSCN
ORA_ROWSCN returns the system change number (SCN) of the last change inside the block containing a row. It can
return the last modification for the row if the table is created with the option ROWDEPENDENCIES (default is
NOROWDEPENDENCIES).
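For example (emp is an assumed table; SCN_TO_TIMESTAMP converts the SCN to an approximate time):

```sql
-- show the SCN, and the approximate time, of the last change to the block (or row)
SELECT ora_rowscn, scn_to_timestamp(ora_rowscn), empno
FROM emp
WHERE empno = 7369;
```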
Understanding Indexes
Re-creating Indexes
Although cost-based optimization helps avoid the use of nonselective indexes within query execution, the SQL
engine must continue to maintain all indexes defined against a table, regardless of whether they are used. Index
maintenance can present a significant CPU and I/O resource demand in any write-intensive application. In other
words, do not build indexes unless necessary.
To maintain optimal performance, drop indexes that an application is not using. You can find indexes that are not
being used by using the ALTER INDEX MONITORING USAGE functionality over a period of time that is representative
of your workload. This monitoring feature records whether or not an index has been used. If you find that an index
has not been used, then drop it. Be careful to select a representative workload to monitor.
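A minimal sketch of the monitoring feature described above (the index name is a placeholder):

```sql
-- start recording whether this index is used
ALTER INDEX emp_name_ix MONITORING USAGE;
-- ... run a representative workload for a while, then check:
SELECT index_name, used, start_monitoring, end_monitoring
FROM v$object_usage
WHERE index_name = 'EMP_NAME_IX';
-- stop monitoring
ALTER INDEX emp_name_ix NOMONITORING USAGE;
```

Note that V$OBJECT_USAGE shows only a yes/no USED flag, not how often the index was used, so combine it with a representative monitoring window before dropping anything.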
See Also:
Indexes within an application sometimes have uses that are not immediately apparent from a survey of statement
execution plans. An example of this is a foreign key index on a parent table, which prevents share locks from being
taken out on a child table.
If you are deciding whether to create new indexes to tune statements, then you can also use the EXPLAIN PLAN
statement to determine whether the optimizer will choose to use these indexes when the application is run. If you
create new indexes to tune a statement that is currently parsed, then Oracle invalidates the statement. When the
statement is next executed, the optimizer automatically chooses a new execution plan that could potentially use the
new index. If you create new indexes on a remote database to tune a distributed statement, then the optimizer
considers these indexes when the statement is next parsed.
Also keep in mind that the way you tune one statement can affect the optimizer's choice of execution plans for
other statements. For example, if you create an index to be used by one statement, then the optimizer can choose to
use that index for other statements in the application as well. For this reason, reexamine the application's
performance and rerun the SQL trace facility after you have tuned those statements that you initially identified for
tuning.
Note:
You can use the Oracle Index Tuning Wizard to detect tables with inefficient indexes. The Oracle Index Tuning wizard
is an Oracle Enterprise Manager integrated application available with the Oracle Tuning Pack. Similar functionality is
available from the Virtual Index Advisor (a feature of SQL Analyze) and Oracle Expert.
A key is a column or expression on which you can build an index. Follow these guidelines for choosing keys to index:
Consider indexing keys that are used frequently to join tables in SQL statements. For more information on optimizing
joins, see the section "Using Hash Clusters".
Index keys that have high selectivity. The selectivity of an index is the percentage of rows in a table having the same
value for the indexed key. An index's selectivity is optimal if few rows have the same value.
Note:
Oracle automatically creates indexes, or uses existing indexes, on the keys and expressions of unique and primary
keys that you define with integrity constraints.
Indexing low selectivity columns can be helpful if the data distribution is skewed so that one or two values occur
much less often than other values.
Do not use standard B-tree indexes on keys or expressions with few distinct values. Such keys or expressions usually
have poor selectivity and therefore do not optimize performance unless the frequently selected key values appear
less frequently than the other key values. You can use bitmap indexes effectively in such cases, unless a high
concurrency OLTP application is involved where the index is modified frequently.
Do not index columns that are modified frequently. UPDATE statements that modify indexed columns and INSERT
and DELETE statements that modify indexed tables take longer than if there were no index. Such SQL statements
must modify data in indexes as well as data in tables. They also generate additional undo and redo.
Do not index keys that appear only in WHERE clauses with functions or operators. A WHERE clause that uses a
function, other than MIN or MAX, or an operator with an indexed key does not make available the access path that
uses the index except with function-based indexes.
Consider indexing foreign keys of referential integrity constraints in cases in which a large number of concurrent
INSERT, UPDATE, and DELETE statements access the parent and child tables. Such an index allows UPDATEs and
DELETEs on the parent table without share locking the child table.
When choosing to index a key, consider whether the performance gain for queries is worth the performance loss for
INSERTs, UPDATEs, and DELETEs and the use of the space required to store the index. You might want to experiment
by comparing the processing times of the SQL statements with and without indexes. You can measure processing
time with the SQL trace facility.
See Also:
Oracle9i Application Developer's Guide - Fundamentals for more information on the effects of foreign keys on
locking
A composite index contains more than one key column. Composite indexes can provide two advantages over
single-column indexes:
Improved selectivity. Sometimes two or more columns or expressions, each with poor selectivity, can be combined
to form a composite index with higher selectivity.
Reduced I/O. If all columns selected by a query are in a composite index, then Oracle can return these values from
the index without accessing the table.
A SQL statement can use an access path involving a composite index if the statement contains constructs that use a
leading portion of the index.
Note:
This is no longer the case with index skip scans. See "Index Skip Scans".
A leading portion of an index is a set of one or more columns that were specified first and consecutively in the list of
columns in the CREATE INDEX statement that created the index. Consider this CREATE INDEX statement:
CREATE INDEX comp_ind
ON table1(x, y, z);
The column combinations x, xy, and xyz are leading portions of the index; the combinations y, z, and yz are not
leading portions of the index.
Consider creating a composite index on keys that are used together frequently in WHERE clause conditions
combined with AND operators, especially if their combined selectivity is better than the selectivity of either key
individually.
If several queries select the same set of keys based on one or more key values, then consider creating a composite
index containing all of these keys.
Of course, consider the guidelines associated with the general performance advantages and trade-offs of indexes
described in the previous sections.
Create the index so the keys used in WHERE clauses make up a leading portion.
If some keys are used in WHERE clauses more frequently, then be sure to create the index so that the more
frequently selected keys make up a leading portion to allow the statements that use only these keys to use the
index.
If all keys are used in WHERE clauses equally often, then ordering these keys from most selective to least selective in
the CREATE INDEX statement best improves query performance.
If all keys are used in the WHERE clauses equally often but the data is physically ordered on one of the keys, then
place that key first in the composite index.
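For example, assuming cust_id is both the more frequently used and the more selective key (all names here are
illustrative):

```sql
-- cust_id leads, so statements that filter on cust_id alone can
-- still use the index; status benefits when both keys appear.
CREATE INDEX ord_cust_status_idx
ON orders (cust_id, status);
```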
Even after you create an index, the optimizer cannot use an access path that uses the index simply because the
index exists. The optimizer can choose such an access path for a SQL statement only if it contains a construct that
makes the access path available. To allow the CBO the option of using an index access path, ensure that the
statement contains a construct that makes such an access path available.
In some cases, you might want to prevent a SQL statement from using an access path that uses an existing index.
You might want to do this if you know that the index is not very selective and that a full table scan would be more
efficient. If the statement contains a construct that makes such an index access path available, then you can force
the optimizer to use a full table scan through one of the following methods:
Use the NO_INDEX hint to give the CBO maximum flexibility while disallowing the use of a certain index.
Use the FULL hint to force the optimizer to choose a full table scan instead of an index scan.
Use the INDEX, INDEX_COMBINE, or AND_EQUAL hints to force the optimizer to use one index or a set of listed
indexes instead of another.
See Also:
Chapter 5, "Optimizer Hints" for more information on the NO_INDEX, FULL, INDEX, INDEX_COMBINE, and
AND_EQUAL hints
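Hedged sketches of these approaches (table, alias, and index names are illustrative):

```sql
-- Force a full table scan:
SELECT /*+ FULL(s) */ COUNT(*)
FROM sales s
WHERE region = 'WEST';

-- Disallow one index while leaving the optimizer otherwise free:
SELECT /*+ NO_INDEX(s sales_region_idx) */ COUNT(*)
FROM sales s
WHERE region = 'WEST';

-- Direct the optimizer toward a specific index:
SELECT /*+ INDEX(s sales_cust_idx) */ COUNT(*)
FROM sales s
WHERE cust_id = 100;
```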
Parallel execution uses indexes effectively. It does not perform parallel index range scans, but it does perform
parallel index lookups for parallel nested loop join execution. If an index is very selective (there are few rows for
each index entry), then it might be better to use sequential index lookup rather than parallel table scan.
Re-creating Indexes
You might want to re-create an index to compact it and minimize fragmented space, or to change the index's
storage characteristics. When creating a new index that is a subset of an existing index or when rebuilding an existing
index with new storage characteristics, Oracle might use the existing index instead of the base table to improve the
performance of the index build.
Note:
To avoid calling DBMS_STATS after the index creation or rebuild, include the COMPUTE STATISTICS clause in the
CREATE INDEX or ALTER INDEX ... REBUILD statement. You can use the Oracle Enterprise Manager Reorg Wizard to
identify indexes that require rebuilding; the Reorg Wizard can also rebuild them.
However, there are cases where it can be beneficial to use the base table instead of the existing index. Consider an
index on a table on which a lot of DML has been performed. Because of the DML, the size of the index can increase
to the point where each block is only 50% full, or even less. If the index refers to most of the columns in the table,
then the index could actually be larger than the table. In this case, it is faster to use the base table rather than the
index to re-create the index.
Use the ALTER INDEX ... REBUILD statement to reorganize or compact an existing index or to change its storage
characteristics. The REBUILD statement uses the existing index as the basis for the new one. All index storage
statements are supported, such as STORAGE (for extent allocation), TABLESPACE (to move the index to a new
tablespace), and INITRANS (to change the initial number of entries).
Usually, ALTER INDEX ... REBUILD is faster than dropping and re-creating an index, because this statement uses the
fast full scan feature. It reads all the index blocks using multiblock I/O, then discards the branch blocks. A further
advantage of this approach is that the old index is still available for queries while the rebuild is in progress.
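A sketch of such a rebuild (index and tablespace names are illustrative):

```sql
-- Rebuild from the existing index rather than the base table,
-- relocate it, and gather statistics in the same pass.
ALTER INDEX ord_cust_status_idx REBUILD
  TABLESPACE indx_ts
  COMPUTE STATISTICS;
```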
See Also:
Oracle9i SQL Reference for more information about the CREATE INDEX and ALTER INDEX statements, as well as
restrictions on rebuilding indexes
Compacting Indexes
You can coalesce leaf blocks of an index by using the ALTER INDEX statement with the COALESCE option. This option
lets you combine leaf levels of an index to free blocks for reuse. You can also rebuild the index online.
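For example (the index name is illustrative):

```sql
-- Merge adjacent leaf blocks and free empty ones for reuse.
ALTER INDEX ord_cust_status_idx COALESCE;

-- Alternatively, rebuild without blocking queries or DML.
ALTER INDEX ord_cust_status_idx REBUILD ONLINE;
```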
See Also:
Oracle9i SQL Reference and Oracle9i Database Administrator's Guide for more information about the syntax for this
statement
You can use an existing nonunique index on a table to enforce uniqueness, either for UNIQUE constraints or the
unique aspect of a PRIMARY KEY constraint. The advantage of this approach is that the index remains available and
valid when the constraint is disabled. Therefore, enabling a disabled UNIQUE or PRIMARY KEY constraint does not
require rebuilding the unique index associated with the constraint. This can yield significant time savings on enable
operations for large tables.
Using a nonunique index to enforce uniqueness also lets you eliminate redundant indexes. You do not need a
unique index on a primary key column if that column already is included as the prefix of a composite index. You can
use the existing index to enable and enforce the constraint. You also save significant space by not duplicating the
index. However, if the existing index is partitioned, then the partitioning key of the index must also be a subset of the
UNIQUE key; otherwise, Oracle creates an additional unique index to enforce the constraint.
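A sketch of this technique (object names are illustrative):

```sql
-- Nonunique composite index whose prefix is the key column.
CREATE INDEX emp_id_name_idx ON emp (emp_id, ename);

-- Enforce the primary key through the existing index; disabling
-- and re-enabling the constraint later does not drop or rebuild it.
ALTER TABLE emp
  ADD CONSTRAINT emp_pk PRIMARY KEY (emp_id)
  USING INDEX emp_id_name_idx;
```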
An enabled novalidated constraint behaves similarly to an enabled validated constraint for new data. Placing a
constraint in the enabled novalidated state signifies that any new data entered into the table must conform to the
constraint. Existing data is not checked. By placing a constraint in the enabled novalidated state, you enable the
constraint without locking the table.
If you change a constraint from disabled to enabled, then the table must be locked. No new DML, queries, or DDL
can occur, because there is no mechanism to ensure that operations on the table conform to the constraint during
the enable operation. The enabled novalidated state prevents operations violating the constraint from being
performed on the table.
An enabled novalidated constraint can be validated with a parallel, consistent-read query of the table to determine
whether any data violates the constraint. No locking is performed, and the enable operation does not block readers
or writers to the table. In addition, enabled novalidated constraints can be validated in parallel: Multiple constraints
can be validated at the same time and each constraint's validity check can be determined using parallel query.
Use the following approach to create tables with constraints and indexes:
Create the tables with the constraints. NOT NULL constraints can be unnamed and should be created enabled and
validated. All other constraints (CHECK, UNIQUE, PRIMARY KEY, and FOREIGN KEY) should be named and created
disabled.
Enable novalidate all constraints. Do this to primary keys before foreign keys.
With a separate ALTER TABLE statement for each constraint, validate all constraints, again handling primary keys
before foreign keys.
Now you can use Import or Fast Loader to load data into table t.
At this point, users can start performing INSERTs, UPDATEs, DELETEs, and SELECTs on table t.
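The enable and validate steps above might be sketched as follows (table and constraint names are illustrative):

```sql
-- Enable without checking existing rows; new DML must conform,
-- and no table lock is needed for the existing data.
ALTER TABLE t ENABLE NOVALIDATE CONSTRAINT t_pk;
ALTER TABLE t ENABLE NOVALIDATE CONSTRAINT t_fk;

-- Validate each constraint with its own statement,
-- primary keys before foreign keys.
ALTER TABLE t MODIFY CONSTRAINT t_pk VALIDATE;
ALTER TABLE t MODIFY CONSTRAINT t_fk VALIDATE;
```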
See Also:
A function-based index includes columns that are either transformed by a function, such as the UPPER function, or
included in an expression, such as col1 + col2.
Defining a function-based index on the transformed column or expression allows that data to be returned using the
index when that function or expression is used in a WHERE clause or an ORDER BY clause. Therefore, a function-
based index can be beneficial when frequently-executed SQL statements include transformed columns, or columns
in expressions, in a WHERE or ORDER BY clause.
Function-based indexes defined with the UPPER(column_name) or LOWER(column_name) keywords allow case-
insensitive searches. For example, an index created on UPPER(ename):
CREATE INDEX uppercase_idx ON emp (UPPER(ename));
allows a query such as
SELECT * FROM emp WHERE UPPER(ename) = 'MARK';
to use the index.
To use function-based indexes in queries, you need to set the QUERY_REWRITE_ENABLED and
QUERY_REWRITE_INTEGRITY parameters.
QUERY_REWRITE_ENABLED
To enable function-based indexes for queries, set the QUERY_REWRITE_ENABLED session parameter to TRUE.
QUERY_REWRITE_ENABLED can be set to the following values:
FALSE: no rewrite
TRUE: cost-based rewrite
FORCE: rewrite is always used, without regard to cost
When QUERY_REWRITE_ENABLED is set to FALSE, then function-based indexes are not used for obtaining the values
of an expression in the function-based index. However, function-based indexes can still be used for obtaining values
in real columns.
When QUERY_REWRITE_ENABLED is set to FORCE, Oracle always uses rewrite and does not evaluate the cost before
doing so. FORCE is useful when you know that the query will always benefit from rewrite, when reduction in compile
time is important, and when you know that the optimizer may be underestimating the benefits of materialized
views.
QUERY_REWRITE_INTEGRITY
The value of the QUERY_REWRITE_INTEGRITY parameter determines how function-based indexes are used.
If the QUERY_REWRITE_INTEGRITY parameter is set to ENFORCED (the default), then Oracle uses function-based
indexes to derive values of SQL expressions only, including SQL functions.
If QUERY_REWRITE_INTEGRITY is set to any value other than ENFORCED, then Oracle uses the function-based index,
even if it is based on a user-defined, rather than SQL, function.
Function-based indexes are an efficient mechanism for evaluating statements that contain functions in WHERE
clauses. You can create a function-based index to store computation-intensive expressions in the index. This permits
Oracle to bypass computing the value of the expression when processing SELECT and DELETE statements. When
processing INSERT and UPDATE statements, however, Oracle evaluates the function to process the statement.
For example, with a function-based index defined on the expression a + b * (c - 1), a query such as
SELECT a
FROM table_1
WHERE a + b * (c - 1) < 100;
can read the precomputed value from the index instead of evaluating the expression for each row.
You can also use function-based indexes for linguistic sort indexes that provide efficient linguistic collation in SQL
statements.
Oracle treats descending indexes as function-based indexes. The columns marked DESC are sorted in descending
order.
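For example (names illustrative):

```sql
-- Treated internally as a function-based index; useful when queries
-- read hiredate in descending order within each deptno.
CREATE INDEX emp_dept_hire_idx ON emp (deptno, hiredate DESC);
```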
See Also:
Oracle9i Application Developer's Guide - Fundamentals and Oracle9i Database Administrator's Guide for more
information on using function-based indexes
Oracle9i SQL Reference for more information on the CREATE INDEX statement
An index-organized table differs from an ordinary table in that the data for the table is held in its associated index.
Changes to the table data, such as adding new rows, updating rows, or deleting rows, result only in updating the
index. Because data rows are stored in the index, index-organized tables provide faster key-based access to table
data for queries that involve exact match or range search or both.
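A minimal sketch of an index-organized table (names illustrative):

```sql
-- All row data is stored in the primary key index itself;
-- ORGANIZATION INDEX requires a primary key.
CREATE TABLE country_codes (
  code VARCHAR2(2) PRIMARY KEY,
  name VARCHAR2(40)
) ORGANIZATION INDEX;
```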
See Also:
Oracle9i Database Concepts and Oracle9i Data Warehousing Guide for more information on bitmap indexing
This section describes three aspects of indexing that you must evaluate when deciding whether to use bitmap
indexing on a given table:
Bitmap indexes can substantially improve performance of queries that have all of the following characteristics:
The WHERE clause contains multiple conditions on low- or medium-cardinality columns.
The individual predicates on these low- or medium-cardinality columns select a large number of rows.
Bitmap indexes have been created on some or all of these low- or medium-cardinality columns.
The tables being queried contain many rows.
You can use multiple bitmap indexes to evaluate the conditions on a single table. Bitmap indexes are thus highly
advantageous for complex ad hoc queries that contain lengthy WHERE clauses. Bitmap indexes can also provide
optimal performance for aggregate queries and for optimizing joins in star schemas.
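For instance (table and index names are illustrative), two single-column bitmap indexes can be combined at query
time:

```sql
CREATE BITMAP INDEX people_state_bix ON people (state);
CREATE BITMAP INDEX people_party_bix ON people (party);

-- Both bitmaps can be ANDed together to evaluate the WHERE clause,
-- with no composite index needed.
SELECT COUNT(*)
FROM people
WHERE state = 'CA' AND party = 'D';
```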
See Also:
Oracle9i Database Concepts for more information on optimizing anti-joins and semi-joins
Bitmap indexes can provide considerable storage savings over the use of B-tree indexes. In databases containing
only B-tree indexes, you must anticipate the columns that are commonly accessed together in a single query, and
create a composite B-tree index on these columns.
Not only would this B-tree index require a large amount of space, it would also be ordered. That is, a B-tree index on
(marital_status, region, gender) is useless for queries that only access region and gender. To completely index the
database, you must create indexes on the other permutations of these columns. For the simple case of three low-
cardinality columns, there are six possible composite B-tree indexes. You must consider the trade-offs between disk
space and performance needs when determining which composite B-tree indexes to create.
Bitmap indexes solve this dilemma. Bitmap indexes can be efficiently combined during query execution, so three
small single-column bitmap indexes can do the job of six three-column B-tree indexes.
Bitmap indexes are much more efficient than B-tree indexes, especially in data warehousing environments. Bitmap
indexes are created not only for efficient space usage but also for efficient execution, and the latter is somewhat
more important.
Do not create bitmap indexes on unique key columns. However, for columns where each value is repeated hundreds
or thousands of times, a bitmap index typically is less than 25% of the size of a regular B-tree index. The bitmaps
themselves are stored in compressed format.
Simply comparing the relative sizes of B-tree and bitmap indexes is not an accurate measure of effectiveness,
however. Because of their different performance characteristics, keep B-tree indexes on high-cardinality columns,
while creating bitmap indexes on low-cardinality columns.
Bitmap indexes benefit data warehousing applications, but they are not appropriate for OLTP applications with a
heavy load of concurrent INSERTs, UPDATEs, and DELETEs. In a data warehousing environment, data is maintained
usually by way of bulk inserts and updates. Index maintenance is deferred until the end of each DML operation. For
example, when you insert 1000 rows, the inserted rows are placed into a sort buffer and then the updates of all 1000
index entries are batched. (This is why SORT_AREA_SIZE must be set properly for good performance with inserts and
updates on bitmap indexes.) Thus, each bitmap segment is updated only once in each DML operation, even if more
than one row in that segment changes.
Note:
The sorts described previously are regular sorts and use the regular sort area, determined by SORT_AREA_SIZE. The
BITMAP_MERGE_AREA_SIZE and CREATE_BITMAP_AREA_SIZE initialization parameters described in "Initialization
Parameters for Bitmap Indexing" affect only the specific operations indicated by the parameter names.
DML and DDL statements, such as UPDATE, DELETE, and DROP TABLE, affect bitmap indexes the same way they do
traditional indexes; the consistency model is the same. A compressed bitmap for a key value is made up of one or
more bitmap segments, each of which is at most half a block in size (although it can be smaller). The locking
granularity is one such bitmap segment. This can affect performance in environments where many transactions
make simultaneous updates. If numerous DML operations have caused increased index size and decreasing
performance for queries, then you can use the ALTER INDEX ... REBUILD statement to compact the index and restore
efficient performance.
A B-tree index entry contains a single rowid. Therefore, when the index entry is locked, a single row is locked. With
bitmap indexes, an entry can potentially contain a range of rowids. When a bitmap index entry is locked, the entire
range of rowids is locked. The number of rowids in this range affects concurrency. As the number of rowids increases
in a bitmap segment, concurrency decreases.
Locking issues affect DML operations and can affect heavy OLTP environments. Locking issues do not, however,
affect query performance. As with other types of indexes, updating bitmap indexes is a costly operation.
Nonetheless, for bulk inserts and updates where many rows are inserted or many updates are made in a single
statement, performance with bitmap indexes can be better than with regular B-tree indexes.
The INDEX hint works with bitmap indexes in the same way as with traditional indexes.
The INDEX_COMBINE hint identifies the most cost effective indexes for the optimizer. The optimizer recognizes all
indexes that can potentially be combined, given the predicates in the WHERE clause. However, it might not be cost
effective to use all of them. Oracle recommends using INDEX_COMBINE rather than INDEX for bitmap indexes,
because it is a more versatile hint.
In deciding which of these hints to use, the optimizer includes nonhinted indexes that appear cost effective, as well
as indexes named in the hint. If certain indexes are given as arguments for the hint, then the optimizer tries to use
some combination of those particular bitmap indexes.
If the hint does not name indexes, then all indexes are considered hinted. Hence, the optimizer tries to combine as
many as possible, given the WHERE clause, without regard to cost effectiveness. The optimizer always tries to use
hinted indexes in the plan, regardless of whether it considers them cost effective.
To make compressed bitmaps as small as possible, declare NOT NULL constraints on all columns that cannot contain
null values.
Fixed-length datatypes are more amenable to a compact bitmap representation than variable length datatypes.
See Also:
Chapter 9, "Using EXPLAIN PLAN" for more information about bitmap EXPLAIN PLAN output
Use SQL statements with the ALTER TABLE syntax to optimize the mapping of bitmaps to rowids. The MINIMIZE
RECORDS_PER_BLOCK clause enables this optimization, and the NOMINIMIZE RECORDS_PER_BLOCK clause disables
it.
When MINIMIZE RECORDS_PER_BLOCK is enabled, Oracle scans the table and determines the maximum number of
records in any block and restricts this table to this maximum number. This enables bitmap indexes to allocate fewer
bits for each block and results in smaller bitmap indexes. The block and record allocation restrictions that this
statement places on the table are beneficial only to bitmap indexes. Therefore, Oracle does not recommend using
this mapping on tables that are not heavily indexed with bitmap indexes.
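For example (the table name is illustrative):

```sql
-- Restrict the table to its current maximum number of records
-- per block, letting bitmap indexes allocate fewer bits per block.
ALTER TABLE sales MINIMIZE RECORDS_PER_BLOCK;

-- Remove the restriction.
ALTER TABLE sales NOMINIMIZE RECORDS_PER_BLOCK;
```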
The rowids used in bitmap indexes on index-organized tables are in a mapping table, not in the base table. The
mapping table maintains a mapping of logical rowids (needed to access the index-organized table) to physical rowids
(needed by the bitmap index code). Each index-organized table has one mapping table, used by all the bitmap
indexes created on that table.
Note:
Moving rows in an index-organized table does not make the bitmap indexes built on that index-organized table
unusable.
See Also:
Oracle9i Database Concepts for information on bitmap indexes and index-organized tables
Bitmap indexes index nulls, whereas all other index types do not. Consider, for example, a table with STATE and
PARTY columns, on which you want to perform the following query:
SELECT COUNT(*)
FROM people
WHERE state='CA'
AND party !='D';
Indexing nulls enables a bitmap minus plan where bitmaps for party equal to D and NULL are subtracted from state
bitmaps equal to CA. The EXPLAIN PLAN output looks like the following:
SELECT STATEMENT
  SORT AGGREGATE
    BITMAP CONVERSION COUNT
      BITMAP MINUS
        BITMAP MINUS
          BITMAP INDEX SINGLE VALUE (bitmap index on state)
          BITMAP INDEX SINGLE VALUE (bitmap index on party)
        BITMAP INDEX SINGLE VALUE (bitmap index on party)
If a NOT NULL constraint exists on party, then the second minus operation (where party is null) is left out, because it
is not needed.
BITMAP_MERGE_AREA_SIZE affects memory used to merge bitmaps from an index range scan.
If there is at least one bitmap index on the table, then the optimizer considers using a bitmap access path using
regular B-tree indexes for that table. This access path can involve combinations of B-tree and bitmap indexes, but
might not involve any bitmap indexes at all. However, the optimizer does not generate a bitmap access path using a
single B-tree index unless instructed to do so by a hint.
To use bitmap access paths for B-tree indexes, the rowids stored in the indexes must be converted to bitmaps. After
such a conversion, the various Boolean operations available for bitmaps can be used. As an example, consider the
following query, where there is a bitmap index on column c1, and regular B-tree indexes on columns c2 and c3.
SELECT COUNT(*)
FROM t
WHERE c1 = 2 AND c2 = 6
OR c3 BETWEEN 10 AND 20;
The EXPLAIN PLAN output for this query includes operations such as the following:
SELECT STATEMENT
  SORT AGGREGATE
    BITMAP CONVERSION COUNT
      BITMAP OR
        BITMAP AND
          BITMAP INDEX SINGLE VALUE (bitmap index on c1)
          BITMAP CONVERSION FROM ROWIDS
            INDEX RANGE SCAN (B-tree index on c2)
        BITMAP CONVERSION FROM ROWIDS
          SORT ORDER BY
            INDEX RANGE SCAN (B-tree index on c3)
Note:
Here, a COUNT option for the BITMAP CONVERSION row source counts the number of rows matching the query.
There are also conversions from rowids in the plan to generate bitmaps from the rowids retrieved from the B-tree
indexes. The ORDER BY sort appears in the plan because the conditions on column c3 result in the return of more
than one list of rowids from the B-tree index. These lists are sorted before being converted into a bitmap.
For bitmap indexes with direct load, the SORTED_INDEX flag does not apply.
In addition to a bitmap index on a single table, you can create a bitmap join index, which is a bitmap index for the
join of two or more tables. A bitmap join index is a space-saving way to reduce the volume of data that must be
joined, by performing restrictions in advance. For each value in a column of a table, a bitmap join index stores the
rowids of corresponding rows in another table. In a data warehousing environment, the join condition is an equi-
inner join between the primary key column(s) of the dimension tables and the foreign key column(s) in the fact
table.
Bitmap join indexes are much more efficient in storage than materialized join views, an alternative for materializing
joins in advance. This is because the materialized join views do not compress the rowids of the fact tables.
See Also:
Oracle9i Data Warehousing Guide for examples and restrictions of bitmap join indexes
A domain index is an application-specific index built with the indexing logic supplied by a data cartridge. The
cartridge determines the parameters you can specify in creating and maintaining the domain index. Similarly, the
performance and storage characteristics of the domain index are described in the specific cartridge
documentation.
Refer to the appropriate cartridge documentation for information such as the following:
Note:
You can also create index types with the CREATE INDEXTYPE statement.
See Also:
Oracle Spatial User's Guide and Reference for information about the SpatialIndextype
Using Clusters
Clusters are groups of one or more tables that are physically stored together because they share common columns
and usually are used together. Because related rows are physically stored together, disk access time improves.
Cluster tables that are accessed frequently by the application in join statements.
Do not cluster tables if the application joins them only occasionally or modifies their common column values
frequently. Modifying a row's cluster key value takes longer than modifying the value in an unclustered table,
because Oracle might need to migrate the modified row to another block to maintain the cluster.
Do not cluster tables if the application often performs full table scans of only one of the tables. A full table scan of a
clustered table can take longer than a full table scan of an unclustered table. Oracle is likely to read more blocks,
because the tables are stored together.
Cluster master-detail tables if you often select a master record and then the corresponding detail records. Detail
records are stored in the same data block(s) as the master record, so they are likely still to be in memory when you
select them, requiring Oracle to perform less I/O.
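A sketch of such a master-detail cluster (all names are illustrative):

```sql
-- Cluster keyed on the join column shared by master and detail.
CREATE CLUSTER order_cluster (order_id NUMBER) SIZE 512;

-- An indexed cluster needs a cluster index before DML is allowed.
CREATE INDEX order_cluster_idx ON CLUSTER order_cluster;

CREATE TABLE orders (
  order_id NUMBER PRIMARY KEY,
  order_dt DATE
) CLUSTER order_cluster (order_id);

CREATE TABLE order_lines (
  order_id NUMBER,
  line_no  NUMBER,
  qty      NUMBER
) CLUSTER order_cluster (order_id);
```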
Store a detail table alone in a cluster if you often select many detail records of the same master. This measure
improves the performance of queries that select detail records of the same master, but does not decrease the
performance of a full table scan on the master table. An alternative is to use an index organized table.
Do not cluster tables if the data from all tables with the same cluster key value exceeds one or two Oracle
blocks. To access a row in a clustered table, Oracle reads all blocks containing rows with that value. If these rows
take up multiple blocks, then accessing a single row could require more reads than accessing the same row in an
unclustered table.
Do not cluster tables when the number of rows for each cluster key value varies significantly. This wastes space
for cluster key values with few rows and causes collisions for cluster key values with many rows. Collisions
degrade performance.
Consider the benefits and drawbacks of clusters with respect to the needs of the application. For example, you might
decide that the performance gain for join statements outweighs the performance loss for statements that modify
cluster key values. You might want to experiment and compare processing times with the tables both clustered and
stored separately.
Hash clusters group table data by applying a hash function to each row's cluster key value. All rows with the same
cluster key value are stored together on disk. Consider the benefits and drawbacks of hash clusters with respect to
the needs of the application. You might want to experiment and compare processing times with a particular table as
it is stored in a hash cluster, and as it is stored alone with an index.
Use hash clusters to store tables accessed frequently by SQL statements with WHERE clauses, if the WHERE clauses
contain equality conditions that use the same column or combination of columns. Designate this column or
combination of columns as the cluster key.
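A hedged sketch (names and sizing are illustrative; SIZE and HASHKEYS should be derived from the actual expected
row volume):

```sql
-- Rows hash directly to a block on the equality-searched key,
-- so lookups by trial_no need no separate index probe.
CREATE CLUSTER trial_cluster (trial_no NUMBER)
  SIZE 1000 HASHKEYS 100000;

CREATE TABLE trials (
  trial_no NUMBER PRIMARY KEY,
  trial_dt DATE
) CLUSTER trial_cluster (trial_no);
```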
Store a table in a hash cluster if you can determine how much space is required to hold all rows with a given cluster
key value, including rows to be inserted immediately as well as rows to be inserted in the future.
Do not store a table in a hash cluster if the application often performs full table scans and if you must allocate a great
deal of space to the hash cluster in anticipation of the table growing. Such full table scans must read all blocks
allocated to the hash cluster, even though some blocks might contain few rows. Storing the table alone reduces the
number of blocks read by full table scans.
Do not store a table in a hash cluster if the application frequently modifies the cluster key values. Modifying a row's
cluster key value can take longer than modifying the value in an unclustered table, because Oracle might need to
migrate the modified row to another block to maintain the cluster.
Storing a single table in a hash cluster can be useful, regardless of whether the table is joined frequently with other
tables, as long as hashing is appropriate for the table based on the points in this list.