
AOL and Sysadmin Interview Questions

Profile Option in Oracle Apps:

Profile option values control the behavior of Oracle Apps; in other words, they determine how Oracle Apps
should run. The value of a profile option can be changed at any time.

For example, the profile option MO: Operating Unit determines which operating unit is used when a user
logs in to a particular responsibility.

We have two types of profile options, System and User profile options, depending on who can see them and
who can update their values.

System profile options are visible and can be updated only from the System Administrator responsibility. In short,
they are maintained by a System Administrator only.

User profile options are visible to, and can be updated by, any end user of Oracle Apps.

Profile options can be set at different levels, Site being the lowest and User the highest in the
hierarchy. If a profile option is set at more than one level, the value assigned at the most specific (highest)
level takes precedence; for example, a User-level value overrides a Site-level value. The levels are listed
below, with a short PL/SQL example after the list.

Levels:-

 Site (applies to the whole Apps installation)


 Application (restricted to a particular application, such as Payables or Receivables)
 Responsibility (restricted to a particular responsibility)
 Organization (restricted to a particular organization)
 User (restricted to a particular user)
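
As a quick illustration, a profile value resolved through this hierarchy can be read at runtime with FND_PROFILE. The sketch below assumes an initialized Oracle Apps session; 'ORG_ID' is the internal name commonly associated with MO: Operating Unit, so treat the names as illustrative.

-- Minimal sketch: read the resolved value of a profile option for the current session
DECLARE
   l_org_id VARCHAR2(100);
BEGIN
   l_org_id := fnd_profile.value('ORG_ID');   -- internal name of MO: Operating Unit
   dbms_output.put_line('MO: Operating Unit resolves to org_id = ' || NVL(l_org_id, 'not set'));
END;
/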

Can we give file extensions in executable file name?

No! As per Oracle standard process, the application will not allow file extensions. It will also not allow any special
characters and spaces.

What is the significance of US folder in Application Top?

The US folder is a language-specific folder. Oracle Apps uses American English (US) by default, hence the US
folder.
Multiple language folders can exist under a top, depending on which languages are installed.
What is the difference between an API and an Interface?

An API (Application Programming Interface) is an in-built program through which data can be transferred
directly to the Oracle base tables, without writing any program to validate the data or insert it into
interface tables; if an error occurs, it is simply raised at run time.

An interface, on the other hand, loads the data through interface table(s): we write code to validate the
records and insert them into the interface tables, and a standard import program then moves them into the
Oracle base tables. The history of errored-out records remains available in the same interface tables or in
dedicated error tables.

Here is a SQL query to find out the Unix physical path of the Application Tops:

/*********************************************************
*PURPOSE: To list physical path of APPL_TOPs in unix *
* and its Parameters *
*AUTHOR: Shailender Thallam *
**********************************************************/
SELECT variable_name, value
FROM   fnd_env_context
WHERE  variable_name LIKE '%\_TOP' ESCAPE '\'
AND    concurrent_process_id =
       (SELECT MAX(concurrent_process_id) FROM fnd_env_context)
ORDER BY 1;

How do you create a table validated value set dependent on another value set?

We can use $FLEX$.<Value set name> in the WHERE condition of the target value set to restrict the LOV values based on the
parent value set.
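
For example (a hypothetical sketch: XX_COUNTRY_VSET and xx_departments are made-up names), the WHERE/Order By clause of the child, table-validated value set could look like this:

-- WHERE/Order By clause of the child value set; :$FLEX$ references the parent value set
WHERE country_code = :$FLEX$.XX_COUNTRY_VSET
ORDER BY department_code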

How to “write” to the concurrent request Log and Output?

Syntax for log:


FND_FILE.PUT(FND_FILE.LOG, <Text>);

Syntax for Output:


FND_FILE.PUT(FND_FILE.OUTPUT, <Text>);
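
A small sketch of how these calls are typically used inside the PL/SQL body of a concurrent program (FND_FILE.PUT_LINE is the variant that appends a newline):

BEGIN
   -- write a timestamped line to the request log
   fnd_file.put_line(fnd_file.log,    'Processing started at ' || TO_CHAR(SYSDATE, 'DD-MON-YYYY HH24:MI:SS'));
   -- write a line to the request output
   fnd_file.put_line(fnd_file.output, 'Report header line');
END;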

How to ensure that only one instance of a concurrent program runs at a time?

The 'Run Alone' check box should be enabled in the concurrent program definition to make the program run only
one request at a time.

How to find out base path of a Custom Application top only from front end ?

We can find the Custom Application top's physical path from the Application Developer responsibility:
navigate to Application Developer -> Application -> Register, and query for the custom application
name. The value in the Basepath field is the OS environment variable that stores the actual directory info.
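
A related check from SQL: the base path registered for an application is stored in FND_APPLICATION.BASEPATH, and the physical directory it resolves to can then be looked up with the FND_ENV_CONTEXT query shown earlier. A sketch (XXGMS is a hypothetical custom application short name):

SELECT application_short_name, basepath
FROM   fnd_application
WHERE  application_short_name = 'XXGMS';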

How to generate a trace file for a PL/SQL based concurrent program?

Step1: Enable Trace at the Report Definition.


Go to System Administrator -> Concurrent Programs -> Define, query the report, and check the 'Enable
Trace' check box.

Navigate to:
Profile -> System
Query the profile option: "Concurrent: Allow Debugging"

This should be set to 'Yes' at 'Site' level.

If it isn't set, then set it, log out, and bounce the APPS services.

The 'Debug Options' button on the concurrent program will now be enabled.

Step2: Run the report for Step 1. above

Step3: Get the request id from below query. i.e. Request id = 11111111

SELECT fcr.request_id "Request ID"
       --, fcr.oracle_process_id "Trace ID"
     , p1.value
       || '/'
       || LOWER(p2.value)
       || '_ora_'
       || fcr.oracle_process_id
       || '.trc' "Trace File"
     , TO_CHAR(fcr.actual_completion_date, 'dd-mon-yyyy hh24:mi:ss') "Completed"
     , fcp.user_concurrent_program_name "Program"
     , fe.execution_file_name
       || fe.subroutine_name "Program File"
     , DECODE(fcr.phase_code, 'R', 'Running')
       || '-'
       || DECODE(fcr.status_code, 'R', 'Normal') "Status"
     , fcr.enable_trace "Trace Flag"
FROM   fnd_concurrent_requests fcr
     , v$parameter p1
     , v$parameter p2
     , fnd_concurrent_programs_vl fcp
     , fnd_executables fe
WHERE  p1.name = 'user_dump_dest'
AND    p2.name = 'db_name'
AND    fcr.concurrent_program_id = fcp.concurrent_program_id
AND    fcr.program_application_id = fcp.application_id
AND    fcp.application_id = fe.application_id
AND    fcp.executable_id = fe.executable_id
AND    (fcr.request_id = &request_id
        OR fcr.actual_completion_date > TRUNC(SYSDATE))
ORDER BY DECODE(fcr.request_id, &request_id, 1, 2),
         fcr.actual_completion_date DESC;

-- you will be prompted to enter the request_id

Step4: Find the trace file directory using a SQL query


SELECT value FROM v$parameter WHERE name = 'user_dump_dest';

Step5: Go to the directory from Step 4 above.

Step6: Find the trace file using the grep command


e.g. grep '11111111' *.trc

Step7: Run TKPROF to get structured information from the trace file.


Syntax of the command:
$ tkprof <RAW TRACE> <output> explain=apps_uname/apps_pwd sys=no sort=prsela,exeela,fchela

How to submit a concurrent request directly from the operating system(unix)?

Write a PL/SQL script that submits the concurrent request using the FND_REQUEST.SUBMIT_REQUEST API, put it
in a file, and execute that file from a shell script using SQL*Plus.

Example:
sqlplus -s apps/apps @XX_SUBMIT_CP.sql
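
A minimal sketch of what XX_SUBMIT_CP.sql might contain (the user/responsibility ids, application short name, program short name and parameter are all hypothetical):

DECLARE
   l_request_id NUMBER;
BEGIN
   -- initialize the applications context for the submitting user/responsibility (ids are examples)
   fnd_global.apps_initialize(user_id => 1318, resp_id => 20420, resp_appl_id => 1);

   l_request_id := fnd_request.submit_request(
                      application => 'XXGMS',                            -- program application short name
                      program     => 'XX_CUSTOM_ORACLE_INTERFACE_PROG',  -- concurrent program short name
                      description => NULL,
                      start_time  => NULL,
                      sub_request => FALSE,
                      argument1   => 'PARAM1');                          -- program parameters, if any
   COMMIT;
   dbms_output.put_line('Submitted request_id = ' || l_request_id);
END;
/
EXIT;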

You can also use CONCSUB utility in unix to submit concurrent program from unix.
Syntax:
CONCSUB <APPS username>/<APPS password> \
<responsibility application short name> \
<responsibility name> \
<Oracle Applications username> \
[WAIT=N|Y|<n seconds>] \
CONCURRENT \
<program application short name> \
<program name> \
[PROGRAM_NAME="<description>"] \
[REPEAT_TIME=<resubmission time>] \
[REPEAT_INTERVAL= <number>] \
[REPEAT_INTERVAL_UNIT=< resubmission unit>] \
[REPEAT_INTERVAL_TYPE=< resubmission type>] \
[REPEAT_END=<resubmission end date and time>] \
[START=<date>] \
[IMPLICIT=<type of concurrent request>]

Is there a limit on the number of parameters of a concurrent program?

100 is the limit on the number of parameters for a concurrent program.

What are different types of Value Sets??

There are 8 different types of Value Set Validations.

1. None: this indicates minimal validation; any value is accepted.


2. Independent: the input must exist in a previously defined list of values.
3. Dependent: the input is validated against a subset of values that depends on a previously selected (independent) value.
4. Table: the input is validated against values that exist in an application table.
5. Special: a value set that uses a flexfield itself within the value set.
6. Pair: a value set that uses a pair of flexfield values, for example to represent a range.
7. Translated Independent: like Independent, but the value must exist in a previously defined, translatable list
of values.
8. Translatable Dependent: like Dependent, but the input is compared with a translatable subset of
values associated with the previously defined list.

What are the 10 different types of Executables in Oracle Apps???

FlexRpt
The execution file is written using the FlexReport API.

FlexSql
The execution file is written using the FlexSql API.

Host
The execution file is a host script.

Oracle Reports
The execution file is an Oracle Reports file.

PL/SQL Stored Procedure


The execution file is a stored procedure.

SQL*Loader
The execution file is a SQL*Loader control file.

SQL*Plus
The execution file is a SQL*Plus script.

Spawned
The execution file is a C or Pro*C program.

Immediate
The execution file is a program written to run as a subroutine of the concurrent manager. We recommend
against defining new immediate concurrent programs, and suggest you use either a PL/SQL Stored
Procedure or a Spawned C Program instead.

Request Set Stage Function


PL/SQL Stored Function that can be used to calculate the completion statuses of request set stages.

What are the different types of incompatibilities in concurrent program window?

There are two types of program incompatibilities, “Global” incompatibilities, and “Domain-specific”
incompatibilities.
You can define a concurrent program to be globally incompatible with another program — that is, the two
programs cannot be run simultaneously at all; or you can define a concurrent program to be incompatible
with another program in a Conflict Domain. Conflict domains are abstract representations of groups of data.
They can correspond to other group identifiers, such as sets of books, or they can be arbitrary.

What is Application Top ? What are the different types of Application Tops?

Application Top is a physical folder on the server which holds all the executable, UI, and support files.

We have two different types of Application Tops

1. Product Top
2. Custom Top

Product Top:
The product top is the default top delivered by Oracle. It is usually referred to as APPL_TOP and stores
all the components provided by Oracle.

Custom Top:
A custom top is a top created exclusively for the customer. Depending on the client's requirements, any
number of custom tops can be created. A custom top is used to store components that have been developed
in-house or customized. When Oracle applies patches, every module other than the custom top may be
overridden, which is why customizations are kept under custom tops.

What is Autonomous transaction?

An autonomous transaction is a transaction that is independent of the (main) transaction that started it. It lets
you suspend the main transaction, perform SQL operations, commit or roll back those operations, and then resume
the main transaction. Autonomous transactions do not share resources, locks, or any kind of commit dependencies
with the main transaction.
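
A minimal sketch of an autonomous transaction, for example an error-logging procedure that commits independently of the caller (the xx_debug_log table is hypothetical):

CREATE OR REPLACE PROCEDURE xx_log_message (p_message IN VARCHAR2)
IS
   PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
   INSERT INTO xx_debug_log (log_date, message)   -- hypothetical logging table
   VALUES (SYSDATE, p_message);
   COMMIT;   -- commits only this autonomous transaction, not the caller's work
END xx_log_message;
/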

What is Job Scheduling and How to do it?


The Scheduler enables database administrators to fulfill business tasks in an organized and controlled
fashion.
To achieve job scheduling, Oracle provides a collection of functions and procedures in the
DBMS_SCHEDULER package.

Below are the major things the Scheduler can do:

1. Schedule job execution based on time or events


2. Schedule job processing in a way that models your business requirements
3. Manage and monitor jobs
4. Execute and manage jobs in a clustered environment

For more information on the Scheduler, go through this post: http://oracle-base.com/articles/10g/scheduler-10g.php
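
A minimal DBMS_SCHEDULER sketch that creates a daily job (the job name and the called procedure are hypothetical):

BEGIN
   dbms_scheduler.create_job(
      job_name        => 'XX_NIGHTLY_CLEANUP',                 -- hypothetical job name
      job_type        => 'PLSQL_BLOCK',
      job_action      => 'BEGIN xx_cleanup_pkg.purge; END;',   -- hypothetical procedure to run
      start_date      => SYSTIMESTAMP,
      repeat_interval => 'FREQ=DAILY; BYHOUR=2',               -- run every day at 2 AM
      enabled         => TRUE,
      comments        => 'Purges stale staging data nightly');
END;
/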

What is the difference between FND_GLOBAL and FND_PROFILE??

Though FND_GLOBAL and FND_PROFILE can give us the same result, they work in a different fashion.

FND_GLOBAL is a server-side package which returns the values of system globals, such as the
login/sign-on or "session" type of values, whereas FND_PROFILE uses user profile routines to
manipulate the option values stored in the client and server user profile caches.

From this we can understand that FND_GLOBAL works on server side and FND_PROFILE works on
client side.

On the client, a single user profile cache is shared by multiple form sessions. Thus, when Form A and Form
B are both running on a single client, any changes Form A makes to the client’s user profile cache affect
Form B’s run-time environment, and vice versa.

On the server, each form session has its own user profile cache. Thus, even if Form A and Form B are
running on the same client, they have separate server profile caches. Server profile values changed from
Form A’s session do not affect Form B’s session, and vice versa.

That is the reason in forms we use FND_GLOBAL.

What is the purpose of Token in Concurrent Program Definition form?

A token is used to transfer a concurrent program parameter's value to the corresponding Report Builder parameter. Tokens are not case-sensitive.

What is the significance of “Priority” field in Concurrent Program Definition??

Oracle definition for Priority:


Priority is used to indicate the priority that the concurrent request will be assigned when it is submitted. If you do
not assign a priority, the user's profile option Concurrent:Priority sets the request's priority at submission.
To know how to set the priority, read this article: http://www.oracleappshub.com/aol/priority-for-concurrent-program/
What is the significance of _ALL suffix in oracle tables?

_ALL tables hold the data of all operating units in a Multi-Org environment. You can also set the client
info (or, in R12, the MO policy context) to a specific operating unit to see the data of that operating unit only, as sketched below.
Check out the article -> MOAC – Oracle Apps ORG_ID, Multi Org Concept
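
A minimal R12-style sketch of switching the session to one operating unit before querying an org-striped object (org_id 204 is just an example, and the secured AP_INVOICES view over AP_INVOICES_ALL is assumed to exist in the APPS schema):

BEGIN
   mo_global.set_policy_context('S', 204);   -- 'S' = single operating unit, 204 = example org_id
END;
/
SELECT COUNT(*) FROM ap_invoices;            -- now returns only rows of operating unit 204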

Which API is used to get the current application user id?

fnd_profile.value('USER_ID')

or

fnd_global.USER_ID

Note: New questions will be added to this list on a regular basis.

What is P_CONC_REQUEST_ID?
No, it is not mandatory; reports can be submitted in Oracle Apps without p_conc_request_id as well.
It is mandatory if we need to work with profiles such as org_id, user_id, resp_id, etc. By using
p_conc_request_id we can capture resources like profile values for the current concurrent program
being submitted from the SRS window.

How will you debug your reports?


SRW.MESSAGE (msg_number NUMBER, msg_text CHAR);

Key FND Tables in Oracle Application


Here are a few key FND tables that we use in our AOL queries.
FND_APPLICATION:
Stores applications registered with Oracle Application Object Library.
FND_APPLICATION_TL:
Stores translated information about all the applications registered with Oracle Application Object Library.
FND_APP_SERVERS:
This table will track the servers used by the E-Business Suite system.
FND_ATTACHED_DOCUMENTS:
Stores information relating a document to an application entity.
FND_CONCURRENT_PROCESSES:
Stores information about concurrent managers.
FND_CONCURRENT_PROCESSORS:
Stores information about immediate (subroutine) concurrent program libraries.
FND_CONCURRENT_PROGRAMS:
Stores information about concurrent programs. Each row includes a name and description of the concurrent
program.
FND_CONCURRENT_PROGRAMS_TL:
Stores translated information about concurrent programs in each of the installed languages.
FND_CONCURRENT_QUEUES:
Stores information about concurrent managers.

FND_CONCURRENT_QUEUE_SIZE:
Stores information about the number of requests a concurrent manager can process at once, according to its
work shift.
FND_CONCURRENT_REQUESTS:
Stores information about individual concurrent requests.
FND_CONCURRENT_REQUEST_CLASS:
Stores information about concurrent request types.
FND_CONC_REQ_OUTPUTS:
This table stores output files created by Concurrent Request.
FND_CURRENCIES:
Stores information about currencies.
FND_DATABASES:
It tracks the databases employed by the eBusiness suite. This table stores information about the database that
is not instance specific.
FND_DATABASE_INSTANCES:
Stores instance specific information. Every database has one or more instance.
FND_DESCRIPTIVE_FLEXS:
Stores setup information about descriptive flexfields.
FND_DESCRIPTIVE_FLEXS_TL:
Stores translated setup information about descriptive flexfields.
FND_DOCUMENTS:
Stores language-independent information about a document.
FND_EXECUTABLES:
Stores information about concurrent program executables.
FND_FLEX_VALUES:
Stores valid values for key and descriptive flexfield segments.
FND_FLEX_VALUE_SETS:
Stores information about the value sets used by both key and descriptive flexfields.
FND_LANGUAGES:
Stores information regarding languages and dialects.
FND_MENUS:
It lists the menus that appear in the Navigate Window, as determined by the System Administrator when
defining responsibilities for function security.
FND_MENUS_TL:
Stores translated information about the menus in FND_MENUS.
FND_MENU_ENTRIES:
Stores information about individual entries in the menus in FND_MENUS.
FND_PROFILE_OPTIONS:
Stores information about user profile options.
FND_REQUEST_GROUPS:
Stores information about report security groups.
FND_REQUEST_SETS:
Stores information about report sets.
FND_RESPONSIBILITY:
Stores information about responsibilities. Each row includes the name and description of the responsibility,
the application it belongs to, and values that identify the main menu, and the first form that it uses.
FND_RESPONSIBILITY_TL:
Stores translated information about responsibilities.
FND_RESP_FUNCTIONS:
Stores security exclusion rules for function security menus. Security exclusion rules are lists of functions
and menus inaccessible to a particular responsibility.
FND_SECURITY_GROUPS:
Stores information about security groups used to partition data in a Service Bureau architecture.
FND_SEQUENCES:
Stores information about the registered sequences in your applications.
FND_TABLES:
Stores information about the registered tables in your applications.
FND_TERRITORIES:
Stores information for countries, alternatively known as territories.
FND_USER:
Stores information about application users.
FND_VIEWS:
Stores information about the registered views in your applications.
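
A sketch of a typical AOL query that joins some of these tables: recent concurrent requests with their program names and requesting users.

SELECT fcr.request_id,
       fcpt.user_concurrent_program_name,
       fu.user_name,
       fcr.phase_code,
       fcr.status_code,
       fcr.actual_start_date
FROM   fnd_concurrent_requests    fcr,
       fnd_concurrent_programs_tl fcpt,
       fnd_user                   fu
WHERE  fcr.concurrent_program_id  = fcpt.concurrent_program_id
AND    fcr.program_application_id = fcpt.application_id
AND    fcpt.language              = USERENV('LANG')
AND    fcr.requested_by           = fu.user_id
AND    fcr.request_date           > SYSDATE - 1     -- requests submitted in the last day
ORDER BY fcr.request_id DESC;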

We have 3 KFFs in the GL module:

1. Accounting Key Flexfield: using the Accounting Key Flexfield we design our organization's account structure.

2. Reporting Attributes Key Flexfield: this key flexfield is used for reporting purposes.

3. GL Ledger Flexfield: this is a mirror of the Accounting Key Flexfield. Whatever we enter in the Accounting Key
Flexfield is copied to the GL Ledger Flexfield, and it adds one more segment (the Ledger segment), which is useful
when we are doing FSG and Mass Allocations.

Migrate AOL objects from one environment to another environment using FNDLOAD

The Generic Loader (FNDLOAD) is a concurrent program that can transfer Oracle Application entity data between
database and text file. The loader reads a configuration file to determine which entity to access. In simple words
FNDLOAD is used to transfer entity data from one instance/database to other. For example if you want to move a
concurrent program/menu/value sets developed in DEVELOPMENT instance to PRODUCTION instance you can use
this command.

To FNDLOAD Concurrent Programs

FNDLOAD apps/$CLIENT_APPS_PWD 0 Y DOWNLOAD $FND_TOP/patch/115/import/afcpprog.lct


XX_CUSTOM_ORACLE_INTERFACE_PROG.ldt PROGRAM APPLICATION_SHORT_NAME="XXGMS"
CONCURRENT_PROGRAM_NAME="XX_CUSTOM_ORACLE_INTERFACE_PROG"
##Note that
##---------
## XXGMS will be your custom GMS Application Shortname where the concurrent program is registered
## XX_CUSTOM_ORACLE_INTERFACE_PROG will be the short name of your concurrent program
## XX_CUSTOM_ORACLE_INTERFACE_PROG.ldt is the file to which the concurrent program definition will be extracted
##
##To upload
FNDLOAD apps/$CLIENT_APPS_PWD 0 Y UPLOAD $FND_TOP/patch/115/import/afcpprog.lct
XX_CUSTOM_ORACLE_INTERFACE_PROG.ldt

To FNDLOAD Request sets with stages

## For this you will be firstly required to download the request set definition.
## Next you will be required to download the Sets Linkage definition
## Well, lets be clear here, the above sequence is more important while uploading
FNDLOAD apps/$CLIENT_APPS_PWD 0 Y DOWNLOAD $FND_TOP/patch/115/import/afcprset.lct
XX_GL_MY_INTERFACE_SET.ldt REQ_SET REQUEST_SET_NAME="FNDRSSUB4610101_Will_look_like_this"
FNDLOAD apps/$CLIENT_APPS_PWD 0 Y DOWNLOAD $FND_TOP/patch/115/import/afcprset.lct
XX_GL_MY_INTERFACE_SET_LINK.ldt REQ_SET_LINKS
REQUEST_SET_NAME="FNDRSSUB4610101_Will_look_like_this"
## Note that FNDRSSUB4610101 can be found by doing an examine on the
########----->select request_set_name from fnd_request_sets_vl
########----->where user_request_set_name = 'User visible name for the request set here'
## Now for uploading the request set, execute the below commands
FNDLOAD apps/$CLIENT_APPS_PWD 0 Y UPLOAD $FND_TOP/patch/115/import/afcprset.lct
XX_GL_MY_INTERFACE_SET.ldt
FNDLOAD apps/$CLIENT_APPS_PWD 0 Y UPLOAD $FND_TOP/patch/115/import/afcprset.lct
XX_GL_MY_INTERFACE_SET_LINK.ldt

To FNDLOAD Request groups

FNDLOAD apps/$CLIENT_APPS_PWD 0 Y DOWNLOAD $FND_TOP/patch/115/import/afcpreqg.lct


XX_MY_REPORT_GROUP_NAME.ldt REQUEST_GROUP REQUEST_GROUP_NAME="XX_MY_REPORT_GROUP_NAME"
APPLICATION_SHORT_NAME="XXGMS"
##Note that
##---------
## XXGMS will be your Application Shortname where the request group is registered
## XX_MY_REPORT_GROUP_NAME will be the name of your request group
##
##To upload this Request Group in the other environment after having transferred the ldt file

FNDLOAD apps/$CLIENT_APPS_PWD 0 Y UPLOAD $FND_TOP/patch/115/import/afcpreqg.lct XX_MY_REPORT_GROUP_NAME.ldt

To FNDLOAD responsibility

FNDLOAD apps/$CLIENT_APPS_PWD 0 Y DOWNLOAD $FND_TOP/patch/115/import/afscursp.lct XX_PERSON_RESPY.ldt


FND_RESPONSIBILITY RESP_KEY="XX_PERSON_RESPY"
## note that XX_PERSON_RESPY is the responsibility key
## Now to upload
FNDLOAD apps/$CLIENT_APPS_PWD 0 Y UPLOAD $FND_TOP/patch/115/import/afscursp.lct XX_PERSON_RESPY.ldt

To FNDLOAD Profile options

FNDLOAD apps/$CLIENT_APPS_PWD 0 Y DOWNLOAD $FND_TOP/patch/115/import/afscprof.lct


POR_ENABLE_REQ_HEADER_CUST.ldt PROFILE PROFILE_NAME="POR_ENABLE_REQ_HEADER_CUST"
APPLICATION_SHORT_NAME="ICX"
## Note that
## POR_ENABLE_REQ_HEADER_CUST is the short name of profile option
## We aren't passing the user profile option name in this case. Validate using ...
########----->select application_id, PROFILE_OPTION_NAME || '==>' || profile_option_id || '==>' ||
########----->USER_PROFILE_OPTION_NAME
########----->from FND_PROFILE_OPTIONS_VL
########----->where PROFILE_OPTION_NAME like '%' || upper('&profile_option_name') || '%'
########----->order by PROFILE_OPTION_NAME
########----->/
## Now to upload
FNDLOAD apps/$CLIENT_APPS_PWD 0 Y UPLOAD $FND_TOP/patch/115/import/afscprof.lct
POR_ENABLE_REQ_HEADER_CUST.ldt

To FNDLOAD User definitions

FNDLOAD apps/$CLIENT_APPS_PWD 0 Y DOWNLOAD $FND_TOP/patch/115/import/afscursp.lct


./XX_FND_USER_PASSI.ldt FND_USER USER_NAME='GANESH'
#Do not worry about your password being extracted, it will be encrypted as below in ldt file
#BEGIN FND_USER "GANESH"
# OWNER = "GEMBALI"
# LAST_UPDATE_DATE = "2007/06/12"
# ENCRYPTED_USER_PASSWORD = "ZGE45A8A9BE5CF4339596C625B99CAEDF136C34FEA244DC7A"
# SESSION_NUMBER = "0"
To upload the FND_USER using the FNDLOAD command, use:
FNDLOAD apps/$CLIENT_APPS_PWD 0 Y UPLOAD $FND_TOP/patch/115/import/afscursp.lct ./XX_FND_USER_PASSI.ldt
Notes for using FNDLOAD against FND_USER:-
1. After uploading using FNDLOAD, the user will be prompted to change their password again during their next sign-on
attempt.
2. All the responsibilities will be extracted by FNDLOAD along with the User Definition in FND_USER.
3. In the target environment, make sure that you have done FNDLOAD for new responsibilities prior to running
FNDLOAD on users.

Notes :

1. Give special attention when downloading Menus or Responsibilities.


In case your client has several developers modifying Responsibilities and Menus, be ultra careful. Not being careful
will mean that untested Forms and Functions become available in your client's Production environment alongside your
tested forms, functions and menus.
2. Be very careful when downloading flexfields that reference value sets with independent values for GL Segment Codes.
By doing so, you will download and extract all the test data in GL Codes that might not be applicable to production.
3. There are several variations possible for FNDLOAD; for example, you can restrict the downloads and uploads to specific
segments within Descriptive Flexfields. Please amend the above examples as desired to apply the appropriate
filtering.
4. The list of examples by no means covers all possible FNDLOAD entities.
5. FNDLOAD is very reliable and stable, if used properly. This happens to be one of my favourite Oracle utilities.
6. Last but not the least, please test your FNDLOAD properly, so as to ensure that you do not get any unexpected data. In
the past I have noticed undesired results when a Lookup gets modified manually directly on production, and then
FNDLOAD is run for similar changes. If possible, try to follow a good practice of modifying FNDLOAD-able data only by
FNDLOAD in the production environment.
7. As the name suggests, FNDLOAD is useful for FND-related objects. However, in any implementation you will be
required to migrate the setups in Financials and Oracle HRMS from one environment to another. For this you can use
iSetup, "Oracle iSetup".
Some of the things that can be migrated using Oracle iSetup are:
GL Set of Books, HR Organization Structures, HRMS Employees, Profile Options Setup, Suppliers, Customers, Tax
Codes & Tax Rates, Financials Setup, Accounting Calendars, Chart of Accounts, GL Currencies.

SQL Loader With Examples

What is SQL*Loader and what is it used for?


SQL*Loader is a bulk loader utility used for moving data from external files into the Oracle database. Its syntax is
similar to that of the DB2 Load utility, but comes with more options. SQL*Loader supports various load formats,
selective loading, and multi-table loads.

How does one use the SQL*Loader utility?


One can load data into an Oracle database by using the sqlldr (sqlload on some platforms) utility. Invoke the utility
without arguments to get a list of available parameters. Look at the following example:
sqlldr scott/tiger control=loader.ctl

This sample control file (loader.ctl) will load an external data file containing delimited data:

load data

infile 'c:\data\mydata.csv'

into table emp

fields terminated by "," optionally enclosed by '"'

( empno, empname, sal, deptno )
The mydata.csv file may look like this:

10001,"Scott Tiger", 1000, 40

10002,"Frank Naude", 500, 20

Another Sample control file with in-line data formatted as fix length records. The trick is to specify "*" as the name of
the data file, and use BEGINDATA to start the data section in the control file.

load data

infile *

replace

into table departments

( dept position (02:05) char(4),

deptname position (08:27) char(20) )

begindata

COSC COMPUTER SCIENCE

ENGL ENGLISH LITERATURE

MATH MATHEMATICS

POLY POLITICAL SCIENCE

Is there a SQL*Unloader to download data to a flat file?


Oracle does not supply any data unload utilities. However, you can use SQL*Plus to select and format your data and
then spool it to a file:

set echo off newpage 0 space 0 pagesize 0 feed off head off trimspool on

spool oradata.txt

select col1 || ',' || col2 || ',' || col3

from tab1

where col2 = 'XYZ';

spool off
Alternatively use the UTL_FILE PL/SQL package:

Remember to set the utl_file_dir='c:\oradata' parameter in initSID.ora.

declare

fp utl_file.file_type;

begin

fp := utl_file.fopen('c:\oradata','tab1.txt','w');

utl_file.putf(fp, '%s, %s\n', 'TextField', 55);

utl_file.fclose(fp);

end;

You might also want to investigate third party tools like TOAD or ManageIT Fast Unloader from CA to help you
unload data from Oracle.

Can one load variable and fix length data records?


Yes, look at the following control file examples. In the first we will load delimited data (variable length):

LOAD DATA

INFILE *

INTO TABLE load_delimited_data

FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"'

TRAILING NULLCOLS

( data1, data2 )

BEGINDATA

11111,AAAAAAAAAA

22222,"A,B,C,D,"

If you need to load positional data (fixed length), look at the following control file example: LOAD DATA

INFILE *

INTO TABLE load_positional_data

( data1 POSITION(1:5),

data2 POSITION(6:15) )

BEGINDATA

11111AAAAAAAAAA
22222BBBBBBBBBB

Can one skip header records while loading?


Use the "SKIP n" keyword, where n = number of logical rows to skip. Look at this example: LOAD DATA

INFILE *

INTO TABLE load_positional_data

SKIP 5

( data1 POSITION(1:5),

data2 POSITION(6:15) )

BEGINDATA

11111AAAAAAAAAA

22222BBBBBBBBBB

Can one modify data as it loads into the database?


Data can be modified as it loads into the Oracle Database. Note that this only applies for the conventional load path
and not for direct path loads.

LOAD DATA

INFILE *

INTO TABLE modified_data

( rec_no "my_db_sequence.nextval",

region CONSTANT '31',

time_loaded "to_char(SYSDATE, 'HH24:MI')",

data1 POSITION(1:5) ":data1/100",

data2 POSITION(6:15) "upper(:data2)",

data3 POSITION(16:22)"to_date(:data3, 'YYMMDD')" )

BEGINDATA

11111AAAAAAAAAA991201

22222BBBBBBBBBB990112

LOAD DATA

INFILE 'mail_orders.txt'

BADFILE 'bad_orders.txt'
APPEND

INTO TABLE mailing_list

FIELDS TERMINATED BY ","

( addr,

city,

state,

zipcode,

mailing_addr "decode(:mailing_addr, null, :addr, :mailing_addr)",

mailing_city "decode(:mailing_city, null, :city, :mailing_city)",

mailing_state )

Can one load data into multiple tables at once?


Look at the following control file:

LOAD DATA

INFILE *

REPLACE

INTO TABLE emp

WHEN empno != ' '

( empno POSITION(1:4) INTEGER EXTERNAL,

ename POSITION(6:15) CHAR,

deptno POSITION(17:18) CHAR,

mgr POSITION(20:23) INTEGER EXTERNAL )

INTO TABLE proj

WHEN projno != ' '

( projno POSITION(25:27) INTEGER EXTERNAL,

empno POSITION(1:4) INTEGER EXTERNAL )

Can one selectively load only the records that one needs?
Look at this example, (01) is the first character, (30:37) are characters 30 to 37:

LOAD DATA

INFILE 'mydata.dat'
BADFILE 'mydata.bad'

DISCARDFILE 'mydata.dis'

APPEND

INTO TABLE my_selective_table

WHEN (01) <> 'H' and (01) <> 'T' and (30:37) = '19991217'

( region CONSTANT '31',

service_key POSITION(01:11) INTEGER EXTERNAL,

call_b_no POSITION(12:29) CHAR )

Can one skip certain columns while loading data?


One cannot use POSITION(x:y) with delimited data. Luckily, from Oracle 8i one can specify FILLER columns. FILLER
columns are used to skip columns/fields in the load file, ignoring fields that one does not want. Look at this example:

LOAD DATA

TRUNCATE

INTO TABLE T1

FIELDS TERMINATED BY ','

( field1,

field2 FILLER,

field3 )

How does one load multi-line records?


One can create one logical record from multiple physical records using one of the following two clauses:

CONCATENATE: - use when SQL*Loader should combine the same number of physical records together to form one
logical record.

CONTINUEIF - use if a condition indicates that multiple records should be treated as one. Eg. by having a '#' character
in column 1.

How can one get SQL*Loader to COMMIT only at the end of the load file?

One cannot, but by setting the ROWS= parameter to a large value, committing can be reduced. Make sure you have
big rollback segments ready when you use a high value for ROWS=.

Can one improve the performance of SQL*Loader?


A very simple but easily overlooked hint is not to have any indexes and/or constraints (primary key) on your load
tables during the load process: having them will significantly slow down load times, even with ROWS= set to a high value.
Add the following option in the command line: DIRECT=TRUE. This will effectively bypass most of the RDBMS
processing. However, there are cases when you can't use direct load. Refer to chapter 8 on Oracle server Utilities
manual.

Turn off database logging by specifying the UNRECOVERABLE option. This option can only be used with direct data
loads.

Run multiple load jobs concurrently.

What is the difference between the conventional and direct path loader?

The conventional path loader essentially loads the data by using standard INSERT statements. The direct path loader
(DIRECT=TRUE) bypasses much of the logic involved with that, and loads directly into the Oracle data files. More
information about the restrictions of direct path loading can be obtained from the Utilities Users Guide.


SQL LOADER

SQL LOADER utility is used to load data from other data sources into Oracle. For example, if you have a table in
FOXPRO, ACCESS or SYBASE or any other third party database, you can use SQL Loader to load the data into Oracle
Tables. SQL Loader will only read the data from Flat files. So If you want to load the data from Foxpro or any other
database, you have to first convert that data into Delimited Format flat file or Fixed length format flat file, and then
use SQL loader to load the data into Oracle.

Following is procedure to load the data from Third Party Database into Oracle using SQL Loader.

Convert the Data into Flat file using third party database command.

Create the Table Structure in Oracle Database using appropriate datatypes

Write a Control File, describing how to interpret the flat file and options to load the data.

Execute SQL Loader utility specifying the control file in the command line argument

To understand it better let us see the following case study.

CASE STUDY (Loading Data from MS-ACCESS to Oracle)


Suppose you have a table in MS-ACCESS by name EMP, running under Windows O/S, with the following structure

EMPNO INTEGER

NAME TEXT(50)

SAL CURRENCY

JDATE DATE
This table contains some 10,000 rows. Now you want to load the data from this table into an Oracle Table. Oracle
Database is running in LINUX O/S.

Solution

Steps

Start MS-Access and convert the table into a comma delimited flat file (popularly known as CSV) by clicking on the
File/Save As menu. Let the delimited file name be emp.csv

Now transfer this file to Linux Server using FTP command

Go to Command Prompt in windows

At the command prompt type FTP followed by IP address of the server running Oracle.

FTP will then prompt you for username and password to connect to the Linux Server. Supply a valid username and
password of Oracle User in Linux

For example:-

C:\>ftp 200.200.100.111

Name: oracle

Password:oracle

FTP>

Now give PUT command to transfer file from current Windows machine to Linux machine.

FTP>put

Local file:C:\>emp.csv

remote-file:/u01/oracle/emp.csv

File transferred in 0.29 Seconds

FTP>

Now after the file is transferred quit the FTP utility by typing bye command.

FTP>bye

Good-Bye

Now come to the Linux machine and create a table in Oracle with the same structure as in MS-ACCESS, taking
appropriate datatypes. For example, create a table like this:

$sqlplus scott/tiger

SQL> CREATE TABLE emp (empno number(5),
                       name  varchar2(50),
                       sal   number(10,2),
                       jdate date);

After creating the table, you have to write a control file describing the actions which SQL Loader should do. You can
use any text editor to write the control file. Now let us write a controlfile for our case study

$vi emp.ctl

1) LOAD DATA

2) INFILE '/u01/oracle/emp.csv'

3) BADFILE '/u01/oracle/emp.bad'

4) DISCARDFILE '/u01/oracle/emp.dsc'

5) INSERT INTO TABLE emp

6) FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"' TRAILING NULLCOLS

7) (empno, name, sal, jdate date 'mm/dd/yyyy')

Notes:

(Do not write the line numbers, they are meant for explanation purpose)

1. The LOAD DATA statement is required at the beginning of the control file.

2.The INFILE option specifies where the input file is located

3.Specifying BADFILE is optional. If you specify,then bad records found during loading will be stored in this file.

4.Specifying DISCARDFILE is optional. If you specify, then records which do not meet a WHEN condition will be
written to this file.

5. You can use any of the following loading options:

1. INSERT: loads rows only if the target table is empty.

2. APPEND: loads rows whether the target table is empty or not.

3. REPLACE: first deletes all the rows in the existing table and then loads rows.

4. TRUNCATE: first truncates the table and then loads rows.

6. This line indicates how the fields are separated in the input file. Since in our case the fields are separated by ",", we
have specified "," as the terminating character for the fields. You can replace this with any character that is used to
terminate fields. Some of the popularly used terminating characters are semicolon ";", colon ":", pipe "|" etc.
TRAILING NULLCOLS means that if trailing columns are missing in a record they are treated as null; otherwise
SQL*Loader will treat the record as bad.

7. In this line we specify the columns of the target table. Note how the format for date columns is specified.

After you have written the control file, save it and then call the SQL*Loader utility by typing the following command:

$sqlldr userid=scott/tiger control=emp.ctl log=emp.log

After you have executed the above command, SQL*Loader will show you output describing how many rows it has
loaded.
The LOG option of sqlldr specifies where the log file of this SQL*Loader session should be created. The log file contains
all actions which SQL*Loader has performed, i.e. how many rows were loaded, how many were rejected, how
much time was taken to load the rows, etc. You have to view this file for any errors encountered while running
SQL*Loader.

CASE STUDY (Loading Data from Fixed Length file into Oracle)
Suppose we have a fixed length format file containing employees' data, as shown below, and we want to load this data
into an Oracle table.

7782 CLARKMANAGER78392572.5010

7839 KINGPRESIDENT5500.0010

7934 MILLERCLERK7782920.0010

7566 JONESMANAGER78393123.7520

7499 ALLENSALESMAN76981600.00300.00 30

7654 MARTINSALESMAN76981312.501400.00 30

7658 CHANANALYST75663450.0020

7654 MARTINSALESMAN76981312.501400.00 30

SOLUTION:

Steps :-

1. First open the file in a text editor and count the length of the fields. For example, in our fixed length file, the employee
number is from the 1st position to the 4th position, the employee name is from the 6th position to the 15th position, the
job name is from the 17th position to the 25th position. Similarly the other columns are located.

2. Create a table in Oracle, by any name, but it should match the columns specified in the fixed length file. In our case give the
following command to create the table.

SQL> CREATE TABLE emp (empno  NUMBER(5),
                       name   VARCHAR2(20),
                       job    VARCHAR2(10),
                       mgr    NUMBER(5),
                       sal    NUMBER(10,2),
                       comm   NUMBER(10,2),
                       deptno NUMBER(3) );

3.After creating the table, now write a control file by using any text editor

$vi empfix.ctl

1) LOAD DATA

2) INFILE '/u01/oracle/fix.dat'
3) INTO TABLE emp

4) (empno  POSITION(01:04) INTEGER EXTERNAL,
    name   POSITION(06:15) CHAR,
    job    POSITION(17:25) CHAR,
    mgr    POSITION(27:30) INTEGER EXTERNAL,
    sal    POSITION(32:39) DECIMAL EXTERNAL,
    comm   POSITION(41:48) DECIMAL EXTERNAL,
5)  deptno POSITION(50:51) INTEGER EXTERNAL)

Notes:

(Do not write the line numbers, they are meant for explanation purpose)

1. The LOAD DATA statement is required at the beginning of the control file.

2. The name of the file containing data follows the INFILE parameter.

3. The INTO TABLE statement is required to identify the table to be loaded into.

4. Lines 4 and 5 identify a column name and the location of the data in the datafile to be loaded into that column.
empno, name, job, and so on are names of columns in table emp. The datatypes (INTEGER EXTERNAL, CHAR,
DECIMAL EXTERNAL) identify the datatype of data fields in the file, not of corresponding columns in the emp table.

5. Note that the set of column specifications is enclosed in parentheses.

4. After saving the control file, now start the SQL*Loader utility by typing the following command:

$sqlldr userid=scott/tiger control=empfix.ctl log=empfix.log direct=y

After you have executed the above command, SQL*Loader will show you output describing how many rows it has
loaded.

Loading Data into Multiple Tables using WHEN condition


You can simultaneously load data into multiple tables within the same session. You can also use the WHEN condition to load
only the rows which meet a particular condition (only equal to "=" and not equal to "<>" conditions are
allowed).

For example, suppose we have a fixed length file as shown below

7782 CLARKMANAGER78392572.5010

7839 KINGPRESIDENT5500.0010

7934 MILLERCLERK7782920.0010

7566 JONESMANAGER78393123.7520

7499 ALLENSALESMAN76981600.00300.00 30

7654 MARTINSALESMAN76981312.501400.00 30
7658 CHANANALYST75663450.0020

7654 MARTINSALESMAN76981312.501400.00 30

Now we want to load all the employees whose deptno is 10 into emp1 table and those employees whose deptno is
not equal to 10 in emp2 table. To do this first create the tables emp1 and emp2 by taking appropriate columns and
datatypes. Then, write a control file as shown below

$vi emp_multi.ctl

Load Data
infile '/u01/oracle/empfix.dat'
append into table scott.emp1
WHEN (deptno='10 ')
(empno  POSITION(01:04) INTEGER EXTERNAL,
 name   POSITION(06:15) CHAR,
 job    POSITION(17:25) CHAR,
 mgr    POSITION(27:30) INTEGER EXTERNAL,
 sal    POSITION(32:39) DECIMAL EXTERNAL,
 comm   POSITION(41:48) DECIMAL EXTERNAL,
 deptno POSITION(50:51) INTEGER EXTERNAL)
INTO TABLE scott.emp2
WHEN (deptno<>'10 ')
(empno  POSITION(01:04) INTEGER EXTERNAL,
 name   POSITION(06:15) CHAR,
 job    POSITION(17:25) CHAR,
 mgr    POSITION(27:30) INTEGER EXTERNAL,
 sal    POSITION(32:39) DECIMAL EXTERNAL,
 comm   POSITION(41:48) DECIMAL EXTERNAL,
 deptno POSITION(50:51) INTEGER EXTERNAL)

After saving the file emp_multi.ctl, run sqlldr:

$sqlldr userid=scott/tiger control=emp_multi.ctl

Conventional Path Load and Direct Path Load.

SQL*Loader can load the data into the Oracle database using the conventional path method or the direct path method. You can
specify the method by using the DIRECT command line option. If you give DIRECT=TRUE then SQL*Loader will use direct
path loading; otherwise, if you omit this option or specify DIRECT=FALSE, then SQL*Loader will use the conventional path
loading method.

Conventional Path

Conventional path load (the default) uses the SQL INSERT statement and a bind array buffer to load data into
database tables.

When SQL*Loader performs a conventional path load, it competes equally with all other processes for buffer
resources. This can slow the load significantly. Extra overhead is added as SQL statements are generated, passed to
Oracle, and executed.

The Oracle database looks for partially filled blocks and attempts to fill them on each insert. Although appropriate
during normal use, this can slow bulk loads dramatically.

Direct Path

In Direct Path Loading, Oracle will not use SQL INSERT statement for loading rows. Instead it directly writes the rows,
into fresh blocks beyond High Water Mark, in datafiles i.e. it does not scan for free blocks before high water mark.
Direct Path load is very fast because

Partial blocks are not used, so no reads are needed to find them, and fewer writes are performed.

SQL*Loader need not execute any SQL INSERT statements; therefore, the processing load on the Oracle database is
reduced.

A direct path load calls on Oracle to lock tables and indexes at the start of the load and releases them when the load
is finished. A conventional path load calls Oracle once for each array of rows to process a SQL INSERT statement.

A direct path load uses multiblock asynchronous I/O for writes to the database files.

During a direct path load, processes perform their own write I/O, instead of using Oracle's buffer cache. This
minimizes contention with other Oracle users.

Restrictions on Using Direct Path Loads


The following conditions must be satisfied for you to use the direct path load method:

Tables are not clustered.

Tables to be loaded do not have any active transactions pending.

In addition, the following are not supported with direct path loads:

Loading a parent table together with a child table.

Loading BFILE columns.

What is Autoinvoice?

AutoInvoice is a concurrent program in Oracle Receivables that performs invoice processing at both the order and line
levels. Once an order or line or set of lines is eligible for invoicing, the Invoice Interface workflow activity interfaces
the data to Receivables. Oracle Order Management inserts records into the following interface tables:
RA_INTERFACE_LINES and RA_INTERFACE_SALES_CREDITS.
MOAC Architecture

How to Register Oracle XML Reports in oracle apps R12


Overview
Oracle XML Publisher is a template-based publishing solution delivered with the Oracle E-Business Suite. It provides
a new approach to report design and publishing by integrating familiar desktop word processing tools with existing E-
Business Suite data reporting. At runtime, XML Publisher merges the custom templates with the concurrent request
data extracts to generate output in PDF, HTML, RTF, EXCEL or even TEXT for use with EFT and EDI transmissions.

STEP -1

Navigation: Login into Oracle Applications –> Go to System Administrator Responsibility –> Concurrent –>
Executable

FIELDS:
 Executable: This is User Understandable Name
 Short Name: This is Unique and for system reference
 Application: Under which application you want to register this CONA_PO Program
 Description: Description
 Execution Method: Based on this field, your file has to be placed in respective directory or database.
 Execution File Name: This is the actual Report file name.

Action: Save

STEP -2

Create a new concurrent program Purchase Order Report that will call the CONA_PO executable declared above.
Make sure that output format is placed as XML.

Navigation: Go to Application Developer Responsibility -> Concurrent ->Program

Note: Output Format should be 'XML' for registering the report in XML.

STEP -3

Make sure that the report parameter name and the token name are same.
STEP -4

Add this new concurrent program to the corresponding responsibility.


Navigation: Go to System Administrator Responsibility ->Security ->Responsibility->Request

STEP -5
Next process is to attach the designed rtf file with the XML code.
In order to attach the rtf file the user should have the responsibility XML Publisher Administrator assigned to him.

First provide the concurrent program short name as Data Definition name in the template manager and register the
template using the data definition created.

Navigation: Go to XML Publisher Administrator->Data Definitions->Create Data definition.

Note: Make sure the code of the data definition is the same as the short name of the concurrent program we
registered, so that the concurrent manager can retrieve the templates associated with the
concurrent program.

Now create the template with the help of template manager

STEP -6

Navigation: Go to XML Publisher Administrator->Templates ->Create Templates.



Control File Syntax


OPTIONS(SKIP=1)
LOAD DATA

INFILE 'C:\oracle\VIS\apps\apps_st\appl\gl\12.0.0\bin\GL_FLATFILE.csv'

INSERT

INTO TABLE APPS.RAM_GL_INTERFACE_STG

FIELDS TERMINATED BY ","

OPTIONALLY ENCLOSED BY '"'

TRAILING NULLCOLS

( STATUS

,LEDGER_NAME

,ACCOUNTING_DATE

,CURRENCY_CODE

,DATE_CREATED

,CREATED_BY

,ACTUAL_FLAG

,USER_JE_CATEGORY_NAME

,USER_JE_SOURCE_NAME

,SEGMENT1

,SEGMENT2

,SEGMENT3

,SEGMENT4

,SEGMENT5

,ENTERED_DR

,ENTERED_CR

,ACCOUNTED_DR

,ACCOUNTED_CR

,LEDGER_ID

,ERROR_FLAG

,ERROR_MESSAGE

,ERROR_STATUS )
How To Develop Reports (RDF):
Step1: Develop (create) the report.
Step2: Place the report in the specific path on the server (i.e. move the report to the server).
Step3: Create the Concurrent Executable.
Step4: Create the Concurrent Program.
Step5: Attach the Concurrent Program to the Request Group.
Step6: Submit the Concurrent Program.

How To Develop XML Publisher Report:


Step1: Create the report without a layout.
Step2: Transfer the file to the server into the specific path for the responsibility.
EX: $XXQL_TOP/reports/US
Step3: Create a Concurrent Executable based on the report.
Step4: Create a Concurrent Program based on the Concurrent Executable and change the
Output Type = XML.
Step5: Attach it to the Request Group. (To identify the URL of the Oracle Applications front end: select * from
icx_parameters)
Step6: Run the program and save the XML output file.
Step7: Create an RTF template (Rich Text Format) using the XML output file.
Step8: Log in to Oracle Applications and select the XML Publisher Administrator responsibility.
Step9: Create the data definition: XML Publisher Administrator --> Home --> Data Definition.
Note: while creating the data definition, enter
Name: Emp_DD, Code: XXEM (the concurrent program short name),
Application: AOL, then Apply.
Step10: Create a template:
click the Template tab, then click the Create Template button.
Name: USER_EMP, Code: XXEM,
Application: AOL, Data Definition: EMP_DD.
Template File:
* File: Emp.rtf (browse to the rtf file)
* Language: English, then Apply.
Step11: Run the same Concurrent Program (output is in PDF format by default).

Difference between Interface and Conversion

Interface and Conversion are both used to migrate data from a legacy system to the Oracle system. Below are a few
differences between Interface and Conversion.
Interface:
-------------------
1. An interface is a periodic activity; it may run daily, weekly, monthly, quarterly or yearly.
2. Both the legacy system and the Oracle system remain active.
Conversion:
----------------
1. Conversion is a one-time activity.
2. All the data is transferred into the Oracle system in one go.
3. The legacy system will no longer be used after the conversion process.

How to initialize the view

Most of the views are dynamic in nature; you will find data only when you execute them.

The below is the name of the view (the name mentioned in your post is incorrect):

select * from GL_JE_JOURNAL_LINES_V

Most probably you need to initialize the environment prior to trying to select from those tables/views.

Try this

Get the application short name from the fnd_application table

Select application_id, application_short_name from fnd_application

order by 2

If you are using toad or sqldeveloper, after connecting as apps

begin

MO_GLOBAL.SET_POLICY_CONTEXT('S',:P_ORG_ID); -- pass in the organization id

end ;

begin

fnd_global.apps_initialize(:P_USER_ID, :P_RESP_ID, :P_RESP_APPL_ID);

/*

Where p_user_id is user_id from fnd_user table against username

P_resp_id is the responsibility id you are entitled to (against org_id)

P_resp_appl_id is the application id (which you get from the first query itself

*/

end;

begin
MO_GLOBAL.INIT('SQLAP'); -- Payables 'ONT' order management etc

--MO_GLOBAL.INIT('PO');

end;

Once the above bits are executed, you should get the data through a simple select statement
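
A small helper query for finding the ids to pass to apps_initialize (the user and responsibility names here are just examples):

SELECT fu.user_id,
       fr.responsibility_id,
       fr.application_id AS resp_appl_id
FROM   fnd_user              fu,
       fnd_responsibility_vl fr
WHERE  fu.user_name           = 'OPERATIONS'
AND    fr.responsibility_name = 'Payables, Vision Operations (USA)';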

Register Custom Tables in Oracle Apps

Register table

Register Custom Tables in Oracle Apps:

Say you have a custom table called “ERPS_EMPLOYEE” with columns EMP_ID, EMP_NAME and EMP_TYPE in your
database. You need to create a TABLE type Value set that pulls up information from this table as LOV. If you give in
the custom table name in “TABLE NAME” field in the “Validation Table Information” Form, Oracle Apps will not
recognize it and you will get the below error saying table does not exist.

So to make your custom table visible in front end ( while creating Value Set or in Alerts or Audits etc), you have to
register it in Oracle Apps.

Let’s now see how to register a custom table. You will need API named AD_DD for this.

1. First you register the table using the below API:

begin

  ad_dd.register_table
    (p_appl_short_name => 'CUSTOM',         --Application name in which you want to register
     p_tab_name        => 'ERPS_EMPLOYEE',  --Table Name
     p_tab_type        => 'T',              --T for Transaction data, S for seeded data
     p_next_extent     => 512,              --Default 512
     p_pct_free        => 10,               --Default 10
     p_pct_used        => 70                --Default 70
    );

end;

commit;

2. Secondly register each of the columns as below:

Register Column EMP_ID

begin

  ad_dd.register_column
    (p_appl_short_name => 'CUSTOM',         --Application Name
     p_tab_name        => 'ERPS_EMPLOYEE',  --Table Name
     p_col_name        => 'EMP_ID',         --Column Name
     p_col_seq         => 1,                --Column Sequence
     p_col_type        => 'NUMBER',         --Column Data type
     p_col_width       => 10,               --Column Width
     p_nullable        => 'N',              --Use 'N' if mandatory column, otherwise 'Y'
     p_translate       => 'N',              --Use 'Y' if this has translatable values
     p_precision       => null,             --Decimal precision
     p_scale           => null              --Number of digits in number
    );

end;

commit;

Register Column EMP_NAME

begin

  ad_dd.register_column
    (p_appl_short_name => 'CUSTOM',
     p_tab_name        => 'ERPS_EMPLOYEE',
     p_col_name        => 'EMP_NAME',
     p_col_seq         => 2,
     p_col_type        => 'VARCHAR2',
     p_col_width       => 15,
     p_nullable        => 'Y',
     p_translate       => 'N',
     p_precision       => null,
     p_scale           => null
    );

end;

Register Column EMP_TYPE

begin

  ad_dd.register_column
    (p_appl_short_name => 'CUSTOM',
     p_tab_name        => 'ERPS_EMPLOYEE',
     p_col_name        => 'EMP_TYPE',
     p_col_seq         => 3,
     p_col_type        => 'VARCHAR2',
     p_col_width       => 15,
     p_nullable        => 'Y',
     p_translate       => 'N',
     p_precision       => null,
     p_scale           => null);

end;

commit;

3. Thirdly you register Primary Key if the table has any using the below code snippet:

Begin

  ad_dd.register_primary_key
    (p_appl_short_name => 'CUSTOM',              --Application Name
     p_key_name        => 'EMP_ID_PK',           --Unique name for primary key
     p_tab_name        => 'ERPS_EMPLOYEE',       --Table Name
     p_description     => 'Emp ID Primary Key',  --Description
     p_key_type        => 'S',                   --S for Surrogate, D for Developer
     p_audit_flag      => 'Y',
     p_enabled_flag    => 'Y');

end;

commit;

4. Finally you register Primary Key column if your table has a primary key:

Begin

  ad_dd.register_primary_key_column
    (p_appl_short_name => 'CUSTOM',         --Application Name
     p_key_name        => 'EMP_ID_PK',      --Primary Key name given above
     p_tab_name        => 'ERPS_EMPLOYEE',  --Table Name
     p_col_name        => 'EMP_ID',         --Primary Column name
     p_col_sequence    => 1);               --Column sequence

end;

commit;

Navigate to Application Developer responsibility > Application > Database > Table

Query for the table name that we have registered – “ERPS_EMPLOYEE”. Please note that you cannot register your
table using this form in the front end. You will have to use API. This form is only meant for viewing the information.

Check for the primary key information by clicking on the Primary Key button

Now in your Value set, you will be able to use the table ERPS_EMPLOYEE without any errors.

To delete a registered table and its columns, use the below APIs:

AD_DD.DELETE_COLUMN (appl_short_name, table_name, column_name);

AD_DD.DELETE_TABLE (appl_short_name, table_name);
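
For example, to de-register the ERPS_EMPLOYEE table created above, the calls can be wrapped in a small anonymous block (a sketch; delete every registered column before deleting the table):

begin
   ad_dd.delete_column ('CUSTOM', 'ERPS_EMPLOYEE', 'EMP_ID');
   ad_dd.delete_column ('CUSTOM', 'ERPS_EMPLOYEE', 'EMP_NAME');
   ad_dd.delete_column ('CUSTOM', 'ERPS_EMPLOYEE', 'EMP_TYPE');
   ad_dd.delete_table  ('CUSTOM', 'ERPS_EMPLOYEE');
end;
/
commit;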


Banks tables in R12

Cash Management (CE)

New Tables

Table Name – Feature Area

CE_BANK_ACCOUNTS – Bank Account Model
CE_BANK_ACCT_USES_ALL – Bank Account Model
CE_GL_ACCOUNTS_CCID – Bank Account Model
CE_INTEREST_BALANCE_RANGES – Balances and Interest Calculation
CE_INTEREST_RATES – Balances and Interest Calculation
CE_INTEREST_SCHEDULES – Balances and Interest Calculation
CE_BANK_ACCT_BALANCES – Balances and Interest Calculation
CE_PROJECTED_BALANCES – Balances and Interest Calculation
CE_INT_CALC_DETAILS_TMP – Balances and Interest Calculation
CE_CASHFLOWS – Bank Account Transfers
CE_CASHFLOW_ACCT_H – Bank Account Transfers
CE_PAYMENT_TRANSACTIONS – Bank Account Transfers
CE_PAYMENT_TEMPLATES – Bank Account Transfers
CE_TRXNS_SUBTYPE_CODES – Bank Account Transfers
CE_XLA_EXT_HEADERS – Subledger Accounting
CE_CONTACT_ASSIGNMENTS – Bank Account Model
CE_AP_PM_DOC_CATEGORIES – Bank Account Model
CE_PAYMENT_DOCUMENTS – Bank Account Model
CE_SECURITY_PROFILES_GT – Bank Account Model
CE_CHECKBOOKS – Bank Account Model

Changed Tables

Table Name – Feature Area – Brief Description of Change

CE_AVAILABLE_TRANSACTIONS_TMP – Bank Statement Reconciliation – Add LEGAL_ENTITY_ID and ORG_ID
CE_STATEMENT_RECONCILS_ALL – Bank Statement Reconciliation – Add LEGAL_ENTITY_ID
CE_ARCH_RECONCILIATIONS_ALL – Bank Statement Reconciliation – Add LEGAL_ENTITY_ID
CE_SYSTEM_PARAMETERS_ALL – System Parameters – Add LEGAL_ENTITY_ID; add columns for BAT project
CE_ARCH_HEADERS – Bank Statement – Drop ORG_ID
CE_ARCH_INTERFACE_HEADERS – Bank Statement – Drop ORG_ID; add more balance columns
CE_ARCH_INTRA_HEADERS – Bank Statement – Drop ORG_ID
CE_INTRA_STMT_HEADERS – Bank Statement – Drop ORG_ID
CE_STATEMENT_HEADERS – Bank Statement – Drop ORG_ID
CE_STATEMENT_HEADERS_INTERFACE – Bank Statement – Drop ORG_ID; add more balance columns
CE_CASHPOOLS – Cash Leveling – Add LEGAL_ENTITY_ID
CE_PROPOSED_TRANSFERS – Cash Leveling – Add columns for Balance project
CE_LEVELING_MESSAGE – Cash Leveling – Add columns for Balance project
CE_TRANSACTIONS_CODES – Bank Statement – Add columns for Multi-pass reconciliation feature
CE_STATEMENT_LINES – Bank Statement – Add CASHFLOW_ID

Obsolete Tables

Table Name – Feature Area – Replaced By

CE_ARCH_HEADERS_ALL – Bank Statement – CE_ARCH_HEADERS
CE_ARCH_INTERFACE_HEADERS_ALL – Bank Statement – CE_ARCH_INTERFACE_HEADERS
CE_ARCH_INTRA_HEADERS_ALL – Bank Statement – CE_ARCH_INTRA_HEADERS
CE_INTRA_STMT_HEADERS_ALL – Bank Statement – CE_INTRA_STMT_HEADERS
CE_STATEMENT_HEADERS_ALL – Bank Statement – CE_STATEMENT_HEADERS
CE_STATEMENT_HEADERS_INT_ALL – Bank Statement – CE_STATEMENT_HEADERS_INTERFACE

New Views

View Name – Feature Area

CE_SECURITY_PROFILES_V – Bank Account Model
CE_LE_BG_OU_VS_V – Bank Account Model
CE_BANK_ACCOUNTS_V – Bank Account Model
CE_BANK_BRANCHES_V – Bank Account Model
CE_BANK_ACCT_USES – Bank Account Model
CE_BANK_ACCTS_GT_V – Bank Account Model
CE_BANK_ACCT_USES_BG_V – Bank Account Model
CE_BANK_ACCT_USES_LE_V – Bank Account Model
CE_BANK_ACCT_USES_OU_V – Bank Account Model
CE_BANK_ACCTS_CALC_V – Balances and Interests Calculation
CE_INTEREST_RATES_V – Balances and Interests Calculation
CE_260_CF_RECONCILED_V – Bank Statement Reconciliation
CE_260_CF_TRANSACTIONS_V – Bank Statement Reconciliation
CE_260_CF_REVERSAL_V – Bank Statement Reconciliation
CE_INTERNAL_BANK_ACCTS_GT_V – Cash Positioning
CE_XLA_EXT_HEADERS_V – Subledger Accounting
CE_INTERNAL_BANK_ACCTS_V – Bank Account Model
CE_BANKS_V – Bank Account Model
CE_BANK_ACCTS_SEARCH_GT_V – Bank Account Model
CE_XLA_TRANSACTIONS_V – Subledger Accounting
CEFV_BANK_ACCOUNTS – Business Intelligence Service
CEBV_BANK_ACCOUNTS – Business Intelligence Service
CEFV_BANK_BRANCHES – Business Intelligence Service
CEBV_BANK_BRANCHES – Business Intelligence Service

Changed Views
View Name – Feature Area – Brief Description of Change

CE_101_RECONCILED_V – Bank Statement Reconciliation – Enhancement related to MOAC and Bank Account Model features
CE_101_TRANSACTIONS_V – Bank Statement Reconciliation – Enhancement related to MOAC and Bank Account Model features
CE_185_RECONCILED_V – Bank Statement Reconciliation – Enhancement related to MOAC and Bank Account Model features
CE_185_TRANSACTIONS_V – Bank Statement Reconciliation – Enhancement related to MOAC, Subledger Accounting and Bank Account Model features
CE_200_RECONCILED_V – Bank Statement Reconciliation – Enhancement related to MOAC and Bank Account Model features
CE_200_BATCHES_V – Bank Statement Reconciliation – Enhancement related to MOAC and Bank Account Model features
CE_200_REVERSAL_V – Bank Statement Reconciliation – Enhancement related to MOAC and Bank Account Model features
CE_200_TRANSACTIONS_V – Bank Statement Reconciliation – Enhancement related to MOAC and Bank Account Model features
CE_222_RECONCILED_V – Bank Statement Reconciliation – Enhancement related to MOAC and Bank Account Model features
CE_222_BATCHES_V – Bank Statement Reconciliation – Enhancement related to MOAC and Bank Account Model features
CE_222_REVERSAL_V – Bank Statement Reconciliation – Enhancement related to MOAC and Bank Account Model features
CE_222_TRANSACTIONS_V – Bank Statement Reconciliation – Enhancement related to MOAC and Bank Account Model features
CE_222_TXN_FOR_BATCH_V – Bank Statement Reconciliation – Enhancement related to MOAC and Bank Account Model features
CE_260_RECONCILED_V – Bank Statement Reconciliation – Enhancement related to MOAC and Bank Account Model features
CE_260_TRANSACTIONS_V – Bank Statement Reconciliation – Enhancement related to MOAC and Bank Account Model features
CE_801_RECONCILED_V – Bank Statement Reconciliation – Enhancement related to MOAC and Bank Account Model features
CE_801_TRANSACTIONS_V – Bank Statement Reconciliation – Enhancement related to MOAC and Bank Account Model features
CE_801_EFT_RECONCILED_V – Bank Statement Reconciliation – Enhancement related to MOAC and Bank Account Model features
CE_801_EFT_TRANSACTIONS_V – Bank Statement Reconciliation – Enhancement related to MOAC and Bank Account Model features
CE_999_TRANSACTIONS_V – Bank Statement Reconciliation – Enhancement related to MOAC and Bank Account Model features
CE_999_RECONCILED_V – Bank Statement Reconciliation – Enhancement related to MOAC and Bank Account Model features
CE_999_REVERSAL_V – Bank Statement Reconciliation – Enhancement related to MOAC and Bank Account Model features
CE_ALL_STATEMENTS_V – Bank Statement Reconciliation – Enhancement related to MOAC and Bank Account Model features
CE_ARCH_RECONCILIATIONS – Bank Statement Reconciliation – Enhancement related to MOAC and Bank Account Model features
CE_AVAIL_STATEMENTS_V – Bank Statement Reconciliation – Enhancement related to MOAC and Bank Account Model features
CE_AVAILAVLE_BATCHES_V – Bank Statement Reconciliation – Enhancement related to MOAC and Bank Account Model features
CE_AVAILAVLE_TRANSACTIONS_V – Bank Statement Reconciliation – Enhancement related to MOAC and Bank Account Model features
CE_BANK_TRX_CODES_V – Bank Statement Reconciliation – Enhancement related to MOAC and Bank Account Model features
CE_INTERNAL_BANK_ACCOUNTS_V – Bank Statement Reconciliation – Enhancement related to MOAC and Bank Account Model features
CE_MISC_TAX_CODE_V – Bank Statement Reconciliation – Enhancement related to MOAC and Bank Account Model features
CE_MISC_TRANSACTIONS_V – Bank Statement Reconciliation – Enhancement related to MOAC and Bank Account Model features
CE_RECEIVABLE_ACTIVITIES_V – Bank Statement Reconciliation – Enhancement related to MOAC and Bank Account Model features
CE_RECONCILED_TRANSACTIONS_V – Bank Statement Reconciliation – Enhancement related to MOAC and Bank Account Model features
CE_REVERSAL_TRANSACTIONS_V – Bank Statement Reconciliation – Enhancement related to MOAC and Bank Account Model features
CE_STAT_HDRS_INF_V – Bank Statement Reconciliation – Enhancement related to MOAC, Balances and Interests Calculation and Bank Account Model features
CE_STATEMENT_HEADERS_V – Bank Statement Reconciliation – Enhancement related to MOAC, Balances and Interests Calculation and Bank Account Model features
CE_STATEMENT_LINES_V – Bank Statement Reconciliation – Enhancement related to MOAC and Bank Account Model features
CE_STATEMENT_RECONCILIATIONS – Bank Statement Reconciliation – Enhancement related to MOAC and Bank Account Model features
CE_SYSTEM_PARAMETERS – Bank Statement Reconciliation – Enhancement related to MOAC and Bank Account Model features
CE_SYSTEM_PARAMETERS_V – Bank Statement Reconciliation – Enhancement related to MOAC and Bank Account Model features
CE_TRANSACTION_CODES_V – Bank Statement Reconciliation – Enhancement related to MOAC and Bank Account Model features
CEBV_CASH_FORECAST_CELLS – BIS Views – Enhancement related to Bank Account Model features
CEBV_ECT – BIS Views – Enhancement related to Bank Account Model features
CEFV_BANK_STATEMENTS – BIS Views – Enhancement related to Bank Account Model features
CEFV_CASH_FORECAST_CELLS – BIS Views – Enhancement related to Bank Account Model features
CEFV_ECT – BIS Views – Enhancement related to Bank Account Model features
CE_TRANSACTION_CODES_V – Bank Statement Reconciliation – Enhancement related to Bank Statement Reconciliation features
CE_CP_XTR_BANK_ACCOUNTS_V – Cash Positioning – Enhancement related to Bank Account Model features
CE_XTR_CASHFLOWS_V – Cash Positioning – Enhancement related to Bank Account Model features
CE_AP_FC_PAYMENTS_V – Cash Forecasting – Enhancement related to Bank Account Model features
CE_AR_FC_RECEIPTS_V – Cash Forecasting – Enhancement related to Bank Account Model features
CE_AR_FC_INVOICES_V – Cash Forecasting – Invalid object fix
CE_SO_FC_ORDERS_V – Cash Forecasting – Invalid object fix
CE_SO_FC_ORDERS_NO_TERMS_V – Cash Forecasting – Invalid object fix
CE_SO_FC_ORDERS_TERMS_V – Cash Forecasting – Invalid object fix
CE_FORECAST_ROWS_V – Cash Forecasting – Invalid object fix
CE_CP_BANK_ACCOUNTS_V – Cash Positioning – Enhancement related to Bank Account Model features
CE_CP_WS_BA_V – Cash Positioning – Enhancement related to Bank Account Model features
CE_CP_DISC_OPEN_V – Cash Positioning – Enhancement related to Bank Account Model features
CE_CP_WS_LE_V – Cash Positioning – Enhancement related to Bank Account Model features
CE_CP_XTO_V – Cash Positioning – Enhancement related to Bank Account Model features
CE_CP_SUB_OPEN_BAL_V – Cash Positioning – Enhancement related to Bank Account Model features
CE_CP_WS_BA_DISC_V – Cash Positioning – Enhancement related to Bank Account Model features
CE_P_BA_SIGNATORY_HIST_V – BIS View – Enhancement related to Bank Account Model features
CE_FC_ARI_DISC_V – Cash Forecasting – Enhancement related to Subledger Accounting features

Obsoleted Views

View Name – Feature Area – Replaced By

CE_ARCH_HEADERS – Bank Statement – Table with the same name
CE_ARCH_INTERFACE_HEADERS – Bank Statement – Table with the same name
CE_ARCH_INTRA_HEADERS – Bank Statement – Table with the same name
CE_INTRA_STMT_HEADERS – Bank Statement – Table with the same name
CE_STATEMENT_HEADERS – Bank Statement – Table with the same name
CE_STATEMENT_HEADERS_INTERFACE – Bank Statement – Table with the same name

Que: What are the major differences between Oracle 11i and R12?

Ans:

o 11i is a forms-based application, whereas R12 delivers both forms and HTML (OA Framework) pages.

o 11i works at a responsibility and operating unit level, whereas R12 is multi-operating-unit based (MOAC).

o 11i uses Sets of Books, whereas R12 uses Ledgers.

o In 11i, MRC reporting used reporting sets of books; in R12 reporting ledgers are called reporting currencies. Banks are used at a single operating unit level in 11i and at the ledger level in R12.

Differences between R12 & 11.5.10 – Upgrade to R12: Pros and Cons

Pros:

o Sub-ledger Accounting – The new Oracle Sub-ledger Accounting (SLA) architecture allows users to customize the standard Oracle accounting entries. As a result of Cost Management's uptake of this new architecture, users can customize their accounting for Receiving, Inventory and Manufacturing transactions.

o Enhanced Reporting Currency (MRC) Functionality – Multiple Reporting Currencies functionality is enhanced to support all journal sources. Reporting sets of books in R12 are now simply reporting currencies. Every journal that is posted in the primary currency of a ledger can be automatically converted into one or more reporting currencies.

o Deferred COGS and Revenue Matching – R12 provides the ability to automatically maintain the same recognition rules for COGS and revenue for each sales order line for each period. Deferred COGS and Deferred Revenue are thus kept in sync.

Cons:

o Resources – Availability of knowledgeable resources.

o Maturity – Although R12 has been around since 2007, not all modules are mature yet; modules such as E-Business Tax still have many bugs being fixed by Oracle development.

o Integration with customized applications – In a customized environment, all extensions and interfaces need to be analyzed because of the architectural changes in R12.

R12 Features and differences with 11.5.10 – Inventory / Cost Management

o Multi-Organization Access Control (MOAC) – Multi-Org Access Control enables users to access data of multiple operating units from a single responsibility. Users can access reports, concurrent programs and all setup screens of multiple operating units from a single responsibility without switching responsibilities (see the sketch after this list).

o Unified Inventory – R12 merges Oracle Process Manufacturing (OPM) Inventory and Oracle Inventory into a single application, so OPM users can leverage functionality such as consigned & VMI and center-led procurement that was available only to discrete inventory in 11.5.10.

o Inventory Valuation Reports – A significant number of reports have been enhanced in the area of Inventory Value Reporting.

o Inventory Genealogy – Enhanced genealogy tracking with simplified, complete access at the component level to critical lot and serial information for material throughout production.

o Fixed Component Usage Materials Enhancement – Enhanced BOM setup screen and WIP Material Requirement screen that support materials having a fixed usage irrespective of the job size for WIP jobs, OSFM lot-based jobs, or Flow Manufacturing.

o Component Yield Enhancements – New functionality that provides flexibility to control the value of component yield factors at WIP job level. This feature allows the user to include or exclude the yield factor while calculating backflush transactions.

o Periodic Average Cost Absorption Enhancements – Enhanced functionality for WIP Final Completion, WIP Scrap Absorption, PAC WIP Value Report, Material Overhead Absorption Rule, EAM work order, and PAC EAM Work Order Cost Estimate Processor Report.

o Component Yield Benefits – With the component yield functionality, users have the flexibility to control the value of component yield factors and use those factors for backflush transactions. If the yield factor is not used, yield losses can still be accounted for using the manual component issue transaction.
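
A minimal sketch of how the MOAC context can be set from PL/SQL, assuming the standard FND_GLOBAL and MO_GLOBAL APIs; the user, responsibility and organization IDs shown are placeholders only:

begin
   -- Establish the applications context (placeholder IDs)
   fnd_global.apps_initialize (user_id => 1318, resp_id => 51369, resp_appl_id => 660);
   -- 'S' = single operating unit access; 'M' would expose every OU in the security profile
   mo_global.set_policy_context (p_access_mode => 'S', p_org_id => 204);
end;
/

Multi-org views (those built on the _ALL tables) queried after this call return data only for the operating unit(s) allowed by the context.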

New Features:

R12 Features and differences with 11.5.10 – Advanced Procurement Suite

o Professional Buyer's Work Center – To speed up buyers' daily purchasing tasks – view & act upon requisition demand, create & manage orders and agreements, run negotiation events, manage supplier information.

o Freight and Miscellaneous Charges – New page for viewing acquisition cost to track freight & miscellaneous delivery cost components while receiving. Actual delivery costs are tracked during invoice matching.

o Complex Contract Payments – Support for payments for services-related procurement, including progress payments, recoupment of advances, and retainage.

o Unified Inventory – Support for the converged inventory between Oracle Process Manufacturing (OPM) Inventory & Oracle Inventory.

o Document Publishing Enhancements – Support for RTF & PDF layouts and publishing contracts using user-specified layouts.

o Support for Contractor Purchasing Users – Support for contingent workers to create & maintain requisitions, conduct negotiations, and create purchase orders.

New Features:

R12 Features and differences with 11.5.10 – Order Management

o Multi-Organization Access Control (MOAC) – Multi-Org Access Control enables users to access data of multiple operating units from a single responsibility. Users can access reports, concurrent programs and all setup screens of multiple operating units from a single responsibility without switching responsibilities. They can also use Order Import to bring in orders for different operating units from within a single responsibility. The same applies to the Oracle Order Management public Application Program Interfaces (APIs).

o Post Booking Item Substitution – Item Substitution support has been extended post-Booking through scheduling/re-scheduling in the Sales Order, Quick Sales Order, and Scheduling Order Organizer forms. Item Substitution is also supported from the Planner's Workbench (loop-back functionality) until the line is pick-released.

o Item Orderability – Businesses need the ability to define which customers are allowed to order which products, and the ability to apply that business logic when the order is created.

o Mass Scheduling Enhancements – Mass Scheduling can now schedule lines that have never been scheduled or that have failed manual scheduling. Mass Scheduling also supports unscheduling and rescheduling.

o Exception Management Enhancements – Improved visibility to workflow errors and an easier process for retrying workflows that have experienced processing errors.

o Sales Order Reservation for Lot-Based Jobs – Lot-based jobs can be used as a source of supply to reserve against sales order(s). OSFM displays sales order information on reserved jobs.

o Cascading Attributes – Cascading means that if the order header attributes change, the corresponding line attributes change.

o Customer Credit Check Hold Source Support across Operating Units – Order Management honors credit holds placed on customers from AR across operating units. When Receivables places a customer on credit hold, a hold source will be created in all operating units which have:

§ A site defined for that customer

§ An order placed against that customer.

New Features:

R12 Features and differences with 11.5.10 – Shipping

o Pick Release/Confirm Features

§ Pick Release Enhancements – Enhancements are made to the Release Sales Order form and the Release Rules form to support planned crossdocking and task priority for Oracle Warehouse Management (WMS) organizations. Pick release allows a user to specify location methods and, if crossdocking is required, a cross-dock rule. The task priority can be set for each task in a sales order picking wave when that wave is pick released. The priority indicated at pick release is defaulted to every Oracle WMS task created.

§ Parallel Pick Release Submission – This new feature allows users to run multiple pick release processes in parallel to improve overall performance. By distributing the workload across multiple processors, users can reduce the overall time required for a single pick release run.

o Workflow Shipping Transaction Enhancement – Oracle has enabled Workflow in the shipping process for: workflow, process workflow, activity and notification workflow, and business events.

o Support for Miscellaneous Shipping Transactions – Oracle Shipping Execution users can now create a delivery for a shipment that is not tied to a sales order via XML (the XML equivalent of EDI 940 IN). Once this delivery has been created, users can print shipping documents, plan, rate, tender, audit and record the issuance out of inventory. Additionally, an XML Shipment Advice (the XML equivalent of EDI 945 OUT) is supported to record the outbound transactions.

o Flexible Documents – With this new feature, Shipping Execution users can create template-based, easy-to-use formats to quickly produce and easily maintain shipping documents unique to their business. Additional attributes have been added to the XML templates for each report for added flexibility.

o Enhanced LPN Support – Oracle Shipping Execution users now have improved visibility to the Oracle WMS packing hierarchy at Pick Confirmation. The packing hierarchy, including the License Plate Number (LPN), is visible in the Shipping Transactions form as well as in the Quick Ship user interface.

New Features:

R12 Features and differences with 11.5.10 – Warehouse Management

o Crossdock Execution – WMS allows you to determine the final staging lane, merge with an existing delivery or create a new delivery, synchronize the inbound operation plan with the outbound consolidation plan, enhance outbound consolidation plans and manage crossdock tasks.

o Labor Management – WMS provides labor analysis, giving the warehouse manager increased visibility to resource requirements. Detailed information on the productivity of individual employees and warehouses is provided.

o Warehouse Control Board Additions – Additional task selection criteria.

o User Extensible Label Fields – In WMS, users are now able to add their own variables without customizing the application, by simply defining in SQL the way to get to that data element.

o Material Consolidation across Deliveries – WMS allows you to consolidate material across deliveries in a staging lane.

New Features:

R12 Features and differences with 11.5.10 – OSFM

o Lot and Serial Controlled Assembly – A lot-controlled job can now be associated with serial numbers to track and trace the serialized lot/item during shop floor transactions as well as post-manufacturing and beyond.

o Fixed Component Usage Support for Lot Based Jobs – OSFM now supports fixed component usage defined in the Bill of Material of an end item.

o Support for Partial Move Transactions – Users are able to move a partial job quantity from one operation to another.

o Enhanced BOM to Capture Inverse Usage – Users can now capture the inverse component usage through the new inverse usage field in the BOM UI.

o Support for RosettaNet Transactions – Comprising 7B1 (work in process) and 7B5 (manufacturing work order).

New Features:

R12 – Further Info…

o The latest RCDs (Release Content Documents) can be accessed from Metalink note 404152.1 (requires user name & password).

o The TOI (Transfer Of Information) sessions released by Oracle Learning can be accessed from its portal at http://www.oracle.com/education/oukc/ebs.html

o Oracle white papers – Extending the Value of Your Oracle E-Business Suite 11i.10 Investment & Application Upgrades and Service Oriented Architecture.

What is Explain Plan in Oracle, & how do we use it?



An Explain Plan is a tool that you can use to have Oracle explain how it plans to execute your query.

This is useful in tuning queries to the database to get them to perform better. Once you know how Oracle plans on
executing your query, you can change your environment to run the query faster.

Before you can use the EXPLAIN PLAN command, you need to have a PLAN_TABLE installed. This can be done by
simply running the $ORACLE_HOME/rdbms/admin/utlxplan.sql script in your schema. It creates the table for you.
After you have the PLAN_TABLE created, you issue an EXPLAIN PLAN for the query you are interested in tuning. The
command is of the form:

EXPLAIN PLAN SET STATEMENT_ID='somevalue' FOR some SQL statement;


You need to use a statement_id and then give your SQL statement. For instance, suppose I have a query to tune.
How does it get executed? I issue the following in SQL*Plus:

SQL> explain plan set statement_id = 'q1' for
  2  select object_name from test where object_name like 'T%';

Explained.

I used 'q1' for my statement id (short for query 1). But you can use anything you want. My SQL statement is the
second line. Now I query the PLAN_TABLE to see how this statement is executed. This is done with the following
query:

SQL> SELECT LPAD(' ',2*(level-1)) || operation || ' ' || options || ' ' ||
  2         object_name || ' ' || DECODE(id,0,'Cost = ' || position) AS "Query Plan", other
  3    FROM plan_table
  4   START WITH id = 0
  5     AND statement_id = 'q1'
  6  CONNECT BY PRIOR id = parent_id
  7     AND statement_id = 'q1';

Query Plan                                         OTHER
-------------------------------------------------- --------------------------------------------------
SELECT STATEMENT   Cost =
  TABLE ACCESS FULL TEST

This tells me that my SQL statement will perform a FULL table scan on the TEST table (TABLE ACCESS FULL TEST).
Now let's add an index on that table!

SQL> create index test_name_idx on test(object_name);

Index created.

SQL> truncate table plan_table;

Table truncated.

SQL> explain plan set statement_id = 'q1' for
  2  select object_name from test where object_name like 'T%';

Explained.

SQL> SELECT LPAD(' ',2*(level-1)) || operation || ' ' || options || ' ' ||
  2         object_name || ' ' || DECODE(id,0,'Cost = ' || position) AS "Query Plan", other
  3    FROM plan_table
  4   START WITH id = 0
  5     AND statement_id = 'q1'
  6  CONNECT BY PRIOR id = parent_id
  7     AND statement_id = 'q1';

Query Plan                                         OTHER
-------------------------------------------------- --------------------------------------------------
SELECT STATEMENT   Cost =
  INDEX RANGE SCAN TEST_NAME_IDX

I added an index to the table. Before I issue another EXPLAIN PLAN, I truncate the contents of my PLAN_TABLE to prepare for the new plan. Then I query the PLAN_TABLE. Notice that this time the plan uses the index (TEST_NAME_IDX) that I created! Hopefully, this query will run faster now that it has an index to use, but this may not always be the case.

Introduction

In this paper we’ll discuss an overview of the EXPLAIN PLAN and TKPROF functions built into the Oracle 8i server and
learn how developers and DBAs use these tools to get the best performance out of their applications. We’ll look at
how to invoke these tools both from the command line and from graphical development tools. In the remainder of
the paper we’ll discuss how to read and interpret Oracle 8i execution plans and TKPROF reports. We’ll look at lots of
examples so that you’ll come away with as much practical knowledge as possible.

An Overview of EXPLAIN PLAN and TKPROF

In this section we’ll take a high-level look at the EXPLAIN PLAN and TKPROF facilities: what they are, prerequisites
for using them, and how to invoke them. We will also look at how these facilities help you tune your applications.

Execution Plans and the EXPLAIN PLAN Statement

Before the database server can execute a SQL statement, Oracle must first parse the statement and develop an
execution plan. The execution plan is a task list of sorts that decomposes a potentially complex SQL operation into a
series of basic data access operations. For example, a query against the dept table might have an execution plan that
consists of an index lookup on the deptno index, followed by a table access by ROWID.

The EXPLAIN PLAN statement allows you to submit a SQL statement to Oracle and have the database prepare the
execution plan for the statement without actually executing it. The execution plan is made available to you in the
form of rows inserted into a special table called a plan table. You may query the rows in the plan table using ordinary
SELECT statements in order to see the steps of the execution plan for the statement you explained. You may keep
multiple execution plans in the plan table by assigning each a unique statement_id. Or you may choose to delete the
rows from the plan table after you are finished looking at the execution plan. You can also roll back an EXPLAIN PLAN
statement in order to remove the execution plan from the plan table.

The EXPLAIN PLAN statement runs very quickly, even if the statement being explained is a query that might run for
hours. This is because the statement is simply parsed and its execution plan saved into the plan table. The actual
statement is never executed by EXPLAIN PLAN. Along these same lines, if the statement being explained includes
bind variables, the variables never need to actually be bound. The values that would be bound are not relevant since
the statement is not actually executed.

You don’t need any special system privileges in order to use the EXPLAIN PLAN statement. However, you do need to
have INSERT privileges on the plan table, and you must have sufficient privileges to execute the statement you are
trying to explain. The one difference is that in order to explain a statement that involves views, you must have
privileges on all of the tables that make up the view. If you don’t, you’ll get an “ORA-01039: insufficient privileges on
underlying objects of the view” error.

The columns that make up the plan table are as follows:

Name Null? Type

-------------------- -------- -------------

STATEMENT_ID VARCHAR2(30)

TIMESTAMP DATE

REMARKS VARCHAR2(80)

OPERATION VARCHAR2(30)

OPTIONS VARCHAR2(30)

OBJECT_NODE VARCHAR2(128)

OBJECT_OWNER VARCHAR2(30)
OBJECT_NAME VARCHAR2(30)

OBJECT_INSTANCE NUMBER(38)

OBJECT_TYPE VARCHAR2(30)

OPTIMIZER VARCHAR2(255)

SEARCH_COLUMNS NUMBER

ID NUMBER(38)

PARENT_ID NUMBER(38)

POSITION NUMBER(38)

COST NUMBER(38)

CARDINALITY NUMBER(38)

BYTES NUMBER(38)

OTHER_TAG VARCHAR2(255)

PARTITION_START VARCHAR2(255)

PARTITION_STOP VARCHAR2(255)

PARTITION_ID NUMBER(38)

OTHER LONG

DISTRIBUTION VARCHAR2(30)

There are other ways to view execution plans besides issuing the EXPLAIN PLAN statement and querying the plan
table. SQL*Plus can automatically display an execution plan after each statement is executed. Also, there are many
GUI tools available that allow you to click on a SQL statement in the shared pool and view its execution plan. In
addition, TKPROF can optionally include execution plans in its reports as well.

Trace Files and the TKPROF Utility

TKPROF is a utility that you invoke at the operating system level in order to analyze SQL trace files and generate
reports that present the trace information in a readable form. Although the details of how you invoke TKPROF vary
from one platform to the next, Oracle Corporation provides TKPROF with all releases of the database and the basic
functionality is the same on all platforms.

The term trace file may be a bit confusing. More recent releases of the database offer a product called Oracle Trace
Collection Services. Also, Net8 is capable of generating trace files. SQL trace files are entirely different. SQL trace is a
facility that you enable or disable for individual database sessions or for the entire instance as a whole. When SQL
trace is enabled for a database session, the Oracle server process handling that session writes detailed information
about all database calls and operations to a trace file. Special database events may be set in order to cause Oracle to
write even more specific information—such as the values of bind variables—into the trace file.

SQL trace files are text files that, strictly speaking, are human readable. However, they are extremely verbose,
repetitive, and cryptic. For example, if an application opens a cursor and fetches 1000 rows from the cursor one row
at a time, there will be over 1000 separate entries in the trace file.
TKPROF is a program that you invoke at the operating system command prompt in order to reformat the trace file
into a format that is much easier to comprehend. Each SQL statement is displayed in the report, along with counts of
how many times it was parsed, executed, and fetched. CPU time, elapsed time, logical reads, physical reads, and
rows processed are also reported, along with information about recursion level and misses in the library cache.
TKPROF can also optionally include the execution plan for each SQL statement in the report, along with counts of
how many rows were processed at each step of the execution plan.

The SQL statements can be listed in a TKPROF report in the order of how much resource they used, if desired. Also,
recursive SQL statements issued by the SYS user to manage the data dictionary can be included or excluded, and
TKPROF can write SQL statements from the traced session into a spool file.

How EXPLAIN PLAN and TKPROF Aid in the Application Tuning Process

EXPLAIN PLAN and TKPROF are valuable tools in the tuning process. Tuning at the application level typically yields
the most dramatic results, and these two tools can help with the tuning in many different ways.

EXPLAIN PLAN and TKPROF allow you to proactively tune an application while it is in development. It is relatively
easy to enable SQL trace, run an application in a test environment, run TKPROF on the trace file, and review the
output to determine if application or schema changes are called for. EXPLAIN PLAN is handy for evaluating individual
SQL statements.

By reviewing execution plans, you can also validate the scalability of an application. If the database operations are
dependent upon full table scans of tables that could grow quite large, then there may be scalability problems ahead.
On the other hand, if large tables are accessed via selective indexes, then scalability may not be a problem.

EXPLAIN PLAN and TKPROF may also be used in an existing production environment in order to zero in on resource
intensive operations and get insights into how the code may be optimized. TKPROF can further be used to quantify
the resources required by specific database operations or application functions.

EXPLAIN PLAN is also handy for estimating resource requirements in advance. Suppose you have an ad hoc reporting
request against a very large database. Running queries through EXPLAIN PLAN will let you determine in advance if
the queries are feasible or if they will be resource intensive and will take unacceptably long to run.

Generating Execution Plans and TKPROF Reports

In this section we will discuss the details of how to generate execution plans (both with the EXPLAIN PLAN
statement and other methods) and how to generate SQL trace files and create TKPROF reports.

Using the EXPLAIN PLAN Statement

Before you can use the EXPLAIN PLAN statement, you must have INSERT privileges on a plan table. The plan table
can have any name you like, but the names and data types of the columns are not flexible. You will find a script
called utlxplan.sql in $ORACLE_HOME/rdbms/admin that creates a plan table with the name plan_table in the local
schema. If you use this script to create your plan table, you can be assured that the table will have the right
definition for use with EXPLAIN PLAN.

Once you have access to a plan table, you are ready to run the EXPLAIN PLAN statement. The syntax is as follows:

EXPLAIN PLAN [SET STATEMENT_ID = <string in single quotes>]

[INTO <plan table name>]

FOR <SQL statement>;


If you do not specify the INTO clause, then Oracle assumes the name of the plan table is plan_table. You can use the
SET clause to assign a name to the execution plan. This is useful if you want to be able to have multiple execution
plans stored in the plan table at once—giving each execution plan a distinct name enables you to determine which
rows in the plan table belong to which execution plan.

The EXPLAIN PLAN statement runs quickly because all Oracle has to do is parse the SQL statement being explained
and store the execution plan in the plan table. The SQL statement can include bind variables, although the variables
will not get bound and the values of the bind variables will be irrelevant.

If you issue the EXPLAIN PLAN statement from SQL*Plus, you will get back the feedback message “Explained.” At this
point the execution plan for the explained SQL statement has been inserted into the plan table, and you can now
query the plan table to examine the execution plan.

Execution plans are a hierarchical arrangement of simple data access operations. Because of the hierarchy, you need
to use a CONNECT BY clause in your query from the plan table. Using the LPAD function, you can cause the output to
be formatted in such a way that the indenting helps you traverse the hierarchy. There are many different ways to
format the data retrieved from the plan table. No one query is the best, because the plan table holds a lot of
detailed information. Different DBAs will find different aspects more useful in different situations.

A simple SQL*Plus script to retrieve an execution plan from the plan table is as follows:

REM

REM explain.sql

REM

SET VERIFY OFF

SET PAGESIZE 100

ACCEPT stmt_id CHAR PROMPT "Enter statement_id: "

COL id FORMAT 999

COL parent_id FORMAT 999 HEADING "PARENT"

COL operation FORMAT a35 TRUNCATE

COL object_name FORMAT a30

SELECT id, parent_id, LPAD (' ', LEVEL - 1) || operation || ' ' ||

options operation, object_name

FROM plan_table

WHERE statement_id = '&stmt_id'

START WITH id = 0

AND statement_id = '&stmt_id'

CONNECT BY PRIOR

id = parent_id

AND statement_id = '&stmt_id';


I have a simple query that we will use in a few examples. We’ll call this “the invoice item query.” The query is as
follows:

SELECT a.customer_name, a.customer_number, b.invoice_number,

b.invoice_type, b.invoice_date, b.total_amount, c.line_number,

c.part_number, c.quantity, c.unit_cost

FROM customers a, invoices b, invoice_items c

WHERE c.invoice_id = :b1

AND c.line_number = :b2

AND b.invoice_id = c.invoice_id

AND a.customer_id = b.customer_id;

The explain.sql SQL*Plus script above displays the execution plan for the invoice item query as follows:

ID PARENT OPERATION OBJECT_NAME

---- ------ ----------------------------------- ------------------------------

0 SELECT STATEMENT

1 0 NESTED LOOPS

2 1 NESTED LOOPS

3 2 TABLE ACCESS BY INDEX ROWID INVOICE_ITEMS

4 3 INDEX UNIQUE SCAN INVOICE_ITEMS_PK

5 2 TABLE ACCESS BY INDEX ROWID INVOICES

6 5 INDEX UNIQUE SCAN INVOICES_PK

7 1 TABLE ACCESS BY INDEX ROWID CUSTOMERS

8 7 INDEX UNIQUE SCAN CUSTOMERS_PK

The execution plan shows that Oracle is using nested loops joins to join three tables, and that accesses from all three
tables are by unique index lookup. This is probably a very efficient query. We will look at how to read execution plans
in greater detail in a later section.

The explain.sql script for displaying an execution plan is very basic in that it does not display a lot of the information
contained in the plan table. Things left off of the display include optimizer estimated cost, cardinality, partition
information (only relevant when accessing partitioned tables), and parallelism information (only relevant when
executing parallel queries or parallel DML).

If you are using Oracle 8.1.5 or later, you can find two plan query scripts in $ORACLE_HOME/rdbms/admin.
utlxpls.sql is intended for displaying execution plans of statements that do not involve parallel processing, while
utlxplp.sql shows additional information pertaining to parallel processing. The output of the latter script is more
confusing, so only use it when parallel query or DML come into play. The output from utlxpls.sql for the invoice item
query is as follows:
Plan Table

--------------------------------------------------------------------------------

| Operation | Name | Rows | Bytes| Cost | Pstart| Pstop |

--------------------------------------------------------------------------------

| SELECT STATEMENT | | 1 | 39 | 4| | |

| NESTED LOOPS | | 1 | 39 | 4| | |

| NESTED LOOPS | | 1 | 27 | 3| | |

| TABLE ACCESS BY INDEX R|INVOICE_I | 1 | 15 | 2| | |

| INDEX UNIQUE SCAN |INVOICE_I | 2| | 1| | |

| TABLE ACCESS BY INDEX R|INVOICES | 2 | 24 | 1| | |

| INDEX UNIQUE SCAN |INVOICES_ | 2| | | | |

| TABLE ACCESS BY INDEX RO|CUSTOMERS | 100 | 1K| 1| | |

| INDEX UNIQUE SCAN |CUSTOMERS | 100 | | | | |

--------------------------------------------------------------------------------

When you no longer need an execution plan, you should delete it from the plan table. You can do this by rolling back
the EXPLAIN PLAN statement (if you have not committed yet) or by deleting rows from the plan table. If you have
multiple execution plans in the plan table, then you should delete selectively by statement_id. Note that if you
explain two SQL statements and assign both the same statement_id, you will get an ugly cartesian product when you
query the plan table!
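
For example, to remove a single plan by its statement_id (a simple sketch using the 'q1' plan from earlier):

DELETE FROM plan_table WHERE statement_id = 'q1';
COMMIT;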

The Autotrace Feature of SQL*Plus

SQL*Plus has an autotrace feature which allows you to automatically display execution plans and helpful statistics
for each statement executed in a SQL*Plus session without having to use the EXPLAIN PLAN statement or query the
plan table. You turn this feature on and off with the following SQL*Plus command:

SET AUTOTRACE OFF|ON|TRACEONLY [EXPLAIN] [STATISTICS]

When you turn on autotrace in SQL*Plus, the default behavior is for SQL*Plus to execute each statement and display
the results in the normal fashion, followed by an execution plan listing and a listing of various server-side resources
used to execute the statement. By using the TRACEONLY keyword, you can have SQL*Plus suppress the query
results. By using the EXPLAIN or STATISTICS keywords, you can have SQL*Plus display just the execution plan without
the resource statistics or just the statistics without the execution plan.
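
For example, to suppress the query results and display only the execution plan (reusing the test table from the earlier example):

SET AUTOTRACE TRACEONLY EXPLAIN
SELECT object_name FROM test WHERE object_name LIKE 'T%';
SET AUTOTRACE OFF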

In order to have SQL*Plus display execution plans, you must have privileges on a plan table by the name of
plan_table. In order to have SQL*Plus display the resource statistics, you must have SELECT privileges on v$sesstat,
v$statname, and v$session. There is a script in $ORACLE_HOME/sqlplus/admin called plustrce.sql which creates a
role with these three privileges in it, but this script is not run automatically by the Oracle installer.
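
A sketch of that one-time setup, run as a DBA (the grantee shown is just an example):

CONNECT / AS SYSDBA
@?/sqlplus/admin/plustrce.sql
GRANT plustrace TO rschrag;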

The autotrace feature of SQL*Plus makes it extremely easy to generate and view execution plans, with resource
statistics as an added bonus. One key drawback, however, is that the statement being explained must actually be
executed by the database server before SQL*Plus will display the execution plan. This makes the tool unusable in the
situation where you would like to predict how long an operation might take to complete.

A sample output from SQL*Plus for the invoice item query is as follows:

Execution Plan

----------------------------------------------------------

0 SELECT STATEMENT Optimizer=CHOOSE (Cost=4 Card=1 Bytes=39)

1 0 NESTED LOOPS (Cost=4 Card=1 Bytes=39)

2 1 NESTED LOOPS (Cost=3 Card=1 Bytes=27)

3 2 TABLE ACCESS (BY INDEX ROWID) OF 'INVOICE_ITEMS' (Cost

=2 Card=1 Bytes=15)

4 3 INDEX (UNIQUE SCAN) OF 'INVOICE_ITEMS_PK' (UNIQUE) (

Cost=1 Card=2)

5 2 TABLE ACCESS (BY INDEX ROWID) OF 'INVOICES' (Cost=1 Ca

rd=2 Bytes=24)

6 5 INDEX (UNIQUE SCAN) OF 'INVOICES_PK' (UNIQUE)

7 1 TABLE ACCESS (BY INDEX ROWID) OF 'CUSTOMERS' (Cost=1 Car

d=100 Bytes=1200)

8 7 INDEX (UNIQUE SCAN) OF 'CUSTOMERS_PK' (UNIQUE)

Statistics

----------------------------------------------------------

0 recursive calls

0 db block gets

8 consistent gets

0 physical reads

0 redo size

517 bytes sent via SQL*Net to client

424 bytes received via SQL*Net from client

2 SQL*Net roundtrips to/from client

0 sorts (memory)

0 sorts (disk)

1 rows processed
Although we haven’t discussed how to read an execution plan yet, you can see that the output from SQL*Plus
provides the same basic information, with several additional details in the form of estimates from the query
optimizer.

Using GUI Tools to View Execution Plans

There are many GUI tools available that allow you to view execution plans for SQL statements you specify or for
statements already sitting in the shared pool of the database instance. Any comprehensive database management
tool will offer this capability, but there are several free tools available for download on the internet that have this
feature as well.

One tool in particular that I really like is TOAD (the Tool for Oracle Application Developers). Although TOAD was
originally developed as a free tool, Quest Software now owns TOAD and it is available in both a free version (limited
functionality) and an enhanced version that may be purchased (full feature set). You may download TOAD from
Quest Software at http://www.toadsoft.com/downld.html. TOAD has lots of handy features. The one relevant to us
here is the ability to click on any SQL statement in the shared pool and instantly view its execution plan.

As with the EXPLAIN PLAN statement and the autotrace facility in SQL*Plus, you will need to have access to a plan
table. Here is TOAD’s rendition of the execution plan for the invoice item query we’ve been using:

You can see that the information displayed is almost identical to that from the autotrace facility in SQL*Plus. One
nice feature of TOAD’s execution plan viewer is that you can collapse and expand the individual operations that
make up the execution plan. Also, the vertical and horizontal lines connecting different steps help you keep track of
the nesting and which child operations go with which parent operations in the hierarchy. The benefits of these
features become more apparent when working with extremely complicated execution plans.

Unfortunately, when looking at execution plans for SQL statements that involve database links or parallelism, TOAD
leaves out critical information that is present in the plan table and is reported by the autotrace feature of SQL*Plus.
Perhaps this deficiency only exists in the free version of TOAD; I would like to think that if you pay for the full version
of TOAD, you’ll get complete execution plans.

Generating a SQL Trace File

SQL trace may be enabled at the instance or session level. To enable SQL trace at the instance level, add the
following parameter setting to the instance parameter file and restart the database instance:

sql_trace = true

When an Oracle instance starts up with the above parameter setting, every database session will run in SQL trace
mode, meaning that all SQL operations for every database session will be written to trace files. Even the daemon
processes like PMON and SMON will be traced! In practice, enabling SQL trace at the instance level is usually not
very useful. It can be overpowering, sort of like using a fire hose to pour yourself a glass of water.

It is more typical to enable SQL trace in a specific session. You can turn SQL trace on and off as desired in order to
trace just the operations that you wish to trace. If you have access to the database session you wish to trace, then
use the ALTER SESSION statement as follows to enable and disable SQL trace:

ALTER SESSION SET sql_trace = TRUE|FALSE;


This technique works well if you have access to the application source code and can add in ALTER SESSION
statements at will. It also works well when the application runs from SQL*Plus and you can execute ALTER SESSION
statements at the SQL*Plus prompt before invoking the application.

In situations where you cannot invoke an ALTER SESSION command from the session you wish to trace—as with
prepackaged applications, for example—you can connect to the database as a DBA user and invoke the
dbms_system built-in package in order to turn on or off SQL trace in another session. You do this by querying
v$session to find the SID and serial number of the session you wish to trace and then invoking the dbms_system
package with a command of the form:

EXECUTE SYS.dbms_system.set_sql_trace_in_session (<SID>, <serial#>, TRUE|FALSE);
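
For example, you might locate the target session and then switch tracing on and off around the operations of interest (the SID and serial number shown are placeholders):

SELECT sid, serial#, username, program
  FROM v$session
 WHERE username = 'RSCHRAG';

EXECUTE SYS.dbms_system.set_sql_trace_in_session (42, 1215, TRUE);
-- let the application perform the operations to be traced, then:
EXECUTE SYS.dbms_system.set_sql_trace_in_session (42, 1215, FALSE);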

When you enable SQL trace in a session for the first time, the Oracle server process handling that session will create
a trace file in the directory on the database server designated by the user_dump_dest initialization parameter. As
the server is called by the application to perform database operations, the server process will append to the trace
file.

Note that tracing a database session that is using multi-threaded server (MTS) is a bit complicated because each
database request from the application could get picked up by a different server process. In this situation, each server
process will create a trace file containing trace information about the operations performed by that process only.
This means that you will potentially have to combine multiple trace files together to get the full picture of how the
application interacted with the database. Furthermore, if multiple sessions are being traced at once, it will be hard to
tell which operations in the trace file belong to which session. For these reasons, you should use dedicated server
mode when tracing a database session with SQL trace.

SQL trace files contain detailed timing information. By default, Oracle does not track timing, so all timing figures in
trace files will show as zero. If you would like to see legitimate timing information, then you need to enable timed
statistics. You can do this at the instance level by setting the following parameter in the instance parameter file and
restarting the instance:

timed_statistics = true

You can also dynamically enable or disable timed statistics collection at either the instance or the session level with
the following commands:

ALTER SYSTEM SET timed_statistics = TRUE|FALSE;

ALTER SESSION SET timed_statistics = TRUE|FALSE;

There is no known way to enable timed statistics collection for an individual session from another session (akin to
the SYS.dbms_system.set_sql_trace_in_session built-in).

There is very high overhead associated with enabling SQL trace. Some DBAs believe the performance penalty could
be over 25%. Another concern is that enabling SQL trace causes the generation of potentially large trace files. For
these reasons, you should use SQL trace sparingly. Only trace what you need to trace and think very carefully before
enabling SQL trace at the instance level.

On the other hand, there is little, if any, measurable performance penalty in enabling timed statistics collection.
Many DBAs run production databases with timed statistics collection enabled at the system level so that various
system statistics (more than just SQL trace files) will include detailed timing information. Note that Oracle 8.1.5 had
some serious memory corruption bugs associated with enabling timed statistics collection at the instance level, but
these seem to have been fixed in Oracle 8.1.6.
On Unix platforms, Oracle will typically set permissions so that only the oracle user and members of the dba Unix
group can read the trace files. If you want anybody with a Unix login to be able to read the trace files, then you
should set the following undocumented (but supported) initialization parameter in the parameter file:

_trace_files_public = true

If you trace a database session that makes a large number of calls to the database server, the trace file can get quite
large. The initialization parameter max_dump_file_size allows you to set a maximum trace file size. On Unix
platforms, this parameter is specified in units of 512 byte blocks. Thus a setting of 10240 will limit trace files to 5 Mb
apiece. When a SQL trace file reaches the maximum size, the database server process stops writing trace information
to the trace file. On Unix platforms there will be no limit on trace file size if you do not explicitly set the
max_dump_file_size parameter.
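
For example, the following parameter file setting limits trace files to roughly 5 Mb on Unix; a session-level override is also possible (a sketch):

max_dump_file_size = 10240

ALTER SESSION SET max_dump_file_size = 20480;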

If you are tracing a session and realize that the trace file is about to reach the limit set by max_dump_file_size, you
can eliminate the limit dynamically so that you don’t lose trace information. To do this, query the PID column in
v$process to find the Oracle PID of the process writing the trace file. Then execute the following statements in
SQL*Plus:

CONNECT / AS SYSDBA

ORADEBUG SETORAPID <pid>

ORADEBUG UNLIMIT

Running TKPROF on a SQL Trace File

Before you can use TKPROF, you need to generate a trace file and locate it. Oracle writes trace files on the database
server to the directory specified by the user_dump_dest initialization parameter. (Daemon processes such as PMON
write their trace files to the directory specified by background_dump_dest.) On Unix platforms, the trace file will
have a name that incorporates the operating system PID of the server process writing the trace file.

If there are a lot of trace files in the user_dump_dest directory, it could be tricky to find the one you want. One
tactic is to examine the timestamps on the files. Another technique is to embed a comment in a SQL statement in
the application that will make its way into the trace file. An example of this is as follows:

ALTER SESSION /* Module glpost.c */ SET sql_trace = TRUE;
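
You can also query the dump destination and the operating system PID of your server process, which on Unix appears in the trace file name. A sketch, run from the session being traced:

SELECT value FROM v$parameter WHERE name = 'user_dump_dest';

SELECT p.spid
  FROM v$process p, v$session s
 WHERE p.addr = s.paddr
   AND s.audsid = USERENV('SESSIONID');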

Because TKPROF is a utility you invoke from the operating system and not from within a database session, there will
naturally be some variation in the user interface from one operating system platform to another. On Unix platforms,
you run TKPROF from the operating system prompt with a syntax as follows:

tkprof <trace file> <output file> [explain=<username/password>] [sys=n] \

[insert=<filename>] [record=<filename>] [sort=<keyword>]

If you invoke TKPROF with no arguments at all, you will get a help screen listing all of the options. This is especially
helpful because TKPROF offers many sort capabilities, but you select the desired sort by specifying a cryptic keyword.
The help screen identifies all of the sort keywords.

In its simplest form, you run TKPROF specifying the name of a SQL trace file and an output filename. TKPROF will
read the trace file and generate a report file with the output filename you specified. TKPROF will not connect to the
database, and the report will not include execution plans for the SQL statements. SQL statements that were
executed by the SYS user recursively (to dynamically allocate an extent in a dictionary-managed tablespace, for
example) will be included in the report, and the statements will appear in the report approximately in the order in
which they were executed in the database session that was traced.

If you include the explain keyword, TKPROF will connect to the database and execute an EXPLAIN PLAN statement
for each SQL statement found in the trace file. The execution plan results will be included in the report file. As we
will see later, TKPROF merges valuable information from the trace file into the execution plan display, making this
just about the most valuable way to display an execution plan. Note that the username you specify when running
TKPROF should be the same as the username connected in the database session that was traced. You do not need to
have a plan table in order to use the explain keyword—TKPROF will create and drop its own plan table if needed.

If you specify sys=n, TKPROF will exclude from the report SQL statements initiated by Oracle as the SYS user. This
will make your report look tidier because it will only contain statements actually issued by your application. The
theory is that Oracle internal SQL has already been fully optimized by the kernel developers at Oracle Corporation, so
you should not have to deal with it. However, using sys=n will exclude potentially valuable information from the
TKPROF report. Suppose the SGA is not properly sized on the instance and Oracle is spending a lot of time resolving
dictionary cache misses. This would manifest itself in lots of time spent on recursive SQL statements initiated by the
SYS user. Using sys=n would exclude this information from the report.

If you specify the insert keyword, TKPROF will generate a SQL script in addition to the regular report. This SQL script
creates a table called tkprof_table and inserts one row for each SQL statement displayed on the report. The row will
contain the text of the SQL statement traced and all of the statistics displayed in the report. You could use this
feature to effectively load the TKPROF report into the database and use SQL to analyze and manipulate the statistics.
I’ve never needed to use this feature, but I suppose it could be helpful in some situations.

If you specify the record keyword, TKPROF will generate another type of SQL script in addition to the regular report.
This SQL script will contain a copy of each SQL statement issued by the application while tracing was enabled. You
could get this same information from the TKPROF report itself, but this way could save some cutting and pasting.

The sort keyword is extremely useful. Typically, a TKPROF report may include hundreds of SQL statements, but you
may only be interested in a few resource intensive queries. The sort keyword allows you to order the listing of the
SQL statements so that you don’t have to scan the entire file looking for resource hogs. In some ways, the sort
feature is too powerful for its own good. For example, you cannot sort statements by CPU time consumed—instead
you sort by CPU time spent parsing, CPU time spent executing, or CPU time spent fetching.
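For example, assuming a trace file named example.trc (a hypothetical name), sorting on the parse, execute, and fetch CPU figures together brings the heaviest CPU consumers to the top of the report:

tkprof example.trc report.txt sys=no sort=prscpu,execpu,fchcpu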

A sample TKPROF report for the invoice item query we’ve been using so far is as follows:

TKPROF: Release 8.1.6.1.0 - Production on Wed Aug 9 19:06:36 2000

(c) Copyright 1999 Oracle Corporation. All rights reserved.

Trace file: example.trc

Sort options: default

********************************************************************************

count = number of times OCI procedure was executed

cpu = cpu time in seconds executing

elapsed = elapsed time in seconds executing

disk = number of physical reads of buffers from disk

query = number of buffers gotten for consistent read


current = number of buffers gotten in current mode (usually for update)

rows = number of rows processed by the fetch or execute call

********************************************************************************

ALTER SESSION /* TKPROF example */ SET sql_trace = TRUE

call count cpu elapsed disk query current rows

------- ------ -------- ---------- ---------- ---------- ---------- ----------

Parse 0 0.00 0.00 0 0 0 0

Execute 1 0.00 0.00 0 0 0 0

Fetch 0 0.00 0.00 0 0 0 0

------- ------ -------- ---------- ---------- ---------- ---------- ----------

total 1 0.00 0.00 0 0 0 0

Misses in library cache during parse: 0

Misses in library cache during execute: 1

Optimizer goal: CHOOSE

Parsing user id: 34 (RSCHRAG)

********************************************************************************

ALTER SESSION SET timed_statistics = TRUE

call count cpu elapsed disk query current rows

------- ------ -------- ---------- ---------- ---------- ---------- ----------

Parse 1 0.00 0.00 0 0 0 0

Execute 1 0.00 0.00 0 0 0 0

Fetch 0 0.00 0.00 0 0 0 0

------- ------ -------- ---------- ---------- ---------- ---------- ----------

total 2 0.00 0.00 0 0 0 0

Misses in library cache during parse: 1

Optimizer goal: CHOOSE

Parsing user id: 34 (RSCHRAG)

********************************************************************************

SELECT a.customer_name, a.customer_number, b.invoice_number,

b.invoice_type, b.invoice_date, b.total_amount, c.line_number,


c.part_number, c.quantity, c.unit_cost

FROM customers a, invoices b, invoice_items c

WHERE c.invoice_id = :b1

AND c.line_number = :b2

AND b.invoice_id = c.invoice_id

AND a.customer_id = b.customer_id

call count cpu elapsed disk query current rows

------- ------ -------- ---------- ---------- ---------- ---------- ----------

Parse 1 0.05 0.02 0 0 0 0

Execute 1 0.00 0.00 0 0 0 0

Fetch 2 0.00 0.00 8 8 0 1

------- ------ -------- ---------- ---------- ---------- ---------- ----------

total 4 0.05 0.02 8 8 0 1

Misses in library cache during parse: 1

Optimizer goal: CHOOSE

Parsing user id: 34 (RSCHRAG)

Rows Row Source Operation

------- ---------------------------------------------------

1 NESTED LOOPS

1 NESTED LOOPS

1 TABLE ACCESS BY INDEX ROWID INVOICE_ITEMS

1 INDEX UNIQUE SCAN (object id 21892)

1 TABLE ACCESS BY INDEX ROWID INVOICES

1 INDEX UNIQUE SCAN (object id 21889)

1 TABLE ACCESS BY INDEX ROWID CUSTOMERS

1 INDEX UNIQUE SCAN (object id 21887)

Rows Execution Plan

------- ---------------------------------------------------

0 SELECT STATEMENT GOAL: CHOOSE

1 NESTED LOOPS
1 NESTED LOOPS

1 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF

'INVOICE_ITEMS'

1 INDEX GOAL: ANALYZED (UNIQUE SCAN) OF 'INVOICE_ITEMS_PK'

(UNIQUE)

1 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF

'INVOICES'

1 INDEX GOAL: ANALYZED (UNIQUE SCAN) OF 'INVOICES_PK'

(UNIQUE)

1 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF 'CUSTOMERS'

1 INDEX GOAL: ANALYZED (UNIQUE SCAN) OF 'CUSTOMERS_PK'

(UNIQUE)

********************************************************************************

ALTER SESSION SET sql_trace = FALSE

call count cpu elapsed disk query current rows

------- ------ -------- ---------- ---------- ---------- ---------- ----------

Parse 1 0.00 0.00 0 0 0 0

Execute 1 0.00 0.00 0 0 0 0

Fetch 0 0.00 0.00 0 0 0 0

------- ------ -------- ---------- ---------- ---------- ---------- ----------

total 2 0.00 0.00 0 0 0 0

Misses in library cache during parse: 1

Optimizer goal: CHOOSE

Parsing user id: 34 (RSCHRAG)

********************************************************************************

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call count cpu elapsed disk query current rows

------- ------ -------- ---------- ---------- ---------- ---------- ----------

Parse 3 0.05 0.02 0 0 0 0

Execute 4 0.00 0.00 0 0 0 0


Fetch 2 0.00 0.00 8 8 0 1

------- ------ -------- ---------- ---------- ---------- ---------- ----------

total 9 0.05 0.02 8 8 0 1

Misses in library cache during parse: 3

Misses in library cache during execute: 1

OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call count cpu elapsed disk query current rows

------- ------ -------- ---------- ---------- ---------- ---------- ----------

Parse 24 0.02 0.04 1 0 1 0

Execute 62 0.01 0.05 0 0 0 0

Fetch 126 0.02 0.02 6 198 0 100

------- ------ -------- ---------- ---------- ---------- ---------- ----------

total 212 0.05 0.11 7 198 1 100

Misses in library cache during parse: 11

4 user SQL statements in session.

24 internal SQL statements in session.

28 SQL statements in session.

1 statement EXPLAINed in this session.

********************************************************************************

Trace file: example.trc

Trace file compatibility: 8.00.04

Sort options: default

1 session in tracefile.

4 user SQL statements in trace file.

24 internal SQL statements in trace file.

28 SQL statements in trace file.

15 unique SQL statements in trace file.

1 SQL statements EXPLAINed using schema:

RSCHRAG.prof$plan_table

Default table was used.


Table was created.

Table was dropped.

381 lines in trace file.

You can see that there is a lot going on in a TKPROF report. We will talk about how to read the report and interpret
the different statistics in the next section.

Interpreting Execution Plans and TKPROF Reports

In this section we will discuss how to read and interpret execution plans and TKPROF reports. While generating an
execution plan listing or creating a TKPROF report file is usually a straightforward process, analyzing the data and
reaching the correct conclusions can be more of an art. We’ll look at lots of examples along the way.

Understanding Execution Plans

An execution plan is a hierarchical structure somewhat like an inverted tree. The SQL statement being examined can
be thought of as the root of the tree. This will be the first line on an execution plan listing, the line that is least
indented. This statement can be thought of as the result of one or more subordinate operations. Each of these
subordinate operations can possibly be decomposed further. This decomposition process continues repeatedly until
eventually even the most complex SQL statement is broken down into a set of basic data access operations.

Consider the following simple query and execution plan:

SELECT customer_id, customer_number, customer_name

FROM customers

WHERE UPPER (customer_name) LIKE 'ACME%'

ORDER BY customer_name;

ID PARENT OPERATION OBJECT_NAME

---- ------ ----------------------------------- ------------------------------

0 SELECT STATEMENT

1 0 SORT ORDER BY

2 1 TABLE ACCESS FULL CUSTOMERS

The root operation—that which we explained—is a SELECT statement. The output of the statement will be the
results of a sort operation (for the purposes of satisfying the ORDER BY clause). The input to the sort will be the
results of a full table scan of the customers table. Stated more clearly, the database server will execute this query by
checking every row in the customers table for a criteria match and sorting the results. Perhaps the developer
expected Oracle to use an index on the customer_name column to avoid a full table scan, but the use of the UPPER
function defeated the index. (A function-based index could be deployed to make this query more efficient.)
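As a sketch only (the index name is hypothetical, and in the Oracle 8i timeframe function-based indexes also require the cost-based optimizer), an index such as the following would let the optimizer avoid the full table scan:

CREATE INDEX customers_upper_name ON customers (UPPER (customer_name));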

Consider the following query and execution plan:

SELECT a.customer_name, b.invoice_number, b.invoice_date

FROM customers a, invoices b

WHERE b.invoice_date > TRUNC (SYSDATE - 1)


AND a.customer_id = b.customer_id;

ID PARENT OPERATION OBJECT_NAME

---- ------ ----------------------------------- ------------------------------

0 SELECT STATEMENT

1 0 NESTED LOOPS

2 1 TABLE ACCESS BY INDEX ROWID INVOICES

3 2 INDEX RANGE SCAN INVOICES_DATE

4 1 TABLE ACCESS BY INDEX ROWID CUSTOMERS

5 4 INDEX UNIQUE SCAN CUSTOMERS_PK

Again, the root operation is a SELECT statement. This time, the SELECT statement gets its input from the results of a
nested loops join operation. The nested loops operation takes as input the results of accesses to the invoices and
customers tables. (You can tell from the indenting that accesses to both tables feed directly into the nested loops
operation.) The invoices table is accessed by a range scan of the invoices_date index, while the customers table is
accessed by a unique scan of the customers_pk index.

In plainer language, here is how Oracle will execute this query: Oracle will perform a range scan on the
invoices_date index to find the ROWIDs of all rows in the invoices table that have an invoice date matching the query
criteria. For each ROWID found, Oracle will fetch the corresponding row from the invoices table, look up the
customer_id from the invoices record in the customers_pk index, and use the ROWID found in the customers_pk
index entry to fetch the correct customer record. This, in effect, joins the rows fetched from the invoices table with
their corresponding matches in the customers table. The results of the nested loops join operation are returned as
the query results.

Consider the following query and execution plan:

SELECT a.customer_name, COUNT (DISTINCT b.invoice_id) "Open Invoices",

COUNT (c.invoice_id) "Open Invoice Items"

FROM customers a, invoices b, invoice_items c

WHERE b.invoice_status = 'OPEN'

AND a.customer_id = b.customer_id

AND c.invoice_id (+) = b.invoice_id

GROUP BY a.customer_name;

ID PARENT OPERATION OBJECT_NAME

---- ------ ----------------------------------- ------------------------------

0 SELECT STATEMENT

1 0 SORT GROUP BY

2 1 NESTED LOOPS OUTER


3 2 HASH JOIN

4 3 TABLE ACCESS BY INDEX ROWID INVOICES

5 4 INDEX RANGE SCAN INVOICES_STATUS

6 3 TABLE ACCESS FULL CUSTOMERS

7 2 INDEX RANGE SCAN INVOICE_ITEMS_PK

This execution plan is more complex than the previous two, and here you can start to get a feel for the way in which
complex operations get broken down into simpler subordinate operations. To execute this query, the database
server will do the following: First Oracle will perform a range scan on the invoices_status index to get the ROWIDs of
all rows in the invoices table with the desired status. For each ROWID found, the record from the invoices table will
be fetched.

This set of invoice records will be set aside for a moment while the focus turns to the customers table. Here, Oracle
will fetch all customers records with a full table scan. To perform a hash join between the invoices and customers
tables, Oracle will build a hash from the customer records and use the invoice records to probe the customer hash.

Next, a nested loops join will be performed between the results of the hash join and the invoice_items_pk index. For
each row resulting from the hash join, Oracle will perform a unique scan of the invoice_items_pk index to find index
entries for matching invoice items. Note that Oracle gets everything it needs from the index and doesn’t even need
to access the invoice_items table at all. Also note that the nested loops operation is an outer join. A sort operation
for the purposes of grouping is performed on the results of the nested loops operation in order to complete the
SELECT statement.

It is interesting to note that Oracle chose to use a hash join and a full table scan on the customers table instead of
the more traditional nested loops join. In this database there are many invoices and a relatively small number of
customers, making a full table scan of the customers table less expensive than repeated index lookups on the
customers_pk index. But suppose the customers table was enormous and the relative number of invoices was quite
small. In that scenario a nested loops join might be better than a hash join. Examining the execution plan allows you
to see which join method Oracle is using. You could then apply optimizer hints to coerce Oracle to use alternate
methods and compare the performance.
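As a rough sketch (not taken from the paper), hints such as ORDERED and USE_NL could be used to coerce a nested loops join to the customers table, so that the two plans can be timed against each other:

SELECT /*+ ORDERED USE_NL(a) */
       a.customer_name, COUNT (DISTINCT b.invoice_id) "Open Invoices",
       COUNT (c.invoice_id) "Open Invoice Items"
FROM   invoices b, customers a, invoice_items c
WHERE  b.invoice_status = 'OPEN'
AND    a.customer_id = b.customer_id
AND    c.invoice_id (+) = b.invoice_id
GROUP BY a.customer_name;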

You may wonder how I got that whole detailed explanation out of the eight line execution plan listing shown above.
Did I read anything into the execution plan? No! It’s all there! Understanding the standard inputs and outputs of
each type of operation and coupling this with the indenting is key to reading an execution plan.

A nested loops join operation always takes two inputs: For every row coming from the first input, the second input
is executed once to find matching rows. A hash join operation also takes two inputs: The second input is read
completely once and used to build a hash. For each row coming from the first input, one probe is performed against
this hash. Sorting operations, meanwhile, take in one input. When the entire input has been read, the rows are
sorted and output in the desired order.

Now let’s look at a query with a more complicated execution plan:

SELECT customer_name

FROM customers a

WHERE EXISTS

(
SELECT 1

FROM invoices_view b

WHERE b.customer_id = a.customer_id

AND number_of_lines > 100

)

ORDER BY customer_name;

ID PARENT OPERATION OBJECT_NAME

---- ------ ----------------------------------- ------------------------------

0 SELECT STATEMENT

1 0 SORT ORDER BY

2 1 FILTER

3 2 TABLE ACCESS FULL CUSTOMERS

4 2 VIEW INVOICES_VIEW

5 4 FILTER

6 5 SORT GROUP BY

7 6 NESTED LOOPS

8 7 TABLE ACCESS BY INDEX ROWID INVOICES

9 8 INDEX RANGE SCAN INVOICES_CUSTOMER_ID

10 7 INDEX RANGE SCAN INVOICE_ITEMS_PK

This execution plan is somewhat complex because the query includes a subquery that the optimizer could not
rewrite as a simple join, and a view whose definition could not be merged into the query. The definition of the
invoices_view view is as follows:

CREATE OR REPLACE VIEW invoices_view

AS

SELECT a.invoice_id, a.customer_id, a.invoice_date, a.invoice_status,

a.invoice_number, a.invoice_type, a.total_amount,

COUNT(*) number_of_lines

FROM invoices a, invoice_items b

WHERE b.invoice_id = a.invoice_id

GROUP BY a.invoice_id, a.customer_id, a.invoice_date, a.invoice_status,

a.invoice_number, a.invoice_type, a.total_amount;


Here is what this execution plan says: Oracle will execute this query by reading all rows from the customers table
with a full table scan. For each customer record, the invoices_view view will be assembled as a filter and the relevant
contents of the view will be examined to determine whether the customer should be part of the result set or not.

Oracle will assemble the view by performing an index range scan on the invoices_customer_id index and fetching
the rows from the invoices table containing one specific customer_id. For each invoice record found, the
invoice_items_pk index will be range scanned to get a nested loops join of invoices to their invoice_items records.
The results of the join are sorted for grouping, and then groups with 100 or fewer invoice_items records are filtered
out.

What is left at the step with ID 4 is a list of invoices for one specific customer that have more than 100 invoice_items
records associated. If at least one such invoice exists, then the customer passes the filter at the step with ID 2.
Finally, all customer records passing this filter are sorted for correct ordering and the results are complete.

Note that queries involving simple views will not result in a “view” operation in the execution plan. This is because
Oracle can often merge a view definition into the query referencing the view so that the table accesses required to
implement the view just become part of the regular execution plan. In this example, the GROUP BY clause
embedded in the view foiled Oracle’s ability to merge the view into the query, making a separate “view” operation
necessary in order to execute the query.

Also note that the filter operation can take on a few different forms. In general, a filter operation is where Oracle
looks at a set of candidate rows and eliminates some based on certain criteria. This criteria could involve a simple
test such as number_of_lines > 100 or it could be an elaborate subquery.

In this example, the filter at step ID 5 takes only one input. Here Oracle evaluates each row from the input one at a
time and either adds the row to the output or discards it as appropriate. Meanwhile, the filter at step ID 2 takes two
inputs. When a filter takes two inputs, Oracle reads the rows from the first input one at a time and executes the
second input once for each row. Based on the results of the second input, the row from the first input is either
added to the output or discarded.

Oracle is able to perform simple filtering operations while performing a full table scan. Therefore, a separate filter
operation will not appear in the execution plan when Oracle performs a full table scan and throws out rows that
don’t satisfy a WHERE clause. Filter operations with one input commonly appear in queries with view operations or
HAVING clauses, while filter operations with multiple inputs will appear in queries with EXISTS clauses.

An important note about execution plans and subqueries: When a SQL statement involves subqueries, Oracle tries
to merge the subquery into the main statement by using a join. If this is not feasible and the subquery does not have
any dependencies or references to the main query, then Oracle will treat the subquery as a completely separate
statement from the standpoint of developing an execution plan—almost as if two separate SQL statements were
sent to the database server. When you generate an execution plan for a statement that includes a fully autonomous
subquery, the execution plan may not include the operations for the subquery. In this situation, you need to
generate an execution plan for the subquery separately.

Other Columns in the Plan Table

Although the plan table contains 24 columns, so far we have only been using six of them in our execution plan
listings. These six will get you very far in the tuning process, but some of the other columns can be mildly interesting
at times. Still other columns can be very relevant in specific situations.

The optimizer column in the plan table shows the mode (such as RULE or CHOOSE) used by the optimizer to
generate the execution plan. The timestamp column shows the date and time that the execution plan was
generated. The remarks column is an 80 byte field where you may put your own comments about each step of the
execution plan. You can populate the remarks column by using an ordinary UPDATE statement against the plan
table.
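For instance, assuming the default plan table name and a statement explained with STATEMENT_ID 'test1' (both names are illustrative here), a remark could be attached to one step like this:

UPDATE plan_table
SET remarks = 'Full table scan is acceptable here'
WHERE statement_id = 'test1'
AND id = 2;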

The object_owner, object_node, and object_instance columns can help you further distinguish the database object
involved in the operation. You might look at the object_owner column, for example, if objects in multiple schemas
have the same name and you are not sure which one is being referenced in the execution plan. The object_node is
relevant in distributed queries or transactions. It indicates the database link name to the object if the object resides
in a remote database. The object_instance column is helpful in situations such as a self-join where multiple instances
of the same object are used in one SQL statement.

The partition_start, partition_stop, and partition_id columns offer additional information when a partitioned table is
involved in the execution plan. The distribution column gives information about how the multiple Oracle processes
involved in a parallel query or parallel DML operation interact with each other.

The cost, cardinality, and bytes columns show estimates made by the cost-based optimizer as to how expensive an
operation will be. Remember that the execution plan is inserted into the plan table without actually executing the
SQL statement. Therefore, these columns reflect Oracle’s estimates and not the actual resources used. While it can
be amusing to look at the optimizer’s predictions, sometimes you need to take them with a grain of salt. Later we’ll
see that TKPROF reports can include specific information about actual resources used at each step of the execution
plan.
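To eyeball those estimates, a query along these lines against the same hypothetical plan_table and STATEMENT_ID will do:

SELECT id, parent_id, operation, options, object_name, cost, cardinality, bytes
FROM plan_table
WHERE statement_id = 'test1'
ORDER BY id;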

The “other” column in the plan table is a wild card where Oracle can store any sort of textual information about
each step of an execution plan. The other_tag column gives an indication of what has been placed in the “other”
column. This column will contain valuable information during parallel queries and distributed operations.

Consider the following distributed query and output from the SQL*Plus autotrace facility:

SELECT /*+ RULE */

a.customer_number, a.customer_name, b.contact_id, b.contact_name

FROM customers a, contacts@sales.acme.com b

WHERE UPPER (b.contact_name) = UPPER (a.customer_name)

ORDER BY a.customer_number, b.contact_id;

Execution Plan

----------------------------------------------------------

0 SELECT STATEMENT Optimizer=HINT: RULE

1 0 SORT (ORDER BY)

2 1 MERGE JOIN

3 2 SORT (JOIN)

4 3 REMOTE* SALES.ACME.COM

5 2 SORT (JOIN)

6 5 TABLE ACCESS (FULL) OF 'CUSTOMERS'

4 SERIAL_FROM_REMOTE SELECT "CONTACT_ID","CONTACT_NAME" FROM "CONTACTS" "B"


In the execution plan hierarchy, the step with ID 4 is displayed as a remote operation through the sales.acme.com
database link. At the bottom of the execution plan you can see the actual SQL statement that the local database
server sends to sales.acme.com to perform the remote operation. This information came from the “other” and
other_tag columns of the plan table.

Here is how to read this execution plan: Oracle observed a hint and used the RULE optimizer mode in order to
develop the execution plan. First, a remote query will be sent to sales.acme.com to fetch the contact_ids and names
from a remote table. These fetched rows will be sorted for joining purposes and temporarily set aside. Next, Oracle
will fetch all records from the customers table with a full table scan and sort them for joining purposes. Next, the set
of contacts and the set of customers will be joined using the merge join algorithm. Finally, the results of the merge
join will be sorted for proper ordering and the results will be returned.

The merge join operation always takes two inputs, with the prerequisite that each input has already been sorted on
the join column or columns. The merge join operation reads both inputs in their entirety at one time and outputs the
results of the join. Merge joins and hash joins are usually more efficient than nested loops joins when remote tables
are involved, because these types of joins will almost always involve fewer network roundtrips. Hash joins are not
supported when rule-based optimization is used. Because of the RULE hint, Oracle chose a merge join.

Reading TKPROF Reports

Every TKPROF report starts with a header that lists the TKPROF version, the date and time the report was generated,
the name of the trace file, the sort option used, and a brief definition of the column headings in the report. Every
report ends with a series of summary statistics. You can see the heading and summary statistics on the sample
TKPROF report shown earlier in this paper.

The main body of the TKPROF report consists of one entry for each distinct SQL statement that was executed by the
database server while SQL trace was enabled. There are a few subtleties at play in the previous sentence. If an
application queries the customers table 50 times, each time specifying a different customer_id as a literal, then there
will be 50 separate entries in the TKPROF report. If however, the application specifies the customer_id as a bind
variable, then there will be only one entry in the report with an indication that the statement was executed 50 times.
Furthermore, the report will also include SQL statements initiated by the database server itself in order to perform
so-called "recursive operations", such as managing the data dictionary and the dictionary cache.
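For instance (the table, column, and bind names here are illustrative, not from the paper), the literal and bind-variable forms of the same lookup appear very differently in the report:

SELECT customer_name FROM customers WHERE customer_id = 1001;  -- literal: each distinct value gets its own entry

SELECT customer_name FROM customers WHERE customer_id = :b1;   -- bind variable: one entry, executed many times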

The entries for each SQL statement in the TKPROF report are separated by a row of asterisks. The first part of each
entry lists the SQL statement and statistics pertaining to the parsing, execution, and fetching of the SQL statement.
Consider the following example:

********************************************************************************

SELECT table_name

FROM user_tables

ORDER BY table_name

call count cpu elapsed disk query current rows

------- ------ -------- ---------- ---------- ---------- ---------- ----------

Parse 1 0.01 0.02 0 0 0 0

Execute 1 0.00 0.00 0 0 0 0

Fetch 14 0.59 0.99 0 33633 0 194


------- ------ -------- ---------- ---------- ---------- ---------- ----------

total 16 0.60 1.01 0 33633 0 194

Misses in library cache during parse: 1

Optimizer goal: CHOOSE

Parsing user id: RSCHRAG [recursive depth: 0]

This may not seem like a useful example because it is simply a query against a dictionary view and does not involve
application tables. However, this query actually serves the purpose well from the standpoint of highlighting the
elements of a TKPROF report.

Reading across, we see that while SQL trace was enabled, the application called on the database server to parse this
statement once. 0.01 CPU seconds over a period of 0.02 elapsed seconds were used on the parse call, although no
physical disk I/Os or even any buffer gets were required. (We can infer that all dictionary data required to parse the
statement were already in the dictionary cache in the SGA.)

The next line shows that the application called on Oracle to execute the query once, with less than 0.01 seconds of
CPU time and elapsed time being used on the execute call. Again, no physical disk I/Os or buffer gets were required.
The fact that almost no resources were used on the execute call might seem strange, but it makes perfect sense
when you consider that Oracle defers all work on most SELECT statements until the first row is fetched.

The next line indicates that the application performed 14 fetch calls, retrieving a total of 194 rows. The 14 calls used
a total of 0.59 CPU seconds and 0.99 seconds of elapsed time. Although no physical disk I/Os were performed,
33,633 buffers were gotten in consistent mode (consistent gets). In other words, there were 33,633 hits in the buffer
cache and no misses. I ran this query from SQL*Plus, and we can see here that SQL*Plus uses an array interface to
fetch multiple rows on one fetch call. We can also see that, although no disk I/Os were necessary, it took quite a bit
of processing to complete this query.

The remaining lines on the first part of the entry for this SQL statement show that there was a miss in the library
cache (the SQL statement was not already in the shared pool), the CHOOSE optimizer goal was used to develop the
execution plan, and the parsing was performed in the RSCHRAG schema.

Notice the text in square brackets concerning recursive depth. This did not actually appear on the report—I added it
for effect. The fact that the report did not mention recursive depth for this statement indicates that it was executed
at the top level. In other words, the application issued this statement directly to the database server. When
recursion is involved, the TKPROF report will indicate the depth of the recursion next to the parsing user.

There are two primary ways in which recursion occurs. Data dictionary operations can cause recursive SQL
operations. When a query references a schema object that is missing from the dictionary cache, a recursive query is
executed in order to fetch the object definition into the dictionary cache. For example, a query from a view whose
definition is not in the dictionary cache will cause a recursive query against view$ to be parsed in the SYS schema.
Also, dynamic space allocations in dictionary-managed tablespaces will cause recursive updates against uet$ and
fet$ in the SYS schema.

Use of database triggers and stored procedures can also cause recursion. Suppose an application inserts a row into a
table that has a database trigger. When the trigger fires, its statements run at a recursion depth of one. If the trigger
invokes a stored procedure, the recursion depth could increase to two. This could continue through any number of
levels.

So far we have been looking at the top part of the SQL statement entry in the TKPROF report. The remainder of the
entry consists of a row source operation list and optionally an execution plan display. (If the explain keyword was not
used when the TKPROF report was generated, then the execution plan display will be omitted.) Consider the
following example, which is the rest of the entry shown above:

Rows Row Source Operation

------- ---------------------------------------------------

194 SORT ORDER BY

194 NESTED LOOPS

195 NESTED LOOPS OUTER

195 NESTED LOOPS OUTER

195 NESTED LOOPS

11146 TABLE ACCESS BY INDEX ROWID OBJ$

11146 INDEX RANGE SCAN (object id 34)

11339 TABLE ACCESS CLUSTER TAB$

12665 INDEX UNIQUE SCAN (object id 3)

33 INDEX UNIQUE SCAN (object id 33)

193 TABLE ACCESS CLUSTER SEG$

387 INDEX UNIQUE SCAN (object id 9)

194 TABLE ACCESS CLUSTER TS$

388 INDEX UNIQUE SCAN (object id 7)

Rows Execution Plan

------- ---------------------------------------------------

0 SELECT STATEMENT GOAL: CHOOSE

194 SORT (ORDER BY)

194 NESTED LOOPS

195 NESTED LOOPS (OUTER)

195 NESTED LOOPS (OUTER)

195 NESTED LOOPS

11146 TABLE ACCESS (BY INDEX ROWID) OF 'OBJ$'

11146 INDEX (RANGE SCAN) OF 'I_OBJ2' (UNIQUE)

11339 TABLE ACCESS (CLUSTER) OF 'TAB$'

12665 INDEX (UNIQUE SCAN) OF 'I_OBJ#' (NON-UNIQUE)

33 INDEX (UNIQUE SCAN) OF 'I_OBJ1' (UNIQUE)


193 TABLE ACCESS (CLUSTER) OF 'SEG$'

387 INDEX (UNIQUE SCAN) OF 'I_FILE#_BLOCK#' (NON-UNIQUE)

194 TABLE ACCESS (CLUSTER) OF 'TS$'

388 INDEX (UNIQUE SCAN) OF 'I_TS#' (NON-UNIQUE)

The row source operation listing looks very much like an execution plan. It is based on data collected from the SQL
trace file and can be thought of as a “poor man’s execution plan”. It is close, but not complete.

The execution plan shows the same basic information you could get from the autotrace facility of SQL*Plus or by
querying the plan table after an EXPLAIN PLAN statement—with one key difference. The rows column along the left
side of the execution plan contains a count of how many rows of data Oracle processed at each step during the
execution of the statement. This is not an estimate from the optimizer, but rather actual counts based on the
contents of the SQL trace file.

Although the query in this example goes against a dictionary view and is not terribly interesting, you can see that
Oracle did a lot of work to get the 194 rows in the result: 11,146 range scans were performed against the i_obj2
index, followed by 11,146 accesses on the obj$ table. This led to 12,665 non-unique lookups on the i_obj# index,
11,339 accesses on the tab$ table, and so on.

In situations where it is feasible to actually execute the SQL statement you wish to explain (as opposed to merely
parsing it as with the EXPLAIN PLAN statement), I believe TKPROF offers the best execution plan display. GUI tools
such as TOAD will give you results with much less effort, but the display you get from TOAD is not 100% complete
and in certain situations critical information is missing. (Again, my experience is with the free version!) Meanwhile,
simple plan table query scripts like my explain.sql presented earlier in this paper or utlxpls.sql display very
incomplete information. TKPROF gives the most relevant detail, and the actual row counts on each operation can be
very useful in diagnosing performance problems. Autotrace in SQL*Plus gives you most of the information and is
easy to use, so I give it a close second place.

TKPROF Reports: More Than Just Execution Plans

The information displayed in a TKPROF report can be extremely valuable in the application tuning process. Of course
the execution plan listing will give you insights into how Oracle executes the SQL statements that make up the
application, and ways to potentially improve performance. However, the other elements of the TKPROF report can
be helpful as well.

Looking at the repetition of SQL statements and the library cache miss statistics, you can determine if the
application is making appropriate use of Oracle’s shared SQL facility. Are bind variables being used, or is every query
a unique statement that must be parsed from scratch?

From the counts of parse, execute, and fetch calls, you can see if applications are making appropriate use of Oracle’s
APIs. Is the application fetching rows one at a time? Is the application reparsing the same cursor thousands of times
instead of holding it open and avoiding subsequent parses? Is the application submitting large numbers of simple
SQL statements instead of bulking them into PL/SQL blocks or perhaps using array binds?

Looking at the CPU and I/O statistics, you can see which statements consume the most system resources. Could
some statements be tuned so as to be less CPU intensive or less I/O intensive? Would shaving just a few buffer gets
off of a statement’s execution plan have a big impact because the statement gets executed so frequently?

The row counts on the individual operations in an execution plan display can help identify inefficiencies. Are tables
being joined in the wrong order, causing large numbers of rows to be joined and eliminated only at the very end?
Are large numbers of duplicate rows being fed into sorts for uniqueness when perhaps the duplicates could have
been weeded out earlier on?

TKPROF reports may seem long and complicated, but nothing in the report is without purpose. (Well, okay, the row
source operation listing sometimes isn’t very useful!) You can learn volumes about how your application interacts
with the database server by generating and reading a TKPROF report.

Conclusion

In this paper we have discussed how to generate execution plans and TKPROF reports, and how to interpret them.
We’ve walked through several examples in order to clarify the techniques presented. When you have a firm
understanding of how the Oracle database server executes your SQL statements and what resources are required
each step of the way, you have the ability to find bottlenecks and tune your applications for peak performance.
EXPLAIN PLAN and TKPROF give you the information you need for this process.

When is a full table scan better than an index range scan? When is a nested loops join better than a hash join? In
which order should tables be joined? These are all questions without universal answers. In reality, there are many
factors that contribute to determining which join method is better or which join order is optimal.

In this paper we have looked at the tools that give you the information you need to make tuning decisions. How to
translate an execution plan or TKPROF report into an action plan to achieve better performance is not something
that can be taught in one paper. You will need to read several papers or books in order to give yourself some
background on the subject, and then you will need to try potential solutions in a test environment and evaluate
them. If you do enough application tuning, you will develop an intuition for spotting performance problems and
potential solutions. This intuition comes from lots of experience, and you can’t gain it solely from reading papers or
books.

For more information about the EXPLAIN PLAN facility, execution plans in general, and TKPROF, consult the Oracle
manual entitled Oracle8i Designing and Tuning for Performance. To learn more about application tuning techniques,
I suggest you pick up Richard Niemiec’s tome on the subject, Oracle Performance Tuning Tips & Techniques,
available from Oracle Press.

About the Author

Roger Schrag has been an Oracle DBA and application architect for over eleven years, starting out at Oracle
Corporation on the Oracle Financials development team. He is the founder of Database Specialists, Inc., a consulting
group specializing in business solutions based on Oracle technology. You can visit Database Specialists on the web at
http://www.dbspecialists.com, and you can reach Roger by calling +1.415.344.0500.

Various ways of enabling trace in E-business suite


Generating a Trace File for Forms

Navigate to Menu - Help > Diagnostics


Select one of the following options:

Regular Trace

Trace with Binds

Trace with Waits

Trace with Binds and Waits

- Reproduce the performance issue.

- Disable the trace by selecting “No Trace”.

Generating a Trace File for a Concurrent Program

- Enable the trace option by selecting the Enable Trace Checkbox in the concurrent program definition.

- Reproduce the performance issue.

- Disable the trace option by unchecking the Enable Trace Checkbox in the concurrent program definition.

Note: If it is a custom report, there are mandatory XML tags that must be added by the developers. Without them, the
above trace will not work.

Generating a Trace File for Java Programs

- Navigation:

Responsibility = System Administrator

Security > Profile

User: Enter User name of the user facing the issue.

Query the Profile: FND: Diagnostics

Set the FND : Diagnostics profile to Yes at User level.

- Login into Self Service under the same user used to set the profile value.

- Navigate to the point immediately before the error is received, if any.

- Click the diagnostic icon at the top of the page. Two options are displayed:

Show Log

Set Trace Level

- Select 'Set Trace Level'

- Click Go.

- A page with a set of options is displayed.

Disable Trace

Trace (regular)
Trace with binds

Trace with waits

Trace with binds and waits

- Choose Trace with binds and waits

- Click Save.

- Return to the page and reproduce the error, if any.

- Turn off Trace.

Select the Diagnostic icon.

Click on option: Set Trace Level

Click Go

Select : Disable Trace

- Note the trace id numbers - there will be more than one and exit the Application.

Using the SQL Trace Option and TKPROF Utility

Set initialization parameters for trace file management.

Enable the SQL Trace facility for the desired session, and run the application.

Run TKPROF to translate the trace file created in Step 2 into a readable output file. This step can optionally create a
SQL script that can be used to store the statistics in a database.

Interpret the output file created in Step 3.

Formatting Trace File with TKPROF

Run TKPROF procedure on the raw trace files. For example, Doyen_ora_18190.trc is the name of the raw trace file
and trace1.txt is the name of the TKPROF file.

tkprof Doyen_ora_18190.trc trace1.txt explain=apps/<apps pw> sort='(prsela,exeela,fchela)'

To enable trace for an API when executed from a SQL script outside of Oracle Applications

For Example Inventory APIs

-- enable trace

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

-- Set the trace file identifier, to locate the file on the server

ALTER SESSION SET TRACEFILE_IDENTIFIER = 'API_TRACE';

-- Execute the API from the SQL script, in the same session.
EXEC <procedure name> ;

-- Once the API completes execution, disable trace

ALTER SESSION SET EVENTS '10046 trace name context off';

-- Locate the trace file based on the tracefile identifier

SELECT VALUE FROM V$PARAMETER WHERE NAME = 'user_dump_dest';

-- Generate the tkprof of the trace file

tkprof <trace_File>.trc <tkprof>.out sys=no explain=apps/<apps pwd>

To Enable Trace using oradebug at session Level.

1. Obtain the SPID from v$process.

SQL> select username, spid from v$process;

2. Start the debug session with the SPID of the process that needs to be traced.

SQL> oradebug setospid 2280

The oradebug command below will enable the maximum tracing possible:

SQL> oradebug event 10046 trace name context forever, level 12

3. Turn tracing off.

SQL> oradebug event 10046 trace name context off

4. Obtain the trace file name. The oradebug facility provides an easy way to obtain the file name:

SQL> oradebug tracefile_name

c:\oracle9i\admin\ORCL92\udump\mooracle_ora_2280.trc

Now we can use the Tkprof Utilty to get the readable format.

Enable Trace for particular session :

For example, to enable tracing in a session with SID 9 and serial number 29, an event 10046 level 12 trace (binds and waits) can be enabled using

EXECUTE dbms_support.start_trace_in_session (9,29,binds=>true,waits=>true);

Trace can be disabled using

dbms_support.stop_trace_in_session (9,29);

To find the SID and serial number for a given OS process id (SPID):

select a.SID, a.SERIAL#, b.USERNAME, b.PROGRAM
from v$session a, v$process b
where a.PADDR = b.ADDR
and b.SPID = '&SPID';

EXECUTE DBMS_SYSTEM.set_sql_trace_in_session(sid=>1390, serial#=>7860, sql_trace=>TRUE);

Reference:

How To Use SQL Trace And TKPROF For Performance Issues with EBusiness Suite [ID 980711.1]
How to Enable Trace or Debug for APIs executed as SQL Script Outside of the Applications ? [Video] [ID 869386.1]

How to trace in E-Business Suite


As a DBA, the most challenging task comes when we need to work or diagnose the performance issue. Since I have
worked extensively on performance tuning, I would like to share different ways to trace Oracle user session.

Tracing is a very handy tool for any DBA to be the first thing to look for when diagnosing any performance related
issues.

Tracing one’s own session– many times while working on performance issue, I need to take the trace of my own
database session. Steps which I follow

alter session set statistics_level = ALL (the STATISTICS_LEVEL parameter defaults to TYPICAL; setting it to ALL gathers additional row source execution statistics for the session)

alter session set tracefile_identifier = ‘<some_name_to_identify_session>’

alter session set sql_trace = true –> this enables a basic SQL trace (no bind or wait data) OR

alter session set events '10046 trace name context forever, level x'

x=4 –> level 4 trace (adds bind variable values)

x=8 –> level 8 trace (adds wait events)

x=12 –> level 12 trace (adds both binds and waits)

Run the SQL

Once the SQL has finished, disable the trace

alter session set sql_trace = false OR

alter session set events '10046 trace name context off'

Go to the user dump location (user_dump_dest) on the database server and look for the trace file whose name contains
the tracefile_identifier value set earlier.

Tracing another user's database session (you MUST know the user session details – SID and Serial#). Different ways are:

EXECUTE sys.dbms_system.set_ev(<SID>, <Serial#>, 10046, <x>, '');

x=4 –> level 4 trace

x=8 –> level 8 trace

x=12 –> level 12 trace

EXECUTE dbms_support.start_trace_in_session(sid=><SID>,serial=><Serial#>,Waits=> true, binds=> true)

Tracing in oracle E-Business Suite– Different scenarios are

User is complaining that it is taking a lot of time while saving the transaction

In order to troubleshoot or isolate the issue, we MUST know the query that runs in the database when the user
click on ‘SAVE’ button and hence should enable the trace before the ‘SAVE’ button is hit.

How to enable/disable trace


Login to Oracle application

Navigate to the required form (You need to enable the trace immediately before the ‘Save’ button is hit. This is done
to ensure ONLY the offending SQLs are captured in the trace file)

Once on the Form –> Click on 'Help' –> Diagnostic –> Trace and select any of the five options listed there. This will
enable the trace, and Oracle will prompt you with the trace file identifier and its location.

Do the problematic transaction and, once done, disable the trace (follow the same navigation and select 'No Trace').

Performance issue is reported by user– Not specific to any specific transaction

In such scenarios, we need to trace the complete navigation or User activity to identify the offending SQL .This is
achieved by enabling the trace using “profile level tracing”

Login to Oracle application

Profile –> System –> Check the ‘USER’ check box –> put the username whose transaction is being traced (NEVER DO
THIS at SITE level) –> Query for the profile “Initialization SQL Statement – Custom”

Update this value for the USER with "BEGIN FND_CTL.FND_SESS_CTL('','','TRUE','TRUE','', 'ALTER SESSION SET
TRACEFILE_IDENTIFIER=''<any_identifier>'' MAX_DUMP_FILE_SIZE=''UNLIMITED'' EVENTS=''10046 TRACE NAME
CONTEXT FOREVER, LEVEL <level_no>'''); END;"

level_no –> could be 4,8 or 12

What this entry does

sets up a trace file identifier

sets the MAX_DUMP_FILE_SIZE as UNLIMITED

Enables the Level X trace

CAUTION: Be very careful when updating the above value, since any wrong entry can prevent the specific user from
logging in to the Oracle application

Save this profile setting and ask the user to do the transaction. Once done, query the profile again for the
specific user and remove the entry
Performance Issue for any concurrent program

This is normally done through enabling “Enable Trace” check box after querying the concurrent program

This way Level 8 trace is generated

For enabling Level 12 trace for any concurrent program, it can either be done from the database side (as described
above under the section "Tracing User session") OR from the Oracle application front end. How to do it from
the front end:

Set the profile “Concurrent: Allow Debugging” to YES (ALWAYS DO this at USER level)

Login to application using the same responsibility as used to run the concurrent program

On the request submission screen, “Debug option” would be enabled (otherwise it is disabled)

Click on the “Debug Option” button and select the check box “SQL Trace”

Tracing SSWA/OAF application in Oracle E-Business Suite

You need to set the profile – "FND: Diagnostics" to YES (ALWAYS do at USER level)

Once done, navigate to the OAF page or SSWA page

You will see the "Diagnostic" link on the top right corner of the page

Click on the link

It will give you options to select – select “Set Trace Level”

Click on GO

Next page will give you the option to select the kind of trace you want and, once selected, the trace file identifier
will be displayed on the screen. You need to make a note of this.

Now do the required transactions and, once done, disable the trace (Diagnostic –> Set Trace Level –> Disable Trace)

Diagnostic link will be visible as long as profile value is set to YES

How to take trace for Concurrent Program ?


1. Enable Trace at the Report Definition.
Go to System Administrator -> Concurrent Programs -> Define, query the report and check the 'Enable Trace'
check box

( Navigate to:

Profile->system

Query on the Profile Option: "Concurrent: Allow Debugging"

This should be set to 'yes' at 'Site' level.

If it isn't set, then set it, then logout and bounce the APPS services.

The 'Debug Options' button on the concurrent program will now be enabled. )

2. Run the report from Step 1 above.

3. Get the request id from the query below, e.g. Request id = 555555

select fcr.request_id "Request ID"

--, fcr.oracle_process_id "Trace ID"

, p1.value||'/'||lower(p2.value)||'_ora_'||fcr.oracle_process_id||'.trc' "Trace File"

, to_char(fcr.actual_completion_date, 'dd-mon-yyyy hh24:mi:ss') "Completed"

, fcp.user_concurrent_program_name "Program"

, fe.execution_file_name|| fe.subroutine_name "Program File"

, decode(fcr.phase_code,'R','Running')||'-'||decode(fcr.status_code,'R','Normal') "Status"

, fcr.enable_trace "Trace Flag"

from fnd_concurrent_requests fcr

, v$parameter p1

, v$parameter p2

, fnd_concurrent_programs_vl fcp

, fnd_executables fe

where p1.name='user_dump_dest'

and p2.name='db_name'

and fcr.concurrent_program_id = fcp.concurrent_program_id

and fcr.program_application_id = fcp.application_id

and fcp.application_id = fe.application_id

and fcp.executable_id=fe.executable_id

and ((fcr.request_id = &request_id


or fcr.actual_completion_date > trunc(sysdate)))

order by decode(fcr.request_id, &request_id, 1, 2), fcr.actual_completion_date desc;

--you will be prompted to enter the request_id;

4. In SQL: select value from v$parameter where name = 'user_dump_dest'

5. Go to directory in Step 4. above.

6. grep '555555' *.trc

7. Find the file name from step 6. above

8. Upload the raw trace file identified in Step 7 above, and the tkprofed trace as well.

$ tkprof <RAW TRACE> <output> explain=apps_uname/apps_pwd sys=no sort=prsela,exeela,fchela

Tablespaces

A database is divided into one or more logical storage units called tablespaces. A database administrator can use
tablespaces to do the following:

control disk space allocation for database data

assign specific space quotas for database users

control availability of data by taking individual tablespaces online or offline

perform partial database backup or recovery operations

allocate data storage across devices to improve performance


A database administrator can create new tablespaces, add and remove datafiles from tablespaces, set and alter
default segment storage settings for segments created in a tablespace, make a tablespace read-only or writeable,
make a tablespace temporary or permanent, and drop tablespaces.

This section includes the following topics:

The SYSTEM Tablespace

Allocating More Space for a Database

Online and Offline Tablespaces

Read-Only Tablespaces

Temporary Tablespaces

The SYSTEM Tablespace

Every Oracle database contains a tablespace named SYSTEM that Oracle creates automatically when the database is
created. The SYSTEM tablespace always contains the data dictionary tables for the entire database.

A small database might need only the SYSTEM tablespace; however, it is recommended that you create at least one
additional tablespace to store user data separate from data dictionary information. This allows you more flexibility in
various database administration operations and can reduce contention among dictionary objects and schema objects
for the same datafiles.

Note: The SYSTEM tablespace must always be kept online. See "Online and Offline Tablespaces" .

All data stored on behalf of stored PL/SQL program units (procedures, functions, packages and triggers) resides in
the SYSTEM tablespace. If you create many of these PL/SQL objects, the database administrator needs to plan for the
space in the SYSTEM tablespace that these objects use. For more information about these objects and the space that
they require, see Chapter 14, "Procedures and Packages", and Chapter 15, "Database Triggers".

Allocating More Space for a Database

To enlarge a database, you have three options. You can add another datafile to one of its existing tablespaces,
thereby increasing the amount of disk space allocated for the corresponding tablespace. Figure 4 - 2 illustrates this
kind of space increase.

Figure 4 - 2. Enlarging a Database by Adding a Datafile to a Tablespace

Alternatively, a database administrator can create a new tablespace (defined by an additional datafile) to increase
the size of a database. Figure 4 - 3 illustrates this.

Figure 4 - 3. Enlarging a Database by Adding a New Tablespace

The size of a tablespace is the size of the datafile(s) that constitute the tablespace, and the size of a database is the
collective size of the tablespaces that constitute the database.

The third option is to change a datafile's size or allow datafiles in existing tablespaces to grow dynamically as more
space is needed. You accomplish this by altering existing files or by adding files with dynamic extension properties.
Figure 4 - 4 illustrates this.

Figure 4 - 4. Enlarging a Database by Dynamically Sizing Datafiles
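Expressed as SQL, the three options look roughly like the following (the tablespace names, file names, and sizes are purely illustrative):

ALTER TABLESPACE users_ts ADD DATAFILE '/u02/oradata/prod/users02.dbf' SIZE 100M;

CREATE TABLESPACE history_ts DATAFILE '/u03/oradata/prod/history01.dbf' SIZE 500M;

ALTER DATABASE DATAFILE '/u02/oradata/prod/users01.dbf' RESIZE 300M;

ALTER DATABASE DATAFILE '/u02/oradata/prod/users01.dbf' AUTOEXTEND ON NEXT 10M MAXSIZE 2000M;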

For more information about increasing the amount of space in your database, see the Oracle7 Server
Administrator's Guide.
Online and Offline Tablespaces

A database administrator can bring any tablespace (except the SYSTEM tablespace) in an Oracle database online
(accessible) or offline (not accessible) whenever the database is open.

Note: The SYSTEM tablespace must always be online because the data dictionary must always be available to Oracle.

A tablespace is normally online so that the data contained within it is available to database users. However, the
database administrator might take a tablespace offline for any of the following reasons:

to make a portion of the database unavailable, while allowing normal access to the remainder of the database

to perform an offline tablespace backup (although a tablespace can be backed up while online and in use)

to make an application and its group of tables temporarily unavailable while updating or maintaining the application
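
A minimal sketch of taking a non-SYSTEM tablespace offline for maintenance and bringing it back online (the tablespace name is hypothetical):

ALTER TABLESPACE users_ts OFFLINE NORMAL;

-- perform the offline backup or maintenance here

ALTER TABLESPACE users_ts ONLINE;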

When a Tablespace Goes Offline

When a tablespace goes offline, Oracle does not permit any subsequent SQL statements to reference objects
contained in the tablespace. Active transactions with completed statements that refer to data in a tablespace that
has been taken offline are not affected at the transaction level. Oracle saves rollback data corresponding to
statements that affect data in the offline tablespace in a deferred rollback segment (in the SYSTEM tablespace).
When the tablespace is brought back online, Oracle applies the rollback data to the tablespace, if needed.

You cannot take a tablespace offline if it contains any rollback segments that are in use.

When a tablespace goes offline or comes back online, it is recorded in the data dictionary in the SYSTEM tablespace.
If a tablespace was offline when you shut down a database, the tablespace remains offline when the database is
subsequently mounted and reopened.

You can bring a tablespace online only in the database in which it was created because the necessary data dictionary
information is maintained in the SYSTEM tablespace of that database. An offline tablespace cannot be read or edited
by any utility other than Oracle. Thus, tablespaces cannot be transferred from database to database (transfer of
Oracle data can be achieved with tools described in Oracle7 Server Utilities).

Oracle automatically changes a tablespace from online to offline when certain errors are encountered (for example,
when the database writer process, DBWR, fails in several attempts to write to a datafile of the tablespace). Users
trying to access tables in the tablespace with the problem receive an error. If the problem that causes this disk I/O to
fail is media failure, the tablespace must be recovered after you correct the hardware problem.

Using Tablespaces for Special Procedures

By using multiple tablespaces to separate different types of data, the database administrator can also take specific
tablespaces offline for certain procedures, while other tablespaces remain online and the information in them is still
available for use. However, special circumstances can occur when tablespaces are taken offline. For example, if two
tablespaces are used to separate table data from index data, the following is true:

If the tablespace containing the indexes is offline, queries can still access table data because queries do not require
an index to access the table data.

If the tablespace containing the tables is offline, the table data in the database is not accessible because the tables
are required to access the data.

In summary, if Oracle determines that it has enough information in the online tablespaces to execute a statement, it
will do so. If it needs data in an offline tablespace, then it causes the statement to fail.
Read-Only Tablespaces

The primary purpose of read-only tablespaces is to eliminate the need to perform backup and recovery of large,
static portions of a database. Oracle never updates the files of a read-only tablespace, and therefore the files can
reside on read-only media, such as CD ROMs or WORM drives.

Note: Because you can only bring a tablespace online in the database in which it was created, read-only tablespaces
are not meant to satisfy archiving or data publishing requirements.

Whenever you create a new tablespace, it is always created as read-write. The READ ONLY option of the ALTER
TABLESPACE command allows you to change the tablespace to read-only, making all of its associated datafiles read-
only as well. You can then use the READ WRITE option to make a read-only tablespace writeable again.

Read-only tablespaces cannot be modified. Therefore, they do not need repeated backup. Also, should you need to
recover your database, you do not need to recover any read-only tablespaces, because they could not have been
modified.

You can drop items, such as tables and indexes, from a read-only tablespace, just as you can drop items from an
offline tablespace. However, you cannot create or alter objects in a read-only tablespace.

Making a Tablespace Read-Only

Use the SQL command ALTER TABLESPACE to change a tablespace to read-only. For information on the ALTER
TABLESPACE command, see the Oracle7 Server SQL Reference.
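
For example, assuming a tablespace named history_ts that holds only static data, the following statements switch it to read-only and back:

ALTER TABLESPACE history_ts READ ONLY;

-- Return the tablespace to read-write when updates are needed again
ALTER TABLESPACE history_ts READ WRITE;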

Read-Only vs. Online or Offline

Making a tablespace read-only does not change its offline or online status.

Offline datafiles cannot be accessed. Bringing a datafile in a read-only tablespace online makes the file readable. The
file cannot be written to unless its associated tablespace is returned to the read-write state. The files of a read-only
tablespace can independently be taken online or offline using the DATAFILE option of the ALTER DATABASE
command.

Restrictions on Read-Only Tablespaces

You cannot add datafiles to a tablespace that is read-only, even if you take the tablespace offline. When you add a
datafile, Oracle must update the file header, and this write operation is not allowed.

To update a read-only tablespace, you must first make the tablespace writeable. After updating the tablespace, you
can then reset it to be read-only.

Read-Only Tablespaces and Recovery

Read-only tablespaces have several implications upon instance or media recovery. See Chapter 24, "Database
Recovery", for more information about recovery.

Temporary Tablespaces

Space management for sort operations is performed more efficiently using temporary tablespaces designated
exclusively for sorts. This scheme effectively eliminates serialization of space management operations involved in the
allocation and deallocation of sort space. All operations that use sorts, including joins, index builds, ordering (ORDER
BY), the computation of aggregates (GROUP BY), and the ANALYZE command to collect optimizer statistics, benefit
from temporary tablespaces. The performance gains are significant in parallel server environments.
A temporary tablespace is a tablespace that can only be used for sort segments. No permanent objects can reside in
a temporary tablespace. Sort segments are used when a segment is shared by multiple sort operations. One sort
segment exists in every instance that performs a sort operation in a given tablespace.

Temporary tablespaces provide performance improvements when you have multiple sorts that are too large to fit
into memory. The sort segment of a given temporary tablespace is created at the time of the first sort operation. The
sort segment grows by allocating extents until the segment size is equal to or greater than the total storage demands
of all of the active sorts running on that instance.

You create temporary tablespaces using the following SQL syntax:

CREATE TABLESPACE tablespace TEMPORARY

You can also alter a tablespace from PERMANENT to TEMPORARY or vice versa using the following syntax:

ALTER TABLESPACE tablespace TEMPORARY
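
A fuller sketch might look like the following (the file name and sizes are assumptions):

-- Create a tablespace dedicated to sort segments
CREATE TABLESPACE temp_ts
DATAFILE '/u02/oradata/PROD/temp_ts01.dbf' SIZE 200M
TEMPORARY;

-- Convert it back to a permanent tablespace if required
ALTER TABLESPACE temp_ts PERMANENT;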

For more information on the CREATE TABLESPACE and ALTER TABLESPACE Commands, see Chapter 4 of Oracle7
Server SQL Reference.

Datafiles

A tablespace in an Oracle database consists of one or more physical datafiles. A datafile can be associated with only
one tablespace, and only one database.

When a datafile is created for a tablespace, Oracle creates the file by allocating the specified amount of disk space
plus the overhead required for the file header. When a datafile is created, the operating system is responsible for
clearing old information and authorizations from a file before allocating it to Oracle. If the file is large, this process
might take a significant amount of time.

Additional Information: For information on the amount of space required for the file header of datafiles on your
operating system, see your Oracle operating system specific documentation.

Since the first tablespace in any database is always the SYSTEM tablespace, Oracle automatically allocates the first
datafiles of any database for the SYSTEM tablespace during database creation.

Datafile Contents

After a datafile is initially created, the allocated disk space does not contain any data; however, Oracle reserves the
space to hold only the data for future segments of the associated tablespace -- it cannot store any other program's
data. As a segment (such as the data segment for a table) is created and grows in a tablespace, Oracle uses the free
space in the associated datafiles to allocate extents for the segment.

The data in the segments of objects (data segments, index segments, rollback segments, and so on) in a tablespace
are physically stored in one or more of the datafiles that constitute the tablespace. Note that a schema object does
not correspond to a specific datafile; rather, a datafile is a repository for the data of any object within a specific
tablespace. Oracle allocates the extents of a single segment in one or more datafiles of a tablespace; therefore, an
object can "span" one or more datafiles. Unless table "striping" is used, the database administrator and end-users
cannot control which datafile stores an object.

Size of Datafiles

You can alter the size of a datafile after its creation or you can specify that a datafile should dynamically grow as
objects in the tablespace grow. This functionality allows you to have fewer datafiles per tablespace and can simplify
administration of datafiles.
For more information about resizing datafiles, see the Oracle7 Server Administrator's Guide.
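
For instance (the file name and sizes are illustrative), a datafile can be resized manually or allowed to grow automatically:

-- Manually change the size of an existing datafile
ALTER DATABASE DATAFILE '/u02/oradata/PROD/users01.dbf' RESIZE 500M;

-- Or let the file extend itself as the tablespace fills up
ALTER DATABASE DATAFILE '/u02/oradata/PROD/users01.dbf'
AUTOEXTEND ON NEXT 10M MAXSIZE 2000M;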

Offline Datafiles

You can take tablespaces offline (make unavailable) or bring them online (make available) at any time. Therefore, all
datafiles making up a tablespace are taken offline or brought online as a unit when you take the tablespace offline or
bring it online, respectively. You can take individual datafiles offline; however, this is normally done only during
certain database recovery procedures.

Pseudocolumns

A pseudocolumn is an Oracle-assigned value (pseudo-field) that behaves like a table column and is used in the same context as one, but is not actually stored on disk. You can select from pseudocolumns, but you cannot insert, update, or delete their values; pseudocolumns are allowed in SQL statements, but not in procedural statements. SQL and PL/SQL recognize the following pseudocolumns, which return specific data items: SYSDATE, SYSTIMESTAMP, ROWID, ROWNUM, UID, USER, LEVEL, CURRVAL, NEXTVAL, ORA_ROWSCN, and so on.

SYSDATE and SYSTIMESTAMP

Return the current DATE and TIMESTAMP:

SQL> SELECT sysdate, systimestamp FROM dual;

SYSDATE SYSTIMESTAMP

--------- ----------------------------------------

13-DEC-07 13-DEC-07 10.02.31.956842 AM +02:00

UID and USER

Return the User ID and Name of a database user:

SQL> SELECT uid, user FROM dual;

UID USER
---------- ------------------------------

50 MICHEL

CURRVAL and NEXTVAL

A sequence is a schema object that generates sequential numbers. When you create a sequence, you can specify its
initial value and an increment. CURRVAL returns the current value in a specified sequence.

Before you can reference CURRVAL in a session, you must use NEXTVAL to generate a number. A reference to
NEXTVAL stores the current sequence number in CURRVAL. NEXTVAL increments the sequence and returns the next
value. To obtain the current or next value in a sequence, you must use dot notation, as follows:

sequence_name.CURRVAL

sequence_name.NEXTVAL
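
For example, the empno_seq sequence used in the INSERT statements below might be created as follows (the starting value and increment are assumptions):

CREATE SEQUENCE empno_seq
START WITH 1000
INCREMENT BY 1;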

After creating a sequence, you can use it to generate unique sequence numbers for transaction processing.
However, you can use CURRVAL and NEXTVAL only in a SELECT list, the VALUES clause, and the SET clause. In the
following example, you use a sequence to insert the same employee number into two tables:

INSERT INTO emp VALUES (empno_seq.NEXTVAL, my_ename, ...);

INSERT INTO sals VALUES (empno_seq.CURRVAL, my_sal, ...);

If a transaction generates a sequence number, the sequence is incremented immediately whether you commit or roll
back the transaction.

LEVEL

You use LEVEL with the SELECT CONNECT BY statement to organize rows from a database table into a tree structure.
LEVEL returns the level number of a node in a tree structure. The root is level 1, children of the root are level 2,
grandchildren are level 3, and so on.

In the START WITH clause, you specify a condition that identifies the root of the tree. You specify the direction in
which the query walks the tree (down from the root or up from the branches) with the PRIOR operator.
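
A short sketch against the standard emp demo table (assuming the usual empno and mgr columns) shows LEVEL used to indent an organization tree:

SELECT LPAD(' ', 2 * (LEVEL - 1)) || ename AS employee, LEVEL
FROM emp
START WITH mgr IS NULL          -- the root of the tree
CONNECT BY PRIOR empno = mgr;   -- walk down from manager to subordinate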

ROWID

ROWID returns the rowid (binary address) of a row in a database table. You can use variables of type UROWID to
store rowids in a readable format. In the following example, you declare a variable named row_id for that purpose:
DECLARE row_id UROWID;

When you select or fetch a physical rowid into a UROWID variable, you can use the function ROWIDTOCHAR, which
converts the binary value to an 18-byte character string. Then, you can compare the UROWID variable to the ROWID
pseudocolumn in the WHERE clause of an UPDATE or DELETE statement to identify the latest row fetched from a
cursor.
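
A minimal PL/SQL sketch, again assuming the standard emp demo table, fetches a row's rowid and then uses it to update exactly that row:

DECLARE
  row_id UROWID;
  v_sal  emp.sal%TYPE;
BEGIN
  SELECT sal, ROWID INTO v_sal, row_id
  FROM emp
  WHERE empno = 7369;

  -- The ROWID pseudocolumn identifies the row that was just fetched
  UPDATE emp
  SET sal = v_sal * 1.1
  WHERE ROWID = row_id;
END;
/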

ROWNUM

ROWNUM returns a number indicating the order in which a row was selected from a table. The first row selected
has a ROWNUM of 1, the second row has a ROWNUM of 2, and so on. If a SELECT statement includes an ORDER BY
clause, ROWNUMs are assigned to the retrieved rows before the sort is done.

You can use ROWNUM in an UPDATE statement to assign unique values to each row in a table. Also, you can use
ROWNUM in the WHERE clause of a SELECT statement to limit the number of rows retrieved, as follows:

DECLARE

CURSOR c1 IS SELECT empno, sal FROM emp

WHERE sal > 2000 AND ROWNUM < 10; -- returns at most 9 rows

The value of ROWNUM increases only when a row is retrieved, so the only meaningful uses of ROWNUM in a WHERE
clause are

... WHERE ROWNUM < constant;

... WHERE ROWNUM <= constant;

ORA_ROWSCN

ORA_ROWSCN returns the system change number (SCN) of the last change to the block containing a row. It can
return the SCN of the last modification of the row itself if the table was created with the option ROWDEPENDENCIES
(the default is NOROWDEPENDENCIES).

The function SCN_TO_TIMESTAMP allows you to convert SCN to timestamp.

SQL> select ename, ORA_ROWSCN, SCN_TO_TIMESTAMP(ORA_ROWSCN) from emp where empno=7369;

ENAME ORA_ROWSCN SCN_TO_TIMESTAMP(ORA_ROWSCN)

---------- ---------- ----------------------------------------------------------------

SMITH 2113048 20/12/2008 16:59:51.000

Understanding Indexes and Clusters


This chapter provides an overview of data access methods using indexes and clusters that can enhance or degrade
performance.
The chapter contains the following sections:

Understanding Indexes

This section describes the following:

Tuning the Logical Structure

Choosing Columns and Expressions to Index

Choosing Composite Indexes

Writing Statements That Use Indexes

Writing Statements That Avoid Using Indexes

Re-creating Indexes

Using Nonunique Indexes to Enforce Uniqueness

Using Enabled Novalidated Constraints

Tuning the Logical Structure

Although cost-based optimization helps avoid the use of nonselective indexes within query execution, the SQL
engine must continue to maintain all indexes defined against a table, regardless of whether they are used. Index
maintenance can present a significant CPU and I/O resource demand in any write-intensive application. In other
words, do not build indexes unless necessary.

To maintain optimal performance, drop indexes that an application is not using. You can find indexes that are not
being used by using the ALTER INDEX MONITORING USAGE functionality over a period of time that is representative
of your workload. This monitoring feature records whether or not an index has been used. If you find that an index
has not been used, then drop it. Be careful to select a representative workload to monitor.
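
A brief sketch of the monitoring cycle (the index name is hypothetical; the result can be checked in the V$OBJECT_USAGE view):

-- Start recording whether the index is used
ALTER INDEX emp_ename_idx MONITORING USAGE;

-- ... run a representative workload, then check the result
SELECT index_name, used, monitoring
FROM v$object_usage
WHERE index_name = 'EMP_ENAME_IDX';

-- Stop recording when the observation period is over
ALTER INDEX emp_ename_idx NOMONITORING USAGE;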

See Also:

Oracle9i SQL Reference

Indexes within an application sometimes have uses that are not immediately apparent from a survey of statement
execution plans. An example of this is a foreign key index on a parent table, which prevents share locks from being
taken out on a child table.

If you are deciding whether to create new indexes to tune statements, then you can also use the EXPLAIN PLAN
statement to determine whether the optimizer will choose to use these indexes when the application is run. If you
create new indexes to tune a statement that is currently parsed, then Oracle invalidates the statement. When the
statement is next executed, the optimizer automatically chooses a new execution plan that could potentially use the
new index. If you create new indexes on a remote database to tune a distributed statement, then the optimizer
considers these indexes when the statement is next parsed.

Also keep in mind that the way you tune one statement can affect the optimizer's choice of execution plans for
other statements. For example, if you create an index to be used by one statement, then the optimizer can choose to
use that index for other statements in the application as well. For this reason, reexamine the application's
performance and rerun the SQL trace facility after you have tuned those statements that you initially identified for
tuning.

Note:
You can use the Oracle Index Tuning Wizard to detect tables with inefficient indexes. The Oracle Index Tuning wizard
is an Oracle Enterprise Manager integrated application available with the Oracle Tuning Pack. Similar functionality is
available from the Virtual Index Advisor (a feature of SQL Analyze) and Oracle Expert.

See Also:

Database Tuning with the Oracle Tuning Pack

Choosing Columns and Expressions to Index

A key is a column or expression on which you can build an index. Follow these guidelines for choosing keys to index:

Consider indexing keys that are used frequently in WHERE clauses.

Consider indexing keys that are used frequently to join tables in SQL statements. For more information on optimizing
joins, see the section "Using Hash Clusters".

Index keys that have high selectivity. The selectivity of an index is the percentage of rows in a table having the same
value for the indexed key. An index's selectivity is optimal if few rows have the same value.

Note:

Oracle automatically creates indexes, or uses existing indexes, on the keys and expressions of unique and primary
keys that you define with integrity constraints.

Indexing low selectivity columns can be helpful if the data distribution is skewed so that one or two values occur
much less often than other values.

Do not use standard B-tree indexes on keys or expressions with few distinct values. Such keys or expressions usually
have poor selectivity and therefore do not optimize performance unless the frequently selected key values appear
less frequently than the other key values. You can use bitmap indexes effectively in such cases, unless a high
concurrency OLTP application is involved where the index is modified frequently.

Do not index columns that are modified frequently. UPDATE statements that modify indexed columns and INSERT
and DELETE statements that modify indexed tables take longer than if there were no index. Such SQL statements
must modify data in indexes as well as data in tables. They also generate additional undo and redo.

Do not index keys that appear only in WHERE clauses with functions or operators. A WHERE clause that uses a
function, other than MIN or MAX, or an operator with an indexed key does not make available the access path that
uses the index except with function-based indexes.

Consider indexing foreign keys of referential integrity constraints in cases in which a large number of concurrent
INSERT, UPDATE, and DELETE statements access the parent and child tables. Such an index allows UPDATEs and
DELETEs on the parent table without share locking the child table.

When choosing to index a key, consider whether the performance gain for queries is worth the performance loss for
INSERTs, UPDATEs, and DELETEs and the use of the space required to store the index. You might want to experiment
by comparing the processing times of the SQL statements with and without indexes. You can measure processing
time with the SQL trace facility.

See Also:

Oracle9i Application Developer's Guide - Fundamentals for more information on the effects of foreign keys on
locking

Choosing Composite Indexes


A composite index contains more than one key column. Composite indexes can provide additional advantages over
single-column indexes:

Improved selectivity

Sometimes two or more columns or expressions, each with poor selectivity, can be combined to form a composite
index with higher selectivity.

Reduced I/O

If all columns selected by a query are in a composite index, then Oracle can return these values from the index
without accessing the table.

A SQL statement can use an access path involving a composite index if the statement contains constructs that use a
leading portion of the index.

Note:

This is no longer the case with index skip scans. See "Index Skip Scans".

A leading portion of an index is a set of one or more columns that were specified first and consecutively in the list of
columns in the CREATE INDEX statement that created the index. Consider this CREATE INDEX statement:

CREATE INDEX comp_ind

ON table1(x, y, z);

The column combinations x, (x, y), and (x, y, z) are leading portions of the index.

The combinations y, z, and (y, z) are not leading portions of the index.

Choosing Keys for Composite Indexes

Follow these guidelines for choosing keys for composite indexes:

Consider creating a composite index on keys that are used together frequently in WHERE clause conditions
combined with AND operators, especially if their combined selectivity is better than the selectivity of either key
individually.

If several queries select the same set of keys based on one or more key values, then consider creating a composite
index containing all of these keys.

Of course, consider the guidelines associated with the general performance advantages and trade-offs of indexes
described in the previous sections.

Ordering Keys for Composite Indexes

Follow these guidelines for ordering keys in composite indexes:

Create the index so the keys used in WHERE clauses make up a leading portion.

If some keys are used in WHERE clauses more frequently, then be sure to create the index so that the more
frequently selected keys make up a leading portion to allow the statements that use only these keys to use the
index.

If all keys are used in WHERE clauses equally often, then ordering these keys from most selective to least selective in
the CREATE INDEX statement best improves query performance.
If all keys are used in the WHERE clauses equally often but the data is physically ordered on one of the keys, then
place that key first in the composite index.

Writing Statements That Use Indexes

Even after you create an index, the optimizer cannot use an access path that uses the index simply because the
index exists. The optimizer can choose such an access path for a SQL statement only if it contains a construct that
makes the access path available. To allow the CBO the option of using an index access path, ensure that the
statement contains a construct that makes such an access path available.

Writing Statements That Avoid Using Indexes

In some cases, you might want to prevent a SQL statement from using an access path that uses an existing index.
You might want to do this if you know that the index is not very selective and that a full table scan would be more
efficient. If the statement contains a construct that makes such an index access path available, then you can force
the optimizer to use a full table scan through one of the following methods:

Use the NO_INDEX hint to give the CBO maximum flexibility while disallowing the use of a certain index.

Use the FULL hint to force the optimizer to choose a full table scan instead of an index scan.

Use the INDEX, INDEX_COMBINE, or AND_EQUAL hints to force the optimizer to use one index or a set of listed
indexes instead of another.

See Also:

Chapter 5, "Optimizer Hints" for more information on the NO_INDEX, FULL, INDEX, INDEX_COMBINE, and
AND_EQUAL hints
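
For example (table and index names are illustrative), the FULL and NO_INDEX hints can be used as follows:

-- Force a full table scan instead of any index access
SELECT /*+ FULL(e) */ empno, ename
FROM emp e
WHERE deptno = 10;

-- Disallow one particular index while leaving the optimizer free otherwise
SELECT /*+ NO_INDEX(e emp_deptno_idx) */ empno, ename
FROM emp e
WHERE deptno = 10;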

Parallel execution uses indexes effectively. It does not perform parallel index range scans, but it does perform
parallel index lookups for parallel nested loop join execution. If an index is very selective (there are few rows for
each index entry), then it might be better to use sequential index lookup rather than parallel table scan.

Re-creating Indexes

You might want to re-create an index to compact it and minimize fragmented space, or to change the index's
storage characteristics. When creating a new index that is a subset of an existing index or when rebuilding an existing
index with new storage characteristics, Oracle might use the existing index instead of the base table to improve the
performance of the index build.

Note:

To avoid calling DBMS_STATS after the index creation or rebuild, include the COMPUTE STATISTICS statement on the
CREATE or REBUILD. You can use the Oracle Enterprise Manager Reorg Wizard to identify indexes that require
rebuilding. The Reorg Wizard can also be used to rebuild the indexes.

However, there are cases where it can be beneficial to use the base table instead of the existing index. Consider an
index on a table on which a lot of DML has been performed. Because of the DML, the size of the index can increase
to the point where each block is only 50% full, or even less. If the index refers to most of the columns in the table,
then the index could actually be larger than the table. In this case, it is faster to use the base table rather than the
index to re-create the index.

Use the ALTER INDEX ... REBUILD statement to reorganize or compact an existing index or to change its storage
characteristics. The REBUILD statement uses the existing index as the basis for the new one. All index storage
statements are supported, such as STORAGE (for extent allocation), TABLESPACE (to move the index to a new
tablespace), and INITRANS (to change the initial number of entries).

Usually, ALTER INDEX ... REBUILD is faster than dropping and re-creating an index, because this statement uses the
fast full scan feature. It reads all the index blocks using multiblock I/O, then discards the branch blocks. A further
advantage of this approach is that the old index is still available for queries while the rebuild is in progress.
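
A sketch of a rebuild that also moves the index and gathers statistics (the names and storage values are assumptions):

ALTER INDEX emp_ename_idx REBUILD
TABLESPACE indx_ts
STORAGE (INITIAL 1M NEXT 1M)
COMPUTE STATISTICS;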

See Also:

Oracle9i SQL Reference for more information about the CREATE INDEX and ALTER INDEX statements, as well as
restrictions on rebuilding indexes

Compacting Indexes

You can coalesce leaf blocks of an index by using the ALTER INDEX statement with the COALESCE option. This option
lets you combine leaf levels of an index to free blocks for reuse. You can also rebuild the index online.
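
For example (the index name is hypothetical):

-- Merge adjacent, sparsely populated leaf blocks so their space can be reused
ALTER INDEX emp_ename_idx COALESCE;

-- Alternatively, rebuild the index without blocking DML on the table
ALTER INDEX emp_ename_idx REBUILD ONLINE;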

See Also:

Oracle9i SQL Reference and Oracle9i Database Administrator's Guide for more information about the syntax for this
statement

Using Nonunique Indexes to Enforce Uniqueness

You can use an existing nonunique index on a table to enforce uniqueness, either for UNIQUE constraints or the
unique aspect of a PRIMARY KEY constraint. The advantage of this approach is that the index remains available and
valid when the constraint is disabled. Therefore, enabling a disabled UNIQUE or PRIMARY KEY constraint does not
require rebuilding the unique index associated with the constraint. This can yield significant time savings on enable
operations for large tables.

Using a nonunique index to enforce uniqueness also lets you eliminate redundant indexes. You do not need a
unique index on a primary key column if that column already is included as the prefix of a composite index. You can
use the existing index to enable and enforce the constraint. You also save significant space by not duplicating the
index. However, if the existing index is partitioned, then the partitioning key of the index must also be a subset of the
UNIQUE key; otherwise, Oracle creates an additional unique index to enforce the constraint.

Using Enabled Novalidated Constraints

An enabled novalidated constraint behaves similarly to an enabled validated constraint for new data. Placing a
constraint in the enabled novalidated state signifies that any new data entered into the table must conform to the
constraint. Existing data is not checked. By placing a constraint in the enabled novalidated state, you enable the
constraint without locking the table.

If you change a constraint from disabled to enabled, then the table must be locked. No new DML, queries, or DDL
can occur, because there is no mechanism to ensure that operations on the table conform to the constraint during
the enable operation. The enabled novalidated state prevents operations violating the constraint from being
performed on the table.

An enabled novalidated constraint can be validated with a parallel, consistent-read query of the table to determine
whether any data violates the constraint. No locking is performed, and the enable operation does not block readers
or writers to the table. In addition, enabled novalidated constraints can be validated in parallel: Multiple constraints
can be validated at the same time and each constraint's validity check can be determined using parallel query.

Use the following approach to create tables with constraints and indexes:
Create the tables with the constraints. NOT NULL constraints can be unnamed and should be created enabled and
validated. All other constraints (CHECK, UNIQUE, PRIMARY KEY, and FOREIGN KEY) should be named and created
disabled.

Note:

By default, constraints are created in the ENABLED state.

Load old data into the tables.

Create all indexes, including indexes needed for constraints.

Enable novalidate all constraints. Do this to primary keys before foreign keys.

Allow users to query and modify data.

With a separate ALTER TABLE statement for each constraint, validate all constraints. Do this to primary keys before
foreign keys. For example,

CREATE TABLE t (a NUMBER CONSTRAINT apk PRIMARY KEY DISABLE,

b NUMBER NOT NULL);

CREATE TABLE x (c NUMBER CONSTRAINT afk REFERENCES t DISABLE);

Now you can use Import or Fast Loader to load data into table t.

CREATE UNIQUE INDEX tai ON t (a);

CREATE INDEX tci ON x (c);

ALTER TABLE t MODIFY CONSTRAINT apk ENABLE NOVALIDATE;

ALTER TABLE x MODIFY CONSTRAINT afk ENABLE NOVALIDATE;

At this point, users can start performing INSERTs, UPDATEs, DELETEs, and SELECTs on table t.

ALTER TABLE t ENABLE CONSTRAINT apk;

ALTER TABLE x ENABLE CONSTRAINT afk;

Now the constraints are enabled and validated.

See Also:

Oracle9i Database Concepts for a complete discussion of integrity constraints

Using Function-based Indexes

A function-based index includes columns that are either transformed by a function, such as the UPPER function, or
included in an expression, such as col1 + col2.

Defining a function-based index on the transformed column or expression allows that data to be returned using the
index when that function or expression is used in a WHERE clause or an ORDER BY clause. Therefore, a function-
based index can be beneficial when frequently-executed SQL statements include transformed columns, or columns
in expressions, in a WHERE or ORDER BY clause.
Function-based indexes defined with the UPPER(column_name) or LOWER(column_name) keywords allow case-
insensitive searches. For example, the following index:

CREATE INDEX uppercase_idx ON employees (UPPER(last_name));

facilitates processing queries such as:

SELECT * FROM employees

WHERE UPPER(last_name) = 'MARKSON';

Setting Parameters to Use Function-Based Indexes in Queries

To use function-based indexes in queries, you need to set the QUERY_REWRITE_ENABLED and
QUERY_REWRITE_INTEGRITY parameters.

QUERY_REWRITE_ENABLED

To enable function-based indexes for queries, set the QUERY_REWRITE_ENABLED session parameter to TRUE.
QUERY_REWRITE_ENABLED can be set to the following values:

TRUE: cost-based rewrite

FALSE: no rewrite

FORCE: forced rewrite

When QUERY_REWRITE_ENABLED is set to FALSE, then function-based indexes are not used for obtaining the values
of an expression in the function-based index. However, function-based indexes can still be used for obtaining values
in real columns.

When QUERY_REWRITE_ENABLED is set to FORCE, Oracle always uses rewrite and does not evaluate the cost before
doing so. FORCE is useful when you know that the query will always benefit from rewrite, when reduction in compile
time is important, and when you know that the optimizer may be underestimating the benefits of materialized
views.

QUERY_REWRITE_ENABLED is a session-level and also an instance-level parameter.

QUERY_REWRITE_INTEGRITY

Setting the value of the QUERY_REWRITE_INTEGRITY parameter determines how function-based indexes are used:
If the QUERY_REWRITE_INTEGRITY parameter is set to ENFORCED (the default), then Oracle uses function-based
indexes to derive values of SQL expressions only. This also includes SQL functions.

If QUERY_REWRITE_INTEGRITY is set to any value other than ENFORCED, then Oracle uses the function-based index,
even if it is based on a user-defined, rather than SQL, function.
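
A typical session-level setting might look like the following sketch (TRUSTED is only one of the possible integrity levels):

ALTER SESSION SET QUERY_REWRITE_ENABLED = TRUE;
ALTER SESSION SET QUERY_REWRITE_INTEGRITY = TRUSTED;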

Function-based indexes are an efficient mechanism for evaluating statements that contain functions in WHERE
clauses. You can create a function-based index to store computation-intensive expressions in the index. This permits
Oracle to bypass computing the value of the expression when processing SELECT and DELETE statements. When
processing INSERT and UPDATE statements, however, Oracle evaluates the function to process the statement.

For example, if you create the following index:

CREATE INDEX idx ON table_1 (a + b * (c - 1), a, b);


then Oracle can use it when processing queries such as:

SELECT a

FROM table_1

WHERE a + b * (c - 1) < 100;

You can also use function-based indexes for linguistic sort indexes that provide efficient linguistic collation in SQL
statements.

Oracle treats descending indexes as function-based indexes. The columns marked DESC are sorted in descending
order.

See Also:

Oracle9i Application Developer's Guide - Fundamentals and Oracle9i Database Administrator's Guide for more
information on using function-based indexes

Oracle9i SQL Reference for more information on the CREATE INDEX statement

Using Index-Organized Tables

An index-organized table differs from an ordinary table in that the data for the table is held in its associated index.
Changes to the table data, such as adding new rows, updating rows, or deleting rows, result only in updating the
index. Because data rows are stored in the index, index-organized tables provide faster key-based access to table
data for queries that involve exact match or range search or both.

See Also:

Oracle9i Database Concepts

Using Bitmap Indexes

This section describes:

When to Use Bitmap Indexes

Using Bitmap Indexes with Good Performance

Initialization Parameters for Bitmap Indexing

Using Bitmap Access Plans on Regular B-tree Indexes

Bitmap Index Restrictions

See Also:

Oracle9i Database Concepts and Oracle9i Data Warehousing Guide for more information on bitmap indexing

When to Use Bitmap Indexes

This section describes three aspects of indexing that you must evaluate when deciding whether to use bitmap
indexing on a given table:

Performance Considerations for Bitmap Indexes

Comparing B-tree Indexes to Bitmap Indexes


Maintenance Considerations for Bitmap Indexes

Performance Considerations for Bitmap Indexes

Bitmap indexes can substantially improve performance of queries that have all of the following characteristics:

The WHERE clause contains multiple predicates on low- or medium-cardinality columns.

The individual predicates on these low- or medium-cardinality columns select a large number of rows.

Bitmap indexes have been created on some or all of these low- or medium-cardinality columns.

The tables being queried contain many rows.

You can use multiple bitmap indexes to evaluate the conditions on a single table. Bitmap indexes are thus highly
advantageous for complex ad hoc queries that contain lengthy WHERE clauses. Bitmap indexes can also provide
optimal performance for aggregate queries and for optimizing joins in star schemas.
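
A short sketch on a hypothetical customers table with low-cardinality columns:

CREATE BITMAP INDEX cust_gender_bix ON customers (gender);
CREATE BITMAP INDEX cust_region_bix ON customers (region);

-- Both bitmaps can be combined to resolve the predicates before touching the table
SELECT COUNT(*)
FROM customers
WHERE gender = 'F'
AND region = 'WEST';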

See Also:

Oracle9i Database Concepts for more information on optimizing anti-joins and semi-joins

Comparing B-tree Indexes to Bitmap Indexes

Bitmap indexes can provide considerable storage savings over the use of B-tree indexes. In databases containing
only B-tree indexes, you must anticipate the columns that are commonly accessed together in a single query, and
create a composite B-tree index on these columns.

Not only would this B-tree index require a large amount of space, it would also be ordered. That is, a B-tree index on
(marital_status, region, gender) is useless for queries that only access region and gender. To completely index the
database, you must create indexes on the other permutations of these columns. For the simple case of three low-
cardinality columns, there are six possible composite B-tree indexes. You must consider the trade-offs between disk
space and performance needs when determining which composite B-tree indexes to create.

Bitmap indexes solve this dilemma. Bitmap indexes can be efficiently combined during query execution, so three
small single-column bitmap indexes can do the job of six three-column B-tree indexes.

Bitmap indexes are much more efficient than B-tree indexes, especially in data warehousing environments. Bitmap
indexes are created not only for efficient space usage but also for efficient execution, and the latter is somewhat
more important.

Do not create bitmap indexes on unique key columns. However, for columns where each value is repeated hundreds
or thousands of times, a bitmap index typically is less than 25% of the size of a regular B-tree index. The bitmaps
themselves are stored in compressed format.

Simply comparing the relative sizes of B-tree and bitmap indexes is not an accurate measure of effectiveness,
however. Because of their different performance characteristics, keep B-tree indexes on high-cardinality columns,
while creating bitmap indexes on low-cardinality columns.

Maintenance Considerations for Bitmap Indexes

Bitmap indexes benefit data warehousing applications, but they are not appropriate for OLTP applications with a
heavy load of concurrent INSERTs, UPDATEs, and DELETEs. In a data warehousing environment, data is maintained
usually by way of bulk inserts and updates. Index maintenance is deferred until the end of each DML operation. For
example, when you insert 1000 rows, the inserted rows are placed into a sort buffer and then the updates of all 1000
index entries are batched. (This is why SORT_AREA_SIZE must be set properly for good performance with inserts and
updates on bitmap indexes.) Thus, each bitmap segment is updated only once in each DML operation, even if more
than one row in that segment changes.

Note:

The sorts described previously are regular sorts and use the regular sort area, determined by SORT_AREA_SIZE. The
BITMAP_MERGE_AREA_SIZE and CREATE_BITMAP_AREA_SIZE initialization parameters described in "Initialization
Parameters for Bitmap Indexing" affect only the specific operations indicated by the parameter names.

DML and DDL statements, such as UPDATE, DELETE, and DROP TABLE, affect bitmap indexes the same way they do
traditional indexes; the consistency model is the same. A compressed bitmap for a key value is made up of one or
more bitmap segments, each of which is at most half a block in size (although it can be smaller). The locking
granularity is one such bitmap segment. This can affect performance in environments where many transactions
make simultaneous updates. If numerous DML operations have caused increased index size and decreasing
performance for queries, then you can use the ALTER INDEX ... REBUILD statement to compact the index and restore
efficient performance.

A B-tree index entry contains a single rowid. Therefore, when the index entry is locked, a single row is locked. With
bitmap indexes, an entry can potentially contain a range of rowids. When a bitmap index entry is locked, the entire
range of rowids is locked. The number of rowids in this range affects concurrency. As the number of rowids increases
in a bitmap segment, concurrency decreases.

Locking issues affect DML operations and can affect heavy OLTP environments. Locking issues do not, however,
affect query performance. As with other types of indexes, updating bitmap indexes is a costly operation.
Nonetheless, for bulk inserts and updates where many rows are inserted or many updates are made in a single
statement, performance with bitmap indexes can be better than with regular B-tree indexes.

Using Bitmap Indexes with Good Performance

This section discusses performance issues with bitmap indexes.

Using Hints with Bitmap Indexes

The INDEX hint works with bitmap indexes in the same way as with traditional indexes.

The INDEX_COMBINE hint identifies the most cost effective indexes for the optimizer. The optimizer recognizes all
indexes that can potentially be combined, given the predicates in the WHERE clause. However, it might not be cost
effective to use all of them. Oracle recommends using INDEX_COMBINE rather than INDEX for bitmap indexes,
because it is a more versatile hint.

In deciding which of these hints to use, the optimizer includes nonhinted indexes that appear cost effective, as well
as indexes named in the hint. If certain indexes are given as arguments for the hint, then the optimizer tries to use
some combination of those particular bitmap indexes.

If the hint does not name indexes, then all indexes are considered hinted. Hence, the optimizer tries to combine as
many as possible, given the WHERE clause, without regard to cost effectiveness. The optimizer always tries to use
hinted indexes in the plan, regardless of whether it considers them cost effective.

See Also:

Chapter 5, "Optimizer Hints" for more information on the INDEX_COMBINE hint

Performance Tips for Bitmap Indexes


When creating bitmap indexes, Oracle needs to consider the theoretical maximum number of rows that will fit in a
data block. For this reason, to get optimal performance and disk space usage with bitmap indexes, consider the
following tips:

To make compressed bitmaps as small as possible, declare NOT NULL constraints on all columns that cannot contain
null values.

Fixed-length datatypes are more amenable to a compact bitmap representation than variable length datatypes.

See Also:

Chapter 9, "Using EXPLAIN PLAN" for more information about bitmap EXPLAIN PLAN output

Mapping Bitmaps to Rowids Efficiently

Use SQL statements with the ALTER TABLE syntax to optimize the mapping of bitmaps to rowids. The MINIMIZE
RECORDS_PER_BLOCK clause enables this optimization, and the NOMINIMIZE RECORDS_PER_BLOCK clause disables
it.

When MINIMIZE RECORDS_PER_BLOCK is enabled, Oracle scans the table and determines the maximum number of
records in any block and restricts this table to this maximum number. This enables bitmap indexes to allocate fewer
bits for each block and results in smaller bitmap indexes. The block and record allocation restrictions that this
statement places on the table are beneficial only to bitmap indexes. Therefore, Oracle does not recommend using
this mapping on tables that are not heavily indexed with bitmap indexes.
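
A sketch of the sequence of statements (the table and index names are hypothetical; MINIMIZE must be issued before any bitmap index exists on the table):

-- Restrict the number of records for each block so the bitmaps can be smaller
ALTER TABLE customers MINIMIZE RECORDS_PER_BLOCK;

CREATE BITMAP INDEX cust_state_bix ON customers (state);

-- The restriction can be lifted later, after dropping the bitmap indexes
ALTER TABLE customers NOMINIMIZE RECORDS_PER_BLOCK;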

See Also:

"Using Bitmap Indexes" for more information

Oracle9i SQL Reference for syntax on MINIMIZE and NOMINIMIZE

Using Bitmap Indexes on Index-Organized Tables

The rowids used in bitmap indexes on index-organized tables are in a mapping table, not in the base table. The
mapping table maintains a mapping of logical rowids (needed to access the index-organized table) to physical rowids
(needed by the bitmap index code). Each index-organized table has one mapping table, used by all the bitmap
indexes created on that table.

Note:

Moving rows in an index-organized table does not make the bitmap indexes built on that index-organized table
unusable.

See Also:

Oracle9i Database Concepts for information on bitmap indexes and index-organized tables

Indexing Null Values

Bitmap indexes index nulls, whereas all other index types do not. Consider, for example, a table with STATE and
PARTY columns, on which you want to perform the following query:

SELECT COUNT(*)

FROM people

WHERE state='CA'
AND party !='D';

Indexing nulls enables a bitmap minus plan where bitmaps for party equal to D and NULL are subtracted from state
bitmaps equal to CA. The EXPLAIN PLAN output looks like the following:

SELECT STATEMENT

SORT AGGREGATE

BITMAP CONVERSION COUNT

BITMAP MINUS

BITMAP MINUS

BITMAP INDEX SINGLE VALUE STATE_BM

BITMAP INDEX SINGLE VALUE PARTY_BM

BITMAP INDEX SINGLE VALUE PARTY_BM

If a NOT NULL constraint exists on party, then the second minus operation (where party is null) is left out, because it
is not needed.

Initialization Parameters for Bitmap Indexing

The following initialization parameters affect the use of bitmap indexes:

CREATE_BITMAP_AREA_SIZE affects memory allocated for bitmap creation.

BITMAP_MERGE_AREA_SIZE affects memory used to merge bitmaps from an index range scan.

SORT_AREA_SIZE affects memory used when inserting or updating bitmap indexes.

See Also:

Oracle9i Database Reference for more information on these parameters

Using Bitmap Access Plans on Regular B-tree Indexes

If there is at least one bitmap index on the table, then the optimizer considers using a bitmap access path using
regular B-tree indexes for that table. This access path can involve combinations of B-tree and bitmap indexes, but
might not involve any bitmap indexes at all. However, the optimizer does not generate a bitmap access path using a
single B-tree index unless instructed to do so by a hint.

To use bitmap access paths for B-tree indexes, the rowids stored in the indexes must be converted to bitmaps. After
such a conversion, the various Boolean operations available for bitmaps can be used. As an example, consider the
following query, where there is a bitmap index on column c1, and regular B-tree indexes on columns c2 and c3.

EXPLAIN PLAN FOR

SELECT COUNT(*)

FROM t

WHERE c1 = 2 AND c2 = 6

OR c3 BETWEEN 10 AND 20;


SELECT STATEMENT

SORT AGGREGATE

BITMAP CONVERSION COUNT

BITMAP OR

BITMAP AND

BITMAP INDEX c1_ind SINGLE VALUE

BITMAP CONVERSION FROM ROWIDS

INDEX c2_ind RANGE SCAN

BITMAP CONVERSION FROM ROWIDS

SORT ORDER BY

INDEX c3_ind RANGE SCAN

Note:

This statement is executed by accessing indexes only, so no table access is necessary.

Here, a COUNT option for the BITMAP CONVERSION row source counts the number of rows matching the query.
There are also conversions from rowids in the plan to generate bitmaps from the rowids retrieved from the B-tree
indexes. The ORDER BY sort appears in the plan because the conditions on column c3 result in the return of more
than one list of rowids from the B-tree index. These lists are sorted before being converted into a bitmap.

Bitmap Index Restrictions

Bitmap indexes have the following restrictions:

For bitmap indexes with direct load, the SORTED_INDEX flag does not apply.

Bitmap indexes are not considered by the rule-based optimizer.

Bitmap indexes cannot be used for referential integrity checking.

Using Bitmap Join Indexes

In addition to a bitmap index on a single table, you can create a bitmap join index, which is a bitmap index for the
join of two or more tables. A bitmap join index is a space-saving way to reduce the volume of data that must be
joined, by performing restrictions in advance. For each value in a column of a table, a bitmap join index stores the
rowids of corresponding rows in another table. In a data warehousing environment, the join condition is an equi-
inner join between the primary key column(s) of the dimension tables and the foreign key column(s) in the fact
table.

Bitmap join indexes are much more efficient in storage than materialized join views, an alternative for materializing
joins in advance. This is because the materialized join views do not compress the rowids of the fact tables.
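
A sketch of a bitmap join index between a hypothetical sales fact table and a customers dimension table (cust_id is assumed to be the primary key of customers):

CREATE BITMAP INDEX sales_cust_region_bjix
ON sales (customers.region)
FROM sales, customers
WHERE sales.cust_id = customers.cust_id;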

See Also:

Oracle9i Data Warehousing Guide for examples and restrictions of bitmap join indexes

Using Domain Indexes


Domain indexes are built using the indexing logic supplied by a user-defined indextype. An indextype provides an
efficient mechanism to access data that satisfy certain operator predicates. Typically, the user-defined indextype is
part of an Oracle option, like the Spatial option. For example, the SpatialIndextype allows efficient search and
retrieval of spatial data that overlap a given bounding box.

The cartridge determines the parameters you can specify in creating and maintaining the domain index. Similarly,
the performance and storage characteristics of the domain index are presented in the specific cartridge
documentation.

Refer to the appropriate cartridge documentation for information such as the following:

What datatypes can be indexed?

What indextypes are provided?

What operators does the indextype support?

How can the domain index be created and maintained?

How do we efficiently use the operator in queries?

What are the performance characteristics?

Note:

You can also create index types with the CREATE INDEXTYPE statement.

See Also:

Oracle Spatial User's Guide and Reference for information about the SpatialIndextype

Using Clusters

Clusters are groups of one or more tables that are physically stored together because they share common columns
and usually are used together. Because related rows are physically stored together, disk access time improves.

To create a cluster, use the CREATE CLUSTER statement.
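
A minimal sketch of an indexed cluster for an emp/dept style pair of tables (the names, datatypes, and SIZE value are assumptions):

CREATE CLUSTER emp_dept_cluster (deptno NUMBER(2))
SIZE 600;

-- An indexed cluster needs a cluster index before rows can be inserted
CREATE INDEX emp_dept_cluster_idx ON CLUSTER emp_dept_cluster;

CREATE TABLE dept_c (
  deptno NUMBER(2) PRIMARY KEY,
  dname  VARCHAR2(14))
CLUSTER emp_dept_cluster (deptno);

CREATE TABLE emp_c (
  empno  NUMBER(4) PRIMARY KEY,
  ename  VARCHAR2(10),
  deptno NUMBER(2))
CLUSTER emp_dept_cluster (deptno);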

See Also:

Oracle9i Database Concepts for more information on clusters

Follow these guidelines when deciding whether to cluster tables:

Cluster tables that are accessed frequently by the application in join statements.

Do not cluster tables if the application joins them only occasionally or modifies their common column values
frequently. Modifying a row's cluster key value takes longer than modifying the value in an unclustered table,
because Oracle might need to migrate the modified row to another block to maintain the cluster.

Do not cluster tables if the application often performs full table scans of only one of the tables. A full table scan of a
clustered table can take longer than a full table scan of an unclustered table. Oracle is likely to read more blocks,
because the tables are stored together.

Cluster master-detail tables if you often select a master record and then the corresponding detail records. Detail
records are stored in the same data block(s) as the master record, so they are likely still to be in memory when you
select them, requiring Oracle to perform less I/O.
Store a detail table alone in a cluster if you often select many detail records of the same master. This measure
improves the performance of queries that select detail records of the same master, but does not decrease the
performance of a full table scan on the master table. An alternative is to use an index-organized table.

Do not cluster tables if the data from all tables with the same cluster key value exceeds more than one or two Oracle
blocks. To access a row in a clustered table, Oracle reads all blocks containing rows with that value. If these rows
take up multiple blocks, then accessing a single row could require more reads than accessing the same row in an
unclustered table.

Do not cluster tables when the number of rows for each cluster key value varies significantly. This causes waste of
space for the low cardinality key value; it causes collisions for the high cardinality key values. Collisions degrade
performance.

Consider the benefits and drawbacks of clusters with respect to the needs of the application. For example, you might
decide that the performance gain for join statements outweighs the performance loss for statements that modify
cluster key values. You might want to experiment and compare processing times with the tables both clustered and
stored separately.

See Also:

Oracle9i Database Administrator's Guide for more information on creating clusters

Using Hash Clusters

Hash clusters group table data by applying a hash function to each row's cluster key value. All rows with the same
cluster key value are stored together on disk. Consider the benefits and drawbacks of hash clusters with respect to
the needs of the application. You might want to experiment and compare processing times with a particular table as
it is stored in a hash cluster, and as it is stored alone with an index.
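
A brief sketch of a single-table hash cluster keyed on an order number (all names and sizing values are assumptions):

CREATE CLUSTER order_cluster (order_id NUMBER)
SIZE 512 HASHKEYS 10000;

-- No cluster index is needed; the hash function locates the block directly
CREATE TABLE orders_c (
  order_id    NUMBER PRIMARY KEY,
  order_date  DATE,
  customer_id NUMBER)
CLUSTER order_cluster (order_id);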

Follow these guidelines for choosing when to use hash clusters:

Use hash clusters to store tables accessed frequently by SQL statements with WHERE clauses, if the WHERE clauses
contain equality conditions that use the same column or combination of columns. Designate this column or
combination of columns as the cluster key.

Store a table in a hash cluster if you can determine how much space is required to hold all rows with a given cluster
key value, including rows to be inserted immediately as well as rows to be inserted in the future.

Do not store a table in a hash cluster if the application often performs full table scans and if you must allocate a great
deal of space to the hash cluster in anticipation of the table growing. Such full table scans must read all blocks
allocated to the hash cluster, even though some blocks might contain few rows. Storing the table alone reduces the
number of blocks read by full table scans.

Do not store a table in a hash cluster if the application frequently modifies the cluster key values. Modifying a row's
cluster key value can take longer than modifying the value in an unclustered table, because Oracle might need to
migrate the modified row to another block to maintain the cluster.

Storing a single table in a hash cluster can be useful, regardless of whether the table is joined frequently with other
tables, as long as hashing is appropriate for the table based on the points in this list.
