Oracle DBA Survival Guide
Oracle DBA Basics FAQs
1. What is an instance? Draw Architecture?
2. What is SGA?
3. What is PGA (or) what is pga_aggregate_target?
4. What are new memory parameters in Oracle 10g?
5. What are new memory parameters in Oracle 11g?
6. What are the mandatory background processes?
7. What are the optional background processes?
8. What are the new background processes in Oracle 10g?
9. How do you use automatic PGA memory management with Oracle 9i and above?
10. Explain two easy SQL optimizations?
11. What are the new features in Oracle 11gR1?
12. What are the new features in Oracle 11g R2?
13. What are the new features in Oracle 12c?
14. Which process gets data from datafiles into the DB cache?
15. Which background process writes data to datafiles?
16. Which background process writes undo data?
17. What are physical components of Oracle database?
18. What are logical components of Oracle database?
19. Types of segment space management?
20. Types of extent management?
21. What are the differences between LMTS and DMTS?
22. What is a datafile?
23. What are the contents of control file?
24. What is the use of redo log files?
25. What are the uses of undo tablespace or redo segments?
26. How can an undo tablespace guarantee retention of required undo data?
27. What is ORA-01555 - snapshot too old error and how do you avoid it?
28. What is the use/size of temporary tablespace?
29. What is the use of password file?
30. How to create password file?
31. How many types of indexes are there?
32. What is a bitmap index & when will it be used?
33. What is a B-tree index & when will it be used?
34. How will you find out index fragmentation?
35. What is the difference between delete and truncate?
36. What's the difference between a primary key and a unique key?
37. What is the difference between schema and user?
38. What is the difference between SYSDBA, SYSOPER and SYSASM?
39. What is the difference between SYS and SYSTEM?
40. What is the difference between view and materialized view?
41. What are materialized view refresh types and which is default?
42. How does fast refresh happen?
43. How to find out when was a materialized view refreshed?
44. What is materialized view log (type)?
45. What is atomic refresh in mviews?
46. How to find out whether database/tablespace/datafile is in backup mode or not?
47. What is row chaining?
48. What is row migration?
49. What are different types of partitions?
50. What is local partitioned index and global partitioned index?
51. How will you recover if you lose one/all control file(s)?
52. Why are more archivelogs generated when the database is in begin backup mode?
53. What UNIX parameters will you set during Oracle installation?
54. What is the use of INITRANS and MAXTRANS in a table definition?
55. What are the differences between dbms_job and dbms_scheduler?
56. What are the differences between dbms_scheduler and cron jobs?
57. Difference between CPU & PSU patches?
58. What you will do if (local) inventory corrupted [or] opatch lsinventory is giving error?
59. What are the entries/location of oraInst.loc?
60. What is the difference between central/global inventory and local inventory?
61. What is the use of root.sh & oraInstRoot.sh?
62. What is transportable tablespace (and across platforms)?
63. How can you transport tablespaces across platforms with different endian formats?
64. What is XTTS (cross-platform transportable tablespace)?
65. What is the difference between restore point & guaranteed restore point?
66. What is the difference between 10g/11g OEM Grid control and 12c Cloud control?
67. What are the components of Grid control?
68. What are the new features of 12c Cloud control?
69. How to find if your Oracle database is 32 bit or 64 bit?
70. How to find opatch Version?
71. Which of the following does not affect the size of the SGA?
72. A set of Dictionary tables are created?
73. The order in which Oracle processes a single SQL statement is?
74. What are the mandatory datafiles to create a database in Oracle 11g?
75. In one server can we have different oracle versions?
76. How do sessions communicate with database?
77. Which SGA memory structure cannot be resized dynamically after instance startup?
78. When a session changes data, where does the change get written?
79. How many maximum no of control files we can have within a database?
80. System Data File Consists of?
81. What is the function of SMON in instance recovery?
82. Which action occurs during a checkpoint?
83. SMON process is used to write into LOG files?
84. Oracle does not consider a transaction committed until?
85. How many maximum DBWn (Db writers) we can invoke?
86. Which activity would generate less undo data?
87. What happens when a user issues a COMMIT?
88. What happens when a user process fails?
89. What are the free buffers in the database buffer cache?
90. When the SMON Process perform ICR?
91. Which dynamic view can be queried when a database is started up in no mount state?
92. Which two tasks occur as a database transitions from the mount stage to the open stage?
93. In which situation is it appropriate to enable the restricted session mode?
94. Which is the component of an Oracle instance?
95. Which process is involved when a user starts a new session on the database server?
96. In the event of an Instance failure, which files store command data NOT written to the datafiles?
97. When are the base tables of the data dictionary created?
98. Sequence of events takes place while starting a Database is?
99. The alert log will never contain information about which database activity?
100. Where can you find the non-default parameters when an instance is started?
101. Which tablespace is used as the temporary tablespace if TEMPORARY TABLESPACE is not specified for a
user?
102. User SCOTT creates an index with this statement: CREATE INDEX emp_indx on employee (empno). In which
tablespace would be the index created?
103. Which data dictionary view shows the available free space in a certain tablespace?
104. Which methods increase the size of a tablespace?
105. What does the command ALTER DATABASE . . . RENAME DATAFILE do?
106. Can you drop objects from a read-only tablespace?
107. SYSTEM TABLESPACE can be made off-line?
108. Data dictionary can span across multiple Tablespaces?
109. Multiple Tablespaces can share a single datafile?
110. All datafiles related to a Tablespace are removed when the Tablespace is dropped?
111. What is a default role?
112. Who is the owner of a role?
113. When granting the system privilege, which clause enables the grantee to further grant the privilege to other
users or roles?
114. Which view will show a list of privileges that are available for the current session to a user?
115. Which view shows all of the objects accessible to the user in a database?
116. Which statement about profiles is false?
117. Which password management feature is NOT available by using a profile?
118. Which resource can not be controlled using profiles?
119. You want to retrieve information about account expiration dates from the data dictionary. Which view do
you use?
120. It is very difficult to grant and manage common privileges needed by different groups of database users
using roles?
121. Which data dictionary view would you query to retrieve a table’s header block number?
122. When tables are stored in locally managed tablespaces, where is extent allocation information stored?
123. Which of the following three portions of a data block are collectively called as Overhead?
124. Can a tablespace hold objects from different schemas?
125. Which data dictionary view would you query to retrieve a table’s header block number?
126. What is default value for storage parameter INITIAL in 10g if extent management is Local?
127. Using which package we can convert Tablespace from DMTS to LMTS?
128. Is it Possible to Change ORACLE Block size after creating database?
129. Locally managed tablespaces will increase performance?
130. Index is a Space demanding Object?
131. What is a potential reason for a Snapshot too old error message?
132. An Oracle user receives the following error: ORA-01555 SNAPSHOT TOO OLD. What is the possible solution?
133. The status of the Rollback segment can be viewed through?
134. Explicitly we can assign transaction to a rollback segment?
135. Are uncommitted transactions written to flashback redologs?
136. Is it possible to do flashback after truncate?
137. Can we restore a dropped table after a new table with the same name has been created?
138. Which following command will clear database recyclebin?
139. What is the OPTIMAL parameter?
140. Flashback query time depends on?
141. Can we create spfile in shutdown mode?
142. Can we alter static parameters by using scope=both?
143. Can we take backup of spfile in RMAN?
144. Does the Drop Database command remove the spfile?
145. Using which SQL command we can alter the parameters?
146. OMF database will improve the performance?
147. Max number of controlfiles that can be multiplexed in an OMF database?
148. Which environment variable is used to help set up Oracle names?
149. Which Net8 component waits for incoming requests on the server side?
150. What is the listener name when you start the listener without specifying an argument?
151. When is a request sent to a listener?
152. In which file is the information that host naming is enabled stored?
153. Which protocols can oracle Net 11g Use?
154. Which of the following statements about listeners is correct?
155. Can we perform DML operation on Materialized view?
156. Materialized views are schema objects that can be used to summarize, precompute, replicate and distribute
data?
157. Does a materialized view occupy space?
158. Can we name a Materialized View log?
159. How to improve sqlldr (SQL*Loader) performance?
160. By using which view can a normal user see public database link?
161. Can we change the refresh interval of a Materialized View?
162. Can we use a database link even after the target user has changed his password?
163. Can we convert a materialized view from refresh fast to complete?
164. A normal user can create public database link?
165. If we truncate the master table, what happens to the materialized view log on that table?
166. What is the correct procedure for multiplexing online redo logs?
167. In which situation would you need to create a new control file for an existing database?
168. When configuring a database for ARCHIVELOG mode, you use an initialisation parameter to specify which
action?
169. Which command creates a text backup of the control file?
170. You are configuring a database for ARCHIVELOG mode. Which initialization parameter should you use?
171. How does a DBA specify multiple control files?
172. Which dynamic view should a DBA query to obtain information about the different sections of the control
file?
173. What is the characteristic of the control file?
174. Which statements about online redo log members in a group are true?
175. Which command does a DBA use to list the current status of archiving?
176. When performing an open database backup, which statement is NOT true?
177. Which task can a DBA perform using the export/import facility?
178. Why does this command cause an error?
179. Which import option do you use to create tables without data?
180. Which export option will generate code to create an initial extent that is equal to the sum of the sizes of all
the extents currently allocated to an object?
181. Can I take 1 dump file set from my source database and import it into multiple databases?
182. Can we export a dropped table?
183. What is the default value for IGNORE parameter in EXP/IMP?
184. Why is Direct Path Export Faster?
185. Is there a way to estimate the size of an export job before it gets underway?
186. Can I monitor a Data Pump Export or Import job while the job is in progress?
187. If a job is stopped either voluntarily or involuntarily, can I restart it?
188. Does Data Pump support Flashback?
189. If the tablespace is read only, can we export objects from that tablespace?
190. Dump files exported using traditional EXP are compatible with DATAPUMP?
191. Before a DBA creates a transportable tablespace, which condition must be completed?
192. Can we transport tablespace from one database to another database which is having SYS owned objects?
193. What is default value for TRANSPORT_TABLESPACE Parameter in EXP?
194. How to find whether tablespace is created in that database or transported from another database?
195. Can we Perform TTS using EXPDP?
196. Can we Transport Tablespace which has Materialized View in it?
197. When would a DBA need to perform a media recovery?
198. Why would you set a data file offline when the database is in MOUNT state?
199. What is the cause of media failures?
200. Which of the following would not require you to perform an incomplete recovery?
201. In what scenario you have to open a database with reset logs option?
202. Is it possible taking consistent backup if the database is in NOARCHIVELOG mode?
203. The database is in archivelog mode and an un-backed-up datafile is lost. What happens?
204. You should issue a backup of the control file after issuing which command?
205. The alert log will never contain specific information about which database backup activity?
206. A tablespace becomes unavailable because of a failure. The database is running in NOARCHIVELOG mode.
What should the DBA do to make the database available?
207. How often does a read-only tablespace need to be backed up?
208. With the instance down, how would you recover a lost control file?
209. Which action does Oracle recommend after a DBA recovers from the loss of the current online redo-log?
210. Which command creates a text backup of the control file?
211. Which option is used in the parameter file to detect corruptions in an Oracle data block?
212. Your database is configured in ARCHIVELOG mode. Which backups cannot be performed?
213. You are using hot backup without being in archivelog mode, can you recover in the event of a failure?
214. Which following statement is true when tablespaces are put in backup mode for hot backups?
215. Can a consistent backup be performed when the database is open?
216. Can we shut down the database if it is in BEGIN BACKUP mode?
217. Which data dictionary view helps you to view whether tablespace is in BEGIN BACKUP Mode or not?
218. Which command is used to allow RMAN to store a group of commands in the recovery catalog?
219. When using Recovery Manager without a catalog, the connection to the target database?
220. Work is done by Recovery Manager through?
221. You perform an incomplete database recovery using RMAN. Which state of target database is needed?
222. Is it possible to perform Transportable tablespace (TTS) using RMAN?
223. Which type of file does RMAN NOT include in its backups?
224. When using Recovery Manager without a catalog, the connection to the target database should be made as?
225. RMAN online backup generates excessive Redo information?
226. Which background process will be invoked when we enable BLOCK CHANGE TRACKING?
227. Where should a recovery catalog be created?
228. How to list restore points in RMAN?
229. Without LIST FAILURE can we say ADVISE FAILURE in Data Recovery Advisor?
230. Import Catalog Command is used for?
231. What does interfile backup parallelism do?
232. What is the difference between pfile and spfile? Where are these files located?
233. What will you do if pfile and spfile file is deleted? Can you start the database?
234. What is the difference between Static and Dynamic init.ora/spfile parameters?
235. What is the complete syntax to set DB_CACHE_SIZE in memory and spfile?
236. How do we configure multiple buffer caches in Oracle? What's the benefit? Does setting multiple caches
require a database restart?
237. What is Oracle Golden Gate?
238. Can we create Tablespaces of multiple Block Sizes. If yes, what is the Syntax?
239. How do you calculate the size of oracle memory areas Buffer Cache, Log Buffer, Shared Pool, PGA etc?
240. What is OMF? What spfile parameters are used to configure OMF. What is the benefit?
241. What is Database Cloning? Why Cloning is needed? What are the steps to clone a database?
242. What is Oracle Streams?
243. There are 2 control files for a database. What will happen when 1 control file is deleted and you try to start
database? How you will fix this problem?
244. What is Dynamic performance view and What is Data Dictionary Views. Give some examples of each?
245. You are working in a database that does a lot of sorting, i.e. SELECT queries use a lot of ORDER BY and GROUP
BY. What Oracle memory area and physical file/tablespace do you need to tune, and how?
246. Why do we upgrade a database? What are the steps to upgrade a database? Any errors you got during an upgrade?
247. What is the 'MEMORY_TARGET not supported' error? How do you fix it?
248. What are the steps to manually create a database?
249. A DBA ran a delete statement to delete all records in a table with 50 million rows. While the delete was
running, his SQL*Plus session terminated abnormally. What will Oracle do internally?
250. What is Oracle Dataguard?
251. Can we change DB_BLOCK_SIZE? If yes, what are the steps?
252. Explain the Oracle Architecture?
253. What happens internally in Oracle when a User Connects and run a SELECT Query? What SGA areas and
background processes are involved?
254. How do you create a tablespace, undo tablespace and temp tablespace. What are the Syntax?
255. You logged in as the HR user, created an EMP_BIG table and began inserting 1 million (10 lakh) rows. While
inserting, you got the error ORA-01688: unable to extend table EMP_BIG by 512 in tablespace HR_DATA. What are
the two ways to fix this tablespace error?
256. What are the steps to rename a database?
257. What is the syntax to create a user and roles?
258. What are the 3 init.ora parameters that manage UNDO? What is their usage?
259. What is the 'snapshot too old' error? How do you fix it?
260. What is Undo Retention Guarantee? How do we set it? What are the pros and cons of setting it?
261. What are System Privileges and Object Privileges? Give some examples? What Data Dictionary view we use
to check both?
262. What is PGA? What information is stored in PGA? What is PGA Tuning?
263. What are the steps to identify a slow running SQL and tune it?
264. What is all the preparation works a DBA need to do before installing Oracle?
265. Any error that you got during Oracle installation and how did you fix it?
266. What is default tablespace and temporary tablespace?
267. Which privilege allows you to select from tables owned by other users?
268. What command we use to revoke system privilege?
269. How do we create a Role?
270. Difference between non-deferred and deferred constraints?
271. Difference between varchar and varchar2 data types?
272. In which language Oracle has been developed?
273. What is RAW datatype?
274. What is the use of NVL function?
275. Whether any commands are used for Months calculation? If so, what are they?
276. What are nested tables?
277. What is COALESCE function?
278. What is BLOB datatype?
279. How do we represent comments in Oracle?
280. What is DML?
281. What is the difference between TRANSLATE and REPLACE?
282. How do we display rows from the table without duplicates?
283. What is the usage of Merge Statement?
284. What is NULL value in oracle?
285. What is USING Clause and give example?
286. What is key preserved table?
287. What is WITH CHECK OPTION?
288. What is the use of Aggregate functions in Oracle?
289. What do you mean by GROUP BY Clause?
290. What is a sub query and what are the different types of subqueries?
291. What is cross join?
292. What are temporal data types in Oracle?
293. How do we create privileges in Oracle?
294. What is VArray?
295. How do we get field details of a table?
296. What is the difference between rename and alias?
297. What is a View?
298. What is a cursor variable?
299. What are cursor attributes?
300. What are SET operators?
301. How can we delete duplicate rows in a table?
302. What are the attributes of Cursor?
303. Can we store pictures in the database and if so, how can it be done?
304. What is an integrity constraint?
305. What is an ALERT?
306. What is hash cluster?
307. What are the various constraints used in Oracle?
308. What is difference between SUBSTR and INSTR?
309. What is the parameter mode that can be passed to a procedure?
310. What are the different Oracle Database objects?
311. What are the differences between LOV and List Item?
312. What are privileges and Grants?
313. What is the difference between $ORACLE_BASE and $ORACLE_HOME?
314. What is the fastest query method to fetch data from the table?
315. What is the maximum number of triggers that can be applied to a single table?
316. How to display row numbers with the records?
317. How can we view last record added to a table?
318. What is the data type of DUAL table?
319. What is difference between Cartesian Join and Cross Join?
320. How to display employee records who gets more salary than the average salary in the department?
321. What is the difference between RMAN and a traditional hot backup?
322. What are bind variables and why are they important?
323. In PL/SQL, what is bulk binding, and when/how would it help performance?
324. Why is SQL*Loader direct path so fast?
325. What are the tradeoffs between many vs few indexes? When would you want to have many, and when
would it be better to have fewer?
326. What is the difference between RAID 5 and RAID 10? Which is better for Oracle?
327. When using Oracle export/import what character set concerns might come up? How do you handle them?
328. Name three SQL operations that perform a SORT?
329. What is your favorite tool for day-to-day Oracle operation?
330. What is the difference between Truncate and Delete? Why is one faster? Can we ROLLBACK both? How
would a full table scan behave after?
331. What is the difference between a materialized view (snapshot) fast refresh versus complete refresh? When
is one better, and when the other?
332. What does the NO LOGGING option do? Why would we use it? Why would we be careful of using it?
333. Tell me about standby database? What are some of the configurations of it? What should we watch out for?
334. What do you know about privileges?
Answers
1. What is an instance? Draw Architecture?
SGA + background processes.
The System Global Area (SGA) is a group of shared memory areas that are dedicated to an Oracle “instance” (an
instance is your database programs and RAM).
All Oracle processes use the SGA to hold information. The SGA is used to store incoming data (the data buffers as
defined by the db_cache_size parameter), and internal control information that is needed by the database. You
control the amount of memory to be allocated to the SGA by setting some of the Oracle “initialization parameters”.
These might include db_cache_size, shared_pool_size and log_buffer.
In Oracle Database 10g you only need to define two parameters (sga_target and sga_max_size) to configure your
SGA. If these parameters are configured, Oracle will calculate how much memory to allocate to the different areas of
the SGA using a feature called Automatic Shared Memory Management (ASMM). As you gain experience you may want to
manually allocate memory to each individual area of the SGA with the initialization parameters.
We have already noted that the SGA was sub-divided into several memory structures that each have different
missions. The main areas contained in the SGA that you will be initially interested in have complicated names, but are
actually quite simple:
* The buffer cache (db_cache_size)
* The shared pool (shared_pool_size)
* The redo log buffer (log_buffer)
Let’s look at these memory areas in more detail.
Note: automatic and dynamic Oracle memory management have measurable overhead.
Inside the Data Buffer Cache
The Buffer Cache (also called the database buffer cache) is where Oracle stores data blocks. With a few exceptions,
any data coming in or going out of the database will pass through the buffer cache.
The total space in the Database Buffer Cache is sub-divided by Oracle into units of storage called “blocks”. Blocks are
the smallest unit of storage in Oracle and you control the data file blocksize when you allocate your database files.
An Oracle block is different from a disk block. An Oracle block is a logical construct -- a creation of Oracle, rather than
the internal block size of the operating system. In other words, you provide Oracle with a big whiteboard, and Oracle
takes pens and draws a bunch of boxes on the board that are all the same size. The whiteboard is the memory, and
the boxes that Oracle creates are individual blocks in the memory.
The block size of each file is determined by your db_block_size parameter, and the size of your "default" blocks is
defined when the database is created. You control the default database block size, and you can also define
tablespaces with different block sizes. For example, many Oracle professionals place indexes in a 32k block size and
leave the data files in a 16k block size.
Google: ”oracle multiple blocksizes”
When Oracle receives a request to retrieve data, it will first check the internal memory structures to see if the data is
already in the buffer. This practice allows the server to avoid unnecessary I/O. In an ideal world, the DBA would be able to
create one buffer for each database page, thereby ensuring that Oracle would read each block only once.
The db_cache_size and shared_pool_size parameters define most of the size of the in-memory region that Oracle
consumes on startup and determine the amount of storage available to cache data blocks, SQL, and stored
procedures.
Google:”oracle sga size”
The default size for the buffer pool (64k) is too small. We suggest you set this to a value of 1m when you configure
Oracle.
The common components are:
Data buffer cache - cache data and index blocks for faster access.
Shared pool - cache parsed SQL and PL/SQL statements.
Dictionary Cache - information about data dictionary objects.
Redo Log Buffer - redo entries (change records) that are not yet written to the redo log files.
JAVA pool - caching parsed Java programs.
Streams pool - cache Oracle Streams objects.
Large pool - used for backups, UGAs, etc.
Shared Pool:
The shared pool consists of the following areas:
Library cache includes the shared SQL area, private SQL areas, PL/SQL procedures and packages, and control
structures such as locks and library cache handles. Oracle code is first parsed, then executed; this parsed code is
stored in the library cache. Oracle first checks the library cache to see if there is an already parsed and ready-to-execute
form of the statement in there; if there is, this reduces CPU time considerably and is called a soft parse. If
Oracle has to parse the statement, it is called a hard parse. If there is not enough room in the cache, Oracle will remove
older parsed code; obviously it is better to keep as much parsed code in the library cache as possible. Keep an eye on
library cache misses, which are an indication that a lot of hard parsing is going on.
Dictionary cache is a collection of database tables and views containing information about the database, its
structures, privileges and users. When statements are issued, Oracle will check permissions, access rights, etc. and will obtain
this information from its dictionary cache. If the information is not in the cache, it has to be read in from disk
and placed into the cache. The more information held in the cache, the less Oracle has to access the slow disks.
The parameter SHARED_POOL_SIZE is used to determine the size of the shared pool; there is no way to adjust the
caches independently, you can only adjust the shared pool size as a whole.
The shared pool uses an LRU (least recently used) list to maintain what is held in the buffer; see the buffer cache
section for more details on the LRU.
You can clear down the shared pool area by using the following command
alter system flush shared_pool;
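As a rough check on how much hard parsing is going on, here is a sketch of a query against V$LIBRARYCACHE (the
thresholds below are only rules of thumb and depend on your workload):
SQL> SELECT SUM(pinhits)/SUM(pins)*100 AS lib_cache_hit_pct, SUM(reloads) AS reloads FROM v$librarycache;
A hit percentage well below about 95%, or a steadily climbing reload count, usually points at excessive hard parsing
or an undersized shared pool.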
Buffer cache:
This area holds copies of data blocks read from the datafiles. The cache maintains two lists: the write list
and the least recently used (LRU) list. The write list holds dirty buffers, which contain modified data not yet written to disk.
The LRU list has the following:
• free buffers - hold no useful data and can be reused
• pinned buffers - actively being used by user sessions
• dirty buffers - contain data that has been read from disk and modified but hasn't been written to disk
It's the database writer's job to make sure that there are enough free buffers available to user sessions; if not, it
will write out dirty buffers to disk to free up the cache.
There are 3 buffer caches
• Default buffer cache, which is everything not assigned to the keep or recycle buffer pools, DB_CACHE_SIZE
• Keep buffer cache which keeps the data in memory (goal is to keep warm/hot blocks in the pool for as long as
possible), DB_KEEP_CACHE_SIZE.
• Recycle buffer cache which removes data immediately from the cache after use (goal here is to age out a
blocks as soon as it is no longer needed), DB_RECYCLE_CACHE_SIZE.
DB_CACHE_SIZE sizes the cache for the standard block size (which is set by DB_BLOCK_SIZE); if tablespaces are created
with a different block size then you must also create a cache entry to match that block size.
DB_2K_CACHE_SIZE (used with tablespace block size of 2k)
DB_4K_CACHE_SIZE (used with tablespace block size of 4k)
DB_8K_CACHE_SIZE (used with tablespace block size of 8k)
DB_16K_CACHE_SIZE (used with tablespace block size of 16k)
DB_32K_CACHE_SIZE (used with tablespace block size of 32k)
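For example, a minimal sketch of pairing a non-standard block size cache with a tablespace (the tablespace name
and datafile path below are made up for illustration):
SQL> ALTER SYSTEM SET db_16k_cache_size = 64M;
SQL> CREATE TABLESPACE ts_16k DATAFILE '/u01/oradata/ORCL/ts_16k01.dbf' SIZE 100M BLOCKSIZE 16K;
The DB_16K_CACHE_SIZE cache must be set before the 16k tablespace can be created; otherwise the CREATE
TABLESPACE fails.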
The buffer cache hit ratio is used to determine if the buffer cache is sized correctly; the higher the value, the more is being
read from the cache.
hit rate = (1 - (physical reads / logical reads)) * 100
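A sketch of that calculation as a query (the statistic names come from V$SYSSTAT and the values are cumulative
since instance startup):
SQL> SELECT ROUND((1 - phy.value/(db.value + con.value))*100, 2) AS hit_ratio_pct
     FROM v$sysstat phy, v$sysstat db, v$sysstat con
     WHERE phy.name = 'physical reads'
     AND db.name = 'db block gets'
     AND con.name = 'consistent gets';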
You can clear down the buffer pool area by using the following command
alter system flush buffer_cache;
Redo buffer:
The redo buffer is where data that needs to be written to the online redo logs is cached temporarily before it is
written to disk; this area is normally less than a couple of megabytes in size. These entries contain the information
necessary to reconstruct/redo changes made by INSERT, UPDATE, DELETE, CREATE, ALTER and DROP commands.
The contents of this buffer are flushed:
• Every three seconds
• Whenever someone commits a transaction
• When it gets one third full or contains 1MB of cached redo log data.
• When LGWR is asked to switch logs
Use the LOG_BUFFER parameter to adjust it, but be careful about making it too large: a larger buffer will reduce
your I/O, but commits will take longer.
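One hedged way to judge whether the log buffer is too small is to watch how often sessions had to wait for space in it:
SQL> SELECT name, value FROM v$sysstat WHERE name = 'redo log space requests';
A value that keeps growing under normal load suggests the log buffer is too small or log file I/O is too slow.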
Large Pool:
This is an optional memory area that provides large areas of memory for:
• Shared Server - to allocate the UGA region in the SGA
• Parallel execution of statements - to allow for the allocation of inter-processing message buffers, used to
coordinate the parallel query servers.
• Backup - for RMAN disk I/O buffers
The large pool is basically a non-cached version of the shared pool.
Use the LARGE_POOL_SIZE parameter to adjust it.
Java Pool:
Used to execute java code within the database.
Use the JAVA_POOL_SIZE parameter to adjust it (default is 20MB).
Streams Pool:
Streams are used for enabling data sharing between databases or application environments.
Use the STREAMS_POOL_SIZE parameter to adjust it.
3. What is PGA (or) what is pga_aggregate_target?
Automatic PGA Management: To reduce response times, sorts should be performed in the PGA cache area (optimal
mode operation); otherwise the sort will spill onto disk (single-pass / multiple-pass operation), which reduces
performance. So there is a direct relationship between the size of the PGA and query performance. You can manually
tune the following to increase performance:
• sort_area_size - total memory that will be used to sort information before swapping to disk
• sort_area_retained_size - memory that is used to retained data after a sort
• hash_area_size - memory that would be used to store hash tables
• bitmap_merge_area_size - memory Oracle uses to merge bitmaps retrieved from a range scan of the index.
Starting with Oracle 9i there is a new way to manage the above settings: let Oracle manage the PGA area
automatically. By setting the following parameters, Oracle will automatically adjust the PGA area based on
user demand.
• workarea_size_policy - you can set this option to manual or auto (default)
• pga_aggregate_target - controls how much to allocate the PGA in total
Oracle will try and keep the PGA under the target value, but if you exceed this value Oracle will perform multi-pass
operations (disk operations).
System Parameters
workarea_size_policy manual or auto (default)
pga_aggregate_target total amount of memory allocated to the PGA
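A minimal sketch of switching a database to automatic PGA management (the 1G figure is only an example):
SQL> ALTER SYSTEM SET workarea_size_policy = AUTO;
SQL> ALTER SYSTEM SET pga_aggregate_target = 1G;
You can then watch V$PGASTAT (for example the 'over allocation count' row) to judge whether the target is realistic
for your workload.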
4. What are new memory parameters in Oracle 10g?
• Buffer cache (DB_CACHE_SIZE)
• Shared pool (SHARED_POOL_SIZE)
• Large pool (LARGE_POOL_SIZE)
• Java pool ( JAVA_POOL_SIZE)
SGA_MAX_SIZE specifies the hard limit up to which SGA_TARGET can dynamically grow. While executing DBCA,
Oracle suggests setting SGA_MAX_SIZE to about 40% of memory. However, it should be set
according to your requirements, which depend on multiple factors such as the number of concurrent users, the volume of
transactions and the growth rate of the database. Under normal operation, you can set SGA_MAX_SIZE equal to
SGA_TARGET. Sometimes we need to perform extra-heavy batch processing jobs that need a larger SGA.
In such circumstances, you must be able to adjust for peak loads. That is why you set a hard limit with
SGA_MAX_SIZE.
SGA_MAX_SIZE cannot be changed dynamically without bouncing the database whereas SGA_TARGET can be
changed dynamically without bouncing the database.
If you try to modify SGA_MAX_SIZE dynamically, you will get an error of
ORA-02095: specified initialization parameter cannot be modified.
SGA_TARGET can never be greater than SGA_MAX_SIZE. If you try to set the SGA_TARGET to a value which is greater
than that of SGA_MAX_SIZE, then Oracle will throw an error of
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-00823: specified value of SGA_TARGET greater than SGA_MAX_SIZE.
If the SGA_MAX_SIZE is not set and the SGA_TARGET is set, then the SGA_MAX_SIZE takes the value of SGA_TARGET.
If you set the SGA_MAX_SIZE greater than your server memory capacity and bounce the database, you will get an
error of
ORA-27102 : out of memory
SVR4 Error : 12 : not enough space
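As a sketch of the usual resizing sequence (sizes are placeholders): SGA_MAX_SIZE is static, so it can only be changed
in the spfile followed by a bounce, while SGA_TARGET can then be raised on the fly up to that limit:
SQL> ALTER SYSTEM SET sga_max_size = 2G SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP
SQL> ALTER SYSTEM SET sga_target = 1536M;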
Automatic memory management was introduced in Oracle 11g. It can be configured using a target memory size
initialization parameter MEMORY_TARGET and a maximum memory size initialization parameter
MEMORY_MAX_TARGET. Oracle Database then tunes to the MEMORY_TARGET size, distributing memory as needed
between the system global area (SGA) and the instance program global area (instance PGA).
Relation between MEMORY_TARGET, SGA_TARGET and PGA_AGGREGATE_TARGET:
If MEMORY_TARGET is set to a non-zero value:
• If SGA_TARGET and PGA_AGGREGATE_TARGET are set, they will be considered the minimum values for the
sizes of SGA and the PGA respectively. MEMORY_TARGET can take values from SGA_TARGET +
PGA_AGGREGATE_TARGET to MEMORY_MAX_TARGET.
• If SGA_TARGET is set and PGA_AGGREGATE_TARGET is not set, we will still auto-tune both parameters.
PGA_AGGREGATE_TARGET will be initialized to a value of (MEMORY_TARGET-SGA_TARGET).
• If PGA_AGGREGATE_TARGET is set and SGA_TARGET is not set, we will still auto-tune both parameters.
SGA_TARGET will be initialized to a value of min(MEMORY_TARGET - PGA_AGGREGATE_TARGET,
SGA_MAX_SIZE (if set by the user)) and its sub-components will be auto-tuned.
• If neither is set, they will be auto-tuned without any minimum or default values. The total memory set by the
MEMORY_TARGET parameter is distributed in a fixed ratio to the SGA and PGA during initialization; the policy
is to give 60% to the SGA and 40% to the PGA at startup.
If MEMORY_TARGET is not set, or is set to 0 explicitly (the default value is 0 for 11g):
• If SGA_TARGET is set, we will only auto-tune the sizes of the sub-components of the SGA. The PGA will be
auto-tuned whether it is explicitly set or not. However, the SGA as a whole (SGA_TARGET) and the
PGA (PGA_AGGREGATE_TARGET) will not be auto-tuned, i.e. they will not grow or shrink automatically.
• If neither SGA_TARGET nor PGA_AGGREGATE_TARGET is set, we will follow the same policy as we have
today: the PGA will be auto-tuned, the SGA will not be auto-tuned, and parameters for some of its sub-
components will have to be set explicitly (instead of SGA_TARGET).
• If only MEMORY_MAX_TARGET is set, MEMORY_TARGET will default to 0 and we will not auto-tune the SGA
and PGA; behavior within the SGA and PGA defaults to that of 10gR2.
• If sga_max_size is not set by the user, we will internally set it to MEMORY_MAX_TARGET.
In a text initialization parameter file, if you omit the line for MEMORY_MAX_TARGET and include a value for
MEMORY_TARGET, the database automatically sets MEMORY_MAX_TARGET to the value of MEMORY_TARGET. If
you omit the line for MEMORY_TARGET and include a value for MEMORY_MAX_TARGET, the MEMORY_TARGET
parameter defaults to zero. After startup, you can then dynamically change MEMORY_TARGET to a non-zero value,
provided that it does not exceed the value of MEMORY_MAX_TARGET.
If you wish to monitor the decisions made by Automatic Memory Management, the following views can be useful:
• V$MEMORY_DYNAMIC_COMPONENTS has the current status of all memory components
• V$MEMORY_RESIZE_OPS has a circular history buffer of the last 800 memory resize requests
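For instance, a quick look at where the memory currently sits (assuming MEMORY_TARGET is in use):
SQL> SELECT component, current_size/1024/1024 AS size_mb
     FROM v$memory_dynamic_components
     WHERE current_size > 0;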
SGA_TARGET vs SGA_MAX_SIZE
SGA_MAX_SIZE
sga_max_size sets the maximum value for sga_target
If sga_max_size is less than the sum of db_cache_size + log_buffer + shared_pool_size + large_pool_size at
initialization time, then the value of sga_max_size is ignored.
SGA_TARGET
This parameter is new with Oracle 10g. It specifies the total amount of SGA memory available to an instance. Setting
this parameter makes Oracle distribute the available memory among various components - such as the shared pool (for
SQL and PL/SQL), Java pool, large pool and buffer cache - as required.
This feature is called Automatic Shared Memory Management (ASMM). With ASMM, the parameters java_pool_size,
shared_pool_size, large_pool_size and db_cache_size need not be specified explicitly anymore.
sga_target cannot be higher than sga_max_size.
SGA_TARGET is a database initialization parameter (introduced in Oracle 10g) that can be used for automatic SGA
memory sizing.
Parameter description for SGA_TARGET:
Parameter type: Big integer
Syntax: SGA_TARGET = integer [K | M | G]
Default value: 0 (SGA autotuning is disabled)
Modifiable: ALTER SYSTEM
Range of values: 64 MB to operating system-dependent
Basic: Yes
SGA_TARGET provides the following:
• Single parameter for total SGA size
• Automatically sizes SGA components
• Memory is transferred to where most needed
• Uses workload information
• Uses internal advisory predictions
• STATISTICS_LEVEL must be set to TYPICAL
By using this one parameter we don't need to set all the other SGA parameters individually, like:
• DB_CACHE_SIZE (DEFAULT buffer pool)
• SHARED_POOL_SIZE (Shared Pool)
• LARGE_POOL_SIZE (Large Pool)
• JAVA_POOL_SIZE (Java Pool)
SGA_TARGET And LOCK_SGA
SGA_TARGET tells Oracle how much memory it can use for the SGA.
LOCK_SGA is used to make sure that the SGA is not paged out to disk; it effectively pins the contents of the SGA in
physical memory.
SGA_TARGET is the memory allocated to the SGA on startup; LOCK_SGA protects that memory from being paged out.
The lock_sga parameter is used to make the Oracle SGA region ineligible for swapping, effectively pinning the SGA
RAM in memory. This technique is also known as "page fencing", using lock_sga=true to guarantee that SGA RAM is
never sent to the swap disk during a page-out operation.
So, the question is: what will be the effect of "alter system flush" if LOCK_SGA is set to TRUE?
Logically the SGA can be considered one monolithic block of memory; Oracle knows what is in it but to the OS it is
opaque. The entire SGA might be in memory or part of the SGA may be in memory and part may have been
'swapped' to disk by the OS.
In either case the OS does not know and does not care what is in the SGA. If it needs memory for other things it may
swap (page) part of large memory segments to disk and then if a 'memory' reference is made to a part that is on disk
the OS will load it back into memory and may swap something else out to disk to make room for it.
LOCK_SGA ensures that all of the SGA is kept in memory and prevents any of it from being swapped to disk.
Flushing is an Oracle process that flushes the 'contents' of the SGA regardless of where the SGA is physically located.
The part in memory and any swapped parts will all be flushed. The flush process does not know, and does not care if
all of the SGA is in memory or if part of it is swapped out.
They are two separate and distinct operations.
5. What are new memory parameters in Oracle 11g?
MEMORY_TARGET:
MEMORY_TARGET specifies the Oracle system-wide usable memory. The database tunes memory to the
MEMORY_TARGET value, reducing or enlarging the SGA and PGA as needed.
In a text-based initialization parameter file, if you omit MEMORY_MAX_TARGET and include a value for
MEMORY_TARGET, then the database automatically sets MEMORY_MAX_TARGET to the value of MEMORY_TARGET.
If you omit the line for MEMORY_TARGET and include a value for MEMORY_MAX_TARGET, the MEMORY_TARGET
parameter defaults to zero. After startup, you can then dynamically change MEMORY_TARGET to a nonzero value,
provided that it does not exceed the value of MEMORY_MAX_TARGET.
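A minimal sketch of enabling AMM in 11g (the values are examples; both parameters go into the spfile and take
effect after a restart):
SQL> ALTER SYSTEM SET memory_max_target = 1G SCOPE=SPFILE;
SQL> ALTER SYSTEM SET memory_target = 800M SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP
Once the instance is running, MEMORY_TARGET can be changed dynamically up to MEMORY_MAX_TARGET, as
described above.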
MEMORY_TARGET and MEMORY_MAX_TARGET
The Oracle documents state the following:
MEMORY_TARGET specifies the Oracle system-wide usable memory.
MEMORY_MAX_TARGET (…) decide on a maximum amount of memory that you would want to allocate to the
database for the foreseeable future.
So my guess is, MEMORY_MAX_TARGET (static) is the maximum you can set MEMORY_TARGET (dynamic) to. A
couple of days ago, I wanted to experiment a bit with these memory settings.
My Oracle Enterprise Linux (5.5) machine was set for MEMORY_MAX_TARGET=512M and MEMORY_TARGET=256M,
but after starting the database, it showed the following:
SQL> startup pfile=init.ora
ORACLE instance started.
Total System Global Area 534462464 bytes
Fixed Size 2215064 bytes
Variable Size 473957224 bytes
Database Buffers 50331648 bytes
Redo Buffers 7958528 bytes
Database mounted.
Database opened.
Total SGA, 534462464 bytes? That’s about 510M, certainly not what I had specified for MEMORY_TARGET…!?
Checking SGA in Enterprise Manager (yes, I use it sometimes), it showed 256M allocated for MEMORY_TARGET,
containing SGA and PGA:
[Figure: AMM SGA/PGA sizes]
SGA was using 152M and PGA took the rest:
[Figure: AMM Advice]
Clicking the graph will update the MEMORY_TARGET parameter.
One can also query V$MEMORY_TARGET_ADVICE for this information:
MEMORY_SIZE MEMORY_SIZE_FACTOR ESTD_DB_TIME ESTD_DB_TIME_FACTOR VERSION
----------- ------------------ ------------ ------------------- ----------
256 1 501 1 0
320 1.25 501 1 0
384 1.5 501 .9995 0
448 1.75 501 .9994 0
512 2 501 .9994 0
What is /dev/shm?
It is an in-memory mounted file system (tmpfs) and is very fast, but non-persistent when Linux is rebooted.
In Oracle 11g, it is used to hold SGA memory by storing the SGA structures in files with the same granule size. This
granule size comes in 4M and 16M flavours, depending on whether MEMORY_MAX_TARGET is smaller or larger than 1G.
When the MEMORY_TARGET and MEMORY_MAX_TARGET parameters are set, Oracle will create as many as
(MEMORY_MAX_TARGET / granule size) files. For instance, when MEMORY_MAX_TARGET is set to 512M, it will create
512/4 = 128 files (actually 129, the sneaky…).
The output of 'ls -la /dev/shm' will show you that not all of the 128 files are taking the 4M of space:
shm> ls -la
total 151780
drwxrwxrwt 2 root root 2620 Sep 10 11:13 .
drwxr-xr-x 12 root root 3880 Sep 10 08:47 ..
-rw-rw---- 1 oracle oinstall 4194304 Sep 10 14:17 ora_ianh_3768323_0
-rw-rw---- 1 oracle oinstall 4194304 Sep 10 14:11 ora_ianh_3768323_1
-rw-rw---- 1 oracle oinstall 0 Sep 10 11:13 ora_ianh_3768323_10
-rw-rw---- 1 oracle oinstall 4194304 Sep 10 14:17 ora_ianh_3768323_100
(...)
-rw-rw---- 1 oracle oinstall 4194304 Sep 10 14:17 ora_ianh_3768323_127
-rw-rw---- 1 oracle oinstall 4194304 Sep 10 11:13 ora_ianh_3768323_128
-rw-rw---- 1 oracle oinstall 0 Sep 10 11:13 ora_ianh_3768323_13
-rw-rw---- 1 oracle oinstall 0 Sep 10 11:13 ora_ianh_3768323_14
-rw-rw---- 1 oracle oinstall 0 Sep 10 11:13 ora_ianh_3768323_15
-rw-rw---- 1 oracle oinstall 0 Sep 10 11:13 ora_ianh_3768323_16
-rw-rw---- 1 oracle oinstall 0 Sep 10 11:13 ora_ianh_3768323_17
-rw-rw---- 1 oracle oinstall 0 Sep 10 11:13 ora_ianh_3768323_18
-rw-rw---- 1 oracle oinstall 0 Sep 10 11:13 ora_ianh_3768323_19
-rw-rw---- 1 oracle oinstall 4194304 Sep 10 11:13 ora_ianh_3768323_2
Now this is the trick Oracle is using. When you add up all the files that do take 4M of space, the total will never be
more than MEMORY_TARGET. Therefore, Oracle does not allocate more memory than MEMORY_TARGET, and the
sum of these files might even be smaller than MEMORY_TARGET.
When you look at the number of SGA granules in use ('select ceil(sum(bytes)/(1024*1024*4)) from v$sgastat'), you
will see it is near the number of in-use files in /dev/shm (again, plus one…).
0 bytes in memory
When a file in /dev/shm is 0 bytes, it does not use memory. That memory is ‘free’ to other applications. Now this is
Oracle’s implementation of releasing memory back to the Linux OS, by cleaning up one or more of these in-memory
files (will it do a ‘cat /dev/null > ora_sid_number_id’ ?).
Funny thing is, the PGA is not stored in shared memory, because it is private memory. MEMORY_MAX_TARGET
(covering both SGA and PGA) is 'allocated' in /dev/shm, but the PGA is not stored in /dev/shm. This means that when
memory for the PGA is allocated (and/or pga_aggregate_target is set), not all files in /dev/shm will get used!
Increase /dev/shm
If you increase the MEMORY_MAX_TARGET above the available /dev/shm space (df -h), you will receive:
ORA-00845: MEMORY_TARGET not supported on this system
If you have enough memory on your Linux machine, but /dev/shm is mounted too small by default, you can increase
this amount of memory by changing /etc/fstab for a permanent change. The default is half of your physical RAM
without swap.
For temporary changes to at least start the database, execute the following (change the 1500m to your
environment):
> umount tmpfs
> mount -t tmpfs shmfs -o size=1500m /dev/shm
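For the permanent change mentioned above, a sketch of the corresponding /etc/fstab entry (the size is an example;
adjust it to your MEMORY_MAX_TARGET):
tmpfs /dev/shm tmpfs size=1500m 0 0
After editing the file, remount with 'mount -o remount /dev/shm' and verify the new size with 'df -h /dev/shm'.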
152M boundary
When I was playing around with these settings, it seemed that 152M is the initial minimal memory target.
If you start Oracle with a pfile setting lower than 152M, it fails to start and you will get the following message:
ORA-00838: Specified value of MEMORY_TARGET is too small, needs to be at least 152M
Remarks
• When I changed MEMORY_TARGET to 152M in my pfile, after the bounce the PGA was set to Manual Mode.
• Oracle will divide SGA/PGA as 60%/40% when enough memory is available.
• The PGA_AGGREGATE_TARGET and SGA_TARGET are not ignored, but act as a minimum when set.
• When SGA_MAX_SIZE is set, it will act as a maximum; when it’s not set it will show the
MEMORY_MAX_TARGET value.
• /dev/shm must be mounted with at least 384M (You are trying to use the MEMORY_TARGET feature. This
feature requires the /dev/shm file system to be mounted for at least 402653184 bytes).
Conclusion
With Automatic Memory Management, one can set the upper limit of the total SGA and PGA to use. It uses an in-
memory file structure, so it can give back unused memory to the Linux OS; unlike 10g, where setting SGA_MAX_SIZE
would just use all the memory specified.
On the other hand, when problems arise, one still needs to dive into the memory structures and tune. The
'automatic' feature added is memory distribution between SGA and PGA, and between Oracle and the OS.
6. What are the mandatory background processes?
DBWR LGWR SMON PMON CKPT RECO. (http://satya-dba.blogspot.in/2009/08/background-processes-in-oracle.html)
Background Processes in oracle
To maximize performance and accommodate many users, a multiprocess Oracle database system uses background
processes. Background processes are the processes running behind the scene and are meant to perform certain
maintenance activities or to deal with abnormal conditions arising in the instance. Each background process is meant
for a specific purpose and its role is well defined.
Background processes consolidate functions that would otherwise be handled by multiple database programs
running for each user process. Background processes asynchronously perform I/O and monitor other Oracle database
processes to provide increased parallelism for better performance and reliability.
A background process is defined as any process that is listed in V$PROCESS and has a non-null value in the pname
column.
Not all background processes are mandatory for an instance. Some are mandatory and some are optional. The mandatory
background processes are DBWn, LGWR, CKPT, SMON, PMON, and RECO. All other processes are optional and will be
invoked only if the corresponding feature is activated.
Oracle background processes are visible as separate operating system processes in Unix/Linux. In Windows, these run
as separate threads within the same service. Any issues related to background processes should be monitored and
analyzed from the trace files generated and the alert log.
Background processes are started automatically when the instance is started.
To find out background processes from the database:
SQL> select SID,PROGRAM from v$session where TYPE='BACKGROUND';
To find out background processes from the OS:
$ ps -ef|grep ora_|grep SID
Mandatory Background Processes in Oracle
If any one of these 6 mandatory background processes is killed/not running, the instance will be aborted.
Database Writer (maximum 20):
Whenever a log switch occurs and a redolog file moves from CURRENT to ACTIVE status, Oracle calls DBWn to
synchronize all the dirty blocks in the database buffer cache to the respective datafiles, scattered or randomly.
The database writer (or dirty buffer writer) process does multi-block writing to disk asynchronously. One DBWn process
is adequate for most systems. Multiple database writers can be configured with the initialization parameter
DB_WRITER_PROCESSES, depending on the number of CPUs allocated to the instance. Having more than one DBWn
only makes sense if each DBWn has been allocated its own list of blocks to write to disk. This is done through the
initialization parameter DB_BLOCK_LRU_LATCHES. If this parameter is not set correctly, multiple DB writers can end
up contending for the same block list.
The possible multiple DBWR processes in RAC must be coordinated through the locking and global cache processes to
ensure efficient processing is accomplished.
DBWn will be invoked in the following scenarios:
• When the dirty blocks in SGA reaches to a threshold value, oracle calls DBWn.
• When the database is shutting down with some dirty blocks in the SGA, then oracle calls DBWn.
• DBWn has a time out value (3 seconds by default) and it wakes up whether there are any dirty blocks or not.
• When a checkpoint is issued.
• When a server process cannot find a clean reusable buffer after scanning a threshold number of buffers.
• When a huge table needs to be read into the SGA and Oracle cannot find enough free space, it decides to
flush out LRU blocks, some of which happen to be dirty blocks. Before flushing out the dirty blocks, Oracle calls
DBWn.
• Oracle RAC ping request is made.
• When a table is DROPped or TRUNCATEd.
• When a tablespace goes OFFLINE, READ ONLY, or into BEGIN BACKUP mode.
Log Writer (maximum 1) LGWR
LGWR writes redo data from redolog buffers to (online) redolog files, sequentially.
Redolog file contains changes to any datafile. The content of the redolog file is file id, block id and new content.
LGWR will be invoked more often than DBWn as log files are really small when compared to datafiles (KB vs GB). For
every small update we don’t want to open huge gigabytes of datafiles, instead write to the log file.
Redolog file has three stages CURRENT, ACTIVE, INACTIVE and this is a cyclic process. Newly created redolog file will
be in UNUSED state.
When the LGWR is writing to a particular redolog file, that file is said to be in CURRENT status. If the file is filled up
completely then a log switch takes place and the LGWR starts writing to the second file (this is the reason every
database requires a minimum of 2 redolog groups). The file which is filled up now becomes from CURRENT to ACTIVE.
Log writer will write synchronously to the redolog groups in a circular fashion. If any damage is identified with a
redolog file, the log writer will log an error in the LGWR trace file and the alert log. Sometimes, when additional
redolog buffer space is required, the LGWR will even write uncommitted redolog entries to release the held buffers.
LGWR can also use group commits (multiple committed transaction's redo entries taken together) to write to
redologs when a database is undergoing heavy write operations.
In RAC, each RAC instance has its own LGWR process that maintains that instance’s thread of redo logs.
LGWR will be invoked in the following scenarios:
• LGWR is invoked whenever 1/3rd of the redo buffer is filled up.
• Whenever the log writer times out (3sec).
• Whenever 1MB of redolog buffer is filled (This means that there is no sense in making the redolog buffer
more than 3MB).
• Shutting down the database.
• Whenever checkpoint event occurs.
• When a transaction is completed (either committed or rolled back), Oracle calls LGWR, synchronizes the log
buffers to the redolog files, and only then passes the acknowledgement back to the user. This means the
transaction is not guaranteed, even though we said commit, until we receive the acknowledgement. When a
transaction is committed, a System Change Number (SCN) is generated and tagged to it. Log writer puts a
commit record in the redolog buffer and writes it to disk immediately, along with the transaction's redo
entries. Changes to actual data blocks are deferred until a convenient time (Fast-Commit mechanism).
• When DBWn signals the writing of redo records to disk. All redo records associated with changes in the block
buffers must be written to disk first (The write-ahead protocol). While writing dirty buffers, if the DBWn
process finds that some redo information has not been written, it signals the LGWR to write the information
and waits until the control is returned.
Checkpoint (maximum 1) CKPT
Checkpoint is a background process which triggers the checkpoint event, to synchronize all database files with the
checkpoint information. It ensures data consistency and faster database recovery in case of a crash.
When a checkpoint occurs, CKPT invokes DBWn and updates the headers of all datafiles and the control file
with the current SCN. This SCN is called the checkpoint SCN.
The checkpoint event can occur in the following conditions:
• Whenever the database buffer cache fills up.
• Whenever CKPT times out (3 seconds until 9i, 1 second from 10g).
• When a log switch occurs.
• Whenever a manual log switch is done.
SQL> ALTER SYSTEM SWITCH LOGFILE;
• Manual checkpoint.
SQL> ALTER SYSTEM CHECKPOINT;
• Graceful shutdown of the database.
• Whenever BEGIN BACKUP command is issued.
• When the time specified by the initialization parameter LOG_CHECKPOINT_TIMEOUT (in seconds), exists
between the incremental checkpoint and the tail of the log.
• When the number of OS blocks specified by the initialization parameter LOG_CHECKPOINT_INTERVAL, exists
between the incremental checkpoint and the tail of the log.
• The number of buffers specified by the initialization parameter FAST_START_IO_TARGET required to perform
roll-forward is reached.
• Oracle 9i onwards, the time specified by the initialization parameter FAST_START_MTTR_TARGET (in seconds)
is reached and specifies the time required for a crash recovery. The parameter FAST_START_MTTR_TARGET
replaces LOG_CHECKPOINT_INTERVAL and FAST_START_IO_TARGET, but these parameters can still be used.
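As a hedged example of the modern approach from the last bullet (the 300-second target is only illustrative):
SQL> ALTER SYSTEM SET fast_start_mttr_target = 300;
SQL> SELECT target_mttr, estimated_mttr FROM v$instance_recovery;
V$INSTANCE_RECOVERY shows whether the instance can actually meet the requested crash recovery time.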
System Monitor (maximum 1) SMON
If the database crashed (power failure), then the next time we restart the database SMON observes that it was not
shut down gracefully and that it requires recovery, which is known as INSTANCE CRASH
RECOVERY. While performing crash recovery, before the database is completely open, any transaction found
committed in the redologs but not yet written to the datafiles will be applied from the redolog files to the datafiles.
If SMON observes an uncommitted transaction which has already updated a table in the datafile, that transaction
will be rolled back with the help of the before image available in the rollback segments.
SMON also cleans up temporary segments that are no longer in use.
It also coalesces contiguous free extents in dictionary managed tablespaces that have PCTINCREASE set to a non-zero
value.
In RAC environment, the SMON process of one instance can perform instance recovery for other instances that have
failed.
SMON wakes up about every 5 minutes to perform housekeeping activities.
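The coalescing SMON performs can also be requested manually; a minimal sketch, assuming a dictionary managed tablespace named USERS:
SQL> alter tablespace users coalesce;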
Process Monitor (maximum 1) PMON
If a client has an open transaction but the session is no longer active (the client session was closed), PMON comes
into the picture and rolls that transaction back.
PMON is responsible for performing recovery if a user process fails. It will rollback uncommitted transactions. If the
old session locked any resources that will be unlocked by PMON.
PMON is responsible for cleaning up the database buffer cache and freeing resources that were allocated to a
process.
PMON also registers information about the instance and dispatcher processes with Oracle (network) listener.
PMON also checks the dispatcher & server processes and restarts them if they have failed.
PMON wakes up every 3 seconds to perform housekeeping activities.
In RAC, PMON’s role as service registration agent is particularly important.
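The listener registration that PMON performs can also be forced manually, which is handy right after a listener restart:
SQL> alter system register;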
Recoverer (maximum 1) RECO [Mandatory from Oracle 10g]
This process is intended for recovery in distributed databases. The distributed transaction recovery process finds
pending distributed transactions and resolves them. All in-doubt transactions are recovered by this process in the
distributed database setup. RECO will connect to the remote database to resolve pending transactions.
Pending distributed transactions are two-phase commit transactions involving multiple databases. The database
where the transaction started is normally the coordinator. It sends requests to the other databases involved in the
two-phase commit, asking if they are ready to commit. If a negative reply is received from one of the other sites,
the entire transaction will be rolled back. Otherwise, the distributed transaction will be committed on all sites. However, there
is a chance that an error (network related or otherwise) causes the two-phase commit transaction to be left in
pending state (i.e. not committed or rolled back). It's the role of the RECO process to liaise with the coordinator to
resolve the pending two-phase commit transaction. RECO will either commit or rollback this transaction.
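The in-doubt transactions RECO works on can be viewed, and if necessary resolved manually, via DBA_2PC_PENDING; the transaction id below is just a placeholder:
SQL> select local_tran_id, state, fail_time from dba_2pc_pending;
SQL> commit force '1.21.17';  -- manual resolution, only if RECO cannot resolve it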
7. What are the optional background processes?
ARCH, MMAN, MMNL, MMON, CTWR, ASMB, RBAL, ARBx etc
Optional Background Processes in Oracle
Archiver (maximum 10) ARC0-ARC9
The ARCn process is responsible for writing the online redolog files to the mentioned archive log destination after a
log switch has occurred. ARCn is present only if the database is running in archivelog mode and automatic archiving is
enabled. The log writer process is responsible for starting multiple ARCn processes when the workload increases.
A redolog file is not released back to the log writer for overwriting until ARCn has finished copying it.
The number of archiver processes that can be invoked initially is specified by the initialization parameter
LOG_ARCHIVE_MAX_PROCESSES (by default 2, max 10). The actual number of archiver processes in use may vary
based on the workload.
ARCn processes, running on the primary database, select archived redo logs and send them to the standby database.
Archive log files are used for media recovery (in case of a hard disk failure and for maintaining an Oracle standby
database via log shipping). On a standby database, ARCn archives the standby redo logs that have been applied by
the managed recovery process (MRP).
In RAC, the various ARCH processes can be utilized to ensure that copies of the archived redo logs for each instance
are available to the other instances in the RAC setup should they be needed for recovery.
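To see or change how many archiver processes may be started, and to confirm archiving status, something like the following can be used (4 is an arbitrary example value):
SQL> show parameter log_archive_max_processes
SQL> alter system set log_archive_max_processes=4 scope=both;
SQL> archive log list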
Coordinated Job Queue Processes (maximum 1000) CJQ0/Jnnn
Job queue processes carry out batch processing. All scheduled jobs are executed by these processes. The initialization
parameter JOB_QUEUE_PROCESSES specifies the maximum job processes that can be run concurrently. These
processes will be useful in refreshing materialized views.
This is the Oracle’s dynamic job queue coordinator. It periodically selects jobs (from JOB$) that need to be run,
scheduled by the Oracle job queue. The coordinator process dynamically spawns job queue slave processes (J000-
J999) to run the jobs. These jobs could be PL/SQL statements or procedures on an Oracle instance.
CJQ0 - the job queue controller process wakes up periodically and checks the job log. If a job is due, it spawns Jnnn
processes to handle jobs.
From Oracle 11g release2, DBMS_JOB and DBMS_SCHEDULER work without setting JOB_QUEUE_PROCESSES. Prior to
11gR2 the default value is 0, and from 11gR2 the default value is 1000.
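As a sketch, the job queue concurrency can be checked and adjusted like this (10 is an arbitrary example value):
SQL> show parameter job_queue_processes
SQL> alter system set job_queue_processes=10 scope=both;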
Dedicated Server
Dedicated server processes are used when MTS is not used. Each user process gets a dedicated server process for its
connection to the database. These dedicated server processes also handle disk reads from the database datafiles into
the database block buffers.
LISTENER
The LISTENER process listens for connection requests on a specified port and passes these requests to either a
dispatcher process if MTS is configured, or to a dedicated server process if MTS is not used. The LISTENER is also
responsible for load balancing and failover in case a RAC instance fails or is overloaded.
CALLOUT Listener
Used by internal processes to make calls to externally stored procedures
Lock Monitor (maximum 1) LMON
Lock monitor manages global locks and resources. It handles the redistribution of instance locks whenever instances
are started or shut down. Lock monitor also recovers instance lock information prior to the instance recovery process.
Lock monitor co-ordinates with the Process Monitor (PMON) to recover dead processes that hold instance locks.
Lock Manager Daemon (maximum 10) LMDn
LMDn processes manage instance locks that are used to share resources between instances. LMDn processes also
handle deadlock detection and remote lock requests.
Global Cache Service (LMS)
In an Oracle Real Application Clusters environment, this process manages resources and provides inter-instance
resource control.
Lock processes (maximum 10) LCK0- LCK9
The instance locks that are used to share resources between instances are held by the lock processes.
Block Server Process (maximum 10) BSP0-BSP9
Block server processes provide a consistent read image of a buffer that is requested by a process of another instance,
in certain circumstances.
Queue Monitor (maximum 10) QMN0-QMN9
This is the Advanced Queuing time manager process. QMNn monitors the message queues and is used to manage
Oracle Streams Advanced Queuing.
Event Monitor (maximum 1) EMN0/EMON
This process is also related to advanced queuing, and is meant for allowing a publish/subscribe style of messaging
between applications.
Dispatcher (maximum 1000) Dnnn
Intended for multi-threaded server (MTS) setups. Dispatcher processes listen for and receive requests from connected
sessions and place them in the request queue for further processing. Dispatcher processes also pick up outgoing
responses from the result queue and transmit them back to the clients. Dnnn are mediators between the client
processes and the shared server processes. The maximum number of dispatcher processes can be specified using the
initialization parameter MAX_DISPATCHERS.
Shared Server Processes (maximum 1000) Snnn
Intended for multi-threaded server (MTS) setups. These processes pick up requests from the call request queue,
process them and then return the results to a result queue. These server processes also handle disk reads from
database datafiles into the database block buffers. The number of shared server processes to be created at instance
startup can be specified using the initialization parameter SHARED_SERVERS. Maximum shared server processes can
be specified by MAX_SHARED_SERVERS.
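A minimal shared server configuration sketch, with arbitrary example values, plus a query to watch the dispatchers:
SQL> alter system set shared_servers=5;
SQL> alter system set dispatchers='(PROTOCOL=TCP)(DISPATCHERS=3)';
SQL> select name, status from v$dispatcher;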
Parallel Execution/Query Slaves (maximum 1000) Pnnn
These processes are used for parallel processing. They can be used for parallel execution of SQL statements or for
parallel recovery. The maximum number of parallel processes that can be invoked is specified by the initialization
parameter PARALLEL_MAX_SERVERS.
Trace Writer (maximum 1) TRWR
Trace writer writes trace files from an Oracle internal tracing facility.
Input/Output Slaves (maximum 1000) Innn
These processes are used to simulate asynchronous I/O on platforms that do not support it. The initialization
parameter DBWR_IO_SLAVES is set for this purpose.
Data Guard Monitor (maximum 1) DMON
The Data Guard broker process. DMON is started when the Data Guard broker is started. This broker controller
process is the main broker process and is responsible for coordinating all broker actions as well as maintaining the
broker configuration files. This process is enabled/disabled with the DG_BROKER_START parameter.
Data Guard Broker Resource Manager RSM0
The RSM process is responsible for handling any SQL commands used by the broker that need to be executed on one
of the databases in the configuration.
Data Guard NetServer/NetSlave NSVn
These are responsible for making contact with the remote database and sending across any work items to the remote
database. From 1 to n of these network server processes can exist. NSVn is created when a Data Guard broker
configuration is enabled. There can be as many NSVn processes (where n is 0-9 or A-U) created as there are
databases in the Data Guard broker configuration.
DRCn
These network receiver processes establish the connection from the source database NSVn process. When the broker
needs to send something (e.g. data or SQL) between databases, it uses this NSV-to-DRC connection. These
connections are started as needed.
Data Guard Broker Instance Slave Process INSV
Performs Data Guard broker communication among instances in an Oracle RAC environment
Data Guard Broker Fast Start Failover Pinger Process FSFP
Maintains fast-start failover state between the primary and target standby databases. FSFP is created when fast-start
failover is enabled.
LGWR Network Server process LNS
In Data Guard, the LNS process performs actual network I/O and waits for each network I/O to complete. Each LNS has a
user configurable buffer that is used to accept outbound redo data from the LGWR process. The NET_TIMEOUT
attribute is used only when the LGWR process transmits redo data using an LGWR Network Server (LNS) process.
Managed Recovery Process MRP
In Data Guard environment, this managed recovery process will apply archived redo logs to the standby database.
Remote File Server process RFS
The remote file server process, in Data Guard environment, on the standby database receives archived redo logs from
the primary database.
Logical Standby Process LSP
The logical standby process is the coordinator process for a set of processes that concurrently read, prepare, build,
analyze, and apply completed SQL transactions from the archived redo logs. The LSP also maintains metadata in the
database. The RFS process communicates with the logical standby process (LSP) to coordinate and record which files
arrived.
Wakeup Monitor Process (maximum 1) WMON
This process was available in older versions of Oracle to alert other processes that were suspended while waiting for
an event to occur. This process is obsolete and has been removed.
Recovery Writer (maximum 1) RVWR
This is responsible for writing flashback logs (to FRA).
Fetch Archive Log (FAL) Server
Services requests for archive redo logs from FAL clients running on multiple standby databases. Multiple FAL servers
can be run on a primary database, one for each FAL request.
Fetch Archive Log (FAL) Client
Pulls archived redo log files from the primary site. Initiates transfer of archived redo logs when it detects a gap
sequence.
Data Pump Master Process DMnn
Creates and deletes the master table at the time of export and import. The master table contains the job state and
object information. DMnn coordinates the Data Pump job tasks performed by the Data Pump worker processes and
handles client interactions. The Data Pump master (control) process is started during job creation and coordinates all
tasks performed by the Data Pump job. It handles all client interactions and communication, establishes all job
contexts, and coordinates all worker process activities on behalf of the job. It also creates the worker processes.
Data Pump Worker Process DWnn
It performs the actual heavy-duty work of loading and unloading data. It maintains the information in the master table.
The Data Pump worker process is responsible for performing tasks that are assigned by the Data Pump master
process, such as the loading and unloading of metadata and data.
Shadow Process
When a client logs in to an Oracle server, the database creates an Oracle (shadow) process to service the Data Pump API.
Client Process
The client process calls the Data pump API.
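Running Data Pump master/worker processes can be observed from the dictionary, for example:
SQL> select owner_name, job_name, operation, state from dba_datapump_jobs;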
8. What are the new background processes in Oracle 10g?
MMAN, MMON, MMNL, CTWR, ASMB, RBAL and ARBx
New Background Processes in Oracle 10g
Memory Manager (maximum 1) MMAN
MMAN dynamically adjusts the sizes of the SGA components like buffer cache, large pool, shared pool and java pool,
and serves as the SGA memory broker. It is a new process added in Oracle 10g as part of automatic shared memory
management.
Memory Monitor (maximum 1) MMON
MMON monitors the SGA and performs various manageability-related background tasks. MMON, a new background
process in Oracle 10g, is used to collect statistics for the Automatic Workload Repository (AWR).
Memory Monitor Light (maximum 1) MMNL
New background process in Oracle 10g. This process performs frequent and lightweight manageability-related tasks,
such as session history capture and metrics computation.
Change Tracking Writer (maximum 1) CTWR
CTWR is useful for RMAN: it enables optimized (faster) incremental backups using block change tracking, recording
changed blocks in a file named the block change tracking file. CTWR (Change Tracking Writer) is the background
process responsible for tracking the changed blocks.
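Block change tracking can be switched on and verified as below; the tracking file location is a placeholder:
SQL> alter database enable block change tracking using file '/u01/oradata/orcl/bct.f';
SQL> select status, filename from v$block_change_tracking;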
ASMB
This ASMB process is used to provide information to and from cluster synchronization services used by ASM to
manage the disk resources. It is also used to update statistics and provide a heartbeat mechanism.
Re-Balance RBAL
RBAL is the ASM related process that performs rebalancing of disk resources controlled by ASM.
Actual Rebalance ARBx
ARBx processes perform the actual rebalancing of data extents within an ASM disk group. The number of ARBx
processes invoked is directed by the ASM_POWER_LIMIT parameter.
9. How do you use automatic PGA memory management with Oracle 9i and above?
Set the WORKAREA_SIZE_POLICY parameter to AUTO and set PGA_AGGREGATE_TARGET
Explanation:
Automated PGA Memory Management:
There are two different memory types in the Oracle PGA: not tunable and tunable. To configure the tunable area,
there are several database parameters that can be used. These include sort_area_size, hash_area_size,
bitmap_merge_area_size, and create_bitmap_area_size. In Oracle8i, you could set these parameters dynamically.
However, it was difficult to tune them well. More memory was often allocated to a given session than was really
needed. This resulted in wasted memory.
From Oracle 9i onwards, the PGA can be configured by setting the PGA_AGGREGATE_TARGET initialization parameter.
To instruct the Oracle Database to tune the PGA automatically, one needs to set WORKAREA_SIZE_POLICY to AUTO. If the value
of this parameter is set to MANUAL, that means work area size will be based on *_AREA_SIZE parameters like
SORT_AREA_SIZE and HASH_AREA_SIZE. Note that this is not recommended in 10g. At any given time, the amount of
memory available for the active work areas is derived from the PGA_AGGREGATE_TARGET value: it is
PGA_AGGREGATE_TARGET minus the PGA memory allocated by other sessions. Under automatic PGA memory
management mode, the main goal of Oracle is to honor the PGA_AGGREGATE_TARGET limit set by the DBA, by
controlling dynamically the amount of PGA memory allotted to SQL work areas. At the same time, Oracle tries to
maximize the performance of all the memory-intensive SQL operations by maximizing the number of work areas that
are using an optimal amount of PGA memory (cache memory). The rest of the work areas are executed in one-pass
mode, unless the PGA memory limit set by the DBA with the parameter PGA_AGGREGATE_TARGET is so low that
multi-pass execution is required to reduce even more the consumption of PGA memory and honor the PGA target
limits.
To size the PGA initially, the rule of thumb is 20% of (80% of total physical memory) for OLTP systems and
50% of (80% of total physical memory) for DSS systems. Here 80% of total physical memory is taken as the portion
available to Oracle (SGA plus PGA).
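Worked through, the rule of thumb for a 16GB OLTP host gives 16GB * 80% * 20% = roughly 2.5GB; a configuration sketch with that example value:
SQL> alter system set workarea_size_policy=AUTO scope=both;
SQL> alter system set pga_aggregate_target=2560M scope=both;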
Three statistics have been added to the V$SYSSTAT and V$SESSTAT views that relate to automated PGA memory.
These are:
• Work area executions - optimal size: represents the number of work areas that had an optimal size, where no
writes to disk were required.
• Work area executions - one pass size: represents the number of work areas that had to write to disk, but required
only one pass to disk.
• Work area executions - multipass size: represents the number of work areas that had to write to disk using
multiple passes. High numbers for this statistic might indicate a poorly tuned PGA.
New columns have been added to V$PROCESS to help tune the PGA:
PGA_USED_MEM - reports how much PGA memory the process is using.
PGA_ALLOC_MEM - the amount of PGA memory allocated to the process.
PGA_MAX_MEM - the maximum amount of PGA memory ever allocated by the process.
Finally, three new views are available to help the DBA extract information about the PGA:
V$SQL_WORKAREA - provides information about SQL work areas.
V$SQL_WORKAREA_ACTIVE - provides information on current SQL work area allocations.
V$SQL_MEMORY_USAGE - displays current memory-use statistics.
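For example, the work area statistics and per-process PGA figures can be pulled with queries like these (in V$PROCESS the allocation column is named PGA_ALLOC_MEM):
SQL> select name, value from v$sysstat where name like 'workarea executions%';
SQL> select pid, pga_used_mem, pga_alloc_mem, pga_max_mem from v$process;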
10. Explain two easy SQL optimizations?
a. EXISTS can be better than IN under various conditions.
b. UNION ALL is faster than UNION (it avoids the sort needed to eliminate duplicates).
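A small sketch of both rewrites; the EMP/DEPT tables are just placeholders, and whether EXISTS actually wins depends on data volumes and optimizer version:
-- IN form vs. EXISTS form
select d.deptno from dept d where d.deptno in (select e.deptno from emp e);
select d.deptno from dept d where exists (select 1 from emp e where e.deptno = d.deptno);
-- UNION ALL skips the sort/duplicate elimination that UNION performs
select ename from emp where deptno = 10
union all
select ename from emp where deptno = 20;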
11. What are the new features in Oracle 11gR1?
12. What are the new features in Oracle 11g R2?
13. What are the new features in Oracle 12c?
14. What process will get data from datafiles to DB cache?
Server process
15. What background process will write data to datafiles?
DBWR
16. What background process will write undo data?
DBWR
17. What are physical components of Oracle database?
An Oracle database is comprised of three types of files: one or more datafiles, two or more redo log files, and one or
more control files. The password file and parameter file also come under physical components.
18. What are logical components of Oracle database?
Blocks, Extents, Segments, Tablespaces
19. Types of segment space management?
AUTO and MANUAL
20. Types of extent management?
LMTS and DMTS
When Oracle allocates space to a segment (like a table or index), a group of contiguous free blocks, called an extent,
is added to the segment. Metadata regarding extent allocation and unallocated extents is stored either in the data
dictionary, or in the tablespace itself. Tablespaces that record extent allocation in the dictionary are called dictionary
managed tablespaces, and tablespaces that record extent allocation in the tablespace header are called locally
managed tablespaces.
SQL> select tablespace_name, extent_management, allocation_type from dba_tablespaces;
TABLESPACE_NAME EXTENT_MAN ALLOCATIO
------------------------------ ---------- ---------
SYSTEM DICTIONARY USER
SYS_UNDOTS LOCAL SYSTEM
TEMP LOCAL UNIFORM
Dictionary Managed Tablespaces (DMT):
Oracle uses the data dictionary (tables in the SYS schema) to track allocated and free extents for tablespaces that are
in "dictionary managed" mode. Free space is recorded in the SYS.FET$ table, and used space in the SYS.UET$ table.
Whenever space is required in one of these tablespaces, the ST (space transaction) enqueue must be obtained
to do inserts and deletes against these tables. As only one process can acquire the ST enqueue at a given time, this
often leads to contention.
Execute the following statement to create a dictionary managed
tablespace:
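(A representative sketch; the tablespace name, datafile path and storage values are placeholders.)
SQL> CREATE TABLESPACE dmt_ts
DATAFILE '/oradata/dbf/dmt_ts_01.dbf' SIZE 100M
EXTENT MANAGEMENT DICTIONARY
DEFAULT STORAGE (INITIAL 64K NEXT 64K PCTINCREASE 0);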
24. What is the use of redo log files? (http://www.ordba.net/Tutorials/Redolog.htm)
Explanation-1: Redo logs are transaction journals. Each transaction is recorded in the redo logs. Redo logs are used in
a serial fashion with each transaction queuing up in the redo log buffers and being written one at a time into the redo
logs. Redo logs as a general rule should switch about every thirty minutes. However, you may need to adjust the time
up or down depending on the importance of your data. The rule of thumb is to size the redo logs such that you only
lose the amount of data you can stand to lose should the online redo log become corrupt for some reason. With
modern Oracle redo log mirroring, and with disk array mirroring and various forms of online disk repair and
replacement, the occurrence of redo log corruption has dropped to practically zero, so size based on the number of
archive logs you want to apply should the database fail just before your next backup.
The LOG_BUFFER parameter controls the size of the redo log buffer. LOG_BUFFER should be set to reduce the
number of writes required per redo log, but not be so large that it results in an excessive IO wait time. Some studies
have shown that sizing bigger than one megabyte rarely results in performance gains. Generally I size LOG_BUFFER
such that it is equal to, or results in an even divisor of, the redo log size.
Monitor redo logs using the alert log and the V$LOGHIST, V$LOGFILE, V$RECOVERY_LOG and V$LOG dynamic performance views.
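For instance, log switch frequency per hour can be eyeballed from V$LOG_HISTORY:
SQL> select to_char(first_time,'YYYY-MM-DD HH24') hour, count(*) switches
from v$log_history
group by to_char(first_time,'YYYY-MM-DD HH24')
order by 1;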
Explanation-2: Redo log files record changes made to the database and are used by Oracle for system crash recovery.
Archiving of redo log files is necessary for hot (on-line) backups, and is mandatory for point-in-time recovery. Redo
log files are created upon database creation and additional ones can be added by the DBA. To enable archive redo
logging, the init.ora file must be modified, the database needs to be altered, and filesystem space is required. The
following explains a little about redo logs, how archive logging can be enabled, and how backups can be performed.
Why redo log files:
Crash Recovery: Redo log files record changes made to the database. Databases can crash in many ways, such as a
sudden power loss, a SHUTDOWN ABORT, or the death of an Oracle process. In these cases, redo log files can provide
information about how to repair the database. During the ALTER DATABASE OPEN phase of database startup, the on-
line redo log files are used for "crash recovery". This type of recovery is generally handled by Oracle and does not
require DBA intervention.
Point-In-Time Recovery: Redo log files contain information that can be useful for broader types of recovery. Since they
contain all the changes that brought the database to its current state, the redo logs can bring an old backup forward
to any point in time. However, on-line redo log files are used in a circular fashion, so it is important to make a copy of
each redo log file before it gets overwritten with new information. This can be done automatically with archive log
mode.
Hot Backups: During a hot backup, a tablespace is put into backup mode and writes are done in a special manner. During this time, tables
residing on this tablespace can be modified, however, extra information about the change is written to the redo log
files. After the tablespace backup is finished, normal on-line redo logging is resumed. Note that during a hot backup
each datafile backup is from a different point in time. And, in some cases, the datafile itself could have been modified
during the backup process. If all of these datafiles were restored, the database would be completely out of sync -
each part would be from a different time. In this case, old copies of the on-line redo log files (archived redo logs) can
be applied to each datafile to bring them all to a single point in time.
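The special write handling is triggered by putting the tablespace into backup mode; a minimal hot backup sketch for a tablespace named USERS:
SQL> alter tablespace users begin backup;
-- copy the tablespace's datafiles at the OS level here
SQL> alter tablespace users end backup;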
25. What are the uses of undo tablespace or redo segments?
Every Oracle database must have a method of maintaining information that is used to roll back, or undo, changes to
the database. Such information consists of records of the actions of transactions, primarily before they are
committed. Oracle refers to these records collectively as undo.
Undo records are used to:
Roll back transactions when a ROLLBACK statement is issued
Recover the database
Provide read consistency
When a rollback statement is issued, undo records are used to undo changes that were made to the database by the
uncommitted transaction. During database recovery, undo records are used to undo any uncommitted changes
applied from the redo log to the datafiles. Undo records provide read consistency by maintaining the before image of
the data for users who are accessing the data at the same time that another user is changing it.
Historically, Oracle has used rollback segments to store undo. Space management for these rollback segments has
proven to be quite complex. Oracle now offers another method of storing undo that eliminates the complexities of
managing rollback segment space, and enables DBAs to exert control over how long undo is retained before being
overwritten. This method uses an undo tablespace. Both of these methods of managing undo space are discussed in
this chapter.
You cannot use both methods in the same database instance, although for migration purposes it is possible, for
example, to create undo tablespaces in a database that is using rollback segments, or to drop rollback segments in a
database that is using undo tablespaces. However, you must shut down and restart your database in order to effect
the switch to another method of managing undo.
Undo vs Rollback
Earlier releases of Oracle Database used rollback segments to store undo. Oracle9i introduced automatic undo
management, which simplifies undo space management by eliminating the complexities associated with rollback
segment management. Oracle strongly recommends (Oracle 9i onwards) using an undo tablespace (automatic
undo management) to manage undo rather than rollback segments.
To see the undo management mode and other undo related information of the database:
SQL> show parameter undo
NAME                 TYPE        VALUE
-------------------- ----------- ------------------------
undo_management      string      AUTO
undo_retention       integer     900
undo_tablespace      string      UNDOTBS1
Since the advent of Oracle9i, the less time-consuming and suggested way is using Automatic Undo Management, in
which Oracle Database creates and manages rollback segments (now called "undo segments") in a special-purpose
undo tablespace. Unlike with rollback segments, we don't need to create or manage individual undo segments;
Oracle Database does that for us when we create the undo tablespace. All transactions in an instance share a single
undo tablespace. Any executing transaction can consume free space in the undo tablespace, and when the
transaction completes, its undo space is freed (depending on how it's been sized and a few other factors, like undo
retention). Thus, space for undo segments is dynamically allocated, consumed, freed, and reused, all under the
control of Oracle Database, rather than being managed manually.
Switching Rollback to Undo
1. We have to create an undo tablespace. Oracle provides a function (10g and up) that gives information on how
to size the new undo tablespace based on the configuration and usage of the rollback segments in the system.
SET SERVEROUTPUT ON
DECLARE
utbsiz_in_MB NUMBER;
BEGIN
utbsiz_in_MB := DBMS_UNDO_ADV.RBU_MIGRATION;
DBMS_OUTPUT.PUT_LINE('Suggested undo tablespace size (MB): ' || utbsiz_in_MB);
END;
/
CREATE UNDO TABLESPACE UNDOTBS
DATAFILE '/oradata/dbf/undotbs_1.dbf'
SIZE 100M AUTOEXTEND ON NEXT 10M
MAXSIZE UNLIMITED RETENTION NOGUARANTEE;
Note: In undo tablespace creation, "SEGMENT SPACE MANAGEMENT AUTO" cannot be specified.
2. Change system parameters
SQL> alter system set undo_retention=900 scope=both;
SQL> alter system set undo_tablespace=UNDOTBS scope=both;
SQL> alter system set undo_management=AUTO scope=spfile;
SQL> shutdown immediate
SQL> startup
UNDO_MANAGEMENT is a static parameter. So database needs to be restarted.
26. How undo tablespace can guarantee retain of required undo data?
SQL> alter tablespace undo_ts retention guarantee;
27. What is ORA-01555 - snapshot too old error and how do you avoid it?
The ORA-01555 is caused by Oracle read consistency mechanism. If you have a long running SQL that starts at 10:30
AM, Oracle ensures that all rows are as they appeared at 10:30 AM, even if the query runs until noon!
Oracle does this by reading the "before image" of changed rows from the online undo segments. If you have lots of
updates, long running SQL and too small UNDO, the ORA-01555 error will appear.
From the docs we see that the ORA-01555 error relates to insufficient undo storage or a too small value for the
undo_retention parameter:
ORA-01555: snapshot too old: rollback segment number string with name "string" too small
Cause: Rollback records needed by a reader for consistent read are overwritten by other writers.
Action: If in Automatic Undo Management mode, increase the setting of UNDO_RETENTION. Otherwise, use larger
rollback segments.
You can get an ORA-01555 error with a too-small undo_retention, even with a large undo tablespace. However, you can
set a super-high value for undo_retention and still get an ORA-01555 error. Also see these important notes on
commit frequency and the ORA-01555 error.
The ORA-01555 snapshot too old error can be addressed by several remedies:
• Re-schedule long-running queries for when the system has less DML load.
• Increase the size of your rollback segments (undo). The ORA-01555 snapshot too old error also relates to
your setting for automatic undo retention.
• Don't fetch between commits.
Avoiding the ORA-01555 error
Steve Adams has good notes on avoiding the ORA-01555 snapshot too old error:
• Do not run discrete transactions while sensitive queries or transactions are running, unless you are confident
that the data sets required are mutually exclusive.
• Schedule long running queries and transactions out of hours, so that the consistent gets will not need to
rollback changes made since the snapshot SCN. This also reduces the work done by the server, and thus
improves performance.
• Code long running processes as a series of restartable steps.
• Shrink all rollback segments back to their optimal size manually before running a sensitive query or
transaction to reduce risk of consistent get rollback failure due to extent deallocation.
• Use a large optimal value on all rollback segments, to delay extent reuse.
• Don't fetch across commits. That is, don't fetch on a cursor that was opened prior to the last commit,
particularly if the data queried by the cursor is being changed in the current session.
• Use a large database block size to maximize the number of slots in the rollback segment transaction tables,
and thus delay slot reuse.
• Commit less often in tasks that will run at the same time as the sensitive query, particularly in PL/SQL
procedures, to reduce transaction slot reuse.
• If necessary, add extra rollback segments (undo logs) to make more transaction slots available.
Oracle ACE Steve Karam also has advice on avoiding the ORA-01555: Snapshot too old, rollback segment too small
with UNDO sizing.
Question: I am updating 1 million rows on Oracle 10g, and I run it as batch process, committing after each batch to
avoid undo generation. But in Oracle 10g I am told undo management is automatic and I do not need run the update
as batch process.
Answer: Automatic undo management was available in 9i as well, and my guess is you were probably using it there.
However, I’ll assume for the sake of this writing that you were using manual undo management in 9i and are now on
automatic.
Automatic undo management depends upon the UNDO_RETENTION parameter, which defines how long Oracle
should try to keep committed transactions in UNDO segments. However, the UNDO_RETENTION parameter is only a
suggestion. You must also have an UNDO tablespace that’s large enough to handle the amount of UNDO you will be
generating/holding, or you will get "ORA-01555: Snapshot too old, rollback segment too small" errors.
You can use the UNDO advisor to find out how large this tablespace should be given a desired UNDO retention, or
look online for some scripts…just Google for: oracle undo size
Oracle 10g also gives you the ability to guarantee undo. This means that instead of throwing an error on SELECT
statements, it guarantees your UNDO retention for consistent reads and instead errors your DML that would cause
UNDO to be overwritten.
Now, for your original question…yes, it’s easier for the DBA to minimize the issues of UNDO when using automatic
undo management. If you set the UNDO_RETENTION high enough with a properly sized undo tablespace you
shouldn’t have as many issues with UNDO.
How often you commit should have nothing to do with it, as long as your DBA has properly set UNDO_RETENTION
and has an optimally sized UNDO tablespace. Committing more often will only result in your script taking longer,
more LGWR/DBWR issues, and the “where was I” problem if there is an error (if it errors, where did it stop?).
Lastly (and true even for manual undo management), if you commit more frequently, you make it more possible for
ORA-01555 errors to occur. Because your work will be scattered among more undo segments, you increase the
chance that a single one may be overwritten if necessary, thus causing an ORA-01555 error for those that require it
for read consistency.
It all boils down to the size of the undo tablespace and the undo retention, in the end…just as manual management
boiled down to the size, amount, and usage of rollback segments. Committing frequently is a peroxide band-aid: it
covers up the problem, tries to clean it, but in the end it just hurts and causes problems for otherwise healthy
processes.
Oracle guru Joel Garry offers another great explanation of the machinations of the ORA-01555 error:
You have to understand, in general, ORA-01555 means something else is causing it to die - Oracle needs to be able to
create a read-consistent view of the table for the query as it looked at the start of the query, and it is unable to
because something has overwritten the undo necessary to create such a view. Since you have the same table over
and over in your alert log, that probably means the something is the previous queries your monitoring software is
making, not ever releasing the transaction.
Something like:
• 10AM query starts, never ends
• 11AM query starts, never ends
• Noon query starts, never ends
• 1PM query starts
Meanwhile, the undo needed from the 10AM query for the 1PM query gets overwritten, 1PM query dies with ORA-
01555, since it needs to know what the table looked like before the 10AM query started mucking with it.
Also if the query is a loop with a commit in it, it can do the same thing without other queries, as eventually the next
iteration requires looking back at its own previous first generation, can't do it, and barfs.
Upping undo_retention may help, or may not, depending on the real cause. Also check v$undostat, you may still have
information in there if this is ongoing (or may not, since by the time you check it the needed info may be gone).
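V$UNDOSTAT keeps one row per (by default) 10-minute interval; columns such as MAXQUERYLEN and SSOLDERRCNT show the longest query and any ORA-01555 occurrences, for example:
SQL> select begin_time, maxquerylen, ssolderrcnt, unxpstealcnt
from v$undostat
order by begin_time;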
28. What is the use/size of temporary tablespace?
Temporary tablespaces are used for special operations, particularly for sorting data results on disk. For SQL with
millions of rows returned, the sort operation is too large for the RAM area and must occur on disk. The temporary
tablespace is where this takes place.
Each database should have one temporary tablespace that is created when the database is created. You create, drop
and manage tablespaces with create temporary tablespace, drop temporary tablespace and alter temporary
tablespace commands, each of which is like its create tablespace counterpart.
The only other difference is that a temporary tablespace uses temporary files (also called tempfiles) rather than
regular datafiles. Thus, instead of using the datafiles keyword you use the tempfiles keyword when issuing a create,
drop or alter tablespace command as you can see in these examples:
CREATE TEMPORARY TABLESPACE temp
TEMPFILE '/ora01/oracle/oradata/booktst_temp_01.dbf' SIZE 50m;
DROP TEMPORARY TABLESPACE temp INCLUDING CONTENTS AND DATAFILES;
Tempfiles are a bit different from datafiles in that you may not immediately see them grow to the size they have
been allocated (this particular behaviour is platform dependent). Hence, don't panic if you see a file that looks too
small.
Temporary Tablespace Group Overview
Oracle 10g first introduced the "temporary tablespace group." A temporary tablespace group consists only of temporary
tablespaces, and has the following properties:
• It contains one or more temporary tablespaces.
• It contains only temporary tablespaces.
• It is not explicitly created. It is created implicitly when the first temporary tablespace is assigned to it, and is
deleted when the last temporary tablespace is removed from the group.
Temporary Tablespace Group Benefits
A temporary tablespace group has the following benefits:
• It allows multiple default temporary tablespaces to be specified at the database level.
• It allows the user to use multiple temporary tablespaces in different sessions at the same time.
• It allows a single SQL operation to use multiple temporary tablespaces for sorting.
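A sketch of building a group; the tablespace and group names are placeholders, and the first statement assumes the TEMP tablespace created earlier exists:
SQL> alter tablespace temp tablespace group temp_grp;
SQL> create temporary tablespace temp2
TEMPFILE '/ora01/oracle/oradata/booktst_temp_02.dbf' SIZE 50m
tablespace group temp_grp;
SQL> alter database default temporary tablespace temp_grp;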
29. What is the use of password file?
Explanation-1:
As a DBA we must have used sqlplus "/ as sysdba" to connect to the database, at least a hundred times a day, and never
bothered about the password to provide!
This is because we were using OS-level authentication. We can change the configuration and make Oracle ask for
the password. Well, "/ as sysdba" works fine if we are connecting to the host where the database is actually installed.
For example, I have installed a database as the oracle01 user (which belongs to the DBA group) on one of my hosts, called
"host1". I telnet to host1 as the oracle01 user and provide the password to login. Once I successfully login to the host,
there ends the authentication part. Now, for administering the database, all I have to do is use our famous command
to connect to the database: "sqlplus / as sysdba".
The reason the above works is that I was using operating-system-level authentication. If I try to connect to the same
database as sysdba from some other host, I won't be able to connect, because the authentication is based on the
host login. Since I haven't logged into the host, authentication will fail and connect as sysdba will fail. So for OS
authentication it is mandatory that you are always logged into the host where the Oracle database resides.
Authentication Type
There are 2 types of authentication:
• OS (Operating System) Authentication
• Password File Authentication
And yes, the one explained above is OS-level authentication. Let's see what password file authentication is.
Password File Authentication
In the case of password file authentication, we create a password file for our database. ORAPWD is the utility for
creating a password file. This utility is provided by Oracle and comes with the database installation. The binary is
present in the ORACLE_HOME/bin directory. Below is the usage:
ORAPWD FILE=(file_name) password=(password) ENTRIES=(Entries)
Where file_name is the name and location of the password file. Usually we create the password file with a name like
ora(SID).pwd in the ORACLE_HOME/dbs directory, so the value for file_name becomes $ORACLE_HOME/dbs/ora(sid).pwd.
password - is the password you want to set for the password file. Remember that this will become the password for the SYS
user as well, meaning that when you are connecting as the SYS user, you need to provide this password (Oracle will
prompt for the password in case of password file authentication).
Entries - this is the number of entries that the password file can have. Be careful while providing this value: once you
set it, you cannot change it. You would have to delete the password file and recreate it, which is risky.
Example:
$orapwd FILE=/u01/oracle/product/9.2.0/dbs/oraorcl.pwd PASSWORD=welcome1 ENTRIES=10
This will create a password file oraorcl.pwd in /u01/oracle/product/9.2.0/dbs directory.
After creating the password file, how will your database know that you have created a password file and are supposed
to use it? This is done by the INIT.ORA parameter REMOTE_LOGIN_PASSWORDFILE. This
parameter can have 3 values (none - OS-level authentication; shared/exclusive - password file authentication). So for
using the password file, you need to set the value of this parameter to either shared or exclusive.
What is the difference between SHARED and EXCLUSIVE?
If we set the value of REMOTE_LOGIN_PASSWORDFILE to SHARED in the INIT.ORA file, then the following is true:
• This file can be used by more than one database (a shared file).
• Only the SYS user will be recognized by the database, meaning that you can login as SYS but as no other
user holding the SYSDBA privilege. You can still connect to the database as SYSTEM or any other user,
just not as other users holding the SYSDBA privilege.
If we set the value of REMOTE_LOGIN_PASSWORDFILE to EXCLUSIVE in the INIT.ORA file, then the following is true:
• This file will be specific to one database only; other databases cannot use this file.
• Any user having the SYSDBA privilege who is present in the password file can connect to the database
as SYSDBA from a remote server.
So when using password file authentication, remember to set the value of REMOTE_LOGIN_PASSWORDFILE to
SHARED or EXCLUSIVE in the INIT.ORA file. When using OS-level authentication, set the value of this parameter to
NONE.
Explanation-2: If the DBA wants to start up an Oracle instance, there must be a way for Oracle to authenticate this
DBA, that is, to check if (s)he is allowed to do so. Obviously, the password cannot be stored in the database, because
Oracle cannot access the database before the instance is started up. Therefore, the authentication of the DBA must
happen outside of the database. There are two distinct mechanisms to authenticate the DBA: using the password file
or through the operating system.
The init parameter remote_login_passwordfile specifies whether a password file is used to authenticate the DBA or
not. If it is set to either shared or exclusive, a password file will be used.
Scenario:
QUICK REFERENCE
Step 1. Log on the database machine and create a password file:
For Unix (Shell)
orapwd file=$ORACLE_HOME/dbs/orapw password=password_for_sys
For Windows (Command Prompt)
orapwd file=%ORACLE_HOME%\database\PWDsid_name.ora
password=password_for_sys
Step 2. Add the following line to initservice_name.ora in UNIX, or init.ora in Windows:
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
Step 3. Restart the Database and Test the Remote Login.
connect sys/password_for_sys@tns_name_of_db as sysdba
SYSDBA AUTHENTICATING APPROACHES
A SYSDBA authenticating approach is the method of verifying the identity of database administrators. On the layer of
data dictionary, Oracle database administrators are authenticated using an account password like other users. In
addition to the normal data dictionary, the following approaches are available to secure the authentication of
administrators with the SYSDBA privilege:
* Operating-System-based Authentication;
* Password-File-based Authentication;
* Strong and Centralized Authentication (from 11g on).
Operating-System-Based Authentication:
It means to authenticate database administrators by establishing a user group on the operating system, granting
Oracle DBA privileges to that group, and then adding the database administrative users to that group. Users
authenticated in this way can logon to the Oracle database as a SYSDBA without having to enter a user name or
password (i.e. "connect / as sysdba"). On UNIX platform, the special user group is called the DBA group, and on
Windows systems, it is called the ORA_DBA group.
Password-File-Based Authentication:
Oracle Database uses database-specific password files to keep track of the database users who have been granted
the SYSDBA and SYSOPER privileges
Strong and Centralized Authentication:
This authenticating approach (from 11g on) is featured by a network-based authentication service, such as Oracle
Internet Directory. It is recommended by Oracle for the centralized control of SYSDBA access to multiple databases.
One of the following methods can be used to enable the Oracle Internet Directory server to authorize SYSDBA
connections:
* Directory Authentication;
* Kerberos Authentication;
* Secure Sockets Layer Authentication.
CONFIGURING STEPS
To use the password file authentication, you must configure the database to use a password file. To do so, you first
need to create the password file, and then configure the database so that it knows to use it. Steps 1 to 3 require the
local login to the database server.
Step 1: Create the Password File
To set a password file on the server-side, log on the server machine where the remote Oracle database resides.
Create the database password file by using the Oracle utility "orapwd."
The Orapwd Command For Oracle 8.1.7 through 10g :
Usage: orapwd file=<filename> password=<password> [entries=<numusers>] where
* file - (mandatory) The password filename (Refer to Notice 1);
* password - (mandatory) The password for the sys user (Refer to Notice 3);
* entries - (Optional) Maximum number of entries (user accounts) to permit in the file (Refer to Notice 2);
There are no spaces around the equal-to (=) character.
In UNIX:
For Shell :
orapwd file=$ORACLE_HOME/dbs/orapw password=change_on_install entries=30
For SQL* Plus :
host orapwd file=$ORACLE_HOME/dbs/orapw password=change_on_install entries=30
The above command creates a password file named "orapw" that allows up to 30 privileged users with different
passwords.
In Windows:
For Command Prompt:
orapwd file=%ORACLE_HOME%\database\PWDorcl92.ora password=change_on_install entries=30
For SQL* Plus :
host orapwd file=%ORACLE_HOME%\database\PWDorcl92.ora password=change_on_install entries=30
The above command creates a password file named "PWDorcl92" that allows up to 30 privileged users with different
passwords.
The Orapwd Command For Oracle 11g Release 1 :
Usage: orapwd file=<filename> [entries=<numusers>] [force={y|n}] [ignorecase={y|n}] [nosysdba={y|n}]
where
* file - (mandatory) The password filename ;
* entries - (Optional) Maximum number of entries (user accounts) to permit in the file;
* force - (Optional) If y, permits overwriting an existing password file;
* ignorecase - (Optional) If y, passwords are treated as case-insensitive;
* nosysdba - (Optional) For Data Vault installations
There are no spaces around the equal-to (=) character.
The command, when executed, prompts for the SYS password and stores the password in the created password file.
Orapwd Command Examples:
In UNIX :
orapwd file=$ORACLE_HOME/dbs/orapw entries=30
Enter password: change_on_install
The above commands create a password file named "orapw" that has "change_on_install" as the password for the
sys user and allows up to 30 privileged users with different passwords.
In Windows :
orapwd file=%ORACLE_HOME%\database\PWDorcl11.ora entries=30
Enter password: change_on_install
The above commands create a password file named "PWDorcl11" that has "change_on_install" as the password for
the sys user and allows up to 30 privileged users with different passwords.
Step 2: Configure the Database to Use the Password File
By default, an Oracle database is not configured to use the password file. However, you'd better first verify the value
of the parameter "remote_login_passwordfile" in initservice_name.ora, in UNIX, or init.ora, in Windows. If the value
is "exclusive," continue with Step 3: Restart the Database. If the value is "shared," or if the line
"REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE" is commented out, continue with the procedure below: Stop the
Database.
Use the SQLPlus show statement to check the parameter value:
SQL> show parameter password;
NAME TYPE VALUE
----------------------------------------- ------------------------ ------------------------
remote_login_passwordfile string EXCLUSIVE
Stop the database by stopping the services or using the SQLPlus shutdown immediate statement.
Add the following line to initservice_name.ora, in UNIX, or init.ora, in Windows:
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
Step 3: Restart the Database
Start the database again so that the changed parameter takes effect.
Step 4 : (Optional) Change the Password for the SYS User
SQL>PASSWORD sys;
Changing password for sys
New password: password
Retype new password: password
Step 5 : Verify Whether SYS Has the SYSDBA Privilege
Use the SQLPlus select statement to check the password file users:
SQL> select * from v$pwfile_users;
USERNAME SYSDB SYSOP
----------------------- ----------------- -------------
SYS TRUE TRUE
30. How to create password file?
$ orapwd file=orapwSID password=sys_password force=y
31. How many types of indexes are there? (http://www.orafaq.com/node/1403)
Clustered and Non-Clustered
1. B-Tree index
2. Bitmap index
3. Unique index
4. Function based index
5. Implicit index and explicit index
Explicit indexes are again of many types like simple index, unique index, bitmap index, functional index,
organizational index, cluster index.
Explanation-1:
B*Tree Indexes: B*tree stands for balanced tree. This means that the height of the index is the same for all values
thereby ensuring that retrieving the data for any one value takes approximately the same amount of time as for any
other value. Oracle b*tree indexes are best used when each value has a high cardinality (low number of
occurrences), for example primary key indexes or unique indexes. One important point to note is that NULL values are
not indexed. They are the most common type of index in OLTP systems.
B*Tree Cluster Indexes: These are B*tree index defined for clusters. Clusters are two or more tables with one or
more common columns and are usually accessed together (via a join).
CREATE INDEX product_orders_ix ON CLUSTER product_orders;
Hash Cluster Indexes: In a hash cluster rows that have the same hash key value (generated by a hash function) are
stored together in the Oracle database. Hash clusters are equivalent to indexed clusters, except the index key is
replaced with a hash function. This also means that here is no separate index as the hash is the index.
CREATE CLUSTER emp_dept_cluster (dept_id NUMBER) HASHKEYS 50;
Reverse Key Indexes: These are typically used in Oracle Real Application Cluster (RAC) applications. In this type of
index the bytes of each of the indexed columns are reversed (but the column order is maintained). This is useful
when new data is always inserted at one end of the index as occurs when using a sequence as it ensures new index
values are created evenly across the leaf blocks preventing the index from becoming unbalanced which may in turn
affect performance.
CREATE INDEX emp_ix ON emp(emp_id) REVERSE;
Bitmap Indexes: These are commonly used in datawarehouse applications for tables with no updates and whose
columns have low cardinality (i.e. there are few distinct values). In this type of index Oracle stores a bitmap for each
distinct value in the index with 1 bit for each row in the table. These bitmaps are expensive to maintain and are
therefore not suitable for applications which make a lot of writes to the data.
For example consider a car manufacturer which records information about cars sold including the colour of each car.
Each colour is likely to occur many times and is therefore suitable for a bitmap index.
CREATE BITMAP INDEX car_col ON cars(colour);
Partitioned Indexes: Partitioned Indexes are also useful in Oracle datawarehouse applications where there is a large
amount of data that is partitioned by a particular dimension such as time.
Partition indexes can either be created as local partitioned indexes or global partitioned indexes. Local partitioned
indexes means that the index is partitioned on the same columns and with the same number of partitions as the
table. For global partitioned indexes the partitioning is user defined and is not the same as the underlying table.
Function-based Indexes: As the name suggests these are indexes created on the result of a function modifying a
column value. For example
CREATE INDEX upp_ename ON emp(UPPER(ename));
The function must be deterministic (always return the same value for the same inputs).
Index Organised Tables: In an index-organised table all the data is stored in the Oracle database in a B*tree index
structure defined on the table's primary key. This is ideal when related pieces of data must be stored together or data
must be physically stored in a specific order. Index-organised tables are often used for information retrieval, spatial
and OLAP applications.
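A minimal IOT sketch (table and column names are placeholders; the primary key is mandatory for an IOT):
SQL> create table locations (
loc_id number primary key,
loc_name varchar2(40)
) organization index;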
Domain Indexes: These indexes are created by user-defined indexing routines and enable the user to define his or
her own indexes on custom data types (domains) such as pictures, maps or fingerprints for example. These type of
index require in-depth knowledge about the data and how it will be accessed.
Oracle includes numerous data structures to improve the speed of Oracle SQL queries. Taking advantage of the low
cost of disk storage, Oracle includes many new indexing algorithms that dramatically increase the speed with which
Oracle queries are serviced. This article explores the internals of Oracle indexing; reviews the standard b-tree index,
bitmap indexes, function-based indexes, and index-only tables (IOTs); and demonstrates how these indexes may
dramatically increase the speed of Oracle SQL queries.
Oracle uses indexes to avoid the need for large-table, full-table scans and disk sorts, which are required when the SQL
optimizer cannot find an efficient way to service the SQL query. I begin our look at Oracle indexing with a review of
standard Oracle b-tree index methodologies.
The Oracle b-tree index: The oldest and most popular type of Oracle indexing is a standard b-tree index, which excels
at servicing simple queries. The b-tree index was introduced in the earliest releases of Oracle and remains widely
used with Oracle.
B-tree indexes are used to avoid large sorting operations. For example, a SQL query requiring 10,000 rows to be
presented in sorted order will often use a b-tree index to avoid the very large sort required to deliver the data to the
end user.
Oracle offers several options when creating an index using the default b-tree structure. It allows you to index on
multiple columns (concatenated indexes) to improve access speeds. Also, it allows for individual columns to be sorted
in different orders. For example, we could create a b-tree index on a column called last_name in ascending order and
have a second column within the index that displays the salary column in descending order.
create index
name_salary_idx
on
person
(
last_name asc,
salary desc);
While b-tree indexes are great for simple queries, they are not very good for the following situations:
• Low-cardinality columns—columns with less than 200 distinct values do not have the selectivity required in
order to benefit from standard b-tree index structures.
• No support for SQL functions—B-tree indexes are not able to support SQL queries using Oracle's built-in
functions. Oracle9i provides a variety of built-in functions that allow SQL statements to query on a piece of an
indexed column or on any one of a number of transformations against the indexed column.
Prior to Oracle9i, the Oracle SQL optimizer had to perform time-consuming long-table, full-table scans due to these
shortcomings. Consequently, it was no surprise when Oracle introduced more robust types of indexing structures.
Bitmapped indexes: Oracle bitmap indexes are very different from standard b-tree indexes. In a bitmap structure, a
bitmap is created for each distinct value of the indexed column, with one bit for every row in the table; set bits mark
the rows that contain that value. At row retrieval time, Oracle decompresses the bitmap into the RAM
data buffers so it can be rapidly scanned for matching values. These matching values are delivered to Oracle in the
form of a Row-ID list, and these Row-ID values may directly access the required information.
The real benefit of bitmapped indexing occurs when one table includes multiple bitmapped indexes. Each individual
column may have low cardinality. The creation of multiple bitmapped indexes provides a very powerful method for
rapidly answering difficult SQL queries.
For example, assume there is a motor vehicle database with numerous low-cardinality columns such as car_color,
car_make, car_model, and car_year. Each column contains fewer than 100 distinct values by itself, and a b-tree
index would be fairly useless in a database of 20 million vehicles. However, combining these indexes in a
query can provide blistering response times, far faster than the traditional method of reading each of the 20
million rows in the base table. For example, assume we wanted to find old blue Toyota Corollas manufactured in
1981:
select
license_plate_nbr
from
vehicle
where
color = 'blue'
and
make = 'toyota'
and
year = 1981;
Oracle uses a specialized optimizer method called a bitmapped index merge to service this query. In a bitmapped
index merge, each Row-ID, or RID, list is built independently by using the bitmaps, and a special merge routine is used
in order to compare the RID lists and find the intersecting values. Using this methodology, Oracle can provide sub-
second response time when working against multiple low-cardinality columns.
Function-based indexes: One of the most important advances in Oracle indexing is the introduction of function-
based indexing. Function-based indexes allow creation of indexes on expressions, internal functions, and user-written
functions in PL/SQL and Java. Function-based indexes ensure that the Oracle designer is able to use an index for
queries whose predicates apply functions to the indexed columns.
Prior to Oracle8, a query that applied a built-in function to an indexed column could not use the index.
Consequently, Oracle would perform the dreaded full-table scan. Examples of SQL with function-based predicates
include the following:
Select * from customer where substr(cust_name,1,4) = 'BURL';
Select * from customer where to_char(order_date,'MM') = '01';
Select * from customer where upper(cust_name) = 'JONES';
Select * from customer where initcap(first_name) = 'Mike';
Oracle always interrogates the where clause of the SQL statement to see if a matching index exists. By using
function-based indexes, the Oracle designer can create an index that exactly matches the predicates within
the SQL where clause. This ensures that the query is serviced with a minimal amount of disk I/O and the fastest
possible speed.
Once a function-based index is created, you need to gather CBO statistics on it, but beware that there have
historically been bugs and issues when analyzing a function-based index; check the relevant support notes on
statistics and function-based indexes.
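As a minimal sketch (using the customer table from the queries above; the index name is illustrative), here is a function-based index that lets the upper(cust_name) predicate use an index, followed by the statistics gathering it requires:
create index cust_name_upper_idx
on customer (upper(cust_name));
-- Function-based indexes need fresh optimizer statistics before the CBO will use them:
exec dbms_stats.gather_table_stats(user, 'CUSTOMER', cascade => TRUE);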
Index-only tables: Beginning with Oracle8, Oracle recognized that a table with an index on every column did not
require table rows. In other words, Oracle recognized that by using a special table-access method called an index fast
full scan, the index could be queried without actually touching the data itself.
Oracle codified this idea with its index-only table (IOT) structure. When using an IOT, Oracle does not create
the actual table but instead keeps all of the required information inside the Oracle index. At query time, the Oracle
SQL optimizer recognizes that all of the values necessary to service the query exist within the index tree. The
cost-based optimizer then has a choice: either read through the index tree nodes to pull the information in sorted
order, or invoke an index fast full scan, which reads the index blocks in the same fashion as a full table scan, using
sequential prefetch (as defined by the db_file_multiblock_read_count parameter). The multiblock
read facility allows Oracle to very quickly scan index blocks in linear order, quickly reading every block within the
index tablespace. Here is an example of the syntax to create an IOT.
CREATE TABLE emp_iot (
emp_id number,
ename varchar2(20),
sal number(9,2),
deptno number,
CONSTRAINT pk_emp_iot_index PRIMARY KEY (emp_id) )
ORGANIZATION index
TABLESPACE spc_demo_ts_01
PCTHRESHOLD 20 INCLUDING ename;
Index performance
Oracle indexes can greatly improve query performance but there are some important indexing concepts to
understand.
• Index clustering
• Index blocksizes
Indexes and blocksize: Indexes that experience lots of index range scans or index fast full scans (as evidenced by
multiblock reads) will greatly benefit from residing in a 32k blocksize.
Today, most Oracle tuning experts utilize the multiple blocksize feature of Oracle because it provides buffer
segregation and the ability to place objects with the most appropriate blocksize to reduce buffer waste. Some of the
world record Oracle benchmarks use very large data buffers and multiple blocksizes.
According to an article by Christopher Foot, author of the OCP Instructors Guide for Oracle DBA Certification, larger
block sizes can help in certain situations:
"A bigger block size means more space for key storage in the branch nodes of B-tree indexes, which reduces index
height and improves the performance of indexed queries."
In any case, there appears to be evidence that block size affects the index tree structure, and therefore the
performance of queries that use the index.
Indexes and clustering: The CBO's decision to perform a full-table vs. an index range scan is influenced by the
clustering_factor (located inside the dba_indexes view), db_block_size, and avg_row_len. It is important to
understand how the CBO uses these statistics to determine the fastest way to deliver the desired rows.
A low clustering_factor, approaching the number of blocks in the table, indicates that the rows are physically stored
in the same sequence as the index, so index range scans need few table I/Os. Conversely, a high clustering_factor,
where the value approaches the number of rows in the table (num_rows), indicates that the rows are not in the same
sequence as the index, and additional I/O will be required for index range scans.
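For illustration, these statistics can be inspected directly; a sketch against the documented dictionary views (the table name is hypothetical):
SELECT i.index_name, i.clustering_factor, t.num_rows, t.blocks
FROM dba_indexes i, dba_tables t
WHERE i.table_name = t.table_name
AND i.table_owner = t.owner
AND t.table_name = 'EMP';
-- clustering_factor close to t.blocks: rows follow the index order (good);
-- clustering_factor close to t.num_rows: rows are scattered (poor for range scans).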
Oracle MOSC Note:223117.1 has some great advice for tuning-down “db file sequential read” waits by table
reorganization in row-order:
- If Index Range scans are involved, more blocks than necessary could be being visited if the index is unselective: by
forcing or enabling the use of a more selective index, we can access the same table data by visiting fewer index blocks
(and doing fewer physical I/Os).
- If the index being used has a large Clustering Factor, then more table data blocks have to be visited in order to get
the rows in each Index block: by rebuilding the table with its rows sorted by the particular index columns we can
reduce the Clustering Factor and hence the number of table data blocks that we have to visit for each index block.
This validates the assertion that the physical ordering of table rows can reduce I/O (and stress on the database) for
many SQL queries.
Tip! In some cases Oracle is able to bypass a sort by reading the data in sorted order from the index. Oracle will even
read data in reverse order from an index to avoid an in-memory sort.
32. What is bitmap index & when it’ll be used?
- Bitmap indexes are preferred in data warehousing environments. Refer Q31
- Preferred when cardinality is low.
33. What is B-tree index & when it’ll be used?
- B-tree indexes are preferred in OLTP environments. Refer Q31
- Preferred when cardinality is high.
34. How you will find out fragmentation of index?
- AUTO_SPACE_ADVISOR_JOB runs in the daily maintenance window and reports fragmented
indexes/tables.
SQL> ANALYZE INDEX <index_name> VALIDATE STRUCTURE;
This populates the table ‘INDEX_STATS’. It should be noted that this table contains only one row and therefore only
one index can be analyzed at a time.
An index should be considered for rebuilding under any of the following conditions:
* the percentage of deleted rows exceeds 30% of the total, i.e. if del_lf_rows / lf_rows > 0.3.
* If the ‘HEIGHT’ is greater than 4.
* If the number of rows in the index (‘LF_ROWS’) is significantly smaller than ‘LF_BLKS’ this can indicate a large
number of deletes, indicating that the index should be rebuilt.
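A minimal sketch of this check (the index name is hypothetical; remember INDEX_STATS holds only the index validated last in the session):
SQL> ANALYZE INDEX emp_name_idx VALIDATE STRUCTURE;
SQL> SELECT name, height, lf_rows, lf_blks, del_lf_rows,
            round(del_lf_rows/lf_rows*100,2) pct_deleted   -- assumes the index has leaf rows
     FROM index_stats;
-- Consider a rebuild if pct_deleted > 30 or height > 4.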
35. What is the difference between delete and truncate?
Truncate releases the space; delete does not.
Delete can be used to remove selected rows; truncate removes all rows.
Delete can be rolled back; truncate cannot.
Delete generates undo for every row it removes (the changes are logged, so deleted data can be rolled back),
whereas truncate simply deallocates the data without row-level undo, which is why truncated data cannot be rolled
back.
Truncate is a DDL statement whereas delete is a DML statement.
Truncate is faster than delete.
36. What's the difference between a primary key and a unique key?
Both primary and unique keys enforce uniqueness of the column(s) on which they are defined, and both are enforced
through an index. A table can have only one primary key but many unique keys. A primary key does not allow NULLs;
a unique key column can contain NULLs (in Oracle, multiple rows with NULL keys are allowed, because NULLs are not
considered equal to each other).
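A small illustration of the NULL behavior (table and column names are hypothetical):
CREATE TABLE demo_keys (
  id    NUMBER PRIMARY KEY,        -- NOT NULL is enforced automatically
  email VARCHAR2(100) UNIQUE       -- NULLs are permitted
);
INSERT INTO demo_keys VALUES (1, NULL);          -- succeeds
INSERT INTO demo_keys VALUES (2, NULL);          -- also succeeds: NULL keys don't collide
INSERT INTO demo_keys VALUES (NULL, 'a@b.com');  -- fails with ORA-01400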
37. What is the difference between schema and user?
A schema is the collection of objects owned by a user; the user is the database account itself. Creating a user
automatically creates a schema of the same name.
38. What is the difference between SYSDBA, SYSOPER and SYSASM?
SYSOPER can’t create and drop database.
SYSOPER can’t do incomplete recovery.
SYSOPER can’t change character set.
SYSOPER can’t CREATE DISKGROUP; ADD/DROP/RESIZE DISK
SYSASM (introduced in 11g) is meant for administering ASM instances, separating ASM administration from database
administration.
39. What is the difference between SYS and SYSTEM?
SYSTEM can’t shutdown the database.
SYSTEM can’t create another SYSTEM, but SYS can create another SYS or SYSTEM.
Explanation-1: In general, unless the documentation tells you, you will NEVER LOG IN as sys or system, they are our
internal data dictionary accounts and not for your use. You will be best served by forgetting they exist.
Sysdba and sysoper are ROLES - they are not users, not schemas. The SYSDBA role is like "root" on UNIX or
"Administrator" on Windows. It sees all, can do all. Internally, if you connect as sysdba, your schema name will
appear to be SYS.
In real life, you hardly EVER need sysdba - typically only during an upgrade or patch.
Sysoper is another role, if you connect as sysoper, you'll be in a schema "public" and will only be able to do things
granted to public AND start/stop the database. Sysoper is something you should use to startup and shutdown. You'll
use sysoper much more often than sysdba.
Do not grant sysdba to anyone unless and until you have absolutely verified they have the NEED for sysdba - the
same with sysoper.
Explanation-2: The following administrative user accounts are automatically created when you install Oracle
Database. They are both created with the password that you supplied upon installation, and they are both
automatically granted the DBA role.
SYS
This account can perform all administrative functions. All base (underlying) tables and views for the database data
dictionary are stored in the SYS schema. These base tables and views are critical for the operation of Oracle Database.
To maintain the integrity of the data dictionary, tables in the SYS schema are manipulated only by the database. They
should never be modified by any user or database administrator. You must not create any tables in the SYS schema.
The SYS user is granted the SYSDBA privilege, which enables a user to perform high-level administrative tasks such as
backup and recovery.
SYSTEM
This account can perform all administrative functions except the following:
• Backup and recovery
• Database upgrade
While this account can be used to perform day-to-day administrative tasks, Oracle strongly recommends creating
named user accounts for administering the Oracle database, to enable monitoring of database activity.
SYSDBA and SYSOPER System Privileges
SYSDBA and SYSOPER are administrative privileges required to perform high-level administrative operations such as
creating, starting up, shutting down, backing up, or recovering the database. The SYSDBA system privilege is for fully
empowered database administrators and the SYSOPER system privilege allows a user to perform basic operational
tasks, but without the ability to look at user data.
The SYSDBA and SYSOPER system privileges allow access to a database instance even when the database is not open.
Control of these privileges is therefore completely outside of the database itself. This control enables an
administrator who is granted one of these privileges to connect to the database instance to start the database.
You can also think of the SYSDBA and SYSOPER privileges as types of connections that enable you to perform certain
database operations for which privileges cannot be granted in any other way. For example, if you have the SYSDBA
privilege, then you can connect to the database using AS SYSDBA.
The SYS user is automatically granted the SYSDBA privilege upon installation. When you log in as user SYS, you must
connect to the database as SYSDBA or SYSOPER. Connecting as a SYSDBA user invokes the SYSDBA privilege;
connecting as SYSOPER invokes the SYSOPER privilege. Oracle Enterprise Manager Database Control does not permit
you to log in as user SYS without connecting as SYSDBA or SYSOPER.
When you connect with the SYSDBA or SYSOPER privilege, you connect with a default schema, not with the schema
that is generally associated with your user name. For SYSDBA this schema is SYS; for SYSOPER the schema is PUBLIC.
Explanation-3:
Differences between the SYS and SYSTEM users:
(1) Importance of the data stored:
[SYS] The base tables and views of the Oracle data dictionary are stored in the SYS schema. These base tables and
views are critical for the operation of Oracle; they are maintained by the database itself and must never be changed
manually by any user.
* SYS has the DBA, SYSDBA and SYSOPER roles/privileges and is the most highly privileged Oracle account.
[SYSTEM] Used to store second-level internal data, such as management information for Oracle features and tools.
* SYSTEM has the ordinary DBA role.
(2) Privileges:
SYSTEM can only log in with NORMAL identity, unless you grant it the SYSDBA or SYSOPER system privilege.
SYS can only log in AS SYSDBA or AS SYSOPER; it cannot connect with NORMAL identity.
Logged in as SYS, you can query V$PWFILE_USERS to see which users hold the SYSDBA privilege:
SQL> select * from v$pwfile_users;
USERNAME SYSDBA SYSOPER
--------------------------------------
SYS TRUE TRUE
Differences between the NORMAL, SYSDBA and SYSOPER connection modes:
1) NORMAL: an ordinary user connection.
2) SYSDBA: the highest system privilege; after logging in you are the SYS user.
3) SYSOPER: mainly used to start up and shut down the database; after logging in the effective schema is PUBLIC.
4) SYSDBA and SYSOPER are system privileges, also known as administrative privileges, covering system-level
management operations such as opening and shutting down the database. (The specific permissions of SYSDBA and
SYSOPER are tabulated in the Oracle documentation.)
When SYSTEM logs in with NORMAL identity, it is simply an ordinary DBA user. If SYSTEM logs in AS SYSDBA, it is
actually logged in as the SYS user, as the login information shows.
Principle: objects created in a connection made AS SYSDBA are actually created in the SYS schema. Any other user
connecting AS SYSDBA likewise logs in as the SYS user.
See the following experiment:
SQL> create user strong identified by strong;
User created.
SQL> conn strong/strong@magick as sysdba;
Connected.
SQL> show user;
USER is "SYS"
SQL> create table test (a int);
Table created.
SQL> select owner from dba_tables where table_name = 'test';
no rows selected
-- Oracle stores unquoted identifiers in uppercase, so the lowercase name does not exist.
SQL> select owner from dba_tables where table_name = 'TEST';
OWNER
------------------------------
SYS
40. What is the difference between view and materialized view?
Materialized views: Materialized views are disk based and are updated periodically based upon the query definition
Views: Views are virtual only and run the query definition each time they are accessed
Views evaluate the data in the tables underlying the view definition at the time the view is queried. A view is a logical
view of your tables, with no data stored anywhere else. The upside of a view is that it will always return the latest
data to you. The downside of a view is that its performance depends on how good a select statement the view is
based on. If the select statement used by the view joins many tables, or uses joins based on non-indexed columns,
the view can perform poorly.
Materialized views are similar to regular views, in that they are a logical view of your data (based on a select
statement); however, the underlying query result set has been saved to a table. The upside of this is that
when you query a materialized view, you are querying a table, which may also be indexed. In addition, because all the
joins have been resolved at materialized view refresh time, you pay the price of the join once (or as often as you
refresh your materialized view), rather than each time you select from the materialized view. In addition, with query
rewrite enabled, Oracle can optimize a query that selects from the source of your materialized view in such a way
that it instead reads from your materialized view. In situations where you create materialized views as forms of
aggregate tables, or as copies of frequently executed queries, this can greatly speed up the response time of your end
user application. The downside, though, is that the data you get back from the materialized view is only as up to date
as the last time the materialized view was refreshed.
Materialized views can be set to refresh manually, on a set schedule, or based on the database detecting a change in
data from one of the underlying tables. Materialized views can be incrementally updated by combining them with
materialized view logs, which act as change data capture sources on the underlying tables.
Materialized views are most often used in data warehousing / business intelligence applications where querying large
fact tables with thousands of millions of rows would otherwise result in query response times that make the
application unusable.
Explanation-2:
View is logical, stores only the query, and always returns the latest data.
Mview is physical, stores the data, and may not reflect the latest data.
41. What are materialized view refresh types and which is default?
Complete, fast, force (default)
COMPLETE Refreshes by recalculating the materialized view's defining query.
FAST Applies incremental changes to refresh the materialized view using the information logged in the
materialized view logs, or from a SQL*Loader direct-path or a partition maintenance operation.
FORCE Applies FAST refresh if possible; otherwise, it applies COMPLETE refresh.
NEVER Indicates that the materialized view will not be refreshed with refresh mechanisms.
FAST:
Specify FAST to indicate the incremental refresh method, which performs the refresh according to the changes that
have occurred to the master tables. The changes from conventional DML are stored in the materialized view
log associated with the master table. The changes from direct-path INSERT operations are stored in the direct loader
log.
If you specify REFRESH FAST, then the CREATE statement will fail unless materialized view logs already exist for the
materialized view master tables. Oracle Database creates the direct loader log automatically when a direct-path
INSERT takes place. No user intervention is needed.
For both conventional DML changes and for direct-path INSERT operations, other conditions may restrict the
eligibility of a materialized view for fast refresh.
Materialized views are not eligible for fast refresh if the defining query contains an analytic function.
COMPLETE:
Specify COMPLETE to indicate the complete refresh method, which is implemented by executing the defining query
of the materialized view. If you request a complete refresh, then Oracle Database performs a complete refresh even
if a fast refresh is possible.
FORCE:
Specify FORCE to indicate that when a refresh occurs, Oracle Database will perform a fast refresh if one is possible or
a complete refresh if fast refresh is not possible. If you do not specify a refresh method (FAST, COMPLETE, or FORCE),
then FORCE is the default.
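A minimal sketch tying these together (emp and mv_emp are illustrative names): a primary-key materialized view log on the master table, a fast-refreshable mview, and an on-demand refresh.
CREATE MATERIALIZED VIEW LOG ON emp WITH PRIMARY KEY;

CREATE MATERIALIZED VIEW mv_emp
REFRESH FAST ON DEMAND
AS SELECT empno, ename, sal FROM emp;

-- method 'F' = fast, 'C' = complete, '?' = force
EXEC DBMS_MVIEW.REFRESH('mv_emp', method => 'F');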
42. How fast refresh happens?
During a fast refresh, Oracle applies only the changes recorded since the last refresh, taken from the materialized
view log on the master table (for conventional DML) or from the direct loader log (for direct-path inserts), instead of
re-executing the mview's defining query.
43. How to find out when was a materialized view refreshed?
Query dba_mviews or dba_mview_analysis or dba_mview_refresh_times
SQL> select MVIEW_NAME, to_char(LAST_REFRESH_DATE,'YYYY-MM-DD HH24:MI:SS') from dba_mviews;
(or)
SQL> select NAME, to_char(LAST_REFRESH,'YYYY-MM-DD HH24:MI:SS') from dba_mview_refresh_times;
(or)
SQL> select MVIEW_NAME, to_char(LAST_REFRESH_DATE,'YYYY-MM-DD HH24:MI:SS') from dba_mview_analysis;
44. What is materialized view log (type)?
A materialized view log is a table on the master table that records changes made to the master, enabling fast
(incremental) refresh of dependent materialized views. A log can be created WITH ROWID, WITH PRIMARY KEY, or
WITH OBJECT ID, and can additionally capture column values and sequence ordering.
45. What is atomic refresh in mviews?
From Oracle 10g, complete refresh of single materialized view can do delete instead of truncate. To force the refresh
to do truncate instead of delete, parameter ATOMIC_REFRESH must be set to false.
ATOMIC_REFRESH = FALSE, mview will be truncated and whole data will be inserted. The refresh will go faster, and
no undo will be generated.
ATOMIC_REFRESH = TRUE (default), mview will be deleted and whole data will be inserted. Undo will be generated.
We will have access at all times even while it is being refreshed.
SQL> EXEC DBMS_MVIEW.REFRESH('mv_emp', 'C', atomic_refresh => FALSE);
46. How to find out whether database/tablespace/datafile is in backup mode or not?
Query V$BACKUP view.
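For example (a sketch; joining to dba_data_files simply maps file numbers to names):
SQL> SELECT file#, status FROM v$backup;
-- STATUS = 'ACTIVE' means the datafile is in backup mode; 'NOT ACTIVE' means it is not.
SQL> SELECT d.file_name, b.status
     FROM v$backup b, dba_data_files d
     WHERE b.file# = d.file_id;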
47. What is row chaining?
Explanation-1: If a row is too large to fit into an empty data block, Oracle stores the data for the row in a chain of one
or more data blocks. This can occur when the row is inserted.
Explanation-2: A row is too large to fit into a single database block. For example, if you use a 4KB blocksize for your
database and you need to insert a row of 8KB into it, Oracle will use 3 blocks and store the row in pieces. Some
conditions that will cause row chaining are: tables whose rowsize exceeds the blocksize; tables with LONG and LONG
RAW columns, which are prone to having chained rows; and tables with more than 255 columns, which will have
chained rows as Oracle breaks wide tables up into pieces. So, instead of just having a forwarding address on one
block and the data on another, we have data on two or more blocks.
Chained rows affect us differently. Here, it depends on the data we need. If we had a row with two columns that was
spread over two blocks, the query:
SELECT column1 FROM table
where column1 is in Block 1, would not cause any «table fetch continued row». It would not actually have to get
column2, it would not follow the chained row all of the way out. On the other hand, if we ask for:
SELECT column2 FROM table
and column2 is in Block 2 due to row chaining, then you would in fact see a «table fetch continued row»
48. What is row migration?
Explanation-1: An update statement increases the amount of data in a row so that the row no longer fits in its data
block. Oracle then tries to find another free block with enough space to hold the entire row; if such a block is
available, Oracle moves the entire row to the new block.
Row Migration
Explanation-2: We will migrate a row when an update to that row would cause it to not fit on the block anymore
(with all of the other data that exists there currently). A migration means that the entire row will move and we just
leave behind the «forwarding address». So, the original block just has the rowid of the new block and the entire row
is moved.
Example
The following example was published by Tom Kyte; it shows row migration and chaining. We are using a 4k block
size:
SELECT name,value
FROM v$parameter
WHERE name = 'db_block_size';
NAME VALUE
-------------- ------
db_block_size 4096
Create the following table with CHAR fixed columns:
CREATE TABLE row_mig_chain_demo (
x int PRIMARY KEY,
a CHAR(1000),
b CHAR(1000),
c CHAR(1000),
d CHAR(1000),
e CHAR(1000)
);
That is our table. The CHAR(1000)'s will let us easily cause rows to migrate or chain. We used 5 columns a,b,c,d,e so
that the total rowsize can grow to about 5K, bigger than one block, ensuring we can truly chain a row.
INSERT INTO row_mig_chain_demo (x) VALUES (1);
INSERT INTO row_mig_chain_demo (x) VALUES (2);
INSERT INTO row_mig_chain_demo (x) VALUES (3);
COMMIT;
We are not interested in seeing a,b,c,d,e - just in fetching them. They are really wide, so we'll suppress their display.
column a noprint
column b noprint
column c noprint
column d noprint
column e noprint
SELECT * FROM row_mig_chain_demo;
X
----------
1
2
3
Check for chained rows:
SELECT a.name, b.value
FROM v$statname a, v$mystat b
WHERE a.statistic# = b.statistic#
AND lower(a.name) = 'table fetch continued row';
NAME VALUE
---------------------------------------------------------------- ----------
table fetch continued row 0
That is to be expected: the rows came out in the order we put them in (Oracle full scanned this query, processing the
data as it found it). Also as expected, the table fetch continued row count is zero. This data is so small right
now that we know all three rows fit on a single block. No chaining.
(At this point in the original demo, rows 1 and 2 were updated to fill several of their CHAR(1000) columns, so they no
longer fit in their original block and migrated.)
So, let's see a migrated row affecting the «table fetch continued row»:
SELECT * FROM row_mig_chain_demo WHERE x = 3;
X
----------
3
SELECT a.name, b.value
FROM v$statname a, v$mystat b
WHERE a.statistic# = b.statistic#
AND lower(a.name) = 'table fetch continued row';
NAME VALUE
---------------------------------------------------------------- ----------
table fetch continued row 0
This was an index range scan / table access by rowid using the primary key. We didn't increment the «table fetch
continued row» yet since row 3 isn't migrated.
SELECT * FROM row_mig_chain_demo WHERE x = 1;
X
----------
1
SELECT a.name, b.value
FROM v$statname a, v$mystat b
WHERE a.statistic# = b.statistic#
AND lower(a.name) = 'table fetch continued row';
NAME VALUE
---------------------------------------------------------------- ----------
table fetch continued row 1
Row 1 is migrated, using the primary key index, we forced a «table fetch continued row».
Demonstration of Row Chaining
UPDATE row_mig_chain_demo SET d = 'z4', e = 'z5' WHERE x = 3;
COMMIT;
Row 3 no longer fits on one block. With d and e set, the rowsize is about 5k; it is truly chained.
SELECT x,a FROM row_mig_chain_demo WHERE x = 3;
X
----------
3
SELECT a.name, b.value
FROM v$statname a, v$mystat b
WHERE a.statistic# = b.statistic#
AND lower(a.name) = 'table fetch continued row';
NAME VALUE
---------------------------------------------------------------- ----------
table fetch continued row 1
We fetched columns «x» and «a» from row 3, which are located on the «head» of the row, so it does not cause a «table
fetch continued row». No extra I/O to get it.
SELECT x,d,e FROM row_mig_chain_demo WHERE x = 3;
SELECT a.name, b.value
FROM v$statname a, v$mystat b
WHERE a.statistic# = b.statistic#
AND lower(a.name) = 'table fetch continued row';
NAME VALUE
---------------------------------------------------------------- ----------
table fetch continued row 2
Now we fetch from the «tail» of the row via the primary key index. This increments the «table fetch continued row»
by one to put the row back together from its head to its tail to get that data.
Now let's see a full table scan - it is affected as well:
SELECT * FROM row_mig_chain_demo;
X
----------
3
2
1
SELECT a.name, b.value
FROM v$statname a, v$mystat b
WHERE a.statistic# = b.statistic#
AND lower(a.name) = 'table fetch continued row';
NAME VALUE
---------------------------------------------------------------- ----------
table fetch continued row 3
The «table fetch continued row» was incremented here because of row 3; we had to assemble it to get the trailing
columns. Rows 1 and 2, even though they are migrated, don't increment the «table fetch continued row» since we
full scanned.
SELECT x,a FROM row_mig_chain_demo;
X
----------
3
2
1
SELECT a.name, b.value
FROM v$statname a, v$mystat b
WHERE a.statistic# = b.statistic#
AND lower(a.name) = 'table fetch continued row';
NAME VALUE
---------------------------------------------------------------- ----------
table fetch continued row 3
No «table fetch continued row» since we didn't have to assemble Row 3, we just needed the first two columns.
SELECT x,e FROM row_mig_chain_demo;
X
----------
3
2
1
SELECT a.name, b.value
FROM v$statname a, v$mystat b
WHERE a.statistic# = b.statistic#
AND lower(a.name) = 'table fetch continued row';
NAME VALUE
---------------------------------------------------------------- ----------
table fetch continued row 4
But by fetching d and e, we incremented the «table fetch continued row». We most likely have only migrated rows,
but even if they were truly chained, the columns we selected so far were at the front of the row.
So, how can you decide whether rows are migrated or truly chained?
Count the last column in that table. That will force Oracle to construct the entire row.
SELECT count(e) FROM row_mig_chain_demo;
COUNT(E)
----------
1
SELECT a.name, b.value
FROM v$statname a, v$mystat b
WHERE a.statistic# = b.statistic#
AND lower(a.name) = 'table fetch continued row';
NAME VALUE
---------------------------------------------------------------- ----------
table fetch continued row 5
Analyze the table to verify the chain count of the table:
ANALYZE TABLE row_mig_chain_demo COMPUTE STATISTICS;
SELECT chain_cnt
FROM user_tables
WHERE table_name = 'ROW_MIG_CHAIN_DEMO';
CHAIN_CNT
----------
3
Three rows that are chained. Apparently, 2 of them are migrated (Rows 1 and 2) and one is truly chained (Row 3).
Total Number of «table fetch continued row» since instance startup?
The V$SYSSTAT view tells you how many times, since the system (database) was started, a «table fetch
continued row» occurred over all tables.
sqlplus system/<password>
SELECT 'Chained or Migrated Rows = '||value
FROM v$sysstat
WHERE name = 'table fetch continued row';
Chained or Migrated Rows = 31637
You could have 1 table with 1 chained row that was fetched 31'637 times. You could have 31'637 tables, each with a
chained row, each of which was fetched once. You could have any combination of the above.
Also, 31'637 - maybe that's good, maybe that's bad. It is a function of:
- how long the database has been up
- how many rows this is as a percentage of total fetched rows
For example, if 0.001% of your fetches are table fetch continued row, who cares!
Therefore, always compare the total fetched rows against the continued rows.
SELECT name,value FROM v$sysstat WHERE name like '%table%';
NAME VALUE
---------------------------------------------------------------- ----------
table scans (short tables) 124338
table scans (long tables) 1485
table scans (rowid ranges) 0
table scans (cache partitions) 10
table scans (direct read) 0
table scan rows gotten 20164484
table scan blocks gotten 1658293
table fetch by rowid 1883112
table fetch continued row 31637
table lookup prefetch client count 0
How many rows in a table are chained?
The USER_TABLES view tells you, immediately after an ANALYZE (the CHAIN_CNT column is null otherwise), how
many rows in the table are chained.
ANALYZE TABLE row_mig_chain_demo COMPUTE STATISTICS;
SELECT chain_cnt,
round(chain_cnt/num_rows*100,2) pct_chained,
avg_row_len, pct_free , pct_used
FROM user_tables
WHERE table_name = 'ROW_MIG_CHAIN_DEMO';
CHAIN_CNT PCT_CHAINED AVG_ROW_LEN PCT_FREE PCT_USED
---------- ----------- ----------- ---------- ----------
3 100 3691 10 40
PCT_CHAINED shows 100% which means all rows are chained or migrated.
List Chained Rows
You can look at the chained and migrated rows of a table using the ANALYZE statement with the LIST CHAINED ROWS
clause. The results of this statement are stored in a specified table created explicitly to accept the information
returned by the LIST CHAINED ROWS clause. These results are useful in determining whether you have enough room
for updates to rows.
Creating a CHAINED_ROWS Table
To create the table to accept data returned by an ANALYZE ... LIST CHAINED ROWS statement, execute the
UTLCHAIN.SQL or UTLCHN1.SQL script in $ORACLE_HOME/rdbms/admin. These scripts are provided by the database.
They create a table named CHAINED_ROWS in the schema of the user submitting the script.
create table CHAINED_ROWS (
owner_name varchar2(30),
table_name varchar2(30),
cluster_name varchar2(30),
partition_name varchar2(30),
subpartition_name varchar2(30),
head_rowid rowid,
analyze_timestamp date
);
After a CHAINED_ROWS table is created, you specify it in the INTO clause of the ANALYZE statement.
ANALYZE TABLE row_mig_chain_demo LIST CHAINED ROWS;
SELECT owner_name,
table_name,
head_rowid
FROM chained_rows;
OWNER_NAME TABLE_NAME HEAD_ROWID
------------------------------ ------------------------------ ------------------
SCOTT ROW_MIG_CHAIN_DEMO AAAPVIAAFAAAAkiAAA
SCOTT ROW_MIG_CHAIN_DEMO AAAPVIAAFAAAAkiAAB
How to avoid Chained and Migrated Rows?
Increasing PCTFREE can help to avoid migrated rows. If you leave more free space available in the block, then the row
has room to grow. You can also reorganize or re-create tables and indexes that have high deletion rates. If tables
frequently have rows deleted, then data blocks can have partially free space in them. If rows are inserted and later
expanded, then the inserted rows might land in blocks with deleted rows but still not have enough room to expand.
Reorganizing the table ensures that the main free space is totally empty blocks.
The ALTER TABLE ... MOVE statement enables you to relocate data of a nonpartitioned table or of a partition of a
partitioned table into a new segment, and optionally into a different tablespace for which you have quota. This
statement also lets you modify any of the storage attributes of the table or partition, including those which cannot
be modified using ALTER TABLE. You can also use the ALTER TABLE ... MOVE statement with the COMPRESS keyword
to store the new segment using table compression.
ALTER TABLE row_mig_chain_demo MOVE PCTFREE 20;
Table altered.
Again, count the number of rows per block after the ALTER TABLE ... MOVE:
SELECT dbms_rowid.rowid_block_number(rowid) "Block-Nr", count(*) "Rows"
FROM row_mig_chain_demo
GROUP BY dbms_rowid.rowid_block_number(rowid) order by 1;
Block-Nr Rows
---------- ----------
2322 1
2324 1
2325 1
ANALYZE TABLE row_mig_chain_demo COMPUTE STATISTICS;
Table analyzed.
SELECT chain_cnt,
round(chain_cnt/num_rows*100,2) pct_chained,
avg_row_len, pct_free , pct_used
FROM user_tables
WHERE table_name = 'ROW_MIG_CHAIN_DEMO';
CHAIN_CNT PCT_CHAINED AVG_ROW_LEN PCT_FREE PCT_USED
---------- ----------- ----------- ---------- ----------
1 33.33 3687 20 40
If the table includes LOB column(s), the MOVE statement can also relocate the LOB data and LOB index segments
associated with the table, but only those the user explicitly specifies. If not specified, the default
is not to move the LOB data and LOB index segments.
SELECT owner_name,
table_name,
count(head_rowid) row_count
FROM chained_rows
GROUP BY owner_name,table_name
/
OWNER_NAME TABLE_NAME ROW_COUNT
------------------------------ ------------------------------ ----------
SCOTT ROW_MIG_CHAIN_DEMO 1
Conclusion:
Migrated rows affect OLTP systems which use indexed reads to read singleton rows. In the worst case, you can add
an extra I/O to all reads which would be really bad. Truly chained rows affect index reads and full table scans.
Row migration is typically caused by an UPDATE operation.
Row chaining is typically caused by an INSERT operation.
SQL statements that create or query these chained/migrated rows suffer degraded performance because of the extra
I/O work.
To diagnose chained/migrated rows, use the ANALYZE command and query the V$SYSSTAT view.
To remove chained/migrated rows, rebuild the table with a higher PCTFREE using ALTER TABLE ... MOVE.
49. What are different types of partitions?
With Oracle8, Range partitioning (on single column) was introduced.
With Oracle8i, Hash and Composite (Range-Hash) partitioning was introduced.
With Oracle9i, List partitioning and Composite (Range-List) partitioning was introduced.
With Oracle 11g, Interval partitioning, Reference partitioning, Virtual column based partitioning, System partitioning
and new Composite partitioning schemes [Range-Range, List-List, List-Range, List-Hash, Interval-Range, Interval-List,
and Interval-Interval] were introduced.
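As a minimal illustration of the oldest scheme (all names are hypothetical), a range-partitioned table:
CREATE TABLE sales_demo (
  sale_id   NUMBER,
  sale_date DATE
)
PARTITION BY RANGE (sale_date) (
  PARTITION p2023 VALUES LESS THAN (TO_DATE('2024-01-01','YYYY-MM-DD')),
  PARTITION p2024 VALUES LESS THAN (TO_DATE('2025-01-01','YYYY-MM-DD')),
  PARTITION pmax  VALUES LESS THAN (MAXVALUE)   -- catch-all for later dates
);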
50. What is local partitioned index and global partitioned index?
A local index is an index on a partitioned table which is partitioned in the exact same manner as the underlying
partitioned table. Each partition of a local index corresponds to one and only one partition of the underlying table.
A global partitioned index is an index on a partitioned or non-partitioned table which is partitioned using a
different partitioning key from the table and can have a different number of partitions. Originally, global partitioned
indexes could only be range partitioned; Oracle 10g added global hash-partitioned indexes.
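A sketch using the hypothetical sales_demo table from the previous answer:
-- Local index: equipartitioned with the table, one index partition per table partition
CREATE INDEX sales_date_lidx ON sales_demo (sale_date) LOCAL;

-- Global index: partitioned on its own key, independently of the table's partitioning
CREATE INDEX sales_id_gidx ON sales_demo (sale_id)
GLOBAL PARTITION BY RANGE (sale_id) (
  PARTITION g1 VALUES LESS THAN (100000),
  PARTITION gmax VALUES LESS THAN (MAXVALUE)    -- last partition must be MAXVALUE
);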
51. How you will recover if you lost one/all control file(s)?
Lost one controlfile:
a. Shut down the database
b. Copy and rename the controlfile from an existing mirrored controlfile at the OS level, OR
remove the lost controlfile's location from the pfile
c. Start the database
Loss of all controlfiles, using a backup:
a. Shut down the database (abort)
b. Start the database in nomount state
c. Restore the controlfile from the autobackup
d. Open the database with resetlogs
Loss of all controlfiles, without a backup:
a. Create the controlfile manually with all the datafile locations
b. Mount the database
c. Open the database with resetlogs
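A hedged sketch of the "using a backup" case in RMAN (assumes controlfile autobackups were configured; DBID and paths omitted):
RMAN> STARTUP NOMOUNT;
RMAN> RESTORE CONTROLFILE FROM AUTOBACKUP;
RMAN> ALTER DATABASE MOUNT;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN RESETLOGS;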
52. Why more archivelogs are generated, when database is begin backup mode?
When the database is in begin backup mode, the datafile headers are frozen (the checkpoint SCN stops advancing),
and instead of logging only change vectors, Oracle writes complete changed blocks to the redo log files on their first
change. This produces more redo, more log switches, and in turn more archive logs. Normally only the details
(change vectors) are logged to the redo logs; in backup mode, Oracle writes complete changed blocks to the redo log
files.
Mainly to overcome fractured blocks. Most of the cases Oracle block size is equal to or a multiple of the operating
system block size.
e.g. Consider an Oracle blocksize of 2k and an OS blocksize of 4k, so each OS block comprises 2 Oracle blocks. Now
you are doing an update while your DB is in backup mode. An Oracle block is being updated and, at the same time, a
backup is copying the OS block that contains this particular DB block. The backup copy will not be consistent, since
one part of the block is being updated while it is copied to the backup location. In this case we get a fractured block;
to avoid this, Oracle copies the whole block image to the redo logfile, which can then be used for recovery. Because
of this, redo generation is higher.
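For reference, the commands that place files in and out of backup mode during a user-managed hot backup (the tablespace name is illustrative):
SQL> ALTER TABLESPACE users BEGIN BACKUP;
-- ... copy the tablespace's datafiles at the OS level ...
SQL> ALTER TABLESPACE users END BACKUP;
-- From 10g onward, the whole database can be toggled at once:
SQL> ALTER DATABASE BEGIN BACKUP;
SQL> ALTER DATABASE END BACKUP;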
53. What UNIX parameters you will set while Oracle installation?
shmmax, shmmni, shmall, sem
SHMMAX and SHMALL are two key shared memory parameters that directly impact the way Oracle
creates an SGA. Shared memory is part of the Unix IPC (Inter-Process Communication) system maintained by the
kernel, whereby multiple processes share a single chunk of memory to communicate with each other.
While trying to create an SGA during a database startup, Oracle chooses from one of the 3 memory management
models a) one-segment or b) contiguous-multi segment or c) non-contiguous multi segment. Adoption of any of these
models is dependent on the size of SGA and values defined for the shared memory parameters in the linux kernel,
most importantly SHMMAX.
So what are these parameters - SHMMAX and SHMALL?
SHMMAX is the maximum size of a single shared memory segment set in “bytes”.
silicon:~ # cat /proc/sys/kernel/shmmax
536870912
SHMALL is the total size of Shared Memory Segments System wide set in “pages”.
silicon:~ # cat /proc/sys/kernel/shmall
1415577
The key thing to note here is that the value of SHMMAX is set in "bytes" but the value of SHMALL is set in "pages".
What’s the optimal value for SHMALL?
As SHMALL is the total size of shared memory segments system-wide, it should always be less than the physical
memory on the system and should be greater than the sum of the SGAs of all the Oracle databases on the server.
Once the sum of the SGAs hits this limit, i.e. the value of shmall, any attempt to start a new database (or even an
existing database with a resized SGA) will result in an "out of memory" error (below). This is because there won't be
any more shared memory segments that Linux can allocate for the SGA.
ORA-27102: out of memory
Linux-x86_64 Error: 28: No space left on device.
So above can happen for two reasons. Either the value of shmall is not set to an optimal value or you have reached
the threshold on this server.
Setting the value for SHMALL to optimal is straightforward. All you need to know is how much physical memory
(excluding cache/swap) you have on the system, how much of it should be set aside for the Linux kernel, and how
much should be dedicated to the Oracle databases.
For e.g. Let say the Physical Memory of a system is 6GB, out of which you want to set aside 1GB for Linux Kernel for
OS Operations and dedicate the rest of 5GB to Oracle Databases. Then here’s how you will get the value for SHMALL.
Convert this 5GB to bytes and divide by page size. Remember SHMALL should be set in “pages” not “bytes”.
So here goes the calculation.
Determine the page size first. In my case it is 4096 bytes, which is the default on most Linux platforms and can
usually be kept as-is:
silicon:~ # getconf PAGE_SIZE
4096
(Note: /proc/sys/kernel/shmmni holds the maximum number of shared memory segments, not the page size; it just
happens to default to 4096 as well.)
Convert 5GB into bytes and divide by the page size; I used bc to do the math.
silicon:~ # echo "( 5 * 1024 * 1024 * 1024 ) / 4096 " | bc -l
1310720.00000000000000000000
Reset shmall and load it dynamically into kernel
silicon:~ # echo "1310720" > /proc/sys/kernel/shmall
silicon:~ # sysctl -p
Verify if the value has been taken into effect.
silicon:~ # sysctl -a | grep shmall
kernel.shmall = 1310720
Another way to look this up is
silicon:~ # ipcs -lm
------ Shared Memory Limits --------
max number of segments = 4096 /* SHMMNI */
max seg size (kbytes) = 524288 /* SHMMAX */
max total shared memory (kbytes) = 5242880 /* SHMALL */
min seg size (bytes) = 1
To keep the value effective after every reboot, add the following line to /etc/sysctl.conf
echo "kernel.shmall = 1310720" >> /etc/sysctl.conf
Also verify if sysctl.conf is enabled or will be read during boot.
silicon:~ # chkconfig boot.sysctl
boot.sysctl on
If it returns "off", it is disabled. Turn it on by running
silicon:~ # chkconfig boot.sysctl on
boot.sysctl on
What’s the optimal value for SHMMAX?
Oracle makes use of one of the 3 memory management models to create the SGA during database startup, and it
does this in the following sequence: first Oracle attempts to use the one-segment model; if this fails, it proceeds with
the next one, which is the contiguous multi-segment model; and if that fails too, it goes with the last option, the
non-contiguous multi-segment model.
So during startup it looks at the shmmax parameter and compares it with the initialization parameter *.sga_target. If
shmmax > *.sga_target, then Oracle goes with the one-segment model approach, where the entire SGA is created
within a single shared memory segment.
But the above attempt (one-segment) fails if the SGA size, i.e. *.sga_target, is greater than shmmax; Oracle then
proceeds with the second option, the contiguous multi-segment model. Contiguous allocations, as the name
indicates, are a set of shared memory segments which are contiguous within memory; if Oracle can find such a set of
segments, then the entire SGA is created to fit within this set.
But if it cannot find a set of contiguous allocations, then the last of the 3 options is chosen: non-contiguous multi-
segment allocation, in which Oracle has to grab free memory segments fragmented between used spaces.
So, if you know that the maximum SGA size of any database on the server stays below 1GB, you can set shmmax to
1GB. But if you have SGA sizes for different databases spread between 512MB and 2GB, then set shmmax to 2GB,
and so on.
Like SHMALL, SHMMAX can be defined by one of these methods:
Dynamically reset and reload it to the kernel:
silicon:~ # echo "536870912" > /proc/sys/kernel/shmmax
silicon:~ # sysctl -p -- dynamically reload the parameters
Or use sysctl to reset and reload:
silicon:~ # sysctl -w kernel.shmmax=536870912
To set it permanently so it survives reboots:
silicon:~ # echo "kernel.shmmax=536870912" >> /etc/sysctl.conf
The install doc for 11g recommends setting shmmax to "4GB - 1 byte" or half the size of physical memory, whichever
is lower. The "4GB - 1 byte" figure relates to the limitation on 32-bit (x86) systems, where the virtual
address space for a user process can only be a little less than 4GB. As there's no such limitation on 64-bit (x86_64)
systems, you can define SGAs larger than 4GB. But the idea here is to let Oracle use the efficient one-segment model,
and for this shmmax should stay higher than the SGA size of any individual database on the system.
54. What is the use of inittrans and maxtrans in table definition?
They control the initial and maximum number of transaction entries (ITL slots) reserved in each block for
transactions concurrently modifying that block.
INITRANS specifies the number of DML transaction entries for which space is initially reserved in the data block
header. Space is reserved in the headers of all data blocks in the associated segment. As multiple transactions
concurrently access the rows of the same data block, space is allocated for each DML transaction’s entry in the block.
Once the space reserved by INITRANS is depleted, space for additional transaction entries is allocated out of the free
space in a block, if available. Once allocated, this space effectively becomes a permanent part of the block header.
The MAXTRANS parameter limits the number of transaction entries that can concurrently use data in a data block.
Therefore, you can limit the amount of free space that can be allocated for transaction entries in a data block using
MAXTRANS.
The INITRANS and MAXTRANS parameters for the data blocks allocated to a specific schema object should be set
individually for each schema object based on the following criteria:
• The space you would like to reserve for transaction entries compared to the space you would reserve for database
data
• The number of concurrent transactions that are likely to touch the same data blocks at any given time
For example, if a table is very large and only a small number of users simultaneously access the table, the chances of
multiple concurrent transactions requiring access to the same data block is low. Therefore, INITRANS can be set low,
especially if space is at a premium in the database.
Alternatively, assume that a table is usually accessed by many users at the same time. In this case, you might consider
preallocating transaction entry space by using
a high INITRANS. This eliminates the overhead of having to allocate transaction entry space, as required when the
object is in use. Also, allow a higher MAXTRANS so that no user has to wait to access necessary data blocks.
INITRANS and MAXTRANS are used when you expect concurrent access to the same data block.
Every transaction which modifies a block must acquire an entry in the Interested Transaction List (ITL). Space for this
list is defined by INITRANS. The ITL grows dynamically as needed by transactions up to the value MAXTRANS. It also
shrinks back down to the setting for INITRANS.
INITRANS
The default value is 1 for tables and 2 for clusters and indexes.
MAXTRANS
The default value is an operating system-specific function of block size, not exceeding 255.
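A small sketch (the table name is hypothetical); note that from Oracle 10g onward MAXTRANS is deprecated and effectively fixed at 255:
CREATE TABLE busy_demo (
  id NUMBER
) INITRANS 4;                        -- reserve 4 ITL slots per block up front

ALTER TABLE busy_demo INITRANS 8;    -- affects only blocks formatted from now on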
55. What are differences between dbms_job and dbms_schedular?
Through dbms_scheduler we can also schedule OS-level jobs.
56. What are differences between dbms_schedular and cron jobs?
Through dbms_scheduler we can schedule both database and OS-level jobs, but through cron we can schedule only
OS-level jobs.
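For illustration, a hedged sketch of an OS-level job through DBMS_SCHEDULER (the job name and script path are hypothetical), something cron could also run but without the database's logging and privilege model:
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'NIGHTLY_CLEANUP',
    job_type        => 'EXECUTABLE',
    job_action      => '/home/oracle/scripts/cleanup.sh',
    repeat_interval => 'FREQ=DAILY;BYHOUR=2',
    enabled         => TRUE);
END;
/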
57. Difference between CPU & PSU patches?
CPU - Critical Patch Update - includes only Security related patches.
PSU - Patch Set Update - includes CPU + other patches deemed important enough to be released prior to a minor (or
major) version release.
58. What you will do if (local) inventory corrupted [or] opatch lsinventory is giving error?
What to do if my Global Inventory is corrupted?
If your global inventory is corrupted, you can recreate the global inventory on the machine using the Universal
Installer and attach the already installed Oracle home with the -attachHome option:
./runInstaller -silent -attachHome -invPtrLoc $location_to_oraInst.loc ORACLE_HOME=Oracle_Home_Location
ORACLE_HOME_NAME=Oracle_Home_Name CLUSTER_NODES={}
59. What are the entries/location of oraInst.loc?
/etc/oraInst.loc is pointer to central/local Oracle Inventory.
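Typical contents (the paths shown are common defaults, not fixed values):
$ cat /etc/oraInst.loc
inventory_loc=/u01/app/oraInventory
inst_group=oinstall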
60. What is the difference between central/global inventory and local inventory?
Overview of Inventory
The inventory is a very important part of the Oracle Universal Installer. This is where OUI keeps all information
regarding the products installed on a specific machine.
There are two inventories with the newer releases of OUI (2.x and higher):
* The inventory in the ORACLE_HOME (Local Inventory)
* The central inventory directory outside the ORACLE_HOME (Global Inventory)
At startup, the Oracle Universal Installer first looks for the key that specifies where the global inventory is located
(this key varies by platform):
* /var/opt/oracle/oraInst.loc (typical)
* /etc/oraInst.loc (AIX and Linux)
* HKEY_LOCAL_MACHINE -> Software -> Oracle -> INST_LOC (Windows platforms)
If this key is found, the directory within it will be used as the global inventory location.
If the key is not found, the inventory path defaults to the following:
* UNIX: ORACLE_BASE/oraInventory
* WINDOWS: C:\Program Files\Oracle\Inventory
If the ORACLE_BASE environment variable is not defined, the inventory is created at the same level as the first Oracle
home. For example, if your first Oracle home is at /private/ORACLEHome1, then, the inventory is at
/private/oraInventory.
With Oracle Applications 11i, the inventory contains information about both the iAS and RDBMS ORACLE_HOMEs.
About the Oracle Universal Installer Inventory
The Oracle Universal Installer inventory is the location for the Oracle Universal Installer’s bookkeeping. The inventory
stores information about:
* All Oracle software products installed in all Oracle homes on a machine
* Other non-ORACLE_HOME specific products, such as the Java Runtime Environment (JRE)
Starting with Oracle Universal Installer 2.1, the information in the Oracle Universal Installer inventory is stored in
Extensible Markup Language (XML) format. The XML format allows for easier diagnosis of problems and faster
loading of data. Any secure information is not stored directly in the inventory. As a result, during deinstallation of
some products, you may be prompted for required secure information, such as passwords.
By default, the Universal Installer inventory is located in a series of directories at /Program Files/Oracle/Inventory on
Windows computers and in the /Inventory directory on UNIX computers.
Local Inventory
There is one Local Inventory per ORACLE_HOME. It is physically located inside the ORACLE_HOME at
$ORACLE_HOME/inventory and contains the detail of the patch level for that ORACLE_HOME.
The Local Inventory gets updated whenever a patch is applied to the ORACLE_HOME, using OUI.
If the Local Inventory becomes corrupt or is lost, this is very difficult to recover, and may result in having to reinstall
the ORACLE_HOME and re-apply all patchsets and patches.
Global Inventory
The Global Inventory is the part of the XML inventory that contains the high level list of all oracle products installed
on a machine. There should therefore be only one per machine. Its location is defined by the content of oraInst.loc.
The Global Inventory records the physical location of Oracle products installed on the machine, such as
ORACLE_HOMES (RDBMS and IAS) or JRE. It does not have any information about the detail of patches applied to
each ORACLE_HOMEs.
The Global Inventory gets updated every time you install or de-install an ORACLE_HOME on the machine, be it
through OUI Installer, Rapid Install, or Rapid Clone.
Note: If you need to delete an ORACLE_HOME, you should always do it through the OUI de-installer in order to keep
the Global Inventory synchronized.
61. What is the use of root.sh & oraInstRoot.sh?
Explanation-1:
Changes ownership & permissions of oraInventory
Creates the oratab file in the /etc directory
In RAC, starts the clusterware stack
Note: Both scripts should be run as the root user
orainstRoot.sh:
It is located in $ORACLE_BASE/oraInventory
Usage:
a. It creates the inventory pointer file (/etc/oraInst.loc); the file shows the inventory location and the group it is
linked to.
b. It changes the group name of the oraInventory directory to the oinstall group.
root.sh:
It is located in $ORACLE_HOME directory
Usage:
The root.sh script performs many things, namely:
a. It changes or correctly sets the environment variables
b. It copies a few files into /usr/local/bin (dbhome, oraenv, coraenv)
c. It creates the /etc/oratab file, or adds the database home and SID entries into /etc/oratab
62. What is transportable tablespace (and across platforms)?
(http://docs.oracle.com/cd/B28359_01/backup.111/b28270/rcmttbsb.htm)
Overview
Oracle's Transportable Tablespace is one of those much awaited features that was introduced in Oracle8i (8.1.5) and
is commonly used in Data Warehouses (DW). Using transportable tablespaces is much faster than using other utilities
like export/import, SQL*Plus copy tables, or backup and recovery options to copy data from one database to another.
This article provides a brief introduction into configuring and using transportable tablespaces.
Introduction to Transportable Tablespaces
Before covering the details of how to setup and use transportable tablespaces, let's first discuss some of the
terminology and limitations to provide us with an introduction.
Using transportable tablespaces is much faster than export/import, SQL*Plus copy tables, or backup and
recovery options for copying data from one database to another.
A transportable tablespace set is defined as two components:
all of the datafiles that make up the tablespaces that will be moved, AND an export that contains the data dictionary
information about those tablespaces.
COMPATIBLE must be set in both the source and target database to at least 8.1.
When transporting a tablespace from an OLTP system to a data warehouse using the Export/Import utility, you will
most likely NOT need to transport TRIGGER and CONSTRAINT information that is associated with the tables in the
tablespace you are exporting. That is, you will set the TRIGGERS and CONSTRAINTS Export utility parameters equal to
"N".
The data in a data warehouse is inserted and altered under very controlled circumstances and does not require the
same usage of constraints and triggers as a typical operational system does.
It is common and recommended, though, that you use the GRANTS option by setting it to "Y".
The TRIGGERS option is new in Oracle8i for the export command. It controls whether trigger information associated
with the tables in a tablespace is included in the tablespace transport.
Limitations of Transportable Tablespaces:
• The transportable set must be self-contained.
• Both the source and target database must be running Oracle 8.1 or a higher release (the two databases do not have to be on the same release).
• The source and target databases must be on the same type of hardware and operating-system platform.
• The source and target databases must have the same database block size.
• The source and target databases must have the same character set.
• A tablespace with the same name must not already exist in the target database.
• Materialized views, function-based indexes, scoped REFs, 8.0-compatible advanced queues with multiple recipients, and domain indexes cannot be transported in this manner (as of Oracle8i).
Users with tables in the exported tablespace should exist in the target database prior to initiating the import; create
the user reported by the error message.
Explanation: The metadata exported from the source database does not contain enough information to create the
user in the target database. The reason is that, if the metadata contained the user details, it might overwrite the
privileges of an existing user in the target database (i.e. if a user by the same name already exists there).
By not carrying the user details, the export preserves the security of the target database.
Using Transportable Tablespaces
In this section, we finally get to see how to use transportable tablespaces. Here is an overview of the steps we will
perform in this section:
• Verify that the set of source tablespaces is self-contained.
• Generate a transportable tablespace set.
• Transport the tablespace set.
• Import the tablespace set into the target database.
In this example, we will transport the tablespaces FACT1, FACT2, and FACT_IDX from a database named DWDB to
REPORTDB. The user that owns these tables is DW, with password DW.
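Before generating the set, the tablespaces to be transported must be placed in read-only mode so the datafiles remain consistent while the metadata is exported and the files are copied:
SQL> alter tablespace fact1 read only;
SQL> alter tablespace fact2 read only;
SQL> alter tablespace fact_idx read only;
(They can be switched back to read/write on the source once the copies below are complete.)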
% exp
userid=\"sys/change_on_install@dwdb as sysdba\"
transport_tablespace=y
tablespaces=fact1, fact2, fact_idx
triggers=y
constraints=y
grants=y
file=fact_dw.dmp
% cp /u10/app/oradata/DWDB/fact1_01.dbf /u10/app/oradata/REPORTDB/fact1_01.dbf
% cp /u10/app/oradata/DWDB/fact2_01.dbf /u10/app/oradata/REPORTDB/fact2_01.dbf
% cp /u09/app/oradata/DWDB/fact_idx01.dbf /u09/app/oradata/REPORTDB/fact_idx01.dbf
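To complete the transport, the tablespace set is plugged into REPORTDB with the Import utility. A minimal sketch, assuming the owning user DW already exists in the target database:
% imp
userid=\"sys/change_on_install@reportdb as sysdba\"
transport_tablespace=y
datafiles=/u10/app/oradata/REPORTDB/fact1_01.dbf,/u10/app/oradata/REPORTDB/fact2_01.dbf,/u09/app/oradata/REPORTDB/fact_idx01.dbf
tts_owners=dw
file=fact_dw.dmp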
[Table 3.3, "Supported platforms for transportable tablespaces", is not reproduced here; the full platform list is available in the v$transportable_platform view.]
The v$database data dictionary view also adds two columns, platform ID and platform name:
SQL> select name, platform_id,platform_name
2 from v$database;
NAME PLATFORM_ID PLATFORM_NAME
------- ----------- -----------------------
GRID 2 Solaris[tm] OE (64-bit)
To transport a tablespace from one platform to another, datafiles on different platforms must be in the same endian
format (byte ordering).
The pattern for byte ordering in native types is called endianness. There are only two main patterns, big endian and
little endian. Big endian means the most significant byte comes first, and little endian means the least significant byte
comes first. If the source platform and the target platform are of different endianness, then an additional step must
be taken on either the source or target platform to convert the tablespace being transported to the target format. If
they are of the same endianness, then no conversion is necessary and tablespaces can be transported as if they were
on the same platform.
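The endian format of every platform Oracle can transport to or from can be checked directly, for example:
SQL> select platform_id, platform_name, endian_format
from v$transportable_platform
order by platform_id;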
Be aware of the following limitations as you plan for transportable tablespace use:
• The source and target database must use the same character set and national character set.
• You cannot transport a tablespace to a target database in which a tablespace with the same name already
exists. However, you can rename either the tablespace to be transported or the destination tablespace
before the transport operation.
• The set must be self-contained.
Convert Datafiles using RMAN
You do not need to convert the datafile to transport a tablespace from an AIX-based platform to a Sun platform, since
both platforms use big-endian byte ordering.
However, to transport a tablespace from a Sun platform (big endian) to a Linux platform (little endian), you need to
use the CONVERT command in the RMAN utility to convert the byte ordering. This can be done on either the source
platform or the target platform.
RMAN> CONVERT TABLESPACE users
TO PLATFORM = 'Linux IA (32-bit)'
DB_FILE_NAME_CONVERT = '/u02/oradata/grid/users01.dbf', '/dba/recovery_area/transport_linux/users01.dbf';
The limitation requiring transportable tablespaces to be transferred between the same operating system has been
removed. However, to transport tablespaces across different platforms, both the source and target databases must
be running at least Oracle Database 10g (version 10.0.1 or later) with the COMPATIBLE initialization parameter set to
10.0 or higher.
Transporting Tablespaces Between Databases: A General Procedure
Perform the following steps to move or copy a set of tablespaces.
• You must pick a self-contained set of tablespaces. Verify this using the dbms_tts.transport_set_check
package (a sketch follows this list).
• Next, generate a transportable tablespace set, using the Export utility.
• A transportable tablespace set consists of the set of datafiles for the set of tablespaces being transported and
an Export file containing metadata information for the set of tablespaces.
• Transporting a tablespace set to a platform different from the source platform will require connection to the
Recovery Manager (RMAN) and invoking the CONVERT command. An alternative is to do the conversion on
the target platform after the tablespace datafiles have been transported.
• The final step is to plug in the tablespace - You use the Import utility to plug the set of tablespaces metadata,
and hence the tablespaces themselves, into the target database.
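For illustration, the self-contained check for the earlier example set might look like this (the TRUE argument also checks constraint references):
SQL> EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('fact1,fact2,fact_idx', TRUE);
SQL> SELECT * FROM transport_set_violations;
If the SELECT returns no rows, the set is self-contained.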
If you are transporting these tablespaces to a different platform, use the v$transportable_platform view to find the
platform name. You can then use the Recovery Manager CONVERT command to perform the conversion.
Note - As an alternative to conversion before transport, the CONVERT command can be used for the conversion on
the target platform after the tablespace set has been transported.
65. What is the difference between restore point & guaranteed restore point?
Sometimes it is more efficient to roll back changes in a database rather than do a point-in-time recovery. Flashback
Database has the ability to rewind the entire database, undoing the changes that occurred within a given time
window. The effects are similar to database point-in-time recovery.
Normal Restore Points
You can create restore points to enable you to flash back the database to a particular point in time or SCN. You can
think of a restore point as a bookmark or alias that can be used with commands that recognize a RESTORE POINT
clause as shorthand for specifying an SCN. In essence, before you perform any operation that you may have to
reverse, you can create a normal restore point. The name of the restore point and the SCN are then recorded in the
control file.
So basically, creating a restore point eliminates the need to determine the current SCN before performing an
operation, or to find the proper one after the fact. You can use RESTORE POINTS to specify the target SCN in the
following contexts:
RECOVER DATABASE and FLASHBACK DATABASE commands within RMAN
FLASHBACK TABLE in SQL*Plus
Guaranteed Restore Points (GRP)
A guaranteed restore point can be used to perform a Flashback Database operation even if flashback logging is not
enabled for your database. It can be used to revert a whole database to a known good state days or weeks ago, as
long as there is enough disk space in the flash recovery area to store the needed logs. Even the effects of NOLOGGING
operations like direct load inserts can be reversed using guaranteed restore points.
One limitation applies to both types of restore point: shrinking a datafile or dropping a tablespace can prevent
flashing back the affected datafiles to the restore point.
About Logging for Flashback Database and GRP
The logging for Flashback Database and guaranteed restore points is based upon capturing images of datafile blocks
before changes are applied. These images can then be used to return datafiles to their previous state when a
FLASHBACK DATABASE command is executed. The chief differences between normal flashback logging and GRP
logging are when blocks are logged and whether the logs can be deleted in response to space pressure in the FRA. If
no files are eligible for deletion because of the retention policy and a GRP, then the database behaves as if it has
encountered a disk-full condition and may halt.
In general it is more efficient to turn off logging for Flashback Database and use only guaranteed restore points if the
primary need is to be able to return your database to a specific time in which the guaranteed restore point was
created. In other words you don't have a need to restore to a point between the GRP and the current SCN of the
database. And you don't have a reason to use any of the other "Flashback" technologies.
If Flashback Database is enabled and one or more guaranteed restore points are defined, then the database performs
normal flashback logging. This can cause some performance overhead and significant space pressure in the flash
recovery area, because the database keeps all the information it needs to apply FLASHBACK DATABASE to any time as
far back as the earliest currently defined guaranteed restore point.
Create Guaranteed Restore Point
CREATE RESTORE POINT before_damage GUARANTEE FLASHBACK DATABASE;
To See Restore Points
SELECT SCN, RESTORE_POINT_TIME, NAME, PRESERVED FROM GV$RESTORE_POINT;
To FLASHBACK DATABASE
SHUTDOWN IMMEDIATE;
STARTUP MOUNT EXCLUSIVE;
FLASHBACK DATABASE TO RESTORE POINT before_damage;
ALTER DATABASE OPEN RESETLOGS;
To Drop Restore Points
DROP RESTORE POINT before_damage;
How to quickly restore to a clean database using Oracle’s restore point
Applies to:
Oracle database – 11gR2
Problem:
----------------------------------------------------------------------------------------------------------
Often while conducting benchmarking tests, it is required to load a clean database before the start of a new run. One
way to ensure a clean database is to recreate the entire database before each test run, but depending on the size of
it, this approach may be very time consuming or inefficient.
Solution:
----------------------------------------------------------------------------------------------------------
This article describes how to use Oracle’s flashback feature to quickly restore a database to a state that existed just
before running the workload. More specifically, this article describes steps on how to use the ‘guaranteed restore
points’.
Restore point:
Restore point is nothing but a name associated with a timestamp or an SCN of the database. One can create either a
normal restore point or a guaranteed restore point. The difference between the two is that guaranteed restore point
allows you to flashback to the restore point regardless of the DB_FLASHBACK_RETENTION_TARGET initialization
parameter i.e. it is always available (assuming you have enough space in the flash recovery area).
NOTE: In this article Flashback logging was not turned ON.
Guaranteed Restore point:
Prerequisites: Creating a guaranteed restore point requires the following:
The user must have the SYSDBA system privilege
A flash recovery area must have been created
The database must be in ARCHIVELOG mode
Create a guaranteed restore point:
After you have created or migrated a fresh database, the first thing to do is create a guaranteed restore point so you
can flash back to it each time before you start a new workload. The steps are as follows:
1. $> su - oracle
2. $> sqlplus / as sysdba
3. Find out if ARCHIVELOG is enabled:
   SQL> select log_mode from v$database;
If step 3 shows that ARCHIVELOG is not enabled, continue with step 4; otherwise skip to step 8 below.
4. SQL> shutdown immediate;
5. SQL> startup mount;
6. SQL> alter database archivelog;
7. SQL> alter database open;
8. SQL> create restore point CLEAN_DB guarantee flashback database;
   (where CLEAN_DB is the name given to the guaranteed restore point)
9. View the guaranteed restore point:
   SQL> select * from v$restore_point;
   Verify the information about the newly created restore point. Also, note down the SCN# for reference; we will
   refer to it as the "reference SCN#".
Flashback to the guaranteed restore point
Now, in order to restore your database to the guaranteed restore point, follow the steps below:
1. $> su - oracle
2. $> sqlplus / as sysdba
3. SQL> select current_scn from v$database;
4. SQL> shutdown immediate;
5. SQL> startup mount;
6. SQL> select * from v$restore_point;
7. SQL> flashback database to restore point CLEAN_DB;
8. SQL> alter database open resetlogs;
9. SQL> select current_scn from v$database;
Compare the SCN# from step 9 above to the reference SCN#.
NOTE: The SCN# from step 9 above may not be exactly the same as the reference SCN#, but it will be close enough.
Normal restore point
A label for an SCN or time. For commands that support an SCN or time, you can often specify a restore point. Normal
restore points exist in a circular list in the control file and can be overwritten. However, if the restore point pertains
to an archival backup, then it is preserved in the recovery catalog.
Guaranteed restore point
A restore point for which the database is guaranteed to retain the flashback logs for an Oracle Flashback Database
operation. Unlike a normal restore point, a guaranteed restore point does not age out of the control file and must be
explicitly dropped. Guaranteed restore points utilize space in the flash recovery area, which must be defined.
66. What is the difference between 10g/11g OEM Grid control and 12c Cloud control?
https://blogs.oracle.com/oem/entry/questions_and_answers_from_the
67. What are the components of Grid control?
Grid Control Configuration: http://docs.oracle.com/html/B12013_03/configs.htm
The components are the Oracle Management Service (OMS), the Oracle Management Repository (OMR), the Oracle
Management Agent, and the Grid Control Console.
OMS (Oracle Management Service):
OMS is a J2EE Web application that orchestrates with Management Agents to discover targets, monitor and manage
them, and store the collected information in a repository for future reference and analysis. OMS also renders the
user interface for the Grid Control console. OMS is deployed to the application server that is installed along with
other core components of Grid Control.
OMR (Oracle Management Repository):
Management Repository is the storage location where all the information collected by the Management Agent gets
stored. It consists of objects such as database jobs, packages, procedures, views, and tablespaces.
Technically, OMS uploads the monitoring data it receives from the Management Agents to the Management
Repository. The Management Repository then organizes the data so that it can be retrieved by OMS and displayed in
the Grid Control console. Since data is stored in the Management Repository, it can be shared between any number
of administrators accessing Grid Control.
Management Repository is configured in Oracle Database. This Oracle Database can either be an existing database in
your environment or a new one installed along with other core components of Grid Control.
OEM Agent (Oracle Management Agent):
Management Agent is an integral software component that is deployed on each monitored host. It is responsible for
monitoring all the targets running on those hosts, communicating that information to the middle-tier Oracle
Management Service, and managing and maintaining the hosts and their targets.
Grid Control Console:
Grid Control Console is the user interface you see after you install Grid Control. From the Grid Control console, you
can monitor and administer your entire computing environment from one location on the network. All the services
within your enterprise, including hosts, databases, listeners, application servers, and so on, are easily managed from
one central location.
68. What are the new features of 12c Cloud control?
69. How to find if your Oracle database is 32 bit or 64 bit?
Execute the command "file $ORACLE_HOME/bin/oracle". Output like
/u01/db/bin/oracle: ELF 64-bit MSB executable SPARCV9 Version 1
means you are on 64-bit Oracle.
If your Oracle is 32-bit, you will see output like:
oracle: ELF 32-bit MSB executable SPARC Version 1
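As a quick cross-check from SQL*Plus, the version banner normally includes the word "64bit" on 64-bit installations:
SQL> select banner from v$version where banner like 'Oracle%';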
70. How to find opatch Version?
OPatch is the utility used to apply database patches. To find the OPatch version, execute
"$ORACLE_HOME/OPatch/opatch version"
71. Which procedure does not affect the size of the SGA?
Stored procedure
72. When Dictionary tables are created?
Once, at database creation.
73. The order in which Oracle processes a single SQL statement is?
Parse, execute and fetch
74. What are the mandatory datafiles to create a database in Oracle 11g?
SYSTEM, SYSAUX, UNDO
75. In one server can we have different oracle versions?
Yes
76. How do sessions communicate with database?
Server processes execute SQL received from user processes.
77. Which SGA memory structure cannot be resized dynamically after instance startup?
Log buffer
78. When a session changes data, where does the change get written?
To the data block in the cache, and the redo log buffer
79. What is the maximum number of control files we can have within a database?
8
80. What does the SYSTEM datafile consist of?
Metadata
Bigfile Tablespaces
Oracle lets you create bigfile tablespaces. This allows Oracle Database to contain tablespaces made up of single large
files rather than numerous smaller ones. This lets Oracle Database utilize the ability of 64-bit systems to create and
manage ultralarge files. The consequence of this is that Oracle Database can now scale up to 8 exabytes in size.
With Oracle-managed files, bigfile tablespaces make datafiles completely transparent for users. In other words, you
can perform operations on tablespaces, rather than the underlying datafile. Bigfile tablespaces make the tablespace
the main unit of the disk space administration, backup and recovery, and so on. Bigfile tablespaces also simplify
datafile management with Oracle-managed files and Automatic Storage Management by eliminating the need for
adding new datafiles and dealing with multiple files.
The system default is to create a smallfile tablespace, which is the traditional type of Oracle tablespace. The SYSTEM
and SYSAUX tablespace types are always created using the system default type.
Bigfile tablespaces are supported only for locally managed tablespaces with automatic segment-space management.
There are two exceptions: locally managed undo and temporary tablespaces can be bigfile tablespaces, even though
their segments are manually managed.
An Oracle database can contain both bigfile and smallfile tablespaces. Tablespaces of different types are
indistinguishable in terms of execution of SQL statements that do not explicitly refer to datafiles.
You can create a group of temporary tablespaces that let a user consume temporary space from multiple tablespaces.
A tablespace group can also be specified as the default temporary tablespace for the database. This is useful with
bigfile tablespaces, where you could need a lot of temporary tablespace for sorts.
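For illustration, creating a bigfile tablespace only requires the BIGFILE keyword (the name and path here are invented):
SQL> CREATE BIGFILE TABLESPACE bigtbs
DATAFILE '/u01/oradata/bigtbs01.dbf' SIZE 10G;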
Benefits of Bigfile Tablespaces
• Bigfile tablespaces can significantly increase the storage capacity of an Oracle database. Smallfile tablespaces
can contain up to 1024 files, but a bigfile tablespace contains only one file, which can be 1024 times larger
than a smallfile tablespace's datafile. The total tablespace capacity is the same for smallfile tablespaces and
bigfile tablespaces. However, because there is a limit of 64K datafiles for each database, a database can
contain 1024 times more bigfile tablespaces than smallfile tablespaces, so bigfile tablespaces increase the
total database capacity by 3 orders of magnitude. In other words, 8 exabytes is the maximum size of the
Oracle database when bigfile tablespaces are used with the maximum block size (32 KB).
• Bigfile tablespaces simplify management of datafiles in ultra large databases by reducing the number of
datafiles needed. You can also adjust parameters to reduce the SGA space required for datafile information
and the size of the control file.
• They simplify database management by providing datafile transparency.
Considerations with Bigfile Tablespaces
• Bigfile tablespaces are intended to be used with Automatic Storage Management or other logical volume
managers that support dynamically extensible logical volumes and striping or RAID.
• Avoid creating bigfile tablespaces on a system that does not support striping because of negative implications
for parallel execution and RMAN backup parallelization.
• Avoid using bigfile tablespaces if there could possibly be no free space available on a disk group, and the only
way to extend a tablespace is to add a new datafile on a different disk group.
• Using bigfile tablespaces on platforms that do not support large file sizes is not recommended and can limit
tablespace capacity. Refer to your operating system specific documentation for information about maximum
supported file sizes.
• Performance of database opens, checkpoints, and DBWR processes should improve if data is stored in bigfile
tablespaces instead of traditional tablespaces. However, increasing the datafile size might increase time to
restore a corrupted file or create a new datafile.
The SYSTEM Tablespace
• Every Oracle database contains a tablespace named SYSTEM, which Oracle creates automatically when the
database is created. The SYSTEM tablespace is always online when the database is open.
• To take advantage of the benefits of locally managed tablespaces, you can create a locally managed SYSTEM
tablespace, or you can migrate an existing dictionary managed SYSTEM tablespace to a locally managed
format.
• In a database with a locally managed SYSTEM tablespace, dictionary managed tablespaces cannot be created.
It is possible to plug in a dictionary managed tablespace using the transportable feature, but it cannot be
made writable.
• Note: If a tablespace is locally managed, then it cannot be reverted back to being dictionary managed.
The SYSAUX Tablespace
• The SYSAUX tablespace is an auxiliary tablespace to the SYSTEM tablespace. Many database components use
the SYSAUX tablespace as their default location to store data. Therefore, the SYSAUX tablespace is always
created during database creation or database upgrade.
• The SYSAUX tablespace provides a centralized location for database metadata that does not reside in the
SYSTEM tablespace. It reduces the number of tablespaces created by default, both in the seed database and
in user-defined databases.
• During normal database operation, the Oracle database server does not allow the SYSAUX tablespace to be
dropped or renamed. Transporting the SYSAUX tablespace is not supported.
• Note: If the SYSAUX tablespace is unavailable, such as due to a media failure, then some database features
might fail.
Undo Tablespaces
• Undo tablespaces are special tablespaces used solely for storing undo information. You cannot create any
other segment types (for example, tables or indexes) in undo tablespaces. Each database contains zero or
more undo tablespaces. In automatic undo management mode, each Oracle instance is assigned one (and
only one) undo tablespace. Undo data is managed within an undo tablespace using undo segments that are
automatically created and maintained by Oracle.
• When the first DML operation is run within a transaction, the transaction is bound (assigned) to an undo
segment (and therefore to a transaction table) in the current undo tablespace. In rare circumstances, if the
instance does not have a designated undo tablespace, the transaction binds to the system undo segment.
• Caution: Do not run any user transactions before creating the first undo tablespace and taking it online.
• Each undo tablespace is composed of a set of undo files and is locally managed. Like other types of
tablespaces, undo blocks are grouped in extents and the status of each extent is represented in the bitmap.
At any point in time, an extent is either allocated to (and used by) a transaction table, or it is free.
• You can create a bigfile undo tablespace.
Creation of Undo Tablespaces
A database administrator creates undo tablespaces individually, using the CREATE UNDO TABLESPACE statement. It
can also be created when the database is created, using the CREATE DATABASE statement. A set of files is assigned to
each newly created undo tablespace. Like regular tablespaces, attributes of undo tablespaces can be modified with
the ALTER TABLESPACE statement and dropped with the DROP TABLESPACE statement.
Note: An undo tablespace cannot be dropped if it is being used by any instance or contains any undo information
needed to recover transactions.
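A minimal sketch of the statement (tablespace name and file path are illustrative):
SQL> CREATE UNDO TABLESPACE undotbs2
DATAFILE '/u01/oradata/undotbs2_01.dbf' SIZE 2G;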
Assignment of Undo Tablespaces
You assign an undo tablespace to an instance in one of two ways:
• At instance startup. You can specify the undo tablespace in the initialization file or let the system choose an
available undo tablespace.
• While the instance is running. Use ALTER SYSTEM SET UNDO_TABLESPACE to replace the active undo
tablespace with another undo tablespace. This method is rarely used.
You can add more space to an undo tablespace by adding more datafiles to the undo tablespace with the ALTER
TABLESPACE statement.
You can have more than one undo tablespace and switch between them. Use the Database Resource Manager to
establish user quotas for undo tablespaces. You can specify the retention period for undo information.
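For example, switching to the undo tablespace sketched above and then extending it (the path is illustrative):
SQL> ALTER SYSTEM SET UNDO_TABLESPACE = undotbs2;
SQL> ALTER TABLESPACE undotbs2
ADD DATAFILE '/u01/oradata/undotbs2_02.dbf' SIZE 1G;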
Default Temporary Tablespace
When the SYSTEM tablespace is locally managed, you must define at least one default temporary tablespace when
creating a database. A locally managed SYSTEM tablespace cannot be used for default temporary storage.
If SYSTEM is dictionary managed and you do not define a default temporary tablespace when creating the
database, then SYSTEM is still used for default temporary storage. However, you will receive a warning in the alert
log saying that a default temporary tablespace is recommended and will be necessary in future releases.
How to Specify a Default Temporary Tablespace
Specify default temporary tablespaces when you create a database, using the DEFAULT TEMPORARY TABLESPACE
extension to the CREATE DATABASE statement.
If you drop all default temporary tablespaces, then the SYSTEM tablespace is used as the default temporary
tablespace.
You can create bigfile temporary tablespaces. A bigfile temporary tablespace uses tempfiles instead of datafiles.
Note: You cannot make a default temporary tablespace permanent or take it offline.
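A hedged sketch of both points above (names and paths invented): a bigfile temporary tablespace uses a tempfile, and can then be made the database default:
SQL> CREATE BIGFILE TEMPORARY TABLESPACE temp_big
TEMPFILE '/u01/oradata/temp_big01.dbf' SIZE 8G;
SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp_big;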
Using Multiple Tablespaces
A very small database may need only the SYSTEM tablespace; however, Oracle recommends that you create at least
one additional tablespace to store user data separate from data dictionary information. This gives you more flexibility
in various database administration operations and reduces contention among dictionary objects and schema objects
for the same datafiles.
You can use multiple tablespaces to perform the following tasks:
• Control disk space allocation for database data
• Assign specific space quotas for database users
• Control availability of data by taking individual tablespaces online or offline
• Perform partial database backup or recovery operations
• Allocate data storage across devices to improve performance
A database administrator can use tablespaces to do the following actions:
• Create new tablespaces
• Add datafiles to tablespaces
• Set and alter default segment storage settings for segments created in a tablespace
• Make a tablespace read only or read/write
• Make a tablespace temporary or permanent
• Rename tablespaces
• Drop tablespaces
Managing Space in Tablespaces
Tablespaces allocate space in extents. Tablespaces can use two different methods to keep track of their free and used
space:
• Locally managed tablespaces: Extent management by the tablespace
• Dictionary managed tablespaces: Extent management by the data dictionary
When you create a tablespace, you choose one of these methods of space management. Later, you can change the
management method with the DBMS_SPACE_ADMIN PL/SQL package.
Note: If you do not specify extent management when you create a tablespace, then the default is locally managed.
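For example, a dictionary managed tablespace named USERS could be migrated to local extent management like this (the change cannot be reverted):
SQL> EXEC DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL('USERS');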
Locally Managed Tablespaces
A tablespace that manages its own extents maintains a bitmap in each datafile to keep track of the free or used
status of blocks in that datafile. Each bit in the bitmap corresponds to a block or a group of blocks. When an extent is
allocated or freed for reuse, Oracle changes the bitmap values to show the new status of the blocks. These changes
do not generate rollback information because they do not update tables in the data dictionary (except for special
cases such as tablespace quota information).
Locally managed tablespaces have the following advantages over dictionary managed tablespaces:
• Local management of extents automatically tracks adjacent free space, eliminating the need to coalesce free
extents.
• Local management of extents avoids recursive space management operations. Such recursive operations can
occur in dictionary managed tablespaces if consuming or releasing space in an extent results in another
operation that consumes or releases space in a data dictionary table or rollback segment.
The sizes of extents that are managed locally can be determined automatically by the system (AUTOALLOCATE).
Alternatively, all extents can have the same size in a locally managed tablespace (UNIFORM), overriding object
storage options. The EXTENT MANAGEMENT LOCAL clause of the CREATE TABLESPACE or CREATE TEMPORARY
TABLESPACE statement is specified to create locally managed permanent or temporary tablespaces, respectively.
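For instance, a locally managed tablespace with uniform 1 MB extents (name and path invented):
SQL> CREATE TABLESPACE users_lmt
DATAFILE '/u01/oradata/users_lmt01.dbf' SIZE 500M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;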
Segment Space Management in Locally Managed Tablespaces
When you create a locally managed tablespace using the CREATE TABLESPACE statement, the SEGMENT SPACE
MANAGEMENT clause lets you specify how free and used space within a segment is to be managed. Your choices are:
AUTO
This keyword tells Oracle that you want to use bitmaps to manage the free space within segments. A bitmap, in this
case, is a map that describes the status of each data block within a segment with respect to the amount of space in
the block available for inserting rows. As more or less space becomes available in a data block, its new state is
reflected in the bitmap. Bitmaps enable Oracle to manage free space more automatically; thus, this form of space
management is called automatic segment-space management.
Locally managed tablespaces using automatic segment-space management can be created as smallfile (traditional) or
bigfile tablespaces. AUTO is the default.
MANUAL
This keyword tells Oracle that you want to use free lists for managing free space within segments. Free lists are lists
of data blocks that have space available for inserting rows.
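Combining the two clauses, a locally managed tablespace with automatic segment-space management might be created as follows (name and path invented; AUTO is the default anyway):
SQL> CREATE TABLESPACE data_assm
DATAFILE '/u01/oradata/data_assm01.dbf' SIZE 500M
EXTENT MANAGEMENT LOCAL
SEGMENT SPACE MANAGEMENT AUTO;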
Dictionary Managed Tablespaces
If you created your database with an earlier version of Oracle, then you could be using dictionary managed
tablespaces. For a tablespace that uses the data dictionary to manage its extents, Oracle updates the appropriate
tables in the data dictionary whenever an extent is allocated or freed for reuse. Oracle also stores rollback
information about each update of the dictionary tables. Because dictionary tables and rollback segments are part of
the database, the space that they occupy is subject to the same space management operations as all other data.
Multiple Block Sizes
Oracle supports multiple block sizes in a database. The standard block size is used for the SYSTEM tablespace; it is
set when the database is created and can be any valid size. You specify the standard block size by setting the
initialization parameter DB_BLOCK_SIZE; legitimate values are from 2K to 32K.
In the initialization parameter file or server parameter file, you can configure subcaches within the buffer cache for
each of these block sizes. Subcaches can also be configured while an instance is running. You can create tablespaces
having any of these block sizes. The standard block size is used for the SYSTEM tablespace and most other tablespaces.
Note: All partitions of a partitioned object must reside in tablespaces of a single block size.
Multiple block sizes are useful primarily when transporting a tablespace from an OLTP database to an enterprise data
warehouse. This facilitates transport between databases of different block sizes.
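As a sketch (sizes and path invented), using a non-standard block size means first configuring the matching subcache and then creating the tablespace with a BLOCKSIZE clause:
SQL> ALTER SYSTEM SET DB_16K_CACHE_SIZE = 64M;
SQL> CREATE TABLESPACE ts_16k
DATAFILE '/u01/oradata/ts_16k_01.dbf' SIZE 1G
BLOCKSIZE 16K;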
81. What is the function of SMON in instance recovery?
It rolls forward by applying changes in the redo log.
Shutdown Modes
A database administrator with SYSDBA or SYSOPER privileges can shut down the database using the SQL*Plus
SHUTDOWN command or Enterprise Manager. The SHUTDOWN command has options that determine shutdown
behavior. Table 13-2 summarizes the behavior of the different shutdown modes.
Table 13-2 Shutdown Modes
Database Behavior ABORT IMMEDIATE TRANSACTIONAL NORMAL
Permits new user connections No No No No
Waits until current sessions end No No No Yes
Waits until current transactions end No No Yes Yes
Performs a checkpoint and closes open files No Yes Yes Yes
The possible SHUTDOWN statements are:
SHUTDOWN ABORT
This mode is intended for emergency situations, such as when no other form of shutdown is successful. This mode of
shutdown is the fastest. However, a subsequent open of this database may take substantially longer because instance
recovery must be performed to make the data files consistent.
Note: Because SHUTDOWN ABORT does not checkpoint the open data files, instance recovery is necessary before the
database can reopen. The other shutdown modes do not require instance recovery before the database can reopen.
SHUTDOWN IMMEDIATE
This mode is typically the fastest next to SHUTDOWN ABORT. Oracle Database terminates any executing SQL
statements and disconnects users. Active transactions are terminated and uncommitted changes are rolled back.
SHUTDOWN TRANSACTIONAL
This mode prevents users from starting new transactions, but waits for all current transactions to complete before
shutting down. This mode can take a significant amount of time depending on the nature of the current transactions.
SHUTDOWN NORMAL
This is the default mode of shutdown. The database waits for all connected users to disconnect before shutting down.
How a Database Is Closed
The database close operation is implicit in a database shutdown. The nature of the operation depends on whether
the database shutdown is normal or abnormal.
How a Database Is Closed During Normal Shutdown
When a database is closed as part of a SHUTDOWN with any option other than ABORT, Oracle Database writes data
in the SGA to the data files and online redo log files. Next, the database closes online data files and online redo log
files. Any offline data files of offline tablespaces have been closed already. When the database reopens, any
tablespace that was offline remains offline.
At this stage, the database is closed and inaccessible for normal operations. The control files remain open after a
database is closed.
How a Database Is Closed During Abnormal Shutdown
If a SHUTDOWN ABORT or abnormal termination occurs, then the instance of an open database closes and shuts
down the database instantaneously. Oracle Database does not write data in the buffers of the SGA to the data files
and redo log files. The subsequent reopening of the database requires instance recovery, which Oracle Database
performs automatically.
How a Database Is Unmounted
After the database is closed, Oracle Database unmounts the database to disassociate it from the instance. After a
database is unmounted, Oracle Database closes the control files of the database. At this point, the instance remains
in memory.
How an Instance Is Shut Down
The final step in database shutdown is shutting down the instance. When the database instance is shut down, the
SGA is removed from memory and the background processes are terminated.
In unusual circumstances, shutdown of an instance may not occur cleanly. Memory structures may not be removed
from memory or one of the background processes may not be terminated. When remnants of a previous instance
exist, a subsequent instance startup may fail. In such situations, you can force the new instance to start by removing
the remnants of the previous instance and then starting a new instance, or by issuing a SHUTDOWN ABORT
statement in SQL*Plus or using Enterprise Manager.
Database writer (DBWn)
The database writer writes modified blocks from the database buffer cache to the datafiles. Oracle Database allows a
maximum of 20 database writer processes (DBW0-DBW9 and DBWa-DBWj). The DB_WRITER_PROCESSES
initialization parameter specifies the number of DBWn processes. The database selects an appropriate default setting
for this initialization parameter or adjusts a user-specified setting based on the number of CPUs and the number of
processor groups.
For more information about setting the DB_WRITER_PROCESSES initialization parameter, see the Oracle Database
Performance Tuning Guide.
Log writer (LGWR)
The log writer process writes redo log entries to disk. Redo log entries are generated in the redo log buffer of the
system global area (SGA). LGWR writes the redo log entries sequentially into a redo log file. If the database has a
multiplexed redo log, then LGWR writes the redo log entries to a group of redo log files. See Chapter 10, "Managing
the Redo Log" for information about the log writer process.
Checkpoint (CKPT)
At specific times, all modified database buffers in the system global area are written to the datafiles by DBWn. This
event is called a checkpoint. The checkpoint process is responsible for signalling DBWn at checkpoints and updating
all the datafiles and control files of the database to indicate the most recent checkpoint.
System monitors (SMON)
The system monitor performs recovery when a failed instance starts up again. In an Oracle Real Application Clusters
database, the SMON process of one instance can perform instance recovery for other instances that have failed.
SMON also cleans up temporary segments that are no longer in use and recovers dead transactions skipped during
system failure and instance recovery because of file-read or offline errors. These transactions are eventually
recovered by SMON when the tablespace or file is brought back online.
Process monitor (PMON)
The process monitor performs process recovery when a user process fails. PMON is responsible for cleaning up the
cache and freeing resources that the process was using. PMON also checks on the dispatcher processes (described
later in this table) and server processes and restarts them if they have failed.
Archiver (ARCn)
One or more archiver processes copy the redo log files to archival storage when they are full or a log switch occurs.
Archiver processes are the subject of Chapter 11, "Managing Archived Redo Logs".
Recoverer (RECO)
The recoverer process is used to resolve distributed transactions that are pending because of a network or system
failure in a distributed database. At timed intervals, the local RECO attempts to connect to remote databases and
automatically complete the commit or rollback of the local portion of any pending distributed transactions. For
information about this process and how to start it, see Chapter 33, "Managing Distributed Transactions".
Dispatcher (Dnnn)
Dispatchers are optional background processes, present only when the shared server configuration is used. Shared
server was discussed previously in "Configuring Oracle Database for Shared Server".
Global Cache Service (LMS)
In an Oracle Real Application Clusters environment, this process manages resources and provides inter-instance
resource control.
82. Which action occurs during a checkpoint?
Oracle flushes the dirty blocks in the database buffer cache to disk.
Explanation-1:
A checkpoint occurs when the DBWR (database writer) process writes all modified buffers in the SGA buffer cache to
the database data files. Data file headers are also updated with the latest checkpoint SCN, even if the file had no
changed blocks.
Checkpoints occur AFTER (not during) every redo log switch and also at intervals specified by initialization
parameters.
Set parameter LOG_CHECKPOINTS_TO_ALERT=TRUE to observe checkpoint start and end times in the database alert
log.
Checkpoints can be forced with the ALTER SYSTEM CHECKPOINT; command.
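Both points in one short illustration:
SQL> ALTER SYSTEM SET LOG_CHECKPOINTS_TO_ALERT = TRUE;
SQL> ALTER SYSTEM CHECKPOINT;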
Explanation-2:
Checkpoint types can be divided into INCREMENTAL and COMPLETE; a COMPLETE checkpoint can be divided further
into PARTIAL and FULL.
In an incremental checkpoint, checkpoint information is written to the controlfile, in the following cases:
1. Every three seconds.
2. At the time of a log switch - sometimes a log switch may trigger a complete checkpoint, if the next log where the
log switch is to take place is Active.
In a complete checkpoint, checkpoint information is written to the controlfile and the datafile headers, and the dirty
blocks are written by DBWR to the datafiles.
A full checkpoint happens in the following cases:
1. fast_start_mttr_target
2. Before a clean shutdown
3. Some log switches - if the next log where the log switch is to take place is Active. This has more chance of
happening when the redo log files are small in size and continuous transactions are taking place.
4. When the 'alter system checkpoint' command is issued
A partial checkpoint happens in the following cases:
1. Before begin backup
2. Before taking a tablespace offline
3. Before placing a tablespace in read only mode
4. Before dropping a tablespace
5. Before taking a datafile offline
6. When the checkpoint queue exceeds its threshold
7. Before a segment is dropped
8. Before adding or removing columns from a table
Explanation-3:
A checkpoint is the act of flushing modified, cached database blocks to disk. Normally, when you make a change to a
block, the modification is made to a memory copy of the block. When you commit, the block is not written (but the
REDO LOG is - that makes it so we can "replay" your transaction in the event of a failure). Eventually, the system will
checkpoint your modified blocks to disk.
There is no relationship between "checkpoint" and SID, and instance recovery does not imply "checkpoint"; a
checkpoint reduces the amount of time it takes to perform instance recovery.
Explanation-4:
PURPOSE OF CHECKPOINTS
Database blocks are temporarily stored in the database buffer cache. As blocks are read, they are stored in the DB
buffer cache so that if any user accesses them later, they are available in memory and need not be read from the
disk. When we update any row, the buffer in the DB buffer cache corresponding to the block containing that row is
updated in memory, and a record of the change made is kept in the redo log buffer. On commit, the changes we
made are written to the disk, thereby making them permanent. But where are those changes written? To the
datafiles containing the data blocks? No! The changes are recorded in the online redo log files by flushing the
contents of the redo log buffer to them. This is called write-ahead logging. If the instance crashed right now, the DB
buffer cache would be wiped out, but on restarting the database, Oracle will apply the changes recorded in the redo
log files to the datafiles.
Why doesn't Oracle write the changes to the datafiles right away when we commit the transaction? The reason is
simple: if it chose to write directly to the datafiles, it would have to physically locate the data block in the datafile
first and then update it, which means that after committing, the user would have to wait until DBWR searches for the
block and writes it before he can issue the next command. This would bring down performance drastically. That is
where the role of the redo logs comes in. The writes to the redo logs are sequential writes - LGWR just dumps the
info in the redo log buffer to the log files sequentially and synchronously, so that the user does not have to wait for
long. Moreover, DBWR always writes in units of Oracle blocks, whereas LGWR writes only the changes made. Hence,
write-ahead logging also improves performance by reducing the amount of data written synchronously. When will
the changes be applied to the data blocks in the datafiles? The data blocks in the datafiles will be updated by DBWR
asynchronously in response to certain triggers. These triggers are called checkpoints.
Checkpoint is a synchronization event at a specific point in time which causes some / all dirty blocks to be written to
disk thereby guaranteeing that blocks dirtied prior to that point in time get written.
Whenever dirty blocks are written to datafiles, it allows Oracle:
- to reuse a redo log: A redo log can't be reused until DBWR writes all the dirty blocks protected by that logfile to
disk. If we attempt to reuse it before DBWR has finished its checkpoint, we get the following message in the alert log:
Checkpoint not complete.
- to reduce instance recovery time: As the memory available to a database instance increases, it is possible to have
database buffer caches as large as several million buffers. This requires that the database checkpoint advance
frequently to limit recovery time, since infrequent checkpoints and large buffer caches can exacerbate crash recovery
times significantly.
- to free buffers for reads: Dirtied blocks can't be used to read new data into them until they are written to disk.
Thus DBWR writes dirty blocks from the buffer cache to make room in the cache.
Various types of checkpoints in Oracle :
- Full checkpoint
- Thread checkpoint
- File checkpoint
- Parallel Query checkpoint
- Object checkpoint
- Log switch checkpoint
- Incremental checkpoint
Whenever a checkpoint is triggered:
- DBWR writes some /all dirty blocks to datafiles
- CKPT process updates the control file and datafile headers
FULL CHECKPOINT
- Writes block images to the database for all dirty buffers from all instances.
- Statistics updated
. DBWR checkpoints
. DBWR checkpoint buffers written
. DBWR thread checkpoint buffers written
- Caused by :
. Alter system checkpoint [global]
. ALter database begin backup
. ALter database close
. Shutdown [immediate]
- Controlfile and datafile headers are updated
. Checkpoint_change#
THREAD CHECKPOINT
- Writes block images to the database for all dirty buffers from one instance
- Statistics updated
. DBWR checkpoints
. DBWR checkpoint buffers written
. DBWR thread checkpoint buffers written
- Caused by :
. Alter system checkpoint local
- Controlfile and datafile headers are updated
. Checkpoint_change#
FILE CHECKPOINT
When a tablespace is put into backup mode or take it offline, Oracle writes all the dirty blocks from the tablespace to
disk before changing the state of the tablespace.
- Writes block images to the database for all dirty buffers for all files of a tablespace from all instances
- Statistics updated
. DBWR checkpoints
. DBWR tablespace checkpoint buffers written
. DBWR checkpoint buffers written
- Caused by :
. Alter tablespace xxx offline
. Alter tablespace xxx begin backup
. Alter tablespace xxx read only
- Controlfile and datafile headers are updated
. Checkpoint_change#
PARALLEL QUERY CHECKPOINT
Parallel query often results in direct path reads (Full tablescan or index fast full scan). This means that blocks are read
straight into the session’s PGA, bypassing the data cache; but that means if there are dirty buffers in the data cache,
the session won’t see the most recent versions of the blocks unless they are copied to disk before the query starts –
so parallel queries start with a checkpoint.
- Writes block images to the database for all dirty buffers belonging to objects accessed by the query from all
instances.
- Statistics updated
. DBWR checkpoints
. DBWR checkpoint buffers written
- Caused by :
. Parallel Query
. Parallel Query component of Parallel DML (PDML) or Parallel DDL (PDDL)
- Mandatory for consistency
- Controlfile and datafile headers are updated
. Checkpoint_change#
OBJECT CHECKPOINT
When an object is dropped/truncated, the session initiates an object checkpoint telling DBWR to copy any dirty
buffers for that object to disk and the state of those buffers is changed to free.
- Writes block images to the database for all dirty buffers belonging to an object from all instances.
- Statistics updated
. DBWR checkpoints
. DBWR object drop buffers written
- Caused by dropping or truncating a segment:
. Drop table XXX
. Drop table XXX Purge
. Truncate table xxx
. Drop index xxx
- Mandatory for media recovery purposes
- Controlfile and datafile headers are updated
. Checkpoint_change#
LOG SWITCH CHECKPOINT
- Writes the contents of the dirty buffers whose information is protected by a redo log to the database .
- Statistics updated
. DBWR checkpoints
. DBWR checkpoint buffers written
. background checkpoints started
. background checkpoints completed
- Caused by log switch
- Controlfile and datafile headers are updated
. Checkpoint_change#
INCREMENTAL CHECKPOINT
Prior to Oracle 8i, the only well-known checkpoint was the log switch checkpoint. Whenever LGWR filled an online
logfile, DBWR would go into a frenzy writing data blocks to disk, and when it had finished, Oracle would update each
datafile header block with the SCN to show that the file was updated up to that point in time.
Oracle 8i introduced incremental checkpointing, which triggers DBWR to write some dirty blocks from time to time
so as to advance the checkpoint and reduce the instance recovery time.
Incremental checkpointing has been implemented using two algorithms :
- Ageing algorithm
- LRU/TCH algorithm
AGEING ALGORITHM
This strategy involves writing changed blocks that have been dirty for the longest time, and is called aging writes. The
algorithm relies on the CKPT Q running through the cache, with buffers being linked to the end of this list the first
time they are made dirty.
The LRU list contains all the buffers - free / pinned / dirty. Whenever a buffer in the LRU list is dirtied, it is placed in
the CKPT Q as well, i.e. a buffer can simultaneously have pointers in both the LRU list and the CKPT Q, but the buffers
in the CKPT Q are arranged in the order in which they were dirtied. Thus, the checkpoint queue contains dirty blocks
in the order of the SCN# at which they were dirtied.
Every 3 secs DBWR wakes up and checks if there are enough dirty buffers in the CKPT Q that need to be written so as
to satisfy the instance recovery requirement.
If those many or more dirty buffers are not found,
DBWR goes to sleep
else (dirty buffers found)
.CKPT target RBA is calculated based on
- The most recent RBA
- log_checkpoint_interval
- log_checkpoint_timeout
- fast_start_mttr_target
- fast_start_io_target
- 90% of the size of the smallest redo log file
. DBWR walks the CKPT Q from the low end (dirtied earliest) of the redo log file collecting buffers for writing to disk
until it reaches the buffer that is more recent than the target RBA. These buffers are placed in write list-main.
. DBWR walks the write list-main and checks all the buffers
– If changes made to the buffer have already been written to redo log files
. Move those buffers to write-aux list
else
. Trigger LGWR to write changes to those buffers to redo logs
. Move those buffers to write-aux list
. Write buffers from write-aux list to disk
. Update checkpoint RBA in SGA
. Delink those buffers from CKPT Q
. Delink those buffers from write-aux list
- Statistics Updated :
. DBWR checkpoint buffers written
- Controlfile updated every 3 secs by CKPT
. Checkpoint progress record
As sessions link buffers to one end of the list, DBWR can effectively unlink buffers from the other end and copy them
to disk. To reduce contention between DBWR and foreground sessions, there are two linked lists in each working set
so that foreground sessions can link buffers to one while DBWR is unlinking them from the other.
LRU/TCH ALGORITHM
LRU/TCH algorithm writes the cold dirty blocks to disk that are on the point of being pushed out of cache.
As per ageing algorithm, DBWR will wake up every 3 seconds to flush dirty blocks to disk. But if blocks get dirtied at a
fast pace during those 3 seconds and a server process needs some free buffers, some buffers need to be flushed to
the disk to make room. That’s when LRU/TCH algorithm is used to write those dirty buffers which are on the cold end
of the LRU list.
Whenever a server process needs some free buffers to read data, it scans the LRU list from its cold end to look for
free buffers.
While searching
If unused buffers found
Read blocks from disk into the buffers and link them to the corresponding hash bucket
if it finds some clean buffers (contain data but not dirtied or dirtied and have been flushed to disk),
if they are the candidates to be aged out (low touch count)
Read blocks from disk into the buffers and link them to the corresponding hash bucket
else (have been accessed recently and should not be aged out)
Move them to MRU end depending upon its touch count.
If it finds dirty buffers (they are already in CKPT Q),
Delink them from LRU list
Link them to the write-main list (Now these buffers are in CKPT Q and write-main list)
The server process scans a threshold no. of buffers (_db_block_max_scan_pct = 40 (default)). If it does not find the
required no. of free buffers,
It triggers DBWR to write the dirty blocks in the write-main list to disk
. DBWR walks the write list-main and checks all the buffers
– If changes made to the buffer have already been written to redo log files
. Move those buffers to write-aux list
else
. Trigger LGWR to write changes to those buffers to redo logs
. Move those buffers to write-aux list
. Write buffers from write-aux list to disk
. Delink those buffers from CKPT Q and write-aux list
. Link those buffers to LRU list as free buffers
Note that
- In this algorithm, the dirty blocks are delinked from LRU list before linking them to write-main list in contrast to
ageing algorithm where the blocks can be simultaneously be in both CKPT Q and LRU list.
- In this algorithm, checkpoint is not advanced because it may be possible that the dirty blocks on the LRU end may
actually not be the ones which were dirtied earliest. They may be there because the server process did not move
them to the MRU end earlier. There might be blocks present in CKPT Q which were dirtied earlier than the blocks in
question.
Explanation-5:
A Checkpoint is a database event which synchronizes the modified data blocks in memory with the datafiles on disk.
It offers Oracle the means for ensuring the consistency of data modified by transactions. The mechanism of writing
modified blocks on disk in Oracle is not synchronized with the commit of the corresponding transactions.
A checkpoint has two purposes:
(1) to establish data consistency, and
(2) to enable faster database recovery.
The checkpoint must ensure that all the modified buffers in the cache are really written to the corresponding
datafiles to avoid the loss of data which may occur with a crash (instance or disk failure).
Depending on the number of datafiles in a database, a checkpoint can be a highly resource-intensive operation, since
all datafile headers are frozen during the checkpoint. Frequent checkpoints enable faster recovery, but can cause
performance degradation.
Key Initialization parameters related to Checkpoint performance.
• FAST_START_MTTR_TARGET
• LOG_CHECKPOINT_INTERVAL
• LOG_CHECKPOINT_TIMEOUT
• LOG_CHECKPOINTS_TO_ALERT
FAST_START_MTTR_TARGET: It enables you to specify the number of seconds the database takes to perform crash
recovery
of a single instance. Based on internal statistics, incremental checkpoint automatically adjusts the checkpoint target
to meet the requirement of FAST_START_MTTR_TARGET. V$INSTANCE_RECOVERY.ESTIMATED_MTTR shows the
current estimated mean time to recover (MTTR) in seconds. This value is shown even if FAST_START_MTTR_TARGET
is not specified.
LOG_CHECKPOINT_INTERVAL: It influences when a checkpoint occurs, which means careful attention should be given
to the setting of this parameter, keeping it updated as the size of the redo log files is changed. The checkpoint
frequency is one of the factors which impacts the time required for the database to recover from an unexpected
failure. Longer intervals between checkpoints mean that if the system crashes, more time will be needed for the
database to recover. Shorter checkpoint intervals mean that the database will recover more quickly, at the expense
of increased resource utilization during the checkpoint operation.
LOG_CHECKPOINT_TIMEOUT: This parameter specifies the maximum number of seconds the incremental checkpoint
target should lag the current log tail. In other words, it specifies how long a dirty buffer in the buffer cache can remain
dirty. Checkpoint frequency impacts the time required for the database to recover from an unexpected failure.
Longer intervals between checkpoints mean that more time will be required during database recovery.
LOG_CHECKPOINTS_TO_ALERT: It lets you log your checkpoints to the alert file. Doing so is useful for determining
whether checkpoints are occurring at the desired frequency.
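For illustration, the current and target MTTR can be checked from V$INSTANCE_RECOVERY and the checkpoint-related
parameters adjusted as below (a minimal sketch; the values shown are examples, not recommendations):
select target_mttr, estimated_mttr from v$instance_recovery;
alter system set fast_start_mttr_target=300 scope=both;
alter system set log_checkpoints_to_alert=TRUE scope=both;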
Relationship between Redologs and Checkpoint: A checkpoint occurs at every log switch. If a previous checkpoint is
already in progress, the checkpoint forced by the log switch will override the current checkpoint. Maintain well-sized
redo logs to avoid unnecessary checkpoints as a result of frequent log switches. The alert log is a valuable tool for
monitoring the rate that log switches occur, and subsequently, checkpoints occur.
Checkpoint not complete: This message in the alert log indicates that Oracle wants to reuse a redo log file, but the
current checkpoint position is still in that log. In this case, Oracle must wait until the checkpoint position passes that
log. While the database waits on the checkpoint, redo generation is stopped until the log switch is done. This situation
may be encountered if DBWR writes too slowly, if a log switch happens before the log is completely full, or if the log
files are too small.
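To see whether undersized redo logs are forcing frequent switches (and hence checkpoints), the log sizes and the
hourly switch rate can be checked roughly as below (a sketch, not a tuning prescription):
select group#, bytes/1024/1024 as size_mb, status from v$log;
select to_char(first_time,'YYYY-MM-DD HH24') as hour, count(*) as switches
from v$log_history
group by to_char(first_time,'YYYY-MM-DD HH24')
order by 1;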
Explanation-4:
http://www.oracleportal.org/knowledge-base/oracle-database/database-concepts/general/checkpoint.aspx
83. SMON process is used to write into LOG files?
No
84. Oracle does not consider a transaction committed until?
The LGWR successfully writes the changes to redo
85. What is the maximum number of DBWn (DB writer) processes we can invoke?
20
86. Which activity would generate less undo data?
INSERT
87. What happens when a user issues a COMMIT?
1) LGWR wakes up.
2) LGWR acquires the redo allocation latch and redo copy latch.
3) LGWR flushes the redo log buffer to the logfiles (both members of the group in parallel).
4) LGWR releases the redo latches.
5) LGWR posts the committing session.
The LGWR flushes the log buffer to the online redo log.
"When you issue a DML Oracle generates redo entries based on the changes and these entries are buffered in
memory while the transaction is occurring.
When you issue a commit, oracle immediately writes this redo entries to disk along with redo for the commit. Oracle
does not return from the commit until the redo has been completely written to disk.
The matter is, the redo information is written to the disk immediately and the session waits for the process to
complete before return.
Asynchronous Commit:
In Oracle 10g R2, however, Oracle made this behaviour configurable:
you can let the log writer write the redo information to disk in its own time, instead of immediately, and you can
have the commit return to you before it has completed, instead of waiting."
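A minimal sketch of these 10g R2 commit options (the trade-off is durability: with NOWAIT a crash can lose redo for
transactions the application already considered committed):
commit write wait;         -- default behaviour: session waits until the redo is on disk
commit write batch nowait; -- asynchronous: return immediately, LGWR writes the redo later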
Explanation-2:
Committing means that a user has explicitly or implicitly requested that the changes in the transaction be made
permanent. An explicit request occurs when the user issues a COMMIT statement. An implicit request occurs after
normal termination of an application or completion of a data definition language (DDL) operation. The changes made
by the SQL statement(s) of a transaction become permanent and visible to other users only after that transaction
commits. Queries that are issued after the transaction commits will see the committed changes.
You can name a transaction using the SET TRANSACTION ... NAME statement before you start the transaction. This
makes it easier to monitor long-running transactions and to resolve in-doubt distributed transactions.
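For example, a long-running transaction can be named and then tracked in V$TRANSACTION (the name string is
arbitrary; this is just an illustration):
set transaction name 'nightly_payroll_load';
-- ... DML statements ...
select xidusn, xidslot, xidsqn, status, name from v$transaction;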
88. What happens when a user process fails?
PMON performs process recovery.
Explanation: The process monitor (PMON) performs process recovery when a user process fails. PMON is responsible
for cleaning up the database buffer cache and freeing resources that the user process was using. For example, it
resets the status of the active transaction table, releases locks, and removes the process ID from the list of active
processes.
PMON periodically checks the status of dispatcher and server processes, and restarts any that have stopped running
(but not any that Oracle Database has terminated intentionally). PMON also registers information about the instance
and dispatcher processes with the network listener.
Like SMON, PMON checks regularly to see whether it is needed and can be called if another process detects the need
for it.
Explanation-2: Types of database failures:
Statement - A single database operation fails, such as a DML (Data Manipulation Language) statement: INSERT, UPDATE, and so on.
User process - A single database connection fails.
Network - A network component between the client and the database server fails, and the session is disconnected from the database.
User error - An error message is not generated, but the operation's result, such as dropping a table, is not what the user intended.
Instance - The database instance fails unexpectedly.
Media - One or more of the database files is lost, deleted, or corrupted.
Database Recovery
When a database fails to run, a media failure occurs, or any database or schema objects are lost or corrupted, a
recovery process is needed. For this, an understanding of the various types of database failures is essential.
Database Failure Types
There are six general categories for database-related failures. Understanding what category a failure belongs in will
help you to more quickly understand the nature of the recovery effort you need to use to reverse the effects of the
failure and maintain a high level of availability and performance in your database. The six general categories of
failures are as follows:
Statement Failures
Statement failures occur when a single database operation fails, such as a single INSERT statement or the creation of
a table. In the list that follows are a few of the most common problems and their solutions when a statement fails.
Although granting user privileges or additional quotas within a tablespace solves many of these problems, also
consider whether there are any gaps in the user education process that might lead to some of these problems in the
first place.
User Process Failures
The abnormal termination of a user session is categorized as a user process failure; any uncommitted transaction
must be cleaned up. The PMON (process monitor) background process periodically checks all user processes to
ensure that the session is still connected. If the PMON finds a disconnected session, it rolls back the uncommitted
transaction and releases all locks held by the disconnected process. Causes for user process failures typically fall into
one of these categories:
A user closes their SQL*Plus window without logging out.
The workstation reboots suddenly before the application can be closed.
The application program causes an exception and closes before the application can be terminated normally.
A user process times out and Oracle disconnects the session.
A small percentage of user process failures is generally no cause for concern unless it becomes chronic; it may be a
sign that user education is lacking—for example, training users to terminate the application gracefully before shutting
down their workstation.
Network Failures
Depending on the locations of your workstation and your server, getting from your workstation to the server over the
network might involve a number of hops: you might traverse several local switches and WAN routers to get to the
database. From a network perspective, this configuration provides a number of points where failure can occur. These
types of failures are called network failures.
In addition to hardware failures between the server and client, a listener process on the Oracle server can fail or the
network card on the server itself can fail. To guard against these kinds of failures, you can provide redundant network
paths from your clients to the server, as well as additional listener connections on the Oracle server and redundant
network cards on the server.
User Error Failures
Even if all your redundant hardware is at peak performance, and your users have been trained to disconnect from
their Oracle sessions properly, users can still inadvertently delete or modify data in tables or drop an index. This is
known as a user error failure. Although these operations succeed from a statement point of view, they might not be
logically correct: the DROP TABLE command worked fine, but you really didn’t want to drop that table!
If data was inadvertently deleted from a table, and not yet committed, a ROLLBACK statement will undo the damage.
If a COMMIT has already been performed, you have a number of options at your disposal, such as using data in the
undo tablespace for a Flashback Query or using data in the archived and online redo logs with the LogMiner utility,
available as a command-line or GUI interface.
You can recover a dropped table using Oracle’s recycle bin functionality: a dropped table is stored in a special
structure in the tablespace and is available for retrieval as long as the space occupied by the table in the tablespace is
not needed for new objects. Even if the table is no longer in the tablespace’s recycle bin, depending on the criticality
of the dropped table, you can use either tablespace point in time recovery (TSPITR) or Flashback Database Recovery
to recover the table, taking into consideration the potential data loss for other objects stored in the same tablespace
for TSPITR or in the database if you use Flashback Database Recovery.
If the inadvertent changes are limited to a small number of tables that have few or no interdependencies with other
database objects, Flashback Table functionality is most likely the right tool to bring back the table to a point of time in
the past.
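As a sketch of these options (hr.employees is a hypothetical table; a timestamp flashback requires row movement to
be enabled on the table first):
flashback table hr.employees to before drop;   -- undo an accidental DROP TABLE via the recycle bin
alter table hr.employees enable row movement;  -- required before a timestamp flashback
flashback table hr.employees to timestamp systimestamp - interval '15' minute;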
Instance Failures
An instance failure occurs when the instance shuts down without synchronizing all the database files to the same
system change number (SCN), requiring a recovery operation the next time the instance is started. Many of the
reasons for an instance failure are out of your direct control; in these situations, you can minimize the impact of
these failures by tuning instance recovery.
A few causes for instance failure:
• A power outage.
• A server hardware failure.
• Failure of an Oracle background process.
• Emergency shutdown procedures (intentional power outage or SHUTDOWN ABORT).
In all these scenarios, the solution is easy: run the STARTUP command, and let Oracle automatically perform instance
recovery using the online redo logs and undo data in the undo tablespace. If the cause of the instance failure is
related to an Oracle background process failure, you can use the alert log and process-specific trace files to debug the
problem. The EM Database Control makes it easy to review the contents of the alert log and any other alerts
generated right before the point of failure.
Media Failures
Another type of failure that is somewhat out of your control is media failure. A media failure is any type of failure
that results in the loss of one or more database files: datafiles, control files, or redo log files. Although the loss of
other database-related files such as an init.ora file or a server parameter file (SPFILE) is of great concern, Oracle
Corporation does not consider it a media failure.
The database file can be lost or corrupted for a number of reasons:
• Failure of a disk drive.
• Failure of a disk controller.
• Inadvertent deletion or corruption of a database file.
Following best practices by adequately mirroring control files and redo log files, and ensuring that full backups and
their subsequent archived redo logs are available, will keep you prepared for any type of media failure.
89. What are the free buffers in the database buffer cache?
Buffers that can be overwritten
http://koenigocm.blogspot.in/2012/07/database-buffer-cache-architecture.html
Explanation-1: Database Buffer cache is one of the most important components of System Global Area (SGA).
Database Buffer Cache is the place where data blocks are copied from datafiles to perform SQL operations. Buffer
Cache is shared memory structure and it is concurrently accessed by all server processes.
Working of Database buffer Cache
Buffer Cache is organized into two lists
Write List
Write list contains dirty buffers. These are the data blocks which contain modified data and needed to be written to
datafiles.
Least Recent Used (LRU) List
Buffers on the LRU list are categorized as pinned, clean, free/unused, or dirty. Pinned buffers are
currently in use, while clean buffers are available for reuse. Clean buffers contain data, but it is in sync
with the block content stored in the datafiles, so there is no need to write these buffers to disk. Free buffers are empty and
haven't been used yet. Dirty buffers are those that need to be moved to the write list.
When an Oracle server process requires a specific data block, it first searches for it in the buffer cache. If it finds the
required block, the block is accessed directly; this event is known as a Cache Hit. If the search in the buffer cache fails,
the block is read from the datafile on disk, an event called a Cache Miss. If the required block is not found in the buffer
cache, the process needs a free buffer to read the data from disk. It starts searching for a free buffer from the least
recently used end of the LRU list. While searching, if the server process finds dirty buffers on the LRU list, it shifts them
to the write list. If the process cannot find free buffers within a certain amount of time, it signals the DBWn process to
write dirty buffers to disk.
By default, accessed buffers are moved to the most recently used end of the LRU list. The search for free buffers is
initiated from the least recently used end of the LRU list, which means that recently accessed buffers are kept in the
cache longer. But when a full table scan happens, the Oracle process puts the blocks of the table at the least recently
used end of the LRU list, so they are quickly reclaimed. When a table is created, a storage parameter CACHE |
NOCACHE | CACHE READS can be specified. If a table is created with the CACHE parameter, the data blocks of the
table are added to the most recently used end in spite of the full table scan.
Size of the Database Buffer Cache
Oracle allows different block sizes for different tablespaces. The standard block size is defined by the
DB_BLOCK_SIZE initialization parameter; the SYSTEM tablespace uses the standard block size. The DB_CACHE_SIZE
parameter is used to define the size of the database buffer cache. For example, to create a cache of 800 MB, set the
parameter as below:
DB_CACHE_SIZE=800M
If you have created a tablespace with a block size different from the standard block size, for example your standard
block size is 4K and you have created a tablespace with an 8K block size, then you must create an 8K buffer cache as below:
DB_8K_CACHE_SIZE=256M
Keep Buffer Pool and Recycle Buffer Pool
Data required by an Oracle user process is loaded into the buffer cache if it is not already present there. Proper memory
tuning is required to avoid repeated disk access for the same data. This means there should be enough space in the
buffer cache to hold frequently required data for a long time. If the same data is required at very short intervals, such
data should be kept pinned in memory. Oracle allows us to use multiple buffer pools, through which we can control
how long objects are kept in memory.
Keep Buffer Pool
Data that is frequently accessed should be kept in the Keep buffer pool. The Keep buffer pool retains data in memory,
so that the next request for the same data can be served from memory. This avoids disk reads and increases
performance. Usually small objects should be kept in the Keep pool. The DB_KEEP_CACHE_SIZE initialization
parameter is used to create the Keep buffer pool. If DB_KEEP_CACHE_SIZE is not set, no Keep pool is created. Use the
following syntax to create a Keep buffer pool of 40 MB:
DB_KEEP_CACHE_SIZE=40M
To assign an object to the Keep buffer pool, use the BUFFER_POOL KEEP storage clause. (DBMS_SHARED_POOL.KEEP,
by contrast, pins PL/SQL objects in the shared pool, not data blocks in the buffer cache.)
Recycle Buffer Pool
Blocks loaded into the Recycle buffer pool are removed immediately when they are no longer being used. It is useful
for objects that are accessed rarely. As there is no further need for these blocks, the memory occupied by them is
made available for other data. For example, if Automatic Shared Memory Management is enabled, the freed memory
can be assigned to other SGA components. Use the following syntax to create a Recycle buffer pool:
DB_RECYCLE_CACHE_SIZE=20M
Default Pool
If an object is not assigned a specific buffer pool, its blocks are loaded into the Default pool. The DB_CACHE_SIZE
initialization parameter is used to create the Default pool. For more information on the Default pool, visit the following link:
http://exploreoracle.com/2009/03/31/database-buffer-cache/
The BUFFER_POOL value in the storage clause of schema objects lets you assign an object to a specific buffer pool. The
value of BUFFER_POOL can be KEEP, RECYCLE, or DEFAULT.
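For example, a small, hot lookup table could be assigned to the Keep pool and the assignment verified from
DBA_SEGMENTS (the table name here is hypothetical):
alter table hr.lookup_codes storage (buffer_pool keep);
select owner, segment_name, buffer_pool from dba_segments where buffer_pool <> 'DEFAULT';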
90. When does the SMON process perform instance crash recovery?
Only at startup after an abort/abnormal shutdown
91. Which dynamic view can be queried when a database is started up in no mount state?
V$INSTANCE
92. Which two tasks occur as a database transitions from the mount stage to the open stage?
The online data files & Redo log files are opened.
93. In which situation is it appropriate to enable the restricted session mode?
Exporting a consistent image of a large number of tables
94. What is the component of an Oracle instance?
The SGA
95. Which process is involved when a user starts a new session on the database server?
The Oracle server process
96. In the event of an instance failure, which files store committed data NOT yet written to the datafiles?
Online redo logs
97. When are the base tables of the data dictionary created?
When the database is created
98. Sequence of events takes place while starting a Database is?
Instance started, Database mounted & Database opened
99. The alert log will never contain information about which database activity?
Performing operating system restore of the database files
100. Where can you find the non-default parameters when an instance is started?
Alert log
101. Which tablespace is used as the temporary tablespace if TEMPORARY TABLESPACE is not specified for a user?
SYSTEM
102. User SCOTT creates an index with this statement: CREATE INDEX emp_indx ON employee (empno). In which
tablespace would the index be created?
SCOTT’S default tablespace
103. Which data dictionary view shows the available free space in a certain tablespace?
DBA_FREE_SPACE
104. Which method increases the size of a tablespace?
Add a datafile to a tablespace.
105. What does the command ALTER DATABASE . . . RENAME DATAFILE do?
It updates the control file.
106. Can you drop objects from a read-only tablespace?
Yes
107. SYSTEM TABLESPACE can be made off-line?
No
108. Data dictionary can span across multiple Tablespaces?
No
109. Multiple Tablespaces can share a single datafile?
No
110. All datafiles related to a Tablespace are removed when the Tablespace is dropped?
No
111. What is a default role?
A role automatically enabled when the user logs on.
112. Who is the owner of a role?
Nobody
113. When granting the system privilege, which clause enables the grantee to further grant the privilege to other
users or roles?
WITH ADMIN OPTION
114. Which view will show a list of privileges that are available for the current session to a user?
SESSION_PRIVS
115. Which view shows all of the objects accessible to the user in a database?
ALL_OBJECTS
116. Which statement about profiles is false?
Profiles are assigned to users, roles, and other profiles.
117. Which password management feature is NOT available by using a profile?
Password change
118. Which resource cannot be controlled using profiles?
PGA memory allocations
119. You want to retrieve information about account expiration dates from the data dictionary. Which view do you
use?
DBA_USERS
120. It is very difficult to grant and manage common privileges needed by different groups of database users using
roles?
No
121. Which data dictionary view would you query to retrieve a table’s header block number?
DBA_SEGMENTS
122. When tables are stored in locally managed tablespaces, where is extent allocation information stored?
Corresponding tablespace itself
123. Which of the following three portions of a data block are collectively called as Overhead?
Table directory, row directory and data block header
124. Can a tablespace hold objects from different schemes?
Yes
126. What is default value for storage parameter INITIAL in 10g if extent management is Local?
40k
127. Using which package we can convert Tablespace from DMTS to LMTS?
DBMS_SPACE_ADMIN
128. Is it Possible to Change ORACLE Block size after creating database?
No
129. Locally Managed table spaces will increase the performance?
TRUE
130. Index is a space-demanding object?
Yes
131. What is a potential reason for a Snapshot too old error message?
An ITL entry in a data block has been reused.
132. An Oracle user receives the following error? ORA-01555 SNAPSHOP TOO OLD, What is the possible solution?
https://blogs.oracle.com/db/entry/troubleshooting_ora_1555
http://oracle-randolf.blogspot.in/2009/04/read-consistency-ora-01555-snapshot-too.html
Increase the extent size of the rollback segments.
Explanation-1:
Oracle uses for its read consistency model a true multi-versioning approach which allows readers to not block writers
and vice-versa, writers to not block readers. Obviously this great feature allowing highly concurrent processing
doesn't come for free, since somewhere the information to build multiple versions of the same data needs to be
stored.
Oracle uses the so called undo information not only to rollback on-going transactions but also to re-construct old
versions of blocks if required. Very simplified when reading data Oracle knows the point in time (which corresponds
to an internal counter called SCN, System Change Number) that data needs to be consistent with. In the default READ
COMMITTED isolation mode this point in time is defined when a statement starts to execute. You could also say at
the moment a statement starts to run its result is pre-ordained. When Oracle processes a block it checks if the block
is "old" enough and if it discovers that the block content is too new (has been changed by other sessions but the
current access is not supposed to see this updated content according to the point-in-time assigned to the statement
execution) it will start to create a copy of the block and use the information available from the corresponding undo
segment to re-construct an older version of the block. Note that this process can be iterative: If after re-constructing
the older version of the block it's still not sufficiently old more undo information will be used to go further back in
time.
Since the undo information of committed transactions is marked as re-usable, Oracle is free to
overwrite the corresponding undo data under certain circumstances (e.g. no more free space left in the UNDO
tablespace). If an older version of a block now needs to be created but the corresponding undo information required
to do so has been overwritten, the infamous "ORA-01555 snapshot too old" error is raised, since the required
read-consistent view of the data can no longer be generated.
In order to avoid this error, from 10g onwards you only need to have a sufficiently large UNDO tablespace in
automatic undo management mode so that the undo information required to create old versions of the blocks
doesn't get overwritten prematurely. In 9i you need to set the UNDO_RETENTION parameter according to the longest
expected runtime of your queries, and of course have sufficient space in the UNDO tablespace to allow Oracle to
adhere to this setting.
So until now Oracle was either able to provide a consistent view of the data according to its read-consistency model,
or you would get an error message if the required undo data wasn't available any longer.
Enter the SCN_ASCENDING hint: As already mentioned by Martin Berger and Chandra Pabba Oracle officially
documented the SCN_ASCENDING hint for Oracle 11.1.0.7 in Metalink Note 6688108.8 (Enhancement: Allow ORA-
1555 to be ignored during table scan).
Explanation-2:
The ORA-1555 errors can happen when a query is unable to access enough undo to build
a copy of the data at the time the query started. Committed “versions” of blocks are
maintained along with newer uncommitted “versions” of those blocks so that queries can
access data as it existed in the database at the time of the query. These are referred to as
“consistent read” blocks and are maintained using Oracle undo management.
Diagnosis:
Due to space limitations, it is not always feasible to keep undo blocks on hand for the life of the instance. Oracle
Automatic Undo Management (AUM) helps to manage the time frame that undo blocks are stored. The time frame is
the “retention” time for those blocks.
There are several ways to investigate the ORA-1555 error. In most cases, the error is a legitimate problem with
getting to an undo block that has been overwritten due to the undo “retention” period having passed.
AUM will automatically tune up and down the “retention” period, but often space limitations or configuration of the
undo tablespace will throttle back continuous increases to the “retention” period.
Explanation-3:
1. Problem:
Below are the settings for the undo tablespace:
undo_retention - 1200
undo_management - AUTO
The user encounters the following error in the job that is running in the database:
Ora-01555 snapshot too old error.
2. Impact: Medium to high, because it affects long-running queries due to insufficient undo space, thus
impacting performance. It could also affect part of a batch process.
3. Solutions: The ORA-01555 snapshot too old error occurs when the undo tablespace storage is smaller than
the space needed by long-running queries. It can also occur because of an inappropriate (too small) value
of undo_retention. UNDO_RETENTION specifies the time period (in seconds) for which the system retains undo, i.e.
undo is retained for at least the time specified in this parameter. The undo_retention setting is only
effective if the undo tablespace has enough space. If there is an active transaction that requires undo space
and there is not enough available space, the system reuses unexpired undo space. This causes some queries to
fail with a snapshot too old error message. The underlying technology that undo supports is the Oracle read
consistency mechanism.
Below are the remedies to address and remedy this error:
1. Reduce and delay extent reuse by increasing the size of the undo tablespace and the undo_retention parameter.
2. Try not to fetch between commits. If a cursor was opened before the last commit, don't keep fetching from
it across commits in the current session.
3. Don't perform frequent commits as this would reduce the size of the undo tablespace and also the queries would
take more time.
4. Try to perform the long-running queries when the system has the least load of DML transactions.
5. Set a large value for the database block size (db_block_size) parameter to reduce and delay extent reuse.
6. Run separate transactions while the sensitive long-running queries are taking place only when it is very important,
the transactions are not dependent on each other, and they do not prejudice each other's performance.
7. Before you run long-running and sensitive sql queries make sure that you have sufficient and optimal undo
tablespace. If you do not have sufficient undo tablespace manually resize it to prevent rollback failure thus
preventing the error.
8. You can also calculate the optimal undo_retention, undo tablespace size, and db_block_size beforehand.
9. You can manually manage the usage, size and the amount of the rollback segments.
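To size the undo tablespace, V$UNDOSTAT can be checked for undo usage, the longest query, and how often
ORA-01555 actually occurred, and undo_retention raised accordingly (3600 below is only an example value):
select begin_time, undoblks, maxquerylen, ssolderrcnt, nospaceerrcnt
from v$undostat
order by begin_time;
alter system set undo_retention=3600 scope=both;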
133. The status of the Rollback segment can be viewed through?
DBA_ROLLBACK_SEG
134. Can we explicitly assign a transaction to a rollback segment?
TRUE
135. Are uncommitted transactions written to flashback redologs?
Yes
136. Is it possible to do flashback after truncate?
No
137. Can we restore a dropped table after a new table with the same name has been created?
Yes
138. Which following command will clear database recyclebin?
Purge recyclebin
139. What is the OPTIMAL parameter?
The size to which a rollback segment shrinks back after extending.
140. Flashback query time depends on?
Undo_retention
141. Can we create spfile in shutdown mode?
Yes
142. Can we alter static parameters by using scope=both?
No
143. Can we take backup of spfile in RMAN?
Yes
144. Does Drop Database command removes spfile?
Yes
145. Using which SQL command we can alter the parameters?
Alter system
146. OMF database will Improve the performance?
No
147. Max number of controlfiles that can be multiplexed in an OMF database?
5
148. Which environment variable is used to help set up Oracle names?
TNS_ADMIN
149 Which Net8 component waits for incoming requests on the server side?
Listener
150. What is the listener name when you start the listener without specifying an argument?
LISTENER
151. When is a request sent to a listener?
After name resolution.
152. In which file is the information that host naming is enabled stored?
sqlnet.ora
153. Which protocols can oracle Net 11g Use?
TCP
154. Which of the following statements about listeners is correct?
Multiple listeners can share one network interface card.
155. Can we perform DML operation on Materialized view?
No
156. Materialized views are schema objects that can be used to summarize, precompute, replicate, and distribute
data?
True
157. Does a materialized view occupy space?
Yes
158. Can we name a Materialized View log?
No
159. How to improve sqlldr (SQL*Loader) performance?
Use direct path loading (DIRECT=TRUE), disable or drop indexes and constraints during the load, use a larger bind
array (BINDSIZE/ROWS), and consider parallel loads (PARALLEL=TRUE).
160. By using which view can a normal user see public database link?
ALL_DB_LINKS
161. Can we change the refresh interval of a Materialized View?
YES
162. Can we use a database link even after the target user has changed his password?
Yes
163. Can we convert a materialized view from refresh fast to complete?
Yes
164. A normal user can create public database link?
False
165. If we truncate the master table, what happens to the materialized view log on that table?
Will be dropped
166. What is the correct procedure for multiplexing online redo logs?
Issue the ALTER DATABASE. . . ADD LOGFILE MEMBER command.
167. In which situation would you need to create a new control file for an existing database?
When MAXLOGMEMBERS needs to be changed.
168. When configuring a database for ARCHIVELOG mode, you use an initialisation parameter to specify which
action?
Where to store archived redo log files
169. Which command creates a text backup of the control file?
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
170. You are configuring a database for ARCHIVELOG mode. Which initialization parameter should you use?
LOG_ARCHIVE_DEST
171. How does a DBA specify multiple control files?
By listing the files in the CONTROL_FILES parameter.
172. Which dynamic view should a DBA query to obtain information about the different sections of the control
file?
V$CONTROLFILE_RECORD_SECTION
173. What is a characteristic of the control file?
It must be updated at every log switch.
174. Which statements about online redo log members in a group are true?
All members in a group are the same size.
175. Which command does a DBA use to list the current status of archiving?
ARCHIVE LOG LIST;
176. When performing an open database backup, which statement is NOT true?
The database can be open but only in READ ONLY mode.
177. Which task can a DBA perform using the export/import facility?
Transport tablespaces between databases.
178. Why does this command cause an error?
exp system/manager inctype=full file=expdat.dmp
The full=y parameter needs to be specified.
179. Which import option do you use to create tables without data?
ROWS
180. Which export option will generate code to create an initial extent that is equal to the sum of the sizes of all
the extents currently allocated to an object?
COMPRESS
181. Can I take 1 dump file set from my source database and import it into multiple databases?
Yes
181. EXP command is used?
To take Backup of the Oracle Database
182. Can we export a dropped table?
No
183. What is the default value for IGNORE parameter in EXP/IMP?
No
184. Why is Direct Path Export Faster?
This option bypasses the SQL layer
185. Is there a way to estimate the size of an export job before it gets underway?
Yes
186. Can I monitor a Data Pump Export or Import job while the job is in progress?
Yes
187. If a job is stopped either voluntarily or involuntarily, can I restart it?
Yes
188. Does Data Pump support Flashback?
Yes
189. If the tablespace is read-only, can we export objects from that tablespace?
Yes
190. Dump files exported using traditional EXP are compatible with DATAPUMP?
False
191. Before a DBA creates a transportable tablespace, which condition must be completed?
The target system must be on the same operating system.
192. Can we transport tablespace from one database to another database which is having SYS owned objects?
No
193. What is default value for TRANSPORT_TABLESPACE Parameter in EXP?
No
194. How to find whether tablespace is created in that database or transported from another database?
DBA_TABLESPACES (the PLUGGED_IN column)
195. Can we Perform TTS using EXPDP?
Yes
196. Can we Transport Tablespace which has Materialized View in it?
No
197. When would a DBA need to perform a media recovery?
When a data file is not synchronized with the other data files, redo logs, and control files.
198. Why would you set a data file offline when the database is in MOUNT state?
To allow for automatic data file recovery.
199. What is a cause of media failure?
There is a head crash on the disk containing a database file.
200. Which of the following would not require you to perform an incomplete recovery?
Instance failure
201. In what scenario do you have to open a database with the RESETLOGS option?
All of the above (e.g. after incomplete recovery, or after recovery using a backup control file)
202. Is it possible to take a consistent backup if the database is in NOARCHIVELOG mode?
Yes
203. The database is in ARCHIVELOG mode and a datafile that was never backed up is lost. What recovery is possible?
Complete online recovery
204. You should issue a backup of the control file after issuing which command?
CREATE TABLESPACE
205. The alert log will never contain specific information about which database backup activity?
Performing an operating system backup of the database files.
206. A tablespace becomes unavailable because of a failure. The database is running in NOARCHIVELOG mode?
What should the DBA do to make the database available?
Restore the data files, redo log files, and control files from an earlier copy of a full database backup.
207. How often does a read-only tablespace need to be backed up?
Only once after the tablespace becomes read-only
208. With the instance down, how would you recover a lost control file?
Restore backup control file & recover using backup controlfile
209. Which action does Oracle recommend after a DBA recovers from the loss of the current online redo-log?
Back up the database
210. Which command creates a text backup of the control file?
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
211. Which option is used in the parameter file to detect corruptions in an Oracle data block?
DBVERIFY
212. Your database is configured in ARCHIVELOG mode. Which backups cannot be performed?
Online control file backups using the ALTER CONTROLFILE BACKUP command
213. You are using hot backup without being in archivelog mode, can you recover in the event of a failure?
No
214. Which of the following statements is true when tablespaces are put in backup mode for hot backups?
A high volume of redo is generated
215. Can a consistent backup be performed when the database is open?
No
216. Can we shutdown the database if it is in BEGIN BACKUP mode?
Yes
217. Which data dictionary view helps you to view whether tablespace is in BEGIN BACKUP Mode or not?
V$backup
218. Which command is used to allow RMAN to store a group of commands in the recovery catalog?
CREATE SCRIPT
219. When using Recovery Manager without a catalog, the connection to the target database?
Can be a local or a remote connection.
220. Work is done by Recovery Manager through?
Operating system commands
221. You perform an incomplete database recovery using RMAN. Which state of target database is needed?
Mount
222. Is it possible to perform Transportable tablespace(TTS) using RMAN ?
Yes
223. Which type of file does RMAN NOT include in its backups?
Online redo-logs
224. When using Recovery Manager without a catalog, the connection to the target database should be made as?
A user with SYSDBA privilege
225. RMAN online backup generates excessive Redo information?
False
226. Which background process will be invoked when we enable BLOCK CHANGE TRACKING?
CTWr
227. Where should a recovery catalog be created?
In a separate, dedicated database (not in the target database)
228. How to list restore points in RMAN?
RC_RESTORE_POINT view
229. Without LIST FAILURE, can we issue ADVISE FAILURE in Data Recovery Advisor?
Yes
230. Import Catalog Command is used for?
To merge two different recovery catalogs
231. What does intrafile backup parallelism do?
Divides a file into multiple sections and backs the sections up in parallel (multisection backup)
232. What is the difference between pfile and spfile? Where are these files located?
233. What will you do if the pfile and spfile are deleted? Can you start the database?
The pfile (init.ora) is a text file, hence setting any Oracle init parameter in this file requires restarting the database.
With an spfile we can dynamically set certain Oracle init parameters without restarting the instance.
Example: alter system set DB_CACHE_SIZE=2G scope=both; (both means memory and spfile). The location of the
pfile/spfile is $ORACLE_HOME/dbs.
If the init.ora/spfile is lost, we can manually create a pfile using any other database's pfile, editing it as per the
db_name, control_files, etc.
And then start the database. Later on we can create the spfile from the pfile.
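A minimal sketch of the pfile/spfile round trip (the SID ORCL and the paths are examples only):
create pfile='/tmp/initORCL.ora' from spfile;   -- text backup of the spfile
-- edit /tmp/initORCL.ora if needed, then either start with it directly:
startup pfile='/tmp/initORCL.ora';
-- or rebuild the spfile from the edited pfile and restart normally:
create spfile from pfile='/tmp/initORCL.ora';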
234. What is the difference between Static and Dynamic init.ora/spfile parameters?
Changing Oracle Static parameters requires instance restart to make them effective.
Dynamic parameters are immediately effective in a running Oracle instance and do not require a restart.
235. What is the complete syntax to set DB_CACHE_SIZE in memory and spfile?
alter system set DB_CACHE_SIZE=2G scope=both;
236. How do we configure multiple Buffer Cache in Oracle. Whats the benefit? Does setting multiple Cache
requires Database Restart?
We can set multiple buffer caches by setting the DB_nK_CACHE_SIZE dynamic parameter in the pfile or spfile. n can be
2, 4, 8, 16, or 32.
If db_block_size=8K then DB_8K_CACHE_SIZE is not allowed.
OLTP databases have small transactions, so they need a small block size (2K, 4K, 8K) and hence a 2K, 4K, or 8K cache size.
Datawarehouse databases work on big transactions that affect big tables, hence they need a bigger block size (8K, 16K, 32K).
If a database is mixed, having both OLTP and datawarehouse needs, we need to configure multiple block sizes and also
create tablespaces of different block sizes using the BLOCKSIZE syntax.
Multiple buffer cache parameters are dynamic; a database restart is not needed.
237. What is Oracle Golden Gate?
It is software used for replicating data from one database to another. The source and target can be Microsoft SQL
Server, Oracle, IBM DB2, Sybase, or MySQL running on any OS.
238. Can we create Tablespaces of multiple Block Sizes. If yes, what is the Syntax?
YES it is possible. We need to set Buffer Caches of corresponding block size, and create the tablespace with
BLOCKSIZE syntax.
For example if we need Tablespace of 32K size we will use following steps:
alter system set db_32k_cache_size=2G scope=both;
create tablespace hr_data datafile ‘/u01/app/oracle/oradata/hrprd/hr_data01.dbf’ size 1G BLOCKSIZE 32K;
239. How do you calculate the size of oracle memory areas Buffer Cache, Log Buffer, Shared Pool, PGA etc?
A common rule of thumb is to allocate 70-80% of the Unix server RAM to Oracle, and of that allocate 60-70% to the
buffer cache, 20-30% to the PGA, and the remainder to the shared pool and log buffer.
240. What is OMF? What spfile parameters are used to configure OMF. What is the benefit?
OMF is oracle managed files and it is used to simplify the syntax for Datafile, Logfile, Tablespace and controlfile
creation.
init.ora/spfile parameters to configure OMF are:
db_create_file_dest, db_create_online_log_dest_n (n = 1 to 5)
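As an illustration, once the OMF destinations are set, a tablespace can be created without any DATAFILE clause and
Oracle generates the file name and location itself (the paths and tablespace name are examples):
alter system set db_create_file_dest='/u01/app/oracle/oradata' scope=both;
alter system set db_create_online_log_dest_1='/u02/app/oracle/oradata' scope=both;
create tablespace omf_demo;  -- datafile created and named automatically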
241. What is Database Cloning? Why Cloning is needed? What are the steps to clone a database?
Cloning is used to create dev and test database from production on a different machine. Refer to blog for complete
steps.
242. What is Oracle Streams?
Oracle Streams is used to Replicate/Transfer Data from one Oracle Database to another Oracle Database.
243. There are 2 control files for a database. What will happen when 1 control file is deleted and you try to start
database? How you will fix this problem?
If one control file out of 2 is missing, Oracle will complain when we start the database. To fix this we need to modify the
CONTROL_FILES init.ora/spfile parameter and remove the entry for the deleted control file. We can also copy
control01.ctl to control02.ctl and then start the database, which will also fix the error.
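A sketch of the parameter fix (the path is an example); after editing the spfile the instance is restarted:
alter system set control_files='/u01/app/oracle/oradata/orcl/control01.ctl' scope=spfile;
shutdown immediate
startup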
244. What is Dynamic performance view and What is Data Dictionary Views. Give some examples of each?
During database operation, Oracle maintains a set of virtual tables/views that record current database activity.
These are called dynamic performance views because they are continuously updated while a database is open and in
use.
These are also called V$ views. The GV$ views used in RAC are the same as the V$ views but carry an extra INST_ID
column identifying the instance.
Dynamic performance views: v$datafile, v$controlfile, v$sql, v$transaction
Data dictionary views: dba_users, dba_tablespaces, dba_sys_privs
245. You are working in database that does lot of Sorting , i.e SELECT queries use a lot of ORDER BY and GROUP
BY? What Oracle memory area and Physical File/Tablespace you need to tune and How?
We need bigger PGA and TEMP tablespace space to support excessive Sorting.
246. Why we upgrade a database. What are the steps to upgrade database. Any errors you got during upgrade?
Every few years an Oracle Database Version gets desupported by Oracle so we need to upgrade to newer Oracle
version. Currently Oracle 9i is not supported by oracle. Also we need to upgrade to newer versions to use the new
features/tools provided by newer Oracle version like 11gr1/11gr2.
We should use utlu112i.sql , utlu112s.sql and DBUA/catupgrd.sql to upgrade a database to 11gr2.
247. What is MEMORY_TARGET not supported error. How do you fix it?
This error occurs when the Linux shared memory filesystem (/dev/shm) or swap space is too small. Increase its size to fix the error.
248. What are the steps to manually create a database?
Create init.ora/spfile
startup nomount
Run Create Database command to manually create database.
Refer to blog for exact steps.
249. A DBA ran a DELETE statement to delete all records in a table. The table has 50 million rows. While the delete is
running, his SQL*Plus session terminates abnormally. What will Oracle do internally?
When the session terminates, the PMON process will roll back the transaction.
Next question: which query/view will you use to monitor the rollback/undo that Oracle is doing?
V$TRANSACTION columns used_ublk and used_urec
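For example, the rollback progress of the dead transaction can be watched like this (used_ublk and used_urec shrink
toward zero as PMON rolls the transaction back):
select xidusn, xidslot, xidsqn, status, used_ublk, used_urec from v$transaction;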
250. What is Oracle Dataguard?
Dataguard is used to configure a standby database at a remote location. Dataguard provides database protection in
case of a natural disaster (earthquake, flood) when the complete datacenter is lost and the database is damaged. The
business will continue using the standby database at the remote location.
251. Can we change the DB_BLOCK_SIZE? if Yes. What are the steps?
We cannot change db_block_size using any Oracle command. If it is required, here are the steps:
Export data from the old database using Data Pump (expdp) into a .dmp file.
Create a new database with db_block_size set to any of the values (2K, 4K, 8K, 16K, 32K) as per your requirement.
Import data into the new database using the .dmp file.
252. Explain the Oracle Architecture?
Oracle consists of Instance and Physical Database.
Instance has SGA, PGA and Background Process.
Physical Database consists of Datafiles, Control files, Log files and Archive log files
253. What happens internally in Oracle when a user connects and runs a SELECT query? What SGA areas and
background processes are involved?
The user process connects (via the listener) to a server process. The server process parses the query in the shared pool
(library cache and data dictionary cache), then reads the required blocks from the datafiles into the database buffer
cache, if they are not already cached, and returns the rows to the user. No background process writes are needed for a
plain SELECT.
254. How do you create a tablespace, undo tablespace and temp tablespace. What are the Syntax?
Tablespace -> create tablespace ...
Undo tablespace -> create undo tablespace ...
Temp tablespace -> create temporary tablespace ... tempfile ...
255. As the HR user you logged in, created an EMP_BIG table, and began inserting 10 lakh rows. While inserting you
got the error ORA-01688: unable to extend table EMP_BIG by 512 in tablespace HR_DATA. What are the two ways to
fix this tablespace error?
1) Resize the existing tablespace datafile to add more space
2) Add new datafile to tablespace to add more space
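A sketch of the two fixes, reusing the datafile path from the example in question 238 (sizes are examples):
alter database datafile '/u01/app/oracle/oradata/hrprd/hr_data01.dbf' resize 2G;
alter tablespace hr_data add datafile '/u01/app/oracle/oradata/hrprd/hr_data02.dbf' size 1G;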
256. What are the steps to rename a database?
Shutdown Immediate.
Startup mount
Then use the NID command to rename a database.
Refer to blog for exact steps.
257. What is the syntax to create a user and roles?
create user username identified by pass1 default tablespace hr_data temporary tablespace temp;
create role hr_read_role;
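Continuing the example, privileges can then be granted to the role and the role to the user (the object names are
assumed for illustration):
grant create session to username;
grant select on hr.employees to hr_read_role;
grant hr_read_role to username;
alter user username quota 100M on hr_data;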
258. What are the 3 init.ora parameters to manage UNDO? What is their usage?
UNDO_TABLESPACE
UNDO_MANAGEMENT=AUTO/MANUAL
UNDO_RETENTION
259. What is Snapshot too old error? How do you fix it?
Snapshot too old error occurs when a long-running query tries to read data from the undo tablespace that has
already been overwritten by new transactions.
To fix this error, we need to create a properly sized undo tablespace. Query V$UNDOSTAT for undo tablespace sizing
information.
We can also set RETENTION GUARANTEE for the undo tablespace, but it is not recommended.
260. What is undo retention guarantee? How do we set it? What are the pros and cons of setting it?
When RETENTION GUARANTEE is set for the undo tablespace, committed transactions' undo is not overwritten for the
UNDO_RETENTION period.
If we set this, new transactions will fail if there is too little space in the undo tablespace, hence it is very risky and not
recommended.
261. What are System Privileges and Object Privileges? Give some examples? What Data Dictionary view we use to
check both?
System privileges are generic database privileges e.g CREATE TABLE, CREATE VIEW, CREATE SESSION
To see System privileges query : SELECT * FROM DBA_SYS_PRIVS
Object privileges are on specific database object/table e.g SELECT ON EMPLOYEE, DELETE ON EMPLOYEE
To see Object privileges query : SELECT * FROM DBA_TAB_PRIVS
262. What is PGA? What information is stored in PGA?. What is PGA Tuning?
PGA is the Process Global Area, used to store sort data, bind variables, etc. PGA tuning means setting a proper size for
the PGA_AGGREGATE_TARGET init.ora/spfile parameter for better performance.
263. What are the steps to identify a slow running SQL and tune it?
a) Monitor sessions to find slow running sql.
b) Generate Explain Plan/SQL plan to find the root cause of slowness.
c) Tune the sqls by Creating indexes or Using SQL Hints or by rewriting a Bad sql
264. What are all the preparation work a DBA need to do before installing Oracle?
Set linux kernel parameters.
Install Oracle recommended Linux packages.
For all steps Refer to Oracle Installation blog.
265. Any error that you got during Oracle Installation and how did you fix it?
Examples of Oracle installation errors/warnings are: kernel parameters not set, Linux packages missing, insufficient
memory for Oracle.
266. What is default tablespace and temporary tablespace?
Default Tablespace : Place where a user creates objects if the user does not specify some other tablespace. Note
that having a default tablespace does not imply that the user has the privilege of creating objects in that tablespace,
nor does the user have a quota of space in that tablespace in which to create objects. Both of these are granted
separately.
Temporary tablespace: This is a place where temporary objects, such as sorts and temporary tables, are created on
behalf of the user by the instance. No quota is applied to temporary tablespaces.
267. Which privilege allows you to select from tables owned by other users?
The SELECT ANY TABLE privilege allows you to select from tables owned by other users.
268. What command we use to revoke system privilege?
REVOKE SELECT ANY TABLE FROM username;
269. How do we create a Role?
A role is a named group of related privileges that are granted to users or to other roles. A DBA manages privileges
through roles.
To create a role:
CREATE ROLE role_name;
OR
1. In Enterprise Manager Database Control, click the Server tab and then click Roles under the Security heading.
2. Click the Create button
270. Difference between non-deferred and deferred constraints?
Nondeferred constraints, also known as immediate constraints, are enforced at the end of every DML statement. A
constraint violation causes the statement to be rolled back. If a constraint causes an action such as delete cascade,
the action is taken as part of the statement that caused it. A constraint that is defined as nondeferrable cannot be
changed to a deferrable constraint. For nondeferrable constraints, the primary key and unique key constraints need
unique indexes; if the column or columns already have a non-unique index, constraint creation fails because those
indexes cannot be used for a unique or primary key.
Deferred constraints are constraints that are checked only when a transaction is committed.
If constraint violations are detected at commit time, the entire transaction is rolled back. These constraints are most
useful when both the parent and child rows in a foreign key relationship are entered at the same time, as in the case
of an order entry system in which the order and the items in the order are entered at the same time. For deferrable
constraints, primary key and unique keys need non-unique indexes; if the column or columns already have a unique
index on them, constraint creation fails because those indexes cannot be deferred.
271. Difference between varchar and varchar2 data types?
Varchar can store up to 2000 bytes and varchar2 can store up to 4000 bytes. Varchar will occupy space for NULL values
and varchar2 will not occupy any space. The two differ with respect to space usage.
272. In which language Oracle has been developed?
Oracle has been developed using C Language.
273. What is RAW datatype?
The RAW datatype is used to store values in binary data format. The maximum size for a RAW column in a table is 32767 bytes.
274. What is the use of NVL function?
The NVL function is used to replace NULL values with another (given) value. An example is:
NVL(value, replacement_value)
275. Whether any commands are used for Months calculation? If so, what are they?
In Oracle, the months_between function is used to find the number of months between the given dates. An example is:
MONTHS_BETWEEN(date1, date2)
276. What are nested tables?
A nested table is a data type in Oracle used to support columns containing multivalued attributes; it can hold an
entire sub-table.
277. What is COALESCE function?
COALESCE function is used to return the value which is set to be not null in the list. If all values in the list are null,
then the coalesce function will return NULL.
Coalesce(value1, value2,value3,…)
278. What is BLOB datatype?
A BLOB data type is a varying-length binary string used to store large binary data (up to 4 GB in Oracle). Length should
be specified in bytes for BLOB.
279. How do we represent comments in Oracle?
Comments in Oracle can be represented in two ways:
Two dashes (--) before the beginning of the line - for a single statement
/* ... */ is used to represent a comment spanning a block of statements
280. What is DML?
Data Manipulation Language (DML) is used to access and manipulate data in the existing objects. DML statements
are insert, select, update and delete and it won’t implicitly commit the current transaction.
281. What is the difference between TRANSLATE and REPLACE?
TRANSLATE is used for character-by-character substitution, and REPLACE is used to substitute one string with another.
282. How do we display rows from the table without duplicates?
Duplicate rows can be removed by using the keyword DISTINCT in the select statement.
283. What is the usage of Merge Statement?
Merge statement is used to select rows from one or more data source for updating and insertion into a table or a
view. It is used to combine multiple operations.
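A minimal sketch of a MERGE (bonuses and employees are hypothetical tables):
merge into bonuses b
using employees e
on (b.employee_id = e.employee_id)
when matched then update set b.bonus = e.salary * 0.10
when not matched then insert (employee_id, bonus) values (e.employee_id, e.salary * 0.10);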
284. What is NULL value in oracle?
NULL value represents missing or unknown data. This is used as a place holder or represented it in as default entry to
indicate that there is no actual data present.
285. What is USING Clause and give example?
The USING clause is used to specify with the column to test for equality when two tables are joined.
SELECT * FROM employee JOIN salary USING (employee_id);
The employee table joins with the salary table on employee_id.
286. What is key preserved table?
A table is set to be key preserved table if every key of the table can also be the key of the result of the join. It
guarantees to return only one copy of each row from the base table.
287. What is WITH CHECK OPTION?
The WITH CHECK OPTION clause specifies the level of checking to be done for DML statements against a view. It is
used to prevent changes through a view that would produce rows not included in the view's sub query.
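For example, assuming the usual emp demo table, the view below rejects DML that would create rows falling outside
its defining predicate:
create view sales_emps as
select * from emp where deptno = 30
with check option;
-- an INSERT through sales_emps with deptno = 10 fails with ORA-01402 (WITH CHECK OPTION where-clause violation)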
288. What is the use of Aggregate functions in Oracle?
Aggregate function is a function where values of multiple rows or records are joined together to get a single value
output. Common aggregate functions are –
Average
Count
Sum
289. What do you mean by GROUP BY Clause?
A GROUP BY clause can be used in select statement where it will collect data across multiple records and group the
results by one or more columns.
290. What is a sub query and what are the different types of subqueries?
Sub Query, also called a Nested Query or Inner Query, is a query placed inside another query and is used to get data from multiple tables. A sub query is typically added in the WHERE clause of the main query.
There are two different types of subqueries:
Correlated sub query
A correlated sub query cannot be evaluated as an independent query; it references a column of a table listed in the FROM list of the outer query.
Non-correlated sub query
This can be evaluated as if it were an independent query. Results of the sub query are submitted to the main (parent) query.
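A sketch of a correlated subquery (illustrative names): the inner query is re-evaluated for each outer row because it references e.deptno.
[sql]
SELECT e.ename, e.salary
FROM employees e
WHERE e.salary > (SELECT AVG(i.salary) FROM employees i WHERE i.deptno = e.deptno);
[/sql]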
291. What is cross join?
Cross join is defined as the Cartesian product of records from the tables present in the join. A cross join produces a
result that combines each row from the first table with each row from the second table.
292. What are temporal data types in Oracle?
Oracle provides following temporal data types:
Date Data Type – Different formats of Dates
TimeStamp Data Type – Different formats of Time Stamp
Interval Data Type – Interval between dates and time
293. How do we create privileges in Oracle?
A privilege is nothing but the right to execute an SQL query or to access another user's object. A privilege can be given as a
system privilege or an object privilege. For example, granting a role with the ability to grant it onward:
[sql]GRANT dba TO user2 WITH ADMIN OPTION;[/sql]
294. What is VArray?
VARRAY is an Oracle data type used to have columns containing multi-valued attributes; it can hold a bounded array
of values (a fixed maximum number of elements).
295. How do we get field details of a table?
Describe <Table_Name> is used to get the field details of a specified table.
296. What is the difference between rename and alias?
Rename is a permanent name given to a table or a column whereas Alias is a temporary name given to a table or
column. Rename is nothing but replacement of name and Alias is an alternate name of the table or column.
297. What is a View?
View is a logical table based on one or more tables or views. The tables upon which the view is based are
called base tables, and the view itself doesn't contain data.
298. What is a cursor variable?
A cursor variable is associated with different statements which can hold different values at run time. A cursor variable
is a kind of reference type.
299. What are cursor attributes?
Each cursor in Oracle has set of attributes which enables an application program to test the state of the cursor. The
attributes can be used to check whether cursor is opened or closed, found or not found and also find row count.
300. What are SET operators?
SET operators are used with two or more queries and those operators are Union, Union All, Intersect and Minus.
301. How can we delete duplicate rows in a table?
Duplicate rows in the table can be deleted by using ROWID.
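A common sketch (the table and key column are illustrative):
[sql]
DELETE FROM employees a
WHERE a.rowid > (SELECT MIN(b.rowid) FROM employees b
                 WHERE b.employee_id = a.employee_id);
[/sql]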
302. What are the attributes of Cursor?
Attributes of Cursor are
%FOUND
Returns NULL if the cursor is open and no fetch has been executed
Returns TRUE if the fetch of the cursor executed successfully
Returns FALSE if no rows are returned
%NOTFOUND
Returns NULL if the cursor is open and no fetch has been executed
Returns FALSE if a fetch has been executed and returned a row
Returns TRUE if no row was returned
%ISOPEN
Returns TRUE if the cursor is open
Returns FALSE if the cursor is closed
%ROWCOUNT
Returns the number of rows fetched so far. The entire cursor has to be iterated to get the exact final count.
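A short PL/SQL sketch showing the attributes in use (table and column names are illustrative):
[sql]
DECLARE
  CURSOR c IS SELECT ename FROM employees;
  v_name employees.ename%TYPE;
BEGIN
  OPEN c;
  LOOP
    FETCH c INTO v_name;
    EXIT WHEN c%NOTFOUND;  -- TRUE once a fetch returns no row
    DBMS_OUTPUT.PUT_LINE(c%ROWCOUNT || ': ' || v_name);
  END LOOP;
  CLOSE c;
END;
/
[/sql]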
303. Can we store pictures in the database and if so, how it can be done?
Yes, we can store pictures in the database using the LONG RAW data type, which stores binary data up to 2
gigabytes in length. But a table can have only one LONG RAW column; the BLOB type is the preferred modern alternative.
304. What is an integrity constraint?
An integrity constraint is a declaration that defines a business rule for a table column. Integrity constraints are used to
ensure accuracy and consistency of data in a database. There are three types – Domain Integrity, Entity Integrity and
Referential Integrity.
305. What is an ALERT?
An alert is a window which appears in the center of the screen overlaying a portion of the current display.
306. What is hash cluster?
Hash Cluster is a technique used to store table rows for faster retrieval. A hash function is applied to a row's key value to locate and retrieve the
rows from the table.
307. What are the various constraints used in Oracle?
Following are commonly used constraints:
NULL – indicates that a particular column can contain NULL values
NOT NULL – indicates that a particular column cannot contain NULL values
CHECK – validates that values in the given column meet specific criteria
DEFAULT – assigns a default value to the column when none is supplied
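A small sketch combining these (the table is hypothetical):
[sql]
CREATE TABLE products (
  product_name VARCHAR2(50) NOT NULL,
  price        NUMBER DEFAULT 0 CHECK (price >= 0),
  notes        VARCHAR2(200) NULL
);
[/sql]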
308. What is difference between SUBSTR and INSTR?
SUBSTR returns a specific portion of a string, whereas INSTR returns the character position at which a pattern is found in a
string.
SUBSTR returns a string whereas INSTR returns a number.
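For example, against DUAL:
[sql]
SELECT SUBSTR('oracle-dba', 1, 6) FROM dual;  -- 'oracle' (a string)
SELECT INSTR('oracle-dba', '-') FROM dual;    -- 7 (a number: the position of '-')
[/sql]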
309. What is the parameter mode that can be passed to a procedure?
IN, OUT and INOUT are the modes of parameters that can be passed to a procedure.
310. What are the different Oracle Database objects?
There are different data objects in Oracle –
Tables – set of elements organized in rows and columns
Views – Virtual table derived from one or more tables
Indexes – Performance tuning method for processing the records
Synonyms – Alias name for tables
Sequences – Multiple users generate unique numbers
Tablespaces – Logical storage unit in Oracle
311. What are the differences between LOV and List Item?
LOV is a property, whereas a list item is a single item; a list is a collection of list items. A
list item can have only one column, while an LOV can have one or more columns.
312. What are privileges and Grants?
Privileges are the rights to execute SQL statements – for example, the right to connect to the database. Grants are given on
objects so that the objects can be accessed accordingly. Grants can be provided by the owner or creator of an object.
313. What is the difference between $ORACLE_BASE and $ORACLE_HOME?
ORACLE_BASE is the main or root directory of an Oracle installation, whereas ORACLE_HOME is located beneath the base
directory and is the folder in which a particular Oracle product resides.
314. What is the fastest query method to fetch data from the table?
Row can be fetched from table by using ROWID. Using ROW ID is the fastest query method to fetch data from the
table.
315. What is the maximum number of triggers that can be applied to a single table?
12 is the classic answer: there are 12 basic trigger type combinations (BEFORE/AFTER, for INSERT/UPDATE/DELETE, at row or statement level) that can be applied to a single table.
316. How to display row numbers with the records?
Row numbers can be displayed alongside the records using ROWNUM –
[sql]Select rownum, <fieldnames> from table;[/sql]
This query will display row numbers and the field values from the given table.
317. How can we view last record added to a table?
The last record added to a table can be viewed (assuming rows come back in insertion order, which Oracle does not guarantee without an ordering column) by –
[sql]Select * from (select * from employees order by rownum desc) where rownum<2;[/sql]
318. What is the data type of DUAL table?
The DUAL table is a one-column table present in oracle database. The table has a single VARCHAR2(1) column called
DUMMY which has a value of ‘X’.
319. What is difference between Cartesian Join and Cross Join?
There is no difference between the joins; Cartesian and cross joins are the same. A cross join gives the Cartesian product of
two tables – every row from the first table is combined with every row from the second table, which is called the Cartesian product.
A join without a where clause also gives the Cartesian product.
320. How to display employee records who gets more salary than the average salary in the department?
This can be done with a correlated subquery –
[sql]SELECT * FROM employee e WHERE e.salary > (SELECT AVG(salary) FROM employee WHERE deptno = e.deptno);[/sql]
321. What is the difference between RMAN and a traditional hot backup?
RMAN is faster, can do incremental (changes-only) backups, and does not place tablespaces into hot backup mode.
322. What are bind variables and why are they important?
With bind variables in SQL, Oracle can cache related queries a single time in the SQL cache (shared SQL area). This avoids a hard
parse each time, which saves on the various locking and latching resources used to check object existence and so on. A sketch follows below.
BONUS: For rarely run queries, especially BATCH queries, we explicitly DO NOT want to use bind variables, as they
hide information from the Cost Based Optimizer.
BONUS BONUS: For batch queries from 3rd party apps like PeopleSoft, if we can't remove bind variables, we can use
bind variable peeking!
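A minimal SQL*Plus sketch of the difference (the table is illustrative):
[sql]
-- A literal: every distinct value is a new statement text, so each may hard parse
SELECT * FROM employees WHERE employee_id = 101;
-- A bind variable: one shared cursor serves every value
VARIABLE emp_id NUMBER
EXEC :emp_id := 101
SELECT * FROM employees WHERE employee_id = :emp_id;
[/sql]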
323. In PL/SQL, what is bulk binding, and when/how would it help performance?
Oracle's SQL and PL/SQL engines are separate parts of the kernel which require context switching, like between unix
processes. This is slow, and uses up resources. If we loop on an SQL statement, we are implicitly flipping between
these two engines. We can minimize this by loading our data into an array and using PL/SQL bulk binding operations
to do it all in one go, as sketched below.
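A sketch using BULK COLLECT and FORALL (table and column names are illustrative):
[sql]
DECLARE
  TYPE id_tab IS TABLE OF employees.employee_id%TYPE;
  v_ids id_tab;
BEGIN
  -- one context switch to fetch all the ids into a PL/SQL array
  SELECT employee_id BULK COLLECT INTO v_ids FROM employees;
  -- one context switch to run all the updates
  FORALL i IN 1 .. v_ids.COUNT
    UPDATE employees SET salary = salary * 1.1
    WHERE employee_id = v_ids(i);
END;
/
[/sql]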
324. Why is SQL*Loader direct path so fast?
SQL*Loader with direct path option can load data ABOVE the high water mark of a table, and DIRECTLY into the
datafiles, without going through the SQL engine at all. This avoids all the locking, latching, and so on, and doesn’t
impact the db (except possibly the I/O subsystem) at all.
325. What are the tradeoffs between many vs few indexes? When would you want to have many, and when would
it be better to have fewer?
Fewer indexes on a table mean faster inserts/updates. More indexes mean faster, more specific WHERE clauses
possibly without index merges.
326. What is the difference between RAID 5 and RAID 10? Which is better for Oracle?
RAID 5 is striping with an extra disk for parity. If we lose a disk we can reconstruct from that parity disk.
RAID 10 is mirroring pairs of disks, and then striping across those sets.
RAID 5 was created when disks were expensive. Its purpose was to provide RAID on the cheap. If a disk fails, the IO
subsystem will perform VERY slowly during the rebuild process. What's more, your likelihood of failure increases
dramatically during this period, with all the added weight of the rebuild. Even when it is operating normally, RAID 5 is
slow for everything but reading. Given that, and knowing databases (especially Oracle's redo logs) continue to
experience write activity all the time, we should avoid RAID 5 in all but the rare database that is MOSTLY read activity.
Don't put redo logs on RAID 5.
RAID 10 is just all-around goodness. If you lose one disk in a set of 10, for example, you could lose any one of eight
other disks and have no troubles. What's more, rebuilding does not impact performance at all since you're simply
making a mirror copy. Lastly, RAID 10 performs exceedingly well in all types of databases.
327. When using Oracle export/import what character set concerns might come up? How do you handle them?
Be sure to set NLS_LANG, for example to "AMERICAN_AMERICA.WE8ISO8859P1". If your source database is US7ASCII,
beware of 8-bit characters. Also be wary of multi-byte character sets, as those may require extra attention. Also
watch export/import for messages about any "character set conversions" which may occur.
328. Name three SQL operations that perform a SORT?
a. CREATE INDEX
b. DISTINCT
c. GROUP BY
d. ORDER BY
e. INTERSECT
f. MINUS
g. UNION
h. UNINDEXED TABLE JOIN
329. What is your favorite tool for day-to-day Oracle operation?
Hopefully we hear some use of command line as the answer!
330. What is the difference between Truncate and Delete? Why is one faster? Can we ROLLBACK both? How would
a full table scan behave after?
Truncate is nearly instantaneous, cannot be rolled back, and is fast because Oracle simply resets the HWM. When a
full table scan is performed on a table, such as for a sort operation, Oracle reads up to the HWM. So if you delete every
single solitary row in a 10-million-row table so it is now empty, sorting on that table of 0 rows would still be extremely
slow.
331. What is the difference between a materialized view (snapshot) fast refresh versus complete refresh? When is
one better, and when the other?
Fast refresh maintains a change log table, which records change vectors, not unlike how the redo logs work. There is
overhead to this, as with a table that has a LOT of indexes on it: inserts and updates will be slower. However, if
you are performing refreshes often, like every few minutes, you want to do fast refresh so you don't have to full-
table-scan the source table. Complete refresh is good if you're going to refresh once a day: it does a full table scan on
the source table and recreates the snapshot/mview. Also, inserts/updates on the source table are NOT impacted for
tables where complete-refresh snapshots have been created. A sketch of both follows.
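An illustration using DBMS_MVIEW (the mview name here is hypothetical):
[sql]
EXEC DBMS_MVIEW.REFRESH('SALES_MV', method => 'F');  -- 'F' = fast refresh via the mview log
EXEC DBMS_MVIEW.REFRESH('SALES_MV', method => 'C');  -- 'C' = complete refresh (full rebuild)
[/sql]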
332. What does the NO LOGGING option do? Why would we use it? Why would we be careful of using it?
It disables the logging of changes to the redo logs. It does not disable ALL logging, however, as Oracle continues to
log a minimal base of changes for crash recovery (if you pull the plug on the box, for instance). However, it will cause problems if you
are using a standby database. Use it to speed up operations like an index rebuild or partition maintenance operations, as below.
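For example (the index name is illustrative); note that after a NOLOGGING operation the affected object is not recoverable from the redo stream, so a fresh backup is advisable:
[sql]
ALTER INDEX emp_idx REBUILD NOLOGGING;  -- rebuild with minimal redo generation
ALTER INDEX emp_idx LOGGING;            -- restore normal logging afterwards
[/sql]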
333. Tell me about standby database? What are some of the configurations of it? What should we watch out for?
Standby databases allow us to create a copy of our production db for disaster recovery. We merely switch the mode on
the target db and bring it up as read/write. It can be set up as master->slave or master->master; the latter allows the
former prod db to become the standby once the failure cause is remedied. Watch out for NOLOGGING!! Be sure
we're in archivelog mode.
334. What do you know about privileges?
A privilege is a right to execute a particular type of SQL statement or to access another user’s object
Privileges are divided into two categories:
System privileges: Each system privilege allows a user to perform a particular database operation or class of database
operations. For example, the privilege to create tablespaces is a system privilege.
Object privileges: Object privileges allow a user to perform a particular action on a specific object, such as a table,
view, sequence, procedure, function, or package. Without specific permission, users can access only their own
objects.
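For instance (user and object names are illustrative):
[sql]
GRANT CREATE TABLESPACE TO scott;       -- system privilege
GRANT SELECT ON hr.employees TO scott;  -- object privilege
[/sql]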
Oracle ASM FAQ
Questions
1. What is the use of ASM (or) Why ASM preferred over file system? Benefits?
2. Describe about ASM architecture?
3. How does a database connect to the ASM instance?
4. What are the init parameters related to ASM?
5. What is rebalancing (or) what is the use of ASM_POWER_LIMIT?
6. What is significance of re-balance power?
7. In what situations we need to re-balance the disk?
8. In what situation asm instance will automatically re-balance the disk?
9. Explain about disk group managements?
10. What are different types of redundancies in ASM & explain?
11. How to copy file to/from ASM from/to file system?
12. How to find out the databases, which are using the ASM instance?
13. What is Striping and Mirroring? What are different types of striping and Mirroring in ASM & their
differences?
14. What are Diskgroup’s and Failuregroups?
15. Can ASM be used as replacement for RAID?
16. What are the background processes in ASM?
17. What are the file types that ASM support and keep in disk groups?
18. How many ASM Diskgroups can be created under one ASM Instance?
19. What process does the rebalancing?
20. How does ASM provides Redundancy?
21. Can we change the Redundancy for Diskgroup after its creation?
22. Unable to open the ASM instance. What is the reason?
23. Can ASM instance and database (rdbms) be on different servers?
24. Can we see the files stored in the ASM instance using standard unix commands?
25. Can we use ASM for storing Voting Disk/OCR in a RAC instance?
26. Does ASM instance automatically rebalances and takes care of hot spots?
27. What is ASMLIB?
28. What is SYSASM role?
29. Can we use BCV to clone the ASM Diskgroup on same host?
30. Can we edit the ASM Disk header to change the Diskgroup Name?
31. What is kfed?
32. Can we use block devices for ASM Disks?
33. Is it mandatory to use disks of same size and characteristics for Diskgroups?
34. Do we need to install ASM and Oracle Database Software in different ORACLE_HOME?
35. What is the maximum size of Disk supported by ASM?
36. I have created Oracle database using DBCA and having a different home for ASM and Oracle Database. I see
that listener is running from ASM_HOME. Is it correct?
37. How does the database interact with the ASM instance and how do I make ASM go faster?
38. Do I need to define the RDBMS FILESYSTEMIO_OPTIONS parameter when I use ASM?
39. Why Oracle recommends two diskgroups?
40. We have a 16 TB database. I’m curious about the number of disk groups we should use; e.g. 1 large disk
group, a couple of disk groups, or otherwise?
41. We have a new app and don’t know our access pattern, but assuming mostly sequential access, what size
would be a good AU fit?
42. Would it be better to use BIGFILE tablespaces, or standard tablespaces for ASM?
43. What is the best LUN size for ASM?
44. In 11g RAC we want to separate ASM admins from DBAs and create different users and groups. How do we
set this up?
45. Can my RDBMS and ASM instances run different versions?
46. Where do I run my database listener from; i.e., ASM HOME or DB HOME?
47. How do I backup my ASM instance?
48. When should I use RMAN and when should I use ASMCMD copy?
49. I’m going to do add disks to my ASM diskgroup, how long will this rebalance take?
50. We are migrating to a new storage array. How do I move my ASM database from storage A to storage B?
51. Is it possible to unplug an ASM disk group from one platform and plug into a server on another platform (for
example, from Solaris to Linux)?
52. How does ASM work with multipathing software?
53. Is ASM constantly rebalancing to manage “hot spots”?
54. Draw the Diagram that how database interacts with ASM when a request is to read or open a datafile.
55. Can the disks in a diskgroup be of varied sizes? For example, one disk of 100GB and another disk of 50GB.
If so, how does ASM manage the extents?
56. What is Intelligent Data Placement?
57. What is ASM preferred Mirror read? How does it useful?
58. What is ACFS?
59. What is ADVM?
60. What is ASM Template?
61. Why does Oracle recommend two diskgroups?
Answers
1. What is the use of ASM (or) Why ASM preferred over file system? Benefits?
(http://www.datadisk.co.uk/html_docs/oracle/asm.htm)
Explanation-1:
ASM is a volume manager and a file system for Oracle database files that supports single-instance Oracle Database
and Oracle Real Application Clusters (Oracle RAC) configurations. ASM is Oracle's recommended storage
management solution that provides an alternative to conventional volume managers, file systems, and raw devices.
ASM uses disk groups to store datafiles; an ASM disk group is a collection of disks that ASM manages as a unit. Within
a disk group, ASM exposes a file system interface for Oracle database files. The content of files that are stored in a
disk group is evenly distributed, or striped, to eliminate hot spots and to provide uniform performance across the
disks. The performance is comparable to the performance of raw devices.
You can add or remove disks from a disk group while a database continues to access files from the disk group. When
you add or remove disks from a disk group, ASM automatically redistributes the file contents and eliminates the need
for downtime when redistributing the content.
The ASM volume manager functionality provides flexible server-based mirroring options. The ASM normal and high
redundancy disk groups enable two-way and three-way mirroring respectively. You can use external redundancy to
enable a Redundant Array of Inexpensive Disks (RAID) storage subsystem to perform the mirroring protection
function.
ASM also uses the Oracle Managed Files (OMF) feature to simplify database file management. OMF automatically
creates files in designated locations. OMF also names files and removes them while relinquishing space when
tablespaces or files are deleted.
ASM reduces the administrative overhead for managing database storage by consolidating data storage into a small
number of disk groups. This enables you to consolidate the storage for multiple databases and to provide for
improved I/O performance.
ASM files can coexist with other storage management options such as raw disks and third-party file systems. This
capability simplifies the integration of ASM into pre-existing environments.
Oracle Enterprise Manager includes a wizard that enables you to migrate non-ASM database files to ASM. ASM also
has easy to use management interfaces such as SQL*Plus, the ASMCMD command-line interface, and Oracle
Enterprise Manager.
ASM provides striping and mirroring.
ASM is a Logical Volume Manager; it's not just a file system.
ASM lets you plug or unplug (add or remove) disks while the Oracle Database is running by using a simple SQL statement. I
think that's a strong point for ASM.
Moreover, ASM load-balances the I/O across disks so as to improve performance.
Explanation-2:
In Oracle Database 10g/11g there are two types of instances: database and ASM instances. The ASM instance, which is
generally named +ASM, is started with the INSTANCE_TYPE=ASM init.ora parameter. This parameter, when set,
signals the Oracle initialization routine to start an ASM instance and not a standard database instance. Unlike the
standard database instance, the ASM instance contains no physical files; such as logfiles, controlfiles or datafiles, and
only requires a few init.ora parameters for startup.
Upon startup, an ASM instance will spawn all the basic background processes, plus some new ones that are specific
to the operation of ASM. The STARTUP clauses for ASM instances are similar to those for database instances. For
example, RESTRICT prevents database instances from connecting to this ASM instance. NOMOUNT starts up an ASM
instance without mounting any disk group. MOUNT option simply mounts all defined diskgroups
For RAC configurations, the ASM SID is +ASMx instance, where x represents the instance number.
Benefits-1:
Provides automatic load balancing over all the available disks, thus reducing hot spots in the file system
Prevents fragmentation of disks, so you don't need to manually relocate data to tune I/O performance
Adding disks is straightforward – ASM automatically performs online disk reorganization when you add or remove
storage
Uses redundancy features available in intelligent storage arrays
The storage system can store all types of database files
Using disk groups makes configuration easier, as files are placed into disk groups
ASM provides striping and mirroring (fine- and coarse-grained – see below)
ASM and non-ASM oracle files can coexist
ASM is free!
Benefits-2:
ASM provides filesystem and volume manager capabilities built into the Oracle database kernel. With this capability,
ASM simplifies storage management tasks, such as creating/laying out databases and disk space management. Since
ASM allows disk management to be done using familiar create/alter/drop SQL statements, DBAs do not need to learn
a new skill set or make crucial decisions on provisioning.
The following are some key benefits of ASM:
ASM spreads I/O evenly across all available disk drives to prevent hot spots and maximize performance.
ASM eliminates the need for over provisioning and maximizes storage resource utilization facilitating database
consolidation.
Inherent large file support.
Performs automatic online redistribution after the incremental addition or removal of storage capacity.
Maintains redundant copies of data to provide high availability, or leverages 3rd party RAID functionality.
Supports Oracle Database as well as Oracle Real Application Clusters (RAC).
Capable of leveraging 3rd party multipath technologies.
For simplicity and easier migration to ASM, an Oracle database can contain ASM and non-ASM files.
Any new files can be created as ASM files whilst existing files can also be migrated to ASM.
RMAN commands enable non-ASM managed files to be relocated to an ASM disk group.
Enterprise Manager Database Control or Grid Control can be used to manage ASM disk and file activities.
Benefits-3:
Stripes files rather than logical volumes
Provides redundancy on a file basis
Enables online disk reconfiguration and dynamic rebalancing
Reduces the time significantly to resynchronize a transient failure by tracking changes while disk is offline
Provides adjustable rebalancing speed
Is cluster-aware
Supports reading from mirrored copy instead of primary copy for extended clusters
Is automatically installed as part of the Grid Infrastructure
2. Describe about ASM architecture?
Automatic Storage Management (ASM) instance
Instance that manages the diskgroup metadata
Disk Groups
Logical grouping of disks
Determines file mirroring options
ASM Disks
LUNs presented to ASM
ASM Files
Files that are stored in ASM disk groups are called ASM files; this includes database files
ASM background processes involved in rebalancing:
ARBn – ASM Rebalance Process. Rebalances data extents within an ASM disk group. Possible processes are ARB0-ARB9 and ARBA.
RBAL – ASM Rebalance Master Process. Coordinates rebalance activity. In an ASM instance, it coordinates rebalance activity for disk groups. In a database instance, it manages ASM disk groups.
Xnnn – (Exadata only) ASM Disk Expel Slave Process. Performs ASM post-rebalance activities. This process expels dropped disks at the end of an ASM rebalance.
When a rebalance operation is in progress, the ARBn processes will generate trace files in the background dump
destination directory, showing the rebalance progress.
Views
In an ASM instance, V$ASM_OPERATION displays one row for every active long running ASM operation executing in
the current ASM instance. GV$ASM_OPERATION will show cluster wide operations.
During the rebalance, the OPERATION will show REBAL, STATE will shows the state of the rebalance
operation, POWER will show the rebalance power and EST_MINUTES will show an estimated time the operation
should take.
In an ASM instance, V$ASM_DISK displays information about ASM disks. During the rebalance, the STATE will show
the current state of the disks involved in the rebalance operation.
Is your disk group balanced?
Run the following query in your ASM instance to get a report on disk group imbalance.
SQL> column "Diskgroup" format A30
SQL> column "Imbalance" format 99.9 Heading "Percent|Imbalance"
SQL> column "Variance" format 99.9 Heading "Percent|Disk Size|Variance"
SQL> column "MinFree" format 99.9 Heading "Minimum|Percent|Free"
SQL> column "DiskCnt" format 9999 Heading "Disk|Count"
SQL> column "Type" format A10 Heading "Diskgroup|Redundancy"
SQL> SELECT g.name "Diskgroup",
100*(max((d.total_mb-d.free_mb)/d.total_mb)-min((d.total_mb-d.free_mb)/d.total_mb))/max((d.total_mb-
d.free_mb)/d.total_mb) "Imbalance",
100*(max(d.total_mb)-min(d.total_mb))/max(d.total_mb) "Variance",
100*(min(d.free_mb/d.total_mb)) "MinFree",
count(*) "DiskCnt",
g.type "Type"
FROM v$asm_disk d, v$asm_diskgroup g
WHERE d.group_number = g.group_number and
d.group_number <> 0 and
d.state = 'NORMAL' and
d.mount_status = 'CACHED'
GROUP BY g.name, g.type;
Diskgroup Imbalance Variance Free Count Redundancy
------------------------------ --------- --------- ------- ----- ----------
ACFS .0 .0 12.5 2 NORMAL
DATA .0 .0 48.4 2 EXTERN
PLAY 3.3 .0 98.1 3 NORMAL
RECO .0 .0 82.9 2 EXTERN
Explanation-2:
Dynamic Storage Configuration:
ASM enables you to change the storage configuration without having to take the database offline. It automatically
rebalances—redistributes file data evenly across all the disks of the disk group—after you add disks to or drop disks
from a disk group.
Should a disk failure occur, ASM automatically rebalances to restore full redundancy for files that had extents on the
failed disk. When you replace the failed disk with a new disk, ASM rebalances the disk group to spread data evenly
across all disks, including the replacement disk.
Tuning Rebalance Operations:
The V$ASM_OPERATION view provides information that can be used for adjusting ASM_POWER_LIMIT and the
resulting power of rebalance operations. The V$ASM_OPERATION view also gives an estimate in the EST_MINUTES
column of the amount of time remaining for the rebalance operation to complete. You can see the effect of changing
the rebalance power by observing the change in the time estimate.
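For example, you might watch a running rebalance and adjust the power dynamically (the diskgroup name here is illustrative):
SQL> SELECT group_number, operation, state, power, est_minutes FROM v$asm_operation;
SQL> ALTER DISKGROUP data REBALANCE POWER 8;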
Effects of Adding and Dropping Disks from a Disk Group:
ASM automatically rebalances whenever disks are added or dropped. For a normal drop operation (without the
FORCE option), a disk is not released from a disk group until data is moved off of the disk through rebalancing.
Likewise, a newly added disk cannot support its share of the I/O workload until rebalancing completes. It is more
efficient to add or drop multiple disks at the same time so that they are rebalanced as a single operation. This avoids
unnecessary movement of data.
For a drop operation, when rebalance is complete, ASM takes the disk offline momentarily, and then drops it, setting
disk header status to FORMER.
You can add or drop disks without shutting down the database. However, a performance impact on I/O activity may
result.
Explanation-3:
ASM Rebalance:
The Rebalance operation provides an even distribution of file extents across all disks in the diskgroup. The rebalance
is done on each file to ensure balanced I/O load.
The RBAL background process manages the rebalance activity. It examines the extent map for each file and
redistributes the extents to the new storage configuration. The RBAL process calculates the estimated time and the work
required to perform the rebalance activity and then messages the ARBx processes to actually perform the task. The
number of ARBx processes started is determined by the parameter ASM_POWER_LIMIT.
There will be one I/O for each ARBx process at a time. Hence the impact of physical movement of file extents will be
low. The asm_power_limit parameter determines the speed of the rebalance activity. It can have values between 0
and 11. If the value is 0 no rebalance occurs. If the value is 11 the rebalance takes place at full speed. The power
value can also be set for specific rebalance activity using Alter Diskgroup statement.
The rebalance operation has various states, they are
WAIT: No operations are running for the group.
RUN: A rebalance operation is running for the group.
HALT: The DBA has halted the operation.
ERROR: The operation has halted due to errors.
You can query the V$ASM_OPERATION to view the status of rebalance activity.
The rebalance activity is an asynchronous operation, i.e., the operation runs in the background while users can
perform other tasks. In certain situations you need the rebalance activity to finish successfully before performing
other tasks. To make the operation synchronous, you add the keyword WAIT while performing the rebalance, as shown
below.
SQL> ALTER DISKGROUP ASMDB ADD DISK '/dev/sdc4' REBALANCE POWER 4 WAIT;
The above statement will not return the control to the user unless the rebalance operation ends.
Explanation-4:
Manually Rebalancing Disk Groups
You can manually rebalance the files in a disk group using the REBALANCE clause of the ALTER DISKGROUP
statement. This would normally not be required, because ASM automatically rebalances disk groups when their
configuration changes. You might want to do a manual rebalance operation if you want to control the speed of what
would otherwise be an automatic rebalance operation.
The POWER clause of the ALTER DISKGROUP...REBALANCE statement specifies the degree of parallelism, and thus the
speed of the rebalance operation. It can be set to a value from 0 to 11. A value of 0 halts a rebalancing operation until
the statement is either implicitly or explicitly re-run. The default rebalance power is set by the ASM_POWER_LIMIT
initialization parameter. See "Tuning Rebalance Operations" for more information.
The power level of an ongoing rebalance operation can be changed by entering the rebalance statement with a new
level.
The ALTER DISKGROUP...REBALANCE command by default returns immediately so that you can issue other
commands while the rebalance operation takes place asynchronously in the background. You can query the
V$ASM_OPERATION view for the status of the rebalance operation.
If you want the ALTER DISKGROUP...REBALANCE command to wait until the rebalance operation is complete before
returning, you can add the WAIT keyword to the REBALANCE clause. This is especially useful in scripts. The command
also accepts a NOWAIT keyword, which invokes the default behavior of conducting the rebalance operation
asynchronously. You can interrupt a rebalance running in wait mode by typing CTRL-C on most platforms. This causes
the command to return immediately with the message ORA-01013: user requested cancel of current operation, and
then to continue the rebalance operation asynchronously.
Additional rules for the rebalance operation include the following:
• An ongoing rebalance command will be restarted if the storage configuration changes either when you alter
the configuration, or if the configuration changes due to a failure or an outage. Furthermore, if the new
rebalance fails because of a user error, then a manual rebalance may be required.
• The ALTER DISKGROUP...REBALANCE statement runs on a single node even if you are using Oracle Real
Application Clusters (Oracle RAC).
• ASM can perform one disk group rebalance at a time on a given instance. Therefore, if you have initiated
multiple rebalances on different disk groups, then Oracle processes this operation serially. However, you can
initiate rebalances on different disk groups on different nodes in parallel.
• Rebalancing continues across a failure of the ASM instance performing the rebalance.
• The REBALANCE clause (with its associated POWER and WAIT/NOWAIT keywords) can also be used in ALTER
DISKGROUP commands that add, drop, or resize disks.
Note:
Oracle will restart the processing of an ongoing rebalance operation if the storage configuration changes.
Furthermore, if the next rebalance operation fails because of a user error, then you may need to perform a manual
rebalance.
Example: Manually Rebalancing a Disk Group
The following example manually rebalances the disk group dgroup2. The command does not return until the
rebalance operation is complete.
ALTER DISKGROUP dgroup2 REBALANCE POWER 5 WAIT;
Tuning Rebalance Operations
If the POWER clause is not specified in an ALTER DISKGROUP statement, or when rebalance is implicitly run by adding
or dropping a disk, then the rebalance power defaults to the value of the ASM_POWER_LIMIT initialization
parameter. You can adjust the value of this parameter dynamically.
ASM stripes files using extents with a coarse method for load balancing or a fine method to reduce latency.
• Coarse-grained striping is always equal to the effective AU size.
• Fine-grained striping is always equal to 128 KB.
Explanation-2:
ASM stripes files across all the disks within the disk group, thus increasing performance; each stripe is called an
'allocation unit'. ASM offers two types of striping, which depend on the type of database file.
Striping is a technique where data is stored on multiple disk drives by splitting up the data and accessing all of the
disk drives in parallel. Striping significantly speeds up disk drive performance.
Example: RAID - RAID 0 is data striping
ASM stripes its files across all the disks that belong to a disk group. It remains unclear if it follows a strict RAID 3
fashion of striping or a variant of RAID 3 that facilitates easy addition and removal of disks to and from the disk group.
Oracle Corporation recommends that all the disks that belong to a disk group have the same size, in which case each
disk gets the same number of extents. However, if a DBA configures disks of different sizes, each disk might get a
different number of extents — based upon the size of the disk. An allocation unit typically has a size of 1MB.
ASM stripes help make data more reliably available and more secure than in other Oracle storage implementations.
Types of Striping:
Coarse striping: used for datafiles and archive logs (1MB stripes)
Fine striping: used for online redo logs, controlfile, flashback files (128KB stripes)
ASM Mirroring
Disk mirroring provides data redundancy; this means that if a disk were to fail, Oracle will use the other mirrored disk
and continue as normal. Oracle mirrors at the extent level, so you have a primary extent and a mirrored extent.
Process – Description
RBAL – Opens all device files as part of discovery and coordinates the rebalance activity
ARBn – One or more slave processes that do the rebalance activity
GMON – Responsible for managing disk-level activities such as drop or offline and advancing the ASM disk group compatibility
MARK – Marks ASM allocation units as stale when needed
Onnn – One or more ASM slave processes forming a pool of connections to the ASM instance for exchanging messages
PZ9n – One or more parallel slave processes used in fetching data on clustered ASM installations from GV$ views
17. What are the file types that ASM support and keep in disk groups?
Control files
Datafiles and temporary files
Online redo logs and archived redo logs
RMAN backup sets and server parameter files (SPFILE)
Flashback logs
Data Pump dump sets
• ASMCA
• Single Client Access Name (SCAN) - eliminates the need to change tns entry when nodes are added to or
removed from the Cluster. RAC instances register to SCAN listeners as remote listeners. SCAN is a fully qualified
name. Oracle recommends assigning 3 addresses to SCAN, which create three SCAN listeners.
• Clusterware components: crfmond, crflogd, GIPCD.
• AWR is consolidated for the database.
• 11g Release 2 Real Application Cluster (RAC) has server pooling technologies so it’s easier to provision and
manage database grids. This update is geared toward dynamically adjusting servers as corporations manage
the ebb and flow between data requirements for datawarehousing and applications.
• By default, LOAD_BALANCE is ON.
• GSD (Global Services Daemon), gsdctl introduced.
• GPnP profile.
• Cluster information in an XML profile.
• Oracle RAC OneNode is a new option that makes it easier to consolidate databases that aren’t mission
critical, but need redundancy.
• raconeinit - to convert database to RacOneNode.
• raconefix - to fix RacOneNode database in case of failure.
• racone2rac - to convert RacOneNode back to RAC.
• Oracle Restart - the feature of Oracle Grid Infrastructure's High Availability Services (HAS) to manage
associated listeners, ASM instances and Oracle instances.
• Oracle Omotion - Oracle 11g release2 RAC introduces new feature called Oracle Omotion, an online migration
utility. This Omotion utility will relocate the instance from one node to another, whenever instance failure
happens.
• Omotion utility uses Database Area Network (DAN) to move Oracle instances. Database Area Network (DAN)
technology helps seamless database relocation without losing transactions.
• Cluster Time Synchronization Service (CTSS) is a new feature in Oracle 11g R2 RAC, which is used to
synchronize time across the nodes of the cluster. CTSS can act as a replacement for the NTP protocol.
More about the above functions will be clear from the following discussion on contention. Please note that GCS is
available in the form of the background process called LMS.
Past Image: The concept of a Past Image is very specific to a RAC setup. Consider an instance holding an exclusive lock on a
data block for updates. If some other instance in the RAC needs the block, the holding instance can send the block to
the requesting instance while keeping a copy of it, called the Past Image (PI).
• There can be more than one PI of the block at a time across the instances. In case there is some instance
crash/failure in the RAC and a recovery is required, Oracle is able to re-construct the block using these Past
Images from all the instances.
When a block is written to the disk, all Past Images of that block across the instances are discarded. GCS informs all
the instances to do this. At this time, the redo logs containing the redo for that data block can also be overwritten
because they are no longer needed for recovery.
Consistent Read
A consistent read is needed when a particular block is being accessed/modified by transaction T1 and at the same
time another transaction T2 tries to access/read the block. If T1 has not been committed, T2 needs a consistent read
(consistent to the non-modified state of the database) copy of the block to move ahead. A CR copy is created using
the UNDO data for that block. A sample series of steps for a CR in a normal setup would be:
As mentioned above, CACHE FUSION helps resolve all the possible contentions that could happen between instances
in a RAC setup. There are 3 possible contentions in a RAC setup which we are going to discuss in detail here with a
mention of cache fusion where ever applicable.
Our discussion thus far should help understand the following discussion on contentions and their resolutions better.
1. Read/Read contention: Read-Read contention might not be a problem at all because the table/row will be in
a shared lock mode for both transactions and none of them is trying an exclusive lock anyways.
2. Read/Write contention: This one is interesting.
Here is more about this contention and how the concept of cache fusion helps resolve this contention
a. A data block is in the buffer cache of instance A and is being updated. An exclusive lock has been
acquired on it.
b. After some time instance B is interested in reading that same data block and hence sends a
request to GCS. So far so good – Read/Write contention has been induced
c. GCS checks the availability of that data block and finds that instance A has acquired an exclusive lock.
Hence, GCS asks instance A to release the block for instance B.
d. Now there are two options – either instance A releases the lock on that block (if it no longer needs it)
and lets instance B read the block from the disk OR instance A creates a CR image of the block in its
own buffer cache and ships it to the requesting instance via interconnect
e. The holding instance notifies the GCS accordingly (if the lock has been released or the CR image has
been shipped)
f. Creation of CR image, shipping it to the requesting instance and involvement of GCS is where CACHE
FUSION comes into play
3. Write/Write contention: This is the case where both instance A and instance B are trying to acquire an exclusive lock on the data block. A data
block is in the buffer cache of instance A and is being updated. An exclusive lock has been acquired on it.
PI image VS CR image
Let us just halt and understand some basic stuff - Wondering why CR image used in Read-Write contention and PI
image used in Write-Write contention? What is the difference?
1. A CR image was shipped to avoid the Read-Write type of contention because the requesting instance doesn't want
to perform a write operation and hence won't need an exclusive lock on the block. Thus, for a read operation,
the CR image of the block suffices. Whereas for Write-Write contention, the requesting instance also
needs to acquire an exclusive lock on the data block. So to acquire the lock for write operations, it needs
the actual block and not the CR image. The holding instance hence sends the actual block but is liable to
keep the PI of the block until the block has been written to the disk. So if there is any instance failure or
crash, Oracle is able to rebuild the block using the PIs from across the RAC instances (there could be more than
one PI of a data block before the block has actually been written to the disk). Once the block is written to the
disk, it won't need recovery in case of a crash and hence the associated PIs can be discarded.
2. Another difference, of course, is that the CR image is shipped to the requesting instance, whereas the PI
has to be kept by the holding instance after shipping the actual block.
UNDO?
This discussion is not about UNDO management in RAC, but here is a brief note about UNDO in a RAC scenario. UNDO is
generated separately on each instance, just as in a standalone database. Each instance has its own UNDO
tablespace. The UNDO data of all instances is used by the holding instance to build a CR image in case of contention.
What is Cache Fusion and how does this affect applications:
Cache Fusion is a new parallel database architecture for exploiting clustered computers to achieve scalability of all
types of applications. Cache Fusion is a shared cache architecture that uses high speed, low latency interconnects
available today on clustered systems to maintain database cache coherency. Database blocks are shipped across the
interconnect to the node where access to the data is needed. This is accomplished transparently to the application
and users of the system. As Cache Fusion uses at most a 3-point protocol, it easily scales to clusters
with a large number of nodes.
• If a dedicated session is requested, then the listener will select the instance first on the basis of the node that
is least loaded; if all nodes are equally loaded, it will then select the instance that has the least load.
• For a shared server connection, however, the listener goes one step further. It will also check to see if all of
the available instances are equally loaded; if this is true, the listener will place the connection on the least-
loaded dispatcher on the selected instance.
Example:
http://www.databasejournal.com/features/oracle/article.php/3666396/Oracle-10gR2-RAC-Load-Balancing-
Features.htm
http://www.orafaq.com/node/1840
/*
|| Oracle 10gR2 RAC LBA Features Listing
||
|| Demonstrates Oracle 10gR2 Load Balancing Advisory (LBA) features for
|| Real Application Clusters, including:
|| - How to set up client-side load balancing and failover
|| - How to set up server-side load balancing
|| - How to set up Load Balancing Advisory features
|| - How to monitor the efficiency and outcomes of the Load Balancing Advisory
||
|| Author: Jim Czuprynski
||
|| Usage Notes:
|| This script is provided to demonstrate various features of Oracle 10gR2
|| Load Balancing Advisor, and it should be carefully proofread before
|| executing it against any existing Oracle database to insure that no
|| potential damage can occur.
*/
/*
|| Listing 1: Setting up client-side connection load balancing
*/
#####
# Add these entries to each client's TNSNAMES.ORA configuration file
# to enable Client-Side Load Balancing ONLY (i.e., no failover)
#####
CSLB_ONLY =
(DESCRIPTION =
(LOAD_BALANCE = ON)
(ADDRESS = (PROTOCOL = TCP)(HOST = raclinux1-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = raclinux2-vip)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = racdb)
)
)
/*
|| Listing 2: Setting up client-side connection load balancing plus failover
*/
#####
# Add these entries to each client's TNSNAMES.ORA configuration file
# to enable Client-Side Load Balancing PLUS Failover
#####
CSLB_FAILOVER =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = raclinux1-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = raclinux2-vip)(PORT = 1521))
(LOAD_BALANCE = ON) # Activates load balancing
(FAILOVER = ON) # Activates failover
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = racdb)
)
)
/*
|| Listing 3: Setting up server-side connection load balancing features.
|| Note that server-side load balancing requires:
|| 1.) New entries in every client's TNSNAMES.ORA file for the new alias
|| 2.) New entries in the TNSNAMES.ORA file of every node in the cluster
|| to include the REMOTE_LISTENER setting
|| 3.) The addition of *.REMOTE_LISTENER parameter to all nodes in cluster
|| to force each node's Listener to register with each other
*/
#####
# Add these entries to each server's TNSNAMES.ORA file to enable Server-Side
# Load Balancing:
#####
LISTENERS_RACDB =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = raclinux1-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = raclinux2-vip)(PORT = 1521))
)
SSLB =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = raclinux1-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = raclinux2-vip)(PORT = 1521))
(LOAD_BALANCE = ON)
(FAILOVER = ON)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = racdb)
)
)
-----
-- Run this command to add the REMOTE_LISTENER initialization parameter to
-- the common SPFILE for all nodes in the RAC clustered database:
-----
ALTER SYSTEM SET REMOTE_LISTENER = LISTENERS_RACDB SID='*' SCOPE=BOTH;
/*
|| Listing 4: Setting up Load Balancing Advisory features in an Oracle 10g
|| Real Applications Cluster (RAC) clustered database environment
*/
#####
# Create, register, and start three new services with
# Cluster-Ready Services
#####
srvctl add service -d racdb -s ADHOC -r racdb1,racdb2
srvctl start service -d racdb -s ADHOC
srvctl add service -d racdb -s DSS -r racdb1,racdb2
srvctl start service -d racdb -s DSS
srvctl add service -d racdb -s OLTP -r racdb1,racdb2
srvctl start service -d racdb -s OLTP
/*
|| Listing 5: Using DBMS_SERVICE.MODIFY_SERVICE to configure RAC services
|| to use Load Balancing Advisory features in an Oracle 10g
|| Real Applications Cluster (RAC) clustered database environment
*/
-----
-- Configuring existing RAC services to use the Load Balancing Advisory:
-- 1.) ADHOC: No Load Balancing Advisory
-- 2.) DSS: Load Balancing Advisory with Service Time goal
-- 3.) OLTP: Load Balancing Advisory with Throughput goal
-- Note that Advanced Queueing (AQ) tracking is also activated.
-----
BEGIN
DBMS_SERVICE.MODIFY_SERVICE(
service_name => 'ADHOC'
,aq_ha_notifications => TRUE
,goal => DBMS_SERVICE.GOAL_NONE
,clb_goal => DBMS_SERVICE.CLB_GOAL_LONG
);
DBMS_SERVICE.MODIFY_SERVICE(
service_name => 'DSS'
,aq_ha_notifications => TRUE
,goal => DBMS_SERVICE.GOAL_SERVICE_TIME
,clb_goal => DBMS_SERVICE.CLB_GOAL_SHORT
);
DBMS_SERVICE.MODIFY_SERVICE(
service_name => 'OLTP'
,aq_ha_notifications => TRUE
,goal => DBMS_SERVICE.GOAL_THROUGHPUT
,clb_goal => DBMS_SERVICE.CLB_GOAL_SHORT
);
END;
/
-----
-- Confirm these services' configuration by querying DBA_SERVICES:
-----
SET PAGESIZE 50
SET LINESIZE 110
TTITLE 'Services Configured to Use Load Balancing Advisory (LBA) Features|
(From DBA_SERVICES)'
COL name FORMAT A16 HEADING 'Service Name' WRAP
COL created_on FORMAT A20 HEADING 'Created On' WRAP
COL goal FORMAT A12 HEADING 'Service|Workload|Management|Goal'
COL clb_goal FORMAT A12 HEADING 'Connection|Load|Balancing|Goal'
COL aq_ha_notifications FORMAT A16 HEADING 'Advanced|Queueing|High-|Availability|Notification'
SELECT
name
,TO_CHAR(creation_date, 'mm-dd-yyyy hh24:mi:ss') created_on
,goal
,clb_goal
,aq_ha_notifications
FROM dba_services
WHERE goal IS NOT NULL
AND name NOT LIKE 'SYS%'
ORDER BY name
;
TTITLE OFF
/*
|| Listing 6: Using the GV$SERVICEMETRIC global view to track how RAC
|| services are responding to the Load Balancing Advisor
*/
TTITLE 'Current Service-Level Metrics|(From GV$SERVICEMETRIC)'
BREAK ON service_name NODUPLICATES
COL service_name FORMAT A08 HEADING 'Service|Name' WRAP
COL inst_id FORMAT 9999 HEADING 'Inst|ID'
COL beg_hist FORMAT A10 HEADING 'Start Time' WRAP
COL end_hist FORMAT A10 HEADING 'End Time' WRAP
COL intsize_csec FORMAT 9999 HEADING 'Intvl|Size|(cs)'
COL goodness FORMAT 999999 HEADING 'Good|ness'
COL delta FORMAT 999999 HEADING 'Pred-|icted|Good-|ness|Incr'
COL cpupercall FORMAT 99999999 HEADING 'CPU|Time|Per|Call|(mus)'
COL dbtimepercall FORMAT 99999999 HEADING 'Elpsd|Time|Per|Call|(mus)'
COL callspersec FORMAT 99999999 HEADING '# Of|User|Calls|Per|Second'
COL dbtimepersec FORMAT 99999999 HEADING 'DBTime|Per|Second'
COL flags FORMAT 999999 HEADING 'Flags'
SELECT
service_name
,TO_CHAR(begin_time,'hh24:mi:ss') beg_hist
,TO_CHAR(end_time,'hh24:mi:ss') end_hist
,inst_id
,goodness
,delta
,flags
,cpupercall
,dbtimepercall
,callspersec
,dbtimepersec
FROM gv$servicemetric
WHERE service_name IN ('OLTP','DSS','ADHOC')
ORDER BY service_name, begin_time DESC, inst_id
;
CLEAR BREAKS
TTITLE OFF
35. What are the uses of services? How to find out the services in cluster?
Applications should use the services to connect to the Oracle database. Services define rules and characteristics
(unique name, workload balancing, failover options, and high availability) to control how users and applications
connect to database instances.
36. How to find out the nodes in cluster (or) how to find out the master node?
# olsnodes -- whichever node is displayed first is the master node of the cluster.
Select MASTER_NODE from V$GES_RESOURCE;
To find out which is the master node, you can also check the ocssd.log file and search for "master node number".
37. How to know the public IPs, private IPs, VIPs in RAC?
# olsnodes -n -p -i
node1-pub 1 node1-prv node1-vip
node2-pub 2 node2-prv node2-vip
38. What utility is used to start DB/instance?
srvctl start database -d database_name
srvctl start instance -d database_name -i instance_name
39. How can you shutdown single instance?
Change cluster_database=false
srvctl stop instance -d database_name -i instance_name
40. What is HAS (High Availability Service) and the commands?
HAS includes ASM & database instance and listeners.
crsctl check has
crsctl config has
crsctl disable has
crsctl enable has
crsctl query has releaseversion
crsctl query has softwareversion
crsctl start has
crsctl stop has [-f]
41. How many nodes are supported in a RAC Database?
With 10g Release 2, Oracle supports 100 nodes in a cluster using Oracle Clusterware, and 100 instances in a RAC
database. Currently DBCA has a bug where it will not go beyond 63 instances. There is also a documentation bug for
the max-instances parameter. With 10g Release 1 the maximum is 63.
42. What is fencing?
I/O fencing prevents updates by failed instances by detecting failure and preventing split brain in the cluster. When a
cluster node fails, the failed node needs to be fenced off from all the shared disk devices or disk groups. This
methodology is called I/O Fencing, sometimes called Disk Fencing or failure fencing.
Nodes in a RAC cluster can fall victim to conditions called Split Brain and Amnesia. These conditions usually result
from a temporary network disconnect. Because of the disconnect, the "sick" node thinks it is the only node in the cluster,
and forms its own "sub cluster" consisting only of itself.
In this case, the cluster needs to correct the issue. Traditional clusters use a process called STONITH (Shoot the Other
Node in the Head) in order to correct the issue; this simply means the healthy nodes kill the sick node. Oracle's
Clusterware does not do this; instead, it simply gives the message "Please Reboot" to the sick node. The node
bounces itself and rejoins the cluster.
There are other methods of fencing that are utilized by different hardware/software vendors. When using Veritas
Storage Foundation for RAC (VxSF RAC), you can implement I/O fencing instead of node fencing. This means that
instead of asking a server to reboot, you simply close it off from shared storage.
43. Why Clusterware installed in root (why not oracle)?
Need document
44. What are the wait events in RAC? Differences in Oracle RAC wait events?
gc current block 2-way
gc current block 3-way
gc current block busy
gc current buffer busy
gc current block congested
gc current block 2-way:
An instance requests authorization for a block to be accessed in current mode to modify it; the instance
mastering the resource receives the request. The master has the current version of the block and sends the current
copy of the block to the requestor via Cache Fusion, keeping a Past Image (PI).
If you get this then do the following
• Analyze the contention, segments in the "current blocks received" section of AWR
• Use application partitioning scheme
• Make sure the system has enough CPU power
• Make sure the interconnect is as fast as possible
• Ensure that socket send and receive buffers are configured correctly
Monitoring an Oracle RAC database often means monitoring this situation and the amount of requests going back
and forth over the RAC interconnect. The most common wait events related to this are gc cr request and gc buffer
busy (note that in Oracle RAC 9i and earlier these wait events were known as "global cache cr request " and "global
cache buffer busy" wait events).
Running ORADEBUG IPC, for example, will dump a trace file to the location specified by the user_dump_dest Oracle parameter containing information
about the network and protocols being used for the RAC interconnect.
Inefficient Queries - poorly tuned queries will increase the amount of data blocks requested by an Oracle session.
The more blocks requested typically means the more often a block will need to be read from a remote instance via
the interconnect.
gc buffer busy acquire and gc buffer busy release
The gc buffer busy acquire and gc buffer busy release wait events specify the time the remote instance locally spends
accessing the requested data block. In Oracle 11g you will see gc buffer busy acquire wait event when the global
cache open request originated from the local instance and gc buffer busy release when the open request originated
from a remote instance. In Oracle 10g these two wait events were represented in a single gc buffer busy wait, and in
Oracle 9i and prior the "gc" was spelled out as "global cache" in the global cache buffer busy wait event. These wait
events are all very similar to the buffer busy wait events in a single-instance database and are often the result of:
Hot Blocks - multiple sessions may be requesting a block that is either not in buffer cache or is in an incompatible
mode. Deleting some of the hot rows and re-inserting them back into the table may alleviate the problem. Most of
the time the rows will be placed into a different block and reduce contention on the block. The DBA may also need to
adjust the pctfree and/or pctused parameters for the table to ensure the rows are placed into a different block.
Inefficient Queries - as with the gc cr request wait event, the more blocks requested from the buffer cache the more
likelihood of a session having to wait for other sessions. Tuning queries to access fewer blocks will often result in less
contention for the same block.
Buffer busy global cache:
This wait event falls under the umbrella of ‘global buffer busy events’. This wait event occurs when a user is waiting
for a block that is currently held by another session on the same instance and the blocking session is itself waiting on
a global cache transfer.
Buffer busy global CR:
This wait event falls under the umbrella of ‘global buffer busy events’. It occurs when multiple CR requests for the same block are submitted from the same instance before the first request completes; users may queue up behind it.
Global cache busy:
This wait event falls under the umbrella of ‘global buffer busy events’. This wait event means that a user on the local
instance attempts to acquire a block globally and a pending acquisition or release is already in progress.
Global cache cr request:
this wait event falls under the umbrella of ‘global cache events’. This wait event determines that an instance has
requested a consistent read version of a block from another instance and is waiting for the block to arrive.
Global cache null to s and global cache null to x:
This wait event falls under the umbrella of ‘global cache events’. These events are waited for when a block was used
by an instance, transferred to another instance, and then requested back again.
Global cache open s and global cache open x:
This wait event falls under the umbrella of ‘global cache events’. These events are seen when an instance has to read a block from disk into cache because the block does not exist in any instance's cache. High values on these waits may indicate a small buffer cache, so you may see a low buffer cache hit ratio at the same time as seeing these wait events.
Global cache s to x:
This wait event falls under the umbrella of ‘global cache events’. This event occurs when a session converts a block from shared (S) to exclusive (X) mode.
Pros:
1. The file system was designed with Oracle clustering in mind and it is free.
2. Eliminates the need to use RAW devices or other expensive clustered file systems.
3. With the advent of OCFS2, binaries, scripts, and configuration files (shared Oracle home) can be stored in the
file system. Making the management of RAC easier.
Cons:
1. With OCFS version 1, regular files cannot be stored in the file system; however, this issue is eliminated with OCFS2.
Explanation-2:
Oracle Cluster File System (OCFS) presents a consistent file system image across the servers in a cluster. OCFS allows
administrators to take advantage of a files system for the Oracle database files (data files, control files, and archive
logs) and configuration files. This eases administration of the Oracle Real Application Clusters.
61. What is Oracle Cluster Ware?
a. It is a framework which contains application modeling logic.
Invokes application-aware agents.
Performs resource recovery: when a node goes down, the Clusterware framework recovers the application by relocating the resources to a live node.
This can be done for non-Oracle applications as well, for example xclock.
b. Clusterware also hosts the OCR cache.
The Oracle Clusterware requires two clusterware components:
a voting disk to record node membership information and the
Oracle Cluster Registry/Repository (OCR) to record cluster configuration information.
The voting disk and the OCR must reside on shared storage.
62. What is a resource?
A resource is an application managed by Oracle Clusterware.
'Profile attributes' for a resource are stored in the Oracle Cluster Registry.
63. How to register a resource?
a. Use crs_profile to create a .cap file with configuration details.
b. Use crs_register to read the .cap file and update the OCR.
c. Resources can have dependencies; they will start in order and fail over as a single unit. A sketch of these commands follows below.
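A minimal sketch of registering a resource with the legacy 10g CRS utilities (the resource name myapp and the action script path are hypothetical; check crs_profile -h for the exact options on your release):
$ crs_profile -create myapp -t application -a /u01/crs/public/myapp.scr -r ora.node1.vip
$ crs_register myapp
$ crs_start myapp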
64. What do crs_start / crs_stop do?
They read config info from the OCR and call the agent with the 'start' or 'stop' command. The agents (which can be user-written) actually start or stop the resource.
crs_start => read OCR config info => calls 'Control Agent' with command 'start' => control agent starts the resource.
crs_stop => read OCR config info => calls 'Control Agent' with command 'stop' => control agent stops the app.
65. What is the difference between Oracle Cluster ware and CRS?
Oracle Clusterware was formerly known as Cluster Ready Services (CRS). It is an integrated cluster management solution that enables you to link multiple servers so that they function as a single system or cluster. Oracle Clusterware simplifies the infrastructure required for RAC because it is integrated with the Oracle Database. In addition, Oracle Clusterware is also available for use with single-instance databases and applications that you deploy on the cluster.
• Public interface names must be the same for all nodes. If the public interface on one node uses the network
adapter eth0, then you must configure eth0 as the public interface on all nodes. Network interface names are
case-sensitive.
• You should configure the same private interface names for all nodes as well. If eth1 is the private interface
name for the first node, then eth1 should be the private interface name for your second node. Network
interface names are case-sensitive.
• The network adapter for the public interface must support TCP/IP.
• The network adapter for the private interface must support the user datagram protocol (UDP) using high-
speed network adapters and a network switch that supports TCP/IP (Gigabit Ethernet or better).
The view v$services contains information about services that have been started on that instance; here is a list from a fresh RAC installation:
• Goal - allows you to define a service goal using service time, throughput or none
• Connect Time Load Balancing Goal - listeners and mid-tier servers contain current information about service
performance
• Distributed Transaction Processing - used for distributed transactions
• AQ_HA_Notifications - information about nodes being up or down will be sent to mid-tier servers via the
advance queuing mechanism
• Preferred and Available Instances - the preferred instances for a service, available ones are the backup
instances
Services can be managed using:
• DBCA
• EM (Enterprise Manager)
• DBMS_SERVICE
• Server Control (srvctl)
Two services are created when the database is first installed (SYS$BACKGROUND, used by the background processes, and SYS$USERS, the default service for user sessions); these services are running all the time and cannot be disabled. A sketch of managing user-defined services with srvctl follows below.
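A minimal sketch of creating and starting a service with srvctl (the database name orcl, service name oltp, and instance names are hypothetical):
$ srvctl add service -d orcl -s oltp -r orcl1 -a orcl2
$ srvctl start service -d orcl -s oltp
$ srvctl status service -d orcl -s oltp
Here -r lists the preferred instances and -a the available (backup) instances for the service.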
131. What is split Brain Syndrome? How Oracle Clusterware handles it?
132. What is STONIH algorithm?
133. What is cache fusion? Which Database background process facilitate it?
134. What is GRD? Where does it reside?
Explanation-1: RAC uses GRD (Global Resources Directory) to record information about how resources are used
within a clustered database.
The Global Resource Directory is a set of common memory structures spread across the SGAs of all instances; in other words, it is the combination of the GCS and GES memory structures, kept synchronized at all times through cluster interconnect messages. All the resources in the cluster group form a central repository called the GRD, which is integrated and distributed across the nodes' memory structures. Each instance masters some of the resources (buffers), based on their weightage and accessibility, and together they form the GRD. It is basically a combination of GES and GCS.
Explanation-2: The Global Resource Directory (GRD) contains information about the current status of all shared
resources. It is maintained by the GCS and GES to record information about resources and enqueues held on these
resources. The GRD resides in memory and is used by the GCS and GES to manage the global resource activity. It is
distributed throughout the cluster to all nodes. Each node participates in managing global resources and manages a
portion of the GRD.
When an instance reads data blocks for the first time, its existence is local; that is, no other instance in the cluster has
a copy of that block. The block in this state is called a current state (XI). The behavior of this block in memory is
similar to any single-instance configuration, with the exception that GCS keeps track of the block even in a local
mode. Multiple transactions within the instance have access to these data blocks. Once another instance has
requested the same block, then the GCS process will update the GRD, changing the role of the data block from local
to global.
Explanation-3: The RAC environment includes many resources such as multiple versions of data block buffers in
buffer caches in different modes, Oracle uses locking and queuing mechanisms to coordinate lock resources, data and
interinstance data requests. Resources such as data blocks and locks must be synchronized between nodes as nodes
within a cluster acquire and release ownership of them. The synchronization provided by the Global Resource
Directory (GRD) maintains a cluster wide concurrency of the resources and in turn ensures the integrity of the shared
data. Synchronization is also required for buffer cache management as it is divided into multiple caches, and each
instance is responsible for managing its own local version of the buffer cache. Copies of data are exchanged between nodes; this is sometimes referred to as the global cache, but in reality each node's buffer cache is separate and copies of blocks are exchanged through a traditional distributed locking mechanism.
Global Cache Services (GCS) maintain the cache coherency across buffer cache resources and Global Enqueue
Services (GES) controls the resource management across the clusters non-buffer cache resources.
Cache Coherency: Cache coherency identifies the most up-to-date copy of a resource, also called the master copy. It uses a mechanism by which multiple copies of an object are kept consistent between Oracle instances. Parallel Cache Management (PCM) ensures that the master copy of a data block is stored in one buffer cache and consistent copies of the data block are stored in other buffer caches; the LCKx process is responsible for this task.
The lock and resource structures for instance locks reside in the GRD (also called the DLM), a dedicated area within the shared pool. Details about the data block resources and cached versions are maintained by GCS. Additional details, such as the location of the most current version, the state of the buffer, the role of the data block (local or global) and ownership, are maintained by GES. The global cache together with GES forms the GRD. Each instance maintains a part of the GRD in its SGA.
Explanation-1:
RAC relies on the cluster services for failure detection. The cluster services are a distributed kernel component that
monitors whether cluster members can communicate with each other and through this process enforces the rules of cluster membership. This is taken care of by the Cluster Synchronization Service (CSS), via the CSSD process. The functions performed by CSS are listed below.
1. Forms a cluster, add/remove members to/from a cluster.
2. Tracks which members in a cluster are active.
3. Maintains a cluster membership list, which is consistent on all member nodes.
4. Provides timely notification of membership changes.
When a node polls another node (target) in the cluster, and the target has not responded successfully after repeated
attempts, a timeout occurs after approx 60 secs.
Among the responding nodes, the node that was started first and that is alive declares that the other node is not
responding and has failed. This node becomes the new MASTER and starts evicting the non-responding node from
the cluster. Once eviction is complete, cluster reformation begins. The reorganization process regroups accessible
nodes and removes the failed ones.
LMON is a background process that monitors the entire cluster to manage the global resource. By constantly probing
the other instances, it checks and manages instance death and associated recovery for Global Cache Service (GCS).
When a node joins or leaves the cluster, it handles reconfiguration of locks and associated resources. LMON handles
the part of recovery associated with global resources. Failover of a service is also triggered by the EVMD process by
firing a down event.
Once the reconfiguration of the nodes is complete, Oracle, in coordination with the EVMD and CRSD, performs several tasks.
REMOTE_LISTENERS =
(ADDRESS_LIST =
(ADDRESS =
(PROTOCOL = TCP)
(HOST = node1-vip)
(PORT = 1521)
)
(ADDRESS =
(PROTOCOL = TCP)
(HOST = node2-vip)
(PORT = 1521)
)
)
LOCAL_LISTENER=LOCAL_LISTENER_NODE2
REMOTE_LISTENER=REMOTE_LISTENERS
On a 2-node cluster your REMOTE_LISTENER can point to a single listener, but it is easier to keep REMOTE_LISTENER identical on all nodes.
Note: The purpose of REMOTE_LISTENER is to connect all instances with all listeners so the instances can propagate their load balance advisories to all listeners. If you connect to a listener, this listener uses the advisories to decide who should service your connection. If the listener decides its local instance(s) are least loaded and should service your connection, it passes your connection to the local instance. If the node you connected to is overloaded, the listener can use TNS redirect to redirect your connection to a less loaded instance.
Explanation with Example:
Suppose we have a 2-node cluster, host1 and host2, with VIP addresses host1-vip and host2-vip respectively, and one RAC database (orcl) running on this cluster: instance 1 (orcl1) on host1, and instance 2 (orcl2) on host2.
We have listener_host1 running on host1, and listener_host2 running on host2.
listener_host1 is considered the local listener for the orcl1 instance, while listener_host2 is considered a remote listener for that same orcl1 instance (because the listener is not running on the same machine as the database instance). Similarly, listener_host2 is considered the local listener for the orcl2 instance, and a remote listener for orcl1. The parameters can be set per instance as sketched below.
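A minimal sketch of setting these parameters per instance (assuming tnsnames aliases LOCAL_LISTENER_NODE1/LOCAL_LISTENER_NODE2 and the REMOTE_LISTENERS alias shown above are resolvable on every node; LOCAL_LISTENER_NODE1 is a hypothetical counterpart to the NODE2 entry):
SQL> alter system set local_listener='LOCAL_LISTENER_NODE1' scope=both sid='orcl1';
SQL> alter system set local_listener='LOCAL_LISTENER_NODE2' scope=both sid='orcl2';
SQL> alter system set remote_listener='REMOTE_LISTENERS' scope=both sid='*';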
Preferred Sites:
http://appsdbaera.blogspot.in/2013/03/rac-interview-questions.html#!/2013/03/rac-interview-questions.html
http://lazyappsdba.blogspot.in/2010/08/rac-interview-q_24.html
http://orajourn.blogspot.in/2007/06/rac-class-day-4.html
http://e-university.wisdomjobs.com/oracle-dba-interview-questions/oracle-dba-interview-questions/question-1.html
A logical standby also has to wait for the EOR redo from the primary to be applied and SQL apply to shut down before
the switchover command can complete, once the EOR has been processed, the GUARD can be turned off and
production processing can begin.
Failover
A failover is an unplanned event when something has happened to hardware, networking, etc. This is when you invoke your DR procedures (hopefully documented), and you will want full confidence in getting the new primary up and running as quickly as possible. Unlike the switchover, which begins on the primary, no primary is involved, which means you will not be able to get the redo from the primary. Depending on what protection mode you have chosen there may be data loss (unless you have Maximum Protection mode enabled). You start by telling Data Guard to apply the remaining redo that it can. Once the redo has been applied you run the same command that you do with a physical standby to switch the standby over to a primary and complete the switchover (new primary):
alter database commit to switchover to primary;
One difference is that when the switchover has completed, the protection mode will be Maximum Performance regardless of what it was before. To get back to your original protection mode you must get a standby database back up and running, then manually execute the steps to put it into the protection mode you want.
Since the redo heartbeat is sent every 6 seconds or so, the general rule is that you may lose 6 seconds of redo during
a failover but this is a best guess. At failover the merging thread will look at the last log of the disconnected thread
and use the last heartbeat in it to define the consistent point, throwing away all the redo that the surviving nodes had
been sending all along.
22. What are the background processes involved in Data Guard?
On the standby: MRP (Managed Recovery Process, for a physical standby), LSP (Logical Standby Process, for SQL Apply), and RFS, which receives redo from the primary. On the primary: the LNS and ARCn processes ship redo, and FAL processes resolve archive gaps.
23. What happens if standby out of sync with primary? How will you resolve it?
24. How will you sync if archive is got deleted in primary?
25. Can we change protection mode online?
26. How will add a datafile in standby environment?
27. Can we add/delete/create/drop the datafile at standby database?
You cannot rename the datafile on the standby site when the STANDBY_FILE_MANAGEMENT initialization parameter
is set to AUTO. When you set the STANDBY_FILE_MANAGEMENT initialization parameter to AUTO, use of the
following SQL statements is not allowed:
ALTER DATABASE RENAME
ALTER DATABASE ADD/DROP LOGFILE
ALTER DATABASE ADD/DROP STANDBY LOGFILE MEMBER
ALTER DATABASE CREATE DATAFILE AS
If you attempt to use any of these statements on the standby database, an error is returned. For example:
SQL> ALTER DATABASE RENAME FILE '/disk1/oracle/oradata/payroll/t_db2.log' to 'dummy';
alter database rename file '/disk1/oracle/oradata/payroll/t_db2.log' to 'dummy'
*
ERROR at line 1:
ORA-01511: error in renaming log/datafiles
ORA-01270: RENAME operation is not allowed if STANDBY_FILE_MANAGEMENT is auto
28. If Standby database does not receive the redo data from the primary database, how will you diagnose?
If the standby site is not receiving redo data, query the V$ARCHIVE_DEST view and check for error messages. For
example, enter the following query:
SQL> SELECT DEST_ID "ID",
2> STATUS "DB_status",
3> DESTINATION "Archive_dest",
4> ERROR "Error"
5> FROM V$ARCHIVE_DEST WHERE DEST_ID <=5;
ID DB_status Archive_dest Error
-- --------- ------------------------------ ------------------------------------
1 VALID /vobs/oracle/work/arc_dest/arc
2 ERROR standby1 ORA-16012: Archivelog standby database identifier mismatch
3 INACTIVE
4 INACTIVE
5 INACTIVE
5 rows selected.
If the output of the query does not help you, check the following list of possible issues. If any of the following
conditions exist, redo transport services will fail to transmit redo data to the standby database:
• The service name for the standby instance is not configured correctly in the tnsnames.ora file for the primary database.
• The Oracle Net service name specified by the LOG_ARCHIVE_DEST_n parameter for the primary database is
incorrect.
• The LOG_ARCHIVE_DEST_STATE_n parameter for the standby database is not set to the value ENABLE.
• The listener.ora file has not been configured correctly for the standby database.
• The listener is not started at the standby site.
• The standby instance is not started.
• You have added a standby archiving destination to the primary SPFILE or text initialization parameter file, but
have not yet enabled the change.
• The databases in the Data Guard configuration are not all using a password file, or the SYS password
contained in the password file is not identical on all systems.
• You used an invalid backup as the basis for the standby database (for example, you used a backup from the
wrong database, or did not create the standby control file using the correct method).
29. You can’t mount the standby database what is the reason?
You cannot mount the standby database if the standby control file was not created with the ALTER DATABASE
CREATE [LOGICAL] STANDBY CONTROLFILE ... statement or RMAN command. You cannot use the following types of
control file backups:
• An operating system-created backup
• A backup created using an ALTER DATABASE statement without the PHYSICAL STANDBY or LOGICAL STANDBY
option
30. How do you do network tuning for redo transmission in data guard?
For optimal performance, set the Oracle Net SDU parameter to 32 kilobytes in each Oracle Net connect descriptor
used by redo transport services.
The following example shows a database initialization parameter file segment that defines a remote destination
netserv:
LOG_ARCHIVE_DEST_3='SERVICE=netserv'
The following example shows the definition of that service name in the tnsnames.ora file:
netserv=(DESCRIPTION=(SDU=32768)(ADDRESS=(PROTOCOL=tcp)(HOST=host) (PORT=1521))
(CONNECT_DATA=(SERVICE_NAME=srvc)))
The following example shows the definition in the listener.ora file:
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=tcp)
(HOST=host)(PORT=1521))))
SID_LIST_LISTENER=(SID_LIST=(SID_DESC=(SDU=32768)(SID_NAME=sid)
(GLOBALDBNAME=srvc)(ORACLE_HOME=/oracle)))
If you archive to a remote site using a high-latency or high-bandwidth network link, you can improve performance by
using the SQLNET.SEND_BUF_SIZE and SQLNET.RECV_BUF_SIZE Oracle Net profile parameters to increase the size of
the network send and receive I/O buffers.
31. How to troubleshoot the slow disk performance on standby database?
If asynchronous I/O on the file system itself is showing performance problems, try mounting the file system using the
Direct I/O option or setting the FILESYSTEMIO_OPTIONS=SETALL initialization parameter. The maximum I/O size
setting is 1 MB.
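A sketch of enabling this at the instance level (FILESYSTEMIO_OPTIONS is a static parameter, so a restart is required for it to take effect):
SQL> alter system set filesystemio_options = SETALL scope=spfile;
SQL> shutdown immediate
SQL> startup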
Full Index Scan: The optimizer chooses this if the statement has no WHERE clause, all the selected columns are included in the index, and at least one of the index columns is NOT NULL (performs single-block I/O).
Fast Full Index Scan: An alternative to a full table scan; the difference is that it performs multi-block reads and cannot be used against bitmap indexes.
Index Joins: An index join is a hash join of several indexes that together contain all the table columns that are referenced in the query.
Bitmap Indexes: A bitmap join uses a bitmap for key values and a mapping function that converts each bit position to a rowid.
4- Cluster Access: A cluster scan is used to retrieve rows from a table stored in an indexed cluster.
5- Hash Access: A hash scan is used to locate rows in a hash cluster, based on a hash value.
6- Sample Table Scan: This access path is used when a statement's FROM clause includes the SAMPLE clause or the SAMPLE BLOCK clause.
--scan to access 1% of the employees table
SELECT * FROM employees SAMPLE BLOCK (1);
73. What is the explain plan? And what type of information explain plan contains?
The EXPLAIN PLAN statement displays execution plans chosen by the Oracle optimizer for SELECT, UPDATE, INSERT,
and DELETE statements. A statement's execution plan is the sequence of operations Oracle performs to run the
statement.
It shows the following information in a statement:
Ordering of the tables
Access method
Join method for tables
Data operations (filter, sort, or aggregation)
Optimization (cost and cardinality of each operation)
Partitioning (set of accessed partitions)
Parallel execution (distribution method of join inputs)
74. What is TKPROF?
TKPROF formats a trace file into a more readable format for performance analysis. Before you can use TKPROF, you need to generate a trace file and locate it.
75. What is SQL Trace?
SQL trace files are text files used to debug performance problems; they record execution plans and other statistics.
76. What is the Explain plan statement disadvantage?
Explain Plan is not as useful when used in conjunction with tkprof since the trace file contains the actual execution
path of the SQL statement. Use Explain Plan when anticipated execution statistics are desired without actually
executing the statement.
77. What is the Explain plan statement advantage?
Main advantage is that it does not actually run the query - just parses the SQL. In the early stages of tuning explain
plan gives you an idea of the potential performance of your query without actually running it.
78. What is the plan table? Describe its purpose?
A global temporary table (created automatically in recent releases) that Oracle fills when you issue the EXPLAIN PLAN command for a SQL statement; it is shared by all users.
79. How you can create plan table if plan table already not exists?
Run the UTLXPLAN.SQL script if the plan table does not already exist; it creates a table named PLAN_TABLE:
SQL> CONN sys/password AS SYSDBA
SQL> @$ORACLE_HOME/rdbms/admin/utlxplan.sql
SQL> GRANT ALL ON sys.plan_table TO public;
SQL> CREATE PUBLIC SYNONYM plan_table FOR sys.plan_table;
80. What are the important fields of plan table?
The most important fields within the plan table are operation, option, object_name, id, and parent_id.
81. How you run explain plan statement?
EXPLAIN PLAN FOR SELECT last_name FROM employees;
--Using EXPLAIN PLAN with the STATEMENT ID Clause
EXPLAIN PLAN SET STATEMENT_ID = 'st1'
FOR SELECT last_name FROM employees;
--Using EXPLAIN PLAN with the INTO Clause
EXPLAIN PLAN INTO my_plan_table
FOR SELECT last_name FROM employees;
82. What are the methods can be used to display plan table output (Execution Plan)?
The execution plan can be displayed using the following methods:
1- Using a simple query (base is PLAN_TABLE)
Displays the execution plan for the last EXPLAIN PLAN command. You need to format the result yourself.
2- utlxpls.sql or utlxplp.sql scripts (for serial or parallel queries) (base is PLAN_TABLE)
Display the contents of a PLAN_TABLE and make it much easier to format and display execution plans.
@ORACLE_HOME\RDBMS\ADMIN\utlxpls.sql --FOR serial queries
@ORACLE_HOME\RDBMS\ADMIN\utlxplp.sql --FOR parallel queries
Note: Executing the individual scripts or using DBMS_XPLAN gives the same result.
3- Using DBMS_XPLAN (As of 9i) (base is PLAN_TABLE)
DBMS_XPLAN.DISPLAY function that displays the contents of a PLAN_TABLE. Makes it much easier to format and
display execution plans.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
DBMS_XPLAN.DISPLAY_AWR Function look up an historical SQL statement captured in Oracle 10g's Automatic
Workload Repository (AWR), and display its execution plan. This gives you a seven-day rolling window of history that
you can access.
4- Using V$SQL_PLAN Views (base is SQL Statement)
After the statement has executed V$SQL_PLAN views can be used to display the execution plan of a SQL statement.
Its definition is similar to the PLAN_TABLE. It is the actual execution plan and not the predicted one – just like tkprof
and even better than Explain Plan.
V$SQL_PLAN_STATISTICS provides actual execution statistics (output rows and time) for every operation
V$SQL_PLAN_STATISTICS_ALL combines V$SQL_PLAN and V$SQL_PLAN_STATISTICS information
Both v$sql_plan_statistics and v$sql_plan_statistics_all are not populated by default. The option statistics_level=all
must be set.
5- Using Toad (base is SQL Statement) TOOLS > SGA Trace / Optimization
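A sketch of pulling actual row counts and timings for the last executed statement with DBMS_XPLAN.DISPLAY_CURSOR (available from 10g; assumes the session gathered statistics, and the query shown is just an example):
SQL> alter session set statistics_level = ALL;
SQL> select /* test */ count(*) from employees;
SQL> select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));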
83. Why and when should one tune?
One of the biggest responsibilities of a DBA is to ensure that the Oracle database is tuned properly. The Oracle
RDBMS is highly tunable and allows the database to be monitored and adjusted to increase its performance.
One should do performance tuning for the following reasons:
The speed of computing might be wasting valuable human time (users waiting for response);
Enable your system to keep up with the speed at which business is conducted; and
Optimize hardware usage to save money (companies are spending millions on hardware).
Although this FAQ is not overly concerned with hardware issues, one needs to remember that you cannot tune a Buick into a Ferrari.
84. What database aspects should be monitored?
One should implement a monitoring system to constantly monitor the following aspects of a database. Writing
custom scripts, implementing Oracle’s Enterprise Manager, or buying a third-party monitoring product can achieve
this. If an alarm is triggered, the system should automatically notify the DBA (e-mail, page, etc.) to take appropriate
action.
Infrastructure availability:
• Is the database up and responding to requests
• Are the listeners up and responding to requests
• Are the Oracle Names and LDAP Servers up and responding to requests
• Are the Web Listeners up and responding to requests
Things that can cause service outages:
103. What are the values of optimizer_mode init parameters and their meaning?
optimizer_mode can be set to RULE (use the rule-based optimizer), CHOOSE (use the CBO if statistics exist, otherwise the RBO), ALL_ROWS (CBO, optimize for total throughput), or FIRST_ROWS / FIRST_ROWS_n (CBO, optimize for initial response time). The 9i default is CHOOSE; from 10g the default is ALL_ROWS.
104. What is the use of AWR, ADDM, and ASH?
105. How to generate AWR report and what are the things you will check in the report?
106. How to generate ADDM report and what are the things you will check in the report?
107. How to generate ASH report and what are the things you will check in the report?
108. How to generate STATSPACK report and what are the things you will check in the report?
109. How to generate TKPROF report and what are the things you will check in the report?
The tkprof tool is a tuning tool used to determine cpu and execution times for SQL statements. Use it by first setting
timed_statistics to true in the initialization file and then turning on tracing for either the entire database via the
sql_trace parameter or for the session using the ALTER SESSION command. Once the trace file is generated you run
the tkprof tool against the trace file and then look at the output from the tkprof tool. This can also be used to
generate explain plan output.
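A minimal sketch of the full cycle (the trace file name orcl_ora_12345.trc is hypothetical; find yours under user_dump_dest):
SQL> alter session set timed_statistics = true;
SQL> alter session set sql_trace = true;
SQL> -- run the workload, then:
SQL> alter session set sql_trace = false;
$ tkprof orcl_ora_12345.trc tkprof_out.txt sys=no sort=exeela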
110. What is Performance Tuning?
Making optimal use of the system with existing resources is called performance tuning.
111. Types of Tunings?
1. CPU Tuning 2. Memory Tuning 3. IO Tuning 4. Application Tuning 5. Database Tuning
112. What mainly Database Tuning contains?
1. Hit Ratios 2. Wait Events
113. What is an optimizer?
The optimizer is the mechanism that builds the execution plan of a SQL statement.
114. Types of Optimizers?
1. RBO (Rule Based Optimizer) 2. CBO (Cost Based Optimizer)
115. Which init parameter is used to make use of Optimizer?
optimizer_mode: RULE selects the RBO, ALL_ROWS/FIRST_ROWS select the CBO, and CHOOSE selects the CBO if statistics exist, otherwise the RBO.
116. Which optimizer is the best one?
CBO
117. What are the pre requisite to make use of Optimizer?
1. Set the optimizer mode 2. Collect the statistics of an object
118. How do you collect statistics of a table?
analyze table emp compute statistics or analyze table emp estimate statistics
119. What is the diff between compute and estimate?
If you use COMPUTE, a full table scan happens and every row is read; if you use ESTIMATE, only a sample of the table (the percentage you specify, or a small default sample) is read. A dbms_stats-based sketch follows below.
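As a sketch, the same statistics can be gathered more flexibly with DBMS_STATS, which Oracle recommends over ANALYZE for optimizer statistics:
SQL> exec dbms_stats.gather_table_stats(ownname => 'SCOTT', tabname => 'EMP', estimate_percent => dbms_stats.auto_sample_size);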
120. What will happen if you set the optimizer_mode=choose?
If the statistics of an object is available then CBO used, if not RBO will be used.
121. Data Dictionary follows which optimizer mode?
RBO
122. How do you delete statistics of an object?
analyze table emp delete statistics
123. How do you collect statistics of a user/schema?
EXEC DBMS_STATS.GATHER_SCHEMA_STATS('SCOTT')
124. How do you see the statistics of a table?
select num_rows, blocks, empty_blocks from dba_tables where table_name = 'EMP';
125. What are chained rows?
These are rows that span multiple blocks.
126. How do you collect statistics of a user in Oracle Apps?
fnd_stats package
127. How do you create a execution plan and how do you see?
1. @?/rdbms/admin/utlxplan.sql --------- it creates a plan_table
2. explain plan set statement_id='1' for select * from emp;
3. @?/rdbms/admin/utlxpls.sql ------------- it displays the plan
128. How do you know what sql is currently being used by the session?
By querying v$sql and v$sqlarea.
129. What is a execution plan?
It is a road map of how the SQL statement is executed by the Oracle database.
130. How do you get the index of a table and on which column the index is?
dba_indexes and dba_ind_columns
131. Which init parameter do you set to bypass (hard) parsing?
cursor_sharing=force
132. How do you know which session is running long jobs?
By querying v$session_longops.
133. How do you flush the shared pool?
alter system flush shared_pool
134. How do you get the info about FTS?
using v$sysstat
135. How do you increase the db cache?
alter system set db_cache_size=<new size> (with automatic SGA management, adjust sga_target). To cache a specific table's blocks: alter table emp cache.
136. Where do you get the info of library cache?
v$librarycache
137. How do you get the information of specific session?
v$mystat shows statistics for your own session; for another session, join v$sesstat to v$statname on the SID.
138. What you’ll check whenever user complains that his session/database is slow?
139. Customer reports a application slowness issue, and you need to evaluate database performance. What do you
look at for 9i and for 11g.
On Oracle 9i, look at the STATSPACK report; on Oracle 11g look at the AWR report. In both reports look at the top SQLs listed by elapsed time or CPU time. In SQL*Plus, look at sql_text from v$sql where disk_reads is high.
140. You have found a long running sql in your evaluation of system health of database, what do you look for to
determine why sql is slow?
Use explain plan to determine the execution plan of the sql. When looking at execution plan look for indexes being
used, full table scans on large tables.
141. You have a windows service is crashing, how can you determine the sqls related to the windows service?
Use sql trace to trace the username and program associated with the trace file. Use tkprof to analyze the sql trace
and determine the long running sqls.
142. What is proactive tuning and reactive tuning?
In proactive tuning, the application designers determine which combination of system resources and available Oracle features best meets the needs during design and development. In reactive tuning, a bottom-up approach is used to find and fix the bottlenecks. The goal is to make Oracle run faster.
143. Describe the level of tuning in oracle?
A. System-level tuning involves the following steps:
1. Monitoring the operating system counters using a tool such as top, gtop, and GKrellM or the VTune analyzer’s
counter monitor data collector for applications running on Windows.
2. Interpreting the counter data to locate system-level performance bottlenecks and opportunities for improving the
way your application interacts with the system.
3. SQL-level tuning: tuning the disk and network I/O subsystems to optimize I/O time, network packet size, and dispatching frequency is called server kernel optimization.
The optimizer can study the distribution of data by collecting and storing optimizer statistics; this enables intelligent execution plans. The choice of db_block_size, db_cache_size, and OS parameters (db_file_multiblock_read_count, cpu_count, etc.) can influence SQL performance, as can tuning the SQL access workload with physical indexes and materialized views.
144. What is Database design level tuning?
The steps involved in database design level tuning are:
1. Determination of the data needed by an application (what relations are important, their attributes and structuring
the data to best meet the performance goals)
2. Analysis of data followed by normalization to eliminate data redundancy.
3. Avoiding data contention.
4. Localizing access to the data to the partition, process and instance levels.
5. Using synchronization points in Oracle Parallel Server.
6. Implementation of 8i enhancements that can help avoid contention:
a. Consideration of partitioning the data
b. Consideration of using local or global indexes.
145. Explain rule-based optimizer and cost-based optimizer?
A. Oracle decides how to retrieve the necessary data whenever a valid SQL statement is processed. This decision can
be made using one of two methods:
1. Rule Based Optimizer
If the server has no internal statistics relating to the objects referenced by the statement, then the RBO method is used. This method is deprecated in later releases of Oracle.
2. Cost Based Optimizer
The CBO method is used if internal statistics are present. The CBO checks several possible execution plans and selects
the one with the lowest cost based on the system resources.
146. What are object datatypes? Explain the use of object datatypes?
Object datatypes are user-defined datatypes. Both a column and a row can represent an object type, and instances of object types can be stored in the database. Object datatypes make it easier to work with complex data, such as images, audio, and video, and they provide higher-level ways to organize and access data in the database.
The SQL cursor attributes of the SELECT INTO clause are SQL%NOTFOUND, SQL%FOUND, SQL%ISOPEN, and SQL%ROWCOUNT:
1. %NOTFOUND: true if no rows were returned.
E.g. IF SQL%NOTFOUND THEN RETURN some_value;
2. %FOUND: true if at least one row was returned.
E.g. IF SQL%FOUND THEN RETURN some_value;
3. %ISOPEN: true if the SQL cursor is open. It will always be false here, because the database opens and closes the implicit cursor used to retrieve the data.
4. %ROWCOUNT: number of rows returned. Equals 0 if no rows were found (but then the NO_DATA_FOUND exception is raised) and 1 if one or more rows are found (if more than one, the TOO_MANY_ROWS exception is raised).
147. What is translate and decode in oracle?
1. Translate: the TRANSLATE function replaces a sequence of characters in a string with another set of characters. The replacement is done a single character at a time. Syntax:
translate(string1, string_to_replace, replacement_string)
Example:
translate('1tech23', '123', '456');
2. Decode: The DECODE function compares one expression to one or more other expressions and, when the base expression is equal to a search expression, it returns the corresponding result expression; when no match is found, it returns the default expression if one is specified, or NULL if it is not.
Syntax:
DECODE (expr , search, result [, search , result]... [, default])
Example:
SELECT employee_name, decode(employee_id, 10000, 'tom', 10001, 'peter', 10002, 'jack', 'Gateway') result FROM employee;
148. What is oracle correlated sub-queries? Explain with an example?
A query which uses values from the outer query is called a correlated subquery. The subquery is evaluated once for each row processed by the outer query. Example:
Here, the sub query references the employee_id in outer query. The value of the employee_id changes by row of the
outer query, so the database must rerun the subquery for each row comparison. The outer query knows nothing
about the inner query except its results.
select emp.employee_id, emp.appraisal_id, emp.appraisal_amount
from employee emp
where emp.appraisal_amount < (select max(e.appraisal_amount)
                              from employee e
                              where e.employee_id = emp.employee_id);
ADDM (Automatic Database Diagnostic Monitor) analyzes data in the Automatic Workload Repository (AWR) to identify potential performance bottlenecks. We use it for the following cases:
• CPU bottlenecks
• Undersized memory structures
• I/O capacity issues
• High load SQL statements
• RAC specific issues
• Database configuration issues
• Also provides recommendations on hardware changes, database configuration & schema changes.
Generate ADDM: run the addmrpt.sql script (a sketch follows below). Statistics from the in-memory performance monitoring tables are also used to track session activity and simplify performance tuning.
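A minimal sketch of producing an ADDM report from two existing AWR snapshots (the script prompts for begin/end snapshot IDs and a report name):
SQL> @?/rdbms/admin/addmrpt.sql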
Note:
If the current value for any parameter is higher than the value listed in this table, do not
change the value of that parameter.
To view the current value specified for these kernel parameters, and to change them if necessary, follow these steps:
Enter commands similar to the following to view the current values of the kernel parameters:
Note:
Make a note of the current values and identify any values that you must change.
Parameter                               Command
semmsl, semmns, semopm, and semmni      # /sbin/sysctl -a | grep sem
                                        (displays the semaphore parameters in the order listed)
shmall, shmmax, and shmmni              # /sbin/sysctl -a | grep shm
file-max                                # /sbin/sysctl -a | grep file-max
ip_local_port_range                     # /sbin/sysctl -a | grep ip_local_port_range
                                        (displays a range of port numbers)
If the value of any kernel parameter is different from the recommended value, complete the following steps:
Using any text editor, create or edit the /etc/sysctl.conf file and add or edit lines similar to the following (see the sketch after the note below):
Note:
Include lines only for the kernel parameter values that you want to change. For the
semaphore parameters (kernel.sem), you must specify all four values. However, if any of the
current values are larger than the recommended value, specify the larger value.
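As a sketch, lines similar to the following (the classic 10g install guide values for Linux; treat them as minimums and keep any larger current values):
kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000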
By specifying the values in the /etc/sysctl.conf file, they persist when you reboot the system.
Enter the following command to change the current values of the kernel parameters:
# /sbin/sysctl -p
Review the output from this command to verify that the values are correct. If the values are incorrect, edit the
/etc/sysctl.conf file, then enter this command again.
On SUSE systems only, enter the following command to cause the system to read the /etc/sysctl.conf file when it
reboots:
# /sbin/chkconfig boot.sysctl on
Add the following line to the /etc/pam.d/login file, if it does not already exist:
session required /lib/security/pam_limits.so
Depending on the oracle user's default shell, make the following changes to the default shell start-up file:
For the Bourne, Bash, or Korn shell, add the following lines to the /etc/profile file (or the /etc/profile.local file on
SUSE systems):
if [ $USER = "oracle" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
fi
For the C or tcsh shell, add the following lines to the /etc/csh.login file (or the /etc/csh.login.local file on SUSE
systems):
if ( $USER == "oracle" ) then
limit maxproc 16384
limit descriptors 65536
endif
SUSE:
# eject /media/cdrom
In this example, /mnt/cdrom or /media/cdrom is the mount point directory for the CD-ROM drive, depending on your
distribution.
Insert the disc into the CD-ROM or DVD-ROM drive.
To verify that the disc mounted automatically, enter a command similar to the following:
Red Hat:
$ ls /mnt/cdrom
SUSE:
$ ls /media/cdrom
If this command fails to display the contents of the disc, enter a command similar to the following, depending on your
distribution:
Red Hat:
# mount /mnt/cdrom
SUSE:
# mount /media/cdrom
Log In as the oracle User and Configure the oracle User's Environment
You run the Installer from the oracle account. However, before you start the Installer you must configure the
environment of the oracle user. To configure the environment, you must:
Set the default file mode creation mask (umask) to 022 in the shell startup file.
Set the DISPLAY, ORACLE_BASE, and ORACLE_SID environment variables.
To set the oracle user's environment, follow these steps:
Start another terminal session.
Enter the following command to ensure that X Window applications can display on this system:
$ xhost +
To determine the default shell for the oracle user, enter the following command:
$ echo $SHELL
Open the oracle user's shell startup file in any text editor:
Bash shell (bash) on Red Hat:
$ vi .bash_profile
Enter or edit the following line in the shell startup file, specifying a value of 022 for the default file mode creation
mask:
umask 022
C shell:
% source ./.login
If you determined that the /tmp directory had insufficient free disk space when checking the hardware requirements,
enter the following commands to set the TEMP and TMPDIR environment variables. Specify a directory on a file
system with sufficient free disk space.
Bourne, Bash, or Korn shell:
$ TEMP=/directory
$ TMPDIR=/directory
$ export TEMP TMPDIR
C shell:
% setenv TEMP /directory
% setenv TMPDIR /directory
If you are not installing the software on the local system, enter the following command to direct X applications to
display on the local system:
Bourne, Bash, or Korn shell:
$ DISPLAY=local_host:0.0 ; export DISPLAY
C shell:
% setenv DISPLAY local_host:0.0
In this example, local_host is the host name or IP address of the system that you want to use to display the Installer
(your workstation or PC).
Enter commands similar to the following to set the ORACLE_BASE and ORACLE_SID environment variables:
Bourne, Bash, or Korn shell:
$ ORACLE_BASE=/u01/app/oracle
$ ORACLE_SID=sales
$ export ORACLE_BASE ORACLE_SID
C shell:
% setenv ORACLE_BASE /u01/app/oracle
% setenv ORACLE_SID sales
In these examples, /u01/app/oracle is the Oracle base directory that you created earlier and sales is the name that
you want to call the database (typically no more than five characters).
Enter the following commands to ensure that the ORACLE_HOME and TNS_ADMIN environment variables are not
set:
Bourne, Bash, or Korn shell:
$ unset ORACLE_HOME
$ unset TNS_ADMIN
C shell:
% unsetenv ORACLE_HOME
% unsetenv TNS_ADMIN
To verify that the environment has been set correctly, enter the following commands:
$ umask
$ env | more
Verify that the umask command displays a value of 0022, 022, or 22 and that the environment variables you set in
this section have the correct values.
Install Oracle Database 10g
After configuring the oracle user's environment, start the Installer and install the Oracle software, as follows:
Note:
The following examples show paths to the runInstaller script on a CD-ROM. If you are
installing the software from DVD-ROM, use a command similar to the following:
$ /mount_point/db/runInstaller
SUSE:
$ cd /tmp
$ /media/cdrom/runInstaller
If the Installer does not appear, see the Oracle Database Installation Guide for UNIX Systems for information about
how to troubleshoot X display problems.
Use the following guidelines to complete the installation:
The following table describes the recommended action for each Installer screen.
Note:
If you have completed the tasks listed previously, you can complete the installation by
choosing the default values on most screens.
If you need more assistance, or if you want to choose an option that is not a default, click Help for additional
information.
If you encounter errors while installing or linking the software, see the Oracle Database Installation Guide for UNIX
Systems for information about troubleshooting.
Screen: Welcome to the Oracle Database 10g Installation
Recommended Action: Specify the following information, then click Next.
• Oracle Home Location: verify that the path shown is similar to the following: oracle_base/product/10.1.0/db_1
• Installation Type: select Enterprise Edition or Standard Edition.
• UNIX DBA Group: select the name of the OSDBA group that you created earlier, for example dba.
• Global Database Name: specify a name for the database, followed by the domain name of the system: sales.your_domain.com
• Database Password/Confirm Password: specify and confirm the password that you want to use for the following administrative database accounts: SYS, SYSTEM, SYSMAN, and DBSNMP.
Screen: Specify Inventory Directory and Credentials
Note: this screen appears only during the first installation of Oracle products on a system.
Recommended Action: Specify the following information, then click Next. Enter the full path of the inventory directory; verify that the path is similar to the following, where oracle_base is the value you specified for the ORACLE_BASE environment variable: oracle_base/oraInventory
Screen: Setup Privileges
Recommended Action: When prompted, run the oracle_home/root.sh script as the root user. In this example, oracle_home is the directory where you installed the software. The correct path is displayed on the screen. Press Return to accept the default values for each prompt displayed by the script. When the script finishes, click OK.
Screen: End of Installation
Recommended Action: The configuration assistants configure several Web-based applications, including Oracle Enterprise Manager Database Control. This screen displays the URLs configured for these applications. Make a note of the URLs used. The port numbers used in these URLs are also recorded in the following file: oracle_home/install/portlist.ini
DVD-ROM installation:
$ /mount_point/companion/runInstaller
The following table describes the recommended action for each Installer screen:
Screen: Welcome
Recommended Action: Click Next.
Screen: Specify File Locations
Recommended Action: In the Destination section, select the Name or Path value that specifies the Oracle home directory where you installed Oracle Database 10g, then click Next. The default Oracle home path is similar to the following: oracle_base/product/10.1.0/db_1
Screen: Select a Product to Install
Recommended Action: Select Oracle Database 10g Products, then click Next.
Screen: Summary
Recommended Action: Review the information displayed, then click Install.
Screen: Install
Recommended Action: The Install screen displays status information while the product is being installed.
Screen: Setup Privileges
Recommended Action: When prompted, run the following script in a separate terminal window as the root user: oracle_home/root.sh. In this example, oracle_home is the directory where you installed the software. The correct path is displayed on the screen. Note: unless you want to install Legato Single Server Version, enter 3 to quit the installation of LSSV. When the script finishes, click OK.
Screen: End of Installation
Recommended Action: To exit from the Installer, click Exit, then click Yes.
2. What scripts must be run for a successful software installation? Explain them.
a. orainstRoot.sh
b. root.sh
Note: Both scripts should be run as the root user.
orainstRoot.sh:
It is located in $ORACLE_BASE/oraInventory
Usage:
a. It creates the inventory pointer file (/etc/oraInst.loc); the file shows the inventory location and the group it is linked to.
b. It changes the group ownership of the oraInventory directory to the oinstall group.
root.sh:
It is located in $ORACLE_HOME directory
Usage:
The root.sh script performs many things, namely:
a. It changes or correctly sets the environment variables.
b. It copies a few files into /usr/local/bin; the files are dbhome, oraenv, and coraenv.
c. It creates the /etc/oratab file, or adds the database home and SID entries to /etc/oratab.
3. What are post installation Tasks?
Required Post-installation Tasks
Recommended Post-installation Tasks
Required Product-Specific Post-installation Tasks
Installing Oracle Database 10g Products from the Companion CD
Required Post-installation Tasks
You must perform the tasks described in the following sections after completing an installation:
Downloading and Installing Patches
Running Oracle Enterprise Manager Java Console
Connecting with Instant Client
Configuring Oracle Products
Downloading and Installing Patches
Check the OracleMetalink Web site for required patches for your installation. To download required patches:
Use a Web browser to view the OracleMetalink Web site:
http://metalink.oracle.com
Log in to OracleMetalink.
Note:
If you are not an OracleMetalink registered user, click Register for MetaLink! and follow the
registration instructions.
On the main OracleMetalink page, click Patches.
Select Simple Search.
Specify the following information, then click Go:
In the Search By field, choose Product or Family, then specify RDBMS Server.
In the Release field, specify the current release number.
In the Patch Type field, specify Patchset/Minipack.
In the Platform or Language field, select your platform.
Running Oracle Enterprise Manager Java Console
In addition to using Oracle Enterprise Manager Database Control or Grid Control to manage an Oracle Database 10g
database, you can also use the Oracle Enterprise Manager Java Console to manage databases from this release or
previous releases. The Java Console is installed by the Administrator installation type.
Note: Oracle recommends that you use Grid Control or Database Control in preference to the
Java Console when possible.
To start the Java Console, follow these steps:
Set the ORACLE_HOME environment variable to specify the Oracle home directory where you installed Oracle Client.
Set the shared library path environment variable of the system to include the following directories:
Platform Environment Variable Required Setting
Linux x86-64 LD_LIBRARY_PATH $ORACLE_HOME/lib32:$ORACLE_HOME/lib:$LD_LIBRARY_PATH
Use one of the following methods to specify database connection information for the client application:
Specify a SQL connect URL string using the following format:
//host:port/service_name
Set the TNS_ADMIN environment variable to specify the location of the tnsnames.ora file and specify a service name
from that file.
Set the TNS_ADMIN environment variable and set the TWO_TASK environment variable to specify a service name
from the tnsnames.ora file.
Note:
You do not have to specify the ORACLE_HOME environment variable.
Note:
You need only perform post-installation tasks for products that you intend to use.
Configuring Oracle Net Services
If you have a previous release of Oracle software installed on this system, you might want to copy information from
the Oracle Net tnsnames.ora and listener.ora configuration files from the previous release to the corresponding files
for the new release.
Note:
The default location for the tnsnames.ora and listener.ora files is the
$ORACLE_HOME/network/admin/ directory. However, you can also use a central location for these files, for example /var/opt/oracle/etc.
Modifying the listener.ora File
If you are upgrading from a previous release of Oracle Database, Oracle recommends that you use the current release
of Oracle Net listener instead of the listener from the previous release.
To use the listener from the current release, you may need to copy static service information from the listener.ora
file from the previous release to the version of that file used by the new release.
For any database instances earlier than release 8.0.3, add static service information to the listener.ora file. Oracle
Database releases later than release 8.0.3 do not require static service information.
Modifying the tnsnames.ora File
Unless you are using a central tnsnames.ora file, copy Oracle Net service names and connect descriptors from the
previous release tnsnames.ora file to the version of that file used by the new release.
If necessary, you can also add connection information for additional database instances to the new file.
Configuring Oracle Label Security
If you installed Oracle Label Security, you must configure it in a database before you use it. You can configure Oracle
Label Security in two ways; with Oracle Internet Directory integration and without Oracle Internet Directory
integration. If you configure Oracle Label Security without Oracle Internet Directory integration, you cannot configure
it to use Oracle Internet Directory at a later stage.
Note:
To configure Oracle Label Security with Oracle Internet Directory integration, Oracle Internet
Directory must be installed in your environment and the Oracle database must be registered
in the directory.
See Also:
For more information about Oracle Label Security enabled with Oracle Internet Directory, see
the Oracle Label Security Administrator's Guide.
Installing Natively Compiled Java Libraries for Oracle JVM and Oracle interMedia
If you plan to use Oracle JVM or Oracle interMedia, Oracle strongly recommends that you install the natively
compiled Java libraries (NCOMPs) used by those products from the Oracle Database 10g Companion CD. These
libraries are required to improve the performance of the products on your platform.
For information about how to install products from the Companion CD, see the "Installing Oracle Database 10g
Products from the Companion CD" section.
Installing Oracle Text Supplied Knowledge Bases
An Oracle Text knowledge base is a hierarchical tree of concepts used for theme indexing, ABOUT queries, and
deriving themes for document services. If you plan to use any of these Oracle Text features, you can install two
supplied knowledge bases (English and French) from the Oracle Database 10g Companion CD.
Note:
You can extend the supplied knowledge bases depending on your requirements.
Alternatively, you can create your own knowledge bases, possibly in languages other than
English and French. For more information about creating and extending knowledge bases, see
the Oracle Text Reference.
For information about how to install products from the Companion CD, see the "Installing Oracle Database 10g
Products from the Companion CD" section.
Configuring Oracle Messaging Gateway
To configure Oracle Messaging Gateway, see the section about Messaging Gateway in the Oracle Streams Advanced
Queuing User's Guide and Reference manual. When following the instructions listed in that manual, refer to this
section for additional platform-specific instructions about configuring the listener.ora, tnsnames.ora, and mgw.ora
files.
Modifying the listener.ora File for External Procedures
To modify the $ORACLE_HOME/network/admin/listener.ora file for external procedures:
1. Back up the listener.ora file.
2. Ensure that the default IPC protocol address for external procedures is set as follows:
   (ADDRESS = (PROTOCOL=IPC)(KEY=EXTPROC))
3. Add static service information for a service called mgwextproc by adding lines similar to the following to the SID_LIST parameter for the listener in the listener.ora file:
   (SID_DESC =
     (SID_NAME = mgwextproc)
     (ENVS = platform-specific_env_vars)
     (ORACLE_HOME = oracle_home)
     (PROGRAM = extproc_agent)
   )
In this example:
The ENVS parameter defines the shared library path environment variable and any other required environment
variables.
Note:
In the following examples, the PLSExtProc service is the default service for PL/SQL external
procedures.
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /u01/app/oracle/product/10.1.0/db_1)
      (PROGRAM = extproc)
    )
    (SID_DESC =
      (SID_NAME = mgwextproc)
      (ENVS = EXTPROC_DLLS=/u01/app/oracle/product/10.1.0/db_1/lib32/libmgwagent.sl,
       LD_PRELOAD=/u01/app/oracle/product/10.1.0/db_1/jdk/jre/lib/PA-RISC/server/libjvm.sl,
       SHLIB_PATH=/u01/app/oracle/product/10.1.0/db_1/jdk/jre/lib/PA_RISC:/u01/app/oracle/product/10.1.0/db_1/jdk/jre/lib/PA_RISC/server:/u01/app/oracle/product/10.1.0/db_1/lib32)
      (ORACLE_HOME = /u01/app/oracle/product/10.1.0/db_1)
      (PROGRAM = extproc32)
    )
  )
Modifying the tnsnames.ora File for External Procedures
To modify the $ORACLE_HOME/network/admin/tnsnames.ora file for external procedures:
1. Back up the tnsnames.ora file.
2. In the tnsnames.ora file, add a connect descriptor with the net service name MGW_AGENT, as follows:
   MGW_AGENT =
     (DESCRIPTION=
       (ADDRESS_LIST= (ADDRESS= (PROTOCOL=IPC)(KEY=EXTPROC)))
       (CONNECT_DATA= (SID=mgwextproc) (PRESENTATION=RO)))
In this example:
The value specified for the KEY parameter must match the value specified for that parameter in the IPC protocol
address in the listener.ora file.
The value of the SID parameter must match the service name in the listener.ora file that you specified for the Oracle
Messaging Gateway external procedure agent in the previous section (mgwextproc).
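As a quick sanity check (not part of the documented steps), the new entry can usually be resolved with tnsping once the listener is running:
$ tnsping MGW_AGENT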
Setting up the mgw.ora Initialization File
Note:
All the lines in the mgw.ora file should be less than 1024 characters.
Configuring Oracle Precompilers
The following section describes post-installation tasks for Oracle precompilers.
Note:
All precompiler configuration files are located in the $ORACLE_HOME/precomp/admin
directory.
Configuring Pro*C/C++
Verify that the PATH environment variable setting includes the directory that contains the C compiler executable.
Table 4-1 shows the default directories and the appropriate commands to verify the path setting, depending on your
platform and compiler.
Table 4-1  C/C++ Compiler Directory
Platform        Path        Command
Linux x86-64    /usr/bin    $ which gcc
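A short shell sketch of the verification, assuming gcc on Linux x86-64; the prepended directory is illustrative:
$ which gcc
/usr/bin/gcc
$ PATH=/usr/bin:$PATH; export PATH   # only needed if the compiler directory is not already on PATH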
Installing Oracle Database 10g Products from the Companion CD
The Oracle Database 10g Companion CD contains additional products that you can install. Whether you need to
install these products depends on which Oracle Database products or features you plan to use. If you plan to use the
following products or features, Oracle strongly recommends that you complete the Oracle Database 10g Products
installation from the Companion CD:
Oracle JVM
Oracle interMedia
Oracle Text
To install Oracle Database 10g Products from the Companion CD, follow these steps:
Note:
For more detailed installation information, see the Oracle Database Companion CD
Installation Guide, which is available on the Companion CD.
1. Insert the Oracle Database 10g Companion CD or the Oracle Database 10g DVD-ROM into the disc drive.
2. If necessary, log in to the system as the user who installed Oracle Database (typically the oracle user).
3. To start the Installer, enter the following commands, where directory_path is the CD-ROM mount point directory or the path of the companion directory on the DVD-ROM:
$ cd /tmp
$ /directory_path/runInstaller
SIEMENS
1. How do you alter a tablespace?
2. FINGER command
3. TOUCH command
4. IMPORT/EXPORT: what happens if we set DIRECT=Y?
5. Suppose that, as the day starts, a datafile is 90% full. What do you do?
6. Differences between locally managed and dictionary-managed tablespaces?
7. STATS PACK
8. MTS (Multi-Threaded Server)?
9. Differences between cold and hot backups?
10. How do you back up the control files?
11. Trace files?
12. Listener.ora?
Q SOFT
1. How many instances in your company?
2. Suppose your DB is 90 GB; how do you size the SGA?
3. What are the components of SGA?
4. Database Buffer cache is useful for what?
5. What is DML Statement?
6. Suppose a redo log file fails; what happens?
7. Difference between Shutdown Transactional and NORMAL?
8. What is an SPFILE? Suppose you have spfileSID.ora, spfile.ora, and a pfile; which is read first?
9. Suppose a datafile is 90% full; what do you do?
10. Advantages of hot backup?
11. Write the command to assign temporary and default tablespaces to a user. What does the SYSTEM tablespace contain?
12. Rollback segments (RBS) versus UNDO?
13. Installation on Linux: which kernel parameters do you set?
14. How many tables do you have?
15. What is hash partitioning?
16. Export utility: explain SHOW=Y and DIRECT=Y.
17. How do you use quotas?
18. PGA_AGGREGATE_TARGET?
19. Explain the alert log file.
20. Optimizer: RBO and CBO; how do we tell which is being used?
21. What is a logical backup?
22. When do you create indexes? Would you create an index after inserting the data?
ORACLE
1. Suppose SID=100; how can you check it at the OS level?
2. Altering a datafile: any prerequisites?
3. What is your setup i.e. Environment?
4. How can you verify the logs?
5. Auto Extend on?
6. You added a datafile; how can you see it?
7. How many processes are running, and how can we see that?
8. How can I increase my performance?
9. Difference between ls –ltr and ls –l?
CHENNAI COMPANY
1. OS-level disk free space? (df -h)
2. Hot backup steps?
3. Control file commands?
4. Backup time?
5. COMPRESS=Y (import/export)?
6. Is export possible across different operating systems?
7. Version of the OS?
8. How can you identify the port number?
9. What is the default port number?
10. While installing Oracle on Linux, what is the last step?
11. Can a user create a database?
12. What are database links?
13. Which user can install Oracle? (the root user or a regular user)
14. How many memory layers are in the shared pool?
15. How do you find out in the RMAN catalog whether a particular archive log has been backed up?
16. What are ORA-01113 and ORA-01110?
17. What is the DUAL table?
18. What is RECO for?
19. Explain the setup in your company.
20. How many redo logs are generated in your company, and what is their size?
FINCH
1. What is a checkpoint?
2. How do you manage idle users? (IDLE_TIME in a profile; see the sketch after this list)
3. What is the backup strategy of your company?
4. When do you take a cold backup, and when an RMAN backup?
5. Monitoring with dynamic performance views and data dictionary views?
6. SQL tuning?
7. What is a semaphore, in the context of installing Oracle?
8. Datafile size?
9. How do you find the overall size of the database?
10. How do you find used and free space in the database?
11. "Shared memory realm does not exist" error?
12. Status of redo logs in ARCHIVELOG and NOARCHIVELOG mode?
13. What happens internally when we put a tablespace in BEGIN BACKUP mode?
14. What are the advantages of CATALOG DB ?
15. What is RMAN Repository ?
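For item 2, a minimal SQL sketch of limiting idle sessions with a profile; the profile and user names are hypothetical:
SQL> CREATE PROFILE limited_idle LIMIT IDLE_TIME 30;   -- minutes of allowed inactivity
SQL> ALTER USER scott PROFILE limited_idle;
SQL> ALTER SYSTEM SET resource_limit = TRUE;   -- IDLE_TIME is enforced only when RESOURCE_LIMIT is TRUE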
IBM
1. Tell me about yourself ?
2. Backup Strategy ?
3. What happens internally when we put a tablespace in BEGIN BACKUP mode?
4. How do you include user-managed backups in RMAN?
5. What happens internally while taking a backup using RMAN?
6. If a corrupt block is found while taking a backup using RMAN, does RMAN back up that datafile or terminate processing?
7. What are DBWR and LGWR?
8. What are SMON and PMON?
9. How is the CKPT process helpful during instance recovery?
10. How can we make an image copy of a datafile using RMAN? (see the sketch after this list)
11. How can we compress a backup set?
12. What are the new features in 10g Release 2?
13. How do you configure RMAN?
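For items 10 and 11, a minimal RMAN sketch; the datafile number and output path are hypothetical:
RMAN> BACKUP AS COPY DATAFILE 4 FORMAT '/u01/backup/users01.cpy';   # image copy of one datafile
RMAN> BACKUP AS COMPRESSED BACKUPSET DATABASE;   # binary-compressed backup set (10g and later)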
WIPRO
I ROUND
1. Tell about yourself ?
2. What are your daily activities ?
3. What type of DB do you have, i.e. OLTP/development/OLAP?
4. Undo tablespace options?
5. In how many ways can one create a DB?
6. Suppose you have deleted a datafile; can we recover it or not, provided no backup is available?
7. What is DBWR?
8. What are the features of Oracle 9i?
9. What is a core-level SQL statement?
10. Suppose unwanted data was inserted into a table and I want to remove it; how?
11. A meeting is going on at 4 PM and the DB crashes; what is the scenario?
II ROUND
1. What is your company profile ?
2. Which type of backups are you using?
3. Difference between RMAN and User Managed ?
4. Difference between export and import? What are their features?
5. OLTP backup manager ?
6. SQL statement Tuning ?
7. Row buffer cache ?
8. STATS PACK ?
MIND TREE
1. Definitions of function, package, and procedure?
2. What do you mean by the DUAL table, and why is it required?
3. Snapshot too old error (ORA-01555)?
4. Difference between hot and cold backup? Why must the DB be in archivelog mode for an online backup?
5. What is meant by a trigger?
WIBEN TECHNOLOGIES
1. Daily activities ?
2. Database version ?
3. DB Size ?
4. How many instances do you have, and how many development boxes?
5. Team size ?
6. Tell about physical backups ?
7. Can you explain how to take physical backups?
8. What is logical backup ?
9. What is your backup strategy ?
10. Difference between logical and physical backup ?
AXA
1. Tell me about yourself ?
2. Team size ?
3. How many DBs do you have across production, development, and QA?
4. SIZE of your Production DB ?
5. Size of your DEVELOPMENT DB ?
6. What is the size of your Redo log ?
7. Size of SGA ?
8. What parameters does the control file contain?
9. What does the pfile contain?
10. What is PCTFREE?
11. What is the DB block size?
12. What is SQL*Loader?
13. How can you load data with SQL*Loader? (see the sketch after this list)
14. If a table already contains rows, how do you load more data into it using SQL*Loader?
15. How can you increase the tablespace size and datafile size?
16. What is a dirty buffer?
17. What is the alert log, and what does it contain?
18. Did you face any issues?
19. Generally, what errors have you encountered?
20. Can you tell me the difference between 9i and 10g?
21. Explain 10g features ?
22. Tell me Oracle installation steps ?
23. What is your backup strategy ?
24. When do you take hot backups, and how much time do they take to complete?
25. What is a cold backup?
26. What is a DB refresh, and how do you do one?
27. What is Stats Pack Analysis ?
28. What is PT ? Have you ever done it ?
29. Can you tell me how many controlfiles you have ?
30. If I have created 9 controlfiles what happens ? why ?
31. Control file contains what ?
32. If the listener has failed, what error do you get?
33. One of the users is not able to connect to the DB and gets a listener error; what is the reason, and what is the error?
34. What is a recovery catalog?
35. What is TKPROF?
36. One of my users is complaining that the system is slow; what do you do?
37. If the user does not have the privilege to enable a session, as a DBA what do you do?
38. Suppose I have 1000 statements in my report and I want to see the top 5 resource-consuming statements; how do I see them?
39. How do you enable a session at the instance level?
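For items 13 and 14, a minimal SQL*Loader sketch; the table, columns, and file names are hypothetical. APPEND loads additional rows into a table that already contains data:
-- emp.ctl
LOAD DATA
INFILE 'emp.dat'
APPEND
INTO TABLE emp
FIELDS TERMINATED BY ','
(empno, ename, sal)
Invoked from the shell (the credentials are placeholders):
$ sqlldr userid=scott/tiger control=emp.ctl log=emp.log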
IBM
1. How do you apply a patch in RAC?
2. What is Cache Fusion?
3. How do you add a disk to an ASM disk group? (see the sketch after this list)
4. What is a database incarnation?
5. How do you sync Data Guard when some archive logs are missing?
6. Difference between incremental and cumulative backups?
7. What is rebalancing?
8. Do you know about OS Watcher?
9. How do you add a node in RAC?
10. What happens if the archive destination is full?
11. How will you perform a clone?
12. How do you upgrade a RAC database?
13. What do you mean by a rolling upgrade?
14. What are the different protection modes available in Data Guard?
15. Difference between exp and Data Pump?
16. What is the inventory location?
17. What happens if the inventory is corrupted?
18. What is a fractured block?
19. What is a partial checkpoint?
20. What are the background processes in ASM?
21. How will you find out the number of clients running under one ASM instance?
22. How do you recover undo with no downtime?
23. How do you recover a lost datafile?
24. What is the background process that writes to the alert log file?
25. SGA_TARGET vs SGA_MAX_SIZE?
26. root.sh and orainstRoot.sh?
27. How do you find the nodes in RAC?
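For item 3, a minimal sketch run while connected to the ASM instance; the disk group and device names are hypothetical:
SQL> ALTER DISKGROUP data ADD DISK '/dev/raw/raw5';
SQL> ALTER DISKGROUP data REBALANCE POWER 4;   -- optional; ASM also rebalances automatically after ADD DISK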