Midterm Lectures
A database is a collection of related data. By data, we mean known facts that can be
recorded and that have implicit meaning. For example, consider the names, telephone
numbers, and addresses of the people you know. You may have recorded this data in
an indexed address book or you may have stored it on a hard drive, using a personal
computer and software such as Microsoft Access or Excel. This collection of related
data with an implicit meaning is a database.
A Database Management System (DBMS) is software for storing and retrieving users' data
while considering appropriate security measures. It consists of a group of programs that
manipulate the database. The DBMS accepts requests for data from an application and
instructs the operating system to provide the specific data. In large systems, a DBMS helps users
and other third-party software to store and retrieve data.
A DBMS allows users to create their own databases as per their requirements. The term "DBMS"
includes the users of the database and other application programs. It provides an interface
between the data and the software application.
Examples of DBMSs
Oracle
IBM DB2
Ingres
Teradata
MS SQL Server
MS Access
MySQL
Forms
Forms are used for entering, modifying, and viewing records. You likely have had to fill out
forms on many occasions, like when visiting a doctor's office, applying for a job, or registering
for school. The reason forms are used so often is that they're an easy way to guide people toward
entering data correctly. When you enter information into a form in Access, the data goes exactly
where the database designer wants it to go in one or more related tables.
Forms make entering data easier. Working with extensive tables can be confusing, and when you
have connected tables, you might need to work with more than one at a time to enter a set of
data. However, with forms it's possible to enter data into multiple tables at once, all in one place.
Database designers can even set restrictions on individual form components to ensure all of the
needed data is entered in the correct format. All in all, forms help keep data consistent and
organized, which is essential for an accurate and powerful database.
Reports
Reports offer you the ability to present your data in print. If you've ever received a computer
printout of a class schedule or a printed invoice of a purchase, you've seen a database report.
Reports are useful because they allow you to present components of your database in an easy-to-
read format. You can even customize a report's appearance to make it visually appealing. Access
offers you the ability to create a report from any table or query.
Queries
Queries are a way of searching for and compiling data from one or more tables. Running a
query is like asking a detailed question of your database. When you build a query in Access, you
are defining specific search conditions to find exactly the data you want.
Queries are far more powerful than the simple searches you might carry out within a table. While
a search would be able to help you find the name of one customer at your business, you could
run a query to find the name and phone number of every customer who's made a purchase within
the past week. A well-designed query can give information you might not be able to find just by
looking through the data in your tables.
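As a sketch of this idea in SQL (the table and column names here are illustrative, not taken
from any example in these notes), the customer query described above might look like this in
Oracle:

SELECT c.Name, c.Phone
FROM Customer c, Purchase p
WHERE p.CustomerID = c.CustomerID
AND p.PurchaseDate > SYSDATE - 7;

The WHERE clause expresses the search conditions: the join links each purchase to its
customer, and the date condition restricts the results to the past week.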
DBMS Languages
A DBMS is a software package that enables users to define, create, maintain, and control access
to the database. It also provides services such as security, integrity, concurrent access,
recovery, and support for data communication.
Lecture 3
Interaction of users with the database
Teleprocessing
Processing performed within the same physical computer. User terminals are
typically “dumb”, incapable of functioning on their own, and cabled to the central
computer
File-Server
Client-Server (2-tiers)
SQL processing remained on the server side. In such an architecture, the server is
often called a query server or transaction server because it provides these two
functionalities. The user interface programs and application programs can run on the
client side. A standard such as JDBC allows Java client programs to access one or more
DBMSs through a standard interface.
Another approach was taken by some object-oriented DBMSs, where the software modules of the
DBMS were divided between client and server in a more integrated way. For example, the server
level may include the part of the DBMS software responsible for handling data storage on
disk pages, local concurrency control and recovery, buffering and caching of disk
pages, and other such functions. Meanwhile, the client level may handle the user
interface; data dictionary functions; DBMS interactions with programming
language compilers; global query optimization, concurrency control, and recovery
across multiple servers; structuring of complex objects from the data in the buffers;
and other such functions. In this approach, the client/server interaction is more
tightly coupled and is done internally by the DBMS modules—some residing
on the client and some on the server—rather than by the users/programmers. In such a
client/server architecture, the server has been called a data server because it
provides data in disk pages to the client. This data can then be structured into
objects for the client programs.
The architectures described here are called two-tier architectures because the
software components are distributed over two systems: client and server. The
advantages of this architecture are its simplicity and seamless compatibility with
existing systems. The emergence of the Web changed the roles of clients and servers,
leading to the three-tier architecture.
The three-tier architecture adds an intermediate layer between the client and the database
server. This intermediate layer or middle tier is called the application server or the Web
server, depending on the application. This server plays an intermediary role by running
application programs and storing business rules (procedures or constraints)
that are used to access data from the database server. It can also improve database
security by checking a client's credentials before forwarding a request to the
database server. Clients contain GUI interfaces and some additional application-specific
business rules. The intermediate server accepts requests from the client, processes
the request and sends database queries and commands to the database server, and
then acts as a conduit for passing (partially) processed data from the database server
to the clients, where it may be processed further and filtered to be presented to users
in GUI format. Thus, the user interface, application rules, and data access act as the
three tiers. Another view of this architecture divides it into presentation, business
logic, and data management layers. The presentation layer displays information to the
user and allows data entry. The business logic layer handles intermediate rules and
constraints before data is passed up to the user or down to the DBMS. The bottom
layer includes all data management services. The middle layer can also act as a Web
server, which retrieves query results from the database server and formats them into
dynamic Web pages that are viewed by the Web browser at the client side.
Other architectures have also been proposed. It is possible to divide the layers
between the user and the stored data further into finer components, thereby giving
rise to n-tier architectures, where n may be four or five tiers. Typically, the
business
logic layer is divided into multiple layers. Besides distributing programming and
data throughout a network, n-tier applications afford the advantage that any one
tier can run on an appropriate processor or operating system platform and can be
handled independently. Vendors of ERP (enterprise resource planning) and CRM (customer
relationship management) packages often use such a middleware layer.
A conceptual data model is a model that helps to identify the highest-level relationships between
the different entities, while a logical data model is a model that describes the data in as much
detail as possible, without regard to how it will be physically implemented in the database.
Query processing is the translation of high-level queries into low-level expressions. It is a
stepwise process that spans the physical level of the file system, query optimization, and actual
execution of the query to get the result. It requires the basic concepts of relational algebra and
file structures. Query processing refers to the range of activities that are involved in extracting
data from the database, including the translation of queries in high-level database languages
into expressions that can be implemented at the physical level of the file system. In query
processing, we study how these queries are processed and how they are optimized.
Physical database design is the process of transforming logical data models into physical data
models. An experienced database designer will make a physical database design in parallel with
conceptual data modeling if they know the type of database technology that will be used.
Lecture 4
DBMS Three Levels Architecture
1. The internal level has an internal schema, which describes the physical storage structure of
the database. The internal schema uses a physical data model and describes the complete details
of data storage and access paths for the database.
2. The conceptual level has a conceptual schema, which describes the structure of the whole
database for a community of users. The conceptual schema hides the details of physical storage
structures and concentrates on describing entities, data types, relationships, user operations,
and constraints. Usually, a representational data model is used to describe the conceptual
schema when a database system is implemented.
3. The external or view level includes a number of external schemas or user
views. Each external schema describes the part of the database that a particular user group is
interested in and hides the rest of the database from that
user group. As in the previous level, each external schema is typically implemented using a
representational data model, possibly based on an external schema design in a high-level
data model.
The three-schema architecture is a convenient tool with which the user can visualize
the schema levels in a database system. Most DBMSs do not separate the three levels
completely and explicitly, but they support the three-schema architecture to some extent.
Some older DBMSs may include physical-level details in the conceptual schema. The
three-level architecture has an important place in database technology
development because it clearly separates the users' external level, the database's conceptual
level, and the internal storage level for designing a database. It is very much
applicable in the design of DBMSs, even today. In most DBMSs that support user
views, external schemas are specified in the same data model that describes the
conceptual-level information (for example, a relational DBMS like Oracle uses SQL
for this). Some DBMSs allow different data models to be used at the conceptual and
external levels. An example is Universal Data Base (UDB), a DBMS from IBM,
which uses the relational model to describe the conceptual schema but may use an
object-oriented model to describe an external schema.
Notice that the three schemas are only descriptions of data; the stored data that
actually exists is at the physical level only. In a DBMS based on the three-schema
architecture, each user group refers to its own external schema. Hence, the DBMS
must transform a request specified on an external schema into a request against the
conceptual schema, and then into a request on the internal schema for processing
over the stored database. If the request is a database retrieval, the data extracted
from the stored database must be reformatted to match the user's external view. The
processes of transforming requests and results between levels are called mappings.
These mappings may be time-consuming, so some DBMSs—especially those that
are meant to support small databases—do not support external views. Even in such
systems, however, a certain amount of mapping is necessary to transform requests
between the conceptual and internal levels.
Data Independence
The three-schema architecture can be used to further explain the concept of data
independence, which can be defined as the capacity to change the schema at one
level of a database system without having to change the schema at the next higher
level. Logical data independence is the capacity to change the conceptual schema
without having to change external schemas or application programs; for example, we
may change the conceptual schema to expand the database (by adding a record type
or data item) without the external schemas needing to be changed as well. Physical
data independence is the capacity to change the internal schema without having to
change the conceptual schema. Changes to the internal schema may be
needed because some physical files were reorganized—for example, by creating additional
access structures—to improve the performance of retrieval or
update. If the same data as before remains in the database, we should not
have to change the conceptual schema.
Generally, physical data independence exists in most databases and file environments where
physical details such as the exact location of data on disk, and hardware details of storage
encoding, placement, compression, splitting, merging of
records, and so on are hidden from the user. Applications remain unaware of these
details. On the other hand, logical data independence is harder to achieve because it
allows structural and constraint changes without affecting application programs—a
much stricter requirement. Whenever we have a multiple-level DBMS, its catalog must
be expanded to include information on how to map requests and data among the
various levels. The DBMS uses additional software to accomplish these mappings by
referring to the mapping information in the catalog. Data independence occurs because when
the schema is changed at some level, the schema at the next higher level remains unchanged;
only the mapping between the two levels is changed. Hence, application programs referring
to the higher-level schema need not be changed.
The three-schema architecture can make it easier to achieve true data independence, both
physical and logical. However, the two levels of mappings create an overhead during
compilation or execution of a query or program, leading to inefficiencies in the DBMS.
Database schema
A database schema is a blueprint or architecture of how our data will look. It doesn’t hold data
itself, but instead describes the shape of the data and how it might relate to other tables or
models. An entry in our database will be an instance of the database schema. It will contain all of
the properties described in the schema.
Schema types
There are two main database schema types that define different parts of the schema: logical and
physical.
A logical database schema represents how the data is organized in terms of tables. It also
explains how attributes from tables are linked together. Different schemas use a different syntax
to define the logical architecture and constraints.
To create a logical database schema, we use tools to illustrate relationships between components
of your data. This is called entity-relationship modeling (ER Modeling). It specifies what the
relationships between entity types are.
The physical database schema represents how data is stored on disk storage. In other words, it
is the actual code that will be used to create the structure of your database. In MongoDB with
mongoose, for instance, this will take the form of a mongoose model. In MySQL, you will use
SQL to construct a database with tables.
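As a minimal sketch of a physical schema in SQL (the table and column definitions here are
illustrative):

CREATE TABLE Student (
StudentID INT PRIMARY KEY,
Name VARCHAR(50) NOT NULL,
City VARCHAR(30)
);

This DDL statement is the physical-schema counterpart of a Student entity drawn in an ER
diagram during logical design.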
Schema objects
A schema is a collection of schema objects. Examples of schema objects include tables, views,
sequences, synonyms, indexes, clusters, database links, procedures, and packages. This chapter
explains tables, views, sequences, synonyms, indexes, and clusters.
Schema objects are logical data storage structures. Schema objects do not have a one-to-one
correspondence to physical files on disk that store their information. However, Oracle stores a
schema object logically within a tablespace of the database. The data of each object is physically
contained in one or more of the tablespace's datafiles. For some objects such as tables, indexes,
and clusters, you can specify how much disk space Oracle allocates for the object within the
tablespace's datafiles.
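For example, in Oracle you can name the tablespace in which a table is stored when you create
it (the table and tablespace names below are illustrative):

CREATE TABLE Employee (
EmpID NUMBER,
Name VARCHAR2(50)
) TABLESPACE users;

If no tablespace is named, the object is created in the schema owner's default tablespace.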
Who is a DBA?
Network Administrator
Application Developers
DBA’s Tasks
DBA’s Responsibilities
Installing and upgrading the Oracle Database server and application tools
Allocating system storage and planning future storage requirements for the database
system
Creating primary database storage structures (tablespaces) after application developers
have designed an application
Creating primary objects (tables, views, indexes) once application developers have
designed an application
Modifying the database structure, as necessary, from information given by application
developers
Enrolling users and maintaining system security
Ensuring compliance with Oracle license agreements
Controlling and monitoring user access to the database
Monitoring and optimizing the performance of the database
Planning for backup and recovery of database information
Maintaining archived data on tape
Backing up and restoring the database
Contacting Oracle for technical support
Physical database design is the process of transforming logical data models into
physical data models. An experienced database designer will make a physical database
design in parallel with conceptual data modeling if they know the type of database
technology that will be used.
Purposes
To meet the expectations of the database designer, physical database design serves two main
purposes for a DBA:
Managing Storage Structure for database or DBMS
Performance & Tuning
Factor (A): Analyzing the database queries and transactions
Before undertaking the physical database design, we must have a good idea of the
intended use of the database by defining in a high-level form the queries and transactions
that are expected to run on the database. For each retrieval query, the following
information about the query would be needed:
1. The files that will be accessed by the query.
2. The attributes on which any selection conditions for the query are specified.
3. Whether the selection condition is an equality, inequality, or a range condition.
4. The attributes on which any join conditions or conditions to link multiple
tables or objects for the query are specified.
5. The attributes whose values will be retrieved by the query.
The attributes listed in items 2 and 4 above are candidates for the definition of access
structures, such as indexes, hash keys, or sorting of the file.
For each update operation or update transaction, the following information would be
needed:
1. The files that will be updated.
2. The type of operation on each file (insert, update, or delete).
3. The attributes on which selection conditions for a delete or update are specified.
4. The attributes whose values will be changed by an update operation.
Again, the attributes listed in item 3 are candidates for access structures on the
files, because they would be used to locate the records that will be updated or deleted.
On the other hand, the attributes listed in item 4 are candidates for avoiding an
access structure, since modifying them will require updating the access structures.
Factor (B): Frequency of Queries and Transactions
Besides identifying the characteristics of expected retrieval queries and update
transactions, we must consider their expected rates of invocation. This
frequency information, along with the attribute information collected on each query and
transaction, is used to compile a cumulative list of the expected frequency
of use for all queries and transactions. This is expressed as the expected frequency of
using each attribute in each file as a selection attribute or a join attribute, over all the
queries and transactions. Generally, for large volumes of processing, the informal 80–20
rule can be used: approximately 80 percent of the processing is accounted for by only 20
percent of the queries and transactions. Therefore, in practical situations, it is rarely
necessary to collect exhaustive statistics and invocation
rates on all the queries and transactions; it is sufficient to determine the 20 percent or so
most important ones.
Factor (C): Time constraints of queries & transactions
Some queries and transactions may have stringent performance constraints. For
example, a transaction may have the constraint that it should terminate within 5 seconds
on 95 percent of the occasions when it is invoked, and that it should never take more than
20 seconds. Such timing constraints place further priorities on the attributes that are
candidates for access paths. The selection attributes used by queries and transactions with
time constraints become higher-priority candidates for primary access structures for the
files because the primary access structures are generally the most efficient for locating
records in a file.
Factor (D): Expected frequencies of update operations
A minimum number of access paths should be specified for a file that is frequently
updated, because updating the access paths themselves slows down the update operations.
For example, if a file that has frequent record insertions has 10 indexes on 10
different attributes, each of these indexes must be updated whenever a new record is
inserted. The overhead for updating 10 indexes can slow down the insert operations.
Factor (E): Uniqueness constraints on attributes
Access paths should be specified on all candidate key attributes—or sets of attributes—
that are either the primary key of a file or unique attributes. The existence of an index (or
other access path) makes it sufficient to only search the index when checking this
uniqueness constraint, since all values of the attribute will exist in the leaf nodes of the
index. For example, when inserting a new record, if a key attribute value of the new
record already exists in the index, the insertion of the new record should be rejected,
since it would violate the uniqueness constraint on the attribute. Once the preceding
information is compiled, it is possible to address the physical
database design decisions, which consist mainly of deciding on the storage structures and
access paths for the database files.
Design Decisions about Indexing.
The attributes whose values are required in equality or range conditions (selection
operation) and those that are keys or that participate in join conditions (join operation)
require access paths, such as indexes.
The performance of queries largely depends upon what indexes or hashing schemes exist
to expedite the processing of selections and joins. On the other hand, during
insert, delete, or update operations, the existence of indexes adds to the overhead. This
overhead must be justified in terms of the gain in efficiency by expediting
queries and transactions. The physical design decisions for indexing fall into the following
categories:
1. Whether to index an attribute. The general rules for creating an index on an attribute
are that the attribute must either be a key (unique), or there must be some query that uses
that attribute either in a selection condition (equality or range of values) or in a join
condition. One reason for creating multiple indexes is that some operations can be
processed by just scanning the indexes, without having to access the actual data file.
2. What attribute or attributes to index on. An index can be constructed on a single
attribute, or on more than one attribute if it is a composite index. If multiple attributes
from one relation are involved together in several queries (for example,
(Garment_style_#, Color) in a garment inventory database), a multiattribute (composite)
index is warranted. The ordering of attributes within a multiattribute index must
correspond to the queries. For instance, the above index assumes that queries would be
based on an ordering of colors within a Garment_style_# rather than vice versa.
3. Whether to set up a clustered index. At most, one index per table can be a primary or
clustering index, because this implies that the file be physically
ordered on that attribute. In most RDBMSs, this is specified by the keyword CLUSTER.
(If the attribute is a key, a primary index is created, whereas a clustering index is created
if the attribute is not a key.) If a table requires several indexes, the decision about which
one should be the primary or clustering index depends upon whether keeping the
table ordered on that attribute is needed. Range queries benefit a great deal
from clustering. If several attributes require range queries, relative benefits must be
evaluated before deciding which attribute to cluster on. If a query is to be answered by
doing an index search only (without retrieving data records), the corresponding index
should not be clustered, since the main benefit of clustering is achieved when retrieving
the records themselves. A clustering index may be set up as a multiattribute index if
range retrieval by that composite key is useful in report creation (for example, an index
on Zip_code, Store_id, and Product_id may be a clustering index for sales data).
4. Whether to use a hash index over a tree index. In general, RDBMSs use B+-trees
for indexing. However, ISAM and hash indexes are also provided in some systems (see
Chapter 18). B+-trees support both equality and range queries on the attribute used as the
search key. Hash indexes work well with equality conditions, particularly during joins to
find a matching record(s), but they do not support range queries.
5. Whether to use dynamic hashing for the file. For files that are very volatile—that is,
those that grow and shrink continuously—a dynamic hashing scheme may be suitable.
The index-creation statements sketched below illustrate some of these decisions.
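As a sketch in SQL (the index names are illustrative, and Garment_style_# is written
Garment_style_no because # is not a legal character in standard identifiers):

CREATE INDEX Emp_dno_idx ON EMPLOYEE (Dno);
CREATE INDEX Garment_idx ON GARMENT (Garment_style_no, Color);
CREATE UNIQUE INDEX Emp_ssn_idx ON EMPLOYEE (Ssn);

The first is a single-attribute index supporting selections and joins on Dno; the second is a
composite index whose attribute order matches queries that look up colors within a garment
style; the third enforces the uniqueness constraint on the key Ssn.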
Database tuning is the process of continuing to revise/adjust the physical database design by
monitoring resource utilization as well as internal DBMS processing to reveal bottlenecks such
as contention for the same data or devices.
Tuning Indexes
The initial choice of indexes may have to be revised for the following reasons:
Certain queries may take too long to run for lack of an index.
Certain indexes may not get utilized at all.
Certain indexes may undergo too much updating because the index is on an attribute that
undergoes frequent changes.
Most DBMSs have a command or trace facility, which can be used by the DBA to ask
the system to show how a query was executed—what operations were performed in
what order and what secondary access structures (indexes) were used. By analyzing
these execution plans, it is possible to diagnose the causes of the above problems.
Some indexes may be dropped and some new indexes may be created based on the
tuning analysis.
In addition, query loads may fluctuate seasonally or during different times of the month or
week, and it may be necessary to reorganize the indexes and file organizations to yield the best
overall performance. Dropping and building new indexes is an overhead that can be justified
in terms of performance improvements.
Updating of a table is generally suspended while an index is dropped or created; this loss of
service must be accounted for. Besides dropping or creating indexes and changing from a
nonclustered to a clustered index and vice versa, rebuilding the index may improve performance.
Most RDBMSs use B+-trees for an index. If there are many deletions on the index key, index
pages may contain wasted space, which can be claimed during a rebuild operation. Similarly,
too many insertions may cause overflows in a clustered index that affect performance.
Rebuilding a clustered index amounts to reorganizing the entire table ordered on that key.
The available options for indexing and the way they are defined, created, and reorganized vary
from system to system. As an illustration, consider the sparse and dense indexes. A sparse index
such as a primary index will have one index pointer for each page (disk block) in the data file; a
dense index such as a unique secondary index will have an index pointer for each record. Sybase
provides clustering indexes as sparse indexes in the form of B+-trees, whereas INGRES provides
sparse clustering indexes as ISAM files and dense clustering indexes as B+-trees. In some
versions of Oracle and DB2, the option of setting up a clustering index is limited to a dense
index (with many more index entries), and the DBA has to work with this limitation.
Storage statistics: data about allocation of storage into tablespaces, index spaces, and buffer
pools.
Query/transaction processing statistics: the times required for different phases of query and
transaction processing.
Problems in Tuning
Most of the previously mentioned problems can be solved by the DBA by setting appropriate
physical DBMS parameters, changing configurations of devices, changing
operating system parameters, and other similar activities. The solutions tend to
be closely tied to specific systems. The DBAs are typically trained to handle these
tuning problems for the specific DBMS. Possible changes to the database design include
the following:
■ Existing tables may be joined (denormalized) because certain attributes
from two or more tables are frequently needed together: this reduces the
normalization level from BCNF to 3NF, 2NF, or 1NF.
■ For the given set of tables, there may be alternative design choices, all of
which achieve 3NF or BCNF. One normalized design may be replaced by another
equivalent design.
■ A relation of the form R(K, A, B, C, D, ...)—with K as the key—that is in BCNF can be
stored in multiple tables that are also in BCNF—for
example, R1(K, A, B), R2(K, C, D), R3(K, ...)—by replicating the key K in each
table. Such a process is known as vertical partitioning. Each table groups sets of attributes that
are accessed together. For example, the table EMPLOYEE(Ssn, Name, Phone, Grade, Salary)
may be split into two tables: EMP1(Ssn, Name, Phone) and EMP2(Ssn, Grade, Salary). If the
original table has a large number of rows (say 100,000) and queries about phone numbers and
salary information are totally distinct and occur with very different frequencies, then this
separation of tables may work better.
■ Attribute(s) from one table may be repeated in another even though this creates
redundancy and a potential anomaly. For example, Part_name may be
replicated in tables wherever the Part# appears (as a foreign key), but there
may be only one master table in which it is guaranteed to be up to date.
■ Just as vertical partitioning splits a table vertically into multiple tables,
horizontal partitioning takes horizontal slices of a table and stores them as
distinct tables. For example, product sales data may be separated into ten
tables based on ten product lines. Each table has the same set of columns
(attributes) but contains a distinct set of products (tuples). If a query or
transaction applies to all product data, it may have to run against all the
tables and the results may have to be combined.
Tuning Queries
Some typical instances of situations prompting query tuning include the following:
1. Many query optimizers do not use indexes in the presence of arithmetic expressions
(such as Salary/365 > 10.50), numerical comparisons of attributes
of different sizes and precision (such as Aqty = Bqty where Aqty is of type
INTEGER and Bqty is of type SMALLINT), NULL comparisons (such as Bdate IS NULL),
and substring comparisons (such as Lname LIKE '%mann').
2. Indexes are often not used for nested queries using IN; for example, the following
query:
SELECT Ssn
FROM EMPLOYEE
WHERE Dno IN ( SELECT Dnumber
FROM DEPARTMENT
WHERE Mgr_ssn = '333445555' );
may not use the index on Dno in EMPLOYEE, whereas using Dno = Dnumber
and merging the two queries in the WHERE-clause with a single block query may cause
the index to be used.
3. Some DISTINCTs may be redundant and can be avoided without changing
the result. A DISTINCT often causes a sort operation and must be avoided as
much as possible.
4. Unnecessary use of temporary result tables can be avoided by collapsing
multiple queries into a single query unless the temporary relation is needed
for some intermediate processing.
5. In some situations involving the use of correlated queries, temporaries are
useful. Consider the following query, which retrieves the highest paid
employee in each department:
SELECT Ssn
FROM EMPLOYEE E
WHERE Salary = ( SELECT MAX (Salary)
FROM EMPLOYEE AS M
WHERE M.Dno = E.Dno );
This has the potential danger of searching all of the inner EMPLOYEE table M
for each tuple from the outer EMPLOYEE table E. To make the execution
more efficient, the process can be broken into two queries, where the first
query computes the maximum salary in each department:
SELECT MAX (Salary) AS High_salary, Dno INTO TEMP
FROM EMPLOYEE
GROUP BY Dno;
SELECT EMPLOYEE.Ssn
FROM EMPLOYEE, TEMP
WHERE EMPLOYEE.Salary = TEMP.High_salary
AND EMPLOYEE.Dno = TEMP.Dno;
6. If multiple options for a join condition are possible, choose one that uses a
clustering index and avoid those that contain string comparisons. For example,
assuming that the Name attribute is a candidate key in EMPLOYEE and STUDENT,
it is better to use EMPLOYEE.Ssn = STUDENT.Ssn as a join condition rather than
EMPLOYEE.Name = STUDENT.Name.
7. One idiosyncrasy with some query optimizers is that the order of tables in
the FROM-clause may affect the join processing. If that is the case, one may
have to switch this order so that the smaller of the two relations is scanned
and the larger relation is used with an appropriate index.
8. Some query optimizers perform worse on nested queries compared to their
equivalent un-nested counterparts. Of the four types of nested queries, the first one
typically presents no problem, since most query optimizers evaluate the inner query
once. However, for a query of the second type, such as the example in item 2, most
query optimizers may not use an index on Dno in EMPLOYEE. However, the same
optimizers may use an index on Dnumber in DEPARTMENT, so transforming such a
query into an un-nested query is a standard optimization technique.
9. Finally, many applications are based on views that define the data of interest
to those applications. Sometimes, these views become an overkill, because a
query may be posed directly against a base table, rather than going through a
view that is defined by a JOIN.
A primary key is a candidate key selected as the primary means of identifying rows in a
relation:
A primary key is a minimal identifier (it may take two or three columns together to give
uniqueness) that is used to identify tuples uniquely.
This means that no subset of the primary key is sufficient to provide unique identification of
tuples.
NULL values are not allowed in primary key attributes.
Characteristics of a surrogate (DBMS-supplied) primary key:
Short, numeric and never changes – an ideal primary key!
Has artificial values that are meaningless to users
Normally hidden in forms and reports
Definition:
A foreign key is an attribute that refers to a primary key of the same or a different relation to
form a link (constraint) between the relations.
Example-1
In this example, DeptID is a foreign key in the Employee table that refers to the primary key
in the Department table.
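A minimal SQL sketch of this example (column names other than DeptID are illustrative):

CREATE TABLE Department (
DeptID NUMBER PRIMARY KEY,
DeptName VARCHAR2(50)
);

CREATE TABLE Employee (
EmpID NUMBER PRIMARY KEY,
Name VARCHAR2(50),
DeptID NUMBER REFERENCES Department (DeptID)
);

The REFERENCES clause declares the foreign key constraint: an Employee row cannot carry a
DeptID value that does not exist in Department.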
LECTURE 34
Entity or Entity Type
Physical entities
Conceptual entities
A simple attribute: an attribute that is not divisible into smaller subparts.
Composite Attributes
A composite attribute: an attribute that can be divided into smaller subparts, which represent
more basic attributes with independent meanings.
Example: The Address attribute of an EMPLOYEE entity can be divided into Street, City,
State, Zip.
Multivalued Attributes
An attribute that holds more than one value for a single entity.
Derived & Stored Attributes
Is an attribute that represents a value that is derived from the value of a related attribute or set of
attributes not necessarily in the same entity.
Example 1: The value of the Age attribute of the EMPLOYEE entity can be derived from
today's date and the value of the employee's BirthDate.
Complex Attributes
Composite and multivalued attributes can be nested arbitrarily, as in:
{AddressPhone({Phone(AreaCode, PhoneNumber)}, Address(Street, City, State, Zip))}
Note: braces { } denote a multivalued attribute and parentheses ( ) denote the components
of a composite attribute.
LECTURE 40
Definition
Database Management System (DBMS) is a software package that controls the storage, organization,
and retrieval of data, a DBMS has the following elements:
Relational Model
The scientist E. F. Codd defined the relational model based on mathematical theory. The
relational model has the following major aspects:
RDBMS Operations
Logical operations
An application specifies what content is required. For example, an application requests an employee
name or adds an employee record to a table. These requests are expressed by executing DDL and
DML statements against the schema of the DBMS.
Physical operations
RDBMS determines how things should be done and carries out the operation. For example, after an
application queries a table, the database may use an index to find the requested rows and read the data
into memory etc. before returning result.
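As a sketch, the application's logical request might be nothing more than an SQL statement
such as the following (the table and column names are illustrative):

SELECT Name FROM Employee WHERE EmpID = 101;

How the row is actually located – by an index lookup or a full table scan, and which blocks
are read into memory – is the physical operation that the RDBMS decides on its own.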
ORACLE
What is Oracle?
Dear Student,
Oracle is a multinational computer technology corporation that specializes
in developing and providing a wide range of software, hardware, and cloud
services, including its flagship product, Oracle Database. Oracle
Corporation is one of the world's largest and most influential technology
companies. Here are some key aspects of Oracle.
Oracle Database: Oracle Database is one of the company's most well-
known products. It is a powerful and widely used relational database
management system (RDBMS) that provides a secure and efficient platform
for storing, managing, and retrieving structured data. Oracle Database is
used for various purposes, including transaction processing, data
warehousing, and business applications. Oracle is a major technology
corporation known for its comprehensive suite of database, cloud,
hardware, and software solutions. It serves a wide range of industries and
provides technology infrastructure and applications that are crucial for
many businesses and organizations.
Introduction to Transactions
A transaction is a logical unit of work that contains one or more SQL statements. A transaction
is an atomic unit. The effects of all the SQL statements in a transaction can be either
all committed (applied to the database) or all rolled back (undone from the database).
A transaction begins with the first executable SQL statement. A transaction ends when it is
committed or rolled back, either explicitly with a COMMIT or ROLLBACK statement or implicitly
when a DDL statement is issued.
To illustrate the concept of a transaction, consider a banking database. When a bank customer
transfers money from a savings account to a checking account, the transaction can consist of
three separate operations: decrementing the savings account, incrementing the checking
account, and recording the transfer in the transaction journal.
Oracle must allow for two situations. If all three SQL statements can be performed to maintain
the accounts in proper balance, the effects of the transaction can be applied to the database.
However, if a problem such as insufficient funds, invalid account number, or a hardware failure
prevents one or two of the statements in the transaction from completing, the entire transaction
must be rolled back so that the balance of all accounts is correct.
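A minimal sketch of such a transaction in SQL (the table names, columns, and account
numbers are illustrative):

UPDATE savings_accounts SET balance = balance - 500
WHERE account = 3209;
UPDATE checking_accounts SET balance = balance + 500
WHERE account = 3208;
INSERT INTO journal (from_acct, to_acct, amount) VALUES (3209, 3208, 500);
COMMIT;

If any statement fails, a ROLLBACK undoes all three changes, preserving the atomicity
described above.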
A transaction in Oracle begins when the first executable SQL statement is encountered.
An executable SQL statement is a SQL statement that generates calls to an instance, including
DML and DDL statements.
When a transaction begins, Oracle assigns the transaction to an available undo tablespace to
record the rollback entries for the new transaction.
First, you can start a database instance without having it access any database files. This is
how you create a database: you start an instance first and create the database from within the
instance.
Second, an instance can access only one database at a time. When you start an instance, the
next step is to mount that instance to a database. And an instance can mount only one
database at a single point in time.
Third, multiple database instances can access the same database. In a clustering environment,
many instances on several servers can access a central database to enable high availability
and scalability.
Finally, a database can exist without an instance. However, it would be unusable because it is
just a set of files.
There are many versions of Oracle Database, such as Oracle Database 10g, Oracle Database 11g,
Oracle Database 12c, and Oracle Database 19c, of which Oracle 19c is the latest
version. In your course, you will install Oracle 11g.
LECTURE 47
Overview of the Data Dictionary
An important part of an Oracle database is its data dictionary, which is a read-only set of tables
that provides administrative metadata about the database. A data dictionary contains information
such as the following:
The definitions of every schema object in the database, including default values for
columns and integrity constraint information
The amount of space allocated for and currently used by the schema objects
The names of Oracle Database users, privileges and roles granted to users, and auditing
information related to users
The data dictionary is a central part of data management for every Oracle database. For example,
the database performs the following actions:
Accesses the data dictionary to find information about users, schema objects, and storage
structures
Modifies the data dictionary every time that a DDL statement is issued
Because Oracle Database stores data dictionary data in tables, just like other data, users
can query the data with SQL. For example, users can run SELECT statements to determine
their privileges, which tables exist in their schema, which columns are in these tables,
whether indexes are built on these columns, and so on.
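For example (these are standard data dictionary views; the table name EMPLOYEE is
illustrative):

SELECT table_name FROM USER_TABLES;
SELECT column_name, data_type FROM USER_TAB_COLUMNS
WHERE table_name = 'EMPLOYEE';

The first statement lists the tables in your own schema; the second lists the columns of a
particular table.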
Base tables
These underlying tables store information about the database. Only Oracle Database should write
to and read these tables. Users rarely access the base tables directly because they are normalized
and most data is stored in a cryptic format.
Views
These views decode the base table data into useful information, such as user or table names,
using joins and WHERE clauses to simplify the information. These views contain the names and
description of all objects in the data dictionary. Some views are accessible to all database users,
whereas others are intended for administrators only.
Reading Content:
Oracle enterprise manager uses the views to obtain information about the database.
Administrators can use the views for performance monitoring and debugging.
Dynamic performance views are sometimes called fixed views because they cannot be altered or
removed by a database administrator. However, database administrators can query and create
views on the tables and grant access to these views to other users.
SYS owns the dynamic performance tables, whose names begin with V_$. Views are created on
these tables, and then public synonyms prefixed with V$ are created for the views. For example,
the V$DATAFILE view
contains information about data files. The V$FIXED_TABLE view contains information about all
of the dynamic performance tables and views.
For almost every V$ view, a corresponding GV$ view exists. In Oracle Real Application Clusters
(Oracle RAC), querying a GV$ view retrieves the V$ view information from all qualified database
instances.
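For example, the following query (run as a suitably privileged user) lists the data files known
to the instance:

SELECT name FROM V$DATAFILE;

In an Oracle RAC database, SELECT inst_id, name FROM GV$DATAFILE; would return the same
information from every open instance.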
When you use the Database Configuration Assistant (DBCA) to create a database, Oracle
automatically creates the data dictionary by running the catalog.sql script, which contains
definitions of the views and public synonyms for the dynamic performance views. If you
create a database manually, you must run catalog.sql yourself to create these views and
synonyms.
Memory management involves maintaining optimal sizes for the Oracle Database instance
memory structures as demands on the database change. The memory structures that must be
managed are the system global area (SGA) and the instance program global area (instance PGA).
Oracle Database supports various memory management methods, which are chosen by
initialization parameter settings. Oracle recommends that you enable the method known
as automatic memory management.
Beginning with Release 11g, Oracle Database can manage the SGA memory and instance PGA
memory completely automatically. You designate only the total memory size to be used by the
instance, and Oracle Database dynamically exchanges memory between the SGA and the
instance PGA as needed to meet processing demands. This capability is referred to as automatic
memory management. With this memory management method, the database also dynamically
tunes the sizes of the individual SGA components and the sizes of the individual PGAs.
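One way to enable automatic memory management, sketched here with an illustrative size, is
to set the MEMORY_TARGET initialization parameter and restart the instance:

ALTER SYSTEM SET MEMORY_TARGET = 2G SCOPE = SPFILE;

With MEMORY_TARGET set, the instance automatically redistributes memory between the SGA
and the instance PGA as demands change.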
If you prefer to exercise more direct control over the sizes of individual memory components,
you can disable automatic memory management and configure the database for manual memory
management. There are a few different methods available for manual memory management.
Some of these methods retain some degree of automation. The methods therefore vary in the
amount of effort and knowledge required by the DBA. These methods are automatic shared
memory management and manual shared memory management for the SGA, and automatic PGA
memory management and manual PGA memory management for the instance PGA.
Oracle Database uses memory to store information such as the following:
Program code
Information about a connected session, even if it is not currently active
Information needed during program execution (for example, the current state of a query
from which rows are being fetched)
Information that is shared and communicated among Oracle processes (for example,
locking information)
Cached data that is also permanently stored on peripheral memory (for example, data
blocks and redo log entries)
The basic memory structures associated with an Oracle instance include:
System Global Area (SGA), which is shared by all server and background processes.
Program Global Areas (PGA), which is private to each server and background process;
there is one PGA for each process.
Control files are listed in the CONTROL_FILES initialization parameter, for example:
CONTROL_FILES = (/u01/oracle/prod/control01.ctl,
/u02/oracle/prod/control02.ctl,
/u03/oracle/prod/control03.ctl)
To add a multiplexed copy of the current control file or to rename a control file: shut down the
database, copy the existing control file to a new location using operating system commands,
edit the CONTROL_FILES parameter to add the new filename or change the existing one, and
restart the database.
Create a new control file for a database if:
All control files for the database have been permanently damaged and you do not have a
control file backup.
You want to change the database name.
If Oracle Database sends you an error (usually error ORA-01173, ORA-01176, ORA-
01177, ORA-01215, or ORA-01216) when you attempt to mount and open the database
after creating a new control file, the most likely cause is that you omitted a file from the
CREATE CONTROLFILE statement or included one that should not have been listed.
Action:
For the former problem, you need to recover the database from a more recent control file.
For the latter problem, simply re-create the control file, making sure that you include all
the datafiles in the system tablespace.
The most crucial structure for recovery operations is the redo log.
It consists of two or more pre-allocated files that store all changes made to the
database as they occur.
Every instance of an Oracle Database has an associated redo log to protect the database in
case of an instance failure.
Redo log files store all the change information for the database and are used by Oracle
during database recovery.
Redo Threads?
When speaking in the context of multiple database instances, the redo log for each
database instance is also referred to as a redo thread.
In typical configurations, only one database instance accesses an Oracle Database, so
only one thread is present.
Redo log files are filled with redo records. A redo record, also called a redo entry, is made up
of a group of change vectors, each of which is a description of a change made to a single block
in the database.
For example, if you change a salary value in an employee table, you generate a redo
record containing change vectors that describe changes to the data segment block for the
table, the undo segment data block, and the transaction table of the undo segments.
Redo records are buffered in a circular fashion in the redo log buffer of the SGA.
Redo records are written to one of the redo log files by the Log Writer (LGWR) database
background process.
Whenever a transaction is committed, LGWR writes the transaction redo records from
the redo log buffer of the SGA to a redo log file, and assigns a system change number
(SCN) to identify the redo records for each committed transaction.
Transactions are committed when redo records are written to the disk safely.
LGWR writes to redo log files in a circular fashion. When the current redo log file fills,
LGWR begins writing to the next available redo log file. When the last available redo log
file is filled, LGWR returns to the first redo log file and writes to it, starting the cycle
again.
Oracle Database uses only one redo log file at a time to store redo records written from
the redo log buffer.
The redo log file that LGWR is actively writing to is called the current redo log file.
Redo log files that are required for instance recovery are called active redo log files.
Redo log files that are no longer required for instance recovery are called inactive redo
log files.
If you have enabled archiving (the database is in ARCHIVELOG mode), then the
database cannot reuse or overwrite an active online log file until one of the archiver
background processes (ARCn) has archived its contents.
A log switch is the point at which the database stops writing to one redo log file and
begins writing to another. Normally, a log switch occurs when the current redo log file is
completely filled and writing must continue to the next redo log file.
Oracle Database assigns each redo log file a new log sequence number every time a log
switch occurs and LGWR begins writing to it.
Multiplexing Redo Log Files?
To protect against a failure involving the redo log itself, Oracle Database allows
a multiplexed redo log, meaning that two or more identical copies of the redo log can be
automatically maintained in separate locations. For the most benefit, these locations
should be on separate disks. Even if all copies of the redo log are on the same disk,
however, the redundancy can help protect against I/O errors, file corruption, and so on.
When redo log files are multiplexed, LGWR concurrently writes the same redo log
information to multiple identical redo log files, thereby eliminating a single point of redo
log failure.
Multiplexing is implemented by creating groups of redo log files. A group consists of a
redo log file and its multiplexed copies. Each identical copy is said to be a member of
the group. Each redo log group is defined by a number, such as group 1, group 2, and so
on.
Plan the redo log of a database and create all required groups and members of redo log files
during database creation. However, there are situations where you might want to create
additional groups or members. For example, adding groups to a redo log can correct redo log
group availability problems.
To create new redo log groups and members, you must have the ALTER DATABASE system
privilege. A database can have up to MAXLOGFILES groups.
To create a new group of redo log files, use the SQL statement ALTER DATABASE
with the ADD LOGFILE clause.
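For example (the filenames and size are illustrative):

ALTER DATABASE
ADD LOGFILE ('/oracle/dbs/log1c.rdo', '/oracle/dbs/log2c.rdo') SIZE 100M;

This creates a new group with two members, one at each location listed.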
To assign a number to a new group of redo log files, use the SQL statement ALTER DATABASE
with the ADD LOGFILE GROUP clause, as sketched below. Using group numbers can make
administering redo log groups easier. However, the group number must be between 1 and
MAXLOGFILES. Do not skip redo log file group numbers (that is, do not number your groups
10, 20, 30, and so on), or you will consume unnecessary space in the control files of the
database.
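A sketch with an explicit group number (the filenames are illustrative):

ALTER DATABASE
ADD LOGFILE GROUP 10 ('/oracle/dbs/log1d.rdo', '/oracle/dbs/log2d.rdo') SIZE 100M;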
In some cases, it might not be necessary to create a complete group of redo log files. A group
could already exist, but not be complete because one or more members of the group were
dropped (for example, because of a disk failure). In this case, you can add new members to an
existing group.
To create new redo log members for an existing group, use the SQL statement ALTER
DATABASE with the ADD LOGFILE MEMBER clause. The following statement adds a new redo log
member to redo log group number 2 (the filename is illustrative):
ALTER DATABASE ADD LOGFILE MEMBER '/oracle/dbs/log2b.rdo' TO GROUP 2;
Notice that filenames must be specified, but sizes need not be. The size of the new members is
determined from the size of the existing members of the group.
When using the ALTER DATABASE statement, you can alternatively identify the target group by
specifying all of the other members of the group in the TO clause, as shown in the following
example (filenames illustrative):
ALTER DATABASE ADD LOGFILE MEMBER '/oracle/dbs/log2c.rdo'
TO ('/oracle/dbs/log2a.rdo', '/oracle/dbs/log2b.rdo');
You can use operating system commands to relocate redo logs, then use the ALTER
DATABASE statement to make their new names (locations) known to the database. This
procedure is necessary, for example, if the disk currently used for some redo log files is going to
be removed, or if datafiles and a number of redo log files are stored on the same disk and should
be separated to reduce contention.
To rename redo log members, you must have the ALTER DATABASE system privilege.
Additionally, you might also need operating system privileges to copy files to the desired
location and privileges to open and back up the database.
Before relocating your redo logs, or making any other structural changes to the database,
completely back up the database in case you experience problems while performing the
operation. As a precaution, after renaming or relocating a set of redo log files, immediately back
up the database control file.
Operating system files, such as redo log members, must be copied using the appropriate
operating system commands. See your operating system specific documentation for more
information about copying files.
1. Use operating system commands (UNIX in this example) to move the redo log
members to a new location:
mv /diska/logs/log1a.rdo /diskc/logs/log1c.rdo
mv /diska/logs/log2a.rdo /diskc/logs/log2c.rdo
2. Connect to the database and start it in mount mode:
CONNECT / as SYSDBA
STARTUP MOUNT
3. Use the ALTER DATABASE statement with the RENAME FILE clause to rename the
database redo log files:
ALTER DATABASE
RENAME FILE '/diska/logs/log1a.rdo', '/diska/logs/log2a.rdo'
TO '/diskc/logs/log1c.rdo', '/diskc/logs/log2c.rdo';
The redo log alterations take effect when the database is opened.
In some cases, you may want to drop an entire group of redo log members. For example, you
may want to reduce the number of groups in an instance redo log. In a different case, you may want to
drop one or more specific redo log members. For example, if a disk failure occurs, you may need
to drop all the redo log files on the failed disk so that the database does not try to write to the
inaccessible files. In other situations, particular redo log files become unnecessary. For example,
a file might be stored in an inappropriate location.
To drop a redo log group, you must have the ALTER DATABASE system privilege. Before
dropping a redo log group, consider the following restrictions and precautions:
An instance requires at least two groups of redo log files, regardless of the number of
members in the groups. (A group comprises one or more members.)
You can drop a redo log group only if it is inactive. If you need to drop the current group,
first force a log switch to occur.
Make sure a redo log group is archived (if archiving is enabled) before dropping it. To
see whether this has happened, use the V$LOG view.
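To drop a group, use the ALTER DATABASE statement with the DROP LOGFILE clause; for
example, the following sketch drops group 3:

ALTER DATABASE DROP LOGFILE GROUP 3;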
To drop a redo log member, you must have the ALTER DATABASE system privilege. Consider
the following restrictions and precautions before dropping individual redo log members:
It is permissible to drop redo log files so that a multiplexed redo log becomes temporarily
asymmetric. For example, if you use duplexed groups of redo log files, you can drop one
member of one group, even though all other groups have two members each. However, you
should rectify this situation immediately so that all groups have at least two members, and
thereby eliminate the single point of failure possible for the redo log.
An instance always requires at least two valid groups of redo log files, regardless of the number
of members in the groups. (A group comprises one or more members.) If the member you want
to drop is the last valid member of the group, you cannot drop the member until the other
members become valid. To see a redo log file status, use the V$LOGFILE view. A redo log file
becomes INVALID if the database cannot access it. It becomes STALE if the database suspects
that it is not complete or correct. A stale log file becomes valid again the next time its group is
made the active group.
You can drop a redo log member only if it is not part of an active or current group. If you want to
drop a member of an active group, first force a log switch to occur.
Make sure the group to which a redo log member belongs is archived (if archiving is enabled)
before dropping the member. To see whether this has happened, use the V$LOG view.
To drop specific inactive redo log members, use the ALTER DATABASE statement with the
DROP LOGFILE MEMBER clause.
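For example, the following sketch drops one member (the filename is illustrative):

ALTER DATABASE DROP LOGFILE MEMBER '/oracle/dbs/log3c.rdo';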
You can force a log switch to make the currently active group inactive and available for redo log
maintenance operations. For example, you want to drop the currently active group, but are not
able to do so until the group is inactive. You may also wish to force a log switch if the currently
active group needs to be archived at a specific time before the members of the group are
completely filled. This option is useful in configurations with large redo log files that take a long
time to fill.
To force a log switch, you must have the ALTER SYSTEM privilege. Use the ALTER SYSTEM
statement with the SWITCH LOGFILE clause.
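For example:

ALTER SYSTEM SWITCH LOGFILE;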
You can configure the database to use checksums to verify blocks in the redo log files. If you set
the initialization parameter DB_BLOCK_CHECKSUM to TYPICAL (the default), the database
computes a checksum for each database block when it is written to disk, including each redo log
block as it is being written to the current log. The checksum is stored in the header of the block.
Oracle Database uses the checksum to detect corruption in a redo log block. The database
verifies the redo log block when the block is read from an archived log during recovery and
when it writes the block to an archive log file. An error is raised and written to the alert log if
corruption is detected.
If corruption is detected in a redo log block while trying to archive it, the system attempts to read
the block from another member in the group. If the block is corrupted in all members of the redo
log group, then archiving cannot proceed.
The value of the DB_BLOCK_CHECKSUM parameter can be changed dynamically using the
ALTER SYSTEM statement.
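For example:

ALTER SYSTEM SET DB_BLOCK_CHECKSUM = TYPICAL;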
A redo log file might become corrupted while the database is open, and ultimately stop database
activity because archiving cannot continue. In this situation the ALTER DATABASE CLEAR
LOGFILE statement can be used to reinitialize the file without shutting down the database.
The following statement clears the log files in redo log group number 3:
ALTER DATABASE CLEAR LOGFILE GROUP 3;
If you clear a log file that is needed for recovery of a backup, then you can no longer recover
from that backup. The database writes a message in the alert log describing the backups from
which you cannot recover.
If you want to clear an unarchived redo log that is needed to bring an offline tablespace online,
use the UNRECOVERABLE DATAFILE clause in the ALTER DATABASE CLEAR
LOGFILE statement.
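A sketch of this variant (the group number is illustrative):

ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3 UNRECOVERABLE DATAFILE;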
If you clear a redo log needed to bring an offline tablespace online, you will not be able to bring
the tablespace online again. You will have to drop the tablespace or perform an incomplete
recovery. Note that tablespaces taken offline with the NORMAL option do not require recovery.
Oracle Database lets you save filled groups of redo log files to one or more offline destinations,
known collectively as the archived redo log. The process of turning redo log files into archived
redo log files is called archiving. This process is only possible if the database is running
in ARCHIVELOG mode. You can choose automatic or manual archiving.
An archived redo log file is a copy of one of the filled members of a redo log group. It includes
the redo entries and the unique log sequence number of the identical member of the redo log
group. For example, if you are multiplexing your redo log, and if group 1 contains identical
member files a_log1 and b_log1, then the archiver process (ARCn) will archive one of these
member files. Should a_log1 become corrupted, then ARCn can still archive the
identical b_log1. The archived redo log contains a copy of every group created since you enabled
archiving.
When the database is running in ARCHIVELOG mode, the log writer process (LGWR) cannot
reuse and hence overwrite a redo log group until it has been archived. The background process
ARCn automates archiving operations when automatic archiving is enabled. The database starts
multiple archiver processes as needed to ensure that the archiving of filled redo logs does not fall
behind.
You can use archived redo log files to:
Recover a database
Update a standby database
Get information about the history of a database using the LogMiner utility
The choice of whether to enable the archiving of filled groups of redo log files depends on the
availability and reliability requirements of the application running on the database. If you cannot
afford to lose any data in your database in the event of a disk failure, use ARCHIVELOG mode. The
archiving of filled redo log files can require you to perform extra administrative operations.
When you run your database in NOARCHIVELOG mode, you disable the archiving of the redo log.
The database control file indicates that filled groups are not required to be archived. Therefore,
when a filled group becomes inactive after a log switch, the group is available for reuse by
LGWR.
NOARCHIVELOG mode protects a database from instance failure but not from media failure. Only
the most recent changes made to the database, which are stored in the online redo log groups, are
available for instance recovery. If a media failure occurs while the database is
in NOARCHIVELOG mode, you can only restore the database to the point of the most recent full
database backup. You cannot recover transactions subsequent to that backup.
In NOARCHIVELOG mode you cannot perform online tablespace backups, nor can you use online
tablespace backups taken earlier while the database was in ARCHIVELOG mode. To restore a
database operating in NOARCHIVELOG mode, you can use only whole database backups taken
while the database is closed. Therefore, if you decide to operate a database
in NOARCHIVELOG mode, take whole database backups at regular, frequent intervals.
When you run a database in ARCHIVELOG mode, you enable the archiving of the redo log. The
database control file indicates that a group of filled redo log files cannot be reused by LGWR
until the group is archived. A filled group becomes available for archiving immediately after a
redo log switch occurs.
Archiving filled groups of redo log files provides these advantages:
A database backup, together with online and archived redo log files, guarantees that you
can recover all committed transactions in the event of an operating system or disk failure.
If you keep an archived log, you can use a backup taken while the database is open and in
normal system use.
You can keep a standby database current with its original database by continuously
applying the original archived redo logs to the standby.
Archiving: after a log switch, a copy of the filled redo log file is sent to the archive destination.
Setting the Initial Database Archiving Mode
You set the initial archiving mode as part of database creation in the CREATE
DATABASE statement. Usually, you can use the default of NOARCHIVELOG mode at database
creation, because there is no need to archive the redo information generated during database
creation. After creating the database, decide whether to change the initial archiving mode.
If you specify ARCHIVELOG mode, you must have initialization parameters set that specify the
destinations for the archived redo log files.
An open database must first be closed and any associated instances shut down before you can
switch the database archiving mode. You cannot change the mode
from ARCHIVELOG to NOARCHIVELOG if any datafiles need media recovery.
Before making any major change to a database, always back up the database to protect against
any problems. This will be your final backup of the database in NOARCHIVELOG mode and
can be used if something goes wrong during the change to ARCHIVELOG mode.
To change the database archiving mode to ARCHIVELOG:
1. Shut down the database instance.
2. Back up the database.
3. Edit the initialization parameter file to include the initialization parameters that specify
the destinations for the archived redo log files (a sketch follows this list).
4. Start a new instance and mount, but do not open, the database:
STARTUP MOUNT
To enable or disable archiving, the database must be mounted but not open.
5. Change the database archiving mode, then open the database for normal operations:
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
6. Shut down the database:
SHUTDOWN IMMEDIATE
7. Back up the database.
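For step 3, a minimal sketch of the relevant parameters (directory path and log-name format are illustrative):
LOG_ARCHIVE_DEST_1 = 'LOCATION=/u01/ordata/arch'
LOG_ARCHIVE_FORMAT = 'arch_%t_%s_%r.arc'  -- thread, log sequence, and resetlogs ID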
Changing the database archiving mode updates the control file. After changing the database
archiving mode, you must back up all of your database files and control file. Any previous
backup is no longer usable because it was taken in NOARCHIVELOG mode.
You can choose to archive redo logs to a single destination or to multiple destinations.
Destinations can be local—within the local file system or an Oracle Automatic Storage
Management (Oracle ASM) disk group—or remote.
When you archive to multiple destinations, a copy of each filled redo log file is written to
each destination. These redundant copies help ensure that archived logs are always
available in the event of a failure at one of the destinations.
To archive to only a single destination, specify that destination using the
LOG_ARCHIVE_DEST initialization parameter. To archive to multiple destinations, you can
either archive to two or more locations using the LOG_ARCHIVE_DEST_n initialization
parameters, or archive only to a primary and a secondary destination using the
LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST initialization parameters.
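For example, two archive destinations might be configured as follows (paths are illustrative):
LOG_ARCHIVE_DEST_1 = 'LOCATION=/u01/ordata/arch'
LOG_ARCHIVE_DEST_2 = 'LOCATION=/u02/ordata/arch'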
Physical database design is the process of choosing specific storage structures and access paths
for the database files to achieve good performance for the various database applications.
Each DBMS offers a variety of options for file organization and access paths, including various
types of indexing, clustering of related records on disk blocks, linking of related records via
pointers, and various types of hashing and partitioning.
For example, Oracle provides tablespaces as its unit for partitioning storage, while MS SQL
Server provides the concept of sub-databases.
Design decisions about indexing concern attributes that are used in equality, inequality, or range
conditions in queries and joins:
Whether to index an attribute.
What attribute(s) to index on (a composite key, or one key attribute combined with a
non-key attribute). A sketch of both options follows this list.
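For illustration, assuming a hypothetical employees table:
-- Single-attribute index supporting equality and range conditions on department_id
CREATE INDEX emp_dept_idx ON employees (department_id);
-- Composite index on a key attribute plus a non-key attribute
CREATE INDEX emp_id_name_idx ON employees (emp_id, last_name);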
Databases, tablespaces, and datafiles are closely related, but they have important differences:
An Oracle database consists of one or more logical storage units called tablespaces,
which collectively store all of the database's data.
Each tablespace in an Oracle database consists of one or more files called datafiles,
which are physical structures that conform to the operating system in which Oracle is
running.
A database's data is collectively stored in the datafiles that constitute each tablespace of
the database. For example, the simplest Oracle database would have one tablespace and
one datafile. Another database can have three tablespaces, each consisting of two
datafiles (for a total of six datafiles).
Using multiple tablespaces provides several advantages. You can:
Separate user data from data dictionary data to reduce I/O contention.
Separate the data of one application from the data of another, so that taking one
tablespace offline does not affect multiple applications.
Store the data files of different tablespaces on different disk drives to reduce I/O
contention.
Take individual tablespaces offline while others remain online, providing better overall
availability.
Optimize tablespace use by reserving a tablespace for a particular type of database use,
such as high update activity, read-only activity, or temporary segment storage.
Back up individual tablespaces.
Grant users who will be creating tables, clusters, materialized views, indexes, and
other objects both the privilege to create the object and a quota (a space allowance or
limit) in the tablespace intended to hold the object's segment.
For PL/SQL objects such as packages, procedures, and functions, users only need the
privileges to create the objects. No explicit tablespace quota is required to create these
PL/SQL objects
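As a brief sketch of granting a create privilege and a quota (the user name, password, quota, and tablespace are illustrative):
CREATE USER usr1 IDENTIFIED BY xyz;
GRANT CREATE SESSION, CREATE TABLE TO usr1;  -- privileges to connect and create tables
ALTER USER usr1 QUOTA 50M ON users;          -- 50 MB space allowance on the users tablespace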
Types of Tablespaces
SYSTEM tablespace: created with the database and required in every database.
Non-SYSTEM tablespaces: separate segments from one another, ease space administration,
and control the amount of space allocated to a user.
CREATE TABLESPACE
When you create a tablespace, it is initially a read/write tablespace. You can subsequently use
the ALTER TABLESPACE statement to take the tablespace offline or online, add datafiles or
tempfiles to it, or make it a read-only tablespace.
You can also drop a tablespace from the database with the DROP TABLESPACE statement.
Creating a Tablespace using SQL
Windows platform:
-- tablespace name and SIZE are illustrative; only the datafile path differs by platform
CREATE TABLESPACE wrk01
  DATAFILE 'C:\Oracle11g\ordata\wrk01.dbf' SIZE 100M;
Linux/UNIX platform:
CREATE TABLESPACE wrk01
  DATAFILE '/u01/ordata/wrk01.dbf' SIZE 100M;
Tablespaces allocate space in extents. Tablespaces can be created to use one of the following two
different methods of keeping track of free and used space:
Locally managed tablespaces: The extents are managed within the tablespace via bitmaps.
Each bit in the bitmap corresponds to a block or a group of blocks.
When an extent is allocated or freed for reuse, the Oracle server changes the bitmap values to
show the new status of the blocks. Locally managed is the default, beginning with Oracle9i.
Dictionary-managed tablespaces: The extents are managed by the data dictionary. The Oracle
server updates the appropriate tables in the data dictionary whenever an extent is allocated or
deallocated.
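A minimal sketch of both methods (names, paths, and sizes are illustrative; note that creating a dictionary-managed tablespace is not allowed when the SYSTEM tablespace is locally managed):
CREATE TABLESPACE lmtbs
  DATAFILE '/u01/ordata/lmtbs01.dbf' SIZE 50M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K;  -- extents tracked via bitmaps
CREATE TABLESPACE dmtbs
  DATAFILE '/u01/ordata/dmtbs01.dbf' SIZE 50M
  EXTENT MANAGEMENT DICTIONARY;               -- extents tracked in the data dictionary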
You can create tablespaces with block sizes different from the standard database block size,
which is specified by the DB_BLOCK_SIZE initialization parameter. This feature lets you
transport tablespaces with unlike block sizes between databases.
Use the BLOCKSIZE clause of the CREATE TABLESPACE statement to create a tablespace
with a block size different from the database standard block size. In order for the BLOCKSIZE
clause to succeed, you must have already set the DB_CACHE_SIZE and at least one
DB_nK_CACHE_SIZE initialization parameter. Further, and the integer you specify in the
BLOCKSIZE clause must correspond with the setting of one DB_nK_CACHE_SIZE parameter
setting. Although redundant, specifying a BLOCKSIZE equal to the standard block size, as
specified by the DB_BLOCK_SIZE initialization parameter, is allowed.
The following statement creates tablespace lmtbsb, but specifies a block size that differs from the
standard database block size (as specified by the DB_BLOCK_SIZE initialization parameter):
CREATE TABLESPACE lmtbsb
  DATAFILE '/u01/ordata/lmtbsb01.dbf' SIZE 50M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K
  BLOCKSIZE 8K;
Bigfile Tablespaces
Oracle Database enables the creation of bigfile tablespaces. A bigfile tablespace consists of a
single data or temporary file which can be up to 128 TB. The use of bigfile tablespaces can
significantly reduce the number of data files for your database. Oracle Database supports parallel
RMAN backup and restore on single data files.
A bigfile tablespace with 8K blocks can contain a 32 terabyte data file. A bigfile
tablespace with 32K blocks can contain a 128 terabyte data file. The maximum number
of data files in an Oracle database is limited (usually to 64K files). Therefore, bigfile
tablespaces can significantly enhance the storage capacity of an Oracle database.
Bigfile tablespaces can reduce the number of data files needed for a database. An
additional benefit is that the DB_FILES initialization parameter and MAXDATAFILES
parameter of the CREATE DATABASE and CREATE CONTROLFILE statements can
be adjusted to reduce the amount of SGA space required for data file information and the
size of the control file.
Bigfile tablespaces simplify database management by providing data file transparency.
SQL syntax for the ALTER TABLESPACE statement lets you perform operations on
tablespaces, rather than the underlying individual data files.
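A minimal sketch of creating a bigfile tablespace (name, path, and size are illustrative):
CREATE BIGFILE TABLESPACE bigtbs
  DATAFILE '/u01/ordata/bigtbs01.dbf' SIZE 10G;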
You can take an online tablespace offline so that it is temporarily unavailable for general
use.
The rest of the database remains open and available for users to access data.
Conversely, you can bring an offline tablespace online to make the schema objects within
the tablespace available to database users.
The database must be open to alter the availability of a tablespace.
You may want to take a tablespace offline for any of the following reasons:
To make a portion of the database unavailable while allowing normal access to the
remainder of the database
To perform an offline tablespace backup (even though a tablespace can be backed up
while online and in use)
To make an application and its group of tables temporarily unavailable while updating or
maintaining the application
To rename or relocate tablespace data files
You cannot take the following tablespaces offline:
SYSTEM
The undo tablespace
Temporary tablespaces
Before you can make a tablespace read-only, the following conditions must be met:
The tablespace must be online. This is necessary to ensure that there is no undo
information that must be applied to the tablespace.
The tablespace cannot be the active undo tablespace or SYSTEM tablespace.
The tablespace must not currently be involved in an online backup, because the end of a
backup updates the header file of all data files in the tablespace.
The tablespace cannot be a temporary tablespace.
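For illustration, the following statements change tablespace availability (the tablespace name is assumed):
ALTER TABLESPACE users OFFLINE NORMAL;  -- temporarily unavailable; no recovery required later
ALTER TABLESPACE users ONLINE;
ALTER TABLESPACE users READ ONLY;       -- disallow further writes
ALTER TABLESPACE users READ WRITE;      -- make writable again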
You can increase the size of a tablespace by either increasing the size of a datafile in the
tablespace or adding a datafile. See "Creating Datafiles and Adding Datafiles to a Tablespace"
for more information.
Additionally, you can enable automatic file extension (AUTOEXTEND) for datafiles and
bigfile tablespaces. See "Enabling and Disabling Automatic Extension for a Datafile".
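Both approaches can be sketched as follows (the path and sizes are illustrative):
ALTER DATABASE DATAFILE '/u01/ordata/wrk01.dbf' RESIZE 200M;  -- enlarge an existing datafile
ALTER DATABASE DATAFILE '/u01/ordata/wrk01.dbf'
  AUTOEXTEND ON NEXT 10M MAXSIZE 500M;  -- grow automatically in 10 MB steps, up to 500 MB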
You cannot alter a locally managed tablespace to a locally managed temporary tablespace,
nor can you change its method of segment space management. Coalescing free extents is
unnecessary for locally managed tablespaces. However, you can use the ALTER
TABLESPACE statement on locally managed tablespaces for some operations, including the
following:
Adding a datafile. For example:
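-- Tablespace name, path, and size are illustrative.
ALTER TABLESPACE lmtbs ADD DATAFILE '/u01/ordata/lmtbs02.dbf' SIZE 50M;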
Renaming a datafile, or enabling or disabling the autoextension of the size of a datafile in the
tablespace.
Two clauses of the ALTER TABLESPACE statement support datafile transparency when
you are using bigfile tablespaces:
RESIZE: The RESIZE clause lets you resize the single datafile in a bigfile tablespace to an
absolute size, without referring to the datafile. For example:
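-- Tablespace name and size are illustrative.
ALTER TABLESPACE bigtbs RESIZE 80G;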
With a bigfile tablespace, you can use the AUTOEXTEND clause outside of the ADD
DATAFILE clause. For example:
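-- Tablespace name and increment are illustrative.
ALTER TABLESPACE bigtbs AUTOEXTEND ON NEXT 20G;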
An error is raised if you specify an ADD DATAFILE clause for a bigfile tablespace.
You can use ALTER TABLESPACE to add a tempfile, take a tempfile offline, or bring a
tempfile online, as illustrated in the following examples:
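-- Tablespace name, path, and size are illustrative.
ALTER TABLESPACE lmtemp ADD TEMPFILE '/u01/ordata/lmtemp02.dbf' SIZE 20M;
ALTER TABLESPACE lmtemp TEMPFILE OFFLINE;
ALTER TABLESPACE lmtemp TEMPFILE ONLINE;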
You cannot take a temporary tablespace offline. Instead, you take its tempfile offline. The
view V$TEMPFILE displays online status for a tempfile.
The following statements take offline and bring online tempfiles. They behave identically to
the last two ALTER TABLESPACE statements in the previous example.
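-- Path is illustrative; these statements reference the tempfile directly.
ALTER DATABASE TEMPFILE '/u01/ordata/lmtemp02.dbf' OFFLINE;
ALTER DATABASE TEMPFILE '/u01/ordata/lmtemp02.dbf' ONLINE;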
The following statement drops a tempfile and deletes its operating system file:
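-- Path is illustrative.
ALTER DATABASE TEMPFILE '/u01/ordata/lmtemp02.dbf' DROP INCLUDING DATAFILES;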
The tablespace to which this tempfile belonged remains. A message is written to the alert log
for the tempfile that was deleted. If an operating system error prevents the deletion of the file,
the statement still succeeds, but a message describing the error is written to the alert log.
It is also possible to use the ALTER DATABASE statement to enable or disable the
automatic extension of an existing tempfile, and to rename a tempfile.
Using the RENAME TO clause of the ALTER TABLESPACE statement, you can rename a
permanent or temporary tablespace. For example, the following statement renames the users tablespace:
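-- The new name is illustrative.
ALTER TABLESPACE users RENAME TO usersts;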
When you rename a tablespace the database updates all references to the tablespace name in the
data dictionary, control file, and (online) data file headers.
Dropping Tablespaces
You can drop a tablespace and its contents (the segments contained in the tablespace)
from the database if the tablespace and its contents are no longer required. You must have
the DROP TABLESPACE system privilege to drop a tablespace.
You cannot drop a tablespace that contains any active segments.
For example, if a table in the tablespace is currently being used or the tablespace contains
undo data needed to roll back uncommitted transactions, you cannot drop the tablespace.
To drop a tablespace, use the DROP TABLESPACE statement. The following statement
drops the users tablespace, including the segments in the tablespace:
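-- INCLUDING CONTENTS drops the segments along with the tablespace.
DROP TABLESPACE users INCLUDING CONTENTS;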
For example, after the users tablespace is dropped, a subsequent attempt by user usr1 to create a
table fails:
SQL> shutdown immediate
SQL> startup open
SQL> conn usr1/xyz
SQL> create table t1
  2  (id number);
ERROR at line 1:
The SYSTEM tablespace is created with the database and is required whenever the database is
started and used.
It holds prebundled code and the data dictionary.
It contains system objects, your objects, and users/privileges.
Creating TEMP Tablespace
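A minimal sketch (name, path, and size are illustrative):
CREATE TEMPORARY TABLESPACE temp01
  TEMPFILE '/u01/ordata/temp01.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;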
The end