Chapter 9: The Database Approach
Database Management Systems
Objectives for Chapter 9
Problems inherent in the flat file approach to data management
that gave rise to the database concept
Relationships among the defining elements of the database
environment
Anomalies caused by unnormalized databases and the need for
data normalization
Stages in database design: entity identification, data modeling, constructing the physical database, and preparing user views
Features of distributed databases and issues to consider in deciding on a particular database configuration
Flat-File Versus Database Environments
Computer processing involves two components: data and instructions (programs).
Conceptually, there are two methods for designing the interface between program instructions and data:
File-oriented processing: a specific data file was created for each application
Data-oriented processing: create a single data repository to support numerous applications
Disadvantages of file-oriented processing include redundant data and programs and varying formats for storing the redundant data.
The Database Approach
Solves the following problems of the flat-file approach:
no data redundancy - except for primary keys, data is only stored once
single update
current values
task-data independence - users have access to the full domain of data available to the firm
A database is a set of computer files that minimizes data redundancy and is accessed by one or more application programs for data processing.
The database approach to data storage applies whenever a database is established to serve two or more applications, organizational units, or types of users.
A database management system (DBMS) is a computer program that enables users to create, modify, and utilize database information efficiently.
Advantages of the Database Approach
Data sharing through a centralized database resolves the flat-file problems:
No data redundancy: Data is stored only once, eliminating data
redundancy and reducing storage costs.
Single update: Because data is in only one place, it requires
only a single update, reducing the time and cost of keeping the
database current.
Current values: A change to the database made by any user
yields current data values for all other users.
Task-data independence: As users’ information needs expand,
the new needs can be more easily satisfied than under the flat-
file approach.
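As a small, hedged illustration of the single-update and current-values advantages, consider a hypothetical Customer table shared by the billing and shipping applications: one UPDATE statement changes the single stored copy, and every application that reads it afterward sees the current value.

    -- Hypothetical shared table; under the flat-file approach each
    -- application would keep its own copy of this address.
    UPDATE Customer
    SET    Address = '12 Elm St.'
    WHERE  CustomerID = 1001;
    -- One update in one place; all other users now read the current value.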
Disadvantages of the Database Approach
Can be costly to implement
additional hardware, software, storage, and network
resources are required
Can only run in certain operating environments, which may make it unsuitable for some system configurations
Because it is so different from the file-oriented approach, the database approach requires training users
There may be inertia or resistance to the change
Flat-File Environment
Users access data via computer programs that process the data and present information to the users.
Users own their data files.
Data redundancy results as multiple applications maintain the same data elements.
Files and data elements used in more than one application must be duplicated, which results in data redundancy.
As a result of redundancy, the characteristics of data elements and their values are likely to be inconsistent.
Outputs usually consist of preprogrammed reports instead of ad hoc queries provided upon request, which results in inaccessibility of data.
Changes to current file-oriented applications cannot be made easily, nor can new developments be quickly realized, which results in inflexibility.
Data Redundancy and Flat-File Problems
Data Storage - storing the same data in multiple files creates excessive storage costs, whether in paper documents or in magnetic form
Data Updating - any changes or additions must be performed multiple times
Currency of Information - the risk of failing to update all affected files
Task-Data Dependency - the user's inability to obtain additional information as his or her needs change
Elements of the Database Environment
The key elements (the users, the DBMS and its languages, the database administrator, and the physical database) are described in the sections that follow.
Internal Controls and DBMS
The database management system (DBMS) stands between
the user and the database per se.
Thus, commercial DBMS products (e.g., Access or Oracle) actually consist of a database plus:
software to manage the database, especially for controlling access and other internal controls
software to generate reports, create data-entry forms, etc.
The DBMS includes special software that knows which data elements each user is authorized to access and denies unauthorized requests for data.
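As a hedged sketch of how the DBMS can limit the data elements a user may access (table, column, and user names are hypothetical, and authorization syntax varies by product), a restricted user view is defined and the user is granted access to the view rather than to the underlying table:

    -- Employee holds a sensitive Salary column.
    CREATE VIEW EmployeePhoneList AS
        SELECT EmployeeID, Name, Extension   -- Salary deliberately excluded
        FROM   Employee;

    GRANT SELECT ON EmployeePhoneList TO clerk_jones;    -- only authorized elements
    REVOKE ALL PRIVILEGES ON Employee FROM clerk_jones;  -- unauthorized requests denied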
DBMS Features
Program Development - user created applications
Backup and Recovery - copies database
Database Usage Reporting - captures statistics on database
usage (who, when, etc.)
Database Access - authorizes access to sections of the
database
Also…
User Programs - makes the presence of the DBMS
transparent to the user
Direct Query - allows authorized users to access data
without programming
Data Definition Language (DDL)
DDL is a programming language used to define the database
per se.
It identifies the names and the relationship of all data
elements, records, and files that constitute the database.
DDL defines the database on three viewing levels
Internal view – the physical arrangement of records (one view)
Conceptual view (schema) – a representation of the entire database (one view)
User view (subschema) – the portion of the database each user views (many views)
Data Manipulation Language (DML)
DML is the proprietary programming language that a particular DBMS uses to retrieve, process, and store data to/from the database.
Entire user programs may be written in the DML, or selected DML commands can be inserted into universal programs, such as COBOL and FORTRAN.
DML can be used to 'patch' third-party applications to the DBMS.
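A brief, generic SQL sketch of the two languages (table and column names are hypothetical): the DDL statement defines a record type and its data elements, while the DML statements store and retrieve data; in older environments the same DML could be embedded in a COBOL or FORTRAN host program.

    -- DDL: define a file/record structure in the database
    CREATE TABLE Customer (
        CustomerID INTEGER PRIMARY KEY,
        Name       VARCHAR(40) NOT NULL,
        Address    VARCHAR(80)
    );

    -- DML: store and retrieve data from the defined table
    INSERT INTO Customer (CustomerID, Name, Address)
    VALUES (1001, 'Acme Supply', '12 Elm St.');

    SELECT Name, Address
    FROM   Customer
    WHERE  CustomerID = 1001;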
Query Language
The query capability permits end users and professional
programmers to access data in the database without the need
for conventional programs.
Can be an internal control issue since users may be
making an ‘end run’ around the controls built into the
conventional programs
IBM's Structured Query Language (SQL) is a fourth-generation
language that has emerged as the standard query language.
Adopted by ANSI as the standard language for all
relational databases
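The kind of ad hoc request a query language permits can be sketched as follows, assuming hypothetical Customer and SalesInvoice tables; the user retrieves the data directly, without a conventional application program and therefore without the controls that program might enforce.

    -- Ad hoc query: unpaid invoices over $5,000, largest first
    SELECT c.Name, s.InvoiceNum, s.Amount, s.DueDate
    FROM   Customer     c
    JOIN   SalesInvoice s ON s.CustomerID = c.CustomerID
    WHERE  s.Paid   = 'N'
      AND  s.Amount > 5000
    ORDER BY s.Amount DESC;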
Functions of the DBA
Database Conceptual Models
Refers to the particular method used to organize records in a
database
A.k.a. “logical data structures”
Objective: develop the database efficiently so that data can be
accessed quickly and easily
There are three main models:
hierarchical (tree structure)
network
relational
Most existing databases are relational. Some legacy systems
use hierarchical or network databases.
The Relational Model
The relational model portrays data in the form of two-dimensional tables.
Its strength is the ease with which tables may be linked to one another, a major weakness of hierarchical and network databases.
The relational model is based on the relational algebra functions of restrict, project, and join.
Properly Designed Relational Tables
Each row in the table must be unique in at least one attribute, which is the primary key.
Tables are linked by embedding the primary key of one table into the related table as a foreign key.
The attribute values in any column must all be of the same class or data type.
Each column in a given table must be uniquely named.
Tables must conform to the rules of normalization, i.e., be free from structural dependencies or anomalies.
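A minimal sketch of these rules, using hypothetical Customer and SalesOrder tables: each table has a primary key, the Customer key is embedded in SalesOrder as a foreign key to link the two, each column holds a single data type, and each column name is unique within its table.

    CREATE TABLE Customer (
        CustomerID  INTEGER PRIMARY KEY,   -- every row unique on this attribute
        Name        VARCHAR(40) NOT NULL,
        CreditLimit DECIMAL(10,2)          -- one data type per column
    );

    CREATE TABLE SalesOrder (
        OrderNum   INTEGER PRIMARY KEY,
        OrderDate  DATE,
        CustomerID INTEGER REFERENCES Customer (CustomerID)  -- embedded foreign key
    );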
Three Types of Anomalies
Insertion Anomaly: A new item cannot be added to the table
until at least one entity uses a particular attribute item.
Deletion Anomaly: If an attribute item used by only one entity
is deleted, all information about that attribute item is lost.
Update Anomaly: A modification on an attribute must be made
in each of the rows in which the attribute appears.
Anomalies can be corrected by creating additional relational
tables.
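A hypothetical example of the update anomaly: if the supplier's address is repeated in every Inventory row, a single address change must be applied to every row in which it appears, and missing one row leaves inconsistent data.

    -- Unnormalized table:
    --   Inventory(PartNum, Description, SupplierNum, SupplierAddress)
    UPDATE Inventory
    SET    SupplierAddress = '44 Oak Ave.'
    WHERE  SupplierNum = 22;   -- must touch every row carrying supplier 22
    -- Storing the supplier in its own table reduces this to a single-row
    -- update and also removes the insertion and deletion anomalies.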
Advantages of Relational Tables
Removes all three types of anomalies
Various items of interest (customers, inventory, sales) are
stored in separate tables.
Space is used efficiently.
Very flexible – users can form ad hoc relationships
The Normalization Process
A process which systematically splits unnormalized complex tables into smaller tables that meet two conditions:
all nonkey (secondary) attributes in the table are dependent on the primary key
all nonkey attributes are independent of the other nonkey attributes
When unnormalized tables are split and reduced to third normal form, they must then be linked together by foreign keys.
Associations and Cardinality
Association – the labeled line connecting two entities or tables in a data model
Describes the nature of the association between them
Represented with a verb, such as ships, requests, or receives
Cardinality – the degree of association between two entities
The number of possible occurrences in one table that are associated with a single occurrence in a related table
Used to determine primary keys and foreign keys
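A hedged sketch of the outcome of normalization, continuing the hypothetical Inventory/Supplier example: the unnormalized table is split into two third-normal-form tables in which every nonkey attribute depends only on its table's primary key, and the tables are re-linked by a foreign key (one supplier is associated with many parts, a 1:M cardinality).

    -- Third normal form tables produced by splitting
    -- Inventory(PartNum, Description, SupplierNum, SupplierAddress)
    CREATE TABLE Supplier (
        SupplierNum     INTEGER PRIMARY KEY,
        SupplierAddress VARCHAR(80)          -- depends only on SupplierNum
    );

    CREATE TABLE Inventory (
        PartNum     INTEGER PRIMARY KEY,
        Description VARCHAR(40),             -- depends only on PartNum
        SupplierNum INTEGER REFERENCES Supplier (SupplierNum)  -- linking foreign key
    );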
Accountants and Data Normalization
Update anomalies can generate conflicting and obsolete database values.
Insertion anomalies can result in unrecorded transactions and incomplete audit trails.
Deletion anomalies can cause the loss of accounting records and the destruction of audit trails.
Accountants should understand the data normalization process and be able to determine whether a database is properly normalized.
Six Phases in Designing Relational Databases
1. Identify entities
identify the primary entities of the organization
construct a data model of their relationships
2. Construct a data model showing entity associations
determine the associations between entities
model associations into an ER diagram
3. Add primary keys and attributes
assign primary keys to all entities in the model to uniquely identify records
every attribute should appear in one or more user views
4. Normalize and add foreign keys
remove repeating groups, partial and transitive dependencies
assign foreign keys to be able to link tables
5. Construct the physical database
create physical tables
populate tables with data
6. Prepare the user views
normalized tables should support all required views of system users
user views restrict users from having access to unauthorized data
Distributed Data Processing (DDP)
Data processing is organized around several information processing units (IPUs) distributed throughout the organization.
Each IPU is placed under the control of the end user.
DDP does not always mean total decentralization.
IPUs in a DDP system are still connected to one another and coordinated.
Typically, DDP systems use a centralized database.
Alternatively, the database can be distributed, similar to the distribution of the data processing capability.
Decentralization, by contrast, does not attempt to integrate the parts into a logical whole unit.
Centralized Databases in a DDP Environment
The data is retained in a central location.
Remote IPUs send requests for data.
The central site services the needs of the remote IPUs.
The actual processing of the data is performed at the remote IPU.
Advantages of DDP
Cost reductions in hardware and data entry tasks
Improved cost control responsibility
Improved user satisfaction since control is closer to the user level
Backup of data can be improved through the use of multiple data storage sites
Disadvantages of DDP
Loss of control
Mismanagement of resources
Hardware and software incompatibility
Redundant tasks and data
Consolidating incompatible tasks
Difficulty attracting qualified personnel
Lack of standards
Data Currency
A data currency problem occurs in DDP with a centralized database.
During transaction processing, data will temporarily be inconsistent as records are read and updated.
Database lockout procedures are necessary to keep IPUs from reading inconsistent data and from writing over a transaction being written by another IPU.
Distributed Databases: Partitioning
Splits the central database into segments that are distributed to their primary users
Advantages:
users' control is increased by having data stored at local sites
transaction processing response time is improved
the volume of transmitted data between IPUs is reduced
the potential for data loss from a disaster is reduced
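A schematic sketch of partitioning in plain SQL (table and site names are hypothetical): the central accounts receivable data is split into segments stored at the sites that are their primary users, and a view can reassemble the segments for organization-wide reporting. A real distributed DBMS provides its own partitioning and location-transparency mechanisms; this only illustrates the idea.

    -- Each segment is stored at, and controlled by, its primary user site
    CREATE TABLE AR_SiteA (AcctNum INTEGER PRIMARY KEY, Balance DECIMAL(12,2));
    CREATE TABLE AR_SiteB (AcctNum INTEGER PRIMARY KEY, Balance DECIMAL(12,2));

    -- A company-wide view reassembles the partitions when needed
    CREATE VIEW AR_All AS
        SELECT AcctNum, Balance FROM AR_SiteA
        UNION ALL
        SELECT AcctNum, Balance FROM AR_SiteB;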
The Deadlock Phenomenon
Especially a problem with partitioned databases
Occurs when multiple sites lock each other out of data that they
are currently using
One site needs data locked by another site.
Special software is needed to analyze and resolve conflicts.
Transactions may be terminated and restarted.
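A hedged sketch of how a deadlock can arise, written as two concurrent transactions from two sites and assuming a DBMS that supports SELECT ... FOR UPDATE row locking (syntax varies by product): each transaction locks one record and then waits for the record the other holds, so neither can finish until the DBMS's deadlock-resolution software terminates and restarts one of them.

    -- Transaction at Site 1
    BEGIN;
    SELECT Balance   FROM AR  WHERE AcctNum = 1 FOR UPDATE;  -- locks the AR row

    -- Transaction at Site 2 (running concurrently)
    BEGIN;
    SELECT QtyOnHand FROM INV WHERE PartNum = 9 FOR UPDATE;  -- locks the INV row

    -- Site 1 now requests the row Site 2 holds, and waits ...
    SELECT QtyOnHand FROM INV WHERE PartNum = 9 FOR UPDATE;
    -- Site 2 now requests the row Site 1 holds, and also waits ...
    SELECT Balance   FROM AR  WHERE AcctNum = 1 FOR UPDATE;
    -- Deadlock: neither transaction can proceed; one must be rolled back
    -- and restarted.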
Distributed Databases: Replication
The duplication of the entire database for multiple IPUs
Effective for situations with a high degree of data sharing, but
no primary user
Supports read-only queries
Data traffic between sites is reduced considerably.
Concurrency Problems and Control Issues
Database concurrency is the presence of complete and
accurate data at all IPU sites.
With replicated databases, maintaining current data at all
locations is difficult.
Time stamping is used to serialize transactions.
Prevents and resolves conflicts created by updating
data at various IPUs
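A simplified, single-table illustration of time-stamp control (column names and literals are hypothetical, and real replicated DBMSs implement this internally): each record carries the time stamp of its last change, and an incoming update is applied only if it is newer than what is already stored, so conflicting updates from different IPUs can be serialized or rejected.

    -- LastUpdated holds the time stamp of the record's most recent change.
    UPDATE Customer
    SET    Address     = '12 Elm St.',
           LastUpdated = TIMESTAMP '2024-03-01 10:15:00'
    WHERE  CustomerID  = 1001
      AND  LastUpdated < TIMESTAMP '2024-03-01 10:15:00';
    -- If no row is updated, a newer change has already been applied and
    -- the older, conflicting update is discarded or retried.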
Distributed Databases and the Accountant
The following database options impact the organization’s ability
to maintain database integrity, to preserve audit trails, and to
have accurate accounting records.
Centralized or distributed data?
If distributed, replicated or partitioned?
If replicated, total or partial replication?
If partitioned, what allocation of the data segments
among the sites?