B.M.S.
COLLEGE OF ENGINEERING, BANGALORE-19
(Autonomous Institute, Affiliated to VTU)
Department of Computer Science & Business Systems
COURSE MATERIAL – MODULE 2
Course Code: 25CS1ESEIT/25CS2ESEIT
Course Title: Essentials of Information Technology
Module-2: Operating Systems: The History of Operating Systems, Operating System
Architecture, Coordinating the Machine’s Activities, Handling Competition Among Processes,
Security. Algorithms: The Concept of an Algorithm, Algorithm Representation, Algorithm
Discovery.
Textbook 1: Chapter 3, Chapter 5 (5.1-5.3)
Suggested Learning Resources (Textbook/Reference Book):
I. Textbooks:
1. J. Glenn Brookshear and Dennis Brylow, Computer Science: An
Overview, 12th Edition, Pearson Education Limited, 2017
2. Roy, Shambhavi; Daniel, Clinton; and Agrawal, Manish, "Fundamentals of Information
Technology", Digital Commons at The University of South Florida (2023)
II. Reference books:
1. V. Rajaraman, “Introduction to Information Technology”, Third Edition, PHI Learning,
2018
2. “Introduction to Information Technology”, 2nd Edition, Pearson, 2012
3. Pelin Aksoy, Information Technology in Theory, First Edition, Cengage
III. Web links and Video Lectures (e-Resources):
1. Information-Technology:
https://onlinecourses.swayam2.ac.in/cec20_cs05/preview
2. Computer Organization and Architecture:
https://nptel.ac.in/courses/106103068
3. Introduction To Internet: https://nptel.ac.in/courses/106105084
Part 1
Operating Systems
• Operating System (OS) Definition: The software that controls the overall operation of a
computer.
• Key Functions of an OS:
File Management: Allows users to store and retrieve files.
Program Execution Interface: Provides an interface for users to request the execution of programs.
Execution Environment: Creates the necessary environment to execute requested programs.
Examples of Operating Systems
a) Windows
Developed by Microsoft.
Available in numerous versions.
Widely used in the personal computer (PC) market.
b) UNIX
A well-established OS choice for larger computer systems and PCs.
Core of two other popular operating systems:
Mac OS: Developed by Apple for its range of Mac machines.
Solaris: Developed by Sun Microsystems (now owned by Oracle).
c) Linux
Originally developed non-commercially by computer enthusiasts.
Available through various commercial sources, including IBM.
It is used on both large and small machines.
2.1 The History of Operating Systems
Computers of the 1940s and 1950s were large, inflexible, and inefficient. Modern operating
systems are complex software packages evolved from simple beginnings.
2.1.1 Characteristics of Early Computers
a) Physical Size: Machines occupied entire rooms.
b) Program Execution:
Required extensive preparation.
Mounting magnetic tapes.
Using punched cards in card readers.
Setting switches and other configurations.
The machine was prepared for executing the program, the program was executed, and then all the tapes, punched cards, etc., had to be retrieved before preparation for the next program could begin.
c) User Interaction - Machine Sharing:
Users shared machines using sign-up sheets to reserve time blocks. During allocated time,
users had complete control over the machine.
Sessions typically began with program setup, followed by short execution periods, often
rushed to accommodate the next user.
2.1.2 Development of Operating Systems
a) Initial Purpose:
Designed to simplify program setup and streamline transitions between jobs.
b) Separation of Users and Equipment:
One early development was the separation of users and equipment, which eliminated the physical
transition of people in and out of the computer room. For this purpose a computer operator was
hired to operate the machine. Anyone wanting a program run was required to submit it, along with
any required data and special directions about the program’s requirements, to the operator and
return later for the results. The operator, in turn, loaded these materials into the machine’s mass
storage where a program called the operating system could read and execute them one at a time.
This was the beginning of batch processing.
2.1.3 Batch Processing
Batch processing: The execution of jobs collected in a single batch, processed without further user
interaction.
Job Queue
a) Functionality:
Jobs reside in mass storage and wait for execution in a job queue.
The queue operates on a first-in, first-out (FIFO) basis, meaning jobs are executed in the order
they arrive.
b) Priority Handling:
Most job queues do not strictly adhere to FIFO due to priority considerations.
Higher-priority jobs can preempt those waiting in the queue.
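A job queue that is FIFO by default but lets higher-priority jobs move ahead can be sketched with Python's `heapq` module. All names and priority values here are illustrative, not part of any real operating system:

```python
import heapq
import itertools

class JobQueue:
    """A job queue: FIFO by default, but higher-priority jobs jump ahead.

    Lower 'priority' numbers run first; an arrival counter breaks ties so
    that equal-priority jobs keep their first-in, first-out order.
    """
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # arrival order

    def submit(self, job, priority=10):
        heapq.heappush(self._heap, (priority, next(self._counter), job))

    def next_job(self):
        priority, _, job = heapq.heappop(self._heap)
        return job

queue = JobQueue()
queue.submit("payroll")                       # default priority
queue.submit("report")                        # same priority, arrives later
queue.submit("emergency-backup", priority=1)  # jumps the queue

order = [queue.next_job() for _ in range(3)]
print(order)  # ['emergency-backup', 'payroll', 'report']
```

The tuple ordering `(priority, arrival, job)` is what makes the queue FIFO within a priority level, mirroring the "mostly FIFO, with preemption by priority" behaviour described above.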
c) Job Control Language (JCL)
Instructions:
Each job includes a set of instructions (encoded using a Job Control Language) that detail the
preparation steps for the machine.
These instructions are stored with the job in the job queue.
d) Execution Process:
• When a job is selected, the operating system prints the instructions for the computer operator to
follow.
• This communication method persists in modern systems, where error messages (e.g., "network
not available") are displayed to users.
e) Drawbacks of Batch Processing
• Lack of User Interaction:
Users have no interaction with their jobs once submitted to the operator.
Suitable for applications like payroll processing, where all data and decisions are
predetermined.
• Unsuitable Applications:
Not ideal for applications requiring real-time user interaction, such as:
• Reservation Systems: Need immediate updates for reservations and cancellations.
• Word Processing: Involves dynamic writing and rewriting of documents.
• Computer Games: Require continuous interaction with the user.
2.1.4 Interactive Processing
a) New operating systems were developed to allow programs to carry on a dialogue with
users through remote terminals. This feature is known as interactive processing.
For successful interactive processing, the computer's actions must be sufficiently fast to
coordinate with the user's needs.
b) If the computer is too slow, it forces the user to conform to the machine's timetable, which
can be frustrating.
c) For example, payroll processing can be scheduled to fit the computer's processing time,
but interactive applications like word processors require prompt responses as the user types.
d) The development of interactive processing capabilities in operating systems was a key
advancement to enable responsive and user-friendly computing experiences, in contrast to
batch processing systems that were less accommodating of user needs and timing.
Real-Time Processing
Real-time processing refers to the computer performing tasks in accordance with deadlines in the
external environment.
Real-time processing is crucial for interactive applications like word processors, where prompt
responses are required.
Challenges in Multi-User Environments
In the 1960s and 1970s, computers were expensive and had to serve multiple users simultaneously.
If the operating system could only execute one job at a time, only one user would receive
satisfactory real-time service.
Time-Sharing Systems
Time-sharing operating systems were designed to provide service to multiple users concurrently.
Multiprogramming is a technique used to implement time-sharing, where time is divided into
intervals and jobs are rapidly switched between intervals.
Time-sharing systems could provide acceptable real-time processing to up to 30 users
simultaneously.
Multitasking
Multiprogramming techniques are also used in single-user systems, where it is called multitasking.
Multitasking refers to a single user executing multiple tasks simultaneously, while time-sharing
refers to multiple users sharing access to a common computer.
Typical Computer Installations
With the development of multiuser, time-sharing operating systems, a typical setup involved a
large central computer connected to numerous workstations.
Users could directly communicate with the computer from the workstations, rather than submitting
requests to a computer operator.
Commonly used programs were stored in the machine's mass storage, and the operating system
would execute these as requested from the workstations.
Decline of the Computer Operator Role
The role of the computer operator as an intermediary between users and the computer has largely
disappeared, especially with the rise of personal computers.
Today, the computer user assumes most of the responsibilities of computer operation.
The job of computer operator has evolved into that of a system administrator, who manages the
computer system and coordinates problem resolution.
2.2 Operating System Architecture
2.2.1 Software Survey
Software refers to the programs, applications, and data that run on a computer system.
Two Broad Categories of Software:
a) Application Software:
Programs designed for specific tasks related to the machine's use.
Examples include spreadsheets, database systems, desktop publishing, accounting
systems, program development software, and games.
Varies based on the machine's purpose (e.g., manufacturing vs. engineering).
b) System Software:
Provides common tasks essential for computer systems.
Acts as the infrastructure for application software, similar to a nation's infrastructure.
Divided into two main categories:
Utility Software: Programs that perform fundamental tasks not included in the operating
system. Examples include disk formatting, file copying, and data compression/decompression.
Operating System: The core software managing hardware and software resources.
The Operating System component is further divided into two sub-components
a. User Interface: This refers to the graphical user interface (GUI) or command-line
interface (CLI) that allows users to interact with the computer system.
b. Kernel: The kernel is the core of the operating system, responsible for managing the
computer's hardware resources.
2.2.2 Components of an Operating System
1. User Interface (UI) - The user interface is crucial for communication between the user and the
operating system. It can be categorized into two main types:
a) Shells
Definition: Older text-based interfaces that allow users to interact with the operating system using
command-line inputs.
Functionality: Users type commands using a keyboard, and the system responds with textual output on a
monitor.
b) Graphical User Interfaces (GUIs)
Definition: Modern interfaces that display objects such as files and programs as graphical icons on the
screen.
Input Devices:
1. Mouse: Used for clicking and dragging icons.
2. Styluses: Special-purpose devices often used by graphic artists.
3. Touch Screens: Allow direct manipulation of icons with fingers.
• Advanced Interfaces: 3D Interfaces: Research is ongoing into interfaces that utilize 3D projection
systems, tactile devices, and surround sound audio.
• Customizability
• Some operating systems, like UNIX, allow users to choose from various shells (e.g., Bourne
shell, C shell, Korn shell) or GUIs (e.g., X11).
• Microsoft Windows originally ran as a GUI application on top of MS-DOS; modern Windows
still includes a command shell (cmd.exe).
• Apple’s OS X includes a Terminal utility shell, reflecting its UNIX heritage.
• Window Manager
• Role: Manages the display of applications on the screen through windows.
• Functionality:
Allocates screen space for each application.
Handles mouse actions and notifies applications of user interactions.
• Customization: Offers various styles and configurations; popular choices on Linux include
KDE and Gnome.
• Kernel
The kernel is the core component of the operating system responsible for managing system
resources and hardware interactions.
• File Manager
• Function: Coordinates access to mass storage.
• Responsibilities:
Maintains records of all files stored in mass storage.
Tracks file locations, user permissions, and available storage space.
Retrieves records when storage media is accessed, ensuring the system knows the stored
files.
1. File Management
• File managers organize files into directories (or folders), allowing users to group related
files for better organization.
a) Directories and Subdirectories
• Directories: Bundles of files organized by purpose.
• Subdirectories: Directories within directories, enabling a hierarchical structure.
• Example Structure: A directory MyRecords might contain the subdirectories
FinancialRecords and MedicalRecords.
b) Directory Paths
• Definition: A sequence of directories leading to a specific file or subdirectory.
• Path Expression: Typically expressed using slashes (/) on UNIX/Linux or backslashes (\)
on Windows.
• Example
Path: animals/prehistoric/dinosaurs (UNIX) or animals\prehistoric\dinosaurs (Windows).
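The two path conventions can be explored with Python's standard `pathlib` module, which parses both styles into the same directory hierarchy:

```python
from pathlib import PurePosixPath, PureWindowsPath

# A directory path names the chain of directories leading to a file or
# subdirectory; only the separator character differs between conventions.
unix_path = PurePosixPath("animals/prehistoric/dinosaurs")
win_path = PureWindowsPath(r"animals\prehistoric\dinosaurs")

print(unix_path.parts)   # ('animals', 'prehistoric', 'dinosaurs')
print(win_path.parts)    # ('animals', 'prehistoric', 'dinosaurs')

# pathlib also exposes the enclosing directory and the final component:
print(unix_path.parent)  # animals/prehistoric
print(unix_path.name)    # dinosaurs
```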
File Access
• Access to files is managed through a process called "opening the file."
• The file manager grants access and provides the necessary information for file
manipulation.
c) Device Drivers
• Device drivers are specialized software components that facilitate communication between
the operating system and peripheral devices.
Functionality:
• Each driver is tailored for a specific device (e.g., printer, disk drive).
• Translates generic commands into device-specific actions.
• Handles technical details, allowing other software components to interact with devices
without needing in-depth knowledge of device operations.
d) Memory Management
The memory manager is responsible for managing the computer's main memory.
1. Basic Functions
In single-task environments, memory management is straightforward, with programs loaded
sequentially.
In multitasking environments, the memory manager must allocate memory for multiple
programs simultaneously.
2. Responsibilities
Assigns memory space to active programs and data blocks.
Ensures that programs operate within their allocated memory space.
Keeps track of free and occupied memory areas.
3. Virtual Memory
When the total memory required exceeds available memory, the memory manager
uses paging to create the illusion of additional memory.
Paging: Involves moving data between main memory and mass storage.
Example: If 8GB of memory is needed but only 4GB is available, the memory manager
reserves additional space on a disk to store data in pages (usually a few KB each).
This method allows the system to function as if it has more memory than physically available.
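The arithmetic behind the 8 GB/4 GB example can be made concrete. The 4 KB page size below is an assumption for illustration (the text says only "a few KB each"):

```python
# Rough arithmetic behind the virtual memory example above.
GB = 1024 ** 3
KB = 1024

memory_needed = 8 * GB      # what the running programs require
memory_installed = 4 * GB   # physical main memory actually present
page_size = 4 * KB          # a typical page size (assumption)

# The shortfall that must live on disk at any one moment:
overflow = memory_needed - memory_installed
pages_on_disk = overflow // page_size

print(pages_on_disk)  # 1048576 pages must be held in mass storage
```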
e) Additional Kernel Components
Scheduler: Determines which processes or tasks are eligible for execution in a
multiprogramming system.
Dispatcher: Controls the allocation of CPU time to these tasks.
2.3 Coordinating the Machine’s Activities
2.3.1 The Concept of a Process
• A process is the dynamic activity of executing a program, as opposed to the static program
itself.
• Each process has an associated process state, which includes the current position in the
program, CPU register values, and memory contents.
• Processes represent the "active" execution of programs, as opposed to the "passive"
programs sitting on storage.
• Multitasking and Process Management
Typical computers run multiple processes concurrently, all competing for system
resources.
The operating system's role is to manage these processes, ensuring each has the
necessary resources (devices, memory, CPU time, etc.).
The OS must prevent interference between independent processes and facilitate
information exchange between cooperating processes.
Key OS Components for Process Management
a) Scheduler - Determines which processes are eligible for execution at a given
time.
b) Dispatcher - Controls the allocation of CPU time to the scheduled processes.
c) Memory manager - Coordinates the use of main memory by multiple
concurrent processes.
d) File manager - Mediates access to files and storage by different processes.
e) Device drivers - Provide abstraction layer for processes to interact with
peripherals.
Process Administration
a) To keep track of all the processes, the scheduler maintains a block of information in main
memory called the process table.
b) Each time the execution of a program is requested, the scheduler creates a new entry for that
process in the process table. This entry contains such information as the memory area
assigned to the process (obtained from the memory manager), the priority of the process, and
whether the process is ready or waiting.
c) A process is ready if it is in a state in which its progress can continue; it is waiting if its
progress is currently delayed until some external event occurs, such as the completion of a
mass storage operation, the pressing of a key at the keyboard, or the arrival of a message
from another process.
Multiprogramming and Time Slicing
a. The dispatcher uses multiprogramming to manage process execution.
b. Time is divided into short "time slices" (typically in milliseconds or microseconds).
c. The dispatcher switches the CPU's attention between processes, allowing each a time slice.
d. This process of switching from one process to another is called a "process switch" or "context
switch".
Interrupt Handling and Process Switching
a. At the end of each time slice, a timer interrupt signal is generated.
b. The CPU saves the current process state and transfers control to the dispatcher's interrupt
handler.
c. The dispatcher's interrupt handler selects the next highest priority ready process from the
process table.
d. The dispatcher then restores the saved state of the selected process, allowing it to resume
execution.
Preserving Process State
a. Preserving and restoring process state is crucial for successful process switching.
b. The process state includes the program counter, CPU registers, and relevant memory contents.
c. CPUs designed for multiprogramming have built-in support for saving and restoring process
state during interrupts.
d. This simplifies the dispatcher's task of performing efficient process switches.
Benefits of Multiprogramming
a. Multiprogramming improves overall system efficiency by utilizing "lost time" during I/O
operations.
b. While one process waits for I/O, the dispatcher can allocate CPU time to other ready
processes.
c. This overlapping of I/O and CPU utilization reduces the total time to complete a set of tasks.
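The time-slice mechanism described above can be sketched as a small round-robin simulation. The process names and CPU times are invented for illustration; a real dispatcher works at the level of interrupts and saved register states, not Python lists:

```python
from collections import deque

def round_robin(processes, time_slice):
    """Simulate a dispatcher giving each ready process one time slice in turn.

    'processes' maps a process name to its remaining CPU time; the returned
    schedule records each (process, time used) slice in execution order.
    """
    ready = deque(processes.items())
    schedule = []
    while ready:
        name, remaining = ready.popleft()
        used = min(time_slice, remaining)
        schedule.append((name, used))        # process runs for one slice
        remaining -= used
        if remaining > 0:
            ready.append((name, remaining))  # timer interrupt: back of queue
    return schedule

# Three processes needing 3, 5, and 2 ms of CPU, with a 2 ms time slice:
plan = round_robin({"A": 3, "B": 5, "C": 2}, time_slice=2)
print(plan)
# [('A', 2), ('B', 2), ('C', 2), ('A', 1), ('B', 2), ('B', 1)]
```

Note how "A" and "B" each reach the CPU several times: each re-queueing corresponds to a timer interrupt and a process switch in a real system.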
Part-2
Algorithms
2.4 The Concept of an Algorithm
An Informal Review
• Throughout our studies, we have encountered a variety of algorithms, each serving
different purposes. Here are some notable examples:
a) Numeric Conversions: Algorithms that convert numeric representations
from one format to another.
b) Error Detection and Correction: Methods for identifying and rectifying
errors in data.
c) Data Compression: Techniques for reducing the size of data files for storage
or transmission.
• The CPU Cycle as an Algorithm
The machine cycle followed by a CPU can be viewed as a simple algorithm:
a) Fetch an instruction.
b) Decode the instruction.
c) Execute the instruction.
This cycle continues until a halt instruction is executed.
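The fetch-decode-execute cycle can be sketched as a toy interpreter. The two-field instruction format and the opcodes LOAD/ADD/HALT are invented purely for this illustration, not taken from any real machine language:

```python
def run(program):
    """A toy fetch-decode-execute loop over a made-up instruction set.

    Each instruction is a (opcode, operand) pair; the machine has a
    single accumulator register and a program counter.
    """
    pc = 0    # program counter
    acc = 0   # accumulator register
    while True:
        opcode, operand = program[pc]  # fetch
        pc += 1
        if opcode == "LOAD":           # decode and execute
            acc = operand
        elif opcode == "ADD":
            acc += operand
        elif opcode == "HALT":
            return acc                 # a halt instruction ends the cycle

result = run([("LOAD", 5), ("ADD", 3), ("ADD", 2), ("HALT", 0)])
print(result)  # 10
```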
• Algorithms in Everyday Life
Algorithms are not limited to technical tasks; they are present in everyday activities as
well. For instance, consider the algorithm for shelling peas:
Obtain a basket of unshelled peas and an empty bowl.
While there are unshelled peas in the basket, execute the following steps:
a. Take a pea from the basket.
b. Break open the pea pod.
c. Dump the peas from the pod into the bowl.
d. Discard the pod.
This example illustrates that algorithms can govern even the simplest tasks.
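The pea-shelling steps map directly onto a while loop. This sketch models each pod as a list of peas, an assumption made only to keep the example runnable:

```python
def shell_peas(basket):
    """The pea-shelling algorithm above, expressed as a loop.

    'basket' is a list of pods; each pod is a list of peas.
    Returns the bowl of shelled peas.
    """
    bowl = []
    while basket:            # while there are unshelled peas in the basket
        pod = basket.pop()   # take a pea pod from the basket
        bowl.extend(pod)     # break it open; dump the peas into the bowl
        # the emptied pod is simply discarded
    return bowl

bowl = shell_peas([[1, 2, 3], [4, 5], [6]])
print(len(bowl))  # 6 peas end up in the bowl
```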
The Formal Definition of an Algorithm
The Structure and Requirements of an Algorithm
In this section, we will explore the essential characteristics and requirements that define
an algorithm, emphasizing the importance of order, executability , unambiguity, and
termination.
Importance of Order - An algorithm must have a well-defined structure regarding the
order of execution of its steps. However, this does not imply that all algorithms follow a
strict linear sequence (first step, second step, etc.).
Parallel Algorithms- Some algorithms, known as parallel algorithms, consist of multiple
sequences of steps that can be executed simultaneously by different processors in a
multiprocessor system. In these cases, the algorithm's structure resembles multiple threads
that branch and reconnect as different processors handle various parts of the task.
Example: Circuit Algorithms- An example of an algorithm executed by circuits, such as
flip-flops, involves gates performing individual steps. Here, the steps are ordered by cause
and effect, as each gate's action propagates throughout the circuit.
Executable Steps: For an algorithm to be valid, it must consist of executable steps. Let us
consider the instruction: "Make a list of all the positive integers."This instruction is
impossible to perform because there are infinitely many positive integers. Consequently,
any algorithm containing such an instruction would not be valid.
Unambiguous Steps - An algorithm's steps must be unambiguous, meaning that during
execution, the information available must uniquely and completely specify the actions
required for each step. This ensures that executing an algorithm does not require creative
skills; instead, it requires the ability to follow clear directions.
Termination of Processes- An algorithm must define a terminating process, meaning that
its execution must lead to an end. This requirement stems from theoretical computer
science, which seeks to answer questions about the limitations of algorithms and
machines. It distinguishes between problems that can be solved algorithmically and those
that cannot.
Nonterminating Processes While termination is a key requirement, there are meaningful
applications for nonterminating processes, such as: Monitoring a hospital patient's vital
signs.
Informal Use of "Algorithm"- In informal contexts, the term "algorithm" is often used
to refer to sets of steps that may not define terminating processes. For instance, the long-
division "algorithm" for dividing 1 by 3 does not terminate. Such instances technically
represent a misuse of the term but reflect the flexibility of the term in applied settings.
2.5 Algorithm Representation
Primitives in Algorithm Representation
The representation of an algorithm requires a specific form of language. This section delves
into the concept of primitives, which serve as fundamental building blocks for algorithm
representation.
Language and Communication Challenges
a) Natural Languages and Visual Representations
b) Humans can use various forms of communication, such as: Traditional Natural Languages:
Examples include English, Spanish, Russian, and Japanese.
Visual Languages: Such as diagrams or pictures, which can illustrate processes (e.g., folding a
bird from a square piece of paper). However, these natural communication methods often lead
to misunderstandings due to:
Ambiguity in Terminology: For instance, the phrase "Visiting grandchildren can be nerve-
racking" may imply either that the grandchildren cause stress or that the act of visiting them is
stressful.
Varying Levels of Detail: Instructions may lack sufficient detail. Few people could successfully
fold a bird using vague directions, while an experienced origami student would find it
straightforward. These challenges highlight the need for a precise and adequately detailed
language for representing algorithms.
The Role of Primitives
1. Definition of Primitives - In computer science, the ambiguity of natural languages is addressed
by establishing a well-defined set of building blocks known as primitives. By assigning precise
definitions to these primitives, many communication problems are alleviated.
2. Characteristics of Primitives
a) Uniform Level of Detail: Requiring algorithms to be described using primitives ensures
consistency in detail across different representations.
b) Syntax and Semantics:
Syntax: Refers to the symbolic representation of a primitive, specifying how it is written
or structured.
Semantics: Refers to the meaning of the primitive, defining what it represents or does.
c) Example of Primitives - For example, the syntax of "air" consists of three symbols, while
its semantics refers to the gaseous substance surrounding the Earth. Similarly, in origami,
specific primitives are used to describe the folds and techniques involved in creating
paper figures.
d) Higher-Level Primitives and Programming Languages- To represent algorithms suitable
for computer execution, we can utilize the individual instructions that a machine is
designed to execute. While expressing an algorithm at this low level ensures machine
compatibility, it can be tedious.
e) Higher-Level Primitives- Instead, programmers typically use a collection of higher-level
primitives, which are abstract tools constructed from the lower-level primitives provided
in the machine's language. This approach allows for:
f) Conceptual Abstraction: Algorithms can be expressed at a higher conceptual level than
in machine language, making them easier to understand and implement.
g) Formal Programming Languages- The result of using higher-level primitives is the
creation of formal programming languages, which provide a structured way to express
algorithms.
Pseudocode Overview
• Pseudocode is a simplified, informal way to express algorithms without adhering strictly to
the syntax of a formal programming language.
• It allows developers to focus on the logic and structure of their algorithms in a more intuitive
manner.
• Key Features of Pseudocode
a) Informal Notation: Pseudocode uses a less formal structure than programming
languages, making it easier to understand and communicate ideas.
b) Syntax Borrowing: It often borrows syntax from popular programming
languages like Python, Java, C, Algol, and Pascal. This helps those familiar with
these languages to grasp the pseudocode quickly.
c) Consistency: A good pseudocode must maintain a consistent notation for
recurring semantic structures, which aids in clarity and understanding.
• Basic Structure: Pseudocode typically involves:
1. Control Structures: Such as if, else, for, and while.
2. Indentation: To denote blocks of code, similar to Python.
3. Colons: To indicate the start of a block following control structures.
• Example of Pseudocode
Here’s an example demonstrating how to handle leap years when calculating daily
totals:
if (year is leap year):
    daily_total = total / 366
else:
    daily_total = total / 365
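Because the pseudocode borrows Python's syntax, it is only a short step to a runnable version. The totals below are made-up figures chosen so the division comes out evenly:

```python
import calendar

def daily_total(total, year):
    """Runnable version of the leap-year pseudocode above."""
    if calendar.isleap(year):   # 366 days in a leap year
        return total / 366
    else:
        return total / 365

print(daily_total(7320, 2024))  # 2024 is a leap year: 7320 / 366 = 20.0
print(daily_total(7300, 2023))  # 7300 / 365 = 20.0
```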
• Semantic Structures
Pseudocode can represent various semantic structures, such as:
1. Conditionals: Using if and else to control the flow based on conditions.
2. Assignments: Assigning computed values to variables.
• Pseudocode for Conditional and Repetitive Structures
Pseudocode can effectively express both conditional actions and repetitive tasks.
Here’s how to structure both cases using the Python-like syntax.
• Conditional Statements
For cases where there is no else activity, we can represent conditions simply. For
example, the statement:
"Should it be the case that sales have decreased, lower the price by 5%." can be
expressed in pseudocode as:
if (sales have decreased):
    lower the price by 5%
• Repetitive Structures
For repeated execution of statements as long as a condition remains true, we adopt
the while structure. This is useful for scenarios like:
"As long as there are tickets to sell, continue selling tickets."
This can be written in pseudocode as:
while (tickets remain to be sold):
    sell a ticket
• Indentation and Nested Structures
Indentation is crucial in pseudocode, especially for nested structures. For example:
if (not raining):
    if (temperature == hot):
        go swimming
    else:
        play golf
else:
    watch television
In this example:
The inner if statement (checking temperature) only executes if the outer condition
(not raining) is true.
The else for play golf is associated with the inner if, not the outer one.
• Functions in Pseudocode - In our pseudocode, we define reusable units of code as functions.
We will use the keyword def to declare a function.
For example, a function that prints "Hello" three times can be represented as:
def Greetings():
    print "Hello"
    print "Hello"
    print "Hello"
To call this function elsewhere in the pseudocode, we simply use its name:
Greetings()
• Generic Functions and Parameters
Functions should be designed to be as generic as possible. For instance, a sorting function
can be defined as follows:
def Sort(List):
    // Code to sort the List
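The comment placeholder leaves the sorting method open. As one hypothetical body for this generic function, an insertion sort fits the signature (the choice of insertion sort is ours, not prescribed by the text):

```python
def Sort(List):
    """One possible body for the generic Sort above: an insertion sort.

    Sorts List in place; works for any items that support '<' comparison.
    """
    for i in range(1, len(List)):
        pivot = List[i]
        j = i - 1
        while j >= 0 and List[j] > pivot:  # shift larger entries right
            List[j + 1] = List[j]
            j -= 1
        List[j + 1] = pivot                # drop the pivot into its slot

names = ["Carol", "Alice", "Bob"]
Sort(names)
print(names)  # ['Alice', 'Bob', 'Carol']
```

Because the function takes the list as a parameter, the same code sorts names, numbers, or any other comparable items, which is what makes it generic.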
The following worked examples use the same pseudocode style:
1. Finding the Maximum of Two Numbers
if (number1 > number2):
    max = number1
else:
    max = number2
output "The maximum number is", max
2. Calculating the Sum of Numbers from 1 to N
sum = 0
for (i from 1 to N):
    sum = sum + i
output "The sum is", sum
3. Checking if a Number is Even or Odd
if (number MOD 2 = 0):
    output number, "is even"
else:
    output number, "is odd"
4. Finding the Factorial of a Number
factorial = 1
for (i from 1 to N):
    factorial = factorial * i
output "The factorial of", N, "is", factorial
5. Checking if a Number is Positive, Negative, or Zero
if (number > 0):
    output number, "is positive"
else if (number < 0):
    output number, "is negative"
else:
    output number, "is zero"
6. Calculating the Average of Three Numbers
average = (number1 + number2 + number3) / 3
output "The average is", average
7. Checking if a Character is a Vowel
if (character == 'a' or character == 'e' or character == 'i' or character == 'o' or character == 'u'):
    output character, "is a vowel"
else:
    output character, "is not a vowel"
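Several of the examples above translate almost line for line into runnable Python. The function names in this sketch are our own:

```python
def sum_to(n):
    """Example 2 as runnable Python: the sum of 1 to N."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def factorial(n):
    """Example 4 as runnable Python."""
    result = 1
    for i in range(1, n + 1):
        result *= i
    return result

def is_vowel(character):
    """Example 7 as runnable Python."""
    return character in ("a", "e", "i", "o", "u")

print(sum_to(10))     # 55
print(factorial(5))   # 120
print(is_vowel("e"))  # True
```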
2.6 Algorithm Discovery
• The development of a program consists of two primary activities:
1. Discovering the Underlying Algorithm: This involves identifying a method
to solve a specific problem.
2. Representing the Algorithm as a Program: This entails translating the
discovered algorithm into a programming language that a computer can
execute.
• The Challenge of Algorithm Discovery
While both activities are crucial, algorithm discovery is often the more challenging
aspect of software development. This is because:
a) Problem Understanding: Discovering an algorithm requires a deep
understanding of the problem at hand. It involves analyzing the problem,
identifying its constraints, and determining what constitutes a valid solution.
b) Creative Thinking: Algorithm discovery often demands creative and critical
thinking. It may require thinking outside the box to find innovative solutions
that are not immediately obvious.
• The Importance of Problem-Solving Process
To effectively discover algorithms, one must understand the broader problem-
solving process. This includes:
a) Identifying the Problem: Clearly defining what needs to be solved.
b) Gathering Information: Collecting relevant data and understanding the context
surrounding the problem.
c) Generating Possible Solutions: Brainstorming various approaches to tackle the
problem.
d) Evaluating and Selecting Solutions: Analyzing potential solutions for
feasibility and effectiveness.
• The techniques of problem solving are essential across various fields, not just in
computer science. The close relationship between algorithm discovery and general
problem-solving has led to collaborative efforts aimed at improving techniques. Despite
the desire to reduce problem-solving to algorithmic processes, it has been shown that
some problems do not have algorithmic solutions. Thus, problem-solving remains more
of an artistic skill than a precise science.
• Polya’s Phases of Problem Solving
Mathematician G. Polya proposed four loosely defined phases of problem solving in
1945, which serve as foundational principles for teaching problem-solving skills:
Phase 1. Understand the Problem: Grasp the requirements and context of the problem.
Phase 2. Devise a Plan: Formulate a strategy for solving the problem.
Phase 3. Carry Out the Plan: Implement the proposed solution.
Phase 4. Evaluate the Solution: Assess the accuracy and applicability of the solution.
• Key Observations
a) Non-Sequential Nature: These phases are not strictly linear. Successful
problem solvers often begin formulating strategies (Phase 2) before fully
understanding the problem (Phase 1). If initial strategies fail, deeper insights
can lead to more effective solutions.
b) Initiative in Problem Solving: The process of solving problems requires
initiative rather than mere adherence to steps. A mindset focused on completing
phases sequentially can hinder success. Engaging deeply with the problem often
leads to a realization of having navigated through Polya’s phases
retrospectively.
c) Understanding Through Action: True understanding often emerges during the
solution process. Insisting on complete understanding before proposing
solutions can be impractical and idealistic.
• Example Problem: Determining Children's Ages
Consider the problem of determining the ages of three children based on the following
clues:
First Clue: The product of their ages is 36.
Second Clue: The sum of their ages does not uniquely identify them.
Third Clue: There is an oldest child.
Step-by-Step Analysis:
First Clue: The product of the ages is 36. Possible combinations include:
(1, 1, 36)
(1, 2, 18)
(1, 3, 12)
(1, 4, 9)
(1, 6, 6)
(2, 2, 9)
(2, 3, 6)
(3, 3, 4)
Second Clue: The sum must not uniquely identify the ages. The sums of the
combinations are:
(1, 6, 6) → Sum = 13
(2, 2, 9) → Sum = 13
All other combinations yield unique sums.
Since both (1, 6, 6) and (2, 2, 9) yield the same sum (13), the correct ages must be among
these combinations.
Third Clue: The existence of an "oldest child" suggests there must be a unique oldest
age. This rules out (1, 6, 6) because it has two children of age 6. Therefore, the only
valid combination is (2, 2, 9).
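The step-by-step reasoning can be checked by brute force. This sketch enumerates every candidate triple and applies the three clues in order:

```python
from itertools import combinations_with_replacement
from collections import Counter

# First clue: all age triples (a <= b <= c) whose product is 36.
triples = [t for t in combinations_with_replacement(range(1, 37), 3)
           if t[0] * t[1] * t[2] == 36]

# Second clue: keep only triples whose sum is NOT unique.
sums = Counter(sum(t) for t in triples)
ambiguous = [t for t in triples if sums[sum(t)] > 1]

# Third clue: there must be a single oldest child, so the largest
# age (the last entry of the sorted triple) must appear exactly once.
answer = [t for t in ambiguous if t.count(t[2]) == 1]
print(answer)  # [(2, 2, 9)]
```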