C Programming Material

The document discusses the history and evolution of computers from early counting tools like the abacus to modern computers. It covers early mechanical calculating devices, the generations of computers from vacuum tubes to integrated circuits, and describes the basic components and functions of a computer system.


What is a Computer?

A computer is an electronic machine that collects information, stores it, processes it according to
user instructions, and then returns the result.

A computer is a programmable electronic device that performs arithmetic and logical operations
automatically using a set of instructions provided by the user.

Early Computing Devices


People used sticks, stones, and bones as counting tools before computers were invented. More
computing devices were produced as technology advanced and the human intellect improved
over time. Let us look at a few of the early-age computing devices used by mankind.

1. Abacus
The abacus was invented by the Chinese around 4000 years ago. It is a wooden rack with metal rods on which beads are mounted. The abacus operator moves the beads according to certain guidelines to perform arithmetic computations.

2. Napier’s Bones
John Napier devised Napier’s Bones, a manually operated calculating apparatus. For calculating, this instrument used separate ivory strips (bones) marked with numerals to multiply and divide. It was also the first machine to calculate using the decimal point system.

3. Pascaline
The Pascaline was invented in 1642 by Blaise Pascal, a French mathematician and philosopher. It is thought to be the first mechanical and automatic calculator. It was a wooden box with gears and wheels inside.

4. Stepped Reckoner or Leibniz wheel


In 1673, a German mathematician-philosopher named Gottfried Wilhelm Leibniz improved on
Pascal’s invention to create this apparatus. It was a digital mechanical calculator known as the
stepped reckoner because it used fluted drums instead of gears.

5. Difference Engine
In the early 1820s, Charles Babbage created the Difference Engine. It was a mechanical
computer that could do basic computations. It was a steam-powered calculating machine used to
solve numerical tables such as logarithmic tables.

6. Analytical Engine
Charles Babbage created another calculating machine, the Analytical Engine, in the 1830s. It was a mechanical computer that took input from punched cards. It was capable of solving any mathematical problem and storing data in memory.

7. Tabulating machine
An American Statistician – Herman Hollerith invented this machine in the year 1890. Tabulating
Machine was a punch card-based mechanical tabulator. It could compute statistics and record or
sort data or information. Hollerith began manufacturing these machines in his company, which
ultimately became International Business Machines (IBM) in 1924.

8. Differential Analyzer
Vannevar Bush introduced the Differential Analyzer, an early electrical analog computer, in 1930. This machine used vacuum tubes to switch electrical impulses in order to perform calculations. It was capable of performing 25 calculations in a matter of minutes.

9. Mark I
Howard Aiken planned to build a machine in 1937 that could conduct massive calculations or
calculations using enormous numbers. The Mark I computer was constructed in 1944 as a
collaboration between IBM and Harvard.

Generations of Computers
In the history of computers, we often refer to the advancements of modern computers as
the generation of computers. We are currently on the fifth generation of computers. So let us
look at the important features of these five generations of computers.

 1st Generation: This was from the period of 1940 to 1955. This was when machine
language was developed for the use of computers. They used vacuum tubes for the
circuitry. For the purpose of memory, they used magnetic drums. These machines were
complicated, large, and expensive. They were mostly reliant on batch operating systems
and punch cards. As output and input devices, magnetic tape and paper tape were
implemented. For example, ENIAC, UNIVAC-1, EDVAC, and so on.

 2nd Generation: This was the period from 1957 to 1963. In second-generation computers, high-level programming languages such as COBOL and FORTRAN came into use. Computers advanced from vacuum tubes to transistors, which made them smaller, faster and more energy-efficient, and they advanced from binary language to assembly languages. For instance, IBM 1620, IBM 7094, CDC 1604, CDC 3600, and so forth.

 3rd Generation: The hallmark of this period (1964-1971) was the development of the integrated circuit. A single integrated circuit (IC) is made up of many transistors, which increases the power of a computer while simultaneously lowering its cost. These computers were quicker, smaller, more reliable, and less expensive than their predecessors. High-level programming languages such as FORTRAN (II to IV), COBOL, PASCAL, and PL/1 were utilized. For example, the IBM-360 series, the Honeywell-6000 series, and the IBM-370/168.

 4th Generation: The invention of the microprocessors brought along the fourth
generation of computers. The years 1971-1980 were dominated by fourth generation
computers. C, C++ and Java were the programming languages utilized in this generation
of computers. For instance, the STAR 1000, PDP 11, CRAY-1, CRAY-X-MP, and Apple
II. This was when we started producing computers for home use.

 5th Generation: These computers have been in use since 1980 and continue to be used now. This is the present and the future of the computer world. The defining aspect of this generation is artificial intelligence. The use of parallel processing and superconductors is making this a reality and provides a lot of scope for the future. Fifth-generation computers use ULSI (Ultra Large Scale Integration) technology. These are the most recent and sophisticated computers. C, C++, Java, .NET, and more programming languages are used. For instance, desktops, laptops, notebooks, and ultrabooks built around processors such as the Intel Pentium.

Computer organization
Every computer mainly consists of three things and those are...

1. Hardware
2. Software
3. User

Here the user interacts with the software, and the software makes the computer hardware work for the user.
What is Computer Hardware?
All physical components of the computer are called computer hardware. A user can see, touch, and feel every hardware component of the computer. All hardware components perform tasks based on the instructions given by the computer software.
The computer hardware is the physical part of a computer.
The computer hardware components are as follows...

1. Input Devices - These are the parts through which a user can give data to the computer.
2. Output Devices - These are the physical components of a computer through which the computer gives results to the user.
3. Storage Devices - These are the physical components of a computer in which data can be stored.
4. Device Drives - Using drives, the user can read and write data on storage devices like CDs, floppy disks, etc.
5. Cables - Various cables (wires) are used to make connections in a computer.
6. Other Devices - Other than the above hardware components, a computer also contains components like the motherboard, CPU (processor), SMPS, fans, etc.

Input Devices
Computer input devices are the physical components of the computer which are used to give the data supplied by the user to the computer. Using input devices, the user can give data to the computer.

Examples: keyboard, mouse, scanner, microphone.
Output Devices
Computer output devices are the physical components of the computer which are used to give the computer's result to the user. Using output devices, the user can see the computer-generated result.

Examples: monitor, printer, speakers.

How does Computer work?


When a user wants to communicate with the computer, the user interacts with an
application. The application interacts with the operating system, and the operating system
makes the hardware components work according to the user's instructions. The hardware
components send the result back to the operating system, the operating system forwards
it to the application, and the application shows the result to the user.

By using input devices, the user interacts with the application and the application uses
output devices to show the result. All input and output devices work according to the
instructions given by the operating system.
The working process of a computer is shown in the following figure.
What is computer memory?

Computer memory is one of the most important parts of the computer. It stores and allows users
to access the data anytime, anywhere they want. There are two types of computer memories.

 Volatile memory and

 Non-volatile memory.
Volatile memory is RAM, which stands for Random Access Memory, while non-volatile memory is ROM, an acronym for Read-Only Memory. Computer memory is characterized by two factors: access time and capacity. The faster the memory, the lower its access time. A computer organizes its memory in such a way as to provide the largest possible capacity together with the fastest possible speed.

Types of Memory

In computer terms, memory is divided into two categories:

1) Main memory or primary memory

2) Auxiliary memory or secondary memory

Main memory or primary memory


The main memory unit that connects directly to the CPU is the primary memory. Further, there are two types of primary memory, i.e., RAM and ROM.
1. Random Access Memory
RAM is also known as volatile memory. It is a chip implemented using semiconductors. Generally, RAM is used as temporary storage for input data, output data, and intermediate results. RAM can be divided into two categories:

1. Static RAM or SRAM

2. Dynamic Ram or DRAM

2. Read-only memory
ROM is non-volatile memory: its contents are retained even when power is removed. Once a basic ROM chip is programmed, it cannot be rewritten or reprogrammed; the only changes you can make are at the time of manufacturing. (The programmable variants below relax this restriction.) ROM has three categories:

1. Programmable ROM or PROM

2. Electrically Erasable Programmable ROM or EEPROM

3. Erasable Programmable ROM or EPROM

Auxiliary memory or secondary memory


Secondary memory is a permanent storage device. It is non-volatile in nature and is used to
store programs and data when they are not being processed. Because of this, the data remains in
the same state as long as it is not deleted or rewritten by the user. Secondary
memory includes devices such as:

1. Optical disks like DVDs, CDs, and Blu-ray disks

2. Magnetic disks like memory sticks, floppy disks, and hard disk drives

3. Solid-state storage like thumb drives, pen drives, and flash memory
One may also ask how memory in computers is measured. We all use hard disks and pen drives to transfer data from one place to another, but what are the units? Computers measure data in many units, such as the bit, nibble, byte, kilobyte, megabyte, gigabyte, terabyte, petabyte, exabyte, and more. Here are the conversions between these units:

8 Bits = 1 Byte
1024 Bytes = 1 KiloByte (1 KB)
1024 KB = 1 MegaByte (1 MB)
1024 MB = 1 GigaByte (1 GB)
1024 GB = 1 TeraByte (1 TB)
1024 TB = 1 PetaByte (1 PB)
1024 PB = 1 ExaByte (1 EB)
1024 EB = 1 ZettaByte (1 ZB)
1024 ZB = 1 YottaByte (1 YB)
1024 YB = 1 BrontoByte
1024 BrontoBytes = 1 GeopByte

Types of Computers
1. Analog Computers – Early analog computers were built with components such as
gears and levers, with no electrical components. One advantage of analog computation
is that designing and building an analog computer to tackle a specific problem can be
quite straightforward.
2. Digital Computers – Information in digital computers is represented in discrete form,
typically as sequences of 0s and 1s (binary digits, or bits). A digital computer is a system
or gadget that can process any type of information in a matter of seconds. Digital
computers are categorized into many different types. They are as follows:
a. Mainframe computers – It is a computer that is generally utilized by large
enterprises for mission-critical activities such as massive data processing. Mainframe
computers were distinguished by massive storage capacities, quick components, and
powerful computational capabilities. Because they were complicated systems, they
were managed by a team of systems programmers who had sole access to the
computer. These machines are now referred to as servers rather than mainframes.

b. Supercomputers – The most powerful computers to date are commonly referred to


as supercomputers. Supercomputers are enormous systems that are purpose-built to
solve complicated scientific and industrial problems. Quantum mechanics, weather
forecasting, oil and gas exploration, molecular modelling, physical simulations,
aerodynamics, nuclear fusion research, and cryptanalysis are all done on
supercomputers.

c. Minicomputers – A minicomputer is a type of computer that has many of the same


features and capabilities as a larger computer but is smaller in size. Minicomputers,
which were relatively small and affordable, were often employed in a single
department of an organization and were often dedicated to a specific task or shared
by a small group.

d. Microcomputers – A microcomputer is a small computer that is based on a


microprocessor integrated circuit, often known as a chip. A microcomputer is a
system that incorporates at a minimum a microprocessor, program memory, data
memory, and input-output system (I/O). A microcomputer is now commonly referred
to as a personal computer (PC).

e. Embedded processors – These are miniature computers that control electrical and
mechanical processes with basic microprocessors. Embedded processors are often
simple in design, have limited processing capability and I/O capabilities, and need
little power. Ordinary microprocessors and microcontrollers are the two primary
types of embedded processors. Embedded processors are employed in systems that
do not require the computing capability of traditional devices such as desktop
computers, laptop computers, or workstations.

What is an arithmetic-logic unit (ALU)?


An arithmetic-logic unit is the part of a central processing unit that carries out arithmetic and
logic operations on the operands in computer instruction words.

In some processors, the ALU is divided into two units: an arithmetic unit (AU) and a logic
unit (LU). Some processors contain more than one AU -- for example, one for fixed-point
operations and another for floating-point operations.
In computer systems, floating-point computations are sometimes done by a floating-point unit
(FPU) on a separate chip called a numeric coprocessor.

How does an arithmetic-logic unit work?


Typically, the ALU has direct input and output access to the processor controller, main
memory (random access memory or RAM in a personal computer) and input/output devices.
Inputs and outputs flow along an electronic path that is called a bus.

The input consists of an instruction word, sometimes called a machine instruction word, that
contains an operation code or "opcode," one or more operands and sometimes a format code.
The operation code tells the ALU what operation to perform and the operands are used in the
operation.

For example, two operands might be added together or compared logically. The format may
be combined with the opcode and tells, for example, whether this is a fixed-point or a
floating-point instruction.

The output consists of a result that is placed in a storage register, along with settings that
indicate whether the operation was performed successfully. If it wasn't, a status of some
sort is stored in a permanent place that is sometimes called the machine status word.

In general, the ALU includes storage places for input operands, operands that are being
added, the accumulated result (stored in an accumulator) and shifted results. The flow of bits
and the operations performed on them in the subunits of the ALU are controlled by gated
circuits.

The gates in these circuits are controlled by a sequence logic unit that uses a
particular algorithm or sequence for each operation code. In the arithmetic unit,
multiplication and division are done by a series of adding or subtracting and shifting
operations.

What is Program Counter?


Many complex components come together to make a computer system work
seamlessly. One such essential element is the program counter (PC): a register
in the processor that contains the address of the next instruction to be
executed from memory.

CPU (Central Processing Unit)


The full name of the CPU is Central Processing Unit. It is also known as a
processor. CPU is the brain of the computer system.
It is an electronic microchip that processes the data and converts it into useful information
based on the instructions given and controls all the functions of the computer system.

Instructions
Computer instructions are a group of machine language instructions executed by a
specific computer processor. The computer works on the basis of whatever instructions
we give it.

Memory
Computer memory is a device that is used to store data and information. In other
words, computer memory is an important part of the computer in which data is stored;
without memory, the computer does not work. Memory includes RAM and ROM.

What is Program Counter?


The program counter (PC) is a register in the processor that contains the address of
the next instruction to be executed from memory.

In classic 8-bit processors such as the Intel 8085, it is a 16-bit register. It is also called
the instruction counter, instruction pointer, or instruction address register (IAR). The
program counter is a digital counter that is needed to execute tasks quickly and to track
the current execution point.

All the instructions and data present in memory have a unique address. As each instruction
is processed, the program counter is updated to the address of the next instruction to be
fetched. When a byte (machine code) is fetched, the PC is incremented by one so that it
can fetch the next instruction. If the computer is reset or restarted, the program counter
returns to zero.
For example, suppose the content of the PC is 8000H. This means that the processor wants
to fetch the instruction byte at 8000H. After fetching the byte at 8000H, the PC
automatically increments by one (1). In this way, the processor becomes ready to fetch the
next byte of the instruction or the next opcode.

Computer Languages

What is Computer Language?

Generally, we use languages like English, Hindi, etc., to communicate between two persons: when two persons want to communicate, they need a language through which they can express themselves. Similarly, when we want communication between a user and a computer, or between two or more computers, we need a language through which the user can give information to the computer and vice versa. When a user wants to give any instruction to the computer, the user needs a specific language, and that language is known as a computer language.

The user interacts with the computer using programs, and those programs are created using computer programming languages like C, C++, Java, etc.

Computer languages are the languages through which the user can communicate with the computer by writing program instructions.


Every computer programming language contains a set of predefined words and a set of rules
(syntax) that are used to create instructions of a program.

Computer Languages Classification


Over the years, computer languages have evolved from low-level to high-level
languages. In the earliest days of computers, only binary language was used to write
programs. Computer languages are classified as follows...
Low-Level Language (Machine Language)
Low-level language is the only language which can be understood by the
computer. Binary language is an example of a low-level language. Low-level language is
also known as machine language. The binary language contains only two symbols, 1 and 0,
and all of its instructions are written as sequences of binary digits (1's and 0's). A
computer can directly understand the binary language. Machine language is also known as
machine code.

As the CPU directly understands binary language instructions, it does not require
any translator. The CPU directly starts executing the instructions and takes very
little time to execute them, as no translation is required. Low-level language
is considered the First Generation Language (1GL).

Advantages

 A computer can easily understand low-level language.
 Low-level language instructions are executed directly without any translation.
 Low-level language instructions require very little time to execute.

Disadvantages

 Low-level language instructions are very difficult to use and understand.


 Low-level language instructions are machine-dependent, which means a program
written for a particular machine does not execute on another machine.
 In low-level language, there is more chance of errors and it is very difficult to find
errors, debug, and modify.

Middle-Level Language (Assembly Language)


Middle-level language is a computer language in which the instructions are created
using symbols such as letters, digits and special characters. Assembly language is an example
of middle-level language. In assembly language, we use predefined words called mnemonics.
Binary code instructions in low-level language are replaced with mnemonics and operands in
middle-level language. But the computer cannot understand mnemonics, so we use a
translator called Assembler to translate mnemonics into binary language. Assembler is a
translator which takes assembly code as input and produces machine code as output. That
means, the computer cannot understand middle-level language, so it needs to be translated
into a low-level language to make it understandable by the computer. Assembler is used to
translate middle-level language into low-level language.

Advantages

 Writing instructions in a middle-level language is easier than writing instructions in a


low-level language.
 Middle-level language is more readable compared to low-level language.
 Easy to understand, find errors and modify.

Disadvantages

 Middle-level language is specific to a particular machine architecture, which means it is
machine-dependent.
 Middle-level language needs to be translated into low-level language.
 Middle-level language executes more slowly compared to low-level language.

High-Level Language
A high-level language is a computer language which can be understood by users.
High-level languages are very similar to human languages and have a set of grammar rules
that are used to write instructions easily. Every high-level language has a set of
predefined words known as keywords and a set of rules known as syntax for creating
instructions. A high-level language is easier for users to understand, but the computer
cannot understand it directly. High-level language needs to be converted into low-level
language to make it understandable by the computer. We use a compiler or an interpreter
to convert high-level language to low-level language.

Languages like COBOL, FORTRAN, BASIC, C, C++, JAVA, etc., are examples of
high-level languages. All these programming languages use human-understandable language
like English to write program instructions. These instructions are converted to low-level
language by the compiler so that it can be understood by the computer.

Advantages

 Writing instructions in a high-level language is easier.
 A high-level language is more readable and understandable.
 Programs created using a high-level language run on different machines with little
or no change.
 It is easy to understand, create programs, find errors, and modify.

Disadvantages

 High-level language needs to be translated into low-level language.
 High-level language executes more slowly compared to middle- and low-level languages.
Understanding Computer Languages
The following figure provides a few key points related to computer languages.

From the above figure, we can observe the following key points...

 Programming languages like C, C++, Java, etc., are high-level languages, which
are more comfortable for developers.
 A high-level language is closer to the user.
 Low-level language is closer to the computer. Computer hardware can understand
only low-level (machine) language.
 A program written in a high-level language needs to be converted to low-level
language to enable communication between the user and the computer.
 Middle-level language is close to neither the user nor the computer. We can consider it a
combination of both high-level language and low-level language.

What is Algorithm | Introduction to Algorithms


Definition of Algorithm
The word Algorithm means "a set of finite rules or instructions to be followed in
calculations or other problem-solving operations",
or
"a procedure for solving a mathematical problem in a finite number of steps that
frequently involves recursive operations".

Therefore, an algorithm refers to a finite sequence of steps to solve a particular problem.


Use of the Algorithms:
Algorithms play a crucial role in various fields and have many applications. Some of the
key areas where algorithms are used include:
1. Computer Science: Algorithms form the basis of computer programming and are used
to solve problems ranging from simple sorting and searching to complex tasks such as
artificial intelligence and machine learning.
2. Mathematics: Algorithms are used to solve mathematical problems, such as finding the
optimal solution to a system of linear equations or finding the shortest path in a graph.
3. Operations Research: Algorithms are used to optimize and make decisions in fields
such as transportation, logistics, and resource allocation.
4. Artificial Intelligence: Algorithms are the foundation of artificial intelligence and
machine learning, and are used to develop intelligent systems that can perform tasks
such as image recognition, natural language processing, and decision-making.
5. Data Science: Algorithms are used to analyze, process, and extract insights from large
amounts of data in fields such as marketing, finance, and healthcare.
What is the need for algorithms?
1. Algorithms are necessary for solving complex problems efficiently and effectively.
2. They help to automate processes and make them more reliable, faster, and easier to
perform.
3. Algorithms also enable computers to perform tasks that would be difficult or impossible
for humans to do manually.
4. They are used in various fields such as mathematics, computer science, engineering,
finance, and many others to optimize processes, analyze data, make predictions, and
provide solutions to problems.
What are the Characteristics of an Algorithm?

Just as one would not follow arbitrary instructions to cook a recipe, but only a standard
one, not all written instructions for programming are an algorithm. For instructions
to be an algorithm, they must have the following characteristics:

 Clear and Unambiguous: The algorithm should be unambiguous. Each of its steps
should be clear in all aspects and must lead to only one meaning.

 Well-Defined Inputs: If an algorithm says to take inputs, it should be well-defined


inputs. It may or may not take input.

 Well-Defined Outputs: The algorithm must clearly define what output will be yielded
and it should be well-defined as well. It should produce at least 1 output.

 Finite-ness: The algorithm must be finite, i.e. it should terminate after a finite time.

 Feasible: The algorithm must be simple, generic, and practical, such that it can be
executed with the available resources. It must not rely on future technology.

 Language Independent: The Algorithm designed must be language-independent, i.e. it


must be just plain instructions that can be implemented in any language, and yet the
output will be the same, as expected.

 Input: An algorithm has zero or more inputs. Each instruction that contains a
fundamental operator must accept zero or more inputs.

 Output: An algorithm produces at least one output. Every instruction that contains a
fundamental operator must produce at least one output.
 Definiteness: All instructions in an algorithm must be unambiguous, precise, and easy
to interpret. By referring to any of the instructions in an algorithm one can clearly
understand what is to be done. Every fundamental operator in instruction must be
defined without any ambiguity.

 Finiteness: An algorithm must terminate after a finite number of steps in all test cases.
Every instruction which contains a fundamental operator must be terminated within a
finite amount of time. Infinite loops or recursive functions without base conditions do
not possess finiteness.

 Effectiveness: An algorithm must be developed by using very basic, simple, and


feasible operations so that one can trace it out by using just paper and pencil.

Properties of Algorithm:
 It should terminate after a finite time.
 It should produce at least one output.
 It should take zero or more input.
 It should be deterministic means giving the same output for the same input case.
 Every step in the algorithm must be effective i.e. every step should do some work.

What is FlowChart?
A flowchart is a type of diagram that represents a workflow or process. A flowchart
can also be defined as a diagrammatic representation of an algorithm, a step-by-step
approach to solving a task.
Types of boxes used to make a flowchart
There are different types of boxes that are used to make flowcharts. All the different
kinds of boxes are connected to one another by arrow lines. Arrow lines are used to display
the flow of control. Let’s learn about each box in detail.
1. Terminal

This box is of an oval shape and is used to indicate the start or end of the
program. Every flowchart diagram has an oval shape that depicts the start of an algorithm
and another oval shape that depicts the end of an algorithm. For example:
2. Data

This is a parallelogram-shaped box inside which the inputs or outputs are written.
It basically depicts the information that is entering or leaving the system or
algorithm. For example: if the user wants to input a number
and display it, the flowchart for this would be:

3. Process

This is a rectangular box inside which a programmer writes the main course of
action of the algorithm, or the main logic of the program. This is the crux of the flowchart, as
the main processing code is written inside this box. For example: if the programmer wants
to add 1 to the input given by the user, he/she would make the following flowchart:
4. Decision

This is a rhombus-shaped box; control statements like if, or conditions like a > 0, etc., are
written inside it. Two paths lead out of this box: one labelled "yes" and the other
"no". Every decision has either yes or no as an option, and this box offers exactly those
options. For example: if the user wants to add 1 to an even number and subtract 1 if the
number is odd, the flowchart would be:

5. Flow

This arrow line represents the flow of the algorithm or process. It represents the
direction of the process flow. In all the previous examples, we included arrows in every
step to display the flow of the program. Arrows increase the readability of the program.

6. On-Page Reference

This circular figure is used to show that the flowchart continues in further steps. It
comes into use when space is limited and the flowchart is long. A numerical symbol is
placed inside this circle, and the same numerical symbol appears where the flowchart
continues, so the reader can follow the continuation. Below is a simple example
depicting the use of an on-page reference.

Advantages of Flowchart
 It is an efficient way of communicating the logic of a system.
 It acts as a guide or blueprint during program design.
 It also helps in the debugging process.
 Using flowcharts, we can easily analyze programs.
 Flowcharts are good for documentation.

Disadvantages of Flowchart
 Flowcharts are challenging to draw for large and complex programs.
 It does not contain the proper amount of details.
 Flowcharts are very difficult to reproduce.
 Flowcharts are very difficult to modify.

Solved Examples on FlowChart

Question 1. Draw a flowchart to find the greatest number among the 2 numbers.

Solution:

Algorithm:
1. Start
2. Input 2 variables from user
3. Now check the condition If a > b, goto step 4, else goto step 5.
4. Print a is greater, goto step 6
5. Print b is greater
6. Stop

FlowChart:
Question 2. Draw a flowchart to check whether the input number is odd or even

Solution:

Algorithm:
1. Start
2. Put input a
3. Now check the condition if a % 2 == 0, goto step 5. Else goto step 4
4. Now print(“number is odd”) and goto step 6
5. Print(“number is even”)
6. Stop
FlowChart:

Question 3. Draw a flowchart to print the input number 5 times.

Solution:

Algorithm:
1. Start
2. Input number a
3. Now initialise c = 1
4. Now we check the condition if c <= 5, goto step 5 else, goto step 7.
5. Print a
6. c = c + 1 and goto step 4
7. Stop
FlowChart:

Question 4. Draw a flowchart to print numbers from 1 to 10.

Solution:

Algorithm:

1. Start
2. Now initialise c = 1
3. Now we check the condition if c < 11, then goto step 4 otherwise goto step 6.
4. Print c
5. c = c + 1 then goto step 3
6. Stop
FlowChart:

Question 5. Draw a flowchart to print the first 5 multiples of 3.

Solution:

Algorithm:

1. Start
2. Now initialise c = 1
3. Now check the condition if c < 6, then goto step 4. Otherwise goto step 6
4. Print 3 * c
5. c += 1. Then goto step 3.
6. Stop
FlowChart:
