C Programming Material
A computer is an electronic machine that collects information, stores it, processes it according to
user instructions, and then returns the result.
A computer is a programmable electronic device that performs arithmetic and logical operations
automatically using a set of instructions provided by the user.
1. Abacus
Abacus was invented by the Chinese around 4000 years ago. It’s a wooden rack with metal rods
with beads attached to them. The abacus operator moves the beads according to certain
guidelines to complete arithmetic computations.
2. Napier’s Bone
John Napier devised Napier’s Bones, a manually operated calculating apparatus. For calculating,
this instrument used 9 separate ivory strips (bones) marked with numerals to multiply and
divide. It was also the first machine to calculate using the decimal point system.
3. Pascaline
Pascaline was invented in 1642 by Blaise Pascal, a French mathematician and philosopher. It is
thought to be the first mechanical and automated calculator. It was a wooden box with gears and
wheels inside.
5. Difference Engine
In the early 1820s, Charles Babbage created the Difference Engine. It was a mechanical
computer that could do basic computations. It was a steam-powered calculating machine used to
solve numerical tables such as logarithmic tables.
6. Analytical Engine
Charles Babbage created another calculating machine, the Analytical Engine, in 1830. It was a
mechanical computer that took input from punch cards. It was capable of solving any
mathematical problem and storing data in an indefinite memory.
7. Tabulating machine
An American statistician, Herman Hollerith, invented this machine in the year 1890. Tabulating
Machine was a punch card-based mechanical tabulator. It could compute statistics and record or
sort data or information. Hollerith began manufacturing these machines in his company, which
ultimately became International Business Machines (IBM) in 1924.
8. Differential Analyzer
Vannevar Bush introduced the first electrical computer, the Differential Analyzer, in 1930. This
machine was made up of vacuum tubes that switched electrical impulses in order to perform
calculations. It was capable of performing 25 calculations in a matter of minutes.
9. Mark I
Howard Aiken planned to build a machine in 1937 that could conduct massive calculations or
calculations using enormous numbers. The Mark I computer was constructed in 1944 as a
collaboration between IBM and Harvard.
Generations of Computers
In the history of computers, we often refer to the advancements of modern computers as
the generation of computers. We are currently on the fifth generation of computers. So let us
look at the important features of these five generations of computers.
1st Generation: This was the period from 1940 to 1955, when machine
language was developed for use in computers. They used vacuum tubes for the
circuitry. For the purpose of memory, they used magnetic drums. These machines were
complicated, large, and expensive. They were mostly reliant on batch operating systems
and punch cards. As output and input devices, magnetic tape and paper tape were
implemented. For example, ENIAC, UNIVAC-1, EDVAC, and so on.
2nd Generation: The years 1957-1963 are referred to as the “second generation of
computers”. In second-generation computers, COBOL and FORTRAN were
employed as programming languages. Here they advanced from
vacuum tubes to transistors. This made the computers smaller, faster and more energy-
efficient. And they advanced from binary to assembly languages. For instance, IBM
1620, IBM 7094, CDC 1604, CDC 3600, and so forth.
3rd Generation: The hallmark of this period (1964-1971) was the development of the
integrated circuit. A single integrated circuit (IC) is made up of many transistors, which
increases the power of a computer while simultaneously lowering its cost. These
computers were quicker, smaller, more reliable, and less expensive than their
predecessors. High-level programming languages such as FORTRAN II to IV, COBOL,
PASCAL, and PL/1 were utilized. For example, the IBM-360 series, the Honeywell-6000
series, and the IBM-370/168.
4th Generation: The invention of the microprocessors brought along the fourth
generation of computers. The years 1971-1980 were dominated by fourth generation
computers. C, C++ and Java were the programming languages utilized in this generation
of computers. For instance, the STAR 1000, PDP 11, CRAY-1, CRAY-X-MP, and Apple
II. This was when we started producing computers for home use.
5th Generation: These computers have been utilized since 1980 and continue to be
used now. This is the present and the future of the computer world. The defining aspect of
this generation is artificial intelligence. The use of parallel processing and
superconductors are making this a reality and provide a lot of scope for the future. Fifth-
generation computers use ULSI (Ultra Large Scale Integration) technology. These are the
most recent and sophisticated computers. C, C++, Java, .NET, and more programming
languages are used. For instance, IBM, Pentium, Desktop, Laptop, Notebook, Ultrabook,
and so on.
Computer organization
Every computer mainly consists of three things and those are...
1. Hardware
2. Software
3. User
Here the user interacts with the software, and the software makes the computer hardware
work for the user.
What is Computer Hardware?
All physical components of the computer are called computer hardware. A user can see,
touch, and feel every hardware component of the computer. All hardware components perform
tasks based on the instructions given by the computer software.
The computer hardware is the physical part of a computer.
The computer hardware components are as follows...
1. Input Devices - These are the parts through which a user can give the data to the
computer.
2. Output Devices - These are the physical components of a computer through which
the computer gives the result to the user.
3. Storage Devices - These are the physical components of a computer in which the data
can be stored.
4. Device Drives - Using drives, a user can read and write data onto storage devices
like CDs, floppies, etc.
5. Cables - Various cables (wires) are used to make connections in a computer.
6. Other Devices - Other than the above hardware components, a computer also
contains components like the motherboard, CPU (Processor), SMPS, fans, etc.
Input Devices
Computer input devices are the physical components of the computer which are used to give
the user's data to the computer. Using input devices, the user can give data to
the computer.
Example: keyboard, mouse, scanner, etc.
Output Devices
Computer output devices are the physical components of the computer which are used to give
the computer result to the User. Using output devices, the user can see the computer-
generated result.
Example: monitor, printer, speakers, etc.
By using input devices, the user interacts with the application and the application uses
output devices to show the result. All input and output devices work according to the
instructions given by the operating system.
The working process of a computer is shown in the following figure.
What is computer memory?
Computer memory is one of the most important parts of the computer. It stores and allows users
to access the data anytime, anywhere they want. There are two types of computer memory:
Volatile memory.
Non-volatile memory.
Volatile memory is termed RAM, which stands for Random Access Memory, while non-volatile
memory is termed ROM, an acronym for Read-Only Memory. Computer memory is
characterized by two factors: access time and capacity. The faster the memory, the lesser
the access time. A computer uses memory organized in such a way that it provides the
largest possible capacity together with the fastest possible speed.
Types of Memory
2. Read-only memory
ROM is non-volatile memory: its contents are retained even when the power is off. Once a ROM
chip is programmed, it cannot be rewritten or reprogrammed. The only changes you can make in
ROM are at the time of manufacturing. ROM has three categories, which are:
2. Magnetic disks like memory stick, floppy disk, and hard disk drive.
3. Solid-state disks like thumb drives, pen drives, and flash drives.
Along with this, one may also ask about the units by which memory in computers is
measured. We all use a hard disk or a pen drive to transfer data from one place to another.
But what are its units? Computers measure data in many forms, such as Bit, Nibble, Byte,
Kilobyte, Megabyte, Gigabyte, Terabyte, Petabyte, Exabyte, and many more. Here are the
conversions of these units into one another:
8 Bits = 1 Byte
1024 Bytes = 1 Kilobyte (1 KB)
1024 KB = 1 Megabyte (1 MB)
1024 MB = 1 Gigabyte (1 GB)
1024 GB = 1 Terabyte (1 TB)
1024 TB = 1 Petabyte (1 PB)
1024 PB = 1 Exabyte (1 EB)
1024 EB = 1 Zettabyte (1 ZB)
1024 ZB = 1 YottaByte (1 YB)
1024 YB = 1 BrontoByte
Types of Computers
1. Analog Computers – Analog computers are built with various components such as
gears and levers, with no electrical components. One advantage of analog computation
is that designing and building an analog computer to tackle a specific problem can be
quite straightforward.
2. Digital Computers – Information in digital computers is represented in discrete form,
typically as sequences of 0s and 1s (binary digits, or bits). A digital computer is a system
or gadget that can process any type of information in a matter of seconds. Digital
computers are categorized into many different types. They are as follows:
a. Mainframe computers – It is a computer that is generally utilized by large
enterprises for mission-critical activities such as massive data processing. Mainframe
computers were distinguished by massive storage capacities, quick components, and
powerful computational capabilities. Because they were complicated systems, they
were managed by a team of systems programmers who had sole access to the
computer. These machines are now referred to as servers rather than mainframes.
e. Embedded processors – These are miniature computers that control electrical and
mechanical processes with basic microprocessors. Embedded processors are often
simple in design, have limited processing capability and I/O capabilities, and need
little power. Ordinary microprocessors and microcontrollers are the two primary
types of embedded processors. Embedded processors are employed in systems that
do not require the computing capability of traditional devices such as desktop
computers, laptop computers, or workstations.
In some processors, the ALU is divided into two units: an arithmetic unit (AU) and a logic
unit (LU). Some processors contain more than one AU -- for example, one for fixed-point
operations and another for floating-point operations.
In computer systems, floating-point computations are sometimes done by a floating-point unit
(FPU) on a separate chip called a numeric coprocessor.
The input consists of an instruction word, sometimes called a machine instruction word, that
contains an operation code or "opcode," one or more operands and sometimes a format code.
The operation code tells the ALU what operation to perform and the operands are used in the
operation.
For example, two operands might be added together or compared logically. The format may
be combined with the opcode and tells, for example, whether this is a fixed-point or a
floating-point instruction.
The output consists of a result that is placed in a storage register and settings that indicate
whether the operation was performed successfully. If it wasn't, some sort of status will be stored
in a permanent place that is sometimes called the machine status word.
In general, the ALU includes storage places for input operands, operands that are being
added, the accumulated result (stored in an accumulator) and shifted results. The flow of bits
and the operations performed on them in the subunits of the ALU are controlled by gated
circuits.
The gates in these circuits are controlled by a sequence logic unit that uses a
particular algorithm or sequence for each operation code. In the arithmetic unit,
multiplication and division are done by a series of adding or subtracting and shifting
operations.
Instructions
Computer instructions are a group of machine language instructions that are
executed by a specific computer processor. The computer works according to whatever
instructions we give it.
Memory
Computer memory is a device that is used to store data and information. In other
words, “Computer memory is an important part of the computer in which data is stored;
without memory the computer does not work.” Memory includes RAM and ROM.
Program Counter
The program counter (PC) is a 16-bit register and is also called the instruction counter,
instruction pointer, and instruction address register (IAR). The PC is a digital counter that is
needed to execute tasks quickly and track the current execution point.
All the instructions and data present in memory have a unique address. As each instruction
is processed, the program counter is updated to the address of the next instruction to be
fetched. When a byte (machine code) is fetched, the PC is incremented by one so that it
can fetch the next instruction. If the computer is reset or restarted, the program counter
returns to zero.
For example, suppose the content of the PC is 8000H. This means that the processor wants
to fetch the instruction byte on 8000H. After fetching the byte at 8000H, the PC
automatically increments by one (1). In this way, the processor becomes ready to fetch the
next byte of the instruction or to fetch the next opcode.
Computer Languages
A language is a medium of communication between two persons. That means when we want
to make communication between two persons, we need a language through which they can
express their feelings. Similarly, when we want to make communication between a user and a
computer, or between two or more computers, we need a language through which the user can
give information to the computer and vice versa. When a user wants to give any instruction to
the computer, the user needs a language that the computer can understand.
The user interacts with the computer using programs, and those programs are created
using computer programming languages like C, C++, Java, etc.
Computer languages are the languages through which the user can communicate with
the computer.
Low-Level Language
As the CPU directly understands binary language instructions, it does not require
any translator. The CPU directly starts executing the binary language instructions and takes
very little time to execute them, as it does not require any translation. Low-level language
is considered the First Generation Language (1GL).
Advantages
Disadvantages
Advantages
Disadvantages
High-Level Language
A high-level language is a computer language which can be understood by the users.
The high-level language is very similar to human languages and has a set of grammar rules
that are used to make instructions more easily. Every high-level language has a set of
predefined words known as Keywords and a set of rules known as Syntax to create
instructions. The high-level language is easier for the users to understand, but the computer
cannot understand it. A high-level language needs to be converted into a low-level language
to make it understandable by the computer. We use a compiler or an interpreter to convert a
high-level language to a low-level language.
Languages like COBOL, FORTRAN, BASIC, C, C++, JAVA, etc., are examples of
high-level languages. All these programming languages use human-understandable language
like English to write program instructions. These instructions are converted to low-level
language by the compiler so that it can be understood by the computer.
Advantages
Disadvantages
From the above figure, we can observe the following key points...
The programming languages like C, C++, Java, etc., are written in High-level
language which is more comfortable for the developers.
A high-level language is closer to the users.
Low-level language is closer to the computer. Computer hardware can understand
only the low-level language (Machine Language).
The program written in the high-level language needs to be converted to low-level
language to make communication between the user and the computer.
Middle-level language is close to neither the user nor the computer. We can consider it a
combination of both high-level language and low-level language.
Just as one would not follow arbitrary written instructions to cook a recipe, but only a
standard one, not all written instructions for programming are an algorithm. For some
instructions to be an algorithm, they must have the following characteristics:
Clear and Unambiguous: The algorithm should be unambiguous. Each of its steps
should be clear in all aspects and must lead to only one meaning.
Well-Defined Outputs: The algorithm must clearly define what output will be yielded
and it should be well-defined as well. It should produce at least one output.
Finiteness: The algorithm must be finite, i.e., it should terminate after a finite time.
Feasible: The algorithm must be simple, generic, and practical, such that it can be
executed with the available resources. It must not rely on future technology or resources
that are unavailable.
Input: An algorithm has zero or more inputs. Each instruction that contains a fundamental
operator must accept zero or more inputs.
Output: An algorithm produces at least one output. Every instruction that contains a
fundamental operator must produce one or more outputs.
Definiteness: All instructions in an algorithm must be unambiguous, precise, and easy
to interpret. By referring to any of the instructions in an algorithm one can clearly
understand what is to be done. Every fundamental operator in instruction must be
defined without any ambiguity.
Finiteness: An algorithm must terminate after a finite number of steps in all test cases.
Every instruction which contains a fundamental operator must be terminated within a
finite amount of time. Infinite loops or recursive functions without base conditions do
not possess finiteness.
Properties of Algorithm:
It should terminate after a finite time.
It should produce at least one output.
It should take zero or more input.
It should be deterministic means giving the same output for the same input case.
Every step in the algorithm must be effective i.e. every step should do some work.
What is FlowChart?
A flowchart is a type of diagram that represents a workflow or process. A flowchart
can also be defined as a diagrammatic representation of an algorithm, a step-by-step
approach to solving a task.
Types of boxes used to make a flowchart
There are different types of boxes that are used to make flowcharts. All the different
kinds of boxes are connected to one another by arrow lines. Arrow lines are used to display
the flow of control. Let’s learn about each box in detail.
1. Terminal
This box is of an oval shape which is used to indicate the start or end of the
program. Every flowchart diagram has an oval shape that depicts the start of an algorithm
and another oval shape that depicts the end of an algorithm. For example:
2. Data
This is a parallelogram-shaped box inside which the inputs or outputs are written.
This basically depicts the information that is entering the system or algorithm and the
information that is leaving the system or algorithm. For example: if the user wants to input
a number and display it, the flowchart for this would be:
3. Process
This is a rectangular box inside which a programmer writes the main course of
action of the algorithm or the main logic of the program. This is the crux of the flowchart, as
the main processing code is written inside this box. For example: if the programmer wants
to add 1 to the input given by the user, he/she would make the following flowchart:
4. Decision
This is a rhombus-shaped box; control statements like if, or conditions like a > 0, are
written inside this box. There are two paths from this box: one is “yes” and the other is
“no”. Every decision has either yes or no as an option, and similarly, this box has these as
options. For example: if the user wants to add 1 to an even number and subtract 1 if the
number is odd, the flowchart would be:
5. Flow
This arrow line represents the flow of the algorithm or process. It represents the
direction of the process flow. In all the previous examples, we included arrows in every
step to display the flow of the program. Arrows increase the readability of the flowchart.
6. On-Page Reference
This circular figure is used to depict that the flowchart is in continuation with the
further steps. This figure comes into use when space is limited and the flowchart is long.
Any numerical symbol is present inside this circle and that same numerical symbol will be
depicted before the continuation to make the user understand the continuation. Below is a
simple example depicting the use of On-Page Reference.
Advantages of Flowchart
It is the most efficient way of communicating the logic of the system.
It acts as a guide for a blueprint during the program design.
It also helps in the debugging process.
Using flowcharts, we can easily analyze programs.
Flowcharts are good for documentation.
Disadvantages of Flowchart
Flowcharts are challenging to draw for large and complex programs.
It does not convey the proper amount of detail.
Flowcharts are very difficult to reproduce.
Flowcharts are very difficult to modify.
Question 1. Draw a flowchart to find the greatest number among the 2 numbers.
Solution:
Algorithm:
1. Start
2. Input two numbers a and b from the user
3. Now check the condition: if a > b, goto step 4; else goto step 5.
4. Print a is greater, goto step 6
5. Print b is greater
6. Stop
FlowChart:
Question 2. Draw a flowchart to check whether the input number is odd or even
Solution:
Algorithm:
1. Start
2. Input a number a
3. Now check the condition if a % 2 == 0, goto step 5. Else goto step 4
4. Now print(“number is odd”) and goto step 6
5. Print(“number is even”)
6. Stop
FlowChart:
Question 3. Draw a flowchart to print the input number 5 times.
Solution:
Algorithm:
1. Start
2. Input number a
3. Now initialise c = 1
4. Now we check the condition if c <= 5, goto step 5 else, goto step 7.
5. Print a
6. c = c + 1 and goto step 4
7. Stop
FlowChart:
Question 4. Draw a flowchart to print the numbers from 1 to 10.
Solution:
Algorithm:
1. Start
2. Now initialise c = 1
3. Now we check the condition if c < 11, then goto step 4 otherwise goto step 6.
4. Print c
5. c = c + 1 then goto step 3
6. Stop
FlowChart:
Question 5. Draw a flowchart to print the first five multiples of 3.
Solution:
Algorithm:
1. Start
2. Now initialise c = 1
3. Now check the condition if c < 6, then goto step 4. Otherwise goto step 6
4. Print 3 * c
5. c += 1. Then goto step 3.
6. Stop
FlowChart: