Assignment - 1
Section A:
1. What are the major components of a programming system and how do they
interact with each other?
The major components of a programming system are the programming
language chosen, the data structures used, the algorithms, and the software
architecture.
These components work together: the language specifies the instructions in an
easily readable format, the data structures organize the data, the algorithms
solve the problem efficiently, and the architecture maintains the overall
structure and functionality of the system.
2. Describe the historical evolution of programming systems, highlighting key
milestones and their impact on modern computing.
1883: the start
Charles Babbage designed the Analytical Engine but was unsure how to
give instructions to the machine. Ada Lovelace then wrote the first
instructions for the Analytical Engine, an algorithm for computing Bernoulli numbers.
1949: Assembly Language
It is a type of low-level language.
It consists mainly of mnemonics, short symbolic names that map one-to-one
onto machine language instructions, making machine code easier for humans to write.
It is used in real-time monitoring systems and, unfortunately, also in creating viruses.
1952: Autocode
Autocode was the first compiled computer programming language. The name was
applied to a family of early compiled languages developed for machines such as
the Manchester Mark 1.
1957: FORTRAN
It was designed for numeric computation and scientific computing.
Software for NASA's Voyager 1 and Voyager 2 space probes was originally
written in FORTRAN 5.
1958: ALGOL
ALGOL stands for ALGOrithmic Language.
It was the starting point for the most popular programming languages,
including C, C++, and Java.
It was also the first language to implement nested functions, and it has a
simpler syntax than FORTRAN. It is a block-structured, procedural language.
It has a code block like "begin" that indicates that your program has started
and "end" means you have ended your code.
1959: COBOL
It stands for COmmon Business-Oriented Language. In 1997, an estimated 80% of
the world's business applications ran on COBOL.
The US Internal Revenue Service scrambled to patch its COBOL-based IMF
(Individual Master File) in order to issue the tens of millions of payments
mandated by the Coronavirus Aid, Relief, and Economic Security (CARES) Act.
1964: BASIC
It stands for Beginner's All-purpose Symbolic Instruction Code.
In 1991, Microsoft released Visual Basic, an updated version of BASIC.
The first microcomputer version of BASIC was co-written by Bill Gates, Paul
Allen, and Monte Davidoff for their newly formed company, Microsoft.
1972: C
C is a procedural programming language and one of the most popular
programming languages. It can replace most of the areas once covered by assembly.
It is used in operating systems and embedded systems, and even on
websites through the Common Gateway Interface (CGI).
C is the ancestor of almost all higher-level programming languages, including C#, D,
Go, Java, JavaScript, Limbo, LPC, Perl, PHP, Python, and Unix's C shell.
3. What is meant by machine structure, and why is it important for
understanding computer architecture?
Machine structure describes a computer's components and how they are
connected. It encompasses the hardware, including the CPU, memory, I/O
devices, and interconnects such as buses. Understanding machine structure is
crucial for understanding computer architecture because it forms the
foundation upon which software is built and executed.
Section B:
8. What are the core components of an operating system, and how have they
changed over time?
An operating system is an interface between the computer hardware and user
applications: it provides an abstraction over the hardware's instruction set
architecture (ISA), presenting a more convenient machine to the user, and it
manages processes, memory, information (files), and devices.
Operating systems have evolved considerably. They began with batch processing,
which is not an OS as such but a way of scheduling work by grouping similar
jobs into a batch and processing the entire batch together. After this,
operating systems evolved through several stages:
1960s: Multiprogramming and Timesharing
o Multiprogramming was introduced to utilize the CPU efficiently.
o Timesharing systems, like CTSS (1961) and Multics (1969),
allowed multiple users to interact with a single system.
1970s: Unix and Personal Computers
o Unix (1971) revolutionized OS design with simplicity,
portability, and multitasking.
o Personal computers emerged, leading to simpler OSs
like CP/M (1974) and PC-DOS (1981).
1980s: GUI and Networking
o Graphical User Interfaces (GUIs) gained popularity with
systems like Apple Macintosh (1984) and Microsoft
Windows (1985).
o Networking features, like TCP/IP in Unix, became
essential.
1990s: Linux and Advanced GUIs
o Linux (1991) introduced open-source development.
o Windows and Mac OS refined GUIs and gained
widespread adoption.
2000s-Present: Mobility and Cloud
o Mobile OSs like iOS (2007) and Android (2008) dominate.
o Cloud-based and virtualization technologies reshape
computing, with OSs like Windows Server and Linux
driving innovation.
9. Discuss the characteristics of the first-generation programming languages and
their significance in the evolution of programming systems.
A first-generation programming language (1GL) is a machine-level programming
language and belongs to the family of low-level programming languages. 1GLs
were used to program first-generation computers. Originally, no translator was
used to compile or assemble them; the instructions were entered directly
through the front-panel switches of the computer system. Instructions in a 1GL
are made of binary numbers, represented by 1s and 0s. This makes the language
directly understandable by the machine, but far more difficult for a human
programmer to read, write, and learn.
The main advantage of programming in 1GL is that the code can run very fast and
very efficiently, precisely because the instructions are executed directly by
the central processing unit (CPU). One of the main disadvantages of programming
in a low level language is that when an error occurs, the code is not as easy to fix.
First generation languages are very much adapted to a specific computer and
CPU, and code portability is therefore significantly reduced in comparison
to higher level languages.
Modern day programmers still occasionally use machine level code, especially
when programming lower level functions of the system, such
as drivers and interfaces with firmware and hardware devices. Modern tools such as
native-code compilers are used to produce machine-level code from a higher-level
language.
10. What were the main challenges faced by early programmers using machine
language?
Early programmers faced many difficulties, such as:
1. The main difficulty, if you could call it that, is how tedious and patient you
need to be to implement software in pure assembly. You don't have
any form of abstraction; you have to create your own in order to
make the task more bearable. Assembly language in and of itself is simple to
learn: it is a basic instruction followed by the operands on which the instruction
works. But there are other things to consider when writing assembly, such as
knowing how different instructions affect the FLAGS
register on a given architecture. These are just some of the things that make
programming in assembly tedious, and they are the core reasons for
the tedium.
2. There are no built-in functions (no printf, for example), no data types, and
no built-in safety restrictions. You can really break things in an
amazingly small amount of time, and it is very easy to write bad code that
leaks memory or is inefficient. It is also tedious to write more than very
short pieces of code: one or two lines of a high-level language can turn into
a couple of pages of assembly when compiled. All the magic that high-level
languages provide turns into a mountain of assembly when you read a listing.
3. The most important challenge is that you need to understand the specific
architecture. There is no single machine code, nor a universal assembly
language; each architecture has its own, so you need to know a lot in that regard.
4. On some architectures, notably RISC designs, coding in assembly (not to
mention machine code) is technically difficult. For example, MIPS processors
have branch delay slots: in order not to lose the initial stages of instruction
pipeline processing, the instruction placed immediately after a conditional jump
is executed before the branch takes effect, so the programmer must schedule
instructions around every jump. Producing efficient code by hand on such
architectures is really painful, and current compilers generally do a rather
better job.
11. Describe the syntax of an assembly language instruction and how it relates to
machine code.
An assembly program can be divided into three sections −
• The data section,
• The bss section, and
• The text section.
The data Section
The data section is used for declaring initialized data or constants. This data does
not change at runtime. You can declare various constant values, file names, or
buffer size, etc., in this section.
The syntax for declaring data section is −
section .data
The bss Section
The bss section is used for declaring variables. The syntax for declaring bss section is
−
section .bss
The text section
The text section is used for keeping the actual code. This section must begin with
the declaration global _start, which tells the kernel where the program execution
begins.
The syntax for declaring text section is −
section .text
global _start
_start:
Comments
Assembly language comment begins with a semicolon (;). It may contain any
printable character including blank. It can appear on a line by itself, like −
; This program displays a message on screen
or, on the same line along with an instruction, like −
add eax, ebx ; adds ebx to eax
Assembly Language Statements
Assembly language programs consist of three types of statements −
• Executable instructions or instructions,
• Assembler directives or pseudo-ops, and
• Macros.
The executable instructions or simply instructions tell the processor what to do.
Each instruction consists of an operation code (opcode). Each executable
instruction generates one machine language instruction.
The assembler directives or pseudo-ops tell the assembler about the various
aspects of the assembly process. These are non-executable and do not generate
machine language instructions.
Macros are basically a text substitution mechanism.
Syntax of Assembly Language Statements
Assembly language statements are entered one statement per line. Each
statement follows the following format −
[label] mnemonic [operands] [;comment]
The fields in square brackets are optional. A basic instruction has two parts:
the first is the name of the instruction (the mnemonic) to be executed,
and the second is the operands or parameters of the command.
Following are some examples of typical assembly language statements −
INC COUNT ; Increment the memory variable COUNT
MOV TOTAL, 48 ; Transfer the value 48 in the
; memory variable TOTAL
ADD AH, BH ; Add the content of the
; BH register into the AH register
AND MASK1, 128 ; Perform AND operation on the
; variable MASK1 and 128
ADD MARKS, 10 ; Add 10 to the variable MARKS
MOV AL, 10 ; Transfer the value 10 to the AL register
How it relates to machine code
Assembly language is a low-level programming language that corresponds directly
to the machine code used by a specific computer architecture.
Assembly language is a more human-readable version of machine code. It uses
symbolic representations of machine code instructions, which are translated into
binary code by an assembler. Each assembly language is specific to a particular
computer architecture, as it corresponds directly to the machine code used by that
architecture. Machine code, on the other hand, is the lowest level of programming
language, consisting of binary or hexadecimal instructions that can be directly
executed by the computer's central processing unit (CPU).
The relationship between assembly language and machine code is a close one.
Assembly language is essentially a symbolic representation of machine code,
designed to be more easily understood and manipulated by humans. Each
instruction in assembly language corresponds to a specific instruction in machine
code. For example, an assembly language instruction might tell the computer to
'load' a value into a register, while the corresponding machine code instruction
would be a specific binary or hexadecimal value that the computer's CPU
recognises as the 'load' command.
The process of translating assembly language into machine code is carried out by a
program called an assembler. The assembler takes each assembly language
instruction and converts it into the corresponding machine code instruction. This
process is known as 'assembling' the code. The resulting machine code can then be
executed directly by the computer's CPU.
In summary, assembly language and machine code are two different
representations of the same thing: the instructions that tell a computer what to do.
Assembly language is a symbolic, human-readable version of these instructions,
while machine code is the binary or hexadecimal version that can be directly
executed by the computer's CPU. The process of converting assembly language into
machine code is carried out by an assembler.
Section C:
Que.1 Discuss the impact of assembly language on software development practices
during the early stages of computer science.
Key Features of Assembly Language
Assembly language has several key features that make it an integral part of the
software development process.
1. Mnemonic instructions
Assembly language uses mnemonic instructions to represent machine code
instructions. These are short, easy-to-remember words representing specific
instructions that the computer’s processor can understand. For example, the
mnemonic ‘MOV’ stands for ‘move’ and is used to move data from one location to
another.
2. Direct access to hardware
Assembly language provides direct access to hardware resources such as the CPU,
memory, and I/O ports. This allows programmers to write code that can control
these resources directly. For instance, assembly language can be used to write a
code (i.e., a device driver) that interacts directly with a piece of hardware such as a
printer or network card.
3. Low-level abstraction
Assembly language provides a close-to-hardware abstraction of the underlying
computer system. This allows programmers to write specific code that takes
advantage of a particular hardware feature of a given computer system. For
example, assembly language can be used to write algorithms for tasks such as
sorting and searching.
4. Efficient use of resources
Assembly language programs are built for the hardware on which they run. This
allows them to use system resources such as memory and processing power
efficiently. For instance, assembly language can be used to write code that uses
memory more efficiently than higher-level languages such as
C#, JavaScript, or PHP.
5. Full control over program flow
With assembly language, programmers can gain complete control over the flow of
their programs. This allows for more fine-grained control over program execution
through constructs such as loops and conditionals. For example, assembly language
can be used to write code implementing complex logic that cannot be easily
expressed using higher-level languages such as Swift or Ruby.
6. Direct access to memory
Assembly language programs have direct access to a computer system’s memory.
This allows programmers to write code that can directly manipulate the data stored
in memory. For instance, assembly language can be used to write code that
implements complex data structures such as linked lists and binary trees.
7. Better control over CPU
Assembly language provides better control over the CPU, allowing programmers to
write code that can perform operations such as setting flags and manipulating
registers directly. This level of control can be important for tasks such as systems
programming, where it is necessary to interact directly with the operating system
and the CPU.
Advantages of Assembly Language
Assembly language can facilitate fast and efficient code writing. Although coding in
assembly language is quite complex, the language is much more flexible than other
high-level languages.
Here are some of the key benefits of assembly language.
1. Display flexibility
Assembly language provides a high degree of flexibility in displaying data on the
screen, thanks to its data-stream commands, wide screens, and cursor-dependent
functions.
Data-stream commands are used to write data to the screen in real-time. This allows
assembly language programs to display information as it is generated without the
need to store it in memory first. For example, a program might use data-stream
commands to display the output of a sensor reading or the results of a calculation.
Wide screens refer to displays with a large number of pixels or columns. Assembly
language provides the ability to control each pixel or column on the screen,
allowing programmers to create custom graphics and user interfaces. Wide screens
are particularly useful in applications such as video games or multimedia
presentations.
Cursor-dependent functions are used to control the position of the cursor on the
screen. This allows assembly language programs to create user interfaces with
menus, buttons, and other interactive elements. For example, a programmer can
use cursor-dependent functions to create a menu allowing users to select different
options.
2. Specific data handling
Assembly language provides powerful tools to handle special data scenarios, such
as managing reentrancy into global data structures or complex functions at
operator logoff.
Reentrant code can be safely called by multiple threads or processes simultaneously
without interfering with each other. In the context of assembly language, this means
that multiple programs or processes can execute the same code simultaneously
without causing conflicts. This is particularly useful for updating global data structures
shared across multiple programs or processes. Assembly language provides powerful
synchronization primitives such as semaphores and locks that can be used to ensure
that multiple programs or processes can access global data structures safely and
without conflicts.
Complex functions at operator logoff or abend-reinstatement refer to situations
where a program must execute complex code when the user logs off or an error
occurs. In these situations, assembly language provides a way to save the program’s
state and resume execution later. This is accomplished using interrupts and signal
handlers, which allow the program to handle unexpected events and take
appropriate action. For example, the program will save its state when the user logs
off and resume execution when the user logs back in.
3. Access to privileged functions
Assembly language supports privileged functions, such as access to system
macros, by providing instructions that can only be executed in privileged mode.
Macros are pre-defined sets of instructions that a program can call. They are often
used to simplify programming tasks and increase code reusability.
Assembly language provides access to system macros only available in privileged
mode, allowing programmers to perform tasks such as system calls, memory
allocation, and process management. By providing access to these macros,
assembly language enables programmers to develop low-level software with direct
access to system resources and can perform privileged operations.
4. Interaction with other commands
Assembly language supports interaction with other commands, such as examining
the status of or waiting on asynchronous or timed events, by providing instructions
that allow the programmer to control the flow of the program based on specific
conditions. For example, this language provides instructions that can check the
status of input/output (I/O) operations and wait for those operations to be
completed before proceeding with the program.
The language also provides instructions that allow the programmer to delay the
execution of the program for a specified period, which is useful for handling timed
events. This is often done using interrupts, which are signals the system uses to
communicate with devices and other programs.
Assembly language provides instructions that allow a programmer to enable or
disable interrupts and handle interrupt requests when they occur. This allows the
program to interact flexibly and responsively with other commands, such as I/O
operations or timed events, making it well-suited for developing low-level software
that requires direct access to hardware and system resources.
Que.2 . How do modern programming systems differ from the early systems in terms
of complexity and functionality?
The further back you go, the less abstract it was.
Earliest computers needed custom circuits built. They were later configurable to a
degree using patch cords - physical cables plugged into jack sockets.
Program source code started out on punch cards and paper tape, where you can
see the physical binary pattern of commands. It was only later where that gave way
to typewriter style keyboards with letters on. Today, voice input can work.
The programming languages themselves started out as primitive lists of machine-
level operations. The story of languages has been to raise the abstraction level
closer to how humans think about a problem. Less about how the hardware
works, and more about modelling the problem at hand.

Low-level programming? Not that different. Specific details differ. A modern CPU has a much more extensive, more
streamlined instruction set than an early stored program computer. The programmer
does not need to worry that much about memory and execution speed since
modern hardware is more than adequate for most tasks, even when the code is
terribly inefficient. And it is much easier to write programs using fast, reliable SSDs,
hard drives and cloud storage than messing with punch cards or similar archaic
storage mediums. But the basic algorithms remain the same, the basic challenges
remain the same when you write code in assembler or a relatively “low-level”
programming language like C. There are only so many ways you can handle an
interrupt, move bytes through an I/O port, or solve a system of partial differential
equations numerically.

On the other hand, application programming has become
conceptually very different compared to the early days. What does a variable
represent, for instance? A number? A string? An array? An object with properties
and methods? How about a single variable representing an entire cloud service? A
user identity? A complex application?

On a strictly technical level, of course, there is
little difference between assigning a fancy name to a collection of subroutines or
simply referring to them as, say, subroutine 23 on magnetic drum 7. But
we think about these things differently. Instead of bits and bytes, numbers and
strings, we request, say, an authentication token that authorizes our code to perform
a specific action on behalf of a user, and then use this token when we generate a
service request. Programmers working on application code at this level do not worry
about the nitty-gritty details of how many bytes that token occupies or how the
memory it uses is freed afterwards; they worry about the token’s persistence.
Another difference is that nowadays, many applications “live” in a complex,
networked environment with code running in several different contexts: e.g.,
JavaScript code managing the UI of a Web application, server-side (e.g., PHP, C#)
code performing the application logic, database queries running against records on
a database server, while accessing cloud storage using yet another software
interface. This is quite a change from the old school, one computer, one CPU, one
user conceptual foundation that characterized personal computing decades ago,
or the time-share, batch processing computer environments of mainframes.
Yet another major difference is the emphasis on security. Back in the old days, users
were assumed to be people who had authorized access to the system and whose
objective was to make things work. So certain bugs were… acceptable. If typing too
many characters into a field crashed the application, you could just stick a note on
the computer monitor saying “Don’t type more than 20 characters in field X” and
that was it, at least until a new version was developed, possibly months if not years
later. Today? A bug like that is readily exploited by hostile individuals, cybercriminals,
you name it. Today, even a simple Web application needs to be more robust,
security-wise, than highly secure military systems decades ago.

So yes, if a
programmer, even an experienced programmer from, say, 1991 was transported to
the present and entrusted with developing even a simple Web application, he
would likely fail miserably at first. The very idea that the same PHP file contains code
that runs on the server side, code that runs on the client workstation, code that runs
on a database back-end, and perhaps even code that runs on a remote cloud
service? It would take quite some time to digest and process these things
conceptually. Meanwhile, he would have no appreciation whatsoever of the
security challenges that ubiquitous Internet connectivity imposes even on simple
solutions.

On the other hand, experienced present-day programmers can do things
that back 30, 40 years ago we never even dreamed possible. Take the venerable
Commodore-64 for instance: at the time a revolutionary machine with its reasonably
capable 8-bit processor running at 1 MHz, and its whopping 64 kilobytes of memory.
Now you wouldn’t expect a machine like that to run a windowing operating system
or full-motion video. Yet people found ways to make it happen. Let me tell you,
back in 1983 or thereabouts, when I was actively working on developing games for
the C64, no sane person thought anything like that would ever be possible on that
computer.