
Computer Organization And Design

1. Computer Abstractions and Technology


Computers have led to a third revolution for civilization, with the information revolution taking its place alongside the agricultural and the industrial revolutions.

The computer revolution continues. Each time the cost of computing improves by another factor of 10, the opportunities for computers multiply.

Today’s science fiction suggests tomorrow’s killer applications: already on their way are glasses that augment reality, the cashless society, and cars that can drive themselves.
Although a common set of hardware technologies is used in computers ranging from smart home appliances to cell phones to the largest supercomputers, these different applications have different design requirements and employ the core hardware technologies in different ways.
Computers are used in three different classes of applications:
Personal computers (PCs): A computer designed for use by an individual, usually incorporating a graphics display, a keyboard, and a mouse.
Servers: A computer used for running larger programs for multiple users, often simultaneously, and typically accessed only via a network.
Embedded computers: A computer inside another device used for running one predetermined application or collection of software.

Servers are the modern form of what were once much larger computers, and are usually accessed only via
a network. Servers are oriented to carrying large workloads, which may consist of either single complex
applications—usually a scientific or engineering application—or handling many small jobs, such as would
occur in building a large web server. These applications are usually based on software from another
source (such as a database or simulation system), but are often modified or customized for a particular
function. Servers are built from the same basic technology as desktop computers, but provide for greater
computing, storage, and input/output capacity. In general, servers also place a greater emphasis
on dependability, since a crash is usually more costly than it would be on a single user PC. Servers span
the widest range in cost and capability. At the low end, a server may be little more than a desktop computer
without a screen or keyboard and cost a thousand dollars. These low-end servers are typically used for file
storage, small business applications, or simple web serving. At the other extreme are supercomputers,
which at the present consist of tens of thousands of processors and many terabytes of memory, and cost
tens to hundreds of millions of dollars. Supercomputers are usually used for high-end scientific and
engineering calculations, such as weather forecasting, oil exploration, protein structure determination, and
other large-scale problems. Although such supercomputers represent the peak of computing capability,
they represent a relatively small fraction of the servers and a relatively small fraction of the overall
computer market in terms of total revenue

supercomputer: A class of computers with the highest performance and cost; they are configured as
servers and typically cost tens to hundreds of millions of dollars.

Embedded computers: Embedded computers include the microprocessors found in your car, the
computers in a television set, and the networks of processors that control a modern airplane or cargo ship.
Embedded computing systems are designed to run one application or one set of related applications that
are normally integrated with the hardware and delivered as a single system; thus, despite the large number
of embedded computers, most users never really see that they are using a computer!
terabyte (TB): Originally 1,099,511,627,776 (2^40) bytes, although communications and secondary storage systems developers started using the term to mean 1,000,000,000,000 (10^12) bytes. To reduce confusion, we now use the term tebibyte (TiB) for 2^40 bytes, defining terabyte (TB) to mean 10^12 bytes. Figure 1.1 shows the full range of decimal and binary values and names.
Personal mobile devices (PMDs) are small wireless devices to connect to the Internet; they rely on
batteries for power, and software is installed by downloading apps. Conventional examples are smart
phones and tablets.
Cloud Computing refers to large collections of servers that provide services over the Internet; some
providers rent dynamically varying numbers of servers as a utility.
Software as a Service (SaaS) delivers software and data as a service over the Internet, usually via a thin program such as a browser that runs on local client devices, instead of binary code that must be installed, and runs wholly on that device. Examples include web search and social networking.

Replacing the PC is the personal mobile device (PMD). PMDs are battery operated with wireless connectivity to the Internet and typically cost hundreds of dollars, and, like PCs, users can download software (“apps”) to run on them. Unlike PCs, they no longer have a keyboard and mouse, and are more likely to rely on a touch-sensitive screen or even speech input. Today’s PMD is a smart phone or a tablet computer, but tomorrow it may include electronic glasses. Figure 1.2 shows the rapid growth of tablets and smart phones versus that of PCs and traditional cell phones. Taking over from the traditional server is Cloud Computing, which relies upon giant data centers that are now known as Warehouse Scale Computers (WSCs). Companies like Amazon and Google build these WSCs containing 100,000 servers and then let companies rent portions of them so that they can provide software services to PMDs without having to build WSCs of their own. Indeed, Software as a Service (SaaS) deployed via the cloud is revolutionizing the software industry just as PMDs and WSCs are revolutionizing the hardware industry. Today’s software developers will often have a portion of their application that runs on the PMD and a portion that runs in the Cloud.

Programmers interested in performance now need to understand the issues that have replaced the simple
memory model of the 1960s: the parallel nature of processors and the hierarchical nature of memories.
Moreover, as we explain in Section 1.7, today’s programmers need to worry about energy efficiency of their
programs running either on the PMD or in the Cloud, which also requires understanding what is below your
code. Programmers who seek to build competitive versions of software will therefore need to increase their
knowledge of computer organization.

Multicore microprocessor: A microprocessor containing multiple processors (“cores”) in a single integrated circuit.

The performance of a program depends on a combination of the effectiveness of the algorithms used in the program, the software systems used to create and translate the program into machine instructions, and the effectiveness of the computer in executing those instructions, which may include input/output (I/O) operations. A table on page 9 of the text summarizes how the hardware and software affect performance.

Eight Great Ideas in Computer Architecture


Design for Moore’s Law
The one constant for computer designers is rapid change, which is driven largely by Moore’s Law. It states that integrated circuit resources double every 18–24 months. Moore’s Law resulted from a 1965 prediction of such growth in IC capacity made by Gordon Moore, one of the founders of Intel. As computer designs can take years, the resources available per chip can easily double or quadruple between the start and finish of the project. Like a skeet shooter, computer architects must anticipate where the technology will be when the design finishes rather than design for where it starts.

Use Abstraction to Simplify Design


A major productivity technique for hardware and software is to use abstractions to represent the design at different levels of representation; lower-level details are hidden to offer a simpler model at higher levels.

Make the Common Case Fast

Making the common case fast will tend to enhance performance better than optimizing the rare case. Ironically, the common case is often simpler than the rare case and hence is often easier to enhance.

Performance via Parallelism


Since the dawn of computing, computer architects have offered designs that get more performance by
performing operations in parallel.

Performance via Pipelining


A particular pattern of parallelism is so prevalent in computer architecture that it merits its own name: pipelining.

Performance via Prediction


In some cases it can be faster on average to guess and start working rather than wait until you know for sure,
assuming that the mechanism to recover from a misprediction is not too expensive and your prediction is
relatively accurate.
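As a rough illustration (my own sketch, not from the text), the C program below shows prediction at work in a processor's branch predictor: counting the elements above a threshold is typically faster on sorted data, because the outcome of the if() becomes easy to guess, while random data causes frequent mispredictions that must be undone. Exact timings depend on the machine, and an aggressive compiler may replace the branch with a conditional move.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 24)

/* Count elements above a threshold; the if() is the branch being predicted. */
static long count_above(const int *v, int n, int threshold) {
    long count = 0;
    for (int i = 0; i < n; i++)
        if (v[i] > threshold)
            count++;
    return count;
}

static int cmp_int(const void *a, const void *b) {
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

int main(void) {
    int *v = malloc(N * sizeof *v);
    if (!v) return 1;
    for (int i = 0; i < N; i++) v[i] = rand() & 0xFF;   /* random values 0..255 */

    clock_t t0 = clock();
    long r1 = count_above(v, N, 127);                   /* random order: mispredicts often */
    clock_t t1 = clock();

    qsort(v, N, sizeof *v, cmp_int);                    /* sort so the branch is predictable */
    clock_t t2 = clock();
    long r2 = count_above(v, N, 127);                   /* sorted order: predicts well */
    clock_t t3 = clock();

    printf("random: %.3fs  sorted: %.3fs  (counts %ld %ld)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t3 - t2) / CLOCKS_PER_SEC, r1, r2);
    free(v);
    return 0;
}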

Hierarchy of Memories
Programmers want memory to be fast, large, and cheap, as memory speed often shapes performance, capacity limits the size of problems that can be solved, and the cost of memory today is often the majority of computer cost. Architects have found that they can address these conflicting demands with a hierarchy of memories, with the fastest, smallest, and most expensive memory per bit at the top of the hierarchy and the slowest, largest, and cheapest per bit at the bottom.
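A small C sketch (my own illustration, not from the text) makes the hierarchy visible: summing a large matrix row by row walks through memory sequentially and stays in the small, fast caches near the top of the hierarchy, while summing it column by column jumps across memory and keeps falling through to the slower, larger levels below. Timings vary by machine and compiler.

#include <stdio.h>
#include <time.h>

#define N 4096
static double a[N][N];                   /* 128 MB: far larger than any cache */

int main(void) {
    /* Touch every element once so page faults don't distort the timings. */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            a[i][j] = 1.0;

    double sum = 0.0;
    clock_t t0 = clock();
    for (int i = 0; i < N; i++)          /* row-major: sequential addresses,  */
        for (int j = 0; j < N; j++)      /* good use of the caches at the top */
            sum += a[i][j];
    clock_t t1 = clock();

    for (int j = 0; j < N; j++)          /* column-major: each access jumps   */
        for (int i = 0; i < N; i++)      /* N*8 bytes, so caches help little  */
            sum += a[i][j];
    clock_t t2 = clock();

    printf("row-major:    %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("column-major: %.3f s\n", (double)(t2 - t1) / CLOCKS_PER_SEC);
    printf("sum = %.0f\n", sum);         /* use the sum so it isn't optimized away */
    return 0;
}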

Dependability via Redundancy


Since any physical device can fail, we make systems dependable by including redundant components that can take over when a failure occurs and to help detect failures.

Below Your Program


The hardware in a computer can only execute extremely simple low-level instructions. To go from a complex
application to the simple instructions involves several layers of software that interpret or translate high-level
operations into simple computer instructions, an example of the great idea of abstraction (see Figure 1.3).

There are many types of systems software, but two types of systems software are central to every computer
system today: an operating system and a compiler. An operating system interfaces between a user’s program
and the hardware and provides a variety of services and supervisory functions. Among the most
important functions are:
■ Handling basic input and output operations
■ Allocating storage and memory
■ Providing for protected sharing of the computer among multiple applications using it simultaneously.
Examples of operating systems in use today are Linux, iOS, and Windows.

systems software: Software that provides services that are commonly useful, including operating systems, compilers, loaders, and assemblers.
operating system: Supervising program that manages the resources of a computer for the benefit of the programs that run on that computer.

compiler: A program that translates high-level language statements into assembly language statements.
Compilers perform another vital function: the translation of a program written in a high-level language, such as C, C++, Java, or Visual Basic, into instructions that the hardware can execute. Given the sophistication of modern programming languages and the simplicity of the instructions executed by the hardware, the translation from a high-level language program to hardware instructions is complex.
From a High-Level Language to the Language of Hardware
To actually speak to electronic hardware, you need to send electrical signals. The easiest signals for computers to understand are on and off, and so the computer alphabet is just two letters. The two symbols for these two letters are the numbers 0 and 1, and we commonly think of the computer language as numbers in base 2, or binary numbers. We refer to each “letter” as a binary digit or bit. Computers are slaves to our commands, which are called instructions. Instructions, which are just collections of bits that the computer understands and obeys, can be thought of as numbers. For example, the bits below tell one computer to add two numbers:
1000110010100000
binary digit: Also called a bit. One of the two numbers in base 2 (0 or 1) that are the components of information.

instruction: A command that computer hardware understands and obeys.
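To drive home that an instruction is just a number, here is a minimal C sketch (my own illustration): it reads the bit pattern shown above as a base-2 number and prints its decimal and hexadecimal forms. Which operation those bits actually name depends on the instruction set of the particular computer.

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* The bit pattern from the example above, written out as text. */
    const char *bits = "1000110010100000";

    /* Interpret the text as a base-2 number; to the hardware it is only a number. */
    long value = strtol(bits, NULL, 2);

    printf("binary : %s\n", bits);
    printf("decimal: %ld\n", value);     /* prints 36000 */
    printf("hex    : 0x%lX\n", value);   /* prints 0x8CA0 */
    return 0;
}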


The first programmers communicated to computers in binary numbers, but this was so tedious that they quickly invented new notations that were closer to the way humans think. At first, these notations were translated to binary by hand, but this process was still tiresome. Using the computer to help program the computer, the pioneers invented programs to translate from symbolic notation to binary. The first of these programs was named an assembler.

assembler: A program that translates a symbolic version of instructions into the binary version.

For example, the programmer would write


add A,B
and the assembler would translate this notation into
1000110010100000

The name coined for this symbolic language, still used today, is assembly language. In contrast, the binary language that the machine understands is the machine language.

assembly language: A symbolic representation of machine instructions.


machine language: A binary representation of machine instructions.

Although a tremendous improvement, assembly language is still far from the notations a scientist might like to use to simulate fluid flow or that an accountant might use to balance the books. Assembly language requires the programmer to write one line for every instruction that the computer will follow, forcing the programmer to think like the computer.
Programmers today owe their productivity—and their sanity—to the creation of high-level programming languages and compilers that translate programs in such languages into instructions (see Figure 1.4).
high-level programming language: A portable language such as C, C++, Java, or Visual Basic that is composed of words and algebraic notation that can be translated by a compiler into assembly language.

A compiler enables a programmer to write this high-level language expression:


A+B
The compiler would compile it into this assembly language statement:
add A,B
As shown above, the assembler would translate this statement into the binary instructions that tell the
computer to add the two numbers A and B.
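The same pipeline can be watched on a real machine. The small C function below is a stand-in for the A + B example (my own sketch); compiling it with a command such as gcc -S add.c stops after the compiler step and leaves the assembly language in add.s, which the assembler would then turn into binary machine instructions. The exact instructions emitted depend on the compiler and the target instruction set; the commented MIPS-style line is only one plausible translation, not the guaranteed output.

/* add.c -- inspect the compiler's output with: gcc -S add.c */
int add(int a, int b) {
    return a + b;               /* the high-level A + B from the text */
}

/*
 * One plausible MIPS-style translation (illustrative only):
 *     add $v0, $a0, $a1       # return value = first argument + second argument
 * The assembler then encodes this line as a binary machine instruction.
 */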
High-level programming languages offer several important benefits. First, they allow the programmer to think in a more natural language, using English words and algebraic notation, resulting in programs that look much more like text than like tables of cryptic symbols. Moreover, they allow languages to be designed according to their intended use. Hence, Fortran was designed for scientific computation, Cobol for business data processing, Lisp for symbol manipulation, and so on.
The second advantage of programming languages is improved programmer productivity. One of the few areas of widespread agreement in software development is that it takes less time to develop programs when they are written in languages that require fewer lines to express an idea. Conciseness is a clear advantage of high-level languages over assembly language.
The final advantage is that programming languages allow programs to be independent of the computer on which they were developed, since compilers and assemblers can translate high-level language programs to the binary instructions of any computer.
Under the Covers
input device: A mechanism through which the computer is fed information, such as a keyboard.
output device: A mechanism that conveys the result of a computation to a user, such as a display, or to another computer.
The underlying hardware in any computer performs the same basic functions: inputting data, outputting
data, processing data, and storing data
Two key components of computers are input devices, such as the microphone, and output devices, such as the speaker. As the names suggest, input feeds the computer, and output is the result of computation sent to the user. Some devices, such as wireless networks, provide both input and output to the computer.
The five classic components of a computer are input, output, memory, datapath, and control, with the last two sometimes combined and called the processor. Figure 1.5 shows the standard organization of a computer. This organization is independent of hardware technology: you can place every piece of every computer, past and present, into one of these five categories.

FIGURE 1.5 The organization of a computer, showing the five classic components. The processor
gets instructions and data from memory. Input writes data to memory, and output reads data from memory.
Control sends the signals that determine the operations of the datapath, memory, input, and output.
