Chapter 2
Data Representation in Computer Systems
Chapter 2 Objectives
2.1 Introduction
2.2 Positional Numbering Systems
The decimal number 947 breaks down as: 9 x 10² + 4 x 10¹ + 7 x 10⁰
The decimal number 5836.47 breaks down as: 5 x 10³ + 8 x 10² + 3 x 10¹ + 6 x 10⁰ + 4 x 10⁻¹ + 7 x 10⁻²
2.2 Positional Numbering Systems
The binary number 11001₂ converts to decimal as: 1 x 2⁴ + 1 x 2³ + 0 x 2² + 0 x 2¹ + 1 x 2⁰ = 16 + 8 + 0 + 0 + 1 = 25
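Positional evaluation is mechanical enough to express in a few lines of code. Below is a minimal Python sketch (the function name and argument layout are illustrative, not from the text):

```python
def positional_value(digits, radix, frac_digits=()):
    """Evaluate a positional numeral: sum of digit x radix^position."""
    value = 0
    for power, d in enumerate(reversed(digits)):
        value += d * radix ** power           # integer part: radix^0, radix^1, ...
    for power, d in enumerate(frac_digits, start=1):
        value += d * radix ** -power          # fraction part: radix^-1, radix^-2, ...
    return value

print(positional_value([9, 4, 7], 10))                # 947
print(positional_value([5, 8, 3, 6], 10, (4, 7)))     # 5836.47 (within float precision)
print(positional_value([1, 1, 0, 0, 1], 2))           # 25
```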
2.3 Decimal to Binary Conversions
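A minimal sketch of the usual division-remainder method for converting decimal integers to binary (an assumption that this is the method the section uses; the function name is illustrative):

```python
def to_binary(n):
    """Convert a non-negative decimal integer to a binary string by
    repeated division by 2: each remainder is the next bit,
    least significant first."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        n, r = divmod(n, 2)   # quotient carries on; remainder is the next bit
        bits.append(str(r))
    return "".join(reversed(bits))

print(to_binary(25))   # 11001
```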
2.4 Signed Integer Representation
Signed Magnitude System
2.4 Signed Integer Representation
Let’s see how the addition rules work with signed magnitude numbers . . .
2.4 Signed Integer Representation
• Example:
– Using signed magnitude binary arithmetic, find the sum of 75 and 46.
• First, convert 75 and 46 to binary, and arrange as a sum, but separate the (positive) sign bits from the magnitude bits.
2.4 Signed Integer Representation
• Example:
– Using signed magnitude binary arithmetic, find the sum of 75 and 46.
• Just as in decimal arithmetic, we find the sum starting with the rightmost bit and work left.
2.4 Signed Integer Representation
• Example:
– Using signed magnitude binary arithmetic, find the sum of 75 and 46.
• In the second bit, we have a carry, so we note it above the third bit.
2.4 Signed Integer Representation
• Example:
– Using signed magnitude binary arithmetic, find the sum of 75 and 46.
• The third and fourth bits also give us carries.
2.4 Signed Integer Representation
• Example:
– Using signed magnitude binary arithmetic, find the sum of 75 and 46.
• Once we have worked our way through all eight bits, we are done.
In this example, we were careful to pick two values whose sum would fit into seven bits. If that is not the case, we have a problem, as the next example shows.
2.4 Signed Integer Representation
• Example:
– Using signed magnitude binary arithmetic, find the sum of 107 and 46.
• We see that the carry from the seventh bit overflows and is discarded, giving us the erroneous result: 107 + 46 = 25.
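Both of the preceding examples can be reproduced with a short sketch of signed magnitude addition for like signs, using an 8-bit format with 1 sign bit and 7 magnitude bits (the helper below is hypothetical, not from the text):

```python
def sm_add_same_sign(a, b):
    """Add two signed magnitude values with the same sign.
    The sign bit is set aside and only the 7 magnitude bits are added."""
    sign = 0 if a >= 0 else 1
    mag = abs(a) + abs(b)            # binary addition of the magnitude bits
    if mag > 0b1111111:              # carry out of the seventh bit: overflow
        mag &= 0b1111111             # the carry is discarded
        print("overflow! result is erroneous")
    return -mag if sign else mag

print(sm_add_same_sign(75, 46))    # 121 (fits in seven bits)
print(sm_add_same_sign(107, 46))   # overflow: erroneously yields 25
```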
2.4 Signed Integer Representation
• The sign of the result gets the sign of the number with the larger magnitude.
2.4 Signed Integer Representation
Complement system
2.4 Signed Integer Representation
We note that +19 in one's complement is 00010011, so −19 in one's complement is its bitwise complement: 11101100.
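Flipping every bit is a single XOR against an all-ones mask. A minimal sketch (the helper name is illustrative):

```python
def ones_complement_negate(pattern, bits=8):
    """Negate a one's complement value by flipping every bit."""
    return pattern ^ ((1 << bits) - 1)     # XOR with 11111111 flips all bits

neg19 = ones_complement_negate(0b00010011)   # +19
print(f"{neg19:08b}")                        # 11101100, i.e. -19
```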
2.4 Signed Integer Representation
• Example:
– Using two's complement binary arithmetic, find the sum of 107 and 46.
• We see that the nonzero carry from the seventh bit overflows into the sign bit, giving us the erroneous result: 107 + 46 = −103.
Rule for detecting two's complement overflow: When the "carry in" and the "carry out" of the sign bit differ, overflow has occurred.
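The carry-in/carry-out rule can be checked mechanically. A sketch for 8-bit operands (inputs are non-negative bit patterns; a negative operand would be passed as its two's complement pattern; the function is illustrative):

```python
def twos_add(a, b, bits=8):
    """Add two bit patterns, detecting overflow by comparing the carry
    into the sign bit with the carry out of the sign bit."""
    mask = (1 << bits) - 1
    low = (a & (mask >> 1)) + (b & (mask >> 1))   # sum of the bits below the sign
    carry_in = low >> (bits - 1)                  # carry into the sign bit
    total = a + b
    carry_out = total >> bits                     # carry out of the sign bit
    overflow = carry_in != carry_out
    return total & mask, overflow

# 107 + 46 overflows into the sign bit:
result, ovf = twos_add(107, 46)
print(f"{result:08b}", ovf)   # 10011001 True  (reads as -103)
```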
2.5 Floating-Point Representation
• Example:
– Express 32₁₀ in the simplified 14-bit floating-point model.
• We know that 32 is 2⁵. So in (binary) scientific notation 32 = 1.0 x 2⁵ = 0.1 x 2⁶.
• Using this information, we put 110₂ (= 6₁₀) in the exponent field and 1 in the significand, as shown.
2.5 Floating-Point Representation
• Example:
– Express 32₁₀ in the revised 14-bit floating-point model.
• We know that 32 = 1.0 x 2⁵ = 0.1 x 2⁶.
• To use our excess 16 biased exponent, we add 16 to 6, giving 22₁₀ (= 10110₂).
2.5 Floating-Point Representation
• Example:
– Express 0.0625₁₀ in the revised 14-bit floating-point model.
• We know that 0.0625 is 2⁻⁴. So in (binary) scientific notation 0.0625 = 1.0 x 2⁻⁴ = 0.1 x 2⁻³.
• To use our excess 16 biased exponent, we add 16 to −3, giving 13₁₀ (= 01101₂).
2.5 Floating-Point Representation
• Example:
– Express −26.625₁₀ in the revised 14-bit floating-point model.
• We find 26.625₁₀ = 11010.101₂. Normalizing, we have: 26.625₁₀ = 0.11010101 x 2⁵.
• To use our excess 16 biased exponent, we add 16 to 5, giving 21₁₀ (= 10101₂). We also need a 1 in the sign bit.
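Putting the three examples together, here is a sketch that packs a nonzero value into the revised 14-bit model: 1 sign bit, a 5-bit excess-16 exponent, and an 8-bit significand normalized to 0.1xxxxxxx (the packing function is illustrative, and it truncates rather than rounds):

```python
def encode_14bit(value, bias=16, sig_bits=8):
    """Encode a nonzero value as sign | excess-16 exponent | significand,
    normalized so the first bit after the binary point is 1."""
    sign = 0 if value >= 0 else 1
    mag, exp = abs(value), 0
    while mag >= 1.0:                 # shift right until mag < 1
        mag /= 2.0
        exp += 1
    while mag < 0.5:                  # shift left until the leading bit is 1
        mag *= 2.0
        exp -= 1
    significand = int(mag * (1 << sig_bits))   # first 8 fraction bits (truncated)
    return f"{sign:01b} {exp + bias:05b} {significand:08b}"

print(encode_14bit(32))        # 0 10110 10000000
print(encode_14bit(0.0625))    # 0 01101 10000000
print(encode_14bit(-26.625))   # 1 10101 11010101
```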
2.5 Floating-Point Representation
• Example:
– Find the sum of 12₁₀ and 1.25₁₀ using the 14-bit floating-point model.
• We find 12₁₀ = 0.1100 x 2⁴, and 1.25₁₀ = 0.101 x 2¹ = 0.000101 x 2⁴.
• Thus, our sum is 0.110101 x 2⁴.
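The key step is aligning the smaller exponent to the larger one by shifting its significand right. A sketch over significand/exponent pairs (illustrative helper; renormalization of an overflowing sum is omitted for brevity):

```python
def fp_add(sig_a, exp_a, sig_b, exp_b):
    """Add two model numbers by shifting the smaller operand's significand
    right until both exponents match, then adding the aligned significands."""
    if exp_a < exp_b:                    # make operand a the larger exponent
        sig_a, exp_a, sig_b, exp_b = sig_b, exp_b, sig_a, exp_a
    sig_b >>= (exp_a - exp_b)            # align: each shift loses one low bit
    return sig_a + sig_b, exp_a

# 12 = .11000000 x 2^4, 1.25 = .10100000 x 2^1
sig, exp = fp_add(0b11000000, 4, 0b10100000, 1)
print(f".{sig:08b} x 2^{exp}")           # .11010100 x 2^4  (= 13.25)
```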
2.5 Floating-Point Representation
• Example:
– Find the product of 12₁₀ and 1.25₁₀ using the 14-bit floating-point model.
• We find 12₁₀ = 0.1100 x 2⁴, and 1.25₁₀ = 0.101 x 2¹.
• Thus, our product is 0.0111100 x 2⁵ = 0.1111 x 2⁴, giving an excess 16 exponent of 20₁₀ = 10100₂.
2.5 Floating-Point Representation
• For example, if our model stores 128.5 as 128, the relative error is: (128.5 − 128) / 128 ≈ 0.39%
• If we had a procedure that repetitively added 0.5 to 128.5, we would have an error of nearly 2% after only four iterations.
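The compounding error can be simulated by truncating every intermediate sum to the model's 8-bit significand, as the hardware would (a sketch; the truncation helper is an illustration, not the text's code):

```python
import math

def truncate_to_model(value, sig_bits=8):
    """Keep only the first 8 significand bits of a positive value,
    as the 14-bit model would."""
    exp = math.floor(math.log2(value)) + 1         # exponent with significand < 1
    scale = 2.0 ** (sig_bits - exp)
    return math.floor(value * scale) / scale       # drop bits beyond the 8th

stored, true = truncate_to_model(128.5), 128.5
for _ in range(4):
    true += 0.5
    stored = truncate_to_model(stored + 0.5)       # the 0.5 is lost every time
print(stored, true, f"error = {(true - stored) / true:.1%}")  # 128.0 130.5 error = 1.9%
```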
2.5 Floating-Point Representation
Experienced programmers know that it's better for a program to crash than to have it produce incorrect, but plausible, results.
2.6 Character Codes
2.7 Codes for Data Recording and Transmission
• When character codes or numeric values are stored in computer memory, their values are unambiguous.
• This is not always the case when data is stored on magnetic disk or transmitted over a distance of more than a few feet.
– Owing to the physical irregularities of data storage and transmission media, bytes can become garbled.
• Data errors are reduced by use of suitable coding methods as well as through the use of various error-detection techniques.
2.7 Codes for Data Recording and Transmission
• To transmit data, pulses of "high" and "low" voltage are sent across communications media.
• To store data, changes are induced in the magnetic polarity of the recording medium.
– These polarity changes are called flux reversals.
• The period of time during which a bit is transmitted, or the area of magnetic storage within which a bit is stored, is called a bit cell.
2.7 Codes for Data Recording and Transmission
• The simplest data recording and transmission code is the non-return-to-zero (NRZ) code.
• NRZ encodes 1 as "high" and 0 as "low."
• The coding of OK (in ASCII) is shown below.
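A sketch of NRZ over the ASCII bits of OK ('O' = 01001111, 'K' = 01001011), modeling the signal as one level per bit cell:

```python
def nrz(bits):
    """NRZ: 1 is 'high', 0 is 'low' for the whole bit cell."""
    return ["high" if b == "1" else "low" for b in bits]

ok_bits = "".join(f"{ord(ch):08b}" for ch in "OK")
print(ok_bits)            # 0100111101001011
print(nrz(ok_bits))
```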
2.7 Codes for Data Recording and Transmission
• The problem with NRZ code is that long strings of zeros and ones cause synchronization loss.
• Non-return-to-zero-invert (NRZI) reduces this synchronization loss by providing a transition (either low-to-high or high-to-low) for each binary 1.
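NRZI can be sketched as a level that toggles on every 1 and holds on every 0 (the starting level here is an arbitrary assumption):

```python
def nrzi(bits, level="low"):
    """NRZI: invert the current level for each 1; hold it for each 0."""
    out = []
    for b in bits:
        if b == "1":
            level = "high" if level == "low" else "low"   # transition marks a 1
        out.append(level)
    return out

print(nrzi("0100111101001011"))   # "OK" in ASCII
```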
2.7 Codes for Data Recording and Transmission
• Although it prevents loss of synchronization over long strings of binary ones, NRZI coding does nothing to prevent synchronization loss within long strings of zeros.
• Manchester coding (also known as phase modulation) prevents this problem by encoding a binary one with an "up" transition and a binary zero with a "down" transition.
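Manchester coding can be sketched with two half-cells per bit, so every bit cell carries a mid-cell transition (up for 1, down for 0):

```python
def manchester(bits):
    """Manchester: 1 is low->high within the cell, 0 is high->low."""
    out = []
    for b in bits:
        out += (["low", "high"] if b == "1" else ["high", "low"])
    return out

print(manchester("01001111"))   # ASCII 'O'
```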
2.7 Codes for Data Recording and Transmission
• For many years, Manchester code was the dominant transmission code for local area networks.
• It is, however, wasteful of communications capacity because there is a transition on every bit cell.
• A more efficient coding method is based upon the frequency modulation (FM) code. In FM, a transition is provided at each cell boundary. Cells containing binary ones have a mid-cell transition.
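FM is easiest to sketch as a list of transition events: one at every cell boundary, plus a mid-cell event for each 1 (the event-list representation is illustrative):

```python
def fm_transitions(bits):
    """FM: transition at each cell boundary; 1-cells also transition mid-cell."""
    events = []
    for i, b in enumerate(bits):
        events.append(f"boundary {i}")        # clocking transition, every cell
        if b == "1":
            events.append(f"mid-cell {i}")    # data transition, 1s only
    return events

print(fm_transitions("0101"))
```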
2.7 Codes for Data Recording and Transmission
• At first glance, FM is worse than Manchester code, because it requires a transition at each cell boundary. If we could eliminate some of these transitions, we would have a more economical code.
• Modified FM (MFM) does just this. It provides a cell boundary transition only when adjacent cells contain zeros.
• An MFM cell containing a binary one has a transition in the middle, as in regular FM.
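MFM keeps the mid-cell transition for 1s but emits a boundary transition only between two adjacent 0 cells. A sketch in the same event-list form (the assumption about the cell preceding the first bit is arbitrary):

```python
def mfm_transitions(bits):
    """MFM: mid-cell transition for each 1; boundary transition only
    when the previous and current cells both hold 0."""
    events, prev = [], "1"        # assume the preceding cell held a 1
    for i, b in enumerate(bits):
        if b == "0" and prev == "0":
            events.append(f"boundary {i}")
        if b == "1":
            events.append(f"mid-cell {i}")
        prev = b
    return events

print(mfm_transitions("01001011"))   # fewer transitions than FM for the same bits
```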
2.7 Codes for Data Recording and Transmission
• The main challenge for data recording and transmission is how to retain synchronization without chewing up more resources than necessary.
• Run-length-limited (RLL) is a coding method specifically designed to reduce the number of consecutive ones and zeros.
– Some extra bits are inserted into the code.
– But even with these extra bits, RLL is remarkably efficient.
2.7 Codes for Data Recording and Transmission
• An RLL(d,k) code dictates a minimum of d and a maximum of k consecutive zeros between any pair of consecutive ones.
– RLL(2,7) has been the dominant disk storage coding method for many years.
• An RLL(2,7) code contains more bit cells than its corresponding ASCII or EBCDIC character.
• However, the coding method allows bit cells to be smaller, and thus closer together, than in MFM or any other code.
2.7 Codes for Data Recording and Transmission
• The RLL(2,7) coding for OK is shown below, compared to MFM. The RLL code (bottom) contains 25% fewer transitions than the MFM code (top).
The details as to how this code is derived are given in the text.
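The derivation is in the text; as an illustration only, the commonly published RLL(2,7) translation table can be applied as a prefix substitution. Treat the table below as an assumption, since the slide defers the details to the text:

```python
# A commonly published RLL(2,7) table: variable-length data groups map
# to codewords twice their length, keeping 2-7 zeros between ones.
RLL27 = {
    "10":   "0100",
    "11":   "1000",
    "000":  "000100",
    "010":  "100100",
    "011":  "001000",
    "0010": "00100100",
    "0011": "00001000",
}

def rll27_encode(bits):
    out, i = [], 0
    while i < len(bits):
        for length in (2, 3, 4):                 # match the next data group
            group = bits[i:i + length]
            if group in RLL27:
                out.append(RLL27[group])
                i += length
                break
        else:
            raise ValueError("input not partitionable; padding needed")
    return "".join(out)

print(rll27_encode("0100111101001011"))   # ASCII "OK"
```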
2.8 Error Detection and Correction
• In modulo 2 arithmetic, addition produces no carries: 0 + 0 = 0, 0 + 1 = 1, 1 + 0 = 1, and 1 + 1 = 0.
You will fully understand why modulo 2 arithmetic is so handy after you study digital circuits in Chapter 3.
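Modulo 2 addition is exactly XOR, and the long division built on it is the heart of cyclic redundancy checks. A minimal sketch (the generator polynomial chosen here, X³ + X + 1 = 1011, is illustrative):

```python
def mod2_divide(dividend, divisor):
    """Mod 2 long division on bit strings: at each step, XOR the divisor
    against the leading bits instead of subtracting (no borrows)."""
    rem = list(dividend)
    for i in range(len(dividend) - len(divisor) + 1):
        if rem[i] == "1":                        # divisor 'goes into' this prefix
            for j, d in enumerate(divisor):
                rem[i + j] = "0" if rem[i + j] == d else "1"   # bitwise XOR
    return "".join(rem)[-(len(divisor) - 1):]    # remainder: last n-1 bits

print(mod2_divide("10010110", "1011"))   # 001
```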
Chapter 2 Conclusion