
THEORY OF COMPUTING

Compiled by Joseph A. Erho

Computer Science Department, Niger Delta University

August, 2023

1.0 General Introduction


Theory of computation traditionally centers on three areas: automata, computability, and
complexity. They are linked by the question:
What are the fundamental capabilities and limitations of computers?
This question goes back to the 1930s when mathematical logicians first began to explore the
meaning of computation. Technological advances since that time have greatly increased our
ability to compute and have brought this question out of the realm of theory into the world of
practical concern.
In each of the three areas—automata, computability, and complexity—this question is
interpreted differently, and the answers vary according to the interpretation. Let us start with
Complexity theory.

Complexity Theory
Computer problems come in different varieties; some are easy, and some are hard. For example,
the sorting problem is an easy one. Even a small computer can sort a million numbers rather
quickly. Compare that to finding a schedule of classes for the entire university to satisfy some
reasonable constraints, such as that no two classes take place in the same room at the same time.
Finding the best schedule for just a thousand classes may require centuries, even with a
supercomputer. This raises a central question of complexity theory.
What makes some problems computationally hard and others easy?
In one important achievement of complexity theory thus far, researchers have discovered an
elegant scheme for classifying problems according to their computational difficulty. Using this
scheme, we can demonstrate a method for giving evidence that certain problems are
computationally hard, even if we are unable to prove that they are.
You have several options when you confront a problem that appears to be computationally hard.
First, by understanding which aspect of the problem is at the root of the difficulty, you may be
able to alter it so that the problem is more easily solvable. Second, you may be able to settle for
less than a perfect solution to the problem. In certain cases, finding solutions that only
approximate the perfect one is relatively easy. Third, some problems are hard only in the worst
case situation, but easy most of the time. Depending on the application, you may be satisfied with
a procedure that occasionally is slow but usually runs quickly. Finally, you may consider
alternative types of computation, such as randomized computation, that can speed up certain
tasks.
One applied area that has been affected directly by complexity theory is the ancient field of
cryptography. Cryptography is unusual because it specifically requires computational problems
that are hard, rather than easy. Secret codes should be hard to break without the secret key or
password.

Computability Theory
Mathematicians long ago discovered that certain basic problems cannot be solved by computers. One example of this phenomenon is the problem of determining whether a mathematical statement is true or false. This task is the bread and butter of mathematicians. It seems a natural candidate for solution by computer because it lies strictly within the realm of mathematics. But no computer algorithm can perform this task: no algorithm can mechanically carry out the proof process for arbitrary statements.
Among the consequences of this profound result was the development of ideas concerning
theoretical models of computers that eventually would help lead to the construction of actual
computers.
The theories of computability and complexity are closely related. In complexity theory, the
objective is to classify problems as easy ones and hard ones; whereas in computability theory,
the classification of problems is by those that are solvable and those that are not. Computability
theory introduces several of the concepts used in complexity theory.

AUTOMATA THEORY
Automata theory deals with the definitions and properties of mathematical models of
computation. These models play a role in several applied areas of computer science. One model,
called the finite automaton, is used in text processing, compilers, and hardware design. Another
model, called the context-free grammar, is used in programming languages and artificial
intelligence.
Automata theory is an excellent place to begin the study of the theory of computation. The
theories of computability and complexity require a precise definition of a computer. Automata
theory allows practice with formal definitions of computation as it introduces concepts relevant to other, nontheoretical areas of computer science.

1.2 MATHEMATICAL NOTIONS


Theory of computing is basically mathematical. As such, we begin with a discussion of the basic
mathematical objects, tools, and notation that we expect to use.
Sets
A set is a group of objects represented as a unit. The objects in a set are called its elements or
members. Sets may be described formally in several ways.
e.g. S = {7, 21, 57}
For membership, we say 7 ∈ S. But where an element (say 8) is not a member of a set, we write 8 ∉ S. For two sets A and B, we say that A is a subset of B, written A ⊆ B, if every member of
A also is a member of B. We say that A is a proper subset of B, written A ⊊ B, if A is a subset
of B and not equal to B.
The order of elements of a set doesn’t matter, nor does repetition of its members. We get the
same set S by writing {57, 7, 7, 7, 21}. If we do want to take the number of occurrences of
members into account, we call the group a multiset instead of a set. Thus {7} and {7, 7} are
different as multisets but identical as sets. An infinite set contains infinitely many elements. We
cannot write a list of all the elements of an infinite set, so we sometimes use the “. . .” notation to
mean “continue the sequence forever.” For example, {1, 2, 3, ...} and {..., −3, −2, −1, 0, 1, 2, 3, ...} are infinite sets.
The set with zero members is called the empty set and is written ∅. A set with one member is
sometimes called a singleton set, and a set with two members is called an unordered pair.
Sometimes we describe a set containing elements according to some rule, as {n | rule about n} or {w | something about w}. This notation is called a set former (also known as set-builder notation).
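Set-former notation has a direct analogue in many programming languages. The following is a minimal illustrative sketch in Python (the sets and rules shown are made-up examples, not taken from these notes):

    S = {7, 21, 57}

    # {n | n is in S and n > 10}
    big = {n for n in S if n > 10}               # {21, 57}

    # {w | w is a string of length 2 over {0, 1}}
    pairs = {a + b for a in "01" for b in "01"}  # {'00', '01', '10', '11'}

    print(big, pairs)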

Operations on Sets
For any two given sets A and B, the union of A and B, written A ∪ B, is the set we get by combining all the elements of A and B into a single set. As usual, repeated elements are not allowed. The intersection of A and B, written A ∩ B, is the set of elements that are in both A and B. The complement of A, written Ā, is the set of all elements under consideration that are not in A.
Let the set A = {a, b, c}. The power set of a set A, denoted 2^A, is the set of all subsets of A, including the empty set. For example, 2^{a, b, c} = {∅, {a}, {b}, {c}, {a, b}, {a, c}, {b, c}, {a, b, c}}. The notation 2^A for the power set is a reminder that it has 2^|A| elements. Notice that each subset S of the set A can be represented by a binary n-tuple (b₁, b₂, …, b_|A|), where bᵢ is 1 if the i-th element of A is in S and 0 otherwise. Since there are 2^|A| ways to assign 0's and 1's to (b₁, b₂, …, b_|A|), 2^A has 2^|A| elements. Recall that |A| = 3 here; if A is finite, then |2^A| = 2^|A|. For example, each of the subsets can be represented as follows: ∅ → (0,0,0), {c} → (0,0,1), {b} → (0,1,0), {b, c} → (0,1,1), {a} → (1,0,0), {a, c} → (1,0,1), {a, b} → (1,1,0), {a, b, c} → (1,1,1). So there are 8 ways of arranging the binary (base 2) n-tuples, which is equivalent to 2³ ≡ 2^|A|.
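The subset/bitstring correspondence can be made concrete in code. Below is a minimal illustrative Python sketch (not part of the original notes) that enumerates 2^A for A = {a, b, c} by running through all 2^|A| binary n-tuples:

    A = ['a', 'b', 'c']
    n = len(A)

    power_set = []
    for mask in range(2 ** n):                      # one mask per binary n-tuple
        bits = [(mask >> i) & 1 for i in range(n)]  # (b1, ..., bn)
        subset = {A[i] for i in range(n) if bits[i] == 1}
        power_set.append(subset)

    print(len(power_set))   # 8, i.e. 2**3 == 2**len(A)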
Pictures help clarify a concept, and so a Venn diagram is often used to represent a set. It represents sets as regions enclosed by circular lines. The operations on sets can easily be visualised with Venn diagrams.

Sequences and Tuples


A sequence of objects is a list of these objects in some order. We usually designate a sequence
by writing the list within parentheses. For example, the sequence 7, 21, 57 would be written
(7, 21, 57).
Recall that order doesn’t matter in a set, but in a sequence it does. Hence (7, 21, 57) is not the
same as (57, 7, 21). Similarly, repetition does matter in a sequence, but it doesn’t matter in a set.
Thus (7, 7, 21, 57) is different from both of the other sequences, whereas the set {7, 21, 57} is
identical to the set {7, 7, 21, 57}. As with sets, sequences may be finite or infinite. Finite
sequences often are called tuples. A sequence with k elements is a k-tuple. Thus (7, 21, 57) is a
3-tuple. A 2-tuple is also called an ordered pair.
Sets and sequences may appear as elements of other sets and sequences. For example, the power set of A described above is the set of all subsets of A. If A is the set {0, 1}, then the power set of A is { ∅, {0}, {1}, {0, 1} }. The set of all ordered pairs whose elements are 0s and 1s is { (0, 0), (0, 1), (1,
0), (1, 1) }. If A and B are two sets, the Cartesian product or cross product of A and B, written A
× B, is the set of all ordered pairs wherein the first element is a member of A and the second
element is a member of B.
Example
If A = {1, 2} and B = {x, y, z}, then
A × B = { (1, x), (1, y), (1, z), (2, x), (2, y), (2, z) }.
We can also take the Cartesian product of k sets, A₁, A₂, …, Aₖ, written A₁ × A₂ × … × Aₖ. It is the set consisting of all k-tuples (a₁, a₂, …, aₖ) where aᵢ ∈ Aᵢ.
If we take the Cartesian product of a set A with itself k times, we use the shorthand
A × A × … × A = Aᵏ.
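Cartesian products are easy to compute mechanically. The sketch below is illustrative only (Python, using the standard itertools.product) and reproduces the A × B example together with the k-fold product A³:

    from itertools import product

    A = {1, 2}
    B = {'x', 'y', 'z'}

    # A × B: all ordered pairs (a, b) with a in A and b in B
    print(sorted(product(A, B)))
    # [(1, 'x'), (1, 'y'), (1, 'z'), (2, 'x'), (2, 'y'), (2, 'z')]

    # A^3: the Cartesian product of A with itself, three times
    print(len(list(product(A, repeat=3))))   # 8 = 2**3 triples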

1.3 Functions and Relations


A relation R: X ⟶ Y from X to Y is any collection of ordered pairs (x, y) ∈ X × Y, that is, any subset of X × Y. A relation f: X ⟶ Y is a function if f(x) contains at most one element for each x ∈ X, that is, each input produces a unique output or no output at all.
Functions are central to mathematics. A function is an object that sets up an input–output relationship. A function takes an input and produces an output. In every function, the same input always produces the same output. If f is a function whose output value is b ∈ B when the input value is a ∈ A, we write
f(a) = b.
Note: there is a difference between f and f(a). f is the function or transformation, i.e. the collection of pairs (a, b) ∈ f defined by the transformation. That is, f is like a machine: what is fed into it is the a and what comes out is the f(a), otherwise written as b. A function is also called a mapping, and, if f(a) = b, we say that f maps a to b.
The set of possible inputs to the function is called its domain, written D(f). The outputs of a function come from a set called its range, written R(f). The notation for saying that f is a function with domain D and range R is
f: D ⟶ R.
For example, the addition (+) function takes two integer inputs and produces an integer, written as
add: ℤ × ℤ ⟶ ℤ.
So, the domain is the set of pairs of integers ℤ × ℤ and the range is ℤ. Note that a function may not necessarily use all the elements of the specified range. The function abs, i.e. absolute value (|x|), given as
        x    if x > 0,
|x| =   0    if x = 0,
        −x   if x < 0,
abs: ℤ ⟶ ℤ, never takes on the value −1 even though −1 ∈ ℤ. A function that does use all the elements of the range is said to be onto the range.
We may describe a specific function in several ways. One way is with a procedure for computing an output from a specified input. Another way is with a table that lists all possible inputs and gives the output for each input.
Consider the function f: {0, 1, 2, 3, 4} ⟶ {1, 2, 3, 4, 0}.

n    f(n)
0    1
1    2
2    3
3    4
4    0

This function adds 1 to its input and then outputs the result modulo 5. A number modulo m is the
remainder after division by m.
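The table view and the procedural view describe the same function. As a minimal illustrative sketch (Python, not part of the original notes):

    def f(n):
        """Add 1 to the input and return the result modulo 5."""
        return (n + 1) % 5

    for n in range(5):
        print(n, f(n))   # reproduces the table: 0 1, 1 2, 2 3, 3 4, 4 0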

Sometimes the domain of a function is the Cartesian product of several sets. The add (+) example add: ℤ × ℤ ⟶ ℤ above has as its domain the Cartesian product of two sets. It can be represented in tabular form as


add 0 1 2 3 4
0 0 1 2 3 4
1 1 2 3 4 5
2 2 3 4 5 6
3 3 4 5 6 7
4 4 5 6 7 8

If we label the rows and columns by i and j, then the entry at row i and column j in the table is the value of the function add(i, j).

When the domain of a function f is A₁ × A₂ × … × Aₖ, for some sets A₁, A₂, …, Aₖ, the input to f is a k-tuple (a₁, a₂, …, aₖ) and we call the aᵢ the arguments to f. A function with k arguments is
called a k-ary function, and k is called the arity of the function. If k is 1, f has a single argument
and f is called a unary function. If k is 2, f is a binary function. Certain familiar binary functions
are written in a special infix notation, with the symbol for the function placed between its two
arguments, rather than in prefix notation, with the symbol preceding. For example, the addition
function add usually is written in infix notation with the + symbol between its two arguments as
in a + b instead of in prefix notation add(a, b).
A predicate or property is a function whose range is {TRUE, FALSE}. For example, let even be
a property that is TRUE if its input is an even number and FALSE if its input is an odd number.
Thus even(4) = TRUE and even(5) = FALSE.
A property whose domain is a set of k-tuples A×· · ·×A is called a relation, a k-ary relation, or a
k-ary relation on A. A common case is a 2-ary relation, called a binary relation. When writing
an expression involving a binary relation, we customarily use infix notation. For example, “less
than” is a relation usually written with the infix operation symbol <. “Equality”, written with the
= symbol, is another familiar relation. If R is a binary relation, the statement aRb means that aRb
= TRUE. Similarly, if R is a k-ary relation, the statement R(a1, . . . , ak) means that R(a1, . . . ,
ak) = TRUE.
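Predicates and relations are simply TRUE/FALSE-valued functions, which makes them easy to model in code. A minimal illustrative Python sketch (not from the notes):

    def even(n):
        """Property: TRUE if n is even, FALSE if n is odd."""
        return n % 2 == 0

    def less_than(a, b):
        """The binary relation 'less than' in prefix form; a < b is the infix form."""
        return a < b

    print(even(4), even(5))         # True False
    print(less_than(3, 7), 3 < 7)   # True True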

1.4 Definitions, Theorems, and Proofs


Theorems and proofs are the heart and soul of mathematics and definitions are its spirit. These
three entities are central to every mathematical subject, including Theory of Computation.
Definitions describe the objects and notions that we use. A definition may be simple, as in the
definition of set, or complex as in the definition of security in a cryptographic system. Precision
is essential to any mathematical definition. When defining some object, we must make clear
what constitutes that object and what does not.
After we have defined various objects and notions, we usually make mathematical statements
about them. Typically, a statement expresses that some object has a certain property. The
statement may or may not be true; but like a definition, it must be precise. No ambiguity about its
meaning is allowed.
A proof is a convincing logical argument that a statement is true. In mathematics, an argument
must be airtight; that is, convincing in an absolute sense. In everyday life or in the law, the
standard of proof is lower. A murder trial demands proof “beyond any reasonable doubt.” The
weight of evidence may compel the jury to accept the innocence or guilt of the suspect.
However, evidence plays no role in a mathematical proof. A mathematician demands proof
beyond any doubt.
A theorem is a mathematical statement proved true. Generally we reserve the use of that word
for statements of special interest. Occasionally we prove statements that are interesting only
because they assist in the proof of another, more significant statement. Such statements are called
lemmas. Occasionally a theorem or its proof may allow us to conclude easily that other, related
statements are true. These statements are called corollaries of the theorem.
Finding Proofs
The only way to determine the truth or falsity of a mathematical statement is with a mathematical
proof. Unfortunately, finding proofs isn’t always easy. It can’t be reduced to a simple set of rules
or processes. During this course, you will be asked to present proofs of various statements. Don’t
despair at the prospect! Even though no one has a recipe for producing proofs (and indeed no computer algorithm can produce them in general), some helpful general strategies are available.
First, carefully read the statement you want to prove. Do you understand all the notation?
Rewrite the statement in your own words. Break it down and consider each part separately.
If you are still stuck trying to prove a statement, try something easier. Attempt to prove a special
case of the statement. For example, if you are trying to prove that some property is true for every
k > 0, first try to prove it for k = 1. If you succeed, try it for k = 2, and so on until you can
understand the more general case. If a special case is hard to prove, try a different special case or
perhaps a special case of the special case.
Finally, when you believe that you have found the proof, you must write it up properly. A well-
written proof is a sequence of statements, wherein each one follows by simple reasoning from
previous statements in the sequence. Carefully writing a proof is important, both to enable a
reader to understand it, and for you to be sure that it is free from errors.
The following are a few tips for producing a proof.
• Be patient. Finding proofs takes time. If you don’t see how to do it right away, don’t worry.
Researchers sometimes work for weeks or even years to find a single proof.
• Come back to it. Look over the statement you want to prove, think about it a bit, leave it,
and then return a few minutes or hours later. Let the unconscious, intuitive part of your
mind have a chance to work.
• Be neat. When you are building your intuition for the statement you are trying to prove, use
simple, clear pictures and/or text. You are trying to develop your insight into the
statement, and sloppiness gets in the way of insight. Furthermore, when you are
writing a solution for another person to read, neatness will help that person understand
it.
• Be concise. Brevity helps you express high-level ideas without getting lost in details. Good
mathematical notation is useful for expressing ideas concisely. But be sure to include
enough of your reasoning when writing up a proof so that the reader can easily
understand what you are trying to say.

Types of Proof
Several types of arguments arise frequently in mathematical proofs. Here, we describe a few that
often occur in the theory of computation. Note that a proof may contain more than one type of
argument because the proof may contain within it several different subproofs.

Proof by Construction
Many theorems state that a particular type of object exists. One way to prove such a theorem is
by demonstrating how to construct the object. This technique is a proof by construction.
Let’s use a proof by construction to prove the following theorem. We define a graph to be k-regular if every node in the graph has degree k.
Theorem: For each even number n greater than 2, there exists a 3-regular graph with n nodes. The proof constructs such a graph explicitly: place the n nodes on a circle, connect each node to its two neighbours on the circle, and also connect each node to the node directly opposite it; every node then has degree exactly 3.

Proof by Contradiction
In one common form of argument for proving a theorem, we assume that the theorem is false and
then show that this assumption leads to an obviously false consequence, called a contradiction.
We use this type of reasoning frequently in everyday life.

Proof by Induction
Proof by induction is an advanced method used to show that all elements of an infinite set have a
specified property. For example, we may use a proof by induction to show that an arithmetic
expression computes a desired quantity for every assignment to its variables, or that a program
works correctly at all steps or for all inputs.

2.0 Automata Theory


We now continue our discussion of the theory of computation, starting with automata theory.
2.1 The Central Concepts of Automata Theory
In this section we shall introduce the most important definitions of terms that pervade the theory of automata. These concepts include the "alphabet" (a set of symbols), "strings" (a list of symbols from an alphabet) and "language" (a set of strings).

2.1.1 Alphabets
An alphabet is a finite, nonempty set of symbols. Conventionally, we use the capital Greek
letters Σ and Γ to designate alphabets and a typewriter font for symbols from an alphabet. As a set, the members of the alphabet are the symbols of the alphabet. Common alphabets include:

1. Σ = {0, 1}, the binary alphabet;
2. Σ = {a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p, q, r, s, t, u, v, w, x, y, z}, the set of all lower-case letters;
3. Γ = {0, 1, x, y, z}, the set of binary digits and 3 lower-case letters;
4. the set of all ASCII characters, or the set of all printable ASCII characters.

2.1.2 Strings
A string over an alphabet (sometimes called a word or sentence) is a finite sequence of symbols chosen from that alphabet. For example, 001101 is a string chosen from the binary alphabet Σ = {0, 1}, and aaabcctsmn is a string over the alphabet Σ = {a, b, c, ..., z}, etc.
A string with zero occurrences of symbols is known as the empty string. This string, denoted λ or ε, is a string that may be chosen from any alphabet whatsoever. The empty string plays a role analogous to that of 0 in the number system.
The length of a string is the number of positions for symbols in the string. It is not strictly correct to say that the string length is the number of distinct symbols in the string. For instance, the string aaabccdbaa uses only 4 distinct symbols (a, b, c, d), which is different from the string length: the length is 10 because there are 10 positions for symbols. String length is denoted by |w|, where w is the string. So, if w = aaabccdbaa, |w| = 10. If w has length n, we can write w = w₁w₂…wₙ, where each wᵢ ∈ Σ. The reverse of w, written wᴿ, is the string obtained by writing w in the opposite order (i.e., wₙwₙ₋₁…w₁). E.g., let w = computer; then wᴿ = retupmoc. If w = wᴿ, w is called a palindrome. Let w = radar; then w is a palindrome. Other examples of palindromes include: madam am adam (written as one word), 11100111, bob, deed, rotator, etc.
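Reversal and the palindrome test translate directly into code; the following is a minimal illustrative Python sketch (not part of the original notes):

    def reverse(w):
        """Return w reversed, i.e. w written in the opposite order."""
        return w[::-1]

    def is_palindrome(w):
        """A string is a palindrome when it equals its own reverse."""
        return w == reverse(w)

    print(reverse("computer"))        # retupmoc
    print(is_palindrome("radar"))     # True
    print(is_palindrome("11100111"))  # True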

String z is a substring of w if z appears consecutively within w. For example, cad is a substring of abracadabra. If x, y and w are strings and w = xy, then x is a prefix of w and y is a suffix of w. A proper prefix of w is a prefix that is not equal to λ or w. (Similarly for a proper suffix.)

Powers of an Alphabet

Given an alphabet Σ, we can express the set of all strings of a particular length from the alphabet by using an exponential notation. We define Σᵏ to be the set of strings of length k, each of whose symbols is in Σ.
Examples: let Σ = {0, 1}. Then Σ⁰ = {λ}, Σ¹ = {0, 1}, Σ² = {00, 01, 10, 11}, Σ³ = {000, 001, 010, 011, 100, 101, 110, 111}.
Note that there is a difference between Σ and Σ¹: Σ is an alphabet containing 2 symbols, but Σ¹ is a set of 2 strings of length 1 each.
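The powers Σᵏ can be generated mechanically. The sketch below is illustrative only (Python, using the standard itertools module) and lists Σ⁰ through Σ³ for the binary alphabet:

    from itertools import product

    def power(sigma, k):
        """Return the set of all strings of length k over the alphabet sigma."""
        return {''.join(t) for t in product(sigma, repeat=k)}

    sigma = {'0', '1'}
    for k in range(4):
        print(k, sorted(power(sigma, k)))
    # 0 ['']                        (the empty string, lambda)
    # 1 ['0', '1']
    # 2 ['00', '01', '10', '11']
    # 3 ['000', '001', ..., '111']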

Union of Powers of an Alphabet

We can also compute the union of the powers of an alphabet. From the example above, we form
Σ¹ ∪ Σ² = {0, 1, 00, 01, 10, 11}
Σ¹ ∪ Σ² ∪ Σ³ = {0, 1, 00, 01, 10, 11, 000, 001, 010, 011, 100, 101, 110, 111}
In this way, we can find the union of all non-zero powers of an alphabet. Such a union is denoted by Σ⁺:
Σ⁺ = Σ¹ ∪ Σ² ∪ Σ³ ∪ …
Clearly, the empty string λ (i.e. the set {λ}) is not in Σ⁺. But if we add Σ⁰ = {λ}, we obtain what is known as the universal set, denoted Σ*. Thus, Σ* = Σ⁺ ∪ Σ⁰ = Σ⁺ ∪ {λ}.

Concatenation of Strings
Let x and y be strings. Then xy denotes the concatenation (or product) of x and y, that is, the string formed by making a copy of x and following it by a copy of y. More precisely, if x is the string composed of i symbols x = a₁a₂…aᵢ and y is the string composed of j symbols y = b₁b₂…bⱼ, then xy is the string of length i + j, xy = a₁a₂…aᵢb₁b₂…bⱼ. So, |xy| = |x| + |y|.
String concatenation satisfies the associativity and identity properties. Let Σ be an alphabet; for all strings x, y and z over Σ, x(yz) = (xy)z (associativity). Since λx = xλ = x, we say λ is the identity of concatenation. Concatenation can also be expressed as a power of a string. If x is a string over Σ, then x⁰ = λ, the empty string, and for all n ≥ 1, xⁿ = xxⁿ⁻¹. For example,
(ab)³ = ababab
(ab)² = abab
(ab)¹ = ab
(ab)⁰ = λ
We can determine the string length as |xⁿ| = n·|x|. Here, xⁿ is the n-th power of x, and for all n, m ≥ 0, xⁿxᵐ = xⁿ⁺ᵐ = xᵐxⁿ, i.e. concatenating powers of the same string is commutative.
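These identities are easy to check in code; a minimal illustrative Python sketch (not from the notes):

    def string_power(x, n):
        """x**0 is the empty string; x**n is x concatenated with x**(n-1)."""
        return "" if n == 0 else x + string_power(x, n - 1)

    x, y = "ab", "cd"
    print(len(x + y) == len(x) + len(y))            # |xy| = |x| + |y|
    print(string_power(x, 3))                       # ababab
    print(len(string_power(x, 3)) == 3 * len(x))    # |x^n| = n * |x|
    print(string_power(x, 2) + string_power(x, 3)
          == string_power(x, 5))                    # x^n x^m = x^(n+m)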

Equality of Strings
Two strings x and y over an alphabet Σ are equal, written x = y, if they have the same length and corresponding positions are occupied by the same symbols. For example, let x = abd and y = abd be two strings; then x = y. But for x = abd and y = dba; x = ab and y = abd; or x = abc and y = abd; we have x ≠ y (i.e. x and y are not equal).

2.1.3 Languages
Strings of characters are fundamental building blocks in computer science. For example, programming languages are defined from strings of characters. What then is a language? A set of strings, all of which are chosen from some Σ*, where Σ is a particular alphabet, is called a language. If Σ is an alphabet and L ⊆ Σ*, then L is a language (or formal language) over Σ. In other words, a formal language is a subset of the universal language. It means that a language over Σ need not include strings containing all the symbols of Σ. So, once we have established that L is a language over Σ, we also know it is a language over any alphabet that is a superset of Σ.
Generally, common languages can be viewed as sets of strings, such as the English language and
programming languages. The symbols of the strings are drawn from a given alphabet. But we
also have abstract languages.
Examples:
- the language of all strings consisting of n 0's followed by n 1's, for some n ≥ 0: {λ, 01, 0011, 000111, ...}
- {λ, 01, 10, 0011, 0101, 1001, ...}, the language of strings with an equal number of 0's and 1's
- the set of binary numbers whose value is prime: {10, 11, 101, 111, 1011, ...}
- Σ*, which is a language for any alphabet Σ
- ∅, the empty language, which is a language over any alphabet
- {λ}, the language consisting of only the empty string, which is also a language over any alphabet.
Notice that ∅ ≠ {λ}: ∅ has no strings, but {λ} has exactly one string, namely the empty string.

Kleene closure
A new language can be formed from other languages by using product and closure. Let Σ be an alphabet. If L₁, L₂ ⊆ Σ*, then the product L₁L₂ is defined by
L₁L₂ = {w₁w₂ | w₁ ∈ L₁ and w₂ ∈ L₂}.
If L ⊆ Σ*, then the set L* is a new language defined by
L* = {w₁w₂…wₙ | n ≥ 0 and w₁, w₂, …, wₙ ∈ L}.
This is known as the star or Kleene closure. It is a closure because all the words are formed from the same language L. We can also state it in terms of the powers Lⁿ, defined by L⁰ = {λ} and Lⁿ⁺¹ = LⁿL for all n ≥ 0; then L* is the union of Lⁿ over all n ≥ 0, i.e. L* = L⁰ ∪ L¹ ∪ L² ∪ L³ ∪ …

Examples: Let A = {a, b, c, d}. Then, using language products to form new languages,
{a, b}{c} = {ac, bc}
{a, b}{c, d} = {ac, ad, bc, bd}
and using language closure to form new languages,
{a}* = {λ, a, a², a³, …}
{a, d}* = {λ, a, d, a², ad, da, d², a³, a²d, ad², d³, …}
i.e. {a, d}* = {a, d}⁰ ∪ {a, d}¹ ∪ {a, d}² ∪ {a, d}³ ∪ …
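Language products and finite approximations of the Kleene closure are easy to compute; the sketch below is illustrative Python only (L* itself is infinite, so only L⁰ ∪ … ∪ Lᵏ is built):

    def lang_product(L1, L2):
        """L1 L2 = { w1 w2 | w1 in L1 and w2 in L2 }."""
        return {w1 + w2 for w1 in L1 for w2 in L2}

    def kleene_up_to(L, k):
        """L^0 union L^1 union ... union L^k, a finite approximation of L*."""
        result, power = {""}, {""}           # L^0 = {lambda}
        for _ in range(k):
            power = lang_product(power, L)   # L^(n+1) = L^n L
            result |= power
        return result

    print(sorted(lang_product({"a", "b"}, {"c", "d"})))  # ['ac', 'ad', 'bc', 'bd']
    print(sorted(kleene_up_to({"a", "d"}, 2)))           # ['', 'a', 'aa', 'ad', 'd', 'da', 'dd']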
Next is to discuss regular languages. But before doing so, we introduce two models for
describing regular languages - finite automata and regular expressions.

2.2 Finite Automata


A finite automaton (plural: finite automata) is a theoretical model of a mechanism for describing regular languages. It is a "machine", equipped with states, which takes as input a string of symbols from some alphabet. The reading of each symbol causes some change to take place in the machine. When all symbols have been read, the state in which the machine stops determines whether the string has been accepted or not. Hence, the output has just two possibilities: "acceptance" or "rejection".
The machine uses a finite set of states.
- The purpose of the states is to remember the relevant portion of the system's history. Since there are only a finite number of states, the entire history cannot be remembered. So, the finite automaton must be designed carefully to remember what is important and forget what is not.
- The advantage of having a limited number of states is that we can implement the system with a fixed set of resources. For example, an electric switch has 2 states (OFF and ON) and 2 categories of inputs (push on and push off); see the diagram below. There may be infinitely many pushes, but only 2 states are needed to remember their effect. If the switch is OFF, a push will change the state to ON, and vice versa.

push on

OFF ON

push off

2.2.1 Deterministic Finite Automata

Finite automata are classified into deterministic (DFA) and nondeterministic (NFA). A DFA is a quintuple or 5-tuple A = (Q, Σ, δ, q₀, F) where
1. Q is a finite nonempty set whose members are called the states of the automaton;
2. Σ is a finite nonempty set called the alphabet of the automaton;
3. δ is a map from Q × Σ to Q called the transition function of the automaton;
4. q₀ is a member of Q and is called the initial state;
5. F is a nonempty subset of Q whose members are called terminal states or accepting states.
The function δ describes the change of state when a single symbol is read in: if the automaton in state q reads the symbol a, then its state changes to δ(q, a). Initially, the state is q₀, and if the input word is w = a₁a₂…aₙ then, as each symbol is read, the state changes and we get q₁, q₂, …, qₙ, defined by
q₁ = δ(q₀, a₁)
q₂ = δ(q₁, a₂)
q₃ = δ(q₂, a₃)
⋮
qₙ = δ(qₙ₋₁, aₙ)
It is worthwhile to extend the second argument of δ from a single symbol to a word, so that we can write qₙ = δ̂(q₀, w). Notice that the function changes from δ to δ̂. The function δ̂ is the extended transition function, constructed from δ; it describes the transition to a destination state along a path, on reading a whole word as input instead of a single symbol. Also, in place of δ: Q × Σ → Q, we write δ̂: Q × Σ* → Q, defining the function by
δ̂(q, λ) = q for all q ∈ Q
δ̂(q, wa) = δ(δ̂(q, w), a) for all q ∈ Q, w ∈ Σ*, a ∈ Σ.
Suppose we have A = (Q, Σ, δ, q₀, F). We say that a word w ∈ Σ* is accepted (or recognized) by the automaton A if δ̂(q₀, w) ∈ F; otherwise it is said to be rejected. The set of all words accepted by A is called the language accepted by A and will be denoted by L(A). Thus,
L(A) = {w ∈ Σ* | δ̂(q₀, w) ∈ F}.
If w = a₁a₂…aₙ ∈ Σ* and δ̂(p, w) = q, then there are states c₁, c₂, …, cₙ with c₁ = δ(p, a₁), c₂ = δ(c₁, a₂), c₃ = δ(c₂, a₃), ..., cₙ = δ(cₙ₋₁, aₙ) = q. We say that the states p and q are connected by a path with label w and write
p -a₁-> c₁ -a₂-> c₂ -a₃-> … -aₙ-> q, or simply p -w-> q.
In particular, a word w is accepted by the automaton if and only if there is a path from the initial state to a final state with label w. We call such a path an accepting path.
Example. Let A be the automaton (Q, Σ, δ, q₀, F) where Q = {q₀, q₁, q₂}, Σ = {0, 1}, F = {q₂} and the transition function δ is given by
δ(q₀, 0) = q₁, δ(q₀, 1) = q₀
δ(q₁, 0) = q₁, δ(q₁, 1) = q₂
δ(q₂, 0) = q₂, δ(q₂, 1) = q₂
A transition function δ can also be given in tabular form, known as a transition table. The transition function above is shown as a table below.
      0    1
q₀    q₁   q₀
q₁    q₁   q₂
q₂    q₂   q₂
This automaton accepts exactly those strings over {0, 1} that contain 01 as a substring; for example, δ̂(q₀, 0011) = q₂ ∈ F, so 0011 ∈ L(A).
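A DFA and its extended transition function δ̂ are straightforward to simulate; the following is a minimal illustrative Python sketch (not part of the original notes) using the transition table of the example above:

    # DFA A = (Q, Sigma, delta, q0, F) from the example above.
    delta = {
        ("q0", "0"): "q1", ("q0", "1"): "q0",
        ("q1", "0"): "q1", ("q1", "1"): "q2",
        ("q2", "0"): "q2", ("q2", "1"): "q2",
    }
    q0, F = "q0", {"q2"}

    def delta_hat(q, w):
        """Extended transition function: delta_hat(q, lambda) = q and
        delta_hat(q, wa) = delta(delta_hat(q, w), a)."""
        for a in w:
            q = delta[(q, a)]
        return q

    def accepts(w):
        """w is in L(A) iff delta_hat(q0, w) lands in an accepting state."""
        return delta_hat(q0, w) in F

    print(accepts("0011"))   # True:  0011 contains 01
    print(accepts("1100"))   # False: 1100 does not contain 01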
