Theory of Computing
August, 2023
Complexity Theory
Computer problems come in different varieties; some are easy, and some are hard. For example,
the sorting problem is an easy one. Even a small computer can sort a million numbers rather
quickly. Compare that to finding a schedule of classes for the entire university to satisfy some
reasonable constraints, such as that no two classes take place in the same room at the same time.
Finding the best schedule for just a thousand classes may require centuries, even with a
supercomputer. This raises a central question of complexity theory.
What makes some problems computationally hard and others easy?
In one important achievement of complexity theory thus far, researchers have discovered an
elegant scheme for classifying problems according to their computational difficulty. Using this
scheme, we can demonstrate a method for giving evidence that certain problems are
computationally hard, even if we are unable to prove that they are.
You have several options when you confront a problem that appears to be computationally hard.
First, by understanding which aspect of the problem is at the root of the difficulty, you may be
able to alter it so that the problem is more easily solvable. Second, you may be able to settle for
less than a perfect solution to the problem. In certain cases, finding solutions that only
approximate the perfect one is relatively easy. Third, some problems are hard only in the worst
case situation, but easy most of the time. Depending on the application, you may be satisfied with
a procedure that occasionally is slow but usually runs quickly. Finally, you may consider
alternative types of computation, such as randomized computation, that can speed up certain
tasks.
One applied area that has been affected directly by complexity theory is the ancient field of
cryptography. Cryptography is unusual because it specifically requires computational problems
that are hard, rather than easy. Secret codes should be hard to break without the secret key or
password.
Computability Theory
Mathematicians long ago discovered that certain basic problems cannot be solved by
computers. One example of this phenomenon is the problem of determining whether a
mathematical statement is true or false. This task is the bread and butter of mathematicians. It
seems a natural candidate for solution by computer because it lies strictly within the realm of
mathematics. But no computer algorithm can perform this task: no algorithm can decide, for
every mathematical statement, whether it is true or false.
Among the consequences of this profound result was the development of ideas concerning
theoretical models of computers that eventually would help lead to the construction of actual
computers.
The theories of computability and complexity are closely related. In complexity theory, the
objective is to classify problems as easy ones and hard ones; whereas in computability theory,
the classification of problems is by those that are solvable and those that are not. Computability
theory introduces several of the concepts used in complexity theory.
AUTOMATA THEORY
Automata theory deals with the definitions and properties of mathematical models of
computation. These models play a role in several applied areas of computer science. One model,
called the finite automaton, is used in text processing, compilers, and hardware design. Another
model, called the context-free grammar, is used in programming languages and artificial
intelligence.
Automata theory is an excellent place to begin the study of the theory of computation. The
theories of computability and complexity require a precise definition of a computer. Automata
theory allows practice with formal definitions of computation as it introduces concepts relevant
to other, nontheoretical areas of computer science.
Operations on Sets
For any given two sets A and B, the union of A and B, written A ∪ B, is the set we get by
combining all the elements in A and B into a single set. As usual, repeated elements are not
allowed. The intersection of A and B, written A ∩ B, is the set of elements that are in both A and
B. The complement of A, written Ā, is the set of all elements under consideration that are not in
A. Let the set A = {a, b, c}. The power set of a set A, denoted 2^A, is the set of all subsets of A,
including the empty set. For example, 2^{a,b,c} = {∅, {a}, {b}, {c}, {a,b}, {a,c}, {b,c}, {a,b,c}}.
The notation 2^A for the power set is a reminder that it has 2^|A| elements. Notice that each
subset S of A can be represented by a binary n-tuple (x₁, x₂, …, x_|A|), where xᵢ is 1 if the i-th
element of A is in S and 0 otherwise. Since there are 2^|A| ways to assign 0's and 1's to
(x₁, x₂, …, x_|A|), 2^A has 2^|A| elements. Recall that |A| = 3; in general, if A is finite, then
|2^A| = 2^|A|. For example, each of the subsets of A can be represented as follows: ∅ → (0,0,0),
{c} → (0,0,1), {b} → (0,1,0), {b,c} → (0,1,1), {a} → (1,0,0), {a,c} → (1,0,1), {a,b} → (1,1,0),
{a,b,c} → (1,1,1). So there are 8 ways of arranging the binary (base 2) digits, which is
2³ = 2^|A|.
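The binary-tuple view of subsets translates directly into a few lines of code. Below is a minimal
Python sketch, written for these notes rather than taken from them, that enumerates the power set
of A = {a, b, c} exactly as in the listing above (the variable names are illustrative only):

from itertools import product

A = ["a", "b", "c"]  # an ordered listing of the set A = {a, b, c}

# Each binary tuple (x1, x2, x3) selects a subset: xi = 1 keeps the i-th element of A.
power_set = []
for bits in product([0, 1], repeat=len(A)):
    subset = {elem for elem, bit in zip(A, bits) if bit == 1}
    power_set.append((bits, subset))

for bits, subset in power_set:
    print(bits, subset if subset else "∅")

# There are 2**len(A) = 8 subsets, so |2^A| = 2^|A|.
assert len(power_set) == 2 ** len(A)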
A picture helps clarify a concept, so a Venn diagram is often used to represent sets. It represents
sets as regions enclosed by circular lines, and the operations on sets can easily be visualised with
Venn diagrams.
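These operations also map directly onto Python's built-in set type. The short sketch below uses
small example values chosen here for illustration; the complement is taken relative to a universe
U of the elements under consideration:

U = {1, 2, 3, 4, 5, 6}      # the universe of elements under consideration
A = {1, 2, 3}
B = {3, 4, 5}

print(A | B)   # union A ∪ B        -> {1, 2, 3, 4, 5} (repeated elements kept only once)
print(A & B)   # intersection A ∩ B -> {3}
print(U - A)   # complement of A    -> {4, 5, 6}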
A function is an object that sets up an input-output relationship: it takes an input and produces
an output. In every function, the same input always produces the same output. If f is a function
whose output value is b ∈ B when the input value is a ∈ A, we write
f(a) = b.
Note that there is a difference between f and f(a). f is the function or transformation itself, the
set of pairs (a, b) defined by the transformation, whereas f(a) is the particular output. That is, f
is like a machine: what is fed into it is a, and what comes out is f(a), otherwise written as b. A
function is also called a mapping, and, if f(a) = b, we say that f maps a to b.
The set of possible inputs to the function is called its domain, written as D(f). The outputs of a
function come from a set called its range, written as R(f). The notation for saying that f is a
function with domain D and range R is
f: D ⟶ R.
For example, the addition function add takes two integer inputs and produces an integer,
written as
add: ℤ × ℤ ⟶ ℤ.
So, the domain is the set of pairs of integers ℤ × ℤ and the range is ℤ. Note that a function need
not use all the elements of the specified range. The absolute value function abs, given by
|x| = x if x > 0,   0 if x = 0,   −x if x < 0,
with abs: ℤ ⟶ ℤ, never takes on the value −1 even though −1 ∈ ℤ. A function that does use all
the elements of the range is said to be onto the range.
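To make "onto" concrete, here is a small Python check, using a finite slice of ℤ chosen purely for
illustration, that computes the image of abs and compares it with a declared range:

domain = range(-5, 6)            # a finite slice of Z, used only for illustration
declared_range = range(-5, 6)    # the declared range, also a finite slice of Z

image = {abs(x) for x in domain}  # the values actually taken by |x| on this domain

print(image)             # {0, 1, 2, 3, 4, 5} -- the value -1 never appears
print(-1 in image)       # False: abs does not use all elements of the declared range
print(set(declared_range) <= image)   # False, so abs is not onto this range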
We may describe a specific function in several ways. One way is with a procedure for computing
an output from a specified input. Another way is with a table that lists all possible inputs together
with the output for each input. Consider the function f: {0, 1, 2, 3, 4} ⟶ {0, 1, 2, 3, 4} given by
the table

n    f(n)
0    1
1    2
2    3
3    4
4    0
This function adds 1 to its input and then outputs the result modulo 5. A number modulo m is the
remainder after division by m.
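The table above is exactly the graph of the function n ↦ (n + 1) mod 5, so a couple of lines of
Python (written here as an illustration, not part of the notes) reproduce it:

def f(n):
    # add 1 to the input, then take the remainder after division by 5
    return (n + 1) % 5

for n in range(5):
    print(n, f(n))   # prints the same rows as the table: 0 1, 1 2, 2 3, 3 4, 4 0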
Sometimes the domain of a function is the Cartesian product of multiple sets. The add (+)
function add: ℤ × ℤ ⟶ ℤ above has a domain that is the Cartesian product of two sets. Such a
function add(i, j) can be described by a two-dimensional table in which the entry at the row
labeled i and the column labeled j is the value of add(i, j).
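For a function whose domain is a Cartesian product, the same idea gives a two-dimensional
table. The sketch below (Python, with small integer bounds picked only so the table fits on
screen) prints add(i, j) with rows labeled i and columns labeled j:

def add(i, j):
    return i + j

rows = range(0, 4)
cols = range(0, 4)

print("   " + " ".join(f"{j:2d}" for j in cols))        # column labels
for i in rows:
    line = " ".join(f"{add(i, j):2d}" for j in cols)     # entry at row i, column j is add(i, j)
    print(f"{i:2d} " + line)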
Types of Proof
Several types of arguments arise frequently in mathematical proofs. Here, we describe a few that
often occur in the theory of computation. Note that a proof may contain more than one type of
argument because the proof may contain within it several different subproofs.
Proof by Construction
Many theorems state that a particular type of object exists. One way to prove such a theorem is
by demonstrating how to construct the object. This technique is a proof by construction.
Let’s use a proof by construction to prove the following theorem: for each even number n greater
than 2, there exists a 3-regular graph with n nodes. Here we define a graph to be k-regular if
every node in the graph has degree k. A programmatic sketch of one such construction is given
below.
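The construction itself can be written as a short program. The Python sketch below uses one
standard construction (names and checks invented here for illustration): place the n nodes on a
cycle and also join each node to the node opposite it, giving every node degree 3.

def three_regular_graph(n):
    """Return the edge set of a 3-regular graph on nodes 0..n-1 (n even, n > 2)."""
    assert n > 2 and n % 2 == 0
    edges = set()
    for i in range(n):
        edges.add(frozenset({i, (i + 1) % n}))      # cycle edges: each node gets degree 2
    for i in range(n // 2):
        edges.add(frozenset({i, i + n // 2}))       # "diameter" edges: degree rises to 3
    return edges

g = three_regular_graph(8)
degrees = {v: sum(v in e for e in g) for v in range(8)}
assert all(d == 3 for d in degrees.values())        # every node has degree 3, as claimed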
Proof by Contradiction
In one common form of argument for proving a theorem, we assume that the theorem is false and
then show that this assumption leads to an obviously false consequence, called a contradiction.
We use this type of reasoning frequently in everyday life.
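A standard textbook illustration (not taken from these notes) is the argument that √2 is
irrational; in LaTeX it might be sketched as follows:

\textbf{Claim.} $\sqrt{2}$ is irrational.

\textbf{Proof.} Assume, for contradiction, that $\sqrt{2} = p/q$ for integers $p, q$ with no
common factor. Squaring gives $p^2 = 2q^2$, so $p^2$ is even and hence $p$ is even; write
$p = 2k$. Then $4k^2 = 2q^2$, so $q^2 = 2k^2$ is even and $q$ is even as well. Thus $p$ and
$q$ share the factor $2$, an obviously false consequence of the assumption. Hence $\sqrt{2}$
is irrational. $\square$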
Proof by Induction
Proof by induction is an advanced method used to show that all elements of an infinite set have a
specified property. For example, we may use a proof by induction to show that an arithmetic
expression computes a desired quantity for every assignment to its variables, or that a program
works correctly at all steps or for all inputs.
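As a worked illustration (again a standard example rather than one from these notes), induction
proves the formula for the sum of the first n positive integers:

\textbf{Claim.} For every $n \ge 1$, $\;1 + 2 + \cdots + n = \frac{n(n+1)}{2}$.

\textbf{Basis.} For $n = 1$ the left side is $1$ and the right side is $\frac{1 \cdot 2}{2} = 1$,
so the claim holds.

\textbf{Induction step.} Assume the claim holds for $n = k$. Then
\[
  1 + 2 + \cdots + k + (k+1) = \frac{k(k+1)}{2} + (k+1) = \frac{(k+1)(k+2)}{2},
\]
which is the claim for $n = k + 1$. Hence the formula holds for all $n \ge 1$.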
2.1.1 Alphabets
An alphabet is a finite, nonempty set of symbols. Conventionally, we use the capital Greek
letters Σ and Γ to designate alphabets and a typewriter font for symbols from an alphabet. As a
set, the members of the alphabet are the symbols of the alphabet. Common alphabets include the
binary alphabet Σ = {0, 1} and the alphabet of lowercase letters Σ = {a, b, c, ..., z}.
2.1.2 Strings
A string over an alphabet (sometimes called a word or sentence) is a finite sequence of
symbols chosen from that alphabet. For example, 001101 is a string over the binary alphabet
Σ = {0, 1}, and aaabcctsmn is a string over the alphabet Σ = {a, b, c, ..., z}.
A string with zero occurrences of symbols is known as the empty string. This string, denoted λ
(or sometimes ε), is a string that may be chosen from any alphabet whatsoever. The empty string
plays a role analogous to that of 0 in a number system.
The length of a string is the number of positions for symbols in the string. It is not strictly correct
to say that the string length is the number of symbols in the string. For instance, the string
aaabccdbaa has only 4 distinct symbols (a, b, c, d), but its length is 10 because there are 10
positions for symbols. String length is denoted by |w|, where w is the string. So, if
w = aaabccdbaa, |w| = 10. If w has length n, we can write w = a₁a₂…aₙ, where each aᵢ is a
symbol of the alphabet.
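In Python (a quick illustrative check, not part of the notes), the distinction between positions and
distinct symbols is exactly the difference between len(w) and the size of set(w):

w = "aaabccdbaa"

print(len(w))          # 10 positions, so |w| = 10
print(sorted(set(w)))  # only 4 distinct symbols: ['a', 'b', 'c', 'd']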
Powers of an Alphabet
For an alphabet Σ and k ≥ 0, Σ^k denotes the set of all strings of length k, each of whose
symbols is in Σ.
Examples: let Σ = {0, 1}. Then Σ^0 = {λ}, Σ^1 = {0, 1}, Σ^2 = {00, 01, 10, 11}, and
Σ^3 = {000, 001, 010, 011, 100, 101, 110, 111}.
Σ^1 ∪ Σ^2 ∪ Σ^3 = {0, 1, 00, 01, 10, 11, 000, 001, 010, 011, 100, 101, 110, 111}
In this way, we can find the union of all non-zero powers of an alphabet. Such a union is denoted
by Σ^+:
Σ^+ = Σ^1 ∪ Σ^2 ∪ Σ^3 ∪ …
Clearly, the empty string λ is not in this set. But if we add Σ^0 = {λ}, we have what is known as
the universal set, denoted by Σ*. Thus, Σ* = Σ^+ ∪ Σ^0 = Σ^+ ∪ {λ}.
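The powers Σ^k and their union are easy to enumerate for small k. The Python sketch below
(written for these notes; λ is represented as the empty string "") builds Σ^0 through Σ^3 over
Σ = {0, 1}:

from itertools import product

sigma = ["0", "1"]

def power(k):
    # Sigma^k: all strings of length k whose symbols come from sigma
    return {"".join(t) for t in product(sigma, repeat=k)}

print(power(0))   # {''}  -- the empty string, written λ in the notes
print(power(1))   # {'0', '1'}
print(power(2))   # {'00', '01', '10', '11'}

sigma_plus_up_to_3 = power(1) | power(2) | power(3)   # a finite slice of Σ^+
sigma_star_up_to_3 = sigma_plus_up_to_3 | power(0)    # adding λ gives a slice of Σ*
print(len(sigma_star_up_to_3))                        # 1 + 2 + 4 + 8 = 15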
Concatenation of String
Let x and y be strings. Then xy denotes the concatenation (or product) of x and y, that is, the
string formed by writing the symbols of x followed by the symbols of y. If |x| = m and |y| = n,
then |xy| = m + n.
Equality of string
Two strings x and y over an alphabet Σ are equal, written x = y, if they have the same length and
the same symbols in the same positions.
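Both notions correspond exactly to Python's string concatenation and equality, as this small
check with illustrative values shows:

x = "aaa"
y = "bcc"

print(x + y)                           # 'aaabcc' -- the concatenation xy
print(len(x + y) == len(x) + len(y))   # True: |xy| = |x| + |y|
print(x + y == "aaabcc")               # True: same length, same symbols in the same positions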
2.1.3 Languages
Strings of characters are fundamental building blocks in computer science. For example,
programming languages are defined from strings of characters. What then is a language? A set
of strings, all of which are chosen from some Σ*, where Σ is a particular alphabet, is called a
language. In other words, a formal language is a subset of the universal set Σ*. This means that a
language over Σ need not include strings with all the symbols of Σ. So, once we have established
that L is a language over Σ, we also know it is a language over any alphabet that is a superset
of Σ.
Generally, common languages can be viewed as sets of strings, such as the English language and
programming languages. The symbols of the strings are drawn from a given alphabet. But we
also have abstract languages.
Examples:
- the language of all strings consisting of n 0's followed by n 1's, for some n ≥ 0: {λ, 01,
0011, 000111, ...} (a membership test for this language is sketched below)
- the language of all strings with an equal number of 0's and 1's: {λ, 01, 10, 0011, 0101,
1001, ...}
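Languages are just sets of strings, so membership can be tested with a small predicate. This
Python sketch (written for these notes, with λ represented as the empty string) checks the first
example language, strings of n 0's followed by n 1's:

def in_zeros_then_ones(w):
    # True iff w = 0^n 1^n for some n >= 0 (the empty string is the case n = 0)
    n = len(w) // 2
    return len(w) % 2 == 0 and w == "0" * n + "1" * n

print([w for w in ["", "01", "0011", "10", "0101"] if in_zeros_then_ones(w)])
# ['', '01', '0011']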
Kleene closure
A new language can be formed from other languages by using product and closure. Let Σ be an
alphabet. If L₁, L₂ ⊆ Σ*, then the product L₁L₂ is defined by
L₁L₂ = {w₁w₂ | w₁ ∈ L₁ and w₂ ∈ L₂}.
Taking repeated products of a language L with itself gives the star or Kleene closure. It is a
closure because all the words are formed from the same language L. So, we can define L^n by
L^0 = {λ} and L^(n+1) = L^n L for all n ≥ 0; then
L* = ⋃_{n≥0} L^n, i.e. L^0 ∪ L^1 ∪ L^2 ∪ L^3 ∪ …
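A bounded version of the product and star operations can be computed directly. The Python
sketch below (illustrative code written for these notes; the infinite union is cut off at a maximum
string length so it terminates) mirrors both definitions:

def product_lang(L1, L2):
    # L1 L2 = { w1 w2 | w1 in L1 and w2 in L2 }
    return {w1 + w2 for w1 in L1 for w2 in L2}

def star(L, max_len):
    # Finite slice of L* = L^0 ∪ L^1 ∪ L^2 ∪ ..., keeping only strings of length <= max_len
    result, current = {""}, {""}          # L^0 = {λ}, written here as the empty string
    while True:
        current = {w for w in product_lang(current, L) if len(w) <= max_len}
        if current <= result:             # no new strings were produced: stop
            break
        result |= current
    return result

L = {"a", "bb"}
print(product_lang(L, L))   # {'aa', 'abb', 'bba', 'bbbb'}
print(sorted(star(L, 4)))   # all strings of length <= 4 built from 'a' and 'bb', including λ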
[Figure: a finite automaton modelling an on/off switch, with states OFF and ON and a transition
labelled "push" in each direction.]
A finite automaton is specified by the following components:
1. Q is a finite nonempty set whose members are called states of the automaton;