Intro To Advanced Applied Algorithms NITK 2013
Textbook and references
Introduction to Algorithms, by Cormen, Leiserson, Rivest, and Stein, 3rd edition.
http://www.flipkart.com/introduction-algorithms-8120340078
References
The Design and Analysis of Computer Algorithms, by Aho, Hopcroft, and Ullman
http://www.flipkart.com/design-analysis-computer-algorithms-8131702057
The Algorithm Design Manual, 2nd ed, by Skiena
http://www.flipkart.com/algorithm-design-manual-8184898657
Google, Wikipedia
Mathematics for Computer Science, by Lehman, Leighton, and Meyer, from MIT.
MIT OpenCourseWare video lectures
References (advanced)
Combinatorial Optimization: Algorithms and Complexity, by Papadimitriou and Steiglitz.
Computers and Intractability, by Garey and Johnson.
Approximation Algorithms, by Vazirani.
Randomized Algorithms, by Motwani and Raghavan.
Programming textbooks
K&R: The C Programming Language, by Kernighan and Ritchie, 2nd ed (not older editions).
The Unix Programming Environment, by Kernighan and Pike.
The C++ Programming Language, by Stroustrup, 3rd ed (not older editions).
Learning Python (or docs.python.org)
Core Java, Vol 1 & 2, by Horstmann
Lab
C programs using a gcc/g++ IDE from http://www.codeblocks.org or MS-VC, not dev-cpp; a recent gcc 4.x.
On Linux or Windows (install cygwin for bash, make, configure, vim, ctags).
Sometimes we will use Python, Java, C++, and JavaScript.
Demonstration packages:
Maple, Mathematica
LP: glpk (GNU linear programming kit), Excel solver
Optimization: Concorde TSP solver, COIN-OR
Number theory: gnupg, gmp, pari, JavaScript
graphviz for drawing graphs
Topics
Algorithm running times
Sorting: insertion, bubble, merge, quick, heap, radix.
Data structures: stacks, queues, lists, hashing, heaps, trees, graphs.
Searching: lists, binary search, DFS, BFS.
Greedy algorithms, minimum spanning tree.
Dynamic programming: matrix chain multiplication, longest common subsequence.
Graph algorithms: shortest path, matchings, max-flow.
Matrix operations, LUP decomposition, Strassen matrix multiplication.
Polynomial multiplication and FFT.
String matching: KMP, Rabin-Karp, Boyer-Moore, suffix trees.
Linear programming, simplex.
Number theory: gcd, extended gcd, power-mod, CRT, RSA, gpg.
NP-completeness: SAT, vertex cover, TSP, Hamiltonian cycle, partition.
Approximation algorithms
Website and discussions
Code: https://sites.google.com/site/algorithms2013/
IT-235: https://www.facebook.com/groups/algorithms.nitk
CS-830: https://www.facebook.com/groups/applied.algorithms.nitk
Use these to ask and answer questions, and for discussions and schedules.
Grading scheme
50% endsem
25% midsem exam
10% + 10% quizzes/handwritten assignments
5% attendance
What is an algorithm?
An algorithm is a step-by-step procedure for calculations. More precisely, an algorithm is an effective method expressed as a finite list of well-defined instructions for calculating a function.
A program is an implementation of an algorithm.
Examples
Compute the area of a circle, given the radius?
Driving directions from NITK to Manipal?
A recipe to bake a cake?
Predict tomorrow's weather?
Autofocus a camera?
Recognize a face? A voice? Songs?
The oldest algorithm? Euclid's GCD.
Reactive systems
Examples: traffic lights, lifts, factory alarms
Properties:
Always running, realtime input/output
No beginning, no end.
Correctness
Liveness
Deadlock freedom
Fairness
What is computable?
Alonzo Church created a method for defining functions called the λ-calculus.
Alan Turing created a theoretical model for a machine, now called a universal Turing machine.
The Church-Turing thesis states that a function is algorithmically computable if and only if it is computable by a Turing machine.
Turing machine (TM)
A Turing machine is a hypothetical device that manipulates symbols on a strip of tape according to a table of rules.
Despite its simplicity, a TM can be adapted to simulate the logic of any computer algorithm.
Used for explaining real computations.
A universal Turing machine (UTM) is able to simulate any other Turing machine.
The Church-Turing thesis states that Turing machines capture the informal notion of an effective method in logic and mathematics, and provide a precise definition of an algorithm or a 'mechanical procedure'.
Lambda Calculus (λ)
Formulated by Alonzo Church as a way to formalize mathematics through the notion of functions.
(λ var body) ... lambda expression/function
Application:
(λx. λy. (* (+ 3 x) (- y 4))) 10 20
→ (* (+ 3 10) (- 20 4)) → (* 13 16) → 208
By the Church-Turing thesis, the untyped lambda calculus is claimed to be capable of computing all effectively calculable functions.
The typed lambda calculus is a variety that restricts function application, so that functions can only be applied if they are capable of accepting the given input's "type" of data.
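The reduction above can be mimicked with Python's lambda, which plays the role of λ-abstraction (an illustrative sketch; currying one argument at a time mirrors multi-argument λ-terms):

```python
# (λx. λy. (* (+ 3 x) (- y 4))) 10 20, as curried Python lambdas
f = lambda x: lambda y: (3 + x) * (y - 4)
print(f(10)(20))  # (3+10) * (20-4) = 13 * 16 = 208
```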
Combinatory Logic
SKI combinator calculus is a computational system that may be perceived as a reduced version of the untyped lambda calculus.
Used in the mathematical theory of algorithms because it is an extremely simple Turing-complete language.
Only two operators or combinators are needed: Kxy = x and Sxyz = xz(yz).
K := λx.λy.x
S := λx.λy.λz.x z (y z)
I := λx.x
B := λx.λy.λz.x (y z)
C := λx.λy.λz.x z y
W := λx.λy.x y y
ω := λx.x x
Ω := ω ω
Y := λg.(λx.g (x x)) (λx.g (x x))
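As a sketch, K, S, and I can be written as curried Python functions, and the standard identity "S K K behaves like I" can be checked directly:

```python
# SKI combinators as curried Python lambdas (illustrative sketch)
K = lambda x: lambda y: x                      # Kxy = x
S = lambda x: lambda y: lambda z: x(z)(y(z))   # Sxyz = xz(yz)
I = lambda x: x                                # Ix = x

print(K(1)(2))      # 1
print(S(K)(K)(42))  # SKK acts as the identity: 42
```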
Functional programming
FP is a programming paradigm that treats computation as the evaluation of mathematical functions and avoids state and mutable data.
Based on pure application of functions without side effects.
Compare to the imperative programming style, which emphasizes changes in state.
Functional programming has its roots in lambda calculus.
Languages: Haskell, Ocaml, ML, Lisp, Scala, Clojure, ...
Lisp example (the function must recurse under its own name):
(defun fib (n &optional (a 0) (b 1))
  (if (= n 0) a (fib (- n 1) b (+ a b))))
Python example:
>>> (lambda x: x*2)(3)
6
Languages (the Chomsky hierarchy, outermost to innermost)
Recursively Enumerable Languages: Turing machines, computable functions. Some use in parsing.
Recursive Languages: TMs that always halt; algorithms, decision problems.
Context-Sensitive Languages: context-sensitive grammars, linear bounded automata. Useful for parsing.
Context-Free Languages: context-free grammars, pushdown automata.
Oracle of Delphi
An oracle machine is a Turing machine with a black box, called an oracle, which is able to decide certain decision problems in a single operation.
The problem can be of any complexity class. Even undecidable problems, like the halting problem, can be answered by an oracle.
[Figures: a Cray supercomputer; a data center (= millions of PCs); the FFT algorithm is used in audio/video/signal processing]
Starting from an initial state and initial input (perhaps empty), the instructions describe a computation that, when executed, will proceed through a finite number of well-defined successive states, eventually producing "output" and terminating at a final ending state.
The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input.
Some algorithms are harder than others
Some algorithms are easy:
finding the max (largest or smallest) value in a list;
searching for a specific value in a list.
Some algorithms are a bit harder:
sorting a list.
Some algorithms are very hard:
finding the cheapest road tour of n cities.
Some algorithms are practically impossible:
factoring large numbers.
Some problems are undecidable:
the halting problem.
Algorithm: Maximum element
Given a list, how do we find the maximum element in the list?
To express the algorithm, we'll use pseudocode. Pseudocode is like a programming language, without worrying about syntax.
Finding the Maximum
procedure max (a[1..n] : array of integers)
  max := a[1]
  for i := 2 to n
    if max < a[i] then
      max := a[i]
    {max is the largest of a[1..i]} // invariant
  endfor
  {max is the largest of a[1..n]} // invariant
  return max
end procedure
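The pseudocode translates directly to Python (0-based indexing instead of the slide's 1-based indexing):

```python
def find_max(a):
    """Return the largest element of a non-empty list, scanning left to right."""
    largest = a[0]
    for i in range(1, len(a)):
        if largest < a[i]:
            largest = a[i]
        # invariant: largest is the maximum of a[0..i]
    return largest

print(find_max([4, 1, 7, 0, 5, 2, 9, 3, 6, 8]))  # 9
```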
Maximum element running time
How long does this take?
If the list has n elements
Worst case?
Best case?
Average case?
Properties of algorithms
Algorithms generally share a set of properties:
Input
Output
Definiteness: the steps are defined precisely
Correctness: should produce the correct output
Finiteness: the steps required should be finite
Effectiveness: each step must be performable in a finite amount of time
Generality: the algorithm should be applicable to all problems of a similar form
Types of algorithms
Deterministic.
Randomized: the answer may be different each time, but correct.
Approximation: for hard problems, the answer is close to the best (optimal) one.
Online: solve items as they come, without seeing all the items (choices), e.g. hiring.
Algorithm Design Techniques
Complexity
Asymptotic performance: how does the algorithm behave as the problem size gets very large?
Running time
Memory/storage requirements
Asymptotic (big-O) notation: what does O(n) running time mean? O(n^2)? Θ(n lg n)?
Design and Analysis of Algorithms IT-235
Timings and location
Tue 10-12pm
Thu 11-12 and 1-2pm
Classroom L601
Office hours: after class
Lab: Thu 2-5pm
Applied Algorithms CS-830
Timings and location
Tue 2-4pm
Thu 2-3pm
Classroom: Seminar Room
Office hours: after class
Marking scheme
Final 50%
Midterm 25%
In-semester 25%:
Quizzes 10%
Homework 10%
Attendance 5%
Website and discussions
Code https://sites.google.com/site/algorithms2013/
IT-235 https://www.facebook.com/groups/algorithms.nitk
CS-830 https://www.facebook.com/groups/applied.algorithms.nitk
Ask Class Representative to add you to the group
To ask questions, reply to questions, discussions, schedules.
Email: moshahmed/at/gmail
Computers that run algorithms: pocket calculators, Cray supercomputers, data centers (= millions of PCs).
How does one measure algorithms?
We can time how long a computer takes.
But what if the computer is doing other things?
And what happens if you get a faster computer?
A 3 GHz Windows machine will run an algorithm at a different speed than a 3 GHz Linux machine.
Step
We can loosely define a step as a single computer operation:
a comparison, an assignment, etc.,
regardless of how many machine instructions it translates into.
Approximate number of steps for input size n = 1000 (10^3):
O(1)        1
O(log n)    10
O(n)        10^3
O(n log n)  10^4
O(n^2)      10^6
O(n^3)      10^9
O(n^4)      10^12
O(n^c)      10^(3c), where c is a constant
2^n         10^301
n!          10^2568
n^n         10^3000
Complexity     Term
O(1)           constant
O(log n)       logarithmic
O(n)           linear
O(n lg n)      n log n
O(n^b)         polynomial
O(b^n), b > 1  exponential
O(n!)          factorial
[Plot: growth of f(n) = log n, n, n log n, n^2, n^3, 2^n for n = 1..19]
Exponential versus polynomial growth (logarithmic scale!).
From http://en.wikipedia.org/wiki/Time_complexity
Complexity
Worst-case complexity: the maximum number of steps taken on an instance of size n.
Best-case complexity: the minimum number of steps taken on an instance of size n.
Average-case complexity: the average number of steps taken on any instance of size n.
Example of a difficult problem
Factoring a composite number into its component primes is O(2^n),
where n is the number of bits in the number, i.e. the length of the number in binary.
Thus, if we choose 2048-bit numbers (as in RSA keys), it takes about 2^2048 (about 10^617) steps to factor one.
NP-hard problems
Solving hard problems in practice:
SIMPLEX: optimization using the simplex method can take exponentially many steps in the worst case, but in practice it finishes quickly.
Hard problems can be solved quickly using fast approximation algorithms, if you don't need the very best answer.
Knapsack and bin packing are NP-complete problems.
[Figures: moving a drill a minimal amount to make a circuit board; soldering on a circuit board]
Beyond Polynomial
Polynomial: any expression of the form n^c, where c is a constant. Thus, any function that is O(n^k) is a polynomial-time function, for a fixed k.
2^n, n!, and n^n are exponential, not polynomial, functions.
Elementary: +, -, *, /, log, e^n, n!
Non-elementary: Ackermann
Ackermann function
An extremely fast-growing function: it grows faster than any primitive recursive function.
ack 0 n = n + 1                       -- Haskell
ack m 0 = ack (m-1) 1
ack m n = ack (m-1) (ack m (n-1))
Inverse-Ackermann function
α(n), an extremely slow-growing function: it grows as slowly as Ackermann grows fast (it is not the literal pointwise inverse).
The Union-Find algorithm's runtime is O(m α(n) + n), where n is the number of elements and m is the total number of Union and Find operations performed.
Install the following software
Windows:
Cygwin with gcc, g++, X windows
Code::Blocks (gcc and g++ with an IDE)
Python
Java JDK
Linux:
Code::Blocks
Homework
Write a program to print the table of Ackermann[0..3, 0..4] in C, Python, and Java.
Solutions: Ackermann in Python, C, Java
# naive Ackermann function, counting recursive calls
calls = 0
def ackermann(m, n):
    global calls
    calls += 1
    if m == 0:
        return n + 1
    elif n == 0:
        return ackermann(m - 1, 1)
    else:
        return ackermann(m - 1, ackermann(m, n - 1))
Big-O notation (upper bound, worst case)
f(n) = O(g(n)) if there exist positive constants c and m such that f(n) <= c*g(n) for all n >= m.
Commonly used notation: f(n) = O(g(n)). Correct notation: f(n) ∈ O(g(n)).
Think of the equality as meaning "is in the set of functions".
Big Omega Ω-notation (lower bound, best case)
f(n) = Ω(g(n)) if there exist positive constants c and m such that f(n) >= c*g(n) for all n >= m.
Big Theta Θ-notation (tight bound)
f(n) = Θ(g(n)) if there exist positive constants c1, c2, and m such that c1*g(n) <= f(n) <= c2*g(n) for all n >= m.
Other Asymptotic Notations
Little o: f(n) is o(g(n)) if for every positive constant c there exists m such that f(n) < c*g(n) for all n >= m.
Little omega: f(n) is ω(g(n)) if for every positive constant c there exists m such that c*g(n) < f(n) for all n >= m.
Intuition, for f(n) = n^a and g(n) = n^b:
f(n) = O(g(n))  a <= b  f grows no faster than g
f(n) = Ω(g(n))  a >= b  f grows no slower than g
f(n) = Θ(g(n))  a = b   f grows about as fast as g
f(n) = o(g(n))  a < b   f grows slower than g
f(n) = ω(g(n))  a > b   f grows faster than g
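A quick numerical illustration of little-o: since n log n = o(n^2), the ratio (n log n)/n^2 should shrink toward 0 as n grows. This is a sanity check, not a proof:

```python
import math

def ratio(n):
    # (n log n) / n^2 shrinks toward 0, illustrating n log n = o(n^2)
    return (n * math.log(n)) / n ** 2

for n in [10, 100, 1000, 10000]:
    print(n, ratio(n))  # the ratio keeps decreasing
```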
Space Complexity
The number of memory cells (or words) needed to carry out the computational steps required to solve an instance of the problem, excluding the space allocated to hold the input: only the work space.
All the previous asymptotic notation definitions also apply to space complexity.
Naturally, in many problems there is a time-space tradeoff: the more space we allocate for the algorithm, the faster it runs, and vice versa.
[Figures: the space-time hierarchy; complexity of the Traveling Salesman problem; a logarithmic spiral]
Logarithms: notation and simple facts
1. lg n = log_2 n (logarithm to base 2)
2. ln n = log_e n (natural logarithm, e = 2.7182818284590451...)
3. log n = log_10 n (common logarithm, base 10)
a = b^(log_b a)
log_c (ab) = log_c a + log_c b
log_c a^n = n log_c a
log_b (1/a) = -log_b a
[Merge sort trace: 5 2 4 6 1 3 2 6 is split into halves, the halves are sorted to 2 4 5 6 and 1 2 3 6, and these are merged into 1 2 2 3 4 5 6 6]
Recurrence Relation for Mergesort
Let T(n) be the worst-case time on a sequence of n keys.
If n = 1, then T(n) = Θ(1) (constant).
If n > 1, then T(n) = 2 T(n/2) + Θ(n):
two subproblems of size n/2 each that are solved recursively;
Θ(n) time to do the merge.
Recurrence Relations
How to solve recurrences
Ad hoc method:
expand several times,
guess the pattern,
verify with a proof by induction.
Master theorem:
a general formula that works if the recurrence has the form T(n) = aT(n/b) + f(n), where
a is the number of subproblems,
n/b is the size of each subproblem,
f(n) is the cost of the non-recursive part.
Master Theorem
Consider a recurrence of the form
T(n) = a T(n/b) + f(n)
with a >= 1, b > 1, and f(n) eventually positive.
a) If f(n) = O(n^(log_b a - ε)) for some ε > 0, then T(n) = Θ(n^(log_b a)).
b) If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) log n).
c) If f(n) = Ω(n^(log_b a + ε)) for some ε > 0 and f(n) is regular, then T(n) = Θ(f(n)).
f(n) is regular iff eventually a*f(n/b) <= c*f(n) for some constant c < 1.
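Case b) can be checked numerically on mergesort's recurrence T(n) = 2T(n/2) + n with T(1) = 1, where the theorem predicts T(n) = Θ(n log n). An illustrative sketch (for powers of 2, the exact solution is T(n) = n log2 n + n):

```python
from functools import lru_cache
import math

@lru_cache(maxsize=None)
def T(n):
    # mergesort-style recurrence: a = 2, b = 2, f(n) = n (case b)
    if n <= 1:
        return 1
    return 2 * T(n // 2) + n

for k in [4, 8, 12]:
    n = 2 ** k
    print(n, T(n) / (n * math.log2(n)))  # ratio approaches a constant (here 1)
```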
Master method details
Essentially, the Master theorem compares the function f(n) with g(n) = n^(log_b a).
Roughly, the theorem says:
a) If f(n) grows slower than g(n), then T(n) = Θ(g(n)).
b) If f(n) grows like g(n), then T(n) = Θ(g(n) log n).
c) If f(n) grows faster than g(n), then T(n) = Θ(f(n)).
Nothing is perfect
The Master theorem does not cover all possible cases. For example, if
f(n) = Θ(n^(log_b a) log n),
then we lie between cases b) and c), but the theorem as stated does not apply.
There exist better versions of the Master theorem that cover more cases, but these are even harder to memorize.
Idea of the Proof
Let us iteratively substitute the recurrence:
T(n) = a T(n/b) + f(n)
     = a (a T(n/b^2) + f(n/b)) + f(n)
     = a^2 T(n/b^2) + a f(n/b) + f(n)
     = a^3 T(n/b^3) + a^2 f(n/b^2) + a f(n/b) + f(n)
     = ...
     = a^(log_b n) T(1) + sum_{i=0}^{(log_b n)-1} a^i f(n/b^i)
     = n^(log_b a) T(1) + sum_{i=0}^{(log_b n)-1} a^i f(n/b^i)
Thus we obtain T(n) = n^(log_b a) T(1) + sum_{i=0}^{(log_b n)-1} a^i f(n/b^i); the three cases follow by comparing f(n) with n^(log_b a).
Graph
A graph G = (V, E) consists of
vertices V (nodes) and edges (links) E.
An edge e connects two nodes.
An edge e can have a weight (distance), or weight 1.
G is a directed graph if the edges have direction.
The size of a graph is measured by the sizes of V and E.
Example
V := {1,2,3,4,5,6}
E := {{1,2},{1,5},{2,3},{2,5},{3,4},{4,5},{4,6}}
Vertex degree
Undirected G: the degree of a vertex is the number of vertices adjacent to it.
Directed G:
in-degree: the number of edges directed into v_i;
out-degree: the number of edges directed out of v_i.
[Figure: a graph on vertices A..F in which the degree of B is 2]
[Figures: graphs illustrating cycles and an unreachable vertex]
Simple path from 1 to 5 = [1, 2, 4, 5].
Our text's paths alternate the vertices and edges. If there is a path p from u to v, then we say v is reachable from u via p.
Connectivity
G is connected if you can get from any node to any other by following a sequence of edges, i.e. any two nodes are connected by a path.
G is strongly connected if there is a directed path from any node to any other node.
Sparse/Dense Graphs
A graph is sparse if |E| is close to |V|.
A graph is dense if |E| is close to |V|^2.
Complete Graph
Denoted K_n.
Every pair of vertices is adjacent.
With n vertices, it has n(n-1)/2 edges.
Bipartite graph
V can be partitioned into 2 sets V1 and V2 such that (u,v) ∈ E implies
either u ∈ V1 and v ∈ V2
or v ∈ V1 and u ∈ V2.
Complete Bipartite Graph
A bipartite variation of the complete graph:
every node of one set is connected to every node of the other set.
K(2,3), K(3,3)
Planar Graph
[Figure: K4 drawn without edge crossings]
Problem: adjacency list
[Figure: a directed graph on vertices 1..5]
Building its adjacency list, vertex by vertex:
1: 2, 5
2: 3, 4, 5
3: 4
4: 5
5: 5
Adjacency matrix
Represents the graph as an n x n matrix A:
A[i, j] = 1 if edge (i, j) ∈ E (or the weight of the edge),
        = 0 if edge (i, j) ∉ E.
Storage requirements: O(V^2).
Efficient for small graphs,
especially if we store just one bit per edge.
Undirected graph: only need one triangle of the matrix.
Problem: Adjacency matrix
For the same directed graph on vertices 1..5 (rows = from, columns = to):
   1 2 3 4 5
1  0 1 0 0 1
2  0 0 1 1 1
3  0 0 0 1 0
4  0 0 0 0 1
5  0 0 0 0 1
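Both representations are easy to build in Python; this sketch takes the adjacency list of the example digraph (as read off the matrix above, row by row) and reconstructs the matrix from it:

```python
# adjacency list of the example directed graph on vertices 1..5
adj = {1: [2, 5], 2: [3, 4, 5], 3: [4], 4: [5], 5: [5]}

# build the corresponding 5x5 adjacency matrix (1-based vertices, 0-based indices)
n = 5
A = [[0] * n for _ in range(n)]
for u, nbrs in adj.items():
    for v in nbrs:
        A[u - 1][v - 1] = 1

for row in A:
    print(row)
```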
Large graphs
Graph Problems
Bipartiteness, matchings
Shortest path
Max flow
Hamiltonian path
TSP
Vertex cover
Colouring
Isomorphism
Searching algorithms
Given a list, find a specific element in the list.
We will see two types:
Linear search (a.k.a. sequential search)
Binary search
Linear search
Given a list, find a specific element in the list.
The list does NOT have to be sorted!
Algorithm 2: Linear search, take 1
procedure linear_search (x: integer; a[1], a[2], ..., a[n]: integers)
  i := 1
  while (i <= n and x != a[i])
    i := i + 1
  if i <= n then location := i
  else location := 0
Example: a = [4, 1, 7, 0, 5, 2, 9, 3, 6, 8], x = 3: i runs 1, 2, ..., 8; location = 8.
Algorithm 2: Linear search, take 2
The same procedure on a = [4, 1, 7, 0, 5, 2, 9, 3, 6, 8] with x = 11:
i runs through all 10 positions without finding x, so location = 0.
Linear search running time
How long does this take?
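The same search in Python, returning a 1-based position or 0 as in the pseudocode (an illustrative sketch):

```python
def linear_search(x, a):
    """Return the 1-based position of x in a, or 0 if absent."""
    for i, v in enumerate(a, start=1):
        if v == x:
            return i
    return 0

data = [4, 1, 7, 0, 5, 2, 9, 3, 6, 8]
print(linear_search(3, data))   # 8
print(linear_search(11, data))  # 0
```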
Algorithm 3: Binary search
Given a list, find a specific element in the list.
The list MUST be sorted!
Each iteration cuts the remaining list in half.
[Example trace on a sorted 10-element list: the bounds i and j and the midpoint m narrow, e.g. i = 1, j = 10, m = 5, then i = 6, j = 10, m = 8, ...]
Binary Search
Assumes the list is sorted.
An example of a recursive algorithm, as the algorithm is applied to a smaller list.
Binary search in the worst case takes O(log_2 n) steps, where n = size of the list.
Binary search running time
How long does this take (worst case)?
n                  log n
2 = 2^1            1
16 = 2^4           4
1024 = 2^10        10
1 million = 2^20   20
1 billion = 2^30   30
Selection in Linear time (divide and conquer)
[Figure: a key is fed to the hash function, which maps it to one of the buckets 0, 1, 2, 3, ..., H-1]
Hash Tables (2)
Hashing is a technique used to perform insertions, deletions, and finds in constant average time.
To insert or find an item, we assign it a key and use a function, called the hash function, to determine the location of the element within the table.
Hash tables are fixed-size arrays of cells containing data, or keys corresponding to data.
For each key, we use the hash function to map the key into some number in the range 0 to H_size - 1.
Hash Function
A hashing function should have the following features:
Easy to compute.
Two distinct keys map to two different cells in the array (not true in general - why?).
This can be achieved by using a direct-address table, when the universal set of keys is reasonably small.
Distributes the keys evenly among the cells.
One simple hashing function is to use the mod function with a prime table size.
Any manipulation of the digits with low cost and good distribution can be used.
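A minimal sketch of the "sum the characters, reduce mod a prime" idea; the table size 101 is an arbitrary prime chosen for illustration:

```python
def hash_key(key, table_size=101):
    """Toy hash: sum character codes, then reduce mod a prime table size."""
    return sum(ord(c) for c in key) % table_size

print(hash_key("mosh"))  # 35, a bucket index in 0..100
```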
Hash function
A hash function is any well-defined procedure or mathematical function that converts a large, possibly variable-sized amount of data into a small datum (usually an integer that may serve as an index into an array).
Example of a hash function in Java
int hash(String key, int tableSize) {
  int hashVal = 0;
  for (int i = 0; i < key.length(); i++)
    hashVal += key.charAt(i);
  return hashVal % tableSize;
}
Choosing a hash function
A good hash function should satisfy two criteria:
1. It should be quick to compute.
2. It should minimize the number of collisions.
Hashing
Hash tables are used to store data as {<key,value> pairs}, accessible by the key.
Hashing is a method of inserting data into a table.
Tables can be implemented in many ways.
In open addressing, all entry records are stored in the bucket array itself. When a new entry has to be inserted, the buckets are examined, starting with the hashed-to slot and proceeding in some probe sequence, until an unoccupied slot is found.
Deletion is harder.
Probing sequence for empty slot
Well-known probe sequences include:
Linear probing, in which the interval between
probes is fixed (usually 1)
Quadratic probing, in which the interval between
probes is increased by adding the successive
outputs of a quadratic polynomial to the starting
value given by the original hash computation
Double hashing, in which the interval between
probes is computed by another hash function
hash table in practice
Associative array, hash, map, or dictionary is
an abstract data type composed of a collection
of (key, value) pairs, such that each possible key
appears at most once in the collection.
Operations associated with this data type allow:
the addition of pairs to the collection
the removal of pairs from the collection
the modification of the values of existing pairs
the lookup of the value associated with a particular
key
hash maps in languages
C++:
#include <map>
std::map<std::string, std::string> mymap; mymap["mosh"] = "nitk";
C: see glibc, next slide
Java:
Map<String, String> mymap = new HashMap<String, String>();
mymap.put("mosh", "nitk");
Perl: $mymap{"mosh"} = "nitk";
Php: $mymap["mosh"] = "nitk";
Python:
mymap = {}
mymap["mosh"] = "nitk"
See http://rosettacode.org/wiki/Associative_arrays
Hashing in C with gcc
#include <search.h>
First a hash table must be created using hcreate()
The function hdestroy() frees the memory occupied by the hash
table that was created by hcreate(). After calling hdestroy() a new
hash table can be created using hcreate().
The hsearch() function searches the hash table for an item with the
same key as item
On success, hsearch() returns a pointer to an entry in the hash
table. hsearch() returns NULL on error, that is, if action is ENTER
and the hash table is full, or action is FIND and item cannot be found
in the hash table.
Disadvantage: single global hash table
See http://linux.die.net/man/3/hsearch
Hashing in g++
$ g++ -std=c++0x hash_map4.cpp
$ a.exe
september -> 30
february -> 28
#include <iostream>
#include <string>
#include <unordered_map>
using namespace std;
int main() {
  unordered_map<string, int> months;
  months["february"] = 28;
  months["september"] = 30;
  cout << "september -> " << months["september"] << endl;
  cout << "february -> " << months["february"] << endl;
  return 0;
}
Custom hashing function in g++
#include <iostream>
#include <unordered_map>
#include <string>
#include <functional>
using namespace std;
Analysis of Algorithms (selection sort)
T(n) = sum_{i=1}^{n-1} (cost of find-min on a[i..n] + one swap)
     = (n-1) + (n-2) + (n-3) + ... + 1
     = sum_{i=1}^{n-1} (n - i) = n(n-1)/2, which is O(n^2)
Properties
Not stable
O(1) extra space
Θ(n^2) comparisons
Θ(n) swaps
Not adaptive
[Selection sort animation]
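The analysis corresponds to this in-place selection sort sketch: find the minimum of the unsorted suffix, then swap it to the front:

```python
def selection_sort(a):
    n = len(a)
    for i in range(n - 1):
        m = i  # index of the minimum of a[i..n-1]
        for j in range(i + 1, n):
            if a[j] < a[m]:
                m = j
        a[i], a[m] = a[m], a[i]  # one swap per outer iteration: Θ(n) swaps total
    return a

print(selection_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```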
Merge sort
Merge sort (also commonly spelled mergesort) is an O(n log n) comparison-based sorting algorithm.
Most implementations produce a stable sort, which means that the implementation preserves the input order of equal elements in the sorted output.
Merge sort is a divide and conquer algorithm that was invented by John von Neumann in 1945.
How it works
Conceptually, a merge sort works as follows:
1. Divide the unsorted list into n sublists, each containing 1 element (a list of 1 element is considered sorted).
2. Repeatedly merge sublists to produce new sorted sublists until there is only 1 sublist remaining. This will be the sorted list.
Merge sort animated example
Algorithm: MERGE
// from wikipedia
function merge(left, right)
var list result
while length(left) > 0 or length(right) > 0
if length(left) > 0 and length(right) > 0
if first(left) <= first(right)
append first(left) to result
left = rest(left)
else
append first(right) to result
right = rest(right)
else if length(left) > 0
append first(left) to result
left = rest(left)
else if length(right) > 0
append first(right) to result
right = rest(right)
end while
return result
Merge Sort divide-and-conquer
Divide: divide the n-element sequence into two subproblems of n/2 elements each.
Conquer: sort the two subsequences recursively using merge sort. If the length of a sequence is 1, do nothing since it is already in order.
Combine: merge the two sorted subsequences to produce the sorted answer.
MERGE-SORT(A, lo, hi)
1. if lo < hi
2.   then mid := floor((lo+hi)/2)
3.     MERGE-SORT(A, lo, mid)
4.     MERGE-SORT(A, mid+1, hi)
5.     MERGE(A, lo, mid, hi)
A = {10, 5, 7, 6, 1, 4, 8, 3, 2, 9}
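A direct Python rendering of MERGE-SORT and MERGE above (a sketch that returns a sorted copy rather than sorting in place):

```python
def merge_sort(a):
    if len(a) <= 1:
        return a  # a list of 0 or 1 elements is already sorted
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    # merge the two sorted halves (<= keeps the sort stable)
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i]); i += 1
        else:
            result.append(right[j]); j += 1
    return result + left[i:] + right[j:]

print(merge_sort([10, 5, 7, 6, 1, 4, 8, 3, 2, 9]))  # [1, 2, 3, ..., 10]
```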
Merge sort EXAMPLE from wikipedia
Analysis of Merge Sort
1. if lo < hi                       Θ(1)
2.   then mid := floor((lo+hi)/2)   Θ(1)
3.     MERGE-SORT(A, lo, mid)       T(n/2)
4.     MERGE-SORT(A, mid+1, hi)     T(n/2)
5.     MERGE(A, lo, mid, hi)        Θ(n)
So T(n) = 2 T(n/2) + Θ(n).
Merge sort animation
Quick sort
The quicksort algorithm was developed in 1960
by Tony Hoare while in the Soviet Union, as a
visiting student at Moscow State University.
At that time, Hoare worked in a project on
machine translation for the National Physical
Laboratory.
He developed the algorithm in order to sort the
words to be translated, to make them more
easily matched to an already-sorted Russian-to-
English dictionary that was stored on magnetic
tape
Algorithm
Quicksort is a divide and conquer algorithm. Quicksort first divides a large list into two smaller sub-lists: the low elements and the high elements. Quicksort can then recursively sort the sub-lists.
The steps are:
Pick an element, called a pivot, from the list.
Reorder the list so that all elements with values less than the pivot come before the pivot, while all elements with values greater than the pivot come after it (equal values can go either way). After this partitioning, the pivot is in its final position. This is called the partition operation.
Recursively sort the sub-list of lesser elements and the sub-list of greater elements.
The base case of the recursion is lists of size zero or one, which never need to be sorted.
Pseudocode
function quicksort('array')
  if length('array') <= 1
    return 'array'  // an array of zero or one elements is already sorted
  select and remove a pivot value 'pivot' from 'array'
  create empty lists 'less' and 'greater'
  for each 'x' in 'array'
    if 'x' <= 'pivot' then append 'x' to 'less'
    else append 'x' to 'greater'
  return concatenate(quicksort('less'), 'pivot', quicksort('greater'))  // two recursive calls
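The pseudocode as Python, taking the first element as the pivot (fine for illustration, though a poor choice on already-sorted inputs):

```python
def quicksort(a):
    if len(a) <= 1:
        return a  # arrays of size 0 or 1 are already sorted
    pivot, rest = a[0], a[1:]
    less = [x for x in rest if x <= pivot]
    greater = [x for x in rest if x > pivot]
    return quicksort(less) + [pivot] + quicksort(greater)  # two recursive calls

print(quicksort([44, 33, 23, 43, 12, 55, 64, 77, 75]))
# [12, 23, 33, 43, 44, 55, 64, 75, 77]
```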
Properties
Not stable
O(lg n) extra space
O(n^2) time, but typically O(n lg n) time
With both sub-sorts performed recursively, quick sort requires O(n) extra space for the recursion stack in the worst case, when the recursion is not balanced.
To make sure at most O(log n) space is used, recurse first into the smaller half of the array, and use a tail call to recurse into the other.
Quicksort with 3-way partitioning.
Use insertion sort on small arrays: it has a smaller constant factor and is thus faster there.
Pivot selection
In very early versions of quicksort, the leftmost
element of the partition would often be chosen
as the pivot element.
Unfortunately, this causes worst-case behavior
on already sorted arrays, which is a rather
common use-case.
The problem was easily solved by choosing
either a random index for the pivot, choosing the
middle index of the partition or (especially for
longer partitions) choosing the median of the
first, middle and last element of the partition for
the pivot [Sedgewick].
Quick sort animation
Quick sorting example
Divide and Conquer
An algorithm design technique
1. Divide: the instance (problem) into a number of
subinstances (in most cases 2).
2. Conquer: the subinstance by solving
them separately.
3. Combine: the solutions to the
subinstances to obtain the solution to
the original problem instance.
Binary Search
Input: an array A[1..n] of n elements sorted in nondecreasing order, and an element x.
Output: j if x = A[j] for some 1 <= j <= n, and 0 otherwise.
1. binarysearch(1, n)
Procedure binarysearch(low, high)
1. if low > high then return 0
2. else
3.   mid := floor((low + high)/2)
4.   if x = A[mid] then return mid
5.   else if x < A[mid] then return binarysearch(low, mid-1)
6.   else return binarysearch(mid + 1, high)
7. end if
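The same procedure in Python, keeping the slide's 1-based answer convention (an illustrative sketch):

```python
def binary_search(a, x, low=1, high=None):
    """Return j with 1 <= j <= n if x equals the j-th element of a, else 0."""
    if high is None:
        high = len(a)
    if low > high:
        return 0
    mid = (low + high) // 2
    if x == a[mid - 1]:
        return mid
    elif x < a[mid - 1]:
        return binary_search(a, x, low, mid - 1)
    else:
        return binary_search(a, x, mid + 1, high)

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 4
print(binary_search([1, 3, 5, 7, 9, 11], 4))  # 0
```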
Partitioning
Partitioning places all the elements less than the pivot in the left part of the array, and all elements greater than the pivot in the right part of the array. The pivot fits in the slot between them.
Before (pivot 44): 44 33 23 43 12 55 64 77 75
After partitioning: 12 33 23 43 44 55 64 77 75
Thus we can sort the elements to the left of the pivot and to the right of the pivot independently!
Quicksort best case
The best case for divide-and-conquer algorithms comes when we split the input as evenly as possible. Thus in the best case, each subproblem is of size n/2.
The partition step on each subproblem is linear in its size. Thus the total effort in partitioning the problems of size n is O(n).
The total partitioning on each level is O(n), and it takes log n levels of partitions to get to single-element subproblems.
When we are down to single elements, the problems are sorted. Thus the total time in the best case is O(n log n).
[Recursion tree for the best case: n at the root, two subproblems of size n/2, four of size n/4, ..., down to n single-element subproblems]
Quicksort - Worst Case
Worst case: the pivot chosen is the largest or smallest value in the array.
Partition creates one part of size 1 (containing only the pivot), the other of size n-1.
Now we have n-1 levels, instead of log n, for a worst-case time of O(n^2).
[Recursion tree for the worst case: n at the root, then n-1, n-2, ..., with a size-1 part split off at each level]
Heap sort
[Figure: a max-heap drawn as a complete binary tree: 16 at the root, children 14 and 10, then 8, 7, 9, 3, then 2, 4, 1; think of unfilled slots as null pointers]
A = 16 14 10 8 7 9 3 2 4 1
Operations on Heaps
delete-max[H]: delete and return an item of maximum key from a nonempty heap H.
insert[H,x]: insert item x into heap H.
delete[H,i]: delete the ith item from heap H.
makeheap[A]: transform array A into a heap.
HEAPSORT(A): sorts an array in place.
Before describing the main heap operations, let us first present two secondary heap operations:
Shift-up
Shift-down
These are used as subroutines in the algorithms that implement the heap operations.
Shift-up
Suppose the key stored in the 10th position of the heap shown in Fig. 1
is changed from 5 to 25.
move H[i] up along the unique path from H[i] to the root until its proper
location along this path is found.
At each step along the path, the key of H[i] is compared with the key of its
parent H[ i/2 ].
Shift-down
Suppose we change the key 17 stored in the second position of the heap
shown in Fig. 1 to 3.
At each step along the path, its key is compared with the maximum of
the two keys stored in its children nodes (if any).
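A minimal sketch of the two operations on a max-heap stored 1-indexed (so the parent of i is i/2, matching H[i/2] above). The example reuses the array A from the Heap sort slide; changing the 10th key to 25 follows the Shift-up slide:

```python
def sift_up(H, i):
    # Move H[i] up toward the root until the max-heap property holds.
    while i > 1 and H[i] > H[i // 2]:
        H[i], H[i // 2] = H[i // 2], H[i]
        i //= 2

def sift_down(H, i, n):
    # Move H[i] down, swapping with the larger child, until it fits.
    while 2 * i <= n:
        c = 2 * i
        if c + 1 <= n and H[c + 1] > H[c]:
            c += 1
        if H[i] >= H[c]:
            break
        H[i], H[c] = H[c], H[i]
        i = c

# H[0] is unused padding so the root sits at index 1.
H = [None, 16, 14, 10, 8, 7, 9, 3, 2, 4, 1]
H[10] = 25          # change the key in the 10th position to 25
sift_up(H, 10)      # 25 bubbles up and becomes the new root

top = H[1]          # delete-max: save the root,
H[1] = H.pop()      # move the last key to the root,
sift_down(H, 1, len(H) - 1)   # and restore the heap property
```

After the sift-up, 25 is at the root; the delete-max then returns 25 and sift-down restores the heap.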
Algorithm MAKEHEAP
3. Sorting by next digit (10s place) gives: [Note: 802 again comes before
2 as 802 comes before 2 in the previous list.]
802, 2, 24, 45, 66, 170, 75, 90
4. Sorting by most significant digit (100s place) gives:
2, 24, 45, 66, 75, 90, 170, 802
Radix sort is single pass
It is important to realize that each of
the above steps requires just a single
pass over the data,
since each item can be placed in its
correct bucket without comparing with
other items.
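The digit-by-digit bucket passes above can be sketched as an LSD radix sort (a sketch for non-negative integers; the function name is ours):

```python
def radix_sort(a, base=10):
    # Least-significant-digit radix sort: one stable bucket pass per digit.
    if not a:
        return a
    digits = len(str(max(a)))
    for d in range(digits):
        buckets = [[] for _ in range(base)]
        for x in a:
            # Single pass: place each item without comparing it to others.
            buckets[(x // base ** d) % base].append(x)
        a = [x for b in buckets for x in b]
    return a

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# [2, 24, 45, 66, 75, 90, 170, 802]
```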
Topological Sort
Definition of Topological Sort
Topological sort is a method of arranging the vertices in a directed
acyclic graph (DAG), as a sequence, such that no vertex appear in
the sequence before its predecessor.
Topological Sort is not unique
s1 = {a, b, c, d, e, f, g, h, i}
s2 = {a, c, b, f, e, d, h, g, i}
s3 = {a, b, d, c, e, g, f, h, i}
s4 = {a, c, f, b, e, h, d, g, i}
etc.
Topological sort algorithm
1. Input graph G
2. Result <- { }
3. S <- { set of all nodes with no incoming edges }
4. while S is non-empty do
5.   move a node n from S to Result
6.   for each child m of n do:
7.     delete edge e (n to m)
8.     if m has no more incoming edges then
9.       add m to S
10. Finally:
11. if G still has edges then
12.   return error (G was cyclic)
13. else
14.   return Result (a topologically sorted order)
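The algorithm above (Kahn's algorithm) can be sketched in Python, with an in-degree counter standing in for actual edge deletion (names are ours):

```python
from collections import deque

def topo_sort(nodes, edges):
    # Kahn's algorithm: repeatedly remove nodes with no incoming edges.
    indeg = {n: 0 for n in nodes}
    adj = {n: [] for n in nodes}
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    S = deque(n for n in nodes if indeg[n] == 0)
    result = []
    while S:
        n = S.popleft()
        result.append(n)
        for m in adj[n]:
            indeg[m] -= 1          # "delete edge n -> m"
            if indeg[m] == 0:      # m has no more incoming edges
                S.append(m)
    if len(result) != len(nodes):
        raise ValueError("G was cyclic")
    return result

order = topo_sort("ABCD", [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")])
print(order)   # ['A', 'B', 'C', 'D']
```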
DFS based topological sort
result = [ ]
roots = {nodes with no parent}
for n in roots:
    visit(n)

function visit(n)
    if n is not-visited then
        mark n as visited
        for each child m of n do:
            visit(m)
        add n to the front of result
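The pseudocode can be sketched in Python. The edge set used below is an assumption reconstructed to be consistent with the worked example that follows (only its final order survives in these notes):

```python
def dfs_topo(nodes, adj):
    # Postorder DFS; prepending each finished node yields a topological order.
    visited = set()
    result = []
    def visit(n):
        if n in visited:
            return
        visited.add(n)
        for m in adj.get(n, []):
            visit(m)
        result.insert(0, n)   # add n to the front of result
    for n in nodes:
        visit(n)
    return result

# Hypothetical DAG consistent with the worked example below.
adj = {"C": ["G", "B"], "B": ["A"], "A": ["D"],
       "D": ["E"], "E": ["F"], "F": ["H"]}
order = dfs_topo("ABCDEFGH", adj)
print("".join(order))   # CGBADEFH
```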
Topological Sort: DFS worked example
[Figure, repeated over slides 1-23: a DAG on the vertices A, B, C, D, E,
F, G, H. Starting dfs(A) leads to the nested calls dfs(D), dfs(E), dfs(F),
dfs(H); as each call finishes, its vertex is assigned the next free
position from the back: H=7, F=6, E=5, D=4, A=3. Then dfs(B) finishes with
B=2, and dfs(C) calls dfs(G), giving G=1 and finally C=0.]
Topological order: C G B A D E F H
Data Structure
Tree
A tree is a connected acyclic graph
Has a Root
Each node may have subtrees
Branching factor: number of children
Tree vocabulary
Root: A (only node with no parent)
Children(B) = {E, F}
Siblings(X) = {Nodes with the same parent as X,
excluding X}
Siblings(B) = {C, D}, Siblings(A) = { }
Descendants(X) = {Nodes below X}
Descendants(A) = {B,C,D,E,F,G}
Ancestors(X) = {Nodes between X and the root}
Ancestors(E) = {B, A}
Nodes with no children are called
leaves or external or terminal nodes: {C, E, F, G}
Internal nodes: Nodes with children (vertices other than
leaves {A, B, D})
The subtree rooted at X is the tree of all descendants
of X, including X.
Tree
Depth or Level of a vertex is length of the path
from the root to the vertex.
Root depth = 0
Depth(x) = number of ancestors of x.
Depth(x) = 1 + Depth(parent(x))
Height of a node x is the length of the longest
path from the vertex to a leaf.
Height of a tree is Height of root
Path
A path to a node x is a sequence of nodes from the root to x.
How many paths are there to a node? (In a tree, exactly one.)
Height of a node is the length of the
LONGEST path from the node to its leaves.
Height of a leaf is 0
Tree Traversals
visit all nodes of a tree, starting from the
root. Use recursion
each element of the tree is visited exactly
once.
visit of an element, indicates action (print value,
evaluate the operator, etc.)
preOrder(T) {
if(! T) return;
visit(T);
preOrder(T.left);
preOrder(T.right);
}
Preorder: ABCDEFGHI
Postorder: CBFGEIHDA
Inorder: BCAFEGDIH
Traversal example
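The three orders listed above can be reproduced on a small binary tree. The tree shape below is reconstructed to be consistent with those orders, since the original figure did not survive in these notes:

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def preorder(t):
    return [t.val] + preorder(t.left) + preorder(t.right) if t else []

def inorder(t):
    return inorder(t.left) + [t.val] + inorder(t.right) if t else []

def postorder(t):
    return postorder(t.left) + postorder(t.right) + [t.val] if t else []

# A tree consistent with the three traversal orders listed above.
tree = Node("A",
            Node("B", None, Node("C")),
            Node("D",
                 Node("E", Node("F"), Node("G")),
                 Node("H", Node("I"), None)))

print("".join(preorder(tree)))   # ABCDEFGHI
print("".join(postorder(tree)))  # CBFGEIHDA
print("".join(inorder(tree)))    # BCAFEGDIH
```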
Tree Traversals
If T is complete with height k then:
n = sum_{j=0..k} 2^j = 2^(k+1) - 1.
If T is almost-complete then:
2^k <= n <= 2^(k+1) - 1
Storage of Binary Trees in arrays
Max heap
Skip lists (a probabilistic alternative
to balanced trees )
A skip list is a data structure for storing a sorted list of items using
a hierarchy of linked lists that connect increasingly sparse
subsequences of the items. These auxiliary lists allow item lookup
with efficiency comparable to balanced binary search trees (that is,
with number of probes proportional to log n instead of n)
Skip list
Skip lists are a probabilistic data structure that seem likely to
supplant balanced trees as the implementation method of choice for
many applications.
Skip list algorithms have the same asymptotic expected time
bounds as balanced trees and are simpler, faster and use less
space.
Binomial tree
A binomial heap is implemented as a collection of binomial trees. A binomial tree is
defined recursively: A binomial tree of order 0 is a single node
A binomial tree of order k has a root node whose children are roots of binomial
trees of orders k-1, k-2, ..., 2, 1, 0 (in this order).
Binomial heap
Supports quick merging of two heaps.
A binomial heap is implemented as a collection
of binomial trees (compare with a binary heap single binary
tree).
A binomial tree is defined recursively:
A binomial tree of order 0 is a single node
A binomial tree of order k has a root node whose
children are roots of binomial trees of orders k-1, k-2,
..., 2, 1, 0 (in this order).
Properties
A binomial tree of order k has 2^k nodes, height k.
A binomial heap with n nodes consists of at most floor(log n) + 1
binomial trees.
Example of binomial trees
Binomial trees of order 0 to 3:
Each tree has a root node with subtrees of all lower ordered binomial trees, which
have been highlighted.
For example, the order 3 binomial tree is connected to an order 2, 1, and 0
B-tree of order 2
B-Tree
In order to maintain the pre-defined range,
internal nodes may be joined or split.
Because a range of child nodes is
permitted, B-trees do not need re-
balancing as frequently as other self-
balancing search trees,
but may waste some space, since nodes
are not entirely full.
B-Tree
Each internal node of a B-tree will contain
a number of keys.
The keys act as separation values which
divide its subtrees
Example of a B-tree of order 5
This means that (other than the root node) all internal nodes have at
least ceil(5 / 2) = ceil(2.5) = 3 children (and hence at least 2 keys).
The maximum number of children that a node can have is 5 (so that
4 is the maximum number of keys).
Each leaf node must contain at least 2 keys.
In practice B-trees usually have orders a lot bigger than 5.
B+ Trees
The primary value of a B+ tree is in storing data for
efficient retrieval in a block-oriented storage context,
in particular, filesystems.
The B+-tree is used as a (dynamic) indexing method in
relational database management systems.
This is primarily because unlike binary search trees, B+
trees have very high fanout (typically on the order of 100
or more), which reduces the number of I/O operations
required to find an element in the tree.
B+ Tree example
B+ Tree
variant of the original B-tree in which all
records are stored in the leaves and all
leaves are linked sequentially.
Red-black tree
A red-black tree is a type of self-balancing binary
search tree.
The self-balancing is provided by painting each node
with one of two colors (these are typically called 'red' and
'black', hence the name of the trees) in such a way that
the resulting painted tree satisfies certain properties that
don't allow it to become significantly unbalanced.
When the tree is modified, the new tree is subsequently
rearranged and repainted to restore the coloring
properties. The properties are designed in such a way
that this rearranging and recoloring can be performed
efficiently.
Example of RB tree
Red-black tree
Tracking the color of each node requires only 1
bit of information per node because there are
only two colors.
The tree does not contain any other data
specific to its being a red-black tree, so its
memory footprint is almost identical to that of a
classic (uncolored) binary search tree.
In many cases the additional bit of information
can be stored at no additional memory cost.
Properties
A node is either red or black.
The root is black. (This rule is sometimes omitted.
Since the root can always be changed from red to
black, but not necessarily vice-versa, this rule has little
effect on analysis.)
All leaves (NIL) are black. (All leaves are same color
as the root.)
Both children of every red node are black.
Every simple path from a given node to any of its
descendant leaves contains the same number of black
nodes.
These constraints enforce a critical property of red-black trees: the path from
the root to the furthest leaf is no more than twice as long as the path from the root
to the nearest leaf.
Uses
C++ STL Library: Map, Multimap, Set, Multiset
Java's TreeMap class
IBM's ISAM (Indexed Sequential Access
Method)
Splay Trees - 2
Zig-Zig (x has a grandparent)
[Figure: before, z=10 is on top with child y=20, grandchild x=30, and
subtrees T1..T4; after the zig-zig, x=30 is on top with child y=20 and
grandchild z=10, with T1..T4 reattached.]
Splay Trees - 3
Zig-Zag (x has a grandparent)
[Figure: before, z=10 is on top with child y=30, grandchild x=20, and
subtrees T1..T4; after the zig-zag, x=20 is on top with children z=10
and y=30.]
Splay Trees - 4
Zig (x has no grandparent, so y is the root)
[Figure: before, y=10 is the root, x=20 is its child, and w=30 (w could
be NIL); after the zig, x=20 becomes the root with y=10 as its child,
with w and the subtrees T1..T4 reattached.]
Splay Trees - 5 to 12
Complete Example: Splay(78)
[Figure, repeated over several slides: a BST with root 44 and nodes 17,
50, 32, 65, 97, 28, 54, 82, 29, 76, 80, 78. Successive zig-zag steps
(first with y=80, z=76, then with y=82, z=65, then with y=88, z=50) move
x=78 up the tree, and a final zig at the root 44 makes 78 the new root.]
Splay Trees - 13
Case 1 of 3
1. x has no grandparent (zig)
If x is left child of root y, then rotate (xy)R.
Else if x is right child of root y, then rotate
(yx)L.
Case 2 of 3
2. x is LL or RR grandchild (zig-zig)
If x is left child of y, and y is left child of z,
then rotate at grandfather (yz)R and then
rotate at father (xy)R.
Else if x is right child of y, and y is right child
of z,
then rotate at grandfather (yz)L and then
rotate at father (xy)L.
If x has not become the root, then
continue splaying at x.
Case 3 of 3
3. x is LR or RL grandchild (zig-zag)
If x is right child of y, and y is left child of z,
then rotate at father (yx)L and then rotate at
grandfather (xz)R.
Else if x is left child of y, and y is right child
of z,
then rotate at father (yx)R and then rotate at
grandfather (xz)L.
If x has not become the root, then
continue splaying at x.
A method for Faster Multiplication
Examples
E.g. a polynomial in 2 variables with 3 terms.
Degree-bound of A(x) is n.
Degree is k if a_k is the highest non-zero coefficient.
Polynomial Addition
A(x) = sum_{j=0}^{n-1} a_j x^j
B(x) = sum_{j=0}^{n-1} b_j x^j
C(x) = A(x) + B(x) = sum_{j=0}^{n-1} c_j x^j, with c_j = a_j + b_j
For multiplication, C(x) = A(x) B(x) has coefficients
c_j = sum_{k=0}^{j} a_k b_{j-k}
called the convolution of a and b.
> factor(m);
(2*x+3)*(x+3)*(x^2+2*x+5)
Usual multiplication of P*Q
Divide P and Q of size 2n each into smaller
polynomials of size n each:
P = A x^n + B
Q = C x^n + D
P*Q = (A*C)(x^n)^2 + (A*D + B*C)(x^n) + (B*D)
There are 4 multiplies of size n polynomials, plus a size 2n polynomial
addition:
O(4*(n^2)). What we expected, no better.
Karatsuba's multiplication
P*Q = (A*C)(x^n)^2 + (A*D + B*C)(x^n) + (B*D)
The new idea: instead, compute {R,S,T} in 3 multiplications:
R = (A+B)*(C+D) = A*C + A*D + B*C + B*D  .. mult1
S = A*C                                  .. mult2
T = B*D                                  .. mult3
U = R - S - T = A*D + B*C
P*Q = S*(x^n)^2 + U*x^n + T
    = A*C*(x^n)^2 + (R-S-T)*x^n + B*D
    = A*C*(x^n)^2 + (A*D+B*C)*x^n + B*D
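Karatsuba's three-multiplication scheme can be sketched on coefficient lists (the helper names are ours, not from the slides):

```python
def poly_add(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(n)]

def poly_sub(p, q):
    return poly_add(p, [-c for c in q])

def karatsuba(p, q):
    # Multiply coefficient lists using 3 recursive multiplications.
    if len(p) == 1 or len(q) == 1:
        out = [0] * (len(p) + len(q) - 1)
        for i, a in enumerate(p):
            for j, b in enumerate(q):
                out[i + j] += a * b
        return out
    n = max(len(p), len(q)) // 2          # split at x^n
    B, A = p[:n], p[n:]                   # p = A*x^n + B
    D, C = q[:n], q[n:]                   # q = C*x^n + D
    S = karatsuba(A, C)                                  # mult2
    T = karatsuba(B, D)                                  # mult3
    R = karatsuba(poly_add(A, B), poly_add(C, D))        # mult1
    U = poly_sub(poly_sub(R, S), T)       # A*D + B*C
    out = [0] * (len(p) + len(q) - 1)
    for i, c in enumerate(T):
        out[i] += c                       # T at x^0
    for i, c in enumerate(U):
        out[i + n] += c                   # U at x^n
    for i, c in enumerate(S):
        out[i + 2 * n] += c               # S at x^(2n)
    return out

print(karatsuba([1, 1, 2], [1, 2, 3]))   # (1+x+2x^2)(1+2x+3x^2)
```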
Time analysis
C_{i,j} = sum_{k=1}^{N} a_{i,k} b_{k,j}
Thus T(N) = sum_{i=1}^{N} sum_{j=1}^{N} sum_{k=1}^{N} c = cN^3 = O(N^3)
Strassen's Matrix Multiplication
A x B = R
C11 = P1 + P4 - P5 + P7
    = (A11+A22)(B11+B22) + A22(B21-B11) - (A11+A12)B22 + (A12-A22)(B21+B22)
    = A11 B11 + A11 B22 + A22 B11 + A22 B22 + A22 B21 - A22 B11
      - A11 B22 - A12 B22 + A12 B21 + A12 B22 - A22 B21 - A22 B22
    = A11 B11 + A12 B21
Strassen Algorithm
Divide the matrices into sub-matrices and recursively multiply the
sub-matrices using matmul(..):
void matmul(int *A, int *B, int *R, int n) {
    if (n == 1) {
        (*R) += (*A) * (*B);
    } else {
        matmul(A,           B,           R,           n/4);
        matmul(A,           B + (n/4),   R + (n/4),   n/4);
        matmul(A + 2*(n/4), B,           R + 2*(n/4), n/4);
        matmul(A + 2*(n/4), B + (n/4),   R + 3*(n/4), n/4);
        matmul(A + (n/4),   B + 2*(n/4), R,           n/4);
        matmul(A + (n/4),   B + 3*(n/4), R + (n/4),   n/4);
        matmul(A + 3*(n/4), B + 2*(n/4), R + 2*(n/4), n/4);
        matmul(A + 3*(n/4), B + 3*(n/4), R + 3*(n/4), n/4);
    }
}
(Note: as written this is the plain 8-multiplication recursion; Strassen's
algorithm replaces the 8 recursive products with 7.)
Strassen vs O(n^3)
for small n (usually n < 45) the general
algorithm is practically a better choice.
Faster Multiplication,
using Polynomials
Polynomial interpolation
The red dots denote the data points (xi,yi),
while the blue curve shows the interpolation polynomial.
Polynomial interpolation
Polynomial interpolation
Suppose we have 4 data points:
(x1, y1), (x2, y2), (x3, y3), and (x4, y4).
There is exactly one polynomial of degree at most 3,
p(x) = ax^3 + bx^2 + cx + d
which passes through the four points.
Interpolate in Maple
# Evaluate P(x) and Q(x) at 3 points x=[-1,0,1]
# And Interpolate at 3 points ([x1,x2,x3], [y1,y2,y3])
> p := x -> 1 + 2 * x;
p := x -> 1 + 2 x
> q := x -> 2 + x;
q := x -> 2 + x
> seq(p(x)*q(x), x=-1..1);
-1, 2, 9
> interp([-1,0,1],[-1,2,9], x);
2 x^2 + 5 x + 2
Plot the graphs
> plot({p(x), q(x), p(x)*q(x)}, x=-2..2);
All the maple commands together
1. p := x -> 1 + 2 * x;
2. q := x -> 2 + x;
3. seq(p(x)*q(x), x=-1..1);
4. interp([-1,0,1],[-1,2,9], x);
5. py:= {seq([x,p(x)], x=-1..1)};
6. qy:= {seq([x,q(x)], x=-1..1)};
7. pqy:= {seq([x,p(x)*q(x)], x=-1..1)};
8. pl1:=plot({p(x), q(x), p(x)*q(x)},x=-2..2):
9. pl2:=pointplot(py):
10. pl3:=pointplot(qy):
11. pl4:=pointplot(pqy);
12. with(plots):
13. display({pl1,pl2,pl3,pl4});
Vandermonde matrix
Given the points {p(xi) : i=0..n}
we can compute the polynomial
p(x)= anxn + .. + a0
using the matrix inverse:
Multiply polynomials h(x) = f(x)*g(x) by
interpolation at points
A. for i = 0..n:
   evaluate f(i) and g(i);
   save the value of h(i) = f(i) * g(i).
B. Interpolate { (i, h(i)) : i = 0..n } to the unique
polynomial h(x) that passes through these
points: (0,h(0)), (1,h(1)), ..., (n,h(n)).
(Here n must be at least deg f + deg g, since a polynomial of degree d
is determined by d+1 points.)
Horner's rule to evaluate P(x)
Evaluating P(x) at a point x0:
Brute force takes O(n^2):
P(x) = a0 + x a1 + x^2 a2 + x^3 a3 + ...
Horner's rule does it in O(n) by factoring on x:
P(x) = a0 + x (a1 + x (a2 + x (...)))
Doing it at n points takes O(n^2).
Horner's Polynomial Evaluation
Given the coefficients (a0, a1, a2, ..., an-1) of p(x).
Evaluate p(x) at z in O(n) time using:
Function Horner(A=(a0,a1,a2,...,an-1), z):
  if n == 1 then return a0
  else return a0 + z * Horner((a1,a2,...,an-1), z)
Using Maple
> convert( 5 * x^5 + 3 * x^3 + 22 * x^2 + 55,
horner);
55+(22+(3+5*x^2)*x)*x^2
Homework
Write C program to compute p(x) using horner
rule, given polynomial coeffs as p[ ] and points x[ ].
e.g. p(x) = 5 * x^5 + 3 * x^3 + 22 * x^2 + 55
Horner of p(x) is 55+(22+(3+5*x^2)*x)*x^2
double p [ ] = { 55,0,22,3,0,5};
double x [ ] = { 1, 10, 20};
for k in 0..2
print horner(p, x[k])
Homework solution: horner in C
double horner( double *coeffs, int n, double x ) {
double y = 0.0;
while ( n-- > 0)
y = y * x + coeffs[n];
return y;
}
#define LEN(A) (sizeof(A)/sizeof(A[0]))
int main( ) {
double k, p[] = { 1, 0, 3 }; // p(x):=1+0x+3x^2 = 1+x(0+3*x));
for(k=-2;k<=2;k++)
printf( "p(%g)= %g\n", k, horner( p, LEN(p), k) );
return 0;
}
A peculiar way to multiply
polynomials f(x) by g(x)
How much does this cost?
Evaluations: 2n evaluations of size-n polynomials
using Horner's rule is O(n^2).
Interpolation: using Lagrange or Newton
Divided Differences is also O(n^2).
So O(n^2); it is slower than Karatsuba.
BUT....
Evaluate P(x) faster?
Our choice of points 0,1,2,... n was
arbitrary.
What if we choose better points?
Evaluating P(0) is almost free.
Evaluating at symmetric points:
P(c) and P(-c) can be done faster by
sharing the calculations
e.g. take coefficients of odd and even
powers separately.
Evaluate p(-x) faster from p(x)
p1 = P_odd  := a_{2n+1} x^{2n+1} + a_{2n-1} x^{2n-1} + ... + a_1 x
p2 = P_even := a_{2n} x^{2n} + ... + a_0
[Figure: the fast multiplication scheme. The coefficient representations
of A and B are evaluated at the roots of unity in O(n log n); the
point-value representations are multiplied pointwise in O(n);
interpolation, also O(n log n), recovers the coefficient representation
of C = A*B.]
Complex roots of unity
An n-th root of unity is a complex number w such that w^n = 1.
The value w_n = exp(i 2*pi/n) is called the
principal n-th root of unity,
where: i = sqrt(-1)
exp(i*u) = cos(u) + i*sin(u).
All the n roots of unity
The n distinct roots of 1 are
{1, w, w^2, ..., w^(n-1)}
= { exp(k * i 2*pi / n), k = 0..n-1 }
with w = e^(2*pi*i/n)
The DFT
Assume n is a power of 2
(pad with 0 if required).
The DFT Discrete Fourier Transform
The vector y=(y_0..y_{n-1}) is called the DFT of the
coefficient vector a=(a_0..a_{n-1}) of a polynomial A(x),
and written as vector: y=DFTn(a).
y_k = A(w_n^k) = sum_{j=0}^{n-1} a_j w_n^{kj}, i.e. in matrix form:

[y_0  ]   [1  1          1            ...  1                ] [a_0  ]
[y_1  ]   [1  w_n        w_n^2        ...  w_n^(n-1)        ] [a_1  ]
[y_2  ] = [1  w_n^2      w_n^4        ...  w_n^(2(n-1))     ] [a_2  ]
[y_3  ]   [1  w_n^3      w_n^6        ...  w_n^(3(n-1))     ] [a_3  ]
[...  ]   [...                                              ] [...  ]
[y_n-1]   [1  w_n^(n-1)  w_n^(2(n-1)) ...  w_n^((n-1)(n-1)) ] [a_n-1]
Interpolation, inverse matrix
Theorem: for j and k in {0,1,...,n-1},
the (j,k) entry of the inverse matrix is w_n^(-kj)/n,
and V * V^(-1) = I.
Proof: Next slide, from [Cormen 3e book, pg 913].
Comparing the coefficients of z^(l-1) on the left and right hand sides of
this equation proves the proposition.
Inverse matrix to F_n
Interpolation: a = F_n^(-1) y, where the (j,k) entry of F_n^(-1) is
w_n^(-kj) / n.
FFT and iFFT are O(n log n)
By using the FFT and the iFFT,
we can transform a polynomial of degree n
back and forth between its coefficient
representation and a point-value representation
in time O(n log n).
The convolution
For any two vectors a and b of length n, where n is a power of 2, we can
compute their convolution as follows.
[Figure: a butterfly operation. (a) The two input values enter from the
left, the twiddle factor w_n^k is multiplied by y_k^[1], and the sum and
difference are output on the right. (b) A simplified drawing of a
butterfly operation.]
Convolution
The DFT and the iDFT can be used to multiply two polynomials. So we can
get the coefficients of the product polynomial quickly if we can compute
the DFT (and iDFT) quickly:
[a0,a1,...,an-1], [b0,b1,...,bn-1]
  -> pad each with n 0's ->
[a0,...,an-1,0,...,0], [b0,...,bn-1,0,...,0]
  -> DFT ->
[y0,y1,...,y2n-1], [z0,z1,...,z2n-1]
  -> componentwise multiply ->
[y0 z0, y1 z1, ..., y2n-1 z2n-1]
  -> inverse DFT ->
[c0, c1, ..., c2n-1]  (the convolution)
Iterative FFT
(a0, a1, a2, a3, a4 , a5, a6, a7 )
We take the elements in pairs, compute the DFT of each pair, using one
butterfly operation, and replace the pair with its DFT
We take these n/2 DFTs in pairs and compute the DFT of the four vector
elements.
We take 2 (n/2)-element DFTs and combine them using n/2 butterfly
operations into the final n-element DFT
Iterative-FFT Code with bit reversal.
Bit-reversed order of 0..7 is 0,4,2,6,1,5,3,7
(000,100,010,110,001,101,011,111 are the bit reversals of
000,001,010,011,100,101,110,111).
BIT-REVERSE-COPY(a, A)
1. n <- length[a]
2. for k <- 0 to n-1
3.   do A[rev(k)] <- a_k
ITERATIVE-FFT(a)
1. BIT-REVERSE-COPY(a, A)
2. n <- length[a]
3. for s <- 1 to log n
4.   do m <- 2^s
5.     w_m <- e^(2*pi*i/m)
6.     for k <- 0 to n-1 by m
7.       do w <- 1
8.         for j <- 0 to m/2 - 1
9.           do t <- w * A[k + j + m/2]
10.             u <- A[k + j]
11.             A[k + j] <- u + t
12.             A[k + j + m/2] <- u - t
13.             w <- w * w_m
14. return A
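A Python sketch of the iterative FFT above, checked against a naive O(n^2) DFT using the same w = e^(2*pi*i/n) convention (a sketch; the helper names are ours):

```python
import cmath

def bit_reverse_copy(a):
    # A[rev(k)] <- a_k, where rev reverses the lg(n) index bits.
    n = len(a)
    bits = n.bit_length() - 1
    A = [0j] * n
    for k in range(n):
        A[int(format(k, "0%db" % bits)[::-1], 2)] = a[k]
    return A

def iterative_fft(a):
    # Iterative FFT with bit-reversal; len(a) must be a power of 2.
    A = bit_reverse_copy(a)
    n = len(A)
    m = 2
    while m <= n:
        wm = cmath.exp(2j * cmath.pi / m)   # principal m-th root of unity
        for k in range(0, n, m):
            w = 1
            for j in range(m // 2):
                t = w * A[k + j + m // 2]   # butterfly operation
                u = A[k + j]
                A[k + j] = u + t
                A[k + j + m // 2] = u - t
                w *= wm
        m *= 2
    return A

def naive_dft(a):
    # Direct O(n^2) DFT with the same root-of-unity convention.
    n = len(a)
    w = cmath.exp(2j * cmath.pi / n)
    return [sum(a[j] * w ** (k * j) for j in range(n)) for k in range(n)]

a = [1, 1, 2, 0, 0, 0, 0, 0]
assert all(abs(x - y) < 1e-9 for x, y in zip(iterative_fft(a), naive_dft(a)))
```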
A parallel FFT circuit
[Figure: a butterfly network for an 8-point FFT. The inputs a0..a7 enter
on the left in bit-reversed order, pass through lg 8 = 3 stages of
butterfly operations with the appropriate twiddle factors, and the
outputs y0..y7 appear on the right.]
FFT Computation
example in Maple
Example: Q = A*B
Usual multiplication of the following polynomials is O(n^2):
A(x) := 1 + x + 2x^2;
B(x) := 1 + 2x + 3x^2;
Q(x) := A(x)*B(x)
      = 1 + 3x + 7x^2 + 7x^3 + 6x^4
Fast Polynomial Multiplication
Multiply the polynomials in O(n log(n))
using DFT of the coefficient vectors:
A=(1,1,2,0,0)
B=(2,1,3,0,0)
DFT(A)=
[4.000, (-0.309 - 2.126i), (0.809 + 1.314i),
(0.809 - 1.314i), ( -0.309+ 2.126i)]
DFT(B)=
[6.000, (-0.809 - 3.665i), (0.309 + 1.677i),
(0.309 -1.677i), (-0.809 + 3.665i)]
A x B = iDFT(DFT(A)*DFT(B))
DFT(A) DFT(B) =
[24.00, (-7.545 + 2.853i),
(-1.954 + 1.763i), ( -1.954 - 1.763i),
(-7.545 - 2.853i)]
and
A x B = iDFT( DFT(A) . DFT(B) ) = (1,3,7,7,6).
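The same product can be checked with numpy's FFT (assuming numpy is available; np.fft uses the e^(-2*pi*i/n) sign convention, which does not affect the DFT/iDFT round trip):

```python
import numpy as np

# Multiply A(x) = 1 + x + 2x^2 and B(x) = 1 + 2x + 3x^2 via the FFT.
# Length 5 suffices, since the product has exactly 5 coefficients.
a = np.array([1, 1, 2, 0, 0], dtype=float)
b = np.array([1, 2, 3, 0, 0], dtype=float)
c = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b))   # cyclic convolution
coeffs = np.round(c.real).astype(int)
print(coeffs)   # [1 3 7 7 6]
```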
FFT speedup
If one complex multiplication takes 500ns:
N       T_DFT       T_FFT
2^12    8 sec.      0.013 sec.
2^16    0.6 hours   0.26 sec.
2^20    6 days      5 sec.
FFT in Maple
# Multiply polynomials A and B using FFT
A(x):=1+x+2*x^2;
B(x):=1+2*x+3*x^2;
Q(x):=expand(A(x)*B(x));
readlib(FFT);
ar := array([1,1,2,0,0,0,0,0]); ai := array([0,0,0,0,0,0,0,0]);
br := array([1,2,3,0,0,0,0,0]); bi := array([0,0,0,0,0,0,0,0]);
FFT(3,ar,ai); af := evalm(ar+I*ai);
FFT(3,br,bi); bf := evalm(br+I*bi);
abf := zip( (x,y)->x * y, af, bf );
abfr := evalm(map(x->Re(x), abf));
abfi := evalm(map(x->Im(x), abf));
iFFT(3,abfr, abfi);
evalm(abfr+I*abfi);
# coeffs of Q := [1,3,7,7,6]
FFT with numpy in python
#!python2.5
# What: fft in python using numpy.
fft(data) = [
22 +0j -14.2 +10.7j -79.1 +0j -4.8-101.8j
0 -2.j 4.8 -58.2j 79.1 -0j 14.2 +46.3j
-22 +0j 14.2 -46.3j 79.1 -0j 4.8 +58.2j
0 +2.j -4.8+101.8j -79.1 +0j -14.2 -10.7j]
ifft(fft(data)) = [
0+0j 1-0j 7-0j 2-0j -1+0j 3+0j 7-0j 8+0j
0+0j -23-0j -7+0j 31+0j 1+0j 31+0j -7+0j -31+0j]
fft.c 1 of 2.
double PI = 3.14159265358979;
const double eps = 1.e-7;
typedef double complex cplx;
void fft_rec(cplx buf[], cplx out[], int n, int step, int inv) {
if (step < n) {
int i;
fft_rec(out , buf , n, step * 2, inv);
fft_rec(out + step, buf + step, n, step * 2, inv);
for (i = 0; i < n; i += 2 * step) {
int sign = inv? -1 : 1;
cplx t = cexp(-I * PI * i * sign / n) * out[i + step];
buf[ i / 2] = out[i] + t;
buf[(i + n)/2] = out[i] - t;
}
}
}
fft.c continued 2 of 2
void fft(cplx buf[], int n, int inv) {
cplx out[n];
int i;
assert(is_power_of_two(n));
for (i = 0; i < n; i++)
out[i] = buf[i];
fft_rec(buf, out, n, 1, inv);
if (inv) for (i = 0; i < n; i++) buf[i] /= n;
}
int main() {
cplx buf[] = {1, 2, 5, 1, 0, 0, 0, 99};
fft(buf, 8, 0);
fft(buf, 8, 1); // invfft
}
Running fft.c
> gcc -std=gnu99 -g -o fft.exe fft.c
> ./fft.exe
Data: x[0]=(1, 0), x[1]=(2, 0), x[2]=(5, 0), x[3]=(1, 0),
x[4]=(0, 0), x[5]=(0, 0), x[6]=(0, 0), x[7]=(99, 0),
Because the discrete Fourier transform is invertible, all the information about the shape is contained in
the Fourier descriptors. A common thing to do with Fourier descriptors is to set the descriptors
corresponding to values above a certain frequency to zero and then reconstruct the shape. The
effect of this is a low-pass filtering of the shape, smoothing the boundary. Since many shapes can
be approximated with a small number of parameters, Fourier descriptors are commonly used to
classify shapes.
The basic method of Fourier descriptors is discussed in R. C. Gonzalez and R. E. Woods, Digital
Image Processing, Englewood Cliffs, NJ: Prentice Hall, 2007.
Fibonacci Numbers
Longest Common Subsequence (LCS)
Shortest Path
Dynamic programming
int fibo_fast(int n)
  if n < 2 then return 1
  f1 = f2 = 1
  for k = 2 to n
    t = f1 + f2
    f2 = f1
    f1 = t
  return f1
Θ(n) time and
Θ(1) space
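A direct transcription in Python, using the f(0) = f(1) = 1 convention from the pseudocode above:

```python
def fibo_fast(n):
    # Bottom-up DP: keep only the last two values, Θ(n) time, Θ(1) space.
    if n < 2:
        return 1
    f1 = f2 = 1
    for _ in range(2, n + 1):
        f1, f2 = f1 + f2, f1
    return f1

print([fibo_fast(n) for n in range(8)])   # [1, 1, 2, 3, 5, 8, 13, 21]
```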
LCS: Longest Common
Subsequence
Example of Dynamic programming
Matching two strings with missing
characters
DNA are strings over {C,G,A,T}
RNA are strings over {C,G,A,U}
DNA Matching
The main objective of DNA analysis is to get a
visual representation of DNA left at the scene of
a crime.
A DNA "picture" features columns of dark-
colored parallel bands and is equivalent to a
fingerprint lifted from a smooth surface.
To identify the owner of a DNA sample, the DNA
"fingerprint," or profile, must be matched, either
to DNA from a suspect or to a DNA profile stored
in a database.
LCS example in DNA matching
The longest common subsequence (LCS) problem
string : A = b a c a d
Common subsequences of
A =bacad
B=accbadcb
Are: ad, ac, bac, acad.
LCS problem
Given two sequences X = <x1,x2,...,xm> and Y = <y1,y2,...,yn>,
find an LCS of X and Y.
Brute force approach:
Enumerate all subsequences of X and Y.
Find those common to both.
Find the longest one.
This is infeasible: space Θ(2^m + 2^n), time Θ(2^(m+n)).
(There are 2^m subsequences of X.)
LCS data structure is a table c
X = X1 X2 X3 ... Xn
Y = Y1 Y2 Y3 ... Ym
c[i, j] is a table of length(LCS(X[1..i], Y[1..j]))
Where:
c[0,j] = c[i,0] = 0   // first row and column are zero.
c[i,j] = 1 + c[i-1,j-1]            if Xi == Yj
       = max(c[i,j-1], c[i-1,j])   otherwise
PRINT-LCS(i, j)
1 if i = 0 or j = 0
2   then return
3 if c[i, j] == 1 + c[i-1, j-1] then
4   PRINT-LCS(i - 1, j - 1)
5   print x[i]
6 else if c[i, j] == c[i-1, j] then
7   PRINT-LCS(i - 1, j)
8 else PRINT-LCS(i, j - 1)
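The recurrence and the walk-back can be sketched together in Python (a sketch, not from the slides; the table build fills c bottom-up, then the walk-back recovers one LCS):

```python
def lcs(X, Y):
    # Build the c table bottom-up, then walk it back to recover one LCS.
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = 1 + c[i - 1][j - 1]
            else:
                c[i][j] = max(c[i][j - 1], c[i - 1][j])
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:
            out.append(X[i - 1])
            i, j = i - 1, j - 1
        elif c[i - 1][j] >= c[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

print(len(lcs("TAGAGA", "GAGATA")))   # 4, e.g. "GAGA"
```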
LCS example
2. X=YOUR_FIRST_NAME
Y=YOUR_LAST_NAME
Homework Solution
Find LCS of X=TAGAGA, Y=GAGATA
(example of dynamic programming)
Max subarray sum
Example: [Figure: a price time-series.]
Another Example: buying low and selling high, even with perfect
knowledge, is not trivial:
Max subarray sum
First Solution: compute the value change of each subarray
corresponding to each pair of dates, and find the maximum.
1. How many pairs of dates: C(n,2) = n(n-1)/2
2. This belongs to the class Θ(n^2)
3. The rest of the costs, although possibly constant, don't improve
the situation: Θ(n^2).
Max subarray sum
We are going to find an algorithm with an o(n^2) running time (i.e.
strictly asymptotically faster than n^2), which should allow us to look
at longer time-series.
Transformation: Instead of the daily price, let us consider the daily
change: A[i] is the difference between the closing value on day i
and that on day i-1.
The problem becomes that of finding a contiguous subarray the
sum of whose values is maximum.
On a first look this seems even worse: roughly the same number of
intervals (one fewer, to be precise), and the requirement to add the
values in the subarray rather than just computing a difference:
Θ(n^3)?
Max subarray sum
6
Max subarray sum
How do we divide?
We observe that a maximum contiguous subarray A[i..j] must be
located as follows:
1. It lies entirely in the left half of the original array: [low..mid];
2. It lies entirely in the right half of the original array: [mid+1..high];
3. It straddles the midpoint of the original array: i <= mid < j.
Max subarray sum
The left and right subproblems are smaller versions of the
original problem, so they are part of the standard Divide & Conquer
recursion. The middle subproblem is not, so we will need to count
its cost as part of the combine (or divide) cost.
The crucial observation (and it may not be entirely obvious) is that
we can find the maximum crossing subarray in time linear in
the length of the A[low..high] subarray.
Max subarray sum
The Algorithm: [Figure: pseudocode for FIND-MAX-CROSSING-SUBARRAY.]
Max subarray sum
The total number of iterations for both loops is exactly high-low+1.
The left recursion will return the indices and value for the largest
contiguous subarray in the left half of A[low..high], the right
recursion will return the indices and value for the largest
contiguous subarray in the right half of A[low..high], and FIND-
MAX-CROSSING-SUBARRAY will return the indices and value
for the largest contiguous subarray that straddles the midpoint
of A[low..high].
Max subarray sum
The recursive algorithm: [Figure: pseudocode for FIND-MAXIMUM-SUBARRAY.]
Max subarray sum
The analysis:
1. The base case n = 1. This is easy: T(1) = Θ(1), as one
might expect.
2. The recursion case n > 1. We make the simplifying
assumption that n is a power of 2, so we can always split in
half. T(n) = cost of splitting + 2T(n/2) + cost of finding the largest
midpoint-straddling subarray + cost of comparing the three
subarray values and returning the correct triple.
The cost of splitting is constant = Θ(1); the cost of finding the
largest straddling subarray is Θ(n); the cost of the final
comparison and return is constant Θ(1).
Max subarray sum
We finally have:
T(n) = Θ(1)            if n = 1,
T(n) = 2T(n/2) + Θ(n)  if n > 1.
The recurrence has the same form as that for MERGESORT, and
thus we should expect it to have the same solution T(n) = Θ(n lg n).
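The divide-and-conquer scheme above can be sketched in Python; max_crossing plays the role of FIND-MAX-CROSSING-SUBARRAY, and the test array is the example used in Cormen, 3rd edition:

```python
def max_crossing(A, lo, mid, hi):
    # Best sum of any subarray straddling the midpoint, in linear time.
    left_sum, s, left = float("-inf"), 0, mid
    for i in range(mid, lo - 1, -1):
        s += A[i]
        if s > left_sum:
            left_sum, left = s, i
    right_sum, s, right = float("-inf"), 0, mid + 1
    for j in range(mid + 1, hi + 1):
        s += A[j]
        if s > right_sum:
            right_sum, right = s, j
    return left, right, left_sum + right_sum

def max_subarray(A, lo=0, hi=None):
    if hi is None:
        hi = len(A) - 1
    if lo == hi:
        return lo, hi, A[lo]          # base case n = 1
    mid = (lo + hi) // 2
    return max(max_subarray(A, lo, mid),        # entirely in the left half
               max_subarray(A, mid + 1, hi),    # entirely in the right half
               max_crossing(A, lo, mid, hi),    # straddles the midpoint
               key=lambda t: t[2])

A = [13, -3, -25, 20, -3, -16, -23, 18, 20, -7, 12, -5, -22, 15, -4, 7]
print(max_subarray(A))   # (7, 10, 43): A[7..10] sums to 18+20-7+12 = 43
```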
MCM: Optimal Matrix Chain Multiplication
An example of Dynamic Programming
Dynamic Programming vs. Recursion and Divide & Conquer
Matrix-chain Multiplication
To compute the number of scalar
multiplications necessary, we must know:
Algorithm to multiply two matrices
Matrix dimensions
Algorithm to Multiply 2 Matrices
Input: Matrices Apq and Bqr (with dimensions pq and qr)
Result: Matrix Cpr resulting from the product AB
MATRIX-MULTIPLY(Apq , Bqr)
1. for i 1 to p
2. for j 1 to r
3. C[i, j] 0
4. for k 1 to q
5. C[i, j] C[i, j] + A[i, k] B[k, j]
6. return C
Scalar multiplication in line 5 dominates time to compute
CNumber of scalar multiplications = pqr
MATRIX MULTIPLY
Complexity:
Matrix-Chain Multiplication Problem
The number of parenthesizations:
P(n) = 1                              if n = 1
P(n) = sum_{k=1}^{n-1} P(k) P(n-k)    if n >= 2
P(n) = C(n-1), where C(n) = binom(2n, n) / (n+1) = Omega(4^n / n^(3/2))
is the Catalan number.
Step 1: The structure of an optimal
parenthesization
Optimal
(( A1 A2 ... Ak )( Ak 1 Ak 2 ... An ))
Combine
Step 2: A recursive solution
Define m[i, j] = minimum number of scalar
multiplications needed to compute the
matrix A_{i..j} = A_i A_{i+1} ... A_j
Goal: find m[1, n]
m[i, j] = 0                                                      if i = j
m[i, j] = min over i <= k < j of { m[i,k] + m[k+1,j] + p_{i-1} p_k p_j }  if i < j
How to split the chain?
Basically, were checking different places to
split our matrices by checking different values
of k and seeing if they improve our current
minimum value.
Step 3. Compute Optimal Cost
Input: Array p[0..n] containing matrix dimensions and n
Result: Minimum-cost table m[1..n,1..n] and split table s[1..n,1..n]
Takes O(n^3) time; requires O(n^2) space.
MATRIX-CHAIN-ORDER(p[ ], n)
  for i <- 1 to n
    m[i, i] <- 0
  for l <- 2 to n
    for i <- 1 to n-l+1
      j <- i+l-1
      m[i, j] <- infinity
      for k <- i to j-1
        q <- m[i, k] + m[k+1, j] + p[i-1] p[k] p[j]
        if q < m[i, j]
          m[i, j] <- q
          s[i, j] <- k
  return m and s
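A Python sketch of MATRIX-CHAIN-ORDER plus a helper that reads the optimal parenthesization out of the split table s, checked on the six-matrix example used in these notes:

```python
import math

def matrix_chain_order(p):
    # p[i-1] x p[i] is the dimension of matrix A_i; returns (m, s) tables.
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for l in range(2, n + 1):               # chain length
        for i in range(1, n - l + 2):
            j = i + l - 1
            m[i][j] = math.inf
            for k in range(i, j):           # try every cut position
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j], s[i][j] = q, k
    return m, s

def parens(s, i, j):
    # Reconstruct the optimal parenthesization from the split table.
    if i == j:
        return "A%d" % i
    k = s[i][j]
    return "(" + parens(s, i, k) + " " + parens(s, k + 1, j) + ")"

m, s = matrix_chain_order([30, 35, 15, 5, 10, 20, 25])
print(m[1][6], parens(s, 1, 6))   # 15125 ((A1 (A2 A3)) ((A4 A5) A6))
```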
Constructing Optimal Solution
Our algorithm computes the minimum-
cost table m and the split table s
The optimal solution can be constructed
from the split table s
Each entry s[i, j ]=k shows where to split
(cut) the product chain Ai Ai+1 Aj for
the minimum cost.
Step 4. Find the optimal cuts
example:
((A1 (A2 A3)) ((A4 A5) A6))
Example: Given 6 matrices to multiply:
A1: 30 x 35   (p0 x p1)
A2: 35 x 15   (p1 x p2)
A3: 15 x 5    (p2 x p3)
A4: 5 x 10    (p3 x p4)
A5: 10 x 20   (p4 x p5)
A6: 20 x 25   (p5 x p6)
Find the cheapest way to
multiply these 6 matrices
Show how to multiply this matrix chain optimally.
Matrix  Dimension
A1      30 x 35
A2      35 x 15
A3      15 x 5
A4      5 x 10
A5      10 x 20
A6      20 x 25
Brute force solution:
Minimum cost 15125
Optimal parenthesization ((A1(A2A3))((A4 A5)A6))
The m and s table computed by
MATRIX-CHAIN-ORDER for n=6
((A1(A2A3))((A4 A5)A6))
Example, computation of m[2,5]
m[2,5] = 7125 = minimum of the 3 cuts {
  m[2,2]+m[3,5]+p1 p2 p5 = 0 + 2500 + 35*15*20 = 13000,
  m[2,3]+m[4,5]+p1 p3 p5 = 2625 + 1000 + 35*5*20 = 7125,
  m[2,4]+m[5,5]+p1 p4 p5 = 4375 + 0 + 35*10*20 = 11375 }
m is:
| 1| 2| 3| 4| 5| 6
-+-------+-------+-------+-------+-------+-------+
1: 0 15750 7875 9375 11875 15125
2: 0 2625 4375 7125 10500
3: 0 750 2500 5375
4: 0 1000 3500
5: 0 5000
6: 0
Example: continued A1[30x35] A2[35x15]
A3[15x5] A4[5x10] A5[10x20] A6[20x25]
s is:
| 1| 2| 3| 4| 5| 6
-+-------+-------+-------+-------+-------+-------+
1: 0 1 1 3 3 3
2: 0 2 3 3 3
3: 0 3 3 3
4: 0 4 5
5: 0 5
6: 0
Elements of dynamic programming
Optimal substructure
Overlapping subproblems
mcm 1 2 3 4
A1[1x2]
A2[2x3]
A3[3x4]
Solution
mcm 1 2 3 4; A1[1x2] A2[2x3] A3[3x4]
m is:
| 1| 2| 3
-+-------+-------+-------+
1: 0 6 18
2: 0 24
3: 0
s is:
| 1| 2| 3
-+-------+-------+-------+
1: 0 1 2
2: 0 2
3: 0
Minimum mult is 18 by:(( A1 A2) A3)
Exercise
mcm 3 4 1 2
A1[3x4] A2[4x1]
A3[1x2]
Exercise Solution
A1[3x4] A2[4x1] A3[1x2]
m| 1| 2| 3
-+-------+-------+-------+
1: 0 12 18
2: 0 8
3: 0
s| 1| 2| 3
-+-------+-------+-------+
1: 0 1 2
2: 0 2
3: 0
Minimum mult is 18 by:(( A1 A2) A3)
Exercise
mcm.exe 1 5 3 2 1
A1[1x5]* A2[5x3]* A3[3x2]*
A4[2x1]
Exercise Solution
A1[1x5]* A2[5x3]* A3[3x2]* A4[2x1]
----------------------------------
m| 1| 2| 3| 4
-+-------+-------+-------+-------+
1: 0 15 21 23
2: 0 30 21
3: 0 6
4: 0
----------------------------------
s| 1| 2| 3| 4
-+-------+-------+-------+-------+
1: 0 1 2 3
2: 0 2 2
3: 0 3
Minimum mult is 23 by:((( A1 A2) A3) A4)
in Graphs
Bipartite Matching
Suppose you work for shadi.com, and your job is to find a perfect
matching between 200 men and 200 women. If there is a perfect
matching, then you can show it to your boss. But suppose there
is no perfect matching: how can you convince your boss of this fact?
Perfect Matching
[Figure: a set S of vertices and its neighbourhood N(S). Exhibiting a set S with |N(S)| < |S| proves that no perfect matching exists (Hall's condition).]
Maximum Bipartite Matching
A bipartite graph is a graph
G=(V,E) in which V can be
divided into two parts L and R
such that every edge in E is
between a vertex in L and a
vertex in R.
e.g. vertices in L represent
skilled workers and vertices in
R represent jobs. An edge
connects workers to jobs they
can perform.
A matching in a graph is a subset M of E, such
that for all vertices v in V, at most one edge of
M is incident on v.
A maximum matching is a matching of
maximum size (maximum number of edges).
A Maximum Matching
No matching of cardinality 4 exists, because
only one of v and u can be matched.
In the workers-jobs example a maximum
matching provides work for as many
people as possible.
Solving the Maximum Bipartite
Matching Problem
Reduce the maximum bipartite matching
problem on graph G to the max-flow
problem on a corresponding flow network
G'.
Corresponding Flow Network
To form the corresponding flow network G' of the bipartite graph G:
Add a source vertex s and edges from s to L.
Direct the edges in E from L to R.
Add a sink vertex t and edges from R to t.
Assign a capacity of 1 to all edges.
Claim: max-flow in G corresponds to a max-bipartite-matching on G.
[Figure: G and its flow network G'; s has an edge to every vertex of L, every vertex of R has an edge to t, and all capacities are 1.]
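Because all capacities in G' are 1, running the augmenting-path method on G' amounts to Kuhn's augmenting-path algorithm directly on the bipartite graph. A sketch (function and variable names are my own):

```python
def max_bipartite_matching(adj, n_left, n_right):
    """adj[u] lists the right-side vertices reachable from left vertex u.
    Repeatedly tries to augment from each left vertex, as unit-capacity max flow."""
    match_r = [-1] * n_right                 # match_r[v] = left partner of v, or -1

    def try_augment(u, seen):
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                # v is free, or v's partner can be re-routed elsewhere
                if match_r[v] == -1 or try_augment(match_r[v], seen):
                    match_r[v] = u
                    return True
        return False

    matching = 0
    for u in range(n_left):
        if try_augment(u, [False] * n_right):
            matching += 1
    return matching
```

Each successful call corresponds to one augmenting path in G', so the result equals the max flow, i.e. the maximum matching size.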
Solving Bipartite Matching as Max Flow
Let G = (V, E) be a bipartite graph with vertex partition V = L ∪ R.
If M is a matching in G, then there is an integer-valued flow f in G'
with |f| = |M|; conversely, if f is an integer-valued flow in G', then
there is a matching M in G with |M| = |f|.
Does this mean that max |f| = max |M|?
Integrality Theorem
If the capacity function c takes on only integral
values, then:
1. The maximum flow f produced by the Ford-
Fulkerson method has the property that |f| is
integer-valued.
2. For all vertices u and v the value f(u,v) of the flow
is an integer.
Example
[Figure: a flow network with its minimum cut highlighted.]
Example
A department has n courses and m instructors.
Every instructor has a list of courses he or she
can teach.
Every instructor can teach at most 3 courses
during a year.
(Model as a flow network, giving each instructor's edge capacity 3.)
Applications of Graph Matching
What is a Matching?
Each boy is matched to at most one girl.
Each girl is matched to at most one boy.
1 2 3 4 5
Boys
Girls
A B C D E
What is a Good Matching?
A stable matching: They have no incentive to break up.
(unstable pairs - Willing to break up and switch).
A maximum matching: To maximize the number of pairs matched.
Stable Matching Problem
There are n boys and n girls.
For each boy, there is a preference list of the girls.
For each girl, there is a preference list of the boys.
Boys Girls
1: CBEAD A : 35214
2 : ABECD B : 52143
3 : DCBAE C : 43512
4 : ACDBE D : 12345
5 : ABDEC E : 23415
What is a stable matching?
Boys Girls
1: CBEAD A : 35214
2 : ABECD B : 52143
3 : DCBAE C : 43512
4 : ACDBE D : 12345
5 : ABDEC E : 23415
Unstable pair
Boy 4 prefers girl C more than girl B (his current partner).
Girl C prefers boy 4 more than boy 1 (her current partner).
Boys Girls
1: CBEAD A : 35214
2 : ABECD B : 52143
3 : DCBAE C : 43512
4 : ACDBE D : 12345
5 : ABDEC E : 23415
What is a stable matching?
A stable matching is a matching with no unstable pair, and every one is married.
Boys Girls
1: CBEAD A : 35214
2 : ABECD B : 52143
3 : DCBAE C : 43512
4 : ACDBE D : 12345
5 : ABDEC E : 23415
Does a stable matching always
exist? - Not clear
Can you find a stable matching in this case?
(continued after Stable Roommate problem)
Boys Girls
1: CBEAD A : 35214
2 : ABECD B : 52143
3 : DCBAE C : 43512
4 : ACDBE D : 12345
5 : ABDEC E : 23415
Stable Marriage,
Gale Shapley
Algorithm
Stable Matching
Can you construct an example where there is no stable matching? Not
clear.
Gale,Shapley [1962]:
There is always a stable matching in the stable matching problem.
Boys Girls
1: CBEAD A : 35214
2 : ABECD B : 52143
3 : DCBAE C : 43512
4 : ACDBE D : 12345
5 : ABDEC E : 23415
The Marrying Procedure
Morning: each boy proposes to his favourite girl.
Afternoon: each girl rejects all but her favourite proposer.
Evening: each rejected boy writes off that girl.
This procedure is then repeated until every boy proposes to a different girl.
(Example players: Angelina, Billy Bob, Brad.)
Day 1
Morning: boys propose to their favourite girl.
Afternoon: girls reject all but their favourite.
Evening: rejected boys write off that girl.
Boys Girls
1: CBEAD A : 35214
2 : ABECD B : 52143
3 : DCBAE C : 43512
4 : ACDBE D : 12345
5 : ABDEC E : 23415
Day 2
Boys Girls
1: CBEAD A : 35214
2 : ABECD B : 52143
3 : DCBAE C : 43512
4 : ACDBE D : 12345
5 : ABDEC E : 23415
Day 3
Boys Girls
1: CBEAD A : 35214
2 : ABECD B : 52143
3 : DCBAE C : 43512
4 : ACDBE D : 12345
5 : ABDEC E : 23415
Day 4
Morning: boys propose to their favourite girl.
Afternoon: no girl rejects anyone. OKAY, marriage day!
Boys Girls
1: CBEAD A : 35214
2 : ABECD B : 52143
3 : DCBAE C : 43512
4 : ACDBE D : 12345
5 : ABDEC E : 23415
Proof of Gale-Shapley Theorem
Gale, Shapley [1962]:
This procedure always finds a stable matching in the stable marriage problem.
We must show:
1. The procedure terminates.
2. Everyone is married.
3. There are no unstable pairs.
Step 1 of the Proof
Claim 1. The procedure terminates in at most n^2 days
(with n boys and n girls).
Every day at least one rejected boy writes off a girl. Since there are
n boys and each list has at most n girls, the procedure lasts for at
most n^2 days.
Step 2 of the Proof
Claim 2. Everyone is married when the procedure stops.
Proof: by contradiction. Suppose some boy B is not married at the end.
A girl only rejects a boy if she already has a more preferable partner,
so once a girl receives any proposal she stays matched. For B to be
rejected by everyone, all n girls must be married (each to one boy) at
the end, while B is not; that leaves n girls married to at most n-1
boys, a contradiction.
Gale,Shapley [1962]:
There is always a stable matching in the stable marriage problem.
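The marrying procedure can be sketched directly (a hedged illustration; function and variable names are my own, and the preference lists are the ones from the slides):

```python
def gale_shapley(boy_pref, girl_pref):
    """Boy-proposing Gale-Shapley: each free boy proposes down his list;
    a girl keeps her best proposer so far. At most n^2 proposals occur."""
    rank = {g: {b: r for r, b in enumerate(prefs)}
            for g, prefs in girl_pref.items()}
    nxt = {b: 0 for b in boy_pref}       # index of the next girl b proposes to
    fiance = {}                          # girl -> boy currently engaged to her
    free = list(boy_pref)
    while free:
        b = free.pop()
        g = boy_pref[b][nxt[b]]
        nxt[b] += 1
        if g not in fiance:
            fiance[g] = b
        elif rank[g][b] < rank[g][fiance[g]]:
            free.append(fiance[g])       # she trades up; her old partner is free
            fiance[g] = b
        else:
            free.append(b)               # rejected; he will try his next choice
    return {b: g for g, b in fiance.items()}

def is_stable(match, boy_pref, girl_pref):
    """True iff no pair (b, g') where b prefers g' to his partner and vice versa."""
    partner_of_girl = {g: b for b, g in match.items()}
    for b, g in match.items():
        for g2 in boy_pref[b][:boy_pref[b].index(g)]:   # girls b prefers to g
            b2 = partner_of_girl[g2]
            if girl_pref[g2].index(b) < girl_pref[g2].index(b2):
                return False
    return True
```

Running it on the slides' preference lists produces a perfect matching with no unstable pair.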
Maximum
Network Flow
Network Flows
Important problem
in the optimal
management of
resources.
Types of Networks
Internet
Telephone
Cell
Roads, Highways
Railways
Electrical Power
Water
Sewer
Gas
Any more?
Maximum Flow Problem
How can we maximize the flow in a network from
a source or set of sources to a destination or set
of destinations?
The problem reportedly rose to prominence in
relation to the rail networks of the Soviet Union,
during the 1950's.
The US wanted to know how quickly the Soviet
Union could get supplies through its rail network
to its satellite states in Eastern Europe.
Source: lbackstrom, The Importance of Algorithms, at www.topcoder.com
Dual of Max Flow is Min cut
In addition, the US wanted to know which rails it
could destroy most easily to cut off the satellite
states from the rest of the Soviet Union.
It turned out that these two problems were
closely related, and that solving the max flow
problem also solves the min cut problem of
figuring out the cheapest way to cut off the
Soviet Union from its neighbours.
[Figure: a small network with source s, sink t and intermediate vertices u, v; each edge is labelled with its capacity c(u,v).]
Network Flow Problems
What is the max flow in these Graphs?
[Figure: four small networks G1-G4 with edge capacities; G3 contains a forward edge and G4 a back edge.]
Network Flow Solutions
What is the max flow in these Graphs?
[Figure: the same four networks with their maximum flows marked.]
G3: maxflow = 2.
G4: maxflow = min(1,3) + min(2,4) = 1 + 2 = 3.
The General Maxflow Problem
Use a graph to model material that flows through pipes.
Each edge represents one pipe, and has a capacity, which
is an upper bound on the flow rate, in some units / time.
Think of edges as pipes of different sizes.
Want to compute max rate that we can ship material from a
given source to a given sink.
Example:
What is a Flow Network?
Each edge (u,v) has a nonnegative capacity
c(u,v).
If (u,v) is not in E, let c(u,v)=0.
We have a source s, and a sink t.
Assume that every vertex v in V is on some path
from s to t.
Here c(s,v1)=16; c(v1,s)=0; c(v2,v3)=0
We can write it as a matrix
Constraints on Flow
For each edge (u,v), the flow f(u,v) is a
real-valued function that must satisfy 3
conditions:
Capacity constraint: for all u,v in V: f(u,v) <= c(u,v)
Skew symmetry: for all u,v in V: f(u,v) = -f(v,u)
Flow conservation: for all u in V - {s,t}: sum over v in V of f(u,v) = 0
Notes:
The skew symmetry condition implies that f(u,u)=0.
The value of a flow: |f| = sum over v in V of f(s,v) = sum over v in V of f(v,t)
|f| = f(s, v1) + f(s, v2) + f(s, v3) + f(s, v4) + f(s, t)
= 11 + 8 + 0 + 0 + 0 = 19
|f|= f(s, t) + f(v1, t) + f(v2, t) + f(v3, t) + f(v4, t)
=0 + 0 + 0 + 15 + 4 = 19
A flow in a network
Residual capacity: c_f(u,v) = c(u,v) - f(u,v)
[Figure: a flow network and its residual network.]
Augmenting Paths
An augmenting path p is a simple path
from s to t on the residual network.
We can put more flow from s to t
through p.
We call the maximum capacity by which
we can increase the flow on p the
residual capacity of p.
c_f(p) = min{ c_f(u,v) : (u,v) is on p }
Augmenting Paths
[Figure: a network, its residual network, and an augmenting path whose residual capacity is 4.]
Computing Max Flow
Classic Method:
Identify an augmenting path
Increase flow along that path
Repeat until no more paths
Ford-Fulkerson Method
Ford-Fulkerson-Method(G, s, t) {
    f = 0                  // initial flow in G, from s to t
    while ( there exists an augmenting path p
            with spare capacity h ) {
        f = f + h          // augment flow along p
    }
    // No more paths: f is the maxflow in G.
    return f
}
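A runnable sketch of the method above (my assumptions: capacities in an adjacency matrix, a DFS to find each augmenting path; names are my own):

```python
def ford_fulkerson(capacity, s, t):
    """Repeatedly find an augmenting path by DFS in the residual network
    and push its bottleneck flow, until no s-t path remains."""
    n = len(capacity)
    res = [row[:] for row in capacity]       # residual capacities (input untouched)

    def dfs(u, limit, seen):
        if u == t:
            return limit
        seen[u] = True
        for v in range(n):
            if not seen[v] and res[u][v] > 0:
                pushed = dfs(v, min(limit, res[u][v]), seen)
                if pushed > 0:
                    res[u][v] -= pushed      # use forward residual capacity
                    res[v][u] += pushed      # add back-edge for later undo
                    return pushed
        return 0

    flow = 0
    while True:
        pushed = dfs(s, float("inf"), [False] * n)
        if pushed == 0:                      # no more augmenting paths
            return flow
        flow += pushed
```

With integer capacities every iteration raises |f| by at least 1, matching the analysis below.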
Example
[Figure: flow and residual networks after each augmentation.]
No more augmenting paths: the max flow is attained.
Flow across a cut (S,T): f(S,T) = sum over u in S, v in T of f(u,v)
f(S,T) = 12 - 4 + 11 = 19
The Capacity of a Cut (S,T)
c(S,T) = sum over u in S, v in T of c(u,v)
c(S,T) = 12 + 0 + 14 = 26
Example: capacity of a cut
Capacity of the cut
= maximum possible flow through the cut
= 12 + 7 + 4 = 23
[Figure: the cut drawn across the network, separating the source side from the sink side.]
Max-Flow Min-Cut Theorem
[Figure sequence: starting from a flow of value 4, repeatedly find an augmenting path in the residual network and augment, until the resulting flow has value 23 and no augmenting path remains: maxflow = 23.]
Analysis
If capacities are all integer, then each
augmenting path raises |f| by 1.
If max flow is f*, then need |f*| iterations
time is O(E|f*|).
Note that this running time is not polynomial in
input size. It depends on |f*|, which is not a
function of |V| or |E|.
If capacities are rational, can scale them to
integers.
If capacities are irrational, FORD-FULKERSON might never
terminate! But we can't use irrational capacities as
inputs to digital computers anyway.
Polynomial time algorithms
Definition: A polynomial-time algorithm is an
algorithm that runs in time polynomial in n,
where n is the number of bits of the input.
|f*| = 2,000,000
Run Ford-Fulkerson on this example.
[Figure: two high-capacity paths from s to t joined by a capacity-1 cross edge. If every augmenting path is chosen through the cross edge, each augmentation adds only 1 unit of flow, so 2,000,000 iterations are needed.]
Implement Ford-Fulkerson by
always choosing the shortest
possible augmenting path, i.e.,
the one with fewest possible
edges.
The Edmonds-Karp Algorithm
A small fix to the Ford-Fulkerson algorithm
makes it work in polynomial time.
Select the augmenting path using breadth-
first search on residual network.
The augmenting path p is the shortest path
from s to t in the residual network (treating
all edge weights as 1).
Runs in time O(V E^2).
Complexity of Edmonds-Karp
Each iteration of the while loop can still be done
in time O(|E|).
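The fix above can be sketched by swapping the DFS for a breadth-first search, so each augmenting path has the fewest possible edges (an illustration, not the slides' code; names are my own):

```python
from collections import deque

def edmonds_karp(capacity, s, t):
    """Ford-Fulkerson where each augmenting path is a shortest path (by edge
    count) in the residual network, found by BFS."""
    n = len(capacity)
    res = [row[:] for row in capacity]        # residual capacities
    flow = 0
    while True:
        parent = [-1] * n                     # BFS tree; also marks visited
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and res[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:                   # no augmenting path left
            return flow
        bottleneck = float("inf")             # smallest residual capacity on path
        v = t
        while v != s:
            bottleneck = min(bottleneck, res[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:                         # push the bottleneck along the path
            res[parent[v]][v] -= bottleneck
            res[v][parent[v]] += bottleneck
            v = parent[v]
        flow += bottleneck
```

On the 2,000,000-iteration example above, BFS avoids the capacity-1 cross edge and finishes in two augmentations.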
Ford-Fulkerson Algorithm: worked example
[Figure sequence: a flow network G (each edge labelled flow/capacity) and its residual network Gf after every augmentation. The flow value grows 0 → 8 → 10 → 16 → 18 → 19; at value 19 no augmenting path remains, so the max flow is 19.]
(naive algorithm)
T: a b a c a a b
P: a b a c a b
[Figure: P aligned under T at successive shifts; characters are compared until a mismatch.]
Uses
Text search using text-editor/word-
processor
Web search
Packet filtering
DNA sequence matching
C library function: strstr(T, P)
String Searching
String searching problem is to find the
location of a specific text pattern within a
larger body of text (e.g., a sentence, a
paragraph, a book, etc.).
Problem: given a string T and a pattern P,
find if and where P occurs in T.
Example: Find P="n th" in T=the rain in spain
stays mainly on the plain
We want it fast, space-saving, and efficient.
Substring, prefix, suffix
Given a string S of size m: S[0..m-1].
A substring S[i..j] of S is the string
fragment between indexes i and j, inclusive.
A prefix of S is a substring S[0..i];
w ⊏ x means w is a prefix of x,
e.g. "aba" is a prefix of "abaz".
A suffix of S is a substring S[i..m-1];
w ⊐ x means w is a suffix of x,
e.g. "abc" is a suffix of "zzabc".
Here i and j lie between 0 and m-1, inclusive.
Examples of prefixes and suffixes: for S = "andrew" (indexes 0..5),
"and" = S[0..2] is a prefix and "rew" = S[3..5] is a suffix.
Naive Algorithm
RK: Rabin-Karp Algorithm
String Matching using Finite Automata
KMP: Knuth-Morris-Pratt Algorithm
BM: Boyer Moore
KMP: Knuth-Morris-Pratt Algorithm
KMP Algorithm has two parts: (1)
PREFIX function (2) Matcher.
Information stored in prefix function to
speed up the matcher.
Running time:
Creating PREFIX function takes O(m)
MATCHER takes O(m+n) (instead of O(m n)).
Boyer-Moore Algorithm
Published in 1977
The longer the pattern is, the faster it works
Starts from the end of pattern, while KMP starts
from the beginning
Works best for character string, while KMP works
best for binary string
KMP and Boyer-Moore
- Preprocessing existing patterns
- Searching patterns in input strings
Automata and Patterns:
Regular Expressions
notation for describing a set of strings, possibly of
infinite size
ε denotes the empty string.
a + c denotes the set {a, c} (a or c).
a* denotes the set {ε, a, aa, aaa, ...}: zero or more a's.
Examples
(a+b)* all the strings from the alphabet {a,b}
b*(ab*a)*b* strings with an even number of a's
(a+b)*sun(a+b)* strings containing the pattern sun
(a+b)(a+b)(a+b)a 4-letter strings ending in a
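As an aside, these patterns can be tried in Python's re module, whose syntax writes the union a+b as a|b (the variable names here are my own):

```python
import re

# Strings over {a,b} with an even number of a's: b*(ab*a)*b*
even_as = re.compile(r"b*(ab*a)*b*")

# (a+b)*sun(a+b)*: strings over {a,b} containing the pattern "sun"
contains_sun = re.compile(r"[ab]*sun[ab]*")
```

fullmatch checks that the whole string is in the language, mirroring the set semantics above.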
Alphabet
Elements of T and P are characters from
a finite alphabet Σ.
The pattern is in an array P[1..m].
The text is in an array T[1..n].
E.g., Σ = {0,1} or Σ = {a, b, ..., z}.
Usually T and P are called strings of
characters.
(ε, epsilon or lambda, is the empty string.)
String Matching
Given: Two strings P[1..m] and T[1..n] over alphabet .
Find all occurrences of P[1..m] in T[1..n]
Example: = {a, b, c}
text T a b c a b a a b c a a b a c
s=3 a b a a
pattern P
Terminology:
P occurs with shift s in T
P occurs beginning at position s+1.
s is a valid shift.
Goal: Find all valid shifts
Example 1, a valid shift is a match
position:  1 2 3 4 5 6 7 8 9 10 11 12 13 14
text T:    a b c a b a a b c a  b  a  a  c
pattern P: a b a a
P occurs with shift s=3 and with shift s=9.
Naive Brute-Force String matching
Naive(T, P)
    n := length[T];
    m := length[P];
    for s := 0 to n-m do
        if P[1..m] = T[s+1..s+m] then
            print "pattern occurs with shift" s
    done
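In Python the brute-force matcher is a one-liner (0-based shifts; a sketch, names my own):

```python
def naive_match(T, P):
    """Return all valid shifts s where P occurs in T starting at index s (0-based).
    Tries every alignment and compares the full window, as in the pseudocode."""
    n, m = len(T), len(P)
    return [s for s in range(n - m + 1) if T[s:s + m] == P]
```

For example, searching "abaa" in "abcabaabcabaac" finds the shifts 3 and 9 of the example above.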
Example: T = a c a a b c, P = a a b
s=0: mismatch at the second character (c ≠ a).
s=1: mismatch at the first character.
s=2: all three characters match. Match!
s=3: mismatch at the second character.
Worst-case Analysis
Brute-force pattern matching can take
O(m n) time in the worst case.
But most searches of ordinary text are
fast: O(m+n).
There are m comparisons for each shift,
over n-m+1 shifts: the worst-case
time is Θ(m(n-m+1)).
Why is naive matching slow?
The naive method is inefficient because
information from a mismatch is not used
to jump up to m steps forward.
The brute-force algorithm is fast when
the alphabet of the text is large,
e.g. {A..Z, a..z, 0..9, ...}.
It is slower when the alphabet is small,
e.g. binary, image, DNA {G,C,A,T}.
Worst and Average case examples
Example of a worst case:
P: "aaah"
T: "aaaaaaaaaaaaaaaaaaaaaaaaaah"
Rabin-Karp
The Rabin-Karp string searching algorithm calculates a
hash value for the pattern, and a hash for each M-
character subsequence of text to be compared.
If the hash values are unequal, the algorithm
calculates the hash value for the next M-character
sequence.
If the hash values are equal, the algorithm does a
brute-force comparison between the pattern and the
M-character sequence.
In this way, there is only one comparison per text
subsequence, and Brute Force is only needed when hash
values match.
Rabin-Karp Example
Hash value of AAAAA is 37
Hash value of AAAAH is 100
Modular Equivalence
if (a mod q) = (b mod q), we say:
a is equivalent to b, modulo q, and write
a ≡ b (mod q), i.e. a and b have the same
remainder when divided by q.
E.g. 23 ≡ 37 ≡ -19 (mod 7).
The mod function (%) has nice properties:
1. [(x % q) + (y % q)] % q = (x + y) % q
2. [(x % q) * (y % q)] % q = (x * y) % q
3. (x % q) % q = x % q
Hashing a string
Example: pattern P = 31415, modulus q = 13; 31415 mod 13 = 7.
text T:               2 3 1 4 1 5 2 6 7 3 9 9 2 1
hash of each window:  1 7 8 4 5 10 11 7 9 11
(each 5-digit window taken mod 13)
The second window, 31415, hashes to 7: a valid match.
The eighth window, 67399, also hashes to 7: a spurious hit.
Rabin Karp Algorithm
RK(T, P, d, q)
    // T: text, P: pattern, d: radix (e.g. 10),
    // q: modulus (e.g. 13), h = d^(m-1) mod q
    n := length[T];
    m := length[P];
    h := d^(m-1) mod q;
    p := 0;
    t_0 := 0;
    for i := 1 to m do                  // compute initial hashes
        p   := (d*p   + P[i]) mod q;
        t_0 := (d*t_0 + T[i]) mod q
    od;
    for s := 0 to n-m do
        if p = t_s then
            // avoid spurious hits by an explicit check
            // whenever there is a potential match
            if P[1..m] = T[s+1..s+m] then
                print "pattern occurs with shift" s
            fi
        fi;
        if s < n-m then
            // compute the next hash incrementally as we go right
            t_{s+1} := (d*(t_s - T[s+1]*h) + T[s+m+1]) mod q
        fi
    od
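A runnable sketch of the same algorithm (my assumptions: radix d=256 over character codes and prime modulus q=101 instead of the slides' d=10, q=13; names my own):

```python
def rabin_karp(T, P, d=256, q=101):
    """Rolling-hash search; returns all 0-based shifts where P occurs in T."""
    n, m = len(T), len(P)
    if m > n:
        return []
    h = pow(d, m - 1, q)                 # d^(m-1) mod q
    p = t = 0
    for i in range(m):                   # initial hashes of P and T[0..m-1]
        p = (d * p + ord(P[i])) % q
        t = (d * t + ord(T[i])) % q
    out = []
    for s in range(n - m + 1):
        if p == t and T[s:s + m] == P:   # explicit check rules out spurious hits
            out.append(s)
        if s < n - m:                    # roll the hash one character right
            t = (d * (t - ord(T[s]) * h) + ord(T[s + m])) % q
    return out
```

Python's % already yields a non-negative remainder, so no extra correction is needed after subtracting the outgoing character.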
Rabin-Karp Complexity
O(n m) in the worst case, if every shift
has to be verified.
O(n + m) on average, using a large
prime modulus in the hash function
(fewer collisions).
where
n = size(Text),
m = size(Pattern).
Homework: Write Rabin Karp
String Search
Write a C function rkfind(s,t) to search
string s in t.
Let hash(str) be the sum of chars in str mod
256.
In the main function, check these:
assert(rkfind(s="ab", t="xabc") == t+1);
assert(rkfind("ab", "baa")==NULL);
Homework: Rabin Karp String
search
Implement Rabin karp string search in C.
Use xor(chars of the string) as the hash
function.
Exact String matching: a brief
history
Naive algorithm
Knuth-Morris-Pratt 1977.
Boyer-Moore 1977.
Suffix Trees: Linear time.
Weiner 1973 - Right to left
McCreight 1978 - Left to right.
Ukkonen 1995 - Online, Left to right.
Suffix Trees Uses
Suitable for DNA searching because:
small alphabet (4 chars),
size of T = m is huge (billion)
many patterns to search on T.
Linear time algorithms (large constants).
Music database
Finding repeats in DNA
human chromosome 3
the first 48 999 930 bases
31 min cpu time (8 processors, 4 GB)
ttagggtacatgtgcacaacgtgcaggtttgttacatatgtatacacgtgccatgatggtgtgctgcacccattaactcgtcatttagcgttaggtatatctccgaat...
[The remaining lines of the sequence excerpt are omitted.]
Suffix Trees
Specialized form of keyword trees
New ideas
preprocess text T, not pattern P
O(m) preprocess time
O(n+k) search time
k is number of occurrences of P in T
edges can have string labels
Keyword Tree example
P = {poet, pope, popo, too}
[Figure: the keyword tree; from the root, the edge paths spell poet, pope, popo and too, ending at leaves 1, 2, 3 and 4.]
Suffix Tree
Take any m character string S like "xabxac"
Set of keywords is the set of suffixes of S
{xabxac, abxac, bxac, xac, ac, c}
Suffix tree T is essentially the keyword tree for
this set of suffixes of S,
Changes:
Assumption: no suffix is a prefix of another suffix (can
be a substring, but not a prefix)
Assure this by adding a character $ to end of S
Internal nodes except root must have at least 2
children
edges can be labeled with strings
Example: Suffix tree for "xabxac"
{xabxac, abxac, bxac, xac, ac, c}
[Figure: the suffix tree; leaves 1..6 correspond to the six suffixes.]
Notation
The label of a path from r (root) to a node v is
the concatenation of the labels on the edges
from r to v.
The label of a node v, L(v), is the path label
from r to v.
The string-depth of v is the number of characters
in v's label L(v).
Comment: In constructing suffix trees, we
will need to be able to split edges in
the middle
Suffix trees for exact matching
Build suffix tree T for text.
Match pattern P against tree starting at
root until
Case 1, P is completely matched
Every leaf below this match point is the starting
location of P in T
Case 2: No match is possible
P does not occur in T
Example: suffix tree match(P, T)
T = xabxac
suffixes of T = {xabxac, abxac, bxac, xac, ac, c}
Pattern P1 : xa .. matches at 1, 4.
Pattern P2 : xb .. no matches.
[Figure: the suffix tree of T, as above; the subtree under "xa" has leaves 1 and 4.]
Running Time Analysis
Build suffix tree:
Ukkonen's Linear time algorithm: O(m)
This is preprocessing of data.
Search time:
O(n+k) where k is the number of occurrences
of P in T
O(n) to find match point if it exists
O(k) to find all leaves below match point
Building trees: O(m2) algorithm
Initialize
One edge for the entire string S[1..m]$
For i = 2 to m
Add suffix S[i..m] to suffix tree
Find match point for string S[i..m] in current tree
If in middle of edge, create new node w
Add remainder of S[i..m] as edge label to suffix i leaf
Running Time
O(m-i) time to add suffix S[i..m]
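As an illustration of the O(m^2) idea, here is a naive suffix *trie* (the uncompressed cousin of the suffix tree: edge labels are single characters, so it can use O(m^2) space). The dict representation and all names are my own choices:

```python
def build_suffix_trie(text):
    """Insert every suffix of text+'$' into a trie of nested dicts.
    The node where a suffix ends stores its start index under the key 'leaf'."""
    text = text + "$"                      # terminator: no suffix prefixes another
    root = {}
    for i in range(len(text)):
        node = root
        for c in text[i:]:
            node = node.setdefault(c, {})
        node["leaf"] = i
    return root

def find_all(root, pattern):
    """All start positions of pattern: walk the pattern from the root,
    then collect every leaf index in the subtree below the match point."""
    node = root
    for c in pattern:
        if c not in node:
            return []                      # case 2: no match is possible
        node = node[c]
    out, stack = [], [node]                # case 1: every leaf below matches
    while stack:
        n = stack.pop()
        for k, v in n.items():
            if k == "leaf":
                out.append(v)
            else:
                stack.append(v)
    return sorted(out)
```

On T = "xabxac", searching "xa" returns the 0-based positions 0 and 3, matching the 1-based positions 1 and 4 in the example above.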
Exercises: build suffix trees
1. Build a suffix tree for: "abbcbab#"
2. Search for "bcb" in the tree.
Suffix Tree - Solution
abbcbab#
[Figure: the suffix tree of "abbcbab#"; leaves 1..7 mark the suffix start positions, plus a leaf for "#".]
Suffix Trees - searching a pattern
Pattern: bcb
[Figure: following b, then cb from the root leads into the subtree with leaf 3, so "bcb" occurs at position 3.]
Suffix Trees - naive construction
[Figure: the suffixes abbcbab#, bbcbab#, bcbab#, ... are inserted one at a time, splitting edges as needed.]
Drawbacks
Suffix trees consume a lot of space:
it is O(m), but the constant is quite big.
Notice that if we indeed want to traverse
an edge in O(1) time then we need an
array of pointers of size |Σ| in each node
(practical when the alphabet is small, e.g. 4
characters in DNA and RNA).
Ukkonen Algorithm
Ukkonen Algorithm
The Ukkonen algorithm [1995] is a fast, well-performing
algorithm for building a suffix tree in linear time.
The basic idea is constructing iteratively the implicit suffix trees for S[1..i] in the
following way:
Construct tree_1
for i = 1 to m-1 // do phase i+1
    for j = 1 to i+1 // do extension j
        1. Find the end of the path from the root with
           label S[j..i] in the current tree.
        2. Extend the path by adding character S(i+1),
           so that S[j..i+1] is in the tree.
Ukkonen: 3 extension rules
Each extension follows one of three rules, where β = S[j..i]:
Rule 1: the path β ends at a leaf; append S(i+1) to the label of the leaf edge.
Rule 2: the path β ends and no path out of it continues with S(i+1); add a new leaf edge labelled S(i+1), creating a new internal node if β ends in the middle of an edge.
Rule 3: some path from the end of β already continues with S(i+1); do nothing, since S[j..i+1] is already in the tree.
[Figure: a partial tree with a suffix link from an internal node v to the node S(v).]
Suffix Trees - Ukkonen Algorithm - III
With suffix links, we can speed up the construction of the ST.
Once rule 3 is used in a phase, all successive extensions would also use it, hence
we can ignore them.
If in phase i rules 1 and 2 are applied in the first j_i extensions, then in phase i+1 the
first j_i extensions can be made in constant time, simply adding the character
S(i+2) at the end of the paths to the first j_i leaves (we use a global variable e
to do this). Hence the extensions are computed explicitly only from j_i+1 on, reducing
their global number to 2m.
Ukkonen: construct
suffix tree in
linear time
Ukkonens linear time construction
[Figure sequence: phases of Ukkonen's construction on ACTAATC. Each phase extends every suffix by one character: the leaf-edge labels grow (A, AC, ACT, ...), edges are split where suffixes diverge, and each new leaf is numbered by the start position of its suffix.]
Phases & extensions
Phase i is when we add character i
[Figure sequence: phase 6 (adding T): the leaf edges of the tree for ACTAA are extended and a new leaf 5 is created.]
Extension rules
Rule 1: The suffix ends at a leaf, you add a
character on the edge entering the leaf
[Figure sequence: phase 7 (adding C): rule 1 extends the existing leaf edges and a new leaf 6 is created.]
Skip forward..
[Figure sequence: the remaining phases on ACTAATCACTG. Rule 1 extensions lengthen the existing leaf edges; rule 2 creates the new leaves 7..11; the final tree is the suffix tree of ACTAATCACTG.]
Observations
Within a phase, extensions apply rule 1 first, then rule 2, then rule 3.
If we apply rule 3 in an extension, then we will apply rule 3 in all
subsequent extensions of that phase.
In terms of the rules that we apply, a phase looks like:
111111122223333
Suffix Links
A suffix link points from the internal node whose path label is xα
(x a single character, α a possibly empty string) to the internal node
whose path label is α, if there is such a node.
Is there always such a node? During an extension we create the new
internal node if necessary, add the suffix, and install a suffix link
if necessary.
[Figure: nodes with path labels xα and α joined by a suffix link.]
Analysis
Handling all extensions of rule 1 and all extensions of rule 3
in a phase takes O(1) time, so O(n) in total.
So why is it a linear time algorithm?
How much can the depth change when we traverse a suffix link?
It can decrease by at most 1.
Punch line: each time we go up or traverse a suffix link, the
depth decreases by at most 1.
Why LCA?
The LCA of two leaves represents the longest common prefix
(LCP) of these 2 suffixes.
[Figure: a generalized suffix tree over end markers $ and #, where
the LCA of two leaves spells out the LCP of their suffixes.]
Generalized Suffix Trees (GST)
A GST is simply a ST for a set of strings, each one ending with a different
marker. The leaves carry two numbers, one identifying the string and the
other identifying the position inside the string.
[Figure: the GST of S1 = abbc$ and S2 = babc#, with leaves labeled
(1,1)-(1,4) for S1 and (2,1)-(2,4) for S2.]
Exercise: generalized suffix tree
1. Draw the GST for s1=abab, s2=aab
GST Solution
The generalized suffix tree of s1=abab and s2=aab is built from the
suffixes { abab$, bab$, ab$, b$ } and { aab#, ab#, b# }.
[Figure: the GST of abab$ and aab#, with leaves numbered by the
start position of each suffix in its string.]
ST applications
Searching for substrings
LCS
Hamming distance
Longest palindrome
Longest common substring
Let S1 and S2 be two strings over the same alphabet. The
Longest Common Substring problem is to find the longest
substring of S1 that is also a substring of S2.
[Figure: the GST of abab$ and aab# again.]
LCS solution
Every node with a leaf descendant from string s1 and a leaf
descendant from string s2 represents a maximal common
substring, and vice versa.
Find such a node with largest string depth.
[Figure: GST(s1=abab, s2=aab), with the deepest such node marked.]
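The GST method above finds the longest common substring in linear time. As a hand-checkable cross-check, here is a minimal Python sketch of the same problem using the classic O(|S1|·|S2|) dynamic program instead of a suffix tree (the function name is our own):

```python
def longest_common_substring(s1, s2):
    """dp[i][j] = length of the longest common suffix of s1[:i] and
    s2[:j]; the answer is the maximum over all (i, j)."""
    best, end = 0, 0
    prev = [0] * (len(s2) + 1)
    for i in range(1, len(s1) + 1):
        cur = [0] * (len(s2) + 1)
        for j in range(1, len(s2) + 1):
            if s1[i - 1] == s2[j - 1]:
                cur[j] = prev[j - 1] + 1
                if cur[j] > best:
                    best, end = cur[j], i   # remember where it ends in s1
        prev = cur
    return s1[end - best:end]

print(longest_common_substring("abab", "aab"))  # -> ab
```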
Finding maximal palindromes
Palindromes, e.g.: Dad, Radar, Malayalam
Q. Find all maximal palindromes in a string s.
Let s = cbaaba and m = |s|.
The maximal palindrome with center between i-1 and i is the LCP
of the suffix at position i of s and the suffix at position m-i+1
of reverse(s).
Maximal palindromes algorithm
Algorithm:
  r = reverse(s)
  m = len(s)
  build GST(s, r)
  for i = 1..m
      find LCA(s[i..], r[m-i+1..])
[Figure: the GST of s = cbaaba and its reverse r, on which the LCA
queries are answered in constant time.]
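The GST + LCA method above answers each center query in constant time. As a quadratic cross-check, here is a Python sketch that expands around each center directly (names are our own):

```python
def maximal_palindromes(s):
    """Expand around each of the 2*len(s)-1 centers (characters and
    gaps); O(n^2) worst case vs. the O(n) suffix-tree method."""
    out = []
    n = len(s)
    for center in range(2 * n - 1):
        lo = center // 2
        hi = lo + center % 2          # odd centers sit between characters
        while lo >= 0 and hi < n and s[lo] == s[hi]:
            lo, hi = lo - 1, hi + 1
        if hi - lo > 1:               # a non-empty palindrome was found
            out.append(s[lo + 1:hi])
    return out

print(max(maximal_palindromes("cbaaba"), key=len))  # -> baab
```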
Hamming and Edit Distances
Hamming Distance: two strings of the same length are aligned and the
distance is the number of mismatches between them.
abbcdbaabbc
H=6
abbdcbbbaac
Edit Distance: the minimum number of insertions, deletions and
substitutions needed to transform one string into another.
abbcdbaabbc
            E=3
cbcdbaabc
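Both distances can be sketched directly; the edit-distance function below is the standard dynamic program with a rolling row (function names are our own):

```python
def hamming(a, b):
    """Mismatch count between two equal-length strings."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

def edit_distance(a, b):
    """Minimum insertions + deletions + substitutions, O(|a|*|b|)."""
    m, n = len(a), len(b)
    d = list(range(n + 1))            # row for the empty prefix of a
    for i in range(1, m + 1):
        prev, d[0] = d[0], i          # prev holds the diagonal cell
        for j in range(1, n + 1):
            cur = d[j]
            d[j] = min(d[j] + 1,                       # delete a[i-1]
                       d[j - 1] + 1,                   # insert b[j-1]
                       prev + (a[i - 1] != b[j - 1]))  # substitute
            prev = cur
    return d[n]

print(hamming("abbcdbaabbc", "abbdcbbbaac"))        # -> 6
print(edit_distance("abbcdbaabbc", "cbcdbaabc"))    # -> 3
```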
The k-mismatches problem
We have a text T and a pattern P, and we want to find occurrences of P in T,
allowing a maximum of k mismatches, i.e. we want to find all the substrings T'
of T such that H(P,T') <= k.
We can use suffix trees, but they do not perform well anymore: the algorithm
scans all the paths to leaves, keeping track of errors, and abandons a path if
this number becomes greater than k.
The algorithm is made faster by using constant-time LCP queries (via LCA).
For every suffix of T, the pieces of agreement between the suffix and P are
matched together until P is exhausted or the errors exceed k. Every piece is
found in constant time. The complexity of the resulting algorithm is O(k|T|).
T = aaacaabaaaaa, P = aabaab: an occurrence is found
at position 2 of T, with one error (the c).
Inexact Matching
In biology, inexact matching is very important:
Similarity in DNA sequences often implies that they have the same
biological function (the converse is not true);
Mutations and transcription errors make exact comparison not very
useful.
There are a lot of algorithms that deal with inexact matching (with respect
to edit distance), and they are mainly based on dynamic programming or
on automata. Suffix trees are used as a secondary tool in some of them,
because their structure is ill-suited to deal with insertions and deletions,
and even with substitutions.
The main efforts are spent in speeding up the average behaviour of
algorithms, and this is justified by the fact that random
sequences often fall in these cases (and DNA sequences have a high
degree of randomness).
Suffix array
We lose some of the functionality but we
save space.
Let s = abab
Sort the suffixes lexicographically:
ab, abab, b, bab
The suffix array gives the indices of the
suffixes in sorted order:
3 1 4 2
How do we build it ?
Build a suffix tree
Traverse the tree in DFS, lexicographically
picking edges outgoing from each node
and fill the suffix array.
O(n) time
How do we search for a pattern ?
If P occurs in T then all its occurrences are
consecutive in the suffix array.
Do a binary search on the suffix array
Takes O(m log n) time
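The build-and-search steps above, sketched in Python. This illustrative version sorts suffixes directly (O(n² log n)) instead of walking a suffix tree, and the `"\xff"` sentinel in the search is a convenience assumption for texts over small alphabets:

```python
import bisect

def suffix_array(s):
    """1-based suffix start positions in lexicographic order."""
    return sorted(range(1, len(s) + 1), key=lambda i: s[i - 1:])

def search(s, sa, p):
    """All occurrences of p are consecutive in the suffix array, so a
    binary search finds them; O(m log n) character comparisons."""
    suffixes = [s[i - 1:] for i in sa]
    lo = bisect.bisect_left(suffixes, p)
    hi = bisect.bisect_left(suffixes, p + "\xff")  # just past the run
    return sorted(sa[lo:hi])

print(suffix_array("abab"))                    # -> [3, 1, 4, 2]
print(search("abab", suffix_array("abab"), "ab"))  # -> [1, 3]
```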
Example
Let S = mississippi
L 11 i
8 ippi
Let P = issa 5 issippi
2 ississippi
1 mississippi
M 10 pi
9 ppi
7 sippi
4 sissippi
6 ssippi
R 3 ssissippi
How do we accelerate the search ?
Maintain l = LCP(P, L) and r = LCP(P, R).
If l = r then start comparing M to P at position l + 1.
If l > r then
[Figure: transition table and transition diagram of a two-state
automaton over inputs a and b.]
Finite automata string matcher
Construct a deterministic FA for the pattern
P before starting the match with T.
Processing the text takes Θ(n) time.
T matches P if DFA(P) accepts T.
String-Matching Automata
Given the pattern P[1..m], build a finite
automaton M:
The state set is Q = {0, 1, 2, ..., m}
The start state is 0
The only accepting state is m
FINITE-AUTOMATON-MATCHER(T, m, δ)
  n ← length[T]
  q ← 0
  for i ← 1 to n
      q ← δ(q, T[i])
      if q = m
          print "pattern occurs with shift" i - m
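A Python sketch of the matcher. The δ construction here is the straightforward textbook version (O(m³·|Σ|)), not an optimized one; function names are our own:

```python
def build_dfa(P, alphabet):
    """delta[q][c] = length of the longest prefix of P that is a
    suffix of P[:q] + c."""
    m = len(P)
    delta = [{} for _ in range(m + 1)]
    for q in range(m + 1):
        for c in alphabet:
            k = min(m, q + 1)
            while k > 0 and not (P[:q] + c).endswith(P[:k]):
                k -= 1
            delta[q][c] = k
    return delta

def fa_match(T, P, alphabet):
    """FINITE-AUTOMATON-MATCHER: a single Theta(n) scan of T,
    reporting every shift at which P occurs."""
    delta, m = build_dfa(P, alphabet), len(P)
    q, shifts = 0, []
    for i, c in enumerate(T, 1):
        q = delta[q][c]
        if q == m:
            shifts.append(i - m)   # pattern occurs with shift i - m
    return shifts

print(fa_match("ababacaba", "ababaca", "abc"))  # -> [0]
```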
Example
The KMP Algorithm
The Knuth-Morris-Pratt (KMP) algorithm
looks for the pattern in the text in a
left-to-right order (like the brute force algorithm).
It differs from the brute-force algorithm
by keeping track of information gained
from previous comparisons.
It shifts the pattern more intelligently
than the brute force algorithm.
Knuth-Morris-Pratt (KMP) Method
Avoids computing the transition function δ
of the DFA matcher.
Instead computes a prefix function π in
O(m) time; π has only m entries.
The prefix function π stores info about how the
pattern matches against shifts of itself
(to avoid testing shifts that will never match).
How much to skip?
If a mismatch occurs between the text and
pattern P at P[j], what is the most we can
shift the pattern to avoid wasteful
comparisons?
T: a b a a b x
P: a b a a b a
         a b a a b a
(Basic KMP does not redo the already-matched comparisons.)
KMP failure function
KMP failure function, for P = ababac:
s         a b a b a c a    P
                    q
s' = s+2      a b a b a c a    P
                    k
              a b a            Pk
In general, if q characters have matched successfully at shift s,
the next potentially valid shift is s' = s + (q - π[q]).
The Prefix Function
π is called the prefix function for P.
π: {1, 2, ..., m} → {0, 1, ..., m-1}
π[q] = length of the longest prefix of P that is a proper suffix
of Pq, i.e., π[q] = max{k : k < q and Pk is a suffix of Pq}.

Compute-π(P)
1  m := length[P];
2  π[1] := 0;
3  k := 0;
4  for q := 2 to m do
5      while k > 0 and P[k+1] ≠ P[q] do
6          k := π[k]
       od;
7      if P[k+1] = P[q] then
8          k := k + 1
       fi;
9      π[q] := k
   od;
10 return π
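Compute-π above, transcribed into Python (0-based string indexing internally, returning the 1-based π values as a list):

```python
def compute_prefix(P):
    """pi[q] = length of the longest proper prefix of P[1..q] that is
    also a suffix of P[1..q] (1-based, as on the slides)."""
    m = len(P)
    pi = [0] * (m + 1)          # pi[1..m]; pi[0] unused
    k = 0
    for q in range(2, m + 1):
        while k > 0 and P[k] != P[q - 1]:   # P[k+1] != P[q] in 1-based terms
            k = pi[k]
        if P[k] == P[q - 1]:
            k += 1
        pi[q] = k
    return pi[1:]

print(compute_prefix("ababaca"))  # -> [0, 0, 1, 2, 3, 0, 1]
```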
Example
i     1 2 3 4 5 6 7      Same as our
P[i]  a b a b a c a      FA example
π[i]  0 0 1 2 3 0 1
P7 = a b a b a c a:  a = P1, so π[7] = 1
P6 = a b a b a c:    no such prefix, π[6] = 0
P5 = a b a b a:      a b a = P3, so π[5] = 3
P4 = a b a b:        a b = P2, so π[4] = 2
P3 = a b a:          a = P1, so π[3] = 1
P2 = a b:            π[2] = 0 (= P0)
P1 = a:              π[1] = 0 (= P0)
Another Explanation
Essentially KMP is computing a FA with epsilon moves. The spine
of the FA is implicit and doesn't have to be computed -- it's just the
pattern P. π gives the back transitions. There are O(m) such transitions.
a b a b a c a
0 1 2 3 4 5 6 7
P10 = a b b a b a a b b a:  a b b a = P4, so π[10] = 4
P9  = a b b a b a a b b:    a b b = P3, so π[9] = 3
P8  = a b b a b a a b:      a b = P2, so π[8] = 2
P7  = a b b a b a a:        a = P1, so π[7] = 1
P6  = a b b a b a:          a = P1, so π[6] = 1
P5  = a b b a b:            a b = P2, so π[5] = 2
P4  = a b b a:              a = P1, so π[4] = 1
P3  = a b b:                π[3] = 0 (= P0)
P2  = a b:                  π[2] = 0 (= P0)
Time Complexity
Amortized Analysis --
Φ_0          (before the 1st iteration, q = 2)
Φ_1          (after the 1st iteration, q = 2)
Φ_2          (after the 2nd iteration, q = 3)
...
Φ_{m-1}      (after the (m-1)st iteration, q = m)
Amortized cost:  ĉ_i = c_i + Φ_i - Φ_{i-1}
Total cost:  Σ_{i=1}^{m-1} c_i = Σ_{i=1}^{m-1} ĉ_i - Φ_{m-1} + Φ_0
             ≤ Σ ĉ_i,  since Φ_{m-1} ≥ 0 = Φ_0.
We show ĉ_i = O(1).
Time Complexity (Continued)
The value of ĉ_i obviously depends on how many times statement
6 is executed.
Note that k > π[k]. Thus, each execution of statement 6 decreases
k by at least 1.
So, suppose that statements 5..6 iterate several times, decreasing
the value of k.
We have: number of iterations ≤ k_old - k_new. Thus,
ĉ_i ≤ O(1) + (k_old - k_new) + Φ_i - Φ_{i-1}
    =  O(1) + (k_old - k_new) + k_new - k_old  =  O(1)
(the O(1) accounts for statements other than 5 & 6;
Φ_i = k_new, Φ_{i-1} = k_old).
      1 2 3 4 5 6 7 8 9 10
T  =  a b b a b a b a b c
Start of 1st loop: q = 0, i = 1  [a]
2nd loop:  q = 1, i = 2   [b]
3rd loop:  q = 2, i = 3   [b]   mismatch detected
4th loop:  q = 0, i = 4   [a]
5th loop:  q = 1, i = 5   [b]
6th loop:  q = 2, i = 6   [a]
7th loop:  q = 3, i = 7   [b]
8th loop:  q = 4, i = 8   [a]   mismatch detected
9th loop:  q = 3, i = 9   [b]
10th loop: q = 4, i = 10  [c]
Termination: q = 5              match detected
1
Prefix Function for a Pattern
Given that pattern prefix P[1..q] matches
text characters T[(s+1)..(s+q)], what is
the least shift s' > s such that
P[1..k] = T[(s'+1)..(s'+k)] where s'+k = s+q?
2
Prefix Function: Example 1
b a c b a b a b a a b c b a   T
      s
      a b a b a c a           P
                q
b a c b a b a b a a b c b a   T
          s'
          a b a b a c a       P
              k
a b a b a   Pq     Compare the pattern against itself;
                   the longest prefix of P that is also a
a b a       Pk     suffix of P5 is P3; so π[5] = 3
3
Prefix Function: Example 2
i 1 2 3 4 5 6 7 8 9 10
P [i] a b a b a b a b c a
[i] 0 0 1 2 3 4 5 6 0 1
4
Illustration: given a String S and pattern p as
follows:
S b a c b a b a b a b a c a c a
p a b a b a c a
Let us execute the KMP algorithm to find
whether p occurs in S.
For p, the prefix function π was computed previously and is as follows:
q    1 2 3 4 5 6 7
p    a b a b a c a
π    0 0 1 2 3 0 1
5
Initially: n = size of S = 15;
m = size of p = 7
Step 1: i = 1, q = 0
comparing p[1] with S[1]
S b a c b a b a b a b a c a a b
p a b a b a c a
P[1] does not match with S[1]. p will be shifted one position to the right.
Step 2: i = 2, q = 0
comparing p[1] with S[2]
S b a c b a b a b a b a c a a b
p a b a b a c a
P[1] matches S[2]. Since there is a match, p is not shifted. 6
Step 3: i = 3, q = 1
Comparing p[2] with S[3] p[2] does not match with S[3]
S b a c b a b a b a b a c a a b
p a b a b a c a
Backtracking on p, comparing p[1] and S[3]
Step 4: i = 4, q = 0
comparing p[1] with S[4] p[1] does not match with S[4]
S b a c b a b a b a b a c a a b
p a b a b a c a
Step 5: i = 5, q = 0
comparing p[1] with S[5] p[1] matches with S[5]
S b a c b a b a b a b a c a a b
p a b a b a c a
7
Step 6: i = 6, q = 1
Comparing p[2] with S[6] p[2] matches with S[6]
S b a c b a b a b a b a c a a b
p a b a b a c a
Step 7: i = 7, q = 2
Comparing p[3] with S[7] p[3] matches with S[7]
S b a c b a b a b a b a c a a b
p a b a b a c a
Step 8: i = 8, q = 3
Comparing p[4] with S[8] p[4] matches with S[8]
S b a c b a b a b a b a c a a b
p a b a b a c a
8
Step 9: i = 9, q = 4
Comparing p[5] with S[9] p[5] matches with S[9]
S b a c b a b a b a b a c a a b
p a b a b a c a
S b a c b a b a b a b a c a a b
p a b a b a c a
Backtracking on p, comparing p[4] with S[10] because after the mismatch q = π[5] = 3
S b a c b a b a b a b a c a a b
p a b a b a c a 9
Step 12: i = 12, q = 5
Comparing p[6] with S[12] p[6] matches with S[12]
S b a c b a b a b a b a c a a b
p a b a b a c a
S b a c b a b a b a b a c a a b
p a b a b a c a
Pattern p has been found to occur completely in string S. The total number of shifts
that took place for the match to be found is: i - m = 13 - 7 = 6 shifts.
10
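The trace above can be reproduced with a full KMP matcher; here is a Python sketch using a 0-based prefix function internally and reporting 1-based start positions:

```python
def prefix_function(P):
    """0-based prefix function: pi[q] = length of the longest proper
    prefix of P[:q+1] that is also its suffix."""
    pi = [0] * len(P)
    k = 0
    for q in range(1, len(P)):
        while k > 0 and P[k] != P[q]:
            k = pi[k - 1]
        if P[k] == P[q]:
            k += 1
        pi[q] = k
    return pi

def kmp_match(T, P):
    """Scan T once; on a mismatch fall back via pi instead of
    re-examining text characters.  Returns 1-based start positions."""
    pi = prefix_function(P)
    q, hits = 0, []
    for i, c in enumerate(T):
        while q > 0 and P[q] != c:
            q = pi[q - 1]
        if P[q] == c:
            q += 1
        if q == len(P):
            hits.append(i - len(P) + 2)   # 1-based start of the match
            q = pi[q - 1]                 # keep scanning for overlaps
    return hits

print(kmp_match("bacbabababacaca", "ababaca"))  # -> [7]
```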
KMP Example
[Figure: a mismatch at j = 5 in P; the largest prefix of
P[0..j-1] = "a b a a b" that is also its suffix gives j_new = 2.]
[Figure: Boyer-Moore shift diagram; P aligned at positions 1..m with
a mismatched character c at position j, and the shifted P below.]
Rule 2-1: Character Matching Rule
(A Special Version of Rule 2)
The bad character rule uses Rule 2-1 (Character Matching
Rule): for any character x in T, find the nearest x in P which
is to the left of x in T.
Implication of Rule 2-1:
Case 1. If there is an x in P to the left of T, move P so
that the two x's match.
Case 2. If no such x exists in P, consider the partial
window defined by x in T and the string to the left of
it.
Partial W
T x
P
Ex: Suppose that P1 is aligned to T6 now. We compare pair-wise
between T and P from right to left. Since T16,17 = P11,12 =
CA and T15 = G ≠ P10 = T, we find the
rightmost position c = 7 to the left of P10 in P such that Pc is
equal to G, and we can move the window at least (10-7 = 3)
positions.
direction of the scan: right to left
s=6
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
P  A T C A C A G T A T C A
   1 2 3 4 5 6 7 8 9 10 11 12
               c       j=10    m=12
P  A T C A C A G T A T C A
   1 2 3 4 5 6 7 8 9 10 11 12
Good Suffix Rule 1
If a mismatch occurs at Ts+j-1, we match Ts+j-1 with Pj'-m+j, where
j' (m-j+1 ≤ j' < m) is the largest position such that
(1) Pj+1,m is a suffix of P1,j' and
(2) Pj'-(m-j) ≠ Pj.
We can move the window at least (m-j') position(s).
[Figure: the matched suffix t of the window reappears ending at j'
in P, preceded by z ≠ y; P is shifted to align the two copies of t.]
Rule 2: The Substring Matching Rule
For any substring u in T, find a nearest u in P which
is to the left of it. If such a u in P exists, move P;
otherwise, we may define a new partial window.
Ex: Suppose that P1 is aligned to T6 now. We compare pair-
wise between P and T from right to left. Since T16,17 = CA
= P11,12 and T15 = A ≠ P10 = T, we find the substring
CA to the left of P10 in P such that CA is the suffix of P1,6
and the character to the left of this substring CA in P is not
equal to P10 = T. Therefore, we can move the window at least
m-j (12-6 = 6) positions right.
s=6 s+j-1
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
T A A A A A A G C C T A G C A A C A A A A
mismatch
P A T C A C A T C A T C A
1 2 3 4 5 6 7 8 9 10 11 12
m=12
j=6 j=10
Shift P A T C A C A T C A T C A
1 2 3 4 5 6 7 8 9 10 11 12
AT
Good Suffix Rule 2
Good Suffix Rule 2 is used only when Good Suffix Rule 1 cannot
be used, that is, t does not appear in P1,j. Thus, t is unique in P.
If a mismatch occurs at Ts+j-1, we match Ts+m-j' with P1, where
j' (1 ≤ j' ≤ m-j) is the largest position such that
P1,j' is a suffix of Pj+1,m.
[Figure: a prefix t' of P equals a suffix of the matched part t;
P is shifted so that the prefix P1,j' aligns with that suffix.]
P.S.: t' is a suffix of substring t.
Rule 3-1: Unique Substring Rule
The substring u appears in P exactly once.
If the substring u matches with Ti,j, no matter whether a mismatch
occurs in some position of P or not, we can slide the window by l.
[Figure: u occurs once in P at distance l from the right end; the
window slides so the occurrences of u align.]
Note that the above rule also uses Rule 1.
It should also be noted that the shorter and the more right-sided
the unique substring is, the better.
A short u guarantees a short (or even empty) s,
which is desirable.
Ex: Suppose that P1 is aligned to T6 now. We compare pair-wise
between P and T from right to left. Since T12 ≠ P7 and no
substring of P to the left of P8 exactly matches T13,17, we find the
longest suffix AATC of T13,17 that is also a prefix of P. We shift
the window so that the last character of this prefix of P aligns
with the last character of that suffix. Therefore, we can shift
at least 12-4 = 8 positions.
s=6
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
T A A A A AA T C A C A T T A A T C A A A
P A A T C A T C T A A T C
1 2 3 4 5 6 7 8 9 10 11 12 m=12
Shift P A A T C A T C T A A T C
1 2 3 4 5 6 7 8 9 10 11 12
j=4 j=7 m=12
Let B(a) be the rightmost position of a in P. This
function will be used for applying the bad character rule.
j 1 2 3 4 5 6 7 8 9 10 11 12 A C G T
P A T C A C A T C A T C A B 12 11 0 10
T A G C T A G C C T G C A C G T A C A
j 1 2 3 4 5 6 7 8 9 10 11 12
Move at least
P A T C A C A T C A T C A 10-B(G) = 10 positions
Let Gs(j) be the largest number of shifts by the good
suffix rule when a mismatch occurs while comparing Pj
with some character in T.
gs1(j) is the largest k such that Pj+1,m is a suffix of P1,k and
Pk-m+j ≠ Pj, where m-j+1 ≤ k < m; 0 if there is no such k.
(gs1 is for Good Suffix Rule 1)
[Figure: the suffix t = Pj+1,m reappears ending at position k in P.]
Ex: j+1,m-k+j+1
j 1 2 3 4 5 6 7 8 9 10 11 12
P A T C A C A T C A T C A
f 10 11 12 8 9 10 11 12 13 13 13 14
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
TG A T C GA T C A A T C A T C A C A T G A T C A
PA T C A C A T C A T C A
1 2 3 4 5 6 7 8 9 10 11 12
j 1 2 3 4 5 6 7 8 9 10 11 12
P A T C A C A T C A T C A
f 10 11 12 8 9 10 11 12 13 13 13 14
Example
As for Good Suffix Rule 2, it is relatively easier.
j 1 2 3 4 5 6 7 8 9 10 11 12
P A T C A C A T C A T C A
f 10 11 12 8 9 10 11 12 13 13 13 14
Question: How can we construct the Suffix
function?
P
We now can see that actually the suffix
function is the same as the prefix. The only
difference is now we consider a suffix. Thus,
the recursive formula for the prefix function in
MP Algorithm can be slightly modified for the
suffix function in BM Algorithm.
The formula of the suffix function f' is as follows:

          | m + 2,             if j = m
f'(j) =   | f'^k(j+1) - 1,     if 1 ≤ j ≤ m-1 and there exists the smallest
          |                    k ≥ 1 such that P(j+1) = P(f'^k(j+1) - 1);
          | m + 1,             otherwise
j 1 2 3 4 5 6 7 8 9 10 11 12
P A T C A C A T C A T C A
f 14
j=m=12,
f=m+2=14
j 1 2 3 4 5 6 7 8 9 10 11 12
P A T C A C A T C A T C A
f 13 14
No k satisfies
Pj+1=Pfk(j+1)-1, k =1
f=m+1=12+1=13
P12 P13
j 1 2 3 4 5 6 7 8 9 10 11 12
P A T C A C A T C A T C A
f 13 13 14
No k satisfies
Pj+1=Pfk(j+1)-1, k =1
f=m+1=12+1=13
P11 P12
j 1 2 3 4 5 6 7 8 9 10 11 12
P A T C A C A T C A T C A
f 13 13 13 14
No k satisfies
Pj+1=Pfk(j+1)-1,
f=m+1=12+1=13
k =1
P10 P12
j 1 2 3 4 5 6 7 8 9 10 11 12
P A T C A C A T C A T C A
f 12 13 13 13 14
j 1 2 3 4 5 6 7 8 9 10 11 12
P A T C A C A T C A T C A
f 11 12 13 13 13 14
j 1 2 3 4 5 6 7 8 9 10 11 12
P A T C A C A T C A T C A
f 12 8 9 10 11 12 13 13 13 14
j 1 2 3 4 5 6 7 8 9 10 11 12
P A T C A C A T C A T C A
f 10 11 12 8 9 10 11 12 13 13 13 14
j 1 2 3 4 5 6 7 8 9 10 11 12
P A T C A C A T C A T C A
f 10 11 12 8 9 10 11 12 13 13 13 14
G 0 0 0 0 0 0 0 0 0 0 0 0
Step1: We scan from right to left and gs1(j) is determined
during the scanning, then gs1(j) >= gs2(j)
Observe:
If Pj = P4 ≠ P7 = Pf(j)-1, we know gs1(f(j)-1) = m+j-f(j)+1 = 9.
If t = f(j)-1 ≤ m and Pj ≠ Pt, then G(t) = m - gs1(f(j)-1) = f(j) - 1 - j.
f^(k)(x) = f^(k-1)(f(x) - 1), k ≥ 2
When j=12, t=13. t > m.
When j=11, t=12. Since P11 = C ≠ A = P12,
G(t) = m - max{gs1(t), gs2(t)} = m - gs1(t)
     = f(j) - 1 - j
=> G(12) = 13 - 1 - 11 = 1.
j 1 2 3 4 5 6 7 8 9 10 11 12
P A T C A C A T C A T C A
f 10 11 12 8 9 10 11 12 13 13 13 14
G 0 0 0 0 0 0 0 0 0 0 0 1
If t = f(j)-1 ≤ m and Pj ≠ Pt, then G(t) = f(j) - 1 - j.
f^(k)(x) = f^(k-1)(f(x) - 1), k ≥ 2
When j=10, t=12. P10 = T ≠ A = P12, but G(12) ≠ 0 already.
When j=9, t=12. P9 = A = P12.
When j=8, t=11. P8 = C = P11.
When j=7, t=10. P7 = T = P10.
When j=6, t=9. P6 = A = P9.
When j=5, t=8. P5 = C = P8.
When j=4, t=7. Since P4 = A ≠ P7 = T, G(7) = 8 - 1 - 4 = 3.
Besides, t = f^(2)(4) - 1 = f(f(4) - 1) - 1 = 10. Since P4 = A ≠ P10
= T, G(10) = f(7) - 1 - j = 11 - 1 - 4 = 6.
j 1 2 3 4 5 6 7 8 9 10 11 12
P A T C A C A T C A T C A
f 10 11 12 8 9 10 11 12 13 13 13 14
G 0 0 0 0 0 0 3 0 0 6 0 1
If t = f(j)-1 ≤ m and Pj ≠ Pt, then G(t) = f(j) - 1 - j.
f^(k)(x) = f^(k-1)(f(x) - 1), k ≥ 2
When j=3, t=11. P3 = C = P11.
When j=2, t=10. P2 = T = P10.
When j=1, t=9. P1 = A = P9.
j 1 2 3 4 5 6 7 8 9 10 11 12
P A T C A C A T C A T C A
f 10 11 12 8 9 10 11 12 13 13 13 14
G 0 0 0 0 0 0 3 0 0 6 0 1
j 1 2 3 4 5 6 7 8 9 10 11 12
P A T C A C A T C A T C A
f 10 11 12 8 9 10 11 12 13 13 13 14
G 0 0 0 0 0 0 3 0 0 6 0 1
Let k be the smallest k in {1,...,m} such that P(f^(k)(1)-1) = P1 and
f^(k)(1) - 1 ≤ m.
Observe:
P1,4 = P9,12, so gs2(j) = m - (f(1)-1) + 1 = 4, where 1 ≤ j ≤ f^(k)(1)-2.
P A T C A C A T C A T C A
f 10 11 12 8 9 10 11 12 13 13 13 14
G 8 8 8 8 8 8 3 8 0 6 0 1
Let z be f^(k)(1)-2. Let k' be the largest value k such that
f^(k)(z) - 1 ≤ m.
Then we set G(j) = m - gs2(j) = m - (m - f^(i)(z) - 1) = f^(i)(z) - 1,
where 1 ≤ i ≤ k' and f^(i-1)(z) < j ≤ f^(i)(z)-1 and f^(0)(z) = z.
j 1 2 3 4 5 6 7 8 9 10 11 12
P A T C A C A T C A T C A
f 10 11 12 8 9 10 11 12 13 13 13 14
G 8 8 8 8 8 8 3 8 11 6 11 1
We essentially have to decide the maximum number of steps
we can move the window right when a mismatch occurs. This
is decided by the following function:
max{G(j), j - B(Ts+j-1)}
Example
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
TG A T C GA T C A C A T A T C A C A T C A T C A
mismatch
PA T C A C A T C A T C A
1 2 3 4 5 6 7 8 9 10 11 12
PA T C A C A T C A T C A
Shift 1 2 3 4 5 6 7 8 9 10 11 12
j 1 2 3 4 5 6 7 8 9 10 11 12 A C G T
P A T C A C A T C A T C A 12 11 0 10
B
f 10 11 12 8 9 10 11 12 13 13 13 14
G 8 8 8 8 8 8 3 8 11 6 11 1
Case 1
If P contains x somewhere, then try to
shift P right to align the last occurrence
of x in P with T[i].
T x a T x a ? ?
i inew
and
move i and
j right, so
P x c ba j at end P x c ba
j jnew
Case 2
If P contains x somewhere, but a shift
right to the last occurrence is not
possible, then
shift P right by 1 character to T[i+1].
T x a x T xa x ?
i inew
and
move i and
j right, so
P cw ax j at end P cw ax
j x is after jnew
j position
Case 3
If cases 1 and 2 do not apply, then shift
P to align P[0] with T[i+1].
T x a T x a ? ? ?
i inew
and
move i and
j right, so
P d c ba j at end P d c ba
j 0 jnew
No x in P
Boyer-Moore Example (1)
T:
a p a t t e r n m a t c h i n g a l g o r i t h m
1 3 5 11 10 9 8 7
r i t h m r i t h m r i t h m r i t h m
P: 2 4 6
r i t h m r i t h m r i t h m
Last Occurrence Function
Boyer-Moores algorithm preprocesses the
pattern P and the alphabet A to build a last
occurrence function L()
L() maps all the letters in A to integers
x     a  b  c  d
L(x)  4  5  3  -1
Analysis
Boyer-Moore worst case running time is
O(nm + |A|)
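A Python sketch of Boyer-Moore restricted to the bad-character rule only (no good-suffix rule), which is enough to see the right-to-left scan and the j - L(x) shifts; function names are our own, and 0 plays the role the slides give to -1 for absent letters:

```python
def bm_bad_char(T, P, alphabet):
    """Simplified Boyer-Moore: right-to-left comparison plus the
    bad-character shift.  Worst case O(nm); sublinear on typical text.
    Returns 1-based occurrence positions."""
    L = {c: 0 for c in alphabet}
    for i, c in enumerate(P, 1):
        L[c] = i                      # L(x): last 1-based occurrence in P
    n, m = len(T), len(P)
    s, hits = 0, []
    while s <= n - m:
        j = m
        while j > 0 and P[j - 1] == T[s + j - 1]:
            j -= 1                    # scan the window right to left
        if j == 0:
            hits.append(s + 1)        # full match at 1-based position s+1
            s += 1
        else:
            # shift so the nearest x in P lines up with the bad character
            s += max(1, j - L.get(T[s + j - 1], 0))
    return hits

print(bm_bad_char("GCATCGCAGAGAGTATACAGTACG", "GCAGAGAG", "ACGT"))  # -> [6]
```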
Modular math
In C, we write: 5 % 3 == 2, read as 5 mod 3 is 2.
In Math, we write: 5 ≡ 2 (mod 3), or 5 ≡3 2.
5 mod 3 = 2
3 * 3 mod 5 = 4
3 * 3 * 3 mod 5 = 2
(3 * 3 mod 5) * 3 mod 5 = 2
(3^4) mod 5 = ((3^2 mod 5)^2) mod 5 = (9 mod 5)^2 mod 5 = 16 mod 5 = 1
GCD (Greatest Common Divisor)
Examples:
1. gcd(10,15)=gcd(2*5, 3*5) = 5 * gcd(2, 3) = 5
2. gcd(20,24)=gcd(2*2*5, 2*2*2*3) = 2*2*gcd(5,2*3) = 4
3. gcd(2,2)=2
Extended-Euclid
Extended-Euclid(a, b)
    if b = 0 then
        return (a, 1, 0)
    fi;
    (d, x, y) := Extended-Euclid(b, a mod b);
    (d, x, y) := (d, y, x - floor(a/b) * y);
    return (d, x, y)
gcd(a, 0) = a = 1a + 0b
d = bx' + (a mod b) y'
d = bx' + (a - floor(a/b) * b) y'
  = ay' + b (x' - floor(a/b) y')
  = ax + by
Example: ext-gcd(30,21)=(3,-2,3)
C:\> eea 30 21
Calculating eea_rec(30,21):
Calculating eea_rec(21, 9):
Calculating eea_rec( 9, 3):
Calculating eea_rec( 3, 0):
1 * 3 + 0 * 0 = 3 = gcd( 3, 0),
0 * 9 + 1 * 3 = 3 = gcd( 9, 3),
1 * 21 + -2 * 9 = 3 = gcd(21, 9),
-2 * 30 + 3 * 21 = 3 = gcd(30,21),
Exercises
1. Compute eea(7, 3)
2. Compute eea(31, 20)
Solution
Q. Compute eea(7,3)
A. Calculating eea_rec( 7, 3):
Calculating eea_rec( 3, 1):
Calculating eea_rec( 1, 0):
1 * 1 + 0 * 0 = 1 = gcd( 1, 0)
0 * 3 + 1 * 1 = 1 = gcd( 3, 1)
1 * 7 + -2 * 3 = 1 = gcd( 7, 3)
Solution
Q. Compute eea(31, 20)
A. Calculating eea_rec(31,20):
Calculating eea_rec(20,11):
Calculating eea_rec(11, 9):
Calculating eea_rec( 9, 2):
Calculating eea_rec( 2, 1):
Calculating eea_rec( 1, 0):
1 * 1 + 0 * 0 = 1 = gcd( 1, 0), inv(1) is 1 mod 0
0 * 2 + 1 * 1 = 1 = gcd( 2, 1), inv(2) is 0 mod 1
1 * 9 + -4 * 2 = 1 = gcd( 9, 2), inv(9) is 1 mod 2
-4 * 11 + 5 * 9 = 1 = gcd(11, 9), inv(11) is -4 mod 9
5 * 20 + -9 * 11 = 1 = gcd(20,11), inv(20) is 5 mod 11
-9 * 31 + 14 * 20 = 1 = gcd(31,20), inv(31) is -9 mod 20
Inverse mod n
a * a^-1 ≡ 1 (mod n)
Example: 5 * 2 = 10 ≡ 1 (mod 9),
so 5 is the inverse of 2 mod 9.
Inverse modulo primes
Every b that is not a multiple of a prime p
has an inverse (mod p).
For example:
1 * 1 = 1 mod 5
2 * 3 = 1 mod 5
3 * 2 = 1 mod 5
4 * 4 = 1 mod 5
Example, inverse mod 7
The multiplicative inverse of 1 is 1,
the multiplicative inverse of 2 is 4 (and vice versa)
the multiplicative inverse of 3 is 5 (and vice versa)
and the multiplicative inverse of 6 is 6.
Note that 0 doesn't have a multiplicative inverse. This corresponds
to the fact in ordinary arithmetic that 1 divided by 0 does not
have an answer.
Computing inverse mod n
Given a and n, with gcd(a, n) = 1,
finding x satisfying ax ≡ 1 (mod n)
is the same as solving for x in ax + ny = 1.
We can use ext-gcd: eea(a, n) = (g, x, y), to get x.
Not all numbers have an inverse mod n
If gcd(a, n) = 1, then a has an inverse mod n.
The inverse can be found from ext-gcd(a, n):
Let gcd(a, n) = 1, so
a * k + n * y = 1
a * k + 0 ≡ 1 (mod n)
a * k ≡ 1 (mod n)
Inverse from ext-gcd
C:\> ext-euclid2.sh 2 5
Computing ext_euclid_algo(2, 5)
# 2 = 0 * 5 + 2
# 5 = 2 * 2 + 1
# 2 = 2 * 1 + 0
res: 2 * -2 + 5 * 1 = 1
coprime: 2 and 5,
so 2 * 3 = 1 (mod 5),
and 3 is the inverse of 2 (mod 5)
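Extended-Euclid and the inverse computation above, as a Python sketch (function names are our own):

```python
def ext_gcd(a, b):
    """Extended-Euclid(a, b): returns (d, x, y) with d = gcd(a, b) = ax + by."""
    if b == 0:
        return (a, 1, 0)
    d, x, y = ext_gcd(b, a % b)
    return (d, y, x - (a // b) * y)

def inverse_mod(a, n):
    """Inverse of a modulo n; requires gcd(a, n) = 1."""
    d, x, _ = ext_gcd(a, n)
    assert d == 1, "no inverse: a and n are not coprime"
    return x % n            # normalise into 0..n-1

print(ext_gcd(30, 21))      # -> (3, -2, 3), as in the worked example
print(inverse_mod(2, 5))    # -> 3
```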
Power mod, compute ap mod n
Brute force:
int z = 1;
for(i=1;i<=p;i++)
z = z * a;
z = z % n
Problems:
1. z overflows
2. Slow.
Power mod, compute ap mod n
Brute force:
int z = 1;
for(i=1;i<=p;i++)
z = z * a mod n;
Problems:
1. Slow.
Fast Power mod O(log(p))
// Return: b^p mod n, complexity O(log2(p)).
// Usage: pow_mod(2, 100, 5) for 2^100 mod 5
int pow_mod(int b, int p, int n) {
    int r = 1;
    while (p > 0) {
        if (p & 1) {       // same as p % 2 == 1
            r = (r * b) % n;
        }
        p >>= 1;           // same as: p = p / 2
        b = (b * b) % n;
    }
    return r;
}
CRT (chinese remainder theorem)
Problem: solve for x in
x = a1 mod m1
x = a2 mod m2
x = a3 mod m3
test 0.
crtin = 887
887 mod 7 is 5
887 mod 11 is 7
887 mod 13 is 3
crt_inv: (5 * -2 * 143) + (7 * 4 * 91) + (3 * -1 * 77) = 887
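A CRT sketch matching the worked example. It assumes the moduli are pairwise coprime, and uses Python 3.8+'s three-argument pow with exponent -1 for the modular inverse:

```python
from functools import reduce

def crt(residues, moduli):
    """x = sum a_i * M_i * inv(M_i mod m_i) mod M, with M_i = M / m_i.
    Moduli must be pairwise coprime."""
    M = reduce(lambda p, q: p * q, moduli)
    x = 0
    for a, m in zip(residues, moduli):
        Mi = M // m
        x += a * Mi * pow(Mi, -1, m)   # modular inverse (Python 3.8+)
    return x % M

print(crt([5, 7, 3], [7, 11, 13]))  # -> 887, as in test 0 above
```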
Euler phi function
phi(n) = the number of integers in 1..n relatively prime to n.
phi(6) = 2   {1, 5}
phi(5) = 4   {1, 2, 3, 4}
phi(p) = p - 1, if p is a prime.
Perfect number
Complexity: O(sqrt(n))
Euler and Fermat's theorem
Euler's theorem: a^phi(n) ≡ 1 (mod n), when gcd(a, n) = 1.
Proof from Lagrange's theorem about groups.
SSL
RSA
Diffie Hellman
Protecting data with Encryption
We need to lock the data:
Make sure it is not tampered with
Make sure it is available only to the recipients.
Encryption
Encryption before computers
Symmetric key encryption
Symmetric key encryption uses one key, called
secret key - for both encryption and
decryption. Users exchanging data must keep
this key secret. Message encrypted with a
secret key can be decrypted only with the
same secret key.
Examples: DES - 64 bits, 3DES - 192 bits, AES -
256 bits, IDEA - 128 bits, Blowfish, Serpent
Symmetric vs Public key encryption
Public key cryptography
Public key Cryptography
The two main branches of public key cryptography are:
Ronald Rivest   Adi Shamir   Leonard Adleman
(Born 1948)     (Born 1952)  (Born 1945)
Examples:
MD5: 128 bits
SHA-1: 160 bits
SHA-2: 256, 512 bits
Signing using SHA and RSA
Digital Signatures
SSL is secure socket layer
Used by https
to secure e-commerce and logins
https website
SBI SSL certificate
https: browser warning on fake ssl
certificate
~/.gnupg
GPG
$ is-prime 101
101 is a prime
$ factor 100
100: 2 2 5 5
$ factor 101
101: 101
Is Prime, naive algorithm
is_prime(n) {
for f in 2 to sqrt(n) {
if (n mod f == 0)
return "n is composite, factor = f"
}
return "n is a prime, no factors found"
}
Homework: Write Euclid's GCD, Ext-GCD,
Modular Inverse in C
def gcd(a, b):
    if b == 0: return a
    else: return gcd(b, a % b)
Basic Geometric Objects in the Plane
point: denoted by a pair of coordinates (x, y)
[Figure: points p0, p1, p2 in the x-y plane; the vector p1 - p0 and
the angle at p1.]
Orientation in the Plane
The orientation of an ordered triplet of points in the plane can be
- counterclockwise (left turn)
- clockwise (right turn)
- collinear (no turn)
[Figure: examples of clockwise (right turn), counterclockwise
(left turn) and collinear (no turn) triplets a, b, c.]
CW and CCW angles
Q2 Line: Left/Right turn?
Given two line segments p0p1 and p1p2,
if we go from p0p1 to p1p2,
is there a left or a right turn at p1?
[Figure: a left-turn and a right-turn configuration of p0, p1, p2.]
Q3 Lines: intersect?
Do line segments p0p1 and p2p3 intersect?
[Figure: three configurations of segments p0p1 and p2p3.]
Cross product of 2 vectors
Given vectors p1, p2, their cross-product is
p1 x p2 = (x1 * y2) - (x2 * y1) = -(p2 x p1)
Use the "right hand rule" for the direction of p1 x p2.
Its magnitude is the area of the parallelogram formed
by p1 and p2: area = |p1| * |p2| * sin(t), where t is the
angle between the vectors.
            | x1  x2 |
p1 x p2 = det          = x1*y2 - x2*y1 = -(p2 x p1)
            | y1  y2 |
Example: cross product
p1 x p2 = (x1 * y2) - (x2 * y1)
        = 2*2 - 1*1 = 4 - 1 = 3 > 0  (ccw).
p2 x p1 = -3  (clockwise)
p1 = (x1=2, y1=1), p2 = (x2=1, y2=2), p0 = (0, 0)
Left or Right turn?
Given p0p1 and p1p2,
going from p0 to p1 to p2,
do we turn left or right at p1?
Compute d = (p1 - p0) x (p2 - p0)
if d > 0, then left turn (counterclockwise, sin(t) > 0)
if d < 0, then right turn (clockwise)
Bounding box
bounding box is the smallest x-y rectangle
containing the object
y2=max(yi)
y1=min(yi)
x1=min(xi)
x2=max(xi)
Intersection of rectangles
Rect(p1,p2) intersects Rect(p3,p4) iff their x-intervals overlap
(x1 <= x4 && x3 <= x2) and their y-intervals overlap
(y1 <= y4 && y3 <= y2), assuming each rectangle is given by its
lower-left and upper-right corners.
[Figure: two overlapping axis-parallel rectangles (p1,p2) and (p3,p4).]
Quick Intersection check
Collision detection
Approximate objects by boxes.
If the bounding boxes of two objects don't intersect, the objects cannot
intersect (the common case).
But if the boxes intersect, a more complex test is required for intersection.
Point Inclusion
Given a polygon and a point p,
is the point inside or outside the polygon?
Orientation helps solve this problem in linear time.
Point Inclusion: algorithm
Draw a horizontal line to the right of each point and extend it to infinity.
Count the number of times the line intersects the polygon:
even -> point is outside
odd  -> point is inside
[Figure: a polygon with query points a-f and their rightward rays.]
Degenerate cases: collinear segments, multiple intersections at a vertex.
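The ray-casting algorithm above as a Python sketch. Note that it uses a real division for the crossing point, which the Computational Aspects slide advises avoiding (a cross-product formulation would remove it), and it does not handle points exactly on an edge:

```python
def point_in_polygon(p, poly):
    """Shoot a horizontal ray to the right of p and count edge
    crossings; an odd count means inside."""
    x, y = p
    inside = False
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):                    # edge straddles the ray
            xc = x1 + (y - y1) * (x2 - x1) / (y2 - y1)   # crossing x
            if xc > x:                              # crossing is to the right
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(point_in_polygon((2, 2), square))  # -> True
print(point_in_polygon((5, 2), square))  # -> False
```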
Computational Aspects
We will use +, -, *, < on real numbers.
Avoid real divisions: they cause instability.
Avoid real equality, as the representation is an approximation.
Instead of (x1 == x2) use eq(x1, x2):
#include <math.h>
#define eps 1.e-10
#define eq(x1,x2) (fabs(x1-x2) < eps)
2 line intersection
[Figure: five configurations of segments p1p2 and p3p4:
(a) intersecting, (b) not intersecting, (c) intersecting,
(d) intersecting at an endpoint, (e) not intersecting.]
Two line segments intersect
check whether each segment straddles the line
containing the other.
A segment p1p2 straddles a line if point p1 lies
on one side of the line and point p2 lies on the
other side.
Two line segments intersect iff either holds:
each segment straddles the line containing the other;
an endpoint of one segment lies on the other segment
(the boundary case).
Direction
DIRECTION(p0, p1, p2)
    return (p2 - p0) x (p1 - p0)
On-Segment
ON-SEGMENT(p1, p2, p3)
    return
        min(x1, x2) <= x3 <= max(x1, x2)
        &&
        min(y1, y2) <= y3 <= max(y1, y2)
Algorithm
SEGMENTS-INTERSECT(p1, p2, p3, p4)
1 d1 DIRECTION(p3, p4, p1)
2 d2 DIRECTION(p3, p4, p2)
3 d3 DIRECTION(p1, p2, p3)
4 d4 DIRECTION(p1, p2, p4)
5 if ((d1 > 0 and d2 < 0) or (d1 < 0 and d2 > 0))
and ((d3 > 0 and d4 < 0) or (d3 < 0 and d4 > 0))
6 then return TRUE
7 elseif d1 = 0 and ON-SEGMENT(p3, p4, p1) // p1 on p3p4?
8 then return TRUE
9 elseif d2 = 0 and ON-SEGMENT(p3, p4, p2) // p2 on p3p4?
10 then return TRUE
11 elseif d3 = 0 and ON-SEGMENT(p1, p2, p3) // p3 on p1p2?
12 then return TRUE
13 elseif d4 = 0 and ON-SEGMENT(p1, p2, p4) // p4 on p1p2?
14 then return TRUE
15 else return FALSE
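SEGMENTS-INTERSECT, DIRECTION and ON-SEGMENT transcribed into Python, with points as (x, y) tuples and integer coordinates:

```python
def direction(p0, p1, p2):
    """Cross product (p2 - p0) x (p1 - p0), as in DIRECTION above."""
    return ((p2[0] - p0[0]) * (p1[1] - p0[1])
            - (p1[0] - p0[0]) * (p2[1] - p0[1]))

def on_segment(p1, p2, p3):
    """Is p3 inside the bounding box of collinear segment p1p2?"""
    return (min(p1[0], p2[0]) <= p3[0] <= max(p1[0], p2[0]) and
            min(p1[1], p2[1]) <= p3[1] <= max(p1[1], p2[1]))

def segments_intersect(p1, p2, p3, p4):
    d1 = direction(p3, p4, p1)
    d2 = direction(p3, p4, p2)
    d3 = direction(p1, p2, p3)
    d4 = direction(p1, p2, p4)
    # general case: each segment straddles the other's line
    if ((d1 > 0 and d2 < 0) or (d1 < 0 and d2 > 0)) and \
       ((d3 > 0 and d4 < 0) or (d3 < 0 and d4 > 0)):
        return True
    # boundary cases: an endpoint lies on the other segment
    if d1 == 0 and on_segment(p3, p4, p1): return True
    if d2 == 0 and on_segment(p3, p4, p2): return True
    if d3 == 0 and on_segment(p1, p2, p3): return True
    if d4 == 0 and on_segment(p1, p2, p4): return True
    return False

print(segments_intersect((0, 0), (4, 4), (0, 4), (4, 0)))  # -> True
```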
N line intersection
The sweep-line status is a total order T, for which we require the following
operations:
INSERT(T, s): insert segment s into T.
DELETE(T, s): delete segment s from T.
ABOVE(T, s): return the segment immediately above segment s in T.
BELOW(T, s): return the segment immediately below segment s in T.
If there are n segments in the input, each of the above operations take
O(lg n) time using red-black trees.
Segment-intersection pseudo-code
ANY-SEGMENTS-INTERSECT(S)
T ← {}
EP = sort the endpoints of the segments in S from left to right, breaking
ties by putting left endpoints before right endpoints and breaking
further ties by putting points with lower y-coordinates first
for each point p in EP (the sorted list of endpoints)
if p is the left endpoint of a segment s then INSERT(T, s)
if (ABOVE(T, s) exists and intersects s) or
(BELOW(T, s) exists and intersects s) then return TRUE
if p is the right endpoint of a segment s then
if both ABOVE(T, s) and BELOW(T, s) exist and
ABOVE(T, s) intersects BELOW(T, s) then return TRUE
DELETE(T, s)
return FALSE // no intersections
Example
[Figure: points p and q in the plane; horizontal separation |px - qx|]
The closest pair problem
Given: a set of points P = {p1 .. pn} in the plane, such that pi = (xi, yi).
Find a pair pi, pj with minimum ||pi - pj||,
where ||p - q|| = sqrt((px - qx)^2 + (py - qy)^2).
Closest Pair - Brute force
Find a closest pair among p1 .. pn.
Easy to do in O(n^2) time:
for all pairs pi, pj, compute ||pi - pj|| and choose the minimum.
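The brute-force approach is a few lines of Python; this sketch returns the distance together with the pair (the function name is ours):

```python
from math import hypot, inf
from itertools import combinations

def closest_pair_brute(points):
    """O(n^2): try every pair, keep the minimum distance."""
    best = (inf, None, None)
    for p, q in combinations(points, 2):
        d = hypot(p[0] - q[0], p[1] - q[1])  # Euclidean distance ||p - q||
        if d < best[0]:
            best = (d, p, q)
    return best
```

For n points, combinations(points, 2) yields n(n-1)/2 pairs, which is the O(n^2) cost quoted above.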
Let d = min(d1, d2), where d1 and d2 are the closest-pair distances found in the two halves.
Observe:
Need to check only pairs which cross the dividing line.
Only interested in pairs within distance < d.
Suffices to look at points in the 2d-width strip around the median line.
Scanning the strip
S = points in the strip.
Q = {q1 .. qk : k <= n}, the points of S sorted by y-coordinate, qi = (xi, yi).
dmin = d
For i = 1 to k do
  j = i - 1
  While (yi - yj < d) do   // is this O(n^2)?
    If ||qi - qj|| < dmin then
      dmin = ||qi - qj||
    j--
Report dmin (and the corresponding pair)
Merge complexity is not O(n^2)
Can we have O(n^2) points qj within distance d of some qi? NO: within the strip, any d x 2d rectangle below qi can hold only a constant number of points at pairwise distance >= d (at most about 7), so the inner while loop runs O(1) times per point and the scan is O(n) overall.
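Putting the pieces together, here is a sketch of the whole divide-and-conquer closest-pair algorithm (names and structure are ours; it assumes distinct points):

```python
from math import hypot, inf

def closest_pair(points):
    """O(n log n) divide and conquer closest-pair distance."""
    px = sorted(points)                      # sorted by x
    py = sorted(points, key=lambda p: p[1])  # sorted by y

    def solve(px, py):
        n = len(px)
        if n <= 3:                           # base case: brute force
            return min((hypot(a[0] - b[0], a[1] - b[1])
                        for i, a in enumerate(px) for b in px[i + 1:]),
                       default=inf)
        mid = n // 2
        xm = px[mid][0]                      # dividing vertical line
        lx, rx = px[:mid], px[mid:]
        lset = set(lx)
        ly = [p for p in py if p in lset]    # keep y-order in each half
        ry = [p for p in py if p not in lset]
        d = min(solve(lx, ly), solve(rx, ry))
        # Scan the 2d-wide strip around the median line, in y-order.
        strip = [p for p in py if abs(p[0] - xm) < d]
        best = d
        for i, p in enumerate(strip):
            for q in strip[i + 1:]:
                if q[1] - p[1] >= d:         # only a constant window matters
                    break
                best = min(best, hypot(p[0] - q[0], p[1] - q[1]))
        return best

    return solve(px, py)
```

The inner strip loop breaks as soon as the y-gap reaches d, which is exactly the constant-neighbors argument above.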
Graham Scan - Illustration continued
[Figure: a right turn at p4 causes pop(p4); in step 8 the while loop pops p8, then p7]
Graham Scan - Example
[Figure: Graham scan run step by step on points p0 .. p12; p0 is the lowest point, and the remaining points are processed in polar-angle order, popping on right turns]
Proof.
Focus on the moment just before pi is pushed, to decide whether pj stays in the CH.
Because pi's polar angle relative to p0 is greater than pj's polar angle, and because the turn pk -> pj -> pi is a left turn (otherwise we would have popped pj), pj stays on the hull.
Running time
Line 1 takes Theta(n) time.
Line 2 takes O(n lg n) time, using merge sort or heapsort
to sort the polar angles and the cross-product method
to compare angles. (Removing all but the farthest point
with the same polar angle can be done in a total of O(n)
time.)
Lines 3-5 take O(1) time. Because m <= n - 1, the for loop
of lines 6-9 is executed at most n - 3 times. Since PUSH
takes O(1) time, each iteration takes O(1) time exclusive
of the time spent in the while loop of lines 7-8, and thus
overall the for loop takes O(n) time exclusive of the
nested while loop.
We use aggregate analysis to show that the while loop
takes O(n) time overall.
continued ..
Running time continued
For i = 0, 1, ..., m, each point pi is pushed onto stack S
exactly once, and there is at most one POP
operation for each PUSH operation.
At least three points - p0, p1, and pm - are never popped
from the stack, so in fact at most m - 2 POP
operations are performed in total.
Each iteration of the while loop performs one POP, and
so there are at most m - 2 iterations of the while loop
altogether.
Since the test in line 7 takes O(1) time, each call of POP
takes O(1) time, and since m <= n - 1, the total time taken
by the while loop is O(n).
Thus, the running time of GRAHAM-SCAN is O(n lg n).
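A compact Python sketch of Graham scan follows; it sorts by polar angle around the lowest point and pops on non-left turns (function names and tie-breaking by distance are our choices):

```python
from math import atan2

def cross(o, a, b):
    """> 0 iff the turn o -> a -> b is a left (counterclockwise) turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def graham_scan(points):
    """O(n log n) convex hull; returns vertices in counterclockwise order."""
    p0 = min(points, key=lambda p: (p[1], p[0]))   # lowest, then leftmost
    rest = sorted((p for p in points if p != p0),
                  key=lambda p: (atan2(p[1] - p0[1], p[0] - p0[0]),
                                 (p[0] - p0[0]) ** 2 + (p[1] - p0[1]) ** 2))
    hull = [p0]
    for p in rest:
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()                             # right turn (or collinear): pop
        hull.append(p)
    return hull
```

The sort is the O(n log n) term; the push/pop loop is O(n) by the aggregate argument above.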
Convex Hull - Jarvis march
[Figure: gift wrapping on points p0 .. p12, starting from the lowest point p0 and wrapping around the hull one vertex at a time]
Running time
If implemented properly, Jarvis's march has a running time of O(nh),
where h is the size of CH(Q).
For each of the h vertices of CH(Q), we find the vertex
with the minimum polar angle.
Each comparison between polar angles takes O(1) time,
and we can compute the minimum of n values in O(n) time if
each comparison takes O(1) time.
Thus, Jarvis's march takes O(nh) time.
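Gift wrapping is also short in Python; this sketch assumes distinct points and ignores collinear ties (naming is ours):

```python
def cross(o, a, b):
    """> 0 iff o -> a -> b is a counterclockwise turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def jarvis_march(points):
    """O(nh) gift wrapping: repeatedly pick the most clockwise-free point."""
    start = min(points)                  # leftmost point is surely on the hull
    hull, p = [], start
    while True:
        hull.append(p)
        # Candidate for the next hull vertex, any point other than p.
        q = points[0] if points[0] != p else points[1]
        for r in points:
            if cross(p, q, r) < 0:       # r lies to the right of p->q: wrap to r
                q = r
        p = q
        if p == start:                   # wrapped all the way around
            break
    return hull
```

Each trip around the while loop adds one hull vertex and scans all n points, giving the O(nh) bound.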
Convex hull - few other methods
Divide & Conquer: compute the hulls of the two halves A and B recursively, then merge them by finding the upper and lower tangent lines.
[Figure: hulls of A and B joined along the lower tangent; additional figures on hull sizes in 2D, 3D, and higher dimensions]
Partitioning problem
Given: n points are scattered inside a convex polygon P (in 2D) with m vertices.
Does there exist a partition of P into n sub-regions satisfying the following:
Each sub-region is a convex polygon
Each sub-region contains at least one point
All sub-regions have equal area
Partitioning result
Given: n points scattered inside a convex polygon P (in 2D) with m vertices.
An equitable partition (convex sub-regions of equal area, each containing at least one point) always exists, and we can find it in running time O(N n log N), where N = m + n.
Partitioning Applications
[Figure: an example partition with a point p1 and an edge e]
Delaunay triangulations and convex hulls
[Figure: graphs K and G]
Chromatic number of a Graph
Given a graph G, how many colors are required to paint the vertices such that adjacent vertices have different colors?
[Figure: an example graph]
Vertex Cover (VC) is NPC
Given a graph G and an integer k, is there a subset of k vertices (called a cover) such that every edge is incident to a vertex in the cover?
Independent set is NPC
An independent set is a subset of vertices in the graph with no edges between any pair of nodes in the independent set.
Given a graph G = (V, E), find the largest independent set.
[Examples shown in figure]
Exercises
On the graphs G1 and G2 below:
1. Find a maximum independent set S
2. Find a minimal vertex cover
3. Find a hamiltonian path
4. Find an Euler circuit
5. 3-Color the nodes
[Figure: graphs G1 and G2 with vertices labeled 1-7]
Knapsack problem is NPC
Given a set A of integers a1 .. an, and another integer K in binary.
Q. Is there a subset B of A whose elements sum exactly to K?
Language of Boolean Logic
1. A formula is satisfiable if it is true for some truth assignment to its variables, e.g. S = "(x or y or not(z))".
2. A formula is valid (a tautology) if it is true for all truth assignments to its variables, e.g. V = "(x or not(x))".
3. A formula is unsatisfiable (invalid) if it is false for all values of its variables, e.g. U = "(x and not(x))".
Boolean Algebra
Distributive Laws:
x or (y & z) = (x or y) & (x or z)
x & (y or z) = (x & y) or (x & z)
De Morgan's laws:
!(x & y) = !x or !y
!(x or y) = !x & !y
Any boolean formula can be converted into a standard form called CNF, a conjunction of clauses:
(.. or ..) & (.. or ..) & (.. or ..)
E.g. P = (x1 v x2 v x3) & (x2 v x3 v x4)
Boolean formulas
Consider the Boolean expression:
P = (x1 v x2 v x3) .. clause 1
where each variable takes the value T or F.
3SAT is also NPC
3CNF is a boolean formula with just 3 literals per clause.
3SAT problem: is a given 3CNF formula satisfiable? NP-Complete.
[Reduction diagram: SAT -> 3SAT -> Maximum cut, Graph coloring, Vertex cover]
Unresolved complexity
Integer Factoring.
Not all algorithms that are O(2^n) are NP-complete.
Integer factorization (also O(2^n)) is not thought to be NP-complete.
Note: the size of the problem is measured by the number of digits in n (not the value of n). E.g. n = 999, size of n is 3.
"Is Prime" is in P
PRIMES is in P [Agrawal, Kayal, Saxena, IIT Kanpur, 2002].
They give an unconditional deterministic polynomial-time algorithm.
The algorithm decides whether an input number is prime or composite (but does not find its factors).
General Definitions
P, NP, NP-hard, NP-easy, and NP-complete
Problems
- Decision problems (yes/no)
- Optimization problems (solution with best score)
P
- Decision problems that can be solved in polynomial time (efficiently).
NP
- Decision problems whose YES answer can be verified in polynomial time, if we already have the proof (or witness).
co-NP
- Decision problems whose NO answer can be verified in polynomial time, if we already have the proof (or witness).
P = NP?
P: The complexity class of decision problems that can
be solved on a deterministic Turing machine in
polynomial time.
NP-Complete
Definition: S is called NP-Complete if
1. Problem S is in NP, and
2. S is NP-hard: every other NP problem is polynomial-time reducible to S.
[Diagram: if P != NP, then P and the NP-complete problems are disjoint regions inside NP, which sits inside EXP; if P = NP, the classes P, NP, and NP-complete coincide]
Definition: S is in P
S is in P if S can be solved in polynomial time by a deterministic Turing machine.
Another P versus NP diagram
1. P: The class of problems that can be solved by a deterministic Turing Machine in polynomial time.
2. NP: The class of problems that can be solved by a non-deterministic Turing Machine in polynomial time.
3. NP-complete: The class of problems that are in NP and to which every problem in NP can be reduced.
4. NP-hard: The class of problems to which every problem in NP can be reduced.
[Diagram: the two possible worlds, P != NP and P = NP]
from http://henry2005.jimdo.com/2011/05/21/p-np-np-hard-and-np-complete/
If P=NP then
If P = NP, all NP-hard problems can also be solved in polynomial time, e.g. Satisfiability, Clique, Graph coloring, Partition numbers.
Assume all the basic arithmetic operations (addition, subtraction, multiplication, division, and comparison) can be done in polynomial time.
Satisfiability problem (SAT) is NPC
Given a boolean formula, determine if there exists an assignment to the variables that satisfies the formula.
Example.
To solve subset-sum with target m = 30 on a set of total sum S = 100:
add a dummy weight w = 40, chosen so that (S + w)/2 = m + w = 70, and
call Partition on the 140-total set: one half is 40 + (a subset summing to 30), the other half sums to 70.
Polynomial Reductions
1) Partition REDUCES to Subset-Sum
Partition <p Subset-Sum
2) Subset-Sum REDUCES to Partition
Subset-Sum <p Partition
Therefore they are equivalently hard
NP-Completeness Proof Method
To show that Q is NP-Complete:
1) Show that Q is in NP
2) Pick an instance S of your favorite NP-Complete problem (e.g. 3-SAT)
3) Show a polynomial algorithm to transform S into an instance of Q
Starting with SAT
Cook and Levin independently showed that SAT is NP-complete.
Since the reducibility relation <=p is transitive, one can show that another problem B is NP-hard by reducing SAT, or any of the other known NP-complete problems, to B.
Showing NP
To show that B is in NP, you should give a nondeterministic polynomial-time guess-and-verify algorithm for it:
Guess a (small) solution, and
Give a deterministic polynomial-time algorithm to check the solution.
Showing NPC
To show that B is NP-complete:
1. Show that B is NP-hard by giving a polynomial-time reduction from one of the known NP-complete problems (SAT, CNFSAT, CLIQUE, etc.) to B.
2. Show that B is in NP by giving a nondeterministic polynomial-time guess-and-verify algorithm for it.
Showing NP Hardness:
If you are only asked to show NP-hardness, you only need to do 1. But if you are asked to show NP-completeness, you need to do both 1 and 2.
Example: Clique is NPC
CLIQUE = { <G,k> | G is a graph with a clique of size k }
A clique is a subset of vertices that are all pairwise connected.
Why is CLIQUE in NP?
Reduce 3SAT to Clique
Given a 3SAT problem with k clauses:
Make a vertex for each literal.
Connect each vertex to the literals in other clauses that are not its negation.
Any k-clique in this graph corresponds to a satisfying assignment (see example in next slide).
3SAT as a clique problem
Example: Independent Set is NPC
Defintion: An k-IS, is a subset of k vertices
of G with no edges between them.
Decision problem: Does G have a k-IS?
To Show k-IS is NPC
Method: Reduce k-clique to k-IS
Reduce Clique to IS
Independent set is the dual problem of Clique.
Convert G to Gc (the complement graph).
G has a k-clique iff Gc has a k-IS.
[Figure: G has four 3-cliques; Gc has four 3-ISs, e.g. see ovals]
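The duality is easy to check mechanically on small graphs; this sketch uses brute force (so it is only practical for tiny instances - the point is the reduction, not efficiency):

```python
from itertools import combinations

def complement(vertices, edges):
    """Gc has exactly the non-edges of G."""
    edges = {frozenset(e) for e in edges}
    return {frozenset({u, v}) for u, v in combinations(vertices, 2)} - edges

def has_clique(vertices, edges, k):
    """Brute force: some k-subset with all pairs connected?"""
    edges = {frozenset(e) for e in edges}
    return any(all(frozenset({u, v}) in edges for u, v in combinations(s, 2))
               for s in combinations(vertices, k))

def has_independent_set(vertices, edges, k):
    """Brute force: some k-subset with no pair connected?"""
    edges = {frozenset(e) for e in edges}
    return any(all(frozenset({u, v}) not in edges
                   for u, v in combinations(s, 2))
               for s in combinations(vertices, k))
```

By construction, has_clique(V, E, k) always agrees with has_independent_set(V, complement(V, E), k).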
Theorem: HAMILTONIAN-PATH is NP-complete
Proof:
1. HAMILTONIAN-PATH is in NP
Can be easily proven
3SAT to H-Path: Reduction explained
For each variable we will build a small graph
called a "gadget".
We will stack the gadgets for each variable
The h-path will pass through each gadget,
Starting at the top,
Passing through the center (left to right OR
right to left), and
Ending at the bottom.
[Figures: the construction for a two-clause formula over x1 and x2. Each variable gets a gadget - a row of nodes traversed left-to-right for xi = true or right-to-left for xi = false; the gadgets are stacked, and each clause attaches extra nodes that the h-path can detour through only when the traversal direction of some variable satisfies that clause.]
Reductions
If B is in NP and some known NP-complete problem is polynomial-time reducible to B, then B is NP-complete.
Vertex Cover
A VC of a graph is a subset S of nodes such that every edge in the graph touches at least one node in S.
Example: S = red nodes.
The size of a vertex cover is the number of nodes in the cover. Example: |S| = 4.
Corresponding language:
VERTEX-COVER = { <G, k> : graph G contains a vertex cover of size k }
Example: <G, 4> is in VERTEX-COVER.
[Figure: graph G with a cover of 4 marked nodes]
Theorem: VERTEX-COVER is NP-complete
Proof:
1. VERTEX-COVER is in NP
Can be easily proven
2. Reduce 3SAT to VERTEX-COVER. Example:
(x1 v x2 v x3) (x1 v x2 v x4) (x1 v x3 v x4)   Clause 1, Clause 2, Clause 3; n = 4 variables, c = 3 clauses.
Formula P can be converted to a graph G such that P is satisfiable iff G has a vertex cover of size n + 2c:
a variable gadget per variable - an edge joining nodes xi and !xi;
a clause gadget per clause - a triangle on the clause's three literals;
and an edge from each clause-gadget literal to its matching variable-gadget node.
[Figure: variable gadgets x1 !x1 .. x4 !x4 above clause-gadget triangles for Clauses 1-3]
First direction in proof:
If P is satisfiable, then G contains a vertex cover of size k = n + 2c.
Example: P = (x1 v x2 v x3)(x1 v x2 v x4)(x1 v x3 v x4) with satisfying assignment x1 = 1, x2 = 0, x3 = 0, x4 = 1.
Select one satisfying literal in each clause gadget and include the remaining literals in the cover.
This is a vertex cover since every edge is adjacent to a chosen node.
[Figure: the chosen cover nodes marked on the variable and clause gadgets]
Explanation for general case:
Every edge is one of three types: inside a variable gadget, inside a clause gadget, or between a clause literal and its copy in a variable gadget.
To cover the internal gadget edges and satisfy k = n + 2c, choose one literal per variable gadget (n nodes) and two literals per clause gadget (2c chosen out of 3c).
For the variable assignment, choose the literals in the cover from the variable gadgets:
x1 = 1, x2 = 0, x3 = 0, x4 = 1
P = (x1 v x2 v x3)(x1 v x2 v x4)(x1 v x3 v x4) is satisfied with this assignment.
[Figure: the cover highlighted on the gadget graph]
P = (a v b v c) (b v c v d) (a v c v d)
[Figure: G = variable gadgets a !a b !b c !c d !d above one clause-gadget triangle per clause]
3SAT solution to VC solution
Given a solution for P, create a VC for G:
1. Put the TRUE literal in each variable gadget into the VC.
2. No need to cover this literal again in the clause gadgets, as it is already covered above.
3. Put the remaining two unsatisfied literals in each clause gadget into the VC.
4. So we have a VC of size n + 2c.
[Figure: the gadget graph with the cover marked]
VC solution to 3SAT solution
Given a VC solution, create a 3SAT solution for P:
1. G has a VC of size n + 2c, covering
A. One literal in each variable gadget: make this TRUE.
B. Two literals in each clause gadget: ignore them.
2. In each clause gadget, the 3rd literal is covered by a variable gadget and is TRUE, hence all clauses are TRUE.
3. Hence P is true.
[Figure: the gadget graph]
3SAT to Clique polynomial-time reduction
Given a 3CNF formula, where each clause has three literals, e.g.
(x1 v x2 v x3) (x3 v x5 v x6) (x3 v x6 v x4) (x4 v x5 v x6)
3CNF-SAT is the set { w : w is a satisfiable 3CNF formula }.
[Figure: a 5-clique in a graph G]
CLIQUE = { <G, k> : graph G contains a k-clique }
Theorem: 3SAT is polynomial time reducible to CLIQUE
3SAT to Clique reduction explained
Given a 3CNF formula:
We will build a gadget (small graph) for each clause.
Connect each literal in each clause to every other literal in other clauses.
Don't connect a literal to its negation: x to !x.
All the true literals will form a clique.
There is a clique of size equal to the number of clauses iff the formula is satisfiable.
Transform formula to graph.
Example:
(x1 v x2 v x4) (x1 v x2 v x4) (x1 v x2 v x3) (x2 v x3 v x4)
Create a node for each literal occurrence, grouped by clause (Clause 1 .. Clause 4), then add an edge between every pair of compatible literals in different clauses.
[Figure: the resulting graph on the 12 literal nodes]
A satisfying assignment x1 = 1, x2 = 0, x3 = 0, x4 = 1 picks one true literal node per clause; those nodes form a 4-clique.
[Figure: the 4-clique highlighted]
Proof:
a. CLIQUE is in NP
b. 3SAT is NP-complete
c. 3SAT is polynomial reducible to CLIQUE
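The reduction itself is a few lines of code. In this sketch a literal is a signed integer (-3 means !x3), a vertex is a (clause index, literal) pair, and a brute-force SAT check confirms the equivalence on tiny inputs (all names are ours):

```python
from itertools import combinations, product

def sat3_to_clique(clauses):
    """Build the CLIQUE instance: one vertex per literal occurrence,
    edges between non-contradictory literals in different clauses."""
    vertices = [(i, lit) for i, cl in enumerate(clauses) for lit in cl]
    edges = {frozenset({u, v})
             for u, v in combinations(vertices, 2)
             if u[0] != v[0] and u[1] != -v[1]}
    return vertices, edges, len(clauses)      # target clique size k = #clauses

def has_k_clique(vertices, edges, k):
    return any(all(frozenset({u, v}) in edges for u, v in combinations(s, 2))
               for s in combinations(vertices, k))

def brute_sat(clauses):
    """Try all truth assignments (exponential; only for checking)."""
    vars_ = sorted({abs(l) for cl in clauses for l in cl})
    for bits in product([False, True], repeat=len(vars_)):
        a = dict(zip(vars_, bits))
        if all(any(a[abs(l)] == (l > 0) for l in cl) for cl in clauses):
            return True
    return False
```

On any input, brute_sat(clauses) and has_k_clique(*sat3_to_clique(clauses)) agree, which is exactly the claim of the reduction.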
SAT Reduces to 3SAT
Recall: 3CNF-SAT is a formula where each clause has 3 distinct literals.
Case 1: clause Cj contains 1 literal, Cj = (x3). Add 2 new variables y1, y2 and replace Cj with 4 clauses:
C'j1 = (x3 v y1 v y2)
C'j2 = (x3 v y1 v !y2)
C'j3 = (x3 v !y1 v y2)
C'j4 = (x3 v !y1 v !y2)
SAT Reduces to 3SAT
Case 2: clause Cj contains exactly 3 literals: nothing to do.
Case 3: clause Cj contains 2 literals, Cj = (x3 v x7):
add 1 new variable y, and replace Cj with 2 clauses as follows:
C'j1 = (x3 v x7 v y)
C'j2 = (x3 v x7 v !y)
Case 4?
SAT Reduces to 3SAT
Case 4: clause Cj contains l >= 4 literals, e.g. Cj = (x1 v x3 v x4 v x5 v x6 v x9):
introduce l - 1 extra Boolean variables y1 .. y(l-1) and replace Cj with the chain of l clauses:
C'j1 = (x1 v x1 v y1)
C'j2 = (!y1 v x3 v y2)
C'j3 = (!y2 v x4 v y3)
C'j4 = (!y3 v x5 v y4)
C'j5 = (!y4 v x6 v y5)
C'j6 = (!y5 v x9 v x9)
SAT Reduces to 3SAT
Case 4: clause Cj = (tj1 v tj2 v ... v tjl) contains l terms.
Suppose on the LHS Cj is satisfied because tj4 is true (k = 4). Then on the RHS each of C'j1 .. C'jl can be made true by setting y1, y2, y3 TRUE and y4, ..., y(l-1) FALSE:
C'j1 = (tj1 v tj1 v y1)
C'j2 = (!y1 v tj2 v y2)
C'j3 = (!y2 v tj3 v y3)
C'j4 = (!y3 v tj4 v y4)
C'j5 = (!y4 v tj5 v y5)
...
C'jl = (!y(l-1) v tjl v tjl)
SAT Reduces to 3SAT
Proof of case 4. Suppose the SAT instance is satisfiable.
If the SAT assignment sets tjk = 1, then the 3-SAT assignment sets:
tjk = 1
ym = 1 for all m < k
ym = 0 for all m >= k
and every clause in the chain C'j1 .. C'jl above is satisfied.
SAT Reduces to 3SAT
Case 4 proof, converse. Suppose the 3-SAT instance is satisfiable; show SAT is true.
We claim tjk = 1 for some k in Cj, because:
the l - 1 new Boolean variables yi alone can make at most l - 1 of the l new clauses true (each yi appears positively in one chain clause and negated in the next),
so ONE remaining clause must be satisfied by an original term tjk.
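The four cases above can be written as a short transformation. In this sketch literals are signed integers and next_var is the first unused index for the new y variables (the representation is our choice):

```python
def to_3cnf(clauses, next_var):
    """Split each clause into equisatisfiable 3-literal clauses (cases 1-4)."""
    out = []
    for cl in clauses:
        if len(cl) == 1:                      # case 1: pad with y1, y2
            a, = cl
            y1, y2 = next_var, next_var + 1
            next_var += 2
            out += [[a, y1, y2], [a, y1, -y2], [a, -y1, y2], [a, -y1, -y2]]
        elif len(cl) == 2:                    # case 3: pad with one y
            a, b = cl
            y = next_var
            next_var += 1
            out += [[a, b, y], [a, b, -y]]
        elif len(cl) == 3:                    # case 2: nothing to do
            out.append(list(cl))
        else:                                 # case 4: chain with l-1 new y's
            ys = list(range(next_var, next_var + len(cl) - 1))
            next_var += len(cl) - 1
            out.append([cl[0], cl[0], ys[0]])
            for i in range(1, len(cl) - 1):
                out.append([-ys[i - 1], cl[i], ys[i]])
            out.append([-ys[-1], cl[-1], cl[-1]])
    return out
```

The output has only 3-literal clauses and is satisfiable iff the input is, which a brute-force check confirms on small formulas.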
3Coloring is NPC
3-Coloring a graph is NPC (hard).
2-Coloring a graph is in P (easy).
Reduce 3SAT to 3Color
[Figure: variable gadget - nodes x and !x joined by an edge. Case 1: x = True, !x = False; Case 2: x = False, !x = True]
For the variables build this gadget: color the center node Blue, connect it to the True and False nodes above it, and connect the variable nodes below it.
3-coloring this ensures that each variable x will be colored True or False only (and !x will be the opposite of x).
3SAT to 3Color
For each clause, build a clause gadget with its output connected to the color True.
The only way to 3-color it is to use the color True for at least one of the vertices at the top (so each clause is true).
Conversely, as long as at least one top vertex is colored True, the rest of the graph can be 3-colored.
[Figure: clause gadget for (x1 v x2 v x4) wired to the variable nodes and to the F / T / Blue triangle]
NP Complete
Homework
ILP is Integer Linear Programming
Given a set of variables {x1 .. xn} with integer values,
and a set of m linear constraints of the form:
a11*x1 + ... + a1n*xn  cmp  b1
...
am1*x1 + ... + amn*xn  cmp  bm
where a[1..m, 1..n] and b[1..m] are integers, and cmp is one of { <, >, >=, <=, = }.
Given an ILP problem, does it have an integer solution?
Q1. Reduce 3SAT to ILP
Given 3CNF SAT with boolean variables {b1 .. bn} and 3CNF clauses (b1 or b2 or b3) & ...
Write these as ILP constraints:
1. Each bi is 0 or 1 (true or false).
2. bi is the opposite of !bi.
3. Each clause (b1 or b2 or b3) has at least one true literal.
Hence ILP is NPC.
Q2. Reduce 3Color to ILP
Encode the color of each vertex as an integer in {c1, c2, c3}.
Add ILP constraints:
1. Each vertex has exactly one color.
2. For each edge (v1, v2), color(v1) != color(v2).
Hence ILP is NPC.
Q3. Reduce 2Partition to 3Partition
2Partition: Given a set of integer weights A = {w1 .. wn}, partitioning A into two sets of equal sums is NPC.
3Partition: Divide A into 3 sets of equal sums.
Hint: To solve 2Partition, add a dummy weight to A and call 3Partition. Afterwards remove the dummy weight from the 3 partitions.
Matrix Commands in Excel
Excel can perform some useful matrix operations:
Addition & subtraction
Scalar multiplication & division
Transpose (TRANSPOSE)
Matrix multiplication (MMULT)
Matrix inverse (MINVERSE)
Determinant of matrix (MDETERM)
As well as combinations of these operations.
Cells
Most Excel formulas require you to name one or more cell ranges, e.g. B2:C3.
You can type these in directly or select them using the mouse.
However, it is often better to use a named range.
Naming a group of Cells
To assign a name to a range of cells, highlight it using the mouse, choose Insert > Name > Define, and enter a name.
Choose a useful name.
Remember Excel does not distinguish between the names MATRIX1, Matrix1, and matrix1.
Entering a Matrix
Choose a location for the
matrix (or vector) and enter
the elements of the matrix.
Highlight the cells of the
matrix and choose INSERT >
NAME > DEFINE.
Enter a name for the matrix.
You can now use the name
of the matrix in formulae.
Matrix +- *
To add two named 3 x 2 matrices A and
B:
Highlight a blank 3 x 2 results area in the
spreadsheet. (If the results area is too
small, you will get the wrong answer.)
Type =A+B in the formula bar and press
the CONTROL-SHIFT-ENTER keys
simultaneously, to get the matrix result.
If you click on any cell in the result, the
formula {=A+B} will be displayed, the
{ } brackets indicate a matrix (array)
command.
Matrix Transpose
Suppose B is a 3 x 2 matrix.
B' = B transpose is a 2 x 3 matrix.
Select a 2 x 3 results area, type =TRANSPOSE(B) in the formula bar and press CTRL-SHIFT-ENTER.
Exercise: choose A and B so that AB exists, and check that (AB)' = B'A' using MMULT.
Matrix Multiplication
Suppose A and B are named
3 x 2 and 2 x 3 matrices.
Then A*B is 3 x 3 and B*A is 2
x 2. In general, AB != BA.
Select a blank 3 x 3 area for
the result A*B.
Type =MMULT(A,B) in the
formula bar and press CTRL-
SHIFT-ENTER to compute AB.
Matrix Inverse and determinant
Suppose B is a square 2 x 2 matrix.
Select a 2 x 2 area for the inverse of B.
Type =MINVERSE(B) in the formula bar and press CTRL-SHIFT-ENTER.
Find |B| = determinant of B using =MDETERM(B).
If the determinant |B| = 0, B is not invertible.
Exercise: let A and B be 3 x 3 with |A| != 0 and |B| != 0; compute and compare (AB)^-1 and B^-1 A^-1.
Outline
Geometric intuition for linear algebra
Matrices as linear transformations or as
sets of constraints
Linear systems & vector spaces
Solving linear systems
Eigenvalues & eigenvectors
Basic concepts
A vector in R^n is an ordered set of n real numbers,
e.g. v = (1, 6, 3, 4) is in R^4.
(1, 6, 3, 4) can be written as a column vector:
[1]
[6]
[3]
[4]
as opposed to a row vector: [1 6 3 4].
An m-by-n matrix is an object with m rows and n columns, each entry filled with a real number:
[1  2  8]
[4 78  6]
[9  3  2]
Basic concepts
Transpose: reflect the vector/matrix across the diagonal:
[a b]^T = [a]        [a b]^T = [a c]
          [b]        [c d]     [b d]
Note: (Ax)^T = x^T A^T. (We'll define multiplication soon.)
Vector norms:
The Lp norm of v = (v1, ..., vk) is (sum_i |vi|^p)^(1/p).
Common norms: L1, L2.
L-infinity = max_i |vi|.
The length of a vector v is L2(v).
Basic concepts
Vector dot product: u . v = (u1, u2) . (v1, v2) = u1*v1 + u2*v2.
Note: the dot product of u with itself is the square of the length of u.
Matrix product: for
A = [a11 a12]    B = [b11 b12]
    [a21 a22]        [b21 b22]
AB = [a11*b11 + a12*b21   a11*b12 + a12*b22]
     [a21*b11 + a22*b21   a21*b12 + a22*b22]
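These two definitions are direct to implement; a pure-Python sketch (for real work one would use NumPy, but the explicit loops show the formulas):

```python
def dot(u, v):
    """u . v = sum of u_i * v_i; dot(u, u) is the squared length of u."""
    return sum(a * b for a, b in zip(u, v))

def matmul(A, B):
    """(AB)_ij = sum over k of A_ik * B_kj, for A (m x n) and B (n x p)."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]
```

For example, matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]) follows the 2 x 2 entry formulas above.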
Vector products
Dot product (a scalar):
u . v = u^T v = [u1 u2] [v1] = u1*v1 + u2*v2
                        [v2]
Outer product (a matrix):
u v^T = [u1] [v1 v2] = [u1*v1  u1*v2]
        [u2]           [u2*v1  u2*v2]
Matrices as linear transformations
[5 0] [1]   [5]
[0 5] [1] = [5]    (stretching)

[0 -1] [1]   [-1]
[1  0] [1] = [ 1]    (rotation by 90 degrees)

[0 1] [1]   [0]
[1 0] [0] = [1]    (reflection across the line y = x)

[1 0] [1]   [1]
[0 0] [1] = [0]    (projection onto the x-axis)

[1 c] [x]   [x + cy]
[0 1] [y] = [  y  ]    (shearing)
Matrices as sets of constraints
x + y + z = 1
2x + y + z = 2
in matrix form:
[1 1 1] [x]   [1]
[2 1 1] [y] = [2]
        [z]
Special matrices
diagonal:        upper-triangular:
[a 0 0]          [a b c]
[0 b 0]          [0 d e]
[0 0 c]          [0 0 f]

tri-diagonal:    lower-triangular:
[a b 0 0]        [a 0 0]
[c d e 0]        [b c 0]
[0 f g h]        [d e f]
[0 0 i j]

I (identity matrix):
[1 0 0]
[0 1 0]
[0 0 1]
Vector spaces
Formally, a vector space is a set of vectors which is closed under addition and multiplication by real numbers.
A subspace is a subset of a vector space which is a vector space itself, e.g. the plane z = 0 is a subspace of R^3 (it is essentially R^2).
We'll be looking at R^n and subspaces of R^n.
Our notion of planes in R^3 may be extended to hyperplanes in R^n (of dimension n-1).
Null space examples:
[1 0] [u]   [0]
[2 3] [v] = [0]    Null space: {(0,0)}
[1 3]       [0]

[1 0 1] [x]   [0]
[2 3 5] [y] = [0]    Null space: {(c, c, -c)}
[1 3 4] [z]   [0]
Linear independence and basis
Vectors v1, ..., vk are linearly independent if
(c1*v1 + ... + ck*vk = 0) implies (c1 = ... = ck = 0),
i.e. the nullspace of the matrix with columns v1 .. vk is the origin:
[ |  |  |] [c1]   [0]
[v1 v2 v3] [c2] = [0]
[ |  |  |] [c3]   [0]
[Figure: vectors (1,0), (0,1), (1,1), (2,2) in the plane]
Recall the nullspace of
[1 0]
[2 3]
[1 3]
contained only (u, v) = (0, 0), i.e. its columns are linearly independent.
Linear independence and basis
If all vectors in a vector space may be expressed as linear combinations of v1, ..., vk, then v1, ..., vk span the space.
For example, (2,2,2) = 2(1,0,0) + 2(0,1,0) + 2(0,0,1), and also
(2,2,2) = 1.57(.9,.2,0) + 1.29(.3,1,0) + 2(.1,.2,1),
so both triples span R^3.
[Figure: the two spanning sets drawn in R^3]
Linear independence and basis
A basis is a set of linearly independent vectors which span the space.
The dimension of a space is the # of degrees of freedom of the space; it is the number of vectors in any basis for the space.
A basis is a maximal set of linearly independent vectors and a minimal set of spanning vectors.
E.g. {(1,0,0), (0,1,0), (0,0,1)} and {(.9,.2,0), (.3,1,0), (.1,.2,1)} are both bases of R^3.
Linear independence and basis
Two vectors are orthogonal if their dot product is 0.
An orthogonal basis consists of orthogonal vectors.
An ortho-normal basis consists of orthogonal vectors of unit length.
E.g. {(1,0,0), (0,1,0), (0,0,1)} is an ortho-normal basis; {(.9,.2,0), (.3,1,0), (.1,.2,1)} is a basis that is neither orthogonal nor normalized.
About subspaces
The rank of A is the dimension of the column space of A.
It also equals the dimension of the row space of A (the subspace of vectors which may be written as linear combinations of the rows of A).
Example:
[1 0]
[0 1]    m = 3, n = 2, rank r = 2
[0 0]
Null space, column space
[Figure: column space and null space of an example projection matrix]
Basis transformations
We may write v = (2,2,2) in terms of an alternate basis:
[2]   [1 0 0] [2]         [2]   [.9 .3 .1] [1.57]
[2] = [0 1 0] [2]   and   [2] = [.2  1 .2] [1.29]
[2]   [0 0 1] [2]         [2]   [ 0  0  1] [2   ]
[Figure: the two bases in R^3]
In general, if the columns of Q are the new basis vectors, the coordinates of v in that basis are c = (Q^T Q)^-1 Q^T v (which reduces to Q^T v when Q is orthonormal).
Special matrices
Matrix A is symmetric if A = A^T.
A is positive definite if x^T A x > 0 for all non-zero x (positive semi-definite if the inequality is not strict).
Example, with A = I:
          [1 0 0] [a]
[a b c] * [0 1 0] [b] = a^2 + b^2 + c^2 > 0
          [0 0 1] [c]
Special matrices
Note: any matrix of the form A^T A is positive semi-definite, because
x^T (A^T A) x = (x^T A^T)(A x) = (Ax)^T (Ax) >= 0.
Determinants
If det(A) = 0, then A is singular.
If det(A) != 0, then A is invertible.
det [a b] = ad - bc
    [c d]
Determinants
An m-by-n matrix A is rank-deficient if A has rank r < m (<= n).
Theorem: rank(A) < r iff det(S) = 0 for every t-by-t submatrix S with r <= t <= m.
Eigenvalues & eigenvectors
How can we characterize matrices?
By the solutions to Ax = lambda*x, the eigenpairs (lambda, x) = (eigenvalue, eigenvector), where x is non-zero.
To solve this, rewrite as (A - lambda*I) x = 0:
lambda is an eigenvalue iff det(A - lambda*I) = 0.
The eigenvectors of a matrix are the (unit) vectors that satisfy Ax = lambda*x.
Eigenvalues
(A - lambda*I) x = 0
lambda is an eigenvalue iff det(A - lambda*I) = 0.
Example:
A = [1  4   5 ]
    [0 3/4  6 ]
    [0  0  1/2]
det(A - lambda*I) = (1 - lambda)(3/4 - lambda)(1/2 - lambda) = 0
lambda = 1, 3/4, 1/2
Eigenvector example
A = [2 0]    Eigenvalues = 2, 1 with eigenvectors (1,0), (0,1).
    [0 1]
Eigenvectors of a linear transformation A are not rotated (but will be scaled by the corresponding eigenvalue) when A is applied.
[Figure: v = (1,0) maps to Av = (2,0); (0,1) is fixed]
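For a 2 x 2 matrix the characteristic equation det(A - lambda*I) = 0 is just a quadratic, lambda^2 - trace(A)*lambda + det(A) = 0, so the eigenvalues can be computed directly (this sketch assumes real eigenvalues; the function name is ours):

```python
from math import sqrt

def eig2(A):
    """Eigenvalues of a 2x2 matrix via the characteristic quadratic."""
    (a, b), (c, d) = A
    tr = a + d                       # trace
    det = a * d - b * c              # determinant
    disc = sqrt(tr * tr - 4 * det)   # assumes a non-negative discriminant
    return (tr + disc) / 2, (tr - disc) / 2
```

On the example above, eig2([[2, 0], [0, 1]]) gives the eigenvalues 2 and 1.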
Solving Ax=b
x + 2y +  z = 0      [1 2  1 | 0]    Write the system of
     y -  z = 2      [0 1 -1 | 2]    equations in matrix form.
x      + 2z = 1      [1 0  2 | 1]
-------------
x + 2y +  z = 0      [1  2  1 | 0]   Subtract the first row
     y -  z = 2      [0  1 -1 | 2]   from the last row.
  -2y +  z = 1       [0 -2  1 | 1]
-------------
x + 2y +  z = 0      [1 2  1 | 0]    Add 2 copies of the second
     y -  z = 2      [0 1 -1 | 2]    row to the last row.
       -  z = 5      [0 0 -1 | 5]
Now solve by back-substitution: z = -5, y = 2 + z = -3, x = -2y - z = 11.
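The elimination-and-back-substitution procedure above can be sketched in a few lines of Python (no pivoting, so it assumes the diagonal entries stay non-zero; the function name is ours):

```python
def gauss_solve(A, b):
    """Gaussian elimination with back-substitution (no pivoting; sketch)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix [A | b]
    for i in range(n):                             # forward elimination
        for j in range(i + 1, n):
            f = M[j][i] / M[i][i]
            M[j] = [x - f * y for x, y in zip(M[j], M[i])]
    x = [0.0] * n
    for i in reversed(range(n)):                   # back-substitution
        x[i] = (M[i][n] - sum(M[i][k] * x[k] for k in range(i + 1, n))) / M[i][i]
    return x
```

Running it on the system above, gauss_solve([[1, 2, 1], [0, 1, -1], [1, 0, 2]], [0, 2, 1]) reproduces x = 11, y = -3, z = -5.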
Solving Ax=b & condition numbers
Matlab: linsolve(A,b)
How stable is the solution?
If A or b are changed slightly, how much does it affect x?
The condition number c of A measures this:
c = sigma_max / sigma_min (the ratio of A's largest and smallest singular values).
Values of c near 1 are good.
Linear Algebra in Excel: the Grid
Excel files have extension .xls or .xlsx (2007).
A workbook is a 2D matrix/grid of CELLs containing data.
Cells are named by Column+Row, e.g. A1, B5, I2.
Drag the cell corner to copy cells.
Matrix range: e.g. B5:C6.
Sheets: 1, 2, ...
Using Excel
1. Enter the numbers (A, b).
2. Name the matrices (A) and arrays (x, b).
3. Call math and matrix functions on your data to compute x = A^-1 b.
4. Compute A^-1.
Linear Algebra in Excel
Type your matrix into Excel cells.
Select the matrix and give it a name, e.g. 'A', 'b', 'x'.
Select the cells you want the result in.
Click on 'fx' to enter a formula for the result:
find your function in the list,
type in the symbolic arguments,
and press Control-Shift-Enter, not 'OK', if the result is a matrix.
Solving Ax=b in Excel
Select x (E2:E4), click on fx.
Compute x with =MMULT(MINVERSE(A), b),
pressing Control-Shift-Enter instead of OK.
LU Decomposition
LUD of a matrix is a method to solve a set of simultaneous linear equations.
Which is better: Gauss Elimination or LUD?
            [ 1    0   0] [u11 u12 u13]
A = L U =   [l21   1   0] [ 0  u22 u23]
            [l31 l32   1] [ 0   0  u33]
LUD Method
[A] = [L][U]
where
[L] = lower triangular matrix
[U] = upper triangular matrix
The elimination work costs roughly CT = T(8n^3/3 + 12n^2 + 4n/3) for either method, where T = clock cycle time and n = size of the matrix; LUD pays off when solving many right-hand sides with the same A, since the decomposition is done only once.
[U] is the same as the coefficient matrix at the end of the forward elimination step.
[L] is obtained using the multipliers that were used in the forward elimination process.
LUD Worked Example
Using the Forward Elimination Procedure of Gauss Elimination on
A = [ 25  5 1]
    [ 64  8 1]
    [144 12 1]
Finding the [U] matrix
Step 1:
multiplier = 64/25 = 2.56;  Row2 = Row2 - 2.56*Row1:
[ 25   5     1   ]
[  0  -4.8  -1.56]
[144  12     1   ]
multiplier = 144/25 = 5.76;  Row3 = Row3 - 5.76*Row1:
[ 25   5     1   ]
[  0  -4.8  -1.56]
[  0 -16.8  -4.76]
Finding the [U] Matrix
Matrix after Step 1:
[25   5     1   ]
[ 0  -4.8  -1.56]
[ 0 -16.8  -4.76]
Step 2:
multiplier = -16.8 / -4.8 = 3.5;  Row3 = Row3 - 3.5*Row2:
[25   5     1   ]
[ 0  -4.8  -1.56]
[ 0   0     0.7 ]
U = [25   5     1   ]
    [ 0  -4.8  -1.56]
    [ 0   0     0.7 ]
Finding the [L] matrix
L = [ 1    0   0]
    [l21   1   0]
    [l31 l32   1]
Using the multipliers from forward elimination:
L = [1     0    0]
    [2.56  1    0]
    [5.76  3.5  1]
Does [L][U] = [A]?
L U = [1     0    0] [25   5     1   ]   [ 25  5 1]
      [2.56  1    0] [ 0  -4.8  -1.56] = [ 64  8 1]
      [5.76  3.5  1] [ 0   0     0.7 ]   [144 12 1]
Using LU Decomposition
Solve the following set of linear equations using LU decomposition:

    [  25   5   1 ] [ x1 ]   [ 106.8 ]
    [  64   8   1 ] [ x2 ] = [ 177.2 ]
    [ 144  12   1 ] [ x3 ]   [ 279.2 ]

             [ 1     0    0 ] [ 25   5     1    ]
    A = LU = [ 2.56  1    0 ] [  0  -4.8  -1.56 ]
             [ 5.76  3.5  1 ] [  0   0     0.7  ]

Example
Set [L][Z] = [C]:
    [ 1     0    0 ] [ z1 ]   [ 106.8 ]
    [ 2.56  1    0 ] [ z2 ] = [ 177.2 ]
    [ 5.76  3.5  1 ] [ z3 ]   [ 279.2 ]

Forward substitution:
    z1 = 106.8
    z2 = 177.2 - 2.56 z1 = 177.2 - 2.56(106.8) = -96.21
    z3 = 279.2 - 5.76 z1 - 3.5 z2 = 279.2 - 5.76(106.8) + 3.5(96.21) = 0.76

Example
Set [U][X] = [Z]:
    [ 25   5     1    ] [ x1 ]   [ 106.8  ]
    [  0  -4.8  -1.56 ] [ x2 ] = [ -96.21 ]
    [  0   0     0.7  ] [ x3 ]   [  0.76  ]

Back substitution:
    x3 = 0.76 / 0.7 = 1.086
    x2 = (-96.21 + 1.56 x3) / (-4.8) = (-96.21 + 1.56(1.086)) / (-4.8) = 19.69
To get the solution, substitute x3 and x2 into the first equation:
    x1 = (106.8 - 5 x2 - x3) / 25 = 0.29
Hence the solution vector is [x1, x2, x3] = [0.29, 19.69, 1.086].
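The two substitution passes can be sketched in Python, reusing the [L] and [U] factors found above (full precision gives x close to [0.29, 19.69, 1.09]):

```python
def lu_solve(L, U, b):
    """Solve LUx = b: forward-substitute Lz = b, then back-substitute Ux = z."""
    n = len(b)
    z = [0.0] * n
    for i in range(n):                      # forward substitution
        z[i] = b[i] - sum(L[i][j] * z[j] for j in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):          # back substitution
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (z[i] - s) / U[i][i]
    return x

L = [[1.0, 0.0, 0.0], [2.56, 1.0, 0.0], [5.76, 3.5, 1.0]]
U = [[25.0, 5.0, 1.0], [0.0, -4.8, -1.56], [0.0, 0.0, 0.7]]
b = [106.8, 177.2, 279.2]
x = lu_solve(L, U, b)   # approximately [0.2905, 19.69, 1.086]
```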
Example: Inverse of a Matrix
Using the decomposition procedure, the [L] and [U] matrices are found to be
             [ 1     0    0 ] [ 25   5     1    ]
    A = LU = [ 2.56  1    0 ] [  0  -4.8  -1.56 ]
             [ 5.76  3.5  1 ] [  0   0     0.7  ]
Solving for each column of the inverse [B] requires two steps:
    Solve [L][Z] = [C] for [Z]
    Solve [U][X] = [Z] for [X]

Step 1 (first column, C = first unit vector):
    [ 1     0    0 ] [ z1 ]   [ 1 ]
    [ 2.56  1    0 ] [ z2 ] = [ 0 ]
    [ 5.76  3.5  1 ] [ z3 ]   [ 0 ]
This generates the equations:
    z1 = 1
    2.56 z1 + z2 = 0
    5.76 z1 + 3.5 z2 + z3 = 0
Solving for [Z]:
    z1 = 1
    z2 = 0 - 2.56(1) = -2.56
    z3 = 0 - 5.76(1) - 3.5(-2.56) = 3.2

Step 2: solving [U][X] = [Z] for [X], the first column b11, b21, b31 of the inverse:
    [ 25   5     1    ] [ b11 ]   [  1    ]
    [  0  -4.8  -1.56 ] [ b21 ] = [ -2.56 ]
    [  0   0     0.7  ] [ b31 ]   [  3.2  ]
Back substitution gives:
    b31 = 3.2 / 0.7 = 4.571
    b21 = (-2.56 + 1.56(4.571)) / (-4.8) = -0.9524
    b11 = (1 - 5(-0.9524) - 4.571) / 25 = 0.04762
Geometric solution

maximize    3x + 5y
subject to  x +  y <= 4
            x + 3y <= 6
            x >= 0, y >= 0

[figure: the feasible region in the (x, y) plane, with corner points
(0,0), (4,0), (3,1), (0,2); the line x + y = 4 forms its upper-right edge]
Extreme Point Theorem Any LP
problem with a nonempty bounded
feasible region has an optimal solution;
moreover, an optimal solution can always
be found at an extreme point of the
problem's feasible region.
[figure: objective lines 3x + 5y = 10, 14, 20 swept across the feasible
region; the last corner touched is (3, 1)]
The optimum is at the extreme point (3, 1), with value 3(3) + 5(1) = 14.
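The extreme point theorem suggests a direct check: evaluate the objective at each corner of the feasible region (a minimal sketch for the problem above):

```python
# Extreme points of {x + y <= 4, x + 3y <= 6, x >= 0, y >= 0}:
corners = [(0, 0), (4, 0), (3, 1), (0, 2)]

def feasible(x, y):
    return x + y <= 4 and x + 3 * y <= 6 and x >= 0 and y >= 0

assert all(feasible(x, y) for x, y in corners)
# The optimum of 3x + 5y is attained at one of the corners:
best = max(corners, key=lambda p: 3 * p[0] + 5 * p[1])
print(best, 3 * best[0] + 5 * best[1])  # -> (3, 1) 14
```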
Possible outcomes in solving
an LP problem
optimal: the problem has a finite optimal solution, which may not
be unique.
unbounded: the objective function of
maximization (minimization) LP problem is
unbounded from above (below) on its
feasible region.
infeasible: there are no points satisfying all
the constraints, i.e. the constraints are
contradictory.
Graphing 2-Dimensional LPs

Example 1:
Maximize    x + y
Subject to: x + 2y >= 2
            x <= 3
            y <= 4
            x >= 0, y >= 0
[figure: the feasible region on axes 0..3 and 0..4; simplex moves from
an initial corner point to the optimal corner point]
Graphing 2-D LP: unique solution

Example 1:
Maximize    x + y
Subject to: x + 2y >= 2
            x <= 3
            y <= 4
            x >= 0, y >= 0
[figure: the unique optimal solution is a single corner of the
feasible region]
Graphing LP: multiple solutions

Example 2:
Minimize    x - y
Subject to: (1/3)x + y <= 4
            -2x + 2y <= 4
            x <= 3
            x >= 0, y >= 0
[figure: every point on the edge -2x + 2y = 4 of the feasible region is
optimal, so there are multiple optimal solutions]
Graphing 2-D LP

Example 3:
Minimize    x + (1/3)y
Subject to: x + y >= 20
            -2x + 5y <= 150
            x >= 5
            x >= 0, y >= 0
[figure: axes 0..40; the optimal solution sits at a corner of the
feasible region]
Graphing 2-D LP: unbounded

Example 4:
Maximize    x + (1/3)y
Subject to: x + y >= 20
            -2x + 5y <= 150
            x >= 5
            x >= 0, y >= 0
[figure: the feasible region is unbounded and the objective increases
without limit, so there is no finite optimal solution]
Optimal lies on a corner
Extreme point
[figure: the feasible regions from the examples above, each with its
optimum at an extreme point]
Extend to Higher Dimensions
The constraints of an LP give rise to a polytope.

To solve an LP:
maximize    c1 x1 + ... + cn xn
subject to  ai1 x1 + ... + ain xn = bi,   i = 1, ..., m,
            x1 >= 0, ..., xn >= 0
Every LP problem can be represented in such a form.

In matrix terms:
    max  c^T x
    subject to Ax <= b
Slack variables
    x +  y <= 4    ->    x +  y + u = 4
    x + 3y <= 6    ->    x + 3y + v = 6

Example
(0, 0, 4, 6) is a basic feasible solution
(x, y are non-basic; u, v are basic)

Simplex Tableau
maximize    z = 3x + 5y + 0u + 0v
subject to  x +  y + u     = 4
            x + 3y     + v = 6

        x    y   u   v  |           x    y   u    v   |           x   y   u     v   |
    u   1    1   1   0  | 4     u  2/3   0   1  -1/3  | 2     x   1   0   3/2 -1/2  | 3
    v   1    3   0   1  | 6     y  1/3   1   0   1/3  | 2     y   0   1  -1/2  1/2  | 1
    z  -3   -5   0   0  | 0     z -4/3   0   0   5/3  | 10    z   0   0   2    1    | 14

Pivot first on the y column (the ratio test picks row v), then on the
x column; the final tableau gives the optimum x = 3, y = 1, z = 14.
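The tableau pivots above can be automated. Below is a minimal dense-tableau simplex sketch for max c.x subject to Ax <= b, x >= 0 with b >= 0; it has no anti-cycling or unboundedness handling, so it is a teaching sketch, not a production solver:

```python
def simplex(c, A, b):
    """Dense-tableau simplex for: max c.x  s.t.  A x <= b, x >= 0 (b >= 0).
    Tableau rows: [A | I | b]; objective row: [-c | 0 | 0]."""
    m, n = len(A), len(c)
    T = [A[i][:] + [1.0 if j == i else 0.0 for j in range(m)] + [b[i]]
         for i in range(m)]
    T.append([-v for v in c] + [0.0] * (m + 1))
    basis = list(range(n, n + m))          # the slacks start as the basis
    while True:
        col = min(range(n + m), key=lambda j: T[m][j])   # entering variable
        if T[m][col] >= -1e-9:
            break                                        # optimal
        row = min((i for i in range(m) if T[i][col] > 1e-9),
                  key=lambda i: T[i][-1] / T[i][col])    # ratio test
        piv = T[row][col]
        T[row] = [v / piv for v in T[row]]               # scale pivot row
        for i in range(m + 1):
            if i != row and T[i][col]:
                f = T[i][col]
                T[i] = [a - f * p for a, p in zip(T[i], T[row])]
        basis[row] = col
    x = [0.0] * n
    for i, bv in enumerate(basis):
        if bv < n:
            x[bv] = T[i][-1]
    return x, T[m][-1]

x, z = simplex([3.0, 5.0], [[1.0, 1.0], [1.0, 3.0]], [4.0, 6.0])
print(x, z)   # optimum near x = [3, 1], z = 14
```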
Faster LP Algorithms
Ellipsoid (Khachiyan 1979, 1980)
  Solvable in polynomial time: O(n^4 L) bit operations,
  where n = # variables, L = # bits in input.
  Theoretical tour de force.
  Not remotely practical.
Karmarkar's algorithm (Karmarkar 1984)
  O(n^3.5 L).
  Polynomial, and reasonably efficient implementations possible.
Interior point algorithms
  O(n^3 L).
  Competitive with simplex!
  Dominates simplex for large problems.
  Extends to even more general problems.
Theory
The feasible region is convex and bounded by hyperplanes.
Feasible vectors that satisfy N of the original constraints with
equality are termed feasible basic vectors.
The optimum occurs on the boundary (the gradient vector of the
objective function is always nonzero).
Combinatorial problem: determining which N constraints (out of the
N+M constraints) are tight at the optimal feasible vector.
Duality
Primal problem Dual problem
5
Duality
If a linear program has a finite optimal
solution then so does the dual.
Furthermore, the optimal values of the two
programs are the same.
If either problem has an unbounded
optimal solution, then the other problem
has no feasible solutions.
6
But an Integer Program is different and much harder
1. The feasible region is a set of discrete points.
2. Solving it as an LP relaxation only provides a bound on the
   integer optimum.

Solve with glpk
Solver output for the example below:
x1.val = 8
x2.val = 4
x3.val = 0
Model has been successfully processed
Solve with Mathematica
In[9] = Maximize[ {
3 x1 + x2 + 2 x3,
x1 + x2 + 3 x3 <= 30 &&
2 x1 + 2 x2 + 5 x3 <= 24 &&
4 x1 + x2 + 2 x3 <= 36,
x1 >= 0 && x2 >= 0 && x3 >= 0 },
{ x1, x2,x3} ]
Maple> with(simplex);
maximize(3*x1+x2+2*x3, {
x1+ x2+3*x3<=30,
2*x1+2*x2+5*x3<=24,
4*x1+ x2+2*x3<=36
}, NONNEGATIVE);
{x3 = 0, x1 = 8, x2 = 4}
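A quick sanity check of the solvers' answer in Python (the optimal value, 28, is not printed by Maple above, so it is computed here):

```python
# The LP solved by Mathematica/Maple above:
#   max  3 x1 + x2 + 2 x3
#   s.t. x1 + x2 + 3 x3  <= 30
#        2 x1 + 2 x2 + 5 x3 <= 24
#        4 x1 + x2 + 2 x3 <= 36,  x1, x2, x3 >= 0
x1, x2, x3 = 8, 4, 0   # the solvers' answer
assert x1 + x2 + 3 * x3 <= 30
assert 2 * x1 + 2 * x2 + 5 * x3 <= 24    # tight
assert 4 * x1 + x2 + 2 * x3 <= 36        # tight
print(3 * x1 + x2 + 2 * x3)   # objective value -> 28
```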
Using python pysimplex
C:\> cat example.py
Constraints:
6 acres of land:       3x +  y <= 6
6 tons of fertilizer:  2x + 3y <= 6
8 hour work day:        x + 5y <= 8
So profit is
z = 2*x + 1*y
Enter the formula for the objective:
B1 = B3*B4 + C3*C4
Enter the formulas for the constraints:
D6=SUMPRODUCT(B6:C6,$B$4:$C$4)
D7=SUMPRODUCT(B7:C7,$B$4:$C$4)
D8=SUMPRODUCT(B8:C8,$B$4:$C$4)
Tools >
Solver
Solver > Options
Defaults
solver options
are good.
Solved
x = 1.714
y = 0.857
z = 4.286
Answer report
Sensitivity report
Limits Report
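The solver's answer can be verified exactly with fractions: the optimum sits where the land and fertilizer constraints are tight:

```python
from fractions import Fraction as F

# Intersection of the two tight constraints 3x + y = 6 and 2x + 3y = 6
# is x = 12/7, y = 6/7, which Excel reports as 1.714, 0.857.
x, y = F(12, 7), F(6, 7)
assert 3 * x + y == 6 and 2 * x + 3 * y == 6   # both constraints tight
assert x + 5 * y <= 8                          # work-day constraint satisfied
z = 2 * x + y                                  # profit = 30/7
print(float(x), float(y), float(z))
```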
Approximation Algorithms
Cormen chapter 35
1
NP-completeness is not the end
"The problem is NP-Complete, no one can solve it quickly. I give up!"
"Use Approximation Algorithms. See Cormen's and Vazirani's books."
2
Coping With NP-Hardness
Brute-force algorithms.
Develop clever enumeration strategies.
Guaranteed to find optimal solution.
No guarantees on running time.
Heuristics
Develop intuitive algorithms.
Guaranteed to give some correct solution in polynomial time.
No guarantees on quality of solution.
4
Approx ratio rho(n)
An AA has an approximation ratio rho(n) if for any input of size n,
    max(C/C*, C*/C) <= rho(n)
where C  = cost of the solution produced by the AA,
      C* = cost of an optimal solution > 0.
The same definition covers both minimization and maximization
problems.
5
Definitions: PTAS, FPTAS
Approximation Scheme: an AA that takes eps > 0 as input and is a
(1 + eps)-approximation algorithm.
That is, we can get as near to the optimal as we wish.
The scheme is fully polynomial (FPTAS) if its running time is
polynomial in both n and 1/eps.
6
AA examples
Vertex cover, using greedy.
TSP, using MST.
General TSP, Hamiltonian path: no constant-factor AA (unless P = NP).
Set Cover, greedy gives a logarithmic ratio.
MAX-3CNF, randomized 7/8 algorithm.
Weighted vertex cover, using LP.
Subset sum (partition), FPTAS via list trimming.
7
Approximation Algorithm for Vertex Cover using LP
Cormen chapter 35
1
Approximating weighted vertex cover using linear programming.
Minimum-weight VC problem:
Given an undirected graph G = (V, E), where each v in V has an
associated positive weight w(v). For any vertex cover V' (a subset
of V), w(V') = sum of w(v) over v in V'.
The goal is to find a vertex cover of minimum weight.
Associate a variable x(v) with each v in V, and let x(v) in {0, 1}.
x(v) = 1 means v is in the VC; x(v) = 0 means it is not.
For each edge (u, v), at least one of u and v must be in the VC.
Thus x(u) + x(v) >= 1.

0-1 linear program:
    min   sum over v in V of w(v) x(v)
    s.t.  x(u) + x(v) >= 1    for each (u, v) in E
          x(v) in {0, 1}      for each v in V

Linear-programming relaxation:
    min   sum over v in V of w(v) x(v)
    s.t.  x(u) + x(v) >= 1    for each (u, v) in E
          x(v) <= 1           for each v in V
          x(v) >= 0           for each v in V
Cormen chapter 35
1
Vertex Cover
2
Vertex Cover AA
Approx-VC(G)
  C = {}
  E' = E[G]
  while E' != {} do
    pick edge (u,v) in E'
    C = C U {u,v}
    remove from E' every edge incident on either u or v
  return C   // cover
3
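A minimal Python version of Approx-VC; the edge list for the example graph a..g below is an assumption, taken from the usual CLRS figure:

```python
def approx_vertex_cover(edges):
    """2-approximation: repeatedly pick any remaining edge (u, v),
    add both endpoints to the cover, and delete all covered edges."""
    cover = set()
    remaining = list(edges)
    while remaining:
        u, v = remaining[0]
        cover.update((u, v))
        remaining = [(a, b) for (a, b) in remaining
                     if a not in cover and b not in cover]
    return cover

# Edge set assumed from the usual CLRS 7-vertex example graph a..g:
edges = [('a','b'), ('b','c'), ('c','d'), ('c','e'),
         ('d','e'), ('d','f'), ('d','g'), ('e','f')]
C = approx_vertex_cover(edges)
assert all(u in C or v in C for u, v in edges)   # C is a vertex cover
assert len(C) % 2 == 0   # built from disjoint edges, so |C| = 2|A|
```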
Example: Vertex cover AA
[figures: the 7-vertex graph a..g; successive frames show an edge
being picked, its endpoints added to the cover, and the incident
edges removed]
VC AA Ratio is 2
The VC AA finds at most twice the optimal number of vertices.
Proof:
Let A = set of edges selected by the AA.
Notice that no two edges in A have a common endpoint,
so our solution size is |C| = 2|A|.
Any vertex cover, including an optimal one C*, must contain at least
one endpoint of each edge in A, and these endpoints are distinct,
so |C*| >= |A|. Hence |C| = 2|A| <= 2|C*|.
Cormen chapter 35
1
Triangle Inequality
Given a complete, undirected graph.
For all vertices u, v, w, require:
    c(u, w) <= c(u, v) + c(v, w)
TSP (with Triangle Inequality)
TSP remains NP-complete even with the triangle inequality.

Approx-TSP(G, c)
  select r in V[G]
  call MST-Prim(G, c, r) to construct MST with root r
  let L = vertices on preorder walk of MST
    // This step needs the triangle inequality to skip duplicate visits.
  let H = cycle that visits vertices in the order L
  return H
3
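A sketch of Approx-TSP in Python for Euclidean points; the coordinates below are made up, and Prim's algorithm here is the simple unoptimized version:

```python
import math

def approx_tsp(points):
    """MST-based 2-approximation for metric TSP: build an MST (Prim),
    then return the preorder walk of the tree as the tour."""
    n = len(points)
    d = lambda i, j: math.dist(points[i], points[j])
    in_tree, parent = {0}, {}              # Prim's algorithm from vertex 0
    while len(in_tree) < n:
        j, i = min(((j, i) for j in range(n) if j not in in_tree
                    for i in in_tree), key=lambda p: d(p[0], p[1]))
        in_tree.add(j)
        parent[j] = i
    children = {i: [] for i in range(n)}
    for j, i in parent.items():
        children[i].append(j)
    tour, stack = [], [0]                  # preorder (DFS) walk of the MST
    while stack:
        v = stack.pop()
        tour.append(v)
        stack.extend(reversed(children[v]))
    return tour                            # implicitly closes back to vertex 0

points = [(0, 0), (1, 3), (2, 1), (4, 0), (3, 3), (5, 2)]  # made up
tour = approx_tsp(points)
assert sorted(tour) == list(range(len(points)))  # visits every city once
```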
Example TSP AA
[figures: sample points; the MST built by Prim; the pre-order walk of
the MST; the computed tour; the optimal tour]
Claim: Approximation Ratio is 2
Proof:
Let H* = optimal tour, T = MST.
Deleting an edge of H* yields a spanning tree, so c(T) <= c(H*).
Consider the full walk W of T, e.g. a,b,c,b,h,b,a,d,e,f,e,g,e,d,a.
W visits each edge twice, so c(W) = 2c(T). Hence c(W) <= 2c(H*).
Deleting all but the first appearance of each vertex yields the
preorder walk, e.g. a,b,c,h,d,e,f,g. The corresponding cycle H is W
with some edges removed, e.g. a,b,c,h,d,e,f,g,a.
By the triangle inequality we can delete any vertex from W and the
cost does not increase, so c(H) <= c(W).
This implies c(H) <= 2c(H*).
General TSP has no AA
Theorem 35.3: If P != NP, then for any constant approximation ratio
rho >= 1, there is no polynomial-time AA with ratio rho for the
general TSP.
Proof:
Suppose there were a polynomial-time AA, A, with approximation ratio rho.
WLOG, assume rho is an integer.
We use A to solve the Hamiltonian Circuit (HC) Problem in polynomial time.

Proof: solving HC using the TSP AA.
Given G = (V, E), an instance of HC,
create a new complete graph G' = (V, E') on V.
For each (u, v) in E', the cost is:
    c(u, v) = 1             if (u, v) in E
              rho|V| + 1    otherwise (very large)

Proof continued
If G has a HC, then G' has a tour of cost |V|, and A must return a
tour of cost at most rho|V|.
If G does not have a HC, then any tour in G' uses an edge not in E,
and costs at least (rho|V| + 1) + (|V| - 1) > rho|V|.
So A returns a tour of cost at most rho|V| iff G has a HC.
Randomization: MAX-3CNF
A randomized algorithm has an approximation ratio of (n) as before,
but C is interpreted as an expected cost.
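Since each clause with three distinct literals is falsified by exactly 1 of the 8 assignments to its variables, a uniformly random assignment satisfies 7/8 of the clauses in expectation. A sketch checking this by exhaustive enumeration on a small made-up formula:

```python
from itertools import product

# A clause is a tuple of literals; literal +i means variable i, -i means NOT i.
# Made-up 3-CNF formula with 4 clauses over 4 variables:
clauses = [(1, 2, 3), (-1, 2, -3), (1, -2, 4), (-2, -3, -4)]
nvars = 4

def satisfied(clause, assign):
    return any(assign[abs(l)] == (l > 0) for l in clause)

# Average the number of satisfied clauses over all 2^nvars assignments.
total = 0
for bits in product([False, True], repeat=nvars):
    assign = dict(zip(range(1, nvars + 1), bits))
    total += sum(satisfied(c, assign) for c in clauses)

expected = total / 2 ** nvars
print(expected)   # -> 3.5, i.e. (7/8) * 4 clauses
```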
Greedy Set Cover
Example
[figures: a universe of points and sets S1..S6; in each frame the
greedy algorithm picks the set covering the most uncovered elements,
until the universe is covered]
Proof: Let
C = set cover returned by the algorithm.
C* = optimal S.C.
Si = ith set selected by the algorithm.
Claim: Approx. Ratio is Logarithmic
When the greedy algorithm selects a set Si, charge each newly covered
element x a cost cx = 1/(number of elements newly covered by Si); the
frames above charge costs like 1, 1/2, 1/3 per element.
Each selected set pays a total cost of 1, so
    |C| = sum_{x} cx <= sum_{S in C*} sum_{x in S} cx.
By the claim below, sum_{x in S} cx <= H(|S|), hence
    |C| <= sum_{S in C*} H(|S|) <= |C*| H(max{|s| : s in F}).

Claim: for every set S in F, sum_{x in S} cx <= H(|S|).
Proof: let
    ui = |S - (S1 u S2 u ... u Si)|
       = number of uncovered elements in S after S1, ..., Si have been
         selected,
and let u0 = |S|. Then
    sum_{x in S} cx  =  sum_{i=1..k} (u_{i-1} - u_i) / |Si - (S1 u ... u S_{i-1})|
                    <=  sum_{i=1..k} (u_{i-1} - u_i) / u_{i-1}    (greedy choice)
                    <=  sum_{i=1..k} (H(u_{i-1}) - H(u_i))
                     =  H(u_0) - H(u_k)
                     =  H(u_0) - H(0)
                     =  H(|S|)
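The greedy algorithm itself is short; the set family below is an assumption, taken from the usual CLRS set-cover figure (S1..S6 over 12 points):

```python
def greedy_set_cover(universe, sets):
    """Greedy: repeatedly pick the set covering the most uncovered
    elements. Achieves an H(max |S|) approximation ratio."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(sets, key=lambda s: len(s & uncovered))
        cover.append(best)
        uncovered -= best
    return cover

universe = range(1, 13)
# Family assumed from the usual CLRS figure:
family = [set(s) for s in ([1,2,3,4,5,6], [5,6,8,9], [1,4,7,10],
                           [2,5,7,8,11], [3,6,9,12], [10,11])]
C = greedy_set_cover(universe, family)
assert set().union(*C) >= set(universe)   # everything is covered
print(len(C))   # greedy uses 4 sets here; the optimum uses 3
```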
Approximation Algorithm for Subset Sum
Cormen chapter 35
Recall: Bin packing problem is NPC
Can k bins, of capacity t each, store a finite set of
weights S = {x1, ..., xn}?
Example:
We have k=4 bins with capacity t=10 each,
and weights S = {1,9,3,7,6,4,1,2,2,5}.
How do we pack S into the k bins?
Subset-Sum Problem
Problem: Pack as many of S into a box of size t as possible, without
overflow.

Exact-SubsetSum(S, t)    // Exponential, because Length(L_i) can be 2^i.
  n = |S|
  L_0 = [0]
  for i = 1 to n do
    L_i = Merge(L_{i-1}, L_{i-1} + x_i)    /* like Merge Sort */
    remove from L_i every element > t
  return largest element in L_n

Example
L_i = Merge(L_{i-1}, L_{i-1} + x_i)    /* like Merge Sort */
e.g. Merge([0, 1, 3], [0+2, 1+2, 3+2]) = [0, 1, 2, 3, 5]
(duplicates removed).
Trim(L, delta)    // L = [y1, y2, ..., ym]
  m = |L|
  L' = [y1]
  last = y1
  for i = 2 to m do
    if y_i > last (1 + delta) then
      append y_i onto end of L'
      last = y_i
    fi
  od
  return L'
Approx Subset Sum

Approx-Subset-Sum(S, t, eps)
  1  n = |S|
  2  L_0 = [0]
  3  for i = 1 to n do
  4    L_i = Merge(L_{i-1}, L_{i-1} + x_i)
  5    L_i = Trim(L_i, eps/2n)
  6    remove from L_i every element > t
     od
  7  return largest element in L_n

Example:
S = {104, 102, 201, 101},  t = 308,  eps = 0.40,  delta = eps/2n = 0.05

line 2: L0 = <0>
line 4: L1 = <0, 104>
line 5: L1 = <0, 104>
line 6: L1 = <0, 104>
line 4: L2 = <0, 102, 104, 206>
line 5: L2 = <0, 102, 206>
line 6: L2 = <0, 102, 206>
line 4: L3 = <0, 102, 201, 206, 303, 407>
line 5: L3 = <0, 102, 201, 303, 407>
line 6: L3 = <0, 102, 201, 303>
line 4: L4 = <0, 101, 102, 201, 203, 302, 303, 404>
line 5: L4 = <0, 101, 201, 302, 404>
line 6: L4 = <0, 101, 201, 302>

Algorithm returns 302.
Optimal answer is 307 = 104 + 102 + 101.
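A direct Python transcription of Approx-Subset-Sum and Trim reproduces the trace above:

```python
def trim(L, delta):
    """Keep only elements more than a factor (1 + delta) apart."""
    out, last = [L[0]], L[0]
    for y in L[1:]:
        if y > last * (1 + delta):
            out.append(y)
            last = y
    return out

def approx_subset_sum(S, t, eps):
    n = len(S)
    L = [0]
    for x in S:
        L = sorted(set(L + [y + x for y in L]))   # Merge(L, L + x)
        L = trim(L, eps / (2 * n))
        L = [y for y in L if y <= t]              # drop overflows
    return max(L)

print(approx_subset_sum([104, 102, 201, 101], 308, 0.40))  # -> 302
```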
Theorem: Approx-SS is a fully polynomial-time approximation scheme (FPTAS).
Proof: We consider the solution space P_i at step i.
Let P_i = set of all values that can be obtained by selecting a
(possibly empty) subset of {x1, x2, ..., xi} and summing its members.
Claim: in Approx-SS, for every y in P_i with y <= t,
there is a z in L_i with  y / (1 + eps/2n)^i  <=  z  <=  y.
Proof: by induction on i.

Proof continued
Let y* in P_n denote the optimal solution, and let z* denote the
solution returned by the AA. Then z* <= y*.
To prove: y*/z* <= 1 + eps.
By the claim, y*/z* <= (1 + eps/2n)^n.
d/dn (1 + eps/2n)^n > 0, so (1 + eps/2n)^n increases with n; its limit
is e^{eps/2}. Thus
    (1 + eps/2n)^n <= e^{eps/2} <= 1 + eps/2 + (eps/2)^2 <= 1 + eps
{using e^x <= 1 + x + x^2, see Ch. 3}.

Proof: Time complexity
Show the time complexity is polynomial in length[I] and 1/eps.
We bound the length of L_i.
After trimming, successive elements z and z' satisfy z'/z > 1 + eps/2n.
Thus each list contains at most log_{1+eps/2n} t + 2 values, because:
    log_{1+eps/2n} t  =  ln t / ln(1 + eps/2n)
                     <=  2n (1 + eps/2n) ln t / eps   {x/(1+x) <= ln(1+x), see Ch. 3}
                     <=  4n ln t / eps                {0 < eps < 1}
So the time complexity is polynomial in length(I) and 1/eps.
TSP: Travelling salesman problem
TSP
Given:
A set of cities {c1, c2, ..., cN}.
For each pair of cities {ci, cj}, a distance d(ci, cj).
Find a permutation pi minimizing the tour length
    sum_{i=1..N-1} d(c_pi(i), c_pi(i+1))  +  d(c_pi(N), c_pi(1))
Example of TSP instances
[figures: random instances with n = 10, 100, 1000, 10000]
Other Instances of TSP
X-ray crystallography
  Cities: orientations of a crystal
  Distances: time for motors to rotate the crystal
  from one orientation to the other
  N = 2392
Planar Euclidean application #2
  Cities: wires to be cut in a Laser Logic programmable circuit
  N = 7397, N = 33810, N = 85900
Standard Approach to Coping with NP-Hardness:
Approximation Algorithms
Run quickly (polynomial time for theory,
low-order polynomial time for practice).
Obtain solutions that are guaranteed to be close to optimal.
For Euclidean TSP, the triangle inequality holds:
    d(a,c) <= d(a,b) + d(b,c)
A Salesman wishes to travel around a given set of
cities, and return to the beginning, covering the
smallest total distance
Easy to State
Difficult to Solve
2
If there is no condition to return to the beginning, it can still be
regarded as a TSP.
Suppose we wish to go from A to B visiting all cities.
[figures: successive routes from A to B through all the cities]
A route returning to the beginning is known as a
Hamiltonian Circuit
6
Applications of the TSP
Routing around cities
Computer wiring - connecting together computer components using
minimum wire length
Archaeological seriation - ordering sites in time
Genome sequencing - arranging DNA fragments in sequence
Job sequencing - sequencing jobs in order to minimise total set-up
time between jobs
Wallpapering to minimise waste
Vehicle routing with time windows
[figure: a depot and customers with delivery windows 6am-9am,
8am-10am, 2pm-3pm, 4pm-7pm, 6pm-7pm]
Much more difficult than TSP
History of TSP
9
Recent TSP Problems and
Optimal Solutions
from
Web Page of William Cook,
Georgia Tech, USA
with Thanks
10
Milestones (year: size n of the largest instance solved to optimality):
1954: n=49;  1962: n=33;  1977: n=120;  1987: n=532, 666, 1512, 2392;
1994: n=7397;  1998: n=13509;  2004: n=24978
USA towns of 500 or more population: 13509 cities
(1998, Applegate, Bixby, Chvátal and Cook)
Towns in Germany: 15112 cities
(2001, Applegate, Bixby, Chvátal and Cook)
Sweden: 24978 cities (2004, Applegate, Bixby, Chvátal, Cook and
Helsgaun)
14
Solution Methods
I. Try every possibility: (n-1)! possibilities, which grows faster
than exponentially.

The Nearest Neighbour Method (Heuristic)
A Greedy Method
1. Start anywhere
2. Go to the nearest unvisited city
3. Repeat until every city has been visited
4. Return to the beginning
16
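The steps above can be sketched in Python (the sample coordinates are made up):

```python
import math

def nearest_neighbour_tour(points, start=0):
    """Greedy TSP heuristic: from the current city, always go to the
    nearest unvisited city; the tour implicitly closes back to start."""
    unvisited = set(range(len(points))) - {start}
    tour = [start]
    while unvisited:
        cur = tour[-1]
        nxt = min(unvisited, key=lambda j: math.dist(points[cur], points[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

points = [(0, 0), (2, 6), (5, 1), (6, 5), (1, 3)]  # made up
tour = nearest_neighbour_tour(points)
assert sorted(tour) == list(range(len(points)))    # every city once
```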
A 42-City Problem: The Nearest Neighbour Method (Starting at City 1)
[figure: map of the 42 numbered cities]
The Nearest Neighbour Method (Starting at City 1)
[figure: the nearest-neighbour tour under construction]
The Nearest Neighbour Method (Starting at City 1): Length 1498
[figure: the completed nearest-neighbour tour]
Remove Crossovers
[figure: the tour with a crossover highlighted]
Remove Crossovers
[figure: the tour after removing a crossover]
Remove Crossovers: Length 1453
[figure: the improved tour with all crossovers removed]
Christofides Method (Heuristic)
23
Christofides Method, 42-City Problem
Minimum Cost Spanning Tree: Length 1124
[figure: the MST of the 42 cities]
Minimum Cost Spanning Tree by Greedy
Algorithm
25
Match Odd Degree Nodes in Cheapest Way: the Matching Problem
[figure: the MST plus a minimum-cost matching of its odd-degree nodes]
1. Create Minimum Cost Spanning Tree (Greedy
Algorithm)
27
Create an Eulerian Tour, Short-Circuiting Cities Revisited: Length 1436
[figure: the resulting Christofides tour]
Optimising Method
1. Make sure every city is visited once and left once in the cheapest
way (Easy): the Assignment Problem. Results in subtours. Length 1098
[figure: the assignment solution, with several disjoint subtours]
Put in extra constraints to remove subtours (More Difficult).
Results in new subtours. Length 1154
[figure: the solution after adding subtour-elimination constraints]
Remove new subtours.
Results in further subtours. Length 1179
[figure: the next round of subtour elimination]
Further subtours. Length 1189
[figure: another round of subtour elimination]
Further subtours. Length 1192
[figure: another round of subtour elimination]
Further subtours. Length 1193
[figure: another round of subtour elimination]
Optimal Solution: Length 1194
[figure: the optimal tour of the 42 cities]
Cmd on Windows
Bash on Linux (and cygwin on Windows)

CMD Command line
Windows disks consist of drives:
C:\ D:\ E:\
Prompts look like C:\> (cmd) or bash# (bash).
Each disk has directories,
e.g. C:\windows and
C:\Documents and Settings\a\Desktop
Each directory contains files.
Files have a name and extension, e.g.
C:\WINDOWS\system32\system.ini
Create a shortcut for cmd
Right click -> New -> Shortcut -> cmd
Right click -> Properties -> Quick-edit

Cmd (console) window
Only text, no graphics
Start -> Run -> cmd
[screenshot: cmd window running cd and dir; the title bar shows the
cmd icon and name, with buttons to minimize (_), maximize ([]) and
close (X) the window]
Folder cabinets (File System)
C:   Primary disk volume
C:\  Root directory
C:\windows\  Windows folder
C:\Documents and Settings\
D:   Secondary disk
E:   DVD drive
F:   USB disk
Filesystems: FAT32, NTFS, ext2, etc.
Hard disk partitions
Disk volume /dev/hda (linux disk #1):
Partition #1:  C: boot partition (primary dos/windows)
Partition #2 (extended dos):
  D: logical drive, FAT32
  E: logical drive, NTFS
  F: logical drive, CDFS
Partition #3:  non-dos, bootable, Linux
Partition #4:  non-dos, OS/2 or BSD
Filenames
Files have a name and extension
Avoid non-ascii chars in filenames
The extension indicates the type of file,
e.g. readme.txt, photo.jpg, pic.gif, song.mp3, movie.avi.
Extensions can be wrong, e.g. photo.mp3.
File attributes
Permissions:
Readable, writable, readonly, execute
Which users or groups can access it?
Timestamps:
Time of creation, modified, access date
Size: in bytes
Type:
text, binary, symbolic link
stream, seekable, pipe, socket, lock
Paths
Filesystems contain folders and files.
Folders and files have names.
Charset: ASCII, German, Unicode.
Paths are locations of folders and files.
Backslash is the DOS path separator.
Avoid non-ascii chars in paths, so it is easier to
type in commands.
E.g. C:\home\john is better than "C:\document and settings\john will".
Paths
Paths can be
Absolute, example:
C:\home\john\cc
Relative to current folder
. dot refers to the current folder
.. dot-dot refers to the parent folder
..\.. Refers to the grand-parent folder
Example: ..\readme.txt
cmd keys
Arrow-keys: command history editing
F7 .. F8: command history
TAB .. complete directory/filename
Control-C (C-c) .. kill current command
Control-Z (C-z) .. EOF (end of file).
Folder (directory) commands
C:\windows> cd Shows current dir
C:\windows
C:\windows> cd \ Change-dir to root
C:\> mkdir tmp Make dir C:\tmp
C:\> rmdir tmp Remove dir C:\tmp
Getting help
1. Google search
2. C:\> cd /?
Displays the name of or changes the current directory.
Command syntax
prompt> command [/switches]
[arguments]
Command completion, press [TAB]
repeatedly till the right word appears
C:\> cd C:\do<TAB><TAB>
C:\> cd c:\document and settings\<TAB>
C:\> cd c:\document and settings\john
File operations
C:\> copy oldfile newfile
1 file(s) copied.
C:\> copy oldfile directory
C:\> rename oldfile newfile
C:\> move oldfile folder
C:\> del oldfile
Find dirs (folders)
C:\Documents and Settings> dir /s /b /ad goo*
C:\Documents and Settings\b\Application Data\Google
C:\Documents and Settings\Default User\Application Data\Google
C:\Documents and Settings\Default User\Local Settings\Application Data\Google
Options to commands:
The dir command takes the following options (switches):
/s  .. search Sub-directories also
/b  .. just print Bare names
/ad .. result Attribute must be Directory (we don't want files)
Arguments to commands:
After the switches, we type arguments to the dir command;
here we want folder names starting with goo.
Find files
Find files named hosts in C:\windows
C:\windows> dir /s/b hosts
C:\WINDOWS\I386\HOSTS
C:\WINDOWS\system32\drivers\etc\hosts
C:\> my.bat
Hello john
C:\>
The default terminal shell (command interpreter)
on Linux is /bin/bash.
Windows users can get c:/cygwin/bin/bash
by installing cygwin.
History:
sh (AT&T unix) -> ksh -> bash (current)
Obsolete: zsh, csh, tcsh (bad design)
Start bash in a terminal
start -> terminal
[screenshot: a terminal; the prompt shows username, hostname and the
current directory (pwd; ~ is the home dir). A # prompt means
superuser. Type input at the cursor.]
ls  list directory and files
bash$ ls -l will   .. -l option for long details
cat (concatenate, print file)
# cat filename .. prints the filename to
stdout, which is the terminal screen, also
called /dev/tty.
Environment
# echo $HOME
# ps -p $$ .. process info
$$ .. Is the pid (process id) of this shell.
TTY .. is the controlling terminal of this bash
Unix filenames
/ is the root
/dev/ .. are the devices, like keyboard, tty,
harddisks.
/bin .. are the programs
/home/john .. user directory
/etc .. system configuration (like registry)
/usr .. user applications
Symlinks, one link can point to another
Unix file system
Common terminal keys
Control-Z .. suspend command
fg .. restart command in foreground
bg .. send command into background
Control-C .. interrupt current command
Control-\ .. Kill current command
Control-D .. EOF to logout
Control-S .. Stop screen output
Control-Q .. continue screen output.
Readline (Command line editing) in
bash
C-a .. beginning of line
C-e .. end of line
C-r .. search history
Up-arrow .. previous history command
Down-arrow .. next history commands
C-k .. delete to end of line
See google, same as Emacs editor keys,
can remap keys in ~/.inputrc
Common Unix commands
ls files .. list file or directory
cat files .. print files to stdout
man xyz .. show manual help page for xyz
cp source target .. copy source to target
mv source target .. move
rm source .. remove source
cd /usr/local .. change directory to
pwd .. show present working dir
grep regexp file .. search regexp in files
more files .. show files page by page.
More Unix commands
ps show processes
who show who is logged on
date .. show date
cal .. show calendar
stty .. terminal settings
chmod .. change dir/file permissions
vim files .. vi improved editor
emacs files .. emacs editor
Network commands
ping host .. check network connection to host
tracert host .. trace route to host
nslookup .. DNS name lookup
mail .. read or send email
ftp .. file transfer
wget urls .. download urls
telnet host .. login to host
ssh host .. secure shell login to host
finger user@host .. find out about user on host
Process and its IO
A process takes user input, produces output, and may print error
messages. Example:
    ls -al /
The command is ls (list); -al are options to the ls command;
the argument is the root directory '/'.
Saving output to a file
Count the number of lines in /etc/shells and save it to x:
$ wc -l /etc/shells > x
$ cat x
16 /etc/shells   .. number of lines in file
$ cat /etc/shells | wc
16 16 186
Pipe example with 3 commands
Example: count the number of lines in the file containing the
string 'sh' or 'SH':
$ cat /etc/shells | grep -i sh | wc -l
2
Running cmd in background
$ wc /etc/shells > /tmp/x &
[1] 2804   .. process number of the background job
$ ls
[1]+ Done  .. later, the background job is done
Quoting arguments
$ echo *     .. * is globbed into matches
home etc usr
$ echo '*'   .. quotes prevent globbing of *
*
$ echo \*    .. backslash quotes the next char
*
Quoting Variables
$ echo $HOME
/home/john
$ echo 22${HOME}99
22/home/john99
$ echo '$HOME'   .. single quotes prevent expansion
$HOME
$ echo \$HOME
$HOME
Bash aliases
Make 'dir' same as 'ls al' command
$ alias dir='ls al'
$ alias date-is='date +%Y-%m-%d'
$ date-is
2013-04-13
Bash functions
$ function dates(){
echo DATE_SECONDS=$(perl -e "print time")
echo DATE_YESTERDAY=$(date --date="1 days ago" +%Y-%m-%d)
echo DATE_TODAY=$(date --date="0 days ago" +%Y-%m-%d)
echo DATE_TOMORROW=$(date --date="1 days" +%Y-%m-%d)
}
$ dates
DATE_SECONDS=1365864924
DATE_YESTERDAY=2013-04-12
DATE_TODAY=2013-04-13
DATE_TOMORROW=2013-04-14
bash scripting
$ cat script
#!/bin/bash
# my first comment in this file.
echo My first script, hello $USER
$ chmod +x script
$ ./script
My first script, hello john
$ bash -x -v script   .. to debug, verbose
My first script, hello john
Bash script commands, if then
$ cat myscript1.sh   # comment.
if [[ file1 -nt file2 ]] ;then
  echo file1 is newer
elif [[ 20 -gt 5 ]] ;then
  echo 20 is greater than 5
else
  true   # dummy stmt.
fi
case stmt
$ cat myscript2.sh
case $# in
0) echo You typed no arguments ;;
1) echo You typed $1 ;;
2) echo You typed $1 and $2 ;;
*) echo You type $* ;;
esac
for loop
$ for user in a b c ;do
    ls -l /home/$user
  done > list.txt
while loop
$ file=/tmp/x.log
$ while [[ ! -s $file ]] ;do
    echo waiting for $file to fill up
    sleep 1
  done
Perl power user
Fix spelling of 'thier' to 'their' in all c files
(gdb) frame 2
#2 0x814 in main (argc=1, argv=0xf4) at test.c:19 ..
Whats on the stack
(gdb) info frame
Stack level 2, frame at 0xbffffa8c:
eip = 0x8048414 in main (test.c:19);
saved eip 0x40037f5c
called by frame at 0xbffffac8,
caller of frame at 0xbffffa5c source language c.
Arglist at 0xbffffa8c, args: argc=1, argv=0xbffffaf4
Locals at 0xbffffa8c,
Previous frame's sp is 0x0
Saved registers: ebp at 0xbffffa8c, eip at 0xbffffa90
(gdb) info locals
x = 30
s = 0x8048484 "Hello World!\n"
(gdb) info args
argc = 1
argv = (char **) 0xbffffaf4
Breakpoints
(gdb) break test.c:19 # break filename:linenumber
Breakpoint 2 at 0x80483f8: file test.c, line 19
(gdb) break func1
Breakpoint 3 at 0x80483ca: file test.c, line 10
(gdb) break TestClass::testFunc(int)
Breakpoint 1 at 0x80485b2: file cpptest.cpp, line 16.
(gdb) tbreak main
Will stop once in main
(gdb) info breakpoints
(gdb) disable 2
(gdb) enable 2
(gdb) ignore 2 5
Will ignore next 5 crossings of breakpoint 2.
Running
(gdb) run
You Press <Control-C>
Program received signal SIGINT, Interrupt.
(gdb) bt # show call stack
(gdb) list # show near source code.
(gdb) print x # show variable value
Stepping
(gdb) call your_c_function(3)
(gdb) next # go line by line of source
code.
(gdb) step # go into function also.
(gdb) finish # step out of function
(gdb) continue
Viewing data, x/format
(gdb) x/s ptr # print var as a string.
(gdb) x/4c ptr # as 4 chars.
(gdb) x/t ptr # in binary bits
(gdb) x/3x ptr # as 3 hex bytes
(gdb) set var ptr = 0 # change var
(gdb) info registers
Watching data
(gdb) watch x
Hardware watchpoint 4: x, will print message
whenever x is changed.
(gdb) rwatch y # read
(gdb) awatch z # access.
Viewing asm code
Other options
$ gdb -tui   .. start gdb with a text-mode GUI
Codeblocks debugger has a gdb
command line.
Emacs: M-x gdb (For linux and emacs)
Also see
gdb - most systems
MS Windows:
Visual Studio
kernel - windbg, softice
Linux kernel - kgdb, kdb
GUI - codeblocks, ddd
Valgrind, purify - runtime memory errors
Lint - static bad-style warnings
Making your public key
Signing
SSL
Making your own public key
GPG : GNU Privacy Guard
Online help:
https://help.ubuntu.com/community/GnuPrivacyGuardHowto
http://www.gnupg.org/gph/en/manual.html
Using GnuPG to generate your
keys
Open a terminal and enter:
$ gpg --gen-key
Please select what kind of key you want:
(1) RSA and RSA (default)
(2) DSA and Elgamal
(3) DSA (sign only)
(4) RSA (sign only)
Your selection? 1
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048)
Requested keysize is 2048 bits
Please specify how long the key should be valid.
0 = key does not expire
<n> = key expires in n days
<n>w = key expires in n weeks
<n>m = key expires in n months
<n>y = key expires in n years
#include <stdio.h>
#include <stdlib.h>
// update() walks one past the end of a 3-element array (i <= 3);
// the out-of-bounds write is deliberate, for the debugging demo.
void update(int mr[]) {
    int i;
    for (i = 0; i <= 3; i++)
        mr[i]++;
}
int main() {
    int k = 201, ar[3] = {10, 20, 30}, i = 301;
    update(ar);
    printf("k=%d, i=%d", k, i);
    return 0;
}
Debug registers
The Intel x86 CPUs have some special registers
which are intended for debugging use only. By
storing special values into these registers, a
program can ask the CPU to execute an INT 1
(interrupt 1) instruction immediately whenever a
specified memory location is read from or written
to.
INT 1 also happens to be the interrupt that's
executed by the CPU after a debugger asks the
CPU to single-step one assembly line of the
program.
Watching memory for change
Determine what address you want to watch
Open the new breakpoint dialog, and switch to
the data tab
Enter the address that you want to watch in the
variable column
The context is ignored. You can clear this if you
want.
Set the item count. If you are using an address,
this should be the number of bytes (example: 4
for an integer type)
Stop after k is allocated, find its address,
and put a Data change breakpoint on the 4 bytes of k.
The breakpoint is hit in update(), when k is changed:
at the breakpoint i == 3, and mr[3]++ has incremented k.
View the disassembly and the call stack;
right-click to see the source.
Conditional Breakpoint
We want to stop in the loop only when i == 3
Hardcoded Breakpoint: __asm int 3
VC Command line setup
After Installing VC6 in c:\vc6,
Start > Run > Cmd
C:\> c:\vc6\VC98\bin\VCVARS32.BAT
set VSCommonDir=C:\vc6\common
set MSDevDir=C:\vc6\common\msdev98
set MSVCDir=C:\vc6\VC98
set PATH=%MSDevDir%\BIN;%MSVCDir%\BIN;
%VSCommonDir%\TOOLS\WINNT;
%VSCommonDir%\TOOLS;
%PATH%
set INCLUDE=%MSVCDir%\ATL\INCLUDE;
%MSVCDir%\INCLUDE;
%MSVCDir%\MFC\INCLUDE;
%INCLUDE%
set LIB=%MSVCDir%\LIB;
%MSVCDir%\MFC\LIB;
%LIB%
VC Command line compiler
C:\> cl /W4 /ZI /GZ hello.c .. To compile debug hello.exe
C:\> cl /W4 hello.c | gvim -q - .. Quick-Fix errors with vim
---------------------------------------------
C:\> cl /? .. help with c compiler
Useful Flags:
/GX .. enable C++ Exception Handler
/W4 .. all warnings
/ZI .. incremental debugger
/GZ .. runtime debug checks
From http://www.tenouk.com/Bufferoverflowc/Bufferoverflow3.html
Entering TestFunc()
0x08048334 <TestFunc+0>: push %ebp ;push the previous stack frame
;pointer onto the stack, [ebp+0]
0x08048335 <TestFunc+1>: mov %esp, %ebp ;copy esp into ebp; ebp and esp now
;point at the same address,
;creating the new stack frame [ebp+0]
0x08048337 <TestFunc+3>: push %edi ;save/push edi register, [ebp-4]
0x08048338 <TestFunc+4>: push %esi ;save/push esi register, [ebp-8]
0x08048339 <TestFunc+5>: sub $0x20, %esp ;subtract 32 bytes from esp for local
;variables and buffers, if any; esp is now at [ebp-40]
Pushing the EBP onto the stack, saving the previous stack frame.
Exercises
1. Debug this file abc.c in CodeBlocks:
#include <stdio.h>
int c( ){ return 'c'; }
int b( ){ return 'b'; }
int a( ){ b( ); c( ); return 'a'; }
int main( ){ a( ); return 0; }
gcc assembly (gcc -S abc.c):

_main:
	pushl %ebp
	movl %esp, %ebp
	andl $-16, %esp
	call ___main
	call _a
	movl $0, %eax
	movl %ebp, %esp
	popl %ebp
	ret

_a:
	pushl %ebp
	movl %esp, %ebp
	call _b
	call _c
	movl $97, %eax ; return 'a'
	popl %ebp
	ret
Solution: vc6 assembly file
c:\tmp> cl /FAs abc.c
c:\tmp> cat abc.asm
; line 4 : int a( ){ b( ); c( ); return 'a'; }
push ebp
mov ebp, esp
call _b
call _c
mov eax, 97; 00000061H
pop ebp
ret 0
----------------------------------------------------------------
c:\tmp> cl /Fc abc.c
c:\tmp> cat abc.cod
; line 4 : int a( ){ b( ); c( ); return 'a'; }
00014 55 push ebp
00015 8b ec mov ebp, esp
00017 e8 00 00 00 00 call _b
0001c e8 00 00 00 00 call _c
00021 b8 61 00 00 00 mov eax, 97; 00000061H
00026 5d pop ebp
00027 c3 ret 0
Stack of abc.c in Codeblocks
MAPLE - mathematical software
Maple
Symbolic and Numerical Mathematics
C:\> start C:\tools\MAPLEV4\BIN.WIN\WMAPLE32.exe
CRT: Chinese Remainder Theorem
Solve for x, given these modular equations:
x ≡ 2 (mod 3)
x ≡ 3 (mod 5)
x ≡ 2 (mod 7)
C:\> c:\tools\maplev4\bin.win\cmaple.exe
>> ?? # General help
>> ??chinese # Search for help on CRT
Calling Sequence: chrem(u, m)
Parameters: u - the list of evaluations [u0, u1,..., un]
m - the list of moduli [m0,m1,...,mn]
> chrem( [2,3,2], [3,5,7] );
23
> quit;
C:\>
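The same computation can be sketched in Python (the function name chrem mirrors Maple's; this uses the modular-inverse form of pow, available from Python 3.8, and assumes pairwise-coprime moduli):

```python
from functools import reduce

def chrem(residues, moduli):
    """Chinese Remainder Theorem, like Maple's chrem(u, m).
    Assumes the moduli are pairwise coprime."""
    M = reduce(lambda a, b: a * b, moduli)   # product of all moduli
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m                   # product of the other moduli
        x += r * Mi * pow(Mi, -1, m)  # pow(Mi, -1, m): inverse of Mi mod m
    return x % M

print(chrem([2, 3, 2], [3, 5, 7]))  # → 23, as in the Maple session
```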
Powermod (used in RSA)
# Ordinary arithmetic power and then mod
> 7**123456 mod 101;
16
# Powermod is faster and handles larger input
> 7 &^ 1234567891 mod 101;
98
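Maple's &^ avoids computing the huge power before reducing. A square-and-multiply sketch in Python shows the idea (the built-in three-argument pow does the same thing):

```python
def powmod(base, exp, mod):
    """Right-to-left square-and-multiply, like Maple's &^ operator."""
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:                         # low bit set: multiply it in
            result = (result * base) % mod
        base = (base * base) % mod          # square for the next bit
        exp >>= 1
    return result

print(powmod(7, 1234567891, 101))  # → 98, as in the Maple session
print(pow(7, 1234567891, 101))     # Python's built-in three-argument pow
```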
Graphing
> f := x -> 10*x^5 - 30*x +10 ;
> plot(f,-3..3);
LaTeX: Hello world
\documentclass{article}
\title{Algorithm assignment}
\author{John Doe}
\date{28 February 2013}
\begin{document}
\maketitle
Hello world!
\end{document}
Compiling file.tex to file.dvi
C:\> latex test.tex
...
Output written on test.dvi (1 page, 424 bytes).
Transcript written on test.log.
View the dvi file
C:\> yap test.dvi .. View the output
Printing dvi file
C:\> dvipdfm test.dvi # dvi to pdf
test.dvi -> test.pdf
Greek letters and Symbols
Search Google for a LaTeX manual in PDF.
(Slide tables: Greek letters, arrows, vectors, math symbols.)
Latex Math Examples
% Example 1: correlation coefficient
\begin{equation*}
R = \frac{\displaystyle{\sum_{i=1}^n (x_i-\bar{x})(y_i-\bar{y})}}
         {\displaystyle{\left[ \sum_{i=1}^n (x_i-\bar{x})^2
                               \sum_{i=1}^n (y_i-\bar{y})^2 \right]^{1/2}}}
\end{equation*}
% Example 2: assorted formulas
$$ c = \sqrt{ a^2 + b^2 } $$
$$ \int_{-\infty}^{\infty} \frac{1}{x} \, dx $$
$$ f(x) = \sum_{n=0}^{\infty} \alpha_n x^n $$
$$ x_{1,2} = \frac{-b \pm \sqrt{\color{Red} b^2-4ac}}{2a} $$
$$ \hat a \quad \bar b \quad \vec c \quad x' \quad \dot{x} \quad \ddot{x} $$
Python is a scripting language with lots of libraries.
Classes and objects.
Indentation arranges code into blocks.
Python
Useful for small calculations, scripts
Install python33 / python26 / numpy superpack
Usage:
C:\> C:\python33\python.exe
>>> 2+2
4
>>> ^Z # (^D on unix) EOF to exit
C:\> # back to DOS prompt
Python calculator
C:\> c:\python33\python.exe
>>> import math
>>> dir(math) # see what's in the math module
>>> math.pi # math object has attribute pi with value=...
3.141592653589793
>>> math.log2(math.pi) # math object has member function log2
1.6514961294723187
------
>>> from math import * # make all math names global
>>> sin(pi/4)
0.70710678118654746
>>> log2(pi)
1.6514961294723187
Functions, quicksort
>>> def quicksort(arr):                  # quicksort in Python 3
...     if len(arr) <= 1: return arr
...     pivot = arr[0]
...     return quicksort([x for x in arr[1:] if x < pivot]) + [pivot] + \
...            quicksort([x for x in arr[1:] if x >= pivot])
...
>>> print(quicksort([4,5,1,3,2]))
[1, 2, 3, 4, 5]
Translation
| x2 |   | 1  0  tx |   | x1 |
| y2 | = | 0  1  ty | * | y1 |
| 1  |   | 0  0  1  |   | 1  |

Scaling
| x2 |   | sx 0  0 |   | x1 |
| y2 | = | 0  sy 0 | * | y1 |
| 1  |   | 0  0  1 |   | 1  |

Rotation
| x2 |   | cos(v) -sin(v) 0 |   | x1 |
| y2 | = | sin(v)  cos(v) 0 | * | y1 |
| 1  |   | 0       0      1 |   | 1  |
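These homogeneous-coordinate transforms can be checked with a small Python sketch (the helper names mat_vec, translate, scale, rotate are ours, not from any library):

```python
import math

def mat_vec(m, v):
    """Multiply a 3x3 matrix by a homogeneous 2D point [x, y, 1]."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def translate(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def scale(sx, sy):
    return [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]

def rotate(v):  # v in radians
    return [[math.cos(v), -math.sin(v), 0],
            [math.sin(v),  math.cos(v), 0],
            [0, 0, 1]]

p = [1.0, 0.0, 1.0]                  # the point (1, 0)
print(mat_vec(translate(2, 3), p))   # → [3.0, 3.0, 1.0]
print(mat_vec(scale(2, 2), p))       # → [2.0, 0.0, 1.0]
q = mat_vec(rotate(math.pi / 2), p)  # (1, 0) rotated 90 degrees is ~(0, 1)
```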