Algorithm and Complexity
Mbiethieu Cezar,
mbiethieucezar@gmail.com
http://www-public.it-sudparis.eu/~gibson/Teaching/MAT7003/
Big O notation, Big Omega notation and Big Theta notation are often used to this end.
For instance, binary search is said to run in a number of steps proportional to the
logarithm of the length of the list being searched, or in O(log(n)) ("in logarithmic
time").
Such storage must offer reading and writing functions as fundamental steps.
There are interesting relations between time and space complexity.
For example, on a Turing machine the number of tape cells that play a
role in a computation cannot exceed the number of steps taken.
Conversely, many algorithms that require a large amount of time can be implemented using only a small amount of space.
Complexity: why not just measure empirically?
[Figure: measured running times of two algorithms; B is a much better solution for large input.]
Complexity: Orders of growth – Big O notation
Informally, an algorithm can be said to exhibit a growth rate on the order of a
mathematical function if beyond a certain input size n, the function f(n) times
a positive constant provides an upper bound or limit for the run-time of that
algorithm.
In other words, for a given input size n greater than some n₀ and a constant c,
the running time of that algorithm will never be larger than c × f(n). This concept
is frequently expressed using Big O notation.
For example, since the run-time of insertion sort grows quadratically as its
input size increases, insertion sort can be said to be of order O(n²).
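This quadratic growth can be observed directly by counting the comparisons insertion sort performs on a worst-case input; a minimal sketch (the class and method names here are illustrative, not from the course material):

```java
// Counts key comparisons made by insertion sort on a worst-case
// (reverse-sorted) input, illustrating the quadratic growth rate.
class InsertionSortCount {
    static long countComparisons(int n) {
        int[] a = new int[n];
        for (int i = 0; i < n; i++) a[i] = n - i; // reverse-sorted: worst case
        long comparisons = 0;
        for (int i = 1; i < n; i++) {
            int key = a[i];
            int j = i;
            while (j > 0) {
                comparisons++;                   // one comparison of a[j-1] vs key
                if (a[j - 1] <= key) break;
                a[j] = a[j - 1];                 // shift larger element right
                j--;
            }
            a[j] = key;
        }
        return comparisons;
    }

    public static void main(String[] args) {
        long c100 = countComparisons(100);  // n(n-1)/2 = 4950 on this input
        long c200 = countComparisons(200);  // 19900: doubling n quadruples the work
        System.out.println(c100 + " " + c200);
    }
}
```

Doubling the input size roughly quadruples the comparison count, as O(n²) predicts.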
Just as Big O describes the upper bound, we use Big Omega to describe the
lower bound
Big Theta describes the case where the upper and lower bounds of a
function are on the same order of magnitude.
Optimality
Reduction
[Figure: growth rates of f(n) = log(n), f(n) = n, f(n) = n log(n), f(n) = n², f(n) = n³ and f(n) = 2ⁿ, plotted for n = 1 … 20 with the y-axis running from 0 to 250.]
Iterative Algorithm Analysis Example: Insertion Sort
class InsertionSortAlgorithm {
    static void sort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int B = a[i];                  // element being inserted
            int j = i;
            while ((j > 0) && (a[j-1] > B)) {
                a[j] = a[j-1];             // shift larger elements one place right
                j--;
            }
            a[j] = B;
        }
    }
}
Algorithm Effort
MergeSort(A, left, right) { T(n)
if (left < right) { Θ(1)
mid = floor((left + right) / 2); Θ(1)
MergeSort(A, left, mid); T(n/2)
MergeSort(A, mid+1, right); T(n/2)
Merge(A, left, mid, right); Θ(n)
}
}
The recurrence for the running time is T(n) = 2T(n/2) + n. Expanding it level by
level, each level of the recursion contributes n to the sum. Counting the number of
repetitions of n in the sum at the end, we see that there are lg n + 1 of them. Thus
the running time is n(lg n + 1) = n lg n + n, which is Θ(n lg n).
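The MergeSort pseudocode above translates directly into Java; a minimal sketch (the helper names are ours, not from the slides):

```java
// Straightforward translation of the MergeSort pseudocode:
// split in half, sort each half recursively, then merge.
class MergeSortAlgorithm {
    static void mergeSort(int[] a, int left, int right) {
        if (left < right) {
            int mid = (left + right) / 2;      // floor((left + right) / 2)
            mergeSort(a, left, mid);           // T(n/2)
            mergeSort(a, mid + 1, right);      // T(n/2)
            merge(a, left, mid, right);        // Theta(n)
        }
    }

    // Merges the sorted runs a[left..mid] and a[mid+1..right] in linear time.
    static void merge(int[] a, int left, int mid, int right) {
        int[] tmp = new int[right - left + 1];
        int i = left, j = mid + 1, k = 0;
        while (i <= mid && j <= right) tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
        while (i <= mid) tmp[k++] = a[i++];
        while (j <= right) tmp[k++] = a[j++];
        System.arraycopy(tmp, 0, a, left, tmp.length);
    }
}
```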
Greedy algorithms are simple and straightforward. They are shortsighted in their
approach in the sense that they make choices on the basis of information at hand,
without worrying about the effect these choices may have in the future. They are
easy to invent, easy to implement and most of the time quite efficient. However,
many problems cannot be solved correctly by the greedy approach. Greedy algorithms
are often used to solve optimization problems.
Greedy Approach
A greedy algorithm works by making the choice that seems most promising at any
moment; it never reconsiders this choice, whatever situation may arise later.
Greedy-Choice Property:
It says that a globally optimal solution can be arrived at by making a locally
optimal choice.
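As an illustration, the classic change-making problem has the greedy-choice property for canonical coin systems such as {25, 10, 5, 1}: always taking the largest coin that still fits is globally optimal there, although the same strategy fails for arbitrary coin sets (e.g. {4, 3, 1} with amount 6, where greedy uses three coins but 3 + 3 uses two). A minimal sketch (the names are ours):

```java
import java.util.ArrayList;
import java.util.List;

// Greedy change-making: repeatedly take the largest coin not exceeding
// what remains. Optimal for canonical coin systems like {25, 10, 5, 1},
// but NOT for arbitrary coin sets.
class GreedyChange {
    // coins must be given in decreasing order
    static List<Integer> makeChange(int[] coins, int amount) {
        List<Integer> result = new ArrayList<>();
        for (int coin : coins) {
            while (amount >= coin) {    // greedy choice: never reconsidered
                result.add(coin);
                amount -= coin;
            }
        }
        // if amount > 0 here, exact change was impossible with these coins
        return result;
    }
}
```

For example, makeChange({25, 10, 5, 1}, 63) yields six coins (25, 25, 10, 1, 1, 1), which is optimal; makeChange({4, 3, 1}, 6) yields three coins (4, 1, 1) even though two (3, 3) suffice.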
[Figure: greedy choices made step by step, from the initial state through intermediate steps to the final step.]
Shortest Path In A Graph – typical implementation
Shortest Path Algorithm (Informal) Analysis
Every time the main loop executes, one vertex is extracted from the queue.
Assuming that there are V vertices in the graph, the queue may contain O(V) vertices.
Each pop operation takes O(lg V) time assuming the heap implementation of priority
queues. So the total time required to execute the main loop itself is O(V lg V).
In addition, we must consider the time spent in the function expand, which applies the
function handle_edge to each outgoing edge. Because expand is only called once per
vertex, handle_edge is only called once per edge.
It might call push(v'), but there can be at most V such calls during the entire
execution, so the total cost of that case arm is at most O(V lg V). The other case arm
may be called O(E) times, however, and each call to increase_priority takes O(lg V)
time with the heap implementation.
Therefore the total run time is O(V lg V + E lg V), which is O(E lg V) because V is
O(E) assuming a connected graph.
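A compact Java version of this scheme (the names are ours; java.util.PriorityQueue has no increase_priority operation, so this sketch pushes duplicate entries and skips stale ones on extraction instead, which still gives O(E lg E) = O(E lg V) since lg E = O(lg V)):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

// Dijkstra's shortest-path algorithm with a binary heap (PriorityQueue).
// Since PriorityQueue lacks increase_priority, we insert duplicates and
// skip stale entries when popped (lazy deletion).
class Dijkstra {
    // adj[u] holds {v, weight} pairs; returns the distance array from source s
    static int[] shortestPaths(List<int[]>[] adj, int s) {
        int n = adj.length;
        int[] dist = new int[n];
        Arrays.fill(dist, Integer.MAX_VALUE);
        dist[s] = 0;
        // queue entries are {vertex, distance at push time}
        PriorityQueue<int[]> pq =
            new PriorityQueue<>(Comparator.comparingInt((int[] e) -> e[1]));
        pq.add(new int[]{s, 0});
        while (!pq.isEmpty()) {                 // main loop
            int[] top = pq.poll();              // O(lg V) per pop
            int u = top[0];
            if (top[1] > dist[u]) continue;     // stale entry: skip it
            for (int[] edge : adj[u]) {         // "expand": each edge handled once
                int v = edge[0], w = edge[1];
                if (dist[u] + w < dist[v]) {
                    dist[v] = dist[u] + w;
                    pq.add(new int[]{v, dist[v]}); // stands in for increase_priority
                }
            }
        }
        return dist;
    }
}
```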
Complexity: Some PBL
Input:
Bag of integer coins, Target integer Price
Output:
• An empty bag, to signify that the price cannot be paid exactly, or
• A smallest bag of coins taken from the original bag and whose sum is equal to
the price to be paid.
The class NP consists of all those decision problems whose positive solutions
can be verified in polynomial time given the right information, or equivalently,
whose solution can be found in polynomial time on a non-deterministic
machine.
NP does not stand for "non-polynomial". There are many complexity classes
that are much harder than NP.
Is P equal to NP?
• k-independent set: Given a graph, does it have a size k independent set? (i.e. k
vertices with no edge between them)
• k-coloring: Given a graph, can the vertices be colored with k colors such
that adjacent vertices get different colors?
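Membership of k-independent set in NP is easy to see concretely: given a candidate set of k vertices as the "right information", checking it takes polynomial time. A minimal sketch of such a verifier (the names are ours):

```java
// Polynomial-time verifier for k-independent set: given a graph as an
// adjacency matrix and a certificate (an array of vertices), check that
// the certificate has size k and contains no adjacent pair -- O(k^2) work.
class IndependentSetVerifier {
    static boolean verify(boolean[][] adj, int[] certificate, int k) {
        if (certificate.length != k) return false;
        for (int i = 0; i < k; i++)
            for (int j = i + 1; j < k; j++)
                if (adj[certificate[i]][certificate[j]])
                    return false;   // an edge inside the set: not independent
        return true;
    }
}
```

Finding such a set may be hard, but verifying a proposed one is easy; that asymmetry is exactly what NP captures.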
The class of NP-complete problems contains the most difficult problems in NP, in
the sense that they are the ones most likely not to be in P.
It is not known exactly which complexity classes contain the decision version of the
integer factorization problem.
It is known to be in both NP and co-NP: both YES and NO answers can be verified
given the prime factors.
Many people have tried to find classical polynomial-time algorithms for it and
failed, and therefore it is widely suspected to be outside P.
In addition, there are a number of probabilistic algorithms that can test primality
very quickly in practice if one is willing to accept the small possibility of error.
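Java's standard library exposes exactly such a test: BigInteger.isProbablePrime(certainty), a Miller–Rabin-based check whose error probability (declaring a composite prime) is at most 2⁻ᶜᵉʳᵗᵃⁱⁿᵗʸ:

```java
import java.math.BigInteger;

// Probabilistic primality testing from the JDK: isProbablePrime(c) answers
// quickly, with error probability at most 2^-c for a composite input.
class PrimalityDemo {
    public static void main(String[] args) {
        BigInteger p = new BigInteger("2305843009213693951"); // 2^61 - 1, a Mersenne prime
        BigInteger c = new BigInteger("2305843009213693953"); // 2^61 + 1, divisible by 3
        System.out.println(p.isProbablePrime(50));            // true
        System.out.println(c.isProbablePrime(50));            // false
    }
}
```

With certainty 50 the chance of a wrong answer is below 2⁻⁵⁰, negligible in practice.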
Proving NP-Completeness of a problem – general approach
Approach: