
Lecture #3: Approximation Algorithms


In mathematics and computer science, an optimization problem is the problem of finding the best solution among all feasible solutions. The objective may be either minimization or maximization, depending on the problem considered. Many of the optimization problems that must be solved in practice are NP-hard. For such problems it is not possible, unless P = NP, to design an algorithm that finds an exactly optimal solution to every instance of the problem in time polynomial in the size of the input.

An optimization problem consists of three parts:

• a non-empty set of instances I;

• an objective function v(x, y), where y is a feasible solution to an instance x ∈ I;

• a goal: either to minimize or to maximize the objective function.

Every instance x ∈ I has an associated pair (Sx, OPT(x)), where Sx is the set of feasible solutions and OPT(x) is the optimal objective value. (A solution in Sx whose objective value equals OPT(x) is called an optimal solution.)

In the maximization case, OPT(x) is the maximum value of the objective function over the feasible solutions of the instance, OPT(x) = max{ v(x, y) : y ∈ Sx }; in the minimization case it is the minimum, OPT(x) = min{ v(x, y) : y ∈ Sx }.
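
As a concrete illustration of these definitions, here is a minimal Python sketch that computes OPT(x) by brute force for a toy 0/1 Knapsack instance. The helper names and the numbers are illustrative, not from the lecture:

```python
from itertools import combinations

# A toy maximization instance x: items given as (value, weight) pairs,
# plus a capacity. These concrete numbers are illustrative only.
items = [(60, 10), (100, 20), (120, 30)]
capacity = 50

def feasible_solutions(items, capacity):
    """Sx: every subset of the items whose total weight fits the capacity."""
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            if sum(w for _, w in subset) <= capacity:
                yield subset

def value(subset):
    """Objective function v(x, y): total value of the chosen items."""
    return sum(v for v, _ in subset)

# OPT(x): the maximum objective value over all feasible solutions.
print(max(value(y) for y in feasible_solutions(items, capacity)))  # 220
```

Enumerating all of Sx takes exponential time in general, which is exactly why the relaxations discussed below are needed.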

In the preceding lectures we saw strong evidence supporting the claim that no NP-hard problem can be solved in polynomial time. Many of the optimization problems that must be solved in practice are NP-hard, and complexity theory tells us that it is impossible to find efficient algorithms for such problems unless P = NP, which is very unlikely to be true. This does not obviate the need to solve these problems. Observe that NP-hardness only means that, if P ≠ NP, we cannot find algorithms that compute the exact optimal solution for every instance of the problem in time polynomial in the size of the input. If we relax this rather stringent requirement, it may still be possible to solve the problem reasonably well.

Many NP-hard optimization problems are of great practical importance, and it is desirable to solve large instances of them in a reasonable amount of time. The best known algorithms for NP-hard problems, however, have a worst-case running time that is exponential in the size of the input.

There are three possibilities for relaxing the requirements outlined above so that a problem may be considered well solved in practice:
• Super-polynomial time heuristics: We may no longer require that the problem be solved in polynomial time. In some cases there are algorithms that are only barely super-polynomial and run reasonably fast in practice. Techniques (heuristics) such as branch-and-bound or dynamic programming are useful from this point of view. For example, the Knapsack problem is NP-complete but is considered "easy", since there is a "pseudo-polynomial" time algorithm for it (see the first sketch after this list). A problem with this approach is that very few problems are susceptible to such techniques, and for most NP-hard problems the best algorithms we know run in truly exponential time.

• Probabilistic analysis of heuristics: Another possibility is to drop the requirement that the solution to a problem cater equally to all input instances. In some applications the class of input instances is severely constrained, and for these instances there is an efficient algorithm that always does the trick. Consider, for example, the problem of finding Hamiltonian cycles in graphs, which is NP-hard. It can be shown that there is an algorithm that finds a Hamiltonian cycle in "almost every" graph that contains one. Such results are usually derived using a probabilistic model of the constraints on the input instances; it is then shown that certain heuristics solve the problem with very high probability (a rotation-based sketch of such a heuristic appears after this list). Unfortunately, it is usually not easy to justify the choice of a particular input distribution. Moreover, in many cases the analysis of algorithms under distributional assumptions is itself intractable.

• Approximation algorithms: Finally, we could relax the requirement that we always find the optimal solution. In practice it is usually hard to tell the difference between an optimal solution and a near-optimal one. It seems reasonable, then, to devise algorithms that solve NP-hard problems really efficiently, at the cost of producing solutions that are guaranteed, in every case, to be only slightly sub-optimal.
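
To make the first possibility concrete, here is a minimal sketch of the standard pseudo-polynomial dynamic program for 0/1 Knapsack. Its running time, O(n · capacity), is polynomial in the numeric value of the capacity but not in the number of bits needed to write it down, which is why it does not contradict NP-completeness:

```python
def knapsack(values, weights, capacity):
    """0/1 Knapsack by dynamic programming in O(n * capacity) time."""
    # best[w] = best total value achievable with total weight <= w
    best = [0] * (capacity + 1)
    for value, weight in zip(values, weights):
        # Traverse weights downwards so each item is used at most once.
        for w in range(capacity, weight - 1, -1):
            best[w] = max(best[w], best[w - weight] + value)
    return best[capacity]

# Example (illustrative numbers): the same toy instance as above.
print(knapsack([60, 100, 120], [10, 20, 30], 50))  # 220
```

For the second possibility, the following is a Pósa-style rotation-extension heuristic for Hamiltonian cycles, sketched under the assumption that the graph is given as an adjacency-set dictionary. It can fail on adversarial inputs, but on sufficiently dense random graphs heuristics of this kind succeed with very high probability:

```python
import random

def posa_hamiltonian(adj, restarts=100, steps=10_000):
    """Rotation-extension heuristic: returns a Hamiltonian cycle as a
    vertex list, or None if none was found within the given budget."""
    vertices = list(adj)
    for _ in range(restarts):
        path = [random.choice(vertices)]
        on_path = {path[0]}
        for _ in range(steps):
            end = path[-1]
            outside = [v for v in adj[end] if v not in on_path]
            if outside:
                v = random.choice(outside)   # extend the path
                path.append(v)
                on_path.add(v)
            else:
                v = random.choice(list(adj[end]))
                i = path.index(v)            # rotate: reverse the tail
                path[i + 1:] = reversed(path[i + 1:])
            if len(path) == len(vertices) and path[0] in adj[path[-1]]:
                return path                  # endpoints adjacent: a cycle
    return None

# Example (illustrative): the 5-cycle 0-1-2-3-4-0.
adj = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}}
print(posa_hamiltonian(adj))  # e.g., [0, 1, 2, 3, 4]
```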

In some situations, this last relaxation of the requirements for solving a problem appears to be the most reasonable. It leads to the notion of an "approximate" solution of an optimization problem.
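
As a first taste of such a guarantee, here is a minimal sketch of a classical approximation algorithm: the matching-based 2-approximation for Vertex Cover. The returned cover is never more than twice the size of an optimal one, because any cover must contain at least one endpoint of each edge the algorithm picks, and those picked edges are pairwise disjoint (the example graph is illustrative):

```python
def vertex_cover_2approx(edges):
    """Pick an uncovered edge, add BOTH endpoints; repeat.

    The picked edges form a matching, so an optimal cover needs at
    least one vertex per picked edge, while we add two: a factor-2
    guarantee on the size of the returned cover.
    """
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# Example (illustrative): the path 1-2-3-4, whose optimal cover is {2, 3}.
print(vertex_cover_2approx([(1, 2), (2, 3), (3, 4)]))  # {1, 2, 3, 4}
```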
