Approximation algorithm
In computer science and operations research, approximation algorithms are algorithms used to find approximate solutions to optimization problems. Approximation algorithms are often associated with NP-hard problems; since it is unlikely that efficient polynomial-time exact algorithms for NP-hard problems will ever be found, one settles for polynomial-time sub-optimal solutions. Unlike heuristics, which usually only find reasonably good solutions reasonably fast, approximation algorithms provide provable bounds on both solution quality and running time. Ideally, the approximation is optimal up to a small constant factor (for instance within 5% of the optimal solution). Approximation algorithms are increasingly being used for problems where exact polynomial-time algorithms are known but are too expensive due to the input size.

A typical example of an approximation algorithm is the one for vertex cover in graphs: find an uncovered edge and add both of its endpoints to the vertex cover, until none remain. The resulting cover is at most twice as large as an optimal one, since any cover must contain at least one endpoint of each edge the algorithm picked, and the picked edges share no endpoints. This is a constant factor approximation algorithm with an approximation factor of 2.
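A minimal sketch of this procedure in Python (the edge-list representation and the function name are illustrative choices for this example, not part of any standard library):

 def vertex_cover_2approx(edges):
     # Greedy 2-approximation for unweighted vertex cover: repeatedly take
     # an uncovered edge and add both of its endpoints to the cover. Any
     # optimal cover must contain at least one endpoint of each edge chosen
     # here, and the chosen edges share no endpoints, so the result has at
     # most twice the optimal size.
     cover = set()
     for u, v in edges:
         if u not in cover and v not in cover:  # edge is still uncovered
             cover.add(u)
             cover.add(v)
     return cover
 # Path on four vertices: an optimal cover has size 2; with this edge
 # order the algorithm returns all four vertices, exactly twice as many.
 print(vertex_cover_2approx([(1, 2), (2, 3), (3, 4)]))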
NP-hard problems vary greatly in their approximability; some, such as the bin packing problem, can be approximated within any factor greater than 1 (such a family of approximation algorithms is often called a polynomial time approximation scheme or "PTAS"). Others are impossible to approximate within any constant, or even polynomial, factor unless P = NP, such as the maximum clique problem.

NP-hard problems can often be expressed as integer programs (IP) and solved exactly in exponential time. Many approximation algorithms emerge from the linear programming relaxation of the integer program.
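As an illustration of the LP-relaxation approach, the following Python sketch relaxes the vertex cover integer program, solves it with scipy.optimize.linprog, and rounds up every variable with value at least 1/2 (a standard rounding that yields a 2-approximation); the function name, graph encoding and numerical tolerance are assumptions made for this example:

 from scipy.optimize import linprog
 def vertex_cover_lp_rounding(n, edges):
     # Relax the 0/1 integer program  min sum(x_v)  subject to
     # x_u + x_w >= 1 for every edge (u, w)  to  0 <= x_v <= 1,
     # solve the resulting LP, and round up every variable >= 1/2.
     # Each edge constraint forces some endpoint to at least 1/2, so the
     # rounded set is a cover; rounding at most doubles the LP optimum,
     # which is itself a lower bound on the integer optimum.
     c = [1.0] * n                                   # minimize sum of x_v
     A_ub = [[-1.0 if v in (u, w) else 0.0 for v in range(n)]
             for u, w in edges]                      # -x_u - x_w <= -1
     b_ub = [-1.0] * len(edges)
     res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n)
     return {v for v, x in enumerate(res.x) if x >= 0.5 - 1e-9}
 # Triangle on vertices 0, 1, 2: the LP optimum is 1.5 (all x_v = 1/2),
 # so all three vertices are returned; an optimal integral cover has size 2.
 print(vertex_cover_lp_rounding(3, [(0, 1), (1, 2), (0, 2)]))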
Not all approximation algorithms are suitable for all practical applications. They often use IP/LP/semidefinite solvers, complex data structures, or sophisticated algorithmic techniques which lead to difficult implementation problems. Also, some approximation algorithms have impractical running times even though they are polynomial time, for example O(n^2000). Yet the study of even very expensive algorithms is not a completely theoretical pursuit, as they can yield valuable insights. A classic example is the initial PTAS for Euclidean TSP due to Sanjeev Arora, which had prohibitive running time; yet within a year, Arora refined the ideas into an algorithm running in nearly linear time. Such algorithms are also worthwhile in applications where the running times and cost can be justified, e.g. computational biology, financial engineering, transportation planning, and inventory management. In such scenarios they must compete with the corresponding direct IP formulations.

Another limitation of the approach is that it applies only to optimization problems and not to "pure" decision problems like satisfiability, although it is often possible to conceive optimization versions of such problems, such as the maximum satisfiability problem.

Inapproximability has been a fruitful area of research in computational complexity theory since the 1990 result of Feige, Goldwasser, Lovász, Safra and Szegedy on the inapproximability of Independent Set. After Arora et al. proved the PCP theorem a year later, it has now been shown that Johnson's 1974 approximation algorithms for Max SAT, Set Cover, Independent Set and Coloring all achieve the optimal approximation ratio, assuming P ≠ NP.
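To make the maximum satisfiability example concrete, here is a sketch (in Python, with a clause encoding and names chosen for this illustration; it is not claimed to reproduce Johnson's algorithm exactly) of the simple greedy assignment by conditional expectations, which always satisfies at least half of the clauses:

 def max_sat_greedy(num_vars, clauses):
     # Variables are 1..num_vars; a clause is a list of non-zero integers,
     # where literal v means "variable v is true" and -v means it is false.
     # A uniformly random assignment satisfies each non-empty clause with
     # probability at least 1/2, so fixing variables one at a time to the
     # value with the larger conditional expectation keeps the expected
     # number of satisfied clauses at least half the total.
     assignment = {}
     def expected_satisfied():
         total = 0.0
         for clause in clauses:
             unset, satisfied = 0, False
             for lit in clause:
                 var, want = abs(lit), lit > 0
                 if var in assignment:
                     if assignment[var] == want:
                         satisfied = True
                         break
                 else:
                     unset += 1
             if satisfied:
                 total += 1.0
             elif unset > 0:
                 total += 1.0 - 0.5 ** unset
         return total
     for var in range(1, num_vars + 1):
         best_value, best_score = None, -1.0
         for value in (True, False):
             assignment[var] = value
             score = expected_satisfied()
             if score > best_score:
                 best_value, best_score = value, score
         assignment[var] = best_value
     return assignment
 # (x1 or x2) and (not x1 or x2) and (not x2): at most two of the three
 # clauses are simultaneously satisfiable; the greedy assignment finds two.
 print(max_sat_greedy(2, [[1, 2], [-1, 2], [-2]]))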
Performance guarantees
For some approximation algorithms it is possible to prove certain properties about the approximation of the optimum result. For example, in the case of a "ρ"-approximation algorithm it has been proven that the approximation "a" will not be more (or less, depending on the situation) than a factor "ρ" times the optimum solution "s":

:$\begin{cases} s \leq a \leq \rho s, & \mbox{if } \rho > 1; \\ \rho s \leq a \leq s, & \mbox{if } \rho < 1. \end{cases}$
The factor "ρ" is called the "relative performance guarantee". An approximation algorithm has an "absolute performance guarantee" or "bounded error" "ε" if it has been proven that

:$(s - \epsilon) \leq a \leq (s + \epsilon).$
Similarly, the "absolute performance ratio" $P_A$ of some approximation algorithm $A$, where $x$ refers to an instance of a problem and $R_A(x)$ is the performance guarantee of $A$ on $x$ (i.e. "ρ" for problem instance $x$), is:

:$P_A = \inf \{ r \geq 1 \mid R_A(x) \leq r,\ \forall x \}.$
That is to say that $P_A$ is the tightest bound on the approximation ratio, $r$, that holds over all possible instances of the problem. Likewise, the "asymptotic performance ratio" $P_A^\infty$ is:

:$P_A^\infty = \inf \{ r \geq 1 \mid \exists n \in \mathbb{Z}^+ ,\ R_A(x) \leq r,\ \forall x \mbox{ with } |x| \geq n \}.$
That is to say that it is the same as the "absolute performance ratio", but restricted to problem instances above some minimum size. The two ratios are both used because there exist algorithms for which they differ significantly.
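As a concrete illustration (an example added here rather than taken from the definitions above), consider the matching-based vertex cover algorithm from the introduction, denoted $A$. For every instance $x$ it returns a cover of size "a" ≤ 2"s", and on a graph consisting of a single edge it returns both endpoints while one endpoint suffices, so the factor of 2 is attained:

:$R_A(x) \leq 2 \ \ \forall x \;\Rightarrow\; P_A \leq 2; \qquad R_A(K_2) = \tfrac{2}{1} = 2 \;\Rightarrow\; P_A = 2.$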
Domination analysis provides an alternative way to analyze the quality of an approximation algorithm in terms of the rank of the computed solution in the sorted sequence of all possible solutions.

References
*Vijay V. Vazirani. "Approximation Algorithms". Springer, Berlin, 2003. ISBN 3540653678.
*Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. "Introduction to Algorithms", Second Edition. MIT Press and McGraw-Hill, 2001. ISBN 0-262-03293-7. Chapter 35: Approximation Algorithms, pp. 1022–1056.
*Dorit H. Hochbaum, ed. "Approximation Algorithms for NP-Hard Problems". PWS Publishing Company, 1997. ISBN 0-534-94968-1. Chapter 9: Various Notions of Approximations: Good, Better, Best, and More.

External links
*Pierluigi Crescenzi, Viggo Kann, Magnús Halldórsson, Marek Karpinski and Gerhard Woeginger, [http://www.nada.kth.se/~viggo/wwwcompendium/ "A compendium of NP optimization problems"].