
approximation algorithms

Uploaded 2019-09-21 08:36:44 · PDF file, 874.09 KB · 47 views
Vazirani's classic book on approximation algorithms (English edition).

Chapter 1  Introduction

This book deals with designing polynomial time approximation algorithms for NP-hard optimization problems. Typically, the decision versions of these problems are in NP, and are therefore NP-complete. From the viewpoint of exact solutions, all NP-complete problems are equally hard, since they are inter-reducible via polynomial time reductions. Typically such a reduction maps optimal solutions of the given instance to optimal solutions of the transformed instance and preserves the number of solutions. Indeed, the counting versions of all known NP-complete problems are #P-complete, and typically the proof follows directly from the proof of NP-completeness. (In this paragraph, by "typically" we mean that a conservative estimate of the number of exceptions, among the hundreds of problems that have been studied, is 3.)

The picture looks different when these problems are studied from the viewpoint of efficiently obtaining near-optimal solutions: polynomial time reductions do not preserve near-optimality of solutions, and NP-complete problems exhibit a rich set of possibilities, all the way from allowing approximability to any required degree, to essentially not allowing approximability at all.

A problem is polynomial time solvable only if it has the algorithmically relevant combinatorial structure that can be used as "footholds" to efficiently home in on a solution. The process of designing a polynomial time algorithm is a two-pronged attack: unraveling this structure in the given problem, and finding algorithmic techniques that can exploit this structure. Although NP-hard problems do not offer footholds to find optimal solutions efficiently, they still offer footholds to find near-optimal solutions efficiently. So, at a high level, the process of designing approximation algorithms is not very different: it still involves unraveling relevant structure and finding algorithmic techniques to exploit it. Typically, the structure turns out to be more elaborate, and often the algorithmic techniques result from generalizing and extending some of the powerful algorithmic tools developed in the study of exact algorithms. On the other hand, looking at this process a little more closely, one can see that it has its own general principles. In this chapter, we illustrate some of these in an easy setting.

Basic definitions

We first formally define the notions of an optimization problem and an approximation algorithm. An optimization problem, Π, consists of:

- A set of instances, D_Π. We will assume that all numbers specified in an input are rationals, since our model of computation cannot handle infinite precision arithmetic.
- Each instance I ∈ D_Π has a set of feasible solutions, S_Π(I). Further, there is a polynomial time algorithm that, given a pair (I, s), decides whether s ∈ S_Π(I).
- There is a polynomial time computable objective function, f_Π, that assigns a nonnegative rational number to each pair (I, s), where I is an instance and s is a feasible solution for I. The objective function is frequently given a physical interpretation, such as cost, length, weight, etc.
- Finally, Π is specified to be either a minimization problem or a maximization problem.

An optimal solution for an instance of a minimization (maximization) problem is a feasible solution that achieves the smallest (largest) objective function value. We will denote by OPT_Π(I) the objective function value of an optimal solution to instance I. In this book we will shorten this to OPT when it is clear that we are referring to a generic instance of the problem being studied.
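To make these definitions concrete, here is a minimal Python sketch (not from the book; MinimizationProblem and brute_force_opt are illustrative names) that packages the feasibility test and the objective function, and computes OPT(I) by exhaustive search over a supplied candidate set, which is only viable for toy instances.

```python
from dataclasses import dataclass
from typing import Any, Callable, Iterable

@dataclass
class MinimizationProblem:
    # Polynomial-time test that s is a feasible solution for instance I, i.e. s in S_Pi(I).
    feasible: Callable[[Any, Any], bool]
    # Polynomial-time computable objective f_Pi(I, s), a non-negative rational.
    objective: Callable[[Any, Any], float]

def brute_force_opt(problem: MinimizationProblem, instance: Any,
                    candidates: Iterable[Any]) -> float:
    """OPT(I) by exhaustive search over a supplied candidate set.

    Only meaningful for toy instances: for NP-hard problems the space of
    feasible solutions is typically exponential in |I|, which is exactly why
    approximation algorithms compare against a lower bound on OPT instead.
    """
    return min(problem.objective(instance, s)
               for s in candidates
               if problem.feasible(instance, s))
```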
The size of instance I, denoted by |I|, is defined as the number of bits needed to write I under the assumption that all numbers occurring in the instance are written in binary.

Let us illustrate these definitions in the context of the following minimization problem.

Problem 1.1 (Minimum vertex cover) Given an undirected graph G = (V, E), find a minimum cardinality vertex cover, i.e., a set V' ⊆ V such that every edge has at least one endpoint incident at V'.

Instances of this problem are undirected graphs. Given an instance G = (V, E), feasible solutions are all vertex covers for G. The objective function value for a solution V' ⊆ V is the cardinality of V'. Any minimum cardinality vertex cover is an optimal solution.

An approximation algorithm produces feasible solutions that are "close" to the optimal; the formal definition differs for minimization and maximization problems. Let Π be a minimization (maximization) problem, and let δ be a positive real number, δ ≥ 1 (δ ≤ 1). An algorithm A is said to be a factor δ approximation algorithm for Π if, on each instance I, A produces a feasible solution s for I such that f_Π(I, s) ≤ δ · OPT(I) (f_Π(I, s) ≥ δ · OPT(I)). An important requirement is that A be a polynomial time algorithm, i.e., its running time should be bounded by a fixed polynomial in the size of instance I. Clearly, the closer δ is to 1, the better is the approximation algorithm.

On occasion, we will relax this definition and allow A to be randomized, i.e., it will be allowed to use the flips of a fair coin. Assume we have a minimization problem. Then, we will say that A is a factor δ randomized approximation algorithm for Π if on each instance I, A produces a feasible solution s for I such that

    Pr[f_Π(I, s) ≤ δ · OPT(I)] ≥ 1/2,

where the probability is over the coin flips. The definition for a maximization problem is analogous.

Thus, an algorithm is a factor 2 approximation algorithm for the minimum vertex cover problem if it produces a vertex cover whose cardinality is within twice that of an optimal cover, and its running time is polynomial in the number of vertices in the instance (notice that the size of an instance is bounded by a polynomial in the number of vertices).

Lower bounding OPT

While designing an approximation algorithm for an NP-hard problem, one is immediately faced with the following dilemma: for establishing the approximation guarantee, the cost of the solution produced by the algorithm needs to be compared with the cost of an optimal solution. However, it is NP-hard not only to find an optimal solution, but also to compute the cost of such a solution. So, how do we establish the approximation guarantee? The answer to this question provides a key step in the design of approximation algorithms. Let us show this in the context of the minimum vertex cover problem.

Designing an approximation algorithm for vertex cover

We will get around the difficulty mentioned above by coming up with a "good" polynomial time computable lower bound on the size of the optimal cover. Let us observe that the size of a maximal matching in G provides a lower bound: any vertex cover has to pick at least one endpoint of each matched edge. This lower bounding scheme immediately suggests the following simple algorithm.

Algorithm 1.2 (Minimum vertex cover) Find a maximal matching in G, and output the set of matched vertices.
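Below is a small Python sketch of Algorithm 1.2, assuming the graph is given as a list of edges; the function names are illustrative. A single greedy pass over the edge list already yields a maximal matching, and the matched vertices form the cover.

```python
def maximal_matching(edges):
    """One greedy pass: keep an edge whenever both endpoints are still unmatched.
    The result is a maximal (not necessarily maximum) matching."""
    matched, matching = set(), []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

def vertex_cover_via_matching(edges):
    """Algorithm 1.2: output the set of matched vertices.
    The cover has cardinality 2|M|, which Theorem 1.3 below bounds by 2 * OPT."""
    cover = set()
    for u, v in maximal_matching(edges):
        cover.update((u, v))
    return cover

# On the path 1-2-3-4 the greedy matching is {(1,2), (3,4)}, so the cover is {1, 2, 3, 4}.
print(vertex_cover_via_matching([(1, 2), (2, 3), (3, 4)]))
```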
Theorem 1.3 Algorithm 1.2 is a factor 2 approximation algorithm for the minimum vertex cover problem.

Proof: No edge can be left uncovered by the set of vertices picked; otherwise such an edge could have been added to the matching, contradicting its maximality. Let M be the matching picked. As argued above, |M| ≤ OPT. The cover output by the algorithm has cardinality 2|M|, which is therefore at most 2 · OPT. In particular, the following min-max relation holds:

    max over matchings M of |M|  ≤  min over vertex covers U of |U|.
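As a sanity check of this guarantee (illustrative code, not from the book), the following self-contained sketch compares the matching-based cover with a brute-force optimal cover on a toy graph and verifies |M| ≤ OPT ≤ 2|M|.

```python
from itertools import combinations

def min_vertex_cover_brute_force(vertices, edges):
    """Optimal cover by exhaustive search; exponential time, toy graphs only."""
    for k in range(len(vertices) + 1):
        for subset in combinations(vertices, k):
            chosen = set(subset)
            if all(u in chosen or v in chosen for u, v in edges):
                return chosen
    return set(vertices)

vertices = [1, 2, 3, 4]
edges = [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]   # a 4-cycle plus one chord

# Greedy maximal matching, exactly as in Algorithm 1.2.
matched, matching = set(), []
for u, v in edges:
    if u not in matched and v not in matched:
        matching.append((u, v))
        matched.update((u, v))

opt_cover = min_vertex_cover_brute_force(vertices, edges)
# |M| <= OPT <= |cover picked| = 2|M| <= 2 * OPT
assert len(matching) <= len(opt_cover) <= len(matched) <= 2 * len(opt_cover)
print(len(matching), len(opt_cover), len(matched))   # 2 2 4
```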
[...] (though strongly believed, this is still a conjecture). Since most of these problems are in fact NP-hard, charting the landscape of approximability of these problems via efficient algorithms becomes a compelling subject of scientific inquiry in computer science and mathematics. This task involves identifying cornerstone problems, finding relationships among problems, understanding the kinds of combinatorial structures that play the role of "footholds" referred to earlier, developing appropriate algorithmic techniques, and developing the theory of hardness of approximability so that, at least for the key problems, we obtain positive and negative approximability results with matching bounds. Exact algorithms have been studied intensively for over three decades, and yet basic insights are still being obtained. It is reasonable to expect the theory of approximation algorithms to take its time.

NP-hard problems abound in practice. On the one hand this fact adds to the importance of the area of approximation algorithms, and on the other hand, it raises our expectations of it. At the very least, we can expect this theory to provide guidelines for use in practice. For instance, simply knowing the limits of approximability of natural problems can be useful in choosing the right problem to model a realistic situation with.

What about the possibility of using the algorithms developed directly in practice? Can we expect this theory to have such direct impact on practice? One is inclined to say no, simply because practitioners are normally looking for algorithms that give solutions that are very close to optimal, say having error within 2% or 5% of the optimal, not within 100%, a common bound in several approximation algorithms. Further, by this token, what is the usefulness of improving the approximation guarantee from, say, factor 2 to 3/2?

Let us address both issues and point out some fallacies in these assertions. The approximation guarantee only reflects the performance of the algorithm on the most pathological instances. Perhaps it is more appropriate to view the approximation guarantee as a measure that forces us to explore deeper combinatorial structure of the problem (really getting into its guts!) and discover more powerful tools for exploiting this structure. It has been observed that the difficulty of constructing tight examples increases considerably as one obtains algorithms with better guarantees. Indeed, for some recent algorithms [?], obtaining a tight example has been a paper by itself. These and other sophisticated algorithms do have error bounds of the desired magnitude, 2% to 3%, on typical instances, even though their worst case error bounds are much higher [?]. In addition, the theoretically proven algorithm should be viewed as a core algorithmic idea that needs to be fine-tuned to the types of instances arising in specific applications. All this points to the importance of implementing and experimenting with the algorithms developed.

More than half of this book is based on the surge of progress made in the last eight years, after a long period of lull. Already, in a few instances, sophisticated algorithmic ideas have found their way into industrial products (see, for example, [ ]). For more widespread use, however, we will have to wait until these ideas diffuse into the right circles; hopefully, this book will help speed up this process.

Exercise 1.5 Show that the greedy algorithm for minimum vertex cover achieves an approximation guarantee of O(log n). Give a tight example for this algorithm. (A sketch of the greedy rule in question appears after Exercise 1.6.)

Exercise 1.6 Give a lower bounding scheme for the weighted version of vertex cover, in which you are trying to minimize the weight of the cover picked.
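For reference, here is a sketch of the greedy rule that Exercise 1.5 presumably refers to: repeatedly pick a vertex of maximum degree in the graph of still-uncovered edges. The O(log n) analysis and the tight example are the content of the exercise and are not attempted here.

```python
def greedy_vertex_cover(edges):
    """Repeatedly pick a vertex covering the most still-uncovered edges,
    add it to the cover, and discard the edges it covers."""
    remaining = set(edges)
    cover = set()
    while remaining:
        degree = {}
        for u, v in remaining:
            degree[u] = degree.get(u, 0) + 1
            degree[v] = degree.get(v, 0) + 1
        best = max(degree, key=degree.get)   # a highest-degree vertex in the residual graph
        cover.add(best)
        remaining = {(u, v) for u, v in remaining if best not in (u, v)}
    return cover

# On the path 1-2-3-4 this returns a cover of size 2, e.g. {2, 3}.
print(greedy_vertex_cover([(1, 2), (2, 3), (3, 4)]))
```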