Local Search for Combinatorial Optimisation Problems

"Shortest Path problem, the Traveling Salesman problem, or the Knapsack problem, are a few
of the many combinatorial optimization problems, which have very wide applications in all
spheres of our lives. Unfortunately, there is a large class of problems, denoted by class NP, for
which the time required to solve them is an exponential function of their input size. So, for
these difficult problems, one settles for approximate or near optimal solutions."
By Prabha Sharma
An optimization problem consists of minimising or
maximising an objective function f over a set F of
feasible solutions. When F contains a finite
number of feasible solutions, the problem is
known as a combinatorial optimization problem.
The objective function f for such problems is
usually a linear function. The Shortest Path
problem, the Traveling Salesman problem and the
Knapsack problem are a few of the many
combinatorial optimization problems, which
have very wide applications in all spheres of our
lives.
Finiteness of the feasible set F can be quite
misleading. One would think that a computer
could solve any instance of a combinatorial
optimization problem by systematically
evaluating the objective function f for every
feasible solution in F and selecting the one with
minimum (or maximum) cost. But finding the
minimum distance tour of the capital cities of the
USA this way would require 50! elementary
computations, and with the fastest computer
available today this would take many billions of
years. A lifetime is certainly not enough!
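The scale of 50! is easy to check directly. The short computation below assumes, purely for illustration, a machine that can evaluate a billion tours per second; the figure is not from the article.

```python
import math

tours = math.factorial(50)      # number of orderings of 50 cities
rate = 10**9                    # assumed tour evaluations per second (illustrative)
seconds_per_year = 365 * 24 * 3600
years = tours / (rate * seconds_per_year)
print(f"50! = {tours:.3e} orderings, roughly {years:.1e} years to enumerate")
```

Even at a billion tours per second, exhaustive enumeration would take on the order of 10^47 years.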
Fig 1: A TSP solution to be worked out for 50 US capital cities.
Thus, algorithms need to be designed for solving
combinatorial optimization problems which
require a reasonable amount of effort for arriving
at the optimal solution. By a reasonable amount of
effort it is meant that the time taken to arrive at
the optimal solution should be a small degree
polynomial in the number of bits required to input
the problem data into the computer, so that the
effort required to solve large instances of the
problem does not blow up with the input size of
the instance. This class of problems is denoted
by P. Unfortunately, there is a large class of
problems, denoted by class NP, for which the
time required to solve them is an exponential
function of their input size. So, for these difficult
problems, one settles for approximate or near
optimal solutions. The literature abounds with
various methods, known as heuristics, for
obtaining approximate solutions. Genetic
Algorithms, Simulated Annealing and Local
Search are some of them.
Local Search Methods
Here, the search for an approximate solution is
conducted with respect to a neighbourhood
structure defined on the set of feasible solutions
F. For every x ∈ F, N(x) ⊆ F is a neighbourhood
of x. Feasible solutions in N(x) are called
neighbours of x, or solutions adjacent to x. The
Simplex algorithm is a local search algorithm
where N(x) consists of all basic feasible
solutions which differ from x in only one basic
column. For the traveling salesman problem,
N_k(x) is the set of all tours which differ from x
in at most k edges. A locally optimal solution
x* ∈ F is at least as good as every solution in
N(x*). The neighbourhood N(x) is searched at a
point x ∈ F for improvement by the subroutine

    Improve(x) = any y ∈ N(x) with f(y) < f(x), if such a y exists;
                 'no' otherwise.

The search begins at some initial feasible
solution x^0 ∈ F and uses the subroutine Improve
to search for a better solution in its
neighbourhood, so long as an improved solution
exists. The neighbourhood search is repeated
from the new solution; we stop when a locally
optimal solution is reached.
Some Critical Issues
Before the search for a locally optimal solution
can begin, it has to be decided how to obtain an
initial feasible solution. It is sometimes practical
to execute local search from several different
starting points and to choose the best result.
Next, a 'good' neighbourhood has to be chosen
for the problem at hand, together with a method
for searching it. The choice is normally guided by
intuition, because very little theory is available as
a guide. A clear trade-off can be seen between
small and large neighbourhoods: a larger
neighbourhood would seem to hold the promise
of better local optima, but will take longer to
search. The design of effective local search
algorithms has been, and remains, very much an
art. The analysis of the performance of a
standard local search algorithm is concerned
with the following: (i) time complexity, i.e. the
time required by the algorithm to arrive at the
final answer; (ii) the size of the neighbourhood to
be searched; (iii) the choice of the pivot element,
i.e. which better neighbouring solution to move
to; and (iv) the number of iterations required to
reach a locally optimal solution.
Usually the size of N(x) is a small degree
polynomial in the number of variables, and hence
the neighbourhood search takes very little time.
But the size of F being very large, the number of
iterations before the algorithm terminates may
be large too. For the Simplex algorithm, the size
of the neighbourhood to be searched is O(m(n - m)),
where m is the number of constraints and n the
number of variables. Since the total number of
basic feasible solutions is bounded above by the
binomial coefficient C(n, m), some instances of
linear programming problems have been
constructed for which the Simplex algorithm will
go through each vertex before arriving at the
optimal solution. Thus the number of iterations
for the Simplex algorithm cannot be bounded by
a polynomial in n and m. In fact, when there is an
exponential range of objective function costs, no
a priori bound on the number of iterations exists
that is better than exponential. In such cases one
would like to know whether or not the local
search algorithm will terminate in polynomial
time [6].
There is a class of problems for which a choice
of neighbourhood function exists with respect to
which the locally optimal solution is indeed the
optimal solution. Most of these problems are in
class P. Even though the Simplex algorithm is not
a polynomial time algorithm, Linear Programming
problems have been shown to be in class P, i.e.
polynomial time algorithms have been
constructed for solving them. For problems in
NP, nothing can be said about the quality of the
locally optimal solution, i.e. how close it is to the
optimal solution, unless, of course, N(x) is the
whole of the feasible set F. For the Travelling
Salesman problem, for no value of k except
k = n is a locally optimal solution with respect to
N_k guaranteed to be the optimal solution. The
choice of the neighbourhood function and the
pivot rule certainly play a role in determining the
quality of the locally optimal solution, but in the
absence of any other information it is difficult to
say what this exact relationship is.
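The general scheme, starting from a feasible solution and repeatedly moving to an improving neighbour returned by an Improve subroutine, can be sketched for the TSP with the 2-edge-exchange neighbourhood N_2. This is only an illustrative sketch: the tour representation, distance matrix and function names are assumptions, not taken from the article.

```python
import itertools

def tour_length(tour, dist):
    """Total length of the closed tour, given a symmetric distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def improve(tour, dist):
    """Return any strictly better tour in N_2(tour), obtained by reversing one
    contiguous segment (a 2-edge exchange), or None if tour is locally optimal."""
    current = tour_length(tour, dist)
    for i, j in itertools.combinations(range(len(tour)), 2):
        candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        if tour_length(candidate, dist) < current:
            return candidate
    return None

def local_search(tour, dist):
    """Repeat the neighbourhood search from each new solution; stop when a
    locally optimal solution is reached."""
    while True:
        better = improve(tour, dist)
        if better is None:
            return tour
        tour = better
```

Note that, as the article stresses, the tour this returns is only locally optimal with respect to N_2; it need not be the shortest tour.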
Work Done at IIT Kanpur
In a series of papers [3], [4] and [5], I have
considered three different scheduling problems
for which I have constructed a local search
algorithm ADJACENT. It has been shown that in
each of the three cases the time complexity of
ADJACENT is polynomial in the number of jobs n,
and that the locally optimal solution obtained
dominates more than half the feasible solutions.
The three problems considered are:
(i) Minimising the variance of completion times
of n jobs on a single machine.
(ii) Minimising the flow-time of n deteriorating
jobs with equal processing times and arbitrary
rates of deterioration.
(iii) Minimising the makespan of n deteriorating
jobs with arbitrary processing times and arbitrary
rates of deterioration.
The set of feasible solutions for the three
problems is the set of n! schedules (permutations)
of the processing times p_1, p_2, ..., p_n of the
n jobs. Bowman [1] defined the convex hull P_n of
these n! permutations as a subset of R^n and
showed that the n! permutations are the
vertices/extreme points of P_n. Gupta and I, in a
later paper [2], defined adjacency on P_n. We
showed that for any vertex of P_n there are (n-1)
adjacent vertices. Figure 2 depicts all 4! or 24
permutations and their neighbourhood structure.
Fig 2: The neighbourhood structure of P_4.
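The (n-1) adjacent vertices can be generated in code. The sketch below represents a schedule as a permutation of 1..n and uses the standard permutohedron adjacency, swapping the positions that hold the consecutive values k and k+1; that this coincides exactly with the adjacency criterion of [2] is an assumption made here for illustration.

```python
def neighbours(perm):
    """Vertices of the permutohedron adjacent to perm: for each k = 1..n-1,
    swap the positions holding the consecutive values k and k+1."""
    n = len(perm)
    out = []
    for k in range(1, n):
        i, j = perm.index(k), perm.index(k + 1)
        q = list(perm)
        q[i], q[j] = q[j], q[i]
        out.append(tuple(q))
    return out
```

Each vertex has exactly n-1 neighbours, and the relation is symmetric: swapping k and k+1 again restores the original permutation.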
By exploiting the structure of the three objective
functions, I could show that conditional
dominance exists in F. Using this conditional
dominance, choosing the neighbourhood
function defined above on P_n and choosing the
pivot rule of moving to the best neighbour, I
could prove the following results:
(a) ADJACENT moves along a straight path on
P_n. The longest straight path on P_n contains
O(n^2) vertices of P_n, as proved in [2]; it
therefore follows that the number of iterations
for ADJACENT is bounded above by O(n^2).
Since the size of the neighbourhood to be
searched at each iteration is O(n-1), the total
time complexity of ADJACENT is O(n^3).
(b) The locally optimal solution obtained by
ADJACENT dominates more than half the
feasible solutions.
Thus, by exploiting the structure of the objective
function and choosing the right neighbourhood
function and the right pivot rule, it is possible to
bound the number of iterations for a standard
local search algorithm, and also give a guarantee
for the quality of the locally optimal solution
obtained.
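For problem (i), the pivot rule of moving to the best neighbour can be illustrated with a generic best-improvement local search over adjacent-position swaps of the schedule. This is a sketch of the general scheme only, not the ADJACENT algorithm of [3]; the neighbourhood used here is an assumption for illustration, and the sketch carries none of the complexity or dominance guarantees proved in the papers.

```python
def completion_time_variance(schedule):
    """Variance of job completion times for a given processing order."""
    completions, t = [], 0.0
    for p in schedule:
        t += p
        completions.append(t)
    mean = sum(completions) / len(completions)
    return sum((c - mean) ** 2 for c in completions) / len(completions)

def best_neighbour_search(schedule):
    """Move to the best of the n-1 adjacent-position swaps until none improves."""
    schedule = list(schedule)
    while True:
        best_val, best_sched = completion_time_variance(schedule), None
        for i in range(len(schedule) - 1):
            cand = schedule[:]
            cand[i], cand[i + 1] = cand[i + 1], cand[i]
            v = completion_time_variance(cand)
            if v < best_val:
                best_val, best_sched = v, cand
        if best_sched is None:
            return schedule  # locally optimal w.r.t. adjacent swaps
        schedule = best_sched
```

The returned schedule is locally optimal for this neighbourhood: no single swap of adjacent positions reduces the variance further.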
The claim I have tried to make through the series
of papers is that for a class of combinatorial
optimisation problems, partial and/or conditional
dominance among the feasible solutions is
inherent in the structure of the objective
functions. By discovering this inherent
dominance it is possible to design a local search
algorithm which has polynomial time complexity
and which obtains good quality locally optimal
solutions.
References
1. Bowman, V. J. (1972), "Permutation Polyhedra,"
SIAM J. Applied Math 22, 580-589.
2. Gaiha, P. and S. K. Gupta (1977), "Adjacent
Vertices on a Permutohedron," SIAM J. Applied
Math 32, 323-327.
3. Sharma, Prabha (2002), "Permutation Polyhedra
and Minimisation of the Variance of Completion
Times on a Single Machine," Journal of Heuristics
8, 467-485.
4. Sharma, Prabha (2001), "Permutation Polyhedra
and Minimising Flow Time of Deteriorating Jobs,"
in Manju Lata Agarwal and Kanwar Sen (eds.),
Recent Developments in Operations Research,
377-394, Narosa, Delhi.
5. Sharma, Prabha (2003), "Minimising Makespan
of Deteriorating Jobs with Common Due Dates:
An Efficient Heuristic," presented at the 18th
International Symposium on Mathematical
Programming, Copenhagen, Denmark.
6. Yannakakis, Mihalis (1999), "Computational
Complexity," in Emile Aarts and Jan Karel Lenstra
(eds.), Local Search in Combinatorial Optimization,
19-56, John Wiley, New York.
About the author: Prabha Sharma is a Professor in the Department of Mathematics at IIT
Kanpur. She received her PhD in Industrial Engineering from Northwestern University,
USA, in 1970. She was a post doctoral fellow in the Department of Industrial and Systems
Engineering at the University of Florida, Gainesville, USA, before joining IIT Kanpur in
1972. Her research areas include Network Flows, Network Design and Local Search
for Combinatorial Optimisation.