Lab: Local Search
CSC 261 - Weinman
- Summary:
- You will explore and compare the performance of various
local search algorithms.
Preparation
- Start MediaScheme.
- On the MathLAN, start a Terminal window
- Launch GIMP by typing the command rebelsky/bin/gimp &
- Use the pull-down menu Xtns > MediaScript > Console
- The MediaScript interpreter should appear
- To use the problem representations and local search algorithms provided,
copy the following into your definitions pane:
(load "/home/weinman/courses/CSC261/code/local-search/eight-queens.scm")
(load "/home/weinman/courses/CSC261/code/local-search/local-search.scm")
(load "/home/weinman/courses/CSC261/code/general.scm")
Exercises
- You will be experimenting with a variety of local searching algorithms
on the eight queens problem. Before we can do any experiments you'll
need to get familiar with the procedures for representing the eight
queens problem that have been provided for you.
- A state is represented as a list of eight numbers, representing the
rows of each of the columns on an 8x8 board. For instance,
(list 1 2 3 4 8 7 6 5)
indicates queens starting (from left-to-right) up the main diagonal,
and then running from the top-down across the right half of the board.
Create your own eight queens state (in the definitions pane).
(define state (list ____))
- The procedure (eight-queens-successors state)
produces a list of all possible "successors" to a board with
eight queens on it; that is, every state in which one queen has moved
to a different row. Using this procedure, determine the
successors to your state.
- How many successors should there be? Does eight-queens-successors
give you the number you expect?
- The procedure (eight-queens-score state)
produces the negative of the number of attacking pairs of queens
for a given board configuration. (Since our algorithms try to maximize
an objective function, and we want to minimize the number of
attacking pairs, we use the negative for our objective.) Using this
procedure, determine the number of attacking pairs of queens in your
state.
- Determine (without manually counting) the number of attacking pairs
of queens for all successors to your state. Does the score increase
for any of them?
- Now we can try using a simple local search method to find an improved
state that (hopefully) has fewer attacking queens.
- The general procedure (hill-climbing initial-state
successor-fun value-fun)
takes a starting problem configuration (initial-state), a
procedure that takes a state and generates its successors (successor-fun),
and a procedure that measures the objective function value for a given
state (value-fun). Find a (hopefully) improved board configuration,
starting from your initial state and using the successor and
score functions for the eight queens problem introduced above.
(define local-optimum _____)
- What is the score of the resulting state? Does it improve?
- The optimal score is zero (for no pairs of attacking queens). Using
the same strategy as above, determine scores of all the successors
of local-optimum. Is the state a local maximum, global maximum,
or (at least partial) plateau?
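For intuition, steepest-ascent hill climbing can be sketched as
follows. (This is only a sketch: the provided hill-climbing in
local-search.scm may differ in its details, and the helper argmax is a
hypothetical name defined here for illustration.)
;; Return the element of lst (assumed non-empty) maximizing value-fun.
(define (argmax lst value-fun)
  (if (null? (cdr lst))
      (car lst)
      (let ((rest-best (argmax (cdr lst) value-fun)))
        (if (> (value-fun (car lst)) (value-fun rest-best))
            (car lst)
            rest-best))))
;; Move to the best-scoring successor while doing so improves the
;; objective; stop at the first state no successor improves upon.
(define (hill-climbing-sketch state successor-fun value-fun)
  (let ((best (argmax (successor-fun state) value-fun)))
    (if (> (value-fun best) (value-fun state))
        (hill-climbing-sketch best successor-fun value-fun)
        state)))
Note that this stops at the first local maximum or plateau it reaches,
which is exactly why the questions above matter.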
- Simple hill-climbing always chooses the successor that gives the best
improvement. What if, instead, we chose the first successor that gives
us some amount of improvement? The procedure first-choice-hill-climbing
takes the same parameters as hill-climbing and does exactly
this.
- Use this procedure to find another optimum, starting from your initial
state.
(define first-choice-optimum ______)
- What is the score of the resulting state? Is it any better than local-optimum?
- It is possible that we quickly find our way to a local maximum with
the simple search routines above. What if we perform our search from
multiple starting places and use the best result? This is what hill-climbing-random-restart
does.
- The first parameter to hill-climbing-random-restart takes
a list of starting states. Define your own list of (3-5) potential
starting states.
(define state-list (list _____ _____ _____))
- Use hill-climbing-random-restart to find another optimum
with your initial states.
(define restart-optimum ______)
- What is the score of the resulting state? Is it any better than the
previous scores?
- All of these search methods only take uphill steps. What if we occasionally
take some downhill steps? This is what simulated annealing does. The
procedure (simulated-annealing initial-state successor-fun
value-fun schedule) again takes just one starting state, but has
an additional parameter schedule that takes an integer (representing
the annealing iteration) and produces a real number, the temperature
to use at the given iteration. Annealing will continue until the temperature
is zero. The following is an example schedule that exponentially decreases
the temperature at every iteration, up to a limit of 500 iterations:
(lambda (iter) (if (> iter 500) 0 (expt 0.50 iter)))
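For instance, evaluating this schedule at a few iterations shows the
exponential decay: the temperature halves at each step and drops to
zero past the iteration limit.
(define schedule (lambda (iter) (if (> iter 500) 0 (expt 0.50 iter))))
(schedule 1)   ; => 0.5
(schedule 3)   ; => 0.125
(schedule 501) ; => 0, which ends the annealing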
- Use simulated-annealing and a schedule of your choice to
find another optimum from your original initial state. (Be sure your
schedule returns zero at some point to avoid an infinite loop.)
(define annealing-optimum ______)
- What is the score of the resulting state? Is it any better than the
previous scores?
- Now that you've gotten some experience with the four basic local search
algorithms, you can do some more formal analysis of their performance.
Record your answers to the following questions somewhere.
- What percentage of problems do you expect hill climbing to solve optimally?
- What percentage of problems do you expect first-choice hill climbing
to solve optimally?
- What percentage of problems do you expect hill climbing with random
restarts to solve optimally?
- What percentage of problems do you expect simulated annealing to solve
optimally?
- In order to do the analysis, we will need to generate many problems.
Obviously, doing this manually would be quite time consuming. Fortunately,
(random-eight-queens-state) will generate a random (valid)
board configuration for you.
- How can we generate several board configurations, even random ones,
easily? Recall the function (iota n)
will create a list of the integers from 0 to n-1. We should be able
to then create a random state for each of these integers using map
with an anonymous procedure that ignores its argument. Use this strategy
to create a list of 1000 random board states.
(define problems (map ______ (iota 1000)))
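As a small illustration of this map-with-ignored-argument idiom (the
symbol fresh here is arbitrary), mapping a procedure that ignores its
argument produces one result per list element:
(map (lambda (n) 'fresh) (iota 3))
; => (fresh fresh fresh)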
- Use map to find solutions to each of these problems using
hill-climbing.
(define climbing-solutions
  (map (lambda (state) _______)
       problems))
- Use map to determine the scores of all of these solutions
with procedure eight-queens-score.
(define climbing-scores ______)
- How many of the scores are optimal? We can use map to determine
which are optimal by giving a one for those with a score of zero:
(map (lambda (score) (if (zero? score) 1 0)) climbing-scores)
Use apply in conjunction with the map above to tally
the optimal (zero-valued) scores. Record this value so we can discuss
it later.
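As a reminder of how apply works here: it spreads a list out as the
arguments to a procedure, so summing a list of ones and zeros counts
the ones.
(apply + (list 1 0 1 1 0))
; => 3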
- How does this result compare to your prediction?
- Repeat the above analysis using first-choice-hill-climbing
and the same initial states.
- Find solutions to each of the problems.
(define first-choice-solutions ______)
- Determine the scores of all of these solutions
(define first-choice-scores ______)
- Use apply in conjunction with map to tally the optimal
(zero-valued) scores. Record this value so we can discuss it later.
- How does this result compare to your prediction?
- The analysis for hill-climbing-random-restart is not quite
as straightforward. The book reports (p. 114) that if p is
the proportion of problems successfully solved by hill climbing, then
roughly 1/p restarts are needed. For example, if hill climbing solves
14% of problems (p = 0.14), you should expect to need about 1/0.14,
or roughly 7, restarts.
- Using your result from above, how many restarts do you expect to need
in order to have a high chance of success?
- Instead of creating a list of problems (as above) we will create a
list of lists of problems, each to be used as input to the search
procedure. If you predict you will need num-restarts starting
points, then we can create a list of 1000 lists containing num-restarts
problems with the following definition.
(define restart-problems
  (map (lambda (p-num)
         (map (lambda (r-num) (random-eight-queens-state))
              (iota num-restarts)))
       (iota 1000)))
(Be sure to format this definition nicely in your definitions pane so
that it makes sense to you.)
- We may now continue our analysis in a similar vein as before. Use map
to find solutions to each of these sets of problems using hill-climbing-random-restart.
(define restart-solutions
  (map (lambda (state-list) _______)
       restart-problems))
- Determine the scores of all of these solutions
(define restart-scores ______)
- Use apply in conjunction with map to tally the optimal
(zero-valued) scores. Record this value so we can discuss it later.
- How does this result compare to your prediction?
- Analyzing simulated-annealing is similar to hill-climbing,
except we also have to worry about the annealing schedule. Use a schedule
of the form given above, choosing your own iteration limit (500 in
the example) and exponent base 0 < b < 1 (0.50
in the example).
- Find solutions to each of the problems in problems.
(define annealing-solutions ______)
- Determine the scores of all of these solutions
(define annealing-scores ______)
- Use apply in conjunction with map to tally the optimal
(zero-valued) scores. Record this value so we can discuss it later.
- How does this result compare to your prediction?
We will discuss your results as a class. In the meantime, you are
encouraged to explore the influence of your scheduling choices on
the success of simulated annealing below.
For Those with Extra Time
How much do you suppose the schedule has to do with the success of
simulated annealing? Experiment with different exponent bases and
iteration limits. Higher bases b result in slower cooling schedules.
Thus, you are more likely to take sub-optimal steps early on (hopefully
reaching a state from which an optimal configuration can be reached).
How does this influence the number of iterations needed?
- Start increasing your base from 0.5 until you see a noticeable increase
in the success rate. How high does it have to be to reach the optimum
nearly always?
- If you do not reach the optimum very often, try running annealing
for a greater number of iterations. How many iterations do you need?
What is the base?
File translated from TeX by TTH, version 3.80, on 14 Sep 2009, 10:34.