Lab: Local Search
CSC 261 - Artificial Intelligence - Weinman
- Summary: You will explore and compare the performance of various
local search algorithms.
Preparation
- Start DrScheme.
- To use the problem representations and local search algorithms provided,
copy the following into your definitions pane:
-
(load "/home/weinman/courses/CSC261/code/local-search/eight-queens.scm")
(load "/home/weinman/courses/CSC261/code/local-search/local-search.scm")
(load "/home/weinman/courses/CSC261/code/local-search/general.scm")
Exercises
A: Eight Queens
You will be experimenting with a variety of local search algorithms
on the eight queens problem. Before running any experiments, you'll
need to get familiar with the procedures that have been provided for
representing the eight queens problem.
- A state is represented as a list of eight numbers, giving the row
occupied by the queen in each column of an 8x8 board. For instance,
-
(list 1 2 3 4 8 7 6 5)
indicates queens running (from left to right) up the main diagonal,
then descending from the top across the right half of the board.
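To make the representation concrete, here is a rough sketch of how
attacking pairs could be counted under this encoding. This is an
illustration only, not the provided eight-queens-score; two queens
attack each other when they share a row or a diagonal (their columns
always differ by construction).

```scheme
;; Illustrative sketch only -- not the provided eight-queens-score.
;; Count the pairs of queens that attack each other: two queens
;; attack when they share a row or a diagonal (columns always differ).
(define (attacking-pairs state)
  (define (attacks? r1 c1 r2 c2)
    (or (= r1 r2)                              ; same row
        (= (abs (- r1 r2)) (abs (- c1 c2)))))  ; same diagonal
  (let outer ((rows state) (col 1) (total 0))
    (if (null? rows)
        total
        (let inner ((rest (cdr rows)) (c2 (+ col 1)) (count total))
          (if (null? rest)
              (outer (cdr rows) (+ col 1) count)
              (inner (cdr rest)
                     (+ c2 1)
                     (if (attacks? (car rows) col (car rest) c2)
                         (+ count 1)
                         count)))))))
```

For the diagonal example above, (attacking-pairs (list 1 2 3 4 8 7 6 5))
counts 12 attacking pairs: six within each diagonal run of four queens,
and none between the two halves.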
Create your own eight queens state (in the definitions pane).
-
(define state (list ____))
- The procedure (eight-queens-successors state)
produces a list of all possible "successors" of a board with
eight queens on it, that is, every state in which exactly one queen
has moved to a different row within its column. Using this procedure,
determine the successors of your state.
- How many successors should there be? Does eight-queens-successors
give you the number you expect?
- The procedure (eight-queens-score state)
produces the negative of the number of attacking pairs of queens
for a given board configuration. (Since our algorithms try to maximize
an objective function, and we want to minimize the number of
attacking pairs, we use the negative for our objective.) Using this
procedure, determine the number of attacking pairs of queens in your
state.
- Determine (without manually counting) the number of attacking
pairs of queens for all successors to your state.
-
(map ______ _______)
Does the score increase for any of them?
B: Hill Climbing
Now we can try using a simple local search method to find an improved
state that (hopefully) has fewer attacking pairs of queens.
- The general procedure (hill-climbing initial-state
successor-fun value-fun)
takes a starting problem configuration (initial-state), a
procedure that takes a state and generates its successors (successor-fun),
and a procedure that measures the objective function value for a given
state (value-fun). Find a (hopefully) improved board configuration,
starting from your initial state and using the successor and
score functions for the eight queens problem introduced above.
-
(define local-optimum
(hill-climbing state
eight-queens-successors
eight-queens-score))
- What is the score of the resulting state? Does it improve on your
initial state's score?
- The optimal score is zero (for no pairs of attacking queens). Determine
the scores of all the successors of local-optimum.
-
(map eight-queens-score
(eight-queens-successors local-optimum))
Is the state a local maximum, global maximum, or (at least partial)
plateau?
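For intuition, steepest-ascent hill climbing can be sketched as
follows. This is a plausible reconstruction, not the provided
hill-climbing procedure: repeatedly move to the best-valued successor,
stopping when no successor improves on the current state.

```scheme
;; A plausible sketch of steepest-ascent hill climbing (an assumption,
;; not the provided implementation).
(define (hill-climbing-sketch state successor-fun value-fun)
  (let ((best (best-of (successor-fun state) value-fun)))
    (if (or (not best)
            (<= (value-fun best) (value-fun state)))
        state                          ; no improving successor: stop
        (hill-climbing-sketch best successor-fun value-fun))))

;; Return the element of lst with the greatest value-fun score,
;; or #f if lst is empty.
(define (best-of lst value-fun)
  (if (null? lst)
      #f
      (let loop ((rest (cdr lst)) (best (car lst)))
        (cond ((null? rest) best)
              ((> (value-fun (car rest)) (value-fun best))
               (loop (cdr rest) (car rest)))
              (else (loop (cdr rest) best))))))
```

For example, maximizing the toy objective -(x-3)^2 over the integers,
with successors x-1 and x+1, climbs from 0 up to the maximum at 3.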
- Simple hill-climbing always chooses the successor that gives the best
improvement. What if, instead, we chose the first successor that gives
us some amount of improvement? The procedure first-choice-hill-climbing
takes the same parameters as hill-climbing and does exactly
this.
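A plausible sketch of that behavior (an assumption, not the provided
first-choice-hill-climbing): scan the successors in order and jump to
the first one that scores strictly better than the current state. Note
that the textbook's first-choice variant samples successors at random;
this deterministic scan is a simplification for illustration.

```scheme
;; Plausible sketch of first-choice hill climbing (assumed, not the
;; provided implementation): take the first improving successor found.
(define (first-choice-sketch state successor-fun value-fun)
  (let scan ((succs (successor-fun state)))
    (cond ((null? succs) state)  ; no improving successor: local optimum
          ((> (value-fun (car succs)) (value-fun state))
           (first-choice-sketch (car succs) successor-fun value-fun))
          (else (scan (cdr succs))))))
```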
- Use this procedure to find another optimum, starting from your initial
state.
-
(define first-choice-optimum
(first-choice-hill-climbing state
eight-queens-successors
eight-queens-score))
- What is the score of the resulting state? Is it any better than local-optimum?
- It is possible that we quickly find our way to a local maximum with
the simple search routines above. What if we perform our search from
multiple starting places and use the best result? This is what hill-climbing-random-restart
does.
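Conceptually, random-restart hill climbing just runs an ordinary climb
from each starting state and keeps the best result. Here is a sketch
with the climb abstracted as a one-argument procedure; this is a
simplification of the provided interface, which takes the successor
and score procedures directly.

```scheme
;; Conceptual sketch (not the provided hill-climbing-random-restart):
;; run `climb` from every start and keep the highest-scoring result.
(define (random-restart-sketch starts climb value-fun)
  (let loop ((rest (cdr starts))
             (best (climb (car starts))))
    (if (null? rest)
        best
        (let ((candidate (climb (car rest))))
          (loop (cdr rest)
                (if (> (value-fun candidate) (value-fun best))
                    candidate
                    best))))))
```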
- The first parameter to hill-climbing-random-restart is
a list of starting states. Define your own list of three to five
potential starting states.
-
(define state-list (list _____ _____ _____))
- Use hill-climbing-random-restart to find another optimum
with your initial states.
-
(define restart-optimum
(hill-climbing-random-restart state-list
eight-queens-successors
eight-queens-score))
- What is the score of the resulting state? Is it any better than the
previous scores?
C: Simulated Annealing
All of the search methods you have tried so far only take uphill steps.
What if we occasionally take some downhill steps? This is what simulated
annealing does. The procedure (simulated-annealing initial-state
successor-fun value-fun schedule) again takes just one starting state,
but has an additional parameter, schedule: a procedure that takes
an integer (the annealing iteration) and produces a real number,
the temperature to use at that iteration. Annealing will continue
until the temperature is zero. The following is an example schedule
that exponentially decreases the temperature at every iteration, up
to a limit of 500 iterations:
-
(lambda (iter) (if (> iter 500) 0 (expt 0.50 iter)))
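The annealing loop itself might look roughly like the following sketch,
which follows the standard algorithm and is only an assumption about the
provided simulated-annealing: at each iteration, pick a random successor;
always accept an improvement, and accept a downhill move with probability
e^(delta/T), where delta is the change in score and T the current
temperature. The sketch assumes DrScheme's random, which returns a real
in [0,1) when called with no argument.

```scheme
;; Rough sketch of simulated annealing (assumed, not the provided code).
(define (simulated-annealing-sketch state successor-fun value-fun schedule)
  (let loop ((current state) (iter 1))
    (let ((temp (schedule iter)))
      (if (<= temp 0)
          current                                  ; schedule hit zero: stop
          (let* ((succs (successor-fun current))
                 (next (list-ref succs (random (length succs))))
                 (delta (- (value-fun next) (value-fun current))))
            (loop (if (or (> delta 0)              ; always accept uphill
                          (< (random)              ; sometimes accept downhill
                             (exp (/ delta temp))))
                      next
                      current)
                  (+ iter 1)))))))
```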
- Use simulated-annealing and a schedule of your choice to
find another optimum from your original initial state. (Be sure your
schedule returns zero at some point to avoid an infinite loop.)
-
(define annealing-optimum
(simulated-annealing state
eight-queens-successors
eight-queens-score
___________))
- What is the score of the resulting state? Is it any better than the
previous scores?
D: Analysis
Now that you've gotten some experience with the four basic local search
algorithms, you can do some more formal analysis of their performance.
In order to do the analysis, we will need to generate many problems.
Obviously, doing this manually would be quite time consuming. Fortunately,
(random-eight-queens-state) will generate a random (valid)
board configuration for you.
- Record your answers to the following questions for later discussion.
- What percentage of problems do you expect hill climbing to solve optimally?
- What percentage of problems do you expect first-choice hill climbing
to solve optimally?
- What percentage of problems do you expect hill climbing with random
restarts to solve optimally?
- What percentage of problems do you expect simulated annealing to solve
optimally?
- How can we generate several board configurations, even random ones,
easily? Recall the function (iota n)
will create a list of the n integers from 0 to n-1. We can then
create a random state for each of these integers using map
with an anonymous procedure that ignores its argument. Use this strategy
to create a list of 1000 random board states.
-
(define problems (map (lambda (num)
(random-eight-queens-state))
(iota 1000)))
Hill Climbing
- Use map to find solutions to each of these problems using
hill-climbing.
-
(define climbing-solutions
(map (lambda (state)
(hill-climbing state
eight-queens-successors
eight-queens-score))
problems))
- Determine the scores of all of the solutions with procedure (eight-queens-score
state).
-
(define climbing-scores (map eight-queens-score climbing-solutions))
- How many of the scores are optimal? We can use map to mark each
optimal result (a score of zero) with a one, all others with a zero,
and then tally:
-
(apply +
(map (lambda (score) (if (zero? score) 1 0))
climbing-scores))
Record this value so we can discuss it later.
- How does this result compare to your prediction?
First Choice Hill Climbing
- Use map to find solutions to each of these problems using
first-choice-hill-climbing.
-
(define first-choice-solutions
(map (lambda (state)
(first-choice-hill-climbing state
eight-queens-successors
eight-queens-score))
problems))
- Determine the scores of all of these solutions with procedure (eight-queens-score
state).
-
(define first-choice-scores
(map eight-queens-score first-choice-solutions))
- Tally the number of states with optimal scores using apply
and map.
-
(apply +
(map (lambda (score) (if (zero? score) 1 0))
first-choice-scores))
Record this value so we can discuss it later.
- How does this result compare to your prediction?
Random Restart Hill Climbing
The analysis for hill-climbing-random-restart is not quite
as straightforward as the other methods. The book reports (p. 124)
that if p is the proportion of problems successfully solved
by hill climbing, then on average about 1/p restarts are needed.
- Using your result from above, how many restarts do you expect to need
in order to have a high chance of success?
- Instead of creating a list of problems (as above) we will create a
list of lists of problems, each to be used as input to the search
procedure. If you predict you will need num-restarts starting
points, then we can create a list of 1000 lists containing num-restarts
problems with the following definition.
-
(define num-restarts _______)
(define restart-problems
(map (lambda (p-num)
(map (lambda (r-num) (random-eight-queens-state))
(iota num-restarts)))
(iota 1000)))
- We may now proceed with our analysis as before. Use map to
find solutions to each of these sets of problems using hill-climbing-random-restart.
-
(define restart-solutions
(map (lambda (state-list)
(hill-climbing-random-restart state-list
eight-queens-successors
eight-queens-score))
restart-problems))
- Determine the scores of all of these solutions.
-
(define restart-scores (map eight-queens-score restart-solutions))
- Tally the optimal (zero-valued) scores.
-
(apply +
(map (lambda (score) (if (zero? score) 1 0))
restart-scores))
Record this value so we can discuss it later.
- How does this result compare to your prediction?
Simulated Annealing
Analyzing simulated-annealing is similar to hill-climbing,
except we also have to worry about the annealing schedule.
- Create a schedule procedure of the form given above in Part C,
choosing your own iteration limit (500 in the example) and exponent
base 0 < b < 1 (0.50 in the example).
-
(define schedule
(lambda (iter)
(if (> iter _____)
0
(expt _____ iter))))
- Find solutions to each of the problems in problems.
-
(define annealing-solutions
(map (lambda (state)
(simulated-annealing state
eight-queens-successors
eight-queens-score
schedule))
problems))
- Determine the scores of all of these solutions.
-
(define annealing-scores (map eight-queens-score annealing-solutions))
- Tally the optimal (zero-valued) scores.
-
(apply +
(map (lambda (score) (if (zero? score) 1 0))
annealing-scores))
Record this value so we can discuss it later.
- How does this result compare to your prediction?
We will discuss your results as a class. In the meantime, you are
encouraged to explore the influence of your scheduling choices on
the success of simulated annealing below.
For Those with Extra Time
How much do you suppose the schedule has to do with the success of
simulated annealing? Experiment with different exponent bases and
iteration limits. Higher bases b result in slower cooling schedules.
Thus, you are more likely to take sub-optimal steps early on (hopefully
reaching a state from which an optimal configuration can be reached).
How does this influence the number of iterations needed?
- Start increasing your base from 0.5 until you see a noticeable increase
in the success rate. How high does it have to be to reach the optimum
nearly always?
- If you do not reach the optimum very often, try running annealing
for a greater number of iterations. How many iterations do you need?
What is the base?