Lab: Local Search
CSC 261 - Weinman


Summary:
You will explore and compare the performance of various local search algorithms.

Preparation

  1. Start MediaScheme.
    1. On the MathLAN, start a Terminal window
    2. Launch GIMP by typing the command rebelsky/bin/gimp &
    3. Use the pull-down menu Xtns > MediaScript > Console
    4. The MediaScript interpreter should appear
  2. To use the problem representations and local search algorithms provided, copy the following into your definitions pane:
    (load "/home/weinman/courses/CSC261/code/local-search/eight-queens.scm") 
    (load "/home/weinman/courses/CSC261/code/local-search/local-search.scm") 
    (load "/home/weinman/courses/CSC261/code/general.scm")

Exercises

  1. You will be experimenting with a variety of local search algorithms on the eight queens problem. Before running any experiments, you'll need to become familiar with the procedures that have been provided for representing the eight queens problem.
    1. A state is represented as a list of eight numbers, representing the rows of each of the columns on an 8x8 board. For instance,  
        
      (list 1 2 3 4 8 7 6 5) 
        
      indicates queens starting (from left-to-right) up the main diagonal, and then running from the top-down across the right half of the board. Create your own eight queens state (in the definitions pane). 
        
      (define state (list ____))
       
    2. The procedure (eight-queens-successors state) produces a list of all possible "successors" to a board with eight queens on it. That is, every state in which one queen has moved to a different row. Using this procedure, determine the successors to your state.
    3. How many successors should there be? Does eight-queens-successors give you the number you expect?
    4. The procedure (eight-queens-score state) produces the negative of the number of attacking pairs of queens for a given board configuration. (Since our algorithms try to maximize an objective function, and we want to minimize the number of attacking pairs, we use the negative for our objective.) Using this procedure, determine the number of attacking pairs of queens in your state.
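      To build intuition for what this procedure computes, here is a rough sketch of how one might count attacking pairs directly (the names and details here are illustrative; the provided code may differ). Two queens in columns i < j attack each other when they share a row or a diagonal, that is, when |row_i - row_j| equals j - i:
        
      (define (attacking-pairs state) 
        (let loop ((cols state) (count 0)) 
          (if (null? cols) 
              count 
              (let ((row (car cols))) 
                ;; Compare this queen against each queen to its right. 
                (let inner ((rest (cdr cols)) (dist 1) (c 0)) 
                  (if (null? rest) 
                      (loop (cdr cols) (+ count c)) 
                      (inner (cdr rest) (+ dist 1) 
                             (if (or (= row (car rest))                 ; same row 
                                     (= (abs (- row (car rest))) dist)) ; same diagonal 
                                 (+ c 1) 
                                 c)))))))) 
        
      With a definition like this, eight-queens-score would simply return (- (attacking-pairs state)).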
    5. Determine (without manually counting) the number of attacking pairs of queens for all successors to your state. Does the score increase for any of them?
  2. Now we can try using a simple local search method to find an improved state that (hopefully) has fewer attacking queens.
    1. The general procedure (hill-climbing initial-state successor-fun value-fun) takes a starting problem configuration (initial-state), a procedure that takes a state and generates its successors (successor-fun), and a procedure that measures the objective function value for a given state (value-fun). Find a (hopefully) improved board configuration with hill-climbing, starting from your initial state and using the successor and score procedures for the eight queens problem introduced above.
       
      (define local-optimum _____)
       
    2. What is the score of the resulting state? Does it improve?
    3. The optimal score is zero (for no pairs of attacking queens). Using the same strategy as above, determine the scores of all the successors of local-optimum. Is the state a local maximum, global maximum, or (at least partial) plateau?
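      For intuition, the provided hill-climbing procedure behaves roughly like the following steepest-ascent sketch (illustrative only; the provided implementation may differ in details such as tie-breaking):
        
      ;; Return the highest-valued state in a non-empty list. 
      (define (best-of states value-fun) 
        (let loop ((rest (cdr states)) (best (car states))) 
          (cond ((null? rest) best) 
                ((> (value-fun (car rest)) (value-fun best)) 
                 (loop (cdr rest) (car rest))) 
                (else (loop (cdr rest) best))))) 
        
      ;; Move to the best successor until no successor improves the value. 
      (define (hill-climbing-sketch state successor-fun value-fun) 
        (let ((best (best-of (successor-fun state) value-fun))) 
          (if (> (value-fun best) (value-fun state)) 
              (hill-climbing-sketch best successor-fun value-fun) 
              state)))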
  3. Simple hill-climbing always chooses the successor that gives the best improvement. What if, instead, we chose the first successor that gives us some amount of improvement? The procedure first-choice-hill-climbing takes the same parameters as hill-climbing and does exactly this.
    1. Use this procedure to find another optimum, starting from your initial state.
       
      (define first-choice-optimum ______)
       
    2. What is the score of the resulting state? Is it any better than local-optimum?
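      Roughly, first-choice hill climbing looks like the following sketch (illustrative only; true first-choice hill climbing samples successors randomly until one improves, and the provided code may do either):
        
      ;; Scan successors, moving to the first one that improves the 
      ;; value; stop when none does. 
      (define (first-choice-sketch state successor-fun value-fun) 
        (let loop ((succs (successor-fun state))) 
          (cond ((null? succs) state) 
                ((> (value-fun (car succs)) (value-fun state)) 
                 (first-choice-sketch (car succs) successor-fun value-fun)) 
                (else (loop (cdr succs))))))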
  4. It is possible that we quickly find our way to a local maximum with the simple search routines above. What if we perform our search from multiple starting places and use the best result? This is what hill-climbing-random-restart does.
    1. The first parameter to hill-climbing-random-restart takes a list of starting states. Define your own list of (3-5) potential starting states.
       
      (define state-list (list _____ _____ _____))
       
    2. Use hill-climbing-random-restart to find another optimum with your initial states.
       
      (define restart-optimum ______)
       
    3. What is the score of the resulting state? Is it any better than the previous scores?
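      Conceptually, random restart just wraps the basic search. A sketch in terms of the provided hill-climbing procedure (the name random-restart-sketch is illustrative, not the provided code):
        
      ;; Run hill-climbing from each starting state; keep the best result. 
      (define (random-restart-sketch starts successor-fun value-fun) 
        (let loop ((rest (cdr starts)) 
                   (best (hill-climbing (car starts) successor-fun value-fun))) 
          (if (null? rest) 
              best 
              (let ((candidate (hill-climbing (car rest) successor-fun value-fun))) 
                (loop (cdr rest) 
                      (if (> (value-fun candidate) (value-fun best)) 
                          candidate 
                          best))))))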
  5. All of these search methods only take uphill steps. What if we occasionally take some downhill steps? This is what simulated annealing does. The procedure (simulated-annealing initial-state successor-fun value-fun schedule) again takes just one starting state, but has an additional parameter schedule that takes an integer (representing the annealing iteration) and produces a real number, the temperature to use at the given iteration. Annealing will continue until the temperature is zero. The following is an example schedule that exponentially decreases the temperature at every iteration, up to a limit of 500 iterations:
     
    (lambda (iter) (if (> iter 500) 0 (expt 0.50 iter)))
     
    1. Use simulated-annealing and a schedule of your choice to find another optimum from your original initial state. (Be sure your schedule returns zero at some point to avoid an infinite loop.)

       
      (define annealing-optimum ______)
       
    2. What is the score of the resulting state? Is it any better than the previous scores?
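      The core of simulated annealing can be sketched as follows (illustrative only; this assumes (random n) returns a random integer below n, and the provided code and MediaScheme's random procedure may differ):
        
      ;; Pick a random successor; always accept an improvement, and 
      ;; accept a worsening move with probability e^(delta/T). 
      (define (simulated-annealing-sketch state successor-fun value-fun schedule) 
        (let loop ((state state) (iter 0)) 
          (let ((temp (schedule iter))) 
            (if (<= temp 0) 
                state 
                (let* ((succs (successor-fun state)) 
                       (next (list-ref succs (random (length succs)))) 
                       (delta (- (value-fun next) (value-fun state)))) 
                  (if (or (> delta 0) 
                          (< (/ (random 1000) 1000.0) (exp (/ delta temp)))) 
                      (loop next (+ iter 1)) 
                      (loop state (+ iter 1))))))))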
  6. Now that you've gotten some experience with the four basic local search algorithms, you can do some more formal analysis of their performance. Record your answers to the following questions somewhere.
    1. What percentage of problems do you expect hill climbing to solve optimally?
    2. What percentage of problems do you expect first-choice hill climbing to solve optimally?
    3. What percentage of problems do you expect hill climbing with random restarts to solve optimally?
    4. What percentage of problems do you expect simulated annealing to solve optimally?
  7. In order to do the analysis, we will need to generate many problems. Obviously, doing this manually would be quite time consuming. Fortunately, (random-eight-queens-state) will generate a random (valid) board configuration for you.
    1. How can we generate several board configurations, even random ones, easily? Recall that the procedure (iota n) creates a list of the integers from 0 to n-1. We should be able to then create a random state for each of these integers using map with an anonymous procedure that ignores its argument. Use this strategy to create a list of 1000 random board states.
       
      (define problems (map ______ (iota 1000)))
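       
      As a warm-up for this idiom, here is a throwaway example unrelated to the lab code:
        
      ;; map applies a procedure to each element of a list; the 
      ;; procedure is free to ignore its argument entirely. 
      (map (lambda (i) (* 2 i)) (iota 5))      ; => (0 2 4 6 8) 
      (map (lambda (i) 'queen) (iota 3))       ; => (queen queen queen)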
       
    2. Use map to find solutions to each of these problems using hill-climbing.
       
      (define climbing-solutions (map (lambda (state) _______) problems))
       
    3. Use map to determine the scores of all of these solutions with procedure eight-queens-score.
       
      (define climbing-scores ______)
       
    4. How many of the scores are optimal? We can use map to determine which are optimal by giving a one for those with a score of zero:
       
      (map (lambda (score) (if (zero? score) 1 0)) climbing-scores) 
        
      Use apply in conjunction with the map above to tally the optimal (zero-valued) scores. Record this value so we can discuss it later.
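        
      As a reminder of how apply works, a throwaway example:
        
      ;; apply spreads a list out as the arguments to a procedure, so a 
      ;; list of zeros and ones can be tallied with +. 
      (apply + (list 1 0 1 1 0))               ; => 3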
    5. How does this result compare to your prediction?
  8. Repeat the above analysis using first-choice-hill-climbing and the same initial states.
    1. Find solutions to each of the problems. 
        
      (define first-choice-solutions ______)
       
    2. Determine the scores of all of these solutions
       
      (define first-choice-scores ______)
       
    3. Use apply in conjunction with map to tally the optimal (zero-valued) scores. Record this value so we can discuss it later.
    4. How does this result compare to your prediction?
  9. The analysis for hill-climbing-random-restart is not quite as straightforward. The book reports (p. 114) that if p is the proportion of problems successfully solved by hill climbing, then approximately 1/p restarts are needed on average.
    1. Using your result from above, how many restarts do you expect to need in order to have a high chance of success?
    2. Instead of creating a list of problems (as above) we will create a list of lists of problems, each to be used as input to the search procedure. If you predict you will need num-restarts starting points, then we can create a list of 1000 lists containing num-restarts problems with the following definition.
       
      (define restart-problems 
        (map (lambda (p-num) 
               (map (lambda (r-num) (random-eight-queens-state)) 
                    (iota num-restarts))) 
             (iota 1000))) 
        
      (Be sure you understand how the nested maps produce a list of 1000 lists, each containing num-restarts random states; format the definition in your definitions pane so that this structure is clear to you.)
    3. We may now continue our analysis in a similar vein as before. Use map to find solutions to each of these sets of problems using hill-climbing-random-restart.
       
      (define restart-solutions (map (lambda (state-list) _______) restart-problems))
       
    4. Determine the scores of all of these solutions
       
      (define restart-scores ______)
       
    5. Use apply in conjunction with map to tally the optimal (zero-valued) scores. Record this value so we can discuss it later.
    6. How does this result compare to your prediction?
  10. Analyzing simulated-annealing is similar to hill-climbing, except we also have to worry about the annealing schedule. Use a schedule of the form given above, choosing your own iteration limit (500 in the example) and exponent base 0 < b < 1 (0.50 in the example).
    1. Find solutions to each of the problems in problems. 
        
      (define annealing-solutions ______)
       
    2. Determine the scores of all of these solutions
       
      (define annealing-scores ______)
       
    3. Use apply in conjunction with map to tally the optimal (zero-valued) scores. Record this value so we can discuss it later.
    4. How does this result compare to your prediction?
We will discuss your results as a class. In the meantime, you are encouraged to explore the influence of your scheduling choices on the success of simulated annealing below.

For Those with Extra Time

How much do you suppose the schedule has to do with the success of simulated annealing? Experiment with different exponent bases and iteration limits. Higher bases b result in slower cooling schedules. Thus, you are more likely to take sub-optimal steps early on (hopefully reaching a state from which an optimal configuration can be reached). How does this influence the number of iterations needed?
  1. Start increasing your base from 0.5 until you see a noticeable increase in the success rate. How high does it have to be to reach the optimum nearly always?
  2. If you do not reach the optimum very often, try running annealing for a greater number of iterations. How many iterations do you need? What is the base?



File translated from TEX by TTH, version 3.80.
On 14 Sep 2009, 10:34.