Lab: Perceptrons and Logistic Regression
CSC261 - Artificial Intelligence - Weinman
- Summary: We create the initial building blocks for perceptron learning.
Preparation
- Create a directory and copy the starter files for the lab:
-
$ mkdir regression
$ cp -R ~weinman/courses/CSC261/code/regression/*.* regression/
$ cd regression
If you do not have the starter files from the previous
lab, you will need to acquire them as well.
- Open the starter file with DrScheme:
-
$ drscheme start.scm &
Read over the file briefly and make sure you understand what the commands
for preparing the data are doing and why.
Exercises
A: Perceptrons
In this portion of the lab, we will create the building blocks for
perceptron learning.
- In assignment.scm, write a procedure (threshold z)
(as defined in AIMA 18.6.3, p. 724, top), which returns 1 if z >= 0
and 0 otherwise.
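If you want to check your work, a minimal sketch of such a thresholding procedure (one possible solution, not the only idiomatic one) is:

```scheme
; Threshold(z): 1 when z >= 0, and 0 otherwise (AIMA 18.6.3).
(define (threshold z)
  (if (>= z 0) 1 0))
```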
- Note that the procedure (dot a b),
which computes the dot product of two lists of numbers, has already been
defined for you. In assignment.scm, write a procedure (threshold-output weights input)
that completes the definition of h_w(x) = Threshold(w · x).
Hint: Please use compose for simplicity and elegance.
Do not use lambda.
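As a hedged sketch, assuming (dot a b) sums the elementwise products (a stand-in definition is included here so the snippet runs on its own), the composition approach might look like:

```scheme
; Stand-in for the dot product provided in the starter code.
(define (dot a b)
  (apply + (map * a b)))

; Threshold(z) from the previous exercise.
(define (threshold z)
  (if (>= z 0) 1 0))

; h_w(x) = Threshold(w . x), built with compose rather than lambda.
(define threshold-output
  (compose threshold dot))
```

Note that compose passes both arguments through to dot, so (threshold-output weights input) evaluates (threshold (dot weights input)).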
B: Learning rules
In this portion of the lab, we will begin to explore the simple learning
rules and apply them to the mushroom data.
- Locate the procedure perceptron-update-instance in logistic.scm.
Read the documentation and implementation and make sure you understand
what it is doing and why. Ask the instructor if you have any questions
about it.
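For orientation, the perceptron rule contributes (y - h_w(x)) * x_i per weight for each example (AIMA Eq. 18.7). Below is a hedged sketch of what such a per-instance procedure might compute; the name, argument order, and representation of a labeled instance are assumptions here, so defer to the actual code in logistic.scm:

```scheme
(define (threshold z) (if (>= z 0) 1 0))
(define (dot a b) (apply + (map * a b)))

; Hypothetical per-instance update: the list (y - h_w(x)) * x.
(define (perceptron-update-instance weights label input)
  (let ((error (- label (threshold (dot weights input)))))
    (map (lambda (x) (* error x)) input)))
```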
- Return to start.scm and reload so your new definitions from
assignment.scm are available.
- How many entries will the weights for the mushroom data have?
Hint: Find the length of an instance.
- Define an initial weights list of all zeros for the mushroom problem.
Hint: You can use map with l-s or left-section
for a quick and easy solution.
-
(define mushroom-weights-zeros ______________)
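One hedged way to build such a list, shown with a hypothetical instance (in the lab you would map over a real mushroom instance, and (map (l-s * 0) instance) is an equivalent left-section version if that utility is loaded):

```scheme
; Hypothetical instance for illustration only.
(define sample-instance '(1 0 1 1 0 0 1))

; One zero weight per entry of the instance.
(define mushroom-weights-zeros
  (map (lambda (x) 0) sample-instance))
```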
- What output do you expect threshold-output to produce for
any mushroom instance? Verify your prediction.
- Apply the perceptron-update-instance to a labeled mushroom
instance with your zero weights. Compare the output to the instance
itself. Does the result seem sensible?
C: Learning functions
In this portion of the lab, we will apply the learning rule en
masse to the entire mushroom data set.
- Locate the procedure update-functor in logistic.scm.
Given a whole list of training examples, this procedure generates
another procedure that takes a list of weights and sums the updates
called for by instance-update-fun (of which perceptron-update-instance
might be an example). Ask the instructor if you have any questions
about it.
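A hedged sketch of the shape this description suggests (all names are assumptions; a labeled example is represented however instance-update-fun expects, and the per-example update lists are summed elementwise):

```scheme
; Given an instance-update function and a list of training examples,
; produce a batch procedure that sums the per-example weight updates.
(define (update-functor instance-update-fun examples)
  (lambda (weights)
    (apply map +
           (map (lambda (ex) (instance-update-fun weights ex))
                examples))))
```

The (apply map + ...) idiom sums corresponding entries across the update lists, since map accepts multiple list arguments.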
- In start.scm, define a function for calculating batch perceptron
updates for the mushroom data using perceptron-update-instance
and update-functor.
-
(define batch-perceptron-update ______________________)
- What do you expect the results of this update rule for the zero weights
list to look like? Verify your prediction.
-
(batch-perceptron-update mushroom-weights-zeros)
- Locate the procedure update-weights in logistic.scm.
Given weights, a step size and a (batch) update function, it returns
an updated set of weights. What do you expect the results of updating
the zero weights list to be with a step size of 1/100?
Verify your prediction.
-
(define weights-one-update (update-weights mushroom-weights-zeros 1/100 batch-perceptron-update))
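Under the description above, update-weights presumably computes w + alpha * update(w) elementwise; here is a hedged stand-in you can use to sanity-check your expectations before looking at the real implementation:

```scheme
; Hypothetical sketch: step each weight by alpha times its update.
(define (update-weights weights alpha batch-update)
  (map (lambda (w u) (+ w (* alpha u)))
       weights
       (batch-update weights)))
```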
- Make a second update to these weights and inspect the values.
-
(define weights-two-updates (update-weights weights-one-update 1/100 batch-perceptron-update))
Do the results change in any interesting ways?
- Apply one final update to the weights.
-
(define weights-three-updates ________________)
D: Evaluation
In this final portion of the lab, we investigate the performance of
the learned hypothesis function h_w.
- In assignment.scm, define the procedure (zero-one-loss expected produced)
(as defined in AIMA 18.4.2, p. 711) that produces 0 if its parameters
are equal and 1 otherwise.
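A minimal sketch of one way to write it:

```scheme
; L_0/1 loss: 0 when the prediction matches the label, 1 otherwise
; (AIMA 18.4.2).
(define (zero-one-loss expected produced)
  (if (equal? expected produced) 0 1))
```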
- Write an expression that calculates the prediction of your learned
perceptron hypothesis with mushroom-weights on all the training
data.
-
(define mushroom-predictions __________________)
Hint: Use l-s.
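A hedged illustration of the shape of this expression, with hypothetical stand-ins for the data (in the lab, mushroom-weights is your learned weight list and the prepared mushroom instances play the role of mushroom-data); the lambda form is equivalent to the left-section hint:

```scheme
(define (threshold z) (if (>= z 0) 1 0))
(define (dot a b) (apply + (map * a b)))
(define (threshold-output weights input) (threshold (dot weights input)))

; Hypothetical stand-ins for illustration only.
(define mushroom-weights '(1 -1))
(define mushroom-data '((2 1) (1 2) (0 0)))

; With l-s loaded this could be
; (map (l-s threshold-output mushroom-weights) mushroom-data).
(define mushroom-predictions
  (map (lambda (instance) (threshold-output mushroom-weights instance))
       mushroom-data))
```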
- Write an expression that generates the list of 0/1 losses for the
predictions.
-
(define mushroom-losses ___________________)
- Write an expression that totals the losses; this is the number of
errors your hypothesis makes.
-
(define total-mushroom-losses ______________)
- Calculate the error rate by dividing the total loss by the total number
of examples. One minus this number gives the accuracy.
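The arithmetic, sketched with made-up counts (the actual numbers depend on your learned weights):

```scheme
; Hypothetical totals for illustration only.
(define total-loss 40)        ; total 0/1 loss over the training set
(define num-examples 1000)    ; training-set size

(define error-rate (/ total-loss num-examples))
(define accuracy (- 1 error-rate))
```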
- How does this number compare to your decision tree hypotheses?
Lab Assignment
Between this lab and snippets in the assignment document, you now
have nearly all the pieces necessary to implement the more general
logistic regression model. You should begin to work on the lab assignment.
Copyright © 2011 Jerod
Weinman.
This work is licensed under a Creative
Commons Attribution-Noncommercial-Share Alike 3.0 United States License.