Fast Learning for Stereo
Motivation
Figure 1. As the potential disparity D grows large, message-passing-based learning algorithms get uncomfortably slow.
Markov random field models are a common approach to incorporating smoothness constraints when predicting disparity from a pair of stereo images. However, learning the parameters of such models can be slow for larger images, because these can exhibit large disparities and most inference algorithms required for learning scale linearly with the size of the state space (i.e., the number of disparity levels). Our work proposes modifications to standard learning algorithms that speed up inference by passing sparse messages. This is accomplished by solving a simple constrained optimization problem that makes messages as sparse as possible while keeping them close to the originals. As a result, the updates that incorporate the global smoothness constraints on disparity drop from linear to highly sub-linear complexity.
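To make the scaling concrete, here is a minimal sketch (in Python/NumPy, not the actual implementation) of a single pairwise message update over D disparity levels, computed first densely and then over a small retained support; the smoothness potential, threshold, and variable names are illustrative assumptions only.

```python
import numpy as np

D = 128                                   # number of disparity levels
tau = 4.0                                 # truncation point of an assumed smoothness cost
d = np.arange(D)
smoothness = np.exp(-np.minimum(np.abs(d[:, None] - d[None, :]), tau))

# A typical low-entropy belief over disparities (most mass on a few levels).
belief = np.random.dirichlet(0.05 * np.ones(D))

# Dense message update for one edge: O(D^2) multiply-adds.
dense_msg = smoothness @ belief

# Sparse message update: only the retained entries of the belief contribute.
support = np.flatnonzero(belief > 1e-3)   # placeholder rule; the paper's criterion is described below
sparse_msg = smoothness[:, support] @ belief[support]

print(len(support), np.max(np.abs(dense_msg - sparse_msg)))
```

Because most of the belief's mass sits on a handful of levels, the sparse update touches far fewer entries while changing the message very little.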
Sparsity
Figure 2. Even at the outset of learning and the beginning of message passing, most beliefs have very low entropy.
The messages passed in many classes of inference algorithms typically correspond to, or are closely related to, approximate probability distributions, or "beliefs." The key insight is that in many models, and stereo models in particular, most entries of these beliefs have probabilities very close to zero. It is precisely these entries that we want to eliminate from the messages, but in a principled fashion. Rather than imposing a fixed number of states or an arbitrary cut-off on probability mass, we impose a divergence criterion. High-entropy messages are thus kept dense, reflecting their greater uncertainty; but since most messages have very low entropy, many message entries can be dropped.
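The sketch below illustrates one way such a divergence-bounded truncation can work, assuming the criterion is KL(q || p) <= eps between the renormalized sparse belief q and the original dense belief p; the function name and tolerance value are illustrative rather than the paper's exact formulation. For this truncate-and-renormalize form, the divergence equals the negative log of the retained probability mass, so the smallest qualifying support can be found with a single sort and cumulative sum.

```python
import numpy as np

def sparsify(p, eps=1e-2):
    """Keep the fewest, most probable entries of belief p such that the
    renormalized sparse belief q satisfies KL(q || p) <= eps.

    With truncation and renormalization, KL(q || p) = -log(retained mass),
    so it suffices to keep entries until their total mass reaches exp(-eps).
    """
    order = np.argsort(p)[::-1]                        # most probable entries first
    mass = np.cumsum(p[order])
    k = min(np.searchsorted(mass, np.exp(-eps)) + 1, p.size)
    support = np.sort(order[:k])
    q = np.zeros_like(p)
    q[support] = p[support] / p[support].sum()         # renormalize the kept entries
    return support, q

# A low-entropy belief keeps only a handful of entries; a high-entropy one stays dense.
peaked = np.random.dirichlet(0.05 * np.ones(128))
flat = np.ones(128) / 128
print(len(sparsify(peaked)[0]), len(sparsify(flat)[0]))
```

Run on a peaked belief, the support collapses to a few entries, while the near-uniform belief stays essentially dense, matching the behavior described above.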
Speed Up
Figure 3. Comparison of the time to minimize the free energy with the original dense method and our proposed sparse method.
Variational inference techniques such as mean field are common approaches that approximate an intractable probability distribution by choosing a tractable family and minimizing the difference between the original and the approximating distribution. This amounts to minimizing what is known as a "free energy." The original dense variational updates in our stereo model, which iteratively minimize the free energy, take a great deal of time to compute. Our proposed sparse method reaches the free energy minimum before the dense method completes even one iteration.
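For intuition, the following sketch shows a single mean-field-style update for one pixel's belief when its neighbors' beliefs are sparse, so the expected smoothness cost sums over only their retained entries; the energies, potential, and function signature are illustrative assumptions rather than the model's actual definition.

```python
import numpy as np

def mean_field_update(unary_i, neighbor_beliefs, V):
    """unary_i: (D,) data energies for one pixel;
    neighbor_beliefs: list of (support, probs) sparse neighbor beliefs;
    V: (D, D) pairwise smoothness energies. Returns the updated belief q_i."""
    energy = unary_i.copy()
    for support, probs in neighbor_beliefs:
        # Expected smoothness cost under a sparse neighbor belief:
        # cost proportional to |support| rather than to D per neighbor.
        energy += V[:, support] @ probs
    q = np.exp(-(energy - energy.min()))   # stabilize before normalizing
    return q / q.sum()

D = 64
V = np.minimum(np.abs(np.subtract.outer(np.arange(D), np.arange(D))), 3.0)
unary = np.random.rand(D)
neighbors = [(np.array([10, 11]), np.array([0.7, 0.3]))]   # one sparse neighbor belief
q_i = mean_field_update(unary, neighbors, V)
```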
Example Results
Figure 4. Comparison of error after learning with our sparse variational technique versus graph cuts.
Previous work used a point estimate during model learning, finding the most likely disparity map with graph cuts. Graph cuts is a very fast algorithm, but a point estimate is problematic for the handful of beliefs with higher entropy, where a single value does not reflect their greater uncertainty. Our approach has the dual advantage of being fast (unlike traditional variational approaches) while also flexibly incorporating the appropriate amount of uncertainty during learning (unlike graph cuts or other point estimates). The root mean square (RMS) error of disparity predictions on test images is lower when the model is learned with our algorithm. Example disparity predictions are also shown below for comparison.
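As a small illustration of the difference, the sketch below contrasts an expected feature value computed under a full belief with the same quantity computed from a single most-likely disparity; the feature and numbers are hypothetical, but the collapse of uncertainty under the point estimate is exactly the issue described above.

```python
import numpy as np

# Hypothetical per-disparity feature used only for illustration.
feature = lambda d: (d > 16).astype(float)

belief = np.random.dirichlet(0.5 * np.ones(64))
d_values = np.arange(64)

expected = np.dot(belief, feature(d_values))   # expectation under the (sparse or dense) belief
point = feature(np.argmax(belief))             # graph-cuts-style point estimate
print(expected, point)                         # the point estimate discards residual uncertainty
```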
Figure 5. Disparity predictions of models trained using point estimates from graph cuts versus our sparse messages.
Related Papers