Research Projects
Jerod Weinman < CompSci < Grinnell

Multimodal Keypoint Detection (2022–2023)

Self-Supervised Depth Prediction (2019–2021)

Map Processing (2010–2020+)

Wearable Aid for the Blind (2004–2014)

This project developed algorithms for a wearable aid for the blind called VIDI (Visual Information Dissemination for the Impaired). The work comprised several contributions and subprojects, summarized below.

Text and Sign Detection

[Figure: scene images with detected signs]

A contextual model eliminates isolated false positives and more fully covers all regions of a detected sign, allowing us to robustly detect text and logos of arbitrary size and layout in complex scenes. Read more ...

Papers: MLSP04, CVAVI05, Master's Thesis.
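The papers above describe a learned contextual model; as a rough stand-in that only illustrates the neighbor-support idea (the function name and thresholds here are hypothetical, not from the papers), one can suppress isolated positives on a grid of per-patch text scores:

```python
import numpy as np

def smooth_detections(scores, threshold=0.5, min_neighbors=2):
    """Suppress isolated positives on a grid of per-patch text scores.

    A patch is kept as text only if its own score passes the threshold
    AND at least `min_neighbors` of its 8 neighbors also pass. Isolated
    false positives vanish, while contiguous sign regions survive --
    a crude approximation of what a contextual model provides.
    """
    raw = scores >= threshold
    padded = np.pad(raw, 1, constant_values=False)
    h, w = raw.shape
    # Count detected neighbors in the 8-connected neighborhood.
    neighbors = sum(
        padded[1 + dy : 1 + dy + h, 1 + dx : 1 + dx + w]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    return raw & (neighbors >= min_neighbors)
```

A lone high-scoring patch with no supporting neighbors is rejected, while every patch inside a block of high scores is retained.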

Robust Recognition

[Figure: example signs]

By integrating character appearance models more closely with statistical language and lexicon models, as well as a locally adaptive font model, we can more reliably recognize characters from signs in different fonts. Read more ...

Papers: CVPR06, ICDAR07, ICPR08, PAMI09, ICPR10.
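The full system above also integrates a lexicon and an adaptive font model; the minimal sketch below illustrates only the core fusion step, combining per-character appearance scores with a bigram language model via Viterbi decoding (all names and probability values here are illustrative assumptions):

```python
import numpy as np

def decode_word(appearance, bigram, alphabet):
    """Viterbi decoding fusing appearance and language models.

    appearance: (T, K) array of log-likelihoods for T character images
                over K candidate labels.
    bigram:     (K, K) array, bigram[i, j] = log P(label j | label i).
    alphabet:   sequence of K characters naming the labels.
    """
    T, K = appearance.shape
    score = appearance[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        # total[i, j]: best score ending in label j after label i.
        total = score[:, None] + bigram + appearance[t][None, :]
        back[t] = total.argmax(axis=0)
        score = total.max(axis=0)
    # Trace the best path backward from the best final label.
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return "".join(alphabet[i] for i in reversed(path))
```

When the appearance model is ambiguous about a character, the bigram term breaks the tie toward linguistically likely strings, which is the effect the integration described above exploits.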

Joint Detection and Recognition

[Figure: joint detection and recognition results]

By asking the "where?" (finding) and "what?" (identifying) questions of text simultaneously during learning, we can be faster or more accurate than the usual approach of learning them independently. Read more ...

Papers: Tech Report UM-CS-2006-054.
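The tech report trains the detection and recognition models jointly; the toy sketch below (candidate names, scores, and thresholds are all hypothetical) only contrasts the two scoring regimes at decision time, showing how a joint criterion can rescue a weakly detected but clearly readable region that a two-stage pipeline would discard:

```python
def pipeline_select(candidates, det_threshold=0.5):
    """Two-stage pipeline: answer "where?" first by thresholding the
    detection score, then "what?" by requiring positive recognition
    confidence -- a weak detection is dropped before it can be read."""
    return [c for c in candidates
            if c["det"] >= det_threshold and c["rec"] > 0.0]

def joint_select(candidates, threshold=0.5):
    """Joint criterion: one combined score answers "where?" and
    "what?" together, so strong recognition evidence can rescue a
    region the detector alone would discard."""
    return [c for c in candidates if c["det"] + c["rec"] > threshold]
```

Here `det` is a detection score in [0, 1] and `rec` a recognition log-likelihood ratio of text versus background; a faintly detected but highly legible region passes the joint test while failing the pipeline.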

Portions of this work have been funded by NSF grant numbers IIS-0100851, IIS-0326249, and IIS-0546666 as well as the Central Intelligence Agency and the National Security Agency.