
Day 1: OSA Small Eyes & Smart Minds Incubator

Sandra A. Gutierrez Razo, University of Maryland


With the clever title Small Eyes & Smart Minds, this Incubator may have left me with a smarter mind, but in contrast to the title, it has also left me wide-eyed with amazement. This Incubator is all about new imaging techniques, the computational work that makes them possible, and the exciting applications this work can engender.
 
The hosts of this Incubator, Rama Chellappa from the University of Maryland, Francisco Imai from Apple, Inc., and Ashok Veeraraghavan from Rice University, are seeking fundamental advances and new solutions to problems that are moving targets as consumer and industry demand for imaging increases. Jacob Robinson from Rice University showcased one of the eye-widening sensors discussed this morning: a lensless, penny-sized fluorescence microscope with the potential to image hundreds of microns into brains as they learn. Eric Fossum from Dartmouth College showed another sensor that is able to detect single photons at a high frame rate, ideal for low-light situations that require low noise, high dynamic range, and high spatial resolution. Chris Davis from the University of Maryland showed a system of deformable mirrors that can detect and correct for atmospheric distortions with minimal intensity loss. He reassured us that rather than making a better death ray, this technology will most likely be used for defense purposes and imaging through highly turbulent media. Jingyi Yu from the University of Delaware demonstrated elegant virtual reality (VR) technology that can bring a rockstar onto your kitchen table for a unique performance. Chellappa joked that this technology was so convincing that he feared a 3D image of himself could be rendered for blackmail purposes.
 
During the Sensors & Systems Panel Discussion, the consensus was that dealing with the large volume of data produced while imaging is a big problem. Sanjeev Agarwal from the Night Vision & Electronic Sensors Directorate noted that “we need to be more intelligent about what we sense.” Veeraraghavan agreed that sensing devices are now better than the human eye in every dimension except processing and information extraction. Fossum pointed out that despite the physical limitations of the human eye, the “big brain behind it” is what makes it all work so well. The problem with larger, higher pixel-count camera systems was aptly driven home by Small Eyes keynote speaker David Brady from Duke University, who simply stated, “one person should be able to pick it up if you’re going to call it a camera.” He demonstrated an impressive 107-megapixel camera system that stitches together images from multiple cameras to render large-field, zoomable images that actually deliver on what science fiction and procedural dramas have been promising for years.
 
The panel was asked what applications will motivate new technology, in the way that astronomy and microscopy have driven progress in the past. Several uses were mentioned, but Fossum specifically highlighted different forms of entertainment. Yu expanded on this idea by saying that human-centric imaging (e.g., selfies) will drive many new advancements.
 
There are two more sessions left: “Applications – Health, Automotive, VR, Mobile & Scientific” and “Towards Smart Minds,” which focuses on computational imaging, so stay tuned for the second post.
 
David Brady, Duke University, demonstrates his 107-megapixel camera.
 
Jingyi Yu, University of Delaware, projects a virtual guitar-playing figure onto an object from a live video feed.
 
 