Digital Holography & 3-D Imaging
25 June 2018 – 28 June 2018
Wyndham Orlando Resort International Drive, Orlando, Florida, United States
Topic areas include interferometry, phase microscopy, novel holographic processes, 3D and novel displays, integral imaging, computer-generated holograms, compressive holography, full-field tomography, and holography with various light sources, from coherent to incoherent and from X-ray to terahertz waves.
This is a highly interdisciplinary forum with applications in biomedicine, biophotonics, nanomaterials, nanophotonics, and scientific and industrial metrology.
- Advances in Digital Holographic Techniques
- 3D Imaging and Display Systems
- Computer Generated Holograms
- Compressive Holography
- Transport of Intensity
- Quantitative Phase Imaging
- Holographic Lithography
- Digital Holographic Microscopy
- Digital Holographic Tomography
- Digital Holographic Optical Processing
- Metrology and Profilometry
- Holographic Remote Sensing Techniques
- Incoherent Holography
- Biomedical/Clinical/Medical Applications
- Dynamic Holography and Novel Recording Materials
- Digital Holography in Nonlinear Optical Systems
- Terahertz Generation and its Application to Digital Holography
- Polarization Holography
- Digital Holography for Inspection of Scattering Media
- 2D & 3D Image Processing for Digital Holography & Feature Recognition
- Deep Learning in DH and Related Areas
- Emerging Applications of Digital Holography
- Aydogan Ozcan, University of California Los Angeles, United States
Machine Learning Enabled Computational Imaging and Sensing for Point-of-Care Medicine and Global Health Keynote
- Hoonjong Kang, Korea Electronics Technology Institute, South Korea
Full color holographic printing techniques and fast digital hologram generation methods Tutorial
- Laura Waller, University of California Berkeley, United States
Computational Microscopy for 3D Imaging Tutorial
- Marc Brunel, Universitaire du Madrillet CORIA, France
Interferometric out-of-focus imaging of ice particles for airborne instrumentation
- Victor Dyomin, Tomsk State University, Russia
Underwater Digital Holography for Particles Research
- Jorge Garcia-Sucerquia, Universidad Nacional de Colombia, Colombia
Telecentric Imaging in Reflection and Transmission Digital Holographic Microscopy
- Yoshio Hayasaki, Utsunomiya University, Japan
Volumetric display with holographic femtosecond laser accesses
- Yuan Luo, National Taiwan University, Taiwan
Multiplexed Illumination Holographic Fluorescence Imaging
- Biagio Mandracchia, Istituto Nazionale di Ottica, Italy
Compact Solutions for Off-axis Holography in Optofluidics
- Giancarlo Pedrini, Universität Stuttgart, Germany
Multi-Wavelength Digital Holography for Erosion Measurements inside the ITER Tokamak
- David Roberts, BEAM Engineering for Adv. Measurements, United States
Switchable, broadband, polarization-independent diffractive optical components and systems
- Pablo Ruiz, Loughborough University, United Kingdom
Advances and Challenges in Synthetic Aperture Interferometry
- Gene Serabyn, Jet Propulsion Laboratory, United States
Digital Holographic Microscopy as Means of Remote Life Detection
- Yuhong Wan, Beijing University of Technology, China
Adaptive Three-dimensional Fluorescence Holographic Microscopy with Anisotropic Aberration Correction
- Aimin Yan, Shanghai Normal University, China
Optical Cryptography with Biometrics and Optical Scanning Holography
- Jianlin Zhao, Northwestern Polytechnical University, China
Near-field Imaging Using Digital Holographic Interferometry with Total Internal Reflection and Surface Plasmon Resonance
- Tomasz Kozacki, Warsaw University of Technology, Poland, Chair
- Guohai Situ, Shanghai Inst. Opt. Fine Mech., China, Chair
- Liangcai Cao, Tsinghua University, China, Program Chair
- Pascal Picart, LAUM CNRS Université du Maine, France, Program Chair
- Percival Almoro, University of the Philippines-Diliman, Philippines
- Chau-Jern Cheng, National Taiwan Normal University, Taiwan
- Daping Chu, University of Cambridge, United Kingdom
- Konstantinos Falaggis, Univ of North Carolina at Charlotte, United States
- Marc Georges, Liege Universite, Belgium
- Yoshio Hayasaki, Utsunomiya University, Japan
- Hoonjong Kang, Korea Electronics Technology Institute, South Korea
- Björn Kemper, University of Muenster, Germany
- Myung Kim, University of South Florida, United States
- Taegeun Kim, Sejong University, South Korea
- Juan Liu, Beijing Institute of Technology, China
- Kyoji Matsushima, Kansai University, Japan
- Fernando Mendoza-Santoyo, Centro de Investigaciones en Optica AC, Mexico
- George Nehmetallah, Catholic University of America, United States
- Wolfgang Osten, Universität Stuttgart, Germany
- Jae-Hyeung Park, Inha University, South Korea
- Nikolai Petrov, ITMO University, Russia
- Peter Schelkens, Vrije Universiteit Brussel, Belgium
- Yunlong Sheng, Universite Laval, Canada
- Kehar Singh, Indian Institute of Technology, Delhi, India
- Mikael Sjodahl, Lulea Tekniska Universitet, Sweden
- Elena Stoykova, Bulgarian Academy of Sciences, Bulgaria
- Nelson Tabiryan, Beam Engineering for Adv Measurements Co, United States
- Peter Tsang, City University of Hong Kong, Hong Kong
- Wei Wang, Heriot-Watt University, United Kingdom
- Hiroshi Yoshikawa, Nihon University, Japan
- Ting-Chung Poon, Virginia Tech, USA
- Byoungho Lee, Seoul National University, South Korea
- Toyohiko Yatagai, Utsunomiya University, Japan
- Partha Banerjee, University of Dayton, USA
Paul Debevec, Google VR, USA
Light Fields and Light Stages for Photoreal Movies, Games, and Virtual Reality
This talk will present work from USC ICT and Google VR on creating actors and environments for movies, games, and virtual reality. The Light Stage computational illumination and facial scanning systems are geodesic spheres of inward-pointing LED lights that have been used to create digital actor effects in movies such as Avatar, Benjamin Button, and Gravity, and have recently been used to create photoreal digital actors based on real people in movies such as Furious 7, Blade Runner 2049, and Ready Player One. The lighting reproduction process of light stages allows omnidirectional lighting environments captured from the real world to be accurately reproduced in a studio, and has recently been extended with multispectral capabilities that enable LED lighting to accurately mimic the color rendition properties of daylight, incandescent, and mixed lighting environments. The team has also recently used its full-body light stage in conjunction with natural language processing and automultiscopic projection to record and project interactive conversations with survivors of the World War II Holocaust. Debevec will conclude by discussing the technology and production processes behind "Welcome to Light Fields", the first downloadable virtual reality experience based on light field capture techniques, which allow the visual appearance of an explorable volume of space to be recorded and reprojected photorealistically in VR, enabling full 6DOF head movement.
About the Speaker
Paul Debevec is a research professor at the University of Southern California and the associate director of graphics research at USC's Institute for Creative Technologies. Debevec's Ph.D. thesis (UC Berkeley, 1996) presented Façade, an image-based modeling and rendering system for creating photoreal architectural models from photographs. Using Façade, he led the creation of virtual cinematography of the Berkeley campus for his 1997 film The Campanile Movie, whose techniques were used to create virtual backgrounds in The Matrix. Subsequently, Debevec pioneered high dynamic range image-based lighting techniques in his films Rendering with Natural Light (1998), Fiat Lux (1999), and The Parthenon (2004); he also led the design of HDR Shop, the first high dynamic range image editing program. At USC ICT, Debevec has led the development of a series of Light Stage devices for capturing and simulating how objects and people reflect light, used to create photoreal digital actors in films such as Spider-Man 2, Superman Returns, The Curious Case of Benjamin Button, and Avatar, as well as 3D display devices for telepresence and teleconferencing. He received ACM SIGGRAPH's first Significant New Researcher Award in 2001, co-authored the 2005 book High Dynamic Range Imaging from Morgan Kaufmann, and chaired the SIGGRAPH 2007 Computer Animation Festival. He serves as Vice President of ACM SIGGRAPH and is a member of the Visual Effects Society, the Academy of Motion Picture Arts and Sciences, and the Academy's Science and Technology Council.
Jason Eichenholz, Luminar Technologies, USA
OSA Light the Future Presentation: The Role of Optics and Photonics in the Vehicles of Tomorrow
In this presentation, OSA Fellow Jason Eichenholz will take a high-level look at the future of optics and photonics technologies in autonomous vehicles. Optics are a crucial component in an industry headed for extreme disruption over the next few decades and will play a critical role in shaping the future of navigation, passenger experience, and the ultimate safety of the autonomous trip. Eichenholz will discuss the key optical components, including lidar, laser headlights, passenger monitoring, and interior lighting and displays; the role each plays inside a future automobile; and their impact on the transportation industry.
About the Speaker
Jason Eichenholz is a serial entrepreneur and pioneer in laser, optics, and photonics product development and commercialization. Over the course of his twenty-five-year career, his unique blend of business and technical leadership has brought hundreds of millions of dollars of new photonics products to market. Eichenholz is an inventor on ten U.S. patents covering new types of solid-state lasers, displays, and photonic devices.
Laurent Pueyo, Space Telescope Science Institute, USA
Exoplanet Imaging: From Precision Optics to Precision Measurements
During this plenary talk, Laurent Pueyo will present recent observational results in exoplanet imaging and discuss prospects for similar experiments on NASA missions such as the upcoming James Webb Space Telescope and the Large UV/Optical/IR Surveyor, currently under study.
About the Speaker
Laurent Pueyo is an astronomer at the Space Telescope Science Institute, in Baltimore, Maryland. He earned his doctorate from Princeton University in 2008 and conducted his post-doctoral work as a NASA Fellow at the Jet Propulsion Laboratory and as a Sagan Fellow at the Johns Hopkins University. His research focuses on imaging faint planets around nearby stars. He has pioneered advanced data analysis methods that are now standard tools used to study extrasolar planets, and invented an optical technique that is now baselined for future NASA missions. At STScI his duties include optimizing the extrasolar-planet imaging capabilities of NASA's James Webb Space Telescope (JWST), scheduled to launch in late 2019. He is also a member of the Science and Technology Definition Team for the Large Ultraviolet Optical and Infrared telescope, a future observatory that will identify Earth-sized planets and assess their habitability.
Digital Holographic Microscopy: Present and Future Panel Discussion
Monday, 25 June, 12:30–14:00
Join the OSA Holography and Diffractive Optics Technical Group for a panel discussion exploring potential breakthroughs in digital holographic microscopy. Brief presentations from our featured panelists will be followed by a moderated question-and-answer session, helping facilitate the exchange of information with our community. Contact TGactivities@osa.org to register, pending availability.
Monday, 25 June; 18:30–20:00
Come join your colleagues for drinks, networking and thoughtful discussion. Enjoy light fare while networking. The reception is open to all full conference attendees.
Conference attendees may purchase extra tickets for their guests.
Student & Early Career Professional Development & Networking Lunch and Learn
Tuesday, 26 June; 12:30-14:00
This program will provide a unique opportunity for students and early career professionals who are close to finishing, or who have recently finished, their doctoral degrees to interact with experienced researchers. Key industry and academic leaders in the community will be matched with each student based on the student's preferences or similarity of research interests. Students will have an opportunity to discuss their ongoing research and career plans with their mentors, while mentors will share their professional journeys and provide useful tips to those who attend. Lunch will be provided.
This workshop is complimentary for OSA Members, and space is limited. Not all who apply will be able to attend; priority will be given to those who have recently graduated or are close to graduation.
50th Anniversary of Introduction to Fourier Optics by Joseph Goodman
Tuesday, 26 June; 13:30-19:30
This year marks the 50th anniversary of the publication of Introduction to Fourier Optics by Joseph Goodman, a book that has had a fundamental influence on the field of optical imaging. To commemorate this anniversary, a special series of talks will be presented, covering topics ranging from Fourier optics in the classroom to the evolution of the field.
Join the Image Sensing and Pattern Recognition Technical Group for a small reception immediately following the conclusion of the program.
- Joseph W. Goodman, Stanford University, USA, Origins and Evolution of Introduction to Fourier Optics
- James Fienup, University of Rochester, USA, ABCD Matrix Analysis for Fourier-Optics Imaging
- Raymond Kostuk, University of Arizona, USA, A Review of the Wonderful Discussion of Holography by Professor Goodman in His Book Introduction to Fourier Optics
- James Leger, University of Minnesota Twin Cities, USA, What’s the Problem? Insight and Inspiration Derived from Solving the Exercises in J. Goodman’s Classic Book Introduction to Fourier Optics
- Masud Mansuripur, University of Arizona, USA, Fourier Optics in the Classroom
- Demetri Psaltis, Ecole Polytechnique Federale de Lausanne, Switzerland, The Transition of Fourier Optics Towards Computational Imaging and Digital Holography
- William T. Rhodes, Florida Atlantic University, USA, Teaching Fourier Optics: What I do Differently after 50 Years
- Bahaa Saleh, University of Central Florida, USA
Reception Hosted By
Tuesday, 26 June 2018, 19:00 – 21:00
You are invited to join the OSA Display Technology Technical Group for Illumicon II, an exclusive members-only event. Building on the declarations established at the inaugural Illumicon, convened in 2016, attendees will come together to discuss and debate emerging trends, technologies, and opportunities in advanced 3D displays. Our discussions will also seek input on how the Display Technology Technical Group can further engage the 3D community in the years ahead. Illumicon II attendees will converge over drinks and appetizers at a confidential location. Entrance will be granted to those able to provide the secret Illumicon II event password. RSVP to firstname.lastname@example.org to receive the event location and password.
Applications of Visual Science Technical Group Networking Lunch
Wednesday, 27 June 2018, 12:00–13:00
Members of the OSA Applications of Visual Science Technical Group are invited to join us for a networking lunch on Wednesday. The event will provide an opportunity to connect with fellow attendees who share an interest in this field and to learn more about this technical group. Contact TGactivities@osa.org to register, pending availability.
Thursday, 28 June (13:00-18:00) and Friday, 29 June (07:00-12:00)
Tour of Laser Propagation Facilities at Kennedy Space Center
Additional Fee: $25 per person.
*Fee includes only transportation. Transportation will leave from the conference hotel and return to the Orlando International Airport and the conference hotel.
During this tour at Kennedy Space Center (KSC), you will see various facilities used for outdoor field experiments such as laser propagation measurements. The tour will include UCF’s Townes Institute Science and Technology Experimentation Facility (TISTEF), the Shuttle Landing Facility (SLF), and the Vehicle Assembly Building (VAB). TISTEF is a site for experiments that require deployment in a fielded setting; it consists of a 1 km grass range equipped with atmospheric monitoring instruments and multiple scintillometers, as well as capabilities for optical tracking and remote sensing. From this site, slant-path measurements can be made over the 13 km path to the top of the VAB. The 5 km long SLF is ideal for longer-path measurements because of its homogeneity and flatness (the Earth’s curvature has been removed). This tour is made possible by the pcAOP committee and the University of Central Florida.
Student Grand Challenge: The Optical Systems of the Future
The challenge is open to OSA student members and their advisors interested in presenting concepts for enhanced machine vision, or for systems that enhance human vision by augmenting or extending another human sense.
Individuals or teams are invited to submit ideas for either a novel passive or active optical system in the form of a 35-word abstract and a 2-page summary highlighting the novelty, originality, and feasibility of the concept. (Additional materials may include videos highlighting system mock-ups or demos.)
Up to four finalists will be chosen to attend the Imaging and Applied Optics Congress, 25-28 June 2018 in Orlando, Florida, USA, to present a 3-minute synopsis of their concept and to host a poster during the conference poster session. Finalists will receive a travel stipend of up to $2,000 USD to cover airfare and hotel, as well as full technical registration for the congress.
Two winners will be announced on-site and will receive a recognition plaque and a $250 prize.
Sponsored by Lockheed Martin and the OSA Foundation.
Passive Optical System Challenge Problem
The image processing community strives to duplicate human vision. For certain specific and well-defined tasks, machine vision has matched or surpassed human capability, but it still struggles with poorly defined and dynamic environments. The comparison between machine vision and human vision includes the following categories:
- Spectrum: Machine vision is superior, as human vision is limited to the visible spectrum. Machine vision can also resolve narrower spectral bands and larger dynamic ranges than our eyes.
- Resolution: Human vision is superior. Current machine vision systems approaching 8K x 8K formats are starting to close the gap, but only in the visible spectrum.
- Focus: Human vision is superior, able to focus from very near to very far with a single lens element; the eye's aperture is limited by the size of the pupil for distant objects. Machine vision systems are designed specifically for very near or very far distances and are not aperture limited, but they require many lens elements to accomplish the same tasks as the human eye.
- Optical Processing: Human vision plus the brain is superior to machine vision at pattern recognition and decision making.

The passive optical system challenge is to create a novel concept, technology, or system for improving results in one of the 8 categories below:
- Image Processing: Ideas focused on detection and categorization of objects in the view field.
- Lens Technology: Ideas that are focused on optical sensors.
- High Speed Data Transport: Ideas that are focused on fast and efficient transport of high resolution image data and streams.
- Adaptable Lens Technology: Ideas that are focused on adaptable optical sensors.
- Liquid Lens Optical Sensors: Ideas focused on liquid lenses.
- Artificial Intelligence: Ideas that focus on image-based cognition.
- AR/VR Technology: Ideas that are focused on augmented and virtual reality technologies.
- Other: Ideas that do not fall into one of the existing categories.
Active Optical System Challenge
Given that the human vision system is intrinsically passive, how would you use active sensing techniques to augment it so that it mimics or extends the human senses? Augmentation could mean adding higher-precision 3D vision, active foveal imaging, active IR-assisted sensing, vibrometry, polarimetry, motion sensing in the field of view, chemical/biological sensing, seeing through fog or turbulence, etc. Many such systems have been demonstrated, but they are often large, heavy, and costly.
The active optical system challenge is to develop novel sensor concepts that mimic at least two of the human senses at a distance of at least 10 m, with the sensor fitting into one third the volume of the human brain (roughly 0.5 liters). Additional sensing modalities are encouraged, especially those that extend beyond human capabilities.
- Sight (e.g., producing 2D or 3D images)
- Hearing (e.g., measuring object vibrations through optical means)
- Smell (e.g., chemical/biological sensing)
- Taste (e.g., chemical/biological sensing)
- Touch (e.g., characterization of surface texture and/or temperature)
- Limited to undergraduate or graduate students.
- Teams must include at least one OSA student member and at least one advisor who is an OSA member.
- Required submission format: PDF with a 35-word abstract and a 2-page summary.
- Optional submission material: videos, system mock-ups, demonstrations.
- Compliance: Is the submission complete, and does it comply with the rules of the challenge?
- Novelty: Does the idea describe a novel approach to providing a solution?
- Originality: How original is the proposed technology or use of existing technology?
- Relevance: How well does the idea relate to the topic and provide a solution aligned with the goals of this challenge?
- Feasibility: How likely is it that the idea can be prototyped?
Submissions will be evaluated by a committee of Imaging and Applied Optics Congress leadership and Lockheed Martin executives.