Degree Type

Thesis

Date of Award

2021

Degree Name

Master of Science

Department

Mechanical Engineering

Major

Mechanical Engineering

First Advisor

Beiwen Li

Abstract

3D imaging technologies have multi-faceted applications across diverse domains such as traffic regulation, advanced manufacturing, healthcare, sports, and the entertainment industry. These technologies reconstruct 3D images from a surface scan of an object, providing intricate information that can be used for engineering analysis depending on the application. The 3D image acquisition methods developed over the years can be broadly classified into contact and non-contact methods. One of the earliest contact-based methods, still used in manufacturing industries for quality assurance and object scanning, is the Coordinate Measuring Machine (CMM). It is among the most accurate methods available, providing micrometer-level precision in the scanning and inspection of objects. Because it is contact-based, however, it has drawbacks: the inspection probe can damage soft or delicate object surfaces, and the scanning procedure itself can be time-consuming.

To address the challenges of contact-based methods, non-contact scanning methods such as triangulation-based laser scanners, time-of-flight laser scanners, depth from defocus, and stereo vision systems were developed. Triangulation-based laser scanners require long scanning times because they can only scan a dot or line at a time. Time-of-flight laser scanners have poor depth resolution. The depth-from-defocus technique derives depth information by imaging objects at various depths, but this requires multiple captures, which is time-consuming. Stereo vision systems imitate human vision: two cameras acquire images from two separate viewpoints, and 3D reconstruction is achieved through triangulation. However, the method struggles when an object has repetitive patterns or uniform texture, making it difficult to establish correspondence pairs. The structured light system (SLS) was thus developed to address some of these limitations.
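The triangulation step in a stereo vision system reduces, for a rectified camera pair, to the standard pinhole relation z = f·B/d. As a minimal sketch (the function name and example numbers are illustrative, not from the thesis):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a matched point in a rectified stereo pair.

    Standard pinhole triangulation z = f * B / d, with the focal
    length in pixels, the camera baseline in meters, and the
    disparity (column offset of the correspondence) in pixels.
    """
    return focal_px * baseline_m / disparity_px

# e.g. a 700 px focal length, 10 cm baseline, 35 px disparity
z = depth_from_disparity(700.0, 0.10, 35.0)   # 2.0 m
```

The formula also makes the correspondence problem concrete: without a reliable disparity for a pixel (repetitive patterns, uniform texture), no depth can be assigned there.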

A structured light system (SLS) resembles a stereo vision system, except that one of the cameras is replaced by a projector. The projector projects codified structured patterns onto an object, and the patterns must be coded so that camera-projector correspondence can be established. There are different ways to do this: random/pseudo-random codification, binary codification, and N-ary codification. Random/pseudo-random codification uses patterns that vary in both directions, but it has limited spatial resolution. Binary codification requires a sequence of binary patterns; it is robust to noise because only two intensity levels (0 and 255) are used, but it also has limited spatial resolution, and multiple patterns are needed to encode a scene. N-ary codification addresses this drawback by using intensity levels between 0 and 255, which reduces the number of coded patterns needed, but it is vulnerable to noise and to varying surface properties. To address the disadvantages of binary and N-ary codifications, the digital fringe projection (DFP) system was developed. In a DFP system, sinusoidal patterns are projected onto objects, and the resulting distorted patterns are captured by the camera; these images are then used to reconstruct the 3D shape. This method is robust to ambient light, surface reflectivity, and noise. Despite these advances, a long-standing problem in 3D shape measurement is accurately measuring the shape of a highly reflective surface, because saturation in the acquired images results in incomplete or distorted 3D geometry. High-dynamic-range (HDR) scanning methods were developed to address such challenges.
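The phase recovery at the heart of a DFP system can be sketched with the widely used three-step phase-shifting formula (a generic illustration, not the specific pattern strategy of this thesis): from three sinusoidal fringe images with phase shifts of -2&pi;/3, 0, and +2&pi;/3, the wrapped phase at each pixel follows from an arctangent.

```python
import numpy as np

def three_step_phase(i1, i2, i3):
    """Wrapped phase from three fringe images with shifts -2pi/3, 0, +2pi/3.

    For I_k = A + B*cos(phi + d_k), the numerator equals 3B*sin(phi)
    and the denominator 3B*cos(phi), so arctan2 recovers phi in (-pi, pi].
    """
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# Synthetic demo: render fringes of a known phase map and recover it.
phi = np.tile(np.linspace(-3.0, 3.0, 640), (480, 1))   # ideal phase map
a, b = 128.0, 100.0                                    # DC offset, modulation
shifts = (-2 * np.pi / 3, 0.0, 2 * np.pi / 3)
i1, i2, i3 = (a + b * np.cos(phi + d) for d in shifts)
wrapped = three_step_phase(i1, i2, i3)                 # matches phi here
```

Note that the recovered phase is wrapped; a separate phase-unwrapping step is needed before converting phase to depth.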

The two conventional HDR approaches are (i) performing measurements at multiple camera exposure times and (ii) identifying a single optimal camera exposure time. Most HDR 3D shape measurement methods can be broadly categorized as using multiple exposures, adjustment of projected pattern intensities, polarizing filters, color invariants, or other miscellaneous techniques. Multiple-exposure methods change the exposure time, rather than the camera aperture, to obtain good fringe quality over varying degrees of saturation; however, they are computationally expensive and can be time-consuming. Another HDR approach modifies the intensities of the projected fringes. This is beneficial for achieving pixel-by-pixel adaptation, but it has limitations such as reduced depth (Z) resolution: an optimal intensity level must be identified for every object, and the level chosen for one object may not suit another. Polarizing filters can help when scanning highly reflective objects, since adjusting the angle between the transmission axes of the polarizers dampens the impact of shiny spots during image capture. However, this requires precise alignment of the polarizers, which complicates the hardware setup of the entire system and is time-consuming; it also weakens the overall contrast and intensity of the acquired images. HDR scanning with color invariants is based on the dichromatic reflection model and gives good results with simple computation, but the results can be poor when high-contrast objects with complex topology are scanned.
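The core idea of the multiple-exposure category can be sketched as a per-pixel fusion: for each pixel, keep the brightest value that is still below saturation, falling back to the shortest exposure where every capture saturates. This is an illustrative sketch of that general idea (function name, threshold, and ordering convention are assumptions, not the thesis algorithm):

```python
import numpy as np

def fuse_exposures(images, saturation=250):
    """Per-pixel fusion of captures taken at different exposure times.

    `images` is a list of same-shape float arrays ordered from longest
    to shortest exposure. Each pixel takes its value from the first
    (brightest) capture that stays below the saturation threshold.
    """
    fused = np.zeros_like(images[0], dtype=float)
    chosen = np.zeros(images[0].shape, dtype=bool)
    for img in images:                  # longest exposure considered first
        ok = (img < saturation) & ~chosen
        fused[ok] = img[ok]
        chosen |= ok
    fused[~chosen] = images[-1][~chosen]   # saturated everywhere: shortest wins
    return fused

# Tiny demo: 2x2 captures at a long and a short exposure
long_img = np.array([[255.0, 100.0], [255.0, 255.0]])
short_img = np.array([[120.0, 40.0], [255.0, 200.0]])
fused = fuse_exposures([long_img, short_img])
```

In fringe projection, such fusion is typically applied consistently across all patterns of a set, since the phase computation at a pixel only requires that the same exposure be used for that pixel in every pattern.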

In this research, we introduce a new 3D shape measurement method that can scan highly reflective surfaces with a higher dynamic range. Our method addresses the case where every pattern projected onto an object yields an image with some level of saturation. To do so, we use an additional set of inverted fringe patterns (180-degree phase-shifted patterns) alongside the regular set of phase-shifted patterns, and we employ a double-shot-in-single-illumination technique to capture images twice within a single projection cycle. We present the fundamental principles that establish the theoretical validity of our HDR method, and we verify the theory experimentally on an object (a bird sculpture); the results demonstrate the effectiveness of the method.
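The complementary-pattern idea behind the inverted set can be illustrated with a simple sketch: an inverted pattern carries an extra &pi; of phase, so wherever a regular capture saturates, the phase computed from the inverted set (shifted back by &pi;) can stand in for it. This is a minimal illustration of that principle only; the function names and the per-pixel fallback rule are assumptions, not the thesis pipeline, which additionally uses a double-shot-in-single-illumination capture.

```python
import numpy as np

def wrapped_phase(i1, i2, i3):
    # three-step phase shifting with shifts (-2pi/3, 0, +2pi/3)
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

def phase_with_inverted(regular, inverted, saturation=250):
    """Wrapped phase with a fallback to the inverted (pi-shifted) set.

    `regular` and `inverted` each hold three fringe images. Where any
    regular capture saturates, the phase from the inverted set is used
    instead, with its extra pi offset removed.
    """
    phi_reg = wrapped_phase(*regular)
    phi_inv = wrapped_phase(*inverted) - np.pi      # undo the pi offset
    bad = np.max(np.stack(regular), axis=0) >= saturation
    phi = np.where(bad, phi_inv, phi_reg)
    return np.angle(np.exp(1j * phi))               # rewrap to (-pi, pi]

# Demo: simulated glare saturates the regular captures on half the pixels
phi_true = np.linspace(-3.0, 3.0, 50)
shifts = (-2 * np.pi / 3, 0.0, 2 * np.pi / 3)
a, b = 128.0, 100.0
regular = [a + b * np.cos(phi_true + d) for d in shifts]
inverted = [a - b * np.cos(phi_true + d) for d in shifts]
for img in regular:
    img[:25] = 255.0                                # clipped by saturation
recovered = phase_with_inverted(regular, inverted)  # phi_true throughout
```

Because the inverted pattern is dark exactly where the regular pattern is bright, the two sets tend to saturate at complementary pixels, which is what makes the fallback effective on glossy regions.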

DOI

https://doi.org/10.31274/etd-20210609-178

Copyright Owner

Amit Anil Singh

Language

en

File Format

application/pdf

File Size

52 pages
