Master of Science
3D shape measurement has a variety of applications in many areas, such as manufacturing, design, medicine, and entertainment. Many technologies have been successfully implemented over the past decades to measure the three-dimensional information of an object. These measurement techniques can be broadly classified into contact and non-contact methods. One of the most widely used contact methods is the Coordinate Measuring Machine (CMM), which dates back to the late 1950s. It remains one of the most accurate methods, as it can achieve sub-micrometer accuracy. However, this technique is difficult to use on soft objects, since the probe might deform the surface being measured. Scanning can also be a time-consuming process.
To address the problems of contact methods, non-contact methods such as time of flight (TOF), triangulation-based laser scanning, depth from defocus, and stereo vision were developed. The main limitation of the time-of-flight laser scanner is that it does not provide high depth resolution. Triangulation-based laser scanning, on the other hand, scans the object line by line, which can be time consuming. The depth-from-defocus method obtains 3D information by relating depth to defocus blur analysis; however, it is difficult to capture the 3D geometry of objects that do not have a rich texture. The stereo vision system imitates human vision: it uses two cameras to capture pictures of the object from different angles, and the 3D coordinates are obtained by triangulation. Its main limitation is that when the object has a uniform texture, it becomes difficult to find corresponding pairs between the two camera views. The structured light system (SLS) was therefore introduced to address the above limitations.
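The triangulation step in stereo vision can be sketched for the simplest case of a rectified camera pair under a pinhole model; the function name and the numbers below are illustrative, not taken from this work:

```python
# Depth from disparity in a rectified stereo pair (pinhole model).
# For a point seen in both cameras, Z = f * B / d, where f is the focal
# length in pixels, B the baseline between the cameras, and d the
# disparity (pixel offset between the two corresponding image points).

def depth_from_disparity(focal_px: float, baseline_mm: float, disparity_px: float) -> float:
    """Return the depth (mm) of a point given its stereo disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return focal_px * baseline_mm / disparity_px

# Example: f = 800 px, baseline = 60 mm, disparity = 16 px -> Z = 3000 mm
z = depth_from_disparity(800.0, 60.0, 16.0)
```

The formula also shows why uniform texture is fatal to stereo: without distinctive texture there is no reliable disparity d to plug in.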
SLS is an extension of the stereo vision system, with one of the cameras replaced by a projector. Pre-designed structured patterns are projected onto the object using a video projector. The main advantage of this system is that it does not rely on the object's texture to identify corresponding pairs. However, the patterns have to be coded in a certain way so that the camera-projector correspondence can be established. There are many codification techniques, such as pseudo-random, binary, and N-ary codification. Pseudo-random codification uses laser speckles or structure-coded speckle patterns that vary in both directions. However, the resolution is limited because each coded structure occupies multiple pixels in order to be unique. On the other hand,
binary codification projects a sequence of binary patterns. The main advantage of such a codification is its robustness to noise, since only two intensity levels (0 and 255) are used. However, the resolution is limited because the width of the narrowest coding stripe must be larger than the pixel size. Moreover, many images are needed to encode a scene that occupies a large number of pixels. To address this, N-ary codification makes use of multiple intensity levels between 0 and 255, so the total number of coded patterns can be reduced. Its main limitation is that the intensity-ratio analysis may be susceptible to noise.
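The pattern-count trade-off between binary and N-ary codification follows directly from counting codewords: K patterns with N intensity levels can distinguish at most N^K stripe codes. A minimal sketch (the function name is illustrative):

```python
# Minimum number of projected patterns needed to assign unique codewords,
# given the number of distinct intensity levels per pattern. With K
# patterns of N levels each, up to N**K stripe codes can be encoded.

def patterns_needed(num_codes: int, levels: int) -> int:
    """Smallest K such that levels**K >= num_codes (integer arithmetic)."""
    count, capacity = 0, 1
    while capacity < num_codes:
        capacity *= levels
        count += 1
    return count

# Encoding 1024 distinct stripe codes:
binary = patterns_needed(1024, 2)   # 10 patterns with two levels (0 and 255)
ternary = patterns_needed(1024, 3)  # 7 patterns with three levels
```

This is why adding intensity levels shrinks the pattern sequence, at the cost of smaller intensity gaps between levels and therefore more sensitivity to noise.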
The Digital Fringe Projection (DFP) system was developed to address the limitations of binary and N-ary codification. In DFP, computer-generated sinusoidal patterns are projected onto the object, and a camera captures the distorted patterns from another angle. The main advantage of this method is its robustness to noise, ambient light, and surface reflectivity, because phase information is used instead of intensity. Despite the merit of using phase, achieving highly accurate 3D geometric reconstruction also crucially depends on calibrating the camera-projector system. Unlike camera calibration, projector calibration is difficult, mainly because the projector cannot capture images the way a camera does. Early attempts calibrated the camera-projector system using a reference plane: the object geometry was reconstructed by comparing the phase difference between the object and the reference plane. However, the chosen reference plane must simultaneously possess high planarity and good optical properties, which is typically difficult to achieve, and such calibration may be inaccurate if non-telecentric lenses are used. The projector can also be calibrated by treating it as the inverse of a camera. This approach addresses the limitations of the reference-plane-based method, as the exact intrinsic and extrinsic parameters of the imaging lenses are obtained, so a perfect reference plane is no longer required. This calibration typically requires projecting orthogonal patterns onto the object. However, it can only be used for structured light systems with a video projector; grating slits and interferometers cannot be calibrated this way, as such systems cannot produce orthogonal patterns.
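The claim that phase is insensitive to ambient light and reflectivity can be illustrated with standard three-step phase shifting (a common DFP scheme, used here as a generic example rather than the specific algorithm of this thesis). The average intensity A and modulation B cancel out of the phase expression:

```python
import math

# Three-step phase shifting: the projector displays three sinusoidal
# patterns I_k = A + B*cos(phi + 2*pi*k/3) for k = 0, 1, 2. The wrapped
# phase at each camera pixel is recovered from the three captured
# intensities; A (ambient/average light) and B (modulation, scaled by
# surface reflectivity) cancel, which is why phase is robust to both.

def wrapped_phase(i0: float, i1: float, i2: float) -> float:
    """Wrapped phase in (-pi, pi] from three phase-shifted intensities."""
    return math.atan2(math.sqrt(3.0) * (i2 - i1), 2.0 * i0 - i1 - i2)

# Round trip for a known phase value at one pixel:
phi = 0.5
a, b = 128.0, 100.0
ints = [a + b * math.cos(phi + 2.0 * math.pi * k / 3.0) for k in range(3)]
recovered = wrapped_phase(*ints)  # approximately 0.5, independent of a and b
```

The wrapped phase then still has to be unwrapped and mapped to projector coordinates, which is where the calibration discussed above enters.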
In this research we introduce a novel calibration method that uses patterns in only a single direction. We theoretically prove that there exists one degree of freedom of redundancy in conventional calibration methods, making it possible to use unidirectional patterns instead of orthogonal fringe patterns. Experiments show that over a measurement range of 200 mm × 150 mm × 120 mm, our measurement results are comparable to those obtained with the conventional calibration method. Evaluated by repeatedly measuring a sphere with a diameter of 147.726 mm, our measurement accuracy on average can be as high as 0.20 mm, with a standard deviation of 0.12 mm.
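One common way to evaluate accuracy against a known sphere, as done above, is a least-squares sphere fit to the reconstructed point cloud. The following is a generic sketch of such a fit (the algebraic linearization, NumPy usage, and synthetic data are assumptions, not the thesis's evaluation code):

```python
import numpy as np

# Algebraic least-squares sphere fit: |p - c|^2 = r^2 expands to the
# linear system 2*cx*x + 2*cy*y + 2*cz*z + (r^2 - |c|^2) = x^2 + y^2 + z^2,
# which is solved for (cx, cy, cz, r^2 - |c|^2) in one lstsq call.

def fit_sphere(points: np.ndarray):
    """Fit a sphere to an (N, 3) point array; return (center, radius)."""
    A = np.hstack([2.0 * points, np.ones((points.shape[0], 1))])
    b = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = float(np.sqrt(sol[3] + center @ center))
    return center, radius

# Synthetic check: points sampled on a sphere of diameter 147.726 mm
# (hypothetical center; matches the reference sphere size used above).
rng = np.random.default_rng(0)
d = rng.normal(size=(500, 3))
d /= np.linalg.norm(d, axis=1, keepdims=True)
pts = np.array([10.0, -5.0, 30.0]) + (147.726 / 2.0) * d
center, radius = fit_sphere(pts)  # radius ~= 73.863 mm
```

Comparing the fitted radius (and the residual distances of measured points to the fitted sphere) against the known diameter yields accuracy figures of the kind reported above.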
Suresh, Vignesh, "Calibration of structured light system using unidirectional fringe patterns" (2019). Graduate Theses and Dissertations. 17106.