|dc.description.abstract||Three dimensional (3D) cameras provide distance measurements to objects, allowing computers and instruments to interact with their environment. The applications are wide-ranging, from human gesture control to industrial processing. Time-of-flight cameras measure the distance to the scene from the flight time of a modulated light source. Sequential captures are required to produce the depth map, hence time-of-flight cameras are vulnerable to depth errors from motion blur in dynamic scenes. This is a major hindrance for industrial applications, where accurate results are required when reconstructing objects. The fruit grading industry is of particular interest for this work, where significant advancements can be made using 3D cameras. The produce moves at a constant velocity, providing an ideal case for initial work into industrial motion correction.
The SR4000 from Mesa Imaging is an industrial-grade time-of-flight camera with a high quality factory calibration, and is used throughout this work. When applying custom algorithms (such as motion correction), the camera is run in ‘raw mode’, where the sequential captures can be individually manipulated; however, the factory calibration set is lost. The first part of this work investigates calibrations in time-of-flight cameras, where the factory calibration set in the SR4000 is extracted from the camera to be used on the ‘raw mode’ data in custom algorithms. The factory-calibrated data are compared to both the ‘raw mode’ data and data acquired using the extracted calibration set. The key results show a root mean squared error (RMSE) of 62.4 mm for ‘raw mode’ data, while using the extracted calibration set reduces the RMSE to 6.1 mm.
The effects of motion blur on time-of-flight cameras are then investigated. The technique from Hussmann et al. (2011) provides a good first attempt at motion correction, but fails to implement a number of calibrations. The improvements to this motion correction technique presented in this thesis manipulate the demodulation of time-of-flight cameras so that these additional calibrations are incorporated, resulting in a more robust motion correction algorithm. To test these improvements, a controlled experiment is set up to image a moving spherical object, and a stationary reference image of the same object is captured for comparison. Without motion correction the RMSE is 75.9 mm. Using the naive correction technique from Hussmann et al. (2011) gives an RMSE of 58.7 mm, and applying the suggested improvements reduces the RMSE to 4.3 mm.||
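The sequential captures mentioned in the abstract are the four phase-offset frames of standard amplitude-modulated continuous-wave (AMCW) demodulation, which is the stage the thesis's correction manipulates. As a minimal sketch of that standard scheme for a single pixel (the frame values, background level, and 30 MHz modulation frequency below are illustrative assumptions, not the thesis's data):

```python
import math

C = 299_792_458.0  # speed of light, m/s
F_MOD = 30e6       # assumed modulation frequency, Hz (illustrative)

def demodulate(a0, a1, a2, a3):
    """Recover phase, amplitude and distance for one pixel from four
    sequential captures taken at 0, 90, 180 and 270 degree offsets."""
    phase = math.atan2(a1 - a3, a0 - a2) % (2 * math.pi)
    amplitude = 0.5 * math.hypot(a1 - a3, a0 - a2)
    distance = C * phase / (4 * math.pi * F_MOD)  # phase -> metres
    return phase, amplitude, distance

# Synthetic pixel: simulate the four captures for a known distance.
true_d = 2.0  # metres (within the c / (2 * F_MOD) = 5 m ambiguity range)
true_phase = 4 * math.pi * F_MOD * true_d / C
frames = [100 + 50 * math.cos(true_phase - i * math.pi / 2) for i in range(4)]
phase, amp, dist = demodulate(*frames)
```

Because the four frames are captured at different times, any motion of the scene between them corrupts the recovered phase, and hence the distance, which is the motion-blur error the thesis addresses.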