Concepts for Underwater Stereo Calibration, Stereo 3D-Reconstruction and Evaluation

Tim Dolereit 1,2

1 Fraunhofer Institute for Computer Graphics Research IGD, Joachim-Jungius-Str. 11, 18059 Rostock, Germany
2 University of Rostock, Institute for Computer Science, Albert-Einstein-Str. 22, 18059 Rostock, Germany
[email protected]

Abstract. Handling refractive effects in computer vision disciplines like underwater stereo camera system calibration or 3D-reconstruction is a major challenge. Refraction occurs at the borders between different media on the way of the light and introduces non-linear distortions that depend on the imaged scene. In this paper, concepts are proposed for the calibration of a stereo camera system including a set of additional refractive parameters, for underwater stereo 3D-reconstruction, and for the evaluation of the computations.

Key words: Underwater Imaging, Underwater Camera Calibration, Underwater 3D-Reconstruction, Stereo Camera Systems

1 Introduction

The application of imaging devices in underwater environments has become common practice. They can be installed on autonomous underwater vehicles (AUVs) and remotely operated vehicles (ROVs), and divers can be equipped with them as well. Its non-destructive behavior toward marine life and its repeatable application make underwater imaging an efficient sampling tool. Underwater imaging is confronted with quite different constraints and challenges than imaging in air.

The camera's constituent electric parts have to be protected against water. This leads to setups where cameras look through a viewing window, as in an aquarium, or are placed inside a special waterproof housing. All of these setups are subject to refraction of light passing the bounding, transparent interfaces between media with differing refractive indices (water-glass-air transition). Refractive effects make objects seem closer to the observer, and hence bigger, than they actually are. The effects are non-linear distortions that depend on the incidence angle of the light rays onto the refractive interface.

These non-linear magnifications are a problem for gaining metric information from images, such as 3D-reconstructions obtained with conventional in-air approaches.
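For concreteness, the refraction at such an interface follows Snell's law. The following minimal sketch (not part of the paper) refracts a ray direction in vector form; the function name and the example values are illustrative only:

```python
import numpy as np

def refract(ray_dir, normal, n1, n2):
    """Refract a ray direction at a flat interface (Snell's law, vector
    form). `normal` points back toward the medium of the incoming ray."""
    d = ray_dir / np.linalg.norm(ray_dir)
    n = normal / np.linalg.norm(normal)
    eta = n1 / n2
    cos_i = -np.dot(d, n)                 # cosine of the incidence angle
    sin2_t = eta**2 * (1.0 - cos_i**2)    # Snell: sin(t) = eta * sin(i)
    if sin2_t > 1.0:
        return None                       # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * d + (eta * cos_i - cos_t) * n

# A ray entering water (n ~ 1.33) from air bends toward the normal:
ray_in_air = np.array([0.3, 0.0, 1.0])          # oblique ray into the scene
interface_normal = np.array([0.0, 0.0, -1.0])   # points back toward the camera
ray_in_water = refract(ray_in_air, interface_normal, 1.0, 1.33)
```

Because the bending grows with the incidence angle, the resulting image distortion is non-linear and scene-dependent, as stated above.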

For gaining metric 3D-reconstructions, using a stereo camera system is common practice. The cameras' intrinsic and relative extrinsic parameters have to be calibrated. Since the imaging behavior of a camera in air can be well approximated using the linear pinhole camera model of perspective projection, it forms the foundation of most calibration algorithms. In underwater environments, refractive effects have to be handled additionally. Underwater images have multiple viewpoints [15]; hence, calibration of cameras in underwater usage is theoretically not possible with the pinhole camera model. Because the imaging model does not match the imaging conditions, it is an acknowledged approach to account for refractive effects by modeling them explicitly. To this end, the pose of the refractive interface relative to the cameras has to be calibrated as well. Afterward, a physically correct tracing of light rays can be utilized for 3D-reconstruction.

The parameters representing the pose of the refractive interface will be referred to as refractive parameters. These parameters comprise the orientation between a camera's optical axis and a refractive interface's normal (retrieval will be referred to as axis determination), as well as the distance of the camera's center of projection along this normal (retrieval will be referred to as distance determination).

In the following, some concepts are presented on how to perform stereo 3D-reconstruction underwater, ranging from system calibration to the evaluation of the obtained results. Most of the concepts are based on earlier works of the author on virtual object points, which are proposed to be what the cameras actually see in underwater imaging [2]. A model can be utilized to relate the location of these virtual object points non-ambiguously to the real object points. The main contributions of this work are to show

– that axis determination in system calibration can be performed independently of knowing the refractive indices of the participating media as well as the interface thickness,
– that 3D-reconstruction can be performed by utilizing virtual object points and can simultaneously be used as a constraint for system calibration,
– how evaluation concepts like the generation of ground truth data, the refractive reprojection error or the computation of correspondence curves can be realized.

2 Related Work

In this section, a brief overview of handling refraction in relation to camera calibration is given. A comprehensive overview of camera models in underwater imaging can be found in [12].

Many works are founded on the pinhole camera model alone. This means refractive effects are either completely ignored [5, 8, 14] or expected to be absorbed by the non-linear distortion terms [13, 10]. Further similar approaches using in-situ calibration strategies are mentioned in [6, 11].

A second way to handle refraction is by approximation. Ferreira et al. [4] assume only low incidence angles of light rays on the refractive surface. Lavest et al. [9] try to infer the underwater calibration from the in-air calibration in the form of an approximation of a single focal length and radial distortion. Their approach is also based on the pinhole camera model.

The applicability of the pinhole model for imaging through refractive media is considered invalid by many authors [15, 1, 16, 7]. Since the pinhole camera model cannot handle refractive effects, approaches were developed that handle them explicitly. Refractive effects are then modeled in a physically correct way and incorporated into the camera model and calibration process; the camera model is extended by refractive parameters.

Since this is the only way to handle refractive effects in a physically correct manner, the proposed concepts also aim at a solution that calibrates the additional refractive parameters.

3 System Design and Restrictions

The concepts to be presented are for now restricted to stereo cameras in a single underwater housing with a flat interface. This design leads to some useful simplifications, which will be explained in Section 4. The cameras can be arbitrarily oriented towards the refractive interface and each other. The stereo camera system has to be calibrated for intrinsic and relative extrinsic camera parameters in air, and both cameras are supposed to have a constant focal length. Both cameras' non-linear distortion terms in air are supposed to be calibrated as well. It is expected that lens distortion is not influenced by refraction and hence can be eliminated by standard in-air distortion correction algorithms in advance. The way of the light is characterized by a water-glass-air transition. The indices of refraction of the involved media are expected to stay constant. The refractive index of water is supposed to be equal to 1.33 and that of air equal to 1. Furthermore, the refractive interface's thickness and its refractive index are known as well, since they can usually be determined manually.
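For illustration, these fixed quantities could be collected as follows. This is only a sketch; the glass values are hypothetical placeholders, since the paper merely states that they are determined manually per housing:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RefractionSetup:
    """Quantities assumed known before refractive calibration (Section 3).
    Water and air indices follow the paper's stated assumptions; the
    glass values are hypothetical examples."""
    n_air: float = 1.0
    n_water: float = 1.33
    n_glass: float = 1.5              # hypothetical; measured per housing
    glass_thickness_mm: float = 10.0  # hypothetical; measured manually
```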

4 Calibration and 3D-Reconstruction

The calibration of the underwater stereo camera system is performed in two phases. The first phase is the determination of all parameters described in the previous Section 3 in a pre-process. This is done in air. The second phase comprises the determination of the refractive parameters. These parameters are illustrated in Fig. 1. During the so-called axis determination, the orientation between the left camera's optical axis and the interface's normal is computed. This orientation can be parametrized in 3D space by spherical coordinates. Afterward, the last refractive parameter is computed during the so-called distance determination. It is the distance between the center of projection of the left camera and the water-sided interface border along the determined axis.

In the following, concepts for this refractive calibration of the previously specified setup will be presented.
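A possible parametrization of these refractive parameters, sketched in the left camera frame with the optical axis along +z; the function names are illustrative, not from the paper:

```python
import numpy as np

def interface_normal(theta, phi):
    """Interface normal in the left camera frame (optical axis = +z),
    given spherical coordinates: `theta` is the angle to the optical
    axis (alpha in Fig. 1), `phi` the azimuth around it."""
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def interface_plane(theta, phi, d):
    """Flat interface at distance d from the center of projection along
    the normal, i.e. the plane {x : dot(n, x) = d}."""
    n = interface_normal(theta, phi)
    return n, d
```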


Fig. 1. Simplified illustration of the left camera with a center of projection P_l. A ray arriving in pixel I_l is refracted twice at the interface between water and air. The interface is parametrized by the angle α between the camera's optical axis and the interface's normal and the distance d along this normal.

4.1 Independence of Axis Determination

The specified setup of a stereo camera and a single flat refractive interface leads to some useful simplifications. As is known from physics, refraction always happens in a plane, the so-called plane of refraction. A single plane of refraction is spanned by two vectors: the first is the image ray from a pixel through the center of projection, and the second is the refractive interface's normal.

Hence, we get two planes of refraction for a corresponding pixel pair in a stereo camera system. Since there is a single refractive interface, the left and the right plane of refraction are each spanned by the same interface normal and the respective image ray (see Fig. 2). It follows that both planes of refraction intersect in a line with the same direction as the interface's normal, as long as the planes are not parallel. Since the planes of refraction can be determined without knowing the entire way of the light through all participating media, axis determination is independent of the values of the refractive indices, the interface's thickness and the distance to the interface.
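A small numerical illustration of this property (not from the paper), assuming both image rays and a candidate normal are expressed in one common frame; all values are made up:

```python
import numpy as np

def plane_of_refraction(ray_dir, normal):
    """Normal vector of the plane spanned by an image ray and the
    interface normal (the plane in which all refraction happens)."""
    return np.cross(ray_dir, normal)

# For a corresponding pixel pair, both planes contain the interface
# normal, so their intersection line is parallel to it:
n = np.array([0.1, -0.05, 1.0]); n /= np.linalg.norm(n)
ray_left = np.array([0.2, 0.3, 1.0])       # left image ray
ray_right = np.array([-0.25, 0.28, 0.9])   # right image ray, same frame
p_l = plane_of_refraction(ray_left, n)
p_r = plane_of_refraction(ray_right, n)
line_dir = np.cross(p_l, p_r)              # direction of the intersection line
line_dir /= np.linalg.norm(line_dir)
assert np.allclose(np.abs(np.dot(line_dir, n)), 1.0)  # parallel to the normal
```

Note that no refractive index, interface thickness or distance enters this computation, which is exactly the claimed independence.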

Axis determination can be performed, for example, by deriving constraints with the aid of known calibration targets as in [3]. There, an error metric is minimized, followed by a second minimization for distance determination based on the resulting axis. This means that if the resulting axis is erroneous, the resulting distance will incorporate this error. Another way is to perform a two-dimensional search over the possible spherical coordinate space for the axis. Hence, one minimization process can be replaced due to the independence of axis determination. In the following, a concept is proposed on how to perform axis and distance determination simultaneously by connecting such a search with 3D-reconstruction.


Fig. 2. Rear view of the stereo camera system in 3D. The image rays (green lines) of a corresponding pixel pair I_l and I_r together with the refractive interface's normal (blue line) span the two planes of refraction (orange planes). Both planes intersect in a line which has the same direction as the interface's normal.

4.2 3D-Reconstruction as a Constraint

Suppose we already know the true axis. This enables us to compute the planes of refraction for every corresponding pixel pair, as well as the corresponding line of intersection of both planes. The real object point has to lie on this line.

The non-refracted left and right image rays (extended into the water; see solid lines in Fig. 3, right) also have to intersect this line. These intersection points are called virtual object points in the following, since they are proposed to be what the cameras virtually see. The left and right virtual object points only coincide if the incidence angles of the non-refracted rays are equal. In contrast, the left and right refracted rays always coincide in the real object point (see dashed lines in Fig. 3, right). Hence, the resulting left and right virtual points as well as the real object point lie on the line of intersection (see Fig. 2 and Fig. 3, right). The overall closest virtual point gives a maximal distance at which the refractive interface can be placed. If the interface is placed at the true distance d, tracing a pair of corresponding rays by explicitly considering refraction until both of them meet the line of intersection should result in a single real point (see Fig. 3, right). The final placement of the interface is found by minimizing the difference between the left and right real points over all corresponding rays (compare [3]).
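As an illustration of how such a virtual object point could be computed, the sketch below (not from the paper) intersects an unrefracted image ray with the line of intersection, using the standard closest-point formula for two 3D lines; for exactly intersecting lines the result is the intersection itself:

```python
import numpy as np

def point_on_line_closest_to_ray(line_pt, line_dir, ray_origin, ray_dir):
    """Point on the line (line_pt + s * line_dir) closest to the ray
    (ray_origin + t * ray_dir). Assumes the lines are not parallel."""
    a, b = line_dir, ray_dir
    w0 = line_pt - ray_origin
    A, B, C = np.dot(a, a), np.dot(a, b), np.dot(b, b)
    D, E = np.dot(a, w0), np.dot(b, w0)
    s = (B * E - C * D) / (A * C - B * B)
    return line_pt + s * a

# Hypothetical usage: the line of intersection passes through some point
# on the interface with the direction of the interface normal `n`, and
# the unrefracted left ray starts at the left center of projection.
# v_left = point_on_line_closest_to_ray(line_pt, n, P_left, ray_left)
```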

In this way, the distance to the interface can be determined simultaneously with the 3D-reconstruction of the corresponding pixel pairs. There is no need for a known calibration target; the 3D-reconstruction itself forms the constraint for the error metric. In combination with the two-dimensional search over the possible spherical coordinate space for the axis proposed above, and by finding the overall minimal error value, the refractive calibration can be completed.
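As a structural illustration of this combined search (again not from the paper), the sketch below scans a spherical-coordinate grid and, for each candidate axis, minimizes a residual over the interface distance. `residual(theta, phi, d)` is a stand-in for the mean left/right real-point discrepancy of Fig. 3; the toy function in the example call is purely synthetic:

```python
import numpy as np
from itertools import product
from scipy.optimize import minimize_scalar

def calibrate(residual, thetas, phis, d_max):
    """2D grid search over the axis combined with a bounded 1D
    minimization over the interface distance d."""
    best = (np.inf, None)
    for theta, phi in product(thetas, phis):
        res = minimize_scalar(lambda d: residual(theta, phi, d),
                              bounds=(1e-6, d_max), method="bounded")
        if res.fun < best[0]:
            best = (res.fun, (theta, phi, res.x))
    return best

# Synthetic residual with its minimum at theta=0.1, phi=0.2, d=50:
toy = lambda t, p, d: (t - 0.1)**2 + (p - 0.2)**2 + ((d - 50) / 50)**2
err, (theta, phi, d) = calibrate(toy, np.linspace(0.0, 0.3, 7),
                                 np.linspace(0.0, 0.6, 7), d_max=200.0)
```

In a real implementation, the residual would be evaluated on the actual corresponding pixel pairs, so the 3D-reconstruction and the calibration are obtained in one pass.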


Fig. 3. 3D-reconstruction as a constraint for refractive calibration. Left: initial conditions for a given axis (dashed blue line). Middle: wrong placement of the interface at distance d. Right: correct placement of the refractive interface results in a single real point.

5 Evaluation Concepts

After the calibration of the refractive parameters, a physically correct tracing of the rays can be performed, resulting in a 3D-reconstruction with explicit consideration of refraction. The calibration as well as the 3D-reconstruction should be evaluated. In-air algorithms use the reprojection error for this purpose: the reconstructed 3D points are projected perspectively onto the image and compared to the corresponding detected pixels. The distance between projected and detected pixels forms the error metric. Since perspective projection is invalid underwater due to refraction, the reprojection error has to be modified.

5.1 Refractive Reprojection Error

The projection of underwater 3D points onto the image requires solving a polynomial of 4th degree for a single refraction and of 12th degree for two refractions (water-glass-air) [1]. The proposed concept is to determine a virtual object point V, non-ambiguously related to the 3D point O, that can be projected perspectively (see Fig. 4). The relation between the locations of the virtual and the real object point was described in previous works [3]. Utilizing this relation, one can determine V by simple bisection. The subsequent perspective projection is straightforward. This refractive reprojection error is an efficient means for the evaluation of reconstructed 3D points.
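The following is a simplified, hypothetical sketch of such a bisection, reduced to 2D coordinates inside the plane of refraction and to a single air-water refraction (the glass pane is neglected for brevity; the actual relation in [3] also handles the interface thickness):

```python
import numpy as np

def virtual_point_2d(o_r, o_z, d, n_air=1.0, n_water=1.33, tol=1e-9):
    """Camera at the origin, flat interface at depth z = d, real point O
    at (radial offset o_r > 0, depth o_z > d), all within the plane of
    refraction. Bisection finds the interface hit point whose refracted
    ray passes through O; returns the virtual point V on the unrefracted
    ray, which can then be projected perspectively."""
    def miss(r):  # signed radial miss of the refracted ray at depth o_z
        sin_i = r / np.hypot(r, d)
        sin_t = (n_air / n_water) * sin_i        # Snell's law
        tan_t = sin_t / np.sqrt(1.0 - sin_t**2)
        return r + (o_z - d) * tan_t - o_r
    lo, hi = 0.0, o_r                            # miss(lo) < 0 < miss(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if miss(mid) < 0 else (lo, mid)
    r = 0.5 * (lo + hi)
    return o_r, d * o_r / r   # unrefracted ray meets the line through O

v_r, v_z = virtual_point_2d(o_r=1.0, o_z=5.0, d=2.0)
# V is shallower than O (v_z < o_z): objects appear closer, as noted above.
```

The bisection exploits that the radial miss is monotonic in the interface hit position, so no polynomial root solving is needed.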

Fig. 4. Refractive reprojection for a calibrated system. The 3D point O is related to a virtual object point V that can be projected perspectively onto the image.

5.2 Computation of Correspondence Lines

Another means for the evaluation of reconstructed 3D points is given by correspondence lines. Their computation in underwater computer vision corresponds to the computation of epipolar lines with the aid of epipolar geometry in air. Since perspective projection is invalid underwater, so is epipolar geometry. In air, epipolar geometry is used to reduce the search space for a corresponding pixel in the second view to a single straight line; underwater, these lines become curves due to refractive effects. The correspondence lines can be computed in a similar way as the refractive reprojection error in the previous section: the ray in water that belongs to the pixel for which the correspondence is searched is sampled into a specific number of 3D points. The points start at the water-sided interface border and end at a user-defined distance.

Besides the application for reducing the search space for correspondences, the correspondence lines computed in this way can be used as a visual cue for evaluating the refractive calibration. An example can be seen in Fig. 5. If the calibration is correct, the correspondence line for a chosen pixel should hit the same feature point in the second view.
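A structural sketch of this sampling (not from the paper); `project_refractive` is a hypothetical stand-in for the refractive projection into the second view, e.g. via the virtual-point construction sketched above:

```python
import numpy as np

def correspondence_curve(entry_pt, water_dir, project_refractive,
                         max_dist=5.0, n_samples=50):
    """Sample the in-water ray of a left-image pixel, starting at the
    water-sided interface border `entry_pt` and ending at a user-defined
    distance, then refractively project each 3D sample into the right
    view. The returned pixel polyline is the correspondence curve."""
    ts = np.linspace(0.0, max_dist, n_samples)
    pts3d = entry_pt[None, :] + ts[:, None] * water_dir[None, :]
    return np.array([project_refractive(p) for p in pts3d])
```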


Fig. 5. Comparison of epipolar line computation after image rectification (top row) and computation of a correspondence line (bottom row) for a selected pixel on simulated image data. As can be seen, the epipolar line would clearly miss by several pixels.

5.3 Generation of Ground Truth Data

The last proposal for evaluation is simply the generation of ground truth data. As can be seen in Fig. 6, a solid frame was built around a fish tank. Profile rails were used to rigidly fix a checker pattern target and a stereo camera system. Two GoPro Hero 3 Black cameras were used, whose intrinsic and relative extrinsic parameters were calibrated in air. The whole frame can be lowered into the water. Hence, a 3D point cloud can be computed in air with conventional reconstruction algorithms. This point cloud in the stereo camera's coordinate system serves as ground truth data. It can be directly compared with the 3D points reconstructed underwater.
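A minimal sketch of such a comparison, assuming row-wise corresponding N x 3 point arrays in the stereo rig's coordinate system (same checker corners in the same order); the names are illustrative:

```python
import numpy as np

def reconstruction_error(gt_points, uw_points):
    """Per-point Euclidean error and RMSE between the in-air ground
    truth cloud and the underwater reconstruction."""
    errs = np.linalg.norm(gt_points - uw_points, axis=1)
    return errs, float(np.sqrt(np.mean(errs**2)))
```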

6 Conclusion and Future Work

The proposed concepts are preliminary work that is currently being tested and improved. They mostly build upon basic findings from previous works of the author [3]. Handling refractive effects correctly in underwater computer vision tasks like system calibration and 3D-reconstruction is a major challenge. The concepts are supposed to lead to a solution for the calibration of a stereo camera system with a flat refractive interface in underwater usage. The refractive calibration can be performed without the need for a known calibration target, and the capability of simultaneous 3D-reconstruction was presented. Combining the independence of axis determination in refractive calibration with the proposed 3D-reconstruction constraint seems to be a promising concept for calibration.


Fig. 6. Generation of ground truth data. A checker target that is rigidly fixed to the stereo camera system can be lowered into the water.

Evaluation of the refractive calibration is naturally a difficult task, since in most cases it can hardly be measured physically. The presented evaluation concepts, namely the generation of ground truth data, the refractive reprojection error and the computation of correspondence lines, are means to check the quality of the computations, both visually and computationally.

Acknowledgment

This research has been supported by the German Federal State of Mecklenburg-Western Pomerania and the European Social Fund under grant ESF/IV-BM-B35-0006/12.

References

1. A. Agrawal, S. Ramalingam, Y. Taguchi, and V. Chari, "A theory of multi-layer flat refractive geometry," in 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012, pp. 3346–3353.
2. T. Dolereit and A. Kuijper, "Converting underwater imaging into imaging in air," in VISAPP 2014 - Proceedings of the 9th International Conference on Computer Vision Theory and Applications, Volume 1, Lisbon, Portugal, 5-8 January, 2014, S. Battiato and J. Braz, Eds. SciTePress, 2014, pp. 96–103.
3. T. Dolereit, U. Freiherr von Lukas, and A. Kuijper, "Underwater stereo calibration utilizing virtual object points," in OCEANS 2015, 2015, pp. 1–7.
4. R. Ferreira, J. P. Costeira, and J. A. Santos, "Stereo reconstruction of a submerged scene," in Proceedings of the Second Iberian Conference on Pattern Recognition and Image Analysis - Volume Part I, ser. IbPRIA'05, 2005, pp. 102–109.
5. N. Gracias and J. Santos-Victor, "Underwater video mosaics as visual navigation maps," Computer Vision and Image Understanding, vol. 79, pp. 66–91, 2000.
6. M. Johnson-Roberson, O. Pizarro, S. B. Williams, and I. Mahon, "Generation and visualization of large-scale three-dimensional reconstructions from underwater robotic surveys," Journal of Field Robotics, vol. 27, no. 1, pp. 21–51, 2010.
7. A. Jordt-Sedlazeck and R. Koch, "Refractive calibration of underwater cameras," in Proceedings of the 12th European Conference on Computer Vision - Volume Part V, ser. ECCV'12, 2012, pp. 846–859.
8. C. Kunz and H. Singh, "Stereo self-calibration for seafloor mapping using AUVs," in Autonomous Underwater Vehicles (AUV), 2010 IEEE/OES, 2010, pp. 1–7.
9. J. M. Lavest, G. Rives, and J. T. Lapreste, "Dry camera calibration for underwater applications," Mach. Vision Appl., vol. 13, no. 5-6, pp. 245–253, 2003.
10. A. Meline, J. Triboulet, and B. Jouvencel, "A camcorder for 3D underwater reconstruction of archeological objects," in OCEANS 2010, 2010, pp. 1–9.
11. A. Sedlazeck, K. Koser, and R. Koch, "3D reconstruction based on underwater video from ROV Kiel 6000 considering underwater imaging conditions," in OCEANS 2009 - EUROPE, 2009, pp. 1–10.
12. A. Sedlazeck and R. Koch, "Perspective and non-perspective camera models in underwater imaging - overview and error analysis," in Outdoor and Large-Scale Real-World Scene Analysis, ser. Lecture Notes in Computer Science, vol. 7474. Springer Berlin Heidelberg, 2012.
13. M. R. Shortis and E. S. Harvey, "Design and calibration of an underwater stereo-video system for the monitoring of marine fauna populations," International Archives of Photogrammetry and Remote Sensing, vol. 32, no. 5, pp. 792–799, 1998.
14. A. P. Silvatti, F. A. Salve Dias, P. Cerveri, and R. M. Barros, "Comparison of different camera calibration approaches for underwater applications," Journal of Biomechanics, vol. 45, no. 6, pp. 1112–1116, 2012.
15. T. Treibitz, Y. Y. Schechner, and H. Singh, "Flat refractive geometry," in IEEE Conference on Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE, Jun. 2008, pp. 1–8.
16. T. Yau, M. Gong, and Y.-H. Yang, "Underwater camera calibration using wavelength triangulation," in 2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013, pp. 2499–2506.
