Three Dimensional Measurement of Objects in Liquid and Estimation of Refractive Index of Liquid by Using Images of Water Surface with a Stereo Vision System
Atsushi Yamashita, Akira Fujii and Toru Kaneko
Abstract— In this paper, we propose a new three-dimensional (3-D) measurement method for objects in an unknown liquid with a stereo vision system. When vision sensors are applied to measuring objects in liquid, light refraction is an important problem. Therefore, we estimate the refractive index of the unknown liquid by using images of the water surface, restore images that are free from the refractive effects of light, and measure the 3-D shapes of objects in liquid in consideration of these refractive effects. The effectiveness of the proposed method is shown through experiments.
I. INTRODUCTION
In this paper, we propose a new stereo measurement method for objects in a liquid whose refractive index is unknown.
In recent years, demands for underwater tasks, such as digging for ocean bottom resources, exploration of aquatic environments, rescue, and salvage, have increased. Therefore, underwater robots and underwater sensing systems that work in place of humans have become important, and technologies for observing underwater situations correctly and robustly with the cameras of these systems are needed [1]. However, it is very difficult to observe underwater environments with cameras [2]–[4] because of the following three major problems:
1) View-disturbing noise (Fig. 1(a))
2) Light attenuation effect (Fig. 1(b))
3) Light refraction effect (Fig. 1(c))
The first problem concerns suspended matter, such as air bubbles, small fish, and other small creatures, which may disturb the camera's field of view (Fig. 1(a)).
The second problem concerns the attenuation of light. The light intensity decreases with the distance from objects in water, with an attenuation that depends on the wavelength of the light. Red light attenuates more rapidly than blue light in water [2]. Consequently, the colors of objects observed in underwater environments differ from those observed in air (Fig. 1(b)).
These two problems make it very difficult to detect or recognize objects in water by observing their textures and colors.
This research was partially supported by the Ministry of Education, Culture, Sports, Science and Technology, Grant-in-Aid for Scientific Research
(C), 19560422, Japan.
A. Yamashita, A. Fujii and T. Kaneko are with Department of Mechanical
Engineering, Shizuoka University, 3–5–1 Johoku, Naka-ku, Hamamatsu-shi,
Shizuoka 432–8561, Japan [email protected]
A. Yamashita is with Department of Mechanical Engineering, California
Institute of Technology, Pasadena, CA 91125, USA
Fig. 1. Examples of aquatic images: (a) view-disturbing noise; (b) light attenuation effect; (c) light refraction effect.
For these two problems, theories and methods developed for aerial environments can be extended to underwater sensing. Several image processing techniques are effective for removing adherent noise, and color information can be restored in theory by considering the reflection, absorption, and scattering of light [2]. Indeed, we have already proposed underwater sensing methods for the view-disturbing noise problem [5] and the light attenuation problem [6].
The third problem concerns the refraction of light. If the camera and the objects are located in media with different refractive indices, several problems occur and a precise measurement cannot be achieved.
For example, Fig. 1(c) shows an image of a duck model in a tank filled with water to the middle. In this case, the contour of the duck model looks discontinuous and disconnected at the water surface, and its size and shape look different above and below the surface. This problem occurs not only when a vision sensor is set outside the liquid but also when it is set inside, because in the latter case we must usually place a protective glass plate in front of the viewing lens.
For the light refraction problem, three-dimensional (3-D) measurement methods for aquatic environments have also been proposed [7]–[10]. However, techniques that do not consider the influence of the refraction effects [7]–[9] may suffer from accuracy problems.
Accurate 3-D measurement methods for objects in liquid that use a laser range finder and take the refraction effects into account have also been proposed [11]–[14]. However, it is difficult to measure moving objects with a laser range finder.
A stereo camera system is suitable for measuring moving objects, but stereo-based methods [10] have the problem that corresponding points are difficult to detect when the texture of the object's surface is simple, particularly when there is refraction at the boundary between the air and the liquid. The method using motion stereo images obtained with a moving camera [15] has the problem that the relationship between the camera and the object is difficult to estimate because the camera moves. The surface shape reconstruction method based on optical flow [16] is not suitable for accurate measurement either.
With a properly calibrated stereo system, underwater measurements can be achieved without knowing the refractive index of the liquid. For example, we can construct a calibration table relating distances to pixel positions in advance and use this table for 3-D measurement [13]. However, such a calibration table becomes useless when the refractive index of the liquid changes.
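The calibration-table idea can be sketched as a simple interpolated lookup. The following is only an illustration of the principle (function names and values are ours, not the actual table of [13]):

```python
import bisect

def build_table(pixel_positions, distances):
    # Pair measured pixel positions with known distances, sorted by pixel.
    return sorted(zip(pixel_positions, distances))

def lookup_distance(table, px):
    # Linearly interpolate the distance for a pixel position px;
    # clamp to the table ends outside the calibrated range.
    pixels = [p for p, _ in table]
    i = bisect.bisect_left(pixels, px)
    if i == 0:
        return table[0][1]
    if i == len(table):
        return table[-1][1]
    (p0, d0), (p1, d1) = table[i - 1], table[i]
    w = (px - p0) / (p1 - p0)
    return d0 + w * (d1 - d0)
```

Such a table is only valid for the refractive index at which it was calibrated, which is exactly the limitation noted above.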
Therefore, the most critical problem in aquatic environments is that previous studies cannot execute 3-D measurement without information about the refractive index of the liquid [5], [10]. It becomes difficult to measure precise positions and shapes of objects in an unknown liquid because of the image distortion caused by the light refraction. Accordingly, estimating the refractive index is very important for underwater sensing tasks.
In this paper, we propose a new 3-D measurement method for objects in an unknown liquid with a stereo vision system. The refractive index of the unknown liquid is estimated by using images of the water surface (Fig. 2); the discontinuous and disconnected edges of the object in such an image can be utilized for this estimation. The 3-D shape of the object in the liquid is then measured by using the estimated refractive index in consideration of the refractive effects. In addition, images that are free from the refractive effects of light are restored from the distorted images.
Our proposed method is easy to apply to underwater robots. If there is no information about the refractive index in the work space of an underwater robot, the robot can obtain the refractive index and then measure underwater objects simply by surfacing and acquiring an image of the water surface.
II. ESTIMATION OF REFRACTIVE INDEX
Below the water surface the light is refracted by the liquid, while above the surface it is not. Consequently, the part of an image below the water surface is distorted by the refraction in the liquid, and the part above is not (Fig. 2). Such a discontinuous contour therefore carries the refraction information, and we utilize the difference between edges in air and those in liquid to estimate the refractive index of the liquid.

Fig. 2. Overview of our method.

Fig. 3. Estimation of refractive index.
Fig. 3 shows the top view of the situation around the water
surface region when the left edge of the object is observed
from the right camera.
Here, let $u_1$ be the horizontal distance in image coordinates between the image center and the object edge in air, and $u_2$ be that in liquid. Note that $u_1$ is influenced only by the refraction effect in the glass (i.e., the camera's protective glass), whereas $u_2$ is influenced by the refraction effects both in the glass and in the liquid (lower part of Fig. 3).
The angles of incidence from air to glass in these situations ($\theta_1$ and $\theta_4$) are expressed as follows:

$$\theta_1 = \phi + \tan^{-1}\frac{u_2}{f}, \qquad (1)$$

$$\theta_4 = \phi + \tan^{-1}\frac{u_1}{f}, \qquad (2)$$

where $\phi$ is the angle between the optical axis of the camera and the normal vector of the glass, and $f$ is the image distance (the distance between the lens center and the image plane), respectively.
Parameters f and φ can be calibrated easily in advance
of the measurement, and coordinate values u1 and u2 can
be obtained from the acquired image of the water surface.
Therefore, we can calculate θ1 and θ4 from these known
parameters.
By using Snell's law of refraction, the angles of refraction ($\theta_2$ and $\theta_5$) are expressed as follows:

$$\frac{n_1}{n_2} = \frac{\sin\theta_2}{\sin\theta_1}, \qquad (3)$$

$$\frac{n_1}{n_2} = \frac{\sin\theta_5}{\sin\theta_4}, \qquad (4)$$

where $n_1$ is the refractive index of air and $n_2$ is that of glass, respectively.

On the other hand, we can obtain $a_1$, $a_2$, $a_3$, and $a_4$ from the geometrical relationship among the lens, the glass, and the object:

$$a_1 = d \tan\theta_1, \qquad (5)$$

$$a_2 = t \tan\theta_2, \qquad (6)$$

$$a_3 = t \tan\theta_5, \qquad (7)$$

$$a_4 = (l - t) \tan\theta_4 + a_3, \qquad (8)$$

where $d$ is the distance between the lens center and the glass surface, $t$ is the thickness of the glass, and $l$ is the distance between the lens center and the object.

Refractive indices $n_1$ and $n_2$ can be calibrated beforehand because they are fixed parameters. Parameters $d$ and $t$ can also be calibrated in advance of the measurement, because we usually place a protective glass in front of the lens when using a camera in liquid, and the relationship between the glass and the lens never changes. Parameter $l$ can be obtained from the stereo measurement result of the edge in air.

By using these parameters, the angle of refraction from glass to liquid $\theta_3$ can be calculated as follows:

$$\theta_3 = \tan^{-1}\frac{a_4 - a_2 - a_1}{l - t - d}. \qquad (9)$$

Consequently, the refractive index of the liquid $n_3$ can be obtained by using Snell's law:

$$n_3 = n_1 \frac{\sin\theta_1}{\sin\theta_3}. \qquad (10)$$

In this way, we can estimate the refractive index of the unknown liquid $n_3$ from the image of the water surface, and measure objects in liquid by using $n_3$.
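Putting (1)–(10) together, the estimate reduces to a few lines of trigonometry. The following is a minimal sketch in our own notation (function and parameter names are ours; the value n2 = 1.49 for an acrylic protective glass is an illustrative assumption, not from the paper):

```python
import math

def estimate_refractive_index(u1, u2, f, phi, d, t, l, n1=1.0, n2=1.49):
    """Estimate the liquid's refractive index n3 from one edge observed
    both in air (offset u1 [px]) and through the liquid (offset u2 [px])."""
    # Angles of incidence from air to glass, Eqs. (1) and (2).
    theta1 = phi + math.atan(u2 / f)   # edge seen through the liquid
    theta4 = phi + math.atan(u1 / f)   # edge seen in air
    # Refraction into the glass by Snell's law, Eqs. (3) and (4).
    theta2 = math.asin(n1 * math.sin(theta1) / n2)
    theta5 = math.asin(n1 * math.sin(theta4) / n2)
    # Geometric offsets, Eqs. (5)-(8).
    a1 = d * math.tan(theta1)
    a2 = t * math.tan(theta2)
    a3 = t * math.tan(theta5)
    a4 = (l - t) * math.tan(theta4) + a3
    # Refraction angle in the liquid, Eq. (9), then the index, Eq. (10).
    theta3 = math.atan((a4 - a2 - a1) / (l - t - d))
    return n1 * math.sin(theta1) / math.sin(theta3)
```

Feeding the function offsets simulated forward through the same geometry recovers the simulated index, which is a useful sanity check before applying it to real edge detections.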
III. 3-D MEASUREMENT
It is necessary to search for corresponding points from
right and left images to measure the object by using the
stereo vision system. In our method, corresponding points are
searched for with template matching by using the normalized
cross correlation (NCC) method.
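As a concrete reference, the normalized cross correlation between two equal-sized patches (flattened to 1-D lists here for brevity; this is the generic NCC score, not the authors' matching code) can be written as:

```python
def ncc(patch_a, patch_b):
    """Normalized cross correlation of two equal-sized, non-constant
    patches; returns a score in [-1, 1], with 1 for a perfect match."""
    n = len(patch_a)
    ma = sum(patch_a) / n
    mb = sum(patch_b) / n
    # Covariance of the mean-removed patches...
    num = sum((a - ma) * (b - mb) for a, b in zip(patch_a, patch_b))
    # ...normalized by the product of their standard deviations.
    da = sum((a - ma) ** 2 for a in patch_a) ** 0.5
    db = sum((b - mb) ** 2 for b in patch_b) ** 0.5
    return num / (da * db)
```

Because the mean and the scale are normalized out, NCC is insensitive to the brightness and contrast changes caused by light attenuation in water, which makes it a common choice for underwater template matching.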
After detecting corresponding points, an accurate 3-D
measurement can be executed by considering the refraction
effects of light in aquatic environments.
Refractive angles at the boundary surfaces among air, glass
and liquid can be determined by using Snell’s law (Fig. 4).
Fig. 4. 3-D measurement.

We assume the refractive indices of air and the glass to be $n_1$ and $n_2$, respectively, and the angle of incidence from air to the glass to be $\theta_1$. A unit ray vector $\mathbf{d}_2 = (\alpha_2, \beta_2, \gamma_2)^T$ ($T$ denotes transposition) traveling in the glass is given by (11):

$$\begin{pmatrix} \alpha_2 \\ \beta_2 \\ \gamma_2 \end{pmatrix} = \frac{n_1}{n_2} \begin{pmatrix} \alpha_1 \\ \beta_1 \\ \gamma_1 \end{pmatrix} + \left( \sqrt{1 - \frac{n_1^2}{n_2^2}\sin^2\theta_1} - \frac{n_1}{n_2}\cos\theta_1 \right) \begin{pmatrix} \lambda \\ \mu \\ \nu \end{pmatrix}, \qquad (11)$$
where $\mathbf{d}_1 = (\alpha_1, \beta_1, \gamma_1)^T$ is the unit ray vector of the camera in air and $\mathbf{N} = (\lambda, \mu, \nu)^T$ is a normal vector of the glass plane. Vector $\mathbf{d}_1$ can be easily calculated from the coordinate value of the corresponding point, and vector $\mathbf{N}$ can be calibrated in advance of the measurement as described above.
A unit ray vector $\mathbf{d}_3 = (\alpha_3, \beta_3, \gamma_3)^T$ traveling in the liquid is given by (12):

$$\begin{pmatrix} \alpha_3 \\ \beta_3 \\ \gamma_3 \end{pmatrix} = \frac{n_2}{n_3} \begin{pmatrix} \alpha_2 \\ \beta_2 \\ \gamma_2 \end{pmatrix} + \left( \sqrt{1 - \frac{n_2^2}{n_3^2}\sin^2\theta_3} - \frac{n_2}{n_3}\cos\theta_3 \right) \begin{pmatrix} \lambda \\ \mu \\ \nu \end{pmatrix}, \qquad (12)$$

where $n_3$ is the refractive index of the liquid estimated in Section II, and $\theta_3$ is the angle of incidence from the glass to the liquid, respectively.
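Equations (11) and (12) are the same vector refraction formula applied with different index pairs. A sketch of that formula, under the assumption that the unit normal points into the second medium so that the cosine of the incidence angle is the plain dot product (names are ours):

```python
import math

def refract(d, n, na, nb):
    """Refract unit ray d at a plane with unit normal n (assumed to
    point into the second medium), going from index na to index nb.
    Implements the vector form used in Eqs. (11)/(12)."""
    r = na / nb
    cos_i = d[0]*n[0] + d[1]*n[1] + d[2]*n[2]   # cos of incidence angle
    sin2_i = max(0.0, 1.0 - cos_i * cos_i)      # sin^2 of incidence angle
    # Coefficient of the normal term: sqrt(1 - r^2 sin^2) - r cos.
    k = math.sqrt(1.0 - r * r * sin2_i) - r * cos_i
    return tuple(r * di + k * ni for di, ni in zip(d, n))
```

The tangential component of the output is the input's scaled by na/nb, which is exactly Snell's law, and the result remains a unit vector.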
An arbitrary point $\mathbf{C}_p = (x_p, y_p, z_p)^T$ on the ray is given by (13):

$$\begin{pmatrix} x_p \\ y_p \\ z_p \end{pmatrix} = c \begin{pmatrix} \alpha_3 \\ \beta_3 \\ \gamma_3 \end{pmatrix} + \begin{pmatrix} x_2 \\ y_2 \\ z_2 \end{pmatrix}, \qquad (13)$$

where $\mathbf{C}_2 = (x_2, y_2, z_2)^T$ is the point on the glass and $c$ is a constant.
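Each camera yields one ray in this parametric form, and two such rays rarely intersect exactly; a common choice, and the selection described below, is the midpoint of the shortest segment between them. A minimal sketch with hypothetical names:

```python
def ray_midpoint(p0, dp, q0, dq):
    """Midpoint of the shortest segment between two non-parallel rays
    P(s) = p0 + s*dp and Q(u) = q0 + u*dq (the Eq. (13) form)."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    w = [a - b for a, b in zip(p0, q0)]
    a, b, c = dot(dp, dp), dot(dp, dq), dot(dq, dq)
    d, e = dot(dp, w), dot(dq, w)
    den = a * c - b * b          # zero only for parallel rays
    # Parameters of the closest points on each ray (2x2 linear system
    # from minimizing |P(s) - Q(u)|^2).
    s = (b * e - c * d) / den
    u = (a * e - b * d) / den
    p = [x + s * y for x, y in zip(p0, dp)]
    q = [x + u * y for x, y in zip(q0, dq)]
    return [(x + y) / 2.0 for x, y in zip(p, q)]
```

The length of the segment |P(s) - Q(u)| itself is a useful residual: it is the inter-ray distance used later in the discussion as an alternative cue for the refractive index.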
Two rays are calculated by ray tracing from the left and the right cameras, and the intersection of the two rays gives the 3-D coordinates of the target point in liquid. Theoretically, the two rays intersect at one point on the object surface; in practice, however, this is not always true because of
noises and quantization artifacts. Consequently, we select the midpoint of the shortest segment connecting the two rays (Fig. 5).

Fig. 5. Ray tracing from two cameras.

Note that the details of the solution are explained in [11].

IV. IMAGE RESTORATION

Images that are free from the refraction effects can be generated from the distorted images by using the 3-D information acquired in Section III.

Fig. 6 shows the top view of the situation around the water surface region. Here, let $e_2$ be the image coordinate value that is influenced by the refraction effect in the liquid, and $e_1$ be the rectified image coordinate value (in other words, the one free from the refraction effect of the liquid). The purpose is to reconstruct a new image by obtaining $e_1$ from the observed value $e_2$.

Fig. 6. Image restoration.

In Fig. 6, the image distance ($f$), the angle between the optical axis of the camera and the normal vector of the glass ($\phi$), the distance between the lens center and the glass ($d$), the thickness of the glass ($t$), the distance between the image center and $e_2$ ($g_{2x}$), and the distance between the lens and the object ($z_i$) are known parameters. We can restore the image if $g_{1x}$ (the distance between the image center and $e_1$) is obtained.

At first, the angle of incidence $\theta_{1x}$ is expressed as follows:

$$\theta_{1x} = \phi + \tan^{-1}\frac{g_{2x}}{f}. \qquad (14)$$

The angle of refraction from air to glass $\theta_{2x}$ and that from glass to liquid $\theta_{3x}$ are obtained by using Snell's law:

$$\theta_{2x} = \sin^{-1}\frac{n_1 \sin\theta_{1x}}{n_2}, \qquad (15)$$

$$\theta_{3x} = \sin^{-1}\frac{n_1 \sin\theta_{1x}}{n_3}. \qquad (16)$$

On the other hand, parameters $a_{1x}$, $a_{2x}$, and $a_{3x}$ are obtained from the geometrical relationship in Fig. 6:

$$a_{1x} = d \tan\theta_{1x}, \qquad (17)$$

$$a_{2x} = t \tan\theta_{2x}, \qquad (18)$$

$$a_{3x} = (z_i - t - d) \tan\theta_{3x} + a_{1x} + a_{2x}. \qquad (19)$$

At the same time, $a_{3x}$ can be expressed as follows:

$$a_{3x} = (z_i - t) \tan\theta_{4x} + t \tan\theta_{5x}. \qquad (20)$$

Finally, we can obtain the following equation:

$$a_{3x} = (z_i - t) \tan\theta_{4x} + t \tan\left(\sin^{-1}\frac{n_1 \sin\theta_{4x}}{n_2}\right). \qquad (21)$$

From (21), we can calculate $\theta_{4x}$ numerically. Therefore, parameter $g_{1x}$ is obtained by using the calculated $\theta_{4x}$ and $f$:

$$g_{1x} = f \tan\theta_{4x}. \qquad (22)$$
By using g1x , the image that is free from the refraction
effect can be restored.
The vertical coordinate value after the restoration is also
calculated in the same way. In this way, the image restoration
is executed.
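The restoration chain (14)–(22) can be sketched end to end; the bisection below is one simple way to solve (21) for the unknown angle (names are ours, and the search bracket of [0, 1.2] rad is an illustrative assumption for moderate viewing angles):

```python
import math

def restore_g1x(g2x, f, phi, d, t, zi, n1, n2, n3):
    """Map a distorted horizontal coordinate offset g2x to the
    refraction-free offset g1x, following Eqs. (14)-(22)."""
    # Eq. (14): incidence angle for the observed (distorted) coordinate.
    th1 = phi + math.atan(g2x / f)
    # Eqs. (15), (16): refraction into glass and into liquid.
    th2 = math.asin(n1 * math.sin(th1) / n2)
    th3 = math.asin(n1 * math.sin(th1) / n3)
    # Eqs. (17)-(19): lateral offset of the object point.
    a3x = d * math.tan(th1) + t * math.tan(th2) + (zi - t - d) * math.tan(th3)
    # Eq. (21): solve for th4 by bisection; the left-hand side is
    # monotonically increasing in th4 on the bracket.
    lo, hi = 0.0, 1.2
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        th5 = math.asin(n1 * math.sin(mid) / n2)
        if (zi - t) * math.tan(mid) + t * math.tan(th5) < a3x:
            lo = mid
        else:
            hi = mid
    th4 = 0.5 * (lo + hi)
    return f * math.tan(th4)     # Eq. (22)
```

A quick consistency check: with phi = 0 and the "liquid" set to air (n3 = n1), the mapping must leave the coordinate unchanged, while with water (n3 > n1) the restored offset shrinks, i.e. the magnified underwater part of the image is pulled back toward the image center.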
However, there may be no texture information on or around the water surface, because a dark line appears at the water surface in the images. Therefore, the textures of these regions are interpolated by an image inpainting algorithm [17]. This method fills in missing regions of an image in consideration of the slopes of the image intensities, and the merit of this algorithm is its fine reproduction of edges.
Finally, we can obtain the restored image both below and
around the water surface.
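As a much simpler stand-in for the inpainting of [17], missing pixels can be filled by iterating the Laplace (heat) equation, which smoothly propagates the surrounding intensities into the masked strip; unlike [17], this sketch does not preserve edges well:

```python
def fill_region(img, mask, iters=500):
    """Fill masked pixels of a grayscale image (list of lists) by
    Jacobi iterations of the Laplace equation: each unknown pixel
    becomes the average of its four neighbours. Border pixels are
    assumed known. A toy substitute for true inpainting."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for _ in range(iters):
        nxt = [row[:] for row in out]
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                if mask[y][x]:
                    nxt[y][x] = (out[y - 1][x] + out[y + 1][x]
                                 + out[y][x - 1] + out[y][x + 1]) / 4.0
        out = nxt
    return out
```

For a thin horizontal strip like the dark water-surface line, even this diffusion fill converges to a plausible blend of the intensities above and below the strip.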
V. EXPERIMENT
We constructed an underwater environment by using a water tank (Fig. 7). This setup is optically equivalent to submerging a waterproof camera in water. We used two digital video cameras that capture images of 720×480 pixels, and set the optical axes parallel to the plane of the water surface.
In the experiment, the geometrical relationship between
two cameras and the glass, the thickness of the glass, and
intrinsic camera parameters [18] were calibrated before the
3-D measurement in air. These parameters never change
regardless of whether there is water or not.
To evaluate the validity of the proposed method, two objects were measured in a liquid whose refractive index was treated as unknown. Object 1 is a duck model and Object 2 is a cube. Object 1 (duck model) floated on the water surface, and Object 2 (cube) was placed inside the liquid (Fig. 7).
Fig. 7. Overview of experiments: (a) bird's-eye view; (b) front view.

Fig. 8. Stereo image pair: (a) left image; (b) right image.
Figs. 8(a) and (b) show the acquired left and right images of the water surface, respectively.
At first, the refractive index of the unknown liquid ($n_3$) is estimated from the four edge positions inside the red circles. Table I shows the estimation results. The variation of the results is small enough to be trusted, and the average of the four results is 1.333, while the ground truth is 1.33 because water was used as the unknown liquid. From this result, it is verified that our method can estimate the refractive index precisely.
Fig. 9 shows the 3-D shape measurement result for Object 1. Fig. 9(a) shows the result without consideration of the light refraction effect: the 3-D shape is disconnected above and below the water surface. Fig. 9(b) shows the result obtained by our method: a continuous shape is acquired, although the acquired images have discontinuous contours (Fig. 8).
By using the estimated refractive index, the shape of Object 2 (cube) was measured quantitatively. When the refraction effect was ignored ($n_3 = 1.000$ assumed), the vertex angle was measured as 111.1 deg, while the ground truth was 90.0 deg. On the other hand, the result was 90.9 deg when the refraction effect was considered by using the estimated refractive index. From these results, it is verified that our method can measure the shape of underwater objects accurately.
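The vertex-angle evaluation reduces to the angle between two measured edge vectors. A generic sketch (not the authors' exact evaluation code):

```python
import math

def vertex_angle(p, q, r):
    """Angle at vertex q formed by the rays q->p and q->r, in degrees,
    for 3-D points given as coordinate tuples."""
    v1 = [a - b for a, b in zip(p, q)]
    v2 = [a - b for a, b in zip(r, q)]
    dot = sum(a * b for a, b in zip(v1, v2))
    m1 = math.sqrt(sum(a * a for a in v1))
    m2 = math.sqrt(sum(a * a for a in v2))
    return math.degrees(math.acos(dot / (m1 * m2)))
```

Applied to three measured cube points (two edge endpoints and their shared corner), this yields the angles compared above against the 90.0 deg ground truth.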
Fig. 10 shows the result of the image restoration. Fig. 10(a) shows the original image, Fig. 10(b) shows the extraction result
Fig. 9. 3-D measurement results: (a) without consideration; (b) with consideration.

TABLE I
ESTIMATION RESULT OF REFRACTIVE INDEX.

                Left camera              Right camera
            Left edge  Right edge    Left edge  Right edge    Average
              1.363      1.335         1.334      1.300        1.333
Fig. 10. Image restoration results: (a) original image; (b) extraction result; (c) image restoration result.
of the object obtained by the color extraction method [19], and Fig. 10(c) shows the restoration result, respectively.
These results show that our method works well, regardless of the presence of an unknown liquid, by estimating the refractive index of the liquid and considering the light refraction.
VI. DISCUSSION
Regarding the estimation of the refractive index, the estimation error was within 1% in all experiments. The accuracy and stability are very high; however, the proposed method needs image pairs of the water surface. Therefore, it may not be directly applicable to deep water, because the refractive index changes gradually with water pressure and temperature. On the other hand, we can use the distance between the two rays ($l$ in Fig. 5) for the estimation when water surface images are difficult to obtain: the value of the refractive index for which the distance between the two rays becomes smallest is the correct one. Therefore, the refractive index $n_{est}$ can be estimated by the following optimization:
estimated by using following optimization.
li (n),
(23)
nest =arg min
n
Average
1.333
(c) Image restoration result.
i
where $l_i(n)$ is the calculated distance between the two rays at the $i$-th measurement point when the refractive index is presumed to be $n$. However, this method is not robust because it is very sensitive to the initial value of the estimation. Therefore, it is better to use both approaches for deep water applications: at first, in shallow water, the refractive index is estimated by using water surface images; then, in deep water, it is estimated by using the distance between two rays.

TABLE II
ACCURACY OF MEASUREMENT (POSITION ERROR).

                          Average    Standard deviation
With consideration         2.0 mm         0.4 mm
Without consideration     36.1 mm         1.1 mm
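The fallback optimization of (23) is a one-dimensional search over candidate indices. A sketch with a synthetic gap function (a real implementation would retrace both refracted rays per Section III for every candidate n and measure their separation):

```python
def estimate_index_by_ray_gap(gap_fns, candidates):
    """Eq. (23): pick the presumed refractive index n that minimizes
    the summed inter-ray distances l_i(n) over all measurement points i.

    gap_fns    -- callables l_i(n), one per measurement point
    candidates -- iterable of candidate refractive indices
    """
    return min(candidates, key=lambda n: sum(g(n) for g in gap_fns))
```

An exhaustive scan over a coarse grid like this avoids the initial-value sensitivity noted above, at the cost of more ray-tracing evaluations; a gradient-based refinement could then start from the grid minimum.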
Regarding the refraction effects, they may be reduced by using an individual spherical protective dome for each camera. However, it is impossible to eliminate the refraction effects completely. Therefore, our method is essential for precise measurement in underwater environments.
Regarding the image restoration, an area without information appears near the water surface in the form of a black strip. We cannot obtain information about this area, so the textures of these regions are interpolated for visibility. Note that the 3-D measurement explained in Section III can be achieved without the image restoration; therefore, the 3-D measurement results do not include interpolated data. This means that the proposed method provides both reliable results suitable for underwater recognition and images with good visibility for human operators.
To evaluate the proposed method quantitatively, additional well-calibrated objects, whose shapes are known and whose positions were measured precisely in air in advance, were measured in water. Table II shows the measurement results. In this experiment, mis-corresponding points were rejected by a human operator. The position error with consideration of the refraction effects is 2.0 mm on average when the distance between the stereo camera system and the object is 250 mm, while the error without consideration of the refraction effects is 36.1 mm. The error in the depth direction was dominant in all cases. From these results, it is verified that our method can measure the positions of objects in water accurately.
VII. CONCLUSION
We proposed a 3-D measurement method for objects in an unknown liquid with a stereo vision system. We estimate the refractive index of the unknown liquid by using images of the water surface, restore images that are free from the refractive effects of light, and measure 3-D shapes of objects in liquid in consideration of the refractive effects. The effectiveness of the proposed method was verified through experiments.
With our method, an underwater robot facing a liquid of unknown refractive index can acquire the refractive index and then measure underwater objects simply by surfacing and acquiring an image of the water surface.
As future work, a single-lens stereo camera system (e.g., [20]) for underwater environments should be constructed to simplify the equipment and improve usability.
REFERENCES
[1] Junku Yuh and Michael West: “Underwater Robotics,” Advanced
Robotics, Vol.15, No.5, pp.609–639, 2001.
[2] E. O. Hulburt: “Optics of Distilled and Natural Water,” Journal of the Optical Society of America, Vol.35, pp.689–705, 1945.
[3] W. Kenneth Stewart: “Remote-Sensing Issues for Intelligent Underwater Systems,” Proceedings of the 1991 IEEE Computer Society
Conference on Computer Vision and Pattern Recognition (CVPR1991),
pp.230–235, 1991.
[4] Frank M. Caimi: Selected Papers on Underwater Optics, SPIE Milestone Series, Vol.MS118, 1996.
[5] Atsushi Yamashita, Susumu Kato and Toru Kaneko: “Robust Sensing
against Bubble Noises in Aquatic Environments with a Stereo Vision
System,” Proceedings of the 2006 IEEE International Conference on
Robotics and Automation (ICRA2006), pp.928–933, 2006.
[6] Atsushi Yamashita, Megumi Fujii and Toru Kaneko: “Color Registration of Underwater Images for Underwater Sensing with Consideration
of Light Attenuation,” Proceedings of the 2007 IEEE International
Conference on Robotics and Automation (ICRA2007), pp.4570–4575,
2007.
[7] Bryan W. Coles: “Recent Developments in Underwater Laser Scanning
Systems,” SPIE Vol.980 Underwater Imaging, pp.42–52, 1988.
[8] Robert F. Tusting and Daniel L. Davis: “Laser Systems and Structured
Illumination for Quantitative Undersea Imaging,” Marine Technology
Society Journal, Vol.26, No.4, pp.5–12, 1992.
[9] Nathalie Pessel, Jan Opderbecke, Marie-Jose Aldon: “Camera SelfCalibration in Underwater Environment,” Proceedings of the 11th
International Conference in Central Europe on Computer Graphics,
Visualization and Computer Vision, (WSCG2003), pp.104–110, 2003.
[10] Rongxing Li, Haihao Li, Weihong Zou, Robert G. Smith and Terry A. Curran: “Quantitative Photogrammetric Analysis of Digital Underwater Video Imagery,” IEEE Journal of Oceanic Engineering, Vol.22, No.2, pp.364–375, 1997.
[11] Atsushi Yamashita, Etsukazu Hayashimoto, Toru Kaneko and Yoshimasa Kawata: “3-D Measurement of Objects in a Cylindrical Glass
Water Tank with a Laser Range Finder,” Proceedings of the 2003
IEEE/RSJ International Conference on Intelligent Robots and Systems
(IROS2003), pp.1578–1583, 2003.
[12] Atsushi Yamashita, Hirokazu Higuchi, Toru Kaneko and Yoshimasa
Kawata: “Three Dimensional Measurement of Object’s Surface in
Water Using the Light Stripe Projection Method,” Proceedings of
the 2004 IEEE International Conference on Robotics and Automation
(ICRA2004), pp.2736–2741, 2004.
[13] Hayato Kondo, Toshihiro Maki, Tamaki Ura, Yoshiaki Nose, Takashi
Sakamaki and Masaaki Inaishi: “Relative Navigation of an Autonomous Underwater Vehicle Using a Light-Section Profiling System,” Proceedings of the 2004 IEEE/RSJ International Conference on
Intelligent Robots and Systems (IROS2004), pp.1103–1108, 2004.
[14] Atsushi Yamashita, Shinsuke Ikeda and Toru Kaneko: “3-D Measurement of Objects in Unknown Aquatic Environments with a Laser
Range Finder,” Proceedings of the 2005 IEEE International Conference on Robotics and Automation (ICRA2005), pp.3923–3928, 2005.
[15] Hideo Saito, Hirofumi Kawamura and Masato Nakajima: “3D Shape Measurement of Underwater Objects Using Motion Stereo,” Proceedings of the 21st International Conference on Industrial Electronics, Control, and Instrumentation, pp.1231–1235, 1995.
[16] Hiroshi Murase: “Surface Shape Reconstruction of a Nonrigid Transparent Object Using Refraction and Motion,” IEEE Transactions on
Pattern Analysis and Machine Intelligence, Vol.14, No.10, pp.1045–
1052, 1992.
[17] Marcelo Bertalmio, Guillermo Sapiro, Vicent Caselles and Coloma
Ballester: “Image Inpainting,” ACM Transactions on Computer Graphics (Proceedings of SIGGRAPH2000), pp.417–424, 2000.
[18] Roger Y. Tsai: “A Versatile Camera Calibration Technique for HighAccuracy 3D Machine Vision Metrology Using Off-the-Shelf TV
Cameras and Lenses,” IEEE Journal of Robotics and Automation,
Vol.RA-3, No.4, pp.323–344, 1987.
[19] Alvy Ray Smith and James F. Blinn: “Blue Screen Matting,”
ACM Transactions on Computer Graphics (Proceedings of SIGGRAPH1996), pp.259–268, 1996.
[20] Ardeshir Goshtasby and William A. Gruver: “Design of a Single-Lens
Stereo Camera System,” Pattern Recognition, Vol.26, No.6, pp.923–
937, 1993.