Field Experiments with the Ames Marsokhod Rover
Daniel Christian1, David Wettergreen, Maria Bualat,
Kurt Schwehr2, Deanne Tucker, Eric Zbinden1
Intelligent Mechanisms Group
NASA Ames Research Center, MS 269-3
Moffett Field, CA 94035-1000 USA
1. Caelum Research Corporation
2. Recom Technologies, Inc.
Abstract
In an ongoing series of field experiments, the Ames
Marsokhod rover is deployed to remote locations and
operated by scientists in simulated planetary explorations. These experiments provide insight both for scientists preparing for real planetary surface exploration
and for robotics researchers. In this paper we will provide an overview of our work with the Marsokhod,
describe the various subsystems that have been developed, discuss the latest in a series of field experiments,
and discuss the lessons learned about performing
remote geology.
1 Introduction
A terrestrial geologist investigates an area by systematically
moving among and inspecting surface features, such as outcrops, boulders, contacts, and faults. A planetary geologist
must explore remotely and use a robot to approach and image
surface features. Close-up inspection of remote planetary surfaces is a key part of understanding the geological processes
at work in our Solar System. Upcoming NASA missions, as
well as the recent Mars Pathfinder landing with the Sojourner
robot, offer planetary scientists opportunities to use mobile
robots to make close observations of surface features, and to
help answer long-standing questions regarding planetary formation.
The Intelligent Mechanisms Group (IMG) at NASA Ames
Research Center (ARC) has been developing capabilities and
control systems in order to research and evaluate the scientific exploration of other planetary surfaces. A series of mission simulations has been performed in which the robot is located at a remote field site but controlled from ARC (or other sites). In each test, the abilities of the robot, the efficiency of remote control, and the accuracy of the mission simulation have improved.
Initial tests in Kamchatka, Russia in 1993 tested virtual
reality remote control of the robot and imagers. The Amboy
crater test in California in 1994 investigated remote science
team interaction in the presence of significant time delays.
The Kilauea Volcano test in Hawaii in 1995 added a manipulator to the vehicle and more advanced control modes [Hine 1995]. The most recent test, in the Painted Desert region of Arizona in 1996, further improved sensors, control modes, onboard autonomy, sample handling, and remote science simulation.
Figure 1: Ames Marsokhod in Arizona
2 Marsokhod systems
The research emphasis using Marsokhod (see Figure 1) is on
software and sensors for remote exploration. While Marsokhod is larger than the rovers currently being considered for Mars missions, little or no miniaturization is needed to test
many common field instruments as part of the robot. This
enables more rapid, inexpensive research into the field effectiveness of different sensor, actuator, and software configurations. Figure 2 shows Marsokhod with a Sojourner model and
the Koala micro rover for comparison.
Figure 2: Marsokhod, Sojourner, and Koala
2.1 Russian built chassis
The Marsokhod chassis is an all terrain vehicle developed by
the Mobile Vehicle Engineering Institute (VNIITransmash)
in Russia [Kermurdjian 1992]. The chassis is 100 cm wide and 150 cm long, with an unloaded mass of 35 kg.
The chassis consists of three pairs of independently driven
titanium wheels, joined together by a three degree-of-freedom passively articulated frame. Two degrees-of-freedom
allow the frame to twist, while the third allows it to pitch.
This design enables the rover to conform passively to very
rugged terrain. The shape of the wheels provides low ground
pressure and minimizes the risk of high centering the rover
by enclosing most of the frame. The amplifiers, motors and
batteries are mounted inside the wheels to produce a very
low center-of-gravity.
Mounted above each segment is a rigid pallet for mounting
additional equipment. The front pallet supports the arm and
its electronics. The middle pallet supports electronics for the
pan-tilt head and arm force sensor and the mast holding the
cameras and antennas. An averaging mechanism keeps the
mast at the median angle between the front and rear segments. The rear pallet houses the rest of the electronics and
computers. Fully equipped, the robot weighs about 100 kg.
2.2 Instrument and sampling arm

An arm with tool carousel mounted on the front pallet allows close-up imaging, soil mechanics tests, and sample acquisition. Fixed stereo cameras on the front of the vehicle aid in arm placement.

The arm is a research unit developed by McDonnell Douglas. It has 5 degrees of freedom (DOF) and a 1 m maximum reach.

A rotary carousel has been added which holds three instruments and a sampler inside a 100x100mm clear cylindrical housing. An opening at the bottom of the housing allows access to the target. The carousel is linked to the arm through a 6-axis force sensor, enabling the robot to sense contact with the surface or an obstacle (see Figure 3).

Figure 3: Instrument carousel without housing

Multiple instruments on the carousel can be used without moving the arm. This minimizes disturbance of the site and ensures that the readings from different sensors can be directly correlated. The carousel has instruments in 4 positions: a monochrome camera, a color camera, a clam-shell, and an experimental haptic device (not shown). Calibration targets for size and color are in the field of view of each camera.

The resolution of both cameras is 0.08 mm/pixel at the housing bottom. The depth of field is approximately 20 mm and is centered on the housing bottom. The signal from the color camera is converted into separate red, green, and blue signals and digitized independently. Figure 4 shows a monochrome image of the scratch created by the haptic sensor on compacted soil and the 5 mm calibration stripes.
Figure 4: Close-up monochrome image
The clam-shell acquires samples using a single actuator.
The clam-shell fits entirely within the housing when the jaws
are open and extends outside the housing to acquire soil or
pebble samples of up to 12 cc. The carousel cannot rotate
while the clam-shell is out. The acquired sample can be
dropped in a storage container on the front pallet.
2.3 Automated arm placement
It is exceptionally difficult to place the instrument carousel in
contact with a surface using cameras alone. The arm force
sensor enables contact detection and surface normal estimation for automated carousel placement.
Using the force sensor, the arm can be placed in contact
with a surface in as little as two commands. The first moves
the arm to a position above the target placement. The second
does a guarded move until contact is made. Either move can be a simple joint-interpolated move or a Cartesian move along a line or about a rotation center.
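For illustration, the following Python sketch shows guarded-move logic of the kind described above. It is a minimal sketch, not the Marsokhod control code: the move_toward and read_force callables, the force threshold, and the step size are all assumptions.

    import time

    FORCE_THRESHOLD_N = 5.0    # assumed contact threshold (not from the paper)
    STEP_MM = 2.0              # assumed advance per control cycle

    def guarded_move(move_toward, read_force, max_travel_mm=150.0):
        """Advance along the approach axis until the force sensor reports contact."""
        traveled = 0.0
        while traveled < max_travel_mm:
            if abs(read_force()) > FORCE_THRESHOLD_N:
                return True            # contact detected: stop immediately
            move_toward(STEP_MM)       # small step along the approach direction
            traveled += STEP_MM
            time.sleep(0.05)           # assumed 20 Hz control cycle
        return False                   # travel limit reached without contact

    # Demo with a simulated surface 40 mm below the start position.
    if __name__ == "__main__":
        depth = [0.0]
        contact = guarded_move(lambda mm: depth.__setitem__(0, depth[0] + mm),
                               lambda: 8.0 if depth[0] >= 40.0 else 0.0)
        print("contact" if contact else "no contact", "after", depth[0], "mm")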
2.4 Imaging systems
The primary imaging hardware consists of a monochrome
stereo pair of cameras and a multi-spectral camera mounted
about 1.5m above the ground. These cameras are mounted on
a pan-tilt device with a 318˚ pan range and a 111˚ tilt range
with minimal backlash (see Figure 5).
Figure 5: Primary imagers on pan-tilt
All cameras produce monochrome RS-170 format video
which is digitized at 512 by 480 resolution by an 8 bit frame
grabber. Each camera has independent automatic gain control (AGC) and internal sync generation.
The stereo cameras are inexpensive PC board units with
12mm lenses that result in 1 mrad/pixel resolution. The stereo cameras can be precisely adjusted in roll, pitch, and vergence. The cameras are verged at infinity with zero relative
roll and pitch.
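The 1 mrad/pixel figure follows from the small-angle ratio of pixel pitch to focal length. A minimal check, assuming a pixel pitch of roughly 12 um (the sensor geometry is not stated in the paper):

    # Instantaneous field of view: IFOV ~ pixel_pitch / focal_length.
    focal_length_mm = 12.0     # lens stated in the text
    pixel_pitch_mm = 0.012     # assumed ~12 um pitch, not given in the paper
    ifov_mrad = pixel_pitch_mm / focal_length_mm * 1000.0
    print(f"{ifov_mrad:.2f} mrad/pixel")   # -> 1.00 mrad/pixel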
The color wheel camera is configured as a low cost simulation of the Imager for Mars Pathfinder (IMP). A monochrome camera with a 12mm lens yields 0.7 mrad/pixel
resolution. Figure 6 compares the commercial filters used
against the ones on the IMP. Because the camera uses
onboard AGC, gray reference targets must be placed in the
image to be able to correct the color balance.
A series of tests of the filter wheel were conducted prior to
the field test by several geologists familiar with remote sensing. A suite of test rocks were imaged with all filters with
direct sun illumination. Although there were color calibration targets in the scene, image saturation made precise color
correction impossible. The geologists concluded that the system was useful only for the identification of iron oxide state.
The scientists were able to determine if the rock was weathered (oxidized) or had a fresh surface. From these tests it became clear that this system of filters is not suited to rock and mineral identification.

Figure 6: Multi-spectral imager vs. IMP imager

Filter   Center (nm)   Width (nm)   IMP Center (nm)   IMP Width (nm)
1        all pass      -            -                 -
2        440           31           440               35
3        480           30           480               30
4        530           25           530               30
5        600           25           600               20
6        670           20           670               20
7        750           25           750               20
8        800           20           800               20
The multi-spectral imager is most often used in the 440nm
(blue), 530nm (green), and 670nm (red) bands to assemble
color images or panoramas. Also, the narrow band images
provide higher resolution and contrast than the stereo cameras (see Figure 7).
Figure 7: 440nm band image of vehicle tracks
Four types of images are produced: mono images, stereo pairs, mono panoramas, and stereo panoramas. Mono images consist of a single grayscale image. Stereo pairs take a left image, then a right image, and send them joined together left-right in a single image (see Figure 8).

Figure 8: Stereo pair from the primary cameras

Panoramas are formed by moving the pan-tilt to a start location, digitizing the central column from the image, stepping one pixel width to the next pan position, and repeating until reaching the end location. This technique can generate a 318˚ panorama with no lens distortion effects along the pan axis. Camera AGC changes are smoother and less distracting than when mosaicing individual images together. Stereo panoramas are done in the same fashion (see Figure 9).

Figure 9: 263˚ stereo panorama from landing site with windsock and color calibration targets.
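A minimal sketch of this column-scanning scheme follows; the set_pan and grab_frame interfaces to the pan-tilt controller and frame grabber are hypothetical stand-ins.

    import numpy as np

    def column_panorama(set_pan, grab_frame, start_deg, end_deg, step_deg=0.057):
        """Build a panorama one central image column per pan step.

        step_deg defaults to ~1 mrad, matching one pixel of the main cameras.
        """
        columns = []
        pan = start_deg
        while pan <= end_deg:
            set_pan(pan)
            frame = grab_frame()                            # H x W uint8 array
            columns.append(frame[:, frame.shape[1] // 2])   # keep central column only
            pan += step_deg
        return np.stack(columns, axis=1)                    # H x N panorama

    # Demo with synthetic 480x512 frames.
    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        pano = column_panorama(lambda p: None,
                               lambda: rng.integers(0, 256, (480, 512), dtype=np.uint8),
                               0.0, 90.0)
        print(pano.shape)   # (480, number_of_pan_steps)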
Each image is sent back to the control station as a header with state information and a series of image packets. Because a packet may be lost, a simple acknowledgment from the control station indicates that a packet has been received. If the acknowledgment is not received within 3 seconds, the packet is resent, up to 5 times. Because of satellite delays (600 ms) and communication overhead, a full size mono image took 4 minutes to transmit from Arizona to ARC.
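This is a stop-and-wait scheme, sketched below under assumed interfaces (send and wait_for_ack stand in for the radio/satellite link); paying link latency per packet on top of the channel rate is what stretches a full image to minutes.

    def send_reliably(packet, send, wait_for_ack, timeout_s=3.0, max_resends=5):
        """Stop-and-wait: send, await acknowledgment, resend on timeout."""
        for _attempt in range(1 + max_resends):
            send(packet)                  # one image packet out
            if wait_for_ack(timeout_s):   # control station confirmed receipt
                return True
        return False                      # give up after the maximum resends

    def send_image(header, packets, send, wait_for_ack):
        """Image = header with state information followed by image packets."""
        for piece in [header, *packets]:
            if not send_reliably(piece, send, wait_for_ack):
                raise IOError("packet lost after maximum resends")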
2.5 Onboard computers
In an enclosure on the rear pallet, a VME chassis houses the
computers and interfaces for all onboard systems. A Motorola
68060 processor provides all computation. One 6 axis servo
control board provides velocity control servo loops for the
vehicle chassis. A 5 axis servo control board provides position control for the arm. A 4 port serial interface card talks to
the compass/inclinometer, the arm force sensor, the pan-tilt
controller, and a multi-drop serial control bus.
The multi-drop serial bus controls a series of simple digital
and analog interface modules. These control power to the
base, control power to the arm, control the carousel, control
camera selection, control the filter wheel, and monitor the
vehicle chassis angles.
An Arlan radio ethernet bridge communicates offboard at 500 kbps. The robot uses a 6 dB omnidirectional antenna for uniform coverage. The stationary end of the link uses a 12 dB directional antenna to improve operating range and reduce multi-path effects.
2.6 Vision-based navigation
To reduce operator control cycles and improve navigational
accuracy, a vision-based tracking system autonomously
drives the robot to an operator designated natural feature (e.g.
a rock outcropping) [Wettergreen 1997].
The feature tracking system uses a robust image correlator
based on binary correlation of the sign of the difference of Gaussians of an image. As the robot moves, this correlator
tracks a feature from frame to frame and performs stereo correlation to estimate the feature range. Input imagery comes
from the stereo pair of cameras on the rover mast.
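A minimal sketch of that measure: band-pass each image with a difference of Gaussians, keep only the sign, and score a candidate alignment by the fraction of agreeing sign bits. The filter sigmas here are illustrative assumptions; [Wettergreen 1997] describes the actual tracker.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def sign_of_dog(image, sigma_narrow=1.0, sigma_wide=2.0):
        """Binarize an image by the sign of its difference-of-Gaussians response."""
        img = image.astype(float)
        dog = gaussian_filter(img, sigma_narrow) - gaussian_filter(img, sigma_wide)
        return dog >= 0.0                 # one bit per pixel

    def binary_correlation(template_bits, window_bits):
        """Score a candidate match: fraction of matching sign bits."""
        return float(np.mean(template_bits == window_bits))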
After a human operator designates a target feature, two control loops are activated to drive the robot to the feature. A gaze fixation loop correlates between previous and current images from one of the cameras, controlling the pan-tilt head to keep the target feature centered in the camera's field-of-view. A robot motion control loop correlates between left and right images, and uses the stereo range data in conjunction with the bearing data from the fixation loop to keep the vehicle driving to the feature. This control loop also halts the vehicle when the feature is within the desired distance.

2.7 Remote control interface

The Marsokhod remote control interface is made up of four subsystems: the Virtual Environment Vehicle Interface (VEVI), the rover manager, the rover operator, and the telemetry router (see Figure 10).

Figure 10: Remote control interface

VEVI is a real time, networked visualization system used to display rover state and local terrain. As the rover moves, VEVI shows the rover position and orientation, the articulation of the chassis, and the state of the arm in a 3D virtual world [Hine 1995].
The rover manager, rover operator, and telemetry router
are written in the TCL/Tk interpreted scripting language.
Using the built-in widgets for buttons, entries, canvases, etc.,
a 2D interface controls all systems of the rover.
A single rover manager is the one connection from any
number of “ground stations” to the rover. Several operator
interfaces may be run at once, but all messages are routed
through the manager. The rover manager can disable families
of rover commands by simply not relaying those commands
to the rover. The rover manager can also introduce a communications delay by holding on to a message for a specified
amount of time before sending it out. In this way, we can
simulate communications delays between Earth and Mars.
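A sketch of those two manager behaviors follows, written in Python for consistency with the other sketches (the actual manager was written in TCL/Tk); the command-family names and the API are illustrative assumptions.

    import heapq
    import time

    class RoverManager:
        """Single relay between ground stations and the rover (sketch)."""

        def __init__(self, delay_s=0.0):
            self.delay_s = delay_s     # e.g. 360.0 to emulate a 6 minute delay
            self.disabled = set()      # command families to drop, e.g. {"arm"}
            self._queue = []           # (release_time, sequence_number, message)
            self._seq = 0

        def submit(self, family, message):
            if family in self.disabled:
                return                 # disable a family by simply not relaying it
            heapq.heappush(self._queue,
                           (time.time() + self.delay_s, self._seq, message))
            self._seq += 1

        def ready_for_rover(self):
            """Return messages whose hold time has elapsed, in submission order."""
            out = []
            while self._queue and self._queue[0][0] <= time.time():
                out.append(heapq.heappop(self._queue)[2])
            return out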
The rover operator consists of several windows that display rover telemetry and provide the controls for the various
rover subsystems. The two main panels are the telemetry
panel that displays rover state, trip distance, and warning
messages, and the camera control panel that enables the user
to control the pan/tilt head and grab images from the various
cameras on-board. Several specialized interchangeable panels give the user access to two driving modes (gross motion
and individual wheel control), three arm control modes (joint
angles, jog, and cartesian), and control of the carousel. An
image display/tracker control panel is available to display
camera outputs and to provide the controls for the on-board
visual tracking system. Additionally, two pop-up display
panels are available to aid the operator in visualizing arm
controls and chassis articulation.
A telemetry router is the single connection from the rover
to any number of operator interfaces. All telemetry from the
Marsokhod is sent through the telemetry router to its final
destination.
All three interface subsystems automatically create logs.
The rover manager logs all commands to the rover, the
telemetry router logs all telemetry from the rover, and the
rover operator logs comments and annotations made by the
user and all imagery data (file names, time stamp, rover state
at time of image grab).
As part of an outreach program, the IMG developed a text-only interface that could be used by school children to drive
the rover and control its cameras. The participating Native
American schools logged on to an IMG workstation and ran
the text-only interface which gave them simple menus for
controlling the rover direction, speed, and pan/tilt angles.
They could see the results of their commands by watching as
imagery and telemetry were updated on the IMG web site.
3 Field test environment
Our most recent field experiment using the Marsokhod took
place near Tuba City on the Navajo reservation in the Painted
Desert region of northern Arizona. The site was chosen for its
sparse vegetation, varied geology, and Mars-like appearance.
The test was run in November to avoid high temperatures.
The science team was told little about the location of the
site (only that it was in Arizona) and nothing about the local
geology. The few people who had been to the site (in order to
select it) were placed in administrative roles and conveyed
no knowledge of the site to the science team. Simulated
descent images were taken from a helicopter several weeks
prior to the field test. A simulated satellite image was provided from high altitude overflights done years earlier.
Because of limited availability of the science team, operations that would have taken weeks or months on Mars were
compressed into 6 days of operations. A full resolution, color
panorama was taken prior to the science test to save the 7
hours necessary to acquire and transmit it.
3.1 Field operations
The field site was 10 miles from the nearest town and 60 miles
from a major city. No power or telephone service was available near the site. A rented recreational vehicle served as an operations truck and contained all operational and logistical
equipment for the field test. Portable generators provided
power for the operation truck and for the robot. A rented moving van provided overnight shelter for the robot from the
weather and served as a workshop. Because of the age of the batteries in Marsokhod, it was run off a tether from an easily moved generator for all field operations. The batteries provide heavy-load power and the tether provides all standard operating power.
A commercial satellite link provided communications at
112kbps from the field site back to ARC. Cellular phones
with high gain antennas aimed at the nearest cell in Utah provided voice communications for troubleshooting and ground
truth support. The phone link was kept open during all operations. A person on the operations truck could talk to ARC
via phone, talk to the field support team via hand held radios,
and could monitor all data from the robot.
Field team operations ran from sunrise to an hour after sunset each day. Operation from ARC ran roughly 9-5 each day
(AZ time). A geologist from Arizona State University took
notes on what the robot was observing, took reference photographs, and prepared a detailed ground truth report after the
test was complete.
3.2 Online collaboration
The ARC Marsokhod team developed a web site3 through
which all mission data was available as soon as it was
received at Ames. Ancillary information was also available
for the science teams to assist in finding and processing the
data and understanding the rover capabilities. Additionally,
the web site provided public outreach on all aspects of the
mission.
The original portable pixel map (PPM) format images
from the robot are archived, and are also converted to GIF,
JPEG, and TIFF formats for various image analysis tools.
The header and telemetry information can be overlaid onto
the image for printing and web site use.
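A hedged sketch of that conversion and overlay step using Pillow; the file naming and the caption contents are illustrative, not the mission's actual archive schema.

    from PIL import Image, ImageDraw

    def convert_with_overlay(ppm_path, caption):
        """Convert an archived PPM to web formats, stamping telemetry text on top."""
        img = Image.open(ppm_path).convert("RGB")
        draw = ImageDraw.Draw(img)
        draw.rectangle([0, 0, img.width, 14], fill="black")  # banner for the text
        draw.text((2, 2), caption, fill="white")             # e.g. time stamp, pan/tilt
        stem = ppm_path.rsplit(".", 1)[0]
        for ext in ("gif", "jpg", "tif"):
            img.save(stem + "." + ext)                       # format from extension

    # Example (hypothetical file name and telemetry):
    # convert_with_overlay("pan_cam_001.ppm", "1996-11-12 14:03  pan 120.5  tilt -10.0")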
Data available on the web site includes the following:
guided tour of the web site; most recent images and telemetry (updated every minute); red-blue stereo images; air photos annotated with actual rover path; Quicktime VR
panoramas of the science panorama images; mission logs;
background information; control display screen shots; camera specifications; Marsokhod specifications; photos of people and equipment; animated image sequences from onboard cameras; press releases; test site general information;
and the scientist briefing packet.
3. http://img.arc.nasa.gov/marsokhod
3.3 Control center operations
The science and rover operations at ARC were conducted in a
small control room. The rover operators and mission scientists shared the space and interacted constantly. Silicon
Graphics workstations provided all operational control and
access to all the data. A menu driven interface was provided
to access the different data analysis programs. Several image
processing and display tools were provided: XV, PhotoShop,
Netscape, and ImageMagick. Two Macintoshes were provided for access to NIH Image and PhotoShop.
Two workstations were designated for the rover operator
and engineering support. Scientists were requested to stay
out of this area to avoid distracting the rover operator. A second, large monitor mirroring the rover operator's screen was placed in the science area to view live telemetry and images.
Three methods were used to view stereo images: LCD shutter glasses (Crystal Eyes) were used to view a computer monitor in a special mode, red/blue anaglyphs were used with on-screen display and color printouts, and a stereo air photo viewer was used to view printouts. The red/blue anaglyphs
can be viewed on any monitor or on color print outs. The
LCD glasses allow multiple people to view a single image on
a specially equipped workstation. Manual processing steps
were needed before any images could be viewed in stereo.
Several monochrome and color printers were made available in the control center or nearby for hard copy of images.
The initial full color panorama was printed out 10 ft wide and hung on one wall.
3.4 Ancillary tests

Some additional tests were performed to take advantage of the field site. These tests showed promise for future research but are only in preliminary development.

A deployable micro-rover was tested to investigate the potential for a small rover to assist as an imaging calibration target, an external inspection camera, a navigation reference, and a high risk explorer. Koala4 is a 32x32x20 cm, 6-wheeled, skid-steered rover with simple onboard autonomy. For the field test it was equipped with a fixed monochrome camera which sent live video to the operations truck.

4. Koala is made by the K-team (http://www.kteam.com) in collaboration with the Swiss Federal Institute of Technology of Lausanne (EPFL).

A haptic device recorded the frequency and amplitude of a needle sliding over the surface of a sample using a piezo vibration transducer. The forward motion of the needle is provided by the rotation of the carousel. A replay device allowed the remote scientist to feel the texture of the sample recorded by the needle.

A simulation of the Pathfinder windsock experiment was performed to test imaging requirements. The rigid windsocks are displaced from a vertical orientation by air movement and imaged to measure this displacement and its direction. The objective was to get image data in realistic environments for later experiments with image compression levels.

4 Science experiments

The science teams working at NASA Ames Research Center had three specific goals for the 6 days of field testing in Arizona. The primary goal was to establish the general geology and biology of the field test site. The second goal was to test methods and techniques to be used with the Mars Pathfinder mission and the Sojourner rover in July 1997. The third goal was to try out a mode of operation based on rapid exploration.

To accomplish the science goals, the field test was split into two-day phases: Pathfinder simulation, general exploration, and rapid exploration. Some scientists participated in the team for more than one phase. Since the Pathfinder lander was not equipped with a descent camera, the Pathfinder team went first and was not given descent images. The second and third phases were given simulated descent imagery at the start of operations. This allowed the teams to select not only nearby science sites, but also areas outside the "landing area" that looked significant. The third phase team came with a path sequence planned before arriving at Ames using only descent imagery. The path was executed regardless of what the team learned at each site.

The robot was carried back to the "landing site" at the start of each phase. Onboard odometry showed that the rover travelled a total of 469 m during the science test (see Figure 11).

Figure 11: Rover paths on a 10m grid

The robot was driven back from the last science stop toward the landing site with a 6 minute time delay enabled. The longest single move was 45.3 m.

4.1 Science team organization
Initially, the science team organized itself as a team lead,
CapCom, scribe, and an analyst and processor for trafficability and for site analysis. However, the extensive terrain capabilities of Marsokhod made trafficability a simple issue that
each person could understand. The distinction between analyst and processor also proved to be unnecessary. Members
did their own processing and helped each other as needed.
The science members were supposed to talk to the rover
operator/pilot through one designated person that the science
team referred to as “CapCom.” This formal structure was
often skipped, and the science team members talked directly
to the rover operator.
The science team used labeled descent photos with 2m or
10m grids to communicate positions (see Figure 11). They
would mark a target destination on a gridded photo and give
that to the rover operator. This let the rover pilot focus on driving without having to continually interact with the science
team to make sure the rover was heading to the correct location. The field reference geologist used the same gridded
photos for preparing the ground truth notes.
Having little formal structure to the team promoted free
exchange of ideas, but was occasionally chaotic.
4.2 Rover operations
The rover operator is an engineer trained in all the details and
limitations of operating Marsokhod. The operator takes science sampling objectives and controls the robot to get the
desired information.
To navigate to a new destination, the operator would take a high resolution stereo image toward the destination to establish the context of the new area, designate a clear target near the desired position, and use the vision-based tracking to drive there. The position and orientation of the rover were then verified using the cameras and descent photos. The tracking cycle could be repeated if the rover was still far from the destination, or manual drive commands could be used for final moves to the desired destination.
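In outline, that cycle looks like the sketch below, simplified to one dimension for brevity. Every callable (grab_stereo, designate_target, track_to, localize, manual_drive) is a hypothetical stand-in for an operator action or rover command, and the thresholds are assumptions.

    def go_to(goal_m, grab_stereo, designate_target, track_to, localize,
              manual_drive, tolerance_m=0.5, tracking_range_m=3.0):
        """Repeat the image/designate/track cycle, finishing with manual moves."""
        while True:
            grab_stereo()                      # context image toward the destination
            position_m = localize()            # verify pose from cameras/descent photos
            error_m = abs(goal_m - position_m)
            if error_m <= tolerance_m:
                return                         # close enough to the destination
            if error_m <= tracking_range_m:
                manual_drive(goal_m - position_m)   # short manual final move
            else:
                track_to(designate_target())        # vision-based tracking drive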
Depending on the nature of the science objectives, various
types of images would be taken.
For an arm placement, the rover was first moved into a position that could reach the target and had good visibility from the cameras. The arm was then moved to a position above the target. The main and front pallet cameras can be used to align the position and orientation of the arm above the target. The close-up imagers may be useful, but are usually too out of focus for approach alignment. Then a guarded move would place the arm in contact with the surface. If the orientation of the arm was wrong, the close-up imager would be out of focus; the arm then had to be raised, re-oriented, and re-placed. Samples would be placed in the storage container using preset positions. The arm was returned to the "home position" before any further driving.
4.3 Robotic field exploration methods
Several robotic exploration methods emerged during the testing. Many limitations found early on were fixed during the
course of the test. Other limitations in the capabilities of the
robot define the areas of future research in remote surface
exploration.
Teams with descent images used them to communicate
locations and discuss theories. The high resolution panorama
was used for up to date information and was the only source
of context for the Pathfinder team.
Comparison of half and full resolution images shows that
many conclusions cannot be made with the lower resolution
images alone. Single band images from the multi-spectral
imager were sharper and higher resolution than the main stereo cameras. However, the assembled full color images had
color alignment problems that lowered the overall spatial
resolution. Infrared band images lost most of their resolution
due to poor focus.
Mono images were used as the primary source for scientific analysis. Color images and stereo viewing provided an understanding of features and geologic context; detailed analysis was then done on individual images.
Different people had specific preferences for viewing stereo images. Some people were annoyed by the flicker from
the LCD glasses. Some people were annoyed by the cross
over effects in the red/blue anaglyphs.
The stereo cameras on the front pallet used for arm placement sometimes provided better close up views than the
main cameras. However, the shadows from the rover and the
image quality on these particular cameras greatly limited
their use. Fast access to stereo viewing would have made
them more useful for arm placements.
The use of paper prints of images varied from team to
team. In particular, the Pathfinder team (which didn’t have
descent photos) needed many high resolution print outs of
sections from the main panorama to aid in planning.
Locating the rover position and orientation from imaging
and the descent photos was critical and often difficult. Imaging was taken around the robot and matched to descent
images. At one point, a terrain discrepancy more than tripled
the time to verify position.
Several techniques were used to help determine surface
characteristics. The simplest is to look at the tracks behind
the rover. The surface will show the effects caused by three
wheels travelling over it. By driving forward then reversing,
the effects of a single set of wheels can be observed. Turning
in place shows the effects of having wheels dragged across
the surface.
Additional surface experiments were done using the clamshell with the instrument carousel in contact with the ground.
The simplest was to close-up image the surface, close the
clam-shell, open the clam-shell, and then image again. Hard
surfaces would not be marked. Medium hardness surfaces
would have marks from the edges of the clam-shell. Loose surfaces would be disturbed by the clam-shell closing through them.
Another clam-shell technique was to close-up image the
site, close the clam-shell, remove (and optionally store) the
sample, replace the arm in the same position, and re-image
the site. The hole left from sampling would show soil characteristics.
The close-up cameras provided much needed detail about
surfaces, but were limited by the narrow depth of field. Only
flat surfaces could be entirely within the depth of field. This
was further hampered by difficulties getting the instrument
carousel flat against a surface.
A number of data organization issues limited ease of
access to the rover data. The web interface was little used
because it was simply too slow at finding and bringing up
images. Photoshop was limited by the large number of file
names to scroll through to open an image and by the limited
number of machines licensed to run it. XV was unfamiliar to
most people, but became the most used interface due to its
intuitive controls and fast response. Difficulties with file
naming conventions and cross referencing of images in the
logs could only be partially remedied during the test to avoid
inconsistencies in the data archive.
The scientists were impatient both with accessing previous
data as well as taking new images. The delays slowed down
their ability to formulate theories about what they were seeing and to have discussions about them.
One scientist did work remotely, using the web interface
for data and the conference phone to interact with the rest of
the science team.
Feedback from the ground truth scientists on a daily or test
phase basis would have helped the scientists understand how
much can be inferred from the robot imagery and what types
of misconceptions tend to arise.
5 Discussion
The most recent Marsokhod field experiment revealed many
areas for research to improve remote science exploration efficiency and effectiveness.
The resolution of the cameras, 1 mrad/pixel, was not satisfactory for remote geology. Scientists felt that they could not see enough geologic structure from a distance and that they could not resolve surface features close up. In one anecdotal view, the Marsokhod "missed 90%" of the interesting geology it encountered. An expert image processor can produce enhanced versions of images to make particular features clearer.
Color imaging is vital for establishing geologic context.
Multi-spectral imaging, if used, must be carefully configured for the specific mineral discriminations of interest.
Panoramic images provide essential context for other data.
Using fully encompassing panoramas, scientists were able to
localize (determine the position and orientation of the robot),
to more fully characterize the gross geology of the site, and
to plan areas for Marsokhod to investigate and/or traverse.
Range measurements in stereo image pairs need to be
readily available and accurately calibrated. Alternatively, a
3D surface model constructed from the stereo images should
incorporate tools for making measurements.
Manual or automated techniques for tracking horizon features could greatly speed up the ability to verify position and
orientation.
The ability of the carousel to image and sample at the
same site proved very useful and enabled several useful
investigation techniques. The ability of the arm to land precisely on a designated target will require further development, both in methods of defining the target and in the ability
to automatically approach, contact and settle on the target.
Access to the field test data should be re-structured to promote fast, understandable access. Pre-test training sessions
would reduce confusion and identify issues regarding units,
coordinate frames, and other conventions. The team structure of the science and engineering teams continues to improve
as we gain more experience. Fast access to high resolution,
well understood data enables strong collaboration.
The rapid development techniques used in the user interface enabled many adjustments to the needs of the science
team without creating limitations or reliability concerns.
6 Summary
Field experiments with the Ames Marsokhod have helped us to understand the requirements for doing science in a place
where people have never been. It is clear that mobility is a
very powerful part of understanding the geology of a site. A
number of areas for future research are revealed by system
limitations. Many of the successful techniques suggest areas
to develop onboard autonomy.
Acknowledgments

This work was supported by the NASA Space Telerobotics Program. Dr. Carol Stoker provided additional funding and leadership for the field test.

This represents the efforts of the entire IMG team: Maria Bualat, Dan Christian, Mike Costa, Dan Daily, Lorenzo Flueckiger, Aaron Kline, Linda Kobayashi, David Hasler, Phil Hontalas, Michael Li, Cesar Mina, Pat Payte, Daryl Rasmussen, Kurt Schwehr, Michael Sims, Mike Solomon, Geb Thomas, Hans Thomas, Deanne Tucker, Dave Wettergreen, and Eric Zbinden. In addition, many people at ARC contributed to setting up and performing the science experiment: Nathalie Cabrol, Jack Farmer, Edmund Grin, Virginia Gulick, Rags Landheim, Ted Roush, and Jeff Moore.

Additional thanks go to many people at the USGS and to Ron Greeley, Michael Kraft, Dave Nelson, and Jim Rice from Arizona State University for helping with logistic and scientific support. Many thanks to George McGill, Henry Moore, and Robert Reid for documenting the results of the science teams and reviewing the field test. Special thanks to the Navajo nation for their hospitality, support, and enthusiasm (and Navajo Tacos).
References
[Kermurdjian 1992] A. Kermurdjian, V. Gromov, V. Mishkinyuk, V. Kucherenko, and P. Sologub. Small Marsokhod Configuration. Proceedings of the 1992 IEEE
International Conference on Robotics and Automation,
May 1992.
[Hine 1995] Butler Hine and Phil Hontalas. VEVI: A Virtual
Environment Teleoperations Interface for Planetary
Exploration. In SAE 25th International Conference on
Environmental Systems, San Diego, CA, July 1995.
[Wettergreen 1997] David Wettergreen, Hans Thomas, and
Maria Bualat. Initial results from vision-based control of
the Ames Marsokhod rover. In IEEE International Conference on Intelligent Robots and Systems, Grenoble,
France, September 1997.