4
The Principles and Practice of Image and Spatial Data Fusion*

Ed Waltz
Veridian Systems

4.1 Introduction
4.2 Motivations for Combining Image and Spatial Data
4.3 Defining Image and Spatial Data Fusion
4.4 Three Classic Levels of Combination for Multisensor Automatic Target Recognition Data Fusion
    Pixel-Level Fusion • Feature-Level Fusion • Decision-Level Fusion • Multiple-Level Fusion
4.5 Image Data Fusion for Enhancement of Imagery Data
    Multiresolution Imagery • Dynamic Imagery • Three-Dimensional Imagery
4.6 Spatial Data Fusion Applications
    Spatial Data Fusion: Combining Image and Non-Image Data to Create Spatial Information Systems • Mapping, Charting and Geodesy (MC&G) Applications
4.7 Summary
References

*Adapted from The Principles and Practice of Image and Spatial Data Fusion, in Proceedings of the 8th National Data Fusion Conference, Dallas, Texas, March 15–17, 1995, pp. 257–278.
4.1 Introduction
The joint use of imagery and spatial data from different imaging, mapping, or other spatial sensors has the
potential to provide significant performance improvements over single sensor detection, classification, and
situation assessment functions. The terms imagery fusion and spatial data fusion have been applied to
describe a variety of combining operations for a wide range of image enhancement and understanding
applications. Surveillance, robotic machine vision, and automatic target cueing are among the application
areas that have explored the potential benefits of multiple sensor imagery. This chapter provides a framework
for defining and describing the functions of image data fusion in the context of the Joint Directors of
Laboratories (JDL) data fusion model. The chapter also describes representative methods and applications.
Sensor fusion and data fusion have become the de facto terms to describe the general abductive or
deductive combination processes by which diverse sets of related data are joined or merged to produce
a product that is greater than the individual parts. A range of mathematical operators has been applied
to perform this process for a wide range of applications. Two areas that have received increasing research
attention over the past decade are the processing of imagery (two-dimensional information) and spatial
data (three-dimensional representations of real-world surfaces and objects that are imaged). These
processes combine multiple data views into a composite set that incorporates the best attributes of all
contributors. The most common product is a spatial (three-dimensional) model, or virtual world, which
represents the best estimate of the real world as derived from all sensors.
4.2 Motivations for Combining Image and Spatial Data
A diverse range of applications has employed image data fusion to improve imaging and automatic
detection/classification performance over that of single imaging sensors. Table 4.1 summarizes representative and recent research and development in six key application areas.
Satellite and airborne imagery used for military intelligence, photogrammetric, earth resources, and
environmental assessments can be enhanced by combining registered data from different sensors to refine
the spatial or spectral resolution of a composite image product. Registered imagery from different passes
(multitemporal) and different sensors (multispectral and multiresolution) can be combined to produce
composite imagery with spectral and spatial characteristics equal to or better than that of the individual
contributors.
TABLE 4.1 Representative Range of Activities Applying Spatial and Imagery Fusion (sponsors in parentheses)

Satellite/Airborne Imaging
• Multiresolution image sharpening: multiple algorithms, tools in commercial packages (U.S., commercial vendors)
• Terrain visualization: battlefield visualization, mission planning (Army, Air Force)
• Planetary visualization/exploration: planetary mapping missions (NASA)

Mapping, Charting and Geodesy
• Geographic information system (GIS) generation from multiple sources: terrain feature extraction, rapid map generation (DARPA, Army, Air Force)
• Earth environment information system: earth observing system, data integration system (NASA)

Military Automatic Target Recognition (ATR)
• Battlefield surveillance: various MMW/LADAR/FLIR (Army)
• Battlefield seekers: millimeter wave (MMW)/forward looking IR (FLIR) (Army, Air Force)
• IMINT correlation: single-intel IMINT correlation (DARPA)
• IMINT-SIGINT/MTI correlation: dynamic database (DARPA)

Industrial Robotics
• 3-D multisensor inspection: non-destructive inspection (commercial)
• Product line inspection: image fusion analysis (Air Force, commercial)

Medical Imaging
• Human body visualization, diagnosis: tomography, magnetic resonance imaging, 3-D fusion (various R&D hospitals)

Composite SPOT™ and LANDSAT satellite imagery and 3-D terrain relief composites of military regions demonstrate current military applications of such data for mission planning purposes.1-3 The Joint National Intelligence Development Staff (JNIDS) pioneered the development of workstation-based systems to combine a variety of image and nonimage sources for intelligence analysts4 who perform
• registration — spatial alignment of overlapping images and maps to a common coordinate system;
• mosaicking — registration of nonoverlapping, adjacent image sections to create a composite of a
larger area;
• 3-D mensuration-estimation — calibrated measurement of the spatial dimensions of objects within the image data.
Similar image functions have been incorporated into a variety of image processing systems, from
tactical image systems such as the premier Joint Service Image Processing System (JSIPS) to Unix- and
PC-based commercial image processing systems. Military services and the National Imagery and Mapping
Agency (NIMA) are performing cross intelligence (i.e., IMINT and other intelligence source) data fusion
research to link signals and human reports to spatial data.5
When the fusion process extends beyond imagery to include other spatial data sets, such as digital
terrain data, demographic data, and complete geographic information system (GIS) data layers, numerous
mapping applications may benefit. Military intelligence preparation of the battlefield (IPB) functions
(e.g., area delimitation and transportation network identification), as well as wide area terrain database
generation (e.g., precision GIS mapping), are complex mapping problems that require fusion to automate
processes that are largely manual. One ambitious area of research in spatial data fusion is the U.S. Army Topographic Engineering Center's (TEC) effort to develop automatic terrain feature generation techniques based on a wide range of source data, including imagery, map data, and remotely sensed terrain data.6 On the broadest scale, NIMA's Global Geospatial Information and Services (GGIS) vision
includes spatial data fusion as a core functional element.7 NIMA’s Mapping, Charting and Geodesy Utility
Software package (MUSE), for example, combines vector and raster data to display base maps with
overlays of a variety of data to support geographic analysis and mission planning.
Real-time automatic target cueing/recognition (ATC/ATR) for military applications has turned to
multiple sensor solutions to expand spectral diversity and target feature dimensionality, seeking to achieve
high probabilities of correct detection/identification at acceptable false alarm rates. Forward-looking
infrared (FLIR), imaging millimeter wave (MMW), and laser detection and ranging (LADAR) sensors are the most promising suite capable of providing the diversity needed for reliable
discrimination in battlefield applications. In addition, some applications seek to combine the real-time
imagery to present an enhanced image to the human operator for driving, control, and warning, as well
as manual target recognition.
Industrial robotic applications for fusion include the use of 3-D imaging and tactile sensors to provide
sufficient image understanding to permit robotic manipulation of objects. These applications emphasize
automatic object position understanding rather than recognition (e.g., the target recognition problem, which is, by nature, noncooperative).8
Transportation applications combine millimeter wave and electro-optical imaging sensors to provide
collision avoidance warning by sensing vehicles whose relative rates and locations pose a collision threat.
Medical applications fuse information from a variety of imaging sensors to provide a complete 3-D
model or enhanced 2-D image of the human body for diagnostic purposes. The United Medical and
Dental Schools of Guy’s and St. Thomas’ Hospital (London, U.K.) have demonstrated methods for
registering and combining magnetic resonance (MR), positron emission tomography (PET), and computed tomography (CT) into composites to aid surgery.9
4.3 Defining Image and Spatial Data Fusion
In this chapter, image and spatial data fusion are distinguished as subsets of the more general data fusion
problem that is typically aimed at associating and combining 3-D data about sparse point-objects located in
space. Targets on a battlefield, aircraft in airspace, ships on the ocean surface, or submarines in the 3-D ocean
volume are common examples of targets represented as point objects in a three-dimensional space model.
Image data fusion, on the other hand, is involved with associating and combining complete, spatially
filled sets of data in 2-D (images) or 3-D (terrain or high resolution spatial representations of real objects).
FIGURE 4.1 Data fusion application taxonomy. (The figure arrays four application categories along an axis from sparse point targets to regions of interest with spatial extent and complete data sets: the general data fusion problem, which locates, identifies, and tracks targets in space-time; multisensor automatic target recognition, which detects and identifies objects in imagery; image data fusion, which combines multiple-source imagery; and spatial data fusion, which creates a spatial database from multiple sources.)
Herein lies the distinction: image and spatial data fusion requires data representing every point on a
surface or in space to be fused, rather than selected points of interest.
The more general problem is described in detail in introductory texts by Waltz and Llinas10 and Hall,11
while the progress in image and spatial data fusion is reported over a wide range of the technical literature,
as cited in this chapter.
The taxonomy in Figure 4.1 identifies the data properties and objectives that distinguish four categories of fusion applications.
In all of the image and spatial applications cited above, the common thread of the fusion function is
its emphasis on the following distinguishing functions:
• Registration involves spatial and temporal alignment of physical items within imagery or spatial
data sets and is a prerequisite for further operations. It can occur at the raw image level (i.e., any
pixel in one image may be referenced with known accuracy to a pixel or pixels in another image,
or to a coordinate in a map) or at higher levels, relating objects rather than individual pixels. Of
importance to every approach to combining spatial data is the accuracy with which the data layers
have been spatially aligned relative to each other or to a common coordinate system (e.g., geolocation or geo-coding of earth imagery to an earth projection). Registration can be performed
by traditional internal image-to-image correlation techniques (when the images are from sensors
with similar phenomena and are highly correlated)12 or by external techniques.13 External methods apply in-image control knowledge or as-sensed information that permits accurate modeling and estimation of the true location of each pixel in two- or three-dimensional space (a minimal registration sketch follows this list).
• The combination function operates on multiple, registered “layers” of data to derive composite
products using mathematical operators to perform integration; mosaicking; spatial or spectral
refinement; spatial, spectral or temporal (change) detection; or classification.
• Reasoning is the process by which intelligent, often iterative search operations are performed
between the layers of data to assess the meaning of the entire scene at the highest level of abstraction
and of individual items, events, and data contained in the layers.
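To make the correlation-based registration alternative concrete, the following minimal sketch (not from the source; the function name and the translation-only assumption are illustrative) estimates the shift between two well-correlated, equal-sized images by phase correlation using NumPy FFTs; imagery from dissimilar sensors would instead require the external techniques cited above.

    import numpy as np

    def estimate_translation(reference, moving):
        """Phase correlation: estimate the (row, col) shift of `moving` relative to
        `reference`; both are equal-shaped 2-D arrays from well-correlated sensors."""
        R = np.fft.fft2(moving) * np.conj(np.fft.fft2(reference))
        R /= np.abs(R) + 1e-12                       # keep phase only
        corr = np.fft.ifft2(R).real                  # correlation surface; peak marks the offset
        peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
        shape = np.array(corr.shape)
        wrap = peak > shape / 2
        peak[wrap] -= shape[wrap]                    # interpret wrap-around peaks as negative shifts
        return peak                                  # register with np.roll(moving, tuple(-peak.astype(int)), axis=(0, 1))

In practice, subpixel refinement and sensor or ground-control models are added before combination operators are applied.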
The image and spatial data fusion functions can be placed in the JDL data fusion model context to
describe the architecture of a system that employs imagery data from multiple sensors and spatial data
(e.g., maps and solid models) to perform detection, classification, and assessment of the meaning of information contained in the scenery of interest.
FIGURE 4.2 The image data fusion functional flow can be directly compared to the Joint Directors of Laboratories (JDL) data fusion subpanel model of data fusion. (In the figure, the JDL flow aligns sensor data and performs Level 1 object refinement by association, tracking, and identity declaration, followed by Level 2 situation refinement and Level 3 impact refinement; the multisensor ATR flow spatially registers data from imaging and nonimaging sensors, detects and segments targets, and performs Level 1 object refinement, Level 2 scene refinement, and Level 3 impact refinement, drawing on model data and terrain data.)
Figure 4.2 compares the JDL general model14 with a specific multisensor ATR image data fusion
functional flow to show how the more abstract model can be related to a specific imagery fusion
application. The Level 1 processing steps can be directly related to image counterparts:
• Alignment — The alignment of data into a common time, space, and spectral reference frame
involves spatial transformations to warp image data to a common coordinate system (e.g., projection to an earth reference model or three-dimensional space). At this point, nonimaging data
that can be spatially referenced (perhaps not to a point, but often to a region with a specified
uncertainty) can then be associated with the image data.
• Association — New data can be correlated with previous data to detect and segment (select) targets on the basis of motion (temporal change) or behavior (spatial change). In time-sequenced data sets, target objects at time t are associated with target objects at time t – 1 to discriminate newly appearing targets, moved targets, and disappearing targets (a minimal association and prediction sketch follows this list).
• Tracking — When objects are tracked in dynamic imagery, the dynamics of target motion are
modeled and used to predict the future location of targets (at time t + 1) for comparison with
new sensor observations.
• Identification — The data for segmented targets are combined from multiple sensors (at any one
of several levels) to provide an assignment of the target to one or more of several target classes.
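As an illustrative sketch of the association and tracking steps (the data structures, the constant-velocity model, and the gate value are assumptions, not the chapter's algorithm), the fragment below predicts each track forward one frame and associates new detections to tracks by nearest-neighbor gating; detections left unassociated are treated as newly appearing targets.

    import numpy as np

    def associate_and_predict(tracks, detections, gate=10.0):
        """tracks: list of dicts with 'pos' and 'vel' as 2-element arrays (pixels, pixels/frame).
        detections: (N, 2) array of segmented-object centroids at time t.
        Returns (matches, new_targets): (track index, detection index) pairs and unmatched detections."""
        predictions = [t["pos"] + t["vel"] for t in tracks]   # constant-velocity prediction to time t
        unused = set(range(len(detections)))
        matches = []
        for i, pred in enumerate(predictions):
            if not unused:
                break
            dist = {j: float(np.linalg.norm(detections[j] - pred)) for j in unused}
            j_best = min(dist, key=dist.get)
            if dist[j_best] < gate:                           # accept only detections inside the gate
                matches.append((i, j_best))
                unused.remove(j_best)
        return matches, sorted(unused)                        # unmatched detections: newly appearing targets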
Level 2 and 3 processing deals with the aggregate of targets in the scene and other characteristics of
the scene to derive an assessment of the “meaning” of data in the scene or spatial data set.
In the following sections, the primary image and spatial data fusion application areas are described
to demonstrate the basic principles of fusion and the state of the practice in each area.
4.4 Three Classic Levels of Combination for Multisensor
Automatic Target Recognition Data Fusion
Since the late 1970s, the ATR literature has adopted three levels of image data fusion as the basic design
alternatives offered to the system designer. The terminology was adopted to describe the point in the
traditional ATR processing chain at which registration and combination of different sensor data occurred.
These functions can occur at multiple levels, as described later in this chapter. First, a brief overview of the basic alternatives and representative research and development results is presented. (Broad overviews of the developments in ATR in general, with specific comments on data fusion, are available in other literature.15-17)
FIGURE 4.3 Three basic levels of fusion are provided to the multisensor ATR designer as the most logical alternative points in the data chain for combining data. (In the figure, pixel-level fusion registers the preprocessed data from sensors S1 and S2 before a common detect, segment, extract, and classify chain; it offers the highest potential detection performance but demands accurate spatial registration, because registration errors directly impact combination performance, and it carries the greatest computational cost. Feature-level fusion presumes independent detection in each sensor, registers the segmented results, and combines the extracted features F1 and F2 in a common decision space, optimizing classification for selected targets. Decision-level fusion presumes independent detection and classification in each sensor domain and combines the sensor decisions using AND or OR Boolean operators or Bayesian inference; it is the simplest computation.)
TABLE 4.2 Most Common Decision-Level Combination Alternatives
Hard Decision
• Boolean: apply logical AND, OR to combine independent decisions.
• Weighted Sum Score: weight sensors by the inverse of covariance and sum to derive a score function.
• M-of-N: confirm a decision when m out of n sensors agree.
Soft Decision
• Bayesian: apply Bayes' rule to combine independent sensor conditional probabilities.
• Dempster-Shafer: apply Dempster's rule of combination to combine sensor belief functions.
• Fuzzy Variable: combine fuzzy variables using fuzzy logic (AND, OR) to derive a combined membership function.
4.4.1 Pixel-Level Fusion
At the lowest level, pixel-level fusion uses the registered pixel data from all image sets to perform detection
and discrimination functions. This level has the potential to achieve the greatest signal detection performance (if registration errors can be contained) at the highest computational expense. At this level,
detection decisions (pertaining to the presence or absence of a target object) are based on the information
from all sensors by evaluating the spatial and spectral data from all layers of the registered image data.
A subset of this level of fusion is segment-level fusion, in which basic detection decisions are made
independently in each sensor domain, but the segmentation of image regions is performed by evaluation
of the registered data layers.
Fusion at the pixel level involves accurate registration of the different sensor images before applying
a combination operator to each set of registered pixels (which correspond to associated measurements
in each sensor domain at the highest spatial resolution of the sensors.) Spatial registration accuracies
should be subpixel to avoid combination of unrelated data, making this approach the most sensitive to
registration errors. Because image data may not be sampled at the same spacing, resampling and warping
of images is generally required to achieve the necessary level of registration prior to combining pixel data.
In the most direct 2-D image applications of this approach, coregistered pixel data may be classified
on a pixel-by-pixel basis using approaches that have long been applied to multispectral data classification.18 Typical ATR applications, however, pose a more complex problem when dissimilar sensors, such
as FLIR and LADAR, image in different planes. In such cases, the sensor data must be projected into a
common 2-D or 3-D space for combination. Gonzalez and Williams, for example, have described a
process for using 3-D LADAR data to infer FLIR pixel locations in 3-D to estimate target pose prior to
feature extraction.19 Schwickerath and Beveridge present a thorough analysis of this problem, developing
an eight-degree of freedom model to estimate both the target pose and relative sensor registration
(coregistration) based on a 2-D and 3-D sensor.20
Delanoy et al. demonstrated pixel-level combination of spatial interest images using Boolean and fuzzy
logic operators.21 This process applies a spatial feature extractor to develop multiple interest images
(representing the relative presence of spatial features in each pixel), before combining the interest images
into a single detection image. Similarly, Hamilton and Kipp describe a probe-based technique that uses
spatial templates to transform the direct image into probed images that enhance target features for
comparison with reference templates.22,23 Using a limited set of television and FLIR imagery, Duane
compared pixel-level and feature-level fusion to quantify the relative improvement attributable to the
pixel-level approach with well-registered imagery sets.24
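A minimal sketch in the spirit of the interest-image combination described above (the operators and threshold are illustrative, not Delanoy's implementation): two coregistered interest images with values normalized to [0, 1] are combined pixel by pixel with fuzzy AND (minimum) or fuzzy OR (maximum), and a detection mask is produced by thresholding the fused image.

    import numpy as np

    def fuse_interest_images(interest_a, interest_b, threshold=0.6, mode="and"):
        """Combine two coregistered interest images (values in [0, 1]) pixel by pixel."""
        if mode == "and":
            fused = np.minimum(interest_a, interest_b)   # fuzzy AND: both sensors must show interest
        else:
            fused = np.maximum(interest_a, interest_b)   # fuzzy OR: interest in either sensor suffices
        return fused, fused >= threshold                 # fused interest image and detection mask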
4.4.2 Feature-Level Fusion
At the intermediate level, feature-level fusion combines the features of objects that are detected and
segmented in the individual sensor domains. This level presumes independent detectability of objects in
all of the sensor domains. The features for each object are independently extracted in each domain; these features create a common feature space for object classification.
Such feature-level fusion reduces the demand on registration, allowing each sensor channel to segment
the target region and extract features without regard to the other sensor’s choice of target boundary. The
features are merged into a common decision space only after a spatial association is made to determine
that the features were extracted from objects whose centroids were spatially associated.
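The following sketch shows the feature-level mechanics described above under simplifying assumptions (the gate, data structures, and classifier interface are illustrative): detections from two sensors are associated when their centroids fall within a spatial gate, their independently extracted feature vectors are concatenated into a common decision space, and a caller-supplied classifier assigns the class.

    import numpy as np

    def feature_level_fusion(dets_a, dets_b, classify, gate=5.0):
        """dets_a, dets_b: per-sensor lists of (centroid, feature_vector) in a common ground frame.
        classify: callable mapping a fused feature vector to a class label."""
        declarations = []
        for cen_a, feat_a in dets_a:
            for cen_b, feat_b in dets_b:
                if np.linalg.norm(np.asarray(cen_a) - np.asarray(cen_b)) < gate:  # spatial association
                    fused = np.concatenate([feat_a, feat_b])                      # common decision space
                    declarations.append((cen_a, classify(fused)))
                    break
        return declarations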
During the early 1990s, the Army evaluated a wide range of feature-level fusion algorithms for
combining FLIR, MMW, and LADAR data for detecting battlefield targets under the Multi-Sensor Feature
Level Fusion (MSFLF) Program of the OSD Multi-Sensor Aided Targeting Initiative. Early results demonstrated marginal gains over single sensor performance and reinforced the importance of careful
selection of complementary features to specifically reduce single sensor ambiguities.25
At the feature level of fusion, researchers have developed model-based (or model-driven) alternatives
to the traditional statistical methods, which are inherently data driven. Model-based approaches maintain
target and sensing models that predict all possible views (and target configurations) for comparison with
extracted features rather than using a more limited set of real signature data for comparison.26 The
application of model-based approaches to multiple-sensor ATR offers several alternative implementations, two of which are described in Figure 4.4. The Adaptive Model Matching approach performs feature
extraction (FE) and comparison (match) with predicted features for the estimated target pose. The process
iteratively searches to find the best model match for the extracted features.
4.4.2.1 Discrete Model Matching Approach
A multisensor model-based matching approach described by Hamilton and Kipp27 develops a relational
tree structure (hierarchy) of 2-D silhouette templates. These templates capture the spatial structure of
the most basic all-aspect target “blob” (at the top or root node), down to individual target hypotheses at
specific poses and configurations. This predefined search tree is developed on the basis of model data for each sensor, and the ATR process compares segmented data to the tree, computing a composite score at each node to determine the path to the most likely hypotheses. At each node, the evidence is accumulated by applying an operator (e.g., weighted sum, Bayesian combination, etc.) to combine the score for each sensor domain.
FIGURE 4.4 Two model-based fusion alternatives demonstrate the use of a prestored hierarchy of model-based templates or an online, iterative model that predicts features based upon the estimated target pose.
4.4.2.2 Adaptive Model Matching Approach
Rather than using prestored templates, this approach implements the sensor/target modeling capability
within the ATR algorithm to dynamically predict features for direct comparison. Figure 4.4 illustrates a
two-sensor extension of the one-sensor, model-based ATR paradigm (e.g., ARAGTAP28 or MSTAR29
approaches) in which independent sensor features are predicted and compared iteratively, and evidence
from the sensors is accumulated to derive a composite score for each target hypothesis.
Larson et al. describe a model-based IR/LADAR fusion algorithm that performs extensive pixel-level
registration and feature extraction before performing the model-based classification at the extracted feature
level.30 Similarly, Corbett et al. describe a model-based feature-level classifier that uses IR and MMW
models to predict features for military vehicles.31 Both of these follow the adaptive generation approach.
4.4.3 Decision-Level Fusion
Fusion at the decision level (also called post-decision or post-detection fusion) combines the decisions of
independent sensor detection/classification paths by Boolean (AND, OR) operators or by a heuristic
score (e.g., M-of-N, maximum vote, or weighted sum). Two methods of making classification decisions exist: hard decisions (a single, optimum choice) and soft decisions, in which the decision uncertainty in each sensor chain is maintained and combined into a composite measure of uncertainty (Table 4.2 summarizes the most common combination alternatives).
The relative performance of alternative combination rules and independent sensor thresholds can be
optimally selected using distribution data for the features used by each sensor.32 In decision-level fusion,
each path must independently detect the presence of a candidate target and perform a classification on
the candidate. These detections and/or classifications (the sensor decisions) are combined into a fused
decision. This approach inherently assumes that the signals and signatures in each independent sensor
chain are sufficient to perform independent detection before the sensor decisions are combined. This
approach is much less sensitive to spatial misregistration than all others and permits accurate association
of detected targets to occur with registration errors over an order of magnitude larger than for pixel-level fusion. Lee and Van Vleet have shown procedures for estimating the registration error between sensors to minimize the mean square registration error and optimize the association of objects in dissimilar images for decision-level fusion.33
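A minimal sketch of the soft-decision alternatives listed in Table 4.2, assuming each sensor chain reports either class likelihoods (for the Bayesian rule) or belief masses over a common frame of discernment (for Dempster's rule); the two-class example values are illustrative.

    import numpy as np

    def bayes_fuse(prior, likelihoods_per_sensor):
        """Combine independent sensor likelihood vectors P(z_i | class) with a class prior."""
        posterior = np.array(prior, dtype=float)
        for lik in likelihoods_per_sensor:
            posterior *= np.array(lik, dtype=float)
        return posterior / posterior.sum()

    def dempster_fuse(m1, m2):
        """Dempster's rule for two mass functions over the same frame of discernment.
        Masses map frozenset hypotheses to mass; conflict is renormalized (assumes conflict < 1)."""
        combined, conflict = {}, 0.0
        for a, ma in m1.items():
            for b, mb in m2.items():
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + ma * mb
                else:
                    conflict += ma * mb
        return {h: m / (1.0 - conflict) for h, m in combined.items()}

    # Illustrative two-class example ('tank' vs. 'truck') for two sensors
    print(bayes_fuse([0.5, 0.5], [[0.8, 0.2], [0.6, 0.4]]))
    m1 = {frozenset({"tank"}): 0.7, frozenset({"tank", "truck"}): 0.3}
    m2 = {frozenset({"tank"}): 0.5, frozenset({"truck"}): 0.2, frozenset({"tank", "truck"}): 0.3}
    print(dempster_fuse(m1, m2))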
Decision-level fusion of MMW and IR sensors has long been considered a prime candidate for
achieving the level of detection performance required for autonomous precision-guided munitions.34
Results of an independent two-sensor (MMW and IR) analysis on military targets demonstrated the
relative improvement of two-sensor decision-level fusion over either independent sensor.35-37 A summary
of ATR comparison methods was compiled by Diehl, Shields, and Hauter.38 These studies demonstrated
the critical sensitivity of performance gains to the relative performance of each contributing sensor and
the independence of the sensed phenomena.
4.4.4 Multiple-Level Fusion
In addition to the three classic levels of fusion, other alternatives or combinations have been advanced.
At a level even higher than the decision level, some researchers have defined scene-level methods in which
target detections from a low-resolution sensor are used to cue a search-and-confirm action by a higher
resolution sensor. Menon and Kolodzy described such a system, which uses FLIR detections to cue the
analysis of high spatial resolution laser radar data using a nearest neighbor neural network classifier.39
Maren describes a scene structure method that combines information from hierarchical structures developed independently by each sensor by decomposing the scene into element representations.40 Others
have developed hybrid, multilevel techniques that partition the detection problem to a high level (e.g.,
decision level) and the classification to a lower level. Aboutalib et al. described a hybrid algorithm that
performs decision-level combination for detection (with detection threshold feedback) and feature-level
classification for air target identification in IR and TV imagery.41
Other researchers have proposed multi-level ATR architectures, which perform fusion at all levels,
carrying out an appropriate degree of combination at each level based on the ability of the combined
information to contribute to an overall fusion objective. Chu and Aggarwal describe such a system that
integrates pixel-level to scene-level algorithms.42 Eggleston has long promoted such a knowledge-based
ATR approach that combines data at three levels, using many partially redundant combination stages to
reduce the errors of any single unreliable rule.43,44 The three levels in this approach are
• Low level — Pixel-level combinations are performed when image enhancement can aid higher-level combinations. The higher levels adaptively control this fine-grain combination.
• Intermediate symbolic level — Symbolic representations (tokens) of attributes or features for
segmented regions (image events) are combined using a symbolic level of description.
• High level — The scene or context level of information is evaluated to determine the meaning of
the overall scene, by considering all intermediate-level representations to derive a situation assessment. For example, this level may determine that a scene contains a brigade-sized military unit
forming for attack. The derived situation can be used to adapt lower levels of processing to refine
the high-level hypotheses.
Bowman and DeYoung described an architecture that uses neural networks at all levels of the conventional ATR processing chain to achieve pixel-level performances of up to 0.99 probability of correct
identification for battlefield targets using pixel-level neural network fusion of UV, visible, and MMW
imagery.45
Pixel, feature, and decision-level fusion designs have focused on combining imagery for the purposes
of detecting and classifying specific targets. The emphasis is on limiting processing by combining only the
most likely regions of target data content and combining at the minimum necessary level to achieve the
desired detection/classification performance. This differs significantly from the next category of image
fusion designs, in which all data must be combined to form a new spatial data product that contains the
best composite properties of all contributing sources of information.
4.5 Image Data Fusion for Enhancement of Imagery Data
Both still and moving image data can be combined from multiple sources to enhance desired features,
combine multiresolution or differing sensor look geometries, mosaic multiple views, and reduce uncorrelated noise.
4.5.1 Multiresolution Imagery
One area of enhancement has been in the application of band sharpening or multiresolution image fusion
algorithms to combine differing resolution satellite imagery. The result is a composite product that
enhances the spatial boundaries in lower resolution multispectral data using higher resolution panchromatic or Synthetic Aperture Radar (SAR) data.
Veridian-ERIM International has applied its Sparkle algorithm to the band sharpening problem,
demonstrating the enhancement of lower-resolution SPOT™ multispectral imagery (20-meter ground
sample distance or GSD) with higher resolution airborne SAR (3-meter GSD) and panchromatic photography (1-meter) to sharpen the multispectral data. Radar backscatter features are overlaid on the composite to reveal important characteristics of the ground features and materials. The composite image preserves the spatial resolution of the panchromatic data, the spectral content of the multispectral layers,
and the radar reflectivity of the SAR.
Vrabel has reported the relative performance of a variety of band sharpening algorithms, concluding
that Veridian ERIM International’s Sparkle algorithm and a color normalization (CN) technique provided
the greatest GSD enhancement and overall utility.46 Additional comparisons and applications of band
sharpening techniques have been published in the literature.47-50
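As a hedged illustration of the color normalization (CN) family of band-sharpening methods mentioned above (a generic Brovey-style ratio method, not the Sparkle algorithm), each low-resolution multispectral band, already resampled to the panchromatic grid, is scaled by the ratio of the high-resolution panchromatic image to a synthetic intensity formed from the multispectral bands.

    import numpy as np

    def cn_sharpen(ms_bands, pan, eps=1e-6):
        """ms_bands: (B, H, W) multispectral bands already resampled to the panchromatic grid.
        pan: (H, W) high-resolution sharpening band. Returns sharpened (B, H, W) bands."""
        intensity = ms_bands.mean(axis=0)        # synthetic low-resolution intensity image
        gain = pan / (intensity + eps)           # per-pixel ratio carries the high-frequency detail
        return ms_bands * gain[None, :, :]       # spectral ratios between bands are preserved

The ratio preserves the relative spectral content of each pixel while injecting the spatial detail of the sharpening band, which is the intent of the CN class of methods compared by Vrabel.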
Imagery can also be mosaicked by combining overlapping images into a common block, using classical
photogrammetric techniques (bundle adjustment) that use absolute ground control points and tie points
(common points in overlapped regions) to derive mapping polynomials. The data may then be forward
resampled from the input images to the output projection or backward resampled by projecting the location
of each output pixel onto each source image to extract pixels for resampling.51 The latter approach permits
spatial deconvolution functions to be applied in the resampling process. Radiometric feathering of the data
in transition regions may also be necessary to provide a gradual transition after overall balancing of the
radiometric dynamic range of the mosaicked image is performed.52 Such mosaicking fusion processes have
also been applied to three-dimensional data to create composite digital elevation models (DEMs) of terrain.53
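A minimal sketch of the backward-resampling step described above, assuming the output-to-source mapping is a first-order (affine) polynomial; an operational system would use higher-order mapping polynomials or a sensor model and would add radiometric balancing and feathering. Each output pixel is projected into the source image and filled by bilinear interpolation.

    import numpy as np

    def backward_resample(source, coeffs, out_shape):
        """Fill an output grid by projecting each output pixel into the source image
        through a first-order mapping polynomial and interpolating bilinearly.
        coeffs = (a0, a1, a2, b0, b1, b2): src_row = a0 + a1*row + a2*col, src_col = b0 + b1*row + b2*col."""
        a0, a1, a2, b0, b1, b2 = coeffs
        rows, cols = np.indices(out_shape)
        src_r = a0 + a1 * rows + a2 * cols
        src_c = b0 + b1 * rows + b2 * cols
        r0, c0 = np.floor(src_r).astype(int), np.floor(src_c).astype(int)
        valid = (r0 >= 0) & (c0 >= 0) & (r0 < source.shape[0] - 1) & (c0 < source.shape[1] - 1)
        wr, wc = (src_r - r0)[valid], (src_c - c0)[valid]
        r0, c0 = r0[valid], c0[valid]
        out = np.zeros(out_shape, dtype=float)
        out[valid] = ((1 - wr) * (1 - wc) * source[r0, c0]
                      + (1 - wr) * wc * source[r0, c0 + 1]
                      + wr * (1 - wc) * source[r0 + 1, c0]
                      + wr * wc * source[r0 + 1, c0 + 1])
        return out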
4.5.2 Dynamic Imagery
In some applications, the goal is to combine different types of real-time video imagery to provide the
clearest possible composite video image for a human operator. The David Sarnoff Research Center has
applied wavelet encoding methods to selectively combine IR and visible video data into a composite
video image that preserves the most desired characteristics (e.g., edges, lines, and boundaries) from each
data set.54 The Center later extended the technique to combine multitemporal and moving images into
composite mosaic scenes that preserve the “best” data to create a current scene at the best possible
resolution at any point in the scene.55,56
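The following simplified sketch is in the spirit of pyramid-based selective combination (it is not the Sarnoff implementation; the pyramid depth, filters, and selection rule are illustrative): two registered frames are decomposed into Laplacian pyramids, the coefficient of larger magnitude is kept at each level and pixel to preserve edges, lines, and boundaries, and the composite is reconstructed.

    import numpy as np
    from scipy.ndimage import gaussian_filter, zoom

    def _reduce(img):
        return gaussian_filter(img, 1.0)[::2, ::2]                    # blur, then take every other pixel

    def _expand(img, shape):
        factors = (shape[0] / img.shape[0], shape[1] / img.shape[1])
        return gaussian_filter(zoom(img, factors, order=1), 1.0)      # resize back up and smooth

    def _laplacian_pyramid(img, levels):
        gauss = [np.asarray(img, dtype=float)]
        for _ in range(levels):
            gauss.append(_reduce(gauss[-1]))
        laps = [g - _expand(g_next, g.shape) for g, g_next in zip(gauss[:-1], gauss[1:])]
        return laps, gauss[-1]                                        # band-pass levels and low-pass residual

    def fuse_frames(frame_a, frame_b, levels=4):
        """Fuse two registered frames by keeping, per level and pixel, the band-pass
        coefficient of larger magnitude (preserves edges, lines, and boundaries)."""
        laps_a, base_a = _laplacian_pyramid(frame_a, levels)
        laps_b, base_b = _laplacian_pyramid(frame_b, levels)
        fused = [np.where(np.abs(la) >= np.abs(lb), la, lb) for la, lb in zip(laps_a, laps_b)]
        out = 0.5 * (base_a + base_b)                                 # average the low-pass residuals
        for lap in reversed(fused):
            out = _expand(out, lap.shape) + lap                       # coarse-to-fine reconstruction
        return out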
4.5.3 Three-Dimensional Imagery
Three-dimensional perspectives of the earth’s surface are a special class of image data fusion products
that have been developed by draping orthorectified images of the earth’s surface over digital terrain
models. The 3-D model can be viewed from arbitrary static perspectives or as a dynamic fly-through, which provides a visualization of the area for mission planners, pilots, or land planners.
TABLE 4.3 Basic Image Data Fusion Functions Provided in Several Commercial Image Processing Software Packages
Registration
• Sensor-platform modeling: model the sensor-imaging geometry; derive correction transforms (e.g., polynomials) from collection parameters (e.g., ephemeris, pointing, and earth model).
• Ground Control Point (GCP) calibration: locate known GCPs and derive correction transforms.
• Warp to polynomial / Orthorectify to digital terrain model: spatially transform (warp) imagery to register pixels to a regular grid or to a digital terrain model.
• Resample imagery: resample warped imagery to create a fixed pixel-sized image.
Combination
• Mosaic imagery: register adjacent and overlapped imagery; resample to a common pixel grid.
• Edge feathering: combine overlapping imagery data to create smooth (feathered) magnitude transitions between two image components.
• Band sharpening: enhance spatial boundaries (high-frequency content) in lower resolution band data using higher resolution registered imagery data in a different band.
Off-nadir regions of aerial or spaceborne imagery include a horizontal displacement error that is a
function of the elevation of the terrain. A digital elevation model (DEM) is used to correct for these
displacements in order to accurately overlay each image pixel on the corresponding post (i.e., terrain
grid coordinate). Photogrammetric orthorectification functions57 include the following steps to combine
the data:
• DEM preparation — the digital elevation model is transformed to the desired map projection for
the final composite product.
• Transform derivation — platform, sensor, and DEM data are used to derive mapping polynomials that remove the horizontal displacements caused by terrain relief, placing each input image pixel at the proper location on the DEM grid (a simplified relief-displacement sketch follows this list).
• Resampling — The input imagery is resampled into the desired output map grid.
• Output file creation — The resampled image data (x, y, and pixel values) and DEM (x, y, and z)
are merged into a file with other geo-referenced data, if available.
• Output product creation — Two-dimensional image maps may be created with map grid lines,
or three-dimensional visualization perspectives can be created for viewing the terrain data from
arbitrary viewing angles.
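As a simplified illustration of the transform-derivation step, assuming a vertical frame camera and ignoring earth curvature and sensor attitude, the classical relief-displacement relation d = r·h/H gives the radial displacement of an image point for terrain height h above the datum, where r is its radial distance from the nadir point and H is the flying height; the sketch below removes that displacement for a single pixel.

    import numpy as np

    def remove_relief_displacement(pixel, nadir_pixel, height_above_datum, flying_height):
        """Pull one image point back toward the nadir to remove relief displacement
        (vertical frame camera; d = r*h/H, so the radial scale factor is 1 - h/H)."""
        offset = np.asarray(pixel, dtype=float) - np.asarray(nadir_pixel, dtype=float)
        scale = 1.0 - height_above_datum / flying_height   # (r - d) / r with d = r*h/H
        return tuple(np.asarray(nadir_pixel, dtype=float) + offset * scale)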
The basic functions necessary to perform registration and combination are provided in an increasing
number of commercial image processing software packages (see Table 4.3), permitting users to fuse static
image data for a variety of applications.
4.6 Spatial Data Fusion Applications
Robotic and transportation systems encompass a wide range of applications similar to the military applications. Robotics applications include relatively short-range, high-resolution imaging of cooperative
target objects (e.g., an assembly component to be picked up and accurately placed) with the primary
objectives of position determination and inspection. Transportation applications include longer-range
sensing of vehicles for highway control and multiple sensor situation awareness within a vehicle to provide
semi-autonomous navigation, collision avoidance, and control.
The results of research in these areas are chronicled in a variety of sources, beginning with the 1987 Workshop on Spatial Reasoning and Multi-Sensor Fusion58 and continuing in many subsequent SPIE conferences.59-63
4.6.1 Spatial Data Fusion: Combining Image and Non-Image Data
to Create Spatial Information Systems
One of the most sophisticated image fusion applications combines diverse sets of imagery (2-D), spatially
referenced nonimage data sets, and 3-D spatial data sets into a composite spatial data information system.
The most active area of research and development in this category of fusion problems is the development
of geographic information systems (GIS) by combining earth imagery, maps, demographic and infrastructure or facilities mapping (geospatial) data into a common spatially referenced database.
Applications for such capabilities exist in three areas. In civil government, the need for land and
resource management has prompted intense interest in establishing GISs at all levels of government. The
U.S. Federal Geographic Data Committee is tasked with the development of a National Spatial Data
Infrastructure (NSDI), which establishes standards for organizing the vast amount of geospatial data
currently available at the national level and coordinating the integration of future data.64
Commercial applications for geospatial data include land management, resources exploration, civil engineering, transportation network management, and automated mapping/facilities management for utilities.
The military application of such spatial databases is the intelligence preparation of the battlefield
(IPB),65 which consists of developing a spatial database containing all terrain, transportation, groundcover, manmade structures, and other features available for use in real-time situation assessment for
command and control. The Defense Advanced Research Projects Agency (DARPA) Terrain Feature
Generator is one example of a major spatial database and fusion function defined to automate the
functions of IPB and geospatial database creation from diverse sensor sources and maps.66
Realizing efficient, affordable systems capable of accommodating the volume of spatial data required for large regions, and of performing reasoning that produces accurate and insightful information, depends on two critical technology areas:
• Spatial Data Structure — Efficient, linked data structures are required to handle the wide variety
of vector, raster, and nonspatial data sources. Hundreds of point, lineal, and areal features must
be accommodated. Data volumes are measured in terabytes and short access times are demanded
for even broad searches.
• Spatial Reasoning — The ability to reason in the context of dynamically changing spatial data is
required to assess the “meaning” of the data. The reasoning process must perform the following
kinds of operations to make assessments about the data:
• Spatial measurements (e.g., geometric, topological, proximity, and statistics)
• Spatial modeling
• Spatial combination and inference operations, in uncertainty
• Spatial aggregation of related entities
• Multivariate spatial queries (a minimal query sketch follows this list)
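A minimal sketch of a multivariate spatial query over coregistered raster layers (the layer names, class codes, and thresholds are illustrative): the query intersects slope, land cover, and distance-to-road conditions to produce a delimitation mask of the kind used as a derived layer in the target-search example of Section 4.6.2.1.

    import numpy as np

    def delimitation_mask(slope_deg, landcover, dist_to_road_m,
                          max_slope=15.0, passable_classes=(2, 3), max_dist=500.0):
        """Boolean mask of grid cells satisfying all query conditions.
        Inputs are coregistered 2-D rasters sharing the same grid."""
        return ((slope_deg <= max_slope)
                & np.isin(landcover, passable_classes)
                & (dist_to_road_m <= max_dist))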
Antony surveyed the alternatives for representing spatial and spatially referenced semantic knowledge67
and published the first comprehensive data fusion text68 that specifically focused on spatial reasoning for
combining spatial data.
4.6.2 Mapping, Charting and Geodesy (MC&G) Applications
The use of remotely sensed image data to create image maps and generate GIS base maps has long been
recognized as a means of automating map generation and updating to achieve currency as well as
accuracy.69-71 The following features characterize integrated geospatial systems:
• Currency — Remote sensing inputs enable continuous update with change detection and monitoring of the information in the database.
• Integration — Spatial data in a variety of formats (e.g., raster and vector data) is integrated with
meta data and other spatially referenced data, such as text, numerical, tabular, and hypertext
formats. Multiresolution and multiscale spatial data coexist, are linked, and share a common reference (i.e., map projection).
• Access — The database permits spatial query access for multiple user disciplines. All data is traceable, and the data accuracy, uncertainty, and entry time are annotated.
• Display — Spatial visualization and query tools provide maximum human insight into the data content using display overlays and 3-D capability.
Ambitious examples of such geospatial systems include the DARPA Terrain Feature Generator, the European ESPRIT II MultiSource Image Processing System (MuSIP),72,73 and NASA's Earth Observing System Data and Information System (EOSDIS).74
FIGURE 4.5 The spatial data fusion process flow includes the generation of a spatial database and the assessment of spatial information in the database by multiple users.
Figure 4.5 illustrates the most basic functional flow of such a system, partitioning the data integration
(i.e., database generation) function from the scene assessment function. The integration function spatially registers and links all data to a common spatial reference and also combines some data sets by
mosaicking, creating composite layers, and extracting features to create feature layers. During the integration step, higher-level spatial reasoning is required to resolve conflicting data and to create derivative
layers from extracted features. The output of this step is a registered, refined, and traceable spatial
database.
The next step is scene assessment, which can be performed for a variety of application functions (e.g.,
further feature extraction, target detection, quantitative assessment, or creation of vector layers) by a
variety of user disciplines. This stage extracts information in the context of the scene, and is generally
query driven.
Table 4.4 summarizes the major kinds of registration, combination, and reasoning functions that are
performed, illustrating the increasing levels of complexity in each level of spatial processing. Faust
described the general principles for building such a geospatial database, the hierarchy of functions, and
the concept for a blackboard architecture expert system to implement the functions described above.75
TABLE 4.4 Spatial Data Fusion Functions (functions and examples span registration, combination, and reasoning, in order of increasing complexity and processing)
Data fusion functions: image registration; image-to-terrain registration; orthorectification; image mosaicking, including radiometric balancing and feathering; multitemporal change detection; multiresolution image sharpening; multispectral classification of registered imagery; image-to-image cueing; spatial detection via multiple layers of image data; feature extraction using multilayer data; image-to-image cross-layer searches; feature finding (extraction by roaming across layers to increase detection, recognition, and confidence); context evaluation; image-to-nonimage cueing (e.g., IMINT to SIGINT); area delimitation.
Examples: coherent radar imagery change detection; SPOT™ imagery mosaicking; LANDSAT magnitude change detection; multispectral image sharpening using a panchromatic image; 3-D scene creation from multiple spatial sources; area delimitation to search for a critical target; automated map feature extraction; automated map feature updating.
Note: Spatial data fusion functions include a wide variety of registration, combination, and reasoning processes and algorithms.
4.6.2.1 A Representative Example
FIGURE 4.6 Target search example uses multiple layers of spatial data and applies iterative spatial reasoning to evaluate alternative hypotheses while accumulating evidence for each candidate target.
The spatial reasoning process can be illustrated by a hypothetical military example that follows the process an image or intelligence analyst might follow in search of critical mobile targets (CMTs). Consider the layers of a spatial database illustrated in Figure 4.6, in which recent unmanned air vehicle (UAV) SAR data (the top data layer) has been registered to all other layers, and the following process is performed (process steps correspond to path numbers on the figure):
1. A target cueing algorithm searches the SAR imagery for candidate CMT targets, identifying potential targets within the allowable area of a predefined delimitation mask (Data Layer 2).*
2. Location of a candidate target is used to determine the distance to transportation networks (which
are located in the map Data Layer 3) and to hypothesize feasible paths from the network to the
hide site.
3. The terrain model (Data Layer 8) is inspected along all paths to determine the feasibility that the
CMT could traverse the path. Infeasible path hypotheses are pruned.
4. Remaining feasible paths (on the basis of slope) are then inspected using the multispectral data
(Data Layers 4, 5, 6, and 7). A multispectral classification algorithm is scanned over the feasible
paths to assess ground load-bearing strength, vegetation cover, and other factors. Evidence is accumulated for slope and these factors (for each feasible path) to determine a composite path likelihood. Evidence is combined into a likelihood value and unlikely paths are pruned.
5. Remaining paths are inspected in the recent SAR data (Data Layer 1) for other significant evidence (e.g., support vehicles along the path, recent clear cut) that can support the hypothesis. Supportive evidence is accumulated to increase likelihood values.
6. Composite evidence (target likelihood plus likelihood of feasible paths to the candidate target hide location) is then used to make a final target detection decision (a minimal sketch of this evidence accumulation follows the list).
*This mask is a derived layer, produced by a spatial reasoning process in the scene generation stage, to delimit the entire search region to only those allowable regions in which a target may reside.
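A minimal sketch of the evidence-accumulation logic in steps 3 through 6 above, assuming each data-layer inspection can be expressed as a function returning a likelihood in [0, 1] for a candidate path (the multiplicative combination and pruning threshold are illustrative choices): path likelihoods are accumulated across layers, unlikely paths are pruned, and the surviving path evidence is folded into the composite target score.

    def score_candidate(target_likelihood, paths, layer_tests, prune_threshold=0.1):
        """paths: candidate hide-site approach paths; layer_tests: callables, each mapping a
        path to a likelihood in [0, 1] from one data layer (slope, load-bearing strength, cover, SAR cues)."""
        surviving = []
        for path in paths:
            likelihood = 1.0
            for test in layer_tests:
                likelihood *= test(path)              # accumulate evidence layer by layer
                if likelihood < prune_threshold:      # prune unlikely paths early
                    break
            if likelihood >= prune_threshold:
                surviving.append(likelihood)
        path_evidence = max(surviving) if surviving else 0.0
        return target_likelihood * path_evidence      # composite evidence for the detection decision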
In the example presented in Figure 4.6, the reasoning process followed a spatial search to accumulate
(or discount) evidence about a candidate target. In addition to target detection, similar processes can be
used to
• Insert data in the database (e.g., resolve conflicts between input sources),
• Refine accuracy using data from multiple sources,
• Monitor subtle changes between existing data and new measurements, and
• Evaluate hypotheses about future actions (e.g., trafficability of paths, likelihood of flooding given rainfall conditions, and economy of construction alternatives).
4.7 Summary
The fusion of image and spatial data is an important process that promises to achieve new levels of
performance and integration in a variety of application areas. By combining registered data from multiple
sensors or views, and performing intelligent reasoning on the integrated data sets, fusion systems are
beginning to significantly improve the performance of current generation automatic target recognition,
single-sensor imaging, and geospatial data systems.
References
1. Composite photo of Kuwait City in Aerospace and Defense Science, Spring 1991.
2. Aviation Week and Space Technology, May 2, 1994, 62.
3. Composite multispectral and 3-D terrain view of Haiti in Aviation Week and Space Technology,
October 17, 1994, 49.
4. Robert Ropelewski, Team Helps Cope with Data Flood, Signal, August 1993, 40–45.
5. Intelligence and Imagery Exploitation, Solicitation BAA 94-09-KXPX, Commerce Business Daily,
April 12, 1994.
6. Terrain Feature Generation Testbed for War Breaker Intelligence and Planning, Solicitation BAA
94-03, Commerce Business Daily, July 28, 1994; Terrain Visualization and Feature Extraction, Solicitation BAA 94-01, Commerce Business Daily, July 25, 1994.
7. Global Geospace Information and Services (GGIS), Defense Mapping Agency, Version 1.0, August
1994, 36–42.
8. M.A. Abidi and R.C. Gonzales, Eds., Data Fusion in Robotics and Machine Intelligence, Academic
Press, Boston, 1993.
9. Derek L.G. et al., Accurate Frameless Registration of MR and CT Images of the Head: Applications
in Surgery and Radiotherapy Planning, Dept. of Neurology, United Medical and Dental Schools of
Guy’s and St. Thomas’s Hospitals, London, SE1 9R, U.K., 1994.
10. Edward L. Waltz and James Llinas, Multisensor Data Fusion, Norwood, MA: Artech House, 1990.
11. David L. Hall, Mathematical Techniques in Multisensor Data Fusion, Norwood, MA: Artech House,
1992.
12. W.K. Pratt, Correlation Techniques of Image Registration, IEEE Trans. AES, May 1974, 353–358.
13. L. Gottsfield Brown, A Survey of Image Registration Techniques, Computing Surveys, 1992, Vol. 29,
325–376.
14. Franklin E. White, Jr., Data Fusion Subpanel Report, Proc. Fifth Joint Service Data Fusion Symp.,
October 1991, Vol. I, 335–361.
15. Bir Bhanu, Automatic Target Recognition: State-of-the-Art Survey, IEEE Trans. AES, Vol. 22, No. 4,
July 1986, 364–379.
16. Bir Bhanu and Terry L. Jones, Image Understanding Research for Automatic Target Recognition,
IEEE AES, October 1993, 15–23.
17. Wade G. Pemberton, Mark S. Dotterweich, and Leigh B. Hawkins, An Overview of ATR Fusion
Techniques, Proc. Tri-Service Data Fusion Symp., June 1987, 115–123.
18. Laurence Lazofson and Thomas Kuzma, Scene Classification and Segmentation Using Multispectral
Sensor Fusion Implemented with Neural Networks, Proc. 6th Nat’l. Sensor Symp., August 1993,
Vol. I, 135–142.
19. Victor M. Gonzales and Paul K. Williams, Summary of Progress in FLIR/LADAR Fusion for Target
Identification at Rockwell, Proc. Image Understanding Workshop, ARPA, November 1994, Vol. I,
495–499.
20. Anthony N.A. Schwickerath and J. Ross Beveridge, Object to Multisensor Coregistration with Eight
Degrees of Freedom, Proc. Image Understanding Workshop, ARPA, November 1994, Vol. I, 481–490.
21. Richard Delanoy, Jacques Verly, and Dan Dudgeon, Pixel-Level Fusion Using “Interest” Images,
Proc. 4th National Sensor Symp., August 1991, Vol. I, 29.
22. Mark K. Hamilton and Theresa A. Kipp, Model-based Multi-Sensor Fusion, Proc. IEEE Asilomar
Circuits and Systems Conf., November 1993.
23. Theresa A. Kipp and Mark K. Hamilton, Model-based Automatic Target Recognition, 4th Joint
Automatic Target Recognition Systems and Technology Conf., November 1994.
24. Greg Duane, Pixel-Level Sensor Fusion for Improved Object Recognition, Proc. SPIE Sensor Fusion,
1988, Vol. 931, 180–185.
25. D. Reago, et al., Multi-Sensor Feature Level Fusion, 4th Nat’l. Sensor Symp., August 1991, Vol. I, 230.
26. Eric Keydel, Model-Based ATR, Tutorial Briefing, Environmental Research Institute of Michigan,
February 1995.
27. M.K. Hamilton and T.A. Kipp, ARTM: Model-Based Mutisensor Fusion, Proc. Joint NATO AC/243
Symp. on Multisensors and Sensor Data Fusion, November 1993.
28. D.A. Analt, S.D. Raney, and B. Severson, An Angle and Distance Constrained Matcher with Parallel
Implementations for Model Based Vision, Proc. SPIE Conf. on Robotics and Automation, Boston,
MA, October 1991.
29. Model-Driven Automatic Target Recognition Report, ARPA/SAIC System Architecture Study
Group, October 14, 1994.
30. James Larson, Larry Hung, and Paul Williams, FLIR/Laser Radar Fused Model-based Target Recognition, 4th Nat’l. Sensor Symp., August 1991, Vol. I, 139–154.
31. Francis Corbett et al., Fused ATR Algorithm Development for Ground to Ground Engagement,
Proc. 6th Nat’l. Sensor Symp., August 1993, Vol. I, 143–155.
32. James D. Silk, Jeffrey Nicholl, David Sparrow, Modeling the Performance of Fused Sensor ATRs,
Proc. 4th Nat’l. Sensor Symp., August 1991, Vol. I, 323–335.
33. Rae H. Lee and W.B. Van Vleet, Registration Error Between Dissimilar Sensors, Proc. SPIE Sensor
Fusion, 1988, Vol. 931, 109–114.
34. J.A. Hoschette and C.R. Seashore, IR and MMW Sensor Fusion for Precision Guided Munitions,
Proc. SPIE Sensor Fusion, 1988, Vol. 931, 124–130.
35. David Lai and Richard McCoy, A Radar-IR Target Recognizer, Proc. 4th Nat’l. Sensor Symp., August
1991, Vol. I, 137.
36. Michael C. Roggemann et al., An Approach to Multiple Sensor Target Detection, Sensor Fusion II,
Proc. SPIE Vol. 1100, March 1989, 42–50.
37. Kris Siejko et al., Dual Mode Sensor Fusion Performance Optimization, Proc. 6th Nat’l. Sensor
Symp., August 1993, Vol. I, 71–89.
38. Vince Diehl, Frank Shields, and Andy Hauter, Testing of Multi-Sensor Automatic Target Recognition and Fusion Systems, Proc. 6th Nat’l. Sensor Fusion Symp., August 1993, Vol. I, 45–69.
39. Murali Menon and Paul Kolodzy, Active/passive IR Scene Enhancement by Markov Random Field
Sensor Fusion, Proc. 4th Nat’l. Sensor Symp., August 1991, Vol. I, 155.
40. Alianna J. Maren, A Hierarchical Data Structure Representation for Fusing Multisensor Information, Sensor Fusion II, Proc. SPIE Vol. 1100, March 1989, 162–178.
41. A. Omar Aboutalib, Lubong Tran, and Cheng-Yen Hu, Fusion of Passive Imaging Sensors for Target
Acquisition and Identification, Proc. 5th Nat’l. Sensor Symp., June 1992, Vol. I, 151.
42. Chen-Chau Chu and J.K. Aggarwal, Image Interpretation Using Multiple Sensing Modalities, IEEE
Trans. on Pattern Analysis and Machine Intelligence, August 1992, Vol. 14, No. 8, 840–847.
43. Peter A. Eggleston and Charles A. Kohl, Symbolic Fusion of MMW and IR Imagery, Proc. SPIE
Sensor Fusion, 1988, Vol. 931, 20–27.
44. Peter A. Eggleston, Algorithm Development Support Tools for Machine Vision, Amerinex Artificial
Intelligence, Inc., (n.d., received February 1995).
45. Christopher Bowman and Mark DeYoung, Multispectral Neural Network Camouflaged Vehicle
Detection Using Flight Test Images, Proc. World Conf. on Neural Networks, June 1994.
46. Jim Vrabel, MSI Band Sharpening Design Trade Study, Presented at 7th Joint Service Data Fusion
Symp., October 1994.
47. P.S. Chavez, Jr. et al., Comparison of Three Different Methods to Merge Multiresolution and
Multispectral Data: LANDSAT™ and SPOT Panchromatic, Photogrammetric Engineering and
Remote Sensing, March 1991, Vol. 57, No. 3, 295–303.
48. Kathleen Edwards and Philip A. Davis, The Use of Intensity-Hue-Saturation Transformation for
Producing Color-Shaded Relief Images, Photogrammetric Engineering and Remote Sensing, November 1994, Vol. 60, No. 11, 1379–1374.
49. Robert Tenney and Alan Willsky, Multiresolution Image Fusion, DTIC Report AD-B162322L,
January 31, 1992.
50. Barry N. Haack and E. Terrance Slonecker, Merging Spaceborne Radar and Thematic Mapper
Digital Data for Locating Villages in Sudan, Photogrammetric Engineering and Remote Sensing,
October 1994, Vol. 60, No. 10, 1253–1257.
51. Christopher C. Chesa, Richard L. Stephenson, and William A. Tyler, Precision Mapping of Spaceborne Remotely Sensed Imagery, Geodetical Info Magazine, March 1994, Vol. 8, No. 3, 64–67.
52. Roger M. Reinhold, Arc Digital Raster Imagery (ADRI) Program, Air Force Spatial data Technology
Workshop, Environmental Research Institute of Michigan, July 1991.
53. In So Kweon and Takeo Kanade, High Resolution Terrain Map from Multiple Sensor Data, IEEE
Trans. Pattern Analysis and Machine Intelligence, Vol. 14, No. 2, February 1992.
54. Peter J. Burt, Pattern Selective Fusion of IR and Visible Images Using Pyramid Transforms, Proc.
5th Nat’l. Sensor Symp., June 1992, Vol. I, 313–325.
55. M. Hansen et al., Real-time Scene Stabilization and Mosaic Construction, Proc. Image Understanding Workshop, ARPA, November 1994, Vol. I, 457–465.
56. Peter J. Burt and P. Anandan, Image Stabilization by Registration to a Reference Mosaic, Proc.
Image Understanding Workshop, ARPA, November 1994, Vol. I, 425–434.
57. Christopher C. Chiesa and William A. Tyler, Data Fusion of Off-Nadir SPOT Panchromatic Images
with Other Digital Data Sources, Proc. 1990 ACSM-ASPRS Annual Convention, Denver, March
1990, 86–98.
58. Avi Kak and Su-shing Chen (Eds.), Proc. Spatial Reasoning and Multi-Sensor Fusion Workshop,
AAAI, October 1987.
59. Paul S. Shenker (Ed.), Sensor Fusion: Spatial Reasoning and Scene Interpretation, SPIE Vol. 1003,
November 1988.
60. Paul S. Shenker, Sensor Fusion III: 3-D Perception and Recognition, SPIE Vol. 1383, November 1990.
61. Paul S. Shenker, Sensor Fusion IV: Control Paradigms and Data Structures, SPIE Vol. 1611, November 1991.
62. Paul S. Shenker, Sensor Fusion V, SPIE Vol. 1828, November 1992.
63. Paul S. Shenker, Sensor Fusion VI, SPIE Vol. 2059, November 1993.
64. Content Standards for Digital Geographic Metadata, Federal Geographic Data Committee, Washington D.C., June 8, 1994.
65. Intelligence Preparation of the Battlefield, FM-34-130, HQ Dept. of the Army, May 1989.
66. Development and Integration of the Terrain Feature Generator (TFG), Solicitation DACA76-94-R0009, Commerce Business Daily Issue PSA-1087, May 3, 1994.
67. Richard T. Antony, Eight Canonical Forms of Fusion: A Proposed Model of the Data Fusion Process,
Proc. of 1991 Joint Service Data Fusion Symp., Vol. III, October 1991.
68. Richard T. Antony, Principles of Data Fusion Automation, Norwood, MA: Artech House Inc., 1995.
69. R.L. Shelton and J.E. Estes, Integration of Remote Sensing and Geographic Information Systems,
Proc. 13th Int’l. Symp. Remote Sensing of the Environment, Environmental Research Institute of
Michigan, April 1979, 463–483.
70. John E. Estes and Jeffrey L. Star, Remote Sensing and GIS Integration: Towards a Prioritized
Research Agenda, Proc. 25th Int’l Symp. on Remote Sensing and Change, Graz Austria, April 1993,
Vol. I, 448–464.
71. J.L. Star (Ed.), The Integration of Remote Sensing and Geographic Information Systems, American
Society for Photogrammetry and Remote Sensing, 1991.
72. G. Sawyer et al., MuSIP Multi-Sensor Image Processing System, Image and Vision Computing,
Vol. 11, No. 1, January-February 1993, 25–34.
73. D.C. Mason et al., Spatial Database Manager for a Multi-source Image Understanding System,
Image and Vision Computing, Vol. 10, No. 9, November 1992, 589–609.
74. Nahum D. Gershon and C. Grant Miller, Dealing with the Data Deluge, IEEE Spectrum, July 1993,
28–32.
75. Nickolas L. Faust, Design Concept for Database Building, Project 2851 Newsletter, May 1989, 17–25.