ER Mapper
Applications Guide
September 2008
Copyright © 2008 ERDAS, Inc.
All rights reserved.
Printed in the United States of America.
The information contained in this document is the exclusive property of ERDAS, Inc. This work is protected under
United States copyright law and other international copyright treaties and conventions. No part of this work may be
reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying and
recording, or by any information storage or retrieval system, except as expressly permitted in writing by ERDAS, Inc.
All requests should be sent to the attention of:
Manager, Technical Documentation
ERDAS, Inc.
5051 Peachtree Corners Circle
Suite 100
Norcross, GA 30092-2500 USA.
The information contained in this document is subject to change without notice.
Government Reserved Rights. MrSID technology incorporated in the Software was developed in part through a
project at the Los Alamos National Laboratory, funded by the U.S. Government, managed under contract by the
University of California (University), and is under exclusive commercial license to LizardTech, Inc. It is used under
license from LizardTech. MrSID is protected by U.S. Patent No. 5,710,835. Foreign patents pending. The U.S.
Government and the University have reserved rights in MrSID technology, including without limitation: (a) The U.S.
Government has a non-exclusive, nontransferable, irrevocable, paid-up license to practice or have practiced
throughout the world, for or on behalf of the United States, inventions covered by U.S. Patent No. 5,710,835 and has
other rights under 35 U.S.C. § 200-212 and applicable implementing regulations; (b) If LizardTech's rights in the
MrSID Technology terminate during the term of this Agreement, you may continue to use the Software. Any provisions
of this license which could reasonably be deemed to do so would then protect the University and/or the U.S.
Government; and (c) The University has no obligation to furnish any know-how, technical assistance, or technical data
to users of MrSID software and makes no warranty or representation as to the validity of U.S. Patent 5,710,835 nor
that the MrSID Software will not infringe any patent or other proprietary right. For further information about these
provisions, contact LizardTech, 1008 Western Ave., Suite 200, Seattle, WA 98104.
ERDAS, ERDAS IMAGINE, IMAGINE OrthoBASE, Stereo Analyst and IMAGINE VirtualGIS are registered trademarks;
IMAGINE OrthoBASE Pro is a trademark of ERDAS, Inc.
SOCET SET is a registered trademark of BAE Systems Mission Solutions.
Other companies and products mentioned herein are trademarks or registered trademarks of their respective owners.
Table of Contents
Table of Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Data Fusion and Mosaicing . . . . . . . . . . . . . . . . . . . . . . . . . 3
Author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Introducing data fusion and mosaicing . . . . . . . . . . . . . . 3
Fusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Mosaicing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
ER Mapper Fusion and Mosaicing . . . . . . . . . . . . . . . . . . 4
What you need to know . . . . . . . . . . . . . . . . . . . . . . . 4
Using data fusion to provide an integrated view into several datasets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Color and intensity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Attribute highlighting . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Region highlighting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Vectors over raster data . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Data fusion using color and intensity . . . . . . . . . . . . . . . . . . 6
Common ways of fusing data . . . . . . . . . . . . . . . . . . . . 7
Landsat TM (30 meter resolution) and SPOT Panchromatic data (10 meter resolution) . . . 7
Landsat TM and airborne magnetics surveys . . . . . . . . . . . . 7
Radiometrics (potassium, thorium and uranium) and airborne magnetic surveys . . . 7
Classified images that can be shown over air photos or high resolution satellite images . . . 7
Combining some information (vegetation, water shed factors, fire hazards, etc.) with elevation . . . 7
Using both absolute values and structure from a single source of data . . . 7
Pseudocolor colordrape data fusion . . . . . . . . . . . . . . . 8
To create a colordrape algorithm from scratch . . . . . . . . . . . 8
RGBI (RGB->HSI->RGB) data fusion . . . . . . . . . . . . . . 9
To create an RGBI display using the Templates/RGBI algorithm . . . 10
Brovey transform RGBI data fusion . . . . . . . . . . . . . . 10
To create a virtual dataset with Landsat TM and SPOT Pan data . . . 11
Data fusion using attribute highlighting . . . . . . . . . . 12
To create an algorithm for data fusion with attribute highlighting . . . 13
More complex examples of attribute highlighting . . . . . . . . 14
Data fusion using region highlighting . . . . . . . . . . . . 15
Fusion of vector and raster data . . . . . . . . . . . . . . . . 16
Useful formulas for data fusion and mosaicing . . . . . 17
Using PC1 to show terrain and terrain changes . . . . . . . . . . 17
Using INREGION() on a conditional basis . . . . . . . . . . . . . . 18
Digital Elevation Models . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Sources and quality of DEM data . . . . . . . . . . . . . . . . 19
DEM creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
DEM quality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Typical DEM applications . . . . . . . . . . . . . . . . . . . . . . 22
Displaying DEMs . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Grayscale image . . . . . . . . . . . . . . . . . . . . . . . . . . 24
False color image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Realtime “Sun” shaded image . . . . . . . . . . . . . . . . . . . . . 24
Realtime combined “Sun” and “False Color” Colordrape image . . . 24
Shiny (HSI model) Colordrape image . . . . . . . . . . . . . . . . 25
To create a shiny colordrape image . . . . . . . . . . . . . . . . . . 25
Mosaicing DEMs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Edge enhancement and other filters . . . . . . . . . . . . . . . . . 27
Raster contour line generation . . . . . . . . . . . . . . . . . . . . . 28
To generate a simple raster contour line . . . . . . . . . . . . . . 28
Realtime 3-D Perspective views . . . . . . . . . . . . . . . . . . . . 29
Realtime 3-D Flythrough views . . . . . . . . . . . . . . . . . . . . . 29
Stereo Realtime 3-D Perspective or Flythrough views . . . . . 30
Hardcopy Left/Right Stereo pair images . . . . . . . . . . . . . . 30
Integrating DEM and other data . . . . . . . . . . . . . . . . . . 30
Colordrape et al . . . . . . . . . . . . . . . . . . . . . . . . . . 31
RGB-Height display views . . . . . . . . . . . . . . . . . . . . . 31
Classification-Height display views . . . . . . . . . . . . . . . 31
Raster and Vector data integration . . . . . . . . . . . . . . . . 31
Processing DEMs . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Slope from a DEM . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Aspect from a DEM . . . . . . . . . . . . . . . . . . . . . . . . . 33
Selecting a range of heights from a DEM . . . . . . . . . . . . . 33
Showing DEM values within Regions . . . . . . . . . . . . . . . . 33
Abstracted views into DEMs . . . . . . . . . . . . . . . . . . . . 34
To create a virtual dataset . . . . . . . . . . . . . . . . . . . . 34
What-if processing . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Highlighting areas of fire risk . . . . . . . . . . . . . . . . . . . . . . 36
Deciding placement of a pipeline . . . . . . . . . . . . . . . . . . . 38
Classification using DEM data . . . . . . . . . . . . . . . . . . 38
Correcting SAR data with DEMs . . . . . . . . . . . . . . . . . 39
Map Production . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Producing quality combined image/vector maps . . . . 41
Integrating GIS and CAD data with aerial photos using ER Mapper . . . 41
Sub-sectioning data for GIS/CAD systems with limited imagery capabilities . . . 42
Sub-sectioning data into smaller files . . . . . . . . . . . . . . 43
Producing lower resolution overview image files . . . . . . . . . 43
Directly use the ER Mapper image handlers within your GIS or CAD system . . . 43
Overlay your GIS or CAD data in ER Mapper, over aerial photo mosaics . . . 44
Directly sharing image files between different systems . . . . . 45
Change Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Data sources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Raster Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Vector Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Registering data to ground control points . . . . . . . . . . . . 49
Selecting Control Points . . . . . . . . . . . . . . . . . . . . . 49
Data Enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . 49
RMS Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Rectifying raster data to vector data . . . . . . . . . . . . . . 50
Resampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Polynomial Order . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Rectifying raster data to raster data . . . . . . . . . . . . . 51
Comparing Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Eliminating seasonal and atmospheric effects . . . . . . 52
Creating a change image . . . . . . . . . . . . . . . . . . . . . . 53
Red Green Difference Image . . . . . . . . . . . . . . . . . . . . 54
Band Ratios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Principal Components Analysis . . . . . . . . . . . . . . . . . . 54
Image Differencing . . . . . . . . . . . . . . . . . . . . . . . . . 54
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Crop Type Classification and Area Inventory . . . . . . . . . . . . 59
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Project . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Multispectral classification . . . . . . . . . . . . . . . . . . . . 60
Getting to know your image . . . . . . . . . . . . . . . . . . . 61
Image display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Image enhancement . . . . . . . . . . . . . . . . . . . . . . . . . 61
Classification schema . . . . . . . . . . . . . . . . . . . . . . . . 62
Sequence of operations . . . . . . . . . . . . . . . . . . . . . . . 62
Multispectral sensing of crops . . . . . . . . . . . . . . . . . . 63
Spectral Reflectance Characteristics . . . . . . . . . . . . . . . . . 63
Ground truth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
Training Site Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
Classification methodology . . . . . . . . . . . . . . . . . . . . 69
Training Class Refinement . . . . . . . . . . . . . . . . . . . . . 69
Typicality Thresholds . . . . . . . . . . . . . . . . . . . . . . . 70
Weighting the Classifier . . . . . . . . . . . . . . . . . . . . . . 71
A Note About Real-Time Classification . . . . . . . . . . . . . . 71
Displaying classification results . . . . . . . . . . . . . . . . 71
A note about unsupervised classification . . . . . . . . . . 71
Classification accuracy assessment . . . . . . . . . . . . . . 72
Summarizing the position of remote sensing in crop analysis . . . 72
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Vegetation in Remote Sensing FAQs . . . . . . . . . . . . . . . . . 75
Author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Revision history . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Version 1.0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Version 0.7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Version 0.6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Version 0.5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
List of questions . . . . . . . . . . . . . . . . . . . . . . . . . 77
General . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Vegetation index . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Basic indices . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Indices to minimize soil noise . . . . . . . . . . . . . . . . . . 77
Indices to minimize atmospheric noise . . . . . . . . . . . . . . 77
Other indices . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Answers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
General . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Vegetation index . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Basic indices . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
Indices to minimize soil noise . . . . . . . . . . . . . . . . . . 85
Indices to minimize atmospheric noise . . . . . . . . . . . . . . 89
Other indices . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Future directions . . . . . . . . . . . . . . . . . . . . . . . . . 95
Final question . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Geophysical Data Imaging and Presentation . . . . . . . . . . . 101
Author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
Geophysical methods . . . . . . . . . . . . . . . . . . . . . . . 102
Other datasets to aid geophysical interpretation . . . 103
Raster datasets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
Vector datasets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
Gridding and imaging . . . . . . . . . . . . . . . . . . . . . . . 104
Gridding of geophysical data . . . . . . . . . . . . . . . . . . . 104
Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
Image grid spacing and geophysical data spacing . . . . . . . . 106
Resampling of Image data in imaging . . . . . . . . . . . . . . . 106
General approach to geophysical imaging . . . . . . . . . . . . . 107
Visualizing Geophysical Data . . . . . . . . . . . . . . . . . . . 107
Colour perception . . . . . . . . . . . . . . . . . . . . . . . . . 107
Data Familiarization . . . . . . . . . . . . . . . . . . . . . . . . 108
Using Transforms to improve displays . . . . . . . . . . . . . . 109
Data viewing options . . . . . . . . . . . . . . . . . . . . . . . . 109
Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
Smoothing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
Frequency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
Edge enhancement/directional . . . . . . . . . . . . . . . . . . . 113
Second derivative filters . . . . . . . . . . . . . . . . . . . . . 114
Residual . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
Continuation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
Notes on the use of filters with geophysical data . . . . . . . 115
Use of Equations in Geophysics . . . . . . . . . . . . . . . . . . 115
Combining Filters and Equations . . . . . . . . . . . . . . . . . 116
Pseudo-Vertical Derivative . . . . . . . . . . . . . . . . . . . . 116
C User code links . . . . . . . . . . . . . . . . . . . . . . . . . 116
Interpretation using annotation . . . . . . . . . . . . . . . 117
Imaging magnetics . . . . . . . . . . . . . . . . . . . . . . . . . 117
Magnetic data collection . . . . . . . . . . . . . . . . . . . . . 117
Effect of Depth on Anomaly Shape . . . . . . . . . . . . . . . . 118
Reliability of Gridded Magnetic Data . . . . . . . . . . . . . . 118
Data Levelling Artefacts . . . . . . . . . . . . . . . . . . . . . 118
Effect of Aircraft Altitude . . . . . . . . . . . . . . . . . . . . 118
Grid Manipulation of Magnetic Data . . . . . . . . . . . . . . . 118
Airborne radiometrics imaging . . . . . . . . . . . . . . . . . . 119
Radiometric data . . . . . . . . . . . . . . . . . . . . . . . . . . 119
Survey accuracy and calibration . . . . . . . . . . . . . . . . . 119
Ground resolution . . . . . . . . . . . . . . . . . . . . . . . . . 119
Aircraft Altitude . . . . . . . . . . . . . . . . . . . . . . . . . 120
Geology, mineralization and soil mapping . . . . . . . . . . . . 120
Interpreting Radiometrics Data . . . . . . . . . . . . . . . . . . 121
Seismic reflection data imaging . . . . . . . . . . . . . . . . . 124
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
2D seismic data . . . . . . . . . . . . . . . . . . . . . . . . . . 124
3D seismic data . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Imaging two-way time data . . . . . . . . . . . . . . . . . . . . 125
Creative operations on seismic image data . . . . . . . . . . . 125
Dip and Azimuth Displays in Seismic . . . . . . . . . . . . . . . 125
Data integration . . . . . . . . . . . . . . . . . . . . . . . . . . 126
Gravity imaging . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
Gravity image displays and spatial frequency . . . . . . . . . . 126
Vector Overlays . . . . . . . . . . . . . . . . . . . . . . . . . . 127
Gravity and Bouguer density choice in imaging . . . . . . . . . 127
Gravity stripping . . . . . . . . . . . . . . . . . . . . . . . . . 128
Gravity Profiling . . . . . . . . . . . . . . . . . . . . . . . . . 128
Dynamic links to user code . . . . . . . . . . . . . . . . . . . . 128
Geophysical filters in gravity . . . . . . . . . . . . . . . . . . 128
Airborne electromagnetics imaging . . . . . . . . . . . . . . . . 129
Resistivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
EM imaging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
Ground based EM systems imaging . . . . . . . . . . . . . . . . . 129
Topography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
Sources of topography data . . . . . . . . . . . . . . . . . . . . . . 130
Topography in combination with other data . . . . . . . . . . . 131
References and further reading . . . . . . . . . . . . . . . . . . 131
Magnetics and gravity . . . . . . . . . . . . . . . . . . . . . . . 131
Radiometrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
Seismic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
Airborne EM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
Ground geophysics . . . . . . . . . . . . . . . . . . . . . . . . . 132
Data integration . . . . . . . . . . . . . . . . . . . . . . . . . . 132
Geophysical filters . . . . . . . . . . . . . . . . . . . . . . . . 132
Image Processing in Mineral Exploration . . . . . . . . . . . . . . 133
Author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
Displaying geochemical data . . . . . . . . . . . . . . . . . . 133
Combined geochemical data . . . . . . . . . . . . . . . . . . 134
Geochemical fused with Landsat . . . . . . . . . . . . . . . 134
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
Semi-Automatic Interpretation of Aeromagnetic Datasets . . 135
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
The semi-automatic interpretation algorithm . . . . . . 135
Abbreviated time and motion study of the interpretation process . . . 136
Traditional interfacing of algorithm output with data and interpreter . . . 137
Interfacing with ER Mapper . . . . . . . . . . . . . . . . . . . 138
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . . . 141
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
Custom-built Dynamic Links . . . . . . . . . . . . . . . . . . . . . . 143
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
Advantages of digital data integration . . . . . . . . . . . 143
Data Integration at CNGC . . . . . . . . . . . . . . . . . . . . 144
What is SQL? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
Data storage considerations . . . . . . . . . . . . . . . . . . 146
Dynamic link construction . . . . . . . . . . . . . . . . . . . . 147
Hardwired . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
Interactive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
Dynamic operation within ER Mapper . . . . . . . . . . . 148
Hardwired . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
Interactive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
Example: Talbot Island . . . . . . . . . . . . . . . . . . . . . . 148
Future implications . . . . . . . . . . . . . . . . . . . . . . . . . 154
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
Appendix A . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
EOH Geology HARDWIRED Dynamic Link for ER Mapper (Bourne Shell) . . . 155
Appendix B . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
EOH Geology Hardwired Link Database Query Output (Input to PostScript Plotting Script) . . . 162
Fast Fourier Transforms . . . . . . . . . . . . . . . . . . . . . . . . . 165
Author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
The Fourier transform of an image . . . . . . . . . . . . . 165
Notch filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
Low-pass and High-pass Fourier filtering . . . . . . . . 168
Fourier processing of potential-field data . . . . . . . . 170
Aeromagnetic data from Cape York Peninsula, Queensland . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
(A) Edge effects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
(B) Display of transforms . . . . . . . . . . . . . . . . . . . . . . . . 177
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
Principal Component Analysis . . . . . . . . . . . . . . . . . . . . . 179
Author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
Using Principal Component Analysis . . . . . . . . . . . . 179
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
Aerial Photography and Data Integration . . . . . . . . . . . . . 183
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
Context: Data Gathering . . . . . . . . . . . . . . . . . . . . . 183
Example: Transport Corridor Development . . . . . . . . . . . . . 184
All existing mapping . . . . . . . . . . . . . . . . . . . . . . . . 184
Statutory designations . . . . . . . . . . . . . . . . . . . . . . 185
Utilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
Land ownership . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
Demographics . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
Archive material . . . . . . . . . . . . . . . . . . . . . . . . . . 185
Georeferencing and Rectification . . . . . . . . . . . . . . . . . 186
Integrating and updating vector data for planning . . . . . . . 186
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
Monitoring Crop Production and Assessing Irrigation Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
Context: Agricultural development . . . . . . . . . . . . . 189
Example: Irrigation development . . . . . . . . . . . . . . 190
Crop Production . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
Irrigation requirements . . . . . . . . . . . . . . . . . . . . . . . . . 191
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
Coastal Habitat Mapping . . . . . . . . . . . . . . . . . . . . . . . . 193
Author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
Available imagery . . . . . . . . . . . . . . . . . . . . . . . . . 194
Aerial photography . . . . . . . . . . . . . . . . . . . . . . . . . 194
Scanned data . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
Satellite imagery . . . . . . . . . . . . . . . . . . . . . . . . . 194
Geographe Bay . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
Georectification and image enhancement . . . . . . . . . . . . . 195
Data integration . . . . . . . . . . . . . . . . . . . . . . . . . . 195
Masking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
Image interpretation . . . . . . . . . . . . . . . . . . . . . . . . 195
The Abrolhos and Montebello islands . . . . . . . . . . . . 196
How to merge images . . . . . . . . . . . . . . . . . . . . . . . . . . 196
Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
Monitoring Environmental Change in Lake Turkana . . . . . . . 199
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
Water resources, agricultural development and environmental change . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
Example: Erosion, Siltation and Lake Capacity . . . . . 200
False colour image or Colordrape . . . . . . . . . . . . . . . . . . 200
Rectification . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
Image mosaicing and annotation . . . . . . . . . . . . . . . . . . 201
Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
Monitoring the Gulf of Gdansk . . . . . . . . . . . . . . . . . . . . 203
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
Bio-optical Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . 204
Image processing . . . . . . . . . . . . . . . . . . . . . . . . . . 204
Principal Components Analysis (PCA) . . . . . . . . . . . . . . . 204
Turbidity and Chlorophyll Concentration . . . . . . . . . . . . . 204
Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
The future . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
Boreal Forest Monitoring . . . . . . . . . . . . . . . . . . . . . . . . 207
Author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
Locations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
Lightning project . . . . . . . . . . . . . . . . . . . . . . . . . . 207
Boreal forest project . . . . . . . . . . . . . . . . . . . . . . . . 208
Customizing forestry applications . . . . . . . . . . . . . . 208
Fusing, mosaicing, and virtual datasets . . . . . . . . . . 210
Implications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
SAR imagery in mineral and oil exploration . . . . . . . . . . . . 213
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
Characteristics of ERS-1 SAR imagery . . . . . . . . . . . 213
Panama case study . . . . . . . . . . . . . . . . . . . . . . . . . 214
Satellite image availability and acquisition . . . . . . . . . . . . 214
ERS-1 SAR interpretation . . . . . . . . . . . . . . . . . . . . . . . . 215
Irian Jaya case study . . . . . . . . . . . . . . . . . . . . . . . 217
Satellite image availability and acquisition . . . . . . . . . . . . 217
ERS-1 SAR interpretation . . . . . . . . . . . . . . . . . . . . . . . . 217
Concluding points . . . . . . . . . . . . . . . . . . . . . . . . . . 218
Mapping Shelf Circulation . . . . . . . . . . . . . . . . . . . . . . . 221
Author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
Synopsis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
Data processing . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
Gaussian transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
Conversion of digital counts to brightness temperature . . . 223
Sea-surface temperatures using McMillan and Crosby (1984)
formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
Sea-surface temperatures using the Llewellyn-Jones et al.
(1984) formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
3 x 3 pixel smoothing kernel to reduce the noise generated by the
temperature-correction algorithm. . . . . . . . . . . . . . . . . . 228
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
Oil and gas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
ER Mapper for Unix required . . . . . . . . . . . . . . . . . . . . . 231
Schlumberger Geoquest and Landmark imports . . . . 231
Importing seismic data . . . . . . . . . . . . . . . . . . . . . . . . . 233
Seismic raster grid formats supported . . . . . . . . . . . 233
Raster file import utilities . . . . . . . . . . . . . . . . . . . . . . . . 234
SEG-Y . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
To import a SEG-Y tape . . . . . . . . . . . . . . . . . . . 234
Command line . . . . . . . . . . . . . . . . . . . . . . . . 234
Configuration and setup . . . . . . . . . . . . . . . . . . 234
Input data format . . . . . . . . . . . . . . . . . . . . . 235
Zycor ASCII grid . . . . . . . . . . . . . . . . . . . . . . 235
To import Zycor ASCII grids . . . . . . . . . . . . . . . . 235
Command Line . . . . . . . . . . . . . . . . . . . . . . . . 235
Configuration and setup . . . . . . . . . . . . . . . . . . 235
Input data format . . . . . . . . . . . . . . . . . . . . . 236
GeoQuest (IESX) ASCII grid dump . . . . . . . . . . . . . . 236
Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
Command Line . . . . . . . . . . . . . . . . . . . . . . . . 236
Configuration and setup . . . . . . . . . . . . . . . . . . 236
Input data format . . . . . . . . . . . . . . . . . . . . . 236
Charisma 2D XYZ ASCII grid . . . . . . . . . . . . . . . . . 237
Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
Command Line . . . . . . . . . . . . . . . . . . . . . . . . 237
Configuration and setup . . . . . . . . . . . . . . . . . . 237
Import data format . . . . . . . . . . . . . . . . . . . . . 237
Charisma 3D Inline Xline XYZ ASCII grid . . . . . . . . . 237
To run the Charisma 3D Inline Xline XYZ ASCII grid utility from
the menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
Command Line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
Configuration and setup . . . . . . . . . . . . . . . . . . . . . . . . 239
The Charisma configuration file . . . . . . . . . . . . . . . . . . . 240
GeoQuest (IESX) MapView ASCII Grid . . . . . . . . . . . 240
To import a GeoQuest (IESX) MapView file . . . . . . . . . . . 241
Command line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
Input data format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
Generic issues related to importing raster grid data . . 241
Updating header files with projection/datum details . . . . . 241
Editing registration information in header files . . . . . . . . . 242
Importing geological and cultural vector data . . . . . . . . . . 243
OpenWorks 3.1 Wells—Import . . . . . . . . . . . . . . . . . 243
To import OpenWorks 3.1 Wells . . . . . . . . . . . . . . . . . . . 244
SeisWorks Fault Polygons . . . . . . . . . . . . . . . . . . . . 244
To import SeisWorks Fault Polygons . . . . . . . . . . . . . . . . 244
SeisWorks Manual Contours . . . . . . . . . . . . . . . . . . 245
To import SeisWorks Manual Contours . . . . . . . . . . . . . . 245
Updating geological and cultural vector data . . . . . . 247
Moving updated vector data back to other products . 247
AutoCAD DXF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
SeisWorks Fault Polygons . . . . . . . . . . . . . . . . . . . . . . . 248
Generating hardcopy prints . . . . . . . . . . . . . . . . . . . . . . 249
Generating maps as a TIFF file . . . . . . . . . . . . . . . . 249
How to create a map . . . . . . . . . . . . . . . . . . . . 250
Page Setup . . . . . . . . . . . . . . . . . . . . . . . . . 250
Including Vector Data . . . . . . . . . . . . . . . . . . . 251
Adding Map Composition Items . . . . . . . . . . . . . . . . 252
Using the Page Setup Wizard . . . . . . . . . . . . . . . . 253
Data viewing tips . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
The Intensity layer . . . . . . . . . . . . . . . . . . . . 255
To view your seismic dataset . . . . . . . . . . . . . . . . 255
To use a formula to invert the dataset values . . . . . . . 255
To use sun shading . . . . . . . . . . . . . . . . . . . . . 256
Colordraping . . . . . . . . . . . . . . . . . . . . . . . . 256
To create a colordrape image . . . . . . . . . . . . . . . . 257
To drape amplitude data in color over time data . . . . . . 258
Tips for Colordrape Algorithms . . . . . . . . . . . . . . . 258
Training . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
Supplied Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
Applications . . . . . . . . . . . . . . . . . . . . . . . . 260
Airphoto . . . . . . . . . . . . . . . . . . . . . . . . . . 260
Fire_Risk . . . . . . . . . . . . . . . . . . . . . . . . . 260
Land_Information . . . . . . . . . . . . . . . . . . . . . . 260
Mineral_Exploration . . . . . . . . . . . . . . . . . . . . 260
Mt_St_Helens . . . . . . . . . . . . . . . . . . . . . . . . 262
Oil_and_Gas_Exploration . . . . . . . . . . . . . . . . . . 262
Telecommunications . . . . . . . . . . . . . . . . . . . . . 262
World_Topography . . . . . . . . . . . . . . . . . . . . . . 262
Data_Types . . . . . . . . . . . . . . . . . . . . . . . . . 262
Airphoto . . . . . . . . . . . . . . . . . . . . . . . . . . 262
Arc_info . . . . . . . . . . . . . . . . . . . . . . . . . . 263
AutoCAD_DXF . . . . . . . . . . . . . . . . . . . . . . . . 264
Digital_Elevation . . . . . . . . . . . . . . . . . . . . . 264
Ers1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
Landsat_MSS . . . . . . . . . . . . . . . . . . . . . . . . 268
Landsat_TM . . . . . . . . . . . . . . . . . . . . . . . . . 270
Abrams_Ratios.alg . . . . . . . . . . . . . . . . . . . . . 270
Magnetics_And_Radiometrics . . . . . . . . . . . . . . . . . 277
RadarSat . . . . . . . . . . . . . . . . . . . . . . . . . . 280
Seismic . . . . . . . . . . . . . . . . . . . . . . . . . . 280
SPOT_Panchromatic . . . . . . . . . . . . . . . . . . . . . 283
SPOT_xs . . . . . . . . . . . . . . . . . . . . . . . . . . 284
Functions_And_Features . . . . . . . . . . . . . . . . . . . 285
3D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
3D_multi_surface . . . . . . . . . . . . . . . . . . . . . . 288
Airphoto_Turorial . . . . . . . . . . . . . . . . . . . . . 289
Class_Interactive_Compute . . . . . . . . . . . . . . . . . 289
Classification . . . . . . . . . . . . . . . . . . . . . . . 290
Classification_Display . . . . . . . . . . . . . . . . . . . 291
Contours . . . . . . . . . . . . . . . . . . . . . . . . . . 292
Data_Fusion . . . . . . . . . . . . . . . . . . . . . . . . 292
Data_Mosaic . . . . . . . . . . . . . . . . . . . . . . . . 296
Dynamic_Links . . . . . . . . . . . . . . . . . . . . . . . 297
Fft . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
Geocoding . . . . . . . . . . . . . . . . . . . . . . . . . 298
Gridding . . . . . . . . . . . . . . . . . . . . . . . . . . 298
HSI_Transforms . . . . . . . . . . . . . . . . . . . . . . . 298
Language_Specific_Links . . . . . . . . . . . . . . . . . . 299
Map_Objects . . . . . . . . . . . . . . . . . . . . . . . . 299
Map_Production . . . . . . . . . . . . . . . . . . . . . . . 300
Radar . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
Regions . . . . . . . . . . . . . . . . . . . . . . . . . . 301
Spatial_Filters . . . . . . . . . . . . . . . . . . . . . . 302
Vectors_and_Imagery . . . . . . . . . . . . . . . . . . . . 304
Virtual Datasets . . . . . . . . . . . . . . . . . . . . . . 305
Miscellaneous/Templates . . . . . . . . . . . . . . . . . . 305
Common . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
Dataset_Output . . . . . . . . . . . . . . . . . . . . . . . 307
FFT . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
Virtual_Datasets . . . . . . . . . . . . . . . . . . . . . . 308
Miscellaneous/Test_Patterns . . . . . . . . . . . . . . . . 309
3D_Cube.alg . . . . . . . . . . . . . . . . . . . . . . . . 309
HSI_wheel.alg . . . . . . . . . . . . . . . . . . . . . . . 309
3D_Julia_Fractals.alg . . . . . . . . . . . . . . . . . . . 309
rgbcmyk_bands.alg . . . . . . . . . . . . . . . . . . . . . 309
Test_Patterns.alg . . . . . . . . . . . . . . . . . . . . . 309
Shared_Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
Supplied Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
DEM filters . . . . . . . . . . . . . . . . . . . . . . . . 311
Edge filters . . . . . . . . . . . . . . . . . . . . . . . . 311
Gaussian filters . . . . . . . . . . . . . . . . . . . . . . 312
Geophysics filters . . . . . . . . . . . . . . . . . . . . . 312
High pass averaging filters . . . . . . . . . . . . . . . . 314
Low pass averaging filters . . . . . . . . . . . . . . . . . 315
Ranking filters . . . . . . . . . . . . . . . . . . . . . . 317
SAR filters . . . . . . . . . . . . . . . . . . . . . . . . 317
Seismic . . . . . . . . . . . . . . . . . . . . . . . . . . 319
Easterly_dip . . . . . . . . . . . . . . . . . . . . . . . . 320
Hanning_3 . . . . . . . . . . . . . . . . . . . . . . . . . 320
Northerly_dip . . . . . . . . . . . . . . . . . . . . . . . 320
Standard Filters . . . . . . . . . . . . . . . . . . . . . . 320
Sobel1 . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
Sobel2 . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
darn . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
default . . . . . . . . . . . . . . . . . . . . . . . . . . 320
force_nosubs . . . . . . . . . . . . . . . . . . . . . . . . 320
h_edge . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
h_edge_5.ker . . . . . . . . . . . . . . . . . . . . . . . . 321
lap_clean.ker . . . . . . . . . . . . . . . . . . . . . . . 321
laplacian.ker . . . . . . . . . . . . . . . . . . . . . . . 321
laplacian5.ker . . . . . . . . . . . . . . . . . . . . . . . 321
low_freq . . . . . . . . . . . . . . . . . . . . . . . . . . 321
sharpen . . . . . . . . . . . . . . . . . . . . . . . . . . 321
thresh3.ker . . . . . . . . . . . . . . . . . . . . . . . . 321
thresh5.ker . . . . . . . . . . . . . . . . . . . . . . . . 322
v_edge . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
v_edge_5 . . . . . . . . . . . . . . . . . . . . . . . . . . 322
Sunangle Filters . . . . . . . . . . . . . . . . . . . . . . 322
East_West . . . . . . . . . . . . . . . . . . . . . . . . . 322
North_South . . . . . . . . . . . . . . . . . . . . . . . . 322
North_West . . . . . . . . . . . . . . . . . . . . . . . . . 322
North_West_5 . . . . . . . . . . . . . . . . . . . . . . . . 323
South_East . . . . . . . . . . . . . . . . . . . . . . . . . 323
South_North . . . . . . . . . . . . . . . . . . . . . . . . 323
South_West . . . . . . . . . . . . . . . . . . . . . . . . . 323
West_East . . . . . . . . . . . . . . . . . . . . . . . . . 323
Usercode . . . . . . . . . . . . . . . . . . . . . . . . . . 323
average . . . . . . . . . . . . . . . . . . . . . . . . . . 323
kernel.template . . . . . . . . . . . . . . . . . . . . . . 323
local_enhance . . . . . . . . . . . . . . . . . . . . . . . 324
majority . . . . . . . . . . . . . . . . . . . . . . . . . . 324
median . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
modefilt . . . . . . . . . . . . . . . . . . . . . . . . . . 324
rm2badli . . . . . . . . . . . . . . . . . . . . . . . . . . 324
rmbadlin . . . . . . . . . . . . . . . . . . . . . . . . . . 324
w_loc_me . . . . . . . . . . . . . . . . . . . . . . . . . . 324
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
Preface
The purpose of this manual is to show you examples of ER Mapper
at work. It attempts to go deeper than the other manuals in the set,
discussing the application of general concepts to particular
circumstances. The topics have been grouped roughly into parts for
convenience but the chapters are by no means exclusive. The aim
has been to go for breadth of coverage rather than duplication. By
the nature of the product and disciplines in which it is used, some
techniques will apply to many applications. You may want to
approach the manual by reading the chapters most relevant to your
application and then skimming the headings in the other chapters to
see if they discuss other techniques you are interested in. You may
also want to use the index to find references to particular
applications or techniques.
This manual assumes that you already know your way around
ER Mapper - for example how to change histograms or add filters. If
not, you will have to look these techniques up in the User Guide or
tutorial manual. For example, the User Guide will describe how to
add a filter while this manual will suggest which filter to use in a
particular instance.
If you haven’t used ER Mapper before you may still want to browse
through this manual to get an idea of the diverse ways in which the
product can be used.
Reference is made in certain chapters to files in the "examples"
directory. This directory is located in the ER Mapper installation
directory (usually "C:\Program Files\ERDAS\ERDAS ER
Mapper 7.2" for 32-bit Windows and "C:\Program Files
(x86)\ERDAS" for 64-bit Windows). Note that the examples
files may not be installed. This will depend on the configuration
option you specified when first running the setup program.
Acknowledgements
This manual is the result of many hours of work by many people and
encapsulates even more hours and years of experience. Thank you
to all who have willingly shared this experience to swap ideas and
help others on their way. This manual would not have been possible
without you. All material in this manual has been used by permission
and provided in good faith. All techniques are suggestions only and
may not work in your circumstances.
Data Fusion and Mosaicing
Author
Stuart Nixon is the Founder and Managing Director of Earth
Resource Mapping.
Abstract
ER Mapper is unique in that its algorithm concept includes several
innovative features such as automatic data fusion and mosaicing.
Data fusion (also known as data merging) is the combination of
different types of data over the same area. Mosaicing is the tiling of
images showing different areas to generate a composite view of a
larger area.
Introducing data fusion and mosaicing
An integral part of image processing is the fusion and mosaicing of
different types of data.
Fusion
1. Data fusion is the combination of different types of data over the
same area in some way. Common uses include:
• Sharpening Landsat TM imagery with SPOT Panchromatic imagery.
• Fusion of airborne magnetics data with radiometrics data.
• Generating a map that shows the existing known road network as a GIS vector map over an image which highlights road changes between years, and all of that over a backdrop satellite map.
• Producing several options for powerline placements, by showing land use within 200 meters of the alternative routes and using a mosaiced set of airphotos as a backdrop.
• Showing land use changes computed from classified satellite images. Also, measuring all changes in land use related to vegetation only, by removing or highlighting natural forests.
• Showing existing oil wells over 3-D seismic data by highlighting areas worthy of further exploration.
• Showing geochemical assay information over mineralogy, classified from a radiometrics survey, and draped over a magnetics survey. This highlights subsurface structure and correlates it with known geochemical assays.
Mosaicing
Mosaicing is the tiling of more than one image to generate a
composite view of larger areas. The typical example is the
generation of a high resolution regional view based on the tiling of
40 or 50 air photos. Another example is the mosaicing of several
cloudy satellite images to produce a cloud-free image. This is similar
to data fusion.
ER Mapper Fusion and Mosaicing
Unlike older image processing systems which have a number of
programs that do specific functions, ER Mapper is built on a unique
algorithm concept which encompasses several innovative ideas.
These are fundamental to the way that ER Mapper works. Thus,
ER Mapper has:
• Algorithms, for generating an image from one or more source raster and vector datasets.
• Automatic data fusion, allowing a particular combination of two or more datasets over the same area.
• Automatic mosaicing, allowing the mosaicing of two or more datasets which cover an area.
• Arbitrary raster dataset resolutions and formats, as raster datasets can be fused regardless of their resolution or formats. For example, an 8-bit integer based Landsat TM image with a ground resolution of 28.5 meters can be combined with a floating-point based elevation image with a ground resolution of 100 meters.
• Interactive integration of datasets, as it is not necessary to write intermediate datasets prior to integrating different data.
• Interactive formula and kernel (filter) processing, for complete integration of raster and vector data, with the ability to specify the priority of different layers of data.
ER Mapper possesses a combination of different processing functions
giving it considerable power. These functions can be combined in an
almost limitless number of ways.
Before different datasets can be fused or mosaiced together, they
must be in the same map projection (although they may have
different spectral resolutions, data format and cell size, and cover
different areas).
What you need to know
This chapter assumes that you are familiar with the algorithm
concept and how to carry out a number of common ER Mapper
functions such as how to:
• Load, save, and modify algorithms;
• Add layers to an algorithm, including data fusing (different layer types) and data mosaicing (the same layer types);
• Modify a formula within a layer;
• Change transform limits to rescale data, particularly after a formula has changed the dynamic range of the data;
• Use the INREGION function within a formula;
• Generate vector polygons from raster images;
• Create and use virtual datasets.
These issues are covered in detail in the Tutorial and User Guide.
Using data fusion to provide an integrated view into several datasets
There are often cases where it is desirable to see two or more types
of data at the same time. Most procedures involve one of the
following techniques, which are explained in more detail in the next
sections.
Color and intensity
We perceive object color differently compared to actual object
shape. This means that we can use structure to represent one type
of data, and color for another type of data - over the same area simultaneously. For example, we can show fire risk in color over a
monochromatic airphoto in an intensity layer. This makes it easy to
see roads that lead to an area of high fire risk.
Attribute highlighting
If the features you are interested in cover only small portions of the
entire map area, you can generate a base map, and show the areas
of interest highlighted in color. For example, a grayscale satellite
image could be used as a basemap, and over it areas that have
increased in vegetation over the past 5 years could be shown in
green, and areas that have decreased in vegetation in red. Areas of
no change would show through as the backdrop satellite image. This
technique requires deciding, on a pixel by pixel basis, the state of a
given part of the map.
Region highlighting
This is similar to the previous example, except in this case the areas
of interest are known in advance in the form of polygon regions. This
could be obtained from a GIS system or from earlier classification of
a satellite image. An example of this type of data fusion would be to
show classified forest stands, only for the regions (polygons) of
interest, rather than for the entire area. Again, a backdrop image
could be added, to show in the areas that are not the regions of
interest. The region highlighting technique is suited to both raster
and vector data. A vector example would be to only show vectors
within a given county.
Vectors over raster data
Selective use of vectors in front of raster data allows fusion of
information gathered from raster and vector data. An example would
be to view access roads into a forest stand by overlaying it on a
classified image of the forest stand.
By combining these different techniques on a single map or image,
it is possible to show quite complex information simply. It is best to
start with one technique, then add additional techniques to the map
as required. Only two or three different types of information should
be shown on a single map. If more information is required it is better
to create two or three maps of the same area.
ER Mapper's algorithm concept works well here because it is possible
to generate many different views from the same data, without
requiring intermediate datasets to be computed or stored.
Furthermore, as the algorithm is run on demand to compute the final
image, the final image need not be stored on disk. Only the
algorithm (which requires small disk space) need be stored, and the
resultant image called up when required.
Data fusion using color and intensity
Images made by data fusion using color and intensity are easy to
generate, and very easy to explain and interpret. Some common
types of images visualised in this way could be:
• Fire danger maps for an area, color coded red as high risk, and scaled down to green as low risk, over an air photo of the same area. It is easy to correlate the risk of a fire (red being high risk) against the actual terrain, or against access into areas of high fire risk.
• Forest management maps showing different types or growths of forest as a color coded image over a satellite image, to plan the placement of access roads and fire breaks.
• Mineral exploration maps showing radiometrics in red, green and blue (potassium, thorium and uranium respectively) over a magnetics image processed to show structure. The resultant image makes it easy to correlate radiometrics to show mineralogy over the sub-surface magnetics structure.
These types of maps are created by using color to represent one
type of data, and intensity to show another type of data.
The human eye is sensitive to small changes in structure (for
example, a road running through an image), and to overall patterns
in color. This means that high variability information (that is, high
frequency information) should be shown as intensity, whereas
overall trends or patterns should be shown in color. When looking at
data, it generally becomes obvious as to which way one should
represent the data.
Consider the case of a map that combines precipitation, measured every 1 kilometer, with a map of vegetation derived from a 30-meter pixel satellite image. In this instance, the precipitation (which changes slowly over distance) would be shown in color, whereas the vegetation would be shown as intensity.
Common ways of fusing data
There are a number of common combinations of data. They include:
Landsat TM (30 meter resolution) and SPOT Panchromatic data (10 meter resolution)
Landsat TM data has 7 wavelengths of data, whereas the SPOT
Panchromatic data has only one wavelength of data, but at a higher
resolution. An effective technique is to combine these two sources of
data, resulting in a higher resolution image (using the 10 meter
SPOT data) with the spectral information from the Landsat TM (using
some combination of the 7 bands). In this case, color is used to show
the Landsat TM image (often as a red-green-blue image) and
intensity is used to show the SPOT Pan image.
Landsat TM and airborne magnetics surveys
This allows surface geology—mapped from the Landsat TM
imagery—to be correlated with sub-surface magnetics. Common
techniques are to show the Landsat TM in color, and the structure
from the magnetics image as intensity.
Typically, the Landsat TM would be shown as red-green-blue (bands
7, 4, 1, are good to highlight geology), and the magnetics image is
“sun-shaded” to highlight structure, which is used for intensity in the
image.
Radiometrics (potassium, thorium and uranium) and airborne magnetic surveys
Once again, radiometrics is shown in color, and magnetics in
intensity.
Classified images that can be shown over air photos or high resolution satellite images
This combination gives a feeling for locality and structure to the
classified imagery.
Combining some information (vegetation, water shed factors, fire hazards, etc.) with elevation
The information (for example, fire danger level) is generally shown
in color, against the elevation data which is shown by intensity.
Using both absolute values and structure from a single source of data
A single source of data is usually processed to show the absolute
high/low values as a color graduation. This is integrated with the
same data processed to show structure.
This technique, known as pseudocolor colordrape, is a powerful
method for on-screen interpretation and annotation of data.
Common types of data displayed this way include:
• Elevation: where color represents the height, and intensity represents the structure (slopes and values).
• Magnetics images: where color represents the height, and intensity represents the structure from the magnetics.
• 3-D seismic two-way-time horizons: where color represents the time response on the two-way-time, and intensity is used to highlight structure.
• Bathymetry data: where color shows depth, and intensity shows structure.
In order to fuse these or other types of data, one of several different
data fusion techniques may be used. It should be noted that this list
is not exhaustive. You may find other techniques of interest, and
choose to use them. Because data fusion is implemented using
algorithms, it is not necessary to have specialised programs to carry
out data fusion. New techniques can be implemented by the user
using the ER Mapper Graphic User Interface, within the standard
structure.
The rest of this section explains some of the common techniques in
more detail.
Pseudocolor colordrape data fusion
Pseudocolor colordrape data fusion is the simplest of all. It is often
used to show two variables: one in color, and one in intensity. The
information shown in color would be shown in a Pseudocolor layer,
and the information to be shown in intensity would be shown in an
Intensity layer.
To create a pseudocolor colordrape image, the normal technique is to display the data as separate layers, and then combine the layers.
This technique is called pseudocolor because the colors are all
generated from a single layer of data, resulting in a range of colors.
The exact colors used depend on the lookup table selected for the
algorithm. Common lookup tables range from red as high values
down to blue as low values.
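Outside ER Mapper, the idea can be sketched in a few lines. The fragment below is an illustration only, not ER Mapper internals: the linear normalisation and the simple blue-to-red lookup table are our own assumptions. One band is mapped through a color lookup table, and the result is modulated, pixel by pixel, by a second band used as intensity.

```python
import numpy as np

def blue_to_red_lut(values):
    """Map normalised values (0..1) to RGB: blue for low, red for high.

    A stand-in for an ER Mapper lookup table; any color ramp would do.
    """
    v = np.clip(values, 0.0, 1.0)
    r = v
    g = 1.0 - np.abs(2.0 * v - 1.0)   # green peaks mid-range
    b = 1.0 - v
    return np.stack([r, g, b], axis=-1)

def colordrape(color_band, intensity_band):
    """Pseudocolor colordrape: one band drives color, another drives intensity."""
    # Normalise each band to 0..1 (a simple linear transform).
    c_rng = np.ptp(color_band)
    i_rng = np.ptp(intensity_band)
    c = (color_band - color_band.min()) / (c_rng if c_rng else 1.0)
    i = (intensity_band - intensity_band.min()) / (i_rng if i_rng else 1.0)
    rgb = blue_to_red_lut(c)
    # Modulate the color by intensity, pixel by pixel.
    return rgb * i[..., np.newaxis]
```

For example, draping an elevation grid (color) over a sun-shaded version of itself (intensity) is one call to `colordrape` with the two grids.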
An example of color drape is to drape a magnetics image, as color,
over a satellite image, as intensity. Another (more common)
example is to use the same band of data for both color and intensity,
with the intensity being a processed version of the data to highlight
structure.
It is quicker to use an existing colordrape algorithm and simply
change the datasets being used. However, below is the technique to
create a pseudocolor image from scratch:
To create a colordrape algorithm from scratch
1. Create a Pseudocolor layer.
Specify the dataset, and the band in the dataset to be shown as
color. Transform the data so that the color range is acceptable.
2. Turn the Pseudocolor layer off.
This way, you can work on the Intensity layer without being
distracted by the color information.
3. Add an Intensity layer, and specify the dataset to use.
If the dataset is different to the one used in the Pseudocolor layer,
remember to click on the 'OK - this layer only' button rather than the
OK button, as the latter would also change the dataset being used in
the Pseudocolor layer.
4. Transform the data so that the intensity is acceptable.
You might need to apply a logarithmic or piecewise transform (a
linear transform with multiple segments) to get the image to look
acceptable.
5. Turn the Pseudocolor layer back on.
6. Save the algorithm so that the resulting image can be recalled at any
time.
RGBI (RGB->HSI->RGB) data fusion
The RGBI technique takes a Red-Green-Blue image (for example, a
Landsat TM image), and sharpens it by replacing intensity with data
from another source (for example, SPOT Panchromatic).
ER Mapper has an inbuilt transformation for the common RGB to
HSI, replacing Intensity, then back to RGB formula. Other variations
(such as HSV transforms or the Brovey transform) can also be
implemented by using virtual datasets and formula processing. For
the simple RGBI case, a color image is defined in a Red-Green-Blue
space. This means that three layers are defined in the algorithm—a
Red, a Green and a Blue layer. These are loaded with the data
appropriate for these colors. For example, bands 5, 4 and 1 from
Landsat TM highlight vegetation and cultural information, whereas
bands 3, 2 and 1 show a natural image.
A fourth layer, the Intensity layer, is also defined. When ER Mapper
sees an algorithm which has an Intensity layer and Red, Green and
Blue layers, it knows to automatically do the RGB to HSI to RGB
transform.
It is important to appreciate that this transform is done on a pixel by
pixel basis, and only transforms non-NULL data for a given pixel,
changing it in the Intensity layer. This is a useful feature, as quite
often two images (for example, a Landsat TM and a SPOT
Panchromatic image) will not overlap exactly. There will be some
places with both the Landsat TM and the SPOT Pan, some places with
only Landsat TM, and some places with only the SPOT Pan.
ER Mapper tries to provide the best result for each pixel.
Where both datasets exist for a pixel, the full transform is done.
Where only the intensity data exists for a pixel, that pixel is shown
in grayscale. Where only the color information exists, the raw color
information is shown without the RGB->HSI->RGB transform being
carried out which generally results in less detailed imagery at those
pixels.
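The per-pixel behaviour described above can be sketched as follows. This is not ER Mapper's internal transform: the standard library's HSV conversion stands in for the true RGB->HSI->RGB round trip, and `None` stands in for a NULL cell, purely for illustration.

```python
import colorsys

def sharpen_pixel(r, g, b, pan):
    """Replace the brightness of an RGB pixel with a panchromatic value.

    All inputs are floats in 0..1. HSV from the standard library is used
    as a stand-in for an HSI transform (value, not true intensity).
    """
    h, s, _v = colorsys.rgb_to_hsv(r, g, b)   # forward transform
    return colorsys.hsv_to_rgb(h, s, pan)     # substitute intensity, go back

def sharpen_pixel_safe(rgb, pan):
    """Handle partially overlapping datasets, as described in the text:
    fall back gracefully when one of the two inputs is missing."""
    if rgb is None and pan is None:
        return None                      # nothing to show
    if rgb is None:
        return (pan, pan, pan)           # intensity only: grayscale
    if pan is None:
        return rgb                       # color only: raw color, no transform
    return sharpen_pixel(*rgb, pan)
```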
An RGBI algorithm can be built up from scratch—often you will have
an RGB algorithm that you wish to sharpen by simply adding an
Intensity layer.
When starting from scratch, it is faster to load and modify an existing
algorithm—a similar one you have previously defined, or one of the
Templates algorithms.
To create an RGBI display using the Templates/RGBI algorithm:
1. Load the examples/Miscellaneous/Templates/Common/RGBI
algorithm into a window.
2. In the Algorithm dialog box, select the Red layer.
3. Select the dataset (for example, Landsat TM) to be used for the Red,
Green and Blue layers and click on ‘OK - all layers’.
This will change the Red, Green and Blue layers to use your new
dataset.
4. Set the band to display in the Red, Green and Blue layers (click on
each layer, then change the band).
As the Intensity layer was defined to use a different dataset from
the Red, Green and Blue layers, it will not have the new dataset.
Also, if your new dataset has a different map projection from the
template dataset, you may find that the Intensity layer has been
turned off as it has an incompatible map projection.
5. Click on the Intensity layer, and load (this layer only) the new
dataset (for example, SPOT Panchromatic).
6. For a multi-band dataset, select the appropriate band to use.
7. If the layer was turned off, turn it back on.
8. Set up the transforms.
9. Save your new algorithm if you wish to use it in the future to re-create this image.
Brovey transform RGBI data fusion
The Brovey transform is a highly effective transform that generates
a better looking image than the normal RGBI image for many types
of data, in particular for combining Landsat TM and SPOT Pan
imagery. In this case, it generates natural looking water, and a crisp
image that may be further sharpened without degradation by
applying a sharpening filter to the SPOT Pan layer.
The Brovey transform is a formula based process that works by
dividing the band to display in a given color (for example, the Green
layer) by the sum of all the color layers (for example, Red, Green and
Blue) and then multiplying by the SPOT Panchromatic layer.
Thus, to combine bands 5,4,1 of Landsat TM as Red, Green and Blue
layers, with SPOT Panchromatic as intensity, the generic formula is:
(Input1/(Input2+Input3+Input4))*Input5
We use four inputs for the Landsat TM image because it makes the
formula setup easier to modify. The above generic formula is
mapped to actual bands as follows (B5, B4 and B1 are Landsat TM,
Pan is the SPOT Pan):
Red layer:
(B5/(B5+B4+B1))*Pan
Green layer:
(B4/(B5+B4+B1))*Pan
Blue layer:
(B1/(B5+B4+B1))*Pan
It can be seen that to be effective, the Brovey transform requires:
• Pixels from the Landsat TM and the SPOT Pan to be processed together.
• Floating point operations.
ER Mapper can carry out the complete Brovey transform in a single
step, without requiring intermediate datafiles to be created. Also, the
Landsat TM and the SPOT Panchromatic data remain at their original
resolutions—it is not necessary to regrid the Landsat TM down to 10
meter resolution first.
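The three layer formulas above can be sketched per pixel in Python (illustrative only; band names follow the TM 5, 4, 1 plus SPOT Pan example, and the divide-by-zero guard is an added assumption):

```python
def brovey_pixel(b5, b4, b1, pan):
    # Brovey transform for one pixel: each color band is ratioed to the
    # sum of the three color bands, then scaled by the pan value.
    total = b5 + b4 + b1
    if total == 0:
        # guard for completely dark pixels (an added assumption)
        return (0.0, 0.0, 0.0)
    return (b5 / total * pan, b4 / total * pan, b1 / total * pan)
```

The returned tuple is the (Red, Green, Blue) output for that pixel.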
To use the Brovey Transform you must create a virtual dataset
containing both Landsat TM and SPOT Pan data and then apply the
processing to it.
To create a virtual dataset with Landsat TM and SPOT Pan data
1. Create an 8 band virtual dataset that contains the 7 band Landsat
TM image and the 1 band SPOT Panchromatic image.
Although we are only going to use three of the Landsat TM bands,
we may as well create a virtual dataset with all seven Landsat TM
bands, as it does not add to storage space (a virtual dataset is simply
an algorithm that looks like a dataset—it is computed on demand)
and allows you to try different RGB band combinations at a later
date.
The easiest way to create the virtual dataset is to use an existing
template. For Landsat TM and SPOT Panchromatic, there is an
existing template that can be used called:
examples\Miscellaneous\Templates\Virtual_Datasets\Landsat_TM_and_SPOT_Pan
2. Load this template algorithm.
3. Change the datasets to reflect your own Landsat TM and SPOT
Panchromatic datasets.
4. Save your new algorithm as a virtual dataset, ideally in the area that
you store the true Landsat TM and SPOT Pan images.
Follow the convention of ending virtual dataset names with the
letters _vds, making it easy to see at a glance that a dataset is really
a virtual dataset. The Landsat TM image represents the first 7 bands
of the algorithm/virtual dataset, and the 8th and final band (and also
layer) is the SPOT Pan layer.
Make sure the SPOT Pan layer is turned on after you finish loading
it. Once you create this virtual dataset, it can be used by any
algorithm requiring processing of both Landsat TM and SPOT Pan
within the same formula.
To use the Brovey Transform algorithm
1. Load the
examples\Functions_And_Features\Data_Fusion\Brovey_Transform
algorithm.
Note that it has three layers: Red, Green and Blue (intensity is
computed from within the formula for each layer).
2. Select one of the layers and change the dataset to your new virtual
dataset.
3. Click ‘OK - all layers’ to load the virtual dataset into all layers.
4. Adjust the transforms if required.
You can sharpen the image without undue degradation by applying
a sharpening filter to the SPOT Panchromatic. You can apply a
sharpening filter to the SPOT Pan input for each of the layers (you
need to load the filter three times, once for each layer), or you could
modify the original virtual dataset to sharpen the SPOT Pan layer.
Alternatively, you can create a new layer in the virtual dataset called
‘0.65_sharp_Pan’ and add a sharpening filter to it.
The Brovey transform works well for a wide range of data, and in
general is a better choice than the RGBI transform. However, because
it changes the colors (generally for the better), the RGBI transform is
the better choice when a pure intensity-sharpened image is required.
Data fusion using attribute highlighting
The attribute highlighting technique is good for highlighting certain
parts of the image, based on a pixel by pixel formula. The general
concept is to define a backdrop image (depicting non-valid
attributes) and add another layer to emphasize the highlighted parts
of the image (depicting valid attributes).
For example, an effective image highlighting water in a region can
be created using SPOT Panchromatic for the backdrop, and
computing and highlighting water based on the Landsat TM imagery.
The SPOT Panchromatic would be used for the backdrop because of
its superior resolution (10 meters), whereas the Landsat TM would
be used to decide if water exists, as water cannot be reliably
detected from the single band 0.65µm SPOT Panchromatic imagery.
To create an algorithm for
data fusion with attribute
highlighting
1. Create an algorithm which displays the backdrop image.
This involves creating a Pseudocolor layer loaded with the SPOT
Panchromatic image. Use a grayscale lookup table to create a simple
grayscale image. A more complex color backdrop image could be
created; however, a grayscale backdrop makes it easier to see
highlighted areas.
2. Highlight required areas.
For each type of highlight to be shown, create a Classification (not
Classification Display) layer. A Classification layer is a layer that has
a higher priority than normal raster layers (such as Pseudocolor or
Red), so it shows on top of any normal raster data.
A Classification layer will only be shown where the pixel value is
non-NULL. Note that non-NULL is not the same as non-zero. If, for a
given pixel, more than one Classification layer is valid, the layer with
the highest numerical value for that pixel is shown.
So, to show highlights using the Classification layers:
3. Add a Classification layer.
4. Load the dataset to be used for decision making.
For example, for water detection, you could use Landsat TM.
5. Set up the formula for the layer, to return 1 if true for this pixel, or
NULL if false for this pixel.
Many common formulas have already been defined and can simply
be loaded with the Load Formula menu inside the formula dialog
box. You can also create any formula that returns a true/false type
of answer.
As water is a high absorber of infrared, the infrared channels from
Landsat TM have a very low reflectance over water. The exact values
depend on the region. Generally, any value below about 20 for band
5 can be considered to be water. Use ER Mapper Traverses and Cell
Coordinates to look at pixel values point by point to help decide the
exact value for the threshold. A more robust water test is to divide a
high reflectance wavelength (blue or red) by a low reflectance
wavelength.
For the simple water test case, the formula for the Classification
layer would be:
IF INPUT1 < 20 THEN 1 ELSE NULL
Where INPUT1 is mapped to one of the infrared channels, for
example Band 5:
IF B5:1.65µm < 20 THEN 1 ELSE NULL
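As a sketch, the same per-pixel test can be written in Python (illustrative only; None stands in for ER Mapper's NULL, and the threshold of 20 is the region-dependent example value from the text):

```python
NULL = None  # stand-in for ER Mapper's NULL

def water_class(b5):
    # IF B5:1.65um < 20 THEN 1 ELSE NULL, per pixel
    return 1 if b5 < 20 else NULL
```

Pixels returning NULL are left transparent, so the backdrop shows through.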
6. Select a color for the highlighted areas.
For example, choose blue for water, by clicking on the color chooser
button on the Classification layer.
7. Only areas that are water will be shown in the color selected. For
other areas, the underlying backdrop image will be visible.
The above is a simple example of how to highlight certain parts of
the image based on conditions. Complex maps can also be generated
by combining different techniques.
More complex examples
of attribute highlighting
Some examples of additional attribute highlighting techniques
include:
1. Combining different datasets to show time-based changes
By comparing two classified images, changes in certain classes can
be picked out and highlighted. To use a formula that accesses data
from different datasets, a virtual dataset that combines the different
datasets must be defined.
2. Assuming two classified datasets for 1990 and 1994
For example, with class number 12 for urban development, the
following formula would highlight urban growth between 1990 and
1994:
IF CLASS_1990 != 12 and CLASS_1994 = 12 THEN 1 ELSE NULL
3. Highlighting more than one attribute
For example, one Classification layer could be colored red, to show
areas of vegetation decrease, and another layer could be colored
green, to show vegetation increase.
4. Only showing attribute data within a region.
For example:
IF inregion('Colorado') and B5:1.65um < 20 then 1 else NULL
would highlight water only in the region defined as Colorado.
5. Using noise removal or averaging filters to remove small areas, to
show a smoother image, or to highlight only larger areas.
It is important to put any filters before rather than after the formula,
as the formula inserts NULLs into the data, and the filters would
cause the resulting data to shrink by rejecting areas bounded by the
NULLs. If the formula is a true/false from complex input data, create
a virtual dataset which generates the desired data, and then run that
data through a filter before the final ‘IF <true> THEN 1 ELSE
NULL’ formula.
Two good filters are the median filter, to remove small areas, and
smoothing filters, to generate more regular looking areas.
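A small 1-D Python sketch (illustrative only, with a hypothetical 3-cell median filter) shows why the filter goes before the formula: the median removes a single-cell noise spike before the formula inserts its NULLs:

```python
from statistics import median

def median3(row):
    # 1-D, 3-cell median filter; edge cells are passed through unchanged
    out = list(row)
    for i in range(1, len(row) - 1):
        out[i] = median(row[i - 1:i + 2])
    return out

def classify(row, threshold=20):
    # IF value < threshold THEN 1 ELSE NULL, per cell
    return [1 if v < threshold else None for v in row]
```

Classifying the raw row [10, 12, 90, 11, 13] leaves a NULL hole at the spike, whereas filtering first removes the spike so every cell classifies; a filter applied after the formula could only enlarge the NULL hole.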
Data fusion using region highlighting
Region highlighting entails using the INREGION() function within
formulas to decide if the area to display falls within the region(s) of
interest. It will only be displayed if it does.
As with attribute highlighting, this technique can be combined with
backdrop maps to generate detailed information only within areas of
interest.
For example, a grayscale SPOT Panchromatic image could be used
as a backdrop behind a classified image of Landsat TM, that is only
shown within certain areas of interest (perhaps forest stands).
The region highlighting technique (like attribute highlighting) can be
applied to any raster layer type. However, the attribute highlighting
technique is most effective for Classification layer types.
In the following examples, the term INREGION('my region') returns
true if the pixel is inside the region named 'my region' and false
otherwise, whereas the term INREGION(REGION1) means that the
variable REGION1 must first be mapped to a valid region for the
dataset before the formula is used. By default, when adding a region
variable (REGION1) to a formula, the region 'All' is used, which is
unlikely to be the region you want.
It is much better to use REGION1 (or REGION2, REGION3, etc.)
and map it to a region, rather than specifying an absolute region
name, as it makes your formula much more flexible and easy to
change.
Common region highlighting techniques for different layers include:
1. Region highlighting by a single blocked out color.
Use a Classification layer set to the appropriate color, with the
formula:
IF inregion(REGION1) THEN 1 ELSE NULL
Use ‘... ELSE NULL’ not ‘... ELSE 0’, which would set the entire
image to be classified.
2. Classification highlighting only within a region of interest.
Use a Classification Display layer (not a Classification layer), and the
formula:
IF inregion(REGION1) THEN INPUT1 ELSE NULL
Use ‘... THEN INPUT1 ....’, as the Classification Display layer
type takes the input value, and runs it through the colors defined
for the classified file. This can be used to advantage. For
example, to display certain classes only within a region, you can
use: IF inregion(REGION1) and INPUT1>10 and
INPUT1<20 THEN INPUT1 ELSE NULL. This would only show
classes if they are within the range 11 to 19, and in the region
defined by REGION1.
3. Color highlighting only for a colordrape image
By retaining intensity across the whole image, this technique
preserves an overall feel for the map while drawing attention to the
area defined by the colored image. The INREGION() would be applied
to the Pseudocolor layer. The same concept also applies to RGBI
images.
4. Imagery highlighting only within a certain area
For example, it might be desirable to only show data within a certain
map sheet, or within a certain county or state. In this case, for each
raster layer, add the INREGION() to the formula:
IF INREGION(REGION1) THEN INPUT1 ELSE NULL
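The two region-clipping formulas above can be sketched per pixel in Python (illustrative only; None stands in for NULL, and the in_region flag is a hypothetical stand-in for an INREGION() test):

```python
NULL = None  # stand-in for ER Mapper's NULL

def clip_to_region(value, in_region):
    # IF INREGION(REGION1) THEN INPUT1 ELSE NULL, per pixel
    return value if in_region else NULL

def clip_classes(value, in_region, lo=10, hi=20):
    # The Classification Display variant:
    # IF inregion(REGION1) and INPUT1>10 and INPUT1<20
    # THEN INPUT1 ELSE NULL, per pixel
    return value if in_region and lo < value < hi else NULL
```

The second function passes the class value through, so the layer can run it through the colors defined for the classified file.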
Regions are associated with raster datasets. They also contain other
statistical information for the polygon region.
It is sometimes desirable to use a vector polygon as a region across
different datasets. In this case, use the vector polygons <-> raster
dataset regions function in ER Mapper to create regions within raster
datasets from vector file polygons and vice versa.
Regions are stored as polygons (not bitmaps) within the raster
dataset header. Because raster regions contain additional
statistical information, they must be associated with each raster
dataset, which is why simple vector polygons from vector files
are not used for INREGION() and associated functions within
formulas.
Fusion of vector and raster data
ER Mapper can also show vector based data over raster (and
classification) layers.
This data can come from many sources. It can be from ER Mapper
vector files, or from external GIS or DBMS systems.
Access to vector data is via a technique known as Dynamic Links. Each
link may have its own processing techniques.
Some points to consider when working with vector overlays:
• Vector layers may or may not allow additional selection and processing of the vector data. This depends on the actual link implementation.
• Data is left in the external system, and the dynamic link is run on demand each time the algorithm is processed and displayed or used to generate hardcopy. The ER Mapper algorithm always shows the latest information in the external system.
• The Raster to Vector conversion function within ER Mapper can be used to define polygons and vectors based on raster data.
• Each layer may be either a single color, or may generate multiple colors within that layer. This is up to the dynamic link of that layer. Single color layers are known as MONOCOLOR layers, and multiple color layers are known as TRUECOLOR layers. MONOCOLOR layers may be any color—but only one color per layer.
• You can change priority on vector layers, just as with raster layers. Moving a layer up or down means it will be drawn over or under another vector layer.
• The Map Composition functions can have dynamic links within map objects. This means that you can clip a vector layer (for example, a road network) to only display within a polygon region defined by a map object.
• You can add your own dynamic links to your own custom vector layers, for access to proprietary systems.
Useful formulas for data fusion and mosaicing
There are a number of formulas and processing techniques that are
useful for data fusion and mosaicing. Many of these techniques can
be applied in varied situations, so some ideas are listed here:
Using PC1 to show terrain and terrain changes
For Landsat TM, PC1 (principal component number 1) is often the
terrain. This is because the most significant change on a pixel by
pixel basis is often the effect of shadow on terrain. This is most
apparent in hilly or mountain areas, but less effective in urban
regions. Band 6 (the thermal band) should not be included in this
calculation.
By showing PC1 as intensity, good structure is visible from most
Landsat TM imagery.
By using an algorithm to compute PC1 for two different Landsat TM
images, then subtracting one from the other, areas that have had
structural change (for example, hill slippages, water erosion, sand
dune movement) become visible—without having digital elevation
changes for the area.
To generate this sort of image, create a virtual dataset that contains
the 14 bands from both Landsat TM images. Then use the 14 band
virtual dataset to compute principal component number one for each
dataset; these two PC1 images are in turn used in another calculation.
The formula ‘(I1 - I2) / (I1 + I2)’, where I1=[scene#1 PC1] and
I2=[scene#2 PC1], is an effective method of normalizing the data.
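The normalization formula can be sketched per pixel in Python (illustrative only; the zero-sum guard is an added assumption):

```python
def normalized_difference(i1, i2):
    # (I1 - I2) / (I1 + I2), per pixel, where i1 and i2 are the PC1
    # values from the two scenes
    s = i1 + i2
    if s == 0:
        # guard for a zero sum (an added assumption)
        return 0.0
    return (i1 - i2) / s
```

The result lies in -1..1, with values away from zero indicating structural change between the scenes.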
Using INREGION() on a conditional basis
It is often desirable to use INREGION to process only those pixels that
fall within a region (that is, within vector based polygons). For
example:
IF INREGION('Forest') THEN ...
This will process only data in the region called ‘Forest’.
Complete AND and OR logic can be used for regions. For example,
the formula:
IF INREGION('Forest') and INREGION('West County') THEN ...
will only process pixels that are in both of the regions ‘Forest’ and
‘West County’. Note that an ER Mapper region is defined as one or
more polygons, so the result of the above can be quite complex.
As another example, to process pixels that are inside the region
‘Forest’ but not in the region 'West County', and whose values are
greater than 100, use:
IF I1 > 100 and (INREGION('Forest') and not
inregion('West County')) THEN ...
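The combined region and value test can be sketched per pixel in Python (illustrative only; the regions set, holding the names of the regions containing the pixel, is a hypothetical stand-in for INREGION() lookups):

```python
NULL = None  # stand-in for ER Mapper's NULL

def classify_pixel(value, regions):
    # IF I1 > 100 and (INREGION('Forest') and not
    # inregion('West County')) THEN 1 ELSE NULL, per pixel
    if value > 100 and 'Forest' in regions and 'West County' not in regions:
        return 1
    return NULL
```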
Digital Elevation Models
Author
Stuart Nixon is the Founder of Earth Resource Mapping.
Abstract
Digital Elevation Models (DEMs) are a digital representation of
elevation, organized as a regular grid of numbers. A great many
applications, such as laying pipes or assessing fire risks, have a
direct use for DEMs. This study examines how DEMs are created,
ways to enhance DEMs and extract information from them, how to
combine DEMs with other types of data, and how to do what-if
processing with them.
Introduction
Digital Elevation Models are represented as a regular grid of
numbers. The spacing between grid elements represents the interval
between samples. For example, a 30 meter grid spacing means that
there is one elevation sample every 30 meters. The numerical value
in each grid element represents the elevation at that point. This is
often a floating point number, to ensure that small variations in
elevation can be recorded accurately.
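The grid layout described above can be sketched in Python (illustrative only; the top-left origin and north-up row convention are assumptions for the sketch, not a DEM file standard):

```python
def grid_index(easting, northing, origin_e, origin_n, spacing=30.0):
    # Map a ground coordinate to the DEM grid cell that samples it,
    # for a grid whose origin is the top-left sample and whose rows
    # run south; spacing is the sample interval in meters
    col = int((easting - origin_e) // spacing)
    row = int((origin_n - northing) // spacing)
    return row, col
```

With a 30 meter spacing, a point 65 m east and 65 m south of the origin falls in cell (2, 2).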
This chapter discusses the generation and quality of DEMs, the
display and integration of DEMs with other types of data, and how to
carry out what-if processing and classification of DEMs and related
data.
Sources and quality of DEM data
This section lists and describes the different sources and quality of
DEM data.
DEM creation
DEMs may be created from a number of sources, including:
• Measuring selected elevation points over an area, and then gridding them into a regular grid using a gridding package. This technique has several disadvantages, including the effort required to collect the individual elevation points, and inaccuracies in the regular grid in areas where limited numbers of points have been gathered. Any area that has had gravity surveys will also have elevation heights collected at each point. Otherwise, the source for individual elevation points is likely to be analytical plotters.
• Generating a DEM from a contour line source of elevation, for example, from a map or from an LIS system. Contour maps are generally created using analytical plotters, and thus have a large range of points gathered. Also, contour maps typically have valley floors and ridge lines defined, ensuring that these crucial aspects of a DEM are well preserved. When generating a DEM from a contour source, care must be taken to ensure that the resultant DEM does not have a “terraced” effect between contour lines.
• Interferometric SAR processing. This computationally intensive technique uses phase differences between radar signals to derive a complete elevation map. Advantages include good accuracy regardless of climatic conditions. Disadvantages include some range of errors over the resultant DEM, and currently a limited range of SAR systems capable of being used to generate these DEMs.
• Softcopy Photogrammetric generation of DEMs by comparing stereo airphoto or satellite images using autocorrelation techniques. This technique allows a quite dense mesh of DEM points to be computed. There can be a wide range of output from different DEM generation systems. Some systems generate a sample of DEM points, which are then gridded up to a higher resolution. Other systems generate a high number of points, but a significant number of points can be inaccurate.
DEM quality
An important consideration is the quality of the DEM being
generated. Ideally, you should know the original source of data and
the processing technique used to convert the original elevation
samples into a regular grid.
Before working with a DEM, your first step is often to carry out real
time shading on the DEM grid, to observe if any gridding errors exist.
You should examine the DEM from different elevation and azimuth
perspectives, as many errors only become obvious from certain
angles or while interactively carrying out the shading. Errors to look
for include:
• Terraces of regular flat areas (indicating a poor gridding result from a contoured line source).
• Large, very regular areas as compared to areas with many small variations. This might be an indication of a poor source of data, with only a few grid points in the large regular areas.
• Regular shadow strips in the DEM, in a particular direction, often more visible when the shading elevation is near the horizon. This is indicative of a gridding system introducing artifacts in a certain direction. This is a very common gridding problem, and shows up on most DEMs generated using gridding techniques. The problem can often be minimized or eliminated with different gridding techniques. Also, FFT filtering of the data in the frequency domain can remove gridding artifacts. Another cause of regular variations in DEMs can be systematic errors in autocorrelated DEMs, in particular from satellite images.
• Areas with sharp variations in height. This is often an error introduced by automated DEM generation systems which incorrectly assign an elevation height to points, due to shading or other problems in the original stereo pair. If the data can’t be reprocessed, selective use of a median or other form of ranking filter can remove most of these errors; as with all filtering techniques, care must be taken that valid data is not removed during the filtering process.
• Incorrect registration. A surprising number of DEMs are gridded using incorrect map projection/datum details—or the data may have been generated using a map projection/datum different from that of other data you are working with. Unlike airborne photo and satellite data, where mis-registrations are obvious, mis-registrations in DEMs may not be obvious unless looked for. When looking for mis-registration, either examine the elevation value at several known locations (use the geoposition - zoom center of image window to center an image window on a specific location), or compare river forks, etc., with the same location in other imagery or vector data over the same area. If the DEM is in a different map projection and/or datum, use the Image Rectification - Map Projection to Map Projection function to rectify (warp) the DEM into your desired projection.
Displaying and processing DEMs
Once your DEM has been gridded to your satisfaction, and is in the
appropriate map projection and datum, a considerable range of
processing and display techniques are available to you to extract the
information held within a DEM.
In general, these can be divided into three categories:
• Displaying DEM data in a number of ways.
• Integrating and displaying DEM data with other sources of vector and/or raster based data.
• Processing DEM data using what-if rules.
The following sections examine the above in more detail.
Typical DEM applications
Because the processing and display of DEM data depend largely on
the target application, ER Mapper’s ability to quickly process and
integrate DEMs makes it easy to put DEM data to effective use. For
example:
• You need to identify areas of high fire risk, considering the direction of prevailing hot winds, and the dry vegetation content. This can be quickly done by comparing the dry vegetation index from Landsat TM with the slope (steepness) and aspect (direction) of a DEM. A threshold can then be applied to the resultant image, indicating areas of high fire risk.
• A new water pipeline is to be constructed. Key factors are the slope, and the variation in height, between proposed starting and ending points for the pipeline. An image showing regions of low slope, and within the desired height band, can be constructed. This image can then be draped over the original DEM and viewed in 3-D to select optimum pipeline locations.
• You need to identify fire-break roads that must be constructed in a forested area, to reduce fire impact. After doing on-screen annotations identifying the required roads, a stereo pair hardcopy image can be generated which is a merge of DEM data to show height, air photo data, and vector based annotations showing where the required fire-break roads must be built. These stereo pair images can be given to construction teams in the field, who now have accurate maps that show the location of the required fire-break roads in context with air photo images of the area. The stereo pair images appear as true stereo images when viewed through a standard stereo pair viewer.
• A new development which must meet strict visibility impact guidelines is to be carried out. By draping air or satellite images over a DEM, the proposed development can be viewed from any position, in stereo, using the real-time perspective stereo capabilities of ER Mapper.
• You wish to carry out geological interpretations in the field, to identify candidate locations for exploration drill locations, based on a combination of aeromagnetic data, geochemical data, and elevation observations. By generating a false color aeromagnetic image, adding the geochemical data (stored in vector or point form, possibly in an external system) to a DEM, and then printing a hardcopy stereo pair of the image, the geologist can take an image to the field that can be viewed in stereo using conventional stereo viewers. ER Mapper has the ability to generate a stereo pair that has one image orthographically correct and the other image containing the elevation changes. The geologist can carry out interpretations in the field on the orthographically correct image. These interpretations can then be entered into the computer back at the office using a digitizer.
• You need to observe sedimentation and erosion effects from a recent severe flood. ER Mapper can compare before and after DEM images of the affected area, highlighting areas where elevation has decreased, or increased, in different colors. The volumes of erosion can also be computed from this differencing of DEM images.
As can be seen, the applications for DEM data are quite varied, and
the actual techniques used to process, integrate, and display the
DEM data vary depending on the required results.
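As an illustration of this what-if style of processing, the pipeline-siting test above (low slope, within a desired height band) can be sketched per DEM cell in Python (all names, thresholds and the central-difference slope estimate are illustrative assumptions, not ER Mapper settings):

```python
import math

def pipeline_candidate(dem, r, c, spacing=30.0,
                       max_slope_deg=2.0, height_band=(100.0, 150.0)):
    # A cell qualifies if its slope (central differences over the grid
    # spacing, in meters) is low and its elevation lies in the band
    dzdx = (dem[r][c + 1] - dem[r][c - 1]) / (2.0 * spacing)
    dzdy = (dem[r + 1][c] - dem[r - 1][c]) / (2.0 * spacing)
    slope_deg = math.degrees(math.atan(math.hypot(dzdx, dzdy)))
    lo, hi = height_band
    return slope_deg <= max_slope_deg and lo <= dem[r][c] <= hi
```

Applying this test to every interior cell yields the mask image that would be draped over the DEM for 3-D viewing.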
Keeping this in mind, the following sections cover in detail the
different display, processing and integration that can be carried out
on DEM data. Where appropriate, real world examples are given for
the use of the different techniques.
As with any part of ER Mapper, it is important to appreciate that, like
a spreadsheet, ER Mapper provides a powerful what-if tool. The
effective use of this tool comes from understanding the underlying
functionality and how to apply it.
Displaying DEMs
There are a number of basic ways to enhance and display DEM data
(in all of the following discussion, any technique used for display can
also be output to a hardcopy print). The basic techniques can in turn
be combined to come up with superset display techniques, which
combine the DEM data with other types of data.
Many of these techniques are useful because of the nature of DEM
data: unlike the spectral responses from a satellite image, which
might vary wildly from one location to the next, there is often a
correlation between one DEM location and the next. In this respect,
there is a close affinity between display techniques for DEMs, and for
field strength datasets such as gravity datasets or magnetics
datasets. Indeed, all of the display techniques discussed here for
DEMs are also applicable to field strength datasets.
As with all ER Mapper display techniques, these can be applied
directly to the original data—or to virtual datasets containing more
than one source of data—without having to generate intermediate
image files.
Note that vector overlays and dynamic links to external systems
such as GIS and DBMS systems operate in all ER Mapper display
modes detailed below. Thus, you can, for example, carry out an
annotation interpretation while sun-shading a DEM to show
structure. Also, the following display techniques don’t have to be
applied to just the basic DEM data. For example a false color image
can be made of the Slope computed from a DEM, rather than from
the original DEM.
The basic display techniques and combination techniques are as
follows.
Grayscale image
A grayscale DEM is the simplest image that can be displayed—by
assigning a gray shade to each elevation value. A grayscale DEM
image is also one of the least useful images, as it does not enhance
structure, and as it is difficult to see changes in elevation in a gray
image (because your eyes are less sensitive to gray scale changes
than to color changes). You can generate a grayscale by clicking on
the “Common Toolbar: grayscale” button, or by loading the
“examples\Data_Types\Digital_Elevation\grayscale” algorithm and
loading your own dataset. A grayscale image can be generated using
a Pseudocolor layer in the algorithm with a grayscale lookup table
loaded, or using an Intensity layer in the algorithm which is
automatically a grayscale image (if no other raster layers exist in the
algorithm).
False color image
A false color image is similar to a grayscale image except that a
palette of colors is allocated to the different DEM values. By selecting
a color lookup table that highlights differences in values, such as the
step lookup table, you can differentiate subtle differences in
elevation height. Also, the transform can be changed in realtime to
selectively highlight a certain range of DEM values. You use a
Pseudocolor raster layer for false color images.
Realtime “Sun” shaded image
The realtime sun-shaded image is a powerful mode used to highlight
structure within a DEM. It is called “sun” shading because it
simulates how a terrain surface would look if the sun were in
different positions (defined as azimuth rotation from North, and
elevation above the horizon). ER Mapper will update the sun-shaded
image in realtime, as the “sun” is moved via the sun-shading window
for the raster layer. A sun shaded layer is normally set up as an
Intensity layer, so that it can be combined with a Pseudocolor layer
to generate a colordrape image. To produce a sun-shaded layer, turn
the sun-shading on for the desired layer. ER Mapper 5.0 can update
a real-time sun shaded image on any display type (that is, on both
24 bit and on 8 bit displays).
When the sun is overhead the resultant image shows “dip” for the
DEM. In other words, it shows features in the DEM regardless of
angle. If the DEM is a high detail DEM, and you are interested in
seeing only major structures within the DEM, you can add averaging
filters to the layer prior to carrying out the sun-shading.
When displaying a sun shaded image, using a logarithmic transform
often generates a more balanced looking image than the default
linear transform.
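ER Mapper performs this sun-shading internally, in real time. As an illustration of what such shading computes, here is a standard Lambertian hillshade sketch in NumPy. This is not ER Mapper's actual algorithm; the hillshade function name, the gradient-based normals, and the assumption that row 0 of the array is the northern edge are all illustrative.

```python
import numpy as np

def hillshade(dem, azimuth_deg=315.0, elevation_deg=45.0, cell_size=1.0):
    """Lambertian shading: dot product of the surface normal with a sun
    vector at the given azimuth (from North) and elevation above horizon."""
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    # Surface gradients from finite differences.
    dzdy, dzdx = np.gradient(dem.astype(float), cell_size)
    slope = np.arctan(np.hypot(dzdx, dzdy))
    aspect = np.arctan2(-dzdx, dzdy)   # assumes row 0 is the north edge
    shade = (np.sin(el) * np.cos(slope)
             + np.cos(el) * np.sin(slope) * np.cos(az - aspect))
    # Clip: faces turned away from the sun are simply dark.
    return np.clip(shade, 0.0, 1.0)
```

Moving the sun is then just a matter of re-evaluating with a new azimuth and elevation, which is essentially what the sun-shading window does interactively.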
Realtime combined “Sun”
and “False Color”
Colordrape image
This is a very powerful technique, created by combining a False
color Pseudocolor layer (or possibly Red, Green and Blue layers) with
a “Sun” shaded layer in Intensity. With this display technique, you
can modify the structure highlighting by moving the sun shading
position (on the Intensity layer) in real time, and you can modify the
color highlighting (on the Pseudocolor layer) in real time. You can
also modify the transform in the Intensity layer to brighten or darken
the overall image; a logarithmic enhancement works well for this.
The advantage of this technique is that you can see both absolute
elevation height variations (the color of the image) and the structure
in the DEM (the shaded intensity).
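The colordrape idea, color from the DEM values and brightness from the shading, can be sketched outside ER Mapper. In this NumPy illustration (the colordrape function and the simple blue-to-red ramp are hypothetical, not ER Mapper's color tables), the normalized height picks the color and a 0..1 shade image modulates its intensity:

```python
import numpy as np

def colordrape(height, shade):
    """Color from height (blue -> red ramp), brightness from shade.
    Both inputs are 2-D arrays; shade is assumed already in 0..1."""
    t = (height - height.min()) / np.ptp(height)   # normalized height
    # Simple ramp: low heights blue, high heights red.
    rgb = np.stack([t, np.zeros_like(t), 1.0 - t], axis=-1)
    return rgb * shade[..., None]                  # modulate by intensity
```

This mirrors the two-layer algorithm: the Pseudocolor layer supplies the color ramp, and the sun-shaded Intensity layer supplies the per-cell brightness.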
Shiny (HSI model)
Colordrape image
The “shiny” colordrape is a technique developed by an ER Mapper
user to provide a different reflectance model for colordraped images.
It is a good example of how quite different display techniques can be
achieved, using the standard ER Mapper software and user interface.
As with the standard colordrape image, this technique results in a
colordrape image showing the DEM height as color, and the structure
from the DEM as shaded intensity.
Unlike the standard colordrape model, which uses two layers
(Pseudocolor and Intensity) with an internal calculation of
reflectance, the shiny colordrape uses three layers in the algorithm:
Hue, Saturation and Intensity. The original height values are
assigned to the Hue layer, and the shaded intensity to both the
Saturation and the Intensity layers. By modifying the transforms for
the Intensity and in particular for the Saturation layer, we can
achieve a “shiny” look to the colordraped image.
In this processing, we apply transforms after the sun-shading is
carried out. To do this, we use an algorithm (or virtual dataset)
containing two layers: Height (the original DEM data) and Structure
(the sun-shading we wish to use). This algorithm, instead of the
original data, is used as input to the HSI mode.
To create a shiny
colordrape image
The easiest way to create the required algorithm to input to the shiny
colordrape HSI display mode is to:
1. Create a normal colordrape algorithm (with two layers: Pseudocolor
and Intensity). The intensity layer has sun shading turned on.
2. Name the Pseudocolor “Height” to indicate that it is the original data,
and name the Intensity layer “Structure” to indicate that it is a
shaded version of the original data.
3. Save the colordrape algorithm.
4. Create an HSI algorithm with three layers.
The easiest way to do this is to load the
“examples\Data_Types\Digital_Elevation\Colordrape_shiny_look”
algorithm. Load your colordrape algorithm, saved in step (3) above,
as the input dataset for the HSI algorithm. Make sure you are
using the “Height” input for Hue and the “Structure” input for both
Saturation and Intensity.
You can now call up the transforms for the Hue, Saturation and
Intensity layers. Changing the transforms will have the following
effect, all in real time (regardless of display type):
• Hue layer: This will change the color of the image.
• Saturation layer: This will change how saturated (or white
looking) colors will be. Because this layer uses the “Structure”
input, you will not change the overall saturation but rather the
saturation in areas with changing structure. Generally, an image
that shows a white looking surface on structure changes will look
as though it is shiny.
• Intensity layer: This will change the overall intensity of the
image.
Mosaicing DEMs
In ER Mapper, any time there is more than one raster layer of a given
type, those layers are automatically mosaiced. For example, if you
have more than one Intensity layer, all the intensity layers are
automatically mosaiced together. Each of the Intensity layers can
display a different dataset, and each dataset can have a different
resolution and data format. The act of having more than one raster
layer of the same name automatically tells ER Mapper that you want
to mosaic the datasets in question—nothing else is required.
Some points that make mosaicing easy to do:
• You can use the Image Display and Mosaic Wizard on the
Common Functions Toolbar to automatically create a mosaic of
all files of the same type in a particular directory. The wizard
does all the basic setup work for you and the resulting mosaic can
then be fine tuned by hand if necessary.
• After mosaicing and balancing the DEMs, it can be convenient to
treat all the input DEMs as a single “dataset”. Unlike older image
processing systems, which require you to generate an output
dataset which is a merge of all the input datasets, you can simply
save your mosaiced algorithm, and then use the algorithm as
input to other algorithms, thus making the datasets appear to be
a single dataset. (You can also use the Save as Dataset option
on the File menu to create a true output dataset, for example to
use in an external system as a single gridded file.)
• You specify the priority of the DEMs to be mosaiced by the order
of the layers. For example, you may have several DEMs to mosaic
together of different resolutions: perhaps a coarse DEM covering
a large area, and several detailed, higher resolution DEMs over
parts of the area. In this case, you would put the higher
resolution DEMs at the top.
• DEMs quite often have leveling problems: they are not exactly
the same level along the edges, resulting in a quite noticeable lip
along each edge. You can remove this lip in several ways:
• Turn on feathering in the algorithm (the feathering menu will
automatically be enabled when you have more than one raster
layer of the same type in an algorithm). This will feather
datasets, left to right, along the overlap line.
• Define polygon regions that you wish to bound a DEM by. For
example, you might choose to use a river as the border of a DEM.
Use these regions in the layer formula, to selectively mask out
portions of a DEM, for example:
IF inregion(REGION1) THEN input1 ELSE NULL
In the above, you would use the formula window graphical user
interface to map REGION1 to the region you wish to use to bound
the dataset.
• You can use histogram equalization to equalize the histograms
for each of the DEM datasets. However, this should be used with
caution, as the resultant values for the DEMs, while looking
balanced between images, will no longer reflect the true DEM
values (this is always a problem with histogram equalization).
• Apply linear leveling across the image, using statistics for edges,
to level the image. In this case, you would apply a linear ramp
(or polynomial ramp) to a dataset to match edges exactly with
another dataset.
In most cases, mosaicing will be all that is needed. Note that, when
shading DEMs you will not see mosaic lines anyway, as a shaded
image ignores level differences between images. Thus, for most
common DEM mosaicing operations, feathering might not be needed
at all.
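The first option, feathering left to right along the overlap line, amounts to a linear cross-fade between the two datasets. A minimal NumPy sketch, assuming two strips that share a known number of overlapping columns (the feather_overlap function and its layout are illustrative, not ER Mapper's implementation):

```python
import numpy as np

def feather_overlap(left, right, overlap):
    """Blend two side-by-side strips across `overlap` shared columns,
    ramping linearly from the left image to the right one."""
    w = np.linspace(1.0, 0.0, overlap)          # weight for the left image
    blend = left[:, -overlap:] * w + right[:, :overlap] * (1.0 - w)
    # Keep the non-overlapping parts of each strip unchanged.
    return np.hstack([left[:, :-overlap], blend, right[:, overlap:]])
```

Any level difference between the strips is smeared across the overlap instead of appearing as a sharp lip.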
• If you have DEMs at very different resolutions you might find that
coarse DEMs are so coarse they are at a lower resolution than the
display (or hardcopy). Selecting Smoothing on the algorithm
will automatically interpolate images, removing the blocky pixel
look to the imagery.
Edge enhancement and other filters
More correctly termed processing techniques, these are filter operations that
process the DEM data in such a way as to highlight only certain
features and structures. Filters can be added to any raster layer by
selecting the Filter button (in the Algorithm dialog box) and
loading a filter. You can append more than one filter to a layer if you
wish. The filters process data in double floating point precision so no
DEM accuracy is lost.
Some useful filters for DEM display include:
• Sun angle filters. These filters are edge filters, and highlight
structures in a specific direction. Unlike the real-time sun-shading
mode, you must load (or create) a new filter if you wish
to change the angle of the filter. The size of the filter affects the
size of the structures being highlighted. Thus, a 3 x 3 filter will
highlight considerable detail, including smaller structures,
whereas a 9 x 9 filter will only highlight major structures present
in the DEM.
• Averaging filters. These blur the data, by taking an average of
neighboring cells. These are useful for smoothing noisy DEMs
prior to sun-shading them. Average filters are also handy for
showing high frequency information in the DEM, in other words
areas that have considerable variation. By subtracting an
averaging filter from the original DEM, the resultant image is a
high frequency image showing areas that are rough. To subtract
a filter from the original data, create a layer with the following
formula:
INPUT1 - INPUT2
On the INPUT2 input stream to the formula, add the average filter.
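The INPUT1 - INPUT2 arrangement above is a classic high-pass operation: subtracting the local average leaves only the high frequency "roughness". A NumPy sketch of the same idea, assuming a square box average (the high_frequency name and edge padding are illustrative; pure NumPy, no SciPy):

```python
import numpy as np

def high_frequency(dem, size=3):
    """INPUT1 - INPUT2, where INPUT2 is a size x size averaging filter."""
    pad = size // 2
    padded = np.pad(dem, pad, mode="edge")
    # Box average via a sliding-window mean over each size x size patch.
    windows = np.lib.stride_tricks.sliding_window_view(padded, (size, size))
    return dem - windows.mean(axis=(2, 3))
```

Smooth areas come out near zero; rough areas keep large positive or negative residuals.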
• Ranking filters. So called because they “rank” data in various
ways, ranking filters are useful for cleaning up noisy DEMs,
especially useful for DEMs that have a lot of noise due to a poor
autocorrelator taking some DEM values from tree crown height
and some values from the ground. The best filters here are
majority filters, which replace the DEM value for a cell if it is
significantly different from the neighboring cells.
Raster contour line generation
The best way to generate contours with ER Mapper is to use the
Contouring Wizard on the Common Functions toolbar.
Another approach, which generates a quick raster contour line from a
DEM, is outlined below. (Note: the
examples\Data_Types\Digital_Elevation\Contours_from_Raster
algorithm is already set up; just load it and change the dataset to
your DEM.)
To generate a simple
raster contour line
1. Add a Classification layer to your algorithm, and load the DEM into
this new layer. Select a color for the layer. This will be the color of
your contour lines.
2. For the formula for the Classification layer, define the following
formula. For the “base” variable put in the starting level for the
contour lines (for example, 1200 meter height) and for “interval” put
in the interval between contour lines (for example, 100 meters).
(CEIL(((input1 - base)/interval))*interval)
3. After the formula, add a laplacian filter. This filter will set all values
between contours to zero.
4. Change the final transform to a notch transform. The transform
should stay at zero until the value 0.5 on input, then go straight up
to maximum output (e.g. 255) then stay there until the final value
of output.
When run, the above layer will “classify” the DEM into contour lines.
It is simple to enhance the above formula to only show contours
within a certain range of DEM heights, using IF ... THEN ... ELSE ...
processing.
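The steps above, quantize with CEIL, difference with a Laplacian-style filter, then threshold with a notch transform, can be sketched in NumPy. This is an illustration of the logic, not ER Mapper's raster engine; the raster_contours name is hypothetical, and the simple difference used here wraps at the array edges, so the outermost cells are not meaningful.

```python
import numpy as np

def raster_contours(dem, base, interval):
    """Mark cells where the quantized elevation steps to a new contour level."""
    # Step 2: snap each height to the contour level at or above it.
    levels = np.ceil((dem - base) / interval) * interval
    # Step 3: a Laplacian-style difference is zero between contours and
    # nonzero wherever the quantized level changes.
    lap = np.abs(4.0 * levels
                 - np.roll(levels, 1, axis=0) - np.roll(levels, -1, axis=0)
                 - np.roll(levels, 1, axis=1) - np.roll(levels, -1, axis=1))
    # Step 4: the notch transform reduces to a threshold at 0.5.
    return (lap > 0.5).astype(np.uint8)
```

A cell is flagged when any neighbor falls on a different contour level, producing thin raster contour lines at each interval crossing.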
Because these contour lines are raster based it is best to use a
dynamic link to a contouring system for more advanced contouring.
Realtime 3-D Perspective views
The 3-D perspective viewing capability in ER Mapper can be used to
examine DEMs as a 3-D perspective. The 3-D viewer takes a
standard algorithm and uses Height layers to produce a perspective
view of the algorithm. When working with perspective views,
remember that:
• When you save algorithms, ER Mapper will store your view
position, background color, and so on, to allow you to easily
generate the same perspective view again.
• If you don’t have any Height layers in your algorithm the 3-D
viewer will search for and use Intensity layers instead. Thus, you
can directly load a normal Colordrape (Pseudocolor/Intensity)
algorithm into the 3-D viewer without changing Intensity to
Height and it will work.
• Any algorithm (that has one or more Height and/or Intensity
layers) can be used to generate 3-D perspectives. Thus, you can
drape a satellite image and GIS vectors over a DEM and view the
result as a perspective.
• ER Mapper has two different 3-D viewers. One, which runs on all
platforms, is a software based viewer. The other, which is not yet
available on all platforms, is faster and also allows stereo glasses
to be used.
• You can drape vectors over perspective views; however, you
need to process the view at a higher resolution in order to make
effective use of vector overlays.
• You can run the viewers in fallback modes, which process the
image at a lower resolution while moving the image, then redraw
at a higher resolution once you stop moving the perspective.
• On workstations capable of real time texture mapping, the ER
Mapper 3-D viewer can generate the texture for the displayed
image at a different (higher) resolution to the triangulated mesh
used for height. You may need to try different resolution texture
maps and different resolution triangle meshes to obtain the
optimum resolution and speed for your workstation.
Realtime 3-D Flythrough views
The flythrough views are available from the Realtime 3-D viewer in
ER Mapper. The notes in the 3-D Perspective section above are also
applicable to Flythrough views. In flythrough mode you can bring up
a bird’s eye view which allows you to see the look direction for your
current flythrough position.
Stereo Realtime 3-D
Perspective or Flythrough
views
On some platforms, you can view both perspective and flythrough
views in true stereo. To do this you will need stereo glasses for your
computer. Some computers, such as Silicon Graphics workstations,
come “stereo ready”. You just need to add a stereo transmitter and
stereo glasses. Other computers require specialized display boards
and/or displays to accept the higher refresh rates used by stereo
glasses.
The notes in the 3-D perspective section above are also applicable to
stereo mode.
Hardcopy Left/Right
Stereo pair images
Any algorithm that can be displayed as a perspective (or flythrough)
can also be output as a hardcopy stereo pair. In this case, you select
Stereo... options in the Print window (from the File menu).
Some points to consider when generating stereo pairs:
• You can vary the stereo angle (which varies the apparent height)
of the stereo pair.
• When generating stereo pairs, if you wish to annotate over the
images, generate one of the images as an orthoimage. This
image will then have no height distortions (all distortions are in
the other image of the pair) allowing you to annotate on the
image in the field and then digitize directly from the image.
• Vectors, and any other form of dynamic link such as links to GIS
systems and to DBMS systems, can be included over stereo pairs.
Because stereo pairs are produced at hardcopy device resolution,
the vectors will be of high quality.
• When generating stereo pairs, each image printed is limited to a
maximum of the size of one page of output. However, as you can
output to graphics formats, you can if desired generate very
large stereo pair images by generating to a graphics output
format such as TIFF.
Other display technique combinations are also possible. For
example, you can generate a Red, Green, Blue image, each layer
consisting of the DEM data shaded from a different direction. While
somewhat confusing for interpreting complex areas, such an image
allows easy examination of the major strike directions for the entire
area in a single image.
Integrating DEM
and other data
It is quite often desirable to combine DEMs with other sources of
data: for processing (discussed in more detail in the following
section); or for integrated display of DEMs and other data.
There are several ways to integrate the display of DEM data and
other sources of data. These include:
Colordrape et al
All of the display view modes discussed above can be used for
integration. For example, the NDVI (a vegetation index that
measures biomass) from Landsat TM could be color coded, with red
being high vegetation content, through the spectrum to blue being
low vegetation content. This could be draped over the DEM in an
Intensity layer (or viewed in 3-D), to see the correlation between
vegetation and DEM height.
RGB-Height display views
The RGB-Height combination is often used when you have a color
image, for example a satellite image, which is to be shown over a
DEM image. The term RGB-Height is not a special display mode: it
simply indicates an algorithm that has Red, Green and Blue layers
(containing the satellite image), and a Height layer containing the
DEM. If you use an Intensity layer instead of a Height layer, and turn
on sun-shading for the layer, you get a form of colordrape image:
that is, RGB, converted to HSI color space, with intensity replaced by
the Height from the DEM, and then converted back to RGB
colorspace.
Classification-Height
display views
In this combination, imagery that has been classified into land use
can be displayed as Classification Display layers, over a DEM in an
Intensity or Height layer.
Raster and Vector data
integration
Algorithms that process and display DEMs can also show other data.
This other data can be data in vector formats (for example GPS
coordinate information from a GPS unit, shown as a DXF format file
overlay), or data from a GIS (for example displaying a road network
from an ARC/INFO coverage over the DEM), or from dynamic links
to external products, such as a custom link to an Oracle database
showing microcell locations for a cellular network.
All of the normal vector dynamic link capabilities and vector editing
can be carried out over processed DEM data.
These links are also visible in 3-D views (perspectives, flythroughs,
and hardcopy stereo pairs) that can be generated by ER Mapper.
Processing DEMs
In addition to the display modes described in the previous section,
DEMs can be processed using what-if processing. This adds
considerable value to the use of DEMs, especially when the DEM data
is combined with other types of data.
In general, there are a number of types of processing that you might
want to do with DEM data, reflected by the nature of DEM data. This
processing includes:
• Generating slope from a DEM. Slope is a measure of how steep
any given area of a DEM is. Two slope filters are provided: slope,
which returns a number from 0 to 200 as a percent of slope,
where 0% is no slope, 200% is maximum slope, and 100% is a
45 degree slope; and slope_degrees, which is slope measured as
degrees, from 0 degrees (no slope) to 90 degrees.
The slope filters have been implemented as standard filters in ER
Mapper. You can create your own variations of the slope filter in the
same way—these filters are not hard coded into ER Mapper.
As with all ER Mapper processing of this nature, a feature is that the
slope filter can be directly applied to the DEM, and the results used.
It is not necessary to create an intermediate “slope” dataset on disk.
Because filters are processed in double floating point precision, the
results are accurate to the accuracy of the original data.
• Generating aspect from a DEM. Aspect is a measure of the
direction a slope is facing, with 0 being North, 90 being East, 180
being South, and so on. Aspect is implemented as a standard
filter, which you may inspect and use as a basis for creating your
own specialized slope filters.
• Selecting a range of heights from a DEM. For example, there
might be a range of heights that are allowable for building a road
or pipeline (in conjunction with an allowable range of slopes
within that height range).
• Selecting DEM values within certain regions.
• Abstracting data. Using Virtual Datasets or Algorithms to give
abstracted views of the original data.
• What-if processing on the above can be carried out to perform
general queries, such as all areas with a limited range of slope,
within a certain height range.
• Combining DEM data with other sorts of data, for what-if
processing. For example, highlighting all polluted areas (from an
assay database) at lower height levels, or highlighting all areas
with a certain slope that are very wet, indicating possible mud
slide danger areas.
• Combining DEM data with other sorts of data for classification
work. It has been shown that classification techniques can be
dramatically improved if DEMs are used as one of the layers of
the input data being classified. For example, certain types of
trees only grow above certain heights. In this case, the raw DEM
height data would be used. The slope from a DEM can also be
used as input to classification techniques, to improve
classification in regions where slope makes a difference to the
type of vegetation cover.
The following sub-sections discuss each of these types of processing
in more detail. In many cases, combinations of the above are used,
along with combinations of display methods and integration with
other types of data.
Slope from a DEM
Slope from a DEM is computed by comparing the value for a DEM cell
with the value of its neighboring cells. The result is normalized, so
that slope is measured regardless of the direction of the slope.
The standard slope filters supplied with ER Mapper work from a 3 x
3 region. This can be expanded by modifying the slope filters, or
smoothing or other averaging filters can be inserted into the layer
prior to the slope filter to smooth the DEM prior to computing slope.
(Large slope variations over small areas can be a problem with DEMs
generated by some stereo pair autocorrelation DEM software
products.)
The slope filter is added to a layer simply by adding the
filter_DEM/slope or filter_DEM/slope_degrees filter to the layer. The
slope filter returns a slope number from 0 to 200; the slope_degrees
filter returns a number from 0 to 90.
Aspect from a DEM
Aspect from a DEM is computed in much the same way as slope,
except that the value returned by the filter is the direction of slope,
measured in degrees from 0 to 360.
The aspect filter is added to a layer by adding the filter_DEM/aspect
filter to the layer. The filter returns a number from 0 to 360.
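As an illustration of what the slope and aspect filters compute, here is a NumPy sketch using finite differences over the local neighborhood. The linear mapping of the 0 to 200 percent scale (100 at 45 degrees) and the assumption that row 0 of the array is the northern edge are readings of the description above, not ER Mapper's exact 3 x 3 kernels:

```python
import numpy as np

def slope_percent(dem, cell_size=1.0):
    """Slope on the 0-200 scale described above: 0 flat, 100 at 45 degrees."""
    dzdy, dzdx = np.gradient(dem.astype(float), cell_size)
    degrees = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))
    return degrees / 45.0 * 100.0   # linear reading of the 0-200 scale

def aspect_degrees(dem, cell_size=1.0):
    """Downhill direction in compass degrees (0 North, 90 East),
    assuming row 0 of the array is the northern edge."""
    dzdy, dzdx = np.gradient(dem.astype(float), cell_size)
    return np.degrees(np.arctan2(-dzdx, dzdy)) % 360.0
```

A plane rising uniformly to the east, for example, has a constant 45 degree slope (100 on the percent scale) and a west-facing (270 degree) aspect everywhere.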
Selecting a range of
heights from a DEM
Selecting ranges of heights from a DEM is achieved by the use of IF
... THEN ... ELSE ... logic within a layer formula. For example, to only
show DEM heights between 1200 and 1500 meters, the following
formula would be used (INPUT1 would be mapped to the DEM height
band):
IF input1>=1200 AND input1 <=1500 THEN input1 ELSE NULL
The use of upper and lower case in the above formula is optional; it
is only used to clarify the formula.
If the above formula were added to a Pseudocolor layer for a
colordrape algorithm (an algorithm with both a Pseudocolor layer
and an Intensity layer) the image would be grayscale in areas
outside the height range of 1200 to 1500, and colored in areas within
that range. This is an effective way to draw attention to areas that
meet the height range criteria.
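The same IF ... THEN ... ELSE NULL selection can be mimicked in NumPy, with NaN standing in for ER Mapper's NULL cell value (a sketch with a hypothetical helper name, not ER Mapper code):

```python
import numpy as np

def height_range(dem, low=1200.0, high=1500.0):
    """IF input1 >= low AND input1 <= high THEN input1 ELSE NULL,
    using NaN as the null cell value."""
    return np.where((dem >= low) & (dem <= high), dem, np.nan)
```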
Showing DEM values
within Regions
The formula INREGION function can be used to selectively work with
a DEM, within one or more polygons in the DEM.
In this case, the formula would contain the INREGION function, for
example as in the following generic formula:
IF inregion(region1) and not inregion(r2) THEN i1 ELSE NULL
The term “r2” has the same meaning as “region2” (it is an
abbreviation), and the term “i1” has the same meaning as “input1”.
Once again, in the above example, input1 would be mapped to the
height band from the DEM. Using the Formula dialog box, you would
map “region1” to the region you wish to include in the output image,
and you would map “r2” (that is, “region2”) to the region you do not
wish included. Thus, the specific formula once mapped might look
like:
IF inregion(‘buffer zone’) and not inregion(‘towns’)
THEN B1:DEM_HEIGHT ELSE NULL
The above example would only show the DEM where it falls within a
buffer zone, but not if it is within towns (a logical region such as
‘towns’ may consist of more than one polygon).
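Outside ER Mapper, the INREGION test corresponds to masking with rasterized polygon regions. A sketch, with the regions supplied as boolean arrays and the mask_regions name purely illustrative:

```python
import numpy as np

def mask_regions(dem, buffer_zone, towns):
    """IF inregion('buffer zone') AND NOT inregion('towns')
    THEN DEM_HEIGHT ELSE NULL, with regions as boolean masks."""
    return np.where(buffer_zone & ~towns, dem, np.nan)
```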
Abstracted views into
DEMs
Given the above types of processing, we might find it convenient to
have a “dataset” that contains all of the above, and possibly other,
processed views of the original data. Unlike older image processing
systems, ER Mapper has a concept known as “Virtual Datasets”.
These virtual datasets look like real data, but are actually
recomputed, on demand, from the original data, as required. This
processing is carried out by an algorithm attached to the Virtual
Dataset.
This concept has a number of advantages:
• Less disk space. As the processing we are doing often produces
floating point precision results, we need to store the results at
high precision. Virtual datasets allow us to work in high precision
without wasting large amounts of disk storage.
• High precision. The processing always works from the original
data, at the original resolution. You do not need to store reduced
precision temporary output datasets.
• High performance. Because less data is being processed (only
the original data is read), processing of virtual datasets is very
fast. This is also because of the dynamic algorithm compiler built
into ER Mapper, which optimizes algorithms when they are run.
Also, only bands of the virtual dataset that are required for
processing are generated. Thus, if only slope and aspect from a
DEM view dataset are required, only these are computed.
• Unlimited processing. Algorithms and virtual datasets are not
limited by disk or memory (a virtual dataset does not have to fit
into memory; it is pipelined through processing).
• Simpler data views. Instead of complex processing of input data,
we can construct a virtual dataset that gives “views” into the
original data. For example, we might wish to look at a DEM as
height, as slope, as aspect, within a certain height range, or
within a set of polygon regions. Each of these views can be
defined as a band in a virtual dataset.
Creating a virtual dataset requires nothing more than defining an
algorithm with one layer for each band we wish to have in the virtual
dataset (actually, you can do more than this, for example mosaicing
several layers together into a single virtual dataset layer).
To create a virtual
dataset
To create a virtual dataset, you would carry out the following steps:
1. Define an algorithm with one layer per view into the original dataset.
For example, we might wish to have a virtual dataset that provides
us, from a single DEM, with the following information as bands; we
would create a layer for each of these, and apply the appropriate
formulae and/or filters (see above) to the layers:
DEM_HEIGHT      the original DEM height
SLOPE           slope, as a percentage
SLOPE_DEGREES   slope, as degrees
ASPECT          slope aspect
GOOD_HEIGHTS    heights between 1200 and 1400 meters
GOOD_REGIONS    in ‘buffer zone’ and not in ‘towns’
TM_NDVI         the Vegetation Index from Landsat TM
2. For each of the above layers make sure the layer has a name. This
name will appear in the virtual dataset as the band name. Note that
the final layer is not from the DEM; it is being generated from a
different dataset, this time from a Landsat TM image. A virtual
dataset can have bands from different datasets (this is how you get
bands from different datasets into a single formula).
3. Make sure that the final transform for each layer is deleted (unless
we actually want to scale the data to 0..255, which is not what we
want in this case).
4. Save the algorithm, perhaps calling it “DEM_view.alg”.
This is so we can call up the algorithm at a later time, perhaps to add
more bands to it, or to see what processing is being done within the
algorithm.
5. Save the algorithm as a virtual dataset, calling it “DEM_view.ers”.
As you can actually use algorithms as input to layers, we could
directly use the “DEM_view.alg” as our input dataset. However, in
this case we are also saving the algorithm as a virtual dataset
because virtual datasets can have statistics computed for them and
can have regions added to the virtual dataset—just like a real
dataset.
If you have a common view you often use for different DEMs, you
can save the algorithm as a template view algorithm. To create a
view for a new dataset, you would load the template view algorithm,
change the dataset, and save the algorithm as a view for the dataset
in question.
What-if
processing
This section on what-if makes use of the example DEM virtual
dataset view created above. Although not necessary, it makes the
following examples easier to follow (by abstracting the DEM height
data into specific, processed views).
However, in all cases it is important to note that when these
examples refer to slope, aspect and so on—the original data is being
used. No intermediate disk files need to be generated.
The key to what-if processing is to be able to:
• Express the problem in terms of what-if processing of your data;
and
• Work with algorithm formula, kernels and transforms to achieve
the results.
It is most important to appreciate that ER Mapper is not a “black box”
of processing techniques. Rather, it is like a spreadsheet, allowing
you to generate your own interactive processing. The following
examples demonstrate how to put together what-if processing for
some real world applications. When convenient, the virtual dataset
created in the previous section is used as an example.
Highlighting areas of fire
risk
Application: in many cases, fire risk is defined by the direction of
prevailing hot winds, by the slopes that trend up from that direction,
and from the vegetation biomass on those slopes. Slopes facing
away from the prevailing wind, that have high vegetation biomass,
are at high risk. Also, steeper slopes are at higher risk, as the flame
front can travel quickly up the slope.
In this case, we have three factors to take into account: slope (from
the DEM), aspect (from the DEM), and vegetation mass (from
Landsat TM).
Given the view into the DEM and Landsat TM shown in the previous
section, this problem reduces to a simple formula in a layer.
In this formula, we would like to express fire risk as a number from
0 to 1; with 1 being higher fire risk.
Looking at each of the three factors, the formula would be made up
of the following components:
Slope: A slope of 90 degrees is maximum danger, and a slope of zero
is lowest danger. To reduce this to a portion of the formula, we would
do:
(SLOPE_DEGREES/90)
which will result in 1 being high fire risk. We could weight this,
against the other factors, by multiplying it by a weighting factor if we
choose.
Aspect: Slopes facing south have the highest fire risk, slopes facing
North have the lowest. We express this as a number from 1 (high)
to 0 (low) fire risk as follows:
(1-(ABS(180-ASPECT)/180))
Vegetation: In this case, the vegetation ratio returns a number from
-1 to 1, with 1 being high vegetation biomass, -1 being lowest. We
reduce this to a number from 1 (high risk) to 0 (low risk):
((TM_NDVI + 1)/2)
The complete formula: Putting all this together, we get a generic
formula, adding these three factors together and dividing by three:
((I1/90) + (1-(ABS(180-I2)/180)) +((I3+1)/2)) / 3
Which, as the specific formula after mapping I1 to SLOPE_DEGREES, I2 to ASPECT and I3 to TM_NDVI, looks like:
((SLOPE_DEGREES/90) + (1-(ABS(180-ASPECT)/180)) +
((TM_NDVI+1)/2)) / 3
This will result in a fire risk layer being output, with 1 as high fire risk and 0 as low fire risk.
We would use the final transform in the layer to scale this number, and color code the fire risk map using a Pseudocolor layer.
Also, we could drape the above fire risk layer over the original DEM,
as a Height layer, and view the processed result in 3-D.
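As a cross-check of the arithmetic, the three-term formula above can be sketched in NumPy. The function name and array inputs are illustrative, not ER Mapper syntax:

```python
import numpy as np

def fire_risk(slope_deg, aspect_deg, ndvi):
    """Combine the three factors from the text into a 0-1 fire risk index.

    slope_deg: slope in degrees (0-90)
    aspect_deg: aspect in degrees (0-360, with 180 = south)
    ndvi: vegetation index (-1 to 1)
    """
    slope_term = slope_deg / 90.0                            # 1 = steepest slope
    aspect_term = 1.0 - np.abs(180.0 - aspect_deg) / 180.0   # 1 = south-facing
    veg_term = (ndvi + 1.0) / 2.0                            # 1 = dense vegetation
    return (slope_term + aspect_term + veg_term) / 3.0

# A steep, south-facing, heavily vegetated cell scores the maximum risk of 1:
risk = fire_risk(np.array([90.0]), np.array([180.0]), np.array([1.0]))
```

A flat, north-facing, bare cell (slope 0, aspect 0, NDVI -1) scores 0, matching the 0-to-1 range described in the text.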
Showing areas at risk within regions: We could add a region selection to the above formula, to show fire risk areas only if they fall within regions of interest. We could add the following to the formula to achieve this:
General:
IF INPUT4 THEN <fire risk formula> ELSE 0
Specific formula:
IF GOOD_REGIONS THEN <fire risk formula> ELSE 0
The “GOOD_REGIONS” input is, via the virtual dataset, doing
“INREGION()” functions to decide if locations are within acceptable
regions of interest. Note we are using ELSE 0 here, not ELSE NULL,
so that if we decide to add filters after the formula processing, the
regions do not “shrink” because of null cell value processing.
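The IF ... ELSE 0 region selection behaves like a simple mask. A minimal NumPy equivalent, with illustrative array names standing in for the virtual dataset bands:

```python
import numpy as np

# Hypothetical inputs: a boolean mask of acceptable regions (what the
# INREGION() functions produce) and a precomputed fire-risk array.
good_regions = np.array([True, False, True])
fire_risk = np.array([0.9, 0.8, 0.2])

# IF GOOD_REGIONS THEN <fire risk> ELSE 0 -- note 0, not NULL/NaN, so
# that later filters do not shrink the regions via null-cell handling.
masked = np.where(good_regions, fire_risk, 0.0)
```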
Classification of fire risk: If we choose to, we could change the layer
to a Classification layer, and apply a threshold to the final transform
(perhaps only selecting regions with a greater than 80% fire risk).
Generating Vector polygons of high fire risk: The above fire risk
Classification layer, which is computed from the original data, could
be converted into vector polygons for input into a GIS system.
Before doing so, we might choose to run a median filter to remove
small regions over the image (this filter would be inserted into the
layer after the above formula). A median ranking filter will reject
small regions of fire risk, ensuring that we only end up with large fire
risk polygons for the GIS.
The raster to vector module in ER Mapper can do this conversion
directly from the above algorithm (save the algorithm to disk first,
then use it as input to the Raster to Vector converter).
It is important to appreciate that no intermediate image disk files
were created in this entire process. The only thing that needs to be
stored on disk is the virtual dataset and the algorithms—these files
express how to generate the results from the original data, and are
quite small files.
Deciding placement of a pipeline
In this example, we are interested in highlighting any area of the
DEM that meets the following criteria:
• Slope is less than 15%
• DEM heights are within the range 1200 meters to 1500 meters
In this example, the generic formula would be:
IF input1<15 AND input2>=1200 AND input2<=1500 THEN 1
ELSE NULL
And the specific formula would be mapped to the following inputs:
IF SLOPE<15 AND DEM_HEIGHT>=1200 AND DEM_HEIGHT<=1500
THEN 1 ELSE NULL
In this case, we chose to use the original DEM_HEIGHT band, rather than the GOOD_HEIGHTS band, just to show the processing. Using the example DEM_view virtual dataset, a simpler formula would be:
IF SLOPE<15 AND GOOD_HEIGHTS THEN 1 ELSE NULL
which would generate the same results as the previous example.
The layer outputs 1 (valid) or NULL (no data, therefore, invalid). We
would use a Classification layer to display the results of this
processing. If we drape this layer over a Height layer containing the
DEM_HEIGHT, we could view the results in 3-D perspective (or
generate a stereo pair output) to help us decide where the best
placement for the pipeline would be.
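The two criteria combine into a single boolean test per cell. A hedged NumPy sketch, with NaN standing in for ER Mapper's NULL and illustrative input arrays:

```python
import numpy as np

# Illustrative slope (%) and DEM height (m) values for three cells.
slope = np.array([10.0, 20.0, 5.0])
dem_height = np.array([1300.0, 1300.0, 1100.0])

# IF SLOPE<15 AND DEM_HEIGHT>=1200 AND DEM_HEIGHT<=1500 THEN 1 ELSE NULL
valid = (slope < 15) & (dem_height >= 1200) & (dem_height <= 1500)
result = np.where(valid, 1.0, np.nan)   # only the first cell passes both tests
```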
Classification using DEM data
DEM data is effective when used in conjunction with other data for classification. To do this, create an algorithm that contains:
• the bands from the imagery you wish to classify (for example, bands 1-5 and band 7 from Landsat TM); and
• the DEM height, and possibly the DEM Slope and/or Aspect.
Make sure the algorithm has no final transforms, then save it as an
algorithm (for possible use again later), and as a virtual dataset. You
need to save it as a virtual dataset, since statistics must be
calculated for classification.
Use the created virtual dataset, which now contains “bands” from
both the imagery and the DEM, to do your supervised or
unsupervised classification as you would normally.
If you haven’t used ER Mapper before you may still want to browse
through this manual to get an idea of the diverse ways in which the
product can be used.
You can also use the virtual dataset as input to the Scattergram
Display function of ER Mapper, to aid you in defining training
regions, or for checking accuracy of the resultant classified file.
Correcting SAR data with DEMs
DEMs can be used to correct radar layover effects, by back-calculating the actual reflectance points for the radar signal. Refer to the ER Radar manual for further details on processing radar data in association with DEM data.
Map Production
This chapter describes the production of high quality real-world maps
containing both image and vector data.
Producing quality combined image/vector maps
In addition to image processing functions such as creating aerial
photograph mosaics, ER Mapper contains very advanced map
production abilities.
Integrating GIS and CAD data with aerial photos using ER Mapper
ER Mapper is unique in that it is designed to handle image based data as well as vector data when producing maps. Using ER Mapper for map production offers a range of benefits:
• Easy creation of real-world maps, containing aerial photographs as backdrops, GIS and CAD data, and map objects such as scale bars.
• Integration of all types of data. ER Mapper dynamic links enable easy combination of data from not just one, but multiple sources. For example, you can integrate data from ARC/INFO, AutoCAD, MapInfo, Oracle, tabular data, and proprietary systems into a single map.
• Easy and intuitive map production. Rather than complex commands, ER Mapper’s map production is built using an intuitive drag and drop design.
• Smart map objects. When a map object, such as a scale bar, is placed onto a map, it automatically uses the scale information appropriate to the current map, and updates automatically if you change it. For example, if you resize a scale bar map object, it automatically recreates itself to take into account the new size.
• User extendible map objects. You can add your own smart map objects to ER Mapper. Company logos and other symbols may also be easily added.
• No limits to file sizes, for image or vector data. This also applies to printing the map. ER Mapper can print a map of any complexity to any of the supported printer or graphics formats.
• Built in RIP (Raster Image Processor) engine. Whereas other products require you to purchase an additional press-printing RIP engine, ER Mapper contains a complete RIP engine built in. This engine handles the merging of image and vector data at print time, ready for output to the hardcopy device. Because the RIP engine is built into ER Mapper, you get consistent output regardless of output device (no changes because of different fonts, etc.), and you are not limited by printer memory - as the RIP work is all done in ER Mapper, the printer simply has to print, not process, the output.
• Print at any scale. Maps can be printed at any scale, even a scale larger than the printer can handle (in this case, ER Mapper will cut the map up into strips that fit onto the printer). ER Mapper can also rotate a map so it better fits onto your printer page.
The ER Mapper map production system is well suited for use as a
production system for GIS and CAD users who wish to integrate
aerial photography with vector data, and produce high quality maps.
Sub-sectioning data for GIS/CAD systems with limited imagery capabilities
GIS and CAD systems are designed to handle vector based data, not
image based data. There are often limits to the size of files or the
format of data that a GIS or CAD system can handle. For example,
some products can only use grayscale imagery, and very few
systems can handle the large (10,000 x 10,000 pixels or larger)
image files that cover an entire project area at high resolution.
ER Mapper makes it easy to side-step these limits, in several ways:
• Sub-sectioning the mosaic into files small enough to be handled by the GIS or CAD system.
• Producing lower resolution overview image files that are small enough to be handled by the GIS or CAD system.
• Direct use of ER Mapper image handlers within some GIS or CAD systems.
• Using ER Mapper’s extensive vector capabilities to bring data from the GIS or CAD system into ER Mapper, and integrating it with image data within ER Mapper.
Sub-sectioning data into smaller files
ER Mapper can easily re-process an image file into any size file
needed for a GIS or CAD system. Zoom to the sub-section area of
interest using the Zoom pointer, or using the Geoposition window to
set exact extents. Then use the “Save As” icon on the ER Mapper
toolbar, to output an image at the desired resolution.
You can automate this procedure, and many others, by using the
powerful ER Mapper scripting language. For example, you might
want to divide a single large mosaic into a set of sub-section
files, each covering a specific area, in a cookie cutting fashion.
Use the scripting language to define a procedure to step through
a mosaic, creating sub-section files, writing these out to your
GIS or CAD image file format. This entire procedure can be
linked to an ER Mapper menu or toolbar icon, making the whole
procedure a one-click operation.
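The ER Mapper scripting language itself is not shown here, but the cookie-cutting logic is simple. A hypothetical Python sketch of the tiling step (all names are assumptions, not ER Mapper's API):

```python
def tile_extents(width, height, tile_w, tile_h):
    """Yield (x0, y0, x1, y1) pixel extents that cookie-cut a mosaic
    into sub-section tiles; edge tiles may be smaller than the rest."""
    for y0 in range(0, height, tile_h):
        for x0 in range(0, width, tile_w):
            yield (x0, y0, min(x0 + tile_w, width), min(y0 + tile_h, height))

# A 10,000 x 10,000 pixel mosaic cut into 4,000-pixel tiles gives a 3 x 3 grid:
tiles = list(tile_extents(10_000, 10_000, 4_000, 4_000))
```

Each extent would then be saved out to the GIS or CAD image file format, one file per tile.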
Producing lower resolution overview image files
Use the same procedure detailed for producing sub-sectioned
images, except in the “Save As...” menu, specify the size of the
image your GIS or CAD system can handle in number of cells across
and down. While saving the new image dataset, ER Mapper
automatically sub-samples the image down to the resolution you
specify.
For example, suppose your aerial photograph mosaic covers 20km x
30km at 1 meter resolution. This would be a 20,000 x 30,000 cell
file. Unlike ER Mapper, few GIS or CAD systems can handle this size
of file. If you output the file at 2,000 x 3,000 cells, the file will be
much smaller—but the size of each cell will be 10 meters instead of
1 meter. If you wish to view the higher resolution data within a GIS
or CAD product with limited image handling capabilities, sub-section
the mosaic into multiple smaller files, as described in the previous
section.
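The cell-size arithmetic in this example can be checked directly:

```python
# Ground extent of the mosaic from the text: 20 km x 30 km at 1 m resolution,
# i.e. a 20,000 x 30,000 cell file.
extent_m = (20_000, 30_000)   # metres east-west, north-south
out_cells = (2_000, 3_000)    # what the GIS/CAD system can handle

# Saving at the smaller size coarsens each cell by the same factor:
cell_size_m = extent_m[0] / out_cells[0]   # 10.0 m per cell, matching the text
```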
Directly use the ER Mapper image handlers within your GIS or CAD system
Some GIS and CAD systems can directly use the ER Mapper image
handlers within the GIS or CAD environment. For example, the free
add-ons for ArcView, AutoCAD Map, Autodesk World, and MapInfo
enable users to directly use ER Mapper images and algorithms,
without having to import the data. This resolves the image file size
limits in these systems.
Contact Earth Resource Mapping for details about how to best
integrate your GIS or CAD system with large image files.
Overlay your GIS or CAD data in ER Mapper, over aerial photo mosaics
One of ER Mapper’s strengths is the ability to share vector based
data with popular GIS and CAD systems. In many cases (for example
ARC/INFO coverage format), ER Mapper can directly access the data
in the native GIS format. If you don’t need to edit the data, ER Mapper can directly overlay the data in an exchange format such as DXF without having to import it, making integration a simple one-step process. If you do want to edit the data in ER Mapper, you must import it into ER Mapper format (except for ARC/INFO format, which ER Mapper can edit directly in the native format).
There are several benefits to doing the integrated mapping within ER
Mapper, using your GIS or CAD data:
• ER Mapper has no image file limits, and can handle very large vector based files.
• There are no limits to the number of vector or image layers of data in an ER Mapper map.
• ER Mapper supports integration of multiple types of data, from multiple sources. For example, you may have data in AutoCAD CAD format, ARC/INFO GIS formats, and Oracle tables (as well as the aerial photo mosaic created using ER Mapper). ER Mapper can integrate data from all these and other sources into a single map. This way, you can keep the data in the native format best suited to that type of data, while using ER Mapper as the integration tool.
• New Dynamic Links can be added to access data stored in proprietary systems in your company.
• ER Mapper has very powerful map production capabilities, ideal for giving your GIS or CAD data a real-world feel, by combining aerial photograph imagery with your GIS or CAD based vector data.
• ER Mapper has a complete printing RIP engine built in, that generates very high quality image/vector based maps, with no limit to file size. Unlike most products, which rely on the printer to do the RIP work for a map, ER Mapper has its own RIP engine, so you can produce maps of any complexity without problems, to all supported printer or graphics formats.
The key to these abilities is ER Mapper’s extensive range of data translators, and the unique ER Mapper Dynamic Links concept, which enables ER Mapper to reach out to external systems and directly share data with them.
Map Production
Directly sharing image files between different systems
There is a large range of different image data formats. ER Mapper supports most of these, either directly through raster translators or indirectly through import programs which convert other formats into ER Mapper’s native format.
This presents a question to users who wish to share the same file
between different systems. Ideally, one copy of the imagery should
be maintained, as this reduces maintenance and disk space
requirements.
There is a common sub-set of image formats, known as the BIL (Band Interleaved by Line) image file format. Many systems, including ER Mapper, use this as the basis for storing image data. Systems still need to know details such as the geocoding of the imagery, the number of cells and lines, and so on; these are generally stored in a separate header file. You can use ER Mapper to create BIL imagery files to be shared by different systems.
For example, ARC/INFO and ER Mapper both support BIL image files.
Each system has its own header file; a “.hdr” file for ARC/INFO, and
an “.ers” header file for ER Mapper. Thus, to share the same
imagery, there would be three disk files:
• <filename> - The actual imagery, in BIL format. This is a large file, containing the imagery.
• <filename>.ers - The ER Mapper header file. This is a small header file.
• <filename>.hdr - The ARC/INFO (and ArcView) header file. This is a small header file.
Whenever ER Mapper exports or imports a file in a compatible format, it simply adds the .ers file (or the appropriate header file, for an export) and does not re-create the image file.
Many systems support BIL based imagery. Check your GIS or CAD
system to see if it directly supports this style of image file, in which
case you can directly share the data with ER Mapper without having
to import or export the imagery.
Although ER Mapper handles data in any byte order (data larger than 1-byte format, such as floating point data, must have a byte order defined), many other systems cannot. You may have to swap the byte order to conform to a more limited product’s capabilities. The Customizing ER Mapper manual gives information on how to share information between different systems.
Some systems embed the header information directly in the BIL file,
at the start of the file. For example, the ERDAS 7.5 format does this.
ER Mapper can also handle this form of BIL file, and can be instructed
to skip the header bytes at the front of the file.
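The BIL layout, a header skip, and an explicit byte order can all be expressed in a few lines of NumPy. This is an illustrative reader, not ER Mapper's implementation, and the parameter names are assumptions:

```python
import numpy as np

def read_bil_band(buf, lines, cells, bands, band, dtype="<u1", header_bytes=0):
    """Extract one band from raw BIL (Band Interleaved by Line) bytes.

    For each line the file holds `bands` consecutive runs of `cells`
    samples. `dtype` carries the byte order ('<' little-endian, '>'
    big-endian, e.g. '>u2' for big-endian 16-bit), and `header_bytes`
    skips an embedded header such as the ERDAS 7.5 style noted above.
    """
    raw = np.frombuffer(buf, dtype=dtype, offset=header_bytes,
                        count=lines * bands * cells)
    return raw.reshape(lines, bands, cells)[:, band, :]

# 2 lines x 2 bands x 3 cells of 8-bit data, behind a 4-byte embedded header:
buf = b"HEAD" + bytes(range(1, 13))
band0 = read_bil_band(buf, lines=2, cells=3, bands=2, band=0, header_bytes=4)
# band0 -> [[1, 2, 3], [7, 8, 9]]
```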
Map Production
45
46
Map Production
Change Detection
Author
Amy Hall, Main Roads Western Australia, Don Aitken Centre, Waterloo Crescent, East Perth, 6004, Western Australia.
Introduction
The issue of data currency is particularly pertinent to organisations
that deal in large volumes of land related data. Change detection
processing using satellite imagery is an ideal way to determine
changes in land cover in order to enable organisations to maintain
the integrity of the data that they manage. The periodic availability
of remotely sensed data makes it well suited to change detection
applications. Multidate imagery can be processed to highlight
changes in pixel spectral response between image dates. Such
information can be used in the decision making process, or used to
monitor changes over time as an aid to updating information
databases.
One particular application of change detection techniques is to
highlight where new road development has occurred between two
dates. Such information is used by Main Roads Western Australia
(MRWA) to update their digital centreline network files, to ensure the
integrity and currency of the network. This is important as the digital
centreline network forms the frame of reference for road and road
related data, and hence needs to explicitly represent the current
state of the road network.
The advantage of using image processing and change detection
methods is that there is no need for reliance on ‘third parties’ for
information, and the branch can independently update the digital
network.
This chapter documents the change detection process as applied to
detecting new road development from the initial stage of ordering
the satellite data through to the production of a change image.
Data sources
The raster and vector data used in this project and their sources are
described below.
Raster Data
SPOT Panchromatic data was used for this project. The SPOT satellite in panchromatic mode detects information in a single band (0.51-0.59 µm) and has a spatial resolution of 10 x 10 m. A typical full
SPOT scene covers a ground area of approximately 60 x 88 km, and
costs A$1800. Partial scenes are also available at a cost of A$600 for
each 10km by 10km area. Repeat coverage for SPOT data is
obtained every 26 days (Richards 1986).
Supply of remotely sensed data in Australia is co-ordinated by the
Australian Centre for Remote Sensing (ACRES) in Canberra. All data
supplied by ACRES has been radiometrically corrected to account for
varying detector responses that often cause striping in a digital
image. Satellite data from ACRES is classified into levels according
to the amount of pre-processing that has been performed on the
data.
• Level 4 data is path oriented. This means that the angle between east and the scan lines is 11 degrees. Data has been resampled to the Superficial conic map projection, and the original pixel size has been maintained.
• Level 5 data is path oriented, has been resampled to the Australian Map Grid (AMG), and the original pixel size has been maintained.
• Level 6 data is path oriented, generated from ground control points from 1:100 000 series mapping, has been resampled to the AMG, and the original pixel size has been maintained.
• Level 8 data is a map sheet product based upon 1:100 000 or 1:50 000 map sheets, and data is rotated to align with the grid of the map sheets. Data has been resampled to derive a smaller pixel size.
• Level 9 data is a map sheet product based upon 1:100 000 or 1:50 000 map sheets, and has been rotated to align with the grid of the map sheets. Data is generated from ground control points from topographic maps. Data has been resampled to derive a smaller pixel size than that of the raw data (source: ACRES).
For the best results from any multidate change detection process, all
images should ideally be captured at the same time of the year in
order to eliminate any seasonal variations that may cause differing
spectral responses on the ground (for example, healthy versus
unhealthy crops). Differences in atmospheric conditions (summer/winter) can affect pixel responses. The aim is to reduce the likelihood of ‘false’ change responses being generated, and to maximise the likelihood of all changed pixels representing ‘true’ changes.
ACRES supplies data in a variety of formats. Before importing the raster data, it is necessary to be aware of the format in which the data is stored on the tape, as this will determine the import module that is used. This information is usually identifiable from the tape itself. It
is also necessary to be aware of how to extract the data from the
tape. This information can easily be obtained from ACRES.
Additional information that may be required is the number of bands,
cells and lines in the dataset, and the name of the sensor. It is not
necessary to load the entire scene into the system if the study area
only comprises a small area.
In terms of storage space, a typical raw SPOT Panchromatic scene requires between 50 and 53 megabytes, equating to around 10 000 bytes per square kilometre of ground data; this figure can be used as a guide where partial scene processing is desirable.
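The quoted figures are easy to verify from the scene dimensions given earlier (60 x 88 km at 10 m resolution, one byte per pixel):

```python
# Rough storage check for a raw SPOT Panchromatic scene.
scene_km = (60, 88)                                   # ground coverage
pixels = (scene_km[0] * 100) * (scene_km[1] * 100)    # 100 pixels of 10 m per km
scene_mb = pixels / 1_000_000                         # ~52.8 MB, within 50-53 MB
bytes_per_km2 = pixels / (scene_km[0] * scene_km[1])  # 10 000 bytes per sq km
```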
Vector Data
A vector dataset is ideal for warping the raster datasets into real
world co-ordinate systems. A road network dataset covering the
same ground area as the satellite image can be used to select ground
control points. ER Mapper supports a large number of external vector
formats, which can be converted to ER Mapper datasets or can be
accessed via dynamic linking. A list of supported formats can be
found in the ER Mapper reference manual.
Registering data to ground control points
Rectifying the satellite data to ground control involves the selection
of a number of ground control points (GCPs). These are defined as
points that are clearly identifiable on both the satellite image and the
control image (that is, vector road centreline network). A first,
second or third order best fit polynomial is then used to model the
image to the ground control co-ordinates. A rectified dataset will
require more storage space than an unwarped dataset. A single
SPOT Panchromatic scene will require around 70-73 megabytes.
ER Mapper will let you know if you have insufficient space available
to store a rectified dataset.
Selecting Control Points
Control points should be widely distributed across the image to
provide the most stable solution possible. Common practice is to
choose the majority of control points around the edges of the image
with several uniformly spaced points in the central portion of the
image (Richards, 1986). Each order of rectification requires a
minimum number of control points in order to define the rectification
equation. A general rule is to select no less than twice the number of
control points required in order to allow the calculation of a root
mean square (RMS) error that quantifies the accuracy of control
point selection.
For a single SPOT scene, 30 ground control points should be sufficient to accurately rectify the data. A SPOT scene typically contains 6000 x 8800 pixels. This equates to approximately one control point per 176 square kilometres, and can be used as a rough guide to the number of GCPs to select.
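For a 2-D polynomial of order n there are (n+1)(n+2)/2 coefficients per axis, which sets the minimum number of GCPs needed to define the rectification equation (a standard result, not stated explicitly above). Doubling it, per the rule of thumb in the text, leaves redundancy for the RMS error check:

```python
def min_gcps(order):
    """Minimum GCPs to solve a 2-D polynomial of the given order:
    (order+1)(order+2)/2 coefficient pairs."""
    return (order + 1) * (order + 2) // 2

# Doubled, following the "no less than twice" rule of thumb:
recommended = {order: 2 * min_gcps(order) for order in (1, 2, 3)}
# recommended -> {1: 6, 2: 12, 3: 20}
```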
Data Enhancement
Some form of contrast enhancement can be used on the raw satellite
image to redistribute the pixel values to make use of the full range
of intensity values available. This increases the contrast of the
image, and makes features stand out more clearly. Best results are
obtained by using the grayscale look up table with SPOT
Panchromatic images. Turning on ‘Smoothing’ in the Algorithm
dialog box enhances the raster image by removing the ‘blocky’ effect
of individual pixels. Significant savings in processing time can be achieved by saving the above enhancements and features as algorithms, particularly where more than one image is to be warped.
Remember to add a title to each algorithm that reflects the origin of
the data and the processing carried out.
Once three or more control points have been defined, the efficiency of the exercise can be increased by setting up the workspace with one large window containing the ‘To’ dataset (vector) and two half screen windows containing the ‘To’ and ‘From’ datasets respectively. An approximate point is then selected in the large vector window and the Compute FROM button used to calculate the corresponding pixel co-ordinates in the raster dataset. This point can then be zoomed to more closely to allow refinement of the control point location.
RMS Error
The RMS error listed for each point signifies the number of pixels in
error for the current order of rectification. It is advisable to keep the
RMS error below one pixel. This signifies an error of ±10m for SPOT
Panchromatic imagery.
Rectifying raster data to vector data
Once sufficient control points have been defined, the raster image
can be rectified to the vector data. There are no predefined rules as
to the best type of resampling or the order of polynomial to use with
images to obtain the best results. Results from rectification are
dependent on a number of factors, including local reflectance
characteristics and the location and number of control points
selected.
Resampling
Nearest neighbour resampling assigns the value of the closest input
pixel to the output pixel and preserves the spectral values of the
original dataset. Spatial shift errors may be introduced and can
cause edges to occur in the direction of resampling, resulting in a
blocky image. Bilinear resampling uses the four surrounding input
pixels to determine the output pixel value and has the effect of
smoothing the dataset, as spectral values are modified rather than
spatial relationships. The resultant image may contain blurred
edges.
Cubic convolution uses the closest 16 input pixel values to determine
the output pixel value. Cubic convolution may increase the high
frequency component of the data, causing the introduction of noise
and can distort the image considerably (Lodwick et al., 1992).
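The difference between nearest neighbour and bilinear resampling comes down to how an output pixel samples the input grid. A toy sketch of the two schemes (illustrative, not ER Mapper's resampler):

```python
import numpy as np

def sample(img, y, x, method="nearest"):
    """Sample a 2-D array at fractional (y, x) coordinates."""
    if method == "nearest":
        # Take the single closest input pixel; preserves original values.
        return img[int(round(y)), int(round(x))]
    # Bilinear: weight the four surrounding input pixels by proximity.
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    dy, dx = y - y0, x - x0
    return ((1 - dy) * (1 - dx) * img[y0, x0]
            + (1 - dy) * dx * img[y0, x0 + 1]
            + dy * (1 - dx) * img[y0 + 1, x0]
            + dy * dx * img[y0 + 1, x0 + 1])

img = np.array([[0.0, 10.0], [20.0, 30.0]])
# At the cell centre, bilinear blends all four neighbours to 15.0, while
# nearest neighbour always returns one of the original values unchanged.
centre = sample(img, 0.5, 0.5, "bilinear")
```

This is why nearest neighbour preserves spectral values but can look blocky, while bilinear smooths the image and can blur edges.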
Polynomial Order
Linear order rectification is often used for a quick rectification where
precise co-ordinate accuracy is not required, as it is less sensitive to
control point variations and processing time is considerably less. A
general rule is that it is best to use the lowest order polynomial
possible to achieve the best results and to avoid introducing
distortion in the output image. The control point error table shows
the magnitude of RMS errors in each part of the image. Unevenly distributed RMS errors indicate that there is differential distortion in the image, and suggest that a polynomial order higher than linear is justified. Before making this assumption, it is necessary to re-check these erroneous points to ascertain that it is not human error that has caused this effect.
Results
A SPOT image was subjected to all nine combinations of resampling
and polynomial order. In all cases where nearest neighbour
resampling has been used, the image appears blocky and linear
features appear stepped compared to the original unrectified
dataset. Bilinear resampling tends to produce a smoother image with
roads and other linear features occurring more as straight lines.
Cubic convolution seems to have introduced noise into the data
making it slightly blocky but linear features are preserved well. Of
the three methods, the integrity of the original dataset seems best
preserved on the images resampled using bilinear techniques.
To validate the results achieved from various polynomial orders, the
raster dataset can be overlaid with the vector control image and a
visual comparison undertaken to determine which image the vector
data fits best.
Rectifying raster data to raster data
In order to create a meaningful difference image from change
detection, pixel to pixel registration between images is of utmost
importance, as even half a pixel in error between images will produce
erroneous results. To reduce the likelihood of this occurring, the
remaining raster images should be registered to the first rectified
image as it is far easier to select corresponding pixels on two raster
images than corresponding locations on vector and raster datasets.
Additionally, to ensure some consistency between rectifications, a
suggestion is to use the same control point locations that were used
for the initial rectification. This may reduce the likelihood of error as
the polynomial function used to rectify the data will be similar to the
function used to rectify the original image. You can load the GCPs
from the first image into the second image in the GCP edit dialog
box.
To determine the average error in each pixel, divide the total RMS
error by the number of control points. The average error should
ideally never be above one pixel. For change detection purposes, less
than half a pixel average error is highly desirable.
Comparing Results
To compare the results of two raster rectifications, a good technique
is to produce a normalised difference image. This is accomplished by
creating a virtual dataset that contains both of the rectified images.
A virtual dataset is simply a type of algorithm that treats multiple
datasets as a single dataset so that the information can be
collectively manipulated. To create a normalised difference image,
the virtual dataset of both raster images is subjected to an equation
of the form:
(input1 - input2) / (input1 + input2)
The resultant image contains response values between -1 and 1.
Values of 0 indicate where pixel values have not changed between
images and deviations from 0 indicate where pixel values are
different to a greater degree. A totally gray image would indicate
that pixel values are identical on both images and would suggest
perfectly registered datasets.
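The normalised difference computation is a single expression in NumPy. The arrays here are illustrative co-registered pixel values, not real scenes:

```python
import numpy as np

# Two co-registered image dates, as floating point pixel values.
img1 = np.array([100.0, 120.0, 80.0])
img2 = np.array([100.0, 100.0, 120.0])

# (input1 - input2) / (input1 + input2): 0 where nothing changed,
# deviations from 0 where pixel values differ between dates.
nd = (img1 - img2) / (img1 + img2)
```

The first pixel is unchanged (0), the second is brighter in the first image (positive), and the third is darker (negative), matching the -1 to 1 interpretation in the text.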
It should be emphasised that a normalised difference image will
highlight differences in absolute pixel values between datasets, as
well as misregistered pixels. For this reason a normalised difference
image will obviously show change information where ground cover
has changed between images, and this should not be confused as
indicating misregistered datasets. A normalised difference index is
simply a quick and easy way of gauging registration between
datasets. It should highlight obvious misregistration problems and
indicate if re-registration is necessary.
If a Gaussian Equalise histogram stretch is applied to each dataset
before calculation of the normalised difference index this will
‘normalise’ the pixel values in each dataset and allow a more
accurate comparison to be made. If misregistration is apparent,
examine the control point location table and check that sufficient
control points have been selected in accordance with the guidelines
set out previously. Select additional control points if necessary and
rewarp the image. This time, when you create the normalised
difference image, manipulate the image histogram to highlight those
areas where the difference is between -0.01 and 0.01 (indicating
very small or no change between pixels). If most of the roads on this
image are highlighted this indicates good registration between
datasets.
Eliminating seasonal and atmospheric effects
For the most effective comparison, both images should ideally be
captured at the same time of the year in order to eliminate any
seasonal variations that may cause different spectral responses.
Spectral variations may also occur owing to atmospheric factors. If
one image was captured on a day when the atmosphere was full of
smog or smoke, the pixel values will be affected. There are many
ways to reduce the effects of the earth's atmosphere on satellite
imagery. The most applicable method will depend on whether the
effect is constant over the entire image or random. A comprehensive
discussion of techniques is not presented here but can be found in
any remote sensing textbook. The following two methods are quick
and simple ways of standardising datasets, and can be applied in a
variety of situations.
A Gaussian Equalise histogram transformation applied to each
dataset may be of use. Another technique used to reduce
atmospheric effects on imagery and normalise scenes for
comparison is documented by Richards (1986) and Schowengerdt
(1983). The technique involves subtracting the pixel value of the
darkest pixel over water from all pixel values in the same image. This
is perhaps the quickest and easiest technique and assumes that the
atmosphere has affected each pixel in the dataset equally. This
cannot always be assumed, and care should be taken.
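The dark-pixel subtraction just described can be sketched as follows (a NumPy illustration, not an ER Mapper procedure; it assumes, as the text notes, that the atmosphere has added the same offset to every pixel, and that a water mask is available):

```python
import numpy as np

def dark_pixel_correction(image, water_mask):
    """Subtract the darkest over-water pixel value from every pixel,
    treating it as a constant atmospheric (haze) offset."""
    darkest = image[water_mask].min()
    return np.clip(image.astype(float) - darkest, 0, None)

# Hypothetical 2 x 3 scene; the left column is water.
image = np.array([[52, 80, 110],
                  [47, 95, 130]], dtype=float)
water = np.array([[True, False, False],
                  [True, False, False]])
corrected = dark_pixel_correction(image, water)  # darkest water pixel is 47
```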
Change Detection
This technique is made easier by opening up two windows, one with
each dataset, and geolinking them so that, at all times, they show
the same ground area. It is best to select an area of the image where
pixel values are relatively uniform such as a waterbody. By using the
Cell Values Profile window, you can determine corresponding pixel
responses in each dataset, and build up a table that will allow you to
work out if one dataset exhibits brighter or darker responses on
average. You should sample at least 20 pixels across the entire
image in order to establish an appropriate pattern. If, on average,
the pixels in image A are brighter or darker than image B by a
constant amount, then you can standardise the images by altering
the formula of one image to read Input1 ±(some constant value).
You will need to save this as a new dataset in order to carry out any
further processing.
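The averaging step above can be sketched in plain Python (the sample values are invented for illustration; in practice the pairs come from the Cell Values Profile window):

```python
def mean_offset(samples_a, samples_b):
    """Average brightness difference over paired samples taken from
    uniform areas such as waterbodies; at least 20 pairs are recommended."""
    diffs = [a - b for a, b in zip(samples_a, samples_b)]
    return sum(diffs) / len(diffs)

def standardise(pixels_a, offset):
    """Equivalent of editing one image's formula to read Input1 - (constant)."""
    return [p - offset for p in pixels_a]

# Image A reads consistently about 10 counts brighter than image B.
a_samples = [60, 72, 55, 81, 64]
b_samples = [50, 61, 46, 70, 55]
offset = mean_offset(a_samples, b_samples)
```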
It should be noted that a Gaussian Equalise transform has no actual
effect on the absolute spectral values of the pixels, and simply
changes the way we perceive the image. Conversely, subtracting a
constant value from all pixels does affect the spectral values and is
perhaps more desirable where further processing or thresholding is
to be carried out to avoid the introduction of error based upon
perception.
Creating a change image
Once you have two or more datasets registered to your satisfaction,
a change image can be produced. There are many ways to produce
a change image; each has its own benefits and limitations. The
optimum change detection method is defined as the method that
combines accuracy with ease of implementation and computational
efficiency. The method you choose will be determined by the purpose
of the project you are undertaking.
The underlying principle behind change detection is that a change in
land cover causes a change in pixel response. Whether pixels
become brighter or darker depends on the type of change being
detected. For example, clearing land of vegetation will typically
cause an increase in pixel brightness (from darker vegetation to
lighter soil) whereas planting a crop would typically cause a decrease
in pixel brightness (bare soil to vegetation). Once you have
determined what to look for you are ready to begin! Numerous
change detection techniques will be discussed briefly, followed by a
more in-depth discussion of the method deemed to be the most
efficient for detecting changes to the road network and how it is
applied using ER Mapper.
Red Green Difference Image
The production of a red/green difference image is a widely used
technique, and is particularly useful for interactive viewing of change
areas. This technique involves displaying simultaneously one dataset
in green and one dataset in red. The resultant combined image will
contain mainly shades of yellow (indicating the same response
between dates), but areas which have changed will appear as green
or red. Red areas tend to have more contrast than green areas; it is
therefore suggested that the most recent image be assigned to the
red layer when increases in pixel brightness are of importance, and
vice versa. A viewing scale of 1:20 000 or larger is ideal for panning
across the image to delineate areas of interest. This technique is
most effective where the magnitude of the areas to be found is
anticipated as being quite large, such as cleared fields or changes in
crop growth.
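Building such a composite can be sketched as follows (a NumPy illustration; in ER Mapper this is done by assigning datasets to display layers, not by code):

```python
import numpy as np

def red_green_composite(newest, older):
    """Newest date on the red gun, older date on the green gun; pixels
    with the same response on both dates render in shades of yellow."""
    rgb = np.zeros(newest.shape + (3,), dtype=np.uint8)
    rgb[..., 0] = newest   # red: most current image
    rgb[..., 1] = older    # green: older image
    return rgb

scene_old = np.array([[120, 40], [200, 90]], dtype=np.uint8)
scene_new = np.array([[120, 180], [60, 90]], dtype=np.uint8)
composite = red_green_composite(scene_new, scene_old)
```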
Band Ratios
The technique of ratioing bands involves dividing the spectral
response value of a pixel in one image by the spectral value of the
corresponding pixel in another image. This is done in order to
suppress similarities between bands. The nature of band ratioing is
that every pixel with the same spectral response in both input
bands will have a value of 1 in the output image; deviations from 1
indicate progressively different initial spectral values. Areas of
greatest change are found in the tails of the resultant histogram.
Production of a change image will involve thresholding the image
histogram to suppress those areas where little or no change has
occurred. The thresholding process will be discussed in greater detail
in a following section.
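A minimal sketch of the ratio and its tails (NumPy, for illustration; the 0.9-1.1 thresholds are invented, and in practice are chosen interactively from the histogram):

```python
import numpy as np

def band_ratio(a, b):
    """Divide each pixel in one image by the corresponding pixel in the
    other; unchanged pixels ratio to 1. Zero divisors produce NaN."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(b == 0, np.nan, a / b)

def change_tails(ratio, low=0.9, high=1.1):
    """Keep only pixels in the tails of the ratio histogram."""
    return (ratio < low) | (ratio > high)

r = band_ratio([100, 150, 80, 5], [100, 100, 100, 0])
tails = change_tails(r)
```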
Principal Components Analysis
Principal components analysis (PCA) is a technique employed in
image processing to reduce the correlation between bands of data
and enhance features that are unique to each band. A characteristic
of PCA is that information common to all input bands (high
correlation between bands) is mapped to the first Principal
Component (PC) whilst subsequent PCs account for progressively
less of the total scene variance. This principle can be applied to
multi-temporal datasets. If two images covering the same ground
area but taken at different times are subjected to PCA, then the
first PC will contain the information that has not changed between
the two dates, whilst the second PC will contain the change
information. The areas of greatest change are found in the
tails of the image histogram.
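The two-date PCA can be sketched as below (NumPy, for illustration only; when the two dates are identical the second PC is zero everywhere, and change shows up as non-zero PC2 values):

```python
import numpy as np

def two_date_pca(img1, img2):
    """Treat two co-registered dates as a 2-band image and rotate to
    principal components: PC1 carries the information common to both
    dates, PC2 the change information."""
    x = np.column_stack([np.ravel(img1), np.ravel(img2)]).astype(float)
    x -= x.mean(axis=0)
    cov = np.cov(x, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues ascending
    order = np.argsort(eigvals)[::-1]           # largest variance first
    pcs = x @ eigvecs[:, order]
    shape = np.shape(img1)
    return pcs[:, 0].reshape(shape), pcs[:, 1].reshape(shape)
```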
Image Differencing
Image differencing is perhaps the simplest of all change detection
methods. It is based upon the principle of subtracting the pixel
responses in one image from the corresponding pixel responses in
another image; whether a given sign in the output image represents
an increase or a decrease in pixel response depends on the order of
subtraction. For example, if the response from the latest image was
subtracted from the response from the older image, then all
negative values in the output image would indicate an increase in
pixel brightness.
All increases in scene brightness would cause negative pixel values
in the output image but this is not to say that all negative values
indicate significantly ‘changed’ pixels. The histogram of the resultant
dataset needs to be manipulated to isolate areas of
increase/decrease that are significant.
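Following the convention used later in the procedure (newest image as Input1, so positive values are increases), differencing can be sketched as (NumPy, illustrative values):

```python
import numpy as np

def difference_image(newest, older):
    """With the newest image as Input1, positive output values are
    increases in pixel brightness and negative values are decreases."""
    return np.asarray(newest, dtype=float) - np.asarray(older, dtype=float)

# Cleared vegetation brightens; a planted crop darkens.
older  = np.array([40, 90, 70], dtype=float)
newest = np.array([85, 35, 70], dtype=float)
diff = difference_image(newest, older)
increases = diff > 0   # candidate brightness increases, before thresholding
```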
To create a difference image
The following documents a procedure for creating a meaningful
difference image that isolates areas of interest.
1. The first step is to create a virtual dataset comprising both warped
images (that have been corrected for atmospheric effects if
necessary).
Before any further processing is carried out, it is advisable to display
both datasets side by side, set Geolink to Window and simply scroll
through the image to get a feel for the sorts of things that you may
be looking for. If you already know of an area where change has
taken place, take a look at how the pixel responses have been
affected between dates. This will help you determine what levels of
brightness change you are interested in and will later aid the
thresholding process.
2. Depending upon the change detection application, you may wish to
apply a filter to each image before calculating a difference image.
Median filters are ideal for reducing the variation between adjacent
pixels, by smoothing each dataset (Ingram et al., 1981) and are
most effective where large areas of change are to be detected. For
small changes such as changes to the road network, leave the pixels
‘raw’ to avoid introducing any errors or to avoid ‘smoothing’ the
dataset too much and losing smaller details. If you decide to apply a
filter, make sure you save the filtered image as a new dataset before
producing the virtual dataset.
3. When producing the virtual dataset, edit the layer description of each
dataset layer to reflect the dataset contents.
For example, if the image is rectified, give it a prefix of ‘rectified’
followed by the date of the image. This way, each band can be
manipulated separately in the virtual dataset. If you leave the
overlay descriptions as ‘pseudocolor’, the two images will be
combined into one band in the virtual dataset.
4. Create a change image by applying a formula to the virtual dataset
of the form:
Input1 - Input2
Save this formula into an algorithm and call it ‘Create a change
image’ or similar so that you can use it again to compare other
datasets.
5. Assign bands (datasets) to the formula inputs.
Which image you make Input1 depends upon whether you are
looking for increases or decreases in pixel brightness. For increases
in brightness, make the newest image Input1.
6. Redisplay the image after applying the formula using the grayscale
look up table.
7. The next step is to manipulate the image histogram in order to
extract those areas of interest.
The selection of threshold boundaries is one of the most critical
elements in change detection processing. You want to eliminate all
pixels that have no chance of being change pixels, at the same time
being careful not to reject any pixels that may be of interest.
Making the newest image Input1 means that all increases in pixel
brightness will have a positive response in the output image. It
follows that if you are interested in increases in pixel brightness then
you can immediately eliminate all negative pixels in the output
image. Conversely, if you are interested in decreases in pixel
brightness then reject all positive pixel responses.
To modify the histogram, first set the output limits to ‘actual’ using
the transform dialog and refresh the display. Thresholding is
typically an iterative, interactive process which requires some prior
knowledge of the types of changes in order to obtain a meaningful
result. Thresholding can be significantly aided by selecting a feature
with the highest probability of no change between dates, and
manipulating the histogram accordingly. Waterbodies are ideal for
this process.
Open up three windows, two containing the original images and one
containing the ‘change’ image. Set Geolink to Window on all three
images to ensure that they are displaying the same ground area. If
you can locate a waterbody, use this as a ‘benchmark’ for
thresholding by manipulating the histogram to eliminate any change
pixels that are occurring in water. You should make sure that the
reflectance of the waterbody in the original image is not affected by
sun glare. Alternatively, if you know of some areas where change has
occurred, pan to these areas and manipulate the histogram to
maintain the relevant pixels and reject the others. Large scale aerial
photographs are ideal for locating target change areas. Be careful
not to reject too much. If this process is undertaken for a few small
test areas, threshold values can be chosen that maximise the
likelihood of pixels in the entire image actually representing true
change. Once you are satisfied with the threshold level that you have
chosen, refresh the display.
8. Further processing can be carried out to eliminate false change
responses.
If you have regions defined, you can use them to mask out unwanted
change pixels. For example, if you are interested in changes to crop
growth and you have a vector dataset comprised of polygons
representing forested areas, waterbodies, urban areas (that is, areas
that you are not interested in) and so on, then you can convert this
vector information to region information. Then you can use the
regions in the change image, applying a formula that sets the pixel
value to null if it is within one of the defined regions. Alternatively, if
you are using a GIS or similar system for further processing you may
wish to carry out this stage of processing in that system.
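The region-masking formula described above amounts to the following (a NumPy sketch; the region raster here is a hypothetical boolean mask standing in for converted vector polygons):

```python
import numpy as np

def mask_regions(change, unwanted_mask, null_value=0):
    """Null out change pixels that fall inside regions of no interest
    (forest, waterbodies, urban areas and so on)."""
    out = np.asarray(change, dtype=float).copy()
    out[unwanted_mask] = null_value
    return out

change = np.array([[255, 255], [0, 255]], dtype=float)
forest = np.array([[True, False], [False, False]])  # hypothetical region
masked = mask_regions(change, forest)
```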
9. You are now ready to save your change image as a new dataset.
Use the transform dialog box to make sure that all of the ‘change’
pixels are given an output value of 255 and all other pixels an output
value of 0, then change the layer type from Pseudocolor to
Classification. This will produce an image in which all ‘change’ pixels
have an output value of 1 and all other pixels are ‘null’. Save this as
a dataset using the Save as Dataset option. The dataset can now be
used as a layer. It can be draped over another SPOT scene or it can
be exported for further analysis in another system such as a GIS.
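The transform step above can be sketched as (NumPy, for illustration; it assumes increases in brightness are the target, and the threshold value is invented):

```python
import numpy as np

def change_classification(diff, threshold):
    """Give 'change' pixels an output value of 255 and all other pixels 0,
    mirroring the transform step before the layer type is changed to
    Classification."""
    diff = np.asarray(diff, dtype=float)
    return np.where(diff > threshold, 255, 0).astype(np.uint8)

mask = change_classification([5.0, 20.0, -30.0, 11.0], threshold=10.0)
```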
References
Ingram, K., Knapp, E. and Robinson, J.W. (1981), Change Detection
Technique Development for Improved Urbanized Area Delineation,
Technical Memorandum CSC/TM-81/6087, Computer Sciences
Corporation, Silver Springs, Maryland, USA.
Lodwick, G.D. and S.H. Paine (1992), Remote Sensing and Image
Interpretation, Perth, Curtin University School of Surveying and Land
Information, pp 145-146.
Richards, J.A. (1986), Remote Sensing Digital Image Analysis: An
Introduction, New York, Springer-Verlag.
Schowengerdt, R.A. (1983), Techniques of Image Processing and
Classification in Remote Sensing, New York, Academic Press.
Crop Type Classification and Area Inventory
Authors
L.A. Chisholm and R. Rumbachs, Centre for Image Analysis, Charles
Sturt University—Riverina, Locked Bag 678, Wagga Wagga, NSW
2650, Australia.
Introduction
This chapter illustrates the use of ER Mapper for crop type inventory.
It involves the use of formulae that enable the user to define complex
processing techniques such as classification, and a sequence of image
processing steps to achieve this end.
In the field of agriculture, a dominant requirement is for information
on crop conditions and areas for efficient management purposes, in
particular:
• crop identification
• area measurement
• identification of crop stress.
According to Barrett and Curtis (1992), any inventory designed to
determine areas of one particular land characteristic should be
compared with the total land area concerned. In the study
presented, a land cover classification schema was determined, from
which agricultural land was subdivided as required.
Project
The sample project used for illustration is based upon a study area
at Charles Sturt University located near Wagga Wagga, NSW. The
University Farm comprises 490 ha of University-owned land and 370
ha of leased land of which 150 ha is river flats. The farm system is
winter crops, cereals, legumes and oilseed with rain-fed lucerne and
subclover pastures for beef and sheep production. The University
Farm maintains a valuable role in demonstrating modern farming
practices, and providing a valuable resource for academic and
research pursuits. There is a need to provide an automated crop
inventory, primarily to support farm management practices, and to
provide accurate production figures to regional and State agronomy
departments.
Image processing was performed using ER Mapper on a Sun Sparc
10 workstation on a local network. Geometrically corrected SPOT
imagery obtained in July 1992 was imported into ER Mapper format
and subsetted according to the boundaries of the project area.
Various image enhancements and classifications were performed to
help identify the features of interest in the data. A dye sublimation
printer was used to produce final hardcopy of images.
Multispectral classification
Important thematic information can be extracted from
remotely-sensed imagery through a process called multispectral
classification.
Classification is a statistical process which groups homogeneous
pixels into areas of interest based upon a concept referred to as
spectral pattern recognition. Several classification algorithms are
popular in image processing, the most common being:
parallelepiped, minimum distance to means, and maximum
likelihood. While each algorithm has its advantages and
disadvantages, the maximum likelihood algorithm is most commonly
used due to its rigour.
Classification can be categorized into two methods: supervised
(human assisted), or unsupervised (clustering) techniques. Each
method serves a particular purpose, and the two methods are often
used in conjunction. Results from the process are typically in the
form of a thematic map from which information can be used to solve
a particular problem or to provide important data unobtainable from
other sources.
The steps taken using ER Mapper for this study are presented below
as a general guide, with more detail outlined later:
• classification schema
• initial image display
• image interpretation
• ground truth
• training site selection
• supervised classification
• accuracy assessment
Getting to know your image
Once the data has been imported into the system, the first step is to
determine image quality and obtain information on general image
characteristics. The information gained from this step usually
determines further processing: atmospheric corrections to remove
distortion from the image, and image enhancements to obtain the
best visual display for interpretation and/or analysis. It is often
necessary to apply a range
of image enhancement procedures to image data to display it more
effectively and increase the amount of information which can be
visually interpreted from it (Lillesand and Kiefer, 1994). This is
particularly true for training site selection, an essential precursor to
supervised classification.
Image display
Typically an image analyst initially views the image in the standard
three band colour composite (green, red, and infrared brightness
values to the blue, green and red colour guns, respectively). While
the merit of other three-band colour composites should not be
overlooked, it is wise that the composite with which the analyst is
most comfortable for interpretation be used prior to the classification
procedure to establish the location of ground truth, and to determine
training sites as input to the classification process.
Image enhancement
Image enhancement techniques are applied to the data to improve
visual interpretation. There are no hard and fast rules for producing
the single “best” image for interpretation; the analyst should
enhance the image in whatever manner they find most comfortable.
Typically, initial enhancements serve as input to further image
processing steps, with thematic information extracted from the
image using either supervised or unsupervised classification
techniques (Jensen, 1986).
Classification schema
Particularly with agricultural studies, the analyst must attempt to
account for all major land cover areas present in the image. The
schema in which the image is to be classified is worth considerable
attention before proceeding. For this study, the following schema,
derived from the classification system proposed by Anderson, was
arrived at based upon the major crops and cover types present in
quantity in the project area:
Level 1                  Level 2
Urban or built-up land   residential
Agricultural land        oats, barley, subclover, wheat, lucerne,
                         pasture, other agricultural land
Forest                   eucalypt
Water                    dams
Barren                   bare soil, transitional
(Modified from Anderson, et al., 1970)
These are the classes which are to be extracted as thematic classes
from the image and for which area statistics are to be generated.
Sequence of operations
In general, the sequence of operations for crop identification/surveys
includes data preprocessing, training set selection and classification
by statistical pattern recognition. Data preprocessing includes
radiometric correction and registration of each pixel in geometric
coincidence. The training sample selection phase aims to determine
the separable classes and subclasses in a given data set. Pattern
analysis systems have been developed that allow several methods of
class selection to be employed. Statistics can be computed and
printed out in the form of histograms, correlation matrices, and
spectral plots. One of these types of output can then be used to
group areas having similar spectral responses. Another method of
class separation consists of using clustering techniques to group
image points to minimise the overall variance of the resultant sets.
Histograms and statistics are often computed and printed for each
training site. An adequate statistical sample (>30) for each crop type
is necessary. Samples of data from each class identified by the
statistical process are then used for training the pattern classifier; for
example, the histograms for barley can be used as training sets for
the automatic classification of barley.
Following the pattern recognition phase, which produces automatic
classification of the land cover, it is necessary to evaluate the
classification accuracy quantitatively. A large number of test fields
are necessary. These are located in the computer classification and
the ground data is compared with the computer result.
There are drawbacks which should be considered. In the case of
supervised classification methods, there can be considerable
difficulty when spectral signatures show high variability. Further,
accuracy assessment demands that reference signatures be
collected directly from a training area lying within or nearby the
survey area. The unsupervised technique avoids this by not requiring
reference signatures in the data processing phase. Unsupervised
techniques group the multispectral data into a number of classes
based on the same intrinsic similarity within each class. Classes are
labeled after data processing by checking a small area belonging to
each class.
Where crops have been affected by disease or adverse
environmental conditions, dead portions of the crop can be identified
readily by using the infrared portion of the spectrum. In this spectral
region, healthy plants are typically red whereas diseased, dead or
otherwise stressed plants assume different colours. The colours of
diseased plants often range from salmon pink to dark brown
depending on the severity of the stress. Dead portions image in
green or bluish gray.
Crop discrimination using infrared bands has been studied closely
and it has been found that the percentage accuracy depends on time
of year, location and environment (Barrett and Curtis, 1987) as well
as the spatial resolution of the sensor.
A common problem is that occasionally nearly all crops image red. It
has been noted that the quality of grass pasture can often be
detected by variations in the intensity of the red hue. Multi-temporal
analysis can help to distinguish between crops when this situation
arises.
Multispectral sensing of crops
The following sections provide information on the multispectral
sensing of crops.
Spectral Reflectance Characteristics
Many studies have been performed to determine the different
spectral responses of plant types, particularly the characteristics of
a number of crops. Crop signatures can be useful throughout the
preprocessing and classification processes. It is important to note,
however, that the spectral response of a field crop depends partly on
the layering within the crop and the percent soil background.
A fundamental problem occurs when crop identification is attempted
where field sizes may be less than 1 ha. From space many of the
ground resolution elements are individually composed of a mixture
of crop categories. Thus many of the pixels generated by sensors are
not characteristic of any one crop, but relate to a mixture of several
crops.
Modelling the radiance of a crop canopy is more complex than
modelling a complete cover of natural vegetation. As the crop grows,
different
plant characteristics are developed and simultaneously the amount
of underlying soil visible to the sensor changes through the growing
season. Remote sensing devices primarily detect radiance from the
canopy but additional complicating factors must be kept in mind. The
reflectance characteristics are dependent on a number of variables
which change during the growing season, including:
• optical properties of stems and reproductive structures within the
  canopy
• leaf area indices
• canopy densities
• leaf orientations and shapes
• foliage height distribution
• transmittance of canopy components
• amounts of soil background viewed and their reflectance
  contribution
(Barrett and Curtis, 1992).
Radiometric data were acquired from field trial sites located at the
Wagga Wagga campus of Charles Sturt University for the period
February 1991 through December 1992. The field trials were
undertaken to catalogue the changes in plant reflectance throughout
the growing season and to assess crop vigour by ground radiometric
measurements and remotely-sensed imagery. It was found that the
spectral response of the crop canopy significantly changed due to:
• sky conditions and time of day
• plant development
• plant nutrition
• plant stress
• plant species.
Examples of the signatures obtained for certain crops using a handheld spectrometer are given in Figures 1 through 4.
Figure 1. Spectral response of crop species, 6 September 1991
Figure 2. Spectral response of crop species, 19 September 1991
Figures 1 and 2 show the spectral response of four different crop
species from the 1991 cultivar trial. The main effect on the spectral
responses is biomass for the cereal crops, although triticale does
tend to have a slightly higher reflectance in the blue wavelengths.
Field peas are generally a light green colour which is shown by the
high reflectance in the green wavelengths, and also higher in the
near infrared—possibly due to the broader leaf structure. The data
consists of single measurements from a treatment plot.
Figure 3. Change in clover canopy spectral response with plant biomass
Figure 3 illustrates spectral response change due to plant
development, showing data collected on the same day from areas of
clover with different biomass (displayed as plant height). The
spectral signature of clover/soil was from an area where the clover
plants were still quite small and soil was clearly visible around the
plants.
Figure 4. Change in wheat spectral response with stripe rust infection.
Figure 4 illustrates spectral response change due to plant stress,
showing the change in the spectral response for wheat infected with
stripe rust (Source: van der Rijt, et al., 1992).
Ground truth
A commonly used term for observations made on the surface of the
Earth with respect to remotely-sensed data is ground truth (Barrett
and Curtis 1992). Other terms of a similar meaning are in situ data,
or collateral data, but all refer to sampled data gathered in order to
establish a relationship between the sensor response and particular
surface conditions.
It is commonly used to determine the accuracy of categorized data
obtained through classification.
Of utmost importance to the collection of ground truth is to obtain
data within as short a time period of the acquisition of the sensor
data as possible. If this is not the case, relationships established
between the ground and sensor data can be unreliable due to
changes which occur on the ground surface over time. This is
especially true for crop studies, where the phenomena under
investigation are continually changing: height and colour of crop,
effects of stress or disease, percent cover, etc. Knowledge of the rate
of change of these variables can help to identify a sufficient window
of time to collect the ground truth with respect to the sensor
overpass. For agricultural studies, normally the shorter the time
period between these two aspects, the better.
Training Site Selection
As previously stated, supervised classification methods are based
upon prior knowledge of the image, specifically the statistical nature
of the spectral classes used to classify the image (Mather, 1986). It
is important to collect training samples with the method of
supervised classification to be performed in mind, as each method
demands different statistical requirements. Lillesand and Kiefer
(1994) emphasise that all spectral classes constituting each
information class must be adequately represented in the training set
statistics used to classify an image. Training estimates are
commonly a function of the spatial resolution and geometrical
accuracy of the remote sensing data (Barrett and Curtis, 1992).
When classifying an unknown pixel, the maximum likelihood
algorithm evaluates the variance and covariance of the category
spectral response patterns. It is assumed that the distribution of the
training sample for each category is Gaussian. The maximum
likelihood classifier delineates “equal probability contours”, with the
shape of the contours expressing the sensitivity of the likelihood
classifier to variance. Estimates of the mean vector and
variance-covariance matrix of each class are required as inputs to the
classifier. According to numerous authors (Mather, 1986; Jensen,
1986; Harrison and Jupp, 1990) the statistical validity of the result
will largely depend upon two factors:
• the size, and
• representativeness of the sample.
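The decision rule just described can be sketched in Python with NumPy (an illustration of the Gaussian maximum likelihood rule, not ER Mapper's implementation; constant terms of the log density are dropped since they do not affect the class ranking, and equal priors are assumed when none are given):

```python
import numpy as np

def max_likelihood_classify(pixel, class_stats, priors=None):
    """Assign a pixel to the class with the highest Gaussian
    log-likelihood. class_stats maps a class name to
    (mean vector, covariance matrix)."""
    best_name, best_score = None, -np.inf
    for name, (mean, cov) in class_stats.items():
        d = np.asarray(pixel, dtype=float) - mean
        # log multivariate normal density, constant terms dropped
        score = -0.5 * (np.log(np.linalg.det(cov)) + d @ np.linalg.inv(cov) @ d)
        if priors is not None:
            score += np.log(priors[name])
        if score > best_score:
            best_name, best_score = name, score
    return best_name

stats = {"A": (np.array([10.0, 10.0]), np.eye(2)),
         "B": (np.array([50.0, 50.0]), np.eye(2))}
```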
Sample size is usually related to the number of spectral bands from
which the statistical samples are to be collected. For purposes of
multivariate analysis, Mather (1987) recommends at least 30n pixels
per class, where n is the number of spectral bands. According to
Jensen (1986), ideally more than 10n pixels of training data should
be collected for each class.
There is a lack of standardisation of methods of data collection for
training sites. While there are general guidelines to be given, the
user should be aware that the tendency is to underestimate the
number of samples needed for any particular study. Remember that
the training site collection aims to determine the separable classes
and subclasses in a given data set (Barrett and Curtis, 1992). The
minimum sample sizes given here are valid only if the individual
pixels within the training sample are independent (Mather, 1986). If
a high spatial autocorrelation can be assumed, then a larger number
of training samples should be obtained to yield unbiased results.
Statistics can be computed for each band and printed in the form of
histograms, correlation matrices and spectral plots. One of these
types of output can then be used to group areas with similar spectral
responses.
In ER Mapper, regions were delineated and their spectral statistics
calculated. A region can contain multiple polygons; for instance,
several sub-regions can be delineated to represent the “barley”
class.
For the sample project, regions for each of the classes were selected
and spectral statistics were collected for each pixel found within the
training region. Usually each region is composed of many pixels. In
this project, at least 30 pixels for each region were collected. This
allows for the inverse of the covariance matrix for each class to be
calculated (Jensen, 1986) which is important for the maximum
likelihood classification algorithm.
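Computing the per-region statistics the classifier needs can be sketched as below (NumPy, for illustration; the region values are invented, and the sample-size check follows Mather's 30n guideline quoted earlier):

```python
import numpy as np

def class_statistics(samples, min_per_band=30):
    """Mean vector, covariance matrix and inverse covariance for one
    training region; samples is an (n_pixels, n_bands) array. Raises an
    error when fewer than 30n pixels are supplied."""
    samples = np.asarray(samples, dtype=float)
    n, bands = samples.shape
    if n < min_per_band * bands:
        raise ValueError(f"need at least {min_per_band * bands} pixels, got {n}")
    mean = samples.mean(axis=0)
    cov = np.cov(samples, rowvar=False)
    return mean, cov, np.linalg.inv(cov)

# Hypothetical two-band training region of 60 pixels.
region = [[i, 2 * i + (i % 3)] for i in range(60)]
mean, cov, inv_cov = class_statistics(region)
```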
Drawing training regions is described in Chapter 19, “Supervised
classification” in the ER Mapper Tutorial manual.
Histograms and statistics are then computed under the Process
Menu and printed for each region. The mean and covariance of each
crop type can be obtained from the selected regions. The major
value of the histograms is to ensure that the training regions chosen
approximate a Gaussian distribution.
Statistical summaries are also provided in the View Menu, where an
area summary report, mean summary report, standard deviations,
and distance between means can be obtained for either the image or
regions of interest. Two windows are displayed, one to display the
statistics, and the other to specify the classes to display.
An adequate statistical sample (>30) for each crop type is necessary
at this stage as previously outlined. The samples of data from each
region identified by the statistical process are used to train the
pattern classifier; for example, the histograms for barley can be used
as training sets for the automatic classification of barley.
68
Crop Type Classification and Area Inventory
It is important to realize that the processing stages within
classification procedures are iterative: training classes are updated
on the basis of classification results obtained from the previous
training data, and the process continues until satisfactory results are
achieved.
Classification methodology
Implementing the actual maximum likelihood classification in ER
Mapper consists of two separate steps:
1. statistics extraction
2. the actual maximum likelihood calculations.
Twelve regions were trained upon using the classification schema
given at the beginning of this chapter.
Spectral statistics for each region were submitted to the supervised
classifier in the Process menu. The default maximum likelihood
algorithm was used with equal prior probabilities.
At this stage regions can be added individually, or alternatively, as
for this study, all regions can be added as input to the classifier.
The Edit Class/Region Color and Name option allows colours to be
assigned. Upon completion and colour assignment to the classified
image, the image was displayed for visual assessment. This is
discussed later.
Training Class Refinement
Within the training signatures established there are often areas of
confusion between the crops of interest and other land cover types
such as forest or urban areas, and/or other crops under study
(Barrett and Curtis, 1992). This is often the result of real spectral
confusion. For this project, there was substantial spectral confusion
between the large, mixed residential area, and another area of
variable yet highly reflective “bare soil/transitional”. Confusion was
present due to the large degree of variance inherent in the
“residential” area, which actually contains semi-developed land, as
well as developed land with a mixture of greenery, cement, roofs,
etc., typical of “mixed-residential”. There are several ways to handle
this situation, the most obvious being to retrain on one of the
confused regions.
The general concept is to view a scatterplot of each region for
comparison. It is convenient to plot the training regions versus the
classified area (all bands) as an overlay onto the original image. If it
is evident that there are different classes present, there ought to be
a way to discriminate between them.
Thus, the first step taken to resolve this situation was to train on the
other class (in this case, residential) in order to obtain a more
discrete spectral signature for this region. Unfortunately, due to the
large amount of variance in the “residential” region, retraining did
not sufficiently resolve the problem.
Typicality Thresholds
The maximum likelihood classifier works upon forced allocation,
where every pixel must be allocated to a class present. Thus, one
way to limit the confusion is to place a typicality threshold on the
class. If spectral plots show the “bare soil/stubble” to be closer to
“residential” than any other class, one of the regions can be trained
upon in order to put boundaries on its typicality.
The maximum likelihood classifier calculates the posterior (relative)
probability of every pixel belonging to a class and forces it into the
one with the maximum posterior probability. This could be a
completely different class that happens to be spectrally similar. The
classifier also provides an index of typicality for each class—that is,
the probability of a pixel actually belonging to the class,
generally scaled from 0 to 100.
Whether a pixel is likely to belong to any of the original training
classes is also accounted for, where an index of typicality for each
class suggests the probability of the pixel belonging to the
corresponding training class. These values are also usually scaled
from 0 to 100.
Thus, a typicality probability indicates whether a pixel is likely to
belong to the corresponding training class. Each probability is
interpreted as a “tail area” probability in the statistical sense, so the
value of 1 is equivalent to a 1% significance level. The group
membership probabilities are relative values of the pixel belonging
to one or the other of the training classes. In other words, a value of
50 does not necessarily indicate that the pixel belongs in the
corresponding class, only that it is the most likely among the training
classes present. The typicality indicates if this is a reasonable result.
Once appropriate spectral statistics are collected, they can be
resubmitted to the classifier. After a pixel has been allocated, it
remains classified only if it falls within the specified degree of
typicality for that class; otherwise it is left unclassified.
Thus, if simply retraining upon one class still does not resolve the
situation, one can use the concept of thresholding. If one specifies a
typicality of 1%, anything falling in the 1% tail (a long distance
away from that class) is left unallocated. The user could specify the
typicality as 5%, or be stricter still and specify it as 10%, in which
case the 10% tail is not allocated. Essentially this means that the
bulk (90%) of the pixels allocated to the class are typical of it.
This idea of a threshold helps to identify what might be a separate
class. In the case of “bare soil/transitional” versus “residential”, at
the training stage the user may not know it should be a separate
class, but when a typicality is later forced upon it, one class will be
black from not being allocated.
Weighting the Classifier
In cases where discrimination between classes remains difficult, any
available a priori information can be used to obtain a more accurate
classification. The user can assign more weight to one class than
another, with the weight applied according to the area of the class.
Essentially the user is telling the classifier that the solution it comes
up with must match the fixed prior probabilities. It is quite common
for users to have access to geographic information systems (GIS)
from which areas of a known class can easily be obtained. It is fairly
straightforward to estimate the proportion of the total image area
occupied by each class. The area of each class can therefore be used
to weight the classifier, entered as a prior probability.
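Turning known class areas into prior-probability weights can be illustrated as follows; this is a minimal sketch, with invented area figures and class names, of how log-priors bias the discriminant:

```python
import math

# Hypothetical class areas (hectares), e.g. taken from a GIS layer,
# converted to prior probabilities for the classifier.
areas = {"wheat": 1200.0, "barley": 600.0, "residential": 200.0}
total = sum(areas.values())
priors = {name: a / total for name, a in areas.items()}

def weighted_pick(log_likelihoods, priors):
    """Add ln(prior) to each class log likelihood and return the
    class with the maximum weighted score."""
    scores = {c: ll + math.log(priors[c]) for c, ll in log_likelihoods.items()}
    return max(scores, key=scores.get)

# With equal spectral likelihoods, the class with the larger prior wins:
winner = weighted_pick({"wheat": -3.0, "barley": -3.0, "residential": -3.0}, priors)
print(winner)   # prints wheat
```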
A Note About Real-Time Classification
A particularly useful option in ER Mapper allows “real-time”
classification using the “Classification Display” algorithm. Once
training classes are determined, the classified pixels can be
superimposed upon the raw satellite image to aid the analyst in the
interpretation and relative accuracy assessment.
Displaying classification results
ER Mapper can display a Pseudocolor map of the resultant classes,
with options to mask atypical pixels and to display the posterior
probability and typicality indexes for a specified class.
A note about unsupervised classification
There is often considerable difficulty because insufficient knowledge
exists about the study area, spectral signatures show high
variability, and supervised techniques generally demand that
training sites lie within the survey area of interest. If one or more of
these problems exist, a useful alternative is unsupervised
classification. The unsupervised classification technique avoids these
difficulties by not requiring training sites as the basis for
classification. Unsupervised techniques group the multispectral data
into a number of classes based on intrinsic spectral similarity within
each class. The basic premise is that values within a given cover type
should be close together in spectral space, as opposed to data in
different classes being comparatively well separated (Lillesand &
Kiefer, 1994). The result of an unsupervised classification,
therefore, is spectral classes. The meaning of each class in terms of
land cover is obtained after data processing by checking a small area
belonging to each class.
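The grouping of pixels by intrinsic spectral similarity can be sketched with a minimal k-means clusterer. This illustrates the general unsupervised approach only; it is not necessarily the algorithm ER Mapper uses, and the two spectral groups are invented:

```python
import numpy as np

def kmeans(pixels, k, iters=20):
    """Minimal k-means clustering of multispectral pixels into k
    spectral classes (a sketch of the unsupervised approach).

    pixels: (n_pixels, n_bands) array.
    """
    # deterministic initialisation: k evenly spaced pixels
    centers = pixels[:: max(1, len(pixels) // k)][:k].astype(float)
    for _ in range(iters):
        # assign each pixel to the nearest spectral class centre
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each centre to the mean of its members
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers

# Two well-separated, invented 2-band spectral groups
rng = np.random.default_rng(1)
a = rng.normal([20.0, 30.0], 1.0, (50, 2))
b = rng.normal([80.0, 90.0], 1.0, (50, 2))
labels, centers = kmeans(np.vstack([a, b]), k=2)
```

The resulting labels are spectral classes only; as the text notes, their land-cover meaning still has to be assigned afterwards by checking a small area of each class.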
It is useful to assign individually selected colours to the classified
image, as the colours assigned using the Auto-gen colors option are
very similar to the colour of the class on the raw image. It can also
be useful to display similar classes in the same colour to group them.
Classification accuracy assessment
The overall accuracy of the classified image can be computed by
dividing the total number of correctly classified pixels by the total
number of reference pixels. Likewise, the accuracies of individual
categories can be calculated by dividing the number of correctly
classified pixels in each category by the total number of reference
pixels in that category.
Lillesand and Kiefer (1994) point out that error matrices should be
used with caution; if the results are good, it means no more than
that the training areas are homogeneous, the training classes were
separable, and the classification strategy employed worked well in
the training areas. In particular, they express concern with the
sampling approach taken from which pixels will be used for accuracy
assessment, and conclude that a concept of both random and
systematic sampling should be employed. In other words,
systematically sampled areas can be obtained early in the project,
usually as part of the training site selection, and random sampling
within the image after classification is complete.
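The accuracy calculations described above amount to building an error matrix and dividing its diagonal by the appropriate totals, as in this small Python sketch (the eight-pixel sample is invented):

```python
import numpy as np

def accuracy_report(reference, predicted, n_classes):
    """Error (confusion) matrix with overall and per-class accuracy.

    reference, predicted: integer class labels for the sampled
    reference pixels.  Rows of the matrix are reference classes,
    columns are the classifier's predictions.
    """
    m = np.zeros((n_classes, n_classes), dtype=int)
    for r, p in zip(reference, predicted):
        m[r, p] += 1
    overall = np.trace(m) / m.sum()            # correct pixels / all pixels
    per_class = np.diag(m) / m.sum(axis=1)     # per-category (producer's) accuracy
    return m, overall, per_class

# Invented sample of 8 reference pixels over 3 classes
ref  = [0, 0, 0, 1, 1, 1, 2, 2]
pred = [0, 0, 1, 1, 1, 1, 2, 0]
m, overall, per_class = accuracy_report(ref, pred, 3)
print(overall)   # prints 0.75 (6 of 8 correct)
```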
Agricultural areas typically consist of homogeneous fields many
hectares in area well suited for the 20-by-20m ground resolution
pixel of SPOT XS. The digital numbers of the SPOT bands for each
pixel are, however, a composite of the spectral reflectance of the
various surface materials (Sabins, 1987). Despite these mixed-pixel
effects, the resultant
SPOT classification map clearly portrayed the major categories of
land use and land cover as specified in the classification schema.
Recognition of homogeneous features such as bare soil and water was
fairly accurate as would be expected. However, as previously
discussed, crop discrimination presents problems due to the
similarity of the plant reflectance of different crops, the variability of
the reflectance within fields, and the bare soil background of crops
having less than 100% ground cover.
Reasonably good results were obtained for recognition of crops
outlined in this study. There remained a slight problem with respect
to the residential region tending to be confused with the bare
soil/transition region. However, this was satisfactorily resolved using
the methodology outlined.
Summarizing the position of remote sensing in crop analysis
The crop calendar is an important guide to which data sets are likely
to be most useful and should serve as a basis for ordering data.
If the area under study contains classes which are difficult to
separate spectrally, several procedures can be used to resolve
the situation. If sufficient ground truth can be obtained, weightings
can be assigned to the classifier prior to processing the image. In
addition, multitemporal image analysis has been shown to improve
classification accuracy, particularly in agricultural areas. Image
ratios and transformations are also useful in removing redundancy.
Often, important areas of confusion exist between crops of interest
and other land cover categories, such as forest or urban areas, and
even with other crops under study. It is recognized that this is often
the result of real spectral confusion. Two strategies often employed
to improve the accuracy of classification are:
• grouping crops together to form a more homogeneous class for better spectral recognition, then splitting these group categories by ground information alone.
• masking of water, forest and urban areas by means of visual interpretation of images of previous years.
References
Anderson, J.R., Hardy, E., Roach, J., and Witmer, R. (1976) A land
use and land cover classification system for use with remote sensor
data, US Geological Survey Professional Paper 964.
Barrett, E.C. and Curtis, L.F. (1992) Introduction to environmental
remote sensing, London: Chapman & Hall, 3rd edition.
Harrison, B. and Jupp, D.L.B. (1990) Introduction to image
processing, Melbourne: CSIRO Publications.
Jensen, J.R. (1986) Introductory digital image processing: a remote
sensing perspective, Englewood Cliffs: Prentice-Hall.
Lillesand, T.M. and Kiefer, R.W. (1994) Remote sensing and image
interpretation, New York: John Wiley & Sons, Inc, 3rd edition.
Mather, P.M. (1987) Computer processing of remotely-sensed
images: an introduction, Chichester: John Wiley & Sons.
Sabins, F.F. (1987) Remote sensing: principles and interpretation,
New York: W.H. Freeman and Company.
van der Rijt, V. et al. (1992) Plant canopy spectral responses from
field trials; a component of the airborne video imagery project, Data
Report 1992-1, Charles Sturt University, Wagga Wagga.
Vegetation in Remote Sensing FAQs
This FAQ has been used with permission. It is version 1.0 dated 13
October 1994.
The text has been formatted to be consistent with the manual but is
otherwise unchanged.
Author
Terrill W. Ray, Division of Geological and Planetary Sciences,
California Institute of Technology, Mail Code 170-25, Pasadena, CA
91125, USA.
Acknowledgements
Thanks to the following people for comments and suggestions (listed
in no particular order):
• A. Chehbouni - ORSTOM
• Martin Hugh-Jones - Louisiana State University
• Kjeld Rasmussen -
• Mike Stevens - University of Nottingham
Revision history
Listed below is a description of each version.
Version 1.0
Major revision. Discussion of radiance vs. reflectance added.
Addition of vegetation indices designed to minimize atmospheric
noise (GEMI, ARVI, etc.). Addition of SPOT HRV bands. Numerous
minor changes. Cautions regarding the use of SAVI, MSAVI, etc.
added.
Version 0.7
Numerous minor non-substantive typographical errors fixed.
Addition of question 14a. Some stylistic and grammatical problems
dealt with.
Version 0.6
Major typographical error in TSAVI equation fixed and minor error in
MSAVI2 fixed.
Version 0.5
Original version posted.
Conventions
In most cases, reflectance, apparent reflectance and radiance can be
used interchangeably in this FAQ. (But see question #5 for some
important considerations about this.)
Wavelengths are given in nanometers (nm).
The “origin” is the point of zero red reflectance and zero near-infrared reflectance.
The abbreviation SPOT refers to the Systeme Pour l’Observation de
la Terre which has five bands of interest (the bandpasses may not be
precisely correct since the document I am looking at lists the
“proposed” bands):
• SPOT1 covers 430-470 nm
• SPOT2 covers 500-590 nm
• SPOT3 covers 610-680 nm
• SPOT4 covers 790-890 nm
• SPOT5 covers 1580-1750 nm
The abbreviation AVHRR refers to the Advanced Very High Resolution
Radiometer which has two bands of interest:
• AVHRR1 covers 550-700 nm
• AVHRR2 covers 700-1000 nm
The abbreviation TM refers to the Landsat Thematic Mapper which
has six bands of interest:
• TM1 covers 450-520 nm
• TM2 covers 520-600 nm
• TM3 covers 630-690 nm
• TM4 covers 760-900 nm
• TM5 covers 1550-1750 nm
• TM7 covers 2080-2350 nm
The abbreviation MSS refers to the Landsat MultiSpectral Scanner.
MSS bands are referred to by the old system:
• MSS4 covers 500-600 nm
• MSS5 covers 600-700 nm
• MSS6 covers 700-800 nm
• MSS7 covers 800-1100 nm
NIR is used to indicate a band covering all or part of the near-infrared
portion of the spectrum (800-1100 nm or a subset of these
wavelengths). Examples: MSS7, TM4, AVHRR2.
R is used to indicate a band covering all or part of the portion of the
visible spectrum perceived as red by the human eye (600-700 nm).
Examples: MSS5, TM3, AVHRR1.
List of questions
Provided below is a list of the questions answered in the next
section, to help you familiarize yourself with the subject matter.
General
1) What are the important spectral characteristics of vegetation that
I should know about?
2) I have some remote sensing data, what bands will show
vegetation best?
2a) TM data
2b) MSS data
3) I want to use band ratioing to eliminate albedo effects and
shadows. What band ratios are best?
3a) TM data
3b) MSS data
4) Why is vegetation usually shown in red by remote sensing people?
5) What is the difference between radiance and reflectance?
Vegetation index
6) What is a vegetation index?
7) What are the basic assumptions made by the vegetation indices?
8) What is the soil line and how do I find it?
Basic indices
9) What is RVI?
10) What is NDVI?
11) What is IPVI?
12) What is DVI?
13) What is PVI?
14) What is WDVI?
Indices to minimize soil noise
15) What is Soil Noise?
16) What is SAVI?
16a) Why is there a (1+L) term in SAVI?
17) What is TSAVI?
18) What is MSAVI?
19) What is MSAVI2?
Indices to minimize atmospheric noise
20) What is Atmospheric Noise?
21) What is GEMI?
22) What are the atmospherically resistant indices?
Other indices
23) What is GVI?
24) Are there vegetation indices using other algebraic functions of
the bands?
25) Are there vegetation indices that use bands other than the red
and NIR bands?
26) Plants are green, why isn’t the green chlorophyll feature used
directly?
Problems
27) How well do these vegetation indices work in areas with low
vegetation cover?
28) What is “non-linear” mixing?
29) Is the variation in the soil the only problem?
30) What if I can’t get a good soil line from my data?
31) How low a plant cover is too low for these indices?
Future directions
32) I hear about people using spectral unmixing to look at
vegetation, how does this work?
33) Are there any indices which use high spectral resolution data?
Final question
34) What vegetation index should I use?
Answers
General
Provided in this section are the answers to the questions in the
previous “General” section.
1) What are the important spectral characteristics of vegetation that I should
know about?
A: The cells in plant leaves are very effective scatterers of light
because of the high contrast in the index of refraction between the
water-rich cell contents and the intercellular air spaces.
Vegetation is very dark in the visible (400-700 nm) because of the
high absorption of pigments which occur in leaves (chlorophyll,
protochlorophyll, xanthophyll, etc.). There is a slight increase in
reflectivity around 550 nm (visible green) because the pigments are
least absorptive there. In the spectral range 700-1300 nm plants are
very bright because this is a spectral no-man’s land between the
electronic transitions which provide absorption in the visible and
molecular vibrations which absorb in longer wavelengths. There is no
strong absorption in this spectral range, but the plant scatters
strongly as mentioned above.
From 1300 nm to about 2500 nm vegetation is relatively dark,
primarily because of the absorption by leaf water. Cellulose, lignin,
and other plant materials also absorb in this spectral range.
Summary:
• 400-700 nm = dark
• 700-1300 nm = bright
• 1300-2500 nm = dark (but brighter than 400-700 nm)
2) I have some remote sensing data, what bands will show vegetation best?
A: Basically a band covering part of the region from 700-1300 nm if
you want the vegetation to be bright. (Using a band covering part of
400-700 nm would make vegetation dark, but this isn’t the way we
generally do things.)
2A) For TM data, either TM4 or TM5.
2B) For MSS data, either MSS6 or MSS7 (MSS7 is usually better since
it avoids the transition near 700 nm).
3) I want to use band ratioing to eliminate albedo effects and shadows. What
band ratios are best?
A: If you want the vegetation to turn out bright (which is usually the
most sensible approach), ratio a band covering part of the range
700-1300 nm with a band covering either 400-700 nm or 1300-2500 nm.
Ratioing a near-infrared band to a visible band is the traditional
approach. Usually a visible band covering 650 nm is preferred since
this is near the darkest part of the vegetation spectrum usually
covered by remote sensing instruments. Basically you want a band
where vegetation is bright on the top of the ratio, and a band where
vegetation is dark on the bottom.
Although vegetation is more highly reflective in green than in red,
early work showed that near-infrared-red combinations were
preferable to green-red combinations (Tucker, 1979).
3A) TM: The traditional ratio is TM4/TM3. TM5/TM7 is also good, but
many clays will also be fairly bright with this combination. I see no
immediate reason why TM5/TM3 or TM4/TM7 wouldn’t work, but
they usually aren’t used.
3B) MSS: The traditional ratio is MSS7/MSS5. MSS6/MSS5 is also
used.
4) Why is vegetation usually shown in red by remote sensing people?
A: This is one of the apparently silly things done in remote sensing.
There are three reasons for it:
The first (and rather pointless) reason is TRADITION. People in
remote sensing have been doing this a long time and virtually
everyone who has spent much time working with remote sensing will
instinctively interpret red splotches as vegetation. Bob Crippen (not
the astronaut) at JPL said that he spent some time trying to break
this tradition by showing vegetation in green, but he was ultimately
beaten into submission. (Consider it this way: you are a remote
sensing professional, you usually give talks to remote sensing
professionals. They expect vegetation in red so you don’t have to add
an explanation that “vegetation is shown as green.” This simplifies
your life.)
The second reason is the fact that the human eye perceives the
longest visible wavelengths to be red and the shortest visible
wavelengths to be blue. This is an incentive for remote sensing
images to be set up so that the shortest wavelength is shown as blue
and the longest one is shown as red. Usually a near-infrared band is
the longest wavelength being displayed (this is especially true for
MSS and aerial color infrared photography). Since vegetation is
brightest in the near-infrared, vegetation turns out red. Using red for
vegetation in digital data makes the digital data color scheme similar
to that for color infrared film. This can make it easier for a person
familiar with color infrared film pictures to adjust to the
interpretation of digital remote sensing data.
The third (and only sensible) reason is to remind the audience that
they are not seeing real colors. If vegetation is shown as green, the
audience is more likely to subconsciously think that the image is true
color, while if vegetation is red they will immediately realize that the
image is false color.
5) What is the difference between radiance and reflectance?
A: Radiance is the variable directly measured by remote sensing
instruments. Basically, you can think of radiance as how much light
the instrument “sees” from the object being observed. When looking
through an atmosphere, some light scattered by the atmosphere will
be seen by the instrument and included in the observed radiance of
the target. An atmosphere will also absorb light, which will decrease
the observed radiance. Radiance has units of watt/steradian/square
meter.
Reflectance is the ratio of the amount of light leaving a target to the
amount of light striking the target. It has no units. If all of the light
leaving the target is intercepted for the measurement of reflectance,
the result is called “hemispherical reflectance.”
Reflectance (or more specifically hemispherical reflectance) is a
property of the material being observed. Radiance, on the other
hand, depends on the illumination (both its intensity and direction),
the orientation and position of the target and the path of the light
through the atmosphere. With effort, many of the atmospheric
effects and the solar illumination can be compensated for in digital
remote sensing data. This yields something which is called “apparent
reflectance,” and it differs from true reflectance in that shadows and
directional effects on reflectance have not been dealt with. Many
people refer to this (rather inaccurately) as “reflectance.”
For most of the vegetation indices in this FAQ, radiance, reflectance,
and apparent reflectance can be used interchangeably. However,
since reflectance is a property of the target material itself, you will
get the most reliable (and repeatable) vegetation index values using
reflectance. Apparent reflectance is adequate in many cases.
Vegetation index
Provided here are the answers to the questions posed under the
previous “Vegetation index” subheading.
6) What is a vegetation index?
A: A vegetation index is a number that is generated by some
combination of remote sensing bands and may have some
relationship to the amount of vegetation in a given image pixel. If
that sounds sarcastic or even insulting, it’s meant to. Jim Westphal
at Caltech pointed out to me one day that vegetation indices seemed
to be more numerology than science. This may be an overly harsh
assessment, since there is some basis for vegetation indices in terms
of the features of the vegetation spectrum discussed above;
however, the literature indicates that these vegetation indices are
generally based on empirical evidence and not basic biology,
chemistry or physics. This should be kept in mind as you use these
indices.
7) What are the basic assumptions made by the vegetation indices?
A: The most basic assumption made is assuming that some algebraic
combination of remotely-sensed spectral bands can tell you
something useful about vegetation. There is fairly good empirical
evidence that they can.
A second assumption is the idea that all bare soil in an image will
form a line in spectral space. This is related to the concept of the soil
line discussed in question number 8. Nearly all of the commonly used
vegetation indices are only concerned with red-near-infrared space,
so a red-near-infrared line for bare soil is assumed. This line is
considered to be the line of zero vegetation.
At this point, there are two divergent lines of thinking about the
orientation of lines of equal vegetation (isovegetation lines):
1) All isovegetation lines converge at a single point. The indices that
use this assumption are the “ratio-based” indices, which measure
the slope of the line between the point of convergence and the
red-NIR point of the pixel. Some examples are: NDVI, SAVI, and RVI.
2) All isovegetation lines remain parallel to soil line. These indices
are typically called “perpendicular” indices and they measure the
perpendicular distance from the soil line to the red-NIR point of the
pixel. Examples are: PVI, WDVI, and DVI.
8) What is the soil line and how do I find it?
A) The soil line is a hypothetical line in spectral space that describes
the variation in the spectrum of bare soil in the image. The line can
be found by locating two or more patches of bare soil in the image
having different reflectivities and finding the best fit line in spectral
space. Kauth and Thomas (1976) described the famous “triangular,
cap shaped region with a tassel” in red-NIR space using MSS data.
They found that the point of the cap (which lies at low red reflectance
and high NIR reflectance) represented regions of high vegetation
and that the flat side of the cap directly opposite the point
represented bare soil.
THE SIMPLE WAY OF FINDING THE RED-NIR SOIL LINE: Make a
scatterplot of the red and NIR values for the pixels in the image. I
recommend putting red on the x-axis and NIR on the y-axis (the rest
of the instructions assume this). There should be a fairly linear
boundary along the lower right side of the scatterplot. The straight
line that best matches this boundary is your soil line. You can either
select the points that describe the boundary and do a least squares
fit, or you can simply make a hardcopy and draw in the line that
looks like the best fit. (You have to make a lot of judgment calls
either way.)
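The least-squares option mentioned above can be done directly with a polynomial fit; this is a sketch with invented bare-soil reflectances, and it assumes you have already selected the boundary pixels:

```python
import numpy as np

def soil_line(red, nir):
    """Least-squares fit of NIR = g * red + b through bare-soil
    pixels; returns the slope g and intercept b of the soil line."""
    g, b = np.polyfit(red, nir, 1)
    return g, b

# Invented bare-soil reflectances lying on NIR = 1.2 * red + 0.02
red = np.array([0.10, 0.20, 0.30, 0.40])
nir = np.array([0.14, 0.26, 0.38, 0.50])
g, b = soil_line(red, nir)
```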
Basic indices
This section answers the questions posed in the previous “Basic
indices” section.
9) What is RVI?
A) RVI is the ratio vegetation index which was first described by
Jordan (1969). This is the most widely calculated vegetation index,
although you rarely hear of it as a vegetation index. A common
practice in remote sensing is the use of band ratios to eliminate
various albedo effects. Many people use the ratio of NIR to red as the
vegetation component of the scene, and this is in fact the RVI.
Summary:
• ratio-based index
• isovegetation lines converge at origin
• soil line has slope of 1 and passes through origin
• range 0 to infinity
Calculating RVI:
RVI = NIR / red
10) What is NDVI?
A) NDVI is the Normalized Difference Vegetation Index which is
ascribed to Rouse et al. (1973), but the concept of a normalized
difference index was first presented by Kriegler et al. (1969). When
people say vegetation index, this is the one that they are usually
referring to. This index has the advantage of varying between -1 and
1, while the RVI ranges from 0 to infinity. RVI and NDVI are
functionally equivalent and related to each other by the following
equation:
NDVI = (RVI - 1) / (RVI + 1)
Summary:
• ratio-based index
• isovegetation lines converge at origin
• soil line has slope of 1 and passes through origin
• range -1 to +1
Calculating the NDVI:
NDVI = (NIR - red) / (NIR + red)
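Both NDVI and its relationship to RVI are easy to verify numerically; a small Python sketch with invented reflectance values:

```python
import numpy as np

def ndvi(nir, red):
    """NDVI computed per pixel from NIR and red reflectance arrays."""
    return (nir - red) / (nir + red)

# Invented reflectances: dense vegetation, sparse vegetation, bare soil
nir = np.array([0.50, 0.40, 0.30])
red = np.array([0.10, 0.20, 0.30])
v = ndvi(nir, red)
rvi = nir / red
# the functional equivalence quoted above: NDVI = (RVI - 1) / (RVI + 1)
assert np.allclose(v, (rvi - 1) / (rvi + 1))
```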
11) What is IPVI?
A) IPVI is the Infrared Percentage Vegetation Index which was first
described by Crippen (1990). Crippen found that the subtraction of
the red in the numerator was irrelevant, and proposed this index as
a way of improving calculation speed. It also is restricted to values
between 0 and 1, which eliminates the need for storing a sign for the
vegetation index values, and it eliminates the conceptual
strangeness of negative values for vegetation indices. IPVI and NDVI
are functionally equivalent and related to each other by the following
equation:
IPVI = (NDVI + 1) / 2
Summary:
• ratio-based index
• isovegetation lines converge at origin
• soil line has a slope of 1 and passes through origin
• range 0 to +1
Calculating IPVI:
IPVI = NIR / (NIR + red)
12) What is DVI?
A) DVI is the Difference Vegetation Index, which is ascribed in some
recent papers to Richardson and Everitt (1992), but appears as VI
(vegetation index) in Lillesand and Kiefer (1987). (Lillesand and
Kiefer refer to its common use, so it was certainly introduced earlier,
but they do not give a specific reference.)
Summary:
• perpendicular index
• isovegetation lines parallel to soil line
• soil line has arbitrary slope and passes through origin
• range infinite
Calculating DVI:
DVI = NIR - red
13) What is PVI?
A) PVI is the Perpendicular Vegetation Index which was first
described by Richardson and Wiegand (1977). This could be
considered a generalization of the DVI which allows for soil lines of
different slopes. PVI is quite sensitive to atmospheric variations
(Qi et al., 1994), so comparing PVI values for data taken at different
dates is hazardous unless an atmospheric correction is performed on
the data.
Summary:
84
•
perpendicular index
•
isovegetation lines are parallel to soil line
•
soil line has arbitrary slope and passes through origin
•
range -1 to +1
Calculating PVI:
PVI = sin(a)NIR - cos(a)red
where a is the angle between the soil line and the NIR axis.
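A short Python sketch of PVI (the soil line angle and reflectance values below are hypothetical):

```python
import numpy as np

def pvi(nir, red, soil_angle_deg):
    """Perpendicular Vegetation Index: sin(a)*NIR - cos(a)*red,
    where a is the angle between the soil line and the NIR axis."""
    a = np.radians(soil_angle_deg)
    return np.sin(a) * nir - np.cos(a) * red

# For a 45-degree soil line (NIR = red), a bare-soil pixel scores about 0,
# while a vegetated pixel (high NIR for its red value) scores above 0.
print(abs(pvi(0.3, 0.3, 45.0)) < 1e-12, pvi(0.5, 0.1, 45.0) > 0)  # True True
```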
14) What is WDVI?
A) WDVI is the Weighted Difference Vegetation Index which was
introduced by Clevers (1988). This has a relationship to PVI similar
to the relationship IPVI has to NDVI. WDVI is a mathematically
simpler version of PVI, but it has an unrestricted range. Like PVI,
WDVI is very sensitive to atmospheric variations (Qi et al., 1994).
Summary:
•
perpendicular index
•
isovegetation lines parallel to soil line
•
soil line has arbitrary slope and passes through origin
•
range infinite
Calculating WDVI:
WDVI = NIR - g x red
where g is the slope of the soil line.
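In code, WDVI is a one-liner; with a soil line slope of 1 it reduces to DVI (the values below are hypothetical):

```python
def wdvi(nir, red, g):
    """Weighted Difference Vegetation Index: NIR - g * red,
    where g is the slope of the soil line (Clevers, 1988)."""
    return nir - g * red

# With g = 1 the soil line has unit slope and WDVI equals DVI = NIR - red.
print(wdvi(0.5, 0.25, 1.0))  # 0.25
```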
Indices to minimize soil
noise
This subsection lists and answers the questions that were stated
under one of the subheadings in the previous section.
15) What is Soil Noise?
A) Not all soils are alike. Different soils have different reflectance
spectra. As discussed above, all of the vegetation indices assume
that there is a soil line, where there is a single slope in red-NIR
space. However, it is often the case that there are soils with different
red-NIR slopes in a single image. Also, if the assumption about the
isovegetation lines (parallel or intercepting at the origin) is not
exactly right, changes in soil moisture (which move along
isovegetation lines) will give incorrect answers for the vegetation
index. The problem of soil noise is most acute when vegetation cover
is low.
The following group of indices attempt to reduce soil noise by
altering the behaviour of the isovegetation lines. All of them are
ratio-based, and the way that they attempt to reduce soil noise is by
shifting the place where the isovegetation lines meet.
These indices reduce soil noise at the cost of decreasing the
dynamic range of the index. These indices are slightly less
sensitive to changes in vegetation cover than NDVI (but more
sensitive than PVI) at low levels of vegetation cover. These
indices are also more sensitive to atmospheric variations than
NDVI (but less so than PVI). (See Qi et al. (1994) for
comparisons.)
16) What is SAVI?
A) SAVI is the Soil Adjusted Vegetation Index which was introduced
by Huete (1988). This index attempts to be a hybrid between the
ratio-based indices and the perpendicular indices. The reasoning
behind this index acknowledges that the isovegetation lines are not
parallel, and that they do not all converge at a single point. The initial
construction of this index was based on measurements of cotton and
range grass canopies with dark and light soil backgrounds, and the
adjustment factor L was found by trial and error until a factor that
gave equal vegetation index results for the dark and light soils was
found. The result is a ratio-based index where the point of
convergence is not the origin. The convergence point ends up being
in the quadrant of negative NIR and red values, which causes the
isovegetation lines to be more parallel in the region of positive NIR
and red values than is the case for RVI, NDVI, and IPVI.
Huete (1988) does present a theoretical basis for this index based
on simple radiative transfer, so SAVI probably has one of the better
theoretical backgrounds of the vegetation indices. However, the
theoretical development gives a significantly different correction
factor for a leaf area index of 1 (0.5) than resulted from the empirical
development for the same leaf area index (0.75). The correction
factor was found to vary between 0 for very high densities to 1 for
very low densities. The standard value typically used in most
applications is 0.5 which is for intermediate vegetation densities.
Summary:
•
ratio-based index
•
isovegetation lines converge in negative red, negative NIR
quadrant
•
soil line has slope of 1 and passes through origin.
•
range -1 to +1
Calculating SAVI:
SAVI = {(NIR - red) / (NIR + red + L)}(1 + L)
where L is a correction factor which ranges from 0 for very high
vegetation cover to 1 for very low vegetation cover. The most
typically used value is 0.5 which is for intermediate vegetation cover.
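A Python sketch of SAVI with hypothetical reflectances; note that setting L = 0 recovers NDVI exactly:

```python
def savi(nir, red, L=0.5):
    """Soil Adjusted Vegetation Index (Huete, 1988).
    L runs from 0 (very high cover) to 1 (very low cover)."""
    return ((nir - red) / (nir + red + L)) * (1 + L)

def ndvi(nir, red):
    return (nir - red) / (nir + red)

# With L = 0 the soil correction vanishes and SAVI reduces to NDVI.
print(savi(0.4, 0.1, L=0.0) == ndvi(0.4, 0.1))  # True
```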
16a) Why is there a (1+L) term in SAVI?
A) This multiplicative term is present in SAVI (and MSAVI) to cause
the range of the vegetation index to be from -1 to +1.
This is done so that both vegetation indices reduce to NDVI when the
adjustment factor L goes to zero.
17) What is TSAVI?
A) TSAVI is the Transformed Soil Adjusted Vegetation Index which
was developed by Baret et al. (1989) and Baret and Guyot (1991).
This index assumes that the soil line has arbitrary slope and
intercept, and it makes use of these values to adjust the vegetation
index. This would be a nice way of escaping the arbitrariness of the
L in SAVI if an additional adjustment parameter had not been
included in the index. The parameter “X” was “adjusted so as to
minimize the soil background effect,” but I have not yet been able to
come up with an a priori, non-arbitrary way of finding the parameter.
The value reported in the papers is 0.08. The convergence point of
the isovegetation lines lies between the origin and the usually-used
SAVI convergence point (for L = 0.5).
Summary:
•
ratio-based index
•
isovegetation lines converge in negative red, negative NIR
quadrant
•
soil line has arbitrary slope and intercept
•
range -1 to +1
Calculating TSAVI:
TSAVI = {s x (NIR - s x red - a)} / {a x NIR + red - a x s + X x (1 + s x s)}
where a is the soil line intercept, s is the soil line slope, and X is an
adjustment factor which is set to minimize soil noise (0.08 in original
papers).
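A Python sketch of TSAVI using the soil line parameters (the slope, intercept, and reflectances below are hypothetical):

```python
def tsavi(nir, red, s, a, X=0.08):
    """Transformed Soil Adjusted Vegetation Index (Baret et al., 1989).
    s: soil line slope, a: soil line intercept, X: adjustment factor."""
    return s * (nir - s * red - a) / (a * nir + red - a * s + X * (1 + s * s))

# A pixel lying exactly on the soil line NIR = s*red + a scores about 0;
# a vegetated pixel (high NIR for its red value) scores higher.
s, a = 1.2, 0.02
bare = tsavi(s * 0.3 + a, 0.3, s, a)
veg = tsavi(0.5, 0.1, s, a)
print(abs(bare) < 1e-9, veg > bare)  # True True
```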
18) What is MSAVI?
A) MSAVI is the Modified Soil Adjusted Vegetation Index which was
developed by Qi et al. (1994). As noted previously, the adjustment
factor L for SAVI depends on the level of vegetation cover being
observed which leads to the circular problem of needing to know the
vegetation cover before calculating the vegetation index which is
what gives you the vegetation cover. The basic idea of MSAVI was to
provide a variable correction factor L. The correction factor used is
based on the product of NDVI and WDVI. This means that the
isovegetation lines do not converge to a single point.
Summary:
•
ratio-based index
•
isovegetation lines cross the soil line at different points
•
soil line has arbitrary slope and passes through origin
•
range -1 to +1
Calculating MSAVI:
MSAVI = {(NIR - red) / (NIR + red + L)} (1 + L)
where L = 1 - 2s(NDVI)(WDVI), and s is the slope of the soil line.
19) What is MSAVI2?
A) MSAVI2 is the second Modified Soil Adjusted Vegetation Index
which was developed by Qi et al. (1994) as a recursion of MSAVI.
Basically, they use an iterative process and substitute 1-MSAVI(n-1)
as the L factor in MSAVI(n). They then inductively solve the iteration
where MSAVI(n)=MSAVI(n-1). In the process, the need to
precalculate WDVI and NDVI and the need to find the soil line are
eliminated.
Summary:
•
ratio-based
•
isovegetation lines cross the soil line at varying points
•
soil line has arbitrary slope and passes through origin
•
range -1 to +1
Calculating MSAVI2:
MSAVI2 = (1/2) x {(2 x NIR + 1) - sqrt[(2 x NIR + 1)^2 - 8 x (NIR - red)]}
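The closed form can be checked numerically against the MSAVI recursion it solves; a Python sketch with hypothetical reflectances:

```python
import math

def msavi2(nir, red):
    """MSAVI2 (Qi et al., 1994): closed-form solution of the MSAVI recursion."""
    return 0.5 * (2 * nir + 1 - math.sqrt((2 * nir + 1) ** 2 - 8 * (nir - red)))

# Iterating MSAVI with L = 1 - MSAVI converges to the same value,
# which is how Qi et al. derived the closed form.
nir, red = 0.4, 0.2
m = 0.0
for _ in range(50):
    L = 1 - m
    m = ((nir - red) / (nir + red + L)) * (1 + L)
print(abs(m - msavi2(nir, red)) < 1e-9)  # True
```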
Indices to minimize
atmospheric noise
This section provides the answers to the questions in the previous
section.
20) What is Atmospheric Noise?
A) The atmosphere is changing all of the time and all remote
sensing instruments have to look through it. The atmosphere both
attenuates light passing through it and scatters light from suspended
aerosols. The atmosphere can vary strongly across a single scene,
especially in areas with high relief. This alters the light seen by the
instrument and can cause variations in the calculated values of
vegetation indices. This is particularly a problem for comparing
vegetation index values for different dates. The following indices try
to remedy this problem without the requirement of atmospherically
corrected data.
These indices achieve their reduced sensitivity to the
atmosphere by decreasing the dynamic range. They are
generally slightly less sensitive to changes in vegetation cover
than NDVI. At low levels they are very sensitive to the soil
background. (See Qi et al. (1994) for comparisons.)
I seldom work with data without performing an atmospheric
correction, so I have made no significant use of any of the
indices in this section (T. Ray).
21) What is GEMI?
A) GEMI is the Global Environmental Monitoring Index which was
developed by Pinty and Verstraete (1991). They attempt to eliminate
the need for a detailed atmospheric correction by constructing a
“stock” atmospheric correction for the vegetation index. Pinty and
Verstraete (1991) provide no detailed reasoning for this index other
than that it meets their requirements of insensitivity to the
atmosphere empirically. A paper by Leprieur et al. (1994) claims to
find that GEMI is superior to other indices for satellite
measurements. However, A. Chehbouni (who happens to be the
fourth author of Leprieur et al. (1994)) showed me some examples
using real data (the analysis in the paper was based on a model)
which strongly contradicted the Leprieur et al. (1994) conclusions.
Qi et al. (1994) shows a violent breakdown of GEMI with respect to
soil noise at low vegetation covers. I understand that there are
several ongoing studies to evaluate GEMI, and I think that the jury
is still out.
Summary:
•
non-linear
•
complex vegetation isolines
•
range 0 to +1
Calculating GEMI:
GEMI = eta x (1 - 0.25 x eta) - {(red - 0.125) / (1 - red)}
where eta = {2 x (NIR^2 - red^2) + 1.5 x NIR + 0.5 x red} /
(NIR + red + 0.5)
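GEMI in Python, with hypothetical reflectances (note that the index is non-linear in both bands):

```python
def gemi(nir, red):
    """Global Environmental Monitoring Index (Pinty and Verstraete, 1991)."""
    eta = (2 * (nir ** 2 - red ** 2) + 1.5 * nir + 0.5 * red) / (nir + red + 0.5)
    return eta * (1 - 0.25 * eta) - (red - 0.125) / (1 - red)

# A vegetated pixel scores well above a bare one.
print(round(gemi(0.4, 0.1), 4))  # 0.7522
```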
22) What are the atmospherically resistant indices?
A) The atmospherically resistant indices are a family of indices with
built-in atmospheric corrections. The first of these was ARVI
(Atmospherically Resistant Vegetation Index) which was introduced
by Kaufman and Tanre (1992). They replaced the red reflectance in
NDVI with the term:
rb = red - gamma(blue - red)
with a value of 1.0 for gamma. Kaufman and Tanre (1994) also
suggested making the same substitution in SAVI which yields SARVI
(Soil adjusted Atmospherically Resistant Vegetation Index). Qi et al.
(1994) suggested the same substitution in MSAVI2 which yields
ASVI (Atmosphere-Soil-Vegetation Index). Obviously the same
substitution can also be made in MSAVI or TSAVI.
Qi et al. (1994) showed that this class of indices were very slightly
more sensitive to changes in vegetation cover than GEMI and very
slightly less sensitive to the atmosphere and the soil than GEMI for
moderate to high vegetation cover. The atmospheric insensitivity
and the insensitivity to soil break down violently for low vegetation
cover.
Summary:
•
ratio-based
•
isovegetation lines cross as assumed by parent index
•
soil line as assumed by parent index
•
range -1 to +1
Calculating ARVI:
ARVI = (NIR - rb) / (NIR + rb)
with rb defined as:
rb = red - gamma x (blue - red)
and gamma usually equal to 1.0
The parent index of ARVI is NDVI. The substitution of rb for red in
any of the ratio-based indices gives the atmospherically resistant
version of that index.
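A Python sketch of the rb substitution and ARVI (hypothetical reflectances); when the blue and red reflectances happen to be equal, the self-correction vanishes and ARVI reduces to NDVI:

```python
def arvi(nir, red, blue, gamma=1.0):
    """Atmospherically Resistant Vegetation Index (Kaufman and Tanre, 1992)."""
    rb = red - gamma * (blue - red)   # self-corrected red reflectance
    return (nir - rb) / (nir + rb)

def ndvi(nir, red):
    return (nir - red) / (nir + red)

# With blue == red the correction term is zero and ARVI equals NDVI.
print(arvi(0.5, 0.1, 0.1) == ndvi(0.5, 0.1))  # True
```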
“I view these indices for reducing atmospheric noise as late-evolving dinosaurs. The utility of a good atmospheric correction
for remotely-sensed data is so high as to make the effort of
performing a proper atmospheric correction worthwhile. These
end runs around this problem may serve a useful purpose at
present while better atmospheric corrections for data collected
over land are being developed. However, the move towards
atmospheric correction of remote sensing data is underway, and
it is almost certainly the wave of the future.” - Terrill Ray
Other indices
This section answers some of the questions from the previous
chapter.
23) What is GVI?
A) GVI stands for Green Vegetation Index. There are several GVIs.
The basic way these are devised is by using two or more soil points
to define a soil line. Then a Gram-Schmidt orthogonalization is
performed to find the “greenness” line which passes through the
point of 100% (or very high) vegetation cover and is perpendicular
to the soil line. The distance of the pixel spectrum in band space from
the soil line along the “greenness” axis is the value of the vegetation
index. The PVI is the 2-band version of this, Kauth and Thomas
(1976) developed a 4-band version for MSS, Crist and Cicone (1984)
developed a 6-band version for TM, and Jackson (1983) described
how to construct the n-band version.
Summary:
•
perpendicular vegetation index using n bands
•
isovegetation lines are parallel to soil line
•
soil line has arbitrary orientation in n-space
•
range -1 to +1
Calculating GVI:
Default version for MSS:
GVI = -0.29 x MSS4 - 0.56 x MSS5 + 0.60 x MSS6 + 0.49 x MSS7
Default version for TM:
GVI = -0.2848 x TM1 - 0.2435 x TM2 - 0.5436 x TM3 +
0.7243 x TM4 + 0.0840 x TM5 - 0.1800 x TM7
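GVI is just a weighted sum (a dot product) of the band values; a NumPy sketch using the TM coefficients, with a hypothetical six-band pixel:

```python
import numpy as np

# Greenness weights for the six reflective TM bands
# (TM1, TM2, TM3, TM4, TM5, TM7), from Crist and Cicone (1984).
GVI_TM_WEIGHTS = np.array([-0.2848, -0.2435, -0.5436, 0.7243, 0.0840, -0.1800])

def gvi_tm(bands):
    """GVI as the dot product of a pixel's band vector with the weights."""
    return float(np.dot(GVI_TM_WEIGHTS, np.asarray(bands, dtype=float)))

# Hypothetical vegetated pixel: strong TM4 (NIR) response.
pixel = [0.08, 0.10, 0.09, 0.45, 0.20, 0.12]
print(round(gvi_tm(pixel), 4))  # 0.2251
```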
24) Are there vegetation indices using other algebraic functions of the bands?
A) Yes. Rouse et al. (1973, 1974) proposed using the square root of
NDVI+0.5, Goetz et al. (1975) proposed log ratios, Wecksung and
Breedlove (1977) proposed arctangent ratios, and Tuck (1979)
discussed the square root of the NIR/red ratio. These seem to have
been generally abandoned. They make the same assumptions about
the isovegetation lines and the soil lines as made by RVI and NDVI,
and they have neither the value of common use or of ease of
calculation. You will probably never see these, and there is really no
good reason to bother with them.
25) Are there vegetation indices that use bands other than the red and NIR
bands?
A) Yes. First, the various GVIs make use of more than just the NIR
and red bands. In general, the GVI for a given multispectral sensor
system uses all of the available bands. Secondly, there have been
attempts to develop vegetation indices based on green and red
bands as discussed in the next question.
Mike Steven at the University of Nottingham has recently informed
me of some work on an index using NIR and mid-infrared bands.
More on this will be included when I have received a paper from him.
26) Plants are green, why isn’t the green chlorophyll feature used directly?
A) There are several reasons for this. First, the reason that plants
look so green is not because they are reflecting lots of green light,
but because they are absorbing so much of the rest of the visible
light. Try looking at an area of bare dry soil and compare that to a
grassy field. You will immediately notice that the grassy field is
generally darker. It is generally easier to detect things when they are
bright against a dark background.
Second, this was tried early in the history of satellite remote sensing
by Kanemasu (1974) and basically abandoned after a study by
Tucker (1979) which seemed to demonstrate that the combinations
of NIR and red were far superior to combinations of green and red.
The idea of using red and green with MSS data was resurrected in
recent years by Pickup et al. (1993) who proposed a PVI-like index
using MSS bands 4 and 5 which they called PD54 (Perpendicular
Distance MSS band 5 MSS band 4). They claimed a tassel cap like
pattern in the scatterplot for these two bands, but most of the MSS
data I have looked at doesn’t show this pattern. A significant point
for PD54 was that it detected non-green vegetation (dry grass).
Third, many soils have iron oxide absorption features in the visible
wavelengths. As the soil gets obscured by vegetation cover, this
feature becomes less apparent. It is likely that a great deal of the
variance measured by the green-red indices is due to this instead of
the plant chlorophyll feature (which is why PD54 might appear to be
sensitive to non-green plant material). This is fine if you know that
the iron oxide absorption in the soil is uniform across the image, but
if the iron oxide absorption is highly variable, then this will confuse
green-red indices.
Problems
This section answers the questions that were listed in the previous
section.
27) How well do these vegetation indices work in areas with low vegetation
cover?
A) Generally, very badly. When the vegetation cover is low, the
spectrum observed by remote sensing is dominated by the soil. Not
all soils have the same spectrum, even when fairly broad bands are
being used. Both Huete et al. (1985) and Elvidge and Lyon (1985)
showed that the soil background can have a profound impact on
vegetation index values, with bright backgrounds producing lower
vegetation index values than dark backgrounds. Elvidge and Lyon
(1985) showed that many background materials (soil, rock, plant
litter) vary in their red-NIR slope, and these variations seriously
impact measurements of vegetation indices. Then there is the
problem of non-linear mixing.
28) What is “non-linear” mixing?
A) A lot of remote sensing analysis has been based on the concept
of the Earth as spots covered by differently-colored paint. When the
spots of paint get too small, they appear to blend together to form a
new color which is a simple mixture of the old colors. Consider an
area covered by 50% small red spots and 50% small green spots.
When we look at the surface from far enough away that we can’t see
the individual dots, we see the surface as yellow. Different
proportions of red and green dots will produce different colors, and
if we know that the surface is covered by red and green dots we can
calculate what the proportions are based on the color we see. The
important thing to know is that any light reaching the observer has
only hit one of the colored dots. That is linear mixing.
Non-linear mixing occurs when light hits more than one of the
colored dots. Imagine a surface with a lot of small, colored bumps
which stick out varying distances from the surface. We can now
imagine that light could bounce from one colored bump to another
and then to the observer. Now some of the light coming from the
green bump bounced off of a red bump first, and this light will have
characteristics of both the red and green bumps. There is also light
coming directly from the green bump that only bounced from the
green bump. If we could see this individual green bump, it would not
look as green as it should. Now, when the light from all of the bumps
reaches the observer, the light looks different than when the bumps
were simple spots even though the proportion of the area covered
by each color is unchanged. (We are assuming that there are no
shadows.)
A second way for non-linear mixing to happen is if light can pass
through one material and then reflect off another. Imagine a piece
of translucent plastic with half of the area covered by randomly
placed translucent green spots placed on top of a red surface. Now
light can pass through a green spot on the plastic and then reflect off
of the red below before returning to the observer. Once again, the
interaction of the light with multiple spots along its path changes the
character of the light coming from each spot. Once again the color
looks different than the linear case, which is just the case when light
cannot pass through green spots.
The basic point is that non-linear mixing twists the spectra of the
materials being observed into different spectra which do not
resemble any of the targets. This can magnify the apparent
abundance of a material. Consider the piece of translucent plastic
with the translucent green dots. If we put it on top of a low-reflectivity surface, very little of the light that passes through the
green dots will be reflected back, so all we see is the light directly
reflected from the green dots. Now we put a highly reflective surface
behind it, and we see a brighter green because we now see both the
light directly reflected from the dots and most of the light which has
passed through the green dots (which is green) is reflecting back
from the highly-reflective background. If we didn’t know better, we
might think that we just had more green dots instead of a brighter
background.
29) Is the variation in the soil the only problem?
A) No. Many of the commonly studied areas with low vegetation
cover are arid and semi-arid areas. Many plants which grow in such
areas have a variety of adaptations for dealing with the lack of water
and high temperatures. (Even plants growing in areas with relatively
cool air temperature have problems with heat regulation in dry
climates since transpiration is the main way they keep cool.) These
adaptations often decrease the amount of visible light absorbed by
the plants and/or decrease the amount of sunlight striking the plants
(hence the plants do not reflect as much light). These inherent
qualities make arid and semi-arid vegetation hard to detect unless it
is observed during periods of relatively abundant water when a
whole new set of adaptations to maximize plant productivity takes
effect.
30) What if I can’t get a good soil line from my data?
A) If you’re working in an area with high plant cover, this can be
common. This makes it virtually impossible to use the perpendicular
indices or things like TSAVI and MSAVI1. However, NDVI is at its best
with high plant cover, so it is still available to you. The correction
factor L for SAVI should be near 0 for this sort of situation, which
makes SAVI equivalent to NDVI. MSAVI2 also needs no soil line. If you
really want to use an index which requires a soil line, you will need
to construct it with field and laboratory spectra, but this is not an
easy task, and really not advisable.
31) How low a plant cover is too low for these indices?
A) These are rules of thumb; your mileage may vary:
•
RVI, NDVI, IPVI = 30%
•
SAVI, MSAVI1, MSAVI2 = 15%
•
DVI = 30%
•
PVI, WDVI, GVI = 15%
The more uniform your soil, the lower you can push this.
Future directions
This section answers the questions that were listed in the previous
section.
32) I hear about people using spectral unmixing to look at vegetation, how
does this work?
A) See question 28 for a thumbnail description of linear mixing.
Basically, you assume that the given spectrum is a linear
combination of the spectra of materials which appear in the image.
You do a least squares fit to find weighting coefficients for each
individual material’s spectrum which gives the best fit to the original
spectrum. The weighting coefficients are considered to be equal to
the abundances of the respective materials. For detailed discussions
of this see Adams et al. (1989), Smith et al. (1990), Roberts et al.
(1994) and Smith et al. (1994). There is also the highly sophisticated
convex geometry technique discussed in Boardman (1994).
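A minimal NumPy sketch of linear unmixing by least squares (the endmember spectra and abundances are invented for illustration):

```python
import numpy as np

# Hypothetical endmember spectra as columns (soil, vegetation) over 4 bands.
endmembers = np.array([
    [0.30, 0.05],
    [0.35, 0.08],
    [0.40, 0.06],
    [0.45, 0.50],
])

# Build a mixed spectrum from known abundances, then recover them
# with an ordinary least-squares fit.
true_abundances = np.array([0.7, 0.3])
mixed = endmembers @ true_abundances

recovered, *_ = np.linalg.lstsq(endmembers, mixed, rcond=None)
print(np.allclose(recovered, true_abundances))  # True
```

Real unmixing usually adds a sum-to-one constraint and non-negativity on the abundances, which plain least squares does not enforce.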
33) Are there any indices which use high spectral resolution data?
A) Yes. Elvidge and Chen (1994) have developed indices of this kind.
They depend on the fact that when you take a derivative of the red
edge in the vegetation spectrum you get a bump at about 720 nm.
It is known that the red edge can be seen in high spectral resolution
data down to about 5% cover (Elvidge and Mouat, 1988; Elvidge et
al., 1993). Three indices were developed. The first used the integral
of the first derivative of the reflectance spectrum over the range
626-795 nm. The second took the first derivative of the reflectance
spectrum, subtracted the value of the derivative at 625 nm and
integrated the result over the range 626-795 nm. The third index
used the integral of the absolute value of the second derivative of the
reflectance spectrum over the range from 626-795 nm. Of
these three indices, the first one was found to have greater
predictive power than RVI or NDVI, but less predictive power than
SAVI or PVI. The index which used the second derivative has greater
predictive power than SAVI and PVI. The index which used the
difference between the first derivative and the value of the first
derivative at 625 nm had the greatest predictive power.
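A sketch of the second (baseline first-derivative) index in Python; the spectrum below is a synthetic red-edge ramp, not real data:

```python
import numpy as np

def baseline_first_deriv_index(wavelengths, reflectance,
                               baseline_nm=625.0, lo=626.0, hi=795.0):
    """Integrate (first derivative minus its value at 625 nm) over
    626-795 nm, after Elvidge and Chen (1994)."""
    deriv = np.gradient(reflectance, wavelengths)
    baseline = np.interp(baseline_nm, wavelengths, deriv)
    mask = (wavelengths >= lo) & (wavelengths <= hi)
    y = deriv[mask] - baseline
    x = wavelengths[mask]
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))  # trapezoid rule

# Synthetic spectrum with a red-edge-like ramp between 680 and 760 nm.
wl = np.arange(600.0, 800.0, 5.0)
refl = 0.05 + 0.45 * np.clip((wl - 680.0) / 80.0, 0.0, 1.0)
print(baseline_first_deriv_index(wl, refl) > 0.2)  # True
```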
Final question
This section answers the questions that were listed in the previous
section.
34) What vegetation index should I use?
A) NDVI.
Nearly everyone who does much with the remote sensing of
vegetation knows NDVI, and it’s often best to stick to what people
know and trust. NDVI is simple. It has the best dynamic range of any
of the indices in this FAQ and it has the best sensitivity to changes
in vegetation cover. It is moderately sensitive to the soil background
and to the atmosphere except at low plant cover. To take a quick
qualitative look at the vegetation cover in an image, you just can’t
beat NDVI unless you are looking at an area with low plant cover.
PVI is somewhat less common in its use, but it is also widely
accepted. It has poor dynamic range and poor sensitivity as well as
being very sensitive to the atmosphere. It is relatively easy to use,
and finding the soil line is important for using some of the other
indices. It sometimes is better than NDVI at low vegetation cover.
You really should probably use SAVI if you are looking at low
vegetation cover, and if you use a correction factor which is not 0.5
you had better be prepared to cite the Huete (1988) paper and the
fact that the correction factor is larger than 0.5 for very sparse
vegetation. MSAVI is also good, but it has seen very little use. If you
have high spectral resolution data, you should consider the Elvidge
and Chen (1994) indices.
Remember that many of the indices which correct for the soil
background can work poorly if no atmospheric correction has been
performed. If you are planning to seriously use vegetation indices for
a multitemporal study, you should take a close look at the variability
of the soil, and you should do an atmospheric correction. There is
some concern about vegetation indices giving different values as you
look away from the nadir, but this may not be terribly serious in your
application.
Summary:
In order of preference for each type of sensor:
•
TM or MSS (or any broad-band sensor)
1. NDVI (or IPVI)
2. PVI
3. SAVI (top of list for low vegetation)
4. MSAVI2
•
High Spectral Resolution Data (e.g. AVIRIS)
1. First derivative index with baseline at 625 nm.
References
Adams, J.B., Smith, M. O., and Gillespie, A. R. (1989) “Simple
models for complex natural surfaces: a strategy for the
hyperspectral era of remote sensing”, in Proc. IEEE Int. Geosci. and
Remote Sensing Symp. ‘89, IEEE, New York, 16-21.
Boardman, J. W. (1994) “Geometric Mixture Analysis of Imaging
Spectrometry Data”, in Proc. IEEE Int. Geosci. and Remote Sensing
Symp. ‘94, IEEE, New York, 2369-2371.
Baret, F., Guyot, G., and Major, D. (1989) “TSAVI: A vegetation
index which minimizes soil brightness effects on LAI or APAR
estimation,” in 12th Canadian Symposium on Remote Sensing and
IGARSS 1990, Vancouver, Canada, July 10-14.
Baret, F. and Guyot, G. (1991) “Potentials and limits of vegetation
indices for LAI and APAR assessment,” Remote Sensing of
Environment, vol. 35, pp. 161-173.
Clevers, J. G. P. W. (1988) “The derivation of a simplified reflectance
model for the estimation of leaf area index,” Remote Sensing of
Environment, vol. 35, pp. 53-70.
Crippen, R. E. (1990) “Calculating the Vegetation Index Faster,”
Remote Sensing of Environment, vol. 34, pp. 71-73.
Crist, E. P. and Cicone, R. C. (1984) “Application of the tasseled cap
concept to simulated thematic mapper data,” Photogrammetric
Engineering and Remote Sensing, vol. 50, pp. 343-352.
Elvidge, C. D. and Chen, Z. (1994) “Comparison of Broad-band and
Narrow-band red versus near infrared vegetation indices,” Remote
Sensing of Environment, in review.
Elvidge, C. D. and Lyon, R. J. P. (1985) “Influence of rock-soil
spectral variation on the assessment of green biomass,” Remote
Sensing of Environment, vol. 17, pp. 265-269.
Goetz, A. F. H. and 7 others (1975) “Application of ERTS images and
image processing to regional geologic problems and geologic
mapping in northern Arizona,” in JPL Technical Report 32-1597, Jet
Propulsion Laboratory, Pasadena, CA.
Huete, A. R., Jackson, R. D., and Post, D. F. (1985) “Spectral
response of a plant canopy with different soil backgrounds”, Remote
Sensing of Environment, vol. 17, pp.37-53.
Huete, A. R. (1988) “A Soil-Adjusted Vegetation Index (SAVI),”
Remote Sensing of Environment, vol. 25, pp. 295-309.
Jackson, R. D. (1983) “Spectral indices in n-space,” Remote Sensing
of Environment, vol. 13, pp. 409-421.
Jordan, C. F. (1969) “Derivation of leaf area index from quality of
light on the forest floor,” Ecology, vol. 50, pp. 663-666.
Kanemasu, E. T. (1974) “Seasonal canopy reflectance patterns of
wheat, sorghum, and soybean,” Remote Sensing of Environment,
vol. 3, 43-47.
Kaufman, Y. J., Tanre, D. (1992) “Atmospherically resistant
vegetation index (ARVI) for EOS-MODIS,” in Proc. IEEE Int. Geosci.
and Remote Sensing Symp. ‘92, IEEE, New York, 261-270.
Kauth, R. J. and Thomas, G.S. (1976) “The tasseled cap--A graphic
description of the spectral-temporal development of agricultural
crops as seen by Landsat,” Proceedings of the Symposium on
Machine Processing of Remotely Sensed Data, Purdue University,
West Lafayette, Indiana, pp. 41-51
Kriegler, F. J., Malila, W. A., Nalepka, R. F., and Richardson, W.
(1969) “Preprocessing transformations and their effects on
multispectral recognition”, in Proceedings of the Sixth International
Symposium on Remote Sensing of Environment, University of
Michigan, Ann Arbor, MI, pp.97-131.
Leprieur, C., Verstraete, M.M., Pinty, B., Chehbouni, A. (1994)
“NOAA/AVHRR Vegetation Indices: Suitability for Monitoring
Fractional Vegetation Cover of the Terrestrial Biosphere,” in Proc. of
Physical Measurements and Signatures in Remote Sensing, ISPRS,
1103-1110.
Lillesand, T. M. and Kiefer, R. W. (1987) Remote Sensing and Image
Interpretation, 2nd edition, John Wiley and Sons, New York,
Chichester, Brisbane, Toronto, Singapore, 721p.
Pickup, G., Chewings, V. H. and Nelson, O. J. (1993) “Estimating
changes in vegetation cover over time in arid rangelands using
Landsat MSS data,” Remote Sensing of Environment, vol. 43, pp.
243-263.
Pinty, B. and Verstraete, M. M. (1991) “GEMI: A Non-Linear Index to
Monitor Global Vegetation from Satellites,” Vegetatio, vol. 101, pp. 15-20.
Qi, J., Chehbouni, A., Huete, A. R., and Kerr, Y. H. (1994) “Modified
Soil Adjusted Vegetation Index (MSAVI),” Remote Sensing of
Environment, vol. 48, pp. 119-126.
Qi, J., Kerr, Y., and Chehbouni, A. (1994) “External Factor
Consideration in Vegetation Index Development,” in Proc. of Physical
Measurements and Signatures in Remote Sensing, ISPRS, 723-730.
Richardson, A. J. and Everitt, J. H. (1992) “Using spectra vegetation
indices to estimate rangeland productivity”, Geocarto International,
vol. 1, pp. 63-69.
Richardson, A. J. and Wiegand, C. L. (1977) “Distinguishing
vegetation from soil background information,” Photogrammetric
Engineering and Remote Sensing, vol. 43, pp. 1541-1552.
Roberts, D. A., Smith, M. O. and Adams, J. B. (1993) “Green
Vegetation, Nonphotosynthetic Vegetation, and Soils in AVIRIS
Data,” Remote Sensing of Environment, vol. 44, pp. 117-126.
Rouse, J. W., Haas, R. H., Schell, J. A., and Deering, D. W. (1973)
“Monitoring vegetation systems in the great plains with ERTS,” Third
ERTS Symposium, NASA SP-351, vol. 1, pp.309-317.
Rouse, J. W., Haas, R. H., Schell, J. A., Deering, D. W., and Harlan,
J. C. (1974) “Monitoring the vernal advancement and retrogradation
(greenwave effect) of natural vegetation,” NASA/GSFC Type III Final
Report, Greenbelt, Md. 371 p.
98
Vegetation in Remote Sensing FAQs
Smith, M., Roberts, D., Hill, J., Mehl, W., Hosgood, B., Verdebout, J.,
Schmuch, G., Koechler, C. and Adams, J. (1994) “A New Approach
to Quantifying Abundance of Materials in Multispectral Images”, in
Proc. IEEE Int. Geosci. and Remote Sensing Symp. ‘94, IEEE, New
York, 2372-2374.
Smith, M. O., Ustin, S. L., Adams, J. B. and Gillespie, A. R. (1990)
“Vegetation in Deserts: I. A Regional Measure of Abundance from
Multispectral Images”, Remote Sensing of Environment, 31: 1-26.
Tucker, C. J. (1979) “Red and Photographic Infrared Linear
Combinations for Monitoring Vegetation,” Remote Sensing of
Environment, vol. 8, 127-150.
Wecksung, G. W. and Breedlove, J. R., Jr. (1977) “Some techniques
for digital processing, display, and interpretation of ratio images in
multispectral remote sensing,” in Applications of Digital Image
Processing, Proceedings of Society of Photo-Optical Instrumentation
Engineers, Bellingham, Washington, vol. 119, pp. 47-54.
Vegetation in Remote Sensing FAQs
99
100
Vegetation in Remote Sensing FAQs
Geophysical Data Imaging and
Presentation
ER Mapper has the ability to process geophysical data, in
combination with other software, and enhance geological
interpretation through its unique image processing capabilities. The
imaging can be an integral part of the interpretation process.
Author
Geoff Pettifer, Petroleum Unit, Department of Agriculture Energy
and Minerals, PO Box 2145, MDC Fitzroy, Victoria, Australia.
Introduction
This chapter describes the principles underlying geophysical
imaging, with some examples from the standard datasets supplied
with ER Mapper.
Fourier processing of geophysical data is covered in Chapter 11,
“Fast Fourier Transforms”.
The approach taken is to consider first particular aspects peculiar to visualising geophysical data as opposed to systematically pre-rasterized satellite data, specifically the dynamic range and spatial distribution of geophysical data and the effects of gridding. The applications of standard ER Mapper functionality (e.g. kernels, equations etc.) are discussed in general terms, including specific geophysical filters, and then magnetics, radiometrics, gravity, seismic and electromagnetics are discussed in some detail.
Throughout, the over-riding theme is to use geophysical images,
preferably using all available data in an integrated multiband dataset
and with vector overlays showing the original data distribution (bias)
from which the image was created by gridding. The latter is
important to enable recognition of meaningful data and spatial
frequencies (as opposed to gridding artefacts).
The use of an empirical and creative approach to combining and
interpreting geophysical data is made possible by the power of
ER Mapper. If the proper empirical approaches are used and due
care is taken to relate to known geology, new relationships between,
and interpretations of, the geophysics and geology, can be explored.
The advantage of ER Mapper is that you can do ‘what-ifs’ quickly and easily to see whether different processing options are useful or not.
A significant advantage of ER Mapper is that the dataset is stored
only once and any processing to provide the different enhancements
is defined in an algorithm and dynamically created on the screen as
required. This means that no additional data storage is required if
many different processing techniques are to be applied. The system
is therefore ideal for geophysical imaging.
Geophysical data, unlike satellite data, consists of real-world unquantized values. ER Mapper maintains data precision during all aspects of processing before quantizing the values into 256 levels for display. Earlier image processing systems were restricted in treating geophysical data as they only operated on byte data.
Geophysical methods
There are five main types of data that form the bulk of geophysical data collected and used by geophysicists, and which are amenable to geophysical data imaging and interpretation. They are gravity, seismic, airborne magnetics, airborne radiometrics and airborne EM. In addition to these five are topography, the simplest geophysical property of the earth, plus all ground-based geophysical methods which can be gridded for imaging purposes.
Below is an explanation of some types of geophysical methods.
• Magnetics measures the magnetic field over an area. Variations in magnetic composition of rocks produce variations in the magnetic field which can be interpreted in terms of geology to aid subsurface geological mapping to depths of several kilometres.
• Radiometrics is a measure of total radioactivity and specific elements (potassium, thorium and uranium) in the top 1 or 2 metres of soil and rock over an area. Near surface geological and soil mapping can be aided using radiometrics.
• Seismic methods are used in petroleum search to measure two-way seismic wave travel time from the seismic measurement datum to subsurface geological boundaries of significance. Variations in seismic velocity can be measured and reflect lithology, porosity and hydrocarbon content variations. Two way time imagery is a recent application of imaging in seismic exploration.
• Gravity measures the combined gravity effects of subsurface geological formations and structure. Variations in subsurface density produce variations in the gravity field which can help to interpret subsurface geology to depths of several kilometres.
• Airborne EM (electromagnetics) is an emerging geological mapping tool which measures over many data channels the decay of an energizing electromagnetic field in the earth. EM responds to electrical conductivity of rocks in the first few hundreds of metres in the earth. Some airborne EM systems measure apparent conductivity of the earth at particular EM wave frequencies. Creative imaging of this data is in its embryonic stages in 1995.
Other datasets to aid geophysical interpretation
Geophysical imaging and interpretation is best carried out by integrating, in ER Mapper, all available raster and vector data to aid the interpretation. How these ancillary datasets are generated (e.g. other mapping and gridding software packages) is an important consideration. Fortunately ER Mapper supports a wide range of raster grid (e.g. Landmark, ECS, Geosoft, Sierra, Charisma etc.), vector import (e.g. DGN, DXF, ARC/INFO) and dynamic link (e.g. DXF and ARC/INFO) standards, and in some cases (e.g. Intrepid) ER Mapper links very closely to geophysical processing/mapping packages.
These ancillary datasets to aid interpretation can include:
Raster datasets
• Gridded topography/bathymetry (DTM)
• Gridded borehole depths
• Seismic velocity and attribute (e.g. amplitude, phase) grids
• Landsat
• Radar
• Gridded geophysical properties
• Gridded borehole data attributes
• Gridded aircraft altimeter and/or terrain clearance
• Gridded geochemistry
Vector datasets
• Culture (roads, rivers etc.)
• Flight line overlays
• Contours of the geophysical data
• Geophysical station and line locations
• Seismic fault polygons and lines
• Survey boundary polygons
• Geological boundaries
• Borehole locations
• Mapsheets
• Exploration licence areas
• Thematic map overlays from mapping software via DXF or DGN formats
Gridding and imaging
Gridding of geophysical data
Geophysical data, unlike other remote sensing data, is not usually collected on a standard grid (except for 3-D seismic and mining geophysics grid data). Geophysical data is usually gridded for contouring and imaging purposes. The gridding algorithms and criteria used can have a marked effect on the geophysical image aesthetics and on subsequent Fourier operations and spatial filtering of an image grid.
There is considerable variation in the ability of gridding packages to
adequately grid various data coverage biases without introducing
gridding artefacts, which affect the integrity of images. Four main
types of data distribution (and an infinite combination of these) are
encountered with geophysical data and these are shown in Figure 1.
Fig 1a. Random point data (e.g. gravity)
Fig 1b. Random line data (e.g. 2-D seismic)
Fig 1c. Contour data, with faults (e.g. topography, seismic contours)
Fig 1d. Regular line data (e.g. airborne magnetics; 3-D seismic)
Gridding is a complex subject in itself, but users of geophysical
images need to be aware of the possibility of gridding artefacts (e.g.
one gridding algorithm may be optimized for random point data but
perform poorly on contour data).
A simple way to recognise gridding artefacts is to apply a sun angle
to an image (which has the “force_nosubs” kernel set) and then
overlay a vector dataset showing the original geophysical data bias
(points/lines/contours). Artefacts could include “pimples” at random
point data; “terracing” with contour data and poorly levelled
magnetics; trends following random line data; false (or conjugate)
trends in directionally gridded data; “scalloping” along faults in
seismic data and the “string of pearls” effect on linear anomalies in
magnetic images.
Once recognised, there are three choices with gridding artefacts. The
first is to minimize them by regridding the data with a different
gridding algorithm and different gridding criteria. The second is to
live with the artefacts and take account of them when interpreting
further enhanced images (particularly spatial filtered images which
tend to amplify artefacts). The third is to regrid the data with
different gridding criteria and/or algorithms/software, compare the
various grids and choose the one most suitable for the application.
Image grid spacing and geophysical data spacing
With most gridding algorithms the optimum grid spacing is about
20% (one-fifth) of the average data spacing (for aeromagnetic,
seismic or contour data). This limit can be extended to 10% in some
cases.
Line collected data such as airborne magnetics is a special case
where grid spacing relative to line spacing and along line data
intervals is important.
Unlike satellite or airborne scanner data, airborne magnetic data has
high spatial sampling rates along flight lines (e.g. 8 to 50 metres)
and lower spatial sampling across flight lines (e.g. flight-line
spacings of 200m to 5 km).
Typically, data is gridded to a grid size of about 20% of the flight line
spacing (or 10% at the limit). This means that in the gridding
process the original magnetic data is often subsampled along flight
lines and high frequencies in the data may be lost.
Furthermore, oversampling across flight lines may occur and false
correlations and high frequencies between lines may be created.
Some contouring algorithms attempt to cross correlate anomaly
shapes between lines, and are generally only good at enhancing one
trend direction. Others correlate anomalies poorly between lines and
produce localized “bullseye” anomalies related to flight lines. Linear
“bullseyes” or the “string of pearls” effect are common for linear
magnetic features, with many gridding packages.
These effects are most pronounced on regional magnetic data (at
1.5km line spacing). Also improper choice of grid size close to the
spatial frequency of near surface related magnetic anomalies can
produce ‘aliasing’ of the data (erroneous low frequency, deep body
anomalies) which cannot be simply corrected in the image.
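The aliasing effect described above can be demonstrated with a short sketch (plain NumPy, not ER Mapper code; the function name and the 80 m anomaly are illustrative assumptions): a short-wavelength anomaly sampled on too coarse a grid folds back as a spurious long-wavelength, apparently deep-body anomaly.

```python
import numpy as np

def dominant_wavelength(spacing_m, profile_len_m=8000.0, true_wavelength_m=80.0):
    """Sample a synthetic 80 m wavelength anomaly at `spacing_m` and report
    the dominant wavelength seen in the sampled profile (FFT peak)."""
    n = int(profile_len_m / spacing_m)
    x = np.arange(n) * spacing_m
    profile = np.cos(2.0 * np.pi * x / true_wavelength_m)
    spectrum = np.abs(np.fft.rfft(profile))
    spectrum[0] = 0.0                        # ignore the DC (mean) level
    freqs = np.fft.rfftfreq(n, d=spacing_m)
    return 1.0 / freqs[np.argmax(spectrum)]

# 10 m sampling recovers the true 80 m anomaly; a 250 m grid spacing is
# coarser than the Nyquist limit (40 m here), so the anomaly folds back
# as a spurious 2000 m wavelength that looks like a deep source.
```

Once the data is gridded this coarsely, no image-domain processing can recover the true anomaly, which is why the grid size must be chosen relative to the shortest meaningful wavelength in the data.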
Resampling of image data in imaging
The relative pixel resolution between the geophysical grid to be displayed and the image window or hardcopy device is important. If the number of gridpoints (pixels) to be displayed in an image exceeds the available display device pixels, the geophysical grid on disk will normally be resampled by ER Mapper before display, and any equations or kernels will be applied to the resampled data grid.
This can be acceptable for many image applications but in some
circumstances can lead to erroneous aliasing of the geophysical
data. ER Mapper supplies a “force_nosubs” filter which ensures that
no resampling of a geophysical grid from disk occurs before further
filtering and ultimate display.
Regular use of this option is recommended with geophysical imaging,
even though it does slow down redisplay speeds of large geophysical
datasets.
A further useful facility in ER Mapper is the “smooth resampling”
option which smooths the display to the display or hardcopy device
resolution to eliminate spatial quantization or blockiness of the
display image and enhance the aesthetics of the image.
General approach to geophysical imaging
The processing of geophysical data requires careful consideration of
the characteristics of the data. The aim is to use colour to emphasize
geologically important amplitude variations and spatial relationships
in anomalies in the data. Processing the data without consideration
of the data limits, range of amplitudes and spatial frequencies in the
data may produce artefacts or false anomalies from which erroneous
geological interpretation is made. Some of the considerations are
described in the following subsections.
Visualizing Geophysical Data
Visualizing geophysical data involves consideration of the data intensity, spatial frequency and processing aspects of the data.
Human perception limitations also influence the care we must take
in imaging geophysical data.
Variations in geophysical intensity are best expressed in geophysical
images by assigning a continuous range of colours or ‘graytones’ to
the range of intensity values. The dynamic range of geophysical data
can be very large (e.g. 1000's of nanoTeslas in magnetics) compared
to the available colours (256 for 8 bit screens).
Colour perception
The dynamic range of the data in geophysical applications is far
greater than either the available colours or the final perceived colour
differences. It is important to optimise the images to emphasize
some aspect of the data. The key point is that a typical geophysical
data set will need more than one image to display its full information
content.
A number of algorithms are normally created which have different
limits, transformations and look up table choices in order to interpret
the dataset. Additional processing techniques such as kernels
(filters) and formulae (equations) may be applied to the dataset to
further enhance perception and to overcome possible dynamic range
problems. This is discussed in more detail later.
The limitations of hardcopy devices impose further restrictions on the dynamic range of the data. Colour variation, and therefore perception, varies considerably from one form of hardcopy to another. Being aware of the limitations of hardcopy devices allows you to be critical of any hardcopy on which some interpretation is being performed.
Variations in geophysical anomaly spatial frequencies, relationships and continuity are best displayed in gray tone (or colordrape) sun angle images. Many spatial filters used in satellite image processing have limited use on geophysical data because of the generally smoothly varying nature of geophysical data. Several geophysical filters are available in ER Mapper to apply standard geophysical processing (e.g. upward and downward continuation and spatial derivatives) and directional filtering.
Fourier processing of geophysical data by internal software and
dynamic linking of geophysical images to C user code are other
facilities to enhance and aid geophysical data visualization in
ER Mapper.
Data Familiarization
Before beginning any processing and interpretation of the data it is
necessary to familiarize yourself with the data. Four common
methods for examining the spatial/amplitude distribution of data,
apart from use of the image statistics, are:
• non-cumulative histogram display
• display of the data using a “water-level” color look up table
• z-profiling
• traversing
Non-cumulative histogram display
The non-cumulative histogram display (default histogram) shows an
image statistical distribution of data values. The cumulative
histogram display shows the histogram equalized cumulative
distribution of data. Zooming to different areas enables a quick
assessment of values in different areas of the image. If regions in the
dataset are of importance, use of the ‘inregion’ formulae to display
particular region data only, can be useful to visualize the statistical
variation within a geologically meaningful area (e.g. a geological unit
polygon). Use of the ER Mapper vector dataset to region conversion
facility is useful to designate regions and prior consideration needs
to be given to maximizing the designation of meaningful areas as
polygons in ancillary vector overlays.
Waterlevel Color Look Up Table
A useful technique for examining the spatial distribution and limits of
the data in an image is the use of a “waterlevel” colour look up table
which partitions the range of data into three distinct colour ranges.
Data values from the minimum to a predetermined threshold are all
blue. Values between this threshold and a higher threshold are
grayscale, and values from the upper threshold to the maximum are
yellow. In effect all of the data values below the lower threshold are
below the water level and the yellow coloured values have been
partitioned above the upper threshold. These values can be
discounted and the grayscale values observed. So the look up table
behaves as a bandpass in order to concentrate the mid range values.
One useful technique is to display your data with the ‘waterlevel’ look up table, click to limits and slowly increase the lower limit of the transform linear graph along the bottom of the histogram display, spreading the blue in the image. Similarly, the yellow in the image will spread as the top cut-off in the data's linear transform is reduced in a like fashion. Using the middle mouse button to interrogate the top and bottom limits of the linear transform curve identifies the gray area bandpass values. Typing in limits produces a similar band pass image interrogation effect.
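The three-zone behaviour of such a look up table can be sketched in plain NumPy (illustrative only; the function name and colour choices are assumptions, not ER Mapper's implementation):

```python
import numpy as np

def waterlevel_lut(values, lower, upper):
    """Three-zone colouring: blue below `lower` (under the water level),
    a grayscale band-pass between the thresholds, yellow above `upper`."""
    v = np.asarray(values, dtype=float)
    rgb = np.zeros(v.shape + (3,))
    rgb[v < lower] = (0.0, 0.0, 1.0)      # blue: below the water level
    rgb[v > upper] = (1.0, 1.0, 0.0)      # yellow: above the top cut-off
    mid = (v >= lower) & (v <= upper)
    gray = (v[mid] - lower) / max(float(upper - lower), 1e-12)
    rgb[mid] = np.stack([gray, gray, gray], axis=-1)
    return rgb
```

Raising `lower` spreads the blue and lowering `upper` spreads the yellow, exactly the interactive band-pass interrogation described above.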
Z-profiling
Z-profiling (text or graph) indicates amplitude information for all
bands in a dataset at once and is useful for detailed interrogation.
Traversing
More useful than Z-profiling is the traversing facility of ER Mapper
which can be used to observe profiles across geophysical anomalies
of interest. This can be applied to one dataset band at a time, but it is easy to switch between bands and other coregistered datasets, and to move the traverse around the image (e.g. along a linear anomaly strike to observe anomaly change along strike).
Using Transforms to improve displays
You can apply transforms to your data to improve the display.
Limits to actual (linear stretch)
As a default, data values from 0 to 255 are displayed using the 256
colours normally available on screen. While this may be fine for
satellite data with, for example, 256 integer data values in the range
0 to 255, the data range for geophysical data is usually much greater
than 256. Selecting Limits to Actual causes the data values to be split
up into equal groups (a linear stretch) and assigned to the available
colours.
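The effect of Limits to Actual can be sketched as follows (a NumPy illustration under assumed behaviour, not ER Mapper's internal code):

```python
import numpy as np

def limits_to_actual(data):
    """Linear stretch: split the actual data range into 256 equal groups
    and assign each group to one of the available display levels."""
    d = np.asarray(data, dtype=float)
    lo, hi = d.min(), d.max()
    scaled = (d - lo) / max(float(hi - lo), 1e-12) * 255.0
    return np.round(scaled).astype(np.uint8)
```

A magnetics grid spanning, say, -4000 to +4000 nanoteslas is mapped linearly onto levels 0 to 255, rather than being clipped at the default 0 to 255 data range.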
Non-linear transforms
Other transformations (such as Gaussian, Histogram Equalize, 99%
clip, 95% clip, real time) emphasize different aspects of the data. For
example, Histogram Equalize provides maximum brightness.
For the non-linear transformations each quantization colour
represents a different length of data interval. Changing limits for a
fixed stretch affects the displayed image. These effects may include
saturation or loss of detail in all or part of the image.
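As an illustration of why Histogram Equalize maximizes brightness, a rank-based sketch (plain NumPy; a simplified stand-in for the actual transform, with an assumed tie-breaking rule):

```python
import numpy as np

def histogram_equalize(data, levels=256):
    """Rank-based histogram equalization: each display level ends up with
    roughly the same number of pixels, so no level is under-used."""
    d = np.asarray(data, dtype=float)
    ranks = np.argsort(np.argsort(d.ravel()))   # 0..n-1 rank of each pixel
    eq = ranks * levels // d.size
    return np.clip(eq, 0, levels - 1).reshape(d.shape)
```

Because levels are assigned by rank rather than by value, each quantization colour covers a different length of data interval, which is the point made above for non-linear transforms generally.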
The effect of zooming on limits
When stretch limits are set to actual, the values used to define the
stretch are those displayed in the window. This means that if the
image has been zoomed only some of the data has been used to
calculate the limits. If you pan to a different location in the dataset
or perform a Zoom to All Datasets the transform may no longer be
optimal.
Data viewing options
There are a number of ways to image and present geophysical
datasets such as aeromagnetic data. They are described below.
Pseudocolor Single Band Display
Pseudocolor single band displays contribute useful information when
comparisons are made with other pseudocolor displays.
Real Time Shading
While the use of pseudocolor shows the amplitude and illustrates
some trends in the data, more subtle trends can be emphasized
using sun angle shading. ER Mapper provides interactive shading so
that you can change the elevation and azimuth with the mouse and
see the results straight away. This is known as Real Time Shading.
Shading or sun angle simulates the illumination of the data as if it
was a topographic surface. The position of the artificial sun which
illuminates the data is specified by an elevation and azimuth angle.
Sun angle shading emphasizes trends perpendicular to and
sometimes parallel to the sun azimuth. Watch out for gridding
artefacts which may be emphasized using this method. Those trends
which are perceptible at most azimuths, if not gridding artefacts, are
the most likely to be real trends.
Use the ‘force_nosubs’ filter with realtime shading to see maximum resolution of subtle trends.
Vertical shading (elevation 90 degrees) provides an unbiased display
of the trends.
Note also that some data requires smoothing prior to sun angle
imaging because of data noise and gridding artefacts.
Selecting different display stretches such as Histogram Equalize or Gaussian Equalize prior to shading makes little or no difference to the shaded display. Clipping using 95% and 99% limits has a noticeable effect, and may be usefully applied when mosaicing shaded images.
Shading with sun azimuth along flight line direction minimizes
levelling problems.
Example:
examples\Data_Types\Magnetics_And_Radiometrics\Magnetics_Realtime_Sun_Shade
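The sun angle shading computation itself can be sketched in NumPy (an illustrative Lambertian model under assumed conventions, azimuth clockwise from north; not ER Mapper's implementation):

```python
import numpy as np

def sun_shade(grid, azimuth_deg, elevation_deg, cell_size=1.0):
    """Lambertian sun-angle shading of a gridded surface: illuminate the
    grid as if it were topography lit from the given azimuth/elevation."""
    dz_dy, dz_dx = np.gradient(np.asarray(grid, dtype=float), cell_size)
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    # Unit vector pointing at the sun (azimuth clockwise from north = +y).
    sun = (np.sin(az) * np.cos(el), np.cos(az) * np.cos(el), np.sin(el))
    # The surface normal is proportional to (-dz/dx, -dz/dy, 1).
    norm = np.sqrt(dz_dx ** 2 + dz_dy ** 2 + 1.0)
    shade = (-dz_dx * sun[0] - dz_dy * sun[1] + sun[2]) / norm
    return np.clip(shade, 0.0, 1.0)
```

Slopes facing the sun brighten and slopes facing away darken, which is why trends perpendicular to the sun azimuth are emphasized; at 90 degrees elevation the azimuth term vanishes, giving the unbiased vertical shading mentioned above.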
Color Draping
The use of the color drape display combines the advantages of
directional information available from shading and amplitude
information shown by the pseudocolour. Again, use of the ‘force_nosubs’ kernel, particularly in the intensity layer, is recommended to maximise enhancement of subtle gradients in the data.
There are a number of colour schemes that you can use. These are
selected using the colour look up tables in the Algorithm window, for
example, Rainbow1.
Colour draping over a vertical sun angle gives an unbiased
colordrape effect.
Choice of display stretch for the intensity layer in a colour drape is
important. Data should not be clipped. Histogram equalized stretch
gives a very dark intensity background which may appear OK on the
screen but can display too dark on certain lower colour resolution
hardcopy devices. To solve this generally requires some
experimentation with your own hardcopy devices. The logarithmic
stretch (curved upwards on the transform graph) is very useful in
reducing darkness in the intensity layer whilst retaining edge
shadow. The skew of data (i.e. to higher or lower limits of the
transform) can affect the relative enhancement of high or low spatial
frequencies using a logarithmic stretch.
Often with high spatial frequency data (e.g. magnetics over surface
volcanics) a colour drape display is very dark and ineffective.
If data has a wide distribution of amplitudes and spatial frequencies,
use of the equations option to divide the geophysical intensity
spectrum into discrete contiguous bandpasses enables multiple
optimum sun angle stretches for each bandpass and maximum
definition of subtle trends in widely variable data (e.g. topography
data with steep mountains, gently undulating plateaus and plains).
This technique produces a black line at the edge of each band pass
in the intensity layer however.
Colordrape (plus use of kernels, e.g. ‘force_nosubs’) trims the intensity layer of the image because a 3 x 3 kernel is used for the sun angle. Use of the dummy 3, 5, 7 etc. filters in the pseudocolor layer can trim the pseudocolor to the intensity image layer. Holes in
an image are magnified by use of colordraping. This can be a
problem with some datasets (e.g. seismic faults containing null
values).
The colordrape affords great flexibility in the colours used for
draping. Colorbars used in map annotation utilize the colour limits
and distribution of the output transform in the first active
pseudocolor layer in an algorithm and do not display the intensity
grayscales. For this reason the colour bars appear brighter in colour
than their corresponding colours in the image.
Example:
examples\Data_Types\Magnetics_And_Radiometrics\Magnetics_Colordrape
HSI
HSI provides an impressive metallic lustrous or dull continuous
intensity background to a hue/saturation colour image. The HSI
display overcomes the quantization and darkness problems of the
intensity layer of a colordrape display.
Filters
Filtering of geophysical data can be done by Fourier analysis methods or by use of spatial filter methods. Filtering can be done in the Fourier domain in ER Mapper (v5.0+) or a geophysical package (e.g. AGP, GEOSOFT/GIPSI or Intrepid), or using the very limited spatial filters supplied by ER Mapper. Filter methods will be discussed here, and Fourier analysis in Chapter 11, “Fast Fourier Transforms”.
Some filters, which are designed for cleaning up satellite images, are
useful for crude sun angle and image smoothing of geophysical data.
These include smoothing, frequency, edge enhancements and
residual filters. Additional filters designed specifically for geophysical
processing include the hanning filter and upward and downward
continuation and second derivative filters.
All filtering applied to geophysical data should be preceded by a ‘force_nosubs’ filter to ensure the data is sampled at the original dataset resolution, not the display device/window resolution.
Below are examples of filters that can be applied to geophysical
data:
Smoothing
Smoothing filters supplied with ER Mapper are:
• hanning3
• avg3
• avg31
• avg5
• avg7
• avg_diag
• avg_h_v
• avg_hor
• avg_ver
These are used to smooth data noise and gridding artefacts, for example for radiometric data and prior to sun-angle shading. The hanning filter is probably best suited to geophysical data.
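The mechanics of such a smoothing kernel can be sketched as follows (plain NumPy; the 3 x 3 weighting shown is an assumed Hanning-style kernel, and ER Mapper's own hanning3 coefficients may differ):

```python
import numpy as np

# An assumed 3 x 3 Hanning-style weighting (sums to 1); illustrative only.
HANNING3 = np.array([[1.0, 2.0, 1.0],
                     [2.0, 4.0, 2.0],
                     [1.0, 2.0, 1.0]]) / 16.0

def convolve3x3(grid, kernel=HANNING3):
    """Apply a 3 x 3 smoothing kernel to the interior of a grid; the
    one-cell border is trimmed, as with kernel processing of an image."""
    g = np.asarray(grid, dtype=float)
    out = np.zeros((g.shape[0] - 2, g.shape[1] - 2))
    for i in range(3):
        for j in range(3):
            out += kernel[i, j] * g[i:i + out.shape[0], j:j + out.shape[1]]
    return out
```

Because the weights sum to one, smoothly varying data passes through almost unchanged while single-pixel noise and gridding “pimples” are averaged down.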
Frequency
Frequency filters may be highpass, lowpass or bandpass. Some ER Mapper supplied examples are:
• highpass_4
• lowpass_4
These filters pass spectral wavelengths at approximately a multiple of the grid spacing. For example, highpass_4 passes high frequency (short wavelength) data with wavelengths less than or equal to approximately four times the grid spacing, with a 3 dB cutoff at 4 grid spacings.
Frequency filters may be useful in some cases, for example, to
remove surface noise in magnetics. Note the highpass filter shows
edges in data but ringing effects are noted. The low pass filter
removes noise but retains a lot of high frequency.
Edge enhancement/directional
Some examples of ER Mapper's edge enhancement/directional filters are:
• East_West
• North_East
• North_South
• North_West
• NorthWest_5
• Sobel1
• Sobel2
• South_East
• South_North
• South_West
• West_East
• h_edge
• h_edge_5
• Laplacian
• Laplacian5
• sharpedge
• sharpen
• v_edge
• v_edge_5
Edge enhancement or horizontal gradients can be carried out on raw
located magnetic data or on the image grid. In most cases this can
be done with a derivative technique. Edge enhancements are always
useful to define structure and in some cases the edge enhancement
filters highlight subtle structure better than real time shading. In
other cases the edge filters severely amplify gridding artefacts.
With magnetics data, anomaly edges are shifted to the north in the Southern Hemisphere and to the south in the Northern Hemisphere, due to latitude effects on magnetics. Edge detection kernels are therefore best applied to reduced-to-pole data, which requires processing in a Fourier analysis package.
Second derivative filters
There are also approximate spatial second derivative filters, for example from Henderson and Zeitz (Fuller, 1967):
• hz10_2nd_deriv
• hz13_2nd_deriv
• hz1960_2nd_deriv
Note that you can try appending smoothing filters to second derivative filters, but these filter responses are poor as they are approximate filters. The hz1960 filter is probably best.
Residual
These filters highlight local anomalous values and can be very sensitive to gridding artefacts in some cases:
• low_frequency
• residual_1
• residual_2
• thresh_3
• thresh5
Continuation
The following filters are the only true geophysical frequency filters for enhancing low or high frequency for deep or near surface sources:
• up_cont_half
• up_cont_1
• up_cont_2
• down_cont_half
• down_cont_1
• down_cont_2
Note that:
• On the Newcastle magnetics dataset downward continuation at 1/2 spacing (20m) enhances WNW dykes and fractures well.
• Downward continuation at 1 spacing becomes noisy, so apply/append avg3 and retry.
• down_cont_2 spacing is next to useless unless flying height is a lot greater than grid spacing. Try appending two dc_half filters and compare with dc_1; differences could be due to the fact that the filters are imperfect in response.
• Upward continuation has little effect on the Newcastle magnetics data other than to filter out high frequencies in the data. This is useful for defining the gross effects of major magnetic bodies and filtering out magnetic noise due to surface cover such as laterites and basalts.
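The theory behind these continuation filters can be sketched on a 1-D profile (a NumPy wavenumber-domain illustration, not the spatial-kernel approximation ER Mapper uses; the function name is assumed):

```python
import numpy as np

def upward_continue(profile, spacing, height):
    """Upward-continue a 1-D potential-field profile by `height` metres.
    In the wavenumber domain this multiplies each spectral component by
    exp(-2*pi*|k|*height), attenuating high frequencies (shallow sources)
    while leaving the regional (low frequency) field almost untouched."""
    spec = np.fft.rfft(profile)
    k = np.fft.rfftfreq(len(profile), d=spacing)
    return np.fft.irfft(spec * np.exp(-2.0 * np.pi * k * height),
                        n=len(profile))
```

The exponential attenuation explains the behaviour noted above: upward continuation smooths out near-surface noise, while downward continuation (the inverse, exp(+2π|k|h)) amplifies high frequencies and quickly becomes noisy beyond about one grid spacing.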
Notes on the use of filters with geophysical data
When processing and displaying images, a kernel can be applied before or after subsampling. For geophysical data, do not subsample: you should normally apply the kernel to the complete dataset. To do this you must select Process at dataset resolution in the Filter dialog box.
Use of Equations in Geophysics
There are a number of ways to combine data from different bands in the same dataset, by using equations. They are:
• Mosaicing data with base level shifts. For example:
MagA - MagB + contrast
• Ratioing. For example:
K/Th ratio in radiometrics, or
(K - Kmin)/(Th - Thmin)
The latter removes arbitrary level shifts in K and Th before ratioing, and is similar to Dark/Black pixel subtraction in satellite imaging.
• Showing the pseudo-gravity effects of a layer using the Bouguer slab formula. For example:
Bouguer slab effect of a layer = 2πρG × thickness
where ρ = density (t/m³) and G = the gravitational constant.
Any empirical equation can be experimented with to try to enhance some aspect of the geophysical data.
• Combining different data types. For example:
barometer - altimeter = DTM in airborne magnetics
• Classifying (see radiometrics).
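The level-corrected ratio above can be sketched as an equation applied band-by-band (a NumPy illustration; the function name and the small epsilon guard against division by zero are assumptions):

```python
import numpy as np

def level_corrected_ratio(k_band, th_band):
    """(K - Kmin) / (Th - Thmin): remove each band's arbitrary base level
    before ratioing, analogous to dark pixel subtraction."""
    k = np.asarray(k_band, dtype=float)
    th = np.asarray(th_band, dtype=float)
    return (k - k.min()) / np.maximum(th - th.min(), 1e-12)
```

Subtracting each band's minimum first means the ratio reflects relative variation rather than whatever base level the acquisition and gridding happened to leave in each band.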
Combining Filters and Equations
You can combine filters and equations in any way you like. For example:
Pseudo-Vertical Derivative
You can take the up_cont_half of a dataset and subtract the dataset from it. Display this as a pseudocolor image to see the pseudo-vertical derivative, a quick alternative to full Fourier processing for the vertical derivative.
To do this you would create a formula with “I1 - I2”. In I1 you would have the original data with an up_cont_half filter applied to it, while in I2 you would have the original data with no filter applied.
Note that in this example the same band of the dataset is being used twice with different processing applied to it.
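The I1 - I2 combination can be illustrated numerically on a 1-D profile (a NumPy sketch in which a wavenumber-domain continuation stands in for the up_cont_half filter; illustrative only):

```python
import numpy as np

def pseudo_vertical_derivative(profile, spacing):
    """The I1 - I2 trick on a 1-D profile: I1 is the data upward-continued
    by half a grid spacing (here done in the wavenumber domain, standing
    in for the up_cont_half filter); I2 is the original, unfiltered data."""
    spec = np.fft.rfft(profile)
    k = np.fft.rfftfreq(len(profile), d=spacing)
    up = np.fft.irfft(spec * np.exp(-2.0 * np.pi * k * spacing / 2.0),
                      n=len(profile))
    return up - profile      # I1 - I2
```

Because the continuation attenuates high wavenumbers most, the difference is largest over sharp, shallow-source anomalies, which is what makes it a quick stand-in for the vertical derivative.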
C User code links
Geophysicists like to apply their own processing to their data.
ER Mapper affords a degree of flexibility to do this to a small grid of
geophysical data adjacent to an image pixel, through the C User
code links facility. This facility passes adjacent pixel values via an n
x n kernel (where n is an odd number) to an external processing
routine which takes the kernel values as inputs and returns a new
central (to the kernel) pixel value. An example might be inner zone
terrain corrections for gravity pixels using prism based modelling on
the coregistered topography grid of pixels. The standard ER Mapper
manuals describe how to construct your own C User code link.
Some standard kernels which employ external C code are supplied
with ER Mapper and a useful one to try on your geophysical data is
the “local_enhance” filter which allocates maximum pixel brightness
based on a local histogram equalized stretch in a (31x31 sized)
kernel centred on the current pixel. This acts like a moving AGC on the data, normalising all data values to the range 0-255 and maximizing subtle local effects. Use with care, as the results can be quite dramatic, especially where subtle anomalies are superimposed on large anomalies. Again an empirical approach is recommended.
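The moving histogram-equalized stretch can be sketched as a rank transform in a local window. This sketch makes some assumptions: it uses a 5 x 5 window for brevity where the supplied local_enhance filter uses 31 x 31, and the real filter's edge handling may differ:

```python
import numpy as np

def local_enhance(grid, half=2):
    """Map each pixel to 0-255 by its rank within the surrounding
    window: a moving, AGC-like histogram-equalized stretch."""
    rows, cols = grid.shape
    out = np.zeros_like(grid, dtype=float)
    for i in range(rows):
        for j in range(cols):
            win = grid[max(0, i - half):i + half + 1,
                       max(0, j - half):j + half + 1].ravel()
            rank = (win < grid[i, j]).sum()
            out[i, j] = 255.0 * rank / max(len(win) - 1, 1)
    return out

g = np.arange(49, dtype=float).reshape(7, 7)   # smooth hypothetical grid
enhanced = local_enhance(g)
```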
Another filter is the “majority” filter which works on 8-bit integer
datasets like a Classification layer from radiometrics data for
example. The 3 x 3 filter chooses the majority value within the filter
area and replaces the central pixel with the majority value. This can
be used to “clean up” visually noisy classifications.
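A minimal sketch of such a 3 x 3 majority filter (pure Python/numpy; leaving border pixels unchanged is an assumption about the edge behaviour):

```python
import numpy as np
from collections import Counter

def majority3x3(classes):
    """Replace each interior pixel with the most common class value
    in its 3 x 3 neighbourhood; border pixels are left unchanged."""
    out = classes.copy()
    rows, cols = classes.shape
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            window = classes[i - 1:i + 2, j - 1:j + 2].ravel()
            out[i, j] = Counter(window.tolist()).most_common(1)[0][0]
    return out

# A noisy classification: one isolated pixel of class 9 in a field of class 1.
noisy = np.ones((5, 5), dtype=np.uint8)
noisy[2, 2] = 9
cleaned = majority3x3(noisy)   # the isolated class-9 pixel is removed
```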
Other supplied filters can be found in the “usercode” kernel
directory.
Interpretation using annotation
The on-screen annotation capabilities of ER Mapper enable interpretation directly from images rather than from hardcopy. Intelligent use of the power of annotation can enable quicker interpretation of one dataset and checking against other datasets.
For example, it is possible to have two windows open in geolink mode, one showing magnetics and the other gravity for the same area. The geolinking maintains the same zoom between the two windows. Annotating the magnetics interpretation vector, saving it regularly, then reloading it over the gravity image and redisplaying it in the ‘gravity’ window enables regular checking of the magnetics interpretation against the gravity as the interpretation progresses.
Other interpretations and ancillary vector data (e.g. hydrography,
data distribution) can be overlaid in other colours to aid on-screen
interpretation. To further aid interpretation and annotation, the image mode can be changed between Pseudocolour, Colordrape and real-time sun angle. Zooming enables detailed interpretation at the full resolution not afforded by hardcopy interpretation. Two windows (not geolinked) open on the same dataset, one at a regional zoom and the other at a detailed zoom, give a regional perspective to a detailed interpretation of part of an image.
Topography, if available (with a hydrography overlay), is a useful additional image window to have open and geolinked during interpretation, as it often has an influence on geophysical response relevant to the interpretation.
Imaging magnetics
The magnetic data provided in the ER Mapper standard datasets is a
detailed, high quality, airborne dataset from the Newcastle Range in
north Queensland, Australia. Two datasets are included: total count
and vertical derivative images.
The vertical derivative data is not normally available and is used in
the algorithms below to illustrate the results obtainable from this
processing. Additional algorithms provide a processing technique
which approximates the results available from vertical derivative
processing.
Magnetic data collection
An aeromagnetics dataset is commonly captured using a Caesium
Vapour Magnetometer. Typical parameters of a modern detailed
magnetic survey are:
• Flight-line spacing 200 m
• Along-line sampling 8 m
• Tie-line spacing 2000 m
The data may have diurnal correction, levelling and IGRF correction applied. The final dataset will generally be gridded, for example to 40 m x 40 m cells.
Effect of Depth on Anomaly Shape
Magnetic signals are strongly influenced by the depth of the
magnetic body. Magnetic anomalies from shallow bodies usually
show high spatial frequency, whereas anomalies from deep bodies
exhibit low spatial frequency. As a general rule, the “sharper” an
anomaly the shallower the causative body.
Reliability of Gridded Magnetic Data
Many factors affect the reliability of gridded magnetic image data, and these need to be considered when analysing magnetic images. The gridding process used to produce the grid is crucial, because magnetic data is often highly spatially variable and may exhibit several systematic directional trends.
Available contouring algorithms vary considerably in their ability to
adequately grid magnetic data without producing artefacts. The
gridding artefacts are noticeable particularly when applying kernels
(or filters) to the grid that enhance the derivatives, edges or slope of
the data surface. Real Time shading can amplify gridding artefacts.
Often there is no simple way to overcome these artefacts other than
to apply an averaging filter to the grid prior to Real Time Shade or
other kernels.
Data Levelling Artefacts
Magnetic data along flight lines is “levelled” to eliminate altitude
clearance differences effects and diurnal variations between
adjacent flight lines. Levelling corrections are not always effective, particularly on earlier regional data, and can produce “striping” or “stepping” effects in the data. Sun angle images with illumination across the line direction enhance the effect of levelling errors; illumination along the line direction is most effective in minimizing their visual impact.
Effect of Aircraft Altitude
In rugged terrain a constant survey altitude cannot be maintained, causing low aircraft ground clearance over ridges and high clearance over deep valleys. This is particularly severe with fixed wing
surveys, less so with helicopter surveys. Magnetic response is
related to distance from the aircraft and this means there is often a
negative correlation between radar altimeter and magnetics.
Therefore imaging of the aircraft radar altimeter gridded data may
be useful in interpreting the magnetic image.
In favourable circumstances a DTM may be available or can be
constructed from the aircraft barometer data, altimeter data and
ground contours for calibration purposes.
Grid Manipulation of Magnetic Data
Grid filters for upward and downward continuation can be used on magnetic data to enhance deep or shallow magnetic anomalies. The bulk of geophysical filtering on magnetic data must be done in the Fourier domain, in ER Mapper or in an external magnetics mapping package (e.g. AGP, Geosoft / GIPSI or Intrepid). ER Mapper can import and display raw and Fourier domain processed grids from geophysical software. Once imported, upward and downward continuation filters can be applied. ER Mapper provides filters for upward and downward continuation at one-half and one grid spacing, as well as band pass and strike filters (Mufti, 1972; and Fuller, 1967).
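The upward continuation these filters approximate has a simple wavenumber-domain form, exp(-|k|h). The sketch below shows that Fourier-domain operator directly; it is not ER Mapper's spatial-domain kernel, and square grid cells are assumed:

```python
import numpy as np

def upward_continue(grid, height, spacing):
    """Upward-continue a gridded potential field by `height` (same
    units as `spacing`) by multiplying its spectrum by exp(-|k|h).
    Downward continuation would use exp(+|k|h), which amplifies
    short-wavelength noise, hence the usual caution."""
    rows, cols = grid.shape
    ky = 2 * np.pi * np.fft.fftfreq(rows, d=spacing)
    kx = 2 * np.pi * np.fft.fftfreq(cols, d=spacing)
    k = np.sqrt(ky[:, None] ** 2 + kx[None, :] ** 2)
    return np.real(np.fft.ifft2(np.fft.fft2(grid) * np.exp(-k * height)))

# Hypothetical 40 m grid continued up by one grid spacing.
data = np.zeros((16, 16))
data[8, 8] = 1.0
up = upward_continue(data, 40.0, 40.0)   # spike is smoothed and attenuated
```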
Airborne radiometrics imaging
The radiometric data provided in the ER Mapper standard datasets is
a detailed, high quality, airborne dataset from the Newcastle Range
in north Queensland, Australia.
Radiometric data
Radiometrics is airborne ground geochemistry of the selected
elements K, Th and U. A radiometrics dataset usually includes data
for K, Th, U and Total Count. Typical datasets will have along-line
sampling at 60 m intervals and a crystal volume of 33.5 litres. Listed
below are factors that influence radiometric data collection and
image reliability.
Survey accuracy and calibration
The accuracy of radiometric data depends on the size and counting
statistics of the counting crystal used in an aircraft and the flight line
spacing.
Earlier BMR regional surveys of Australia were flown at wide line spacing (1.5 km) using very small crystals, and generally only Total Count and possibly the potassium channel are statistically significant. Levelling of this early data is also poor by modern standards. However, because of its widespread availability, this early regional survey data can be of some use in the absence of modern data.
Commonly, modern radiometrics exploration uses close line spacings
of 200 - 400 metres (or less) and large crystals of 33.2 litres or 16.6
litres.
Radiometric surveys are typically calibrated to local standards for
internal consistency and the unique response of the aircraft system
used. Matching disparate radiometric surveys with different crystal
volumes, aircraft and flying heights can be a problem. Only recently have attempts been made in Australia to standardise radiometric surveys between aircraft to units of equivalent ppm K, U and Th, based on the Canadian experience.
Ground resolution
Survey altitudes affect survey resolution. Regional surveys are
typically 75 to 150 metres ground clearance with a typical “field of
view” or detector effective swath width of about 400 to 500 metres.
With 1.5 km line spacing this leaves some 1 km of ground between
lines uncovered by radiometrics and data has to be interpolated in
the gridding process that generates the gridded data from the raw
line data. Typically, pixel sizes for gridded regional data are 100 to
300 metres.
With detailed survey data of about 250 metre line spacing and 80 to
100 metre ground clearance (or less), the detector “swath” from
adjacent flight lines overlap and provide more or less continuous
coverage of the ground. Typical pixel sizes of detailed data are 50 to
100 metres.
As a result of the large possible range of ratios of line spacing to swath width, and the statistical nature of the data, gridded radiometric data needs to be interpreted in conjunction with a flight-line overlay.
Image Striping
Radiometric corrections of airborne radon are not always effective
and image “striping” can be a problem. The stripes are not simply
related to grid lines on the image but flight lines on the original data.
Generally, corrections need to be done on the original flight data and
the data needs to be regridded to minimize or eliminate “striping”.
This is in contrast to satellite data striping which is generally grid line
related. Satellite image destriping methods may not be fully effective
on radiometric data.
Aircraft Altitude
Altitude of the aircraft measured by the radar altimeter is important
in determining the reliability of radiometric data, especially in
mountainous terrain. As a general rule, if aircraft ground clearance exceeds 250 to 300 metres, corrections for the deteriorating radiometric signal at such heights are unreliable.
Thus, in mountainous terrain, deep valleys show noisy, unreliable or
erroneous radiometric signals, and mountain tops, where survey
aircraft altitude is often lower than the average altitude, show strong
radiometric signals.
Also altitude, or more correctly terrain shape/curvature, affects radiometric signals. Valleys tend to “focus” or increase radiometric
signals artificially and ridges tend to “disperse” or decrease
radiometric signals artificially.
Altitude can also affect radon “noise” in the data due to near ground
radar inversion layers and this can be evident for radiometric data
flown in early morning. Uranium channel responses may show a
positive correlation with altitude.
Thus, given the importance of altitude for data quality, where
available, a grid of altimeter (ground clearance) data is useful to help
interpret the reliability of the airborne radiometric data images.
Geology, mineralization and soil mapping
Different types of geology have particular radiometric “signatures”
associated with them, where a “signature” is a combination of high
and low densities of different bands of data. To look for areas of likely
mineral deposits, the signature of an area of known deposits is
studied and then other regions are searched for similar patterns.
Radiometrics is also particularly useful in environmental studies
mapping the regolith and mapping soils for agricultural and soil
salinity studies.
For example, BHP Pty. Ltd. initiated the airborne survey for
epithermal gold exploration in the Newcastle Range Volcanics,
Australia (see the Newcastle Range airborne dataset supplied with
ER Mapper). The classic target signature is magnetic lows and
radiometric potassium highs indicating possible hydrothermal
alteration of the volcanics.
Example algorithm:
examples\Data_Types\Magnetics_And_Radiometrics\Radiometrics_Magnetics_RGBI.alg
Interpreting Radiometrics Data
In ER Mapper, radiometrics can be used in innovative ways for
geological mapping.
Typical ways for showing radiometrics data are:
• As an RGB combination of Potassium, Thorium and Uranium (K, Th and U).
Note that the industry standard for radiometrics RGB display is K for Red, Th for Green and U for Blue. The least important data, Uranium, is sent to the blue gun because blue is the colour the eye is least sensitive to.
• As a pseudocolor image of a single band. For example, K, Th, U or Total Count.
• As a ratio of two bands by using formulae.
• In combination with other datasets such as magnetics. For example, with radiometrics as color and magnetics data as an intensity layer, or vice versa.
Interpretation of radiometric data (like satellite data) without
ground truthing is not advised.
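The K-to-Red, Th-to-Green, U-to-Blue assignment can be sketched as follows, with hypothetical channel grids and a plain min-max stretch standing in for ER Mapper's transform limits:

```python
import numpy as np

def stretch(band):
    """Linear min-max stretch of one band to 0-255."""
    b = band.astype(float)
    span = b.max() - b.min()
    return 255 * (b - b.min()) / span if span else np.zeros_like(b)

# Hypothetical gridded radiometric channels (units are illustrative).
rng_k, rng_th, rng_u = (np.random.default_rng(s) for s in (0, 1, 2))
k  = rng_k.uniform(0, 4, (8, 8))     # % K
th = rng_th.uniform(0, 20, (8, 8))   # ppm eTh
u  = rng_u.uniform(0, 5, (8, 8))     # ppm eU

# Industry-standard assignment: K -> Red, Th -> Green, U -> Blue.
rgb = np.dstack([stretch(k), stretch(th), stretch(u)]).astype(np.uint8)
```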
Ratios
Ratios can be displayed as single bands, such as:
K/Th, in the form (K - Kmin)/(Th - Thmin)
and
U/Th, K/U
In interpreting radiometric data, ratios are useful for the following
reasons.
1. Ratios of channels in radiometrics have been used to diagnose or enhance various geological features, for example mineralization and clay types. These include:
• K/Th
• U/Th
• U/K
• Th*K/U
• Th/K
• Th/U
• K/U
• U*U/Th
• K/TC%
• Th/TC%
• U/TC%
• U*TC/Th
Generally an empirical approach has been used. Any combination is valid provided noise levels in the data do not dominate and the results are checked empirically.
Example algorithm:
examples\Data_Types\Magnetics_And_Radiometrics\Radiometrics_ratio_K_Th
2. They are useful to highlight a geological feature which may otherwise be almost indistinguishable from another feature on a K, U, Th and KUTh RGB image.
Ratios differentiate where simple bands do not. For example, Carboniferous granite has a similar K, U, Th and TC response to ignimbrites of a similar age. However, granite shows a generally lower U/Th and higher Th/U than the ignimbrites, but K/Th and Th/U are similar. Also U/K is lower and K/U higher for many granites compared to ignimbrites.
3. Ratios can also be used as a classifier.
But generally ratios take the form “(A - Amin)/(B - Bmin)” because of negative minimum responses. This is similar to dark pixel subtraction in Landsat atmospheric correction.
4. Ratios can be normalized.
Note that ratios also often sharpen geological boundaries for
structural control of features. You can set up ratios by editing the
formula for a layer, typing in the formula (for example, “I1/I2”),
setting the inputs to the appropriate bands and rescaling the output.
5. Ratios can be combined with other data. For example, RGB radiometrics over VD or DTM. Ratios can be used in any other way you would like to combine the data.
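The “(A - Amin)/(B - Bmin)” form, with the output rescaled for display, can be sketched as follows (hypothetical values; the division guard is a convenience of the sketch):

```python
import numpy as np

def min_sub_ratio(a, b):
    """'(A - Amin)/(B - Bmin)' ratio, akin to dark pixel subtraction,
    rescaled to a 0-255 display range."""
    num = a - a.min()
    den = b - b.min()
    ratio = np.divide(num, den, out=np.zeros_like(num, dtype=float),
                      where=den != 0)
    span = ratio.max() - ratio.min()
    return 255 * (ratio - ratio.min()) / span if span else ratio

k  = np.array([[1.2, 2.4], [3.0, 4.8]])   # hypothetical K grid
th = np.array([[4.0, 6.0], [8.0, 12.0]])  # hypothetical Th grid
display = min_sub_ratio(k, th)
```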
Ratio Combinations (RGB)
An almost infinite variety of RGB ratio combinations is possible, and generally more than one is needed to define all the geology.
Combining Radiometrics and Other Data
Radiometric data can be combined with Landsat TM, magnetic and other data. It is best to have a Digital Terrain Model when interpreting radiometric data, because soils can be transported downslope or throughout catchments, away from source rocks.
It can also be useful to overlay streams or river vectors to identify
transported clays in valleys. For example, create a colour drape
image with magnetics data as colour and radiometrics data as an
intensity layer or vice versa. Alternatively, use the RGBI image with
Potassium, Thorium, Uranium and Magnetics.
Radiometrics and Satellite Data
Combination of radiometric images with good satellite data (e.g. TM, SPOT) can enhance soil classification and enable swamps and water bodies to be identified (Wilford et al, 1992). With similar pixel sizes and different vegetation responses, detailed radiometrics and TM data complement each other well in the information they can potentially provide. There is much scope for applied research and experimentation in combining satellite and radiometric data.
Classification and Principal Components
Other processing that can be applied to radiometric data is
classification and principal components.
Classification involves taking a large dataset with many variations in it (for example, many bands with many variable levels) and dividing it into a small number of classes. You can either identify individual
known types of mineralogy and then let ER Mapper compute the
probable mineralogy for the rest of the image, as in supervised
classification, or have ER Mapper automatically identify separate
classes to which you can then assign probable identities.
In radiometrics, different geology and soil have different K, Th, U
concentrations which might reflect underlying in situ geology or
transported soils.
Note that if you want to classify on anything other than single bands, for instance ratios, you must first save as a virtual dataset, define regions and statistics for the new datasets, and then classify.
It is always important to try to classify radiometrics to see what it
looks like. There are two techniques:
• Supervised classification, where you can use maximum likelihood or Mahalanobis distance. The interpreter can define training regions where you are confident of known uniform geology and a reasonably uniform response, and use ER Mapper to classify the entire image into these training region classes.
• Unsupervised classification, where you can use an arbitrarily small number of classes, for example 10 or 20. If you use more, you will have trouble differentiating between the different classes.
• Principal Components. Principal components are only useful where you have multiple bands of data. They are very difficult to interpret but may be of use in an empirical approach. You can be
fairly sure that PC1 will show you the most common changes
from the perspective of the radiometric data, even though it is
not clear what those changes are. Therefore, PC1 can highlight
prominent features in the area. The final PC, in radiometrics,
which is generally PC4, normally has most noise. Because
Uranium is most noisy, most of the Uranium noise will end up in
PC4. You may want to look at the other principal components to
see if they highlight strong features. Again an empirical approach
using ground control (geological and / or soil mapping) is
advised.
Principal components analysis on 3 band radiometrics (K, Th, U)
and 4 band radiometrics (K, Th, U and Total Count) may be worth
experimenting with to see which is more empirically useful.
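A sketch of the principal components computation itself, on synthetic K, Th, U values (the correlation structure here is invented for illustration):

```python
import numpy as np

# Synthetic K, Th, U samples: Th correlated with K, U noisier.
rng = np.random.default_rng(42)
k  = rng.normal(2.0, 0.5, 1000)
th = 4.0 * k + rng.normal(0.0, 0.2, 1000)
u  = rng.normal(1.5, 0.8, 1000)
bands = np.column_stack([k, th, u])

# Principal components from the eigenvectors of the band covariance.
centred = bands - bands.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(centred, rowvar=False))
order = np.argsort(eigvals)[::-1]            # largest variance first
pcs = centred @ eigvecs[:, order]            # PC1 first; last PC has least variance
```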
Seismic reflection data imaging
Seismic data images supplied with ER Mapper are from a 3D seismic dataset.
Introduction
Seismic imaging first started with 3D data (Tilbury and Bush, 1991) and has since been applied to 2D data. Care must be taken in preparation of the images to get a satisfactory result.
2D seismic data
Most available seismic horizon data is 2D data collected along a
random series of lines. It is interpreted on a seismic workstation
(e.g. Landmark, Charisma or GEOQUEST) where it can be exported
as a grid or as interpreted horizons. Because of the large line
spacings between 2D lines the grids produced by seismic workstation
software are adequate for workstation interpretation guidance
purposes, but they are mostly not suitable for imaging because the
gridding algorithms may not be optimal for imaging. Export of the 2D
seismic two-way time horizon data and fault polygon data to a
seismic mapping package (e.g. Petroseis, Zycor or Radian CP3) is
recommended for image quality gridding.
Gridding of time thickness is also problematical, and probably needs to be done on the horizon time difference data rather than by subtracting the two horizon grids. Care needs to be taken in both horizon and time isochron gridding with faults, across faults, and in small fault blocks where little seismic control is available. The better the contour/fault editing capability of the seismic mapping package, the better its fault gridding capability (e.g. ability to assign two-way time values around fault polygons) and its ability to incorporate interpreter guidance in the attainment of a final grid / contour, and the better the final image result.
3D seismic data
3D seismic horizon data is growing in usage and volume and is
readily exported as a grid from workstations to ER Mapper. With 3D
data the grid resolution is so fine that pixelation at the 3D grid
resolution using the gridding capabilities of the workstation produces
image quality grids. With 3D data various seismic attributes can be
exported to ER Mapper along with average and interval velocity
grids.
Transfer of grids and fault polygons to ER Mapper is necessary to successfully image the seismic data. ER Mapper’s capability to import from the workstations directly is growing rapidly. Links to the seismic mapping packages are via ASCII grid format transfer and DXF vector format, or user shell scripts for polygons. Once in ER Mapper, the individual horizon and isochron (time, velocity and attribute) grids need to be combined in a multiband dataset (with other data, e.g. topography/bathymetry, gravity, magnetics) for analysis.
Imaging two-way time data
Pseudocolour or Colordrape images with solid fill fault polygons are
a useful presentation. DXF (linked or imported) contour and seismic
shotpoint map overlays can be instructive also. A reverse linear
transform portrays shallow times as reds (structural highs) and
deeper times as blues / violets (structural lows). For isochron data
the normal linear transform shows thickest formations in reds and
thinnest formations in blue / violet.
Creative operations on seismic image data
If velocity grids are bands of the seismic ER Mapper dataset, use of
simple ER Mapper formulae can effect simple depth and formation
thickness conversions.
Datuming of any horizon and aggregating of minor seismic units
(e.g. between major unconformities) can also be easily effected.
Ratioing of formation thickness can enhance relative thickness and
aid in an understanding of geological activity (e.g. relative fault
movement, deposition and erosion) throughout time.
Attribute data can be colordraped over a two-way time structure (or
formation isochron) intensity layer. If multiple attribute data related to physical properties of formations and sufficient well data are available, then the statistics and classification capabilities of ER Mapper afford possibilities for experimentation with new methods to determine hydrocarbon indicators.
Dip and Azimuth Displays in Seismic
ER Mapper enables dip and azimuth displays to be generated for 3D and 2D horizons (though beware of gridding artefacts on 2D horizons). Several standard algorithms are provided under the “Application_seismic” directory, and an azimuth lookup table gives the standard display look and feel (Dalley et al, 1989). The dip algorithm gives a display similar to a static sun angle.
Example algorithms:
• examples\Data_Types\Seismic\Horizon_Azimuth.alg
• examples\Data_Types\Seismic\Horizon_Dip
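Dip and azimuth can be sketched from finite-difference gradients of a gridded horizon. The conventions below (degrees, azimuth measured clockwise from grid north) are assumptions of the sketch, not necessarily those of the supplied algorithms:

```python
import numpy as np

def dip_azimuth(twt, spacing):
    """Dip magnitude and azimuth of a gridded two-way-time horizon
    from its finite-difference gradients."""
    dz_dy, dz_dx = np.gradient(twt, spacing)      # rows = y, cols = x
    dip = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    azimuth = (np.degrees(np.arctan2(dz_dx, dz_dy)) + 360) % 360
    return dip, azimuth

# Hypothetical horizon dipping uniformly toward increasing x (25 m grid).
x = np.arange(8) * 25.0
twt = np.tile(0.004 * x, (8, 1))                  # two-way time in seconds
dip, azi = dip_azimuth(twt, 25.0)                 # azimuth 90 deg everywhere
```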
Data integration
Most importantly, previously disparate datasets (seismic, topography/bathymetry, magnetics, Landsat), all of which contribute to the structural understanding of an area, are now easily integrated and available in ER Mapper to enhance the seismic interpretation by joint interpretation and comparison. Figure 2 below shows an example of an integrated dataset from a study in the Otway Basin, Australia (Pettifer, 1992).
Gravity imaging
Gravity data is available in Australia from the AGSO National Gravity database and can be supplemented with new data. Old and new data, if properly tied to the IGSN71 gravity datum (gravity units used are micrometres per second squared) and the AHD height datum, can be combined and regridded at any appropriate Bouguer reduction density for display as an image. Older gravity data may be referenced to the earlier Potsdam gravity datum and be in units of milligals; it needs conversion to the IGSN71 datum and then combination with IGSN71 data and total regridding before imaging.
Offshore gravity data can be displayed as Bouguer anomaly (with
water replaced by the Bouguer density) or free air gravity. Bouguer
anomaly onshore with free air anomaly offshore is a common
convention.
Gravity image displays and spatial frequency
Gravity data (unlike magnetics, seismic, or other line-collected data) is usually much sparser in coverage, leading to lower spatial frequencies in the data.
Thus gravity data is particularly suited to colordrape imaging.
Depending on gridding artefacts, structural trends in gravity data
can be greatly enhanced with colordraping. Minimum curvature
gridding seems to give the best gridding with least station related
“pimple” artefacts.
As the spatial frequencies in the gravity data only reflect the gravity
data coverage, gravity images are best studied in conjunction with
other higher spatially sampled data for an area (e.g. magnetics,
topography etc.) to gain a better appreciation of near surface
geological variation. Use of a gravity station vector overlay is highly
recommended to understand the spatial frequency deficiencies and
possible artefacts of your gravity image.
Vector Overlays
Gravity images can be displayed with geological boundary overlays,
and station position overlays imported as DXF from geophysical
mapping packages (e.g. AGP, Geosoft/GIPSI, Intrepid, Petroseis).
Gravity and Bouguer density choice in imaging
Gravity data can be affected by terrain, and if topography images are available they ought ideally to be combined with the gravity data. Apart from terrain corrections in gravity, incorrect and/or variable Bouguer (terrain) density can give terrain-correlated spurious anomalies which mask the subsurface gravity effects. The Nettleton method of minimizing the correlation of Bouguer gravity with terrain can easily be effected over an entire image area, or over areas of uniform geology, with ER Mapper in a gravity/topography multiband dataset. In the discussion that follows, terrain corrections are ignored.
Bouguer gravity combined with free air gravity, topography and/or bathymetry in a multiband dataset enables calculation of the Bouguer anomaly at densities other than that used in the original Bouguer anomaly calculation on the point gravity data, prior to gridding and imaging, using the equations capability of ER Mapper and the standard gravity equations.
Bouguer gravity = Free air gravity - Bouguer plate correction
This reduces to:
BAonshore = FA - 2πρG x elevation
BAoffshore = FA + 2πG(ρwb - ρwater) x water depth
where ρwb = water bottom density; ρwater = water density; and G = gravitational constant.
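The onshore formula can be sketched numerically to recompute the Bouguer anomaly over a range of reduction densities, as suggested below. Grids, units and values are hypothetical; the slab term 2πρGh is evaluated in SI units and converted to milligals:

```python
import numpy as np

G = 6.674e-11           # gravitational constant, SI units
MGAL = 1e5              # conversion from m s^-2 to milligal

def bouguer_onshore(free_air_mgal, elevation_m, density_t_m3):
    """BA = FA - 2*pi*rho*G*h (Bouguer slab), density in t/m^3."""
    rho = density_t_m3 * 1000.0                  # to kg/m^3
    return free_air_mgal - 2 * np.pi * rho * G * elevation_m * MGAL

# Hypothetical free air and topography grids; recompute BA over a
# range of plausible reduction densities.
fa = np.full((4, 4), 20.0)       # mgal
elev = np.full((4, 4), 100.0)    # metres
densities = np.arange(1.9, 3.31, 0.1)
ba_bands = [bouguer_onshore(fa, elev, d) for d in densities]
```

At the conventional 2.67 T/m3 the slab term is about 0.112 mgal per metre of elevation, a useful sanity check on the arithmetic.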
Using the “Save As Dataset” option a new multiband Bouguer anomaly dataset can be created containing topography and Bouguer anomaly calculated by the formulae above at a range of plausible densities for the area (say 1.9 to 3.3 T/m3 at 0.1 T/m3 intervals). If geological boundaries are available and stored in ER Mapper as attributed polygons within an ERVEC vector dataset, the polygons (preferably labeled by geological name) can be imported as regions to the multiband Bouguer anomaly dataset.
Calculation of statistics on the Bouguer anomaly dataset will then enable analysis of the correlation of gravity and terrain for regions (ALL or individual geological regions). From this analysis a better Bouguer anomaly image can be produced through a better choice of density. The crossplotting facility will also enable a visual assessment of terrain/Bouguer anomaly correlation for various densities.
A word of caution: typically a topography image may be based on
more data than the gravity image and therefore contain higher
spatial frequencies than the gravity. In this case, grid the terrain data from the gravity stations alone using the same gridding algorithm and criteria, so that both topography and gravity have the same spatial frequencies.
Gravity stripping
The above-mentioned Bouguer plate correction can also be used to enable a crude form of gravity stripping where subsurface depth horizons are available for an image area (for example, from seismic and tracehole data in sedimentary basin areas). Gravity stripping can be effected by calculating the approximate total effect of each layer at an appropriate density, and subtracting it from the free air gravity.
This technique can identify where gross gravity effects are coming from within a sedimentary basin, but can introduce errors and artificial trends in the stripped gravity data at faulted edges of subsurface layers where the Bouguer plate assumption breaks down.
A more rigorous approach to gravity stripping (or whole geology
modelling) involves 3D modelling of terrain and is beyond the scope
of this discussion and most gravity interpretations extant in the
literature today. The NODDY software package enables forward
modelling of gravity for complex structures and integrated seismic /
gravity surveys / modelling is becoming more frequently used in the
petroleum industry, so no doubt this area of gravity interpretation
will evolve with a role for creative imaging in the interpretation
process.
Gravity Profiling
Gravity profiles can be extracted from a gravity image using the profiling option. Simple cross-sections can be digitized, or existing vectors can be chosen for the profile (for example seismic traverses, flight paths or roads).
Dynamic links to user code
Dynamic links to user code enable further innovative use of ER Mapper with gravity and are an area worthy of further exploration. Crude near-zone terrain/water bottom corrections of Bouguer gravity are one example where the user code link could be useful.
Geophysical filters in gravity
Geophysical filters (previously discussed) can be applied to gravity
data (for example, upward continuation). Downward continuation
filters must be used with caution.
Airborne electromagnetics imaging
Resistivity
Earth resistivity is primarily a function of the interconnected porosity
of the rock, the amount and quality of contained water, and degree
of saturation. Electronically conductive minerals such as metallic sulphides, which have very low resistivities, generally contribute little to the bulk resistivities of earth materials unless present in large quantities. Thus, earth resistivity is normally determined by ionic conduction through the pore spaces of rocks.
Intrusive rocks have low porosity and much higher resistivities than sediments. A sandstone aquifer, if saturated with fresh water, might have intermediate values. The presence of clay minerals will also lower resistivities significantly because of their surface electrical properties.
EM imaging
Airborne EM data varies from early time-domain (pulse decay) six-channel INPUT data through to the modern SALTMAP and GEOTEM III multichannel systems. Apparent conductivity or raw channel voltage data is gridded as for conventional airborne magnetics. The sheer volume of data from a modern airborne EM system has prompted a move toward 3D imaging, as in 3D seismic, so it remains to be seen what the near-future role of conventional image processing in fixed wing time-domain airborne EM will be. Certainly ER Mapper’s multiband dataset and formula capabilities are ideal for imaging this data.
Helicopter bird EM systems (e.g. Dighem) use multifrequency EM to provide a range of effective depths of investigation, and the data from these systems are readily amenable to gridding and ER Mapper imaging. These systems are popular in Canada, but there are as yet few case histories to establish an accepted practice of imaging EM data.
Ground based EM
systems imaging
Mining grid geophysics often uses multichannel time-domain EM
(e.g. SIROTEM, ZONGE, Geonics) and frequency domain EM (e.g.
Geonics EM34) systems, all of which can be gridded and imaged with
other geophysical data.
Topography
Surface topography is another geophysical anomaly method amenable to image processing in exploration. Curiously, apart from airphoto interpretation for geological structure and some geomorphological studies, there has traditionally been little use of topography as another “geophysical” property for interpretation purposes in geological mapping or exploration.
Topography and bathymetry (surface elevation) is the simplest of all physical Z values that can be attributed to an x,y position on the earth's surface, and is therefore arguably the simplest measurable geophysical property of the earth. The earth's surface is the present-day unconformity or disconformity, and its shape is determined by current and previous geological processes. It can therefore reasonably be expected that knowledge of that shape, combined with other surficial and interpreted subsurface geological and geophysical properties or attributes, may considerably enhance the understanding of local and regional geology.
Erosion is the principal agent shaping both subsurface unconformities and the surface topography, smoothing the topography and producing low spatial frequency, long wavelength topographic anomalies. Steep structural relief on the present-day surface or an older subsurface unconformity may be related to structure, either directly by new faulting, by reactivation of existing older and deeper structure, or by differential settlement (often manifest along pre-existing faults); this produces high frequency, short wavelength topographic anomalies.
This wavelength separation of topographic anomalies makes
topography, like other geophysical techniques, amenable to
enhancement, detection and interpretation using standard imaging
techniques. In the case of sedimentary basin exploration, topography/bathymetry imaging is directly analogous to two-way time imaging.
Sources of topography
data
Sources of digital elevation data for gridding, detailed mapping and imaging of topography and bathymetry include topographic contour data, benchmarks, boreholes, cultural surveys, bathymetric surveys, gravity stations and seismic traverses.
Fortunately, in parallel with the digital and computing revolution in
geophysics, there has been an increase in the availability of digital
terrain data from non-geophysical sources. In Australia spot height,
contour and gridded terrain data is now readily available from
national mapping authorities like the Australian Land Information
Group (AUSLIG) or State mapping authorities from 1:100000 and
1:50000 scale mapping. In geophysically acquired topography, the advent of GPS spot and continuous navigation in a readily useable digital form has led to a revolution in topography geophysics (e.g. terrain models are a routine by-product of airborne geophysics).
Another potential source of data is unmigrated digital terrain derived
from airborne geophysics, which has potentially a higher data
density than AUSLIG data, but until recently, has not been given the
quality control attention needed to make it of sufficient accuracy.
SPOT satellite-derived terrain models are another potential data source.
If all these data sources are utilised, then topography has the
highest data density of any geophysical technique over the earth's
surface. It is essentially “everyman’s geophysics”.
Topography in
combination with other
data
Topography is best used with other data. Draping of geophysical
anomaly pseudocolour over terrain intensity is the most obvious
application.
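The draping idea can be sketched outside ER Mapper as well. The following Python fragment (illustrative only; the function name and normalisation choices are ours) maps the anomaly grid to hue and the terrain grid to intensity, the HSV-style combination described here and by Milligan et al. (1992):

```python
import colorsys
import numpy as np

def drape(anomaly, terrain):
    """Drape an anomaly pseudocolour (hue) over terrain shading (intensity).

    Returns an (ny, nx, 3) RGB image with values in [0, 1].
    """
    def normalise(a):
        a = np.asarray(a, dtype=float)
        span = float(a.max() - a.min()) or 1.0
        return (a - a.min()) / span

    hue = (1.0 - normalise(anomaly)) * 2.0 / 3.0  # high anomaly -> warm (red)
    val = 0.3 + 0.7 * normalise(terrain)          # keep shaded areas readable
    rgb = np.empty(hue.shape + (3,))
    for idx in np.ndindex(hue.shape):
        rgb[idx] = colorsys.hsv_to_rgb(hue[idx], 1.0, val[idx])
    return rgb
```

Saturation is held at 1.0 so that colour carries only the anomaly information while brightness carries only the terrain.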
Use of terrain data with the shallowest seismic horizon and a weathering velocity enables a surface-to-shallowest-horizon time to be calculated if necessary.
Combination of topography with geophysical anomalies is desirable
to eliminate near surface effects on other geophysics.
References and
further reading
Below are the references that were used in creating this document.
Magnetics and gravity
Milligan, P.M., Morse, M. P., and Rajagopalan, S. 1992, ‘Pixel map
preparation using the HSV colour model’, Exploration Geophysics,
23, 219-24
Reeves, C.R. 1992, ‘New horizons for airborne geophysical mapping’,
Exploration Geophysics, 23, pp 273-80
Radiometrics
Dickson, B. L. and Scott, K. M., 1992, ‘Interpretation of aerial gamma-ray surveys’, CSIRO Division of Exploration Geoscience Report 301 (144pp)
Geophysics and Geochemistry in the Search for Metallic Ores,
Geological Survey of Canada Economic Geology Report 31, 1979...
particularly:
Grasty, R.L. ‘Gamma ray Spectrometric Methods in Uranium
Exploration - Theory and Operational Procedures’ (pp 147 - 162)
Killeen, P.G., ‘Gamma Ray Spectrometric Methods in Uranium
Exploration - Application and Interpretation’ (pp 163-229)
Seismic
Dalley, R.M., Gevers, E.C.A., Stampfli, G. M., Davies, D.J., Gastaldo,
C.N., Ruijtenberg, P.A., and Vermeer, G.J.O. 1989, ‘Dip and azimuth
displays for 3D seismic interpretation’, First Break, 7, 86-96
Tilbury, L. A. and Bush, D. 1991, ‘Image processing of interpreted
3D seismic data to enhance subtle structural features/lineations’,
Exploration Geophysics, 22, 391-6
Tilbury, L. and Barter, T. 1992, ‘New technology - a major impact on
a producing field: North Rankin Gas Field, North West Shelf’, APEA
Journal, 32, 20-32
Cockshell, C.D., Allender, J.F., and Vinall, D.R. 1993, ‘Image
Processing for seismic mapping saves papering the walls’,
Exploration Geophysics, 24, 407-14
Airborne EM
Anderson, A., Dodds, A.R., McMahon, S., and Street, G.J. 1993, ‘A
comparison of airborne and ground electromagnetic techniques for
mapping shallow zone resistivity variations’, Exploration Geophysics,
24, 323-32
Ground geophysics
Pettifer, G.R., Djordjevic, N., Heislers, D., Schaeffer, J., and Withers,
J.A. 1989, ‘Geophysical and image processing methods for detection
of fireholes in brown coal, Latrobe Valley’, Exploration Geophysics,
20(1/2), 153-8
Data integration
Conradson, K., and Nilsson, G. 1984, ‘Application of integrated
Landsat, geochemical and geophysical data in mineral exploration’,
Proceedings of the Third Thematic Conference on Remote Sensing
for Exploration Geology, Colorado Springs, Colorado, pp499-511
Kowalik, W.S., and Glenn, W.E. 1987, ‘Image processing of
aeromagnetic data and integration with Landsat images for
improved structural interpretation’, Geophysics, 52, 875-84
Spencer, G.A., Pridmore, D.F., and Isles, D.L. 1990, ‘Data
integration of exploration data using colour space on an image
processor’, Exploration Geophysics, 20(1/2), 31-5
Pettifer, G., Tabassi, A. and Simons, B.A. 1991, ‘A new look at the
structural trends in the onshore Otway Basin, Victoria, using image
processing of geophysical data’, APEA Journal 31(1), pp213-28
Pettifer, G.R., and Paterson, R.G. 1992, ‘An ideal image processing
and interpretation system’, EMU Newsletter, No3, Appendix 1
(Abstract and figure)
Wilford, J.T., Pain, C.F., and Dohrenwend, J.C. 1992, ‘Enhancement
and integration of airborne gamma-ray spectrometric and Landsat
imagery for regolith mapping - Cape York Peninsula’, Exploration
Geophysics, v23, pp441-46
Pettifer, G.R., Simons, B.A., and Olshina, A. 1992, ‘Advances in
image processing for data integration in basin studies - Eastern
Otway Basin - a case study’, Proceedings of the AAPG International
Conference and Exhibition, Sydney - August 2-5, 1992, p68
(Abstract)
Geophysical filters
Fuller, B.D. 1967, ‘Two-dimensional frequency analysis and design of
grid operators’, Mining Geophysics, Volume 2, pp 658-708. SEG,
Oklahoma
Mufti, T.R. 1972, ‘Design of small operators for continuation of
potential field data’, Geophysics, v37, pp488-506
Image Processing in Mineral Exploration
Author
Peter Williams, Western Mining Corporation, Mineral Exploration
Division, Perth, Western Australia.
Introduction
Since 1991 WMC Mineral Exploration Division has followed a strategy
of decentralizing its Image Processing functionality into its Regional
Exploration offices. To be able to do this effectively required Image
Processing software which could be easily learnt and used, yet retain
sufficient flexibility and functionality to be able to cater for the
sophisticated user. Each regional office has between 2-5 gigabytes
of geoscientific data, and the amount of data is increasing at an
alarming rate, as high resolution airborne geophysical datasets and
other geoscientific datasets are acquired or become available from
State and Federal Government Exploration initiatives.
The vast amount of data to be processed and analyzed biased the
hardware choice towards Unix workstations. The image processing
software package chosen was ER Mapper, developed by Earth
Resource Mapping in Perth. The package met all the criteria at the
time and has continued to develop in a direction which has enhanced
its functionality in the mineral exploration industry.
Today ER Mapper is used by exploration geophysicists, geologists
and geochemists in their own offices on a daily basis. Typically
different geoscientific datasets in different forms are overlain at
chosen scales, and relationships between the datasets are
interpreted in terms of mineral prospectivity.
The data forms are raster or vector. Raster datasets are obtained by
gridding of data gathered along lines onto an equidimensional grid
(e.g. Airborne Geophysical mapping software or Geographical
Information Systems).
Case studies of the use of ER Mapper in the exploration office follow.
Displaying
geochemical data
Soil geochemical data gathered on lines 200m apart, with samples taken every 40m along the lines, was gridded using carefully selected parameters that best honor the data.
mesh interval is 40m. The data was displayed using a pseudocolor
look up table in which warm colors were higher values and cooler
colors were lower values. The transformation from input gridded
geochemical values to screen display values (normally ranging from
0 to 255 for an 8 bit display unit) was achieved via a linear function.
This is typically referred to as a linear stretch. Many elements can be
measured in a soil geochemical survey, and the grid of each element
can be displayed as a band in a manner analogous to the bands in a
Landsat image.
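The linear stretch described above can be sketched in a few lines of Python (illustrative only; ER Mapper performs this via its transform dialog):

```python
import numpy as np

def linear_stretch(grid, lo=None, hi=None):
    """Linearly map grid values onto 0-255 display levels for an 8-bit display.

    `lo` and `hi` default to the grid minimum and maximum; supplying
    narrower limits clips the tails and spreads the remaining range.
    """
    lo = float(grid.min()) if lo is None else float(lo)
    hi = float(grid.max()) if hi is None else float(hi)
    scaled = (grid - lo) / (hi - lo) * 255.0
    return np.clip(np.rint(scaled), 0, 255).astype(np.uint8)
```

A pseudocolor look up table then assigns warm colours to the high display levels and cool colours to the low ones.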
Combined
geochemical data
Different gridded geochemical elements can be added together or
ratioed, and referenced to other geoscientific datasets, given that
the datasets use a common map projection. Generally use is made of the elements nickel and copper in nickel exploration, and the range of values is referenced to airborne magnetic datasets. Magnetic highs can be caused by ultramafic rocks which host known nickel mineralization. Coincident nickel, copper and magnetic highs are seen as encouraging.
For example you can show Nickel plus Copper grid values as
pseudocolor overlain on a filtered version of gridded airborne
magnetic data, displayed as intensity or tonal variations. In this
case high values appear as white or lighter tones and low values
appear as darker tones.
Geochemical
fused with
Landsat
The same gridded geochemical dataset can be referenced to Landsat
data in a similar manner. For example, you can overlay geochemical
data on band 1 of Landsat data, which records the reflected
electromagnetic energy from the earth’s surface in the 0.485 micron
wavelength part of the spectrum. This part of the spectrum is
sensitive to the presence of iron oxides at the surface, which is
important when interpreting the significance of the variations of the
geochemical data.
Conclusion
These are 3 simple illustrations of how ER Mapper is used to help
process, visualize and analyze geoscientific datasets in the WMC
exploration offices. Even though ER Mapper is increasing its computational functionality and visualization aids, it retains its easy-to-learn, easy-to-use interface. The user now needs to be able to harness and direct this power toward exploration effectiveness. Efficient software is useless unless it is used effectively, and this is the challenge issued to the user by such a product as ER Mapper.
Semi-Automatic Interpretation of
Aeromagnetic Datasets
Authors
P.K. Williams and Graham Staker, Western Mining Corporation, 191 Great Eastern Highway, Belmont, Western Australia 6104.
Introduction
For at least the last 40 years, geophysicists involved in trying to extract information from aeromagnetic surveys have been devising ways to speed up the process (Werner, 1953; McGrath and Hood, 1970; Gerard and Deberglia, 1975; Kilty, 1983; Shi, 1994). There are basically two main requirements for this to happen:
•
A robust computationally efficient and fast mathematical
algorithm, which will calculate solutions for each anomaly. As all
interpreters know there are generally a number of possible
solutions which can fit the aeromagnetic data for each anomaly.
Hence for large surveys with a high density of anomalies there
can be an enormous number of solutions for each survey.
•
A way of displaying the results of the above process in a manner
where the interpreter can then rapidly judge, based on his or her
knowledge and experience, which solutions are the most
appropriate for the anomalies in the specific Geological
environment.
The geophysical literature abounds with answers directed at the first requirement, but very few have effectively tackled the second. Those that have typically have not addressed the three-dimensional nature of the anomalies effectively. This note attempts to redress that limitation.
The semiautomatic
interpretation
algorithm
The algorithm used for the purpose of this paper is the improved
Naudy Method, which is described in detail by Shi (1994). In essence
the program tries to fit a number of two dimensional models to each
anomaly encountered along a line of data. The models are those of a thin horizontal sheet, a vertical contact and a thin vertical dyke.
For each selected anomaly along a line, several calculations are
made for different windows of the line data, for each model. For each
such calculation a degree of fit is calculated, termed a similarity
coefficient, which is an indicator of the degree of mathematical fit
between the calculated response and the measured line data. Each
anomaly typically has several lines of data across it. Thus for any one
anomaly there are often several models suggested by the program.
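Shi (1994) defines the similarity coefficient precisely; as an illustrative stand-in, one common choice for such a degree-of-fit measure is the correlation coefficient between the measured profile and the calculated model response, sketched here in Python (function and variable names are ours):

```python
import numpy as np

def similarity(measured, calculated):
    """Correlation-style degree of fit between a measured anomaly profile
    and a calculated model response.

    Returns 1.0 for a perfect fit (up to scale and offset) and values
    near 0 when there is no fit.
    """
    m = np.asarray(measured, dtype=float) - np.mean(measured)
    c = np.asarray(calculated, dtype=float) - np.mean(calculated)
    denom = np.linalg.norm(m) * np.linalg.norm(c)
    return float(np.dot(m, c) / denom) if denom else 0.0
```

Because the measure is insensitive to scale and offset, it ranks the shape agreement of candidate models rather than their absolute amplitudes.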
The program is applied in two stages. The first is one of selecting
appropriate anomalies for more detailed analysis. The second
involves doing the more detailed analysis.
Abbreviated time
and motion study
of the
interpretation
process
To attempt to put this interpretive process into a time and motion
context, I have assumed that the average Mineral Aeromagnetic
Survey has 3 independent magnetic anomalies per square kilometre
(a conservative number for some geological environments). Each
anomaly has on average 3 lines of data defining the anomaly
(roughly assuming a line spacing of 200 meters) and that to analyse
each anomaly, 3 different windows have been used and 3 different
models (see above) used for each of the windows. These simple assumptions indicate that, for each square kilometre of surveyed ground (or for each 6 linear kilometres), there are 81 potential solutions suggested by such a semi-automatic interpretation process! It is thus not difficult to see how Professor David Boyd (personal communication) derived his estimate of interpreting aeromagnetic data at a rate of 16 kilometres per hour for surveys flown for mineral geophysics!
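The assumptions in the paragraph above multiply out as follows:

```python
# Per square kilometre: anomalies, lines per anomaly, analysis windows
# per line, and models fitted per window (the assumptions stated above).
anomalies_per_km2 = 3
lines_per_anomaly = 3
windows_per_line = 3
models_per_window = 3

solutions_per_km2 = (anomalies_per_km2 * lines_per_anomaly
                     * windows_per_line * models_per_window)  # 3^4 = 81
```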
Traditional
interfacing of
algorithm output
with data and
interpreter
Traditionally the output from the program has been in profile form
(Figure 1).
Figure 1 A typical output from the Naudy Aeromagnetic Semi Automatic
Interpretation Algorithm.
This is difficult to work with as:
•
It does not allow the interpreter to appreciate such critical
information as strike direction or extent of the anomaly, the plan
form of the anomaly, exactly where the line being analysed is
with respect to the location of the anomaly peak, and other such
information as the geological setting of the anomaly.
•
It radically compresses the geometrical significance of the
interpretation into a set of somewhat abstract symbols. (For
instance a dyke is shown as a box whose dimensions are
independent of the interpreted dimension of the model.)
In this traditional interface, the interpreter has to have access to a
large number of paper maps and plans, which can make the job
cumbersome and slow. Now this map and plan information is available digitally and can be displayed, very easily using such programs as ER Mapper, on the same computer as that which runs the Semi Automatic Interpretation algorithm.
Interfacing with
ER Mapper
ER Mapper has the capacity to display both vector and raster information. Important in this context is its ability to store a number of bands of raster information efficiently. We also use ER Mapper’s formula capability to help sift through the solutions output by the Semi Automatic Interpretation algorithm.
For the sake of brevity we consider only the output from the second
part of the interpretive process. However it should be noted that a
very similar style of interface can be easily applied to the first stage
of the interpretive process, anomaly selection. The output consists
of a similarity coefficient, sample window length, depth, halfwidth,
susceptibility and dip estimates for each body type. This information
is loaded into different bands in an ER Mapper BIL file. That is, Band 1 will contain only the similarity coefficients for a selected body type and sample window; Band 2 may contain the depth estimates associated with those similarity coefficients (that is, the similarity coefficient in Band 1 is derived from a model whose depth to source is recorded in Band 2); Band 3 the half width estimates; Band 4 the susceptibility estimates; Band 5 the dip estimates; and so on.
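This band layout can be sketched in Python. BIL (band interleaved by line) storage writes, for each image line, every band's row in turn, giving axis order (line, band, column); the array names below are illustrative stand-ins for the solution attributes, not ER Mapper's own:

```python
import numpy as np

ny, nx = 4, 5  # toy grid size
rng = np.random.default_rng(1)
similarity_c = rng.random((ny, nx))           # Band 1: similarity coefficients
depth = rng.uniform(50, 300, (ny, nx))        # Band 2: depth-to-source (m)
halfwidth = rng.uniform(10, 80, (ny, nx))     # Band 3: half width estimates
suscept = rng.uniform(1e-3, 9e-3, (ny, nx))   # Band 4: susceptibility (SI)
dip = rng.uniform(0, 180, (ny, nx))           # Band 5: dip estimates (deg)

bands = np.stack([similarity_c, depth, halfwidth, suscept, dip])
# Reorder from (band, line, column) to BIL order (line, band, column):
bil = np.ascontiguousarray(bands.transpose(1, 0, 2)).astype(np.float32)
# bil.tofile(...) would then write the raw BIL stream to disk.
```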
This interpreted output from the analysis stage may then be quickly
displayed and analysed by an interpreter, so that a final interpretive
map can be constructed. The different modes of display of the selected output are:
•
The numerical output in each band can be displayed as a number
in vector form (Figure 3, 4 and 5). The number is written to the
immediate right of the point of calculation, with the point of
calculation being shown as a single dot. This capability is via a dynamic link which allows the user to make such decisions as: show all the vertical contacts which are interpreted with a degree of fit better than some chosen level within certain geographical bounds
•
The output in each band can be displayed as strip raster information (Figures 2, 3, 4 and 5). The strip represents the least squares best fit line fitted to the true flight path of the plane when gathering the data. When displayed in this manner it is possible to use selective stretching (that is, carefully chosen minimum and maximum values), as well as the formula functionality; for example, showing all depth estimates greater than 200 meters, and/or showing all bodies with interpreted susceptibility less than 5x10-3 SI units, by assigning a value of 255 to Band 99 (say) in positions where the models meet these criteria, and a value of 0 where they do not
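The formula-style selection in the last point can be sketched in Python; the two arrays below are illustrative stand-ins for the depth and susceptibility solution bands:

```python
import numpy as np

# Illustrative solution bands: depth below sensor in metres and magnetic
# susceptibility in SI units (values are made up for the example).
depth = np.array([150.0, 250.0, 320.0, 180.0])
susceptibility = np.array([2e-3, 8e-3, 3e-3, 4e-3])

# Flag as 255 the models deeper than 200 m AND with susceptibility below
# 5e-3 SI, 0 elsewhere - the "Band 99" output described above.
flag = np.where((depth > 200.0) & (susceptibility < 5e-3),
                255, 0).astype(np.uint8)
```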
All of these displays can be superimposed onto various maps of the
data, including raster images (Figures 2, 3, 4 and 5) or flight path or
stacked vector plots of the aeromagnetic data or its derivatives (e.g.
second horizontal derivatives, the analytical signal or reductions to
the pole) or in fact on maps of any pertinent Geoscientific
information (e.g. topography, drilling isopach maps, geological
outcrop maps, or regolith maps), so that the experience and
knowledge of the interpreter/s can be quickly put to work, to derive
a final interpretation. The interpreted widths can be plotted (Figure 2) in strip raster form over the pseudocolored Total Magnetic Intensity (TMI) without regard to the degree of fit or appropriateness of the model. The interpreted depths from all dyke models exhibiting a good fit can be plotted (Figure 3) as numbers (below the sensor, in meters) and superimposed on a pseudocolored TMI image. The horizontal raster strips show the selectively stretched similarity coefficients for each of the model fits. White and red colours within this strip indicate a good fit; blue represents a poor fit. Similarly, dip interpretations are shown in Figure 4 (dip in degrees measured clockwise from the horizontal), and magnetic susceptibility (values are in 10-3 SI units) in Figure 5. The final interpretation can then be drawn on the screen using ER Mapper’s annotation system and/or a digital map symbol library linked to ER Mapper.
Figure 2: Interpreted widths output from the Improved Naudy Semi
Automatic Interpretation Algorithm for the magnetic dyke model. The widths
are displayed as white horizontal bars, and are superimposed on a
pseudocolored raster image of the TMI
Figure 3 Interpreted depths output from the Improved Naudy Semi
Automatic Interpretation Algorithm assuming a magnetic dyke model. The
depths are displayed as meters below the sensor, and are superimposed on
the pseudo-colour raster image of the TMI.
Figure 4 Interpreted magnetic susceptibility output from the Improved
Naudy Semi Automatic Interpretation Algorithm assuming a magnetic dyke
model. The susceptibility values are displayed as in SI units (*10-3), and are
superimposed on a pseudocolor raster image of the TMI.
Figure 5 Interpreted dips output from the Improved Naudy Semi Automatic Interpretation Algorithm assuming a magnetic dyke model. The dips are displayed as the degrees of dip, measured clockwise facing north, and are superimposed on a pseudocolored raster image of the TMI.
The output from this interpretation, specifically geometrical and
magnetic susceptibility information, can then be written to another
file which can be input into quantitative magnetic modelling software
such as ENCOM’s MODELVISION. An exact mathematical calculation
can then be performed and a more detailed and specific
interpretation can be performed if needed.
Summary
The functionality in ER Mapper can be easily used to increase the
efficiency and effectiveness of interpreting semi-regional to regional
aeromagnetic data. These gains are achievable by interfacing a
suitable semi-automatic interpretive algorithm, like the Improved
Naudy method, with numerical and appropriate visualisation in a
manner which allows the results of this analysis to be reconciled with
the three dimensionality of the data as well as other datasets.
Acknowledgments
We would like to acknowledge permission from WMC to publish such
information, and Digital Equipment Corporation for providing part of
the money to fund this research. We would also like to acknowledge
Dr. Shi for providing us with output from her Improved Naudy
Method Computing Code.
References
Gerard, A. and Deberglia, N., 1975, ‘Automatic three-dimensional
modelling for the interpretation of gravity or magnetic anomalies’,
Geophysics, 40: 1014-1034
Kilty, K.T., 1983, ‘Werner deconvolution of profile potential field
data’, Geophysics, 48: 234-237
McGrath, P.H., and Hood, P.J., 1970, ‘The dipping dike case: A
computer curve-matching method of magnetic interpretation’,
Geophysics, 35: 349-358
Shi, Z., 1993, Automatic Interpretation of Potential Field Data
Applied to the Study of Overburden Thickness and Deep Crustal
Structures, South Australia, PhD Thesis, The University of Adelaide,
Department of Geology and Geophysics
Werner, S., 1953, Interpretation of magnetic anomalies at sheet-like bodies, Stockholm, Sveriges Geologiska Undersökning, Ser. C, Årsbok 43
Custom-built Dynamic Links
Authors
Simon Crosato, an exploration geophysicist at the Central
Norseman Gold Corporation, and Bernie Plath are employees of
Western Mining Corporation.
Abstract
In this study ER Mapper has been used with the Ingres Geology
Database to construct dynamic links to increase exploration
effectiveness. Over thirty dynamic links from ER Mapper to different
portions of the Central Norseman Gold Corporation (CNGC) Ingres
geology database have been developed to date.
Introduction
The great need for an integrated, team approach to achieve
successful exploration goals has been emphasised by many workers.
This team consists of geologists, geophysicists and geochemists
directly, and other key units such as data clerks and drafting staff
indirectly. All provide a data processing service for the exploration
effort depending on their field of expertise.
The team may be together, but is the data from each individual
source together, and easily accessible by other team members? The
answer in the past has been to spend large amounts of time
preparing and updating numerous hardcopy plans to display the
relevant data. Now, the increasing power of workstations is enabling
large amounts of data to be processed in a short period of time.
Thus, it is becoming increasingly viable to examine the data whilst in
a digital form through image processing.
Western Mining Corporation (WMC) has chosen ER Mapper (Earth
Resource Mapping) and Ingres (Ingres Corporation) as the corporate
standard image processing and database package respectively. Over
thirty links from ER Mapper to different portions of the CNGC Ingres
geology database have been developed to date. These have proved
popular and are commonly used every day, with requests for further
data integration links accommodated when possible. This case
history aims to outline the method of data integration utilising
ER Mapper and Ingres at CNGC, with several examples.
Advantages of
digital data
integration
The advantages of dynamic links are many:
•
Only one copy of data exists in the database and this copy is
accessible by anyone. This is unlike the situation where many
plans exist for the same dataset(s) with all of different vintage.
•
Updates from a single authorized input source are automatic to
those who use the database. This ensures that users always
receive the most up-to-date copy of the data.
•
Scale independence. Data is automatically registered to its
correct position and scale on the screen, and is readily usable at
regional or prospect scale.
•
Interpretation is made easier and more thorough by
simultaneously displaying all available data (or combinations
thereof) for an area of interest.
•
Quality control of new data is on-going via comparison with other known ‘clean’ datasets.
•
More efficient use of time.
Conversely, there are not many disadvantages to digital data
integration, except maybe the need to learn how to use the relevant
computing packages.
Data Integration
at CNGC
As the percentage of exploration under cover increases, geophysical
methods become of great importance in targeting and lithological
mapping. Modern, ultra-detailed geophysical surveys generate vast
quantities of digital data that must be used efficiently and effectively.
Software (including ER Mapper) has to be created to handle such
processing using desktop workstations in real time. A major feature
of ER Mapper is its ability to simultaneously display vector and raster
data, thus enabling integration of data from multiple sources and
types.
At CNGC, the Ingres database will eventually store all nongeophysical data (it currently stores drill hole, geochemical, leasing
and target data). Data integration is achieved by extracting data
from Ingres, and using ER Mapper as the display and manipulation
tool (see Figure 1).
Figure 1 - Schematic diagram illustrating technical hardware and data flow.
What is SQL?
SQL (Structured Query Language) is the major high-level, nonprocedural database language used in commercial data processing,
and is the industry standard (ANSI) for relational databases such as
Ingres. With SQL statements, the user tells the database what
he/she wants; then the database determines how to obtain the
information or how to compute it. Use of SQL does not require any
previous programming knowledge.
The user translates a criteria-based information request into valid SQL commands. An example could be: extract all the aircore drill holes currently in the database that intersected ultramafic rocks and were drilled after the 1st of July, 1993, with nickel assays greater than 10000 ppm.
Depending on how the database tables are linked together, the
required SQL could be:
SELECT DISTINCT startdate, hole_type, key_type, ni_value  -- requested data fields
FROM drill_source dso2, drill_rock_code drc, drill_assay das  -- tables to search
WHERE -- following are the search criteria:
    dso2.startdate > '01/07/93' AND      -- holes drilled after 1/7/93
    dso2.hole_type LIKE 'AIRCORE%' AND   -- hole type must be aircore
    drc.key_type LIKE 'U%' AND           -- ultramafic criteria
    das.ni_value > 10000                 -- nickel assay over 10000 ppm
There is no limit on the length or number of constraints placed in the
SQL request. The above query may be entered interactively through
the database server program or submitted in batch mode as a
command file. The retrieved data is then written to the screen or an
output file.
Data storage
considerations
In order to have acceptable response times to reasonably complex
database queries, significant thought must go into the way in which
the data is stored. At CNGC, query time has been kept to a minimum
predominantly by performing the body of the query overnight, and
simply storing a pointer to the output value in another table. These
dynamic link tables have been indexed on northing, and hence have
short interrogation/extraction times via geographic area. This
method of updating the dynamic link tables saves the database
server searching through the entire database every time.
An example is the Maximum Gold Value in Hole link routinely used at
CNGC. The database server stores the maximum gold assay found in
each drill hole in a table with the hole’s coordinates once every week.
This entails searching through every drill assay for gold per hole and
determining the maximum. If this search was to be performed every
time an image was redrawn in ER Mapper, the time taken to
compute/extract the relevant data would be enormous and quite
unacceptable. By utilising the dynamic link tables described above,
the desired data search has already been completed, hence
extraction and plotting are accomplished in minutes.
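The overnight refresh described above can be sketched in Bourne shell. This is an illustrative reconstruction, not the actual CNGC job: the table and column names (drill_source, drill_assay, max_au_in_hole, au_value) are assumptions, as is the use of the Ingres terminal monitor (sql) to submit the batch file.

```shell
#!/bin/sh
# Hypothetical overnight job: recompute the maximum gold assay per hole
# into a small, northing-indexed summary table, so that interactive links
# never have to scan every assay record. Schema names are illustrative.
build_refresh_sql() {
  cat <<'EOF'
DELETE FROM max_au_in_hole;
INSERT INTO max_au_in_hole (hole_id, easting, northing, max_au)
SELECT   das.hole_id, dso.easting, dso.northing, MAX(das.au_value)
FROM     drill_source dso, drill_assay das
WHERE    dso.hole_id = das.hole_id
GROUP BY das.hole_id, dso.easting, dso.northing;
EOF
}
# A weekly cron entry could then pipe this into the Ingres terminal
# monitor, for example:  build_refresh_sql | sql exploration_db
build_refresh_sql
```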
Dynamic link
construction
All database links developed at present work in the same general
manner. The required data is primarily searched for by area, with
further restrictions/criteria imposed in the actual database query
(SQL). To facilitate easy extraction of data not necessarily to be
utilised solely by ER Mapper, a C program containing embedded SQL
statements has been written by B. Plath. This program takes
minimum and maximum eastings and northings, a data type, and
output filename arguments on the command line. Upon execution,
the program constructs the SQL query and submits it to the database
server. The data matching the SQL search criteria is then written to
the output ASCII file, an example of which is reproduced in Appendix
B. This data file may be utilised by other gridding, contouring and
plotting packages.
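The command-line interface just described can be mimicked in shell. The function below only shows how the image extents might become the area portion of the WHERE clause; the real work happens inside B. Plath's embedded-SQL C program, and the column names (dso.easting, dso.northing) are assumptions.

```shell
#!/bin/sh
# Sketch of the area restriction built from the command-line bounds that
# ER Mapper passes to the extraction program. Column names are assumed.
build_area_clause() {
  # $1=minEAST $2=maxEAST $3=minNORTH $4=maxNORTH
  echo "dso.easting >= $1 and dso.easting <= $2 and dso.northing >= $3 and dso.northing <= $4"
}
# Example call, mirroring:
#   ingres_2_xyz.exe 1 minEAST maxEAST minNORTH maxNORTH outputfilename
build_area_clause 391200 392000 6452400 6454000
```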
Two broad classes of external link currently exist - these have been
termed hardwired and interactive types. Although both classes
extract data from the database according to the general rules listed
above, each goes about this task in a different way.
Hardwired
In this link type, data extraction is performed by area (the active
ER Mapper image window) and is displayed according to pre-set
PostScript rules within the Bourne shell script. A good example of
this is the End of Hole Geology link (see Appendix A). The primary
geological rock code logged at the end of the hole is extracted for
every drill hole within the image area (see Appendix B). The extracted
data is then read by the link script record for record, and a plotting
colour determined by the geological rock code according to the
standard WMC legend.
Interactive
The most recent external link development has involved interactive
entering of SQL script (file or keyboard input) into an ER Mapper
overlay. The entered SQL may be altered between each display,
allowing easy graphical comparisons between the requested
datasets. This link type searches the database directly using a full
query constructed by the C program. Query response time is
acceptable because the more complex data restrictions/constraints
(compared to a hardwired link) entered interactively ‘sift’ the data
quite efficiently.
Overall, response time for each link type is in the order of a few
minutes. It is quite common, however, for the desired data to be
extracted before large raster datasets have completed display within
ER Mapper. In this case, no extra time is needed for the external link
to be performed.
The current computing configuration at CNGC will soon be greatly
improved with the addition of a faster workstation to solely run
Ingres. The main image processing machine will then not have to
perform the database extraction, increasing the efficiency of the data
integration process enormously.
Dynamic
operation within
ER Mapper
In order to reduce confusion and the effort needed to extract data
from the geological database, the dynamic linking function has been
incorporated into ER Mapper utilising familiar X-Window buttons
under the intuitive Add menu. Data integration is then as simple as
ON/OFF, the internal intricacies quite invisible to users with limited
computer literacy (save for operating ER Mapper).
Hardwired
For Hardwired links, the generated ER Mapper overlay is
TRUECOLOUR, meaning that colour information is determined by the
link, inside the PostScript plotting routines. This results in numerous
pre-set choices depending on the data displayed and the standard
colour scheme used.
Interactive
At present, the generated ER Mapper overlays for Interactive links
are displayed in MONOCOLOUR, meaning that all data points
satisfying the database search criteria are plotted on the screen in
the colour selected using the colour button. This link type will change
to TRUECOLOUR in the near future, when the PostScript has been
modified to become intelligent enough to recognise the data type
(e.g. geology or geochemical assay) it is plotting. Colour definitions
will then be stored in the database, and become internally generated
and invisible to the user.
After adding an Interactive link, the user must supply the ‘where’
clause of the SQL query (see What is SQL? above) before the data
extraction can commence. The table linking and variable string
definitions (these are standard for each table) are automatically
performed by the SQL-building C program. Clicking on the chooser
button activates the SQL query entry window.
For example, the SQL query:
drc.key_type like 'U%' and drc.key_field in ('RK', 'WR')
will extract all drill holes that have intersected either weathered
(WR) or fresh (RK) ultramafic.
The user may also enter the name of a file containing the desired
SQL ‘where’ clause (e.g. ‘@ultramafic.sql’). Alternatively, ‘@file’ may
be entered as the query, which in turn presents the user with an X-Window chooser listing all compatible SQL files.
The user may zoom and change the query as desired - the link script
is intelligent enough to detect changes in the active image area or
query, and extracts data only when necessary.
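The change-detection idea can be sketched as follows (compare the bounds34.txt test in the Appendix A script): the bounds of the previous extraction are kept in a small file, and a fresh database query is issued only when the new window is not contained within them. The file layout (MAX_N MIN_N MAX_E MIN_E) follows Appendix A; integer coordinates are assumed.

```shell
#!/bin/sh
# Decide whether a new Ingres extraction is needed for the current window.
# $1 = bounds file from the previous extraction, $2..$5 = TLY BRY TLX BRX
# of the new ER Mapper window. Prints "yes" or "no".
needs_refresh() {
  [ -f "$1" ] || { echo yes; return; }   # no cache yet: must extract
  read MAX_N MIN_N MAX_E MIN_E < "$1"
  if [ "$2" -le "$MAX_N" ] && [ "$3" -ge "$MIN_N" ] &&
     [ "$4" -ge "$MIN_E" ] && [ "$5" -le "$MAX_E" ]
  then
    echo no     # window inside cached bounds: reuse existing data file
  else
    echo yes    # window extends beyond cached bounds: re-query Ingres
  fi
}
```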
Example: Talbot
Island
Talbot Island is an active nickel exploration project situated 18 km
NNE of the Norseman town site. This site has been chosen to
demonstrate external database dynamic linking because various
different digital data types exist in this area. Among these are multi-element surface soil and drill core assays, detailed geological rock
descriptions, and physical drill hole characteristics.
The nickel potential of Talbot Island was first recognised by
McGoldrick (personal communication to S. Peters, 1990, 1991) and
then later by CNGC personnel (Offe, 1992) after reconnaissance rock
chip sampling returned highly anomalous nickel assays within a
komatiite. Detailed soil sampling has been undertaken over
amenable portions of the island, and has defined several strong
anomalies (see Figure 2 below). The link to Ingres extracts all nickel
soil geochemical assays in the active window, and displays them
colour coded according to pre-defined threshold limits as outlined in
the legend. The green vector overlay contains major topographic
features, such as the island outline and the lake edge. The internal
line on the island is the dune (west) and subcrop (east) boundary.
Overlaid in a sand brown colour is the slightly out of date
causeway/access track system on the island.
Figure 2 - Map depicting Nickel Soil deposits
Also present in this image are dynamic lease boundaries. These
boundaries have been stored within Ingres as separate straight line
segments along with lease type (e.g. surveyed, reserve, Aboriginal
site) and drawing attributes (e.g. line style, thickness, RGB colour).
The tenement boundary link extracts all line segments falling wholly
or partially within the active image window, creates the PostScript
code to display them on the screen from the relevant attributes, and
then reports all leases found to the user via an X-window interface,
which may be dismissed or printed. As long as the surveyors keep
the Ingres database updated with the current leasing information,
the need for messy and time-consuming DXF file imports is
eliminated.
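A minimal sketch of the boundary-drawing step: each stored line segment is turned into PostScript by an awk filter. The record layout used here (x1,y1,x2,y2,r,g,b,linewidth) is hypothetical, and the real link also reports the lease names it finds.

```shell
#!/bin/sh
# Convert comma-separated segment records into PostScript drawing
# commands, one colour/width setting and one stroked line per record.
# The field layout is an assumption, not the actual CNGC schema.
segments_to_ps() {
  awk 'BEGIN { FS = "," }
  {
    printf "%s %s %s setrgbcolor %s setlinewidth\n", $5, $6, $7, $8
    printf "newpath %s %s moveto %s %s lineto stroke\n", $1, $2, $3, $4
  }'
}
# Example: one surveyed boundary segment, drawn red with a 2-unit line
echo "391200,6454000,391900,6454000,1,0,0,2" | segments_to_ps
```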
Existing drilling information is an essential layer of data to be
integrated with geophysical/geological interpretation and when
further drilling is to be planned. Numerous hardwired links to the
drilling database have been devised, one of which is demonstrated
here. Collar and drill hole trace data is useful to analyse portions of
ground already tested, and also to indicate areas of untested
potential within the project area (see Figure 3). The surface
projection of each drill hole is calculated from depth, dip and azimuth
data extracted by the link, and then displayed as a straight line
approximation. The hole type (e.g. aircore, percussion, diamond) is
tested by the link script, allowing hole types to be distinguished via
different colours on the screen. A link that displays colour-coded
collars and hole identifiers is also available.
Figure 3 - Map of Drill Hole Traces
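The straight-line surface projection mentioned above amounts to resolving the hole's horizontal run, depth multiplied by cos(dip), along its azimuth. A sketch, assuming dip is given in degrees from horizontal (negative down) and azimuth clockwise from grid north; these conventions are assumptions about the CNGC database.

```shell
#!/bin/sh
# Compute the surface-projected end of a straight-line drill hole trace.
# $1=collar easting  $2=collar northing  $3=depth  $4=dip(deg)  $5=azimuth(deg)
trace_end() {
  awk -v e="$1" -v n="$2" -v d="$3" -v dip="$4" -v az="$5" 'BEGIN {
    pi  = 3.14159265358979
    run = d * cos(dip * pi / 180)          # horizontal component of hole
    printf "%.1f %.1f\n",
           e + run * sin(az * pi / 180),   # easting offset along azimuth
           n + run * cos(az * pi / 180)    # northing offset along azimuth
  }'
}
trace_end 391900 6452740 100 -60 90   # 100 m hole, -60 dip, drilled due east
```

A vertical hole (dip -90) projects to its collar, which is why only the deeper, shallowly dipping holes need the correction discussed in the text.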
A broad scale geological interpretation can be generated and
integrated with raster data by examining the geology logged at the
end of the drill hole. A hardwired link to the rock code database
extracts the key geological rock code for the end of hole (EOH) one
metre sample from each drill hole within the active image area (see
Appendix B). The link script tests the rock code (see Appendix A) to
determine the standard WMC colour, and then displays a relevantly
coloured cross at the collar position (see Figure 4). This display is
obviously an incorrect projection if the hole is deep and shallowly to moderately dipping (e.g. a diamond hole). Correction to calculate
the actual XY positional coordinate will take place in the near future.
In the meantime, the EOH geology display should be viewed
alongside the drill trace display for the same area, so the user is
aware of when the dip factor is important, thus allowing the
necessary mental correction to be made.
Figure 4 - Map depicting End Of Hole samples
When planning aircore/percussion drilling programs, it is desirable to
be able to predict the expected hole depths in order to estimate the
number of holes that can be completed per shift. To facilitate this, a
hardwired link has been created to a database table that stores the
depth to the Tertiary/Archaean boundary for all existing surface drill
holes. This data may also aid geophysical data interpretation. For
example, gravity data is influenced by the depth of semi-consolidated Tertiary sediment, so if the geometry of this veneer of
low density rock is known, some effort may be afforded to correcting
for its effect. Magnetic depth interpretation of Archaean sources can
be strengthened by aircore holes indicating large depths of Tertiary
over the anomaly with respect to surrounding areas.
As is expected at Talbot Island, the depth of Tertiary is small because
the Archaean rocks crop out (or subcrop) in the immediate island
vicinity (see Figure 5). The depth scale shown is probably too coarse
to be relevant in the island vicinity, but the aircore holes in the SE of
the image have penetrated Tertiary with depths between 40 and
60m (see yellow crosses in Figure 5).
Figure 5 - Map depicting Depth of Tertiary Sediment
The interactive SQL links are the newest and most versatile yet
developed at CNGC. As described above, they require the user to
enter the SQL ‘where’ clause which restricts the data extracted to fall
within certain criteria. At Talbot Island, it is important to know which
holes have intersected fresh ultramafic. The appropriate ‘where’
clause to obtain this data would be:
drc.key_type like 'U%' and drc.key_field = 'RK'
where key_type matches all rock codes beginning with U (e.g. USOL,
UACT, UTSM) and key_field must be equal to RK (corresponds to
fresh rock; conversely, WR is the weathered rock code). This query
will match a fresh ultramafic intersection anywhere in the hole
because no constraints have been put on the depth at which the
intersection may occur. The output is a display of the collars of all
holes matching the where clause criteria (see Figure 6). It is evident
that there are quite a few. Of interest is the discrete magnetic
anomaly immediately to the west of the island shore. A fence of
vertical aircore holes was drilled to intersect the anomaly source (see
Figure 3). Of these nine holes, five encountered ultramafic in the last
metre of the hole (see Figure 4). Only one intersected fresh
ultramafic and is situated on the western-most end of the traverse
(see Figure 6). This data suggests that the serpentinite source of the
magnetic anomaly has not been thoroughly tested. Consequently,
surface diamond hole ETS 5 (see western-most blue trace on
traverse in Figure 3) has recently been completed to explain the
magnetic anomaly source.
Figure 6 - Interactive SQL-1
By incorporating drill assay information into the display, a subset of
holes shown in Figure 6 define the nickel anomalous regions,
allowing correlation with geophysical and other datasets. To
demonstrate this, all drill holes having maximum nickel assays
greater than 1% (10000 ppm) occurring in fresh ultramafic have
been extracted from the database. The ‘where’ clause query used
was:
drc.key_type like 'U%' and drc.key_field = 'RK'
and da.element = 'NI_PPM' and max(value) >= 10000
Five drill holes satisfied the above criteria within the coordinate
extents of the image window (Figure 7).
Figure 7 - Interactive SQL-2
When viewed in conjunction with Figures 2 and 3, it can be seen that
the diamond hole has intersected a mineralised zone not seen in the
rock chip/soil sampling, and the four percussion holes have tested
the best part of the NE-trending soil anomaly at depth.
Future
implications
There is no limit to the type of data that may be stored in modern
relational databases. Already at CNGC, there is a move to enter all
new exploration target information into the database, whilst also
entering the backlog of historical targets when time permits. With
this information readily on hand and easily graphically displayable,
problems that have previously occurred, such as re-targeting areas
already targeted in the past, will cease to exist. This facilitates a
far more efficient use of time.
Time consuming tasks such as Form 5 reporting will require less
effort in the future if the data can easily be obtained from Ingres and
verified graphically with ER Mapper.
Other data types planned to be stored within Ingres at CNGC
include:
• down hole geophysical data (e.g. DHEM, petrophysical logs)
• existing geophysical survey boundaries and survey parameters
• digitally scanned bitmaps (e.g. thin section photomicrographs of EOH rock units)
• existing and planned mine designs
• topographic elements (e.g. roads, creeks, clearings)
• accounting information (e.g. current and required lease expenditure)
Ways of increasing the efficiency of the database links within
ER Mapper are constantly being investigated by ER Mapper technical
support and CNGC computing/geophysical staff. If demand is great
enough (the potential is certainly enormous), Earth Resource
Mapping may even consider designing/writing a multi-functional
direct link to Ingres (bypassing the ASCII file step) similar to the one
they have just written for ARC/INFO.
Conclusion
This document has aimed to outline the method of digital data
integration developed at CNGC, which utilises links between an
Ingres relational database and ER Mapper image processing
software. The link process requires two steps:
1. Data extraction from Ingres facilitated by embedded SQL commands
within C code.
2. Display within ER Mapper involving both Bourne Shell and PostScript
programming.
Two classes of dynamic link (Hardwired and Interactive) have been
devised to date, with the total currently available exceeding 30
different options. With Hardwired links, no user input is required.
Activating an Interactive link allows the user to control the data
extracted from Ingres by entering SQL commands.
The ability of data integration to help manage time more efficiently,
and aid in more thorough and realistic interpretations is exciting, and
obviously a major factor in the future direction of mineral
exploration. A small demonstration of the power of this approach has
been outlined with several different data types existing within the
Talbot Island Nickel Project. The example used here is commonplace
and used every day at CNGC.
Future development of dynamic database linking to image
processing software will take place as the need grows. This will
involve CNGC computing/geophysical staff, MIS, and possibly Earth
Resource Mapping programming personnel.
References
Offe, L.A., (1992) - Talbot Island Ultramafic, Norseman, Western
Australia; Internal CNGC Technical Report, CNG/T/004
Appendix A
EOH Geology
HARDWIRED Dynamic
Link for ER Mapper
(Bourne Shell)
#!/bin/sh
#
#-----------------------------------------------------------------------------
# ermps_34_ingres          COLOUR-CODED EOH GEOLOGY
#
# CREATED:  S.Crosato (June '93)
# MODIFIED: S.Crosato (April '94) added legend
#
# EOH geology information is extracted from Ingres (key rock code only)
# and then displayed as colour-coded points at the collar location on the
# screen using standard WMC geological legend colours where possible.
#-----------------------------------------------------------------------------
#
# This script runs an Ingres query embedded in a C program
# (written by B. Plath) that enables image extents from ER Mapper
# to be passed for data extraction. Format is:
#
# ingres_2_xyz.exe 1 minEAST maxEAST minNORTH maxNORTH outputfilename
#
# script code = 1 max gold in hole
#             = 2 max gold in Tertiary
#             = 3 max gold in Archaean
#             = 4 depth to top Archaean
#             . . .
#             . . . etc.
#
# $TLX,$TLY,$BRX,$BRY are variables containing top left x,y
# and bottom right x,y coordinates of the active ER Mapper image.
#
# Output is the ASCII file erm_ingres_interface.txt which is renamed to
# something more meaningful before being imported into ER Mapper and
# displayed using PostScript.
#
# Any errors trying to run Ingres are redirected to STDOUT
#
# For the specified table, expects 3 columns in the format:
#
# EASTING NORTHING ROCK_CODE
#
# Arguments (see ER Mapper manuals for full details):
# 1   2     3      4     5     6        7   8   9   10  11      12      13   14   15
# cmd datum proj'n coord units rotation tlx tly brx bry canvasw canvash dpiX dpiY file
#
#
if [ $# -ne 15 ]
then
echo "$0: Expected 15 Dynamic Link arguments." 1>&2
exit 1
fi
cat <<EOF
%!PS-Adobe-1.0
% Table Circle dynamic link for table file $FILESPEC
% Args: $*
%
EOF
# Grab the arguments and set them up in meaningful variable names.
NAME=$0         # name of program, might change with the link
COMMAND=$1      # should be "postscript"
DATUM=$2        # geodetic datum name
PROJECTION=$3   # projection name
COORDTYPE=$4    # type of coordinates (EN, LL, or RAW)
UNITS=$5        # Units (eg: METERS)
ROTATION=$6     # rotation
TLX=$7          # top left x coordinate
TLY=$8          # top left y coordinate
BRX=$9          # bottom right x coordinate
shift 9
BRY=$1          # bottom right y coordinate
CANVASWIDTH=$2  # window width (0 on init)
CANVASHEIGHT=$3 # window height (0 on init)
DPIX=$4         # x dots per inch (0 on init)
DPIY=$5         # y dots per inch (0 on init)
FILESPEC=$6     # file spec or choice string
FILESPEC=/usr5/ermapper/dataset/Data_Tables/EOH_geol_key.dat
FILESPEC=`echo $FILESPEC | tr -d \"`  # bypass quotes
TLY=`echo $TLY | cut -c1-10`  # take first 10 chars to make compatible
BRY=`echo $BRY | cut -c1-10`  # with defined char strings in SQL query.
TLX=`echo $TLX | cut -c1-10`
BRX=`echo $BRX | cut -c1-10`
#......Ingres link
cd /usr5/ermapper/dataset/Data_Tables
NEWFILE=0  # set newfile to true (default)
if [ -f bounds34.txt ] ; then
    read MAX_N1 MIN_N1 MAX_E1 MIN_E1 < bounds34.txt
    if [ "$TLY" -le "$MAX_N1" ] && [ "$BRY" -ge "$MIN_N1" ] &&
       [ "$TLX" -ge "$MIN_E1" ] && [ "$BRX" -le "$MAX_E1" ]
    then
        NEWFILE=1
        echo "ERM/INGRES link: Using existing data file EOH_geol_key.dat" > /dev/console
    else
        echo "ERM/INGRES link: New bounds not contained in existing data file EOH_geol_key.dat" > /dev/console
    fi
fi
if [ $NEWFILE -eq 0 ] ; then
    echo "ERM/INGRES link: Extracting data from Ingres ..... (EOH_geol_key.dat)" > /dev/console
    echo "Using bounds: $TLY $BRY $TLX $BRX" > /dev/console
    $ING_EXES/ingres_2_xyz.exe 34 $TLX $BRX $BRY $TLY EOH_geol_key.dat > /dev/console
    echo "EOH_geol_key.dat `wc -l EOH_geol_key.dat | cut -c1-8` data points found." > /dev/console
fi
if [ "$TLY" -gt "$MAX_N1" ] || [ "$BRY" -lt "$MIN_N1" ] ||
   [ "$TLX" -lt "$MIN_E1" ] || [ "$BRX" -gt "$MAX_E1" ]
then
    echo "$TLY $BRY $BRX $TLX" > bounds34.txt  # write bounds to file if needed
fi
#......Truecolor PostScript plotting of data using awk
# [05] handle GEODETIC correction
if [ "$PROJECTION" = "GEODETIC" ]
then
cat <<EOF
/x_map {
    dup ${BRX} gt           % is X > right hand side?
    { 360.0 sub }           % x was off the right hand side
    { dup ${TLX} lt         % is X < left hand side?
      { 360.0 add } if }    % x was off the left hand side
    ifelse
    ${TLX} sub
} def
EOF
else
cat <<EOF
/x_map { ${TLX} sub } def
EOF
fi
cat <<EOF
/x_scale { ${CANVASWIDTH} ${BRX} ${TLX} sub div } def
/y_scale { ${CANVASHEIGHT} ${TLY} ${BRY} sub div } def
/size { x_scale 15 mul } def
/-size { x_scale -15 mul } def
/draw_cross { gsave
    ${BRY} sub              % get Northing (Y)
    y_scale mul
    exch                    % get Easting (X)
    x_map
    x_scale mul
    exch
    translate
    newpath
    2 from72pt setlinewidth
    -size 0 moveto          % draw cross 30m diameter
    size 0 lineto
    0 -size moveto
    0 size lineto
    stroke
    grestore
} def
/Helvetica findfont ${CANVASWIDTH} 60 div scalefont setfont  % define text size
/draw_legend_box { newpath
    0.999 0.999 0.999 setrgbcolor                     % set background color to white
    0.86 ${CANVASWIDTH} mul 0 moveto                  % draw background box
    0.86 ${CANVASWIDTH} mul 0.20 ${CANVASWIDTH} mul lineto
    ${CANVASWIDTH} 0.20 ${CANVASWIDTH} mul lineto
    ${CANVASWIDTH} 0 lineto closepath
} def
/draw_legend {
    0 1 1 setrgbcolor                                 % set color to cyan
    0.88 ${CANVASWIDTH} mul 0.18 ${CANVASWIDTH} mul moveto
    (EOH Geology) show
    0 0 1 setrgbcolor                                 % set color to blue
    0.87 ${CANVASWIDTH} mul 0.155 ${CANVASWIDTH} mul moveto
    (+ Sediment) show
    0 1 0 setrgbcolor                                 % set color to green
    0.87 ${CANVASWIDTH} mul 0.14 ${CANVASWIDTH} mul moveto
    (+ Dolerite) show
    1 1 0 setrgbcolor                                 % set color to yellow
    0.87 ${CANVASWIDTH} mul 0.125 ${CANVASWIDTH} mul moveto
    (+ Basalt) show
    1 0.647 0 setrgbcolor                             % set color to orange
    0.87 ${CANVASWIDTH} mul 0.11 ${CANVASWIDTH} mul moveto
    (+ Felsics) show
    1 0 0 setrgbcolor                                 % set color to red
    0.87 ${CANVASWIDTH} mul 0.095 ${CANVASWIDTH} mul moveto
    (+ Granite) show
    1 0.745 0.890 setrgbcolor                         % set color to pink
    0.87 ${CANVASWIDTH} mul 0.08 ${CANVASWIDTH} mul moveto
    (+ Gneiss) show
    0.765 0.125 0.941 setrgbcolor                     % set color to purple
    0.87 ${CANVASWIDTH} mul 0.065 ${CANVASWIDTH} mul moveto
    (+ Chert etc) show
    0.133 0.545 0.133 setrgbcolor                     % set color to dark green
    0.87 ${CANVASWIDTH} mul 0.05 ${CANVASWIDTH} mul moveto
    (+ Mafic/Gabbro) show
    0.675 0.647 0.125 setrgbcolor                     % set color to brown
    0.87 ${CANVASWIDTH} mul 0.035 ${CANVASWIDTH} mul moveto
    (+ Ultramafic) show
    0 0 0 setrgbcolor                                 % set color to black
    0.87 ${CANVASWIDTH} mul 0.02 ${CANVASWIDTH} mul moveto
    (+ Unclassified) show
} def
/do_blue_cross {
    0 0 1 setrgbcolor             % blue
    draw_cross
} def
/do_green_cross {
    0 1 0 setrgbcolor             % green
    draw_cross
} def
/do_yellow_cross {
    1 1 0 setrgbcolor             % yellow
    draw_cross
} def
/do_orange_cross {
    1 0.647 0 setrgbcolor         % orange
    draw_cross
} def
/do_red_cross {
    1 0 0 setrgbcolor             % red
    draw_cross
} def
/do_brown_cross {
    0.675 0.647 0.125 setrgbcolor % brown
    draw_cross
} def
/do_pink_cross {
    1 0.745 0.890 setrgbcolor     % pink
    draw_cross
} def
/do_black_cross {
    0 0 0 setrgbcolor             % black
    draw_cross
} def
/do_purple_cross {
    0.765 0.125 0.941 setrgbcolor % purple
    draw_cross
} def
/do_dgreen_cross {
    0.133 0.545 0.133 setrgbcolor % dark green
    draw_cross
} def
%
EOF
awk ' BEGIN{ FS="," }
{ if( $3~/U/ ) printf("%s %s do_brown_cross\n",$1,$2) }' $FILESPEC
awk ' BEGIN{ FS="," }
{ if( $3~/GNI/ ) printf("%s %s do_pink_cross\n",$1,$2) }' $FILESPEC
awk ' BEGIN{ FS="," }
{ if( $3~/FV/ ) printf("%s %s do_orange_cross\n",$1,$2) }' $FILESPEC
awk ' BEGIN{ FS="," }
{ if( $3~/MGB|MGN|M/ ) printf("%s %s do_dgreen_cross\n",$1,$2) }' $FILESPEC
awk ' BEGIN{ FS="," }
{ if( $3~/MB|MTB/ ) printf("%s %s do_yellow_cross\n",$1,$2) }' $FILESPEC
awk ' BEGIN{ FS="," }
{ if( $3~/FD|MD|DI/ ) printf("%s %s do_green_cross\n",$1,$2) }' $FILESPEC
awk ' BEGIN{ FS="," }
{ if( $3~/SV|SIF|SCT/ ) printf("%s %s do_purple_cross\n",$1,$2) }' $FILESPEC
awk ' BEGIN{ FS="," }
{ if( $3~/SS|VS|SND|MS/ ) printf("%s %s do_blue_cross\n",$1,$2) }' $FILESPEC
awk ' BEGIN{ FS="," }
{ if( $3~/GT|GO|GNGD|GR|GNGT|PG|SHQ|Q/ ) printf("%s %s do_red_cross\n",$1,$2) }' $FILESPEC
awk ' BEGIN{ FS="," }
{ if( $3!~/GT|GO|GNGD|GR|GNGT|PG|SHQ|Q|SS|SV|VS|SIF|SCT|SND|FD|MGB|MGN|MB|MTB|MS|M|FV|U|MD|DI|GNI/ ) printf("%s %s do_black_cross\n",$1,$2) }' $FILESPEC
if [ $? -ne 0 ]
then
    exit 1
fi
cat <<EOF
% Draw legend in bottom right corner of image window
draw_legend_box fill
draw_legend
% end of postscript
EOF
Appendix B
EOH Geology Hardwired
Link Database Query
Output (Input to
PostScript Plotting
Script)
391200.000,6454000.000,MB,ET 4020R
391721.000,6452501.800,M,ET 1540R
391828.000,6452501.300,M,ET 1530R
391860.000,6452740.000,M,ET 3220R
391880.000,6452740.000,USOL,ET 3210R
391895.140,6452537.540,UTSM,ETS 3
391900.000,6452740.000,USOL,ET 3200R
391920.000,6452640.000,MGB,ET 4290R
391920.000,6452740.000,USOL,ET 3190R
391920.000,6452800.000,M,ET 3280R
391933.000,6452501.300,M,ET 1520R
391940.000,6452640.000,M,ET 4300R
391940.000,6452740.000,USOL,ET 4030R
391940.000,6452800.000,M,ET 3270R
391960.000,6452400.000,M,ET 3540R
391960.000,6452640.000,UTCB,ET 4310R
391960.000,6452740.000,USOL,ET 4040R
391960.000,6452800.000,USOL,ET 3260R
391969.090,6452501.900,M,ET 1610R
391980.000,6452400.000,U,ET 3350R
Fast Fourier Transforms
Author
Maurice Craig is from CSIRO Division of Exploration and Mining,
Leeuwin Centre for Earth Sensing Technologies, 65 Brockway Road,
Floreat Park, Western Australia 6014.
Introduction
The original implementation of frequency-domain-processing
capabilities was concerned with constructing a bridge to ER Mapper’s
existing ‘formula processing’ facility, thereby extending its
usefulness to geophysicists needing to manipulate potential-field
data in raster format. However, there is also provision for simple
low-pass, high-pass and notch Fourier filtering of arbitrary images.
The chapter is organised as follows:
1. Brief explanation of Fourier-transform images;
2. Illustration of notch-filter use to remove noise in aircraft scanner imagery;
3. Low-pass and high-pass Fourier filtering;
4. Account of capabilities for processing potential-field data (magnetics or gravity);
5. Illustration based on aeromagnetic data from Cape York Peninsula, Queensland;
6. Appendix - (A) Edge effects (B) Display of transforms.
No prior knowledge of Fourier methods is assumed, since even
experienced ER Mapper users may be unfamiliar with this topic. But
rapid progress is assured, and there are few techniques so well worth
the effort expended in acquiring them. Those already expert in
frequency-domain processing may find the first half of section 4
sufficient.
The Fourier
transform of an
image
Imagine a rectangular, backyard swimming-pool with a tiled,
horizontal bottom. The form of the fluid surface, at a particular
moment in time, can be described by specifying the water depth at
each point. To obtain a digital image representation, we have only to
assign a gray level to the mean instantaneous height of the water column above each tile.
But this is not the only description possible. For example, if the water
is sloshing from end to end, then it may be sufficient to indicate the
wavelength, trough-to-crest height and phase (crest location). When
the waveform is sinusoidal, these three numbers permit full
reconstruction of all the gray levels in the previous description.
Although not every configuration of the surface can so simply be
encapsulated, by an appropriate superposition of waves travelling in
different directions we can, in fact, match a ‘wave’ description to
every ‘height’ description.
The theory of Fourier transformation (named for Joseph Fourier, a
one-time associate of Napoleon) is concerned with making this
equivalence precise and analytical. Thus, for every digital image one
can compute its transform-image from which, by an almost identical
inversion process, the original image is recoverable. The paired
images are of the same dimensions and contain identical
information. Although very different from one another, both forms
are meaningful for those equipped to interpret them.
Notch filtering
Why bother with this complicated, alternative description by waves
travelling in all directions? There are several answers. One of the
most important of them may be illustrated by supposing that, as the
water in our pool heaves back and forth, a light cross-wind creates
surface ripples. This situation is closely analogous to the practical
case of aircraft scanner imagery spoiled by detector vibration. We
may wish to eliminate this ‘noise’ without corrupting the ‘signal’.
The problem is an awkward one inasmuch as the ripples are
‘everywhere in general, but nowhere in particular’. But such global
effects in the ‘space domain’ (original image) can mirror merely local
effects in the ‘frequency domain’ (Fourier image). In our case, the
transform image will be zero everywhere except for two widely
separated pairs of spikes - one pair to represent the gross waveform,
the other one for the ripples. We can take advantage of their
isolation to remove the latter spikes by setting the amplitudes at
these locations to zero. The result of retransformation will be a
ripple-free version of the original picture. By subtracting it from the
true original, we get an image of the ripples alone.
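The recipe just described (transform, zero the isolated spikes, retransform) can be illustrated with a toy one-dimensional example, an 8-point DFT written in awk purely for demonstration; ER Mapper's FFT of course operates on whole images. Here the 'signal' is one long wave and the 'noise' a smaller high-frequency ripple occupying frequency bins 3 and 5:

```shell
#!/bin/sh
# One-dimensional notch-filter demonstration: forward DFT, zero the
# ripple spikes, inverse DFT. Prints the 8 filtered samples, which
# should match the long wave cos(2*pi*n/8) alone.
notch_demo() {
  awk 'BEGIN {
    N = 8; pi = 3.14159265358979
    # signal = long wave, noise = smaller high-frequency ripple
    for (n = 0; n < N; n++)
      x[n] = cos(2 * pi * n / N) + 0.25 * cos(2 * pi * 3 * n / N)
    # forward DFT
    for (k = 0; k < N; k++)
      for (n = 0; n < N; n++) {
        re[k] += x[n] * cos(2 * pi * k * n / N)
        im[k] -= x[n] * sin(2 * pi * k * n / N)
      }
    # notch: zero the ripple spikes (bin 3 and its mirror, bin 5)
    re[3] = im[3] = 0; re[5] = im[5] = 0
    # inverse DFT; what remains is the long wave alone
    for (n = 0; n < N; n++) {
      y = 0
      for (k = 0; k < N; k++)
        y += re[k] * cos(2 * pi * k * n / N) - im[k] * sin(2 * pi * k * n / N)
      printf "%.4f\n", y / N
    }
  }'
}
notch_demo
```

Subtracting the filtered samples from the originals would likewise give the ripple alone, as described for the image case above.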
The ‘notch-filtering’ process just described can not be recommended
for noise removal generally. One reason is the overhead involved in
forward and reverse transformation; another, deficiencies in
localisation. For example, if the ripples are square waves rather than
sinusoids (as in Landsat Thematic Mapper imagery with 16-line
horizontal striping effects), then the representative spikes will be
periodically repeated. Removing all their repetitions may then
require more effort than alternative, space-domain processing
strategies. However, with aircraft scanner data, where the noise may
be virtually intractable by other methods, the technique is
invaluable.
The ideas outlined above are illustrated in Figures 1-5. They relate
to a 480x480-sample subscene (not included as a demonstration
data-set) from roll-corrected, band-32 GERIS data collected over
Coppin Gap, W.A. Space-domain processing to suppress aperiodic,
horizontal line-striping (Figure 1) leaves an image free from obvious
noise, except for the cross-track ‘shading’ visible as limb-darkening.
Figure 1
Figures 2 and 3 illustrate the corresponding Fourier transform. (More
precisely, they are images of the ‘power spectrum’, showing
amplitudes and directions but not phases of the waves; see Appendix
(B).) Large values concentrate about the centre in Figure 2,
producing an appearance reminiscent of globular star clusters. This
is a typical pattern; spatial variation in most naturally produced
images is gradual, so the long-wavelength (low-frequency)
components predominate. The dark rectangles (notches) visible in
Figure 2 mask wave components with amplitudes anomalously large
for the high-frequency regions in which they occur. Fainter vertical
streaks betray further noise artifacts that have not been masked.
Figure 3, an enlargement of just the upper part of the image, shows
its appearance before masking. The dark, central, vertical line
attests to the success of the space-domain process for removing
strictly horizontal stripes, but the lighter background betrays its
inability to cope with sub-horizontal striping, even at very low
angles. Figure 4 depicts the space-domain noise-pattern isolated by
the larger paired rectangles in Figure 2.
The two smaller notches cover paired, star-like noise-spikes, one of
them clearly apparent just left of centre, about one-third from the
bottom in Figure 3. It proved difficult to obtain a hardcopy illustration
of the corresponding interference pattern, consisting of wave-fronts
inclined at about 11.23° to the horizontal, with approximately 110 crests
intersecting the left margin of the image. Readers will be familiar
with the curious effect, seen in wild-west movies, of wagon wheels
that appear to rotate backwards. Its cause is the too-infrequent
sampling of a periodic motion—the rotation of the spokes—by the
camera shutter. A two-dimensional variant of this ‘aliasing’
phenomenon, wherein subsampling by the printing device produced
a full 90° rotation of the wavefronts that we sought to illustrate,
unexpectedly limited us to the mere verbal description given above.
Figure 4 is probably likewise affected, but at least the textural
appearance is similar to the unaliased original.
Low-pass and High-pass Fourier filtering
Granted the existence of the transform-image, and the recognition
that each of its pixels corresponds to an undulating component of the
input image, a wide range of possibilities lies open for effecting
modifications to improve data-interpretation. For example, with an
image much affected by random noise, picture quality may be
improved by stripping out all the high-frequency components, which
is to say resetting the Fourier image to zero in regions remote from
the central, high-amplitude region.
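The operation just described is easy to sketch with NumPy's FFT routines. This is an illustration only, not ER Mapper's internal code; the function name and the circular cut-off are our own choices:

```python
import numpy as np

def fourier_lowpass(image, cutoff):
    """Zero every Fourier component farther than `cutoff` pixels from the
    zero-frequency point of the centred transform, then invert."""
    F = np.fft.fftshift(np.fft.fft2(image))
    rows, cols = image.shape
    yy, xx = np.meshgrid(np.arange(rows) - rows // 2,
                         np.arange(cols) - cols // 2, indexing="ij")
    F[np.hypot(yy, xx) > cutoff] = 0.0     # annul the high frequencies
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

image = np.random.default_rng(0).random((64, 64))
low = fourier_lowpass(image, cutoff=8)
high = image - low                         # the complementary high-pass image
```

The complementary high-pass image is obtained simply as the difference between the input and its low-pass version.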
Again, with better-quality data, we may wish to enhance edges (such
as road boundaries in remote sensing data) by removing the more
slowly varying components of the scenery. That end could be
accomplished by annulling low-frequency terms within the Fourier
image. Alternatively, the difference between input and low-pass
images gives a high-pass-filtered image; one may also enhance
edges without loss of geographic context by mixing the input data
with a high-pass version of itself, in suitable proportions. There are
still-more-creative possibilities, such as enhancing or suppressing
trends with a particular range of orientations (strike filtering), but
these capabilities have not been incorporated in the present version.
ER Mapper does, however, accommodate the low-pass and high-pass filtering operations needed for the noise suppression and edge
enhancement just described. Low-pass filtering is done by
multiplying the Fourier image by zero at high frequencies as just
described. Conversely, high-pass filtering multiplies low frequencies
by zero. But a further refinement is necessary: experience shows
that if the demarcation between high and low frequencies is a sharp
one, such as a circle or rectangle centred on the zero-frequency point
in the Fourier image, then peculiar artifacts are observed following
retransformation to the space-domain.
This phenomenon, usually called ‘ringing’, is further considered in
Appendix (A). Here it is sufficient to note how it may be counteracted
by allowing the multiplier to vary smoothly over a range of
intermediate frequencies between the extreme values zero and
unity. In Release 5.0, the user specifies two ellipses centred at the
zero-frequency point, their principal axes proportional and aligned
parallel to the sides of the image (Figure 12). The annular region
between the two curves defines the set of intermediate frequencies.
For low-pass filtering, the filter (i.e., the multiplier) is unity within
the inner curve, zero outside the outer curve. Between the ellipses,
its value drops in the same way as do the values of the function
(1+cos x)/2, for x between zero and π. This type of filter is therefore
known as a cosine roll-off filter. For high-pass filtering, the multiplier
is obtained by subtracting the low-pass filter from unity (no actual
subtraction of images is performed).
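The roll-off can be sketched numerically under the stated assumption that the two ellipses are proportional. This is illustrative code only; the function name and parametrisation are ours, not ER Mapper's:

```python
import numpy as np

def cosine_rolloff_lowpass(shape, semi_axes, ratio):
    """Multiplier for a centred Fourier image: unity inside the inner
    ellipse (semi-axes `semi_axes`, in pixels, parallel to the image
    sides), zero outside the outer ellipse (the inner scaled by `ratio`),
    falling as (1 + cos x)/2 for x from 0 to pi in between."""
    rows, cols = shape
    yy, xx = np.meshgrid(np.arange(rows) - rows // 2,
                         np.arange(cols) - cols // 2, indexing="ij")
    r = np.hypot(yy / semi_axes[0], xx / semi_axes[1])  # r = 1 on the inner curve
    x = np.pi * np.clip((r - 1.0) / (ratio - 1.0), 0.0, 1.0)
    return (1.0 + np.cos(x)) / 2.0

lowpass = cosine_rolloff_lowpass((65, 65), semi_axes=(10.0, 10.0), ratio=2.0)
highpass = 1.0 - lowpass   # subtraction from unity, as described above
```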
Finally, it seems desirable to reiterate the warning of the previous
section about overuse of the Fourier transform. Suppression of
‘pepper-and-salt’ and ‘spike’ noise is generally better done in the
space-domain, say with moving-average and median filters
respectively. The existence of alternative means to the same end is
never a sound reason for choosing the worse. A valid case for
Fourier-domain low-pass filtering to remove noise would be a noisy
or badly gridded aeromagnetic dataset that must for other reasons
(see next section) be Fourier-transformed anyway. No particular
advantage would be secured by noise-removal preprocessing in the
space domain.
Fourier processing of potential-field data
Digital imagery is an excellent medium for storage, display and
manipulation of spatially distributed readings such as many
geoscientific datasets. The present section concerns measurements
of the earth’s magnetic or gravitational field. Here, the use made of
the Fourier transform is quite different to the one outlined above,
since in principle there is no need for visual inspection of the
transformed image. In practice, one is well advised to check its
appearance, thereby exploiting the unique opportunity to spot
problems with the data that may have gone undetected.
A few remarks are first necessary regarding the assumed initial form
of the data:
• The readings may result from ground measurements of the field
made at regular spacings. At larger scales most datasets derive
from sampling along the track of a survey aircraft or seagoing
vessel. Thus, an interpolation step is needed to produce values
on a regular grid. Here we assume that this preprocessing has
already taken place. The data may therefore be supposed to
reside in an ER Mapper image file but there is no assumption that
they fill an entire rectangle. Locations at which data are unknown
are assumed to be filled with a designated “blank” or “null” value,
which the user must indicate to the program.
• The data are further assumed to be measured on one horizontal
plane. The theory to be invoked does not directly apply to
“draped surveys”, such as “aeromag” or “helimag” flown over
precipitous terrain, where the pilot aimed at constant ground-clearance rather than constant altitude. Further preprocessing
to achieve “upward continuation from an uneven track” would
then be indicated. Fortunately there are large regions of the
world where surveys can be flown horizontally.
• By “measurements of the earth’s field” are meant readings of
some directional component of the vector field. But this case
practically includes total-field magnetic anomaly data (almost
the only kind readily available) since it can be shown that, except
for very intense fields, the difference between the total field and
its regional value (the IGRF) is approximately equal to the
component of the residual field in the direction of the earth’s
field.
In summary, the data are assumed to be an ER Mapper image of
some component of a magnetic or gravitational field measured on a
horizontal plane, but possibly incomplete, incorporating a constant
blank value at non-data points. The software allows processing of
such pictures to images showing:
A. Vertical continuation;
B. First vertical derivative;
C. Second vertical derivative;
D. Derivative in a specified direction in space;
E. Field component in a specified direction in space;
F. Reduction to the pole (meaningful in magnetic case only).
This list contains two surprises for the uninitiated. Firstly, why should
one component of a vector field be sufficient information to compute
the component in some other specified direction or, equivalently, the
components in three mutually orthogonal directions (such as north,
east, and down)? Secondly, why (to take the aeromagnetic case)
should information gathered from one altitude be sufficient to
deduce the corresponding information for another altitude, at which
the aircraft did not fly? The answers, which depend on the physical
significance of the readings, are not merely heuristic but form the
basis for the algorithms employed.
Briefly, process E is possible because the force-field is conservative;
it is the gradient (in the sense of vector calculus) of a scalar
potential-function. Given one field-component, we can anti-differentiate it to get the potential, then re-differentiate in another
direction to get a new component of the field. These operations are
particularly simple in the Fourier-domain.
The scalar potential-function, hence also each of its directional
derivatives (the field components), satisfies the three-variable
Laplace differential equation, which serves to connect the values of
its second partial derivatives in three mutually orthogonal directions.
Knowing the function-value on the flight-plane (equivalent to
knowing one field component), we can numerically evaluate two of
the second derivatives - say those in the x and y directions, where
(x, y) are Cartesian co-ordinates in the plane. Laplace’s equation
then determines the second derivative in the z direction,
perpendicular to the plane.
The resulting knowledge about how the field varies vertically is
sufficient to permit process A, movement to another stratum, either
higher or lower. The correctness of this theory, and of the Fourier-domain processing capabilities, can be verified by the use of simple
models for which explicit analytical expressions of the field
components are known. Examples include uniformly magnetised
parallelepipeds, circular cylinders, and a few others. If images of the
field at two levels are computed directly from these formulae, we can
compare the values on the upper level with those obtained by
continuation from the lower level, and vice versa. With suitable
precautions the agreement is excellent. (Notwithstanding this
concurrence of theory and experiment, a lingering sense of the
miraculous may be found hard to dispel!)
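For readers wishing to verify this behaviour numerically, upward continuation is, in the Fourier-domain, a point-operation: each component is damped by exp(-2π|k|Δz), |k| being the spatial frequency in cycles per unit length. A minimal NumPy sketch follows (the function name is our own, and the edge-matching pre-treatment described in Appendix (A) is omitted):

```python
import numpy as np

def continue_upward(field, spacing, dz):
    """Upward-continue a gridded field component by dz (same length unit
    as `spacing`, the grid interval) by damping each Fourier component
    with exp(-2*pi*|k|*dz)."""
    F = np.fft.fft2(field)
    ky = np.fft.fftfreq(field.shape[0], d=spacing)   # cycles per unit length
    kx = np.fft.fftfreq(field.shape[1], d=spacing)
    k = np.hypot(*np.meshgrid(ky, kx, indexing="ij"))
    return np.real(np.fft.ifft2(F * np.exp(-2.0 * np.pi * k * dz)))

field = np.random.default_rng(2).random((64, 64))
up = continue_upward(field, spacing=100.0, dz=1120.0)  # e.g. from 80 m to 1200 m
```

The zero-frequency component is untouched (its multiplier is unity), so the regional mean survives continuation; every other component is attenuated, more strongly at higher frequencies.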
In keeping with the warnings of previous sections, it is fair to add
that process C is easier to effect by space-domain processing. With
potential-field data, vanishing of the space Laplacian means that the
plane Laplacian, for the horizontal plane, equals the negative of the
second vertical derivative. It is well known that the plane Laplacian
operator has a finite-difference approximation by a mask with (-4)
at the central position, unity in the locations of its four nearest
neighbours. Indeed, convolution with such a mask is often used for
non-directional edge-detection in image processing generally. This
remark also shows whence the second vertical derivative has the
ability for which it is chiefly valued, that of outlining magnetic
sources.
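That mask is simple enough to apply directly. The following sketch (our own helper, ignoring the one-pixel border) computes the second-vertical-derivative estimate as the negative of the finite-difference plane Laplacian:

```python
import numpy as np

def second_vertical_derivative(field):
    """Space-domain estimate of the second vertical derivative of a
    potential field, as minus the plane Laplacian.  The Laplacian mask
    has (-4) at the centre and unity at the four nearest neighbours;
    a one-pixel border is returned as zero."""
    out = np.zeros_like(field, dtype=float)
    out[1:-1, 1:-1] = -(field[1:-1, 2:] + field[1:-1, :-2] +
                        field[2:, 1:-1] + field[:-2, 1:-1] -
                        4.0 * field[1:-1, 1:-1])
    return out
```

A planar ramp has zero Laplacian, so its second vertical derivative vanishes; a quadratic trend yields a constant, which is the behaviour that makes the operator useful for outlining source edges.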
The other processes A-F can likewise be accomplished by suitable
convolutions in the space-domain. But here the necessary masks are
much less compact. The advantage of Fourier-domain processing is
that point-operations (multiplications and divisions at each pixel,
without regard to the values at neighbouring pixels) take the place
of computationally expensive convolutions. Computational overhead
associated with the transform is offset by the consideration that,
once it is to-hand, a host of different results can all be read off
relatively cheaply.
Aeromagnetic data from Cape York Peninsula, Queensland
Figure 5. The examples/Shared_Data/Magnetics_Grid
demonstration dataset, comprising total magnetic intensity data
sampled at 80m elevation with a 200m line spacing, thence
interpolated at 100m spacing in both horizontal dimensions. The
image straddles the Great Dividing Range, rising just inland from
Princess Charlotte Bay on the north-east coast of Australia. The
region is bounded by longitudes 142°57’19.3” and 143°32’43.35”,
and between south latitudes 13°59’20.32” and 14°32’23.98”,
north being to the top of the image. For whole-scene processing
purposes, the magnetic declination and inclination were taken as
6.4° and -42.1°, respectively.
Figure 6. The power spectrum for this image. For uniformity, all
power spectra illustrated are shown with input limits of 0 to 0.01,
followed by a logarithmic stretch. The subsequent figures are as
follows.
Figure 7. Upward continuation to 1200m. This operation is commonly
used to isolate the contribution from magnetic sources deep below
the surface.
Figure 8. Field reduced to pole and downward continued to 40m.
Downward continuation helps counteract attenuation imposed by the
need to fly surveys at a safe ground-clearance.
Reduction-to-pole attempts improved spatial correlation between
magnetic anomalies and their sources, by giving the induced
magnetisation a vertical direction. (It has been criticised for its
distorting effect upon fields due to remanence not parallel to the
induced component.)
Figure 9. Power spectrum for horizontal derivative in the north-easterly direction.
Figure 10. North-east derivative. Note the NW-SE trending ‘fringing
reef’ through the centre of the image, faintly visible in Figure 5, but
strongly highlighted by its up-slope and down-slope gradients in
Figure 10. Derivatives are available in arbitrary directions, not
necessarily horizontal, e.g. parallel to the terrestrial field.
Figure 11. Power spectrum for second vertical derivative. The
horizontal stripes through the centre are a noise artifact, but are not
easy to separate from signal.
Figure 12. Appearance of Figure 11 following application of elliptical
low-pass filter. (The transform being a poor candidate for such
filtering, the improvement possible in Figure 13 is negligible.)
Figure 13. Second-vertical-derivative image. As noted above, this
enhancement is commonly used for accentuating the edges of
causative bodies. However, it can also emphasise faults of gridding
and levelling, leading to ‘edginess’ (lack of spatial cohesion) and
artifacts that may have dubious basis in reality.
These illustrations far from exhaust the capabilities, but should
provide sufficient points of comparison for users to confirm the
results of their own processing.
Appendix
(A) Edge effects
Sensible use of the Fourier transform requires a heightened
awareness of sharp edges within images. The first purpose of this
appendix is to alert users to one or two situations involving such
discontinuities in the data.
The high-frequency content of imagery was attributed above to the
fine detail within the picture, but there may be other causes. If a
remotely sensed image is Fourier transformed after first being copied
into a slightly larger file where the unfilled region is preset to some
value such as zero, then the high-frequency region of the transform-image will relate to more than just the roads and rivers and
shorelines. Fourier-image pixels of large magnitude (at low as well
as high frequencies, but most noticeable at high frequencies, where
they would not otherwise occur) will be needed to represent the
discontinuous steps, where the original data drop abruptly to zero.
Of course, we do not usually thus embed space-domain imagery
within a field of zeros. However, the low-pass filtering as first
described in section 3, where parts of the transform-image are
annulled, has precisely the effect of creating a sharp step to zero.
Since the inverse transformation process is the same in its nature as
the forward procedure (with only the technical difference of a certain
algebraic sign-change), we should expect trouble. This is the reason
for implementing a cosine roll-off, instead of a simple ‘pill-box’ filter,
in section 3 Low Pass and High Pass Fourier Filtering above.
The second instance of image discontinuity, though not its
unpleasant effects, is well hidden. It was asserted earlier that the
Fourier transform has the same dimensions as the image from which
it derived. Users who inspect their transforms will find, on the
contrary, that they are generally some 10% bigger in each
dimension. This discrepancy results from a deliberate, pre-transformation enlargement of the data, which are copied to a bigger
file as just envisaged. However, instead of a filling by zero values,
the border region receives synthetic data extrapolated from the
actual values. This extrapolation is done in such a way that, in a tiling
of the plane by translated copies of the enlarged image, it would be
difficult to locate the boundary between one tile and its neighbours.
Why is such a matching of data at the top and bottom, and at the left
and right edges, considered necessary? Recall that the Fourier
representation of an image explains the image in terms of waves.
But waves naturally continue periodically forever. Consequently, the
Fourier transform actually represents, not just the given image, but
also the unbounded image obtained by its endless repetition as a
tiling of the entire plane. Now, this unbounded image will contain
discontinuities whenever the top and bottom, or left and right, fail to
match one another smoothly in the original image. And in order to
model these discontinuities, high-frequency terms of large amplitude
will be needed.
Thus, even though an image may appear smooth, for purposes of
Fourier transformation it counts as including jump-discontinuities if
(as in nearly every practical case) data at opposite edges fail to
match. This is of small concern with upward continuation and similar
low-pass filtering operations. With high-pass filters like those for
downward continuation or derivatives, ignoring the effects of ‘edge
discontinuity’ is catastrophic. For this amplification of high-frequency components is prescribed by a theory that assumes them
to represent the physical structure of the field; there is no allowance
for artifacts due to the approximation of the continuous Fourier
transform of theory (Bracewell, 1965; Gunn, 1975) by the discrete
transform used in the programs. When an image with edge-mismatch is high-pass filtered in the Fourier-domain then
retransformed by inversion, the result is an image with severe
‘ringing’ - undulating patterns parallel to and strongest near the
edges, but often propagating so far into the centre as to make the
entire image useless for interpretation. Our backyard swimming-pool
looks then as if an earthquake had just hit the suburb!
With data that include blank values, the enlargement and
extrapolation process also interpolates. Particular circumstances can
render image expansion undesirable. For example, a noise pattern
whose wavelength is commensurate with the sampling interval could
be isolated more precisely without it. Provision has therefore been
made to enforce size-retention, if required. (With this option, edge-mismatch will produce a well-marked St. George cross passing
through the centre of the power spectrum.) But a null value must
always be nominated and, if the program detects an input pixel with
that value, it will ignore the override instruction. Expansion will also
occur contrary to user wishes if the original image-dimensions are
not sufficiently highly composite numbers to suit the needs of fast
transformation. More precise information on this point will be found
in the Reference Manual.
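The St. George cross just mentioned is easily reproduced. Any smooth image whose opposite edges fail to match, here a simple ramp of our own devising, concentrates its spectral energy along the central row and column of the shifted transform:

```python
import numpy as np

# A smooth ramp whose opposite edges do not match.  Its periodic
# repetition therefore contains jump-discontinuities, and the power
# spectrum shows a cross of large values through the zero-frequency
# point.
n = 64
ramp = np.add.outer(np.linspace(0.0, 1.0, n), np.linspace(0.0, 1.0, n))
power = np.abs(np.fft.fftshift(np.fft.fft2(ramp))) ** 2

c = n // 2
cross = power[c, :].sum() + power[:, c].sum() - power[c, c]
fraction = cross / power.sum()   # nearly all the energy lies on the cross
```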
(B) Display of transforms
The mathematics of Fourier transformation has some hard-to-ignore
effects that may engender confusion. The following brief explanation
will perhaps be helpful.
The Fourier transform of a real image (one whose gray-levels are of
byte, integer or floating-point type) comprises complex numbers,
equivalent to two real numbers for each pixel. This fact seems to
imply a doubling of data. However, such Fourier transforms also
have a type of symmetry, with respect to their centres (zero-frequency points), that implies redundancy. Thus, the real part of the
pixel at (-x, -y) is the same as at (x, y), while the corresponding
imaginary parts are equal in value but of opposite sign. Here, x, y
are sample and line co-ordinates relative to the centre of the
transform-image.
This symmetry must be kept in mind with notch-filtering, for
example (Figure 2). Changes made to one part of the transform must
be mirrored symmetrically with respect to the centre, or the product
of alteration will not be the transform of any real image. In other
words, the inverse transform would no longer be real.
The complex values complicate display of the Fourier transform. The
usual solution is to produce an image of absolute values, or their
squares (u²+v² being the squared absolute value of the complex
number u+iv), called the Fourier power spectrum. The ‘Log-power
spectrum’, obtained by taking logarithms (preferably after adding a
suitable positive constant, to avoid the singularity at zero), is also
often useful, as a means of compressing the wide range of values.
Power spectra show the orientation of wave-fronts (by pixel position)
and the amplitudes of waves (by gray-level). They lack only the
phase information, which depends on the relative magnitudes of real
and imaginary parts.
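Both the conjugate symmetry and the log-power computation can be checked with a few lines of NumPy (illustrative helpers with names of our own choosing):

```python
import numpy as np

def is_hermitian(F):
    """True if the pixel at (-x, -y) is the complex conjugate of the
    pixel at (x, y): the symmetry possessed by transforms of real images."""
    rows, cols = F.shape
    flipped = F[(-np.arange(rows)) % rows][:, (-np.arange(cols)) % cols]
    return np.allclose(F, np.conj(flipped))

def log_power_spectrum(image, offset=1e-12):
    """Centred log-power spectrum: squared magnitudes, log-compressed
    after adding a small positive constant to avoid the singularity
    at zero."""
    F = np.fft.fftshift(np.fft.fft2(image))
    return np.log(np.abs(F) ** 2 + offset)

image = np.random.default_rng(3).random((32, 32))
```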
As will be clear from these remarks, Fourier-transform images are in
many respects inconvenient; whether to store, to display, or to
manipulate. The so-called cosine transform, a close relative of the
Fourier, has all-real values, no symmetry properties, and a high
degree of natural immunity from ringing due to edge mismatch.
Unfortunately it is less well suited to achieving the objective of
processing of potential-field data.
References
Bracewell, R., 1965. The Fourier transform and its applications: New
York, McGraw-Hill, 381 p.
Gunn, P. J., 1975. Linear transformations of gravity and magnetic
fields. Geophysical Prospecting 23, 300-312.
Huang, T. S., Schreiber, W. F., Tretiak, O. J., 1971. Image
processing. Proc. IEEE 59, 1586-1609.
Singleton, R. C., 1969. An algorithm for computing the mixed radix
fast Fourier transform. IEEE Transactions on Audio and
Electroacoustics 17, 93-103.
Principal Component Analysis
Principal Component Analysis for Alteration Mapping Using Landsat
TM Satellite Imagery
Author
Eric Augenstein, Americas Region Office, Earth Resource Mapping,
4370 La Jolla Village Drive, Suite 900, San Diego, CA 92122-1253,
USA.
Using Principal Component Analysis
The following image processing technique provides a simple, robust
way to map alteration zones for mineral exploration using Landsat
Thematic Mapper (TM) satellite imagery. It is based on the use of
Principal Components Analysis (PCA) and is sometimes called the
‘Crosta technique’ after the researcher who carried out the initial
studies (Crosta and McM. Moore, 1989).
The following steps describe how to build the PCA alteration mapping
algorithm in ER Mapper. To create the algorithm, you need to
create and examine dataset statistics to account for variation in the
spectral properties of different areas and TM images. After choosing
the appropriate PCs for your image, you can display the result as an
RGB algorithm for analysis. For complete details on the concepts and
theory behind the PCA alteration mapping algorithm, please refer to
the references listed at the end (this particular technique is described
in the Loughlin paper).
1. Create two Virtual Datasets, one that contains TM bands 1, 3, 4 and
5 (this is dataset 1), and another that contains bands 1, 4, 5 and 7
(this is dataset 2).
Be sure to delete the transforms from each layer to avoid rescaling
the data, and add a label to each layer indicating the band.
2. Calculate statistics for each of the Virtual Datasets.
3. Display the statistics for dataset 1 and examine the Covariance
Eigenvectors.
Identify which PC has the greatest loadings (values) for TM bands 1
and 3, but with opposite signs (+ or -). Typically this is either
PC3 or PC4. For example, in this dataset you would choose PC4 (you
are looking at VDS bands 1 and 2 below as they refer to actual TM
bands 1 and 3):
Cov. Eigen.     PC1       PC2       PC3       PC4
Band 1          0.568     0.115    -0.487    -0.654
Band 2          0.571     0.157    -0.300     0.747
Band 3          0.229    -0.973    -0.003     0.029
Band 4          0.547     0.123     0.820    -0.115
This PC represents the “iron oxide” (F) component.
When examining the statistics, remember that the “Band”
number above refers to the order of bands in the VDS, *not* to
the actual TM band number. For example, TM band 3 may
actually be Band 2 in your VDS. In this case, you must first
determine which VDS band numbers correspond to TM bands 1
and 3, then examine the PC3 and PC4 statistics for those two
bands.
4. Display the statistics for dataset 2 and examine the Covariance
Eigenvectors. Identify which PC has the greatest loadings (values)
for TM bands 5 and 7, but with opposite signs (+ or -).
Typically this is also either PC3 or PC4. This PC represents the
“hydroxyl” (H) component.
5. Create a 2-band Virtual Dataset that calculates PC4 (or PC3 as
appropriate) from bands 1,3,4,5 in one layer (the F image), and PC4
from bands 1,4,5,7 in a second layer (the H image).
Delete the transforms to avoid rescaling the data and add a label for
each.
After saving the VDS, calculate statistics for it. You will use this VDS
later to generate PC1 of the H and F images.
6. Create an RGB algorithm with the following contents (remember to
use the appropriate PC for your scene; PC4 is shown below):
• Red = PC4 of bands 1,4,5,7 (the “H” image)
• Green = PC1 of the “H” and “F” images (using the Virtual Dataset created in step 5)
• Blue = PC4 of bands 1,3,4,5 (the “F” image)
On each layer, apply a linear contrast stretch that clips the darkest
portion of the histogram - you are interested in highlighting the
frequently occurring values in the center of the histogram. You might
start by applying a 99% clip and then moving the bottom node of the
transform line in closer to the histogram center.
The resulting image is usually a dark bluish color composite image
on which alteration zones are unusually bright. White pixels are both
iron-stained and argillized, bright reds and oranges are more
argillized than iron-stained, and bright cyan to bluish tones are
more iron-stained than argillized.
Optional: To add the major structural features of the image back into
your RGB algorithm, you can add an Intensity layer that generates
PC 1 of the TM image and contrast stretch it. This may make it easier
to interpret the image because the overall scene brightness or
albedo information is restored to the scene. In some cases, however,
it may detract from interpretation so you need to experiment.
In addition, you may wish to add low pass (averaging) filters to the
RGB layers to smooth noise in the higher order PCs, and/or a high
pass (sharpening) filter to enhance structural edge features in the PC
1 Intensity layer if you decide to use it.
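The statistical heart of the procedure, covariance eigenvectors plus the “greatest loadings of opposite sign” selection rule of the steps above, can be sketched outside ER Mapper as follows (our own helper names; the scoring heuristic merely encodes the rule stated above):

```python
import numpy as np

def covariance_eigenvectors(bands):
    """Eigenvectors of the covariance matrix of an (nbands, rows, cols)
    stack, one PC per row, ordered PC1, PC2, ... by decreasing eigenvalue."""
    eigvals, eigvecs = np.linalg.eigh(np.cov(bands.reshape(bands.shape[0], -1)))
    return eigvecs[:, np.argsort(eigvals)[::-1]].T

def pick_alteration_pc(loadings, i, j, candidates=(2, 3)):
    """Among candidate PCs (zero-based; PC3 and PC4 by default), pick the
    one whose loadings on bands i and j are of opposite sign and jointly
    largest in magnitude."""
    def score(pc):
        if loadings[pc, i] * loadings[pc, j] >= 0:
            return -1.0
        return min(abs(loadings[pc, i]), abs(loadings[pc, j]))
    return max(candidates, key=score)

# Loadings from the worked example above (dataset 1, VDS bands = TM 1,3,4,5):
loadings = np.array([[ 0.568,  0.571,  0.229,  0.547],   # PC1
                     [ 0.115,  0.157, -0.973,  0.123],   # PC2
                     [-0.487, -0.300, -0.003,  0.820],   # PC3
                     [-0.654,  0.747,  0.029, -0.115]])  # PC4
```

With the tabulated loadings, `pick_alteration_pc(loadings, 0, 1)` returns index 3, i.e. PC4, matching the worked example.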
References
Loughlin, W. P., 1991. “Principal Component Analysis for Alteration
Mapping.” Photogrammetric Engineering and Remote Sensing, Vol.
57, No. 9, September 1991, pp. 1163-1169.
Crosta, A. P. and J. McM. Moore, 1989. “Enhancement of Landsat
Thematic Mapper Imagery for Residual Soil Mapping in SW Minas
Gerais State, Brazil: A Prospecting Case History in Greenstone Belt
Terrain.” Proceedings of the 7th (ERIM) Thematic Conference:
Remote Sensing for Exploration Geology. Calgary, 2-6 Oct. pp.
1173-1187.
Aerial Photography and Data Integration
Aerial Photography and Data Integration in Environmental,
Engineering and Cartographic Applications
Authors
James Cutler and Justin Saunders, Geographic Information Services
Ltd. (GISL), PO Box 85, Egham, Surrey TW20 8SE, UK.
Context: Data Gathering
It is frequently assumed that image processing is confined to the
processing of remotely sensed data acquired from satellite-borne
sensors. In fact remote sensing has its origins in more prosaic
circumstances, those of the humble balloon, and it is in these
near-Earth surroundings that remotely sensed data offer a valuable
tool to those concerned with the alteration, monitoring, evaluation
and mapping of the Earth.
In Europe and North America in particular, urbanisation and the
spread of the automobile have resulted in an increasing awareness of
the fragility and transient nature of much of what is taken for
granted. In response, increasingly stringent legislation requires that
local/federal authorities and sectoral agencies are in a position not
only to oversee their domain but also to provide documentary
evidence on what exists or occurs and what should happen in the
future. In some cases, for instance forest management in Canada,
conventional satellite imagery has been used as an integral
component of development strategies. However, in other areas,
notably the urban fringe, an area especially subject to
developmental (and preservational) pressures, the demand is for
near-real-time information; this information has to be cost-effective,
accurate and easily integrated with existing data collation practices.
Aerial photography has played a major part in map composition since
1945. It has enabled cartographers to prepare accurate maps of
previously inaccessible areas and, coupled to ground survey, has
resulted in the production of large scale maps of the most densely
populated areas. However, the drawbacks of cost, human and capital
resources and time from flying to publishing now represent burdens
that many local authorities cannot afford to bear. The search for an
alternative methodology must take account of the growth of information
technology in surveying, database creation and information
gathering; geographic information systems (GIS) have become a
prominent tool in the armoury of such organisations as they seek to
respond to public concerns and internal need in a cost-effective way.
The response has been to evaluate the way in which sporadic and
specific aerial photography can be complemented by airborne video
photography and how digital imaging techniques (either cameraoriginated or through scanning) can be used to minimise the cost of
aerial photography. Image processing allows the user to extract the
required information; it also provides a contextual medium to
examine existing data, to document the landscape and to extract
secondary information. Further, as a digital dataset, this approach
allows integration with existing digital datasets and retains data
integrity and unlimited recall.
Specific applications that require this kind of information include:
• village plans
• development applications/planning licences
• map updating
• environmental assessments
• transport corridors
• infrastructure development
• utilities mapping
• urban planning
• legislative obligations and infringements (e.g. Green Belt, SSSIs)
The attached images illustrate the way in which many of these
applications can be addressed using ER Mapper.
Example: Transport Corridor Development
A local council, subject to local and national conflicts of interest with
respect to the development of a new transport corridor, identifies a
need to develop and maintain an up-to-date information
management system for the area in question.
At the same time developers, be they government or commercial,
consultants and pressure groups all have a need for information to
guide, mitigate and ultimately justify the decisions made. While
some may be concerned with issues of planning blight and
confidentiality, others will be concerned about short and long-term
impacts, access, compensation and the wider debate. The former
may look to internal resources to meet objectives while the latter
may depend on public information dissemination; all, however,
would agree on one thing—that the key to successful development
lies in the availability of up-to-date and accurate information.
Essential inputs to such a database would include:
All existing mapping (both from the national authority and from local planning/survey offices)
• topography
• infrastructure
• political boundaries
• settlements
• drainage and watersheds
• landscape
• ecology
• archaeological/historical
• agriculture
• minerals
• health and safety
utilities
• water
• sewage
• electricity
• telephone
• gas
• pipelines
land ownership
• land registry
• four-figure field references/areas (MAFF)
demographics
• census data, etc.
archive material
• aerial photography, old maps, etc.
Statutory designations
A quick assessment of the validity of this data would soon reveal that
much of it was not adequately up-to-date and that some data
collection would be required. Black and white aerial photography and
colour infrared aerial photography are increasingly used to redress
this balance where data gaps are identified; with the advent of digital
aerial photography cameras, computer compatible images are
readily available from these sources. Alternatively, the hardcopy
aerial photos may be scanned to produce the digital data. The spatial resolution of hardcopy aerial photography depends on flying height, the film used and the focal length of the camera lens; with scanned or digital data, the resolution of the digital device also determines spatial resolution. The resulting resolutions are typically sub-metre and can
be used to map features such as street lighting, man-hole covers and
gardens.
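The resolution relationships described above can be made concrete with a short calculation. The following Python sketch (illustrative only; the figures are assumptions, not taken from the text) uses the standard vertical-photograph scale model, in which photo scale is focal length divided by flying height:

```python
# Hedged sketch: ground sample distance (GSD) of scanned aerial photography.
# A scanner pixel of `pixel_um` micrometres on film maps to a ground
# distance of pixel_um * flying_height / focal_length. Figures illustrative.

def scanned_gsd_m(pixel_um: float, flying_height_m: float,
                  focal_length_mm: float) -> float:
    """Ground footprint (metres) of one scanned pixel of a vertical photo."""
    pixel_m = pixel_um * 1e-6          # scanner pixel size on film, metres
    focal_m = focal_length_mm * 1e-3   # camera focal length, metres
    return pixel_m * flying_height_m / focal_m

# Example: a 1:10,000 photo (152 mm lens flown at 1,520 m) scanned at
# 25 micrometres yields a quarter-metre ground pixel -- comfortably
# sub-metre, as the text notes.
gsd = scanned_gsd_m(25.0, 1520.0, 152.0)
```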
Airborne video is an increasingly popular alternative; it is cheap and
easy to obtain and, by generating digital data at source provides
data in a format immediately ready for image processing. Spatial
resolution is closely linked to flying height and has been used for
pipeline surveys, catchment measurement and for adding detail to
other lower resolution imagery such as SPOT Panchromatic data.
Georeferencing and
Rectification
Neither capture route (direct digital imaging or scanning) removes the need for rectification; however, the
widespread availability of national mapping in digital form allows
organisations concerned with spatial information management to
obtain the required datasets easily. Georeferencing high resolution imagery to this existing digital data makes it simple to examine change, update existing maps, evaluate local, sectoral and regional policies, and provide data in a format ready for use in a GIS; this represents a significant step forward.
Using ER Mapper the client was able to import three scanned black
and white and one colour infrared aerial photograph of the area
under consideration. These were then georeferenced to digital
national mapping data using the Geocoding Wizard in ER Mapper.
For ease of interpretation the user defined regions on each aerial photograph to exclude fiducial marks (and other ancillary information) as well as the photo border, and then used a formula to mask out the region via the Formula button. The resulting strip of
aerial photographs provided an excellent overview of the area of
interest.
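The region-masking step can be pictured with a minimal sketch. This is illustrative Python/numpy rather than ER Mapper's formula language, and the rectangular region here simply stands in for the hand-digitised polygon that excludes the photo border and fiducial marks:

```python
import numpy as np

# Hedged sketch of the masking step: pixels outside a user-drawn region
# (here a rectangle, standing in for the digitised polygon excluding the
# photo border and fiducial marks) are set to a null value.

def mask_outside_region(image: np.ndarray, row0: int, row1: int,
                        col0: int, col1: int, null_value: int = 0) -> np.ndarray:
    """Return a copy of `image` with everything outside the region nulled."""
    keep = np.zeros(image.shape, dtype=bool)
    keep[row0:row1, col0:col1] = True          # inside the region: keep
    return np.where(keep, image, null_value)   # outside: null

photo = np.full((100, 100), 128, dtype=np.uint8)  # stand-in scanned photo
masked = mask_outside_region(photo, 10, 90, 10, 90)
```

Nulled borders let adjacent photographs in the strip butt together cleanly when they are displayed side by side.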
Integrating and updating vector data for planning
The national mapping data was then overlaid to illustrate the degree to which ER Mapper can provide workable rectification in near real-time and to allow planners to identify areas where mapping has yet to be fully updated. In this example a recently developed site was clearly visible on the image but was evidently missing from the national mapping data; an area of forest could also clearly be seen to have been clear-felled. This form of raster-vector integration is a
powerful presentation tool, especially when the outputs are linked to
a GIS and other datasets.
While a developer would rapidly become aware of such a situation
from on-the-ground investigations, initial corridor alignments
depend on the information on the existing mapping; there is plenty
of scope for embarrassment! The developer would commission an
engineering consultant to design the transport corridor—horizontal
and vertical alignments, curvature, drainage and construction
requirements (in terms of cut and fill and sites) are among the first
objectives to be identified. These materialise in the form of
engineering drawings from systems such as MOSS and AutoCAD.
ER Mapper imports files from these data sources very easily.
The developing authority will commission an environmental
assessment either at its own discretion or in line with mandatory
obligations and legislation. This will draw on a wealth of primary and secondary data sources to compile a picture of the state of the to-be-affected corridor and immediate environs. The preparation of
digital datasets from these findings and the subsequent integration
of these into spatial databases of both planners and engineers is not
yet a standard procedure. However, with ER Mapper any digital
datasets can be brought in and overlaid on the imagery and existing
mapping at an early stage. This can make a significant contribution to
engineering design both through realignment and the adoption of
suitable mitigation measures and through recognition by planners of
previously unidentified constraints to development.
Conclusion
Much is made of the volumes of data and the amount of information
available today. Image processing technology coupled to GIS
represents the foremost way in which this fact can be made to work
for those affected by development. Data collection, compression and
collation, brought to the desktop and the decision maker by the new
technologies and the ever-evolving methodologies, enables users to
evaluate decisions both in advance and in hindsight, to assess
impacts and to generate the outputs needed to explain and justify
the final decision.
Monitoring Crop Production and Assessing
Irrigation Requirements
High Resolution Satellite Imagery for Monitoring Crop Production
and Assessing Irrigation Requirements
Authors
James Cutler and Justin Saunders, Geographic Information Services
Ltd. (GISL) PO Box 85, Egham, Surrey, TW20 8SE, UK.
Introduction
One of the increasingly common sights in Africa and the Middle East is great patches of colour scattered across a seemingly barren landscape. To many these are an agricultural and environmental anomaly; others feel they represent one of the few viable alternatives by which countries can both generate export revenues and feed their people. Crop circles or, more accurately, centre pivot irrigation, are
one of the more visible ways in which the application of technology
can alter the physical landscape. This chapter focuses on the role
that a different technology can play in monitoring such
developments at local and regional levels.
Context:
Agricultural
development
From Kansas to the Kalahari and from Saudi Arabia to Southern
Spain, irrigation circles up to a kilometre in diameter are becoming
a common agricultural characteristic. As visible from space as from
the highway, they are the focus for mono-cropping and agricultural
export in many developing countries and, in the more arid areas
such as the Middle East, often symbolise the ability to stand alone.
As such, considerable investments have been made in
hydrogeological and hydrological investigations, borehole
development, equipment, seed and fertiliser imports, and in training
and recruitment.
As the global population doubles in the next 30-40 years, its main
requirement will be adequate food and water. This means not only
potable water (for drinking, cooking etc.) but water for agricultural
use. To this end, focus is switching to agricultural and rural
development programmes that take full account of the current and
future water needs of the region.
Land inventory studies are among the first essential stages in any
rural development project; carrying them out is often difficult and
increasingly constrained. The lands to be assessed are frequently
unknown or undeveloped or may have undergone rapid changes.
Planners increasingly require data that is detailed, complex and
diverse, to devise programmes that are sociologically acceptable and
economically and environmentally viable. Traditional techniques
based on intensive ground surveys are cumbersome, unreliable and
costly.
Hydrologists and others recognise the river-basin as the
fundamental unit in effective water resource management and
planning. Regional planners acknowledge that an up-to-date
knowledge of current farming systems, land tenure systems, water
resource dynamics (particularly the balance between abstraction and
recharge in semi-arid catchments) and planning strategies are
integral to the evolution of sustainable agricultural programmes.
These needs place data gathering and effective information
dissemination at the centre of all decision making.
Multispectral satellite imagery, available from the Landsat series of satellites since 1972 at resolutions of 80m and latterly 30m (with Landsat TM), was complemented and enhanced in 1986 with the launch of the first SPOT satellite. SPOT carries two sensors, one multispectral with three bands in the visible and near-infrared and one wider-band panchromatic, returning imagery with spatial resolutions of 20m and 10m respectively. While SPOT has
undoubted spectral resolution limitations, the increased spatial
resolution has provided the resource planner with a new and
valuable tool in data gathering.
Example:
Irrigation
development
A national agency maintains up-to-date projections on population
growth and the associated demands in food and other goods and
services. Such a forecasting system enables planners to identify
impending shortfalls in production of any one of these goods and
services. Many semi-arid countries are afflicted by rapidly growing
populations, environmental fragility, weak economies and
inadequate infrastructure. As new schemes come on stream to
redress this imbalance the planners must monitor and evaluate their
development not only to ensure that it is effective but also to devise
new programmes.
Crop Production
ER Mapper provides a range of classification routines that allow the user to quickly differentiate between crops and to identify which areas are growing which crop. Irrigation circles were discriminated
simply by applying a contrast stretch; in ER Mapper the
Transformation dialog box and the histogram options allow the user
to control exactly what is seen on the screen in real-time.
In very arid areas such discrimination may be all that is required to
identify agricultural areas. However, in semi-arid areas the user
must separate agricultural or cropped areas from natural vegetation
and browse. Supervised maximum likelihood or unsupervised multispectral classification allows wheat, barley, lettuce and fallow circles
to be identified. The ER Mapper annotation and map production
system was used to create symbols identical in size to the irrigation
circles. The resulting map product is a clear illustration of the power
of ER Mapper to provide clear documentation to decision makers.
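The supervised classification referred to above can be sketched in outline. The following Python example implements a generic maximum likelihood classifier (each class modelled as a multivariate Gaussian); the class names, band values and training samples are purely illustrative and this is not ER Mapper's internal routine:

```python
import numpy as np

# Hedged sketch of supervised maximum likelihood classification: each class
# is modelled as a multivariate Gaussian fitted to training pixels, and every
# pixel is assigned to the class with the highest log-likelihood.

def fit_class(samples: np.ndarray):
    """Mean vector and covariance matrix of (n_pixels, n_bands) training data."""
    return samples.mean(axis=0), np.cov(samples, rowvar=False)

def log_likelihood(pixels: np.ndarray, mean: np.ndarray, cov: np.ndarray) -> np.ndarray:
    """Gaussian log-likelihood of each pixel (constant term omitted)."""
    diff = pixels - mean
    inv = np.linalg.inv(cov)
    maha = np.einsum("ij,jk,ik->i", diff, inv, diff)  # Mahalanobis distances
    return -0.5 * (np.log(np.linalg.det(cov)) + maha)

def classify(pixels: np.ndarray, classes: dict) -> np.ndarray:
    """Index of the most likely class for each (n_pixels, n_bands) pixel."""
    scores = np.stack([log_likelihood(pixels, m, c) for m, c in classes.values()])
    return scores.argmax(axis=0)

rng = np.random.default_rng(0)
training = {
    "wheat":  rng.normal([80, 120], 5.0, size=(50, 2)),  # illustrative DNs
    "fallow": rng.normal([150, 60], 5.0, size=(50, 2)),
}
classes = {name: fit_class(s) for name, s in training.items()}
labels = classify(np.array([[82.0, 118.0], [148.0, 62.0]]), classes)
```

Training statistics would in practice come from field-checked polygons over known circles rather than synthetic samples.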
The area of the irrigated circles is easily calculated; with a 500m
radius, the area is about 78 hectares. Considerable work has been
done on crop yield estimation not only from satellite data but also
from empirical evaluation of a large range of crops using different
irrigation technologies under varying climatic conditions throughout
the year. This allows vegetative biomass to be monitored throughout
the growing season using satellite imagery and digital image
processing techniques, and enables crop yield forecasts to be made
and updated; this helps with marketing, exports, imports and
pricing.
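The area figure quoted above can be checked with one line of arithmetic; the Python sketch below is purely a worked check of pi times r squared:

```python
import math

# Hedged arithmetic check of the area figure quoted in the text: a
# centre-pivot circle of 500 m radius covers pi * r^2 square metres.

def circle_area_ha(radius_m: float) -> float:
    """Area of an irrigation circle in hectares (1 ha = 10,000 m^2)."""
    return math.pi * radius_m ** 2 / 10_000.0

area = circle_area_ha(500.0)  # roughly 78.5 ha, matching "about 78 hectares"
```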
Vegetation indices have become a conventional means by which
image processing can provide accurate estimates of biomass; the
Normalised Difference Vegetation Index (NDVI) has been widely
correlated with green biomass and final crop yields but has been
criticised for its failure to accommodate densely vegetated areas or
to make allowance for the influence of soil background on
reflectance. The Perpendicular Vegetation Index (PVI) and the Soil
Brightness Index (SBI) were derived to overcome these limitations
but do require ground data. ER Mapper contains a library of indices
and band-ratios including NDVI, PVI and SBI; these are readily
accessed and applied through the Formula dialog box.
NDVI can be used to derive gray scale images that can then be
density sliced to produce an alternative but equally clear output.
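The index-plus-density-slice workflow just described can be sketched as follows. This is an illustrative Python/numpy version of the standard NDVI formula, (NIR - Red)/(NIR + Red), with placeholder slice thresholds rather than ER Mapper's own:

```python
import numpy as np

# Hedged sketch of the vegetation-index step: NDVI followed by a simple
# density slice that bins the continuous index into a few display classes.
# The thresholds and DN values below are illustrative only.

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalised Difference Vegetation Index, safe against divide-by-zero."""
    nir = nir.astype(float)
    red = red.astype(float)
    denom = nir + red
    return np.where(denom == 0, 0.0, (nir - red) / np.where(denom == 0, 1, denom))

def density_slice(index: np.ndarray, thresholds=(0.2, 0.5)) -> np.ndarray:
    """0 = bare soil, 1 = sparse vegetation, 2 = dense vegetation."""
    return np.digitize(index, thresholds)

nir = np.array([[200.0, 60.0], [120.0, 0.0]])
red = np.array([[ 50.0, 50.0], [ 60.0, 0.0]])
sliced = density_slice(ndvi(nir, red))
```

PVI and SBI follow the same pattern with different band combinations and, as noted above, additional ground data.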
Irrigation requirements
Various techniques use reference crop evapotranspiration rates
based either on the Penman method or a class “A” pan to assess
irrigation requirements. In semi-arid areas with sporadic rainfall,
irrigation requirements are often assumed to be equivalent to crop
evapotranspiration taking into account irrigation technique, soil
type, leaching requirement, crop growth stage, crop yield demanded
and frequency of irrigation application.
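The water-balance logic described above can be sketched numerically. The coefficients below (crop coefficient, application efficiency) are illustrative placeholders, not values from the text:

```python
# Hedged sketch of a crop-water-balance style irrigation estimate. Crop
# evapotranspiration (ETc) is taken as a crop coefficient (Kc) times a
# reference evapotranspiration (ET0, e.g. Penman-derived); the gross
# requirement adds back application losses. All figures are illustrative.

def gross_irrigation_mm(et0_mm: float, kc: float, effective_rain_mm: float,
                        efficiency: float) -> float:
    """Gross irrigation need (mm) for one period of the growing season."""
    etc = kc * et0_mm                        # crop evapotranspiration
    net = max(etc - effective_rain_mm, 0.0)  # rainfall offsets part of ETc
    return net / efficiency                  # delivery/application losses

# Example: ET0 of 8 mm/day over 10 days, mid-season Kc of 1.15, no rain,
# and a centre-pivot application efficiency assumed at 0.85.
need = gross_irrigation_mm(80.0, 1.15, 0.0, 0.85)
```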
Intensive agricultural programmes in semi-arid areas, whether extracting water from major rivers through dam construction and diversion/off-take structures or abstracting it from aquifers through tube-wells, make tremendous demands on the water source. When
to irrigate and how much to apply is very important for the proper
growth of crops and water use efficiency. Irrigation enables the
farmer to grow crops more-or-less continuously; climatic and other
factors limit actual cultivation periods.
A satellite image based study of centre-pivot irrigated circles within
a catchment over one or more years using ER Mapper will provide a
very good indication of the number of crops being grown per circle
per year, the type of crop, the yields generated and therefore the
water demand of those farming systems. Water resource managers
and those responsible for agricultural development can work together to
ensure that abstraction rates for agriculture or other large scale
activities such as mining, power stations and industry do not exceed
aquifer recharge and do not adversely affect water quality in the
catchment.
Conclusion
Satellite imagery offers a tool for our times, a cost-effective means
of data gathering; ER Mapper is another tool, one that can exploit
the widespread availability of remotely sensed data to provide
information to decision makers in a coherent and systematic
manner. As issues of environment and development become
increasingly contentious and conflict-ridden, the ability to provide
up-to-date and accurate information to help guide and resolve
agricultural and rural development planning is essential.
Coastal Habitat Mapping
Author
Fiona Evans, Remote Sensing Applications Centre, Department of Land Administration (Leeuwin Centre For Earth Sensing Technology), 65 Brockway Road, Floreat, Western Australia, 6014.
Abstract
This application study involves two separate projects in two locations of Western Australia. One study area is Geographe Bay, in the south-west (33.3S, 115.3E), and the other is the Abrolhos and Montebello Islands, near Geraldton (28.5S, 114.4E).
In this study, ER Mapper has been applied to map coastal habitats because of its unique ability to process satellite data for nearshore habitat investigations. This information is useful for marine habitat protection and management, oil spill response and coastal management planning.
Introduction
Monitoring of our environment is, by necessity, presently being given a high profile. It has been realised that the use of leading-edge technology, where it can be shown to provide definitive information economically, will greatly increase the efficiency and effectiveness of projects. The Western Australian (WA) coastline is approximately 12,500km long. The requirement for a conclusive study on marine habitats over a short period of time necessitated the use of remotely sensed imagery in order to provide high quality information at the required scales.
The Commonwealth Scientific and Industrial Research Organisation (CSIRO), Division of Fisheries, is undertaking a project to map the underwater features of the WA coastline at a scale of 1:250 000 with the support of the Western Australian Department of Transport. The study area of this project is located in Geographe Bay, in the south-west of Western Australia.
The final products from this project will be used by the Department
of Environmental Protection (DEP), Department of Planning and
Urban Development (DPUD), Local Government, Waterways
Commission, Department of Transport, Conservation and Land
Management (CALM), Western Australian Department of Agriculture
(WADA), Department of Land Administration (DOLA) and the State
Combat Committee.
The second project involves two study areas. They are located in the
Abrolhos Islands, west of Geraldton, and Montebello Islands, north
of Onslow, and are also in Western Australia.
The information obtained from the above projects will be used for
marine habitat protection and management, oil spill response and
coastal management planning.
Available imagery
A number of different sources of imagery were used to build up a
picture of the area including aerial photography, scanned data and
satellite imagery.
Aerial photography
There is, at present, a limited archive of aerial photography of the
coastline of Western Australia with suitable specifications. Problems to date with the use of aerial photography for water coverage relate to having an appropriate sun angle, to reduce sunglint and thereby maximise water penetration, and to the prevailing weather conditions, as low winds are required: low winds reduce surface scatter and produce low swell, thus reducing the concentration of suspended sediment. In addition to these problems, most of the photography is limited to land coverage, so often the inclusion of the sea at the end of flight lines is fortuitous.
Scanned data
Other imagery that is available for marine habitat mapping is
provided by systems such as the Gascoyne MC Airborne Multispectral
Scanner. The scanner allows the imagery to be flown under ideal
weather conditions with high spatial resolution and multispectral
capability in the visible, shortwave and thermal infrared. Scanner
data are useful for mapping shallow waters and have been used in the Perth Metropolitan Coastal Waters Study. The Gascoyne scanner is limited in its coverage area by a field of view of 900 metres and a flying height ranging from 1,000 to 10,000 metres, giving swath widths of 2-20 km and a nominal spatial resolution of 2-20 metres.
Satellite imagery
Under ideal conditions, penetration of sea water by Landsat
Thematic Mapper (TM), band 1, is superior to aerial photography. TM
imagery was chosen, in part, for this project because of its cost
advantage over aerial photography. Satellite data met all the
specified base requirements of CSIRO for this project. TM data, which cover an area of 185km x 185km, have the advantage of having been archived every 16 days since 1986, giving a vast choice of images from which to choose suitable cloud-free scenes.
After a search of the current archive of TM scenes, which are stored
on microfiche at the Australian Centre for Remote Sensing (ACRES)
and at the Remote Sensing Applications Centre, a scene containing
bands 1, 2, 3 and 4 was selected for its clarity and penetration into
water (wind, seas, tide). TM Bands 1, 2, 3 and 4 were used for the
following reasons: Band 1 (blue band) can penetrate water to depths
of 25 metres depending upon sun angle and water clarity; Band 2
(green band) is used to determine the turbidity component; Band 3
(red band) allows better delineation of the coastline; and Band 4
enhances both land and vegetation. The selected scene also showed
features in water depths to 50 metres for the study area, a further
requirement of the project.
Geographe Bay
Georectification and
image enhancement
Georectification and image enhancement were required. The
selected image was rectified to the Australian Map Grid (AMG) using
a technique based on the selection of ground-control points which
gave a root mean square (RMS) error of 1 pixel (25 metres). The
image was then enhanced using combination linear stretches to
highlight bottom types such as reefs, seagrass, sand and rocks. Each
enhancement was written to film on a Colorfire 240. Photographic
prints were made at scales of 1:250 000 and 1:100 000.
Data integration
Field validation
Trawl data were used for this study, collected during a Western Australian Fisheries Department survey in 1991 using a towed video system and Global Positioning System (GPS). The method is described in Western Australian Fisheries Bulletin No. 100. The trawl data were gathered every 15 seconds and included latitude, longitude, time, date and substrate type. The latitudes and longitudes were converted to AMG coordinates, rasterised and merged with the TM data.
Bathymetry lines, supplied by the Western Australian Department of
Transport, were digitised in ER Mapper file format, converted to DXF
files then merged into the TM image. Files were converted to DXF
files because they can be imported into a wide variety of image
processing software.
Masking
The TM data were subsectioned using one of ER Mapper’s Regions
menu options. Both polygon and class type functions are available.
Regions specified according to one or more polygons can be used to
mask or block off part of an image.
In this case, the TM data were subsectioned to mask out the land using a level slice function. This function treats digital numbers below a certain value as water and those above it as land; the discriminating value will be different for each scene. The slice is applied to band 4 because the strong water absorption in this band allows the interface between land and water to be separated. This permits the separate enhancement of the water using bands 3(R), 2(G) and 1(B).
The land was enhanced separately using bands 4(R), 3(G) and 2(B).
The enhancement of this band combination suits delineation of the
land-water interface, as well as showing terrestrial features.
Following the enhancement the two masked images were merged.
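The masking-and-merge sequence above can be sketched as follows; this is illustrative Python/numpy, with a made-up threshold standing in for the scene-specific discriminating value:

```python
import numpy as np

# Hedged sketch of the level-slice masking described in the text: band 4
# digital numbers below a scene-specific threshold are treated as water
# (band 4 is strongly absorbed by water), and the land and water parts of
# the scene are enhanced separately before being merged back together.

def water_mask(band4: np.ndarray, threshold: int) -> np.ndarray:
    """Boolean mask: True where the pixel is water (low band 4 DN)."""
    return band4 < threshold

def merge_enhancements(mask: np.ndarray, water_rgb: np.ndarray,
                       land_rgb: np.ndarray) -> np.ndarray:
    """Combine separately enhanced water and land images into one scene."""
    return np.where(mask[..., None], water_rgb, land_rgb)

band4 = np.array([[5, 40], [8, 60]], dtype=np.uint8)
mask = water_mask(band4, 20)                    # threshold is scene-specific
water = np.zeros((2, 2, 3), dtype=np.uint8)     # stand-in 321 enhancement
land = np.full((2, 2, 3), 255, dtype=np.uint8)  # stand-in 432 enhancement
scene = merge_enhancements(mask, water, land)
```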
Image interpretation
The identification of the seagrass and other substrate types was
done by a CSIRO marine biologist with local knowledge of the area.
Polygons were hand drawn onto the image around features of similar
tone, then labeled and digitised. These files were then archived for
use in future projects such as the State Combat Committee’s Oil Spill
Response Group.
The Abrolhos and
Montebello
islands
In another project centred on the Abrolhos Islands, west of
Geraldton, Western Australia (28.49S, 114.36E), and the Montebello
Islands, north of Onslow, Western Australia (21.41S, 115.12E),
Thematic Mapper imagery was used together with SPOT panchromatic
imagery.
How to merge images
The SPOT and Landsat TM data sets were merged to create an image
that has the high resolution of the SPOT data, that is 10 metres,
together with TM spectral qualities. The steps used to merge the two
images were to:
1. Register SPOT to Transverse Mercator projection (Australian Map
Grid).
2. Register the TM Band image to the SPOT panchromatic file.
3. Merge the data using a fusion algorithm from the
‘Example_Data_Fusion’ directory such as
‘Sharpen_TM_with_Pan.alg’.
This algorithm merges three bands of the TM dataset displayed as
Red, Green and Blue layers with the SPOT dataset displayed as an
intensity layer. This is carried out instantly, without creating
intermediate files.
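The fusion step can be illustrated with a simple intensity substitution. The sketch below uses a Brovey-style rescaling, which preserves the colour ratios of the three TM bands while forcing the overall intensity to match the pan band; it is an assumption that 'Sharpen_TM_with_Pan.alg' behaves along similar lines:

```python
import numpy as np

# Hedged sketch of intensity-based pan sharpening: the three TM bands
# supply the colour while the higher-resolution SPOT panchromatic image
# supplies the intensity. This Brovey-style rescaling is illustrative, not
# necessarily the exact algorithm used by the ER Mapper template.

def pan_sharpen(rgb: np.ndarray, pan: np.ndarray) -> np.ndarray:
    """Rescale each colour band so the overall intensity matches the pan band."""
    intensity = rgb.mean(axis=-1, keepdims=True)
    safe = np.where(intensity == 0, 1.0, intensity)  # avoid divide-by-zero
    return rgb * pan[..., None] / safe

rgb = np.array([[[30.0, 60.0, 90.0]]])  # one TM pixel (already co-registered)
pan = np.array([[120.0]])               # SPOT intensity for the same pixel
sharp = pan_sharpen(rgb, pan)           # colour ratios kept, intensity = pan
```

The registration steps listed above matter here: both datasets must share the same grid before any per-pixel fusion is meaningful.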
Classification
Another method of identifying marine habitats includes the use of
supervised and unsupervised classification techniques. However, the water column effect on the classification becomes apparent as the depth of water increases. This effect is caused by the absorption and scattering of light in the water, which obscures sea-floor reflectance variations.
In this situation low values are attributed to pixels defining features
in deep water and higher values in shallow water for the same
biological features. An algorithm to remove the water-column effects
may need to be used to produce more accurate results.
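One widely used correction of this kind is Lyzenga's depth-invariant index, sketched below with a toy attenuation model; the coefficients are illustrative and the text does not specify which algorithm was applied:

```python
import math

# Hedged sketch of a Lyzenga-style depth-invariant bottom index: the
# log-transformed radiances of two bands are combined using the ratio of
# their effective attenuation coefficients (k_i / k_j), so the result
# depends on bottom type but not on depth. All numbers are illustrative.

def depth_invariant_index(band_i: float, band_j: float, k_ratio: float) -> float:
    """ln(L_i) - (k_i/k_j) * ln(L_j): constant over depth for one bottom type."""
    return math.log(band_i) - k_ratio * math.log(band_j)

def radiance(bottom: float, k: float, depth: float) -> float:
    """Toy two-way attenuation model: L = bottom * exp(-2 * k * depth)."""
    return bottom * math.exp(-2.0 * k * depth)

k_i, k_j = 0.1, 0.2  # illustrative effective attenuation coefficients
shallow = depth_invariant_index(radiance(100.0, k_i, 2.0),
                                radiance(80.0, k_j, 2.0), k_i / k_j)
deep = depth_invariant_index(radiance(100.0, k_i, 8.0),
                             radiance(80.0, k_j, 8.0), k_i / k_j)
# The two indices agree even though the raw radiances differ with depth,
# which is exactly the coral-at-2-m-versus-8-m situation described below.
```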
An example of this problem was shown in an unsupervised
classification routine of the Abrolhos Island which identified
approximately 20-25 classes. (Each group of islands in the Abrolhos
was classified separately due to changes in habitat.) The depth of
water around these islands is approximately 10 metres but there is
still an effect from the water column. This was confirmed with field
checking in an area where coral seen at 2 metres depth was of the
same species as that at 8 metres. These groups were actually
defined as different classes on the image using the classification
routines.
Conclusion
The results achieved in the mapping of marine habitats around the Abrolhos Islands, Montebello Islands and Geographe Bay have verified that TM satellite imagery was the best source of data meeting the required specifications.
This application discusses a number of remotely sensed data sources that can be used for nearshore habitat mapping. It also suggests a
number of routines such as supervised and unsupervised
classifications that could be applied to the data to help determine the
different marine habitats.
Thematic Mapper imagery was used in these projects because of the
amount of coverage of a scene, the ability of band 1 to penetrate into
the water to detect marine bottom types and the cost advantage
over other systems for the large area of Geographe Bay. TM images are also archived every 16 days, which allows for a larger selection of imagery.
ER Mapper allows the data to be manipulated in real time using steps
that are easy to follow and understand. It has the ability to integrate
vector and raster data with an easy-to-use graphical interface to produce enhanced products useful to a wide range of users.
Monitoring Environmental Change in Lake
Turkana
The Use of Multi-temporal Satellite Imagery to Examine
Environmental Change in Lake Turkana
Authors
James Cutler and Justin Saunders, Geographic Information Services
Ltd. (GISL) PO Box 85, Egham, Surrey, TW20 8SE, UK.
Introduction
Environmentalists and hydrologists are chief among those who have
accepted the river-basin as the fundamental unit in effective water
resource management and planning. Regional planners acknowledge
that an up-to-date knowledge of current farming systems, land
tenure systems, water resource dynamics and planning strategies
are integral to the evolution of sustainable agricultural programmes.
That this point has been reached comes less from political will
than from the emergence of means by which change, especially
degradation, in the whole environment can be recorded and
reported. The improvement of data collection systems, the
development of low-cost means of analysing and then disseminating
the results could be deemed one of the early, if less public, benefits
of the evolving information super-highway. That decision making can
be at least in part determined by the presentation of previously
remote and difficult to assimilate issues in terms of the stark reality
that they represent is a triumph for the technology that makes it
possible.
This technology revolves around remotely sensed data. Traditional
aerial photography was supplemented, and may eventually be
replaced in most forms, by the emergence into the public domain of
satellite borne imaging systems in the early 1970s. Multi-spectral
satellite imagery has been available from the Landsat series of
satellites since 1972 at a spatial resolution of 80m and latterly from
Landsat TM (since 1984) at a spatial resolution of 30m. One of the
most important characteristics of these data sources is that the
satellites cover exactly the same part of the Earth’s surface at
regular intervals (16-18 days in the case of Landsat).
Water resources,
agricultural
development and
environmental
change
From the Danube to the Mississippi and the Nile, trans-national water
management bodies are being set up to try to ensure the fair and
effective utilisation of water resources. In other basins (e.g.
Tigris/Euphrates), negligence, a lack of political will and international stand-offs are combining to ensure uneven and destructive development.
The Omo-Gibe catchment of south-west Ethiopia is a small but useful
example illustrating the need for international cooperation and
participation.
While no-one can dispute that the major needs of the global population are food and water, there are many who forget, ignore or
simply ride roughshod over the legitimate concerns of others to pitch
short-term gain against long-term need, the powerful against the
powerless and administrators and bureaucrats against the land users
and guardians.
Water is perhaps the most important resource upon which future
agricultural production depends. The hydrological cycle that charges
the rivers, lakes and aquifers of any river basin feeds the soils, the
trees and the crops. Sustainable agricultural production depends not
only on the continuation of this cycle but on other characteristics that
determine land capability and land suitability. Micro-climatic factors,
slope, soils, wildlife and farming systems can only be combined in a
finite number of ways until their inter-relationships start to
disintegrate. When this happens, the very sustainability of the
system is called into question. There may be solutions but they are
invariably short-term, consisting of chemicals and fertilisers and the
dubious benefits of intensification. Soil erosion, water pollution,
nutrient depletion and falling yields and incomes are all products of
short-sighted development strategies.
However, by the time there is hard evidence it is frequently too late to pre-empt the development of agricultural systems that are, in the end, not sustainable. Thus, it is essential that case studies are produced and
models evolved that enable the planner to cogently argue the case
for more pragmatic development. Hard data on environmental
change is notoriously difficult to obtain; proving the causal links is
even harder. Hence the pressure on the monitoring and data
collection systems to provide tangible evidence of change and to link
it to changes in the use of the environmental and natural resources
of the contributing regions.
Example: Erosion,
Siltation and Lake
Capacity
False colour image or
Colordrape
The spectral resolution of Landsat satellite imagery allows the user
to produce false colour composite images (FCCs) in ER Mapper using
a Band 123 combination for MSS and Band 234 for TM. While the
spatial resolutions do differ significantly, this is not a major
disadvantage when evaluating trends and gross changes over time.
The band combinations produce similarly coloured images for the
same time of year.
Rectification
The next step is to ensure that both images are corrected to the
national mapping series. This is easily done using the ER Mapper
Geocoding Wizard; the user is prompted for map coordinates for
ER Mapper to correct the imagery to the map sheets. In this case a
nearest neighbour resampling algorithm was used to ensure DN
value integrity.
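The choice of nearest neighbour resampling can be illustrated briefly; the sketch below (illustrative Python/numpy, not ER Mapper code) shows that every output DN is an unmodified input DN, which is why this method preserves spectral integrity for later classification:

```python
import numpy as np

# Hedged sketch of why nearest-neighbour resampling preserves DN integrity:
# it copies an existing DN to each output pixel, so every value in the
# rectified image is a genuine sensor measurement, whereas bilinear
# interpolation would invent new, blended DN values.

def nearest_neighbour(image: np.ndarray, rows: np.ndarray, cols: np.ndarray) -> np.ndarray:
    """Sample `image` at fractional (row, col) positions by rounding."""
    r = np.clip(np.rint(rows).astype(int), 0, image.shape[0] - 1)
    c = np.clip(np.rint(cols).astype(int), 0, image.shape[1] - 1)
    return image[r, c]

dn = np.array([[10, 20], [30, 200]], dtype=np.uint8)
sampled = nearest_neighbour(dn, np.array([0.4, 0.9]), np.array([0.3, 0.8]))
# Each output DN (10 and 200 here) already existed in the input grid.
```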
Classification
A classification procedure was then adopted for each image. The
objective of the analysis was to discriminate the water body from the
surrounding land. Water has a very distinct spectral signature that
enables easy discrimination; either of the infrared bands, which are almost completely absorbed by water bodies, or the blue band, which allows
for extended water penetration, can be used. In this case the
infrared band in each FCC scene was used to identify the extents of
the water body. By combining a scattergram with the cursor values
associated with the Cell Values Profile Window (from the View menu)
it was easy to identify the range of DN values associated with the
water bodies.
With these values a simple classification algorithm was developed for
each image that created an effective “mask” or class for the lake.
On the MSS scene this was coloured blue while on the TM scene this
was coloured white. It was then a simple task to subtract the 1994
data from the 1973 data to gather a rough estimate of the change in
the lake at the north end associated with the deposition of soil.
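The masking-and-subtraction step can be sketched as follows. The DN ranges, band values and pixel area below are hypothetical, and the real work was done interactively in ER Mapper; this is just the logic of the classification.

```python
import numpy as np

def water_mask(infrared_band, dn_range):
    """Classify water pixels: infrared is strongly absorbed by water,
    so water falls in the low, narrow DN range read off the scattergram."""
    lo, hi = dn_range
    return (infrared_band >= lo) & (infrared_band <= hi)

# Hypothetical co-registered 1973 MSS and 1994 TM infrared bands.
mss_1973 = np.array([[5, 6, 40], [7, 8, 50], [60, 70, 80]])
tm_1994  = np.array([[5, 45, 40], [7, 8, 50], [60, 70, 80]])

lake_1973 = water_mask(mss_1973, (0, 10))
lake_1994 = water_mask(tm_1994, (0, 10))

# Pixels that were lake in 1973 but land/silt in 1994.
silted = lake_1973 & ~lake_1994
pixel_area_km2 = 0.0625          # illustrative pixel size only
change_km2 = silted.sum() * pixel_area_km2
```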
Image mosaicing and
annotation
One of the unfortunate facts about the different generations of
Landsat satellites is that their paths are very slightly different. This
means that while the MSS scene covers the whole of the north end
of the lake, the TM scene only covers about 90% of it, leaving out
the western edge. As an interim solution the MSS and the TM
data were mosaiced and the two classifications overlain.
ER Mapper’s annotation system was used to draw an approximation
of the area to the west of the TM data that could be assumed (on the
basis of all the other evidence) to have also become silted up. This
was added to the 1994 data to give an improved statistic on
environmental change.
Statistics
The approximate statistics extracted indicate that some 610 sq. km
of lake have been lost as a result of soil deposited in the lake by the
Omo-Gibe river system. It should be noted that this basin is about
77000 sq. km. Without some more precise bathymetric data it is not
possible to estimate a total volume of silt but using simple figures
(e.g. new silt deposited to an average depth of 5 m), an erosion figure
for the whole catchment of about 20 m³/ha/yr emerges!
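The quoted figure can be reproduced with simple arithmetic, assuming the 1973-1994 imagery dates bound the deposition window:

```python
# Rough reconstruction of the erosion estimate in the text.
lost_lake_km2 = 610          # lake area lost to silt
silt_depth_m = 5.0           # assumed average depth of new silt
catchment_km2 = 77_000       # Omo-Gibe basin
years = 1994 - 1973          # 21-year window between scenes (assumed)

silt_volume_m3 = lost_lake_km2 * 1e6 * silt_depth_m   # km^2 -> m^2
catchment_ha = catchment_km2 * 100                    # km^2 -> ha
erosion_m3_per_ha_per_yr = silt_volume_m3 / (catchment_ha * years)
# comes out just under 19 m^3/ha/yr, i.e. the "about 20" in the text
```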
Conclusion
When such data is coupled to parallel evidence in the dams and
rivers of Northern Ethiopia and neighbouring Sudan, it is difficult to
avoid the conclusion that there has been a dramatic change in the
way in which land and water resources are being managed. Aware
as we are of the recent history of drought, famine, warfare and
widespread deforestation in this region it would be easy to describe
the changes as self-inflicted. We are also aware that the soil-rich and
climatically favoured Ethiopian highlands are both a cradle of much of
modern international agriculture and a source of their own
regeneration.
However, man-made environmental changes take many forms, from
short-term expediency and necessity through political force
(resettlement and trans-migration being significant factors in
Ethiopian land and agricultural change) to long-term international
transgressions of unrealised impact (atmospheric and oceanic in
particular) and internationally imposed economic restructuring. It
would be unwise for planners and decision makers in national
institutions and in donor agencies to plead political and economic
expediency in the face of a wealth of data that, while not indicative
of the proportionate role of causal factors, does provide solid
evidence of the lasting and detrimental impact of poorly conceived
and inadequately controlled agricultural development.
As more and more studies such as this preliminary investigation
reveal the extent of environmental change and the associated
decline in environmental (and human) condition, the roles of
satellite imagery, of ER Mapper and of the planner will be
strengthened by national and international organisations alike.
Monitoring the Gulf of Gdansk
Authors
Marek Graniczny, Director of the Department for Remote Sensing
and Cartography, National Geological Institute (PIG), Warsaw,
Poland.
Zbigniew Kowalski and Michiel Zevenbergen, Geodan Polska,
GIS Consultants, Warsaw, Poland.
Background
The Baltic Sea is a unique water body characterized by extreme cold,
slow circulation, extensive shallows and a narrow channel or
channels to a more dynamic, conventional ocean regime. The
countries that border the Baltic Sea have large coastal towns and, in
some cases and more importantly, catastrophic industrial (and
agricultural) legacies that have in themselves been the concern of
environmentalists for some time.
Concern increased during the 1980s as the downstream impacts of
smoke-stack industrial centres, large scale mono-cropping, intensive
fertiliser use and acid rain deposition became increasingly self-evident (declining fish stocks, polluted beaches, red tides,
deteriorating water quality) across the region culminating in the
endorsement of the Helsinki Convention on sustainable use of the
Baltic by the countries adjoining the Baltic Sea in 1990. With the collapse of
the regimes of the former Eastern Europe and as concerns about the
effects of anthropogenic activities in changing nutrient runoff
became more serious, a regional task force (HELCOMTF) was
established. The members are all countries bordering the Baltic that
have major rivers flowing into the sea and major multilateral
financial institutions.
Poland, as one of the contracting parties, has pursued several
programmes to monitor the flow of pollutants into the Baltic and to
evolve adequate mitigation measures to reduce these outputs. This
approach recognises that one cannot hope to manage impacts on a
runoff-affected marine area without managing the man-induced
exports to it from adjacent watersheds, nor formulate land
development programmes without considering potential impacts on
ecologically and economically important downstream systems. It is
in this context that the National Geological Institute in Poland (PIG)
has carried out a pilot study to assess the use of satellite imagery to
study the effects of pollution, sedimentation and algal growth in the
Gulf of Gdansk caused mainly by the Vistula River.
Every year this river transports a heavy load of nutrients, chiefly
nitrogen and phosphorus, from a heavily urbanised and polluted
hinterland that reaches as far as the coalfields of Silesia. This water
(often carrying nitrate and phosphate levels 4-5 times or more higher
than those of “undeveloped” outflow) mixes with the saline Baltic waters in the
Gulf of Gdansk and eventually with the rest of the Southern Baltic
Sea.
Bio-optical Algorithms
Traditionally, researchers and others collect marine-based
measurements of sea-water variables at the same time as the
satellite imagery is acquired. In this case there was no ground data
coincident with the imagery, precluding supervised classification
techniques from being used; therefore alternative techniques were
evolved.
Bartolucci (1977) and Alfoldi (1982) both suggest a non-linear
relationship between reflectance and particles in suspension. The
greatest discrimination in spectral response between clear and turbid
waters occurs between 0.6-0.9 μm, while the 0.51 μm to near-infrared bandwidth has several windows ideal for assessing
phytoplankton related phenomena such as the near-surface
concentration of chlorophyll-a pigments and ocean colour.
Phytoplankton has a sunlight absorption peak in the blue band by
virtue of its resident photosynthetic pigment, chlorophyll-a. The
annual phytoplankton growth cycle (often peaking with algal bloom)
depends on the effects of photosynthesis which are strongest in the
summer months.
Image processing
Having acquired Landsat MSS and TM imagery from the last 4 years, PIG have
used ER Mapper for image processing. ER Mapper was considered
particularly appropriate to the tasks in hand because of its algorithm
module which allows the user to develop and edit in situ the
complete description of the image processing tasks to be
implemented. The study had to use formulae developed elsewhere;
ER Mapper lets the user apply these very easily.
Principal Components
Analysis (PCA)
ER Mapper simplifies this transformation, which reduces the data
redundancy engendered by extensive inter-band correlation. The
transformation was performed on all Landsat MSS bands and the
first 4 bands of the TM imagery. Subsequent multi-band
enhancement and histogram equalisation created readily
interpretable hardcopy.
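In outline, the transformation amounts to rotating the band stack onto its eigenvectors; a minimal NumPy sketch (not ER Mapper's implementation) using two synthetic, strongly correlated bands:

```python
import numpy as np

def principal_components(bands):
    """Rotate a (n_bands, rows, cols) stack onto its principal components,
    removing the inter-band correlation typical of multispectral imagery."""
    n, r, c = bands.shape
    X = bands.reshape(n, -1).astype(float)
    X -= X.mean(axis=1, keepdims=True)
    eigvals, eigvecs = np.linalg.eigh(np.cov(X))  # ascending order
    order = np.argsort(eigvals)[::-1]             # PC1 = largest variance
    pcs = eigvecs[:, order].T @ X
    return pcs.reshape(n, r, c), eigvals[order]

# Two correlated synthetic bands: PC1 captures nearly all the variance,
# which is exactly the redundancy reduction PCA is used for here.
rng = np.random.default_rng(0)
base = rng.normal(size=(32, 32))
stack = np.stack([base, base * 0.9 + rng.normal(scale=0.05, size=(32, 32))])
pcs, variances = principal_components(stack)
```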
Turbidity and Chlorophyll
Concentration
The study used the formula developed by the REMONO project
(REMOte sensing NOrth Sea) in ER Mapper to assess turbidity in the
Gulf of Gdansk:
Tu = -24.47 + (1.03 x b1) - (1.65 x b2) + (1.46 x b3)
+ (0.3 x b4)
Similarly, the following formula was used in ER Mapper to evaluate
phytoplankton blooms:
Ch = -131.32 + (7.3 x b1) - (13.91 x b2) + (12.66 x b3)
- (3.63 x b4)
where:
•
b1 = green (0.50-0.60 μm)
•
b2 = red (0.60-0.70 μm)
•
b3 = reflective infrared (0.70-0.80 μm)
•
b4 = reflective infrared (0.80-1.10 μm)
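Applied outside ER Mapper, the two formulas are simply per-pixel linear combinations of the four bands; the band values below are invented for illustration only:

```python
def turbidity(b1, b2, b3, b4):
    """REMONO turbidity formula as quoted in the text."""
    return -24.47 + 1.03 * b1 - 1.65 * b2 + 1.46 * b3 + 0.3 * b4

def chlorophyll(b1, b2, b3, b4):
    """REMONO chlorophyll (phytoplankton) formula as quoted in the text."""
    return -131.32 + 7.3 * b1 - 13.91 * b2 + 12.66 * b3 - 3.63 * b4

# Hypothetical single-pixel band values; with NumPy arrays the same
# two functions evaluate over whole scenes unchanged.
b1, b2, b3, b4 = 30.0, 25.0, 20.0, 15.0
tu = turbidity(b1, b2, b3, b4)
ch = chlorophyll(b1, b2, b3, b4)
```

This is exactly the kind of formula that ER Mapper's algorithm module lets the user type in and edit in place, which is why it suited the study.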
These algorithms were applied to detect the sedimentation process
and the intensity of algal growth. The thermal band from Landsat TM
(10.40-12.50 μm) was also incorporated at this stage because there
is a strong correlation between sea surface temperature (SST) and
phytoplankton blooms.
Comments
At the moment there is inadequate infrastructure and poor capacity
to apply known methods to monitor the fast-deteriorating Baltic Sea
marine environment. Remotely sensed data, marine based and
laboratory measurements need to be integrated in a cost-effective
way to provide a monitoring solution for the dynamic marine
ecosystem of the Baltic. The potential role of satellite data could be
seen in the images. In the MSS scene of 8 July, 1991 water masses
from the Vistula were clearly visible as they moved east along the
coast. The currents were visible because of the algal bloom
associated at this time of year with the drift of warm water
masses. In the same image the Russian port of Baltijsk could be
identified as a possible major source of pollution. In the Landsat
image, illustrating the utility of the Landsat TM thermal band for high
temperature contrasts, significant temperature differentials were
clearly visible within 15 km of the mouth of the Vistula
River.
The future
Several projects are about to start which will further enhance
knowledge of, and monitoring capabilities for, the Gulf of Gdansk and
the whole Polish coastal zone. The focus of these programmes is an
analysis of both currents and the sedimentation process, an
applications sector that continues to evaluate the role of satellite
imagery and for which intuitive image processing packages such as
ER Mapper are very appropriate.
Boreal Forest Monitoring
Author
Kevin Corbley is a freelance writer and communications consultant
in Aurora, Colorado, USA. He specializes in remote sensing and GIS
technology.
Abstract
ER Mapper has been used to study lightning strikes and forest
degradation in tundra-like regions with poor atmospheric conditions.
Locations
There are two study areas: one in the Yukon Flats, Alaska, where the
lightning project is based; and the other in Anchorage at the Pacific
Northwest Experiment Station, where boreal forest management is
studied.
Introduction
Is lightning actually more likely to strike a second time in the same
place? That’s one of the theories USDA Forest Service researchers
are investigating in Alaskan forests. The lightning strike project is
part of a multiphase resource inventory program the Forest Service
is conducting to develop improved methods of identifying forest
areas that are susceptible to, or have already undergone,
catastrophic change.
Also, although Alaska and other northern regions are often
characterized as barren tundra, many are actually covered with vast
expanses of boreal or high-latitude forests. The conifers, aspens,
birch and other species that comprise these forests survive in harsh
yet delicately balanced ecosystems that are highly susceptible to
forest fires, pollution and other more subtle environmental changes.
Lightning project
The lightning project is not nearly as far-fetched as it sounds.
According to Ken Winterberger, a Forest Service remote sensing
specialist and inventory forester, “Historical data indicates that
previously burned forest areas receive more lightning strikes than
non-burned areas. We are trying to determine if there is a cause-and-effect relationship.”
The Alaska Fire Service maintains a network of 15 antennae sensors
that record and locate lightning strikes across the state. When
strikes are recorded in forested areas during non-precipitation storm
events, smoke-spotting planes are dispatched to see whether a blaze
has been ignited and firefighters must be called in. That data is also
being used in the study.
The foresters are testing the theory that burned areas promote the
formation of clouds from which lightning bolts strike. The theory
proposes that the charred areas left behind by forest fires appear
dark and non-reflective. This dark surface is believed to heat the air
above, creating the right conditions for cloud formation. Those built-up clouds could be the source of additional lightning bolts.
“What we are really concerned with in these projects is not the
lightning itself, but the effects of the lightning,” said Winterberger.
“We are seeking new techniques to monitor catastrophic change that
may assist in the management of northern forests.”
Boreal forest
project
The boreal forest research is being conducted by the Forestry
Sciences Laboratory (FSL) in Anchorage, a field office of the Forest
Service’s Pacific Northwest Experiment Station. In recent years, the
FSL has expanded its forest monitoring program to include
applications of remote sensing and advanced spatial data analysis
techniques.
In Alaska, the FSL often teams with forest land owners,
environmental organizations and other groups with an interest in
managing the forest’s resources. The private sector groups usually
assist in the projects as partners, offering funding or manpower.
Because the ecosystem of high-latitude forests is so delicate and
unique, the Forest Service often shares boreal forest information
with other northern nations such as Canada, Norway, Sweden,
Finland and Russia.
“We are trying to move our inventory program into the 21st century
with image processing technology,” said Winterberger. “Most of the
organizations that use our inventory data would like it in digital form
for analysis in geographic information systems.”
Due to widespread application of digital data within the forest
resource management community, and the Forest Service’s plans to
implement an agency-wide GIS, the FSL in Alaska has embarked on
a pilot project to examine the capabilities of in-house image
processing in their boreal forest projects.
“One of our goals is to create image analysis and processing
algorithms tailored specifically to forest inventory applications,”
Winterberger said. “The algorithms developed here can be
standardized, packaged and provided to researchers in other Forest
Service labs or in other countries.”
Customizing
forestry
applications
The Forest Service Laboratory in Alaska has equipped its facility with
Sun Sparcstation workstations for all image processing and GIS
projects. In choosing image processing software for the pilot study,
the FSL determined that the ability to develop customized processing
routines and to handle several types of digital data was a critical
requirement.
“We selected ER Mapper software for image processing because its
dynamic algorithm component allows the user to write custom
programs in realtime and process several types of data as a virtual
dataset,” said Winterberger.
ER Mapper’s algorithm-based design is a new image processing
concept that enables the user to specify a sequence of formulas,
filters or transformations for interactive application to raw datasets.
Instead of being saved to an intermediate data file, the changes to the
data are rendered directly on the display. The algorithm design
reduces disk space and lets the user experiment with many
processing functions in realtime on the workstation screen.
The FSL researchers are using the algorithm capability extensively in
a joint project with the VN Sukachev Forest Institute in Moscow.
Russian and U.S. foresters hope to develop new satellite image
processing techniques capable of detecting catastrophic change in
boreal forests and determining the degree of forest damage.
The project is taking place in the Norilsk area of north-central
Siberia, the site of a major copper and iron smelting operation. Years
of smelting have produced acid rain that has poured on the
downwind forests for kilometers.
Russian scientists have collected extensive ground data, measuring
the extent of tree damage in a 200-kilometer by 50 kilometer area
downwind from the smelting operation. The pollution has spread
downwind, causing varying degrees of tree damage. The most
severe damage is located near the core with less destruction, or
improved tree health, farther from the smelters.
“The damage is well documented, and the degree of damage changes
subtly away from the smelters,” said Winterberger. “We are trying to
see what image enhancement functions can detect the differences in
spectral reflectance representing the variations in tree health.”
The FSL has obtained several types of satellite data for the project
including Landsat, AVHRR (Advanced Very High Resolution
Radiometer) and Russian imagery. Researchers are experimenting
with a variety of enhancement techniques to highlight the slight
differences in spectral reflectance among various levels of forest
damage. Various edge enhancement filters have shown promise.
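As an illustration of the kind of filter involved (the specific kernels used by the FSL are not given in the text), a Laplacian kernel responds strongly wherever reflectance changes abruptly:

```python
import numpy as np

def convolve2d(img, kernel):
    """Minimal 'same'-size 2-D convolution; edge pixels are zero-padded."""
    kh, kw = kernel.shape
    padded = np.pad(img, ((kh // 2,), (kw // 2,)), mode="constant")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

# One common edge-enhancement kernel (Laplacian).
laplacian = np.array([[0, -1,  0],
                      [-1, 4, -1],
                      [0, -1,  0]], dtype=float)

# A step edge between two reflectance levels: the response peaks at
# the edge and is zero over the uniform regions.
img = np.zeros((5, 6))
img[:, 3:] = 10.0
edges = convolve2d(img, laplacian)
```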
“Using the algorithm component, we manipulate the image data
right on the screen, applying filters, making adjustments and then
running it again, all in realtime,” said Winterberger. “It really speeds
the development process.”
To divide the Siberian trees into accurate classes based on their
species and relative health, FSL uses a combination of unsupervised
and supervised image classification techniques to improve the
efficiency and accuracy of the classification process.
First, they choose subsets, or cluster blocks, of the larger image area
that contain as much variation in cover types as they expect to find
in the larger image. Usually the selected cluster blocks are areas
where the cover types are well understood, such as areas previously
imaged with aerial photography.
Next, they perform an ISODATA unsupervised classification on the
cluster blocks which divides the smaller subset images into classes
representing primary cover types or ‘ecoregions’. Using the image
data statistics for each ecoregion as training classes, they then
perform a supervised classification of the entire image area. This
technique uses ISODATA to capture the primary cover type variation
first, which is then used to refine the subsequent supervised
classification of the satellite imagery.
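The two-stage scheme can be sketched as below. Plain k-means stands in for ISODATA (which adds split/merge heuristics on top of the same assign-and-update loop), a minimum-distance rule stands in for whichever supervised classifier was used, and all pixel values are synthetic:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Tiny unsupervised clusterer standing in for ISODATA: repeatedly
    assign each pixel vector to the nearest mean, then recompute means."""
    rng = np.random.default_rng(seed)
    means = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - means) ** 2).sum(-1), axis=1)
        means = np.array([X[labels == i].mean(0) for i in range(k)])
    return means, labels

# 1. Cluster a small, well-understood 'cluster block' of pixel vectors.
rng = np.random.default_rng(1)
block = np.vstack([rng.normal(0, 0.3, (50, 2)),    # one cover type
                   rng.normal(5, 0.3, (50, 2))])   # another cover type
means, _ = kmeans(block, k=2)

# 2. Use the cluster statistics as training classes for a supervised
#    (minimum-distance) classification of the full scene.
scene = np.vstack([rng.normal(0, 0.3, (200, 2)),
                   rng.normal(5, 0.3, (200, 2))])
classes = np.argmin(((scene[:, None] - means) ** 2).sum(-1), axis=1)
```

Capturing the primary cover-type variation in the small block first keeps the expensive supervised step focused on classes that are known to exist in the imagery.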
Fusing,
mosaicing, and
virtual datasets
One of the problems the FSL faces in high-latitude projects is the lack
of adequate data over many areas. The northern latitudes are
frequently covered by clouds which thwart optical imaging sensors
such as Landsat and SPOT. Data collection is further hindered by
poor ground station coverage. Neither the Landsat nor SPOT station
covers all of Alaska. Likewise, over Siberia where the joint U.S.-Russian
project is underway, downlink capability is poor because of
a hole in coverage by TDRSS, the satellites that relay remote sensing
data back to ground stations when the remote sensing satellite is not
within range of the station.
“There are so many gaps in high-latitude imagery that we use
whatever data we can lay our hands on... Landsat, SPOT, AVHRR,
Russian and high-altitude photography,” Winterberger said.
“Frequently, we have to use different datasets in the same
classification project.”
To overcome this situation, FSL is making use of a new image
processing technology called virtual datasets, which was introduced
by Earth Resource Mapping in its 4.0 release of ER Mapper. Virtual
dataset technology is an extension of the dynamic algorithm feature.
It allows the user to access two or more separate datasets on disk
as if they were a single dataset. The virtual dataset can be accessed
for processing in realtime similar to the way algorithm functions are
accessed without creating intermediate disk files.
The Forestry Sciences Laboratory has used the virtual datasets
feature extensively for projects in Alaska and Russia where a
combination of several types of image data was needed to create a
complete image of cloudy or smoke covered areas so that visual
analysis could be performed.
In those projects, the researchers combined all available imagery for
a given area into one virtual dataset containing several different
types of imagery, for example from Landsat and AVHRR, as well as
multiple dates of the same type of imagery. By applying thresholding
to the data or defining problem areas as polygon masks, cloud or
smoke obscured pixels could be masked out and interactively filled
with clear imagery of another type or different date for the same
image area. By using virtual datasets, the researchers can work
efficiently with the multiple datasets without having to merge them
all into a single, very large disk file.
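The threshold-and-fill compositing described above can be sketched as follows. The scenes, threshold and brightness-based cloud test are hypothetical; ER Mapper performs the equivalent across a virtual dataset without creating any intermediate file:

```python
import numpy as np

def fill_obscured(primary, fallback, cloud_threshold):
    """Composite two co-registered images: pixels whose DN exceeds a
    brightness threshold (a crude cloud test) are replaced from the
    fallback image of another type or date."""
    cloudy = primary >= cloud_threshold
    return np.where(cloudy, fallback, primary), cloudy

# Hypothetical co-registered scenes; DN 255 stands for cloud.
landsat = np.array([[10, 255, 12], [255, 14, 15]])
avhrr   = np.array([[11, 20, 13], [21, 14, 16]])
composite, cloudy = fill_obscured(landsat, avhrr, cloud_threshold=200)
```

A polygon mask drawn over a known problem area would play the same role as the `cloudy` array here.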
FSL is using a modification of the virtual dataset technique in the
lightning study. The scientists are creating a virtual dataset of image
data acquired by the Advanced Very High Resolution Radiometer
(AVHRR) aboard a NOAA weather satellite, and the digitized lightning
strike data recorded by the Alaska Fire Service antennae network.
In a pilot study in Yukon Flats in east-central Alaska, Winterberger’s
group has combined the two types of data to create a virtual data
mosaic that is processed interactively. They are analyzing the virtual
dataset to discover correlations between burned areas, revealed by
the AVHRR thermal band, and lightning strike data recorded by the
antennae network. The result is a realtime composite display of the
multiple datasets.
Implications
According to Winterberger, “The Forest Service doesn’t make
interpretations from information like the lightning project, but we
supply the data to the forest land owners or environmental
organizations so that they can develop appropriate management
strategies.”
The boreal lightning strike project and other forest management
programs under way at the Forest Sciences Laboratory are also tied
into a larger study - one of global proportion.
Some of the information being collected points to an increase in
forest fires and a northern movement of the forests. Both could be
related to the global warming scenario espoused by environmental
scientists. Furthermore, according to Winterberger, Forest Service
investigations into the delicate ecosystem balance of boreal forests
may provide insight into this warming phenomenon.
The Forest Service has initiated a major program to equip each of its
150 forest offices in nine regions across the United States with
advanced GIS and image processing capabilities. Many of the image
processing techniques being developed for application in boreal
forests by the FSL in Alaska will be standardized and used in forestry
management programs throughout the National Forest system.
Conclusion
ER Mapper’s flexible and realtime image data processing capabilities
are well suited to integrating information that will help in boreal
forest management. Its advanced technology not only makes it
possible to process multiple datasets as a single dataset, but its
image enhancement capabilities also aid interpretation when data is
scarce owing to poor atmospheric conditions. This was shown by the
lightning strike and boreal forest applications.
SAR imagery in mineral and oil
exploration
Authors
Frank Murphy and Martin Critchley, ERA-MAPTEC Ltd, 5, South
Leinster Street, Dublin 2, Ireland.
Abstract
In this study ERS-1 Synthetic Aperture Radar (SAR) imagery was
used in areas inaccessible to optical sensors, such as those subject
to persistent cloud cover and thick vegetation.
Introduction
ERS-1 SAR (Synthetic Aperture Radar) provides a new and important
remote sensing technique. It is particularly valuable for geological
analysis in highly vegetated, cloud covered regions. The ability of
SAR to penetrate clouds allows many of these regions to be studied
on satellite imagery for the first time.
Many equatorial regions are the focus of current mineral and
petroleum exploration interest. However, these areas are generally
covered with rain forest and often have persistent cloud cover. For
such areas cloud free optical satellite imagery, such as Landsat TM
or SPOT is commonly unavailable. Consequently, the use of remote
sensing techniques in these regions has been principally restricted to
airborne radar. However, ERS-1 SAR now provides a highly cost
effective means of undertaking regional analysis.
Characteristics of
ERS-1 SAR
imagery
The following features of ERS-1 SAR are important in mineral and
petroleum exploration:
•
ERS-1 SAR is very useful for geological analysis because its
ability to penetrate clouds allows areas under persistent cloud
cover to be studied on satellite imagery for the first time.
•
Although SAR does not penetrate foliage to any great extent, the
vegetation canopy frequently reflects variations of the surface
topography, which may provide information on geological
structure and stratigraphy.
•
SAR also emphasizes subtle topographic variations which may be
related to geological structures. Such features may not be
obvious on vertical optical imagery.
•
ERS-1 SAR has an advantage over airborne radar since it has a
higher depression angle and consequently more information is
obtained from the back slopes of mountains. On airborne radar
the back slopes appear black and featureless.
•
The depression angle of SAR highlights preferential erosion along
faults, and consequently they appear more pronounced than on
vertical optical imagery. The identification of faults and fault
intersections is particularly important in mineral exploration.
•
Intrusive bodies generally have a significantly different surface
texture to the surrounding rock and thus tend to be emphasized
on SAR imagery. These textural variations are also frequently
reflected in the vegetation canopy. The identification of intrusives
and their margins is of particular significance in mineral
exploration.
•
SAR images can also be used as base maps for logistical purposes
in areas of persistent cloud cover which lack accurate
topographic maps.
•
ERS-1 SAR can also be used to obtain a base line landuse
classification prior to exploration in remote regions. SAR images
acquired during and after exploration can be used to determine
if exploration has had any significant effect on the environment.
•
Recent advances in SAR interferometry have enabled the
generation of digital terrain models. This allows the construction
of structure contour maps and calculation of fault displacement,
for example, and so has important implications for petroleum
exploration.
We have recently undertaken two studies using ERS-1 SAR, one for
oil exploration in Irian Jaya, the other for mineral exploration in
Panama.
Panama case
study
We were approached by a major mineral exploration company to
undertake a regional structural analysis of western Panama. Much of
the area is densely vegetated. Topographic maps are available;
however, these show significant inaccuracies in the more remote
areas, particularly in the shape and position of rivers and river
junctions.
Satellite image
availability and
acquisition
The study area is covered by 3 Landsat TM scenes; however, no
cloud free imagery was available. There was only one image in
archive which did not have more than 30% cloud cover. This scene
had 10% cloud cover in its northern quadrants, while its southern
quadrants were cloud free. Cloud free SPOT imagery was
unavailable.
It was decided that the best approach was to acquire the central TM
scene which had the least cloud cover and three ERS-1 SAR scenes.
These provided partial overlap with some of the cloudy areas on the
Landsat TM, as well as coverage of the remainder of the study area.
ERS-1 SAR interpretation
The ERS-1 SAR provided a great deal of geological information which
could be combined with the Landsat TM interpretation to produce a
coherent structural synthesis for the area and enable the
identification of exploration targets.
Correlation with Landsat TM
In the areas of overlap a useful correlation between the two forms of
imagery could be made. In areas of sparse vegetation cover the
geological structures identified were essentially similar, although
there is a tendency for structures subperpendicular to the radar
wave front to be less obvious on the SAR. However, a major
advantage of the SAR over Landsat TM was found in the
interpretation of the jungle covered areas. On the TM a number of
widely spaced faults were identified, while considerably more
structural detail could be distinguished in the same areas on the
SAR.
Identification of fault zones
Fault zones could be identified on the SAR imagery. They are
generally expressed as linear topographic lows, due to preferential
erosion along the fault trace. Faults can also be identified by linear
changes in surface texture, or by sudden linear elevation changes or
breaks in slope. Some of the faults correspond to structures which
had been previously mapped; however, a large number of previously
unknown structures were also identified.
The depression angle of the SAR tends to emphasize the more subtle
topographic features which often correspond to smaller scale faults.
The identification of these enabled fault zones, composed of clusters
of subparallel fault segments, to be mapped out.
It was possible to determine relative displacements on faults from
their cross-cutting relationships and hence identify dilational zones
along the fault trace. Such areas can be important for localizing
mineralization. A large number of fault intersection zones could also
be identified. Some of these may control mineralization and indeed
many of the known mineral prospects in the region occur on, or close
to, such intersections.
Identification of intrusive bodies
The main mineralizing event in Panama is associated with a phase of
igneous activity. Consequently, many of the mineral deposits are
closely associated with igneous intrusions. By using a combination of
Landsat TM and SAR it was possible to identify a large number of
intrusive bodies which in the right structural setting may be
associated with mineralization.
In areas of good exposure the wide spectral range of Landsat TM is
useful for discriminating intrusives from the surrounding rocks.
However, since much of the study area is jungle covered the use of
spectral properties was of limited value. By examining textural
characteristics on the SAR it was possible to identify a large number
of areas of uniform texture. Intrusives can still be distinguished even where
the area is densely vegetated by identifying subcircular areas with a
more uniform vegetation canopy, a reflection of the surface
topography.
Subcircular structures with topographically positive rims can also be
distinguished and are interpreted to be extinct volcanic complexes.
Layover effect
A problem with all radar imagery is the layover effect which can
become quite severe in rugged terrain. In Panama the terrain rises
quite rapidly from the coastal lowlands to the Cordillera Central. In
the mountainous west, the layover effect on the SAR is quite
pronounced, but, because of the relatively high depression angle of
the ERS-1 SAR compared to conventional airborne radar, it is
possible to identify features on the backslopes.
Even where the layover is greatest on, or close to, the crest of the
Cordillera Central, it is possible to trace fault zones across the
mountain range.
Synthesis and exploration target generation
The large amount of data on fault zones and intrusives obtained from
the interpretation of the ERS-1 SAR, together with data from the TM,
allowed a coherent structural synthesis to be made. A chronology of
deformation events was established on the basis of cross-cutting
relationships between faults. The faults can be classified in terms of
pre-mineralization and syn-mineralization structures. One of the
most important results was the identification of a set of faults which
were actively forming during igneous activity and which appear to
localize many of the igneous intrusions. Fault zones in this
orientation are also frequently associated with known mineralization.
The intersection zones between these faults and intrusive bodies
represent areas of potential mineralization. The identification of such
areas on the SAR allows a number of exploration targets to be
generated.
On the basis of this interpretation the exploration company now has
a geological framework on which to base their exploration program,
as well as a number of targets to examine. In addition, the SAR
images can be used as logistical base maps since they give a more
realistic and accurate view of the terrain than existing topographic
maps.
Irian Jaya case study
Irian Jaya has become the focus for petroleum exploration in recent
years. The region is extremely remote, no detailed topographical
maps are available and there is essentially no inland infrastructure.
Much of the area is covered in dense rain forest. This poses a major
logistic problem in petroleum exploration. The use of satellite
imagery is an obvious technique to overcome this difficulty;
however, the region is also associated with extensive and persistent
cloud cover. Consequently, there is no cloud free optical satellite
imagery available.
ERS-1 SAR provided the first opportunity for this area to be studied
in detail using satellite imagery.
Satellite image availability and acquisition
We were requested by a petroleum exploration company to provide
logistical base maps, geological and landuse interpretations from
satellite imagery for an exploration concession in one of the more
remote parts of Irian Jaya. ERS-1 SAR was the only form of imagery
available which fulfilled the requirements. Six ERS-1 SAR scenes
were required. A mosaic of these was produced. This gave the
exploration company their first detailed overview of the terrain and
provided them with an accurate base map which could be used for
logistical purposes.
ERS-1 SAR interpretation
The study, which is ongoing, involves a number of phases:
•
Phase 1 involved the provision of an SAR mosaic with licence
block boundaries, existing well locations and a UTM grid to act as
a base map.
•
Phase 2 required a landuse/terrain classification using SAR within
the licence block.
•
Phase 3 involves a geological interpretation of the SAR mosaic,
and has yet to be completed.
Landuse/terrain classification
A landuse/terrain classification map was produced. This was partly
to gauge agricultural activity and deforestation, as well as to provide
an environmental base line before the onset of active exploration. By
applying various filtering techniques the imagery was processed to
highlight different landuse and terrain categories, principally on the
basis of textural features. It was not possible to directly define the
nature of vegetation in each category, although plantations and
areas of deforestation could be identified. Ground control is needed
before the precise nature of each category can be established.
The terrain classification is important for logistical purposes, and
provided an important aid in planning seismic surveys and drilling rig
accessibility.
Geological interpretation
The SAR imagery enabled a geological analysis of this highly
inaccessible region. By using textural characteristics it is possible to
map out stratigraphic packages. In particular, areas of karstic
limestone have a characteristic highly uneven, low lying surface.
Interbedded sandstone/shale sequences can be distinguished by
narrow resistant ridges alternating with more eroded bands. Using
such criteria tentative stratigraphic correlations can be made,
allowing a basic geological map to be produced which can be refined
by the incorporation of ground data.
Major faults can be readily identified on the SAR. Some of these
appear to be the bounding structures to downthrown blocks and their
delineation is important in providing a preliminary assessment of the
petroleum prospectivity of the area.
A fold and thrust belt was also identified. Gentle topographic domes
seen on the SAR imagery are the surface expression of anticlines
within the fold and thrust belt. The identification of these features is
important since they are the most common form of petroleum trap.
Even where the folds are very broad open structures, the side-looking nature of SAR emphasizes their topographic expression so
they are more readily distinguished than would be possible from
vertical optical imagery.
Planning of seismic surveys should take into account surface
structures identified from image interpretation. Faults identified on
the seismic profiles can be correlated with the fault surface traces
identified on the imagery. This imposes constraints on the
correlation of structures between adjacent seismic lines. Although
this study is not yet complete the ERS-1 SAR has enabled analysis of
landuse, nature of terrain, stratigraphy and structural geology. This
would not have been possible using optical imagery. The exploration
company will thus have a good logistical base map and a preliminary
geological interpretation which can be refined by field verification.
Concluding points
•
ERS-1 SAR has proved very useful for geological analysis in areas
of persistent cloud cover. In many equatorial regions cloud free
imagery is unavailable, and the ability of SAR to penetrate clouds
allows these regions to be studied on satellite imagery for the
first time.
•
Regional geological studies can be undertaken using a mosaic of
ERS-1 SAR images at a relatively low cost.
•
The use of ERS-1 SAR imagery has been a valuable remote
sensing technique for mineral exploration in Panama.
•
It is particularly useful for geological analysis in densely
vegetated areas.
•
The Panama study has shown that fault zones tend to be
emphasized by the SAR, even in areas of dense vegetation.
•
Igneous intrusions can be recognized by textural contrasts on the
SAR, which are often also reflected in the vegetation canopy.
Such features in highly vegetated terrains tend to be less obvious
on optical imagery.
•
ERS-1 SAR is also of great benefit to petroleum exploration in
equatorial regions, as seen in the Irian Jaya study.
•
The imagery can provide a logistical base map in remote, poorly
mapped areas which are characterized by persistent cloud cover.
This can be used for planning seismic surveys or determining
drilling rig accessibility.
•
The identification of folds and fault blocks from the SAR can be
used to provide a preliminary assessment of the petroleum
prospectivity.
•
The results of the Irian Jaya study indicate that there is great
potential for the use of SAR in petroleum exploration in areas of
persistent cloud cover and dense vegetation.
Mapping Shelf Circulation off Western Australia
Authors
Alan Pearce, CSIRO Division of Oceanography, Marine Laboratories,
PO Box 20, North Beach, Western Australia 6020 and Heather
Aquilina, Earth Resource Mapping, Level 2, 87 Colin Street, West
Perth, Western Australia 6005.
Synopsis
Thermal infrared satellite imagery is widely used to chart ocean
currents by mapping sea-surface temperatures (SSTs) associated
with different water bodies. The technique has been of particular
value in monitoring the structure and behaviour of little-known
ocean currents such as the Leeuwin Current off Western Australia.
Introduction
The Leeuwin Current is a stream of warm tropical water flowing
southwards down the Western Australian coast (Cresswell and
Golding 1980), completely different from the corresponding currents
off the western coasts of southern Africa and South America, where
cool currents flow northwards. Associated with the Benguela Current
(South Africa) and the Humboldt Current (Peru/Chile) are seasonal
‘upwellings’ of nutrient-rich subsurface water onto the continental
shelf, resulting in highly productive regions which support some of
the world’s largest fisheries (Pearce 1991).
While its source is still unknown, it appears that the Leeuwin Current
originates in equatorial regions north of Exmouth, but is also fed
continually by surface waters of the south-eastern Indian Ocean
flowing towards Australia (Godfrey and Ridgway 1985). On reaching
Cape Leeuwin, it swings left and heads eastwards into (and
sometimes across?) the Great Australian Bight. It flows most
strongly during the autumn and winter months, and is responsible
for transporting the larvae of many tropical marine organisms into
southern waters (Maxwell and Cresswell 1981, Hutchins and Pearce
1994). It also seems to play an important role in recruitment to some
Western Australian commercial fisheries such as the rock lobster,
scallops and prawns in Shark Bay, and salmon and pilchards along
the south coast (Lenanton et al. 1991, Pearce and Phillips 1994).
Satellite imagery has shown that the Leeuwin Current is a complex
system of alongshore jets with periodic offshore meanders or waves
which can carry the warm water over 200 km offshore (Legeckis and
Cresswell 1981, Prata et al. 1986, Pearce and Griffiths 1991).
Localised current speeds in these large meander-like features can
exceed 1 m/s (2 knots), but average speeds are generally about half
this. Surface temperature changes across the Leeuwin Current
boundary off the west Australian coast are of order 1° to 2°C, but
along the south coast (where the tropical waters meet Southern
Ocean water) the SST differential can exceed 4°C.
Data
NOAA satellite imagery has been received and archived in Perth since
late 1981 (Pearce 1989), giving us the longest time-series of NOAA
images in Australia. Sea surface temperatures derived from the
Advanced Very High Resolution Radiometer (AVHRR-2) on the NOAA
satellites are used in this case study to map the circulation of the
Leeuwin Current system, using a part scene of calibrated data
supplied by Dr. I. Tapley (CSIRO Division of Exploration and Mining),
prepared from source data courtesy of Curtin University. The orbit is
NOAA7/17111 on 17 October 1984 covering an area off Western
Australia from Shark Bay to Cape Leeuwin.
Each NOAA satellite passes over any particular part of the earth or
ocean twice a day, and there are always two operational satellites,
so any part of Western Australia is covered four times per day.
However, thermal infrared radiation does not penetrate cloud, and
so many passes cannot be used for ocean mapping—a severe
limitation in southern waters during the winter months.
The AVHRR samples the earth’s surface by scanning across the so-called swath of 2048 pixels of nominal (sub-satellite) size of 1.1 km.
For each pixel, the radiance is sampled in 5 wavebands: one in the
visible range, one in the near-infrared, one in the mid-infrared, and
two in the thermal infrared region of the electromagnetic spectrum. The
visible and near-infrared bands are used for accurate land location
and for cloud-screening, while the two thermal bands are used to
derive the SST.
Algorithms
Sea-surface temperatures are derived from the raw radiance data
sampled in the two thermal bands 4 and 5 by an essentially two-stage process:
1. The radiances for each pixel are converted to radiance (or
brightness) temperatures by inverting the Planck function.
This step incorporates calibration data which is transmitted by the
satellite and comprises the raw counts and temperature of an
internal warm target within the satellite and the raw counts from
space as a cold target. Because the radiant energy emitted by the
earth’s (or ocean) surface is partly absorbed by water vapour in the
atmosphere, these brightness temperatures are generally lower than
the true surface temperature and require a water vapour correction.
2. The correction is applied by using one of a number of algorithms.
These are generally of the form:
SST = a*T4 + b*T5 + c
where T4, T5 are the brightness temperatures in AVHRR bands 4 and
5, and a, b and c are constants (which may include a zenith angle
correction).
This may also be written:
SST = T4 + A*(T4-T5) + C,
showing that the brightness temperature T4 is corrected by a
difference factor proportional to (T4-T5) plus a constant.
A field study which compared satellite-derived SSTs (using a variety
of algorithms) with in situ measurements from a boat off Perth
indicated that an algorithm by McMillin and Crosby (1984) matched
the surface data most satisfactorily; the bias was -0.14°C and the
RMS difference about 0.55°C (Pearce et al. 1989). The McMillin and
Crosby algorithm, which is now being routinely used for determining
the SST in the Leeuwin Current region, has the form:
SST = 3.702 * T4 - 2.702 * T5 - 0.582
A second algorithm which gave similar results was that by Llewellyn-Jones et al. (1984):
SST = 3.908 * T4 - 2.852 * T5 - 2.058
(bias 0.19°C, RMS difference 0.62°C).
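Both published algorithms can be applied directly per pixel. A minimal sketch (Python is used for illustration here; ER Mapper itself applies these via its formula window), with coefficients taken straight from the expressions above:

```python
# The two split-window SST algorithms quoted above. T4 and T5 are the
# brightness temperatures in AVHRR bands 4 and 5.

def sst_mcmillin_crosby(t4, t5):
    # McMillin and Crosby (1984)
    return 3.702 * t4 - 2.702 * t5 - 0.582

def sst_llewellyn_jones(t4, t5):
    # Llewellyn-Jones et al. (1984)
    return 3.908 * t4 - 2.852 * t5 - 2.058
```

For T4 = 290.00 and T5 = 289.00, the McMillin and Crosby expression gives 292.12: the one-degree band difference contributes about 2.7 degrees of water-vapour correction, exactly as the SST = T4 + A*(T4-T5) + C form suggests.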
While dense clouds are generally quite obvious in satellite pictures,
thin, scattered or sub-pixel cloud (i.e. clouds smaller than a pixel)
can be very difficult to detect and yet can have a major effect on the
SST. This is particularly important when digital data is extracted
from a sequence of images to produce a time-series of SSTs for a
specific area or when spatial or temporal averaging is carried out. A
variety of cloud-screening techniques have been developed, all of
which have some limitations; possibly the simplest involves flagging
as cloudy all pixels whose radiance exceeds pre-set (or sometimes
dynamically-derived) threshold limits in all five AVHRR radiometric
bands or combinations of bands (Saunders and Kriebel 1988).
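The simple threshold test described above can be sketched as follows; the band name and threshold value are illustrative placeholders, not operational limits:

```python
# Flag a pixel as cloudy if its value exceeds the pre-set threshold in
# any of the tested bands -- a simplified version of the multi-band
# threshold screening of Saunders and Kriebel (1988).

def is_cloudy(pixel, thresholds):
    # pixel and thresholds are dicts keyed by band name
    return any(pixel[band] > limit for band, limit in thresholds.items())

# Illustrative threshold only: unusually bright visible-band pixels
# over the ocean suggest cloud.
limits = {"visible_albedo": 0.08}
```

A production screen would combine several such tests (visible brightness, thermal coldness, spatial uniformity) rather than a single band.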
Data processing
The AVHRR_Leeuwin dataset used here was supplied calibrated. The
calibration would have been accomplished by applying a non-linear
correction to Band 4 (10.8 um) and Band 5 (12.0 um) (Prata 1985).
The data was viewed in a number of ways to display and emphasise
the Leeuwin Current.
Gaussian transform
The Leeuwin_Current_Gaussian algorithm displays a Pseudocolor
image of the calibrated data. In this algorithm the input values have
been inverted using the ER Mapper Formula:
-INPUT1
to display high values as red and low values as blue/green.
A Gaussian Equalize post-formula transform was applied to enhance
the ocean off the Western Australian coast, showing the warmer
water of the Leeuwin Current.
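A Gaussian Equalize transform ranks the pixel values and maps the ranks through an inverse normal CDF, so the output histogram is approximately Gaussian. A sketch of the idea (not ER Mapper's actual implementation; the target mean and standard deviation are illustrative):

```python
from statistics import NormalDist

def gaussian_equalize(values, mean=0.5, sigma=0.15):
    # Rank each value, convert the rank to a quantile in (0, 1), then
    # map the quantile through the inverse normal CDF.
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    nd = NormalDist(mean, sigma)
    out = [0.0] * n
    for rank, i in enumerate(order):
        out[i] = nd.inv_cdf((rank + 0.5) / n)
    return out
```

The median pixel maps to the target mean, while extreme values are pushed into the tails of the chosen Gaussian, which is what gives the enhanced contrast in the mid-range ocean temperatures.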
Conversion of digital counts to brightness temperature
Although ER Mapper can easily display the oceanic currents off
Western Australia, this is not sufficient for most climatological and
biological/fishery applications where relative temperature gradients
and absolute temperature are required.
Given two calibration points a linear calibration can be obtained
which converts digital counts into radiance values. The conversion of
the radiance values into brightness temperatures is done by
inverting the Planck function.
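The two-point calibration amounts to fitting a straight line through the cold (deep space) and internal warm-target points; the sample calibration points below are hypothetical:

```python
def linear_calibration(p_cold, p_warm):
    # Each point is a (raw_count, radiance) pair; derive the gain and
    # offset of the straight line through the two calibration points.
    (c0, r0), (c1, r1) = p_cold, p_warm
    gain = (r1 - r0) / (c1 - c0)
    offset = r0 - gain * c0
    return lambda count: gain * count + offset

# Hypothetical calibration points: deep space and the internal warm target.
counts_to_radiance = linear_calibration((40, 0.0), (500, 95.0))
```

With these points, a count midway between the calibration counts returns the radiance midway between the calibration radiances.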
The radiance corresponding to channel number i for a given
brightness temperature is:
Ri = ∫ B[v,T] Fi(v) dv
where:
Ri is the radiance measured by channel i
B is the Planck function
v is the wavenumber (vi is the central wavenumber for channel i)
T is the brightness temperature
Fi(v) is the spectral response function for channel i (a discrete
spectral response plot is available from NOAA)
Because the Planck function containing the brightness temperature
is embedded within the integral, a transform and lookup table is
normally created for a given wavenumber over a range of brightness
temperatures. Brightness temperature values are derived by
interpolating between points in the look-up tables.
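The look-up-table inversion can be sketched as follows. As a simplification, the Planck function is evaluated at a single central wavenumber (here 1e4/10.8 cm-1 for a 10.8 um band) rather than integrated against the full spectral response F(v); the radiation constants are the standard c1 and c2 values:

```python
import math

C1 = 1.1910427e-5   # first radiation constant, mW/(m^2 sr cm^-4)
C2 = 1.4387752      # second radiation constant, cm K

def planck_radiance(wavenumber, temp_k):
    # Planck function B(v, T) in wavenumber form
    return C1 * wavenumber ** 3 / (math.exp(C2 * wavenumber / temp_k) - 1.0)

def make_inverse_lut(wavenumber, t_min=170.0, t_max=330.0, step=0.1):
    # Tabulate radiance over a temperature range, then invert by
    # interpolating linearly between bracketing table entries.
    n = int((t_max - t_min) / step) + 1
    temps = [t_min + i * step for i in range(n)]
    rads = [planck_radiance(wavenumber, t) for t in temps]

    def brightness_temp(radiance):
        for i in range(1, len(rads)):
            if rads[i] >= radiance:
                frac = (radiance - rads[i - 1]) / (rads[i] - rads[i - 1])
                return temps[i - 1] + frac * step
        return temps[-1]

    return brightness_temp

wn_band4 = 1.0e4 / 10.8          # central wavenumber for a 10.8 um band
bt_band4 = make_inverse_lut(wn_band4)
```

Because radiance increases monotonically with temperature, the interpolation recovers the brightness temperature to well within the 0.1 K table step.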
The pre-formula transforms for Band 4 (10.8 um) and Band 5 (12.0 um)
were derived from a ‘counts-to-temperature’ table for NOAA-7
provided by NOAA/NESS. The provided tables show a radiance and
temperature value for Band 4 and Band 5 every second digital count.
A series of manual calculations were carried out to scale the digital
counts and brightness temperature values between 0 and 1 for input
into the ER Mapper algorithm file.
Digital   Scaled      Channel 4        Channel 4       Channel 5        Channel 5
Count     Digital     Temperature K    Temperature     Temperature K    Temperature
          Count                        Scaled                           Scaled
0                     323.22                           326.64
2         0.002096    323.06           0.998916        326.46           0.998866
314       0.329140    296.03           0.815711        296.47           0.809905
328       0.343816    294.67           0.806493        294.96           0.800391
346       0.362683    292.89           0.794429        293.01           0.788104
362       0.379455    291.28           0.783516        291.24           0.776952
378       0.396226    289.65           0.772468        289.45           0.765673
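The manual scaling step can be reproduced directly: counts are divided by the maximum digital count (954), and temperatures are scaled linearly over each band's output range (175.68 to 323.22 K for band 4, 167.93 to 326.64 K for band 5). A quick sketch, checked against the table rows above:

```python
def scale_count(count, max_count=954):
    # Scale a raw digital count into the 0-1 range
    return count / max_count

def scale_temp(temp_k, t_min, t_max):
    # Scale a brightness temperature into the 0-1 range for its band
    return (temp_k - t_min) / (t_max - t_min)

# Reproducing the table row for digital count 2:
scale_count(2)                        # 0.002096
scale_temp(323.06, 175.68, 323.22)    # 0.998916 (band 4)
scale_temp(326.46, 167.93, 326.64)    # 0.998866 (band 5)
```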
This user-defined look-up table was then entered into the algorithm
file using a text editor. Part of the algorithm file defining the
transform is shown below.
StreamInput Begin
StreamInputID = 1
Transform Begin
Type = Linear
MinimumInputValue = 0
MaximumInputValue = 954
MinimumOutputValue = 175.68000000000001
MaximumOutputValue = 323.22000000000003
LinearDef Begin
NumberOfPoints = 8
Points = {
0.000000 0.000000
0.002096 0.998916
0.329140 0.815711
0.343816 0.806493
0.370748 0.782313
0.396226 0.772468
0.997904 0.010099
1.000000 1.000000
}
LinearDef End
DoInRealtime = No
Transform End
StreamInput End
StreamInput Begin
StreamInputID = 2
Transform Begin
Type = Linear
MinimumInputValue = 0
MaximumInputValue = 954
MinimumOutputValue = 167.93000000000001
MaximumOutputValue = 326.63999999999999
LinearDef Begin
NumberOfPoints = 9
Points = {
0.000000 0.000000
0.002096 0.998866
0.329140 0.809905
0.343816 0.800391
0.362683 0.788104
0.379455 0.776952
0.396226 0.765673
0.997904 0.009892
1.000000 1.000000
}
LinearDef End
DoInRealtime = No
Transform End
StreamInput End
From these x and y values ER Mapper automatically derived a
transform line, to convert the digital counts to brightness
temperatures.
Once brightness temperatures for bands 3, 4, and 5 are derived,
various algebraic combinations of these bands in fixed algorithms
are used to derive SST.
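The derived transform is just piecewise-linear interpolation between the control points: a scaled count is interpolated to a scaled temperature, which is then rescaled to Kelvin using the transform's output range. A sketch using the band 4 points from the algorithm file above:

```python
from bisect import bisect_left

# Control points (scaled count, scaled temperature) from the band 4
# StreamInput transform in the algorithm file.
BAND4_POINTS = [(0.000000, 0.000000), (0.002096, 0.998916),
                (0.329140, 0.815711), (0.343816, 0.806493),
                (0.370748, 0.782313), (0.396226, 0.772468),
                (0.997904, 0.010099), (1.000000, 1.000000)]

def apply_transform(x, points, out_min, out_max):
    # Piecewise-linear interpolation of scaled input x through the
    # control points, then rescaling to the output (Kelvin) range.
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    i = bisect_left(xs, x)
    if i == 0:
        y = ys[0]
    elif i == len(xs):
        y = ys[-1]
    else:
        frac = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
        y = ys[i - 1] + frac * (ys[i] - ys[i - 1])
    return out_min + y * (out_max - out_min)
```

For the scaled count 0.329140 (digital count 314) this returns approximately 296.03 K, matching the NOAA-7 counts-to-temperature table.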
Sea-surface temperatures using the McMillin and Crosby (1984) formula
In this case the sea-surface temperature was derived using the
algebraic expression proposed by McMillin and Crosby (1984):
SST=-0.582 + (3.702 * B4) + (-2.702 * B5)
This is typed into the ER Mapper formula window as:
-0.582+(3.702 * INPUT1)+(-2.702 * INPUT2)
where INPUT1 is band 4 and INPUT2 is band 5.
The image was displayed using a ‘rainbow’ color look-up table and
enhanced using a post-formula linear transform. The transformation
was clipped by typing in a minimum input limit of 273K.
Sea-surface temperatures using the Llewellyn-Jones et al. (1984) formula
A second algorithm determined the sea-surface temperatures using
the algebraic expression proposed by Llewellyn-Jones et al. (1984):
SST=-2.058 + (3.908 * B4) + (-2.852 * B5)
This is typed into the ER Mapper formula window as:
-2.058 + (3.908 * INPUT1) + (-2.852 * INPUT2)
where INPUT1 is band 4 and INPUT2 is band 5.
The displayed image was again enhanced using a post-formula linear
transform and ‘rainbow’ color look-up table.
3 x 3 pixel smoothing kernel
To reduce the noise generated by the temperature-correction
algorithm, a 3 x 3 smoothing filter was applied.
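A 3 x 3 smoothing (mean) kernel of this kind can be sketched as follows; edge pixels simply average whichever neighbours exist:

```python
def smooth3x3(grid):
    # Replace each cell with the mean of its 3 x 3 neighbourhood,
    # truncated at the image edges.
    rows, cols = len(grid), len(grid[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            vals = [grid[rr][cc]
                    for rr in range(max(0, r - 1), min(rows, r + 2))
                    for cc in range(max(0, c - 1), min(cols, c + 2))]
            out[r][c] = sum(vals) / len(vals)
    return out
```

An isolated noisy pixel of 10 in a field of 1s is pulled down to 2.0, which is why the filter suppresses the pixel-to-pixel noise amplified by the large split-window coefficients.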
References
Cresswell,G.R. and T.J.Golding (1980). ‘Observations of a south-flowing current in the southeastern Indian Ocean,’ Deep-Sea
Research 27: 449-466.
Godfrey,J.S. and K.R.Ridgway (1985). ‘The large-scale environment
of the poleward-flowing Leeuwin Current, Western Australia:
longshore steric height gradients, wind stresses and geostrophic
flow,’ Journal of Physical Oceanography 15:481-495.
Hutchins,J.B. and A.F.Pearce (1994). ‘Influence of the Leeuwin
Current on recruitment of tropical reef fishes at Rottnest Island,
Western Australia,’ Bulletin of Marine Science 54, 245-255.
Legeckis,R. and G.R.Cresswell (1981). ‘Satellite observations of sea-surface temperature fronts off the coast of western and southern
Australia,’ Deep-Sea Research 28: 297-306.
Lenanton,R.C., L.Joll, J.Penn and K.Jones (1991). ‘The influence of
the Leeuwin Current on coastal fisheries of Western Australia,’
Proceedings of the Leeuwin Current Symposium, Perth, 16 March
1991; Journal of the Royal Society of Western Australia 74, 101-114.
Llewellyn-Jones,D.T., P.J.Minnett, R.W.Saunders and A.M.Zavody
(1984). ‘Satellite multichannel infrared measurements of sea-surface temperature of the N.E. Atlantic Ocean using AVHRR/2,’
Quarterly Journal of the Royal Meteorological Society 110, 613-631.
Maxwell,J.G.H. and G.R.Cresswell (1981). ‘Dispersal of tropical
marine fauna to the Great Australian Bight by the Leeuwin Current,’
Australian Journal of Marine and Freshwater Research 32, 493-500.
McMillin,L.M. and D.S.Crosby (1984). ‘Theory and validation of the
multiple window sea surface temperature technique,’ Journal of
Geophysical Research 89, 3655-3661.
Pearce,A.F. (1989) ‘A catalogue of NOAA/AVHRR satellite imagery
received in Perth, Western Australia, 1981-87’. CSIRO Marine
Laboratories Report 203, 36p.
Pearce,A.F., A.J.Prata & C.R.Manning (1989). ‘Comparison of
NOAA/AVHRR-2 sea-surface temperatures with surface
measurements in coastal waters.’ International Journal of Remote
Sensing 10(1), 37-52.
Pearce,A.F. (1991). ‘Eastern boundary currents of the southern
hemisphere. Proceedings of the Leeuwin Current Symposium, Perth,
16 March 1991,’Journal of the Royal Society of Western Australia 74,
35-45.
Pearce,A.F. and R.W.Griffiths (1991). ‘The mesoscale structure of
the Leeuwin Current: a comparison of laboratory model and satellite
images,’ Journal of Geophysical Research 96(C9), 16739-16757.
Pearce,A.F. and B.F.Phillips (1994). ‘Oceanic processes, puerulus
settlement and recruitment of the western rock lobster Panulirus
cygnus,’ In Sammarco,P.W. & M.L.Heron (editors): The bio-physics
of marine larval dispersal. American Geophysical Union, Coastal and
Estuarine Studies 45, Washington DC., 279-303.
Prata, A.J., Pearce, A.F., Wells, J.B., and Carrier, J.M., 1986,
‘Satellite sea surface temperature measurements of the Leeuwin
Current,’ Proceedings of the First Australian AVHRR Conference held
in Perth, Western Australia, on 22-24 October 1986, Perth: CSIRO
Division of Groundwater Research, pp 237-247.
Saunders,R.W. and K.T.Kriebel (1988). ‘An improved method for
detecting clear sky and cloudy radiances from AVHRR data,’
International Journal of Remote Sensing 9, 123-150.
Oil and gas
Introduction
This section discusses the following issues related to using ER
Mapper for Oil and Gas applications.
•
General ER Mapper setup issues
•
Getting seismic horizons into ER Mapper
•
Getting geological and cultural vector data into ER Mapper
•
Updating geological and cultural vector data within ER Mapper
•
Moving updated vector data back to other products
•
Generating hardcopy prints
The information in this section is not confined to ER Mapper but also
covers issues with related software that is commonly used in
conjunction with ER Mapper in the Oil and Gas industry.
ER Mapper for Unix required
In the oil and gas industry most users of ER Mapper use the Unix
version rather than the PC version. This is because it is used with
companion software products which are only available on the Unix
platform. Thus most of the imports and dynamic links used for
exchanging data are only available with the Unix version of
ER Mapper. In particular, ER Mapper makes use of the Geoquest
Geonet data exchange utility, a major tool for data sharing data with
Schlumberger and Geoquest products. Since these products are
strictly available on the Unix platform, the imports can not be
included for the PC platform.
All ER Mapper versions after 5.5A are only for PC platforms
running Microsoft Windows.
Schlumberger GeoQuest and Landmark imports
Here is a list of the various import utilities and their location on the
Utilities menu for Schlumberger/Geoquest and Landmark products.
For more information on these imports please see the relevant
chapters in this section.
Schlumberger/GeoQuest
Mapview: Import Schlumberger Formats/GeoQuest (IESX) MapView
GeoQuest (IESX) ASCII Grid Dump: Import Schlumberger Formats/GeoQuest (IESX) ASCII Grid Dump
Geoshare half link
Landmark Raster
SeisWorks Horizons: Import Landmark Formats/SeisWorks 3D Horizons
SeisWorks Seismic (3DVI): Import Landmark Formats/SeisWorks 3D Seismic
Zycor Binary Grid from MFD: Import Landmark Formats/Zycor Grid from MFD
Zycor ASCII Grid: Import Landmark Formats/Zycor ASCII Grid
Landmark Vector
OpenWorks 3.1 Wells: Import Landmark Formats/OpenWorks 3.1 Wells
SeisWorks Fault Polygons: Import Landmark Formats/SeisWorks Fault Polygons
SeisWorks Manual Contours: Import Landmark Formats/SeisWorks Manual Contours
Zmap ZGF: Import Landmark Formats/Zycor Graphics File
Landmark Color Lookup Table
SeisWorks CLM to LUT: Import Landmark Formats/SeisWorks CLM to ER Mapper LUT
Importing seismic data
ER Mapper is often used to enhance seismic data to highlight
structure and to integrate seismic horizon data with other types of
data, such as bathymetry, gravity and magnetics data. ER Mapper
processes two-way-time horizons, as well as vertical slices through
3D seismic cubes. This chapter deals with ingesting seismic raster
grid data into ER Mapper for enhancement and interpretation.
Seismic raster grid formats supported
Seismic data is generally stored in seismic workstation software
formats such as SeisWorks or IESX, and loaded into ER Mapper as
needed.
ER Mapper has a number of data exchange utilities designed to
import seismic data from seismic workstation software. Many of
these import routines retrieve data directly from the seismic product.
However, for them to run successfully they need to have been set up
first.
When importing data from Seismic workstations, there are a number
of formats or exchange utilities that can be used. The main ones are
listed below. Only raster horizon formats are listed here: vector
formats are discussed in a following section.
Many data exchange utilities and printing options require some
setup and configuration before they will run successfully.
The imports are available as options on the Utilities menu. They can
also be run as stand-alone programs in an ER Mapper terminal
window, and thus included in a batch script. To open an ER
Mapper terminal window select the Utilities menu User
Menu/Open Terminal Window option. For information on
available switches for a particular import, type “importname -h”.
Raster file import utilities
The following table lists ways to import seismic data using import
programs from common file formats. Generally, these require less
setup and configuration, but are not as powerful as the direct
exchange utilities listed in the next section, which directly read
seismic data from external software products. Only the specific
formats commonly used for seismic data are listed here; ER Mapper
also supports data imported from over 100 other formats.
Format options in ER Mapper, with descriptions:
SEG-Y NON IBM Disk File / SEG-Y IBM Disk File: One of the simplest
and most common data formats for exchanging seismic data. Most
seismic products can create SEG-Y data files. However, SEG-Y is not
the preferred method for importing data, because you must create an
intermediate SEG-Y file to transfer from the seismic software to
ER Mapper, and SEG-Y files tend to be very large. Also, because the
format is so simple, some of the information may be lost.
Zycor ASCII Grid: A simple ASCII format that Zycor gridding software
outputs.
GeoQuest IESX ASCII grid dump: An ASCII format that can be output
by GeoQuest IESX.
GeoQuest (IESX) MapView ASCII grid: A native IESX MapView format.
It can be directly imported into ER Mapper.
Charisma 2D XYZ Grid: A 2D ASCII grid format that Charisma can
output.
Charisma 3D Inline Xline XYZ ASCII grid: A 3D ASCII grid format that
Charisma can output.
SEG-Y
SEG-Y is a simple and common format used to exchange
seismic data.
To import a SEG-Y tape
1. Select the SEG-Y IBM Disk File option from the Utilities/Import
Landmark formats menu.
2. Specify the input device, for example:
/dev/nrst0
3. Specify the output ER Mapper dataset to create.
An IEEE 4 byte floating point ER Mapper Raster dataset will be
created from the SEG-Y input tape.
Command line
importsegy inputpath/file outputpath/file.ers
Configuration and setup
There are no special configuration requirements; however, the
following issues should be noted when importing SEG-Y data:
•
The only data format tested has been type 1 (4 byte IBM floating
point).
•
Type 2 (4 byte fixed point) has been implemented but not tested.
•
Type 3 (2 byte fixed point) and type 4 (4 byte fixed point with
gain code) have not been implemented—you cannot import
SEG-Y data in these formats.
•
As SEG-Y contains no map projection/datum details, the default
of RAW/RAW projection/datum is set, unless you override this
during the import by manually specifying a map projection and
datum. Note, however, that you will still need to modify the
imported data file to specify an origin point and cell size, after
importing, in order to fully define projection details.
•
Because the headers don’t contain any information on the
number of traces on a tape, it is not possible to give a %
complete status during this import.
Input data format
A SEG-Y tape contains a single file consisting of:
•
a 3200 byte EBCDIC Job Identification header record
•
a 400 byte binary File Identification record containing details of
the file layout and data formats
•
a number of trace records containing a trace header and trace
data.
The EBCDIC header is converted to ASCII and copied to the
ER Mapper dataset. The File Identification record is also copied to the
head of the ER Mapper dataset. The trace headers are discarded.
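The two file-level headers described above can be read directly. A hedged sketch (the EBCDIC text header is decoded with Python's cp500 codec; the sample-format-code offset follows the published SEG-Y layout, not anything specific to ER Mapper's importer):

```python
import struct

def read_segy_headers(path):
    with open(path, "rb") as f:
        # 3200-byte EBCDIC Job Identification header, converted to ASCII
        text_header = f.read(3200).decode("cp500")
        # 400-byte binary File Identification record
        binary_header = f.read(400)
    # Data sample format code: big-endian 2-byte integer at offset 24
    # of the binary record (file bytes 3225-3226 in the SEG-Y layout).
    fmt_code = struct.unpack(">h", binary_header[24:26])[0]
    return text_header, fmt_code
```

A format code of 1 corresponds to 4 byte IBM floating point, the only data format the import has been tested with.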
Zycor ASCII grid
The Zycor ASCII file format is a raster format produced by the Zycor
Z-Map Plus gridding package.
To import Zycor ASCII grids
1. Specify the source tape or input data file name, for example:
/usr/data/grid.dat
2. Specify the output ER Mapper dataset to create.
An IEEE 4 byte floating point ER Mapper raster dataset will be
created from the Zycor input file.
Command Line
importzmap_ascii_grid inputpath/file outputpath/file.ers
Configuration and setup
There are no special configuration requirements; however, the
following issues should be noted when importing Zycor ASCII grid data:
•
The Zycor Grid Exchange Format does not have a specific field for
the type of geophysical data contained in the file. The import
program puts the text “Band 1” in for the band description. You
may want to edit the .ers header file to put in a more meaningful
description.
•
The default of RAW/RAW projection/datum is set, unless you
override this during the import by manually specifying a map
projection and datum.
Input data format
An example of the first few lines of the ASCII Zycor Grid Exchange
format file is:
! ZIMS FILE NAME: WESTEX SEISMIC GRID
! FORMATTED FILE CREATION DATE: FEB 24 92
! FORMATTED FILE CREATION TIME: 11:57
!
@WESTEX SEISMIC GRID HEADER , GRID, 5
15, 99999.00 , , 7, 1
23, 23, 0. , 55000.00 , 0. , 55000.00
38890.87 , 0. , 0.
@
+ RADIUS= 38891. INITGRIDMOD= 1 POSTGRIDMOD= 1
REGRIDMOD= 0 NUMPTS= 95
+ MINPTS/NODE= 1 OPT PTS/NODE= 4 MINSECT= 1 EXTEND=
38890.87
+ ISOCOM= 2.0 REFNS= 1 NO.PASSES= 10 CUTOFF= 0.2 SMOMOD=
0.2
-7135.546 -7185.889 -7236.498 -7287.622 -7338.023
-7385.425 -7426.534 -7457.695 -7477.753 -7490.585
...
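A minimal reader for this layout might treat '!' lines as comments, '+' lines as gridding history, and the '@'-delimited block as the header, collecting everything else as grid values. This is an illustrative sketch only; it does not interpret the header fields:

```python
# Sketch of a Zycor Grid Exchange reader: skips '!' comments and '+'
# history lines, captures the '@'-delimited header, collects grid values.
def read_zycor_ascii(path):
    header_lines, values = [], []
    in_header = False
    with open(path) as f:
        for line in f:
            s = line.strip()
            if not s or s.startswith(("!", "+")):
                continue                      # blank, comment or history line
            if s.startswith("@"):
                in_header = not in_header     # header opens and closes with '@'
                if in_header:
                    header_lines.append(s)
                continue
            if in_header:
                header_lines.append(s)
            else:
                values.append([float(v) for v in s.split()])
    return header_lines, values
```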
GeoQuest (IESX) ASCII grid dump
This program reads ASCII data dumps from certain GeoQuest or
IESX products.
Usage
Supply the data dump filename, and the name of the ER Mapper
dataset in which you want the data to be stored.
You may also choose to subset the lines in the import.
Command Line
importgeoquest inputpath/file outputpath/file.ers
Configuration and setup
There are no specific setup requirements, but the following must be considered:
• This import is designed to read space or newline separated ASCII digits, which may be dumped from GeoQuest products.
As GeoQuest ASCII grid dump contains no map projection/datum details, the default of RAW/RAW projection/datum is set, unless you override this during the import by manually specifying a map projection and datum. Note, however, that you will still need to modify the imported data file to specify an origin point and cell size, after importing, in order to fully define projection details.
Input data format
The ASCII dump contains a header of the format:
some_text number number number_of_cells number_of_lines
followed by floating point values.
There are several GeoQuest products; this import reads space or newline separated ASCII values dumped from any of them.
Charisma 2D XYZ ASCII grid
This import reads an ASCII Charisma XYZ Grid File and constructs an
ER Mapper raster file.
Usage
You must specify the input file name (for example, /usr/data/grid.dat) and the output ER Mapper dataset to create.
An IEEE 4 byte floating point ER Mapper Raster dataset will be
created from the Charisma XYZ input file.
Command Line
importcharisma inputpath/file outputpath/file.ers
Configuration and setup
There are no specific setup requirements, but the following must be considered:
• The basic format for Charisma is X-position, Y-position, data value, so the cell size and origin point information are carried across during the import, but the datum and projection will default to RAW/RAW unless a datum and projection are specified during import.
• The data must be on a REGULAR grid. That is, spacing between adjacent data points must be constant in the X-direction, and constant in the Y-direction.
• The Charisma XYZ Grid Format does not have a specific field for the type of geophysical data contained in the file. The import program inserts the text “Band 1” for the band description. You may want to edit the .ers header file to put in a more meaningful description.
Import data format
The ASCII Charisma XYZ format contains lines with X-position
(Easting), followed by Y-position (or Northing), followed by the data
value at that point. After this are row and column indicators.
An example of the first few lines of such a file is:
361478.5 6245581.0 858 ROW 14 COL 88
361478.5 6245689.0 859 ROW 15 COL 88
361611.0 6245689.0 859 ROW 15 COL 89
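A line in this layout can be picked apart by position; the sketch below assumes the field order shown in the example (X, Y, value, then ROW/COL indicators):

```python
# Sketch: parse one Charisma 2D XYZ line into its components.
def parse_charisma_xyz(line):
    fields = line.split()
    x, y, value = float(fields[0]), float(fields[1]), float(fields[2])
    row = int(fields[fields.index("ROW") + 1])
    col = int(fields[fields.index("COL") + 1])
    return x, y, value, row, col
```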
Charisma 3D Inline Xline XYZ ASCII grid
This import reads a Charisma ASCII Inline Xline XYZ Format file
produced from Charisma V3.7+ Gridload utility.
The import makes use of a configuration file to set up site-specific default values, such as the rotation. The configuration file $ERMAPPER/config/char3d.cfg can be edited using any text editor, or you can update the file while running the import.
To run the Charisma 3D Inline Xline XYZ ASCII grid utility from the menu
1. From the Utilities/Import Schlumberger formats menu, select the Charisma 3D Inline Xline XYZ ASCII grid option.
2. Specify the Import File/Device Name, for example:
/usr/data/grid.dat
The input file must be a disk file: the import cannot be run directly from a file on tape. This is because the program does a pass through the input file to ascertain extents and cell size information.
3. Click the Output Dataset Name file chooser button and specify the
output ER Mapper dataset to create.
4. Click OK to start the import.
5. You will be prompted to enter the following:
• The Easting/Northing units in the input file. This can be ‘M’ for ‘Meters’ or ‘D’ for ‘Decimeters’. The origin values will be converted.
• The Charisma Orientation code. This is the Charisma specific code in the range 0 to 7 that specifies the origin and whether the input dataset inline is vertical or horizontal.
• The Rotation (Charisma V3.7). This is the rotation angle in seconds between the North-arrow and the vertical axis of the survey as specified in the Charisma project dump information. Rotation is converted from Charisma to ER Mapper using:
ER Mapper rotation = (360 - Char_rotation/3600)
This is based on the rotation in seconds given in the Charisma project dump for Charisma Version 3.7. Earlier versions of Charisma may calculate rotation from the first INLINE (row) to either the X or Y axis. This is untested and may be incorrect.
• The INLINE or ROW and XLINE or COLUMN spacing. These are found in the Charisma project dump information and should be entered as meters. The import will convert the spacings to ER Mapper format depending on the orientation code, and row and line increments.
• The Row and Column increment. The Inline/Xline increment can be found in the project dump.
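The conversion formula above is easily expressed as a function; the wrap into the 0-360 degree range is an assumption, not something the import documents:

```python
# Sketch: convert a Charisma rotation (seconds of arc, from the project
# dump) to an ER Mapper rotation in degrees, per the formula above.
def charisma_to_ermapper_rotation(char_rotation_seconds):
    return (360.0 - char_rotation_seconds / 3600.0) % 360.0
```

For example, the ROTATION value 1182437.0 seconds shown in the sample char3d.cfg file later in this section corresponds to roughly 31.5 degrees.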
6. If you enter any values at the above prompts, you will be asked whether you want to save them as the default values.
An IEEE 4 byte floating point ER Mapper Raster dataset will be
created from the XYZ input file.
Command Line
importcharisma3d -C $ERMAPPER/config/char3d.cfg
inputpath/file outputpath/file.ers
The import program will prompt you for the information listed in the
procedure above, and give a default choice.
You must include the -C switch.
Configuration and setup
• The data file can include comments, either as comment lines or as trailing comments after data values. If a line does not have an inline as the first non-blank character, that line is ignored.
• The data must be on a regular grid. That is, spacing between adjacent data points must be constant in the X-direction, and constant in the Y-direction. This program will not grid irregularly spaced data; use ER Mapper’s Gridding Wizard if you want to do this.
• The data can be in fixed format or free format, but it must contain the keywords and data specified above.
• The null cell value can be specified in the char3d.cfg config file and is used by the import, but the import will not prompt for this value.
• The XYZ format does not have a specific field for the type of geophysical data contained in the file. The import program puts the text “Band 1” in for the band description. You may want to edit the .ers header file to put in a more meaningful description.
The values specified in the configuration file are critical to the
success of the import. The values vary depending on the project
so it is important that you don’t just accept the defaults. The
values to use can be found in the Charisma Project Dump
information. It is helpful to save the new values if you have a
number of projects with the same parameters. If imported
datasets fail to register correctly check the dataset header to
make sure that the origin easting/northing values are specified
in meters and the cell size is correctly specified in meters.
Ensure that the rotation values are correct.
Input data format
An example of the first few lines of the ASCII input file is:
INLINE :20 XLINE :1 626458.70000 5756838.00000 156.92996
INLINE :20 XLINE :2 626482.65000 5756845.20000 156.88675
INLINE :20 XLINE :3 626506.60000 5756852.40000 156.87457
INLINE :20 XLINE :4 626530.55000 5756859.60000 156.82094
INLINE :20 XLINE :5 626554.50000 5756866.80000 156.73541
The basic format is:
INLINE :row_position XLINE :column_position X-position Y-position value
where:
• X-position is the easting in meters or decimeters
• Y-position is the northing in meters or decimeters
• value is the data value at that point
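A sketch of parsing one such line, ignoring lines that do not start with INLINE (as the import does); the ':' normalization is an assumption to handle both ':20' and ': 20' spellings:

```python
# Sketch: parse a Charisma 3D "INLINE :r XLINE :c X Y value" line.
# Lines whose first non-blank token is not INLINE are ignored,
# matching the import's behavior described above.
def parse_charisma3d_line(line):
    s = line.strip()
    if not s.startswith("INLINE"):
        return None                      # comment or unrecognized line
    fields = s.replace(":", " : ").split()
    # fields: INLINE : row XLINE : col x y value
    row = int(fields[2])
    col = int(fields[5])
    x, y, value = (float(v) for v in fields[6:9])
    return row, col, x, y, value
```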
The Charisma configuration file
The format of the $ERMAPPER/config/char3d.cfg file is:
#########################################################
# This is the configuration file for the importcharisma3d
# default values.
#
# This file can be edited with any text editor. The only
# rules are that the names appear as below and have one
# space separating the name from the value.
#########################################################
INPUT_FILE /path/filename.xyz
OUTPUT_FILE /path/filename.ers
PROJECTION LOCAL
DATUM WGS84
UNITS decimeters
ORIENTATION_CODE 6
ROTATION 1182437.000000
INLINE_INTERVAL 25.000000
XLINE_INTERVAL 6.670000
INLINE_INCREMENT 1
XLINE_INCREMENT 1
NULL_CELL_VALUE -999999.000000
GeoQuest (IESX) MapView ASCII Grid
This import reads a GeoQuest (IESX) MapView ASCII data file and constructs an ER Mapper raster file. This import should be used in preference to the ASCII text dump import if possible, as it will convert more information.
To import a GeoQuest (IESX) MapView file
1. Start the import by selecting the GeoQuest (IESX) MapView option from the Utilities/Import Schlumberger formats menu, or by typing importmapview in a terminal window.
2. Specify the input file name, for example:
/usr/data/grid.dat
3. Specify the output ER Mapper dataset to create.
4. Click OK to start the import.
An IEEE 4 byte floating point ER Mapper raster dataset will be
created from the MapView input file.
As GeoQuest (IESX) MapView Grid contains no map
projection/datum details, the default of RAW/RAW
projection/datum is set unless you override it during the import
by manually specifying a map projection and datum.
Command line
importmapview inputpath/file outputpath/file.ers
Input data format
An example of the first few lines of such a file is:
B 1 0
8
8 0.1250000E+02 0.1250000E+02
0.4310028E+06 0.4310918E+06 0.6355005E+07 0.6355094E+07
431002.8338651 6355006.6749957 1.000
TOP CHALKAP1
- 3D pick
-0.3261754E+04 -0.3245083E+04 -0.3261081E+04 -0.3260774E+04 -0.3260717E+04 -0.3260975E+04 -0.3246495E+04 -0.3246358E+04
-0.3261800E+04 -0.3244692E+04 -0.3261257E+04 -0.3260807E+04 -0.3260537E+04 -0.3260868E+04 -0.3245658E+04 -0.3245172E+04
Generic issues related to importing raster grid data
Included in this section are some general issues you should be aware of when importing raster grid data.
Updating header files with projection/datum details
In several of the imports it is necessary to specify the projection and datum details manually. This requires that the header (.ers) file be edited. To do this:
1. Select the Load Dataset button from the Algorithm dialog or the
Common Functions toolbar.
2. Select the .ers file that you wish to edit.
3. Select the Info button at the bottom of the dataset chooser dialog
box. This will open the Dataset Information dialog.
4. From the Dataset Information dialog, click the Edit button. This
will open the Dataset Header Editor.
5. Select the Coord Space... button. This will open the Dataset Header Editor: Coordinates dialog.
6. Change the datum and projection to the correct ones. Select OK.
7. Select Save from the Dataset Header Editor. Select OK.
8. Select Cancel from the Dataset Information dialog and select OK
from the dataset chooser to load the dataset.
Editing registration information in header files
In some cases it is also necessary to specify a registration coordinate, registration cell and cell size in order to properly register the file in coordinate space. To do this:
1. Click the Load Dataset button from the Algorithm dialog or the
Common Functions toolbar.
2. Select the .ers file that you wish to edit.
3. Click the Info button at the bottom of the dataset chooser dialog
box. This will open the Dataset Information dialog
4. From the Dataset Information dialog, click the Edit button. This
will open the Dataset Header Editor.
5. Click the button labeled Raster Info... This will open the Dataset
Header Editor: Raster Information dialog.
• To change the registration information click Registration Point... This will open the Dataset Header Editor: Registration dialog. Edit the Registration Coordinates and the registration cell for the image. If the Registration Cell is the Upper Left then the Registration Cell Values are 0,0. Click OK to close the Registration dialog.
• To change the cell size select Cell Size and change the x and y dimensions to reflect the actual cell size of the dataset. Click OK to close the Cell Size Information dialog.
6. Select OK to close the Raster Information dialog.
7. Click Save in the Dataset Header Editor. Select OK.
8. Click Cancel in the Dataset Information dialog and click OK in the
dataset chooser to load the dataset.
Importing geological and cultural vector
data
You can import the following data formats into ER Mapper:
Format / Comments
OpenWorks 3.1 Wells
Converts Landmark Well Data Export Format files to ER Mapper vector files. This format is the output file obtained by exporting the default Well Master Block format from OpenWorks, with no curves.
Seisworks Fault Polygons
Converts Landmark Fault Polygons Export Format files to ER Mapper vector files. This format is the output file obtained by exporting the default format for fault polygons from SeisWorks.
Seisworks Manual Contours
Converts Landmark Contours Export Format files. This format is the output file obtained by exporting the default format for manual contours from SeisWorks.
OpenWorks 3.1 Wells—Import
This import reads Landmark Well Export files into ER Mapper vector
format. All wells are imported. Most of the information in these files
becomes the attribute of the well object. Files in Landmark Well Data
Export format are obtained by exporting the default Well Master
Block format from OpenWorks, with no curves. Use the
“LGCReportFMT.wlx” format.
Input data format
Well Master Block
Common Well Name : Well 1
Operator : DALLAS OIL
Lease Name : Stark
Lease Number : 0001
UWI : 00001
State : UNKNOWN
County : UNKNOWN
Country : UNKNOWN
Longitude : -57.38
Latitude : 20.70
X Coordinate : 8189.00
Y Coordinate : 35105.89
Total Depth : 7429.51
Plugged Back TD : 0.00
Completion Date :
Platform ID :
Field : PINTO
Current Class : OIL
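Since the block is a series of "Name : value" lines, a simple first-colon split recovers the attributes. This is an illustrative sketch, not the import's actual code:

```python
# Sketch: parse "Name : value" lines from a Well Master Block into a dict.
# Splits on the first ':' only, since values may themselves contain colons.
def parse_well_block(text):
    well = {}
    for line in text.splitlines():
        if ":" not in line:
            continue                     # skip headings and blank lines
        name, _, value = line.partition(":")
        well[name.strip()] = value.strip()
    return well
```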
To import OpenWorks 3.1 Wells
1. From the Utilities/Import Vectors and GIS formats menu select
OpenWorks 3.1 Wells.
2. Enter the full pathname of the input file and the output dataset.
The output dataset will be registered in RAW co-ordinate space
unless a Datum and Projection are specified from the interface. A
rotation value for the grid can also be specified.
After clicking on the OK button, you will be prompted for the well
symbol size to use and given examples of suggested sizes for various
output map scales.
A circle is drawn at the well location, and the well name appears
to the bottom right of the symbol. The other information is
stored as the attribute of the well symbol.
Command Line
importlandmark_well inputpath/file outputpath/file.erv
SeisWorks Fault
Polygons
This import reads Landmark Fault Polygons Export format files into
ER Mapper vector format. These files are obtained by exporting the
default format for fault polygons from SeisWorks.
To import SeisWorks
Fault Polygons
1. From the Utilities/Import Vectors and GIS formats menu select Seisworks Fault Polygons.
2. Enter the full pathname of the input file and the output dataset.
The output dataset will be registered in RAW co-ordinate space
unless a Datum and Projection are specified from the interface. A
rotation value for the grid can also be specified.
Command Line
importlandmark_fault inputpath/file
outputpath/file.erv
Input data format
The format is fixed-width, one point per line (start column, end column, decimal places):
X      1   12   3
Y      13  24   3
Z      25  36   3
PTYPE  37  40
SRCID  41  44
where X is Eastings, Y is Northings, Z is Height, and PTYPE is Point type (6 - Start, 7 - Intermediate, 8 - End). SRCID is not used in the import.
Example:
12345.688   567890.500      0.000   6   1
12346.531   567890.000      0.000   7   1
12346.625   567890.000      0.000   7   1
12346.469   567890.500      0.000   7   1
12346.469   567890.000      0.000   7   1
12346.969   567890.500      0.000   7   1
12346.344   567890.500      0.000   7   1
12346.469   567890.500      0.000   7   1
12346.719   567890.500      0.000   7   1
12346.688   567890.500      0.000   8   1
12346.500   567890.500      0.000   6   2
SeisWorks Manual Contours
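Because the fields occupy fixed column ranges, slicing by position is more robust than splitting on whitespace when values abut. A sketch using the column table above (1-based columns become 0-based slices):

```python
# Sketch: parse one fixed-width fault-polygon record using the column
# ranges above (X cols 1-12, Y 13-24, Z 25-36, PTYPE 37-40, SRCID 41-44).
def parse_fault_record(line):
    x = float(line[0:12])
    y = float(line[12:24])
    z = float(line[24:36])
    ptype = int(line[36:40])             # 6 start, 7 intermediate, 8 end
    srcid = int(line[40:44])             # not used by the import
    return x, y, z, ptype, srcid
```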
This import reads Landmark Contours Export files into ER Mapper
vector format. All contours are imported and the Z value is the
attribute of the line.
To import SeisWorks
Manual Contours
1. From the Utilities/Import Vectors and GIS formats menu select
Seisworks Manual Contours.
2. Enter the full pathname of the input file and the output dataset.
The output dataset will be registered in RAW co-ordinate space
unless a Datum and Projection are specified.
A rotation value for the grid can also be specified.
Command Line
importlandmark_cont inputpath/file outputpath/file.erv
Format
The Landmark Contour Export files are obtained by exporting the default format for manual contours from SeisWorks. The format is fixed-width, one point per line (start column, end column, decimal places):
X      1   12   3
Y      13  24   3
Z      25  36   3
PTYPE  37  40
where X is Eastings, Y is Northings, Z is Height, and PTYPE is Point type (1 - Start, 2 - Intermediate, 3 - Open end, 4 - Closed end). For example,
597644.125  6184274.000   2135.637   1
597658.000  6184236.000   2135.637   2
597727.500  6184274.000   2135.637   2
597658.000  6184298.000   2135.637   2
597644.125  6184274.000   2135.637   4
598110.562  6184274.000   2200.637   1
598142.000  6184255.000   2200.637   2
598263.000  6184230.500   2200.637   2
598309.812  6184274.000   2200.637   2
Notes
• The contour height is stored as the attribute of the contour in the ER Mapper file (rather than as text). To view the height, start the Annotation editor and select the line of interest. The height will be displayed as the Object Attribute.
• If PTYPE is not used in the file, a new contour is recognized by a change in the Z value.
• Blank lines are ignored.
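The fallback rule in the notes (a new contour begins whenever Z changes) can be sketched as a grouping pass over (x, y, z) points; the names here are illustrative:

```python
# Sketch: group (x, y, z) points into contours, starting a new contour
# whenever the Z value changes, as the import does when PTYPE is absent.
def group_contours_by_z(points):
    contours = []
    current, current_z = [], None
    for x, y, z in points:
        if current and z != current_z:
            contours.append((current_z, current))   # close previous contour
            current = []
        current.append((x, y))
        current_z = z
    if current:
        contours.append((current_z, current))
    return contours
```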
Updating geological and cultural vector data
Now that your vector data has been imported into ER Mapper you
may want to make some changes to that data. You can do this using
ER Mapper’s annotation editor.
1. From the Edit menu, select Annotate Vector Layer. This will open
the New Map Composition dialog.
2. Click the File Chooser button next to Load from File. The Load
Map Composition File chooser dialog opens.
3. Locate your vector file and double click to select it.
4. Select OK on the New Map Composition dialog.
The Annotation Tools opens and the image should appear in the
image window. If it does not, try selecting Quick Zoom / Zoom to
All Datasets from the View menu. If your image changes, the
extents must be incorrect for either the raster or the vector dataset.
Setting image extents will be discussed later in this document.
You may also get a warning that your algorithm hasn’t had its page
defined. If so, you can select Close if you wish or Page Setup... to
define the algorithm page. If you are intending to print your
algorithm you must set up the page to control the size and
placement of your output.
See the Printing chapter in the ER Mapper User Guide for
instruction to properly set up a page for printing.
5. In the Annotation Tools, double-click on the Poly Line tool. This opens the Line Style dialog.
6. In the Annotation Tools, select the Select/Edit Points Mode tool.
7. Select the lines that you want to edit. (To select multiple lines you
can use the shift-click or shift-click-drag methods.) When a line is
selected a blue box appears at all nodes of the line.
8. Edit the settings in the Line Style dialog as desired.
9. When changes are complete, click the Save button on the
Annotation toolbar. This saves the vector file not the entire
algorithm.
10. Select Close from the Annotation Tools.
Moving updated vector data back to other products
Often, after editing and updating your vector data in ER Mapper, you’ll want to take the data back to the original product. Although ER Mapper’s list of exports is far less extensive than its imports, it is usually fairly easy to take the data back to the other products.
In ER Mapper there are basically three different formats that you will want to use to export your vector data back to other products:
• AutoCAD DXF
• SeisWorks Fault Polygons
AutoCAD DXF
This is probably the most common of the vector formats, since most packages can read and write in DXF format. Exportdxf identifies any layers in the vector file and asks you where to export them to. The layers are defined in the attribute field of the vectors.
SeisWorks Fault Polygons
This export creates a Landmark Fault Polygon Export Format file. This file can be imported into SeisWorks to be used in that package.
Generating hardcopy prints
ER Mapper has powerful map production and printing capabilities.
However, you need to set up the printing devices correctly.
This section covers the following issues:
• Generating maps as a TIFF file
• How to create a map
Generating maps as a TIFF file
When creating a TIFF file, rather than saving the image as a TIFF file you could make a hardcopy plot in TIFF format. The hardcopy TIFF can be generated in either 8 bit or 24 bit format. The following section explains how to set up a TIFF export so that the TIFF file maintains the resolution of the ER Mapper dataset and an output file can be specified.
1. With the algorithm you want to print saved and displayed on the
screen, select Print from the File menu.
The Print dialog appears. The Algorithm field automatically
displays the current algorithm name so you will see your algorithm
listed. If your algorithm was not open this field might be blank (if no
algorithm is current) or show another algorithm. You can click the
File Chooser button to select an algorithm.
2. Select the Output Name file chooser. The Open Hardcopy Control
chooser opens.
3. Change directories to the ‘hardcopy\Graphics’ directory.
4. Double-click on one of the TIFF output formats to select it.
5. In the Print dialog, click Setup.
6. To maintain resolution, select the Force 1 Dataset Cell = 1 Image
Pixel button. Once this is selected the Force Single Page button
becomes active. Click this button.
7. Change the filter program line to specify the number of colors:
• for 24 bit TIFF, change the filter program line to:
hetotiff -d24 /path/filename.tif
• for 8 bit TIFF with compression, change the filter program line to:
sh -c 'hetoppm | ppmquant 256 | pnmtotiff /path/filename.tif'
• for 8 bit TIFF with no compression, change the filter program line to:
sh -c 'hetoppm | ppmquant 256 | pnmtotiff -none /path/filename.tif'
In each case /path is the path to the directory in which you want to write the TIFF file, and filename.tif is the name of the output file with a .tif extension added.
8. Select OK on the Setup dialog.
9. Select Print in the Print dialog.
How to create a map
When creating a map, there are three important things to consider:
• Page Setup
• Vector Data to Include
• Map Composition Items
Page Setup
For an algorithm, there are three different sizes that need to be set in order to make a map. They are page, contents and border sizes. These three sizes interact, so you must decide how they are determined. To do this, the Page Setup dialog and wizard offer you three constraints to choose from for defining an algorithm. The fourth constraint is the default constraint. It is not generally used for algorithms that will be printed and will not be discussed here. The constraints and their effects are:
Constraint
Explanation
Auto Vary: Page
ER Mapper automatically varies the page size based on the Borders
and Scale that are selected. If the final page size is bigger than that
of the hardcopy device, ER Mapper will attempt to strip print the
image.
Auto Vary: Borders
ER Mapper automatically varies the borders based on the selected
Page Size and Scale. This is typically the desired choice as most
users have a specific output device in mind and want a particular
scale.
Auto Vary: Scale
ER Mapper will automatically set the scale based on the page size
and borders selected.
Fixed Page: Extents from Zoom
The page size can be selected and ER Mapper will automatically fit
the image to that page size. Typically when creating a hardcopy
map this selection is not used.
Once a constraint has been selected, you can vary the other two
components. You can also set the background color. The following
is an example of setting up a map:
1. Create an algorithm using a raster dataset.
From the File menu select Page Setup. The Page Setup dialog
opens. In the window on the left of the dialog box the red box
indicates the Page while the blue box indicates the Contents.
2. From the drop down list of Constraints select Auto Vary: Borders.
3. Change the Background Color to white.
4. From the Size pull down menu, select US Letter.
5. Change the Scale to an appropriate value such as 200000. This will
depend on the scale of your data. Any scale that leaves some border
space will do.
6. Select the Horz Center and Vert Center buttons. This will center
the image on the page. Adjust the position as desired. (Often you’ll
want to place the image slightly higher than center to allow more
room at the bottom to add map composition items. This can be done
by changing the value for the Top Border; the bottom border will adjust automatically.)
7. Select OK or Apply.
Including Vector Data
Earlier in this chapter we discussed the mechanisms available for
sharing data with companion software products. This section
summarizes the steps for incorporating vector data into your image,
assuming that you have decided which method to use and have
configured your system to suit.
1. From the View menu select Algorithm to open the Algorithm
dialog (if it is not already open).
2. In the Algorithm dialog click on the Edit menu.
• To display data that has been imported into ER Mapper vector format, select Add Vector Layer and Annotation/Map Composition. This will add an Annotation layer to the algorithm.
• To display data in the native format of external systems, select the appropriate dynamic link (for example, Dynamic Link to OpenWorks 3.1 Wells).
3. In the Algorithm dialog, click the Load a Dataset or Dynamic Link button in the process diagram for the new layer.
4. Find the desired vector dataset and double-click to load it.
5. ER Mapper redisplays the raster image with the vector dataset
displayed over the top.
Adding Map Composition Items
Map composition items are added using the Annotation tools. You
can add Map items to an existing ER Mapper Annotation layer.
However, it is a good idea to use a separate layer for the Map items
so that you can easily turn them on or off independently.
1. On the main ER Mapper menu, click the Edit menu and select
Annotate Vector Layer...
2. When the New Map Composition dialog opens, click OK.
3. If the Page Setup Warning opens, you can either select Close or
follow the procedures outlined in the Page Setup section found
earlier in this document.
4. From the Annotation Tools select the Map Rectangle tool. The Map Object Select and Map Object Attributes windows open.
5. Select a category of map composition items from the Category drop
down list, for example, Grid.
6. Select one of the map objects (for example, EN Grid), drag it over
the area you want to place it and drop it.
• To resize these objects there are two choices. For objects like grids or clip masks, select the Fit Grid button in the Map Objects Attributes dialog. This will automatically resize the object to the size and location of the data. Or select the Select and Move/Resize tool, click on the object to select it, and resize it as you wish.
• ER Mapper map objects are intelligent. For example, when you resize some of the objects, their values will change to maximize the allowed space: if you resize a scale bar, the spacing of the scale bar will be adjusted as well. North Arrows are also intelligent in that if there is a rotation from true north in the dataset, the North Arrow will still point at true north regardless of direction.
7. Change the item parameters such as color or grid spacing in the Map
Object Attributes dialog as desired. You can also add your own
map composition objects to the available options. For information on
adding map composition objects of your own, either consult the
reference manual or contact your support provider.
Using the Page Setup Wizard
The Page Setup Wizard leads you sequentially through the page setup process. It does the same as Page Setup.
1. On the Standard toolbar, click on the Page Setup Wizard button.
2. In the Introduction dialog box, select Algorithm displayed in current window, and click on Next>.
3. In the Use a template? dialog box, select Define new values with
this wizard, and click on Next>.
4. In the Set background color dialog box, set the background color to white and select your units of measurement. Click on Next>.
5. In the Set contents extents dialog box, select Use the current
contents extents, and click on Next>.
6. In the Set pagesize dialog box, select Choose from standard portrait sizes, and click on Next>.
7. In the Standard portrait dialog box, select US Letter, and click on Next>.
8. In the Set borders dialog box, select Center Horizontally and Center Vertically. Click on Next>.
9. In the Set scale dialog box, select Type in the scale, and click on Next>.
10. In the Set scale dialog box, type in your required scale, and click on Next>.
11. In the Finish dialog box, select Add a vector layer, and open the file chooser to select a required vector layer. Save the algorithm under an appropriate name. Click on Finish.
Data viewing tips
The Intensity layer
ER Mapper provides a type of layer named Intensity. Used on its own
it provides a monochrome image. Used with other layers the data in
the Intensity layer controls the brightness (or intensity) of the image
colors. Low data values in the Intensity layer produce dark colors in
the image, and high data values produce bright colors. Intensity
layers are used in the Oil and Gas industry to create shaded relief
images that highlight structure.
To view your seismic dataset
Open an image window and the Algorithm dialog
1. On the Standard toolbar, click on the Open Algorithm into Image
Window button.
If you are starting from scratch, your algorithm will have a single
layer, which will be a Pseudocolor layer.
2. Change the Pseudocolor layer to an Intensity layer and load your
seismic dataset. Select the two-way-time data band.
To use a formula to invert the dataset values
1. In the Algorithm window, click on the Formula button in the
process diagram.
The Formula Editor dialog box appears.
2. In the Formula Editor dialog, type a minus sign in front of the
formula so that it looks like the following:
-INPUT1
This formula tells ER Mapper to negate (invert) all values in the
dataset.
3. In the Formula Editor dialog click the Apply changes button and
then the Close button.
The image appears as black initially because you need to adjust the
transform to account for the new range of negative data values
produced by the formula.
4. Click the Transform button.
The Transform dialog box shows the negative data range produced
by the formula in the Actual Input Limits fields.
5. From the Limits menu (on the Transform dialog), select Limits to
Actual.
The X axis data range changes to match the Actual Input Limits.
ER Mapper draws the image again, this time using the full range of
gray shades to display the image. Since you inverted the data values
with a formula, structural lows (larger two-way time values) are
shown as dark grays transitioning into structural highs shown as
lighter shades.
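The formula and transform steps above can be sketched outside ER Mapper as a simple linear rescale: negate the two-way-time values, then stretch the actual data limits across the output range. This is a minimal illustration with hypothetical helper names, not ER Mapper's internal code:

```python
def limits_to_actual(values, out_min=0.0, out_max=255.0):
    """Linearly map the actual data min/max onto the output range,
    as the Limits to Actual menu item does for the transform."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [out_min for _ in values]   # flat data: nothing to stretch
    scale = (out_max - out_min) / (hi - lo)
    return [out_min + (v - lo) * scale for v in values]

# Two-way-time values negated by the -INPUT1 formula
twt = [1200.0, 1150.0, 1100.0, 1050.0]
negated = [-v for v in twt]
shades = limits_to_actual(negated)
# Structural lows (large two-way times) end up as dark shades near 0,
# structural highs as light shades near 255.
print(shades)
```
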
To use sun shading
1. Turn on sun shading and display the shaded relief image using the
Open Sun Angle editor button.
2. Drag the small sun icon around to change the sun position.
•
This feature allows you to apply artificial illumination from any
direction to highlight very subtle structural features.
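Conceptually, sun shading computes how directly each grid cell faces an artificial light source. Below is a minimal hillshade sketch, assuming a regularly gridded surface and central-difference gradients; the function name is hypothetical and ER Mapper's actual implementation may differ:

```python
import math

def hillshade(dtm, cell, azimuth_deg=315.0, altitude_deg=45.0):
    """Shaded relief for the inner cells of a gridded DTM using
    central differences; 0 = fully shadowed, 1 = facing the sun."""
    az = math.radians(360.0 - azimuth_deg + 90.0)   # compass -> math angle
    alt = math.radians(altitude_deg)
    rows, cols = len(dtm), len(dtm[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            dzdx = (dtm[r][c + 1] - dtm[r][c - 1]) / (2 * cell)
            dzdy = (dtm[r + 1][c] - dtm[r - 1][c]) / (2 * cell)
            slope = math.atan(math.hypot(dzdx, dzdy))
            aspect = math.atan2(dzdy, -dzdx)
            shade = (math.sin(alt) * math.cos(slope) +
                     math.cos(alt) * math.sin(slope) * math.cos(az - aspect))
            out[r][c] = max(0.0, shade)
    return out

# Flat ground: brightness depends only on the sun altitude (sin 45 deg)
flat = [[100.0] * 3 for _ in range(3)]
print(hillshade(flat, cell=25.0)[1][1])
```

Dragging the sun icon in the Sun Angle editor amounts to changing the azimuth and altitude parameters here in real time.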
Applications of sun angle shading
ER Mapper’s sun angle shading feature is a very powerful tool for
quickly identifying subtle features in time surfaces. It is commonly
used for many applications, including to:
• identify small scale faulting and relative throw across faults
• identify subtle stratigraphic features (pinchouts, truncations,
etc.)
• highlight data acquisition and/or processing artifacts
• highlight quality issues related to interpretation
Colordraping
Colordraping is the technique of draping one set of image data in
color over another set of data that controls the color brightness or
intensity. This allows you to effectively view two (or more) different
types of data or methods of processing simultaneously in a combined
display. Colordraping is usually difficult and time consuming using
traditional image processing products, but it is very fast and easy
using ER Mapper’s Intensity layer type.
The colordraping technique has become a very popular and powerful
tool for visualization of interpreted surfaces. For example, combining
two-way time images shown as both color and as structure lets you
create a shaded relief image that enhances subtle faults and
color-codes their placement relative to depth. From these types of images,
far more useful information can be derived than from conventional
visualization techniques.
[Graphic: Pseudocolor layer (color image) + Intensity layer (brightness image) = combined colordrape image.]
To create a colordrape image
1. In the Algorithm window, load your seismic dataset into an
Intensity layer.
2. Duplicate the Intensity layer and change the layer type to
Pseudocolor. (Duplicating the layer instead of adding a new one
saves you from having to reload the dataset.)
3. Drape color over the shaded relief image by turning sun-shading on
for the Intensity layer and off for the Pseudocolor layer.
Sun angle shading is usually applied only to time surface
datasets displayed in Intensity layers because two-way time
describes structural features well. Sun shading is not normally
applied to amplitude or attribute datasets which are generally
displayed in color.
Note that by combining the two processing techniques into one
image, you can simultaneously see structure as brightness relative
to depth as color.
4. To use different color mapping transforms for the color layer click on
the right-hand Transform button in the process diagram.
The Transform dialog box opens showing the current lookup table
and color mapping.
5. On the Transform dialog, click the Histogram equalize
button.
ER Mapper applies a histogram equalization transform to the data.
Histogram equalization maximizes overall color contrast in the image
at the expense of losing contrast in the structural highs and lows.
6. On the Transform dialog, click the Gaussian equalize
button.
ER Mapper applies a gaussian equalization contrast stretch to the
data. This maximizes color contrast in the structural highs and lows,
but tends to flatten out contrast in other parts of the image.
7. On the Transform dialog, click the Create default linear
transform
button.
ER Mapper resets the color mapping back to a straight linear default.
8. You can display the shaded relief and color images separately by
turning one of them off.
To drape amplitude data in color over time data
1. Highlight the Pseudocolor layer and change the band to select the
amplitude data band.
2. Adjust the transform for the Amplitude color layer by selecting the
layer and clicking on the right-hand Transform button in the
process diagram.
3. From the Limits menu, select Limits to Actual to display the image
using the actual limits of the amplitude data.
4. On the Transform dialog, click the Histogram equalize button.
Histogram equalization increases the overall color contrast in the
amplitude data. Areas of high amplitudes are shown as reds, and low
amplitude areas are shown in blues.
This colordrape image lets you easily associate variations in
amplitude with structural features shown by the shaded two-way
time surface in the Intensity layer. Using this technique, you can
drape virtually any type of data in color over the shaded time surface
to aid interpretation of subtle relationships.
Tips for Colordrape Algorithms
Generally the Intensity layer of a colordrape algorithm is used to
show structural features derived from two-way time data, and the
color layers are used to show amplitude, azimuth, isochrons, or any
other derivative or attribute images you feel are useful. To use an
existing algorithm as a “template” algorithm to apply the same
processing to different datasets, simply load new datasets into the
Intensity and Pseudocolor layers and adjust the transforms to
account for the data ranges.
One colordrape variation some researchers use is to display a dip
image in Intensity instead of shading from a specific compass
direction. A dip image may delineate both sides of a fault more
clearly, for example.
Training
If you would like training in using ER Mapper in oil and gas
applications contact the ER Mapper office closest to you.
Supplied Algorithms
This chapter describes selected algorithms supplied with the
ER Mapper distribution.
Typical use of the supplied algorithms is to load the algorithm,
change the image to your own image, and then click on the Refresh
image with 99% clip on limits button to run the algorithm with 99%
transform clipping.
The processing carried out by algorithms depends on the type of data
and the desired result. For example, the Landsat_TM/NDVI
algorithm computes the Normalized Difference Vegetation Index
(NDVI) from the Landsat TM imagery.
The supplied algorithms are designed as starting points for creating
your own commonly used algorithms. They were created using the
standard ER Mapper user-interface - they are not ‘hard coded’ into
ER Mapper.
The algorithms and images supplied with ER Mapper are divided into
sections, typically based on the type of data or the type of
processing. The different sections are located in the following
directories under examples.
• Applications: Specific application examples; e.g. Mineral
Exploration, Oil and Gas Exploration etc.
• Data_Types: Examples which illustrate the use of a specific
type of data.
• Functions_And_Features: Examples which illustrate a particular
ER Mapper function or feature.
• Shared_Data: Image datasets which are shared by one or more
of the other examples directories.
• Miscellaneous\Templates: Designed as templates to be used to
create virtual datasets or images. Load the template algorithm,
change to the new image(s), zoom to all datasets (if entire
coverage is desired), and then save the resultant algorithm as a
virtual dataset or as a real image. Most template algorithms do
not have a final transform - this ensures that the data being
processed is not modified by the template algorithm.
The example algorithms are designed to be used as-is. Simply load
the algorithm, change the image, and click on the Refresh image
with 99% clip on limits
button.
In addition to the example algorithms loaded with ER Mapper you
can also choose to load other application example algorithms during
installation. These are inserted into the appropriate examples
directories.
The complete list of the examples directories is as follows. Each of
these directories is organized as a section within this chapter, with
more information about each algorithm.
Applications
Listed below is information regarding each application.
Airphoto
Contains examples demonstrating orthorectification, mosaicing, and
balancing of airphotos.
Fire_Risk
3D_Vegetation_over_DTM.alg
This algorithm defines a vegetation ratio called the Normalized
Difference Vegetation Index or NDVI, which has been shown to be strongly
correlated with the amount of vegetation (biomass). This is draped
over an Intensity layer containing a Digital Terrain image band.
Vegetation.alg
This algorithm defines a vegetation ratio called the Normalized
Difference Vegetation Index or NDVI, which has been shown to be strongly
correlated with the amount of vegetation (biomass).
Land_Information
Contains algorithms related to Land Information applications.
Mineral_Exploration
This contains algorithms related to mineral exploration.
3D_Mag_Colordrape_and_KTh_over_Mag.alg
This image has two surfaces. The bottom one is an aeromag
colordrape using an elevation color lookup table. The upper image
is a Potassium Thorium ratio draped over the aeromag data, which
takes on the 3D appearance of elevation.
3D_Magnetics_and_Radiometrics.alg
A very powerful way to show magnetics data, known as colordrape.
In this algorithm, magnetics data is used for both color and intensity
(the two layers in the algorithm). For intensity, the sun-shading for
the Intensity layer has been turned on to enable structure to be
easily seen.
The transform can be modified in real time for both layers (for the
Pseudocolor layer, changing coloring for the image; for the Intensity
layer, changing the overall image brightness); sun-shading can also
be modified in real time.
If this algorithm is viewed using an ER Mapper 3-D viewer or
printed using ER Mapper hardcopy stereo, the Intensity layer is
used for height. This makes it easy to use this algorithm for 3-D
output.
Magnetics_Colordrape_over_Real_Time_Shade.alg
This is an aeromagnetics scene viewed with color showing the total
value of the magnetic signal and the shading (adjustable in real
time) showing the subsurface structure based on the magnetic
anomalies.
Magnetics_Psuedocolor.alg
This is a gridded aeromagnetics scene viewed with color
representing the total value of the magnetic signal. In this image
red represents higher values and blue represents low values.
The color table can be changed in the Algorithm window to map the
magnetic intensity values to another color range.
Newcastle_Map.alg
A very powerful way to show magnetics data, known as colordrape.
In this algorithm, magnetics data is used for both color and intensity
(the two layers in the algorithm). For intensity, the sun-shading for
the Intensity layer has been turned on to enable structure to be
easily seen.
The transform can be modified in real time for both layers (for the
Pseudocolor layer, changing coloring for the image; for the Intensity
layer, changing the overall image brightness); sun-shading can also
be modified in real time.
If this algorithm is viewed using the ER Mapper 3-D viewer or
printed using ER Mapper hardcopy stereo, the Intensity layer is
used for height. This makes it easy to use this algorithm for 3-D
output.
Radiometrics_K_Th_Ratio.alg
This is a band ratio for Potassium and Thorium. Red indicates high
K/Th values (high Potassium and low Thorium counts) and blue
indicates low K/Th values (low Potassium and high Thorium counts).
Radiometrics_K_Th_U_RGB.alg
This is a radiometrics scene with Potassium values shown in red,
Thorium shown in green and Uranium shown in blue. All data values
are in counts-per-second.
Radiometrics_K_Th_U_RGB_over_Magnetics_RTS.alg
A radiometrics RGB image (as in Radiometrics_KThU_RGB.alg) draped
over aeromagnetics, highlighting structure using real time shading.
This is powerful because it allows you to correlate two different
types of data in real time. The aeromagnetics data is integer
numbers and the radiometrics data is real numbers; combining them
this way is not possible with traditional software.
Radiometrics_KThU_RGB.alg
This is a radiometrics scene with Potassium values shown in red,
Thorium shown in green and Uranium shown in blue. All data values
are in counts-per-second.
Mt_St_Helens
Contains algorithms related to the Mt St Helens volcano.
Oil_and_Gas_Exploration
This contains algorithms relating to oil and gas exploration.
3D_Multi_Surface_TWT_and_Amplitude.alg
This algorithm combines Pseudocolor and Intensity layers of the time
data. The illumination (shading) can be edited.
This image combines the illumination (grayscale) with a color layer
of the time data. This type of display significantly enhances the
structural contours by emphasising the main structural blocks.
3D_Seismic_TWT_Horizon.alg
This algorithm combines Pseudocolor and Intensity layers of the time
data. The illumination (shading) can be edited.
This image combines the illumination (grayscale) with a color layer
of the time data. This type of display significantly enhances the
structural contours by emphasising the main structural blocks.
Telecommunications
Contains optionally loaded algorithms related to telecommunication
networks.
World_Topography
Contains optionally loaded example algorithms that use the world
DTM image dataset.
Data_Types
This section contains information on the different data types.
Airphoto
These algorithms give examples of airphotos integrated with other
data. For more complete examples and algorithms that use airphoto
data, see the applications_Land_Information optional examples.
Airphotos are usually scanned in as a RGB image using a desktop or
drum scanner, which is then imported into ER Mapper format.
Unlike pure digital images, the resolution of airphotos is a function
of both the scale at which the airphoto was flown and the
resolution of the scanned image.
Once an airphoto has been scanned in, it needs to be geocoded to the
map projection using ER Mapper image rectification capabilities.
Airphoto mosaics can easily be constructed using ER Mapper’s
interactive data fusion.
Airphoto_and_Landsat_TM_and_Vectors.alg
Demonstrates fusion of different resolution raster data (an airphoto
and Landsat TM), and demonstrates the spatial information (roads,
buildings and so forth) that can be resolved in airphotos.
Because the airphoto has a higher resolution than the Landsat TM
data, it is placed above the Landsat TM data in the algorithm.
A vector layer is also present in this algorithm.
RGB.alg
Shows airphoto data as an RGB image.
Vectors_from_Airphoto.alg
Shows an airphoto as an RGB image with vector data over the top.
The vector data has been created using the ER Mapper annotation
tool while editing over the airphoto.
To edit the vector data, click on the Annotation toolbar button.
Arc_info
ER Mapper can directly access ARC/INFO coverages for display and
editing purposes. You can also convert between ER Mapper vector
format and ARC/INFO format; for example you could create a
classification polygon map using ER Mapper’s Raster to Vector
conversion, and save as an ARC/INFO coverage.
If you haven’t used ER Mapper before you may still want to browse
through this manual to get an idea of the diverse ways in which the
product can be used.
ER Mapper can both import/export ARC/INFO coverages, and
directly edit them.
You do not need to have ARC/INFO present on your system to be
able to edit or access ARC/INFO coverages.
ER Mapper also supports an ARC/PLOT based dynamic link, which
allows ARC/PLOT based information to be integrated into the ER
Mapper imagery. This is the only link which actually requires
ARC/INFO to be present, as it uses ARC/PLOT to generate the
dynamic link data.
As with all information that can be displayed with ER Mapper,
ARC/INFO data can be displayed over other information such as
raster imagery, and can be included as map layers for final hardcopy
maps.
After editing ARC/INFO coverages with ER Mapper, you should run
ARC/INFO's clean or build utilities, just as you would after
changing coverages with ARC/EDIT.
Airphoto_with_roads.alg
This algorithm shows an ARC/INFO coverage over airphoto data. A
typical use would be to update the ARC/INFO coverage based on the
latest airphotos.
Fast_SPOT_Pan_with_roads.alg
This example shows San Diego and La Jolla roads on top of a SPOT
Panchromatic image of San Diego.
SPOT_Pan_with_roads_and_drainage.alg
This example shows several ARC/INFO coverages - roads and
drainage - shown over a SPOT panchromatic image. Each coverage
is shown in a different color.
via_ARCPLOT_Airphoto_with_roads.alg
This example requires ARC/PLOT (UNIX only) to display a coverage
over an airphoto. Although this dynamic link is slower than the other
links (which directly access the coverage), it allows any ARC/PLOT
based commands to be integrated into ER Mapper images or
hardcopy.
via_ARCPLOT_Coast_with_boundaries.alg
Similar to via_ARCPLOT_Airphoto_with_roads, except it shows
coastlines and boundaries using ARC/PLOT.
AutoCAD_DXF
DXF_Roads_over_Airphoto.alg
Shows an airphoto as an RGB image with vector data over the top.
The vector data has been created using the ER Mapper annotation
tool while editing over the airphoto.
To edit the vector data, click on the Annotation toolbar button.
Digital_Elevation
Digital Elevation image data processed with directional filters
provides lineament detection. It can also be used to refine Landsat
classification results.
Topographic or digital terrain data, once gridded, can be processed
and displayed to highlight important physiographic features such as
palaeo stream terraces in placer gold and tin deposits.
You can also drape other images over topography. Topographic
distribution of satellite band response and geophysical/geochemical
anomalies are often used as a diagnostic tool.
DEM data can also be useful when used in conjunction with stream
or river vector images.
The algorithms supplied with ER Mapper for DEM data are listed
below.
Slope_degrees.alg
This algorithm demonstrates the use of the Slope filter, which
calculates the slope, or steepness, for a DTM. The numbers output
from the Slope_Degrees filter range from 0 to 90. 0 indicates a flat
area, with no slope; 90 indicates a vertical slope.
Although slope is useful by itself, a common use is to use a formula to
select only slopes with a maximum grade. For example, the following
formula would only show slopes with less than 5 degrees of slope:
IF input1 <= 5 THEN input1 ELSE NULL
For this formula to work, input1 should be mapped to a DTM,
and the input layer should have a Slope_degrees filter applied
prior to the formula.
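A minimal sketch of the Slope_degrees idea, plus the formula's NULL-masking step, assuming a regularly gridded DTM and central differences. The function names are hypothetical and this is not ER Mapper's actual filter code:

```python
import math

def slope_degrees(dtm, cell):
    """Slope in degrees for the inner cells of a gridded DTM
    (0 = flat, 90 = vertical), using central differences."""
    rows, cols = len(dtm), len(dtm[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            dzdx = (dtm[r][c + 1] - dtm[r][c - 1]) / (2 * cell)
            dzdy = (dtm[r + 1][c] - dtm[r - 1][c]) / (2 * cell)
            out[r][c] = math.degrees(math.atan(math.hypot(dzdx, dzdy)))
    return out

def keep_gentle(slopes, max_deg=5.0):
    """IF input1 <= 5 THEN input1 ELSE NULL, with None standing in for NULL."""
    return [[s if s <= max_deg else None for s in row] for row in slopes]

# A plane rising 1 m per 25 m cell: slope = atan(1/25), about 2.29 degrees
plane = [[float(c) for c in range(3)] for _ in range(3)]
print(keep_gentle(slope_degrees(plane, cell=25.0))[1][1])
```
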
Slope_percent.alg
This algorithm demonstrates the use of the Slope filter, which
calculates the slope, or steepness, for a DTM. The numbers output
from the Slope_percent filter range from 0 to 200. 0 indicates a flat
area, with no slope. 100 indicates a 45 degree slope, and 200
indicates a vertical slope.
Although slope is useful by itself, a common use is to use a formula to
select only slopes with a maximum grade. For example, the following
formula would only show slopes with less than 5 percent of slope:
IF input1 <= 5 THEN input1 ELSE NULL
For this formula to work, input1 should be mapped to a DTM,
and the input layer should have a Slope_percent filter applied
prior to the formula.
Aspect.alg
This algorithm demonstrates the use of the Aspect filter, which
calculates the aspect or direction of slope for a DEM. The numbers
output from the Aspect filter range from 0 to 361. Zero indicates a
north facing slope, 90 indicates an east facing slope, and so on. The
special number 361 is used to indicate areas that are perfectly flat
(for example, water) with no aspect for the slope.
Although Aspect is useful by itself, a common use is to use a formula
to select only slopes facing a certain direction. For example, the
following formula would only show slopes within 10 degrees of
south:
IF input1 >=170 AND input1 <= 190 THEN 1 ELSE NULL
For this formula to work, input1 should be mapped to a DTM,
and the input layer should have an Aspect filter applied prior to
the formula.
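A minimal sketch of an Aspect calculation with the same conventions (0 = north facing, 361 = flat), assuming a regular grid with the row index increasing southward. The function name is hypothetical and ER Mapper's filter may differ in detail:

```python
import math

def aspect(dtm, cell):
    """Aspect of the slope for inner cells: 0 = north facing, 90 = east
    facing, and the special value 361 for perfectly flat cells
    (uncomputed border cells are also left at 361)."""
    rows, cols = len(dtm), len(dtm[0])
    out = [[361.0] * cols for _ in range(rows)]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            dzdx = (dtm[r][c + 1] - dtm[r][c - 1]) / (2 * cell)  # east gradient
            dzdy = (dtm[r + 1][c] - dtm[r - 1][c]) / (2 * cell)  # south gradient
            if dzdx == 0 and dzdy == 0:
                continue                          # flat: keep 361
            # Compass bearing of the downslope (east, north) direction
            out[r][c] = math.degrees(math.atan2(-dzdx, dzdy)) % 360.0
    return out

# Elevation increases southward, so the slope faces north (aspect 0)
north_facing = [[0.0, 0.0, 0.0], [1.0, 1.0, 1.0], [2.0, 2.0, 2.0]]
print(aspect(north_facing, cell=25.0)[1][1])
```
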
Australia_Colordrape.alg
An example of colordrape, using a DTM for Australia. A colordrape
algorithm uses the DTM in two ways: The image is color coded
according to height (in this case, red indicates lower areas, blue higher
areas), and an intensity layer has sun shading turned on to highlight
structure.
If this algorithm is viewed using an ER Mapper 3-D viewer or
printed using ER Mapper hardcopy stereo, the Intensity layer is
used for height. This makes it easy to use this algorithm for 3-D
output.
Colordrape.alg
This is the generic colordrape algorithm for DTMs. This algorithm has
two layers: a Pseudocolor layer showing the DTM as color, and an
Intensity layer showing the DTM structure by shading the Intensity
layer.
This algorithm can be used for any DTM by loading the algorithm,
loading your new DTM into the input image, and clicking on Refresh
Image with 99% clip on limits
to re-clip the image.
If this algorithm is viewed using an ER Mapper 3-D viewer or
printed using ER Mapper hardcopy stereo, the Intensity layer is
used for height. This makes it easy to use this algorithm for 3-D
output.
Colordrape_shiny_look.alg
This algorithm demonstrates how ER Mapper’s formula and
algorithm processing can produce quite different results.
A DTM has been saved as a virtual dataset with two layers, Height
(from the DTM) and Structure (from the DTM, but with sun shading
applied). This virtual dataset is used as input into the
Colordrape_shiny_look algorithm, which uses a HSI (Hue Saturation
Intensity) model to highlight the image in such a way that it has a
gloss or reflectance. This is achieved by modifying the saturation of
the image based on the slope of the DTM.
Contours_from_Raster.alg
Demonstrates the use of formula to compute raster based contours
from a DTM.
The general concept used for this algorithm is to:
• Scale the input DTM into a range of integer values, one value for
each contour level desired. The following formula does this:
(CEIL(input1 / 100) * 100)
• Add a Laplacian filter after the above formula. The Laplacian filter
will cause all edges to be shown as non-zero numbers, and all
areas that are flat (areas between contour lines) to be zero.
• Add a notch transform to only show data above zero; otherwise
show it as zero (which will be rejected from the classification
layer).
All of the above is put into a classification layer, which then
'classifies' the image according to whether a contour line should be
present or not at each pixel.
For full vector based contouring, we recommend using one of the
many products that do contouring with dynamic links to ER Mapper.
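The quantize-then-Laplacian steps above can be sketched as follows. The function name is hypothetical, and the Laplacian shown is the common 4-neighbour form, which may differ from ER Mapper's filter:

```python
import math

def raster_contours(dtm, interval=100.0):
    """Flag contour cells: quantize heights with CEIL(h / interval) * interval,
    then mark inner cells where a 4-neighbour Laplacian is non-zero."""
    q = [[math.ceil(h / interval) * interval for h in row] for row in dtm]
    rows, cols = len(q), len(q[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            lap = (4 * q[r][c] - q[r - 1][c] - q[r + 1][c]
                   - q[r][c - 1] - q[r][c + 1])
            out[r][c] = 1 if lap != 0 else 0  # edge between contour levels
    return out

dtm = [[40.0, 60.0, 80.0],
       [90.0, 110.0, 130.0],
       [140.0, 160.0, 180.0]]
print(raster_contours(dtm))   # the centre cell sits on the 100 m level edge
```
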
Dip.alg
This algorithm computes the DIP from a DTM. DIP is a measure of
structure within the image. It is similar to having a sun-shaded
image with the sun overhead. DIP shows structure in a DTM,
regardless of angle.
Edge_Shaded.alg
This algorithm implements a 3 x 3 north-west shading filter on one
band of image data.
Example Image: examples\Shared_Data\Australia_DTM
The result is as if the image was illuminated by a light in the northwest corner of the image. Because illumination is from the
northwest, lineaments, faults and features running in a diagonal
direction from north-east to south-west are highlighted.
The transform limits typed in were much narrower (-50 to 50) than
the Actual Input Limits. Other transformations you can try are
clipping the limits of the linear transform, or the Gaussian Equalize
and Histogram Equalize from the Auto Options menu.
You can use different edge filters to highlight edges from different
angles. Also consider using the Sun Angle shading option in layers
instead of filters, which offers real time variation of sun angle and
azimuth.
grayscale.alg
A very simple display of a DTM as a pure grayscale image. This is the
least useful way to display DTMs. A sun-shaded image, or better still
a colordrape image, enables much more information to be extracted
from the DTM.
Pseudocolor.alg
This algorithm displays a digital terrain model (DTM), using a
Pseudocolor look-up table.
In this algorithm the red indicates areas of high elevation and blue
indicates areas of low elevation.
Realtime_Sun_Shade.alg
This algorithm provides the means for using real time shading.
This algorithm is simply a single layer, containing a
DTM image, with real-time shading on the layer. Real time shading
enables you to move the position of the sun relative to your image
data. Illumination enhances lineaments, faults and features.
Ers1
Contains algorithms related to Radar.
Landsat_MSS
Landsat MSS is a second generation earth resources satellite with a
wide synoptic view - a 185 km swath. It has a relatively large ground
cell size (81.5 x 81.5 m) giving a poor spatial resolution, and from a
geoscience point of view a very limited spectral resolution of four
very broad bands in the green visible to near infrared region.
Radiometric quantisation is 7 bits (0.5-0.8 um) and 6 bits (0.8-1.1 um).
This wide synoptic view allows structural interpretation of large
areas, contributing to the evaluation of the hydrocarbon and mineral
potential of certain areas. The spectral data, although limited, has
allowed the identification of iron oxide-rich lithologies and
contributed significantly to lithological mapping.
See the Customizing ER Mapper manual for full characteristics of the
Landsat MSS (Landsat 4).
The algorithms supplied with ER Mapper for Landsat MSS data
contained in the examples\Data_Types\Landsat_MSS directory are
listed below.
LandsatMSS_NDVI.alg
This algorithm defines a vegetation ratio called the Normalized
Difference Vegetation Index or NDVI, which has been shown to be strongly
correlated with the amount of vegetation (biomass).
(I1 - I2) / (I1 + I2)
(B4:0.95_um - B2:0.65_um) / (B4:0.95_um + B2:0.65_um)
Example Image: examples\Shared_Data\Landsat_MSS_27Aug91
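The NDVI is a simple per-pixel ratio; a minimal stand-alone sketch (the function name is hypothetical, with None standing in for a null result where the denominator is zero):

```python
def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red) per pixel; None where the sum is 0."""
    return [(n - r) / (n + r) if (n + r) != 0 else None
            for n, r in zip(nir, red)]

# Band 4 (0.95 um) as near infrared, Band 2 (0.65 um) as red
nir = [120.0, 80.0, 30.0]
red = [40.0, 40.0, 30.0]
print(ndvi(nir, red))   # higher values indicate denser vegetation
```
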
LandsatMSS_PVI6.alg
Computes the Perpendicular Vegetation Index (PVI) for Landsat MSS
6, which is:
(((1.091 * I1) - I2) - 5.49) / sqrt((1.091 * 1.091) +1)
Band 3: 0.75 um is mapped to Input1 (I1), and Band 2: 0.65 um
mapped to Input2.
LandsatMSS_PVI7.alg
Computes the Perpendicular Vegetation Index (PVI) for Landsat MSS
7, which is:
(((2.4 * I1) - I2) - 0.01) / sqrt((2.4 * 2.4) +1)
Band 4: 0.95 um is mapped to Input1 (I1), and Band 2: 0.65 um
mapped to Input2.
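The two PVI variants above share one algebraic form; a small sketch with that shared form spelled out (helper names are hypothetical):

```python
import math

def pvi(i1, i2, slope, intercept):
    """Shared form of the two PVI formulae above:
    (((slope * I1) - I2) - intercept) / sqrt(slope^2 + 1)."""
    return ((slope * i1 - i2) - intercept) / math.sqrt(slope * slope + 1.0)

def pvi_mss6(i1, i2):
    # I1 = Band 3: 0.75 um, I2 = Band 2: 0.65 um
    return pvi(i1, i2, 1.091, 5.49)

def pvi_mss7(i1, i2):
    # I1 = Band 4: 0.95 um, I2 = Band 2: 0.65 um
    return pvi(i1, i2, 2.4, 0.01)

print(pvi_mss7(50.0, 40.0))
```
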
LandsatMSS_Tasseled_Cap.alg
Three layers, each layer containing the formula for one of the
tasseled cap vegetation calculations for Landsat MSS:
•
Wetness
•
Greenness
•
Brightness
Turn on the layer for the appropriate tasseled cap to see the
calculation.
Landsat_MSS_natural_color.alg
Generates a ‘natural color’ view from a MSS image. The formulae
used are:
RED:Band 2: 0.65 um
GREEN:((Band 2: 0.65 um * 3) + Band 4: 0.95 um) / 4
BLUE:Band 1: 0.55 um
This approximates a natural looking image from MSS imagery, with
vegetation showing as green.
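The three formulae above apply per pixel; a minimal sketch (hypothetical function name, band values assumed to be on a 0-255 scale):

```python
def natural_color(b1, b2, b4):
    """Per-pixel 'natural color' composite from MSS bands:
    RED = Band 2, GREEN = ((Band 2 * 3) + Band 4) / 4, BLUE = Band 1."""
    return b2, ((b2 * 3.0) + b4) / 4.0, b1

# Example pixel: Band 1 = 30, Band 2 = 60, Band 4 = 120
print(natural_color(30.0, 60.0, 120.0))
```
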
RGB_321.alg
Shows bands 3,2,1 as an RGB image. Vegetated areas show as red.
RGB_321_sharpened_with_SPOT_Pan.alg
Shows bands 3,2,1 of Landsat MSS as Red, Green, Blue. A SPOT
Panchromatic image is included in the Intensity layer to sharpen the
image. Landsat MSS has 80 meter cell resolution; SPOT Pan has a
10 meter cell resolution.
RGB_421.alg
This algorithm is a standard false color image obtained by assigning
Band 4 (0.95 um) to the RED layer, Band2 (0.65 um) to the GREEN
layer and Band 1 (0.55 um) to the BLUE layer.
Example Image: examples\Shared_Data\Landsat_MSS_27Aug91
This is a commonly used color combination for Landsat MSS
composites because the image is similar to color infrared
photography, particularly in the red rendition of vegetation.
TC_Greeness_over_Brightness.alg
A colordrape algorithm with the Greeness of vegetation shown in
color (the Pseudocolor layer), and the brightness of vegetation
shown as Intensity.
Landsat_TM
Landsat TM is a second generation earth resources satellite, and
combines reasonable spatial resolution (cell size of 30 meters by 30
meters) with a reasonable range of spectral bands (7 bands in visible
and near, short and mid infrared wavelengths). The first band covers
the range from 450nm to 520nm, which roughly corresponds to blue
light in the visible spectrum. Refer to the Open Standards manual for
full characteristics of the Landsat TM (Landsat 5). Unlike airborne
data, Landsat TM data is readily available for most of the world. The
algorithms supplied with ER Mapper for Landsat TM data contained
in the examples\Data_Types\Landsat_TM directory are listed below.
Abrams_Ratios.alg
This algorithm combines previously defined ratios into one RGB
composite display. The ratio which highlights phyllosilicates (+clays,
carbonates) is displayed in the red layer (Band 5/Band 7(1.6/2.2)).
The iron oxide ratio (Band 3/Band 2 (0.66/0.56)) is displayed in the
green layer and vegetation (Band 4/3 (0.83/0.66)) is displayed in
the blue layer.
(I1 - RMIN(,R1,I1)) / (I2 - RMIN(,R1,I2))
(B5:1.65_um - RMIN(, All, B5:1.65_um)) / (B7:2.215_um - RMIN(, All, B7:2.215_um))
Example Image: examples\Shared_Data\Landsat_TM_year_1985
In this algorithm pixels dominated by responses from phyllosilicates
(clay mineral-carbonate)-rich outcrop, or soil, are red; those
dominated by iron oxides are green and those by vegetation, blue.
Pixels with responses due to mixtures of clay+iron oxide, will appear
variously as red-orange-yellow. Areas of iron-rich soil with a grass
cover will appear as cyan and areas of granite soil with a grass cover
will be magenta.
Clay_ratio.alg
This algorithm implements the ratio that highlights clays,
phyllosilicates and carbonates for Landsat TM. It does this by looking
for high spectral responses in the 1.65um (band 5) wavelength and
low responses in the 2.215_um (band 7) wavelength.
(I1 - RMIN(,REGION1,I1)) / (I2 - RMIN(,REGION1,I2))
(B5:1.65_um - RMIN(, All, B5:1.65_um)) / (B7:2.215_um - RMIN(, All, B7:2.215_um))
This ratio has difficulty in separating the response from clays et al.
and vegetation. In the example Landsat TM image it is apparent that
most of the response shown is due to vegetation. There are ways of
combining the two algorithms (typically by using Red-Green-Blue
false color) to highlight only oxides. These are discussed in more
detail under the LandsatTM_abrams_ratios algorithm.
Colordrape_Greeness_over_Brightness.alg
A colordrape algorithm; the Greeness of vegetation is shown in color
(Pseudocolor layer), and the brightness of vegetation is shown as
Intensity.
Colordrape_NDVI_over_PC1.alg
A colordrape algorithm, with the NDVI vegetation index in color
shown over PC1 as intensity.
As PC1 is often albedo from an image, this can be a good way to
correlate vegetation with terrain.
Decorrelation_Stretch.alg
This algorithm carries out a decorrelation stretch by converting the
image to principal component space, histogram equalizing PCs 1, 2
and then inverting back to normal space.
Edge_Shade_from_NE.alg
This algorithm implements a 3 x 3 north-east shading filter on one
band of data; by default band 1.
Example Image: examples\Shared_Data\Landsat_TM_year_1985
The result is as if the image was illuminated by a light on the
north-east side of the image.
Because illumination is from the north-east, lineaments, faults and
features running in a diagonal direction from north-west to
south-east are most strongly highlighted.
A filter is a spatial operation that is the result of moving a template
over the source image, which generates output data based on spatial
variations. Filters are used to detect and enhance edges, sharpen
images, smooth images and remove noise.
Although this algorithm processes only a single band of data, filters
are also useful to look at processed images (such as an image that
highlights clays) to look for structural features such as lineaments.
In addition to the North-East shading filter used by this algorithm,
there are other filters which may be used; for example, sharpening
filters. User defined filters which carry out specialized processing
may be added.
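The template-moving operation described above can be sketched as a 3 x 3 neighborhood filter in NumPy. The kernel coefficients below are an assumption for illustration — the exact weights used by Edge_Shade_from_NE.alg are not given here:

```python
import numpy as np

# One plausible 3 x 3 north-east shading kernel (assumed coefficients):
# opposite signs on the NE and SW sides respond to diagonal slopes.
NE_KERNEL = np.array([[0.0, -1.0, -1.0],
                      [1.0,  0.0, -1.0],
                      [1.0,  1.0,  0.0]])

def filter3x3(band, kernel):
    """Minimal 3 x 3 moving-template filter; edge pixels are left zero."""
    out = np.zeros_like(band, dtype=float)
    for i in range(1, band.shape[0] - 1):
        for j in range(1, band.shape[1] - 1):
            window = band[i - 1:i + 2, j - 1:j + 2]
            out[i, j] = np.sum(window * kernel)
    return out

# A uniform diagonal ramp gives a constant interior response.
rows, cols = np.mgrid[0:5, 0:5]
ramp = (cols - rows).astype(float)
shaded = filter3x3(ramp, NE_KERNEL)
```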
grayscale.alg
This algorithm will show the first band of Landsat TM (blue visible
light band) as a grayscale image. The result is a similar effect to an
oblique photo of the area.
Example Image: examples\Shared_Data\Landsat_TM_year_1985
Iron_Oxide_ratio.alg
This algorithm implements the ratio that highlights iron oxides for
Landsat TM, by looking for a higher spectral response at the 0.66 um
(band 3) wavelength than at the 0.56 um (band 2) wavelength. This
is characteristic of the spectral response of iron oxides.
(I1 - RMIN(,R1,I1)) / (I2 - RMIN(,R1,I2))
(B3:0.66_um - RMIN(, All, B3:0.66_um)) / (B2:0.56_um - RMIN(, All, B2:0.56_um))
Example Image: examples\Shared_Data\Landsat_TM_year_1985
This ratio has difficulty separating the response from iron oxides and
dry vegetation.
Because these bands suffer from considerable atmospheric scatter,
the resulting image is somewhat noisy. Smoothing the iron oxide
ratio image with a 3 x 3 average filter will often make the image
easier to examine.
Landsat_TM_area_specific_stretch.alg
This algorithm demonstrates stretching data within specific areas.
This algorithm has Red, Green and Blue layers for land, and for water
(a total of 6 layers).
The water layers have a formula to only show the data if it is over
water. Band 5 is used to decide if a pixel is over water or not, by
using the following formula:
IF INPUT1 < 20 THEN INPUT2 ELSE NULL
Where Input1 is mapped to Band 5, and Input2 is mapped to
whatever band is to be displayed (depending on if it is a Red, Green
or Blue layer).
The ELSE NULL means that over land, no pixels are shown for the
water layer, so the land layers ‘shine through’ in these areas. This
also means the transform for the water layers only show data for
water.
The water transforms have then been stretched to highlight water.
Water depth, kelp fields, and so on become visible with these
enhancements.
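The masking formula can be sketched in NumPy, with NaN playing the role of NULL so that the land layers underneath 'shine through'. The threshold of 20 comes from the formula above; the sample arrays are illustrative:

```python
import numpy as np

# Sketch of "IF INPUT1 < 20 THEN INPUT2 ELSE NULL": band 5 (INPUT1)
# selects water pixels; the display band (INPUT2) is shown only there.
band5 = np.array([[5, 30], [12, 40]])          # low band-5 values = water
display_band = np.array([[80, 90], [70, 60]])  # whatever band is displayed

# NaN stands in for NULL: land pixels contribute nothing to this layer.
water_only = np.where(band5 < 20, display_band, np.nan)
```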
Landsat_TM_Tasseled_Cap_in_RGB.alg
This shows the Brightness, Greenness and Wetness Tasseled Caps as
Red, Green and Blue.
Lsfit.alg
This algorithm computes the Least Squares Fit for Landsat TM.
Pleasing_image.alg
Generates a ‘pleasing image’ for Landsat TM, based on a modified
version of the Brovey Transform (using Landsat TM only).
It gives a very good looking image using only Landsat TM data.
Principal_Component_1.alg
Computes the Principal Component 1 (PC1) for Landsat TM. Only
bands 1-5 and 7 are used in the formula. The generic formula is:
SIGMA(I1..I6 | I? * PC_COV((I1..I6), Region1, I?, 1))
Inputs I1 to I5 are mapped to bands 1 to 5. Input 6 is mapped to
band 7. Region 1 can be whatever region is to be used to compute
the PC, and is often the ‘All’ region. The final number 1 in the formula
is the PC to compute; in this example, PC1. Changing this number
generates a different PC.
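The same computation can be sketched in NumPy — not ER Mapper's exact implementation, but the standard covariance-eigenvector construction the PC_COV formula is based on; the test data are synthetic:

```python
import numpy as np

def principal_component(bands, k=1):
    """Compute the k-th principal component image from a band stack:
    project mean-centered pixels onto the k-th largest eigenvector of
    the band covariance matrix (a sketch of the formula above)."""
    h, w, n = bands.shape
    flat = bands.reshape(-1, n).astype(float)
    centered = flat - flat.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
    vec = eigvecs[:, -k]                     # k-th largest component
    return (centered @ vec).reshape(h, w)

# Six synthetic near-identical bands (standing in for bands 1-5 and 7):
# PC1 should capture their shared signal.
rng = np.random.default_rng(1)
shared = rng.normal(size=(16, 16, 1))
stack = np.concatenate(
    [shared + 0.01 * rng.normal(size=(16, 16, 1)) for _ in range(6)], axis=2)
pc1 = principal_component(stack, k=1)
```

Changing `k` selects a different PC, just as changing the final number in the formula does.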
RGB_321.alg
This algorithm is a ‘natural’ color scene comprising TM bands 3-2-1
(approximately equal to red-green-blue visible light) imaged in the
red-green-blue computer monitor guns respectively.
Example Image: examples\Shared_Data\Landsat_TM_year_1985
The composite is not totally natural because the TM bands do not
exactly match the red, green and blue spectral regions; bands 1
(blue visible) and 3 (red visible) cover a narrower spectral range than that
which the eye recognizes as blue and red. This image looks partly
realistic but there is an apparent absence of green vegetation
(although the northeast corner and a southeast strip have blue-green
coloration). The “greenness” response of plants is not a very strong
one (compare the reflectance of vegetation in the green visible
range, 0.56 um, TM band 2), but the human eye is most sensitive to
green (it has many more cones sensitive to green) and therefore
magnifies the response relative to blue and red.
RGB_321_to_HSI_to_RGB.alg
Converts bands 3, 2 and 1 from RGB space into Hue, Saturation,
Intensity (HSI) color space. Allows an image to be enhanced in HSI space instead of
in RGB color space. ER Mapper can enhance HSI images in real-time.
RGB_341.alg
The ‘greenness’ of the ‘natural’ image can be improved by substituting
TM band 4 for band 2. In band 4 (centred at 0.83 um) in the near
infrared, vegetation has very high reflectance.
Example Image: examples\Shared_Data\Landsat_TM_year_1985
LandsatTM_rgb_341 is basically the same as LandsatTM_rgb_321,
except we have substituted band 4 for band 2.
By putting band 4 into the GREEN layer the high spectral response
from vegetation (band 4) produces the greenness. This is now a false
color image. This translates the response from a spectral band that
is not from the visible range (has no ‘color’), into a visible
wavelength so that we can ‘see’ a wavelength not normally used.
RGB_432.alg
This algorithm is similar to the LandsatTM_rgb_341 algorithm shown
previously. LandsatTM_rgb_432 is another false color image, usually
referred to as ‘standard’ false color. In this image TM band 4 is
assigned to the RED layer, band 3 (0.66 µm = visible red) is in a
green layer and band 2 (0.56 µm = visible green) in a blue layer.
Example Image: examples\Shared_Data\Landsat_TM_year_1985
The result of this RGB color composite is that the high reflectance of
vegetation (0.83 µm, TM band 4) makes vegetation appear red; red
features, such as Fe-rich soils appear green, while green features
appear blue.
This false color image is generally the one used to evaluate a
particular image for a variety of resource-based applications. People
interested in vegetation can see the location and density of
vegetation; people interested in geology can make the same
determination - and avoid the over-vegetated images.
The Gaussian Equalize transformation was applied to each band.
RGB_531.alg
This algorithm is a false color image which may be used to realize
some lithological discrimination. TM band 5 (1.65 µm) is assigned to a red
layer; band 3 (0.66 µm) to a green layer and band 1 (0.48 µm) to a blue
layer.
Example Image: examples\Shared_Data\Landsat_TM_year_1985
A Gaussian Equalize transformation appears to give the desired results.
RGB_541.alg
Displays bands 5, 4, 1 as an RGB image.
RGB_541_to_HSI_to_RGB.alg
Converts bands 5, 4, 1 from RGB space into Hue, Saturation,
Intensity (HSI) color space. Allows an image to be enhanced in HSI space instead of in
RGB color space. ER Mapper can enhance HSI images in real-time.
Formulas in the HSI layers are used to convert bands 5, 4, 1 from RGB
color space to HSI color space.
RGB_542.alg
Landsat TM Composite, rgb = bands 5,4,2
RGB_741.alg
This is a false color image widely used for geological applications. TM
band 7 (2.2 µm) is assigned to a red layer; band 4 (0.83 µm) to a
green layer and band 1 (0.48 µm) to a blue layer.
Example Image: examples\Shared_Data\Landsat_TM_year_1985
The LandsatTM_rgb_321 algorithm above showed that while a near-natural image can be created, the image does not show a great deal
of detail compared to other band selections. This is because
generally the reflectances of terrestrial materials in adjacent spectral
bands are similar.
The advantage of the LandsatTM_rgb_741 algorithm is that we
achieve better color separation, improving detail and information.
A second advantage is that certain mineral groups of interest have
distinctive spectral features in TM bands 7, 4 and 1.
In band 1, iron-bearing minerals have low reflectance whereas
phyllosilicates, quartz, and other light coloured minerals have high
reflectance. In band 7 phyllosilicates and carbonates have
absorption features whereas hematite and, to a lesser extent,
goethite have higher responses. Therefore the 741 rgb composite
allows a certain degree of lithological interpretation. Hematite-rich
rock and soil is red, quartzites generally appear blue to blue-green
because they are light coloured and have no iron, while limestones
generally appear pale blue or lavender because of the absorption of
carbonate in band 7 and their general pale colour (high in blue gun
of the computer monitor).
Various cements and fracture fillings also add to the spectral
responses. Fireburn scars, particularly recent ones, appear red and
may be confused with hematite rich areas.
The Gaussian Equalize transformation was applied to each band.
RGB_Principal_Components_123.alg
Generates PC1, PC2 and PC3 and displays them as an RGB image.
RGB_Principal_Components_bands_741.alg
Generates a PC1, PC2, PC3 image using only bands 7, 4, 1 of the
Landsat TM data. Because these bands show mineralogy, this can be
a good image to highlight geology.
RGB_to_HSI_to_RGB.alg
A generic RGB to HSI to RGB colorspace algorithm. This algorithm
has three layers (Hue, Saturation and Intensity), which are
computed from the three bands selected to use as RGB. As the layers
are in HSI color space, you can transform these in real time using
the layer transforms. ER Mapper automatically converts HSI layers
into RGB for final display.
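The round trip can be sketched with Python's standard colorsys module. Its HSV model is used below as a stand-in for ER Mapper's HSI color space, and the saturation boost is just one example of an enhancement applied between the two conversions:

```python
import colorsys

def boost_saturation(r, g, b, factor=1.5):
    """Round-trip sketch: RGB -> HSV, enhance, HSV -> RGB.
    (HSV stands in for ER Mapper's HSI model; channels are 0..1.)"""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    s = min(1.0, s * factor)   # the "enhancement" applied in HSV space
    return colorsys.hsv_to_rgb(h, s, v)

pixel = (0.6, 0.5, 0.4)        # a muted tan pixel
enhanced = boost_saturation(*pixel)
```

Boosting saturation widens the spread between channels while leaving the brightest channel (the value/intensity) unchanged.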
Scan_line_noise_removal.alg
This algorithm demonstrates the removal of scan line noise.
Sensors_16.alg
Used by the Scan_line_noise_removal.alg to split an image into a
view based on scan lines; there are 16 scan lines for Landsat TM.
Tasseled_Cap_Transforms.alg
An algorithm with three layers, each layer containing the formula for
one of the tasseled cap vegetation calculations for Landsat TM:
• Wetness
• Greenness
• Brightness
Turn on the layer for the appropriate tasseled cap to see the
calculation. Look at the formula for a given layer to see the exact
formula used for Tasseled Cap Transformations.
Vegetation_NDVI.alg
Computes the NDVI (Normalized Difference Vegetation Index) for
Landsat TM.
Vegetation_TNDVI.alg
Computes the TNDVI for Landsat TM.
These are various algorithms used to highlight vegetation. They all
use the high near-infrared (NIR) reflectivity of green vegetation
(band 4: 0.83um) compared to the high red chlorophyll absorption
(band 3: 0.66um) to highlight vegetation.
sqrt(((I1 - I2) / (I1 + I2)) + 0.5)
SQRT(((B4:0.83_um - B3:0.66_um) / (B4:0.83_um +
B3:0.66_um)) + 0.5)
Example Image: examples\Shared_Data\Landsat_TM_year_1985
Like most processing techniques for Landsat TM, this formula
can enhance information other than vegetation, as there are
materials other than vegetation that have a high spectral
response in NIR and low spectral response in Red.
As dry vegetation has a different spectral response to green
vegetation, this algorithm is not ideal for highlighting dry grass. The
ratio of band 5 over band 7 (e.g. 1.65 um / 2.215 um) is better at
highlighting dry vegetation. Unfortunately, the 5/7 ratio will also
highlight hydroxides.
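Both indices are straightforward to sketch in NumPy (the reflectance values below are illustrative):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red)

def tndvi(nir, red):
    """Transformed NDVI: SQRT(NDVI + 0.5), matching the formula above."""
    return np.sqrt(ndvi(nir, red) + 0.5)

nir = np.array([0.5, 0.3])   # band 4 (0.83 um) reflectance
red = np.array([0.1, 0.3])   # band 3 (0.66 um) reflectance
```

The first pixel (high NIR, low red) is the green-vegetation case; the second (equal NIR and red) gives an NDVI of zero.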
Magnetics_And_Radiometrics
The most common geophysical images manipulated and displayed
by exploration companies are aeromagnetics, gravity and
radiometrics. Such continuous tone images of gridded geophysical
data interpolated from real samples preserve the information
content of the raw data and are far more easily interpreted than
contour maps.
These algorithms demonstrate processing airborne magnetics and
radiometrics data. More examples can be found in the
applications_Mineral_Exploration optionally installed images, which
also demonstrate the processing of Landsat, DTM, SPOT, gravity,
ARC/INFO and other types of data.
Aeromagnetics
Aeromagnetic measurements represent variations in the strength
and direction of the Earth’s magnetic field produced by rocks
containing a significant amount of magnetic minerals, commonly
magnetite. The shape and magnitude of an anomaly produced by
one body of rock is complexly related to the body’s shape, depth and
magnetization.
Magnetization is determined by the amount and distribution of
magnetic minerals and the magnetic properties of those minerals,
which are influenced by a number of factors including the history of
the rock. The location of the anomaly over the rock body is normally
offset southward and is accompanied by a weak low on the northern
side, called a polarity low. Anomalies on aeromagnetic maps when
viewed as patterns generally express structural, topographic, and
lithologic variations. Anomalies due to crystalline rocks commonly
dominate aeromagnetic maps because these rocks ordinarily are
more magnetic than other rock types.
Radiometrics
Gamma-ray measurements detect the radiation emitted by
radioisotopes in the near-surface rock and soil, which results from
decay of the natural radioelements uranium-238, thorium-232, and
potassium-40. The near-surface distribution of the natural
radioelements is controlled by geologic processes, which enables the
use of radioelement measurements in geologic mapping and mineral
exploration.
Radiometric maps can often serve as pseudo-geological outcrop maps,
or can be used for other mapping purposes.
For determining soil and geology, radiometrics is typically less
affected by vegetation and agriculture than satellite data, providing a useful
substitute for TM in vegetated areas or an adjunct to TM in arid
areas.
Radiometric units can reflect in-situ or transported soils. Radiometric
signals come from approximately the first 0.5 m of soil.
The algorithms supplied with ER Mapper for geophysical data
contained in the
examples\Data_Types\Magnetics_And_Radiometrics directory are
listed below.
Magnetics_1Q_Vertical_Derivative.alg
Shows the vertical derivative of magnetics.
Magnetics_2nd_Vertical_Derivative.alg
Shows the second vertical derivative of magnetics.
Magnetics_3Q_Vertical_Derivative.alg
Shows the vertical derivative of magnetics.
Magnetics_and_Radiometrics_Colordrape.alg
Integrates surface information from Radiometrics with sub-surface
information from magnetics as a colordrape algorithm.
The radiometrics Potassium count is shown in color (red is higher
counts), draped over magnetics, which is shaded using real time
sun-shading to show structure.
In this way, correlations between magnetics and radiometrics can be
compared. On screen annotation could be carried out over this image
if desired.
Magnetics_and_Vectors.alg
Shows magnetics as a simple grayscale image, with vector geology
structure information in vector format over the magnetics. Vectors
can be edited using the annotation tool.
Magnetics_Colordrape.alg
A very powerful way to show magnetics data, known as colordrape.
In this algorithm, magnetics data is used for both color and intensity
(the two layers in the algorithm). For intensity, the sun-shading for
the Intensity layer has been turned on to enable structure to be
easily seen.
The transform can be modified in real time for both layers (for the
Pseudocolor layer, changing coloring for the image; for the Intensity
layer, changing the overall image brightness); sun-shading can also
be modified in real time.
If this algorithm is viewed using an ER Mapper 3-D viewer or
printed using ER Mapper hardcopy stereo, the Intensity layer is
used for height. This makes it easy to use this algorithm for 3-D
output.
Magnetics_Colordrape_map_legend.alg
This shows a cut-out of part of a magnetics image, as a colordrape.
This algorithm is used as a legend in a map composition - the
algorithm is embedded into the map. Any algorithm can be used as
a legend item for a map.
Magnetics_Colordrape_shiny_look.alg
A demonstration of how ER Mapper’s formula and algorithm
processing can produce quite different results.
Magnetics data was saved as a virtual dataset, with two layers; value
and structure (the same data, but with sun shading applied). This
virtual dataset is used as input into the Colordrape_shiny_look
algorithm, which uses an HSI (Hue Saturation Intensity) model to
highlight the image in such a way that it has a gloss or reflectance.
This is achieved by modifying the saturation of the image based on
the slope of the magnetics data.
Magnetics_Colordrape_wet_look.alg
A demonstration of how ER Mapper’s formula and algorithm
processing can produce quite different results. Similar to the
Magnetics_Colordrape_shiny_look algorithm, except that different
transforms are used in saturation to achieve a wet look to the final
image.
Magnetics data has been saved as a virtual dataset, with two layers;
value and structure (the same data, but with sun shading applied).
This virtual dataset is used as input into the Colordrape_shiny_look
algorithm, which uses an HSI (Hue Saturation Intensity) model to
highlight the image in such a way that it has a gloss or reflectance.
This is achieved by modifying the saturation of the image based on
the slope of the magnetics data.
Magnetics_Contours_from_Raster.alg
Demonstrates the use of formula to compute raster based contours
from magnetics.
The general concept used for this algorithm is to:
• Scale the input magnetics into a range of integer values, one
value for each contour level desired. The following formula does
this:
(CEIL(input1 / 100) * 100)
• Add a Laplacian filter after the above formula. The Laplacian filter
will cause all edges to be shown as non-zero numbers, and all
areas that are flat (areas between contour lines) to be zero.
• Add a notch transform to only show data above zero, otherwise
show it as zero (which will be rejected from the classification
layer).
All of the above is put into a classification layer which ‘classifies’ the
image according to whether a contour line should be present at each
pixel.
For full vector based contouring, we recommend using one of the
many products that do contouring with dynamic links to ER Mapper.
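The three steps above can be sketched in NumPy. The interval of 100 matches the CEIL formula; the Laplacian and notch steps are approximated here with a 4-neighbour Laplacian and a greater-than-zero mask, so the code is a sketch of the recipe rather than ER Mapper's exact filter:

```python
import numpy as np

def raster_contours(grid, interval=100.0):
    """Raster contour sketch: quantize to contour levels, apply a
    Laplacian, and keep the non-zero (edge) pixels as contour lines."""
    levels = np.ceil(grid / interval) * interval   # (CEIL(in/100) * 100)
    lap = np.zeros_like(levels)
    # 4-neighbour Laplacian on interior pixels; flat areas give zero.
    lap[1:-1, 1:-1] = np.abs(
        4 * levels[1:-1, 1:-1]
        - levels[:-2, 1:-1] - levels[2:, 1:-1]
        - levels[1:-1, :-2] - levels[1:-1, 2:])
    return lap > 0                                 # notch: True = contour

# A synthetic east-west ramp (values 0..300) crosses several 100-unit
# contour levels, so contour pixels appear along the steps.
ramp = np.tile(np.arange(6) * 60.0, (6, 1))
mask = raster_contours(ramp)
```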
Magnetics_POB_Vertical_Derivative.alg
Shows POB vertical derivative from magnetics.
Magnetics_Pseudocolor.alg
This algorithm displays a magnetics image data as a single band
Pseudocolor display.
The high-amplitude magnetics are shown as red and the low
amplitude magnetics are shown as cooler blue-green colors.
Magnetics_Realtime_Sun_Shade.alg
This algorithm implements real time shading on a grayscale image.
Real time shade shades the aeromagnetic data as though it were
topography obliquely illuminated by the sun, using a color scale (in
this case grayscale) to show different values.
Magnetics_rgb_3angle.alg
An interesting algorithm that demonstrates the use of an RGB
algorithm that has sun-shading turned on for each of the three
layers. The sun is in a different position for each layer, with the result
that structures in certain directions are shown in a certain color.
While difficult to interpret, it does have the advantage of showing all
structure from a magnetics image for interpretation purposes.
Radiometrics_Magnetics_RGBI.alg
Shows radiometrics as RGB draped over a magnetics image. It
enables detailed interpretation based on both the radiometrics and
magnetics data.
If this algorithm is viewed using an ER Mapper 3-D viewer or
printed using ER Mapper hardcopy stereo, the Intensity layer is
used for height. This makes it easy to use this algorithm for 3-D
output.
Radiometrics_ratio_K_Th.alg
Shows Potassium over Thorium as a ratio, in color. It demonstrates
the use of formula to compute data. The formula can be easily
changed to show the ratio for any two bands of data.
RadarSat
Contains optionally loaded algorithms related to Radarsat.
Seismic
The image dataset used in these algorithms was extracted from a
Landmark seismic processing system. The image contains two
channels (bands) of data: two-way time and amplitude of the
reflected wave. The image is a 2-dimensional horizon, and this
corresponds to a plan view of the gridded seismic data. All algorithms
are in the algorithm directory ‘Data_Types\Seismic’.
More complete examples using a 3D seismic survey can be found in
the application_Oil_and_Gas optionally installed dataset directory.
Several horizon attributes can be calculated from the basic horizon
time data. Two of these (dip and azimuth) are powerful techniques
for seismic interpretation. The dip is the amount of inclination of the
horizon in the subsurface, and the azimuth is the direction of this
inclination measured from a local reference direction, usually north.
In their simplest form these attributes are given by:
Dip = ((dt/dx)^2 + (dt/dy)^2)^(1/2)
Azimuth = arctan((dt/dy) / (dt/dx))
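These two attributes can be sketched with NumPy gradients. arctan2 is used in place of arctan so the azimuth resolves to the correct quadrant; the grid spacing and test surface are illustrative:

```python
import numpy as np

def dip_and_azimuth(t, dx=1.0, dy=1.0):
    """Dip and azimuth of a horizon time grid, per the formulas above.
    NumPy returns the row-axis (y) gradient first."""
    dt_dy, dt_dx = np.gradient(t, dy, dx)
    dip = np.sqrt(dt_dx ** 2 + dt_dy ** 2)
    azimuth = np.arctan2(dt_dy, dt_dx)   # quadrant-aware arctan(dy/dx)
    return dip, azimuth

# A plane dipping purely eastward: dt/dx = 2, dt/dy = 0.
rows, cols = np.mgrid[0:4, 0:4]
t = 2.0 * cols
dip, azimuth = dip_and_azimuth(t)
```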
Horizon_Azimuth.alg
This algorithm computes the azimuth attribute of an horizon. It is
displayed using the azimuth lookup table: a 5-color Lookup Table
representing the four primary directions. The additional color is
required as the Lookup Table must wrap around; top and bottom
colors must be the same.
The formula combines the easterly dip of the time data (dt/dx) with
the northerly dip of the time data (dt/dy) to obtain the azimuth.
Example Image:
examples\Shared_Data\Landmark_Seismic_Horizon
The major structural relationships are highlighted using the azimuth
attribute. In particular the inter-relationship of faults and direction of
throw can be determined. This is usually used in conjunction with the
dip display to interpret the area.
Horizon_Colordraped.alg
This algorithm combines Pseudocolor and Intensity layers of the time
data. The illumination (shading) can be edited.
Example Image:
examples\Shared_Data\Landmark_Seismic_Horizon
This image combines the illumination (grayscale) with a color layer
of the time data. This type of display significantly enhances the
structural contours by emphasising the main structural blocks.
Horizon_Colordraped_Dip.alg
This algorithm combines a Pseudocolor time layer with either an
intensity time layer with real time shading, or an intensity dip layer
without real time shading. The result is similar, as the vertical
illumination (intensity real time shade) is essentially the same as the
dip attribute display.
The algorithm uses a step Lookup Table in Pseudocolor mode.
Example Image:
examples\Shared_Data\Landmark_Seismic_Horizon
This image is similar to the Horizon_Colordraped result as it again is
used to significantly enhance the structural contours by emphasising
the main structural blocks. The step Lookup Table emphasises the
structural contours.
Horizon_Colordraped_Step_LUT.alg
This algorithm combines Pseudocolor and Intensity layers of the time
data with sun angle shading. It is identical to Horizon_Colordraped
except that a step Lookup Table is used.
Example Image:
examples\Shared_Data\Landmark_Seismic_Horizon
Horizon_Dip.alg
This algorithm computes the dip attribute of an horizon and displays
it as a grayscale image. The formula combines the easterly dip of the
time data (dt/dx) with the northerly dip of the time data (dt/dy) to
give the true dip.
Example Image:
examples\Shared_Data\Landmark_Seismic_Horizon
The major structural trends will be highlighted using the dip
attribute. In particular the strike of faults can be determined. This
display is usually used in conjunction with the azimuth display to
interpret the area.
Horizon_IHS.alg
Horizon_Low_Amplitude.alg
Horizon_Pseudocolor.alg
This algorithm is a Pseudocolor image with the formula:
- INPUT1
Note the negative sign to invert seismic values.
Example Image:
examples\Shared_Data\Landmark_Seismic_Horizon
This image is a color coded map of the basic interpreted horizon
data. The data ranges from the structural highs (in red tones) to the
surrounding troughs (in blue tones).
Horizon_Realtime_Sun_Shade.alg
This algorithm is a Pseudocolor image using real time shading.
Example Image:
examples\Shared_Data\Landmark_Seismic_Horizon
Real time shade allows you to highlight structural lineations of
interest. By varying the sun angle (both azimuth and illumination)
particular features can be emphasised.
The image from vertical illumination is essentially equivalent to
the image using the Dip algorithm.
Once you are satisfied with the illumination direction using real time
shade, the resolution of the image can be increased by changing the
algorithm mode to Pseudocolor with a grayscale Lookup Table.
SPOT_Panchromatic
The HRV (High Resolution Visible) sensor is a pushbroom system
designed primarily for agricultural studies, with four broad bands all
in the visible region, concentrating on the spectral properties of
vegetation between 0.6 and 0.8 um.
A significant feature of the SPOT HRV sensor is that it can collect data
either in the MSS mode over its image swath width (60 km) with a
scene cell of 20 m, or, by an in-flight change, convert two sensors to
panchromatic sensing covering the same swath width with a scene
cell of 10 m resolution.
The following SPOT image has been supplied by SPOT Imaging Services
with the following acknowledgement:
SPOT Imagery copyright CNES (1990)
Distribution SPOT Imaging Services, Sydney, Australia
The algorithms supplied with ER Mapper for SPOT data in the
examples\Data_Types\SPOT_Panchromatic directory are listed
below.
Colorize_Regions_Over_grayscale.alg
This algorithm ‘colors’ regions within the SPOT image, using the
INREGION() formula to decide where the regions are. The colorized
regions are shown over the original imagery as grayscale.
grayscale.alg
Displays SPOT Pan as a grayscale image. You can add a sharpening
filter to this image to further sharpen the image.
grayscale_Hospitals_and_Fire_Stations.alg
Table based dynamic links show hospitals and fire stations as circles
over a SPOT Pan image. The tables contain the easting/northing
location for each hospital and fire station.
In_Regions.alg
ER Mapper can attach vector based polygon regions to raster
images. These regions can be used for statistics calculations,
classification, or selecting raster data within regions.
This algorithm uses the “INREGION()” formula to show raster
imagery only within certain regions.
Full conditional logic can be used, for example:
IF inregion(r1) AND NOT inregion(r2) THEN .....
Map region variables (r1, r2, etc.) into actual regions using the
formula editor.
Vectors_over_grayscale.alg
Shows vector based data over SPOT Pan imagery. The vector data
can be edited.
Hybrid_contrast_stretch.alg
SPOT_xs
SPOT_XS_and_Pan_Brovey_merge.alg
Uses the Brovey transform to merge SPOT XS and SPOT
Panchromatic imagery. The XS imagery is used for color and the Pan
imagery for spatial sharpness.
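A minimal sketch of the Brovey transform, assuming the XS bands have already been resampled to the Pan cell size (the function name and sample values are illustrative):

```python
import numpy as np

def brovey_merge(r, g, b, pan):
    """Brovey transform sketch: scale each color band by the ratio of
    the high-resolution panchromatic band to the color bands' sum, so
    the output keeps the XS color balance but the Pan spatial detail."""
    total = r + g + b + 1e-9   # epsilon avoids division by zero
    return r * pan / total, g * pan / total, b * pan / total

r = np.array([[0.2]])
g = np.array([[0.3]])
b = np.array([[0.5]])
pan = np.array([[0.9]])
r2, g2, b2 = brovey_merge(r, g, b, pan)
```

The merged channels sum to the Pan value while preserving the original band ratios, which is why the color comes from XS and the sharpness from Pan.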
SPOT_XS_natural_color.alg
A natural color image based on SPOT XS. This algorithm uses a
weighted average of bands 2 and 3 for Green, resulting in an image
that shows vegetation in Green, not Red.
SPOT_XS_NDVI_colordraped_over_Pan.alg
Shows the NDVI in color calculated from SPOT XS, over SPOT
Panchromatic as Intensity.
SPOT_XS_NDVI_veg_index_grayscale.alg
Computes the NDVI vegetation index from SPOT XS.
In this algorithm the vegetation indices are derived from the ratio of
bands 3 and 2.
I1 / I2
B3:0.84_um / B2:0.645_um
Example Image: examples\Shared_Data\SPOT_XS
The bright areas will show the areas of vegetation (high ratio
values).
SPOT_XS_rgb_321.alg
This algorithm is an RGB color composite of SPOT HRV Band 3
assigned to the Red layer, Band 2 to the Green layer and Band 1 to
the Blue layer.
Example Images:
examples\Shared_Data\SPOT_XS
examples\Shared_Data\SPOT_Pan
SPOT_XS_rgb_321_sharpened_with_SPOT_Pan.alg
This algorithm is an RGB color composite of SPOT HRV Band 3
assigned to the Red layer, Band 2 to the Green layer and Band 1 to
the Blue layer.
It also contains an Intensity layer using SPOT Pan to sharpen the
image. ER Mapper automatically converts from RGB to HSI, replaces
Intensity (with the SPOT Pan data), and converts back to RGB color
space.
Red, Green, Blue and Intensity can all be modified in real time with
the layer transforms.
A sharpen filter could be added to the intensity layer to further
sharpen this image.
SPOT_XS_rotate_hue.alg
This algorithm rotates the hue.
Functions_And_Features
3D
The algorithms in this section are all related to the 3D visualization
of imagery and vector data. Any data shown in 2D can also be shown
in 3D, provided a suitable height component can be used - which
does not have to be altitude from a DEM, although it often is.
The 3D examples can be viewed using the ER Mapper 3D viewers,
and can also be printed as a stereoscopic pair by selecting the Stereo
option within hardcopy.
In general, building a 3D algorithm is done with the following steps:
1. Define the algorithm, as a 2D view.
Include any raster imagery, vector data, dynamic links and so forth
that are to be shown in the 3D view.
2. Add a Height layer (or several if mosaicing heights from several
DEMs) to the algorithm.
Make sure height is scaled to 0-255 in the final transform - you can
also use the transforms on Height layers to clip or exaggerate certain
heights.
3. Save the algorithm.
4. Load the algorithm into one of the 3D ER Mapper views.
You can also print it as a stereoscopic hardcopy pair.
The 3D views allow you to save your algorithm. This preserves
the view position, background color, height exaggeration and
other information related to your 3D view, ensuring that your
next viewing of the data will be the same view.
Digital_Terrain_Map.alg
Demonstrates the use of DTM data in 3-D. This algorithm has two
layers: a color layer and a height layer. The Pseudocolor color layer
is the DEM, and the Height layer is also the DEM. Note that you can
also use a conventional colordrape algorithm (with a Pseudocolor
and an Intensity layer) for 3-D visualization. When a 3-D module of
ER Mapper uses an algorithm without a Height layer, it will use
Intensity for Height if there is an Intensity layer. Thus, any
colordrape algorithm can be used for 3-D display.
The advantage of a color coded DEM in 3-D is that there are two
visual clues to the DEM: the height, as shown by the 3-D module,
and the color. Typically red is used to indicate high locations, and
blue to indicate low locations, but any Lookup Table can be used to
color code heights.
The final transform for the Pseudocolor layer could also be changed,
for example to only highlight DEM values within a certain altitude
range.
Landsat_over_DTM.alg
This example algorithm drapes Landsat TM satellite imagery over
DEM data to provide a combined view with height from the DEM and
color information from the Landsat TM data. Because Landsat TM has
a 30 meter ground resolution per pixel, this type of view is only good
for large regional overviews.
The Landsat TM data and the DEM do not need the same
resolution cell size.
Landsat_SPOT_fusion_over_DTM.alg
This example improves on the Landsat_over_DTM algorithm, by first
combining the Landsat TM data with SPOT Panchromatic imagery to
provide a crisper image which is then draped over the DEM.
As SPOT Pan is 10 meter resolution, and Landsat TM is 30 meter
resolution, the spatial information from the SPOT imagery is
combined with the spectral information from the Landsat TM
imagery.
This algorithm uses the Brovey Transform to combine the Landsat
and SPOT imagery, as the Brovey Transform generally provides a
better result than the RGB/HSI transformation, which could also be
used in this case.
Land_Use_Classification.alg
This example drapes a classified image (a Landsat TM image that has
been classified into different types of land use) over a DEM image.
This allows land use to be correlated with elevation, for example
when looking for different types of forests, or when checking
sensitive areas such as valleys for environmental impact problems.
A more sophisticated version of this algorithm could be created by
defining an RGB satellite image then a Classification Display layer
over the RGB image, showing the classification only in selected
regions (using INREGION() in a formula). This combined image
would be draped over a DEM in the Height layer, resulting in an
image which shows classifications only in selected areas.
Magnetics.alg
By viewing airborne magnetics data in 3D, subtle structures within
the magnetics may be more easily discerned. This algorithm is
simply a colordrape algorithm - the magnetics data provides both
color and height.
The height in this algorithm is in fact the magnetic intensity, not
the height of terrain. We use the magnetic field strength to indicate
highs and lows. This can be done with any field-based data, such
as gravity maps.
Typically, vector based geology information would also be combined
over this algorithm.
Stereoscopic on-screen viewing is particularly useful when
viewing this type of data, as it shows subtle magnetic structures
very well.
Radiometrics_over_Magnetics.alg
Similar to the Magnetics algorithm, except the Radiometrics
channels of Potassium, Thorium and Uranium are shown in color and
draped over the magnetics as height.
This is useful for correlating radiometrics information with
magnetics.
Instead of using raw magnetics as the height, one or more of the
common geophysical filters such as upward continuations could
be applied to the magnetics data before it is used as height.
Roads_and_SPOT_over_DTM.alg
Both a raster image (SPOT satellite imagery) and a vector image are
draped over DEM data in this algorithm.
Any form of vector data, including dynamic links, can be draped over
a 3D image.
For best results the resulting 3D view should be displayed as a
hardcopy stereoscopic pair rather than on-screen. This is
because the vectors will be processed at hardcopy device
resolution (typically very high compared to onscreen), resulting
in clearly visible vectors. For on-screen viewing, the 3D image
should be as high a resolution as possible.
Shaded_Digital_Terrain_Map.alg
Similar to the Digital_Terrain_Map algorithm, except the algorithm
contains both a Height and an Intensity layer in addition to a
Pseudocolor layer to provide color to the image.
The advantage of having both an Intensity layer (which is sun-shaded
from a specified azimuth and elevation) and a Height layer is that the
Intensity layer gives the feeling of a ‘sun’ at a certain position
relative to the image, regardless of the orientation of the image in a
3D viewer. In other words, the Height layer contains a 3D sun
shading source that is fixed relative to the viewer; the Intensity layer
provides a form of sun shading that is fixed relative to the image.
This shaded 3D technique can be useful for creating realistic looking
images, especially when combined with satellite or airphoto imagery
for the color for the image. It is also handy when looking for
structure - the Intensity layer would be shaded (using the sun
shading window for the algorithm) to highlight structure in the
desired direction.
Sin_curve.alg
This generates a test 3D image, which is a sine curve.
The actual sine image used is generated using a formula in a virtual
dataset, and is an example of how to generate test data for research
purposes.
SPOT_over_DTM.alg
This is a drape of SPOT Panchromatic (1 band) satellite imagery over
a DEM. It gives good results for regional examination of an area.
The SPOT image could be combined with other imagery, for example
Landsat imagery or a classified image, to draw attention to areas in
color.
3D_multi_surface
DEM_SPOT_2_Surface.alg
Oil_and_Gas_Seismic_TWT.alg
This algorithm combines Pseudocolor and Intensity layers of the time
data. The illumination (shading) can be edited.
Example Image:
examples\Shared_Data\Landmark_Seismic_Horizon
This image combines the illumination (grayscale) with a color layer
of the time data. This type of display significantly enhances the
structural contours by emphasizing the main structural blocks.
Radiometric_Magnetics_Correlation.alg
Airphoto_Tutorial
This directory contains the image data files required for the Airphoto
Tutorial introduced in ER Mapper 6.2.
Class_Interactive_Compute
These algorithms demonstrate how the powerful ER Mapper formula
capability can help you implement complete classification techniques
using algorithms. Normally you would use Unsupervised and
Supervised Classification within ER Mapper, which provide complete
classification capabilities.
The following algorithms are useful for research into new
classification techniques, and for conditional processing; for
example, drawing on additional information as part of the
classification process.
Classification Compute (using Classification layers) can be used to
compute classifications. This should not be confused with displaying
pre-computed classification files using Class Display.
Compute_MaxLike_Landsat_TM_1985.alg
This algorithm demonstrates how you can implement complete
classification techniques using algorithms. Normally you would use
Unsupervised and Supervised Classification within ER Mapper, which
provide complete classification capabilities.
In this algorithm Maximum Likelihood has been implemented using
formulae, with one layer in the algorithm for each class computed.
Whenever ER Mapper processes an algorithm with Classification
layers it assigns a pixel to the class layer with the highest number at
that pixel.
This algorithm could be enhanced to carry out more sophisticated
processing. For example, a Digital Elevation Model (DEM) could be
added into the classification, or classification could be carried out
only within certain regions.
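The winner-takes-all rule described above can be sketched in Python. This is an illustration only: the class names and score values are invented, and the scores stand in for the per-class likelihood values each Classification layer would compute.

```python
# Sketch of the winner-takes-all rule used for Classification layers:
# each layer holds a per-class score (e.g. a maximum-likelihood value)
# and every pixel is assigned to the class with the highest score.
# Class names and scores here are illustrative, not from the manual.

def assign_classes(class_layers):
    """class_layers: dict mapping class name -> 2D list of scores."""
    names = list(class_layers)
    rows = len(next(iter(class_layers.values())))
    cols = len(next(iter(class_layers.values()))[0])
    result = []
    for r in range(rows):
        row = []
        for c in range(cols):
            # pick the class whose layer scores highest at this pixel
            row.append(max(names, key=lambda n: class_layers[n][r][c]))
        result.append(row)
    return result

layers = {
    "water":  [[0.9, 0.2], [0.1, 0.3]],
    "forest": [[0.1, 0.7], [0.8, 0.2]],
    "urban":  [[0.0, 0.1], [0.1, 0.6]],
}
print(assign_classes(layers))
```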
Compute_Vegetation_over_RGB_541.alg
This algorithm demonstrates how you can implement complete
classification techniques using algorithms. Normally you would use
Unsupervised and Supervised Classification within ER Mapper, which
provide complete classification capabilities.
In this example the algorithm contains a Landsat TM image, with
bands 5,4,1 displayed as RGB. Over this is a Classification Layer
which computes a vegetation index. In addition to computing the
index, the Classification Layer has a conditional IF ... THEN ... ELSE
statement:
IF NDVI > 0.25 THEN 1 ELSE NULL
The classification only returns true (1) if the vegetation index is
greater than 0.25, indicating vigorous vegetation. The ELSE NULL
statement causes any imagery under the classification layer to show
through, so where there is limited or no vegetation the backdrop
Landsat TM image is visible.
This algorithm demonstrates how interactive formula processing
allows complex formulae to be developed and used for classification.
For example, if you had two Landsat TM images from different years
and you wanted to show vegetation decrease, you could do the
following to create an image to show vegetation decrease:
• Create an algorithm that computes the vegetation NDVI for the two years. There would be two layers in this algorithm, one for each year. Each layer would refer to a different image. Name the first layer NDVI_1993 and the second NDVI_1994 (or whatever the years are). Delete the final transform, to ensure that the data is not rescaled to 0..255.
• Load the Compute_Vegetation_over_RGB_541.alg algorithm, and change the algorithm so the RGB backdrop layers show whatever image you want as a backdrop; perhaps one of the Landsat images.
• Load your previously saved NDVI algorithm into the Classification Layer of the algorithm (ER Mapper can load algorithms as well as images as input images for algorithms). Make sure you press Apply - This Layer Only to only load it into the Classification layer.
• Change the generic formula in the Classification layer to something like the following:
IF i1 > 0.25 AND i2 < 0.25 THEN 1 ELSE NULL
After mapping year one NDVI into I1, and year two NDVI into I2, the specific formula will look like:
IF NDVI_1993 > 0.25 AND NDVI_1994 < 0.25 THEN 1 ELSE NULL
In this example, we have decided to show a result if (a) the 1993
NDVI shows strong vegetation (NDVI_1993 > 0.25) and (b) the
1994 NDVI had a lower vegetation index than this (NDVI_1994 <
0.25). This modification shows vegetation only where it had
decreased between 1993 and 1994.
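As a hedged illustration, the two-year formula above behaves like the following per-pixel Python sketch. The band values and the NDVI helper are invented for the example; only the 0.25 threshold and the IF/THEN/ELSE logic come from the text.

```python
# Per-pixel sketch of the vegetation-decrease formula described above.

def ndvi(nir, red):
    # Normalized Difference Vegetation Index from NIR and red bands
    return (nir - red) / (nir + red)

def vegetation_decrease(ndvi_1993, ndvi_1994, threshold=0.25):
    # IF NDVI_1993 > 0.25 AND NDVI_1994 < 0.25 THEN 1 ELSE NULL
    if ndvi_1993 > threshold and ndvi_1994 < threshold:
        return 1
    return None  # NULL lets the backdrop imagery show through

# Invented band values for one pixel in each year
print(vegetation_decrease(ndvi(60, 15), ndvi(30, 25)))  # vegetation lost
print(vegetation_decrease(ndvi(60, 15), ndvi(60, 20)))  # still vegetated
```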
Classification
ISOCLASS_Landsat_TM_year_1985.alg
This shows the display of classified data (using Supervised or
Unsupervised Classification) as a layer of an algorithm.
Before displaying classified images, you should make sure that
the class colors are correct.
This algorithm is the simplest form of display, showing just the
classified image.
More sophisticated algorithms can be built, for example:
Put raster imagery behind the classified data as a backdrop
Class Display layers are shown on top of raster layers, but under
vector and dynamic link layers. If you only wish to show some
classes, or show classes for some regions, you can put a raster
image under the image as a backdrop.
Selective display of classes
Use the formula in a Class Display layer to show only some classes,
for example:
IF input1 != 5 and input1 != 7 THEN input1 ELSE NULL
In this case, the classes would be displayed only if they are not
classes 5 or 7.
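The per-pixel effect of this formula can be sketched in Python (the class numbers follow the example above; the input values are invented):

```python
# Sketch of the selective-display formula: hide classes 5 and 7,
# pass every other class value through unchanged.

def show_class(value, hidden=(5, 7)):
    # IF input1 != 5 and input1 != 7 THEN input1 ELSE NULL
    return None if value in hidden else value

print([show_class(v) for v in [1, 5, 6, 7, 9]])
```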
Clean up the classified data
You can insert a ranking filter, such as adaptive median, to
remove small areas of classes. You must use a filter that does
not reprocess the data into non-integer results, so smoothing
filters (for example Average) should not be used.
Ranking filters replace the value with one of the other values in the
neighbourhood, so can be used for cleanup.
Although a “cleaner” classified image might look better, it is
technically inaccurate compared to the original data. However,
it is sometimes useful to clean up a classified image before
running raster to vector conversion, as otherwise a lot of small
polygons can be generated. Also consider enhanced classifiers.
Show classes only in regions
Use the INREGION() function in formulae to only show classes within
regions. For example:
IF inregion(r1) and not inregion(r2) THEN input1 ELSE NULL
Classification_Display
ISOCLASS_classification.alg
This algorithm uses the INREGION() function in formulae to only
show classes within regions. For example:
IF inregion(r1) and not inregion(r2) THEN input1 ELSE NULL
Contours
This directory has the following algorithms that demonstrate the
use of contours in images:
• contours_auto_range.alg
• contours_fancy.alg
• contours_labels.alg
• contours_line_styles.alg
• contours_multi_color.alg
• contours_thick_lines.alg
Data_Fusion
Data fusion is the combination of multiple types of data into a single
view. For example, sharpening a Landsat TM image with a SPOT
Panchromatic image is data fusion. The following algorithms give
examples of different types of data fusion.
Brovey_Transform.alg
The Brovey transform is a method of fusing different data together,
using one image (for example Landsat TM) for spectral or color
information, and the other image for spatial detail or sharpness. The
technique gives very good results for Landsat TM / SPOT
Panchromatic merges, resulting in a better image than a
conventional HSI transformation.
The Brovey transform can be applied to any type of data.
When applied to Landsat TM / SPOT Panchromatic, the formula used
is as follows:
RED: B5 / (B2 + B4 + B5) * SPOT
GREEN: B4 / (B2 + B4 + B5) * SPOT
BLUE: B2 / (B2 + B4 + B5) * SPOT
As this formula requires input from multiple images, a virtual dataset
containing the seven Landsat TM bands and the 1 SPOT
Panchromatic band is used as input.
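As a rough illustration, the per-pixel arithmetic of the formula above looks like this in Python. The band values are invented; only the band-ratio structure comes from the formula.

```python
# Illustrative Brovey transform for a single pixel, following the
# Landsat TM / SPOT Pan formula above.

def brovey(b2, b4, b5, spot_pan):
    total = b2 + b4 + b5
    red   = b5 / total * spot_pan
    green = b4 / total * spot_pan
    blue  = b2 / total * spot_pan
    return red, green, blue

r, g, b = brovey(b2=40.0, b4=60.0, b5=100.0, spot_pan=120.0)
print(r, g, b)  # each band is scaled by its share of the TM total
```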
The normal steps to create a Brovey Transform image are:
1. Create an algorithm, or a virtual dataset, containing the bands from
the Landsat TM and the SPOT Pan image.
There is a Template algorithm that can be used. Save this algorithm
(or virtual dataset).
2. Load the Brovey_Transform algorithm.
3. Change the input image to your new image.
The Brovey Transform does a particularly good job of highlighting
water in a natural fashion, and showing land in a natural looking
image.
You can further sharpen this image by applying sharpening filters to
the SPOT Pan.
You do not have to grid the Landsat TM and SPOT into similar
cell sizes - ER Mapper merges images of different cell
resolutions. The virtual dataset that gives a combined view into
the Landsat TM and SPOT Pan is an abstracted view of the
images - it does not store an intermediate image.
Brovey_Transform_SPOT_XS_and_Pan.alg
The Brovey transform is a method of fusing different data together,
using one image (for example SPOT XS) for the spectral or color
information, and the other image for spatial detail or sharpness. The
technique gives very good results for SPOT XS / SPOT Panchromatic
merges, resulting in a better image than a conventional HSI
transformation.
The Brovey transform can be applied to any type of data.
When applied to SPOT XS / SPOT Panchromatic, the formula is as
follows:
RED: B3 / (B1 + B2 + B3) * SPOT Pan
GREEN: B2 / (B1 + B2 + B3) * SPOT Pan
BLUE: B1 / (B1 + B2 + B3) * SPOT Pan
As this formula requires input from multiple images, a virtual dataset
containing the 3 SPOT XS bands and the 1 SPOT Panchromatic band
is used as input.
The normal steps to create a Brovey Transform image are to:
1. Create an algorithm, or a virtual dataset, containing the bands from
the SPOT XS and the SPOT Pan image.
There is a Template algorithm that can be used. Save this algorithm
(or virtual dataset).
2. Load this Brovey_Transform algorithm.
3. Change the input image to your new image.
The Brovey Transform does a particularly good job of highlighting
water in a natural fashion and showing land in a natural looking
image.
You can further sharpen this image by applying sharpening filters to
the SPOT Pan input.
You do not have to grid the SPOT XS and SPOT Pan into similar
cell sizes - ER Mapper handles merging images of different cell
resolutions. The virtual dataset that gives a combined view into
the SPOT XS and SPOT Pan is an abstracted view into the images
- it does not store an intermediate image.
Landsat_TM_and_SPOT_Pan_IHS_merge.alg
This algorithm carries out a conventional IHS (HSI) merge of Landsat
TM and SPOT Panchromatic. There are three layers in this algorithm:
Hue, Saturation and Intensity. The Hue and Saturation layers have
formulae in the layers to compute Hue and Saturation using three
selected bands from a Landsat TM image. The Intensity layer is
simply the SPOT Panchromatic.
The RGB based Landsat TM bands are converted into Hue and
Saturation; Intensity uses the SPOT Pan image. ER Mapper
automatically converts Hue, Saturation, Intensity layers back into
RGB color space when displaying them, so you do not need to
manually carry out this step.
The result is an image containing the color information from Landsat
TM and sharpness from SPOT Panchromatic.
This general concept can be applied to any algorithm where one
image has color information and the other has spatial information.
Hue, Saturation and Intensity can be modified in real time using
the Transformation dialog box.
To use this algorithm with your own data:
1. Load the algorithm.
2. Change the input images (Landsat TM and SPOT Pan) to reflect your
own data.
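The merge can be sketched per pixel in Python. Note the hedges: this uses the standard library's HSV conversion as a stand-in for ER Mapper's HSI color space (the two are related but not identical), and the pixel values are invented.

```python
# Sketch of the IHS merge for one pixel: hue and saturation come from
# the Landsat TM RGB values, while the SPOT Pan value replaces the
# intensity channel. HSV is used here as an approximation of HSI.

import colorsys

def ihs_merge(tm_rgb, spot_pan):
    h, s, _v = colorsys.rgb_to_hsv(*tm_rgb)      # hue/sat from Landsat TM
    return colorsys.hsv_to_rgb(h, s, spot_pan)   # intensity from SPOT Pan

# Invented pixel values in the 0..1 range
merged = ihs_merge((0.4, 0.3, 0.2), spot_pan=0.9)
print(merged)
```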
Landsat_TM_and_SPOT_Pan_RGBI.alg
This algorithm does an RGB -> HSI -> RGB merge of Landsat TM and
SPOT Pan. Similar to the Landsat_TM_and_SPOT_Pan_IHS_merge
algorithm, except ER Mapper automatically converts to and from HSI
color space.
The RGB data from Landsat TM is converted into HSI space, the
Intensity is replaced with the SPOT Panchromatic image, and the
result is converted back to RGB space.
As SPOT Panchromatic has a higher spatial resolution than Landsat
TM (10 meters versus 30 meters), the result is an image that is
sharper and contains more spatial detail such as roads.
Where the Landsat_TM_and_SPOT_Pan_IHS_merge algorithm
enables you to change Hue, Saturation and Intensity in real time,
this algorithm enables you to change Red, Green, Blue and Intensity
in real time. Use whichever algorithm is more appropriate.
If this algorithm is viewed using one of the ER Mapper 3-D
viewers or printed using ER Mapper hardcopy stereo, the
Intensity layer is used for height.
Radiometrics_and_Magnetics_RGBI.alg
This algorithm does an RGB -> HSI -> RGB merge of Radiometrics
(Potassium, Uranium, Thorium) and airborne Magnetics. Similar to
the Landsat_TM_and_SPOT_Pan_IHS_merge algorithm, except
ER Mapper automatically converts to and from HSI color space.
The RGB data from the Radiometrics image is converted into HSI
space, the Intensity is replaced with the Magnetics image, and the
result is converted back to RGB space. As the Magnetics image is
field strength data, it is sun-shaded to highlight structure.
This demonstrates how to integrate two very different types of data
into a single image that is still easy to understand. The color
information comes from the Radiometrics, and the intensity comes
from the shaded Magnetics.
Where the Landsat_TM_and_SPOT_Pan_IHS_merge algorithm
enables you to change Hue, Saturation and Intensity in real time,
this algorithm enables you to change Red, Green, Blue and Intensity
in real time. Use whichever algorithm is more appropriate.
You can also change the Sun Shading position in real time in this
algorithm.
If this algorithm is viewed using one of the ER Mapper 3-D
viewers or printed using ER Mapper hardcopy stereo, the
Intensity layer is used for height. This makes it easy to use this
algorithm for 3-D output.
Sharpen_TM_with_Pan.alg
This is the Brovey transform under a different name. Refer to the
Brovey_Transform algorithm for full details.
SPOT_XS_and_SPOT_Pan_RGBI.alg
This algorithm does a RGB -> HSI -> RGB merge of SPOT XS and
SPOT Pan. Similar to the Landsat_TM_and_SPOT_Pan_IHS_merge
algorithm, except ER Mapper automatically converts to and from HSI
color space.
The RGB data from SPOT XS is converted into HSI space, the
Intensity is replaced with the SPOT Panchromatic image, and the
result is converted back to RGB space.
As SPOT Panchromatic has a higher spatial resolution than SPOT XS
(10 meters versus 20 meters), the result is an image that is sharper
and contains more spatial detail such as roads.
Where the Landsat_TM_and_SPOT_Pan_IHS_merge algorithm
enables you to change Hue, Saturation and Intensity in real time,
this algorithm enables you to change Red, Green, Blue and Intensity
in real time. Use whichever algorithm is more appropriate.
If this algorithm is viewed using one of the ER Mapper 3-D
viewers or printed using ER Mapper hardcopy stereo, the
Intensity layer is used for height. This makes it easy to use this
algorithm for 3-D output.
Transparent_Airphoto_and_Landsat_TM_321.alg
This algorithm shows an airphoto overlaid onto a Landsat satellite
color composite image. The airphoto has been made partially
transparent.
There are two surfaces in this algorithm. The top surface is a Red
Green Blue color composite airphoto. The bottom surface is a
Landsat TM satellite image, with satellite bands 3, 2 and 1 in the Red,
Green and Blue channels respectively. Smoothing has been set on
the Landsat image.
To achieve the transparency of the airphoto, the transparency value
of the airphoto surface has been set to 65%. The transparency of the
airphoto can be varied at will from 0% (opaque) to 100% (totally
transparent) using the Transparency slider on the Surface tab for the
airphoto surface, in the Algorithm window.
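A minimal sketch of the transparency compositing, assuming simple linear alpha blending per channel. The pixel values are invented, and ER Mapper's exact blending implementation may differ.

```python
# Sketch of surface transparency compositing: at 65% transparency the
# top (airphoto) surface contributes 35% and the bottom (Landsat)
# surface 65% of each channel.

def blend(top, bottom, transparency):
    alpha = 1.0 - transparency          # opacity of the top surface
    return tuple(alpha * t + transparency * b for t, b in zip(top, bottom))

# Invented RGB pixel values
airphoto = (200.0, 180.0, 160.0)
landsat  = (100.0, 120.0, 140.0)
merged = blend(airphoto, landsat, transparency=0.65)
print(merged)
```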
Data_Mosaic
Interactive_mosaic_of_4_datasets.alg
This shows the interactive mosaic capability of ER Mapper. When
there is more than one layer of the same type (for example Red or
Pseudocolor layers) in an algorithm, those layers are automatically
mosaiced together. The images do not have to cover the same area,
and do not have to have the same cell resolution.
This algorithm merges 4 different images - Landsat TM, SPOT XS,
SPOT Pan and an airphoto - together into a single image.
Each image has a different resolution.
Virtual_Dataset_mosaic.alg
This algorithm demonstrates how to treat multiple images as a single
image for the purposes of mosaicing. The
Interactive_mosaic_of_4_datasets algorithm has been saved as a
virtual dataset. No data is stored - only the algorithm describing how
to recreate the data.
The virtual dataset has been used as input into this algorithm.
The virtual dataset appears to have only one band of data,
which is in fact built up from four separate input images.
For example, this enables a transformation to be applied that affects
all the input data, rather than just one of the images.
Dynamic_Links
ER Mapper has the powerful ability to dynamically link with external
systems. For example, an Oracle dynamic link might run an SQL
query through an Oracle based DBMS, extract the results, then
display the results in a graphical format.
Dynamic links are fully documented user extendable open standards.
A wide range of dynamic links are included as part of ER Mapper;
more are continually being added.
Symbols.alg
This algorithm demonstrates the use of the Tabular Data -> Symbols
dynamic link.
Tabular_Data_as_Circles.alg
A simple dynamic link that takes a table of data, in this case
geochemical assay values, and displays it over imagery as
circles - the larger the circle, the greater the assay value.
Truecolor_Color_Spiral.alg
A dynamic link that generates truecolor (24 bit color)
PostScript as a layer over imagery. Any PostScript based output can
be used as a dynamic link.
User_Example_Dynamic_Link.alg
An example dynamic link documented in the Customizing ER Mapper
manual.
Fft
FFT_Highpass_Power_Spectrum.alg
This algorithm applies a high-pass power spectrum to an FFT
image. This algorithm can be run through the FFT inverse to
highpass filter an image in the frequency domain.
FFT_Lowpass_Power_Spectrum.alg
This algorithm applies a low-pass power spectrum to an FFT image.
This algorithm can be run through the FFT inverse to lowpass filter
an image in the frequency domain.
FFT_Notch_Power_Spectrum.alg
This algorithm applies a notch filter power spectrum to an FFT image.
This algorithm can be run through the FFT inverse to notch filter an
image in the frequency domain.
FFT_Power_Spectrum.alg
This algorithm applies a power spectrum to an FFT image. It can be
used to view an FFT image.
Input_Image.alg
This algorithm shows the input image used in these FFT examples,
prior to FFT processing.
Output_Image_Highpass.alg
Shows the sample image after a highpass filter has been applied
using FFTs.
Output_Image_Lowpass.alg
Shows the sample image after a lowpass filter has been applied using
FFTs.
Output_Image_Notch.alg
Shows the sample image after a notch filter has been applied using
FFTs.
Geocoding
This directory contains algorithms with geocoded and non-geocoded
images:
• ARC_INFO_vectors_warped.alg
• Landsat_MSS_not_warped.alg
• Landsat_MSS_over_SPOT_Pan_warped.alg
• Landsat_MSS_warped.alg
• SPOT_Pan_warped.alg
Gridding
Contains example input sources and output gridded images to
demonstrate the ER Mapper Gridding Wizard.
HSI_Transforms
Landsat_TM_and_SPOT_Pan_IHS_merge.alg
This algorithm carries out a conventional IHS (HSI) merge of Landsat
TM and SPOT Panchromatic. There are three layers in this algorithm:
Hue, Saturation and Intensity. The Hue and Saturation layers have
formulae in the layers to compute Hue and Saturation using three
selected bands from a Landsat TM image. The Intensity layer is
simply the SPOT Panchromatic.
The RGB based Landsat TM bands are converted into Hue and
Saturation; Intensity uses the SPOT Pan image. ER Mapper
automatically converts Hue, Saturation, Intensity layers back into
RGB color space when displaying them, so you do not need to
manually carry out this step.
The result is an image containing the color information from Landsat
TM and sharpness from SPOT Panchromatic.
This general concept can be applied to any algorithm where one
image has color information and the other has spatial information.
Hue, Saturation and Intensity can be modified in real time using
the Transformation dialog box.
To use this algorithm with your own data:
1. Load the algorithm.
2. Change the input images (Landsat TM and SPOT Pan) to reflect your
own data.
RGB_to_HSI_to_RGB.alg
A generic RGB to HSI to RGB colorspace algorithm. This algorithm
has three layers (Hue, Saturation and Intensity), which are
computed from the three bands selected to use as RGB. As the layers
are in HSI color space, you can transform these in real time using
the layer transforms. ER Mapper automatically converts HSI layers
into RGB for final display.
Language_Specific_Links
• Korean_Australia.alg
• Korean_Australia.kor
• Korean_San_Diego.alg
• Korean_San_Diego.kor
• Korean_Text.alg
• Korean_Text.kor
Map_Objects
Example_3D_Map_Items.alg
Example_Fonts.alg
Shows some of the fonts available in map composition.
Example_Grids.alg
Shows some of the ways that map grids (Easting/Northing and
Latitude/Longitude) can be generated.
Example_Map_Items.alg
Some of the map composition items available.
Example_Map_Symbols_*.alg
Displays the map symbols defined by the numbers.
Example_North_Arrows.alg
Some of the North Arrows supplied as standard map objects.
Example_Scale_Bars.alg
Some of the Scale Bars supplied as standard map objects.
Example_Symbols.alg
Example_Text.alg
Some of the fonts - including multiple language support - available
in map composition.
Example_Titles.alg
Shows different styles of text that can be used for map titles.
Example_Z_Scales.alg
Shows some of the Z-scales (image values) available such as
triangles, color strips, grayscale strips, HSI triangles, and so on.
Fonts.alg
Displays a list of the fonts supported.
Map_Production
These demonstrate the use of map composition within ER Mapper.
Complex_San_Diego_Map.alg
A complete map which includes items such as multiple fonts, rotated
text, embedded algorithms for legends, auto generated scale bars,
Z-values and grids, clipped regions, and dynamic links to external
vector formats.
Landsat_TM_With_simple_grid.alg
A simple example of Landsat TM imagery with an EN grid.
Map_Symbols_In_Use.alg
This algorithm is a ‘natural’ color scene comprising TM bands 3-2-1
(approximately equal to red-green-blue visible light) imaged in the
red-green-blue computer monitor guns respectively. An Annotation
layer containing a number of map symbols is superimposed on it.
Newcastle_Magnetics_Map.alg
Simple example of a map of airborne magnetics data (colordraped),
with vector data, dynamic links to external tabular data, and map
composition objects on the map.
Simple_San_Diego_Map.alg
A very simple map showing two types of raster data (Landsat TM and
an Airphoto), with a simple map composition created over the
imagery.
Map_Legends
These algorithms are used as part of maps (see Map_Production
above). In other words, they are included as legends within
maps.
Any algorithm can be used as a map legend item.
• Legend_San_Diego_Balboa_Park.alg
• Legend_San_Diego_Brovey_Thumb.alg
• Legend_San_Diego_Downtown.alg
• Legend_San_Diego_Drainage.alg
• Legend_San_Diego_ISOCLASS.alg
• Legend_San_Diego_Roads.alg
• Legend_San_Diego_Vectors.alg
Radar
Contains algorithms highlighting ER Mapper’s radar functionality.
Regions
SPOT_Pan_in_Regions.alg
ER Mapper can attach vector based polygon regions to raster
images. These regions can be used for statistics calculations,
classification, or selecting raster data within regions.
This algorithm uses the “INREGION()” formula function to show
raster imagery only within certain regions.
Full conditional logic can be used, for example:
IF inregion(r1) AND NOT inregion(r2) THEN .....
Map region variables (r1, r2, etc.) into actual regions using the
formula editor.
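A hedged Python sketch of the INREGION() logic, modelling regions as plain sets of cells. The region contents and pixel value are invented; only the AND NOT conditional structure comes from the formula above.

```python
# Hypothetical sketch of INREGION() logic: show a pixel only when it
# falls inside region r1 and outside region r2. Regions are modelled
# here as simple sets of (row, col) cells.

r1 = {(0, 0), (0, 1), (1, 1)}
r2 = {(1, 1)}

def in_regions(cell, value):
    # IF inregion(r1) AND NOT inregion(r2) THEN value ELSE NULL
    return value if cell in r1 and cell not in r2 else None

print([in_regions(c, 42) for c in [(0, 0), (1, 1), (2, 2)]])
```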
SPOT_Pan_in_Regions_over_Landsat_TM.alg
ER Mapper can attach vector based polygon regions to raster
images. These regions can be used for statistics calculations,
classification, and selecting raster data within regions.
The “INREGION()” function in formulae can be used to show raster
imagery only within certain regions.
This algorithm uses this function to show SPOT Panchromatic
imagery (10 meter cell size) within certain parts of the image, and
show this over the top of Landsat TM imagery (30 meter resolution).
Because both layers are Pseudocolor, ER Mapper automatically
mosaics these two layers. Change the layers to Red and Green to see
the example with the Landsat TM and SPOT Panchromatic imagery
in different colors to highlight the interactive INREGION() processing
carried out.
Spatial_Filters
Adaptive_Median_noise_removal.alg
The Adaptive Median filter is a high speed median filter that
preserves edges.
Three median filters, each sampling a different pattern of cells
within the filter window, are generated, and the final output is the
median of these three medians.
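A minimal Python sketch of the median-of-medians idea. The three sampling masks below are illustrative assumptions, not the exact cell masks used by the Adaptive Median filter, and the image values are invented.

```python
# Sketch of a median-of-medians filter: take the median over each of
# three sampling masks, then the median of those three results.

import statistics

MASKS = [
    [(0, -1), (0, 0), (0, 1)],    # horizontal neighbours (illustrative)
    [(-1, 0), (0, 0), (1, 0)],    # vertical neighbours (illustrative)
    [(-1, -1), (0, 0), (1, 1)],   # diagonal neighbours (illustrative)
]

def median_of_medians(img, r, c):
    medians = []
    for mask in MASKS:
        vals = [img[r + dr][c + dc] for dr, dc in mask]
        medians.append(statistics.median(vals))
    return statistics.median(medians)

img = [[1, 9, 1],
       [1, 255, 1],   # 255 is a noise spike at the centre
       [1, 1, 1]]
print(median_of_medians(img, 1, 1))  # the spike is removed
```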
Aspect.alg
This algorithm demonstrates the use of the Aspect filter, which
calculates the aspect or direction of slope for a DEM. The numbers
output from the Aspect filter range from 0 to 361. Zero indicates a
north-facing slope, 90 indicates an east-facing slope, and so on. The
special value 361 is used to indicate areas that are perfectly flat
(for example, water) with no aspect for the slope.
Although Aspect is useful by itself, a common use is to apply a
formula that selects only slopes facing a certain direction. For
example, the following formula would show only slopes within 10
degrees of south:
IF input1 >=170 AND input1 <= 190 THEN 1 ELSE NULL
For this formula to work, input1 should be mapped to a DTM,
and the input layer should have an Aspect filter applied prior to
the formula.
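The aspect-selection formula behaves like this per-pixel Python sketch (the test aspect values are invented; the 0-360 range and the 361 flat sentinel follow the description above):

```python
# Sketch of the south-facing selection formula, applied to aspect
# values as produced by the Aspect filter (0-360 degrees, 361 = flat).

def south_facing(aspect):
    # IF input1 >= 170 AND input1 <= 190 THEN 1 ELSE NULL
    return 1 if 170 <= aspect <= 190 else None

print([south_facing(a) for a in [0, 90, 180, 185, 361]])
```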
Average.alg
This filter takes an average of the cells in the filter box. The result is
a blurring, or averaging, of the image.
Brightest_Pixel.alg
Deviation.alg
This filter computes the mean for the box filter area, and then
returns the deviation of the center pixel from this mean. In other
words, it reports how different the pixel is from the neighboring
pixels.
This filter can be used to highlight structure, to remove noise, and to
enhance edges in an image.
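A minimal sketch of the deviation computation for one pixel of a 3x3 window (the image values are invented):

```python
# Sketch of the deviation filter: mean of the 3x3 neighbourhood, then
# the difference between the centre pixel and that mean.

def deviation(img, r, c):
    window = [img[r + dr][c + dc]
              for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
    mean = sum(window) / len(window)
    return img[r][c] - mean

img = [[10, 10, 10],
       [10, 19, 10],
       [10, 10, 10]]
print(deviation(img, 1, 1))  # centre pixel differs from its surrounds
```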
Edge_highlight.alg
This filter is an edge enhancement filter. This particular example
highlights edges from the North East. Increasing the size of the filter
(this example is 3x3) will highlight only major structures and edges.
Edge_sharpen.alg
This 3x3 edge sharpening filter enhances edges in the image. It can
often be used to highlight roads. If this filter is added back into the
original image using a formula of INPUT1+INPUT2 (where INPUT1
is the original image and INPUT2 is the image with the edge sharpen
filter applied), it has the effect of enhancing edges in the image.
The edge sharpening filter is often known as a Laplacian filter.
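The add-back step can be sketched in Python with an illustrative 3x3 Laplacian kernel; the manual does not give the exact coefficients used by Edge_sharpen, and the image values are invented.

```python
# Sketch of INPUT1+INPUT2 edge enhancement with an illustrative
# 3x3 Laplacian kernel (an assumption, not ER Mapper's exact filter).

KERNEL = [[0, -1, 0],
          [-1, 4, -1],
          [0, -1, 0]]

def laplacian(img, r, c):
    return sum(KERNEL[i][j] * img[r - 1 + i][c - 1 + j]
               for i in range(3) for j in range(3))

img = [[10, 10, 10],
       [10, 14, 10],
       [10, 10, 10]]
edge = laplacian(img, 1, 1)        # INPUT2: edge-sharpen response
sharpened = img[1][1] + edge       # INPUT1 + INPUT2
print(edge, sharpened)             # the edge pixel is boosted
```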
Ford.alg
This filter enhances high frequency data, such as road networks. It
is an ideal filter for highlighting urban infrastructure.
Gaussian_avg_std_dev_1.0.alg
This filter applies a Gaussian average to the image, in this case with
a standard deviation of 1.0 (other standard deviations may also be
used).
This will blur an image. Unlike a standard box average filter,
this filter has a circular rolloff, and so provides a more natural
averaging. Subtracting this average from the original image will
result in showing high frequency changes - typically structure.
Gaussian_removed.alg
Demonstrates the use of a gaussian filter to show high frequency
structure (for example roads or coastlines) in an image by
subtracting the gaussian average filter from the original data. The
formula in this algorithm is:
INPUT1 - INPUT2
Input1 is the original data, and Input2 is the original data with a
gaussian filter inserted before the formula.
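A one-dimensional Python sketch of the same idea, using an illustrative 3-tap Gaussian kernel (an assumption; the actual filter is 2D with a chosen standard deviation). Smooth areas cancel to near zero, while the sharp step survives.

```python
# Sketch of "original minus Gaussian average" (INPUT1 - INPUT2) on a
# 1D signal with an illustrative 3-tap Gaussian kernel.

KERNEL = [0.25, 0.5, 0.25]

def gaussian_removed(signal, i):
    avg = sum(k * signal[i - 1 + j] for j, k in enumerate(KERNEL))
    return signal[i] - avg          # INPUT1 - INPUT2

signal = [10, 10, 10, 50, 50, 50]   # a flat run, a step, a flat run
print([round(gaussian_removed(signal, i), 1) for i in range(1, 5)])
```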
Highlight_structure.alg
Highlights structure by running two Brightest Cell ranking filters in
sequence then a gaussian filter, and subtracting all of this from the
original data.
Median_noise_removal.alg
Applies a median filter to remove noise from an image. This filter has
been implemented as a C function, which is dynamically linked into
ER Mapper during algorithm execution.
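The median operation at the heart of this filter can be sketched in Python for a single window (a stdlib-only illustration; as noted above, the supplied filter itself is implemented as a C function):

```python
from statistics import median

def median_cell(window):
    """Return the median of the cell values in a filter window.
    A single noise spike is discarded rather than averaged in."""
    return median(window)

# A bright noise spike (999) in an otherwise smooth 3x3 neighbourhood
window = [10, 11, 12, 10, 999, 11, 12, 10, 11]
cleaned = median_cell(window)   # 11: the spike is rejected
```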
• Laplacian.alg
• Majority.alg
• Sharpen_11x11.alg
• Sharpen_3x3.alg
Slope_degrees.alg
This algorithm demonstrates the use of the Slope filter, which
calculates the slope, or steepness, for a DTM. The numbers output
from the Slope_Degrees filter range from 0 to 90. 0 indicates a flat
area, with no slope; 90 indicates a vertical slope.
Although slope is useful by itself, a common use is to apply a formula
that selects only slopes up to a maximum grade. For example, the
following formula would show only slopes of 5 degrees or less:
IF input1 <= 5 THEN input1 ELSE NULL
For this formula to work, input1 should be mapped to a DTM,
and the input layer should have a Slope_degrees filter applied
prior to the formula.
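The effect of this formula can be sketched in Python (a stdlib-only illustration; `NULL` stands in for ER Mapper's null cell value, and the slope values are hypothetical):

```python
NULL = None   # stands in for ER Mapper's NULL cell value

def max_grade(slope_deg, limit=5):
    """IF input1 <= limit THEN input1 ELSE NULL, applied per cell."""
    return [[s if s <= limit else NULL for s in row] for row in slope_deg]

slopes = [[0, 3, 12],
          [5, 7, 90]]      # hypothetical Slope_degrees output
flat = max_grade(slopes)   # [[0, 3, None], [5, None, None]]
```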
Sobel.alg
Vectors_and_Imagery
Vectors_from_Airphoto.alg
Shows an airphoto as an RGB image with vectors digitized on-screen
from the airphoto as a vector layer. These vectors can be edited by
clicking on the annotation toolbar icon.
Vectors_over_SPOT_Pan.alg
Shows vectors imported from USGS DLG-3 format over SPOT
Panchromatic imagery. The vectors were the original data used to
geocode the raster imagery in these examples. Zooming in to the
vector data reveals inaccurate and incomplete data for many new
areas. This vector data can be updated by clicking on the annotation
toolbar icon.
Virtual Datasets
The following algorithms demonstrate the use of virtual datasets.
Landsat_TM_Cloud_and_Water.alg
This algorithm uses a virtual dataset which has ‘cloud’ and ‘water’
layers - true/false layers which return true if a pixel is water or cloud.
The algorithm uses this virtual dataset to classify an image to show
water and cloud. Where there is no water and no cloud, the original
imagery is shown as an RGB image.
Landsat_TM_Cloud_removal.alg
Demonstrates the use of virtual datasets to remove cloud. The
algorithm uses two Landsat images, one with cloud, and one with no
cloud. The image with cloud has the following formula for each layer:
IF IS_CLOUD THEN NULL ELSE INPUT1
This formula will result in ‘null’ data where there is cloud. The other
Landsat image is shown under the cloudy image, so it shines through
where there is cloud in the first image.
This algorithm is intended as an example to show how to remove
cloud from images, and fuse multiple images together to end up with
a cloud free view of an area.
The formula used to decide if cloud is present considers ‘see
through’ cloud to be acceptable. This formula can be changed to
be more stringent in rejecting cells that are partially cloudy.
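The masking and fusion steps can be sketched in Python (a stdlib-only illustration with hypothetical cell values; the cloud test itself is simplified to a precomputed true/false mask):

```python
NULL = None   # stands in for ER Mapper's NULL cell value

def remove_cloud(image, is_cloud):
    """IF IS_CLOUD THEN NULL ELSE INPUT1, applied per cell."""
    return [[NULL if c else v for v, c in zip(vr, cr)]
            for vr, cr in zip(image, is_cloud)]

def fuse(top, bottom):
    """Let the underlying image 'shine through' wherever the top is NULL."""
    return [[b if t is NULL else t for t, b in zip(tr, br)]
            for tr, br in zip(top, bottom)]

cloudy    = [[80, 255], [70, 255]]          # hypothetical cell values
mask      = [[False, True], [False, True]]  # True where cloud was detected
cloudless = [[82, 90], [71, 88]]
result = fuse(remove_cloud(cloudy, mask), cloudless)   # [[80, 90], [70, 88]]
```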
Miscellaneous/Templates
Common
These are common processing or display algorithms. A typical use is
to load the algorithm and change the image to your own data.
Classified_data.alg
Displays a classified image. This is the same as starting with a new
algorithm and image window, adding a Class Display layer, and
loading your classified image into the layer.
Color_coded_intervals.alg
An algorithm to take data (such as DTM or Magnetics), and color
code the data based on intervals. For example, show all data in the
range of 0 to 20 as Blue, data from 21 to 40 as Green, 41 to 60 as
Yellow, and so on.
Uses Classification layers with a formula in the layer to generate the
intervals.
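The interval lookup can be sketched in Python (a stdlib-only illustration; the interval table and colors follow the example above, and the bounds shown are assumptions for illustration):

```python
# Interval-to-color table following the example above; the upper bound of
# the last interval shown here is an assumption for illustration.
INTERVALS = [(0, 20, "Blue"), (21, 40, "Green"), (41, 60, "Yellow")]

def color_code(value):
    """Return the color of the interval containing value, else None."""
    for lo, hi, color in INTERVALS:
        if lo <= value <= hi:
            return color
    return None

classes = [color_code(v) for v in (5, 33, 57, 99)]
# ["Blue", "Green", "Yellow", None]
```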
HSI_to_RGB.alg
Takes data in HSI color space and converts it to RGB space. You
would not normally need to do this - a simpler way is to just have
Hue, Saturation and Intensity layers in your algorithm. ER Mapper
will automatically convert these to RGB when displaying the results.
This algorithm is only really useful if you have data in HSI space, but
want to modify the transforms in RGB space (using HSI layers would
mean that you would modify the data in HSI space, which is what
you normally want to do).
HSI_to_RGB_usercode.alg
Uses usercode formulas to convert data from HSI components to
RGB components. This is not normally necessary in ER Mapper, but
can be useful, for example, for saving an HSI-enhanced image as
red-green-blue image bands for transfer to another system.
RGB.alg
Displays data as RGB.
RGB_autoscale.alg
Autoscales data as an RGB image using statistics from the image to
compute an autoscale factor.
It is simpler to use a normal RGB image, and click on Refresh
Image with 99% clip on limits to scale the data.
RGB_BCET_autoscale.alg
Autoscales data as an RGB image, using the BCET function.
It is simpler to use a normal RGB image, and click on Refresh
Image with 99% clip on limits to scale the data.
RGB_to_HSI_to_RGB.alg
Takes RGB data and computes HSI. The data is shown in the
algorithm as Hue, Saturation and Intensity layers.
This is useful if you have RGB based data but want to enhance it in
HSI color space.
Single_Band_grayscale.alg
A one layer algorithm that displays the image as grayscale.
Dataset_Output
These algorithms will replicate input data. They are used when
creating output images that have been processed in some way.
These algorithms do not have final transforms to ensure the data is
not modified.
Landsat_MSS.alg
An algorithm set up with as many layers as exist in Landsat MSS
data. Each layer has been named in accordance with the normal
naming for this data. There are no final transforms on the layers.
Landsat_TM.alg
An algorithm set up with as many layers as exist in Landsat TM data.
Each layer has been named in accordance with the normal naming
for this data. There are no final transforms on the layers.
Landsat_TM_scan_line_noise_removal.alg
An algorithm set up to generate scan line noise removed TM
imagery.
SPOT_Pan.alg
An algorithm set up with as many layers as exist in SPOT Pan data
(one layer). The layer has been named in accordance with the
normal naming for this data. There is no final transform on the
layer.
SPOT_XS.alg
An algorithm set up with as many layers as exist in SPOT XS data (3
bands). Each layer has been named in accordance with the normal
naming for this data. There are no final transforms on the layers.
FFT
First_Vertical_Deriv.alg
First_Vertical_Deriv_Power_Spectrum.alg
Highpass_Filter.alg
A template algorithm used to generate highpass filtered FFT
imagery.
Lowpass_Filter.alg
A template algorithm used to generate lowpass filtered FFT imagery.
Notch_Filter.alg
A template algorithm used to generate notch filtered FFT imagery.
Reduce_to_Pole.alg
Reduce_to_Pole_Power_Spectrum.alg
Second_Vertical_Deriv.alg
Second_Vertical_Deriv_Power_Spectrum.alg
Vector_Deriv.alg
Vector_Deriv_Power_Spectrum.alg
Vector_Field.alg
Vector_Field_Power_Spectrum.alg
Vertical_Continuation.alg
Vertical_Continuation_Power_Spectrum.alg
Virtual_Datasets
These algorithms are used to generate virtual datasets, which are
processed views of data. For example, a virtual dataset might
combine the 7 Landsat TM bands and the 1 SPOT Pan band, giving
in effect a virtual 8 band 10 meter resolution image.
Typical use for these algorithms is to load the algorithm, change the
layers to reflect your own data, then save the resultant algorithm as
a virtual dataset or as another algorithm.
DEM_Height_Slope_Aspect.alg
For a DEM (DTM) elevation image, this algorithm generates a three
layer view into the data:
• Height - the original height data in the DTM
• Aspect - direction of slope in the DTM
• Slope - the steepness of slope in the DTM
A virtual dataset (or algorithm) created using this template can be
used in conditional formulas; for example, to show only areas with
a certain slope and aspect.
Landsat_TM_and_SPOT_Pan.alg
An eight band algorithm that contains 7 layers used to input Landsat
TM data, and one layer used to input SPOT Pan data.
This algorithm is used as input into other algorithms that require
access to both Landsat and SPOT data within a single formula - the
Brovey transform is an example.
Landsat_TM_PC1-6_HistEq.alg
Gives a histogram equalized view into PCs 1 to 6 for Landsat TM.
Landsat_TM_transformations.alg
Landsat_TM_two_years.alg
A 14 band (layer) algorithm used when a formula requires input
from two Landsat TM images (such as for change detection).
LandVD_TM_transformations.alg
Magnetics_uc1_and_full.alg
Generates 1/2 and full upward continuations for Magnetics data.
Magnetics_uch_and_dch.alg
Magnetics_uch_and_full.alg
Magnetics_uch_full_and_dch.alg
RGB_to_HSI.alg
Takes RGB data and generates a view into the data that is HSI based.
Test_Patterns.alg
Generates various test patterns. Each test pattern is a layer (band)
of the algorithm.
Miscellaneous/Test_Patterns
3D_Cube.alg
HSI_wheel.alg
Generates an HSI color wheel. Useful for balancing hardcopy output
with screen colors.
3D_Julia_Fractals.alg
Demonstrates the use of a C based formula to compute the Julia
fractal.
rgbcmyk_bands.alg
Generates an RGBCMYK color strip. Useful for balancing hardcopy
output with screen colors.
Test_Patterns.alg
An algorithm demonstrating the use of a virtual dataset that
generates test patterns. Most of these test patterns are created
using formula within each layer of the virtual dataset. This is useful
for research purposes to test algorithms against specific test
patterns.
The virtual dataset contains the following test pattern bands:
• Sine wave
• Linear Scale
• Checker board
• Cross hatch
• Diagonal triangles
• Random
The algorithm displays the sum of the Sine wave and Random
bands, resulting in a sine pattern with random variations.
Shared_Data
This directory, and those below it, contain common image datasets
used by the example algorithms.
Supplied Filters
A filter (or kernel) is defined in ER Mapper to be an operation that
processes a cell based on the cells which are surrounding it in the
spatial domain. Filters supported are convolutions (e.g. average
filters) and threshold filters (convolution with threshold).
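The convolution operation described here can be sketched in Python. This is a stdlib-only illustration of the general idea (the kernel is applied to each cell's neighbourhood and the result divided by the scale factor), not ER Mapper's actual implementation:

```python
def convolve(image, kernel, scale=1):
    """Apply a square convolution kernel to a 2-D list of cell values.
    Cells where the kernel would overhang the edge are left unchanged."""
    n = len(kernel)            # kernel is n x n, with n odd
    r = n // 2
    rows, cols = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(r, rows - r):
        for x in range(r, cols - r):
            acc = 0
            for ky in range(n):
                for kx in range(n):
                    acc += kernel[ky][kx] * image[y - r + ky][x - r + kx]
            out[y][x] = acc / scale    # divide by the scale factor
    return out

# 3x3 average filter: all ones with a scale factor of 9
avg3 = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
grid = [[0, 0, 0, 0],
        [0, 9, 9, 0],
        [0, 9, 9, 0],
        [0, 0, 0, 0]]
smoothed = convolve(grid, avg3, scale=9)   # smoothed[1][1] == 4.0
```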
Filters are applied by choosing the Filter button from the process
diagram in the Algorithm dialog box for the current layer.
Filters are text files and are located in the kernel directory. Please
refer to the kernel directory for details about a particular filter.
In addition to the filters provided with ER Mapper, it is possible to
use other types of filters.
See Chapter 12, “Filter files (.ker)” in the Customizing ER Mapper
manual for more information on filter formats.
Some of the filters provided with ER Mapper are described below.
Filters
Listed below is information describing each supplied filter.
DEM filters
aspect
Calculates aspect in degrees from North.
slope
Calculates the slope from 0 to 200%.
slope_degrees
Calculates the slope from 0 to 90 degrees.
Edge filters
Different
This filter uses graphical convolution to highlight differences in
contour algorithms. The filter used is:
-1 -1 0
-1 3 0
0 0 0
Scale factor = 1
Gradient_X
This filter uses graphical convolution to highlight the gradient in the
X direction. The filter used is:
0 -1 1
Scale factor = 1
Gradient_Y
This filter uses graphical convolution to highlight the gradient in the
Y direction. The filter used is:
0
-1
1
Scale factor = 1
Gaussian filters
std_dev_0.391
Standard deviation = 0.391
std_dev_0.625
Standard deviation = 0.625
std_dev_1.0
Standard deviation = 1.0
std_dev_1.6
Standard deviation = 1.6
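A Gaussian kernel of a given standard deviation can be constructed as sketched below. This is a stdlib-only illustration of the general idea, normalized so the weights sum to 1; the supplied .ker files may use different sizes and scaling:

```python
import math

def gaussian_kernel(size, std_dev):
    """Build a size x size Gaussian kernel with the given standard
    deviation, normalized so its weights sum to 1."""
    r = size // 2
    k = [[math.exp(-(x * x + y * y) / (2.0 * std_dev ** 2))
          for x in range(-r, r + 1)]
         for y in range(-r, r + 1)]
    total = sum(map(sum, k))
    return [[v / total for v in row] for row in k]

k = gaussian_kernel(3, 1.0)   # weights roll off with distance from centre
```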
Geophysics filters
Frequency filters
These are derived pass filters at spectral wavelengths of
approximately a multiple of grid spacing. They are useful in some
cases for removing surface noise from aeromagnetic data.
high_pass_4. This filter passes high frequency data (wavelengths
less than or equal to approximately four times the grid spacing) and
removes low frequency data with wavelengths greater than 4 times
the grid cell size.
Used to remove deeper/broader (regional) features allowing analysis
of shallow/narrower features.
low_pass_4. Removes high frequency data with wavelengths smaller
than 4 times the grid cell size. Used to remove shallow/narrower
features allowing analysis of deeper/broader (regional) features.
Because high frequency data may be mostly noise, depending on
wavelength, this filter can also be used to remove noise in some
cases.
bandpass_3_5. Band pass filters remove wavelengths below a
minimum wavelength and above a maximum wavelength. In this
case wavelengths longer than 5 grid cells and wavelengths shorter
than 3 grid cells will be removed. Another way of stating the result
is that wavelengths between 3 and 5 grid cells will remain for
analysis.
Used to enhance features of a particular wavelength, i.e. features
from/with a particular depth/width. Used most commonly in the
seismic business.
Continuation filters
down_cont_1, down_cont_2, down_cont_half, downhalf. These are
the only true geophysical frequency filters for enhancing low or high
frequency, deep or near surface sources respectively.
Downward Continuation of potential field data such as gravity or
magnetics. These filters transform the data by shifting the point of
observation down vertically. They calculate what the field measured
at one flight height (say 300m) would be if it had been measured at
a lower flight height (say 100m).
The filter values _1, _2, half refer to the vertical continuation
distance in numbers of grid cells.
Downward Continuation is used to enhance near surface or high
frequency features in the data. Continuation should not be done
below ground level as the algorithm becomes unstable.
up_cont_1, up_cont_2, up_cont_half. Upward Continuation of
potential field data such as gravity or magnetics. These filters
transform the data by shifting the point of observation up vertically.
They calculate what the field measured at one height (say 2m i.e.
ground survey) would be if it had been measured at a higher flight
height (say 100m).
The filter values _1, _2, half refer to the vertical continuation
distance in numbers of grid cells.
Upward Continuation is used to suppress near surface or high
frequency features in the data. It can be useful for suppressing
noise caused by magnetite/maghemite in soils in ground magnetic
surveys. Median filters are also effective for removing noise spikes
in these kinds of surveys.
Derivative filters
Horizontal and vertical first and second derivative filters are also in
common use for calculating gradients of magnetic and gravity fields.
Horizontal. The horizontal derivative is the rate of change of values
in the horizontal plane, i.e. from one pixel to the next. The simple
filter [-1,0,1] calculates an EW horizontal derivative at the centre
pixel by subtracting the two outside pixels from each other. To
calculate the true horizontal gradient it is necessary to divide the
result by the horizontal distance between the two outside pixels
(nT/metre).
This filter is also referred to as a NS edge detection filter.
Applying this filter to its own result will produce the EW 2nd
horizontal derivative, i.e. the gradient of the gradient.
Vertical. The first vertical derivative or vertical gradient is the rate
of change of values in the vertical plane, i.e. with height. Therefore
a vertical derivative can be roughly simulated in ER Mapper using
the difference between two continuation filters. For instance, a value
upward continued to 100m minus a value upward continued to 200m
will give the change in field per 100m vertically. Dividing by 100 will
give the vertical gradient (nT/metre).
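The continuation-difference approximation works out as sketched below (hypothetical field values for a single cell; units follow the nT/metre convention used above):

```python
def vertical_gradient(uc_lower, uc_upper, separation_m):
    """Approximate the first vertical derivative as the difference of two
    upward-continued field values divided by their vertical separation."""
    return (uc_lower - uc_upper) / separation_m

# Hypothetical field values for one cell, upward continued to 100 m and 200 m
grad = vertical_gradient(uc_lower=5250.0, uc_upper=5230.0, separation_m=100.0)
# grad == 0.2 nT per metre
```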
These filters have been written up in the EMU Users Group EMU
Forum, Volume 2, Issue 2. They are implemented as algorithms
using the virtual dataset feature, and are named Magnetics_vds.ers
and Magnetics_quarter_deriv.alg.
Second Derivative. The derivative filters supplied are from
Henderson and Zeitz (Fuller, 1967). These filters are used for edge
enhancement and crude sun angle effects.
Residual
Residual filters highlight local anomalous values.
Edge
Edge enhancement filters.
High pass averaging filters
Ford
This is a convolution filter designed to enhance high-frequency detail
of images. It is very useful for urban areas. For more information on
the design of this filter, reference the author’s article in:
Ford, G. E., V.R. Algazi, and D. I. Meyer, 1983. “A Noninteractive
Procedure for Land Use Determination,” Remote Sensing of
Environment, Vol. 13, pp. 1-16.
Sharpedge
3x3 edge sharpen filter using the following array.
-1 -1 -1
-1 8 -1
-1 -1 -1
Scale factor = 1
highpass_1x33
High pass (central pixel minus filter window mean).
Sharpen11
11x11 high pass filter using the following array:
-1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1
-1 -1 -1 -1 -1 241 -1 -1 -1 -1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1
Scale factor = 121
Sharpen2
General edge sharpening filter using the following array:
-1 -1 -1
-1 14 -1
-1 -1 -1
Scale factor = 6
Low pass averaging filters
The low-pass averaging filters are used to smooth data noise and
gridding artifacts. For example, radiometric data may need an
averaging filter before applying a sun angle filter.
avgx
These averaging filters average each pixel based upon surrounding
pixels. The ‘x’ in the filter name is the dimension of the square filter.
For example, ‘avg3.ker’ filter is a 3 x 3 matrix. The larger the
dimension of the filter the greater the smoothing effect will be.
1 1 1
1 1 1
1 1 1
Scale factor = 9
Non-square filters have both dimensions in their name, for example
‘avg_33x1.ker’.
avg_diag
The diagonal average filter includes the four corner neighbor pixels
in the average:
1 0 1
0 1 0
1 0 1
Scale factor = 5
avg_hor
The horizontal average includes right and left neighbor pixels in the
average:
0 0 0
1 1 1
0 0 0
Scale factor = 3
avg_ver
The vertical average filter includes the pixels above and below:
0 1 0
0 1 0
0 1 0
Scale factor = 3
avg_h_v
The avg_h_v average filter includes the horizontal and vertical
neighbors of the pixel in the average.
0 1 0
1 1 1
0 1 0
Scale factor = 5
eps
The description of the "Edge Preserving Smoothing" filter can be
found in:
Nagao, M. and T. Matsuyama. 1979. Edge Preserving Smoothing.
Computer Graphics and Image Processing, vol. 9, pp. 394-407.
This filter uses a 5 x 5 moving window. Within the window, it looks
at a series of "subwindows" (north, south, east, west, northeast,
southeast, northwest, and southwest). It then replaces the central
pixel of the window with the mean of the most homogeneous
subwindow (that is, the subwindow with the lowest variance).
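The subwindow selection can be sketched in Python. This simplified illustration evaluates only the four 3x3 corner subwindows of a single 5x5 window, rather than the full set of subwindows used by Nagao and Matsuyama:

```python
from statistics import mean, pvariance

def eps_centre(window):
    """Replace the centre of a 5x5 window with the mean of the most
    homogeneous (lowest-variance) subwindow. Simplified: only the four
    3x3 corner subwindows are tested, not the full Nagao-Matsuyama set."""
    subwindows = []
    for oy in (0, 2):
        for ox in (0, 2):
            cells = [window[oy + dy][ox + dx]
                     for dy in range(3) for dx in range(3)]
            subwindows.append(cells)
    best = min(subwindows, key=pvariance)   # most homogeneous subwindow
    return mean(best)

# A 5x5 window straddling an edge, with a slightly noisy centre cell (19)
window = [[10, 10, 10, 90, 90],
          [10, 10, 10, 90, 90],
          [10, 10, 19, 90, 90],
          [10, 10, 10, 90, 90],
          [10, 10, 10, 90, 90]]
smoothed_centre = eps_centre(window)   # 11: the edge values do not bleed in
```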
Ranking filters
adaptive_median_5x5
This is a high-speed adaptive median filter: it computes three
medians, then takes the median of the three results.
brightest_pixel_5x5
This filter returns the brightest pixel in the kernel.
darkest_pixel_5x5
This filter returns the darkest pixel in the kernel.
deviation_3x3
Deviation filter.
deviation_5x5
Deviation filter.
majority.ker
This filter is used for smoothing classified images. It replaces the
central pixel with the majority class.
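The majority rule can be sketched in Python for a single window of class labels (a stdlib-only illustration; resolving ties by keeping the original class is an assumption):

```python
from collections import Counter

def majority_filter_cell(window, centre):
    """Replace the centre class with the majority class in the window;
    when there is no single majority, keep the original class (an
    assumption about tie handling)."""
    counts = Counter(window).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return centre          # tie: no majority class
    return counts[0][0]

# 3x3 window of class labels with one misclassified centre cell
window = [2, 2, 2, 2, 7, 2, 2, 2, 2]
smoothed = majority_filter_cell(window, centre=7)   # 2
```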
median_11x11
This filter calculates median values.
median_3x3
Median filter.
median_5x5
Median filter.
SAR filters
adaptive_median_5x5
This is a high-speed adaptive median filter: it computes three
medians, then takes the median of the three results. This filter
differs from the adaptive_median_5x5 Ranking filter in that it is
applied before the dataset is subsampled.
average_5x5
This is a 5x5 averaging filter which uses the following array:
1 1 1 1 1
1 1 1 1 1
1 1 1 1 1
1 1 1 1 1
1 1 1 1 1
Scale factor = 25
median_5x5
This median filter is applied before any data subsampling takes
place.
std_dev_0.625
This is a Gaussian (Standard Deviation = 0.625) filter which is
applied before any data subsampling takes place.
std_dev_1.6
This is a Gaussian (Standard Deviation = 1.6) filter which is
applied before any data subsampling takes place.
318
Supplied Filters
frost
glcm_contrast_u
glcm_dissimilar
glcm_entropy_u
glcm_homo_u
glcm_inv_moments_1_u
glcm_inv_moments_2_u
glcm_inv_moments_3_u
glcm_max_prob_u
glcm_mean
glcm_moments_1_u
glcm_moments_2_u
glcm_moments_3_u
glcm_std_dev_u
glcm_uniform_u
lee
leek
lees
mtf
mtfk
mtfs
sarsim_u
sigma_u
Seismic
Easterly_dip
This filter uses graphical convolution to highlight easterly dips. The
filter used is:
0 0 0
-1 0 1
0 0 0
Scale factor = 1
Hanning_3
The Hanning 3x3 filter used is:
0.06 0.10 0.06
0.10 0.36 0.10
0.06 0.10 0.06
Scale factor = 1
Northerly_dip
This filter uses graphical convolution to highlight northerly dips. The
filter used is:
0 1 0
0 0 0
0 -1 0
Scale factor = 1
Standard Filters
Sobel1
The Sobel 3x3 kernel filter used is:
1 2 1
0 0 0
-1 -2 -1
Scale factor = 1
Sobel2
The Sobel 3x3 kernel filter used is:
-1 0 1
-2 0 2
-1 0 1
Scale factor = 1
darn
This filter uses graphical convolution to highlight darn holes. The
filter used is:
-1 0 -1 0 -1
0 2 2 2 0
-1 2 -8 2 -1
0 2 2 2 0
-1 0 -1 0 -1
Scale factor = 8
default
This is the default filter, having a single cell with a value of 1.
force_nosubs
This default filter forces no subsampling.
h_edge
Horizontal edge operation uses graphical convolution to highlight
horizontal edges (but not vertical ones). The filter used is:
-1 -1 -1
0 0 0
1 1 1
Scale factor = 1
h_edge_5.ker
Horizontal edge operation uses graphical convolution to highlight
horizontal edges (but not vertical ones). The filter used is:
-1 -1 -1 -1 -1
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
1 1 1 1 1
Scale factor = 1
lap_clean.ker
The laplacian operation uses graphical convolution to highlight clean
edges. The filter used is:
-1 -1 -1
-1 17 -1
-1 -1 -1
Scale factor = 9
laplacian.ker
The laplacian operation uses graphical convolution to highlight
outline edges. The filter used is:
-1 -1 -1
-1 8 -1
-1 -1 -1
Scale factor = 1
laplacian5.ker
A 5x5 laplacian filter. The filter used is:
-1 -1 -1 -1 -1
-1 -1 -1 -1 -1
-1 -1 25 -1 -1
-1 -1 -1 -1 -1
-1 -1 -1 -1 -1
Scale factor = 1
low_freq
Low frequency filter highlights unchanging areas. The filter used is:
-1 -1 -1
-1 1 -1
-1 -1 -1
sharpen
This operation uses graphical convolution to accentuate the
differences between the centre pixel and the surrounding ones.
-1 -1 -1
-1 9 -1
-1 -1 -1
Scale factor = 1
thresh3.ker
A 3x3 averaging filter with a threshold = 5. The filter used is:
1 1 1
1 1 1
1 1 1
Scale factor = 1
Threshold = 5
thresh5.ker
A 5x5 averaging filter with a threshold = 5. The filter used is:
1 1 1 1 1
1 1 1 1 1
1 1 1 1 1
1 1 1 1 1
1 1 1 1 1
Scale factor = 1
Threshold = 5
v_edge
Uses graphical convolution to highlight vertical edges (but not the
horizontal ones). The filter used is:
-1 0 1
-1 0 1
-1 0 1
Scale factor = 1
v_edge_5
Uses graphical convolution to highlight vertical edges (but not the
horizontal ones). The filter used is:
-1 0 0 0 1
-1 0 0 0 1
-1 0 0 0 1
-1 0 0 0 1
-1 0 0 0 1
Scale factor = 1
Sunangle Filters
East_West
3x3 East West sun filter using the following array:
-1 0 1
-1 0 1
-1 0 1
North_East
3x3 North East sun filter using the following array:
0 -1 -1
1 0 -1
1 1 0
North_South
3x3 North South sun filter using the following array:
-1 -1 -1
0 0 0
1 1 1
North_West
3x3 North West sun filter using the following array:
-1 -1 0
-1 0 1
0 1 1
North_West_5
5x5 North West sun filter using the following array:
-1 -1 -1 -1 0
-1 -1 -1 0 1
-1 -1 0 1 1
-1 0 1 1 1
0 1 1 1 1
South_East
3x3 South East sun filter using the following array:
1 1 0
1 0 -1
0 -1 -1
South_North
3x3 South North sun filter using the following array:
1 1 1
0 0 0
-1 -1 -1
South_West
3x3 South West sun filter using the following array:
0 1 1
-1 0 1
-1 -1 0
West_East
3x3 West East sun filter using the following array:
1 0 -1
1 0 -1
1 0 -1
Usercode
average
Averaging filter written in C.
kernel.template
A template file for creating user defined C filters.
local_enhance
Does a local histogram equalise on the data, accepting any range of
input data and outputting values between 0 and 255.
majority
This kernel can be used for smoothing classified images. It calculates
the majority class within the kernel window. If the class of the central
cell is different from the majority, it is changed to the majority. If
there is no majority class, the central cell is left unchanged. An
average filter is inappropriate here because the values are just labels
and have no numerical relationship (the average of classes 2 and 4
is not class 3!).
median
Median filter written in C.
modefilt
The mode filter replaces a pixel value by the majority value in the
kernel.
It is a suitable filter to smooth a classification image as there is no
numerical calculation involved in the filtering.
rm2badli
Removes two bad lines using a conditional filter.
rmbadlin
Removes bad lines using a conditional filter.
w_loc_me
This filter adjusts the local mean values to a given DN value (110)
without changing the local contrast. A user specified kernel (31x31
by default) is used to calculate and adjust the local mean values.
The square root of the distance to the central pixel is used as the
weight of the contribution of each neighbour pixel, in order to
reduce edge effects.
This technique can enhance the texture information of very bright
and very dark image areas. It is useful for analysing the textures of
high contrast images.
Index
Numerics
2-D seismic data 124
3-D example algorithms 285
3-D perspective viewing 29
3-D seismic data 125
3-D seismic two-way-time horizons, displaying 8
3-D viewing 295
3-D flythrough 29
algorithm 278, 280
draping vectors 29
fallback modes 29
height layers 29
stereo 30, 287
sun shading 288
texture mapping 29
view position 29
3-D visualization 285
A
Abrams_Ratios.alg 270
abstracted views into DEMs 34
abstracting data 32
accuracy and calibration of radiometric data 119
accuracy assessment of classifications 61, 72
acid rain 203, 209
adaptive_median_5x5 filter 317
Advanced Very High Resolution Radiometer 222
aerial photography 183, 194, 199
aeromagnetic surveys 135
aeromagnetics 277, 295
algorithms 277
merging with radiometrics data 7
AGC 116
agricultural development, application of remote sensing 189, 199
agriculture, application of classification 59
airborne electromagnetics data 102, 129
airborne radar 213
aircraft altitude 120
aircraft altitude effects 118
Airphoto_and_Landsat_TM_and_Vectors.alg 262
Airphoto_with_roads.alg 263
Alaska, remote sensing applications in 207
albedo 271
albedo effects, eliminating with band ratios 79
algorithm examples
3-D 285
ARC/INFO 263
classification 289
data fusion or merging 292
digital elevation 264
dynamic links 297
FFT 297
geophysics 277
gridding 298
Landsat MSS 268
Landsat TM 270
map composition 299
map legends 301
processing kernels 302
raster and vector integration 304
regions 301
seismic 280
SPOT Pan 283
SPOT XS 284
test patterns 304
virtual datasets 305
algorithm templates
common 305
dataset output 307
FFT 307
virtual datasets 308
alteration mapping 179
alteration zones 180
annotating images 117, 201, 263
annotation
editing cultural vector data 247
anomaly shape 118
ARC/INFO data importing and dynamic links 103
ARC/INFO example algorithms 263
ARC/INFO viewing 263
ARC/PLOT based dynamic link 263
Archaean 151
area specific stretch 272
argillyzed 180
arid areas, problems with vegetation indices 94
artifacts, errors in DEMs 21
artifacts, from data levelling 118
ARVI vegetation index 90
aspect 265, 308
ASPECT filter 35
aspect filter 311
aspect from DEMs 32, 33
aspect, in fire risk 36
Aspect.alg 265, 302
atmospheric effects, eliminating 52
atmospheric noise 89
atmospherically resistant indices 90
Atmospherically Resistant Vegetation Index 90
attribute data
colordraping over two-way time structure
125
highlighting within a region 14
attribute highlighting 5, 12
Australia_Colordrape.alg 266
AutoCAD 187
AutoCAD DXF, exporting to 248
automatic data fusion 4
average filter 323
average_5x5 filter 318
averaging filters 28, 315
avg_diag filter 316
avg_h_v filter 316
avg_hor filter 316
avg_ver filter 316
AVHRR data 76, 209, 222
Azimuth 281
azimuth displays in seismic 125
B
backdrop images 5
Baltic Sea, remote sensing applications in 203
band ratios for detecting change 54
band ratios to eliminate albedo effects and
shadows 79
bandpass_3_5 filter 313
bands for showing vegetation 79
barley, classification of 190
bathymetry data 130
displaying 8
integration into ER Mapper 195
BCET 306
BIL (Binary Interleaved by Line) image file format 45
bio-optical algorithms 204
blaze 207
Boreal forest project 208
borehole depths 103
borehole locations 103
Bouguer density 127
brightest_pixel_5x5 filter 317
brightness 272, 276
Brightness layer 269
brightness of vegetation 270, 271
Brightness temperature values 224
Brovey transform 9, 284, 293
algorithm 12
RGBI data fusion 10
Brovey transform algorithm 292
Brovey_Transform.alg 292
Brovey_Transform_SPOT_XS_and_Pan.alg 293
byte order 45
C
C program for dynamic links 147
C/Fortran user code 116
Canada, remote sensing applications in 183
carbonates 275
cartographic applications 183
Cell Values Profile 201
centre pivot irrigation 189
centreline network updating 47
change detection
using band ratios 54
change image, creating 53
Charisma 2D XYZ ASCII grid import 234, 237
Charisma 3D Inline Xline XYZ ASCII grid import 234, 237
Checker board test pattern 310
chlorophyll 92, 276
chlorophyll concentration 204
classification
accuracy assessment 72
displaying results 71, 290
for crop yield estimation 191
for erosion, siltation and lake capacity monitoring 201
for irrigation requirements analysis 190
in forestry 210
maximum likelihood classifier 67
methodology 69
multispectral 60
of fire risk 37
of geophysical data 123
of marine habitats 196
real-time 71
schema 62
typicality thresholds 70
unsupervised 71
using DEM data 38
weighting classes 71
classification example algorithms 289
classification highlighting 15
Classification layers
used to highlight 13
classification techniques using algorithms 289
classification-height images 31
classified images
algorithm 305
comparing 14
merging with airphotos and satellite imagery 7
viewing in 3-D over a DEM 286
Classified_data.alg 305
clay 270
clay types 121
Clay_ratio.alg 270
cloud 305
cloud removal 210, 305
clouds, penetrating with radar 213
color and intensity, using to merge datasets 5
color highlighting 16
color perception 107
color strips 300
Color_coded_intervals.alg 306
colordrape images 31
algorithms 266, 278
for displaying DEMs 24
for displaying geophysical data 110
in 3-D viewing 287
Colordrape.alg 266
Colordrape_Greeness_over_Brightness.alg 271
Colordrape_NDVI_over_PC1.alg 271
Colordrape_shiny_look.alg 266
colordraping
tips 258
Colordraping, using to view oil and gas data 256
Colorize_Regions_Over_Greyscale.alg 283
combined geochemical data 134
combining data for what-if processing 32
combining data, see merging data
combining images 256
common algorithm templates 305
comparing results using normalised difference image 51
comparing two classified images 14
complex numbers in Fourier processing 177
Complex_San_Diego_Map.alg 300
Compute_MaxLike_Landsat_TM_1985.alg 289
Compute_Vegetation_over_RGB_541.alg 289
continuation filters 114
contours 105
generating a DEM from 20
of geophysical data 103
raster line generation 28, 266, 279
Contours_from_Raster.alg 266
convolution filter 321
copper 134
correcting SAR data with DEMs 39
correlations between magnetics and radiometrics 278
creating a change image 53
crop
canopy 64
condition 59
production 190
production monitoring 189
crop calendar considerations for classification 72
crop circles 189
crop discrimination 63
crop signatures 63
crop yield estimation 191
Cross hatch test pattern 310
Crosta technique 179
cubic convolution resampling 50
cultural vector data, updating in ER Mapper 247
customizing forestry applications 208
D
darkest_pixel_5x5 filter 317
darn filter 320
data
enhancement 49
ERS-1 SAR imagery 213
geophysical 104
satellite imagery 194, 222
scanned 194
smoothing 106
sources 47
storage considerations 146
two-way time 125
data currency 47
data fusion
formulas for 17
using color and intensity 6
using region highlighting 15
data gathering 183
data integration 183
bathymetry 195
geophysical 144
raster and vector 31
raster and vector algorithms 304
data inversion formulas 255
data levelling artefacts 118
data merging
example algorithms 292
data mosaicing 4
formulas for 17
data. See also particular data types
dataset output algorithms 307
dataset resolution 4
datasets
creating different views for the same data 6
displaying with color and intensity 5
merging or fusing 3
decorrelation stretch 271
Decorrelation_Stretch.alg 271
DEM_HEIGHT filter 35
DEM_Height_Slope_Aspect.alg 308
demographics 185
DEMs
applications 22
aspect from 32
bounding with polygon regions 27, 33
combining with other data for classification 32
combining with other data for what-if processing 32
correcting SAR data with 39
creating 19
displaying and processing 21, 23
in 3-D viewing 286
integrating with other data 30
mosaicing 26
processing 31
range of height from 32, 33
selecting values within certain regions 32
slope 31
smoothing 27
sources and quality 19
steepness 31
using for classification 38
depression angle, of radar data (SAR) 214
depth estimates 138
derivative filters 313
development applications 184
deviation filters 317
DGN data import 103
diagonal average filter 316
Diagonal triangles test pattern 310
Difference Vegetation Index 84
different filter 311
different views from the same data 6
digital elevation example algorithms 264
Digital Terrain 264
viewing in 3-D 286
Digital_Terrain_Map.alg 286
Dip 281
dip and azimuth displays in seismic 125
dip computation from a DTM 267
Dip.alg 267
directional filters 113, 264
discrete transform in Fourier processing 176
disk space
minimizing with virtual datasets 34
display of transforms in Fourier processing 177
displaying and processing DEMs 21, 23
displaying classification results 71, 290
displaying images 61
down_cont filters 313
downward continuation filter 112, 313
drainage, remote sensing application 185
drill hole traces 150
drilling isopach maps 139
drilling programs 151
DTM data in 3-D 286
DVI vegetation index 84
DXF data 44, 125
DXF data import and dynamic link 103
dye sublimation printer 60
dyke 138
dyke models 139
dynamic link construction 147
dynamic links example algorithms 297
dynamic links from multiple sources 41
dynamic links to user code 128
dynamic links to vector data and GIS 103
dynamic operation within ER Mapper 148
E
East_West filter 322
easterly_dip filter 320
Eastern Europe 203
Easting Northing map grids 299
edge effects in Fourier processing 175
edge enhancement filters 27, 113, 169
edge shading filter 267
Edge_Shade_from_NE.alg 271
Edge_Shaded.alg 267
Edit Class/Region Color and Name 69
effect of aircraft altitude 118
effect of depth on anomaly shape 118
electromagnetic data 102, 129
imaging 129
elevation data
combining with other data 7
Digital Elevation Models (DEMs) 19
displaying 7
EM imaging 129
embedded algorithms 300
End of Hole Geology 147
engineering applications 183
enhancing data 49
enhancing edges 169
enhancing images 61
environmental assessments, remote sensing applications 184
environmental fragility in semi-arid countries 190
environmental monitoring applications 183, 193, 199, 211
environmentalists, applications for 199
eps filter 316
equal probability contours 67
equations in geophysics 115
erosion monitoring 200
ERS-1 imagery 213
characteristics 213
Europe, remote sensing applications in 183
example algorithms 259
example algorithms, see algorithm examples
example dynamic link 297
Example_Map_Composition_Grids.alg 299
Example_Map_Composition_Items.alg 299
Example_Map_Composition_Scale_Bars.alg 300
Example_Map_Composition_Text.alg 300
Example_Map_Composition_Titles.alg 300
Example_Map_Composition_ZScale.alg 300
exploration effectiveness 143
exploration geophysicists, applications for 133
exploration licence areas 103
exploration target generation 216
exporting vector data 247
extrapolation for filling in Fourier processing 176
F
fallback modes in 3-D perspective viewing 29
fallow circles, classification of 190
false color 269, 274
false color images
for displaying DEMs 24
fault zones
identifying 215
faults 267, 268
identifying with SAR 218
feathering mosaiced images 26
fertiliser use 203
FFT algorithms 307
FFT filtering 21
FFT processing, see Fourier processing
FFT_Highpass_Power_Spectrum.alg 297
FFT_Lowpass_Power_Spectrum.alg 297
FFT_Notch_Power_Spectrum.alg 297
FFT_Power_Spectrum.alg 298
filter
adaptive_median_5x5 317
ASPECT 35
aspect 311
average 323
average_5x5 318
avg_diag 316
avg_h_v 316
avg_hor 316
avg_ver 316
bandpass_3_5 313
brightest_pixel_5x5 317
darkest_pixel_5x5 317
darn 320
deviation 317
different 311
down_cont 313
downward continuation 112
East_West 322
easterly_dip 320
eps 316
force_nosubs 320
gaussian 303
GOOD_HEIGHTS 35
GOOD_REGIONS 35
gradient_X 312
gradient_Y 312
h_edge 320
hanning 112
Hanning_3 320
high_pass_4 312
highpass_4 112
horizontal derivative 314
kernel.template 323
lap_clean 321
laplacian 321
local_enhance 324
low_freq 321
low_pass_4 313
lowpass_4 112
majority 317, 324
median 317, 324
modefilt 324
North_East 322
North_South 322
North_West 322
North_West_5 323
northerly_dip 320
nosubs 110
rm2badli 324
rmbadlin 324
sharpedge 314
sharpen 321
sharpen11 315
sharpen2 315
slope 311
SLOPE_DEGREES 35, 264, 304
slope_degrees 311
SLOPE_PERCENT 265
sobel 320
South_East 323
South_North 323
South_West 323
std_dev 312, 318
thresh 321
TM_NDVI 35
v_edge 322
vertical derivative 314
w_loc_me 324
West_East 323
filters
averaging 28
continuation 114
DEM_HEIGHT 35
derivative in a specified direction 170
directional 113
edge enhancement 27, 113
example algorithms 302
field component in a specified direction 170
frequency 112
geophysical 112
geophysical filters 107
geophysical gravity filters 128
geophysics 312
high-pass Fourier 165, 168
low-pass Fourier 165, 168
majority 28
notch Fourier 165, 166
ranking 28, 317
reduction to the pole 171
reduction-to-pole 173
residual 114
second derivative 114
SLOPE 35
smoothing 112
sun angle 27
vertical continuation 170
vertical derivative 170
fire risk
creating maps 6
displaying 5
identifying 22, 36
fire-break road design, application for DEMs 22
Fireburn 275
firefighting 207
first upward derivative filter 170
First_Vertical_Deriv.alg 307, 308
First_Vertical_Deriv_Power_Spectrum.alg 307
fish stocks 203
flight line overlays 103
floating point operations 11
floods, applications for DEMs 23
foliage height distribution 64
fonts 300
list of 300
force_nosubs filter 320
forest degradation 207
forest management maps 6
forest management, remote sensing application 183
forest monitoring 207
foresters, applications for 208
forestry, customizing applications 208
formula, using to view oil and gas data 255
formulas 289
INREGION function 33
inverting data values 255
formulas for data fusion and mosaicing 17
Fourier processing
discrete transform 176
display of transforms 177
edge effects 175
example algorithms 297
low and high pass filters 168
of potential-field data 170
power spectra 177
real image 177
frequency filters 112
frequency domain processing 165
fusing data
common ways 7
fusing data using color and intensity 6
fusing data, see also merging 3
G
Gamma-ray 277
Gaussian equalisation 52, 267, 274
gaussian filter 303
Gaussian_removed.alg 303
GCPs (Ground Control Points) 49
GEMI vegetation index 89
geochemical data display 133
geochemical data merged with Landsat 134
geochemists, applications for 133, 143
geolinking windows 117
geological applications 275
geological interpretation using SAR imagery 218
geological interpretations, applications for DEMs 22
geological outcrop maps 139
geologists, applications for 133, 143
geology 120
geology databases, links to 143
geometric coincidence 62
geometrical significance 138
geophysical 277
geophysical anomaly spatial frequencies 107
geophysical data imaging 101, 107
geophysical data spacing 106
geophysical filters 107
in gravity 128
geophysical station and line location data 103
geophysical survey boundaries 154
geophysicists, applications for 143
geophysics example algorithms 277
geophysics filters 312
GeoQuest (IESX) ASCII grid dump import 236
GeoQuest (IESX) MapView import 240
GeoQuest IESX ASCII grid dump import 234
GeoQuest imports 231
geoscientific data 133
GIS 183
GIS and forestry 208
GIS data, overlaying on airphotos 44
GIS, links to 103
Global Environmental Monitoring Index 89
gloss 279
goethite 275
GOOD_HEIGHTS filter 35
GOOD_REGIONS filter 35
gradient_X filter 312
gradient_Y filter 312
graphics formats, printing to 30
gravity 102, 126, 277
combined with vectors 127
Fourier processing 170
profiling 128
stripping 128
gravity and Bouguer density choice in imaging 127
Green Vegetation Index 91
greenness 272, 276
greenness of vegetation 270, 271
Greenness layer 269
greyscale images 271
algorithm 267
for displaying DEMs 24
greyscale strips 300
Greyscale.alg 267, 271, 283
Greyscale_Hospitals_and_Fire_Stations.alg 283
gridded borehole depths 103
gridded data
reliability 118
gridding data 19, 105
geophysical 104
grids 300
ground based EM systems imaging 129
ground resolution 119
ground truthing 61, 67
GVI vegetation index 91
H
h_edge filters 320
hanning filter 112
Hanning_3 filter 320
hardcopy. See printing
header file 45
height 308
height layers for 3-D viewing 29, 285
helicopter surveys 118
hematite 275
high frequency information, displaying 6
high performance of virtual datasets 34
high precision of virtual datasets 34
high spatial frequency data 111
high spectral resolution vegetation indices 95
high_pass_4 filter 312
high-frequency content of imagery 175
Highlight_structure.alg 304
highlighting
classifications 15
data of interest 5
images only within a certain area 16
regions 5
regions combined with backdrop images 15
vegetation 5
water 13
with color on a greyscale image 16
highlighting structure 304
highpass filtered FFT 307
high-pass Fourier filters 165, 168, 297, 298
highpass_4 filter 112
Highpass_Filter.alg 307
Histogram equalisation 267
histogram equalization 27, 52, 257
histograms
non-cumulative 108
using with classification 63
hole depths 151
homogeneous features recognition with classification 72
Horizon 281
Horizon_Azimuth.alg 281
Horizon_Colordraped.alg 281
Horizon_Colordraped_Dip.alg 281
Horizon_Colordraped_Step_LUT.alg 282
Horizon_Dip.alg 282
Horizon_IHS.alg 282
Horizon_Low_Amplitude.alg 282
Horizon_Pseudocolor.alg 282
Horizon_Realtime_Sun_Shade.alg 282
horizontal average filter 316
horizontal derivative filters 314
hot wind in fire risk 36
HRV 283
HSI 273, 279, 300
HSI color wheel 309
HSI imaging 111, 266, 294, 299
HSI_to_RGB.alg 306
HSI_to_RGB_usercode.alg 306
HSI_wheel.alg 309
HSV imaging 9
Hue Saturation Intensity, see HSI
hydrography data 117
hydrologists, applications for 190, 199
I
Identifying fault zones 215
identifying intrusive bodies 215
IESX seismic software 233
IHS merge 294, 298
image
display 61
enhancement 61
interpretation 195, 211
using annotation 117
lower resolution overview 43
processing 204
image, see also data
image differencing 54
image discontinuity 176
image mosaicing 201
image processing in mineral exploration 133
image rectification 21
image striping 120
image values 300
import
Charisma 2D XYZ ASCII grid 234, 237
Charisma 3D Inline Xline XYZ ASCII grid
234, 237
GeoQuest (IESX) ASCII grid dump 236
GeoQuest (IESX) MapView 240
GeoQuest IESX ASCII grid dump 234
oil and gas software 231
OpenWorks 3.1 Wells 243
SEG-Y 234
SeisWorks Fault Polygons 244
SeisWorks Manual Contours 245
Zycor ASCII grid 234, 235
importing and exporting 45
importing vector data 103
In_Regions.alg 283
incorrect registration, errors in DEMs 21
Infrared Percentage Vegetation Index 83
infrared response for vegetation 63
infrared satellite imagery 221
infrastructure development, remote sensing applications 184
Ingres 143
Input_Image.alg 298
INREGION function 15, 18, 33
INREGION() function 301
integrating data 183
DEM and other data 30
intensity layer, using in the oil and gas industry 255
Interactive_mosaic_of_4_datasets.alg 296
intercellular air spaces and vegetation indices 78
interfacing with ER Mapper 138
interferometric SAR processing for generating DEMs 20
interpretation using annotation 117
interpreted dips 141
interpreting radar (SAR) data 215
Intrepid dynamic links 103
intrusive bodies 214
identifying 215
IPVI vegetation index 83
iron 274
iron oxide 180, 270, 272
iron oxide absorption 92
Iron_Oxide_ratio.alg 272
iron-bearing minerals 275
iron-stained 180
irrigation requirements assessment, remote sensing application 189, 190
ISOCLASS_Landsat_TM_year_1985.alg 290
ISODATA unsupervised classification 210
iterative training classes 69
J
Julia fractal 309
Julia_Fractals.alg 309
K
K/Th ratio for radiometrics data 115
kernel
north-east shade 271
north-west shading 267
kernel.template filter 323
kernels, see filters
L
lake capacity monitoring 200
land ownership 185
Land_Use_Classification.alg 286
Landmark imports 231
Landsat MSS 204, 268, 269
example algorithms 268
Landsat TM 204
algorithms 270
data
example algorithms 270
merging with airborne magnetics surveys 7
merging with SPOT Pan 7, 10, 11
merging with SPOT Pan over DEMS 286
viewing in 3-D 286
Landsat TM data 79, 179, 209
Landsat_MSS.alg 307
Landsat_MSS_natural_color.alg 269
Landsat_over_DTM.alg 286
Landsat_SPOT_fusion_over_DTM.alg 286
Landsat_TM.alg 307
Landsat_TM_and_SPOT_Pan.alg 308
Landsat_TM_and_SPOT_Pan_IHS_merge.alg 294, 298
Landsat_TM_and_SPOT_Pan_RGBI.alg 294
Landsat_TM_area_specific_stretch.alg 272
Landsat_TM_Cloud_and_Water.alg 305
Landsat_TM_Cloud_removal.alg 305
Landsat_TM_PC1-6_HistEq.alg 309
Landsat_TM_scan_line_noise_removal.alg 307
Landsat_TM_Tasseled_Cap_in_RGB.alg 272
Landsat_TM_transformations.alg 309
Landsat_TM_two_years.alg 309
Landsat_TM_With_simple_grid.alg 300
LandsatMSS_NDVI.alg 268
LandsatMSS_PVI6.alg 268
LandsatMSS_PVI7.alg 268
LandsatMSS_Tasseled_Cap.alg 269
landuse/terrain classification map using SAR 217
lap_clean filter 321
Laplacian filter 279
laplacian filters 321
larvae 221
Latitude Longitude map grids 299
layers
Intensity 255
order in mosaicing algorithms 26
layover effect in radar processing 216
leaf area indices 64
least squares fit 272
Leeuwin Current 221
legends 300
legislative obligations and infringements, remote sensing application 184
lettuce, classification of 190
levelling artefacts 118
levelling problems in DEMs 26
lightning strikes 207
lineament detection 264
lineaments 267, 268
linear leveling 27
Linear Scale test pattern 310
links to vector data and GIS 103
local anomalous 114
local_enhance filter 324
low frequency filter 321
low vegetation cover and vegetation indices 93
low_pass_4 filter 313
lower resolution overview image files 43
lowpass filtered FFT 307
low-pass Fourier filters 165, 168, 297, 298
lowpass_4 filter 112
Lowpass_Filter.alg 307
Lsfit.alg 272
M
MAFF 185
magnetics data
collection 117
displaying 7, 102
displaying over a satellite image 8
imaging 117
viewing in 3-D 287
Magnetics.alg 287
Magnetics_1Q_Vertical_Derivative.alg 278
Magnetics_2nd_Vertical_Derivative.alg 278
Magnetics_3Q_Vertical_Derivative.alg 278
Magnetics_and_Radiometrics_Colordrape.alg 278
Magnetics_and_Vectors.alg 278
Magnetics_Colordrape.alg 278
Magnetics_Colordrape_map_legend.alg 278
Magnetics_Colordrape_shiny_look.alg 279
Magnetics_Colordrape_wet_look.alg 279
Magnetics_Contours_from_Raster.alg 279
Magnetics_POB_Vertical_Derivative.alg 280
Magnetics_Pseudocolor.alg 280
Magnetics_Realtime_Sun_Shade.alg 280
Magnetics_rgb_3angle.alg 280
Magnetics_uc1_and_full.alg 309
Magnetics_uch_and_dch.alg 309
Magnetics_uch_and_full.alg 309
majority filter 317, 324
majority filters 28
map
creating a 41, 250
map composition 300
example algorithms 299
multiple language support 300
map composition items 17
map grids 299
map legends
example algorithms 301
map titles 300
Map_Symbols_In_Use.alg 300
maps
fire risk 6
forest management 6
landuse/terrain classification 217
mineral exploration 6
precipitation 6
updating with remote sensing 184
maps, showing integrated data 6
marine environment 205
marine habitat protection 193
marine organisms 221
masking 195
masking of water, forest and urban areas 73
Maximum Gold Value 146
maximum likelihood classifier 67, 190, 289
measurements of the earth’s gravity field 170
median filter 304, 324
for removing errors in DEMs 21
in change detection 55
median filters 317
Median_noise_removal.alg 304
merging data 210
Brovey transform and RGBI 10
classified and airphoto or satellite imagery 7
ERS-1 and Landsat TM 216
geochemical and Landsat 134
gravity and vectors 127
Landsat TM and aeromagnetic surveys 7
Landsat TM and SPOT Pan 7
Landsat TM and SPOT Pan over DEMs 286
radiometrics and aeromagnetic surveys 7
radiometrics and satellite imagery 123
using color and intensity 6
merging data, see also fusing 3
mineral exploration industry 133
mineral exploration maps 6
mineralization 120, 121, 215
mixed-residential class in classification 69
modefilt filter 324
MODELVISION 141
Modified Soil Adjusted Vegetation Index 88
monitoring crop production 189
monitoring environmental change. See environmental monitoring
MONOCOLOR layers 17
MONOCOLOUR layers 148
mono-cropping 203
mosaicing data 4, 201, 210
algorithm 26
DEMs 26
formulas for 17
strip of aerial photographs 186
mosaicing data algorithm 296
MOSS 187
MSAV vegetation index 88
MSAVI2 vegetation index 88
MSS data 76, 79
multiple language support 300
multispectral
classification 60, 190
satellite imagery 190
sensing of crops 63
N
native GIS format 44
natural color 269, 273, 284
NDVI 268, 276, 284
NDVI vegetation index 31, 83, 96, 191, 271
near surface sources 114
nearest neighbour resampling 50
new road development, identifying with satellite imagery 47
Newcastle_Magnetics_Map.alg 300
nickel anomalous regions 153
nickel mineralizations 134
nickel potential 149
NIR 276
NIR data 76, 92
NOAA satellite imagery 222
noise removal 14, 276, 304
non-cumulative histogram display 108
non-linear mixing 93
non-NULL data, transforming 9
normalized difference image for comparing results 51
Normalized Difference Vegetation Index 83
normalized difference vegetation index 268
North America, remote sensing applications in 183
North Arrows 300
North_East filter 322
North_South filter 322
North_West filter 322
North_West_5 filter 323
northerly_dip filter 320
nosubs filter 110
notch filtered FFT 307
notch Fourier filtering 165, 166, 297, 298
notch transform 279
Notch_Filter.alg 307
O
ocean current charting 221
oil and gas industry, using ER Mapper in 231
oil spill response 193
OpenWorks 3.1 Wells import 243
order of layers in mosaicing algorithms 26
Output_Image_Highpass.alg 298
Output_Image_Lowpass.alg 298
Output_Image_Notch.alg 298
overlaying GIS or CAD data 44
oversampling 106
P
panchromatic data 47
PC1 17, 271, 273, 275
PCA. See Principal Components Analysis
Penman method 191
pepper-and-salt noise 169
performance and virtual datasets 34
Perpendicular Vegetation Index 84
Perpendicular Vegetation Index, see also PVI vegetation index 268
petroleum exploration, applications of radar data 213
Petroseis 124
photogrammetric generation of DEMs 20
phyllosilicates 270, 275
pipeline design, application for DEMs 22, 38
pixel to pixel registration 51
pixel transform 9
Planck function 223
planning, remote sensing applications 184
plant development and stress 64
plant leaves and vegetation indices 78
Pleasing_image.alg 273
POB vertical derivative 280
political boundaries, remote sensing application 185
polygon regions 16
polygons
using to define areas of interest 5
polynomial rectification 49
polynomial order 50
potassium 280
potassium-40 277
power spectra in Fourier processing 177
power spectrum image in Fourier processing 173, 297
precipitation maps 6
principal components analysis 17, 54, 123, 204
Principal_Component_1.alg 273
printing 249
stereo pair images 30, 288
stereo pairs with vectors 30
test patterns 304
TIFF format 249
to graphics formats 30
priority of vector layers 17
processing DEMs 31
profiles 137
pseudocolor colordrape, using for displaying data 7, 8
pseudocolor images 109, 267
Pseudocolor.alg 267
PVI vegetation index 84, 96, 191, 268
Q
quality control 144
quantizing data 102
quartz 275
R
radar data
characteristics 213
correcting with DEMs 39
geological interpretation 218
in mineral and oil exploration 213
interpretation 215
layover effect 216
Radian CP3 124
radiance and reflectance 80
Radiometric 277
radiometric correction 62
radiometrics 102, 277, 295
algorithms 277
imaging 119
interpreting 121
K/Th ratio 115
merging with aeromagnetics surveys 7
merging with satellite data 123
over magnetics in 3-D 287
Radiometrics_and_Magnetics_RGBI.alg 295
Radiometrics_Magnetics_RGBI.alg 280
Radiometrics_over_Magnetics algorithm 287
Radiometrics_ratio_K_Th.alg 280
random line data 104
random point data 104
Random test pattern 310
range of heights from a DEM 32, 33
ranking filters 28, 317
raster and vector data integration 31
raster and vector integration example algorithms 304
raster contour line generation 28
raster data
rectifying to vector data 50
raster dataset resolution 4
raster to vector conversion 17
ratio combinations (RGB) 122
ratio vegetation index 82
ratios in geophysics 115, 121
real image in Fourier processing 177
realtime “Sun” shaded images 280, 282
algorithm 268
for displaying DEMs 24
for viewing geophysical data 110
real-time classification 71
Realtime_Sun_Shade.alg 268
rectifying images 21, 49, 186, 195, 200
pixel to pixel registration 51
rectifying raster data to vector data 50
RMS errors 50
selecting control points 49
Red Green difference image 54
Reduce_to_Pole.alg 308
Reduce_to_Pole_Power_Spectrum.alg 308
reduction to the pole filters 171
reduction-to-pole Fourier processing 173
reflectance 80
reflectance spectra 85
regions
displaying attribute data within 14
example algorithms 301
highlighting 5
highlighting combined with backdrop images 15
highlighting data example algorithms 283
showing DEM values within 33
registering data to ground control points 49
registering datasets
pixel to pixel registration 51
regolith maps 139
regular line data 105
reliability of gridded magnetic data 118
removing noise from images 14
resampling 50, 106
residual filters 114
resistivity 129
resolution of scanned images 262
resolution requirements for merging datasets 4, 296
RGB -> HSI -> RGB 294
RGB to HSI to RGB 275, 299
RGB.alg 263, 306
RGB->HSI->RGB 9
RGB_321.alg 269, 273
RGB_321_sharpened_with_SPOT_Pan.alg 269
RGB_321_to_HSI_to_RGB.alg 273
RGB_341.alg 273
RGB_421.alg 269
RGB_432.alg 274
RGB_531.alg 274
RGB_541.alg 274
RGB_541_to_HSI_to_RGB.alg 274
RGB_542.alg 274
RGB_741.alg 275
RGB_autoscale.alg 306
RGB_BCET_autoscale.alg 306
RGB_Principal_Components_123.alg 275
RGB_Principal_Components_bands_741.alg 275
RGB_to_HSI.alg 309
RGB_to_HSI_to_RGB.alg 275, 299, 306
RGBCMYK color strip 309
rgbcmyk_bands.alg 309
RGB-Height images 31
RGBI algorithm 10
RGBI images 9
ringing 169
RIP (Raster Image Processing) engine 42
river transports 203
rm2badli filter 324
rmbadlin filter 324
RMS errors in rectification 50
road boundaries, enhancing 169
road centreline network 49
roads draped over DEM in 3-D 287
Roads_and_SPOT_over_DTM.alg 287
rotated text 300
RVI vegetation index 82
S
SAR filters 317
SAR interferometry 214
SAR. See radar data
satellite imagery 194
availability and acquisition 217
SAVI vegetation index 86, 87
SBI (Soil Brightness Index) 191
scale bars 300
scale independence 144
scan line noise removal 276
scan line noise removed 307
scan lines 276
Scan_line_noise_removal.alg 276
scanned data 194, 262
scattergrams 69
scene brightness, in change detection 55
Schlumberger imports 231
scripting language 43
seasonal effects, eliminating 52
sea-surface temperatures 222
sea-surface temperatures (SSTs) 221
second derivative filters 114
second vertical derivative 170
Second_Vertical_Deriv_Power_Spectrum.alg 308
sedimentation and erosion, applications for DEMs 23
SEG-Y import 234
seismic data 102, 124, 280
example algorithms 280
seismic data, importing into ER Mapper 233
seismic datasets, viewing 255
SeisWorks Fault Polygons import 244
SeisWorks Manual Contours import 245
SeisWorks seismic software 233
selecting control points for rectification 49
semi-arid areas, problems with vegetation indices 94
semi-arid countries, remote sensing in 190
Sensors_16.alg 276
shaded relief images. See sun angle shading
Shaded_Digital_Terrain_Map.alg 288
shadow strips, errors in DEMs 21
shadows, eliminating with band ratios 79
sharpedge filter 314
sharpen filter 321
Sharpen_TM_with_Pan.alg 295
sharpen11 filter 315
sharpen2 filter 315
sharpening Landsat TM imagery 3, 12
shiny colordrape images
for displaying DEMs 25
siltation monitoring 200
Simple_San_Diego_Map.alg 301
Sin_curve.alg 288
Sine wave test pattern 310
Single_Band_Greyscale.alg 307
slope 308
SLOPE filter 35
slope filter 311
slope, generating from DEMs 31, 32
slope, in fire risk 36
slope_degrees 31
SLOPE_DEGREES filter 35, 264, 304
slope_degrees filter 311
Slope_degrees.alg 264, 304
SLOPE_PERCENT filter 265
Slope_percent.alg 265
smelting 209
smoke spotting 207
smoke-stack industrial centres 203
smooth resampling 106
smoothing 106
smoothing filters 112
Sobel filters 320
soil 64
Soil Adjusted Vegetation Index 86
soil line 82, 94
soil mapping 120
soil noise 85
sources and quality of DEM data 19
sources of topography data 130
South_East filter 323
South_North filter 323
South_West filter 323
spatial sampling 106
spectral characteristics of vegetation 77, 78
spectral pattern recognition 60
spectral reflectance characteristics 63
spectral response of crop species 65
spectral unmixing 95
SPOT data 76
viewing in 3-D 286
SPOT HRV 283
SPOT Pan 307
SPOT Panchromatic 293
SPOT Panchromatic data 47, 60, 283, 288
example algorithms 283
SPOT Panchromatic images 284
SPOT XS 72, 284, 293, 307
SPOT XS data
example algorithms 284
SPOT_over_DTM.alg 288
SPOT_Pan.alg 307
SPOT_Pan_in_Regions.alg 301
SPOT_Pan_in_Regions_over_Landsat_TM.alg 301
SPOT_Pan_with_roads_and_drainage.alg 264
SPOT_XS.alg 307
SPOT_XS_and_Pan_Brovey_merge.alg 284
SPOT_XS_and_SPOT_Pan_RGBI.alg 295
SPOT_XS_natural_color.alg 284
SPOT_XS_NDVI_colordraped_over_Pan.alg 284
SPOT_XS_NDVI_veg_index_greyscale.alg 284
SPOT_XS_rgb_321.alg 284
SPOT_XS_rgb_321_sharpened_with_SPOT_Pan.alg 285
SQL 145
SSTs (sea-surface temperatures) 221
statistics used for erosion and siltation estimates 201, 249, 255
statistical sample in classification 68
statistics calculations 301
statistics, in classification 63
statutory designations, remote sensing application 185
std_dev filters 312, 318
steepness, from DEMs 31
stereo pair images 30
stereo pairs 288
with vectors 30
stereo viewing 30, 287
strike filtering 169
strip of aerial photographs 186
striping 120
structural features 180
structure, displaying 7
subsampling and geophysical data 115
sub-sectioning data 42
subsectioning data 195
sun angle 110
sun angle filters 27
sun shading viewing 288
sun shading, using to view oil and gas data 256
supervised classification 61, 190
surface information from Radiometrics 278
survey boundary polygons 103
susceptibility values 141
suspended aerosols 89
T
table based dynamic links 283
table of data dynamic link 297
Tabular_Data_as_Circles.alg 297
tasseled cap vegetation 269
tasseled cap vegetation calculations 276
tasseled caps 272
Tasseled_Cap_Transforms.alg 276
TC_Greeness_over_Brightness.alg 270
template algorithms 259
template algorithms, see algorithm templates
terraces, errors in DEMs 20
terrain, displaying with principal components 17
Tertiary 151
test 3-D image 288
test patterns example algorithms 304
Test_Fonts.alg 300
Test_Patterns.alg 309
text 300
texture mapping in 3-D perspective viewing 29
thermal infrared satellite imagery 221
thorium 280
thorium-232 277
thresh filters 321
thresholding in change detection 56
TIFF format, printing to 249
tiff, printing to graphics files 30
TM data. See Landsat TM
TM_NDVI filter 35
TNDVI 276
topographic variations, emphasizing with radar data 213
topography 102, 129, 264
sources of data 130
training region refinement 69
training site selection 61, 67
Transformed Soil Adjusted Vegetation Index 87
transforms 109
in Fourier processing 177
transmittance of canopy components 64
transport corridors, remote sensing applications 184
Transverse Mercator projection 196
traversing 109
tree damage 209
triangles 300
TRUECOLOR layers 17
Truecolor_Color_Spiral.alg 297
TRUECOLOUR layers 148
TSAVI vegetation index 87
tundra-like regions 207
turbidity 204
two-way time data 125
typicality thresholds 70
U
unsupervised classification 71, 210
unsupervised multi-spectral classification 190
updates from a single authorised source 143
updating vectors 186
Upward Continuation filter 313
uranium-238 277
urban planning, remote sensing application 184
User_Example_Dynamic_Link.alg 297
usercode filters 323
USGS DLG-3 305
utilities mapping, remote sensing application 184
V
v_edge filters 322
vector based geology information 287
vector based polygon regions to raster datasets 283
vector based polygons 301
vector data 49, 263
vector data, overlaying on raster backdrops 44
vector polygons of high fire risk 37
Vector_Deriv_Power_Spectrum.alg 308
Vector_Field_Power_Spectrum.alg 308
vectors
displaying over raster data 5
draped over DEM 287
draping over perspective views 29
importing and viewing with dynamic links 103
updating 186
viewing over stereo pairs 30
Vectors_from_Airphoto.alg 263, 304
Vectors_over_Greyscale.alg 284
Vectors_over_SPOT_Pan.alg 305
vegetation 274, 276, 284
highlighting 5
in fire risk 36
shown in red 79
spectral characteristics of 77, 78
vegetation canopy, penetrating with radar 213
vegetation indices 77, 289
ARVI 90
atmospherically resistant indices 90
basic assumptions 81
DVI 84
GEMI 89
GVI 91
IPVI 83
MSAV 88
MSAVI2 88
NDVI 31, 83, 96, 191
problems in arid and semi-arid areas 94
PVI 84, 96, 191
questions and answers 81
RVI 82
SAVI 86, 87
SBI 191
TSAVI 87
WDVI 85
with low vegetation cover 93
vegetation, dry 276
vegetation, green 276
Vegetation_NDVI.alg 276
Vegetation_TNDVI.alg 276
vegetative biomass monitoring using classification 191
vertical average filter 316
vertical continuation filter 170
vertical derivative filters 314
vertical derivative in geophysical data processing 116
vertical derivative of magnetics 278
vertical edges filter 322
vertical illumination 283
Vertical_Continuation.alg 308
Vertical_Continuation_Power_Spectrum.alg 308
via_ARCPLOT_Airphoto_with_roads.alg 264
via_ARCPLOT_Coast_with_boundaries.alg 264
view position in 3-D perspective viewing 29
village plans, use in development applications 184
virtual datasets 210
and principal components analysis 179
example algorithms 305
for difference imaging 55
template algorithms 308
with DEMs 34
Virtual_Dataset_mosaic.alg 296
visibility impact design, applications of DEMs 22
visualizing geophysical data 101, 107
W
w_loc_me filter 324
warping. See rectifying images
water 305
water quality 203
water resources management 201
water resources monitoring 199
water, highlighting 13
Waterlevel Color Look Up Table 108
watersheds, remote sensing application 185
wave frequencies electromagnetic data 102
WDVI vegetation index 85
Weighted Difference Vegetation Index 85
weighting the classifier 71
West_East filter 323
Western Australia, remote sensing applications in 193
wet look 279
wetness 272, 276
Wetness layer 269
what-if processing 32, 35, 101
wheat, classification of 190
Z
Z-profiling 109
Z-scales 300
Z-values 300
Zycor 124
Zycor ASCII Grid import 234
Zycor ASCII grid import 235