ENVI for Defense and Intelligence


Copyright 2011

All rights reserved.

E3De, ENVI and IDL are trademarks of Exelis, Inc. All other marks are the property of their respective owners. ©2011, Exelis Visual Information Solutions, Inc.

Produced by Outreach Services

Exelis Visual Information Solutions

4990 Pearl East Circle

Boulder, CO 80301

303-786-9900

Contents

Introduction ..................................................................................................................................................................... 1

What is ENVI?............................................................................................................................................................... 2

ENVI + IDL, ENVI, and IDL ................................................................................................................................... 2

ENVI Zoom .............................................................................................................................................................. 2

ENVI Resources ............................................................................................................................................................ 3

Contacting Exelis Visual Information Solutions ...................................................................................................... 3

Training .................................................................................................................................................................... 3

Tutorials.................................................................................................................................................................... 3

ENVI Support ........................................................................................................................................................... 3

Contacting Technical Support .............................................................................................................................. 4

Online Resources ...................................................................................................................................................... 4

Exelis Visual Information Solutions Website ...................................................................................................... 4

IDL Newsgroup.................................................................................................................................................... 4

About this Training Manual .......................................................................................................................................... 4

Mastering the Basics........................................................................................................................................................ 5

What You Will Learn In This Chapter........................................................................................................................... 6

Fundamentals................................................................................................................................................................ 6

Creating and Managing Image Display........................................................................................................................ 7

Finding Windows on a Busy Screen....................................................................................................................... 12

Which Bands are in Which Display? ...................................................................................................................... 13

Closing and Reopening the Available Bands List .................................................................................................. 13

The Meta Scroll Window ................................................................................................................................... 16

Mouse Button Review ........................................................................................................................................ 16

The Display Group Menu Bar ................................................................................................................................ 17

Spatial and Spectral Subsetting................................................................................................................................... 20

Online Help ................................................................................................................................................................. 23

Customizing the ENVI Configuration ......................................................................................................................... 24

Closing Files ............................................................................................................................................................... 26

Skills Check ................................................................................................................................................................. 27

Self Test....................................................................................................................................................................... 27

Image Display Concepts................................................................................................................................................ 29

What You Will Learn In This Chapter......................................................................................................................... 30

Stretching Image Data ................................................................................................................................................ 30

Interactive Contrast Stretching ................................................................................................................................... 34

The Histogram Source........................................................................................................................................ 36

The Stretch Type ................................................................................................................................................ 37

Saving a Custom Stretch as a LUT..................................................................................................................... 38

The Default Stretch Hierarchy................................................................................................................................ 38

Histogram Matching ............................................................................................................................................... 39

Adding Color to Displays............................................................................................................................................ 39


Color Tables ........................................................................................................................................................... 39

Using a LUT File in Conjunction with a Color Table........................................................................................ 41

Density Slicing ....................................................................................................................................................... 41

Animation ................................................................................................................................................................... 42

Chapter Review........................................................................................................................................................... 44

The ENVI NITF/NSIF Module .................................................................................................................................... 45

What You Will Learn In This Chapter ........................................................................................................................ 46

The NITF and NSIF Format ....................................................................................................................................... 46

The NITF Image Format .................................................................................................................................... 46

Chapter Review........................................................................................................................................................... 54

Electromagnetic Spectrum Image Display and Analysis ........................................................................................... 57

What You Will Learn In This Chapter ........................................................................................................................ 58

The Electromagnetic Spectrum................................................................................................................................... 58

Understanding Emission of Electromagnetic Radiation Blackbodies ........................................................................ 59

Electromagnetic Radiation Interaction with the Atmosphere: Atmospheric Windows.......................................... 60

Viewing Panchromatic QuickBird Data ..................................................................................................................... 62

Viewing Multispectral QuickBird Data ...................................................................................................................... 62

Multispectral Pixel Signatures.................................................................................................................................... 67

Multispectral Landsat................................................................................................................................................. 70

Band Ratios for Analysis ............................................................................................................................................ 71

Extra Work.................................................................................................................................................................. 73

Chapter Review........................................................................................................................................................... 73

Introduction to ENVI Zoom......................................................................................................................................... 75

What You Will Learn In This Chapter ........................................................................................................................ 76

Introduction to ENVI Zoom ........................................................................................................................................ 76

Working with the Data Manager........................................................................................................................ 77

Working with Layers ......................................................................................................................................... 77

Exploring the ENVI Zoom Interface.................................................................................................................. 78

Using Display Tools .......................................................................................................................................... 79

Working with the Overview Window ................................................................................................................ 79

Working with a Portal........................................................................................................................................ 80

Pinning the Portal to the Image.......................................................................................................................... 80

Working with Blend, Flicker, and Swipe........................................................................................................... 81

Adding PIA TREs.............................................................................................................................................. 83

Saving the File ................................................................................................................................................... 83

The ENVI Feature Extraction Workflow..................................................................................................................... 84

Opening and Displaying the Image.................................................................................................................... 85

Segmenting the Image........................................................................................................................................ 85

Rule-Based Classification.................................................................................................................................. 87

Normalized Band Ratio...................................................................................................................................... 89

Rectangular Shape ............................................................................................................................................. 92

Area.................................................................................................................................... 92


Average Pixel Value........................................................................................................................................... 92

Saving the Rule Set ............................................................................................................................................ 94

Exporting Classification Results to a Shapefile.................................................................................................. 94

Viewing the Report and Statistics ...................................................................................................................... 95

Modifying Export Options (Optional)................................................................................................................ 95

Chapter Review ........................................................................................................................................................... 97

Change Detection: The December 2004 Tsunami....................................................................................................... 99

What You Will Learn in this Chapter ........................................................................................................................ 100

Exercise Overview..................................................................................................................................................... 100

Preprocessing............................................................................................................................................................ 100

Instrument Calibration.............................................................................................................................................. 100

Geometric Correction ............................................................................................................................................... 100

Orthorectification and Registration .......................................................................................................................... 101

Atmospheric Correction ............................................................................................................................................ 105

Image Subset ............................................................................................................................................................. 108

Review ....................................................................................................................................................................... 109

Supervised vs. Unsupervised Classification .............................................................................................................. 109

Supervised Classification...................................................................................................................................... 110

Analyzing Classification Results .......................................................................................................................... 121

Review ....................................................................................................................................................................... 122

What You Will Learn in this Section ......................................................................................................................... 123

Change Detection Analysis ....................................................................................................................................... 123

Review ....................................................................................................................................................................... 130

What You Will Learn in this Section ......................................................................................................................... 130

Synthesizing Results with Post-processing Tools ...................................................................................................... 130

Review ....................................................................................................................................................................... 135

SPEAR Tools................................................................................................................................................................ 137

What You Will Learn in this Chapter ........................................................................................................................ 138

Terrain Categorization.............................................................................................................................................. 138

Change Detection Analysis ....................................................................................................................................... 145

Chapter Review ......................................................................................................................................................... 147

RX Anomaly Detection, Target Mapping, and Material Identification.................................................................. 149

What You Will Learn in this Chapter ........................................................................................................................ 151

The RX Anomaly Detection Algorithm in SPEAR Tools ........................................................................................... 151

The RX Anomaly Detection Algorithm in THOR....................................................................................................... 159

Chapter Review ......................................................................................................................................................... 169

Image Sharpening........................................................................................................................................................ 171


What You Will Learn in this Chapter........................................................................................................................ 172

RGB Sharpening ....................................................................................................................................................... 172

Opening and Viewing Exercise Data........................................................................................................................ 172

RGB Image Sharpening ............................................................................................................................................ 173

Spectral Sharpening.................................................................................................................................................. 174

Chapter Review......................................................................................................................................................... 180

Topographic Analysis for Mission Planning............................................................................................................. 181

What You Will Learn in this Chapter........................................................................................................................ 182

Explore and Resize the Image Data.......................................................................................................................... 182

Topographic Analysis ............................................................................................................................................... 183

Texture Analysis .................................................................................................................................................. 188

Image Masking..................................................................................................................................................... 189

Classification Overlay .......................................................................................................................................... 190

3D Surface View .................................................................................................................................................. 191

Chapter Review......................................................................................................................................................... 195


Introduction

What is ENVI?............................................................................................................................... 2

ENVI Resources............................................................................................................................ 3

About this Training Manual ........................................................................................................... 4


What is ENVI?

ENVI® (the Environment for Visualizing Images) is the ideal software for the visualization, analysis, and presentation of all types of digital imagery. ENVI's complete image-processing package includes advanced, yet easy-to-use spectral tools, geometric correction, terrain analysis, radar analysis, raster and vector GIS capabilities, extensive support for images from a wide variety of sources, and much more.

ENVI can be used to perform numerous image analysis techniques, including multispectral classification, various types of spatial filtering, image registration, principal components transformations, band ratios, and image statistics. ENVI also has a unique suite of advanced spectral analysis tools designed specifically for working with hyperspectral data (although many are also appropriate for multispectral analysis) and a complete set of tools for working with radar data (both single band and fully polarimetric SAR).

Furthermore, ENVI provides full access to the programming language in which it was written, the Interactive Data Language (IDL), a powerful yet easy-to-use fourth-generation language whose programs can easily be incorporated into ENVI.

ENVI’s interactive analysis capabilities include:

• Multiple dynamic overlay capabilities that allow easy comparison of images in multiple displays.

• Real-time extraction and linked spatial/spectral profiling from multiband and hyperspectral data that provide you with new ways of looking at high-dimensional data.

• Interactive tools to view and analyze vectors and GIS attributes.

• Standard capabilities, such as contrast stretching and two-dimensional scatter plots.

ENVI + IDL, ENVI, and IDL

ENVI is written in the Interactive Data Language (IDL®), a powerful structured programming language that offers integrated image processing. The flexibility of ENVI is due largely to IDL's capabilities.

There are two types of ENVI licenses:

• ENVI + IDL — ENVI plus a full version of IDL

• ENVI — ENVI plus a runtime version of IDL

ENVI + IDL users can use IDL to customize their own command-line functions. Advanced ENVI + IDL users should find the flexibility offered by IDL’s interactive features helpful for their dynamic image analyses.
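As a rough illustration of this kind of customization, the sketch below shows a small IDL procedure that opens an image and reports its dimensions using routines from the classic ENVI programming API. The routine names and keywords shown are assumptions drawn from that API; verify them against the ENVI Programmer's Guide for your version before relying on them.

```idl
; Illustrative sketch only -- a simple IDL procedure an ENVI + IDL user
; might compile and run from the IDL command line. The routines and
; keywords (envi_open_file, envi_file_query, r_fid, ns, nl, nb) follow
; the classic ENVI programming API; check the ENVI Programmer's Guide.
pro report_image_size, filename
  envi_open_file, filename, r_fid=fid        ; open the image, returning a file ID
  if fid eq -1 then begin                    ; a file ID of -1 means the open failed
    print, 'Could not open ', filename
    return
  endif
  envi_file_query, fid, ns=ns, nl=nl, nb=nb  ; query samples, lines, and bands
  print, filename, ': ', ns, ' x ', nl, ' pixels, ', nb, ' band(s)'
end
```

From the IDL command line, such a procedure would be run as, for example, `report_image_size, 'myimage.dat'`.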

ENVI Zoom

Beginning with ENVI 4.3, ENVI Zoom is included in your ENVI installation. ENVI Zoom is an easy-to-use, powerful imagery viewer used to display and manipulate remote sensing images. The interface provides quick access to common display tools such as contrast, brightness, sharpening, and transparency. You can work with multiple layers of data at one time and in one window, use a Data Manager and Layer Manager to keep track of multiple datasets, and "punch through" layers to view and work with another layer or layers in the same window. ENVI Zoom also contains the robust RX Anomaly Detection processing feature (but you will not use it in this exercise). This algorithm detects spectral or color differences between layers and extracts unknown targets that are spectrally distinct from the image background. In addition, ENVI Zoom re-projects and re-samples images on-the-fly.

While anyone can take advantage of the display and enhancement tools, ENVI Zoom is primarily designed for defense imagery analysts and other military personnel.

ENVI Resources

Exelis Visual Information Solutions has a team of Global Services Group (GSG) consultants who provide custom software development, consulting services, and training to commercial, research, and government markets. The GSG team can either help you define requirements and lead your development cycle from prototyping to final installation, or they can join your project mid-stream and provide expert assistance.

Each GSG team member offers expertise in areas such as image processing; data analysis; visualization; software development; a broad range of scientific application areas; and government civilian, defense, and intelligence community requirements. If needed, Exelis Visual Information Solutions has staff with the necessary security clearances to support classified projects.

The GSG team is experienced in extending ENVI's robust suite of user functions and batch programming capabilities, and has up-to-date knowledge of recent product enhancements and future product direction. You can email Exelis Visual Information Solutions' Global Services Group at [email protected].

Contacting Exelis Visual Information Solutions

Call, email, or visit online:

Exelis Visual Information Solutions, Inc.

4990 Pearl East Circle

Boulder, CO 80301 USA

Phone: 303-786-9900

Fax: 303-786-9909

Email: [email protected]

Web: www.exelisvis.com

Training

Exelis Visual Information Solutions offers a full range of IDL and ENVI training courses for everyone from the beginning user to the experienced application developer. We teach courses on a rotating basis at the Exelis Visual Information Solutions training facility in Boulder, Colorado. In addition, regional training classes are offered every year at various locations in the United States, Europe, and Australia. For the latest training schedule, a detailed course outline, and/or the cost of a training course, call, send email to [email protected], or go online.

Tutorials

A number of ENVI tutorials are available on the ENVI website (www.ittvis.com/envi) as well as on the data CDs that shipped with your ENVI software.

ENVI Support

If you experience a problem with ENVI, first verify that the issue is not a result of misinterpreting the expected outcome of a specific function or action. Double-check ENVI Help, or check with a local expert.

Make sure your system is properly configured with enough virtual memory and sufficient operating system quotas.

If the problem still occurs, report it to Exelis Visual Information Solutions Technical Support quickly, so that the issue can be resolved or a workaround can be provided. If you cannot find the information you need in the ENVI written guides or ENVI Help, report this to Technical Support as well, so that the documentation can be updated.

Contacting Technical Support

To report a problem, call, e-mail, or go online to submit a support incident:

Technical Support: 303-413-3920

Email: [email protected]

Online Resources

There are two additional resources for ENVI support: the Exelis Visual Information Solutions website and the IDL newsgroup.

Exelis Visual Information Solutions Website

The Exelis Visual Information Solutions website has several links that provide additional ENVI support.

The website includes access to user-contributed ENVI code, an ENVI user forum, an IDL user forum, and technical tips. Go to www.exelisvis.com, select Support or Community, then select an option.

ENVI product documentation, user's guides, tutorials, and module guides are also available on the ENVI website. Go to www.exelisvis.com/envi, select Products > ENVI, then select Product Documentation.

IDL Newsgroup

The Usenet newsgroup comp.lang.idl-pvwave is dedicated to the discussion of IDL. Users post questions and answers and share information about their own IDL projects. Note that many Exelis Visual Information Solutions employees read this newsgroup, but they do not usually post messages to the group.

About this Training Manual

This is a training manual used by Exelis Visual Information Solutions to teach its customers ENVI. It is designed to be a classroom training aid. However, if you cannot attend a training course, this manual is the next best tool for quickly learning and understanding ENVI.

We want you to learn ENVI and apply it successfully to your work. If you have any suggestions for improvements or additions to this manual, please let us know.

Most chapters in this manual are self-contained. In most cases you can pick up the manual in any particular functional area and start working with the exercise immediately.

Bold: All ENVI menu options, dialog buttons, dialog fields, other dialog options, and values that you need to enter are bold.

Courier: Filenames, directory paths, and IDL/ENVI programming variables are in a Courier font.

Numbered Steps: Paragraphs beginning with a bold number designate commands that need to be performed for a particular exercise.


Chapter 1: Mastering the Basics

What You Will Learn In This Chapter............................................................................................6

Fundamentals ............................................................................................................................... 6

Creating and Managing Image Display ......................................................................................... 7

Spatial and Spectral Subsetting .................................................................................................. 20

Online Help ................................................................................................................................. 23

Customizing the ENVI Configuration........................................................................................... 24

Closing Files................................................................................................................................ 26

Skills Check................................................................................................................................. 27

Self Test ...................................................................................................................................... 27


What You Will Learn In This Chapter

In this chapter you will learn how to:

• Create and manage display groups using the Available Bands List

• Manipulate display groups with multiple mouse button controls

• Spatially and spectrally subset image data on-the-fly

• Use ENVI Help

• Custom-configure the ENVI configuration file

Fundamentals

Exercise 1: Starting ENVI and Exploring the Main Menu

1. In Windows, start a new ENVI session by clicking Start → All Programs → ENVI x.x → ENVI + IDL. If you are working on a UNIX machine, type envi at the UNIX prompt.

In ENVI + IDL, the IDL Workbench window (the control panel for the IDL session that is running ENVI) is minimized at the bottom of the screen. This window (Figure 1) is only used for advanced techniques involved with customizing and extending ENVI, but must be open to run ENVI + IDL. If you are running ENVI (vs. ENVI + IDL), access to the IDL Workbench is not provided, and therefore, the window is not displayed.

If you are working on a UNIX machine, the IDL Workbench window is automatically opened.

Figure 1: IDL Workbench Window

2. Once ENVI loads, you will see the ENVI main menu bar (Figure 2). This is the primary control panel for working in ENVI, allowing you to do such things as open files and apply processing functions. By default, the ENVI menu is oriented horizontally as illustrated in Figure 2, but you can change it to a vertical orientation by grabbing a corner of the menu with the mouse and dragging.

Figure 2: The ENVI Main Menu Bar

Note the location of the following tools:

• Making image mosaics is under both the Basic Tools and the Map menus

• Computing image statistics and making masks are under the Basic Tools menu

• Opening image files is under the File menu

• Reading files from tape is under the File menu.

Creating and Managing Image Display

Opening files and displaying images is perhaps the most basic, yet fundamentally important task for image processing software. In ENVI, these tasks are simplified as much as possible by building in support for the most common data formats and by making a special control panel dedicated to managing open files and creating display groups.

Exercise 2: Opening a File

1. From the ENVI main menu bar, select File → Open Image File.

2. ENVI looks for data in the C:\Program Files\ITT\IDL\IDLxx\products\ENVIxx\data directory, where xx is the software version number. Select the file bhtmref.img from the list, then click Open.

Each time you open a new file, ENVI displays the Available Bands List, which lists each individual band in the image that was opened (see Figure 3 on page 16). Entries in the list appear in reverse chronological order, with the most recently opened file’s bands at the top of the list.

3. Take a moment to explore the entries that were placed into the Available Bands List.


Figure 3: The Available Bands List. The callouts in the figure indicate: the icon showing that the opened file is a multiband image file; the list of the band names contained in the file; and the wavelength value of each band (if wavelength information is not available, the parenthetical information does not display).

The file bhtmref.img contains a small subset from a 30 m spatial resolution Landsat TM scene over the Bighorn Basin in Wyoming (USA). As you can see, the file contains six individual bands (it is a multi-band file), where each band is a separate image collected at a different wavelength in the electromagnetic spectrum. The Landsat TM sensor measures in seven different wavelengths.

However, there are only six entries in the Available Bands List, and the band names skip from Band 5 to Band 7.

Question: What happened to Band 6?

Band 6 (the thermal band whose band center wavelength is ~11.5 µm) is often removed from Landsat TM datasets because it has a larger pixel size than the rest of the bands and is typically analyzed separately. Thus, the band names in this dataset skip from Band 5 to Band 7. The bands are located in the following parts of the spectrum (the numbers are the band passes for the full-width-half-maximum of each band):

Band 1 = 0.45 - 0.52 microns (blue)

Band 2 = 0.52 - 0.60 microns (green)

Band 3 = 0.63 - 0.69 microns (red)

Band 4 = 0.76 - 0.90 microns (near-infrared)

Band 5 = 1.55 - 1.75 microns (shortwave infrared)

Band 7 = 2.08 - 2.35 microns (shortwave infrared)

Note that ENVI also opened the file bhdemsub.img. This file contains a digital elevation model for the area and was opened because it is associated with the Landsat image.

Exercise 3: Displaying Images

The Available Bands List not only keeps track of the images that have been opened, but it also serves as the control panel for creating display groups. ENVI uses the red, green, blue (RGB) color display standard. This standard allows 256 levels of brightness (i.e., byte scale) in each channel (red, green, and blue) for each pixel, therefore allowing the display of the highest quality color-composite images. ENVI can display numeric data of any format (for example, floating-point temperature/thermal data), but to actually put those data on the screen as an image, it must be scaled to the 256 brightness values to be understood by the monitor. Fortunately, ENVI can do this with minimal user input. Later in the lesson, you will learn to control how these values are displayed.
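The scaling described above can be sketched in a few lines of NumPy. This is an illustrative stand-in, not ENVI's actual code: the function name (borrowed from IDL's BYTSCL routine) and the 2% stretch default are assumptions made for the example.

```python
import numpy as np

def bytscl(band, low_pct=2.0, high_pct=98.0):
    # Find the stretch limits, then map the data linearly onto 0-255
    lo, hi = np.percentile(band, [low_pct, high_pct])
    scaled = (band.astype(float) - lo) / (hi - lo)
    return np.clip(scaled * 255.0, 0, 255).astype(np.uint8)

# Hypothetical floating-point thermal data becomes byte data (0-255)
# that the monitor can display:
temps = np.random.default_rng(0).normal(290.0, 5.0, size=(512, 512))
display = bytscl(temps)
```

Any numeric data type goes in; what comes out is always one byte per pixel per channel, which is what the 256-level RGB display standard expects.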

1. Left-click to select Band 3 in the Available Bands List. This band is now placed in the Selected Band field and its dimensions, data type, and interleave are listed at the bottom of the window in the Dims field.

2. Click the RGB Color radio button. This allows you to select three bands from the Available Bands List.

RGB color-composite images combine three different bands into a single-color image, where the color of each pixel is dependent on the relative brightness of the corresponding pixel in each of the three bands. Using bands from parts of the electromagnetic spectrum to which our own eyes are not sensitive, you can discriminate features that are otherwise difficult to see.

3. Note that the R (red channel) radio button is selected by default. Select Band 4 in the Available Bands List and watch what happens to the middle part of the window.

Band 4 is placed in the R field, and ENVI automatically switches to the G (green channel) radio button.

4. Select Band 3 for the G field. Band 3 is placed in the G field, and ENVI switches to the B (blue channel) radio button.

5. Select Band 2 for the B field.

The Available Bands List should now be set up for a color-composite image with Band 4 in the red channel, Band 3 in the green channel, and Band 2 in the blue channel. For simplicity, RGB composites such as this are often written using the convention Band (4,3,2) RGB composite.

This particular combination of bands for Landsat TM imagery is usually called a color-infrared composite (because it makes use of the near-infrared channel), and it tends to produce an image where healthy, green vegetation is red.
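Conceptually, building a composite is just stacking three bands into one three-channel array, as in this minimal NumPy sketch (random arrays stand in for real band data):

```python
import numpy as np

def make_rgb(red_band, green_band, blue_band):
    # Stack three single-band images into one (lines, samples, 3) array;
    # each pixel's color comes from its relative brightness in each band
    return np.dstack([red_band, green_band, blue_band])

rng = np.random.default_rng(1)
b4, b3, b2 = (rng.integers(0, 256, size=(512, 512), dtype=np.uint8)
              for _ in range(3))
cir = make_rgb(b4, b3, b2)  # a Band (4,3,2) color-infrared composite
```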


You could change any band assigned to one of the color channels, by choosing a color channel radio button, then selecting a new band from the Available Bands List.

6. Display the Band (4,3,2) RGB composite image by clicking Load RGB.

Instead of opening just one window, ENVI creates a display group consisting of three windows (Figure 4).

Figure 4: The ENVI Display Group. The callouts in the figure indicate: the display group title bar (the #1 corresponds to the display group number); the Image window, which contains image data at exactly one-to-one resolution (one screen pixel for each image data pixel); the Zoom window, which contains a small area from the Image window that has been magnified; and the Scroll window, which displays the entire image, subsampled so that it fits into the window.

The three windows of the display group are linked together to act as a single view of the image.

Actions you take in one window often affect the contents of the others. If you close the Image window, then ENVI automatically closes the whole associated display group. You can, however, close the Zoom or Scroll windows without closing the entire display group.

ENVI uses display groups because most remote sensing images are too large to display in their entirety on a computer monitor. We will learn in the next exercise how to manipulate the three windows of the display group.

There is no limit to the number of display groups that you can have open simultaneously; however, with limited space on your monitor it will be important to manage the number and content of each display.

The button on the bottom of the Available Bands List now reads Display #1, which corresponds to the display number in the title bar of each of the display group windows. Locate the display number on the Image window title bar (see Figure 4 on page 10).

7. To load a second image into a new display group, click Display #1 on the Available Bands List, and select New Display.

8. This opens a new, empty Image window. Use the Available Bands List to load a Band (3,2,1) RGB composite. This is a true-color composite because it approximates the way this area would look to a human observer. Load this into Display #2 by clicking Load RGB.

Examine the Display #2 drop-down button at the bottom of the Available Bands List. This button lets you choose where to display the next image you create. You have the option of creating a new display group or loading an image into an existing display group (this replaces the image that is currently in that display group).

9. In the Available Bands List, select the Gray Scale radio button, and select Band 4.

10. Click the Display #2 drop-down button, and select Display #1 (you are selecting to replace the contents of Display #1 with Band 4).

11. Click Load Band.

12. Practice working with the Available Bands List:

• Display a Band (7,4,1) RGB composite into a third display group. You may want to move a few windows around so that you can see all three images on your monitor.

• Replace Display #2 with a Band (5,3,1) RGB composite.

• Replace Display #3 with a Gray Scale image of Band 1.

• From the #1 Display group menu bar, select File → Cancel.

• View the options on the Available Bands List Display #3 drop-down button. The only available display groups are Display #2 and Display #3.

• Create a new, empty display window and display Band 3 in it as a Gray Scale image.


Question: What display number was created?

ENVI made another Display #1 instead of making a Display #4. ENVI always defaults to the lowest available display number.

13. Right-click on one of the bhtmref.img band names in the Available Bands List. From here, you can choose to load and display this band (as a gray scale image) to a new or currently selected display group.

14. Right-click on the bhtmref.img filename to load a True-color or CIR composite to a new or currently selected display group.

The option to load a True-color or color-infrared image using the right-click menu requires that the bands in the file have wavelength information associated with them. ENVI uses this information to find the best band combination to produce the selected composite for display.
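The underlying idea (match each target wavelength to the band with the nearest center) can be sketched as follows; the target wavelengths and band centers below are illustrative assumptions, not ENVI's published values:

```python
def pick_bands(centers, targets):
    # For each target wavelength, choose the band whose center wavelength
    # is closest -- the idea behind automatic composite band selection
    return [min(range(len(centers)), key=lambda i: abs(centers[i] - t))
            for t in targets]

# Approximate band-center wavelengths (microns) for the six bands in
# bhtmref.img (midpoints of the band passes listed earlier)
centers = [0.485, 0.56, 0.66, 0.83, 1.65, 2.215]
true_color = pick_bands(centers, [0.66, 0.56, 0.485])  # R, G, B targets
cir = pick_bands(centers, [0.83, 0.66, 0.56])          # NIR, R, G targets
```

For this file the true-color pick is list positions [2, 1, 0] (Bands 3, 2, 1) and the CIR pick is [3, 2, 1] (Bands 4, 3, 2), matching the composites loaded by hand in Exercise 3.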

Finding Windows on a Busy Screen

With three display groups (nine windows), the ENVI main menu bar, and the Available Bands List all open at the same time, it may sometimes be difficult to find the window you want! ENVI provides a quick solution for this common problem, called the Window Finder.

1. From the ENVI main menu bar, select Window → Window Finder.

This dialog lists all of the currently open windows. Any window can be brought to the front simply by clicking its name in the list.

Selecting one of the open display groups causes all three windows in the display group to be brought to the front.

Many users find this tool to be convenient, especially as you begin to use more advanced ENVI functions and have many windows open simultaneously.

2. Close the ENVI Window Finder dialog by selecting File → Cancel from the dialog menu bar.


Which Bands are in Which Display?

The title bar of each Image window (see Figure 4 on page 10) lists the bands that were used to make the display. However, if the band names are long, they could prevent all three names from fitting. You can find information about your display using the Display Information window.

1. From the ENVI main menu bar, select Window → Display Information. Place your cursor over each of the three display groups and note how the display information in the dialog updates accordingly.

This dialog not only lists the bands used to make the display group but also information about the contrast stretch that has been applied (a topic covered in a later chapter) and the size and image subsets for each window of the display group.

2. Close the Display Information dialog.

Closing and Reopening the Available Bands List

You can close and reopen the Available Bands List without any effect on the opened files. You may want to close the Available Bands List to save screen space and re-open it only when needed.

1. From the Available Bands List menu bar, select File → Cancel.

2. To re-open the Available Bands List, select Window → Available Bands List from the ENVI main menu bar.

Exercise 4: Manipulating the Display

The red box inside the Scroll window (called the Image box) outlines the area shown at full resolution in the Image window (Figure 5). To change the view in the Image window, left-click in the Scroll window:

• A single click centers the Image box over the cursor position.

• Click and drag to place the Image box in a new location.

Figure 5: Scroll Window with Image Box. The callout indicates the Image box (the red box outlining the area shown at full resolution in the Image window).


1. You probably still have three display groups open. Close Display #2 and Display #3 (leaving only Display #1 open). To do this, you can either select File → Cancel from the Display group menu bar, or click the X at the top-right of the associated Image window title bar. ENVI automatically closes the entire display group.

2. In Display #1 (which should have Band 3 displayed as a gray scale image), move the Image box in the Scroll window to several new locations using both the single-click and click-and-drag options. Observe the changes to the contents of the Image window with each move.

The small red box inside the Image window (called the Zoom box) outlines the area shown in the Zoom window. The controls for moving the Zoom box are nearly identical to those for the Image box.

• A single click centers the Zoom box over the cursor position.

• Click and drag to place the Zoom box in a new location.

Figure 6: Image Window with Zoom Box. The callout indicates the Zoom box (the red box outlining the area shown in the Zoom window).

3. Experiment with the methods for moving the Zoom box in the Image window. Observe the changes to the Zoom window as you move the Zoom box.

4. You can also move the Zoom box by clicking inside the Zoom window.

• Click once in the upper-right corner of the Zoom window (be careful to do only a single click as sometimes you can produce a series of clicks if the mouse is in motion when you click). This centers the Zoom box over the selected pixel.

• Click and drag in the Zoom window. This continuously places the Zoom box over the selected pixel, allowing you to pan around the image. Note that the further you are from the center of the Zoom window, the faster the image pans.

5. You can resize display group windows by clicking and dragging one of the window’s corners.

• Resize the Zoom window to make it larger. Note that as you change the shape or size of the Zoom window, the Zoom box in the Image window changes accordingly.

• Make the Image window as large as possible on your monitor.

Question: Why can’t you make the Image window as big as your whole monitor screen?


The Image window always displays the image data at a one-to-one scale, so the maximum size for the Image window (in screen pixels) is equal to the number of samples and lines in the image.

6. Find the size of the bhtmref.img image by returning to the Available Bands List, selecting one of the image bands, then looking at the bottom of the dialog in the Dims field to see how many samples and lines it has.

The maximum size for the Image window is 512 x 512 pixels because the image has only 512 samples and 512 lines.

Question: What happened to the Scroll window when you maximized the size of the Image window?

The Scroll window shows the portion of the image that is displayed at full resolution in the Image window. If the entire image fits into the Image window, then the Scroll window is unnecessary and is not displayed.

7. Reduce the size of the Image window so that the Scroll window displays.

8. Try resizing the Scroll window so that it is a tall, thin rectangle. Now try resizing it so that it is a long, thin rectangle.

Question: Why does the Scroll window keep jumping back to a square?

The Scroll window always displays the full image scene at a subsampled resolution so that it fits into the window you create. You cannot change the aspect ratio (the ratio of the length to the width) of the Scroll window because it is determined by the aspect ratio of the image. Since the bhtmref.img image is square, you cannot make the Scroll window a long rectangle. When you try, ENVI automatically changes the window dimensions to fit the aspect ratio of the image based on the smallest dimension of the resized window.
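The aspect-ratio behavior follows from applying one subsample factor to both dimensions. Here is a minimal sketch of the idea (the 200-pixel window limit is an arbitrary illustrative value, not ENVI's actual setting):

```python
def scroll_size(samples, lines, max_dim=200):
    # Subsample so the larger image dimension fits within max_dim screen
    # pixels; both dimensions shrink by the same factor, so the Scroll
    # window keeps the image's aspect ratio
    factor = -(-max(samples, lines) // max_dim)  # ceiling division
    return samples // factor, lines // factor

# A square image always yields a square Scroll window:
w, h = scroll_size(512, 512)
```

A 512 x 512 image with a 200-pixel limit gets factor 3, so the Scroll window is 170 x 170; a 1024 x 512 image gets factor 6 and a 170 x 85 window, preserving the 2:1 shape.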

Exercise 5: Mouse Controls in the Zoom Window

ENVI uses all three mouse buttons to provide convenient controls for manipulating the display group windows. You have learned how to move the Image box and the Zoom box using the left mouse button. The other two mouse buttons also have special functions.

Figure 7: Zoom Window with Zoom Controls. The callout indicates the Zoom controls in the corner of the Zoom window.


1. Three small square boxes (Zoom controls) are displayed in the corner of the Zoom window (Figure 7). Clicking the mouse buttons inside any of these small symbol boxes produces different effects than clicking elsewhere in the Zoom window.

• Left-click on the − or + Zoom control to zoom out or in by a factor of 1.

• Middle-click on the − or + Zoom control to zoom out or in by a factor of 2.

• Right-click on the − or + Zoom control to return the Zoom window to the default zoom factor.

• Left-click on the crosshair graphic to toggle the crosshair cursor in the Zoom window on or off.

• Middle-click on the crosshair graphic to toggle the crosshair cursor in the Image window on or off.

• Right-click on the crosshair graphic to toggle the Zoom box in the Image window on or off.

2. The Zoom window can also have optional scroll bars, which provide an alternate method for moving through the Zoom window. Right-click in the Zoom window, and select Toggle → Zoom Scroll Bars.

To have scroll bars appear on the Zoom window by default, select File → Preferences from the ENVI main menu bar. The Preferences dialog appears. Select the Display Defaults tab, then set the Zoom window Scroll Bars toggle to Yes.

3. Using the scroll bars in the Zoom window, pan the view and note that the position of the Zoom box in the Image window updates.

4. From the ENVI main menu bar, select Window → Mouse Button Descriptions. As you move your cursor over different windows, this dialog updates to display a message describing the functions of the three mouse buttons in the selected window. This is a useful tool not only for the display group mouse controls, but also for more advanced functions that use the mouse (which you will learn in later exercises).

The Meta Scroll Window

When displaying an extremely large image, the Scroll window can sometimes be so severely subsampled that the image’s features are completely obscured. When this happens you can temporarily treat the entire image as if it were actually much smaller than it really is by making a meta scroll window. Note that the benefits of the meta scroll window are best illustrated on a large image (bhtmref.img is fairly small) because the window, by definition, must be larger than the area currently displayed in the Image window.

1. Click and drag with the middle mouse button in the Scroll window outside of the Image box. The area of the image that is contained within the area that you define will temporarily become the new full image. The Scroll window is redrawn and scroll bars are added to the window if necessary.

2. Restore the original Scroll window by right-clicking in the Scroll window and selecting Reset Scroll Range.

Mouse Button Review

1. All of the mouse button controls for the display group windows are summarized in Table 1 on page 17. Explore using all of the mouse button controls outlined in the table.

2. Close the Mouse Button Descriptions dialog.


Table 1: Summary of Mouse Button Controls for a Display Group

Image window

• Left: single click = centers Zoom box; click and drag inside Zoom box = places Zoom box

• Middle: no function

• Right: displays menu

Scroll window

• Left: single click = centers Image box; click and drag inside or outside Image box = places Image box

• Middle: click and drag = defines “meta scroll window” boundary (for use with very large images)

• Right: displays menu

Zoom window

• Left: single click = centers Zoom box in Image window on pixel; click and drag = pans Zoom box in Image window

• Middle: no function

• Right: displays menu

The Display Group Menu Bar

Along the top of each Image window in a display group is the Display group menu bar. The Display group menu bar options act only on the image in the current display group (or on the data used to make the display group). For example, the Tools menu provides options that let you extract transects of data from the Image window. The Enhance menu provides options that let you adjust the contrast stretch for the displayed image. The Overlay menu provides options that let you overlay annotations, vectors, map grid lines, regions of interest (ROIs), and other types of information.

Figure 8: The Image Window with the Display Group Menu Bar

Exercise 6: Exploring the Display Group Menu Bar

1. From the Display group menu bar, select Tools → Profiles → X Profile. The Horizontal Profile window appears. This plot window interactively displays a horizontal transect of image data. The vertical red bar in the plot window shows the location of the central pixel in the Zoom window.


Figure 9: X Profile Window

2. Click and drag in the Zoom window and observe how the Horizontal Profile plot updates.

3. From the Horizontal Profile menu bar, select File → Cancel.

4. From the Display group menu bar, select Enhance → Filter → Sharpen [10]. The Enhance menu provides easy access to filters and contrast stretches that you can quickly apply to the displayed image data.

The sharpening filters perform a high-pass convolution on the data in the display group. Sharpening filters with higher numbers in the brackets have a larger amount of the original data added back to the filtered image.
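A high-pass convolution with some of the original image blended back in can be sketched in NumPy. This is an illustrative implementation only: the 3x3 kernel, and the assumption that the bracketed menu number maps to an "add back" fraction, are not taken from ENVI's documentation.

```python
import numpy as np

def sharpen(band, add_back=0.5):
    # High-pass response: 8 * center pixel minus its eight neighbors
    # (a 3x3 Laplacian-style kernel), computed with edge padding
    img = band.astype(float)
    p = np.pad(img, 1, mode="edge")
    neighbors = (p[:-2, :-2] + p[:-2, 1:-1] + p[:-2, 2:] +
                 p[1:-1, :-2] + p[1:-1, 2:] +
                 p[2:, :-2] + p[2:, 1:-1] + p[2:, 2:])
    high = 8.0 * img - neighbors
    # Blend a fraction of the original image back into the filtered result;
    # a larger add_back plays the role of a larger bracketed number
    return add_back * img + (1.0 - add_back) * high
```

On a perfectly flat region the high-pass response is zero, so only the added-back fraction of the original remains; edges, where neighbor values differ, are amplified.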

5. Return the original data to the display group by selecting Enhance → Filter → None from the Display group menu bar.

The Display group menu bar also provides access to the controls for how the display group is set up. For example, you can define the default sizes for the three windows and choose how the windows are positioned on the screen. The default positioning is applied whenever a new image is displayed or when any of the display group windows are resized.

1. Use the Available Bands List to create a display group with a Band (4,3,2) RGB composite.

2. Resize the Zoom window to be about the same size as the Image window, and move it so that it is side-by-side with the Image window.

3. Resize the Scroll window and watch how ENVI’s auto-positioning feature moves the Zoom window back to its original position. The same thing would have happened if you had resized the Zoom window.

In some cases this auto-positioning is convenient, but you may find it useful to turn it off. For example, imagine you are working with three or four displays simultaneously, and you have carefully placed them where you want; then you resize one of the Zoom windows.

4. Right-click in the Image window, and select Scroll/Zoom Position → Auto Placement Off.


Now the display group windows stay in their current positions even after resizing or when a new image band is displayed in the window.

5. In the Image window, place the Zoom box over the round, red feature found approximately in the middle of the image (this is a center-pivot irrigation agricultural field).

6. In the Zoom window, turn the cursor crosshairs on by clicking the crosshair graphic.

This round red feature in the Zoom window is almost the same color as the window’s graphics (the cursor crosshairs), so it is difficult to see the selected pixel.

7. From the Display group menu bar, select File → Preferences. The Display Preferences dialog appears.

8. At the bottom of the dialog, right-click on the Display Graphic Color red color box, and select Items 1:20 → Yellow (Figure 10). When you are finished, close the dialog by clicking OK. The Image and Zoom boxes and the Zoom controls are immediately updated.

You can optionally cycle through the color choices available in the ENVI color tables by left-clicking in the color box.

Figure 10: Display Graphic Color Box Right-Click Menu

Some users prefer to work in a single window, such as the Image or Zoom window. You can change the default display window style (Image/Scroll/Zoom) using the right-click menu or from the Display group menu bar.

1. From the Display group menu bar, select File → Preferences. The Display Preferences dialog appears. The Window Style drop-down list provides the options to display only the Image, Zoom, or Scroll/Zoom windows. Click Cancel to close the dialog.


2. Right-click in the Image window, and select Display Window Style → Image Only. The Scroll and Zoom windows close, and the Image window displays scroll bars so that you can access the entire image. Explore a couple of different display window styles to decide which you prefer. When you are finished, return the display style to Scroll/Image/Zoom.

3. Changes to the display group remain in effect until the window is cancelled, even if you display a different image in the window.

Spatial and Spectral Subsetting

For many applications, it is helpful to think about multispectral datasets as a 3D cube with all of the bands stacked on top of one another (as in Figure 11).

Visualizing image data in this manner makes it easy to see that multispectral images provide information in two distinct domains: spatial and spectral. The spatial domain of the data represents an area within any one band (in sample/line space), while the spectral domain of the data represents the response of any one pixel in all of the bands (in band space).

Figure 11: Multi-Band Files Visualized Geometrically as a Cube

Many processing algorithms can be categorized as either spatial or spectral, depending on the domain from which the data are extracted for processing. For example, image registration is a spatial function, while image classification is a spectral function. When applying routines such as these, it is convenient to define the part of the image that you would like to work on at the time the processing is being set up (thus preventing the need to make intermediate files). In ENVI, you can easily do this through the use of the standard File Selection Dialog (Figure 12 on page 21).
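The two subsetting domains correspond directly to slicing along different axes of the cube, as in this NumPy sketch (a synthetic array stands in for real image data):

```python
import numpy as np

# A six-band image as a cube: (lines, samples, bands)
cube = np.arange(512 * 512 * 6).reshape(512, 512, 6)

# Spatial subset: a sample/line window cut through every band
spatial = cube[100:200, 300:400, :]

# Spectral subset: e.g. Bands 7, 4, 1 of bhtmref.img sit at list
# positions 5, 3, 0 (the band names skip Band 6)
spectral = cube[:, :, [5, 3, 0]]

# Spectral domain of one pixel: its value in every band
spectrum = cube[256, 256, :]
```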


Exercise 7: Defining Subsets

1. From the ENVI main menu bar, select Basic Tools → Rotate/Flip Data. The Rotation Input File dialog appears.

2. Select the bhtmref.img file. Notice that:

• The right side of the dialog window lists an abbreviated summary of the image characteristics (this can help in finding the correct file to select when there are many files available).

• If your processing function allows for image subsetting, then the Spatial Subset and/or Spectral Subset buttons appear at the bottom of the dialog.

Figure 12: Standard File Selection Dialog with Options for Spatial and Spectral Subsetting

3. Click Spectral Subset. The File Spectral Subset dialog appears (Figure 13 on page 22), listing all of the bands in the image. By default, all of the bands are selected.

• To select multiple items adjacent to one another in the list, click on the first item, hold down the Shift key, then click on the last item.

• To select items that are not adjacent to one another in the list, click on each item while holding down the Ctrl key.

• Each time you click on an item in the list without holding down the Shift or Ctrl keys, you reset the selection to only the item currently selected.

4. Using the Ctrl key and left mouse button, select bands 7, 4, and 1, then click OK.


Figure 13: The File Spectral Subset Dialog

5. Click Spatial Subset. The Select Spatial Subset dialog appears, allowing you to define spatial subsets in several different ways:

• The sample and line ranges can be explicitly defined using the fields provided.

• Clicking Image allows you to drag and size a box on a thumbnail picture of the image to define a subset.

• If the image is georeferenced, clicking Map allows the subset to be defined by entering map coordinates.

• You can subset your image using the subset area of another subsetted image, or you can subset your image based on the area encompassing selected regions of interest (ROIs) or ENVI vector files (EVFs).

6. Click Image and define an area that captures only the right half of the image, similar to the image shown at right (the exact subset is not important for this exercise).

• To resize the box, click on the corner of the box and drag.

• To move the box, click inside the box and drag to the new location.

• Middle-click to center the box on the cursor.

7. Click OK. The Select Spatial Subset dialog reappears with the sample and line ranges from your selection entered into the corresponding fields.

8. Click OK. The Rotation Input File dialog reappears with the spatial and spectral subsets from your selections entered into the corresponding fields.


The Input File dialog provides additional features. For example, the Open drop-down button includes a list of options to open a previous file, new file, spectral library, ROI file, or EVF file. The Previous button automatically imports the last spatial and spectral subset and applies it to the current selection (which can save time when the same subsets are being applied to multiple images).

9. Click OK to continue to the processing routine. The Rotation Parameters dialog appears.

10. In the Angle field, type 270 and hit the Enter key to set the rotation angle.

11. Click the Transpose toggle button to switch the transpose to Yes (this flips the image upside down).

12. Select the Memory radio button to output the result to memory, accept the default background value of 0, then click OK.

13. The results are added to the Available Bands List. Note that the band names for the new image begin with Rotate. ENVI always adds a short descriptor to the band names to help you to remember the processing history of the file.

14. Look at your rotated, subsetted results by loading the Rotate Band (7,4,1) RGB composite image into a new display group.

Online Help

Extensive ENVI documentation is accessible from within ENVI, and printable documentation and tutorials are available on the ENVI web site ( http://www.ittvis.com/envi ). ENVI Help includes Contents, Index, Search, and Bookmarks tabs to help you find topics quickly and easily.


Figure 14: ENVI Help


Exercise 8: Using ENVI Online Help

1. From the ENVI main menu bar, select Help → Start ENVI Help.

2. ENVI Help is an extremely valuable resource that many users forget. For example, if you needed a quick review of the mouse functions in the Zoom window, you could find this quite easily by clicking on the book icon for Working with ENVI, then clicking on Using ENVI and Interactive Displays, then clicking the Using Window Options from the Display Group topic (at the bottom of the list).

3. An even quicker way to find a topic in ENVI Help is to use the Index tab to conduct a search of the index. Click on the Index tab and search for information on how to customize the ENVI configuration file by typing ENVI configuration in the text field in the upper left. Then click on ENVI configuration files to see a description in the right-hand panel.

4. You will use the information about the ENVI configuration file in the next exercise. To access this topic quickly, create a bookmark to it. Click on the star icon for Add topic to favorites. This creates a shortcut to this help topic and allows you to quickly access it.

Note: You can also access Adobe Acrobat (.pdf) versions of the ENVI documentation from outside of ENVI Help, which can be especially useful for ENVI programming, when ENVI may not be running. You can obtain the PDFs from the ENVI Tutorial CDs, or by visiting the ENVI documentation web site ( http://www.ittvis.com/envi ) under Product Documentation. For help in UNIX, type envihelp at the prompt.

5. Close ENVI Help.

Customizing the ENVI Configuration

Much of the ENVI interface and system-wide defaults are configurable. On start-up, ENVI searches for a small file called envi.cfg. This file contains default settings for your ENVI sessions, including the directory to use when creating new file output, how to arrange and size the ENVI display group windows, and which histogram stretch to use. You can customize the ENVI configuration by editing the settings in this file using the File → Preferences menu option on the ENVI main menu bar or by editing the file in a text editor. For a Windows installation, the ENVI configuration file is stored in ENVI's menu folder (the default location is C:\Program Files\ITT\IDLxx\products\envixx\menu\envi.cfg, where xx is the software version).

Users on UNIX platforms must have write permissions to the directory where the ENVI configuration file is installed in order to edit the file. See ENVI Help for detailed instructions.

Exercise 9: Customize your ENVI Working Environment

In this exercise, you will edit the ENVI configuration file and save your changes so that they are retained when you restart ENVI.

1. From the ENVI main menu bar, select File → Preferences. The Preferences dialog appears.

2. Select the Display Defaults tab. This tab contains the most commonly customized parts of the configuration file (Figure 15 on page 25). Here, you can set the default size and placement for the display group windows, turn the Image window scroll bars on or off, and set the global default contrast stretch.

Explore the settings available, but leave the Display Retain Value and the Display Default Stretch settings at their default values.

3. Select the Miscellaneous tab. Here, you can change the orientation of the ENVI main menu bar, block the IDL command line from being active in the IDL Workbench window (see Figure 1 on page 6), or change the number of histogram bins used for computing statistics.

Because remote sensing datasets can be quite large (often much larger than the amount of RAM available on the computer system), ENVI must carefully manage the memory usage in order to minimize the chances of a crash. The Memory Usage configuration item settings on this tab assist in this memory management process.

When any image file is processed, ENVI breaks the image into smaller pieces called tiles and processes each tile separately, thus reducing memory requirements. It is recommended that you set the Image Tile Size between 1 and 4 MB (depending on the amount of physical RAM on the system).
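The tiling idea can be sketched as follows (a conceptual illustration only, not ENVI's internal code), processing an image a few lines at a time so that only one tile needs to be in memory at once:

```python
def process_in_tiles(width, height, bytes_per_pixel, tile_bytes, process_tile):
    """Process an image in horizontal tiles of at most `tile_bytes` bytes.

    `process_tile` is called with (first_line, num_lines) for each tile,
    standing in for reading and processing that slice of the image.
    """
    bytes_per_line = width * bytes_per_pixel
    lines_per_tile = max(1, tile_bytes // bytes_per_line)  # lines that fit in one tile
    line = 0
    while line < height:
        n = min(lines_per_tile, height - line)  # last tile may be smaller
        process_tile(line, n)
        line += n

# A hypothetical 7000 x 8000 pixel, 2-byte-per-pixel image with 1 MB tiles:
tiles = []
process_in_tiles(7000, 8000, 2, 1024 * 1024, lambda start, n: tiles.append((start, n)))
print(len(tiles), tiles[0])  # 109 (0, 74)
```

Each tile here holds about 74 lines (1 MB / 14,000 bytes per line), so the whole 107 MB image is never loaded at once.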

ENVI constantly keeps track of the amount of memory it is using. The Total Cache Size setting sets the maximum amount of memory that ENVI should use at any given time. If you run a processing routine that causes ENVI to exceed the cache size setting, ENVI starts deleting all nonessential memory-only items until it frees enough memory to allow it to stay within the limits. For single-user machines, the cache is typically set to 50-75% of the physical RAM available on the system.

Figure 15: The ENVI Configuration File Settings


4. Change the Cache Size to 256 MB. If you are taking this course outside of the Boulder facility, leave the Cache Size at its default value of 24 MB (remember: because this is only a suggested upper limit on memory usage, keeping the cache small does not prohibit processing in ENVI).

5. Select the Default Directories tab.

For this class, all of the data you will use are contained in the root directory of the computer in a folder called envimil. Change the Data Directory to reflect the appropriate path to the envimil directory. For instance, if the envimil directory is on the C:\ drive, enter the path as C:\envimil. Refer to this drive letter whenever there is any reference to the envimil directory.

6. Change the default Temp Directory, Output Directory, and Alternate Header Directory to the envimil\enviout directory. All of the output files produced during the class go into this subdirectory.

7. Click OK to accept the changes, then click Yes when asked to save the current preferences. The Output Configuration dialog appears with the path and name for the default ENVI configuration file. Click OK to overwrite the default envi.cfg file.

With these changes, ENVI automatically defaults to the envimil directory when you open a new data file. In addition, ENVI automatically writes new files to the enviout directory.

Closing Files

Once a file is opened in ENVI, it remains open until the session is ended or you manually choose to close the file. However, because data from the files are read only when they are needed, having many files open simultaneously does not by itself consume memory. Nonetheless, it is sometimes useful to remove an opened file from the ENVI session. There are a number of ways to close files in ENVI:

• Close all opened files using the ENVI main menu bar File → Close All Files option.

• Close all opened files using the Available Bands List menu bar File → Close All Files option.

• Close an individual file using the Available Bands List menu bar File → Close Selected File option.

• Close a display group using the Display group menu bar File → Cancel option.

• Right-click on an image in the Available Bands List, then select Close Selected File.

Exercise 10: Close Files

1. From the Available Bands List, select the bhtmref.img file.

2. From the Available Bands List menu bar, select File → Close Selected File.

3. If any of the bands from bhtmref.img are currently in display group windows, you will receive a warning message that these displays must also be closed if the file is to be removed from the ENVI session. If you receive this warning message, answer Yes to close the display group windows.

4. In preparation for the next chapter, close all open files by selecting File → Close All Files from the Available Bands List menu bar. Answer Yes to the warning message.


Skills Check

At this point you should:

• Know how to create RGB or Gray Scale display groups in new or existing windows

• Understand the relationships between the three windows of a display group

• Know how to use all three mouse buttons in any display group window (including the Zoom controls in the Zoom window)

• Be able to find hidden windows using the Window Finder

• Know how to find and use ENVI Help

• Understand the purpose of the ENVI configuration file (File → Preferences).

Self Test

1. Name three pieces of information that you can quickly and easily obtain from the Available Bands List.

2. If you need a reminder for one of the mouse button controls, what option is available in ENVI to help?

3. From this point forward, when you write a new file to disk from ENVI, where is it written by default?


Chapter 2: Image Display Concepts

What You Will Learn In This Chapter.......................................................................................... 30

Stretching Image Data ................................................................................................................ 30

Interactive Contrast Stretching.................................................................................................... 34

Adding Color to Displays............................................................................................................. 39

Animation .................................................................................................................................... 42

Chapter Review........................................................................................................................... 44


What You Will Learn In This Chapter

In this chapter you will learn:

• What contrast stretching means and why it is important

• How to control the contrast stretch of an image

• How to apply color to gray scale images

• The difference between color tables and density slicing

• How to animate a series of images

Stretching Image Data

Why are image data stretched when making a display?

The pixels in a data file that make up an image can have any value: negative, positive, integer, or floating point. When the image data are visualized on screen, they are displayed as brightness values for each screen pixel. A data pixel with a larger value is brighter than one with a smaller value. However, unlike the image data, screen pixels can only have 256 unique brightness values, varying as integers between 0 and 255 (where 0 is black and 255 is white). Clearly this limitation prevents the data from being displayed with brightness exactly equal to their real value. For example, how do you display a negative data pixel or floating-point data ranging from 0 to 1?

Stretching the image data refers to a method by which the data pixels are rescaled from their original values into a range that the monitor can display—namely, into integer values between 0 and 255. For example, if the image data were floating-point values that ranged from -1.0 to 1.0, the image might be stretched such that data values of -1.0 are assigned a brightness of 0 and data values of 1.0 are assigned a brightness of 255. All of the intermediate data values would be assigned new stretched values based on a simple model, or stretch type. Commonly, a linear stretch type is used so that the stretched data values maintain the same relationship to each other as the original data (e.g., the relative distance between two stretched values is the same as the relative distance between the two original data values). Other stretch types use different models to assign the intermediate values, such as Gaussian, equalization, or square root functions.

Exercise 1: Comparing Data and Stretched Values

1. From the ENVI main menu bar, select File → Open Image File. The Enter Data Filenames dialog appears.

2. Navigate to the envimil\avhrr directory, select the image SEcoast.dat, then click Open.

This image (from the NOAA-16 satellite) contains four bands of AVHRR data with a spatial resolution of 1100 m. The image was obtained in September 2001 and shows a large part of the Southeast coast of the US. The first band has been calibrated into sea surface temperatures in degrees Celsius and is floating-point data.

3. Load a gray scale image of the SST Image band into a new display group.

4. From the Display group menu bar, select Tools → Cursor Location/Value (you can also open this tool by double-clicking in the Image window or by right-clicking in the display and selecting the tool).


5. Try to find some of the brightest and darkest pixels in the image. To do this, reposition the Image box and Zoom box, then use the Cursor Location/Value tool (Tools → Cursor Location/Value from the Display group menu bar) in the Zoom window. Note the data value, the corresponding screen value (the stretched value), and the image coordinates for one of the bright and dark pixels.

bright pixel: coords: ______________data value: ___________screen value: _______

dark pixel: coords: _______________data value: ___________screen value: _______

What is meant by “contrast stretch”?

The stretch used to rescale image data into brightness values can make a drastic difference in the way that the image appears. You can adjust the parameters of the stretch in order to maximize the information content of the display for the features in which you are most interested. This process is referred to as contrast stretching because it changes contrast in the image. Contrast refers to the relative differences in the brightness of the data values (i.e., increasing contrast means that the dark pixels are darker, and the bright pixels are brighter, so the brightness difference between the two is increased).

For example, consider an image whose data numbers (DN) are integers that range between 35 and 85 (51 different data values). If this image were stretched with a simple “one-to-one” model where a data value of 0 is assigned 0 brightness and a data value of 255 is assigned 255 brightness, then the image display is quite dim (since the brightest pixel has a brightness of only 85). This stretch produces a low-contrast image because a difference in data value of one unit is represented by a difference in brightness of one unit. Furthermore, much of the range of available screen brightness is not being used: because there are only 51 different values in the image data, no pixel has a brightness below 35 or above 85, and the remaining brightness values go unused.

The image contrast could be maximized by assigning a brightness of 0 to the minimum data value of 35, a brightness of 255 to the maximum data value of 85, and linearly stretching the remaining 49 data values through the rest of the available brightness range. This increases the contrast because adjacent data values now differ by several units of brightness rather than just one, making it easier to visually distinguish slight differences in the data values.
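Under the same assumptions as before (a simple linear mapping, sketched for illustration rather than as ENVI's exact code), the min-max stretch of this 35-85 example works out as follows:

```python
def minmax_stretch(dn, dn_min=35, dn_max=85):
    """Map the data range [dn_min, dn_max] linearly onto brightness [0, 255]."""
    scaled = (dn - dn_min) / (dn_max - dn_min)
    return int(round(scaled * 255))

# Adjacent data values now differ by about 255/50, roughly 5 brightness units:
print(minmax_stretch(35))  # 0
print(minmax_stretch(36))  # 5
print(minmax_stretch(85))  # 255
```

A one-unit change in DN becomes a roughly five-unit change in brightness, which is why slight differences in the data become visually distinguishable.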

Through careful adjustment of the image stretch, it is possible to highlight certain features in an image. ENVI provides several sophisticated tools for this purpose.

ENVI’s default stretch (defined in the configuration file) is a 2% linear stretch, where the image histogram is computed and the cumulative 2% and 98% tails are determined. Then, the data value that defines the threshold for the 2% tail is assigned a brightness of 0, the data value that defines the 98% tail is assigned a brightness of 255, and a linear model is used to assign the intermediate values. In order to speed processing, the initial stretch (the very first time the image is displayed) is computed using only the data contained in the Scroll window.
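The 2% tails can be found from the cumulative histogram. A rough sketch of the idea (using sorted-order rank positions as a stand-in for ENVI's histogram-based thresholds):

```python
def percent_linear_stretch(data, tail=2.0):
    """Clip the cumulative `tail`% and (100 - tail)% values, then stretch linearly."""
    ordered = sorted(data)
    n = len(ordered)
    lo = ordered[int(n * tail / 100)]                       # value at the 2% tail
    hi = ordered[min(n - 1, int(n * (100 - tail) / 100))]   # value at the 98% tail
    out = []
    for v in data:
        scaled = (v - lo) / (hi - lo)
        out.append(max(0, min(255, int(round(scaled * 255)))))  # clip the tails
    return out

# Outliers at 0 and 1000 no longer dominate the stretch:
data = [0] + list(range(100, 200)) + [1000]
stretched = percent_linear_stretch(data)
print(stretched[0], stretched[-1])  # 0 255
```

Clipping the extreme 2% at each end keeps a few outlier pixels (e.g., sensor noise or clouds) from compressing the rest of the data into a narrow brightness range.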

1. From the Display group menu bar, select Enhance and examine the list of predefined stretches you can apply to a display group.


You can apply several contrast stretches to the displayed image without having to manually define the parameters for the stretch. Each of these predefined stretches is based on image statistics, so there are three versions of each: one that computes the histogram statistics from only the data in the Image window, one that uses only the Scroll window data, and one that uses only the Zoom window data.

• The Linear stretch sets the data minimum and maximum to screen values (brightness) of 0 and 255 and stretches all other data values linearly between 0 and 255.

• The Linear 0-255 stretch sets a data value of 0 to a screen value of 0 and a data value of 255 to a screen value of 255 and stretches all data values between 0 and 255 linearly. This is the same as applying no stretch to the data.

• The Linear 2% stretch sets the lowest and highest 2% of data values to screen values of 0 and 255, respectively, and stretches all other data values linearly (the same method as ENVI’s default stretch).

• The Gaussian stretch sets the data mean value to a screen value of 127, the data value three standard deviations below the mean value to a screen value of 0, and the data value three standard deviations above the mean value to a screen value of 255. Intermediate data values are assigned screen values using a Gaussian curve.

• The Equalization stretch scales the data to equalize the number of DNs in each display histogram bin.

• The Square Root stretch takes the square root of the input histogram and applies a linear stretch.
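As an illustration of one of these models (a simplified sketch, not ENVI's exact algorithm), histogram equalization maps each data value through the normalized cumulative histogram, so heavily populated bins are spread across more of the brightness range:

```python
from collections import Counter

def equalization_stretch(data):
    """Histogram-equalize integer data to brightness 0-255.

    Each value is mapped through the normalized cumulative histogram,
    so densely populated data values are pushed apart in brightness.
    """
    counts = Counter(data)
    total = len(data)
    cdf = {}
    running = 0
    for value in sorted(counts):
        running += counts[value]
        cdf[value] = running / total  # fraction of pixels <= this value
    return [int(round(cdf[v] * 255)) for v in data]

# Half the pixels share one dark value; equalization pushes the rest apart:
print(equalization_stretch([10, 10, 10, 10, 20, 30, 40, 50]))
```

Because the mapping follows the cumulative histogram, the result depends strongly on which pixels are included in the statistics, which is why the [Image], [Scroll], and [Zoom] versions of the same stretch can look so different in the exercise below.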

2. Move the Image box in the Scroll window to an area along the coast. From the Display group menu bar, select Enhance → [Image] Equalization.

Question: What features does this equalization stretch highlight?

3. Move the Image box in the Scroll window to the lower right-hand part of the image. This part of the image contains many clouds. Perform another [Image] Equalization stretch.

Question: What happened to the definition of the coastline on the left side of the image? Why do you think this happened?

Because the data in the Image window had changed, the image statistics changed and the stretch that was applied was quite different.

4. Find an area in the image with the very brightest pixels (there is a small area around image coordinate [570, 1200] that is nearly saturated with white pixels). Zoom in on this white patch so that the Zoom window contains mostly white pixels (a zoom factor of about 15).

5. Using the Cursor Location/Value tool, note that even though the pixels in the Zoom window all appear to have the same brightness, they actually do have slightly different data values.

6. From the Display group menu bar, select Enhance → [Zoom] Equalization.

7. This approach should display some of the more subtle data variations in only the very brightest pixels of this cloud. Using the Zoom stretch is a fast way to explore image data for very specific, small-scale features.


What happens to the data when the image is stretched for display?

Contrast stretching an image for display does not affect the original image data. When ENVI performs numerical processing, it uses the original data from the file, not the contrast-stretched data that are displayed.

1. Load a gray scale image of the SST Image band into a new display group, and compare the two open images.

The two image displays look quite different, but the data in each display group are identical; it is only the stretch that is different.

2. From the Display group menu bar in each Image window, select Tools → Pixel Locator (each display group has its own Pixel Locator). Arrange the windows on your screen so that you can see both Image windows, both Pixel Locators, and the Cursor Location/Value tool.

3. Using the notes you wrote down in step 5 of Exercise 1, place the Zoom window for Display #2 at the image coordinate corresponding to your bright pixel. Click Apply in the Pixel Locator dialog to move the Zoom box.

Question: What data value is reported by the Cursor Location/Value?

4. Repeat the above step for Display #1 (the display group that is virtually all black).

Question: Do both display groups report the same data value for the bright pixel?

Both display groups should report the same data value for the same image coordinate, regardless of the stretch applied. Optionally, repeat this test for the dark pixel you found in step 5.

If you are not finding the identical data value in both windows, be sure you are entering the same image coordinate for each and make sure that you are looking at the Cursor Location/Value tool immediately after you click Apply in the Pixel Locator, since the Cursor Location/Value tool updates as soon as your cursor enters another window.

What happens on 24-bit monitors that can show 16.7 million colors?

Even 24-bit monitors can only display 256 different brightness values. The number of colors that a monitor can display differs from the brightness values because of the way that colors are created. Colors are defined by a combination of individual brightness for three different color planes (red, green, and blue).

On 8-bit systems, only 256 different screen brightness values exist; however, each brightness can be replaced with a color defined by a red, green, and blue triplet (an RGB triplet). Thus, 256 colors can be displayed. This mapping of screen brightness to colors is referred to as a color lookup table, which you will explore in more detail in a later exercise.

On 24-bit systems, you can think of the monitor as three different monitors stacked on top of each other, where each individual “stacked” monitor can have 256 different brightness values. If each stacked monitor is assigned a color plane (red, green, or blue), then any given screen pixel could end up as one of 256 x 256 x 256 different colors (i.e., one of 16,777,216 different colors). However, each plane still has only 256 different brightness values. Even on hardware that can support 24-bit color, where more than 16.7 million different colors can be displayed simultaneously, image data going into each color plane must still be stretched before it can be displayed.
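The 8-bit lookup-table idea can be sketched as follows (an illustrative grayscale table, not one of ENVI's actual color tables):

```python
def build_gray_lut():
    """A 256-entry lookup table mapping screen brightness to an RGB triplet.

    Here every brightness maps to a neutral gray; a color table would simply
    store different triplets (e.g., a blue-to-red ramp) at each entry.
    """
    return [(b, b, b) for b in range(256)]

def apply_lut(brightness_pixels, lut):
    """Replace each 8-bit screen brightness with its RGB triplet."""
    return [lut[b] for b in brightness_pixels]

lut = build_gray_lut()
print(apply_lut([0, 128, 255], lut))  # [(0, 0, 0), (128, 128, 128), (255, 255, 255)]
```

Swapping the table contents changes the displayed colors without touching the stretched image data, which is the key property of lookup-table display.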


1. Replace Display #1 with a Band (5,4,3) RGB composite of SEcoast.dat.

2. Use the Cursor Location/Value tool to look at the RGB data and screen values. Note that each color channel’s band is stretched independently into an integer screen brightness value.

Because a 24-bit monitor can display any combination of colors simultaneously, running ENVI in this mode allows you to display an unlimited number of RGB images at the same time. This behavior is not possible when running on a monitor in 8-bit mode because there are only 256 total colors that can be displayed (one for each screen brightness value). However, ENVI is able to emulate 24-bit true-color display properties even on 8-bit monitors by breaking the system color table into multiple, small color tables, with each color table “subset” defined by the colors needed to display one of the RGB image windows.

The size of each color table subset is defined in the ENVI configuration file (which you can change by choosing File → Preferences and selecting the Display Defaults tab from the ENVI main menu). ENVI’s default is 40 colors per gray scale and 64 colors per RGB, which means that when displaying an RGB image on an 8-bit monitor, the image data for each color plane are stretched into only 64 different values (instead of 256). A 64-entry color lookup table is then defined by the stretched image’s RGB triplets.

Interactive Contrast Stretching

In this exercise, you will learn how to use ENVI’s tools to control the display group’s contrast stretch.

Exercise 2: Interactive Contrast Stretching

1. From the Available Bands List, close all open images.

2. From the ENVI main menu bar, select File → Open Image File. The Enter Data Filenames dialog appears.

3. Navigate to the envimil\avhrr directory and open the grnland image.

4. Load a Band (5,3,1) RGB image of the grnland file into a new display.

5. This is an AVHRR image of Greenland, showing open ocean (the orange areas), sea ice (small areas of dim magenta), glaciers (purple), land (yellow), the ice sheet (the dominant bluish-purple area), and clouds (various shades of green, cyan, and blue).

6. Load Band 2 as a gray scale image in a new display group.

7. From the #2 Display group menu bar, select Enhance → Interactive Stretching. The Interactive Histogram dialog appears (see Figure 16).

• The Input Histogram plot on the left represents the distribution of pixels in the input data, which initially contains the pixels from the entire band. The type of stretch model and the histogram source are indicated at the bottom of the dialog.


• The Output Histogram plot to the right represents the new distribution of stretched pixel values in the image display window.

• Along the top of the dialog window, next to the Apply button, the current stretch’s minimum and maximum values (i.e., those assigned to 0 and 255 brightness, respectively) are displayed in the fields labeled Stretch. The locations of these values in the Input Histogram plot window are indicated by the dotted vertical bars in the plot.

Figure 16: Interactive Histogram Dialog. The callouts in the figure indicate the current stretch minimum and maximum cutoff values; the data value of the selected histogram bin; the number of pixels (and percent of all pixels) in that bin; and the cumulative percentage of all pixels with a DN less than or equal to that value.

8. In the Interactive Histogram dialog, left-click and drag your mouse and note the new information displayed below the plot (see Figure 16).

As you drag the cursor, the plot window reports the DN for the currently selected histogram bin, the number of data pixels that fall within this bin, the percent of data pixels this bin represents, and the cumulative percent of all pixels that fall into or below this bin (into the lower tail). The cursor query works in both the Input Histogram and Output Histogram plots.

9. Click and drag one of the dotted vertical bars in the Input Histogram plot to a new data location. This changes the Stretch minimum or maximum value, and you should see the value reported in the corresponding fields.

After you change the Stretch minimum and maximum values, the Output Histogram plot automatically updates. The contrast of the displayed image does not change until you click Apply.

However, you can have any changes in the stretch applied automatically so that you don’t have to click the Apply button. Select Options → Auto Apply. The Apply button is then grayed out because it is no longer needed. This auto-apply is something you can also set up in the ENVI preferences.

10. You can also set the Stretch minimum and maximum to specific percent values by typing the desired value into the fields at the top of the dialog window. Define a 7% linear stretch by typing 7% into the minimum Stretch field, then pressing the Enter key. Type 93% into the maximum Stretch field, and press Enter.

Alternatively, you can set the stretch minimum and maximum to specific data values by typing a number into one of the text boxes.

11. Specific peaks in the input histogram correspond to features in the image data. Experiment with the contrast stretch by moving the dotted vertical bars in the Input Histogram plot. Try to find stretches that maximize the contrast for certain parts of the image (see Figure 17 for some suggestions).

12. After you have finished experimenting with the contrast stretches, reset the initial stretch by selecting Options → Reset Stretch from the Interactive Histogram dialog menu bar.

The Histogram Source

1. From the Interactive Histogram dialog menu bar, select Histogram_Source → Zoom. The histogram plots should update.

The Histogram_Source menu allows you to choose the area from which the image statistics are extracted for the stretch: the Image window, the Scroll window, the Zoom window, the entire band, or a user-defined region of interest (ROI). Controlling the areas from which the statistics are computed can have a dramatic effect on the stretch that is calculated. This allows you to exclude certain areas and purposely bias the histogram statistics.

2. In the lower-right corner of the Greenland image, there is an area that looks uniformly black.

Question: Is there any variation in the data in this area?

3. Center the Zoom box over this area. After the new histograms are computed, if Auto Apply is not turned on, you need to click Apply to view this new contrast stretch. Remember, you can also zoom in or out to further control the area being used to collect the histogram statistics.

Again, you can avoid having to click Apply by selecting Options → Auto Apply to have the contrast update automatically any time you change anything in the Interactive Histogram.


Figure 17: Stretch Displaying a Contrast for the Interior of the Ice Sheet (top); Stretch Displaying a Contrast for Areas of Sea Ice (bottom)

The Stretch Type

The type of stretch model can also be controlled manually.

1. From the Interactive Histogram dialog menu bar, select Histogram_Source → Image.

2. From the Interactive Histogram dialog menu bar, select Stretch_Type → Gaussian and reapply the new stretch.


Note the change in the shape of the Output Histogram, and the red Gaussian function that is displayed over the Output Histogram data.

3. Experiment with several different stretch types. The Piecewise Linear model allows you to build your own transfer function (the relationship between the input DN and output screen brightness values). If you choose Piecewise Linear, you can middle-click to add nodes to the model. Left-click on a node and drag to position it, and right-click to delete nodes. To enter the node values manually, select Options → Edit Piecewise Linear from the Interactive Histogram dialog menu bar.
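A piecewise-linear transfer function can be sketched outside ENVI as simple interpolation between nodes; the node values below are invented for illustration:

```python
import numpy as np

# Nodes of a hypothetical transfer function: (input DN, output brightness).
# Between nodes the mapping is linear, as in ENVI's Piecewise Linear model.
nodes_in = [0, 100, 200, 255]
nodes_out = [0, 30, 220, 255]

def piecewise_linear(dn):
    # np.interp evaluates the straight-line segments between the nodes
    return np.interp(dn, nodes_in, nodes_out)

print(piecewise_linear(np.array([50, 150, 255])))
```

Moving a node changes the slope of the segments around it, which is why small node edits can dramatically change image contrast.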

4. Close Display #2, the one with Band 2 displayed as a gray scale image.

5. Use Display #1, which should contain the RGB image, to experiment with a few of the predefined stretches in the Enhance menu. Remember, each band is stretched independently in order to maximize the contrast for each dataset.

6. From the Display group menu bar, select Enhance → Interactive Stretching. Remember that the stretch type is automatically set to the last predefined stretch type selected from the Enhance menu.

7. The Interactive Histogram dialog for an RGB display has R, G, and B radio buttons in the upper-right portion of the dialog. You can stretch the data loaded to the red, green, or blue plane of the image display once you select the appropriate button. The Input Histogram is shown in the color corresponding to the color plane that is currently selected. For example, if you select the green button, then the Input Histogram is green, and the dialog controls the green color plane of the display. Using the Interactive Histogram dialog, experiment with the RGB image's stretch to see the effects of changing only one or two of the bands' contrast stretches.

Keep in mind that you can stretch each color plane of the display separately. You can also choose different Stretch_Type and Histogram_Source options for the different color planes of the display.

Saving a Custom Stretch as a LUT

If you have defined a custom stretch that you wish to save for future use, you can store it as a lookup table (LUT). Use one of the following methods:

• From the Interactive Histogram dialog menu bar, select File → Save Stretch to LUT → ASCII LUT. This saves the stretch as an ASCII file that can then be imported and applied to any image in the future using File → Restore LUT Stretch. Click Cancel to close this dialog.

• From the Interactive Histogram dialog menu bar, select File → Save Stretch to LUT → ENVI Default LUT. This creates a special type of ENVI binary file with the same root name as the image being displayed but with a .lut extension. If the image file has multiple bands, this special binary LUT file can contain multiple contrast stretch LUTs, one for each band. If the filenames are not changed, then ENVI automatically uses the .lut file's stretch when displaying the image, instead of the stretch specified in the ENVI configuration file.

Unlike the default stretch defined in the configuration file, the LUTs are not statistically based. They are a static table that associates each data value in the image for which it was defined with a specific screen brightness value.
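For a byte image, such a static table can be pictured as a 256-entry array applied by direct indexing. A toy sketch (the inverting LUT here is made up purely for illustration, not an ENVI LUT):

```python
import numpy as np

# For a byte image, a LUT is a static 256-entry table: every possible data
# value is assigned a fixed screen brightness, independent of image statistics.
lut = np.arange(256, dtype=np.uint8)[::-1]   # toy LUT that inverts brightness

image = np.array([[0, 64], [128, 255]], dtype=np.uint8)
stretched = lut[image]                        # apply the table by indexing
print(stretched)
```

Because the table is fixed, the same image always displays identically, no matter what statistics a new display session would compute.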

The Default Stretch Hierarchy

As you have already learned, the global default stretch in ENVI is defined in the ENVI configuration file (which you can change using the File → Preferences → Display Defaults tab). However, you can also save a default .lut file for any particular image, which replaces the file's default stretch with a specific lookup table. There is also a third way of specifying a default stretch for a particular file that supersedes the global default stretch. In an image's ENVI header file, there is a Default Stretch field that can specify a stretch type and initial parameters for the image's default stretch. ENVI applies a stretch according to the following hierarchy:

• If a LUT file is associated with an image, then ENVI uses the LUT settings.

• If the Default Stretch field in the image header is specified (and there is no LUT file), then ENVI uses the header stretch settings.

• If neither a LUT file nor a header stretch is specified, then ENVI uses the default stretch settings from the ENVI configuration file.
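The three rules above amount to a simple precedence check. Sketched in Python (the function and argument names are invented for illustration):

```python
def pick_default_stretch(has_lut_file, header_stretch, config_stretch):
    """Resolve ENVI's default-stretch hierarchy as described above.

    `header_stretch` is the header's Default Stretch field (None if absent);
    `config_stretch` is the global default from the configuration file.
    """
    if has_lut_file:               # a .lut file always wins
        return "LUT file"
    if header_stretch is not None: # then the header's Default Stretch field
        return header_stretch
    return config_stretch          # finally, the global configuration default

print(pick_default_stretch(False, None, "2% linear"))   # -> "2% linear"
```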

Histogram Matching

Sometimes it is useful to try to match the histogram of one displayed image to another to make the brightness distribution of the two images as close as possible. The technique for accomplishing this is called histogram matching and is often used when preparing images for mosaicking or when visually comparing uncalibrated images from the same sensor. Histogram matching is easy to do in ENVI using the Display group menu bar's Enhance → Histogram Matching option.

When you use histogram matching, the histogram for the input display group changes to match the histogram for the display group you selected to match to. Precise control of the input histogram is available by controlling which data are used to compute the input histogram; this can be limited to specific areas of the image, or even an ROI.
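Under the hood, histogram matching maps each input value to the reference value with the same cumulative frequency. A rough numpy sketch of the idea (not ENVI's implementation), for byte images:

```python
import numpy as np

def match_histograms(source, reference):
    """Reshape `source`'s histogram to resemble `reference`'s (byte images)."""
    s_values, s_counts = np.unique(source, return_counts=True)
    r_values, r_counts = np.unique(reference, return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size      # cumulative frequency
    r_cdf = np.cumsum(r_counts) / reference.size
    # For each source quantile, find the reference value at that quantile.
    matched = np.interp(s_cdf, r_cdf, r_values)
    return matched[np.searchsorted(s_values, source)].astype(np.uint8)

rng = np.random.default_rng(0)
src = rng.integers(0, 100, 10000).astype(np.uint8)    # dark image
ref = rng.integers(100, 200, 10000).astype(np.uint8)  # brighter image
out = match_histograms(src, ref)
print(out.min(), out.max())
```

After matching, the formerly dark image occupies roughly the brightness range of the reference, which is what makes adjacent mosaic frames blend visually.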

Adding Color to Displays

In this exercise, you will learn how to use ENVI’s color tables.

Color Tables

There are several ways to create color displays of image data—for example, creating RGB color composites, where the color of each pixel is defined by its relative brightness in three different images. However, you can also use colors to accentuate features in a single band gray scale image. ENVI’s color tables are a special kind of lookup table that associates a screen brightness value with an RGB triplet (a color).
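A color table can be pictured as a 256 x 3 array of RGB triplets indexed by screen brightness. A toy sketch (the black-to-red ramp is invented for illustration, not one of ENVI's tables):

```python
import numpy as np

# A color table: one RGB triplet for each of the 256 screen brightness values.
color_table = np.zeros((256, 3), dtype=np.uint8)
color_table[:, 0] = np.arange(256)        # red channel ramps from 0 to 255

gray = np.array([[0, 128], [200, 255]], dtype=np.uint8)  # stretched image
rgb = color_table[gray]                   # index the table; shape (2, 2, 3)
print(rgb[1, 1])                          # the brightest pixel maps to pure red
```

Note that the table is indexed by the *stretched* brightness, which is why (as the exercises below show) changing the stretch also changes the displayed colors.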

Exercise 3: Applying Color Tables

1. Open two gray scale image displays of Band 1 of the grnland image.

2. From the Display group menu bar in one of these displays, select Tools → Color Mapping → ENVI Color Tables. The ENVI Color Tables dialog appears (Figure 18).

This dialog displays a list of all available color tables. At the top of the dialog, the horizontal color ramp displays the currently selected color table, from minimum brightness at the left to maximum brightness at the right. The two slider bars beneath the color ramp allow the color table to be adjusted by compressing the range of colors from either the top down or the bottom up. Beneath the slider bars is a list of more than 40 different pre-defined color tables that are provided with ENVI.

You can add additional color tables by clicking Edit System Color Tables on the File → Preferences → Display Defaults tab.


Figure 18: ENVI Color Tables Dialog

Adding color to a gray scale image brings a new dimension to its contrast and can help distinguish features that are otherwise hard to see because they have similar brightness values.

3. Select the color table GRN-RED-BLU-WHT. The color table should automatically apply to the image display, and the color ramp bar at the top of the ENVI Color Tables dialog updates.

4. Enlarge each of the Scroll windows and place them side-by-side for comparison.

Question: Did the color table help distinguish any features in the image?

5. Experiment using several other color tables. STEPS, Peppermint, and Blue-Red work well with this image.

If you want to see the color ramp for a selected table without automatically applying it to the image display, toggle Options → Auto Apply off from the ENVI Color Tables dialog menu bar. To apply a color table to the display in this mode, select Options → Apply.

6. Experiment with the slider bars to see their effects.

Question: What happens if you reverse the positions of the slider bars, setting the Stretch Bottom all the way to the right and the Stretch Top all the way to the left?

7. From the Display group menu bar of the display with the color table applied, select Enhance → [Image] Gaussian.


Question: What happened? Did the colors change?

8. Experiment with the image’s contrast stretch.

As you can see, the color table is sensitive to the contrast stretch. Remember, the color table associates a screen brightness (i.e., a stretched data value) with a fixed RGB triplet. Whenever the stretched values change, the colors on the display also change.

Using a LUT File in Conjunction with a Color Table

Because colors applied to an image using a color table are dependent on the image’s stretch, the use of a color table is often coupled with a custom LUT file. For example, you could display the image and apply a color table. Next, using the Interactive Histogram dialog, you could manipulate the stretch until you’re happy with the way the colors look on the image. Then, save the current stretch to an ENVI default LUT.

From this point forward, whenever any user displays this image, the LUT file defines the stretch, and whenever the color table is applied, it will always look exactly the same. This method is useful if you are sending an image to a colleague to view and you want it to look the same on their computer screen as it did on yours.

Density Slicing

In some cases, it may be more useful to color-code an image directly from its data values, instead of its stretched values. This can be accomplished in ENVI using a density slice, where specified ranges of data values are assigned colors. You can even treat density slicing as a simple one-band “classification” of the image based solely on data ranges.
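The idea can be sketched as follows. The data ranges echo the exercise below, and the RGB values are common approximations of the named colors, not ENVI's exact definitions:

```python
import numpy as np

def density_slice(band, ranges):
    """Color-code raw data values; pixels outside all ranges stay gray scale.

    `ranges` is a list of (low, high, rgb) tuples applied to the *data*
    values, not the stretched values.
    """
    gray = np.clip(band, 0, 255).astype(np.uint8)
    rgb = np.stack([gray] * 3, axis=-1)            # start from gray scale
    for low, high, color in ranges:
        rgb[(band >= low) & (band <= high)] = color
    return rgb

band = np.array([[5, 30], [400, 700]])
out = density_slice(band, [(1, 40, (46, 139, 87)),     # approx. "Sea Green"
                           (650, 720, (128, 0, 0))])   # approx. "Maroon"
print(out[0, 0], out[1, 1])
```

Because the slice is keyed to raw data values, changing the contrast stretch afterward does not move the colored regions.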

Exercise 4: Applying a Density Slice

1. Close the display group containing the grnland Band 1 image with the color table applied.

2. From the Display group menu bar of the other display containing Band 1 (in gray scale), select Tools → Color Mapping → Density Slice. The Density Slice Band Choice dialog appears.

3. Select Band 1, then click OK. The Density Slice dialog appears. Default, equally-spaced density ranges are automatically calculated with a different color assigned to each data range.

4. Click Apply to apply the default density slices to the image display.

A density slice allows you to add color only where you want and leave the original gray scale image display elsewhere in an image. For example, the open ocean in Band 1 is captured by the smallest data values. You can highlight where the ocean is visible by making a density slice range which color codes only the data that you think are the ocean pixels.

5. Click Clear Ranges.

6. From the Density Slice dialog menu bar, select Options → Add New Ranges.


7. Define the new open ocean range as data values between 1 and 40. Set the color for this range to Sea Green.

8. Click OK, then click Apply in the Density Slice dialog to apply the range to the image.

You may also wish to capture the higher elevation portions of the Greenland ice sheet using additional density slice ranges.

9. Try making two more density slices for the following data ranges:

650 to 720 = Maroon

721 to 927 = Red

10. Apply these new ranges and evaluate your results.

Question: Is it possible to capture the gray ring (the edge of the ice sheet) bordering the Maroon density slice range?

11. Using the Cursor Location/Value tool, determine the approximate range of data values in this gray border area. Then make a new density slice range for this data range.

12. You can export density slice results to ENVI vector files (EVFs) or to a classification image using the Density Slice dialog File menu.

13. From the Available Bands List menu bar, select File → Close All Files. Answer Yes to the warning message.

Figure 19: (a) Displaying an Image; (b) Displaying a Gray Scale Image with a Color Table Applied; (c) Displaying a Gray Scale Image with a Density Slice Applied

Note: The density slice is the only time when the display does not reflect the stretched data.

Animation

Image animation allows you to quickly cycle through a display of multiple bands in the same window, controlling the speed and direction of the sequence. It is a great way to identify differences between image bands, changes in a time series of images, or to look for anomalies in a multi-band dataset.


Exercise 5: Animation of El Niño Severity

1. From the ENVI main menu bar, select File → Open Image File. The Enter Data Filenames dialog appears.

2. Navigate to the envimil\elnino directory, select topex_1997series.dat, and click Open.

This file contains a time series of global ocean topography images collected during the joint U.S./French TOPEX/POSEIDON mission. With the onset of El Niño, a special type of oceanic wave called a Kelvin wave migrates along the equator from west to east. El Niño tends to reach its peak as the wave reflects off the coast of South America and returns westward. Thus, sea surface height anomalies are used to detect the timing of El Niño onset.

Each of the 28 images in this time series represents a 10-day composite of data (the length of time required for the satellite to image the whole Earth) with pixel sizes approximately 75 km in longitude and 150 km in latitude. The time series runs from January to December of 1997 and illustrates the buildup and migration of the Kelvin wave.

The last band in this image shows the state of the ocean height anomalies as the 1997-98 El Niño event was beginning to peak in December of 1997. Display a gray scale image of the last band (Nov 31 1997) in a new display.

3. Apply the Blue-Red color table to the image. You can maximize the Image window to see the whole dataset at full resolution.

4. From the Display group menu bar, select Tools → Animation. The Animation Input Parameters dialog appears with all of the bands in the image selected by default. This dialog allows you to choose a spatial subset and set the size of the window to use for the animation.

5. Select a resampling method by selecting the Pixel Aggregate or Nearest Neighbor radio button.

Nearest neighbor resampling uses the nearest pixel value in the animation window and pixel aggregate averages all the pixel values that contribute to the output animation pixel. For example, if your animation window size is half the size of the image, the nearest neighbor method uses every other pixel and every other line to create the image in the animation window. The aggregate method averages four pixels to create the output image.
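The two resampling methods can be illustrated with a 2:1 downsample of a tiny array (a sketch of the concept, not ENVI's code):

```python
import numpy as np

img = np.arange(16).reshape(4, 4).astype(float)

# Nearest neighbor at half size: keep every other pixel and every other line.
nearest = img[::2, ::2]

# Pixel aggregate at half size: average each 2 x 2 block of contributors.
aggregate = img.reshape(2, 2, 2, 2).mean(axis=(1, 3))

print(nearest)
print(aggregate)
```

Nearest neighbor is faster and preserves original data values; pixel aggregate smooths the result because each output pixel averages its contributors.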

6. Set the Window Size to 450 x 224.


7. There are no restrictions on the window size that you can choose for the animation. However, ENVI automatically resamples the image to fill the window, so be careful to set the animation window to a size that maintains the same aspect ratio (i.e., ratio of samples to lines) as the image being animated, or the image will be distorted. Also, remember that larger windows take more time to initialize than smaller windows.

8. Click OK to initialize the animation. ENVI resamples the bands to a 450 by 224 pixel resolution and then loads the images into memory.

9. Experiment with the control buttons for pausing, reversing, and setting the speed of the animation sequence.

Note: Currently, it is only possible to animate in color by using IDL. If you wish to do this and have an ENVI+IDL session running, bring up the IDL Workbench. At the command line, type device, decomposed=0. This sets the IDL display system to a mode where color tables can be applied manually to the entire ENVI session. At the command line, type xloadct. This brings up an interactive IDL widget where you can select a color table that can be applied to the animation tool. After selecting a color table, close and restart the animation tool in ENVI. When finished, type device, decomposed=1 at the command line. This returns the display system to normal. Close the color table widget.

10. Select File → Cancel from the Animation window menu bar.

11. You can save an animation in Moving Picture Experts Group (MPEG) format by selecting File → Save Animation as MPEG from the Animation window menu bar.

12. From the Available Bands List menu bar, select File → Close All Files. Answer Yes to the warning message.

Chapter Review

• Before an image can be displayed, it must be stretched into integer values ranging between 0 and 255.

• By manipulating the contrast stretch, you can change the way that an image looks without changing the actual image data.

• ENVI provides pre-defined contrast stretches, as well as a sophisticated tool for interactive stretching.

• Color can be added to a gray scale image by applying a color table, but the colors reflect the stretched data, not the real image data.

• Density slicing color-codes a gray scale image independent of the contrast stretch by defining colors to be assigned to specific data ranges.

• Image animation is a powerful, flexible tool for viewing multiple gray scale images.


Chapter 3: The ENVI NITF/NSIF Module

What You Will Learn In This Chapter.......................................................................................... 46

The NITF and NSIF Format ........................................................................................................ 46

Chapter Review........................................................................................................................... 54


What You Will Learn In This Chapter

In this chapter you will learn how to:

• Read NITF or NSIF format files into ENVI

• Use the NITF/NSIF Metadata Viewer

• Write NITF format files from ENVI

• Edit metadata tags in a NITF file

• Add tags to a NITF file

The NITF and NSIF Format

The National Imagery Transmission Format (NITF) is a U.S. Department of Defense (DoD) and Federal Intelligence Community suite of standards for the exchange, storage, and transmission of digital imagery and image-related products.

The ENVI NITF Module provides JITC (Joint Interoperability Test Command) certified NITF and NATO Secondary Image Format (NSIF) read/write support, ensuring conformance between imagery systems.

The ENVI NITF Module is an add-on to ENVI. While the NITF module is automatically installed as part of the default ENVI installation, it requires a separate license to activate its functionality. With the ENVI NITF Module, the analyst can read and display all compressed or uncompressed NITF version 2.0 and 2.1 and NSIF 1.0 files, as well as legacy NITF 1.1 files, and can write NITF version 2.0 and 2.1 and NSIF 1.0 files.

The ENVI NITF Module provides rich functionality for viewing and editing NITF attributes and tags.

The NITF format is used extensively in the United States. The multinational members of the North Atlantic Treaty Organization (NATO) use the NATO Secondary Image Format (NSIF). The NSIF 1.0 format is identical to the NITF 2.1 format, with the exception of the version name in the file header. In place of NITF02.10, this field contains NSIF01.00.

In this training manual, general information about the NITF format, and specific information about the NITF 2.1 format, also applies to the NSIF format.
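Because the only header difference is the version string, a program can distinguish the two formats by reading the first nine bytes of the file, where the published NITF/NSIF header layout places the format and version fields. A sketch, using a fabricated stand-in file rather than real imagery:

```python
import os
import tempfile

def read_version(path):
    """Return the nine-character version field that opens a NITF/NSIF
    file header ("NITF02.10", "NITF02.00", or "NSIF01.00")."""
    with open(path, "rb") as f:
        return f.read(9).decode("ascii")

# Illustration with a fake file containing only a stand-in header:
tmp = tempfile.NamedTemporaryFile(delete=False, suffix=".ntf")
tmp.write(b"NSIF01.00" + b"\x00" * 64)   # not a valid image; header bytes only
tmp.close()
version = read_version(tmp.name)
print(version)                            # NSIF01.00
os.unlink(tmp.name)
```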

The NITF Image Format

Any valid NITF dataset provides a main header identifying the file as a NITF dataset and describing the contents of the file. The header is usually followed by one or more data segments. Each data segment consists of a segment subheader identifying the type and properties of the data, followed by the data itself.

Data segments can be any of the following types:

• Image

• Symbol

• Label

• Graphic

• Text

• Extension

The image segment contains the data file imagery. For NITF 2.0 files, each image segment can contain 1, 3, or 4 bands. The image segment for NITF 2.1 files can contain between 1 and 999 bands.

For NITF 2.1, the symbol and label segments were combined into the graphic segment. These segments contain overlay information, referred to in ENVI as annotations. Annotations can be text, bitmaps, polygons, ellipses, and so on.

Text segments contain information about the file that can’t be contained in the header or other segments. For example, the file we’ll work with in the following exercise contains text segment information about the image copyright.

Extension segments are used for data that can’t be contained in the main header or data segment headers and are of two types: data and reserved. The reserved extensions are reserved for future expansion of the NITF format. Data extensions are used for storing Tagged Record Extensions (TREs). TREs can be associated with an entire NITF dataset or any segment type within a NITF dataset. Tags can be stored in the main header or a subheader, unless they exceed a certain size, in which case they can overflow to a data extension segment (DES). Each tag is identified by a unique six-character name.
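In the NITF standard, each tag is stored as its six-character name, a five-digit ASCII length, and then that many bytes of tag data. Walking a run of tags can be sketched as follows (the payload bytes below are fabricated for illustration and are not real tag layouts):

```python
def parse_tres(buf):
    """Split a run of TREs into (name, payload) pairs.

    Each TRE is a six-character tag name, a five-digit ASCII length,
    then that many bytes of tag data, per the NITF TRE layout.
    """
    tres, pos = [], 0
    while pos < len(buf):
        name = buf[pos:pos + 6].decode("ascii")
        length = int(buf[pos + 6:pos + 11])
        tres.append((name, buf[pos + 11:pos + 11 + length]))
        pos += 11 + length
    return tres

# Two tag names from this chapter, with made-up payloads:
blob = b"PIAIMC" + b"00003" + b"999" + b"RPC00B" + b"00002" + b"AB"
print(parse_tres(blob))
```

The fixed-width name and length fields are what let a reader skip over tags it does not understand, which is central to how REs and CEs coexist in one file.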

Tags come in two forms: Registered Extensions (REs) and Controlled Extensions (CEs). The NITF Standard Technical Board (NTB) maintains a registry of known CEs and REs; the main difference between them is that both the tag name and tag layout of CEs are controlled by the NTB, whereas only the tag names of REs are registered with the NTB to prevent different users from using the same tag name. Therefore, CEs can be interpreted based on the published information contained in the NTB repository, whereas REs require specific knowledge of the tag contents available to the creator of the tag that may not be available to the data consumer.

Exercise 1: Opening a NITF file in ENVI and Viewing NITF Attributes

ENVI can read several formats widely used in the military and intelligence communities, including NITF, ADRG, CADRG, and CIB. This exercise uses NITF data.

1. From the ENVI main menu bar, select File → Open External File → Military → NITF → NITF.

The Enter NITF Filenames dialog appears.

2. Navigate to the envimil/NITF directory, select FairbanksAK.NTF, then click Open.

Note: This image is courtesy of DigitalGlobe, www.DigitalGlobe.com. The data is to be used for class exercises only, and any other use of the data, including resale, distribution, or reproduction, or use for purposes other than noted above, without the prior written permission of DigitalGlobe is strictly prohibited.

When using the NITF module for file import, ENVI automatically extracts all of the information contained in the input file and adds its image bands into the Available Bands List. In most cases, the NITF image data being read will include not only the information required for reading the data itself, but also ancillary information mentioned in the discussion above. This illustrates that anytime you use one of ENVI's custom file readers, you will not need to know any details about the data file in order to import it into ENVI.

3. In the Available Bands List, right-click on FairbanksAK.NTF and select Load True-color.

This dataset is a high-resolution (2.8 m pixel size) multispectral image from DigitalGlobe's QuickBird satellite, acquired near Fairbanks, Alaska. Expand the Map Info icon in the Available Bands List to see that the file has been geometrically corrected to an *RPC* Geographic Lat/Lon projection. ENVI automatically reads the information from the NITF header embedded in the file and can easily import the data and map information.

4. In the Available Bands List, right-click on the filename and select View NITF Metadata.


5. The NITF Metadata Viewer appears (Figure 20). This dialog allows you to view all ancillary information contained in the NITF file, such as NITF header information, information related to image segments (e.g., classification level), and any other ancillary information, such as text segments or tags.

6. Click a + icon to expand the NITF file header, Image segment, or Text segment node in the tree view. Take a few moments to navigate the tree structure in the NITF Metadata Viewer.

Figure 20: The NITF Metadata Viewer

7. Try and answer the following questions about the file:

• What is the date of image acquisition?

• In what portion of the electromagnetic spectrum is each band acquired?

• What is the data type for the file? Is the data type listed in the NITF Metadata Viewer the same as that listed in the Available Bands List?

• Can you find the DigitalGlobe licensing information?

Exercise 2: Writing NITF Files

In addition to the NITF Metadata Viewer, ENVI also has advanced attribute creation and editing capabilities. In this exercise we will create a new NITF file, edit its attributes, and add some tags.


1. From the ENVI main menu bar, select File → Save File As → NITF. The Select NITF Output File dialog appears.

2. Select FairbanksAK.NTF, then click OK. An ENVI Warning dialog appears, informing you that the image will be converted from band sequential interleave (BSQ) to block interleave. Click OK. The NITF Output Parameters dialog appears.

Storing an image in block interleave is similar to the way that ENVI processes large datasets through image tiling. With block interleave, the image is broken into a number of spatial blocks, each stored in band sequential storage order.
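The blocking scheme can be sketched by splitting a BSQ cube into spatial tiles, each of which remains band sequential internally (a conceptual sketch, not the NITF writer's actual code):

```python
import numpy as np

def bsq_to_blocks(cube, block):
    """Break a (bands, lines, samples) BSQ cube into spatial blocks,
    each stored band-sequentially, as in block interleave.

    Assumes lines and samples divide evenly by `block` for simplicity.
    """
    bands, lines, samples = cube.shape
    blocks = []
    for r in range(0, lines, block):
        for c in range(0, samples, block):
            blocks.append(cube[:, r:r + block, c:c + block])
    return blocks

cube = np.arange(2 * 4 * 4).reshape(2, 4, 4)   # 2 bands, 4 x 4 pixels
blocks = bsq_to_blocks(cube, 2)
print(len(blocks), blocks[0].shape)             # 4 blocks, each (2, 2, 2)
```

Blocking lets a reader fetch just the tiles covering a region of interest instead of reading whole lines across the full image width.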

The NITF/NSIF Module can create new NITF datasets from existing raster data. New datasets can be created in NITF 2.0, NITF 2.1, and NSIF 1.0 format. ENVI can write NITF datasets that originally contained more than one image segment, and may contain one or more text segments. Currently, the NITF/NSIF Module cannot export annotation segments or DES segments to new NITF datasets. One final consideration is that images smaller than 1024 x 1024 pixels cannot be compressed using JPEG 2000 NPJE.

3. Keep the Version setting at NITF02.10, but change the Compression setting to JPEG 2000 NPJE (Visually Lossless). Click Edit NITF Metadata. This opens the NITF Metadata Editor, as seen in Figure 21 on page 50.

4. Click a + icon to expand the NITF file header. Select the third line in the NITF Metadata Editor, Originating Station ID: “ENVI ”. The Edit String Value dialog appears. If you enter new information into this dialog, then click OK, it appears in the NITF file header segment. Click Cancel in the Edit String Value dialog.

5. Select the second line in the NITF Metadata Editor, File Type: “NITF02.10”. Note that the editing dialog does not appear, indicating you cannot change the value in the field.


Figure 21: The NITF Metadata Editor

Some NITF attributes are editable, while others are not. For example, within Image segment #1, you’ll find that the Image Time and Date field can’t be edited. This is essential information that should never be disassociated from the file.

6. Select the + icon next to Image segment #1 to expand it. Select the first line under Image segment #1 that reads Image ID: “M100E87C00”. The Edit String Value dialog appears. Press the Enter key. The Edit String dialog closes, and the highlighted line in the NITF Metadata Editor advances to the line Image Date and Time: [Aug 1 2002 21:01:08]. You cannot edit this information.

7. Expand the + icon for File Security in the NITF Metadata Editor. Select the line Classification: <Unclassified> to open the Edit List Value dialog. Using the Enter key, scroll through the attribute fields under Image Security for Image segment #1 and make note of all the available information that can be associated with the image segment.

8. Close the Edit List Value dialog but keep the NITF Metadata Editor open for the next exercise.

Exercise 3: Viewing and Manipulating Tags

1. Within Image segment #1, take note of the lines reading Tag 1 through Tag 5.

Tagged Record Extensions (TREs) can be associated with a NITF dataset, or any data segment in the file. In some cases, tags are applied to the data in a file or segment to enhance the utility of the data. One or more tags that apply to the entire NITF dataset can be present in the file header, and each segment (image, annotation, or text) can also have one or more tags associated with it. Unless there is a large amount of data in a tag, the tag data is stored in the main header or segment subheader to which the tag applies.

Tag 1 is known as a Profile for Imagery Access tag. Profile for Imagery Access and Profile for Imagery Archive (PIA) tags are used to hold information required by the Standards Profile for Imagery Access (SPIA). A variety of government agencies require these tags in NITF image products. When a NITF file is saved to a new NITF file, the PIA tags associated with the file header and any image, symbol, label, or text segments are preserved in the new file.

There are ten unique PIA tags; newer PIA tags are labeled Profile for Imagery Access, and older PIA tags are labeled Profile for Imagery Archive. You can edit, delete, and save both sets of PIA tags in NITF files, but you can create only the Profile for Imagery Access tags in Table 2 on page 52 in ENVI.

2. Tag 1: PIAIMC is a PIA tag containing additional image information and can be edited. For example, select Cloud Cover: “999”, and enter a new value between 0 and 999 (this image contains no clouds, so you could enter 0).

3. Select the + icon next to Tag 2: RPC00B. This is an example of a controlled extension. Tag 2: RPC00B (Rapid Positioning Coordinate) is a tag that can be associated with an image segment in a NITF dataset, and provides coefficients that can be used to orthorectify, or georectify on the fly, the associated image segment.

Note: Classified imagery typically uses the RPC00A tag. The RPC00A RE definition and use is classified at the SECRET level. To use the RPC00A RE, a user must obtain the installation which includes an IDL .sav file and an RPC00A.xml file.

Table 2: PIA Tags that can be Created in ENVI. (The original three-column layout of PIA Tag, Header Location, and Description was lost in conversion; the recoverable entries are: PIAPRD, attached to the file header, describing a product derived from a source file; PIAIMC, attached to an image segment; PIATGB, one tag per target; one tag per identified person; one tag per identified event; and PIAEQA, one tag per identified piece of equipment. The target, person, event, and equipment tags can be attached to an image, symbol, label, or text segment.)

Note: A list of tags supported in the ENVI NITF Module can be found in the ENVI Help by typing TREs under the Index tab and selecting list. If desired, look up the other three TRE tags.

4. Add PIA tags to Image segment #1. Highlight Image segment #1 in the NITF Metadata Editor. Click Add PIA Tags at the bottom of the dialog. The Add PIA Tags dialog appears, which allows you to select the number of each type of tag you would like to add. Use the up and down arrows on each text field to add 1 PIATGB (PIA Target Tag) and 1 PIAEQA (PIA Equipment Tag). Click OK.

You should see Tag 7: PIATGB and Tag 8: PIAEQA appear under Image segment #1 in the NITF Metadata Editor.

5. Click the + icon next to Tag 7: PIATGB to expand it. Select Target UTM: “” and then scroll through the attribute fields by pressing the Enter key, noting the information that can be stored about a target. Also examine the attribute fields in the newly added PIAEQA tag.

6. In the NITF Metadata Editor, select Tag 8: PIAEQA. Click Delete PIA Tag. You can add and delete any of the PIA tags listed in Table 2 from a NITF file.

7. Keep the NITF Metadata Editor open for the next exercise.

Exercise 4: Viewing, Adding, Deleting, and Editing Text Segments

1. In the NITF Metadata Editor, click the + icon next to Text segment #1 to expand it.

This text segment contains the DigitalGlobe QuickBird end-use license. The ENVI NITF Module allows you to add, delete and edit text segments in a NITF file.

2. View the DigitalGlobe licensing information by clicking on the line that reads Text <click to view>. If you are working on a Windows machine, you'll notice that the text built into the file by DigitalGlobe isn't formatted properly: the line endings are UNIX-style. Click Cancel to close.

To illustrate how to add and edit text segments, we’ll import correctly formatted text into a new text segment, then delete the old one.

1. Click Add Text at the bottom of the NITF Metadata Editor. The Add Text Segment dialog appears.

2. Make sure that the Attach to Segment field is set to NITF file header.

3. Click Import ASCII.


4. Navigate to the envimil/NITF directory, select DG_Demo_License.txt, then click Open. The Add Text Segment dialog appears; click OK. In the NITF Metadata Editor, you will see a new text segment, Text segment #2.

5. Click the + icon next to Text segment #2 to expand it, then click on Text <click to view>. In the Text dialog, you will see the correctly formatted text. Click Cancel in the Text dialog. In Text segment #2, edit the Text ID and Text Title attribute fields so that they match Text segment #1.

6. Select Text segment #1 in the NITF Metadata Editor, then click Delete Text. The text segment that we added is renamed to Text segment #1. Click on Text <click to view> in Text segment #1 to confirm that the text segment was added properly.

7. Click OK in the NITF Metadata Editor. The NITF Output Parameters dialog appears.

8. Name the output file FairbanksAK_compressed.ntf. Double-check that the Version is set to NITF02.10, and that Compression is set to JPEG 2000 NPJE (Visually Lossless). Click OK.

9. Keep all display groups and images open for the next exercise.

Exercise 5: Examine the Results from Lossy JPEG Compression

1. Display FairbanksAK_compressed.ntf in a new display. It should be Display #2.

Note: If you had closed the NITF file, you can open it by selecting File > Open Image File. ENVI automatically recognizes the file as NITF format and opens it as if it were a native ENVI format file. This is a convenient feature of ENVI, enabling you to open a NITF file without having to select File > Open External File > Military > NITF from the ENVI main menu bar. ENVI automatically recognizes many of the most common external file formats such as GeoTIFF, HDF, and NITF.

2. Image linking is a handy way to compare the result of a process with the original image. In this case we’ll compare the original NITF image in Display #1 with the compressed version in Display #2.

Link the two display groups by selecting Tools > Link > Link Displays from the #1 Display group menu bar. The Link Displays dialog appears.

3. Ensure that Display #1 and Display #2 toggles are both set to Yes, then click OK.

4. Left-click in the #2 Image window. You should see a subtle flicker as you left-click because the two display groups contain slightly different data. The #2 Image window contains data that has been subjected to JPEG 2000 compression.

5. To more easily see the effect of image linking, apply a different stretch to Display #2. From the #2 Display group menu bar, select Enhance > [Scroll] Gaussian. Now when you left-click to flicker between displays, you will see much greater contrast.

6. You can resize the linked area shared between the two displays by middle-clicking and holding the button while dragging the mouse to set the preferred size.

7. For an alternative way of examining the result of lossy JPEG compression on the data, open the interactive stretching tools for both display groups (from the Display group menu bar, select Enhance > Interactive Stretching). You'll find that the range and distribution of data values for each of the displayed bands has changed slightly, which affects the shape and distribution of the output, or byte-scaled, histogram.
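Under the hood, a display stretch is a byte-scaling of the data histogram. The sketch below, in Python with NumPy and made-up data, shows a simple 2% linear stretch; it illustrates the general idea, not ENVI's exact algorithm.

```python
import numpy as np

def linear_percent_stretch(band: np.ndarray, percent: float = 2.0) -> np.ndarray:
    """Clip the band at the given low/high percentiles, then scale to 0-255."""
    lo = np.percentile(band, percent)
    hi = np.percentile(band, 100.0 - percent)
    clipped = np.clip(band.astype(np.float64), lo, hi)
    return np.uint8(np.round(255.0 * (clipped - lo) / (hi - lo)))

# Made-up 16-bit-like data standing in for one displayed band
rng = np.random.default_rng(2)
band = rng.normal(1000.0, 200.0, size=(64, 64))
stretched = linear_percent_stretch(band)
```

Comparing the input histogram with the histogram of `stretched` is the same before/after view the Interactive Stretching tool gives you.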

Next, we’ll examine how individual pixel values were changed as a result of the data compression.

1. From the #1 Display group menu bar, select Tools > Cursor Location/Value. The Cursor Location/Value tool appears.

As you move your cursor over a display group, you'll see the information in the Cursor Location/Value tool update. The first line shows which display group your cursor is currently positioned over, the image coordinates (in parentheses) for the pixel located at the cursor, and the screen byte-scaled data values for the Red (R), Green (G) and Blue (B) color channels. The second and third lines contain map information for the pixel located at the cursor. In this case, we can see the projection for the data file (RPC Geographic Lat/Lon) and the geographic coordinates for the pixel at the cursor. The last two lines show the actual data values from the image data as it is stored on the computer's hard drive for the two linked displays. The data values are slightly different between the two displays.

2. Close the Cursor Location/Value tool.

3. Examine the size of the two files as they reside on disk. If you need to see a complete list of all currently opened files and be able to quickly view and edit the header data, use ENVI's Available Files List.

4. From the Available Bands List, select File > Available Files List. The Available Files List presents much useful information about files currently opened in ENVI. You should see very little difference between the two files, beyond their respective file sizes. Using JPEG 2000 results in a compressed file that is less than 20% of the original file size!
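The savings is easy to quantify from the two file sizes. A small Python sketch (the byte counts below are illustrative placeholders, not the sizes of the exercise files):

```python
def compression_percent(orig_bytes: int, comp_bytes: int) -> float:
    """Compressed size as a percentage of the original size."""
    return 100.0 * comp_bytes / orig_bytes

# For real files you would read the sizes from disk with
# os.path.getsize() on each path. Illustrative numbers here:
# a 100 MB original compressed to 18.5 MB.
ratio = compression_percent(100_000_000, 18_500_000)  # under 20%
```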

Using the File options from the Available Files List menu bar allows you to remove memory items, close or delete individual files, and write memory items to disk files. These options are useful if you have many opened files or items saved to memory and you want to close items you are no longer using in the ENVI session.

5. From the Available Files List menu bar, select File > Close All Files to prepare for the next chapter's exercises. Click Yes to confirm that you want to close all items. Also, close all open dialogs.

Chapter Review

• The ENVI NITF Module provides functionality for viewing NITF attributes, saving NITF files, and editing NITF attributes.

• Using ENVI’s NITF Metadata Editor, you can add and delete selected PIA tags or text segments to NITF header and image segments.


• NITF supports JPEG 2000 compression, offering drastically reduced file sizes with lossless or lossy options.


Chapter 4: Electromagnetic Spectrum Image Display and Analysis

What You Will Learn In This Chapter
The Electromagnetic Spectrum
Understanding Emission of Electromagnetic Radiation: Blackbodies
Viewing Panchromatic QuickBird Data
Viewing Multispectral QuickBird Data
Multispectral Pixel Signatures
Multispectral Landsat
Band Ratios for Analysis
Chapter Review


What You Will Learn In This Chapter

In this chapter you will learn:

• How the electromagnetic spectrum is the basis for all multispectral remote sensing

• Concepts related to multispectral imagery and spectral discrimination of materials

• How to display band combinations to accentuate different surface features

The Electromagnetic Spectrum

When we think of light, we automatically think of the visible light that illuminates the world around us. However, visible light, or light that the human eye can detect, is not the only type of light. For example, most people have had the uncomfortable experience of spending too much time under a sunny sky without applying sunscreen. Ultraviolet radiation is a high-energy form of light that our eyes can't detect, and it can cause painful sunburn.

Physics tells us that light can be understood as both a particle and a wave, the so-called dual nature of light.

In remote sensing, we think of light as taking the form of a wave, with different types of light associated with waves of varying wavelengths. The wavelength of light is literally the distance between two consecutive crests of waves (Figure 22). Because light propagates as a wave at the speed of light, wavelength is related to frequency, which is the number of waves that pass a point in space within a given time period. These different forms of light occur as a continuum of wavelengths, or frequencies of light, from very small wavelengths (high frequency energy), to very long wavelengths (low frequency energy). This continuum of energy is known as the electromagnetic spectrum, and is the basis for all multispectral remote sensing.
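The relationship between the two quantities is c = λν, where c is the speed of light, λ the wavelength, and ν the frequency. A quick numeric check in Python:

```python
C = 2.998e8  # speed of light in a vacuum, m/s

def frequency_hz(wavelength_m: float) -> float:
    """Frequency of light from its wavelength via c = lambda * nu."""
    return C / wavelength_m

# Green visible light at 500 nm: roughly 6e14 Hz
nu_green = frequency_hz(500e-9)
# A 1 m radio wave: roughly 3e8 Hz, about a million times lower
nu_radio = frequency_hz(1.0)
```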

The electromagnetic energy with the smallest wavelength belongs to gamma radiation, which is emitted by the nuclei of atoms undergoing nuclear reaction. The frequency of this type of electromagnetic radiation is extremely high, with a very small wavelength (about the width of an atomic nucleus). At the other end of the spectrum are radio waves, which occupy a portion of the electromagnetic spectrum with wavelengths greater than one meter. Radio waves have a scale that is easily grasped; they are the size of common objects like cars and buildings. Between gamma and radio radiation exist many other portions of the electromagnetic spectrum, such as the x-ray, ultraviolet, visible, infrared and microwave regions. Light from the ultraviolet, through the visible and into the near-infrared has wavelengths varying in scale from the width of a microbe to the width of a human hair (Figure 23).

Wavelength

Figure 22: The Measurement of Wavelengths between Consecutive Crests of Waves


[Figure content: a wavelength scale running from high-frequency to low-frequency radiation: gamma waves (~10^-12 m, the size of atomic nuclei); ultraviolet radiation (~10^-8 m, the size of microbes); near-infrared radiation (~10^-6 m, the width of a human hair); and radio waves (~10^3 m, the size of buildings and larger).]

Figure 23: Wavelengths of Electromagnetic Radiation: From the Very Short to the Very Long

Each region of the electromagnetic spectrum has properties that suit it for specific uses. For example, high energy x-rays can penetrate soft tissue but are stopped by bone, resulting in the medical imaging application that changed modern medicine. Portions of the visible and near-infrared have sufficient energy, and are of the correct wavelength, to interact with the molecular structures of materials on the Earth’s surface and are reflected, absorbed or transmitted. This forms the basis for optical remote sensing. In microwave ovens, microwave radiation, with a wavelength of approximately 12 centimeters, interacts with water molecules in food causing them to vibrate violently, resulting in the food in the oven heating up. Microwaves are also particularly useful for a type of remote sensing called RADAR, where microwaves are emitted by an instrument, interact with an object, and bounce back to the instrument’s detector. The returned signal carries information about the object’s composition, its distance from the sensor and its size.

Understanding Emission of Electromagnetic Radiation: Blackbodies

All objects that have a temperature above absolute zero (-273 °C) emit and absorb radiation. The type of radiation emitted by an object depends on its temperature. A perfect emitter and absorber of radiation is termed a blackbody: an object that absorbs and re-emits all light incident upon it. Figure 24 depicts idealized blackbody curves for three objects, the Sun, a fire, and the Earth, and shows how the frequency of maximum radiated energy increases with temperature. For example, the temperature of the Sun (6000 K) is much greater than that of the Earth (300 K), so the Sun emits its maximum energy at a shorter wavelength (higher frequency) and with greater intensity. The Sun, therefore, emits its maximum radiated energy within the visible and near-infrared wavelengths, while the Earth emits its maximum energy in the thermal infrared.
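The shift of the emission peak with temperature is quantified by Wien's displacement law, λmax = b/T, where b ≈ 2.898 × 10^-3 m·K. Plugging in the two temperatures from the text:

```python
WIEN_B = 2.898e-3  # Wien's displacement constant, m*K

def peak_wavelength_nm(temperature_k: float) -> float:
    """Wavelength of maximum blackbody emission, in nanometers."""
    return WIEN_B / temperature_k * 1e9

sun_peak = peak_wavelength_nm(6000.0)   # ~483 nm: visible light
earth_peak = peak_wavelength_nm(300.0)  # ~9660 nm: thermal infrared
```

The two results land exactly where the text says: the Sun's peak in the visible, the Earth's in the thermal infrared.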


Figure 24: Idealized Blackbody Curves for Objects of Different Temperatures

Electromagnetic Radiation Interaction with the Atmosphere: Atmospheric Windows

Gamma rays are electromagnetic energy (high-energy photons) produced by nuclear reactions. Previously we mentioned that light can be understood as a particle or a wave. Gamma rays lend themselves to being understood as particles (or photons, quantized packets of light energy) because of their high energy. Radio waves, on the other hand, have such a minuscule energy that they are best understood as waves. Gamma rays possess such a high energy that they can penetrate deeply into materials and, when they collide with atoms, displace electrons. Gamma rays can cause cellular damage when they penetrate living tissue, resulting in mutation or cell death. Luckily, gamma rays never reach the surface of the Earth because they are entirely absorbed by the molecules in the atmosphere.

Figure 25 shows atmospheric penetration of electromagnetic radiation of different frequencies based on the degree of interaction with the atmosphere. Harmful high frequencies like gamma rays, x-rays and much ultraviolet radiation are blocked by the atmosphere. Visible wavelengths penetrate to the Earth's surface with little atmospheric interaction. This isn't to say that no interaction takes place; visible light in the blue range is scattered strongly by the atmosphere relative to green and red wavelengths. High-frequency infrared wavelengths penetrate to the Earth's surface, while longer-wavelength infrared and microwave frequencies do not. Radio wavelengths have little interaction with the atmosphere, and penetrate easily to the Earth's surface.


Figure 25: Atmospheric Penetration of Electromagnetic Radiation

Visible, infrared and certain microwave frequencies penetrate the Earth's atmosphere and are therefore the ideal wavelength ranges for conducting Earth remote sensing. Within these ranges, certain wavelengths of light interact with specific atmospheric gases to create wavelength ranges where little incoming radiation makes it to the surface. For example, Figure 26 shows atmospheric penetration within the region of the electromagnetic spectrum used for optical remote sensing that extends from the blue visible wavelengths around 400 nanometers (1 nanometer = 1 billionth of a meter), to the shortwave infrared around 2500 nanometers (nm). At around 1400 and 1900 nm in the shortwave infrared, there is very little penetration of electromagnetic energy through the atmosphere. Incoming solar radiation at these wavelengths interacts with water molecules in the atmosphere and is almost completely absorbed. You'll notice similar, although less extreme, features in Figure 26 that result from absorption of light by other atmospheric gases. For example, at 2000 nm there is a sharp absorption caused by light interaction with carbon dioxide molecules.

[Figure content: atmospheric transmission plotted against wavelength (nm) from 400 to 2500 nm, spanning the visible (VIS), near-infrared (NIR), and shortwave infrared (SWIR) regions.]

Figure 26: Atmospheric Penetration in the Optical Remote Sensing Wavelength Range


Those regions in the optical remote sensing range of the electromagnetic spectrum that don't have significant interaction with atmospheric gases are the regions that are commonly sampled with optical remote sensing instruments. These regions are known as atmospheric windows. Figure 26 shows the band positions within atmospheric windows of the six bands collected with the Landsat Thematic Mapper sensor, in operation since the launch of Landsat 4 in 1982.

In the previous chapter, we displayed gray scale imagery from two types of satellite platforms in ENVI, and learned how to use some of the basic image visualization tools available in ENVI. Now that we know some of the background behind multispectral remote sensing, we’ll open a multispectral data file, and see how displaying different band combinations as RGB color composites allows us to visualize and interpret different features on the Earth’s surface.

Viewing Panchromatic QuickBird Data

Exercise 1: Open and View Panchromatic Imagery

1. From the ENVI main menu bar, select File > Open Image File. The Enter Data Filenames dialog appears.

2. Navigate to the envimil\Quickbird directory, select boneyard_pan.dat, then click Open. The band appears in the Available Bands List. This is the panchromatic (one color, or gray scale) image that is collected coincident with four-band multispectral QuickBird imagery.

3. In the Available Bands List, click the plus sign next to the Map Info icon. This contains useful map projection information for the image, as well as the pixel size. Note the pixel size of 0.6 meters.

4. Load the panchromatic band into a new display. In Display #1, take a few moments to explore the image. This is a DigitalGlobe QuickBird scene over the Air Force's Aircraft Maintenance and Regeneration Center (AMARC) outside Tucson, AZ, otherwise known as the Boneyard. This is one location where national defense aircraft go to retire.

Next we’ll compare this image with the multispectral bands.

Viewing Multispectral QuickBird Data

Exercise 2: Multispectral Animation, Image Linking and Dynamic Overlay, and RGB Composites

1. From the ENVI main menu bar, select File > Open Image File. The Enter Data Filenames dialog appears.

2. Navigate to the envimil\Quickbird directory, select boneyard_mul.dat, then click Open. The four bands of the file appear in the Available Bands List.

The file boneyard_mul.dat contains a small spatial subset from the 2.4 meter spatial resolution multispectral scene over the Boneyard. The file contains four individual bands, where each band is a separate image collected at a different wavelength region in the electromagnetic spectrum.


Figure 27: The Available Bands List with the Single-band Panchromatic and Multispectral Files

The bands are located in the following parts of the spectrum (the numbers are the band passes for the Full Width at Half Maximum (FWHM) of each band):

Band 1 = 450 - 520 nanometers (blue)

Band 2 = 520 - 600 nanometers (green)

Band 3 = 630 - 690 nanometers (red)

Band 4 = 760 - 900 nanometers (near-infrared)

You can see band center information (the center wavelength of the band within the electromagnetic spectrum) displayed in the Available Bands List in parentheses after each band name (Figure 27).

3. In the Available Bands List, load Band 3 as a Gray Scale image into a new display.

For many applications, it is helpful to think about multispectral datasets as if they were a 3D cube, with all of the bands stacking up on top of one another (as in Figure 28).


Figure 28: Multi-band Files are Often Visualized Geometrically as a Cube

Visualizing image data in this manner makes it easy to see that multispectral images can provide information in two distinct domains, spatial and spectral. The spatial domain of the data represents an area within any one band (in sample/line space), while the spectral domain of the data represents the response of any one pixel in all of the bands (in band space). Many processing algorithms can be categorized as either spatial or spectral, depending on the domain from which the data are extracted for processing. For example, image thresholding (finding all pixels in one band within a range of data values) would be a spatial function, while image classification (assigning pixels to categories based on their spectral signatures) would be a spectral function.
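The two domains are easy to demonstrate with a small synthetic cube in Python with NumPy; the array here is random stand-in data, not a real multispectral file.

```python
import numpy as np

# Synthetic 4-band, 5x5-pixel cube ordered (bands, lines, samples)
rng = np.random.default_rng(0)
cube = rng.integers(0, 256, size=(4, 5, 5)).astype(np.int32)

# Spatial domain: operate within one band across sample/line space,
# e.g. threshold band 0 to find its bright pixels.
bright_mask = cube[0] > 128          # shape (5, 5)

# Spectral domain: operate on one pixel across all bands,
# e.g. extract the spectrum (Z profile) of pixel (line=2, sample=3).
spectrum = cube[:, 2, 3]             # shape (4,)
```

The same slicing distinction underlies ENVI's tools: a stretch works on `cube[b]`, while the Z Profile tool plots `cube[:, line, sample]`.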

An effective tool for visualizing multispectral imagery that illustrates how materials on the Earth’s surface have different radiometric properties is the animation tool.

4. From the #2 Display group menu bar, select Tools > Animation. The Animation Input Parameters dialog appears. Click Spatial Subset. The Spatial Subset dialog appears.

5. Enter the image subset dimensions as Samples: 329 to 728 (NS: 400) and Lines: 267 to 666 (NL: 400); see Figure 29. Click OK. In the Animation Input Parameters dialog, use a Window Size of 400 x 400 pixels with Resampling of Nearest Neighbor. Click OK. The Animation window appears.


Figure 29: The Spatial Subset Dialog

6. In the Animation window, click the pause button, and then use the left and right arrow buttons to scroll through the bands. Note how some of the aircraft are visible in Band 1 (blue), while other aircraft are more visible (relative to the background) in Band 4 (near-infrared). This illustrates the power of spectral imaging: the ability to discriminate objects on the ground spectrally. Experiment with some of the other tools available in the animation tool, such as Speed and Cycle. Also note that you can save an animation as MPEG from the File menu. Close the Animation window.

Next, you’ll load two RGB color composite images by combining three different bands into single color images, where the color of each pixel is dependent on the relative brightness of the corresponding pixel in each of the three bands. Using bands from parts of the electromagnetic spectrum to which our own eyes aren’t sensitive, we are often able to discriminate features in the image that would otherwise be difficult to see.

7. From the Available Bands List, right-click on the boneyard_mul.dat file name and select Load True Color to <current>. This should load bands 3,2,1 as an RGB into Display #2.

A true-color composite is a representation of the surface as it might appear to a human observer.

8. Load a near-infrared color composite Band (4,3,2) RGB of boneyard_mul.dat into a new display.


This particular combination of bands for QuickBird imagery is also called a color-infrared composite because it displays the near-infrared channel (Band 4) as red, tending to produce an image where healthy, green vegetation is shown in red.

Often, it is advantageous to link the images to visually compare two display groups that contain different views of the same dataset, as you have with the two RGB combinations loaded into Displays #2 and #3.

9. From the #2 Display group menu bar, select Tools > Link > Link Displays. The Link Displays dialog appears.

Note that you can specify which displays to link, a linked pixel coordinate, and the display you want to act as the base for window size and position. You can also specify dynamic overlay on/off, and a dynamic overlay transparency.

10. Make sure that the Display #2 and Display #3 toggles are both set to Yes, and that the Display #1 toggle is set to No. Click OK.

Note that image linking is pixel-based. This means that images need to be co-registered so that their pixels line up precisely. This is why we set the Display #1 toggle to No; Display #1 contains the panchromatic band which is of the same area, but at a different spatial resolution (pixel size).

11. Left-click in the Image window of either display (#2 or #3). You should see the flicker effect of one image overlaid on the other in the Image window.

12. Middle-click to change the size of the dynamic overlay. In either Image window, middle-click and hold the button and drag to resize the overlay window size. When you release the mouse button and then left-click in either Image window, you will see the newly-sized dynamic overlay window. This behavior works in either the Image or Zoom windows.

Next we’ll use the Pixel Locator dialog to examine a specific area on the ground.

13. From the #2 Display group menu bar, select Tools > Pixel Locator. The Pixel Locator appears.

This tool is useful for moving to a specific area in an image using either image pixel, geographic, or map based coordinates.

14. Set Sample = 763 and Line = 543, then click Apply.


Figure 30: The Zoom Display Containing A) True-color and B) Near-infrared Color Composite Band Combinations

Your two Zoom windows should appear as in Figure 30. The aircraft centered in the #2 Zoom window crosshairs is easier to discriminate in the near-infrared color composite (Display #3 Zoom window) because it stands out from the soil background.

15. Keep all display groups open for the next exercise.

Multispectral Pixel Signatures

Exercise 3: Using the Z Profile Tool

In the previous exercise, we saw that having an additional band in the near-infrared made it easier to distinguish aircraft from the soil background. Now we’ll see why.

All materials interact with light differently, depending on composition, structure, and the wavelength of light. Multispectral imaging leverages this basic fact by recording electromagnetic radiation at discrete wavelengths. It is easy to visually compare the differences in pixels by plotting the recorded value for each band. A plot of a pixel's value in all bands is called a Z Profile or spectrum (see Figure 31).



Figure 31: Creating a Spectral Signature: The Z Profile for a Pixel Created by Plotting the Value in Each Band

1. Right-click in Display #2 and select Dynamic Overlay Off.

2. From the #2 Display group menu bar, select Tools > Profiles > Z Profile (Spectrum). The #2 Spectral Profile window appears. Move your Zoom box around the Image window. As you move the Zoom box to new locations, watch the #2 Spectral Profile window update to the current pixel's spectral signature.

3. In the #2 Pixel Locator, re-center the zoom crosshairs on pixel 763, 543.

4. In the #2 Spectral Profile window, take note of the spectral signature for the paint covering the aircraft centered in the crosshairs of the Zoom window of Display #2. We want to compare this signature to the signature from a nearby soil pixel. In the next step, you’ll save the aircraft signature by moving it to a new ENVI Plot Window.

5. From the #2 Spectral Profile window menu bar, select Options > New Window: with Plots.... This creates a new ENVI Plot Window, containing the aircraft spectral signature. In the new plot window, select Edit > Data Parameters. You can edit many attributes associated with signatures in the Data Parameters dialog. For example, you could change the signature color, line thickness, line style, and so forth. Rename this signature to aircraft. Change the color by right-clicking on the color box and choosing Items 1:20 > Red. When done with the Data Parameters dialog, click Cancel.

6. In the #2 Pixel Locator window, enter Sample = 764, Line = 536, then click Apply. Your Zoom box in both Displays #2 and #3 should move to the new location centered on a soil pixel next to the aircraft. That pixel’s spectrum should now be plotted in the #2 Spectral Profile window.

7. In ENVI, you can move signatures between plot windows by dragging and dropping plot names. From the #2 Spectral Profile window menu bar, select Options > Plot Key to display the plot key name.


8. Left-click on the plot key name and drag it to the ENVI Plot Window containing the aircraft signature.

9. In the plot with the two spectra, select Edit > Data Parameters. Change the name of the second spectrum to Soil, and its color to Green. When you are done editing the signature attributes, cancel the Data Parameters dialog.

In Bands 1-3, the two pixels have similar signatures. However, in Band 4, the two curves diverge. The soil is much brighter in the near-infrared Band 4. It is this difference that allows us to spectrally distinguish the aircraft from the background soil.
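This red/near-infrared divergence is what normalized-difference indices such as NDVI exploit. A hedged Python sketch with made-up pixel values; a real analysis would use the actual Band 4 and Band 3 data.

```python
import numpy as np

def normalized_difference(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """(NIR - red) / (NIR + red): near +1 for vegetation, near 0 for bare soil."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red)

# Illustrative pixel values: vegetation is bright in NIR and dark in red,
# while soil is moderately bright in both bands.
nir = np.array([200.0, 120.0])   # [vegetation, soil]
red = np.array([40.0, 100.0])
nd = normalized_difference(nir, red)   # vegetation well above soil
```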

If you have an Internet connection, go to www.local.google.com, and enter 32.1579 -110.8245 as your search. Turn on the Satellite view and zoom all the way in to see a higher resolution version of the aircraft and surrounding area.

Figure 32: The Z Profile Tool Showing Two Spectral Signatures, One from an Aircraft Pixel, the Other from a Bare Soil Pixel

10. Familiarize yourself with the Spectral Z Profile tool. From the #2 Spectral Profile tool menu bar, select Options > Collect Spectra. If you left-click outside the zoom box in the Display #2 Image window and hold the mouse button while you move it around, you can see that you will over-plot many spectral signatures. Select Options > Collect Spectra to deselect the option. Move the zoom window around the image and explore pixel spectral signatures from various materials. Using Display #3 as a guide, locate some vegetated pixels and examine their signatures (vegetation should appear as red in the near-infrared color composite).

11. Close all plot windows and display groups by selecting Window > Close All Plot Windows and Window > Close All Display Windows from the main ENVI menu.


Multispectral Landsat

Exercise 4: Viewing Different Landsat Band Combinations—Less Spatial, More Spectral Resolution

1. From the ENVI main menu bar, select File → Open Image File. The Enter Data Filenames dialog appears.

2. Navigate to the envimil\Landsat directory, select the file Boneyard_ETM_pansharp.dat, then click Open.

This is a six-band multispectral Landsat ETM dataset, also acquired over the Boneyard near Tucson, AZ. It has been pan sharpened using the Landsat panchromatic (gray scale) band so that the image has a nominal pixel size of 15 meters. (We’ll cover pan sharpening in a later module.)

3. Load a true-color composite Band (3,2,1) RGB to a new display.

You’ll immediately notice the loss of detail when compared to the 2.4 meter pixel resolution of the QuickBird imagery used in the previous exercise.

4. View spectral profiles from the Landsat imagery by selecting Tools → Profiles → Z Profile (Spectrum) from the Display group menu bar. Compare the wavelengths of the six Landsat channels with the four channels of QuickBird.

5. There are several commonly used band combinations for Landsat (see Table 3). Load a new display with a near-infrared color composite Band (4,3,2) RGB. Then, load a third new display with a shortwave-infrared color composite Band (7,4,3) RGB.

Table 3: Common Landsat Band Combinations

6. Link all three display groups (select Tools → Link → Link Displays from any Display group menu bar). Click in one image to see the overlay from another display. When you have three or more windows linked, you can cycle through which display is overlain by holding the left mouse button down while clicking with the middle mouse button. Explore the images, noting differences between the three band combinations.

Notice how different features on the ground are highlighted using different band combinations. Contrast between soil and vegetation is improved by displaying a near-infrared or shortwave-infrared color composite.

7. Keep all files and displays open for the next exercise.


Band Ratios for Analysis

Exercise 5: Using Band Ratios for Analysis

1. If you have closed or covered your spectral plot, from one of the display windows select Tools → Profiles → Z Profile (Spectrum).

2. From the Display group menu bar of any of the display groups, select Tools → Pixel Locator. The Pixel Locator appears. Enter Sample = 6370, Line = 2265, then click Apply. In the Spectral Profile tool, note that the spectrum came from a pixel containing vegetation (Figure 33).

Figure 33: Spectrum from Vegetation Pixel of the Boneyard Landsat 7 ETM+ Image

In the figure above, note the slight peak in reflectance in the green band (Band 2), the absorption in the red band (Band 3), and the drastic increase in reflectance between the red band and the near-infrared (Band 4). These are typical features found in reflectance spectra of vegetation.

Vegetation has a unique signature recognizable in raw, radiance, or reflectance data, caused by adaptations that maximize photosynthesis. Red light, used for photosynthesis, is strongly absorbed, while near-infrared light is efficiently reflected to prevent heat build-up in photosynthetic tissues (leaves).


Figure 34: Application of the Simple Ratio with Landsat TM for Mapping Vegetation

In the figure above:

A) The near-infrared color composite (RGB = Bands 4, 3, 2).

B) The reflectance of a pixel with a high concentration of vegetation. Vegetation shows distinct absorption of electromagnetic radiation in Band 3 due to chlorophyll, while Band 4 is brighter due to scattering of light at cell wall / air space interfaces within leaves.

C) The output of the Band 4 to Band 3 ratio (Output = Band 4 / Band 3). Brighter areas contain higher concentrations of healthy vegetation.

Because vegetation has a steep slope between the red and near-infrared portions of the spectrum, we can use that relationship to highlight vegetation in an image by calculating a band ratio. Dividing the near-infrared band by the red (see Figure 34) will result in relatively high values in pixels containing healthy vegetation.
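The arithmetic behind this ratio is easy to sketch outside ENVI. The following NumPy snippet is an illustration only, not ENVI's implementation; the epsilon guard against division by zero is our own addition:

```python
import numpy as np

def band_ratio(nir, red, eps=1e-6):
    """Simple ratio (NIR / red). Higher values indicate healthier vegetation.
    eps guards against division by zero in very dark red-band pixels."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return nir / np.maximum(red, eps)

# Toy 2x2 scene: left column is vegetation (bright NIR, dark red),
# right column is bare soil (similar brightness in both bands).
nir = np.array([[200.0, 120.0], [210.0, 110.0]])
red = np.array([[40.0, 100.0], [50.0, 95.0]])
ratio = band_ratio(nir, red)
print(ratio)  # vegetated pixels score roughly 4-5, soil pixels near 1
```

When the ratio image is displayed, these high values map to bright pixels, which is why vegetation stands out in the result.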

3. From the ENVI main menu bar, select Transform → Band Ratios. The Band Ratio Input Bands dialog appears.

4. Select Band 4 (0.8200). Band 4 (0.8200):Boneyard_ETM_pansharp.dat is inserted into the Numerator field. Select Band 3 (0.6500) for the Denominator, and then click Enter Pair to insert the selected bands into the Selected Ratio Pairs portion of the dialog. Click OK. The Band Ratios Parameters dialog appears.

5. Output the result to a file named Landsat_veg.dat.

6. Load the Ratio band of Landsat_veg.dat into a new display, and link it to the other displays. Explore how vegetated pixels have been highlighted by the band ratio.

Note: You can use ENVI’s Color Tables or the Density Slice tool to give color to the band ratio.

Band ratios are useful when the material of interest in a multispectral image has a unique spectral signature, where values are high in one band, and low in another. There are many common band ratios that have applications in specific fields. For example, exploration geologists often use band ratios to look for areas with high mineral concentrations.

7. Close all display groups and open files. From the Available Bands List, select File → Close All Files.


Extra Work

If you have extra time, test yourself by doing the following exercise.

Extra Exercise: QuickBird Band Ratios

1. Use the Band Ratio tool on the QuickBird scene boneyard_mul.dat.

2. Display and link the band ratio result to a color display with boneyard_mul.dat. Look at various planes in the band ratio result.

3. Look at the Z-Profiles of the planes from the original scene (boneyard_mul.dat) to get an idea of how they relate to the Band Ratio result.

Chapter Review

• The electromagnetic spectrum is a continuum of wavelengths of energy that stretches from high-frequency gamma radiation to low-frequency radio wavelengths.

• The optical remote sensing wavelength range, from 0.4 to 2.5 µm, is the region of maximum solar energy output.

• Remote sensing data collected at different wavelengths of the electromagnetic spectrum can be viewed as single-band gray scale imagery, or combined into a color image by assigning different bands to each of the three RGB color channels.

• Band ratios can be used to accentuate materials with unique spectral characteristics in a multispectral image.


Chapter 5: Introduction to ENVI Zoom

What You Will Learn In This Chapter.......................................................................................... 76

Introduction to ENVI Zoom.......................................................................................................... 76

The ENVI Feature Extraction Workflow ..................................................................................... 84

Chapter Review.......................................................................................................................... 97


What You Will Learn In This Chapter

In this chapter you will learn how to:

• Use ENVI Zoom to display a multispectral image

• Enhance, zoom, pan, and rotate the image

• Create a Portal and compare it to the original scene using blend, flicker, and swipe tools

• Use Chip to File to take a screen capture of the displayed image

• Edit NITF Metadata

• Use Feature Extraction to perform a rule-based classification

Introduction to ENVI Zoom

ENVI Zoom is an easy to use, powerful imagery viewer used to display and manipulate remote sensing images. The interface provides quick access to common display tools such as contrast, brightness, sharpening, and transparency. You can work with multiple layers of data at one time and in one window, use a Data Manager and Layer Manager to keep track of multiple datasets, and “punch through” layers to view and work with another layer or layers in the same window. In addition, ENVI Zoom will re-project and resample images on-the-fly.

While anyone can take advantage of the display and enhancement tools, ENVI Zoom is primarily designed for defense imagery analysts and other military personnel.

Exercise 1: Starting ENVI Zoom and Setting Preferences

By default when you open a file, ENVI Zoom attempts to automatically display a true-color or gray scale image based on your file type. For this tutorial, you will change this preference and display the Data Manager.

1. From the main ENVI menu, select Launch ENVI Zoom. When ENVI Zoom appears, from its menu bar, select File → Preferences. The ENVI Zoom Preferences dialog appears.

2. On the left side of the dialog, select Data Manager.

3. On the right side of the dialog, click on the Auto Display Method for Multispectral Files field and select CIR (color-infrared). This will cause image files to be displayed as color-infrared by default.

4. Click on the Launch Data Manager After File/Open field, and select Always. This will change the preference and allow the Data Manager to be viewed every time you open a file.

5. Ensure the following settings are selected:

Auto Display Files on Open = True

Clear Display When Loading New Image = False

Close Data Manager After Loading New Data = False.

6. Click OK in the ENVI Zoom Preferences dialog to save these preferences.

Exercise 2: Working with ENVI Zoom

1. Click Open on the toolbar. The Open dialog appears.

2. Navigate to envimil\Quickbird and open SKorea_sub. Because of the preferences you set in the previous section, the image is automatically displayed as color-infrared and the Data Manager appears.

Working with the Data Manager

The Data Manager lists the files that you have open and makes them accessible to load into your display. When you open a file in ENVI Zoom, a new item is added to the top of the Data Manager tree. You can open multiple files in one ENVI Zoom session, and you can choose which of those files to display and how to display them using the Data Manager.

3. When you click on band names in the Data Manager, color gun assignments automatically cycle through red, green, then blue (in that order). Click the band name you want to assign to red. A red box appears next to the band name. Experiment with selecting different band combinations.

4. Repeat for the green and blue bands. If one band is assigned multiple colors, a split box appears next to the band name, showing the colors. You must click Load Data each time to see the new band combination.

5. You originally had a color-infrared image loaded into the Image window. In the Data Manager, right-click on the filename (SKorea_sub) and select Load True Color. ENVI Zoom determines the proper bands to load a true-color image into the Image window.

6. Click the Tip: Working with the Data Manager link at the bottom of the Data Manager. You will find quick access to helpful tips throughout ENVI Zoom. These tips provide links to the ENVI Zoom Help, which is also accessible via the Help toolbar button or Help menu.

7. Close the ENVI Zoom Help (use the X at the top right of the dialog title bar).

8. Explore the toolbar buttons on the Data Manager. From the Data Manager toolbar, you can open new files, expand and collapse files, close files, pin the Data Manager to keep it on the screen or unpin it to have it automatically close when you load a file into the display, and open files in ENVI or ArcMap.

9. Close the Data Manager (use the X on the top right of the dialog title bar).

Working with Layers

You can load multiple layers into ENVI Zoom at one time and manage those layers using the Layer Manager. In the last exercise, you created separate true-color and color-infrared layers for the same file. Both are displayed in the Layer Manager.

You can control the order of layers in the Image and Overview windows by dragging and dropping layers in the Layer Manager tree or by using menu options (which you will use in a later exercise).


10. Click and drag SKorea_sub in the Layer Manager above [1]SKorea_sub.

By default, all layers in the Layer Manager are displayed in the Image window. You can temporarily hide the display of a layer so that you can work with other layers in the Image window.

11. Right-click on SKorea_sub in the Layer Manager, and disable the Show Layer option to turn the display of that layer off in the Image window.

12. Right-click on SKorea_sub again and enable the Show Layer option to turn the display of that layer back on.

Exploring the ENVI Zoom Interface

The ENVI Zoom interface includes a menu bar, toolbars, category bars, and a Status bar. Much of the ENVI Zoom interface is customizable and provides options to make use of multiple monitors.

Figure: The ENVI Zoom interface, with callouts for the menu bar, toolbars, category bars, Information bar, and Process Manager.

13. Detach the Layer Manager category by clicking the Detach button to the right of the Layer Manager category bar (see the previous image).

14. Reattach the Layer Manager category by clicking the X on the top right of the Layer Manager dialog window.

15. Collapse the entire category panel by clicking on the collapse bar to the right of the categories (see the previous image). This allows you to view a larger Image window. Now, expand the categories by clicking again on the same bar (to the left of the Image window).

16. Collapse the Toolbox by clicking the arrow to the left of the Toolbox bar (see the previous image). Now, expand the Toolbox category by clicking again on the same arrow.


Using Display Tools

17. Click the Cursor Value button. As you move your cursor around the display, you will see pixel values that correspond to the position of the cursor in the imagery. Close the Cursor Value dialog.

18. Click the Zoom button, then left-click and drag your cursor to draw a rubber-band box around a vegetated area near the center of the image. This zooms to that area in the Image window.

19. Click the Pan button, then left-click and drag your cursor in the Image window to pan in the direction of the mouse. You can also use the middle mouse button to perform a pan.

20. Click the Fly button, then left-click and hold to continuously drift in the direction of the cursor. Moving further from the center (closer to any side) causes the drift to increase in speed.

21. Click the Rotate button, then left-click and drag the cursor clockwise or counter-clockwise to rotate the image. A text box to the right of the north arrow, initially reading Rotate To, interactively reports the current degree of rotation.

22. Click the Select button to exit the Rotate tool.

23. Click the angle drop-down list on the toolbar and select an angle. If the image is georeferenced, you can also click on the north arrow to rotate north to the top.

24. Experiment with the Brightness, Contrast, Sharpen, and Transparency sliders.

• Click on the slider bar to the right or left of the indicator, or click the slider then use the Page Up or Page Down keys, to move the slider up or down incrementally by ten percent.

• Click on the icons to the right or left of the slider bar, or click the slider then use the arrow keys on the keyboard, to move the slider up or down incrementally by one unit.

• Click the slider then use the Home key on the keyboard to move the slider to 100, and the End key to move the slider to 0.

25. Click the Reset button on each slider to return them to their default values.

26. Experiment with different stretch types by selecting options from the Stretch Types drop-down list.
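As background, a contrast stretch remaps raw data values into the 0-255 display range. The sketch below shows one common type, a 2% linear stretch, in NumPy; it is illustrative only, and ENVI Zoom's own stretch implementations may differ in detail:

```python
import numpy as np

def percent_stretch(band, low_pct=2.0, high_pct=98.0):
    """Clip a band to its 2nd-98th percentile range, then rescale to
    0-255 for display. A sketch of a 2% linear stretch, not ENVI's code."""
    lo, hi = np.percentile(band, [low_pct, high_pct])
    scaled = (band.astype(np.float64) - lo) / max(hi - lo, 1e-6)
    return np.clip(scaled * 255.0, 0, 255).astype(np.uint8)

band = np.linspace(0, 1000, 101)      # synthetic wide-range data values
display = percent_stretch(band)
print(display.min(), display.max())   # full 0-255 display range is used
```

Clipping the extreme tails before rescaling is what makes the bulk of the image use the full display range instead of being compressed by a few outlier pixels.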


Working with the Overview Window

The Overview window provides a view of the full extent of the layers loaded into the Image window. Each time you display a new layer, the Overview window is resized to encompass the extents of all layers in the Image window. The Overview window is not populated until pyramids are built for the image, so when you first load an image it may appear blank for several seconds while pyramids are being built.
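For background, an image pyramid is a stack of successively downsampled copies of an image, so coarse zoom levels can be drawn without reading every full-resolution pixel. A rough sketch of the idea using 2x2 block averaging (ENVI's actual pyramid files use their own format; this only illustrates the concept):

```python
import numpy as np

def build_pyramid(img, min_size=64):
    """Repeatedly average 2x2 pixel blocks to build coarser levels,
    stopping once the next level would drop below min_size on a side."""
    levels = [img]
    while min(img.shape) >= 2 * min_size:
        h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
        img = img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        levels.append(img)
    return levels

levels = build_pyramid(np.ones((512, 512)))
print([lv.shape for lv in levels])  # [(512, 512), (256, 256), (128, 128), (64, 64)]
```

Each level holds a quarter of the pixels of the one before it, which is why the whole pyramid adds only about a third more storage than the original image.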

The View box is a small, partially transparent window inside the Overview window that shows the extent of the imagery visible in the Image window.


27. Increase or decrease the size of the View box by clicking and dragging a corner of that box. This will zoom in or out on the image displayed in the Image window. As you click and drag a side, the View box adjusts shape to maintain the proper aspect ratio of the Image window.

28. Click inside of the View box and drag it to any location within the Overview window to dynamically update the Image window.

29. Click outside of the View box in the Overview window to re-center the View box on the spot where you clicked. Hold the mouse button down and drag the View box around. This is an effective way to move around in the image. Note the “snail trail” that the View box leaves as it is moved around.

This path history shows where you have already looked in the image. You can right-click in the Overview window and de-select Show Path History or select Clear Path History.

Working with a Portal

A Portal is a window inside the Image window that allows you to view multiple layers in the Layer Manager simultaneously. A Portal works as a separate layer (inside the Portals folder) in the Layer Manager. In this step, you will compare the true-color and color-infrared layers.

30. In the Layer Manager, right-click on [1]SKorea_sub (the true-color image) and select Order → Bring to Front. This will place the true-color image at the top of the layer list.

31. Click the Portal button on the toolbar. ENVI Zoom creates a new Portal from the second layer in the Layer Manager, which is the color-infrared image. ENVI Zoom adds the new Portal to the Portals folder in the Layer Manager.

32. Click and drag inside the Portal to move it around the Image window.

33. Click and drag on a corner or side of the portal to resize it.

34. Click the Pan button on the ENVI Zoom toolbar. Grab the true-color image (click outside of the Portal) and drag it around in the Image window. Notice how the Portal stays in one location while the image moves behind it.

35. Click the Select button to exit the Pan tool.

Pinning the Portal to the Image

You can attach (or pin) the Portal to the image so that the Portal moves with the data (versus moving and panning with the image as you did in the last exercise). This way, when you pan the image, the Portal stays fixed to its original position relative to the data.

36. Click once inside the Portal to select it, then place your cursor at the top inside of the Portal to display the Portal toolbar.

37. Click the Pin button. The button changes to Unpin.

38. Click the Pan button on the ENVI Zoom toolbar. Grab the true-color image (click outside of the Portal) and drag it around in the Image window. Notice how the Portal stays fixed to the image.

39. Click the Select button on the ENVI Zoom toolbar to exit the Pan tool.

40. Click once inside the Portal to select it, then place your cursor at the top inside of the Portal to display the Portal toolbar.

41. Click the Unpin button on the Portal toolbar.

Working with Blend, Flicker, and Swipe

ENVI Zoom provides tools that help you compare two different layers. You can use these tools for comparing entire images or you can use them inside of a Portal, as you will do in this tutorial. These tools are enabled only when you have two or more layers open in the Layer Manager, and when you display at least one layer in the Image window. For optimal viewing when using these tools, it is recommended that you not use the transparency enhancement slider.

Blending

Blending allows you to gradually transition from one image to another by increasing the transparency of one image.
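Conceptually, blending is linear alpha compositing of the two layers. A minimal NumPy sketch of the idea (not ENVI Zoom's rendering code):

```python
import numpy as np

def blend(top, bottom, alpha):
    """Linear alpha blend: alpha=1.0 shows only `top`, alpha=0.0 only
    `bottom`. Animating alpha from 1 to 0 gives the gradual transition."""
    return alpha * top.astype(np.float64) + (1.0 - alpha) * bottom.astype(np.float64)

cir = np.full((2, 2), 200.0)  # stand-in pixel values for the CIR layer
tc = np.full((2, 2), 100.0)   # stand-in values for the true-color layer
print(blend(cir, tc, 0.25))   # 25% CIR + 75% true color -> all 125.0
```

Flicker is the same operation with alpha snapped between 0 and 1, and swipe applies alpha 1 on one side of a moving column boundary and 0 on the other.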

42. Right-click inside of the Portal and select Blend. Blending automatically begins between the true-color and color-infrared layers.

43. Experiment with the speed of the blend, using the speed buttons available on the Portal toolbar.

44. Click the Pause button on the Portal toolbar to stop the blend.

Flickering

Flickering allows you to toggle between two images at a desired speed.

45. Right-click inside of the Portal and select Flicker. Flickering automatically begins between the true-color and color-infrared layers.

46. Experiment with the speed of the flicker, using the speed buttons available on the Portal toolbar.

47. Click the Pause button on the Portal toolbar to stop the flicker.

48. If you paused the flicker action while the true-color image was displayed, your Portal appears transparent.

Swiping

Swiping allows you to spatially transition from one image to another using a vertical dividing line that moves between two images.

49. Right-click inside of the Portal and select Swipe. Swiping automatically begins between the true-color and color-infrared layers.

50. Experiment with the speed of the swipe, using the speed buttons available on the Portal toolbar.

51. Click the Pause button on the Portal toolbar to stop the swipe.

52. Click on the x at the top of the Portal to close it.

To exit blend, flicker, or swipe, you must close the Portal, unless you want the Portal to appear in the screen capture that you will create in the next step.

Exercise 3: Chipping and Saving

In this step, you will use Chip to File to take a screen capture of the contents of the Image window, and save the image. Any enhancements, zooming, rotating, or Portals that are displayed in the Image window are burned into the output image. ENVI Zoom creates an 8-bit, three-band image at screen resolution.

1. Click the Chip to File button on the ENVI Zoom toolbar. The Chip to File Parameters dialog appears.

2. Click the Output File drop-down list and view the options for output file format.

3. For output, you can either save the display to memory by clicking the File or Memory icon, or click the File Select button to browse to a directory to which you will write a file.

4. Click Cancel on the Chip to File Parameters dialog.

Exercise 4: Editing NITF Metadata

This section is for users who are familiar with the NITF format and have purchased the optional NITF Module license.

1. Click the Chip to File button on the ENVI Zoom toolbar.

2. In the Chip to File Parameters dialog, select Output File → NITF, then click the NITF icon. The NITF Metadata Editor dialog appears. Because the input image is not in NITF format, the output image will only contain default NITF headers.

3. In the tree view, click File Header. In the NITF Metadata Editor dialog, the field names that are black indicate that you can enter a value for those fields. Click in the Originator's Name field, and add your own text.


Adding PIA TREs

4. On the left side of the NITF Metadata Editor dialog, click Image Segment #1. ENVI Zoom creates one image segment in NITF format from the output image you create using Chip to File.

5. At the bottom of the NITF Metadata Editor dialog, click Add PIAs. The Add PIA Tags dialog appears.

Profile for Imagery Access and Profile for Imagery Archive (PIA) Tagged Record Extensions (TREs) hold information required by the Standards Profile for Imagery Access (SPIA). A variety of government agencies require these TREs in NITF image products. See ENVI Zoom Help for further details.

6. In the PIATGB (Target) field, enter 1. In this step, you are simulating adding a Profile for Imagery Access Target Descriptive (PIATGB) TRE to mark a target of interest in the data.

7. Click OK in the Add PIA Tags dialog. A new PIATGB TRE is added to Image Segment #1.

8. Click OK in the NITF Metadata Editor dialog.

Saving the File

9. In the Chip from Display Parameters dialog, click the File Select button. The Select Output Filename dialog appears.

10. Browse to your output location on your hard drive, enter zoomNITF as the file name, then click Open.

11. Click OK on the Chip to File Parameters dialog. For NITF formats, ENVI Zoom creates an 8-bit, three-band output file, displays it, adds it as a layer, and lists it as a file in the Data Manager. Because the original data set is not georeferenced, the new image is also not georeferenced and is displayed adjacent to the original image.

12. Open the Data Manager by clicking the Data Manager icon. Right-click on a data file name and select Close All Files. Then close the Data Manager.


The ENVI Feature Extraction Workflow

ENVI Feature Extraction is a module for extracting information from high-resolution panchromatic or multispectral imagery based on spatial, spectral, and texture characteristics. You can extract multiple features at a time such as vehicles, buildings, roads, bridges, rivers, lakes, and fields. ENVI Feature

Extraction is designed to work in an optimized, user-friendly, and reproducible fashion so you can spend less time understanding processing details and more time interpreting results.

ENVI Feature Extraction uses an object-based approach to classify imagery. Traditional remote sensing classification techniques are pixel-based, meaning that spectral information in each pixel is used to classify imagery. This technique works well with hyperspectral data, but it is not ideal for panchromatic or multispectral imagery. With high-resolution panchromatic or multispectral imagery, an object-based method offers more flexibility in the types of features to be extracted. An object is a region of interest with spatial, spectral (brightness and color), and/or texture characteristics that define the region.

The Feature Extraction workflow is the combined process of segmenting an image into regions of pixels, computing attributes for each region to create objects, and using the attributes to classify the objects with rule-based or supervised classification techniques. The workflow also allows you to go back to previous steps if you want to change your settings.


Exercise 5: Feature Extraction with Rule-Based Classification

Rule-based classification lets you define features by building rules based on object attributes. Rule-based classification is a powerful tool for feature extraction, often performing better than supervised classification for many feature types. Rule-building is primarily based on human knowledge and reasoning about specific feature types: for example, roads may be elongated, many buildings approximate a rectangular shape, vegetation has a high NDVI value, and trees are highly textured compared to grass.

Taking this concept a step further, you can define a rule using one or more conditions; for example, you could define the rule for “lake” as the following:

• Objects with an area greater than 500 pixels AND

• Objects with an elongation less than 0.5 AND

• Objects with a band ratio value less than 0.3

Once you have created a set of rules and they seem to work well for your region of interest, you can save the rule set for later use.
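The example "lake" rule above is just a conjunction of attribute tests, which is easy to sketch in code. In this sketch the attribute names (area, elongation, band_ratio) and the sample objects are illustrative, not ENVI's exact attribute keys:

```python
def is_lake(obj):
    """Combine the three example conditions with AND. All three must hold
    for an object to be classified as a lake."""
    return (obj["area"] > 500
            and obj["elongation"] < 0.5
            and obj["band_ratio"] < 0.3)

# Hypothetical objects produced by segmentation and attribute computation.
objects = [
    {"id": 1, "area": 1200, "elongation": 0.2, "band_ratio": 0.1},  # lake-like
    {"id": 2, "area": 300,  "elongation": 0.1, "band_ratio": 0.1},  # too small
    {"id": 3, "area": 2000, "elongation": 0.9, "band_ratio": 0.1},  # elongated (river?)
]
lakes = [o["id"] for o in objects if is_lake(o)]
print(lakes)  # only object 1 satisfies all three conditions
```

Tightening or loosening any one threshold changes which objects pass, which is why the workflow lets you preview and revisit rules before saving a rule set.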

Opening and Displaying the Image

1. From the ENVI Zoom menu bar, select File → Open.

Note: To load a data set on a Windows machine, you can drag and drop an image file from Windows Explorer into ENVI Zoom.

2. Navigate to envimil\NITF and open Qingdao_sub.ntf. In this QuickBird subset there are several buildings with blue roofs in a natural color image. Some of these buildings have roofs with relatively high reflectance in the near-infrared. These structures will be our target. They show up bright pink in a color infrared image.

Note: You may see that the image displayed is a true color image instead of color infrared (CIR) even though you specified in Preferences that a CIR image be automatically loaded. This is because NITF files have a method to set a default display band order using the NITF metadata. If you wish to see a CIR image, open the Data Manager, right-click on the filename and select Load CIR.

3. Double click on Feature Extraction in the Toolbox. The Select Fx Input Files dialog appears.

4. Select Qingdao_sub.ntf and click OK. You can create spectral and spatial subsets for input into Feature Extraction and you can also specify Ancillary Data and a Mask File (under Select Additional Files), but you will not do these steps in this exercise. The Feature Extraction dialog appears.

Segmenting the Image

5. Enable the Preview option to display a Preview Portal showing the current segmentation results. You can move the Preview Portal around the image or resize it to look at different areas.

Tip: If the segments are too light to visualize in the Preview Portal, you can click in the Image window to select the image layer, then increase the transparency of the image using the Transparency slider in the main toolbar.

6. You want to choose the highest Scale Level that delineates the structures as well as possible. Choosing a high Scale Level causes fewer segments to be defined, and choosing a low Scale Level causes more segments to be defined. If you choose too high a Scale Level, the boundaries between segments will not be properly delineated and you will lose features of interest. You should ensure that features of interest are not grouped into segments represented by other features. Specify a Scale Level value of 30.0, which seems to delineate the rooftop boundaries while preserving some detail in their shapes. The Preview Portal updates to show the change in segmentation. You can move the slider or type this value into the Find Objects dialog. If you click on the Scale Level bar on either side of the slider, you will move in increments of 10.


7. Click Next to segment the entire image. ENVI Zoom creates a Region Means image, adds it to the Layer Manager as the top layer, and displays it in the Image window. The new layer name is Qingdao_subRegionMeans. The Region Means image is a raster file that shows the results of the segmentation process. Each segment is assigned the mean band values of all the pixels that belong to that region. Feature Extraction proceeds to the Merge step.

8. The Merge step groups similar adjacent segments by re-assembling over-segmented or highly textured results. It is useful for merging small segments into larger ones. You should ideally choose the highest Merge Level that delineates the boundaries of features as well as possible. For this dataset, set the Merge Level to a value of 50.0, and click Next. Feature Extraction proceeds to the Refine step (Step 3 of 4 of the Find Objects task).
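The grouping of similar adjacent segments can be illustrated with a toy sketch. This is not ENVI's actual merge algorithm; the one-dimensional segment list, the threshold, and the simple averaging are illustrative assumptions only:

```python
# Toy illustration of merging adjacent segments with similar mean
# values; NOT ENVI's actual merge algorithm. Each entry is a segment's
# mean value, and adjacent segments whose means differ by less than a
# threshold are combined.
def merge_pass(means, threshold):
    merged = [means[0]]
    for m in means[1:]:
        if abs(m - merged[-1]) < threshold:
            # combine: a simple average stands in for an area-weighted mean
            merged[-1] = (merged[-1] + m) / 2.0
        else:
            merged.append(m)
    return merged

segments = [100.0, 102.0, 150.0, 151.0, 200.0]
print(merge_pass(segments, threshold=5.0))  # [101.0, 150.5, 200.0]
```

Raising the threshold (analogous to a higher Merge Level) causes more neighboring segments to be combined, which is why a high Merge Level is useful for re-assembling over-segmented results.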

9. The Refine step is an optional step that uses a technique called thresholding to further adjust the segmentation of objects. Thresholding works best with point objects that have a high contrast relative to their background (for example, bright aircraft against a dark tarmac). Accept the default selection of No Thresholding, and click Next. Feature Extraction proceeds to the Compute Attributes step (Step 4 of 4 of the Find Objects task).


10. For this exercise, you will compute all available attributes. Ensure that each attribute category is selected, clicking the Advanced tab to see the Color Space and Band Ratio categories. Click Next. These attributes will be available for the rule-based classification. If you choose not to compute selected attributes, you will save a little time but will be unable to use those attributes for classification.

Feature Extraction proceeds to the Extract Features task.

Note: If you later wish to change which bands are used for the Band Ratio attribute, click the Band Ratio icon.

Rule-Based Classification

11. Select Choose by Creating Rules, and click Next. Rule-based classification begins with one new feature (Feature_1) and one undefined rule.


12. Double-click Feature_1. The Properties dialog appears.

13. Change the Feature Name to warehouses, and click OK.

As mentioned earlier, rule-building is primarily based on human knowledge and reasoning about specific feature types. In this exercise, you are extracting specific warehouses from the imagery.

What characteristics do these structures have relative to other features? Among their characteristics, we should consider their spectral response as shown in the ENVI Plot Window below. The spectra in this window were collected by calling up the Z profile tool in ENVI.


• Both the target warehouses and vegetation have low response in channel 3 (654 nm) and high response in channel 4 (814 nm), giving their spectra a steep rise from red to near-infrared wavelengths. We can use the normalized difference vegetation index (NDVI) value to discriminate the target warehouses and vegetation from other features.

• To help us differentiate the warehouses from vegetation, we can consider their shape. The shape of the warehouses approximates a rectangle.

• To further differentiate the target warehouses from vegetation, we should consider their spectral response in channel 1 (480 nm). The warehouses of interest have relatively high response at that wavelength compared to vegetation.

• The area of warehouses is within a certain range, compared to houses or other types of buildings.

The typical workflow for building rules is to begin with one attribute, test its confidence in extracting your feature of interest, then use more conditions and attributes to filter out all other features from the scene so that you are left only with your feature of interest.

Normalized Band Ratio

One of the attributes that ENVI Zoom computed in the Compute Attributes step was bandratio (specifically a normalized difference). By default, ENVI Zoom uses the near-infrared and red bands for this attribute, so the bandratio attribute is a measure of NDVI.
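As a hedged illustration (this is not ENVI code, and `band_ratio` is a hypothetical helper), the normalized difference that the bandratio attribute measures is (NIR − Red) / (NIR + Red):

```python
# Illustrative only -- not ENVI code. Computes the normalized
# difference (NDVI) that the bandratio attribute measures when the
# near-infrared and red bands are used.
def band_ratio(nir, red):
    """Normalized difference: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red)

# Hypothetical segment mean values: vegetation (and the target
# warehouses) rise steeply from red (654 nm) to near-infrared (814 nm),
# so their ratio is high; most other surfaces stay near zero.
vegetation = band_ratio(nir=500.0, red=100.0)   # ~0.67
pavement   = band_ratio(nir=300.0, red=280.0)   # ~0.03
```

A high bandratio value therefore separates the warehouses and vegetation from most other features, which is exactly what the first rule exploits.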

Next, you will see if the bandratio attribute is good for highlighting the target warehouses.

14. Double-click the name of the undefined rule under the warehouses feature. The Attribute Selection dialog appears.



15. The Customized folder contains color space and bandratio attributes. Click the + symbol next to Customized to expand the list of attributes.

16. Highlight the bandratio attribute, then enable the Show Attribute Image option. After a few seconds, ENVI Zoom displays a grayscale image of bandratio attribute values. The attribute image helps you select the appropriate attributes to define a rule for a certain feature. If the objects belonging to the feature have a high contrast relative to the other objects, then the attribute is useful for this rule. You can adjust the image transparency to view the underlying image if needed, using the Transparency slider on the main toolbar.

From the attribute image, you can see that some of the warehouses are fairly bright compared to surrounding objects.


17. Deselect the Show Attribute Image option, then double-click the bandratio attribute under the Customized folder. The bandratio Attribute Setting dialog appears with a histogram that shows the frequency of occurrence of the bandratio attribute values for all of the objects in the image.

The Show Rule Confidence Image is enabled by default, and a Preview Portal appears. The Preview Portal displays a rule confidence image, which shows the relative confidence of each object belonging to a feature.

The rule confidence image is a solid-red color until you define the values for the attribute.


18. Each attribute has a unique histogram. Click and drag the vertical lines on the histogram to define the minimum and maximum values for the attribute. The target warehouses have a high bandratio value, so you won’t need to adjust the maximum value. Only adjust the minimum value (the leftmost vertical bar). As you let go of the line after dragging it, the Preview Portal shows the updated rule confidence levels. The higher the brightness of an object, the higher the confidence that the object belongs to the warehouses feature, according to the bandratio attribute.

You want to determine a range of attribute values that best delineates the target warehouses, which are blue in a natural color image and bright pink in a color infrared image. If you define too large a range of values in the histogram, other unwanted features are added. If you define too narrow a range of values, you may lose some of the targets.

Tip: You can observe the brightness values of the objects in the Cursor Value category of the ENVI Zoom interface. Because of the fuzzy logic applied underneath, objects have a brightness value between 0 and 255. If your rule set has only one rule, any object with a brightness value greater than 255 times the Confidence Threshold value (visible on the Advanced Settings tab of the Feature Extraction dialog) will be classified as the feature. The default Confidence Threshold value is 0.4, so if the brightness value of an object is greater than 102, the object will be classified as the feature by this rule.
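The arithmetic in the tip above can be sketched as follows (illustrative only; `classified` is a hypothetical helper, not an ENVI function):

```python
# Illustrative sketch of the single-rule decision described in the tip;
# not ENVI code. Confidence is displayed as a brightness in 0-255, and
# an object is classified when brightness > 255 * confidence_threshold.
def classified(brightness, confidence_threshold=0.4):
    return brightness > 255 * confidence_threshold

# 255 * 0.4 = 102, so brightness must exceed 102:
print(classified(150))  # True
print(classified(90))   # False
```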

19. Set the minimum value to 0.12 by typing this number into the text box on the left side of the Attribute Setting dialog, then press the Enter key.

This range of values effectively delineates the warehouses in the confidence image, but it also picks up some vegetation.


Note: The Fuzzy Tolerance, Membership Function Set Type, and Logic parameters are designed for users who have an advanced understanding of rule-based classification. See the Feature Extraction Module User’s Guide for details. You can leave the default values for these parameters throughout this exercise.

20. Click OK in the bandratio Attribute Setting dialog. An icon appears under the rule name in the Feature Extraction dialog to indicate that you have added an attribute to the rule. The icon is followed by the attribute definition.

21. You need to define some more attributes for the rule set. Now consider the next characteristic of warehouses: The shape of dark rooftops approximates a rectangle. You can use the rect_fit attribute to filter out the non-rectangular objects from the image.

Rectangular Shape

22. In the Feature Extraction dialog, click the Add Attribute to Rule button. The Attribute Selection dialog appears.

23. Click the + symbol next to the Spatial folder to expand the list of spatial attributes.

24. Double-click the rect_fit attribute. The rect_fit Attribute Setting dialog appears. This attribute is a shape measure that indicates how well the shape is described by a rectangle. With this attribute, you can typically leave alone the maximum value in the histogram and only adjust the minimum value.

25. Experiment with different minimum values by clicking-and-dragging the left-most vertical line on the histogram. Leave the maximum value as-is, and set the minimum value to 0.50.

26. Click OK in the rect_fit Attribute Setting dialog. The rect_fit attribute condition is added to the rule set in the Feature Extraction dialog.

Area

From the rule confidence map in the last step, you may have noticed some remaining, unwanted, small objects. If you are extracting warehouses, you know that the area of the rooftops is within a certain range compared to other types of buildings (such as houses). You can use the area attribute to further define your rule set.

27. In the Feature Extraction dialog, click the Add Attribute to Rule button. The Attribute Selection dialog appears.

28. Click the + symbol next to the Spatial folder to expand the list of spatial attributes.

29. Double-click the area attribute. The area Attribute Setting dialog appears.

30. Change the Fuzzy Tolerance value of this attribute to 0 percent.

31. Experiment with different minimum and maximum area values, and notice their results in the rule confidence image. A range of 55.0000 to 550.0000 works well in isolating the target rooftops.

32. Click OK in the area Attribute Setting dialog. The area attribute condition is added to the rule set in the Feature Extraction dialog.

Average Pixel Value

Now that you have filtered out non-rectangular shapes and small and large features, the final step is to filter out the vegetation that remains. Since you know that the target warehouses are brighter in channel 1 than vegetation, you can use the avgband_1 attribute to further define your rule set.

33. In the Feature Extraction dialog, click the Add Attribute to Rule button. The Attribute Selection dialog appears.

34. Click the + symbol next to the Spectral folder to expand the list of spectral attributes.

35. Double-click the avgband_1 attribute. The avgband_1 Attribute Setting dialog appears.

36. Experiment with different minimum and maximum values, and notice their results in the rule confidence image. Setting the minimum threshold to a value of 280.0 works well in eliminating vegetation.

37. Click OK in the avgband_1 Attribute Setting dialog. The avgband_1 attribute condition is added to the rule set in the Feature Extraction dialog.

When you are finished, the rule set in the Feature Extraction dialog lists all four attribute conditions under the warehouses feature.

38. Click the Preview check box to view classification results in a Preview Portal. Any undefined rules are ignored. You can move the Preview Portal around the image to look at classification results for different areas.

39. In the Layer Manager, click-and-drag Qingdao_sub.ntf (the original image) above the Region Means image. You may need to move the Feature Extraction dialog out of the way, but don’t close it.

40. Click inside the Preview Portal to select it, then use the Transparency slider in the ENVI Zoom toolbar to increase the transparency of the Preview Portal. By doing this, you can preview the classification results over the original image:


If you wish, you may re-adjust some of the rule thresholds by double clicking on a particular rule.

You may also go back through all the previous steps by pressing the Previous button. For now, the rule set that you just built extracts the target warehouses fairly well. If extraneous features still remain, you can clean these up with ENVI Zoom’s vector tools after you finish the classification and output vectors as a result.

Saving the Rule Set

Once you have defined a rule set that works well in extracting the target rooftops, you can save the rule set to an XML file. You can restore and use this rule set as a starting point for a different neighborhood, for example, so that you won’t have to rebuild the entire rule set.

41. Click the Save Rule Set As button in the Feature Extraction dialog. The File Save As dialog appears.

42. Browse to envimil\enviout and save the rule set as warehouses_ruleset.xml.

To restore the rule set later, click the Restore Rule Set button in the Feature Extraction dialog. In the file selection dialog that appears, select the rule set (.xml) file and click Open. If you have already defined other rules, a dialog appears asking whether you want to replace or expand the current rule set.

Exporting Classification Results to a Shapefile

43. When you are satisfied with the classification results, click Next in the Feature Extraction dialog. The classification is run and Feature Extraction proceeds to the Export Features step.

44. If you had extracted multiple features from the image, you could choose to export each feature to its own shapefile. But because you only extracted one feature in this exercise, you can just output the results to one shapefile. Click Export features to a single layer, and select Polygon from the dropdown list provided.

45. Save the shapefile in the envimil\enviout directory as warehouses.shp.

46. The Smooth Vectors option is best suited for generalizing curved features such as rivers and not for structured objects such as buildings. Uncheck this option.

47. For this exercise, you don’t need to export the classification results to an image file. So you can leave the Image Output → Export Class Results option unchecked.

48. Make sure Display Datasets After Export is checked, then click Next. ENVI Zoom creates a polygon shapefile of the warehouses you extracted and overlays the shapefile on the original image.

You can use ENVI Zoom’s vector-editing tools to remove extraneous objects or holes from the shapefile.

49. To better view your results, click on Qingdao_sub.ntf in the Layer Manager to select it and experiment with the Transparency slider in the main toolbar.


Viewing the Report and Statistics

After you export your classification results, you are presented with a summary of the processing options and settings you used throughout the Feature Extraction workflow. A Statistics tab is also available since you exported your results to vector shapefiles. This tab presents a table view of the features you defined, along with area statistics for each feature (in map units determined by the input image). You can sort the statistics table cells by right-clicking anywhere in the table and selecting Sort by selected column forward (ascending order) or Sort by selected column reverse (descending order).

50. Save all of the information under the Report and Statistics tabs to a text file by clicking Save Text Report in the Feature Extraction dialog. In the Save Fx Summary Report dialog, browse to envimil\enviout, type warehouses_Fx.txt as the output filename, and click Open. Then click OK.

Modifying Export Options (Optional)

After viewing the processing summary, you can click Previous to go back to the Export step and change the output options for classification results.

If you click Previous, any output that you created is removed from the Data Manager and Layer Manager. If you click Next from the Export step without making any changes, Feature Extraction will not re-create the output. You must make at least one change in the Export step for Feature Extraction to create new shapefiles and/or classification images.

51. Click Finish to exit the Feature Extraction workflow. Then click Yes to the question “Are you sure you want to exit the Feature Extraction workflow?”.


Exercise 6: Editing Vector Layers

In order to see and edit the appropriate vectors, it may be necessary to reorder the layers within the Layer Manager window. For example, if there are multiple vector layers present, it is helpful to drag the layer to be edited to the top of the list of vectors in the Layer Manager window. That way, it will be on top of the other vectors.

In ENVI Zoom you can create vector records in an existing vector layer as well as edit vector records and vertices. In order to edit a layer it must be set as the active layer by right-clicking on it in the Layer Manager window and selecting Set as Active Vector Layer. If only one vector layer is present, it is active by default.

The following are the vector edit modes that can be selected using the appropriate toolbar item on the main ENVI Zoom interface:

• Vector Create
• Vector Edit
• Vertex Edit
• Vector Join

To add another vector to the currently active vector layer, click the Vector Create button and place vertices by clicking in the display window. A right-click brings up a pop-up menu that allows you to Accept or Clear your changes. When a vector layer has been modified, the icon next to the layer name in the Layer Manager changes to indicate the layer has changed. You can save or discard those changes as needed. At any point while using the vector tools, you can:

• Save the changes to the original file.

• Save the changes to a new file.

• Right-click in the Image window or right-click on the layer name in the Layer Manager and select Revert to clear all vector record edits and return the layer to the state it was in after it was last saved.

If you wish to try editing your vectors, try the following:

1. Click on the Vector Edit button and select one or more vectors. To select more than one vector you can click on each one, or you can draw a box that includes all vectors you wish to select. Right click to bring up a pop-up menu to Delete, Remove Holes, Merge, or Group vectors. Clear Selections deselects the vectors you selected.

2. Click on the Vertex Edit button and select a vector to move vertices. You may wish to zoom in so that you can see more detail in the vectors. Right-click to bring up a pop-up menu to Insert Vertex, Delete Vertex, Snap to Nearest Vertex, or Mark Vertex. Experiment with these as you wish.

3. If you want to split a polygon, click on the Vertex Edit button, then right-click on a vertex and select Mark Vertex. Then right-click on another vertex and select Mark Vertex. This selects all vertices between the two marked ones. Then right-click and select Split at Marked Vertices.

4. When you are finished, right-click on the Layers folder and select Remove All Layers.


Chapter Review

• ENVI Zoom introduces a new image display paradigm for ENVI

• Get familiar with Zoom functionality

• Editing NITF Metadata

• Feature Extraction allows you to use spatial and textural characteristics, as well as spectral characteristics, to classify an image


Chapter 6: Change Detection: The December 2004 Tsunami

What You Will Learn in this Chapter ......................................................................................... 100

Exercise Overview .................................................................................................................... 100

Preprocessing ........................................................................................................................... 100

Supervised Classification .......................................................................................................... 110

Change Detection Analysis ....................................................................................................... 123

Synthesizing Results................................................................................................................. 130



What You Will Learn in this Chapter

In the following chapter you will learn:

• About the importance of data preprocessing to ensure results are accurate

• How to apply radiometric, geometric and atmospheric corrections

• About temporal data resolution

• How to apply ENVI change detection routines to classification images derived through multispectral classification

• How to apply ENVI change detection to raw image bands

• About applying change detection in a real-world scenario

• How to use ENVI’s rich presentation tools to arrive at a final map product

Exercise Overview

This exercise is designed to give you experience applying ENVI in a real-world analysis scenario. You’ll work with data acquired by the QuickBird imaging satellite over Aceh Province, Indonesia on two dates: the first on April 12, 2004, and the second, just days after the devastating tsunami of December 26, on January 2, 2005. The goal of the analysis is to determine what areas of the coastline were impacted by the tsunami, and is broken into four steps: preprocessing, multispectral analysis, change analysis, and presentation of results.

Preprocessing

Preprocessing is essential for successful analysis because the images are compared on a pixel-for-pixel basis; the images must be radiometrically, geometrically, and atmospherically similar. As a first step, you’ll learn about radiometric calibration (the conversion of image data from raw sensor response to known standard units). Next, because the collection geometry between the images is radically different, you’ll apply a geometric correction so that they can be directly compared on a pixel-for-pixel basis. Finally, because Aceh Province in Indonesia is tropical, acquired imagery will be significantly impacted by atmospheric water vapor. You’ll learn about and apply a simple multispectral atmospheric correction to the imagery.

Instrument Calibration

Satellite images frequently require a system calibration to be applied before the data can be quantitatively analyzed. Calibration is particularly important when comparisons are to be made between datasets. This is true whether comparing data from the same sensor taken at different times or locations, or comparing data from different sensors. System calibrations, which are based on pre-launch calculated gains and offsets, convert digital numbers (instrument response) to radiance or reflectance above the atmosphere. ENVI contains system calibrations for AVHRR, Landsat MSS, and Landsat TM images. It is also possible for users to write their own calibration functions for ENVI. Often image data is delivered by the data provider with an instrument calibration applied. That is the case with the imagery that you’ll use in this exercise.
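Such system calibrations are linear, per-band transforms of the form radiance = gain × DN + offset. A minimal sketch, with hypothetical gain and offset values (actual coefficients are sensor- and band-specific and come from the pre-launch calibration):

```python
# Generic linear calibration sketch: converts raw digital numbers (DN)
# to at-sensor radiance using pre-launch gain/offset values. The
# numbers below are hypothetical, not actual sensor coefficients.
def dn_to_radiance(dn, gain, offset):
    return gain * dn + offset

# Hypothetical band coefficients applied to one pixel's DN:
radiance = dn_to_radiance(dn=512, gain=0.0370, offset=-1.2)
```

Because the transform is linear per band, applying the same coefficients to two images from the same sensor puts both in the same physical units, which is what makes quantitative comparison between dates possible.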

Geometric Correction

Satellite imagery typically contains geometric inaccuracies due to internal instrument geometries, look angles, and topographic variations. If the goal of an analysis is to look at a single image, geometric correction may not be important. If, however, the goal is to compare imagery across time, it is essential that relative geometries between images be correct. In this analysis, our goal is to give an accurate measure of the coastline impacted by the tsunami. Therefore it is also important that the imagery also be corrected for geometric inaccuracies due to topography.


The two images we’re using in this analysis were collected with different viewing geometries using the QuickBird off-nadir pointing capabilities; the first was collected with a viewing angle of 8.4 degrees, while the second was collected with a view angle of 22.6 degrees. First, we’ll open and inspect the two images for differences caused by viewing geometry.

Orthorectification and Registration

Exercise 1: Orthorectification of QuickBird

Before you begin, you need to extract the compressed QuickBird imagery. In the envimil\tsunami directory, right-click on original_data.zip and select Extract All.

1. From the ENVI main menu bar, select File → Open External File → QuickBird → GeoTIFF. Note that you can also open GeoTIFF format files in ENVI as native flat binary files; however, header information such as wavelengths is not read when you open a file in that manner. The Enter TIFF/GeoTIFF Filename dialog appears.

2. Navigate to the envimil\tsunami directory, select the following two files, then click Open:

04APR12040007-M1BS-000000188762_01_P001.TIF
05JAN02041354-M1BS-000000188763_01_P001.TIF

3. From the Available Bands List, load a near-infrared color composite Band (4, 3, 2) RGB of the April 12, 2004 image. This is the before image.

4. Load a near-infrared color composite Band (4, 3, 2) RGB from the January 02, 2005 image into a new display. This is the after image. Take a few moments to explore the after image.

Question: It’s easy to see that the two images represent different areas on the ground. Are there similarities?

5. From the #2 Display group menu bar, select Tools → Link → Geographic Link. The Geographic Link dialog appears.

6. Toggle Display #1 and Display #2 to On. Explore similarities between the two images.

Notice the different geometries of the two images. They were collected with the QuickBird sensor pointing in different directions. We’ll need to register these two images together before we can perform a change detection analysis. With registration, the two images will have the same pixel size, orientation, and dimensions.

We’ll start by orthorectifying the before image. ENVI is able to orthorectify these images because they have RPC information. However, we have no GCP information to anchor the orthorectification to real locations on the Earth, nor a DEM to account for elevation, so the images do not have identical geometries. Because of this, and because having the two images match each other is more important to a change detection analysis than having either image accurately located on the ground, we will orthorectify the before image, then register the after image to match the before image.


1. From the ENVI main menu bar, select Map → Orthorectification → QuickBird → Orthorectify QuickBird. The Select File to Orthorectify dialog appears.

2. Select the before image (04APR12040007-M1BS-000000188762_01_P001.TIF).

3. Spatially subset the image to remove all the water pixels. These are pixels we’re not interested in, and they make the file size unmanageable. Click Spatial Subset. The Select Spatial Subset dialog appears.

4. Enter Samples: 3493 to 6842, and Lines: 1 to 7450. Click OK, then click OK again. The Orthorectification Parameters dialog appears.

5. Set Image Resampling to Nearest Neighbor and leave the Background data value as 0. Change the Input Height from DEM to Fixed, because we don’t have a DEM to go with this image. Note that you can specify the output projection, pixel size, and dimensions of the orthorectified image. Leave the default UTM projection defined.

Orthorectification using RPC information is computationally expensive; it would take a couple of minutes to run. Instead, we will open the image that has been previously orthorectified. Cancel the Orthorectification Parameters dialog.

6. Open the orthorectified version of 04APR12040007-M1BS-000000188762_01_P001.TIF. From the ENVI main menu bar, select File → Open Image File. In the envimil\tsunami directory, select before_ortho.dat, then click Open.

7. Display a near-infrared color composite Band (4, 3, 2) RGB in Display #2 so that you can investigate how orthorectification changed the image geometry. From the #2 Display group menu bar, select Tools → Link → Geographic Link. The Geographic Link dialog appears.

8. Toggle both displays On and click OK. Move the zoom window in one display to the coastline and the other zoom window will follow. Turn on the Zoom window crosshairs so you can see the slight difference in location for objects between the two scenes.

9. Close both display groups.

Exercise 2: Automatic Ground Control Point Selection and Image Registration

1. Load Orthorectified (Band 4) of before_ortho.dat as a Gray Scale into a new display. Load Band 4 of the after (Jan. 02, 2005) image into a new display.

Again, note the difference in image geometry. We want to be able to directly compare pixels from the two images; therefore, the before and after images need to be registered so that they have the same dimensions, pixel size, and orientation.

2. To register these two images together, we’ll use ENVI’s image-to-image registration tool with automatic tie point selection. From the ENVI main menu bar, select Map → Registration → Select GCPs: Image to Image. The Image to Image Registration dialog appears.

3. Select Display #1 as the Base Image (the orthorectified before image), and Display #2 as the Warp Image (the after image). Click OK. The Ground Control Points Selection dialog appears.

4. From the Ground Control Points Selection dialog menu bar, select Options → Automatically Generate Tie Points. The Automatic Tie Points Parameters dialog appears.

• Area-based — Depending on the input data, area-based matching is generally the more accurate method. Area-based image matching compares the gray scale values of patches of two or more images and tries to find conjugate image locations based on similarity in those gray scale value patterns. The results of area-based matching largely depend upon the quality of the approximate relationship between the base image and the warp image. This is determined through map, RPC, or pseudo map information, or by using three or more tie points. Because the QuickBird imagery has RPC information, the Area Based matching option is available.

• Feature-based — Feature-based image matching extracts distinct features from images then identifies those features that correspond to one another. This is done by comparing feature attributes and location.

5. In the Automatic Tie Points dialog note that the option is Area Based. Click OK.

6. After points have been identified, click Show List in the Ground Control Points Selection dialog. When the list is displayed, click and drag the lower-left corner to see all the columns.

7. Back in the Ground Control Points Selection dialog, notice the RMS Error parameter.

RMS error is calculated by examining the geometric difference between the two sets of points: those in the orthorectified base image, and those in the warp image. You can see how RMS error is calculated by examining the field names in the Image to Image GCP List dialog. Note that the x and y pixel location for each point is listed. The predicted x and y location is the location where ENVI calculates each point should reside based on the geometry of the base points. From the predicted location, an error in the x and y directions, and an overall error, is calculated for each point. The RMS Error listed in the Ground Control Points Selection dialog is an average of the RMS associated with each point and is in units of pixels. For example, if the RMS Error is 5, the positional error in the warped image would average 5 pixels across the scene. The goal of ground control point selection is to minimize RMS error as you select points.

In the following steps, you will attempt to interactively minimize the RMS error between your points. For best results, try to get the RMS error below 2.0.
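The RMS bookkeeping described above is simple enough to sketch. Following this section’s description (a per-point error measured from the predicted location, averaged into the overall figure), a minimal version might look like this; the function names are illustrative, and ENVI’s exact formula may differ:

```python
import math

def point_rms(x, y, x_pred, y_pred):
    """Per-point error: pixel distance between the actual GCP location
    and where the base-image geometry predicts it should be."""
    return math.hypot(x - x_pred, y - y_pred)

def overall_rms(points):
    """Overall figure as described above: the average of the per-point
    errors, in pixel units. `points` holds (x, y, x_pred, y_pred) tuples."""
    errors = [point_rms(*p) for p in points]
    return sum(errors) / len(errors)
```

Deleting or toggling off a high-error point removes its term from the average, which is why the overall RMS in the dialog responds immediately as you edit points.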

1. In the Image to Image GCP List dialog select Options → Order Points by Error. Click on the row identifier for the point with the most error. The Zoom box should update to this new position in both the base and warp images.

2. Some points may have been placed in the water, obviously contributing nothing to the registration process. With the row for a GCP in the water selected, click Delete. Consider replacing these with better points found elsewhere in the image.

3. Often, it is easy to see where points are misplaced. Reposition one or both Zoom boxes on an improved point location, then click Update in the Image to Image GCP List dialog. Make note of the RMS Error in the Ground Control Point Selection dialog.

4. Experiment with turning points On/Off by clicking the button at the bottom of the dialog. Watch what this does to your overall RMS error in the Ground Control Points Selection dialog.

5. Consider adding points in areas of the base image where points may be lacking.

If you have difficulty getting your RMS error below 2.0, you can use previously defined points, found in the envimil\tsunami directory. If you have a group of points that give you a good overall RMS error, skip the next step.

6. To use previously defined points, first select Options → Clear All Points to delete all points in the Image to Image GCP List dialog. Then, from the Ground Control Points Selection dialog, select File → Restore GCPs From ASCII. Navigate to the envimil\tsunami directory, select final_points.pts, then click Open. This is a group of 14 points with an overall RMS error of 1.843501.

The final step of the process is to register the after image to the before image. This is done by warping the after image so that its geometry matches that of the before image. The warp is defined by modeling the geometric relationship between the two sets of points (those in the base and warp images), which results in a new orientation for the warp image. The process is completed by determining which pixel in the warp image belongs in each location in the final output image. This is called resampling.
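The warp-then-resample idea can be sketched in NumPy. For brevity this uses a degree-1 (affine) polynomial rather than the degree-2 model you will select below, and nearest-neighbor resampling; the function names are illustrative, not ENVI’s API:

```python
import numpy as np

def fit_affine(dst_pts, src_pts):
    """Least-squares fit mapping output (base) coords to warp-image coords.

    ENVI fits a degree-2 polynomial in this exercise; a degree-1 (affine)
    model is shown here to keep the sketch short -- the principle is the same.
    """
    dst = np.asarray(dst_pts, float)
    src = np.asarray(src_pts, float)
    A = np.column_stack([dst, np.ones(len(dst))])   # rows of [x, y, 1]
    coef, *_ = np.linalg.lstsq(A, src, rcond=None)  # (3, 2) coefficients
    return coef

def warp_nearest(image, coef, out_shape):
    """Resample with nearest neighbor: for each output pixel, find the
    warp-image pixel that belongs there and copy its value."""
    rows, cols = np.indices(out_shape)
    pts = np.column_stack([cols.ravel(), rows.ravel(), np.ones(rows.size)])
    src = pts @ coef                                # (N, 2) x,y in warp image
    sx = np.clip(np.rint(src[:, 0]).astype(int), 0, image.shape[1] - 1)
    sy = np.clip(np.rint(src[:, 1]).astype(int), 0, image.shape[0] - 1)
    return image[sy, sx].reshape(out_shape)
```

Nearest neighbor simply copies the closest pixel’s value, so the original data values are preserved, which matters when the warped image will later be compared or classified pixel for pixel.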

7. In the Ground Control Points Selection dialog, select Options → Warp File. The Input Warp Image dialog appears.

8. Select the after image (05JAN02041354-M1BS-000000188763_01_P001.TIF), then click OK. The Registration Parameters dialog appears.

9. Set the following: Method = Polynomial, Degree = 2, and Resampling = Nearest Neighbor. Note the different warping and resampling methods that are available. Set the Output Image Extent so that it will match the before image. Set Upper Left X and Upper Left Y to 1, Output Samples to 3350, and Output Lines to 7450. Set the output name to after_warped.dat, then click OK.

10. When processing is complete, load Band 4 of after_warped.dat to a new display.

11. Link Display #1 and Display #3 (those containing before_ortho.dat and after_warped.dat, respectively). From either display group, right-click, then select Link Displays. In the Link Displays dialog, toggle Display #1 and Display #3 to Yes (Display #2 to No), then click OK.


12. Use dynamic overlay to assess how well the image warp matched geometries between the images. In general, you should find that areas of the image on the eastern side are less well registered because of steeper topography.

13. Close all open display groups and dialogs.

Atmospheric Correction

When looking at a surface through a medium (for example, the atmosphere), it is necessary to consider the effects of the passage of radiation through the medium. It is especially important to account for atmospheric effects when comparing imagery from the same area on the Earth’s surface collected at two different times.

Atmospheric effects can change radically across even short time scales. Indeed, one can see big differences in these images due to clouds. In the case of this analysis, the goal is to directly compare image pixels with the assumption that the only changes between the two times were due to the tsunami. Therefore, it is essential that we conduct an atmospheric correction.

ENVI provides some standard techniques for addressing these problems. There are several simple atmospheric correction methods built into ENVI. Three of these, Flat Field, Empirical Line, and Internal Average Reflectance, are located in the Basic Tools → Preprocessing → Calibration Utilities menu. These routines are typically used only on hyperspectral datasets. Dark Subtract, a fourth method primarily used on multispectral data, is located in the Basic Tools → Preprocessing → General Purpose Utilities menu.

Atmospheric modeling is another common approach for applying atmospheric and solar spectrum corrections to both multi- and hyperspectral datasets. Exelis Visual Information Solutions sells a separate atmospheric correction module for ENVI called FLAASH (Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes). FLAASH uses MODTRAN4+ radiative transfer code to correct images for atmospheric water vapor, oxygen, carbon dioxide, methane, and ozone absorptions, molecular and aerosol scattering, and the shape of the solar spectrum.

For this study, you will use the simple dark subtraction. This technique, typically performed on multispectral data, removes the additive effect of scattered light. In the realm of multispectral imaging, light scattering is the predominant atmospheric effect that can impact the measured radiance from any pixel. Sunlight is scattered preferentially in the blue and green portions of the electromagnetic spectrum (Figure 35). Therefore, the assumption of the dark subtraction technique is that a larger proportion of the measured signal at shorter wavelengths is due to atmospheric scattering. To execute dark subtraction, a dark spectrum is subtracted from every pixel spectrum. The dark spectrum can be defined as the mean spectrum of a dark region in the image, as the minimum value in each band in the image, or as a user-defined spectrum. A user-defined spectrum, derived by examination of the band histograms for the lowest significant values, is commonly used for the dark spectrum.

Figure 35: Relative Atmospheric Scattering with Wavelength (atmospheric scattering decreases across the 0.4 to 1.0 µm range)
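Dark subtraction itself is a one-line operation per band. Here is a minimal NumPy sketch, assuming a (bands, rows, cols) array and clamping results at zero; the function name is illustrative:

```python
import numpy as np

def dark_subtract(cube, dark_spectrum=None):
    """Subtract a per-band dark value from every pixel, clamping at zero.

    `cube` is (bands, rows, cols). If no dark spectrum is supplied, fall
    back to the per-band minimum -- one of the definitions listed above;
    the user-defined values in this exercise came from inspecting the
    band histograms instead.
    """
    cube = np.asarray(cube, dtype=np.int32)
    if dark_spectrum is None:
        dark_spectrum = cube.reshape(cube.shape[0], -1).min(axis=1)
    dark = np.asarray(dark_spectrum).reshape(-1, 1, 1)
    return np.clip(cube - dark, 0, None)
```

Because the correction is a single value per band, it removes only the additive scattering component; it cannot correct multiplicative effects such as absorption, which is why modeling approaches like FLAASH exist.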

Exercise 3: Atmospheric Correction with Dark Subtraction

1. In the Available Bands List, right-click on before_ortho.dat and select Load True Color. Then right-click in the display and select Quick Stats.

2. In the Statistics Results dialog, click Select Plot → Histograms: All Bands. You can zoom into the left-hand side of the histogram plot by drawing a box with the middle mouse button. Note that some pixels have a value of zero. These are the no-data pixels along the margin of the warped image. The rest of the histograms for the three bands are shifted to the right, away from zero. This shift to the right is mainly a result of scattered light. The Dark Subtraction tool will subtract the scattered light component.


3. Look at the table for Histogram Band 1 in the lower part of the dialog and note that there are several DN values with no pixels (Npts=0). Close the display and the Statistics dialog.

4. From the ENVI main menu bar, select Basic Tools → Preprocessing → General Purpose Utilities → Dark Subtract. In the Dark Subtract Input File dialog, select before_ortho.dat, then click OK. The Dark Subtraction Parameters dialog appears.

5. Select User Value, and enter the values below. These were obtained in the manner discussed above.

• Band 1: 174

• Band 2: 176

• Band 3: 48

• Band 4: 7

6. Output the result to a file named before_drk.dat. Click OK.

7. From the ENVI main menu bar, select Basic Tools → Preprocessing → General Purpose Utilities → Dark Subtract. The Dark Subtract Input File dialog appears.

8. Select after_warped.dat, then click OK. The Dark Subtraction Parameters dialog appears.

9. Select User Value, and enter the following values for each band:

• Band 1: 220

• Band 2: 250

• Band 3: 105

• Band 4: 70

10. Output the result to a file named after_drk.dat. Click OK.

11. Load a true-color composite Band (3, 2, 1) RGB of before_drk.dat into a new display. Load a true-color composite Band (3, 2, 1) RGB of after_drk.dat into a new display. Link the two displays (from either display group, right-click, then select Link Displays), and explore the two images, noting changes that have occurred related to the tsunami.

Image Subset

Exercise 4: Spatially Subset Both Images to Remove Clouds and “No-Data” Pixels

Now the before and after images have the same extent, but there is a lot of no-data border space around the after image. Let’s make a subset of both images that includes only the land area, and not so much ocean and clouds. This will make subsequent processing go faster.

1. From the ENVI main menu bar, select Basic Tools → Resize Data (Spatial/Spectral). The Resize Data Input File dialog appears.

2. Select the before_drk.dat image, and then click Spatial Subset. The Spatial Subset dialog appears.

3. Enter Samples: 1 to 3288 (NS: 3288) and Lines: 2329 to 7418 (NL: 5090). Click OK and OK. The Resize Data Parameters dialog appears.

4. Enter the output filename before_drk_sub.dat, then click OK. ENVI generates the subset and adds it to the Available Bands List.

5. Display the new subset image in Display #1 as a true-color composite Band (3,2,1) RGB.

6. From the ENVI main menu bar, select Basic Tools → Resize Data (Spatial/Spectral). The Resize Data Input File dialog appears.

7. Select the after_drk.dat image, and then click Spatial Subset. The Spatial Subset dialog appears.

8. In the Select Spatial Subset dialog, click Previous. The subset defined using the before_drk.dat image will also be applied to the after_drk.dat image. Click OK and OK.

9. In the resulting Resize Data Parameters dialog, enter the output filename as after_drk_sub.dat, then click OK. ENVI generates the subset and adds it to the Available Bands List.

10. Load the new subset image into Display #2 as a true-color composite Band (3,2,1) RGB.

11. Link the displays and assess your preprocessing results (from either display group, right-click, then select Link Displays). One thing to look for is accurate registration.


Question: Are the images well registered in the coastal areas of interest?

12. Another important consideration when analyzing images for change over time is that pixel values be comparable for similar materials. Start the Cursor Location/Value tool from the Tools menu in Display #1. Move your cursor to an area of Display #1 where both the before and after images contain green vegetation. Compare the data values in each of the three bands in the Cursor Location/Value tool.

A better way to compare data values for individual pixels is to use the Z-Profile tool.

13. From the #1 Display group menu bar, select Tools → Profiles → Z Profile (Spectrum). The #1 Spectral Profile plot appears.

14. From the #2 Display group menu bar, select Tools → Profiles → Z Profile (Spectrum). The #2 Spectral Profile plot appears.

15. Change the color for the signature in the #2 Spectral Profile dialog by choosing Edit → Data Parameters from the #2 Spectral Profile dialog menu bar. In the Data Parameters dialog, right-click on the Color box and select Items 1:20 → Red. Click Apply, then click Cancel.

16. From the #2 Display group menu bar, select Tools → Pixel Locator to open the Pixel Locator. In the #2 Pixel Locator dialog, enter Sample = 2271, Line = 4721, then click Apply.

You can compare pixel signatures in the same plot window by dragging and dropping signature names from one plot window to another.

17. In the #2 Spectral Profile dialog, right-click on the plot window and select Plot Key. This makes the name for the current pixel signature appear. Left-click and hold on the plot name (#2 x:2271), then drag and drop the name to the #1 Spectral Profile dialog.

Question: Are the two pixel signatures similar? Does this indicate that our imagery is properly calibrated? The vegetation in Display #2 looks darker than that in Display #1. Why might that be?

This concludes preprocessing. You now have the two images prepared so that pixel-for-pixel comparisons can be made in the assessment of tsunami impacts on this stretch of coastline in Aceh Province, Indonesia.

18. Close all displays and dialogs, but leave the files open in the Available Bands List.

Review

• Before analysis begins, it is essential to consider how your data may affect results.

• Preprocessing is essential in several domains: instrument calibration, geometric correction, and atmospheric correction.

• Instrument calibration is important so that pixel values are related to physical processes. Without instrument calibration, pixel values simply represent sensor response to changes in pixel brightness.

• Automatic ground control point selection is a time-saving tool for image-to-image registration.

• Atmospheric correction is important if your goal is to compare two images that were collected at different times.

Supervised vs. Unsupervised Classification

Multispectral image classification separates pixels into different categories based on the types of land cover they contain, resulting in a single-band land cover map. At a high level, there are two basic ways to conduct image classification:

• Use the statistical properties of pixels to automatically discriminate different classes of land cover types. This method requires no human/computer interaction, and is referred to as unsupervised classification.

• The user provides spectral signatures that represent the desired land cover classes. Once representative signatures (often referred to as training classes) have been defined, a classification algorithm compares unknown pixels to the known land cover class signatures, and a similarity metric is determined. Pixels are then classified as the land cover class that they most closely resemble. Because these methods rely on user interaction to develop training classes, and to determine which pixels belong in individual classes, they are referred to as supervised classifiers.

In the context of multispectral image classification for military and intelligence applications, analysts are most often interested in finding very specific materials in an image. That is the case in this analysis where the goal is to map locations along the western shore of Aceh Province in Indonesia affected by the great tsunami of December 26, 2004. For this application, we’re not too concerned with mapping all the different land cover types that exist in the image; we want to find those pixels affected by the tsunami. Additionally, unsupervised classification can be computationally intensive, especially when dealing with high spatial resolution satellite imagery. For this reason, an unsupervised classification isn’t the most efficient way to approach the problem.

For this analysis, we’ll use supervised classification and try to identify two simple classes: soil and vegetation.

Supervised Classification

Supervised classification depends on the analyst providing examples of the types of materials they want to map. These sample signatures are often referred to as training data, and are derived from one of two sources:

• Pixels in the image that you think are good examples of different materials in the image that you want to map

• Signatures from a spectral library (a spectral library consists of reflectance signatures of materials collected under controlled conditions in a laboratory).

ENVI provides the ability to analyze imagery with either method. In this analysis, you’ll extract training signatures directly from the image. Because the analyst makes subjective decisions in selecting pixels as members of training classes, results are highly dependent on the training data, and ultimately on the decisions the analyst makes throughout the classification process.

Supervised classification comprises two stages:

• The analyst decides which materials to map in the image, and, if using image pixels as training data, which pixels to use in the training set for each class. Once pixels are chosen, ENVI derives statistics for each group of training class pixels. For example, each training class will have a mean spectral signature.

• Using a supervised classification algorithm, compare every pixel in the image to each training signature. You can use several different algorithms for this comparison, all accessed from the ENVI main menu bar Classification → Supervised menu.
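The first stage, reducing each class’s training pixels to statistics, amounts to averaging spectra over an ROI mask. A small NumPy sketch under assumed array shapes (the function name is illustrative):

```python
import numpy as np

def training_stats(cube, roi_masks):
    """Mean spectral signature for each training class.

    `cube` is (bands, rows, cols); `roi_masks` maps a class name to a
    boolean (rows, cols) mask of its training pixels. Stage one of
    supervised classification reduces each class's pixels to statistics
    such as these per-band means.
    """
    return {name: cube[:, mask].mean(axis=1)
            for name, mask in roi_masks.items()}
```

The resulting per-class mean signatures are exactly what a classifier like minimum distance compares every unknown pixel against in stage two.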

Choosing how many classes to map and the pixels for each training class can be challenging. In this analysis, we know that we have a fairly simple task: map vegetation and soil in both the before and after images. However, how do we know which pixels in the image should be used as examples of these two land cover categories? ENVI offers a tool, known as the 2D Scatter Plot, designed to facilitate the process of finding pixels for training classes by visualizing the actual numerical distribution of the data.


Exercise 5: Use the 2D Scatter Plot and ROI Tool to Define Training Classes

1. Load Band 4 from before_drk_sub.dat as a Gray Scale image into a new display.

2. From the #1 Display group menu bar, select Tools → 2D Scatter Plots. The Scatter Plot Band Choice dialog appears.

3. Select Band 3 of before_drk_sub.dat as Band X and Band 4 of before_drk_sub.dat as Band Y, then click OK.

The 2D Scatter Plot tool allows you to visualize how the pixel values in each of the two chosen bands plot in a 2D scatter plot space. Each point in the scatter plot has a unique location in the image.

4. Move the Image box in the Scroll window and watch the Scatter Plot redraw. The data plotted in the 2D Scatter Plot tool comes from what is displayed in the Image window, not the entire image as represented in the Scroll window.

You will find that the image pixels plot at unique locations inside the Scatter Plot space, based on how the materials they contain reflect light. For example, pixels that are completely covered with vigorously growing vegetation will have high reflectance in the near-infrared (Band 4) and relatively low reflectance in the red (Band 3) (Figure 36).

Figure 36: The 2D Scatter Plot Tool

The 2D Scatter Plot has a number of tools designed to facilitate the process of finding pixels that represent the different training classes in an image.

5. Middle-click and hold the button as you move your cursor around inside the #1 Scatter Plot. A red box appears in the scatter plot, while pixels at the location of the box are highlighted in the Image and Zoom windows. This effect is known as dancing pixels. The inverse operation is also possible: left-click and hold the button in the Image window to see the locations of pixels (in a 10 x 10 pixel window centered on your cursor) highlighted back in the 2D Scatter Plot.

6. Mouse button functions change when a tool such as the 2D Scatter Plot is initiated. To see the functions of your mouse buttons with the 2D Scatter Plot, select Window → Mouse Button Descriptions from the ENVI main menu bar. If you move your cursor over the #1 Scatter Plot window, the functions of your mouse buttons appear in the Mouse Button Descriptions dialog.

7. Follow the tips in the Mouse Button Descriptions dialog to draw a polygon in the #1 Scatter Plot window that highlights vegetated pixels in Display #1 (Figure 36). Left-click and hold the button to draw a polygon, right-click to close the polygon and turn pixels in the image to the currently selected color. Alternatively, you can use left-clicks to define the polygon as line segments.

In general, you’ll find that pixels containing pure materials (those that are ideal for training classes), are located at the corners of the data cloud.

8. From the #1 Display group menu bar, select Tools → Pixel Locator. In the #1 Pixel Locator dialog, enter Sample = 1907, Line = 5952, then click Apply.

9. Take a few moments to explore how different materials at this location in the image are separated in the #1 Scatter Plot. For example, notice that water pixels are located near the minimum for both Band 3 and Band 4.

You’ll classify two materials in the before and after images: vegetation and soil/sand.

• Sand and soil pixels seem to fall on a line extending across the Scatter Plot at a 45° angle.

• Healthy vegetation has strong absorption of light in the red and very high reflectance in the near-infrared.

10. Right-click in the Scatter Plot window and select New Class. Left-click and define a polygon to represent the Bare Soil class, then right-click to complete the polygon. Repeat this process a second time to define a polygon that highlights Vegetation pixels. If you make a mistake, you can right-click and select Clear Class.

Figure 37: Scatter Plot with Two Classes Defined

11. Once you’ve defined the two classes, export them to the ROI tool by selecting Options → Export All from the Scatter Plot window menu bar.


12. Close the #1 Scatter Plot window.

The ROI tool is one of the most commonly used tools in ENVI. Anytime you wish to examine some, but not all, of the pixels in an image, use the ROI tool.

13. In the #1 ROI Tool dialog, rename each of your exported regions by clicking in the ROI Name field and entering the new name (as in Figure 38).

The #1 ROI Tool dialog has four radio buttons near the top of the dialog, one each for Image, Scroll, Zoom, and Off. These control in which of your display windows your mouse has ROI functionality.

You can use the ROI tool to define other training pixels by hand.

Figure 38: The ROI Tool with Exported Regions from the 2D Scatter Plot

14. In the #1 ROI Tool dialog, click New Region. Change the Region Name to Ocean, and ensure the Zoom radio button is selected.

15. In the #1 Scroll window, move the Image box so that it is located well out in the ocean. In the #1 Zoom window, left-click and drag to draw a polygon around ocean water pixels. To close the polygon, right-click twice.

If you attempt to move your Zoom box in the Image window and inadvertently begin to draw an ROI, check which radio button is selected at the top of the #1 ROI Tool dialog. Make sure Window is set to Off.

If needed, you can middle-click to remove previously defined nodes. Additionally, if you defined an ROI and you want to remove it, make the ROI active (left-click on the box next to its field in the ROI Tool), select the appropriate window to work in, and middle-click on the ROI in that display window.

16. Save your ROIs to disk for future use. From the #1 ROI Tool dialog menu bar, select File → Save ROIs. The Save ROIs to File dialog appears.

17. Click Select All Items, and enter an output filename of Aceh.roi. Click OK to save the file.

Be aware that when you save ROI information, you aren’t saving any image data. An ROI file contains only pointers to pixels in an image of specific dimensions. You can “trick” an ROI file to overlay on an unrelated file by subsetting the unrelated file to the same pixel dimensions as the file used to define the ROI. You can see this behavior by overlaying the ROIs defined on the before image on the after image.

18. Load a true-color composite Band (3,2,1) RGB of after_drk_sub.dat into Display #2.

19. From the #2 Display group menu bar, select Overlay → Region of Interest.

20. Move the Image box in the Display #2 Scroll window to the ROIs. You can see that the area where the soil ROI was defined in the before image is now water.

The ROIs defined in the before image will overlay on the after image because it has the same pixel dimensions in samples and lines. The fact that the two images happen to be registered and of the same area on the Earth’s surface is unrelated. As we will see in later exercises, this can be useful behavior.

Exercise 6: Minimum Distance Classification on Before and After Scenes

In this exercise you will apply the first of two supervised classification algorithms that you will study, the minimum distance classifier.

Minimum distance classification measures the distance of every pixel in the image to the mean of each of the training classes. The distance metric can be measured using either standard deviation or digital number units. Figure 39 shows a graphical representation of the minimum distance classifier for a two-band (two-dimensional) example where the training class data are plotted within a two-dimensional Scatter Plot.
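In digital-number units, the classifier reduces to: label each pixel with the nearest class mean, unless it is farther than the threshold from every mean. A NumPy sketch of that logic, with illustrative names and ENVI’s convention of class 0 = Unclassified:

```python
import numpy as np

def min_distance_classify(pixels, class_means, max_dist=None):
    """Minimum distance classification.

    `pixels` is (n, bands); `class_means` is (k, bands) -- one mean
    spectrum per training class. Each pixel gets the 1-based label of the
    nearest class mean (Euclidean distance in DN space). With a `max_dist`
    threshold, pixels farther than the threshold from every mean are left
    unclassified (0).
    """
    pixels = np.asarray(pixels, float)
    means = np.asarray(class_means, float)
    # (n, k) distance matrix: pixel i to class mean j
    dists = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
    labels = dists.argmin(axis=1) + 1
    if max_dist is not None:
        labels[dists.min(axis=1) > max_dist] = 0
    return labels
```

Note that without a threshold every pixel receives some class, however poor the match; that is exactly the behavior you will observe when you run the classifier with Set Max Distance Error set to None.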

1. On the ENVI main menu bar, select Classification → Endmember Collection. The Classification Input File dialog appears.

2. Select before_drk_sub.dat, then click OK. The Endmember Collection dialog appears.

The Endmember Collection dialog is very useful for spectral analysis because it allows you to efficiently try several mapping algorithms while using the same training signatures.

3. Import endmember (training) signatures from your ROIs. From the Endmember Collection dialog menu bar, select Import → from ROI/EVF from input file. The Enter ROI/EVF Filenames dialog appears.

4. If more than one ROI file is listed, select Aceh.roi, then click Open. The Select Regions for Stats Calculation dialog appears.

5. Click Select All Items, then click OK. ENVI generates statistics and returns the mean signatures from all pixels in each of the training groups to the Endmember Collection dialog. Click on the Plot button to see the spectra for the two training sets.

6. In the Endmember Collection dialog, select Algorithm → Minimum Distance, then click Apply.

7. An important concept in supervised classification is that of the threshold. You set the threshold in the Minimum Distance Parameters dialog using the Set Max stdev from Mean and Set Max Distance Error fields. For example, if you select Single Value from the Set Max Distance Error field and enter a value of 10, you are specifying that a pixel must be within 10 digital numbers of the mean of a class for that pixel to be classified as that class (see the threshold labeled in Figure 39). If the pixel is within 10 digital numbers of multiple class means, it is classified as the class it is closest to.


Figure 39: Minimum Distance Supervised Classification (a two-band scatter plot of training pixels for Class 1 and Class 2, showing an unknown pixel, the distance measured to the mean of each training class, and the threshold around each class)

8. In the Minimum Distance Parameters dialog Set Max Distance Error (DN) field, select None. Output the classification result to a file named before_mindist.dat, and output the rule images to a file named before_mindist_rule.dat. Click OK to start the classification.

Two files are added to the Available Bands List: the single-band classification image and a multi-band rule image.

9. Run the classification for the after image also. From the Endmember Collection dialog menu bar, select File → Change Input File. In the Select Input File dialog, select after_drk_sub.dat, then click OK.

10. In the Endmember Collection dialog, click Apply. Set Max Distance Error (DN) to None, output the classification file to after_mindist.dat, and output the rule image to after_mindist_rule.dat. Click OK.

11. Load the classification output file, before_mindist.dat, into a new display and link it with the original image data in Display #1 (right-click in either display group, then select Link Displays). If Display #1 does not contain a color-infrared composite, you may want to display one so that vegetation stands out. Take a few moments to examine the classification output relative to the original input image.

A classification image is a special file type in ENVI, denoted by the classification icon in the Available Bands List. It is a single-band, gray scale image that has had a color table applied, where individual data values are assigned a specific color.

12. In the image display containing the classification image, start the Cursor Location/Value tool (Tools → Cursor Location/Value). Move your cursor around the classification image and note that all pixels with the same color have the same data value. Close the Cursor Location/Value tool.

You’ll notice that the two classes are not ideally mapped in the classification image. This illustrates two points:

• The threshold value is extremely important for determining which pixels fall in which classes.

• Not all classes are mapped accurately using the same threshold value; different classes require different threshold values. For example, note how the vegetation class looks reasonable, while ocean pixels are obviously being misclassified as soil.

Modify the threshold value for each class to arrive at a final classification result. This is where the rule images that you output from the minimum distance classifier are essential.


13. Load the Soil rule image from before_mindist_rule.dat to Display #2.

In the case of the minimum distance classifier, the rule images record the distance of each pixel to the mean of each training class. Measured distance units are related to the range of data values in the dataset.

14. From the #2 Display group menu bar (the display group containing the Soil rule image), select Tools → Cursor Location/Value. Scroll around the image, looking at different pixel values. First look at the data value for a very bright pixel (the clouds near the center of the image are a good option). Next find a very dark pixel, such as those associated with beach sand.

Because the Soil rule image records the distance of each pixel in the image to the mean of the soil training data, pixels that closely match soil have small data values relative to all others in the image. This may be counter-intuitive, as we often think of close matches in remote sensing as having large values and thus being bright.
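The Rule Image Classifier you will use shortly is, in essence, per-class thresholding of these distance bands followed by a classify-by-minimum-value step. A NumPy sketch of that logic (illustrative names, not ENVI’s implementation):

```python
import numpy as np

def classify_from_rules(rule_images, thresholds):
    """Classify from minimum-distance rule images with per-class thresholds.

    `rule_images` is (k, rows, cols): band j holds each pixel's distance
    to class j's mean, so smaller values are better matches ("classify by
    minimum value"). A pixel receives the class with the smallest rule
    value, but only if that value passes the class's own threshold;
    otherwise it is left unclassified (0).
    """
    rules = np.asarray(rule_images, float)
    best = rules.argmin(axis=0)                    # nearest class per pixel
    best_val = rules.min(axis=0)                   # that smallest distance
    best_th = np.asarray(thresholds, float)[best]  # winning class's threshold
    return np.where(best_val <= best_th, best + 1, 0)
```

Per-class thresholds are the point of the tool: in the test case below, the second class’s tighter threshold rejects a pixel that would otherwise have been classified, which mirrors why the vegetation and soil classes need different values here.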

Use the rule images to determine what an appropriate threshold value for each class should be.

15. Using the Cursor Location/Value tool, look at the data values for a number of sand/soil pixels in the soil rule image. Make note of the general value for soil pixels in the list below. Do the same thing for the Vegetation rule image band. For example, load the vegetation rule image, examine some of the darkest pixels in the image associated with vegetation, and note their value in the list.

Minimum Distance Thresholds

Class: Before Threshold / After Threshold

Soil: ________________ / ________________

Vegetation: ________________ / ________________

Use these threshold values to derive a final classification result. This task is most easily accomplished using the Rule Image Classifier tool.

16. From the ENVI main menu bar, select Classification > Post Classification > Rule Classifier.

The Rule Image Classifier Input dialog appears.

17. Select the before rule image that was output from the minimum distance classification, before_mindist_rule.dat, then click OK. The Rule Image Classifier Tool dialog appears (Figure 40).

Figure 40: The Rule Image Classifier Tool Dialog

The Rule Image Classifier tool allows you to modify the threshold value for each class, quickly arriving at a final classification result.

18. To get a feel for how the Rule Image Classifier tool works, enter 200 into the field next to the Set All Thresholds button. Press the Enter key to set all the threshold values to 200.000, then click Quick Apply to apply those threshold values to the image. A new display containing that result opens.

19. Because smaller values are better matches with the Minimum Distance classifier, click the Classify By toggle so that it changes to Minimum Value, then click Quick Apply again.

Note: Because a pixel is assigned to the class whose mean it is closest to, we need to classify by minimum value. Other algorithms, like maximum likelihood, classify by maximum value.

20. Enter the threshold values that you found in step 15 (for the before image) into the Thresh fields for each class, then click Quick Apply. This should provide an improved classification.

21. Click Hist next to the Bare Soil class in the Rule Image Classifier dialog.

Histograms of rule images are extremely useful for determining the appropriate threshold value for each class. What you see in the Histogram: Rule (Bare Soil) dialog (Figure 41) is a frequency plot of all pixels in the soil rule image. The histogram shows the number (or frequency) of soil rule image pixels at each data value. You can see the number of pixels within each data value bin by left-clicking in the Histogram: Rule dialog.

Figure 41: The Bare Soil Rule Image Histogram (the left-hand tail corresponds to soil/sand pixels)

Often there will be a feature in the histogram that indicates the appropriate threshold value for a class. In Figure 41, you can see a long tail on the left side of the histogram associated with the pixels that are a close match for soil. It makes sense that the histogram should have this shape; if you look at the original image, you'll see that there are very few soil or beach sand pixels relative to the entire scene, so the left-hand tail stays low.
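The histogram-driven thresholding can be mimicked in a short sketch. The distance values here are synthetic, not real rule values; the 300 cutoff simply echoes the value used in this exercise:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic soil rule band: a small population of close matches (low
# distances, forming the left-hand tail) plus a large population of
# everything else in the scene.
rule = np.concatenate([rng.uniform(100, 299, 50),
                       rng.uniform(400, 2000, 5000)])

counts, edges = np.histogram(rule, bins=50)   # the "Hist" frequency plot

threshold = 300.0                 # cutoff read from the histogram's left tail
soil_mask = rule <= threshold     # smaller distance = closer match to soil
```

Pixels inside the mask are the ones the Rule Image Classifier would assign to the soil class at that threshold.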

22. Left-click in the Histogram: Rule (Bare Soil) plot to determine an appropriate cutoff for the soil threshold. You'll find that it should be around 300. Set the new threshold value in the Rule Image Classifier Tool dialog, then click Quick Apply.

23. Use the Hist button in the Rule Image Classifier Tool dialog for the vegetation class to find an appropriate threshold value. A threshold around 650 should work for vegetation in the before minimum distance rule image.

24. When you are happy with your classification from the Rule Image Classifier, click Save to File, name the file before_mindist_final.dat, then click OK.

25. Close the Rule Image Classifier.

26. Repeat steps 12-24 using the after minimum distance rule image. You should find that threshold values of 350 and 800 work well for soil and vegetation, respectively. Save the after minimum distance classification to after_mindist_final.dat.

27. Close the Rule Image Classifier Tool dialog.

Before you run a change analysis on these before and after minimum distance classification images, it is a good idea to clean them up to remove stray pixels and close holes. This will also produce a more coherent result when we create vector layers from the classification output.

28. From the ENVI main menu bar, select Classification > Post Classification > Majority/Minority Analysis. The Classification Input File dialog appears.

29. Select before_mindist_final.dat, then click OK. The Majority/Minority Parameters dialog appears.

30. Click Select All Items. Change the Kernel Size to 7 x 7. Output the result to a file named before_mindist_majority.dat, then click OK.

Majority analysis looks at all pixels in a window (7 x 7 pixels in this case) and reclassifies the center pixel based on the majority class inside the window.
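A minimal sketch of that reclassification logic follows (pure Python, edges left untouched for brevity; ENVI's tool also supports center-pixel weighting, which is omitted here):

```python
import numpy as np
from collections import Counter

def majority_filter(classes, kernel=7):
    """Reassign each interior pixel to the most common class inside a
    kernel x kernel window centered on it."""
    pad = kernel // 2
    out = classes.copy()
    for r in range(pad, classes.shape[0] - pad):
        for c in range(pad, classes.shape[1] - pad):
            window = classes[r - pad:r + pad + 1, c - pad:c + pad + 1]
            out[r, c] = Counter(window.ravel().tolist()).most_common(1)[0][0]
    return out

# A single stray class-2 pixel inside a block of class 1 gets cleaned up.
img = np.ones((9, 9), dtype=int)
img[4, 4] = 2
cleaned = majority_filter(img, kernel=7)
```

This is exactly why the filter removes "salt-and-pepper" noise: an isolated pixel can never win a majority vote in a 49-pixel window.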

31. Repeat steps 28-30 for the after_mindist_final.dat classification image. Output the majority analysis for the after image to a file named after_mindist_majority.dat.

There are many algorithms that can be applied to your multispectral image data for mapping. Before moving on to change detection, you'll run another classification algorithm, the Spectral Angle Mapper, and compare its results to the Minimum Distance classifier.

32. Close all display groups and open dialogs except for the Endmember Collection dialog.

Exercise 7: Spectral Angle Mapper Classification on Before and After Scenes

SAM is an automated method for comparing image spectra to training class spectra. The SAM algorithm determines the similarity between two spectra by calculating the spectral angle between them, treating them as unit vectors in spectral space with dimensionality equal to the number of bands.

The simplest way to look at this classifier is to consider a hypothetical unknown image pixel spectrum from a two-band image and a training class spectral signature. The two spectra can be represented in a two-dimensional scatter plot as two points (Figure 42). A vector from the origin through each point describes the position of each respective spectrum under all possible illumination conditions.
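In code, the spectral angle reduces to the arccosine of the normalized dot product of the two vectors. A sketch with made-up two-band spectra:

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Angle in radians between two spectra treated as vectors in
    n-dimensional spectral space (n = number of bands)."""
    cos = np.dot(pixel, reference) / (np.linalg.norm(pixel) *
                                      np.linalg.norm(reference))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

veg = np.array([0.1, 0.6])      # hypothetical two-band vegetation signature
bright_veg = 3.0 * veg          # same material under stronger illumination
soil = np.array([0.6, 0.1])     # hypothetical contrasting signature

a1 = spectral_angle(bright_veg, veg)   # ~0: scaling leaves the angle unchanged
a2 = spectral_angle(soil, veg)         # large angle: poor match
```

Because uniform scaling does not change the angle, SAM is relatively insensitive to illumination differences, which is the motivation for treating spectra as directions rather than points.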

1. From the Endmember Collection dialog used in the last exercise, change the input file to the before image (from the menu bar, select File > Change Input File). The Select Input File dialog appears.

2. Select before_drk_sub.dat, then click OK. If the Endmember Collection dialog was closed, you can re-access it from the Classification menu on the ENVI main menu bar. The Endmember Collection dialog appears.

3. Ensure the Algorithm is changed to Spectral Angle Mapper.

4. There is no need to reset the spectral training signatures. To make sure they look OK, you may want to click Select All and Plot.

5. Click Apply. The Spectral Angle Mapper Parameters dialog appears.

Figure 42: The Spectral Angle Mapper (SAM). In a two-band scatter plot, vectors from the origin pass through the unknown pixel spectrum and the training class pixel spectrum; the spectral angle separates them, and the threshold is measured around the training class vector.

In the Spectral Angle Mapper, the threshold value is measured in radians around the vector defined by the training class spectral signature (Figure 42).

6. Use the default Single Value threshold in the Set Maximum Angle (radians) field. Output the result to a file named before_SAM.dat and a rule image named before_SAM_rule.dat. Click OK.

7. Run the Spectral Angle Mapper algorithm on the after image. From the Endmember Collection dialog menu bar, change the input file by choosing File > Change Input File. The Select Input File dialog appears.

8. Select after_drk_sub.dat, then click OK.

9. In the Endmember Collection dialog, click Apply. In the Spectral Angle Mapper Parameters dialog, use the default threshold of 0.10, enter the output classification filename as after_SAM.dat, and enter the output rule image filename as after_SAM_rule.dat. Click OK to start processing.

10. Load the soil rule images for the before and after images into new display groups.

The Spectral Angle Mapper is similar to the minimum distance classifier in that pixels that are close matches to training classes have small values. This will make close matches dark in the rule image.

Note that the darkest pixels are sand/soil pixels along the coastline.

The Rule Image Classifier tool is very useful for land cover mapping. Use the Rule Image Classifier to find those pixels that most closely match the two training signatures input to the spectral angle mapper algorithm.

11. From the ENVI main menu bar, select Classification > Post Classification > Rule Classifier. In the Rule Image Classifier input file dialog, select before_SAM_rule.dat, then click OK. The Rule Image Classifier Tool dialog appears.


12. Set Classify By to Minimum Value, enter 0.10 in the Set All Thresholds field (press the Enter key to apply the value to all Thresh fields), then click Quick Apply.

13. Load a true-color composite of before_drk_sub.dat Band (3,2,1) RGB into Display #1 and link it with the Rule Classifier display (Display #3). Using dynamic overlay, evaluate the current threshold setting of 0.1.

14. In the Rule Image Classifier Tool dialog, click Hist next to the Soil rule image. In the soil rule image histogram, close matches for soil appear as the pixels with the smallest values. Try a threshold of 0.2 for the soil class.

15. Examine the vegetation rule histogram. You will find that a threshold near 0.1 will work well for vegetation.

Question: Can you explain why the vegetation rule image histogram has such a distinctive shape with a large spike in pixels with very low data values?

16. When you have settled on appropriate threshold values for the before SAM rule image, click Save to File in the Rule Image Classifier Tool dialog. The Output Rule Classification Filename dialog appears.

17. Enter before_SAM_final.dat, then click OK.

18. Close the Rule Image Classifier Tool dialog.

19. From the ENVI main menu bar, select Classification > Post Classification > Rule Classifier. The Rule Image Classifier input file dialog appears.

20. Select after_SAM_rule.dat, then click OK.

21. Toggle Classify by to Minimum Value. Then use histograms, image linking and dynamic overlay for the after SAM rule images to determine the appropriate threshold values to apply. (0.20 seems to work well for both the soil and vegetation classes.)

22. Once you've settled on appropriate threshold values for the after SAM rule image, click Save to File in the Rule Image Classifier Tool dialog. The Output Rule Classification Filename dialog appears.

23. Enter after_SAM_final.dat, then click OK.

Just as with the minimum distance result in the previous exercise, it is a good idea to clean up the classification images to remove stray pixels and close holes. This will also produce a more coherent result when we create vector layers from the classification output.

24. From the ENVI main menu bar, select Classification > Post Classification > Majority/Minority Analysis. The Classification Input File dialog appears.

25. Select before_SAM_final.dat, then click OK. The Majority/Minority Parameters dialog appears.

26. Click Select All Items. Change the Kernel Size to 7 x 7. Output the result to a file named before_SAM_majority.dat, then click OK.

27. Repeat steps 24-26 for the after_SAM_final.dat classification image. Output the majority analysis for the after image to after_SAM_majority.dat.

28. Close all display groups and any open dialogs.


Analyzing Classification Results

Exercise 8: Compare Results From the Classification Methods

Different classification algorithms will provide different results, because all methods approach the problem of land cover classification in different ways. In this exercise, you’ll look at the tools available in ENVI that you can use to assess your results.

1. At this point, your Available Bands List is full of files that you have created during the various classifications, making it difficult to see what files are available for display. From the Available Bands List menu bar, select Options > Fold All Files to fold up individual bands beneath filenames.

2. Load a true-color Band (3,2,1) RGB of before_drk_sub.dat into a new display.

3. In the Available Bands List, select New Display from the Display #1 drop-down button and load a gray scale of the before_mindist_majority.dat classification image into Display #2.

4. From the Available Bands List, load before_SAM_majority.dat into a new display (Display #3).

5. Link all three displays. Right-click in the Image window of any of the three displays and select Link Displays. In the Link Displays dialog, select Yes for each display, then click OK. You may need to resize your Image windows and do some rearranging so that you can easily see all three displays at once.

6. Use dynamic overlay to investigate the two different supervised classification results relative to the original true-color RGB before image. You will find that the two classification methods give different but similar results.

7. Look at the statistical distribution of the different classes using the two methods. From the ENVI main menu bar, select Classification > Post Classification > Class Statistics. The Classification Input File dialog appears.

8. Select before_mindist_majority.dat, then click OK. In the Statistics Input File dialog, again select before_mindist_majority.dat, then click OK (this dialog gives you the option of choosing another file from which to generate statistics, using the boundaries of the classification result). The Class Selection dialog appears.

9. Click Select All Items, then click OK. The Compute Statistics Parameters dialog appears. Use the defaults, then click OK. The Class Distribution Summary is reported in the text portion of the Class Statistics Results dialog.

10. Repeat steps 7-9 for the SAM classification result, before_SAM_majority.dat.

11. Arrange the two Class Statistics Results dialogs so that you can see the text portions of both. Look at the percentage of total pixels classified in each land cover class by the two methods. They should be similar. Also note that because the file is georeferenced with a known pixel size, the statistics routine calculated the area occupied by each class. Once you are satisfied with the statistics output, close the statistics dialogs.
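The area figures in that summary come directly from pixel counts and the known pixel size. A sketch of the arithmetic, using hypothetical counts and QuickBird's nominal 2.4 m multispectral pixel purely for illustration:

```python
# Hypothetical per-class pixel counts from a Class Distribution Summary.
counts = {"Soil": 120_000, "Vegetation": 2_500_000}
pixel_size_m = 2.4                      # ground sample distance, meters

total = sum(counts.values())
percent = {c: 100.0 * n / total for c, n in counts.items()}
area_km2 = {c: n * pixel_size_m ** 2 / 1e6 for c, n in counts.items()}
# e.g. 120,000 soil pixels x (2.4 m)^2 = 691,200 m^2 = 0.6912 km^2
```

This is why georeferencing matters here: without a known pixel size, only percentages, not areas, could be reported.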


12. From the #1 Display group menu bar of the display that contains the before QuickBird image, select Overlay > Classification. Select the SAM results, before_SAM_majority.dat, then click OK.

The #1 Interactive Class Tool dialog appears (Figure 43). You can use the Interactive Class Tool dialog to overlay individual classes on the original image, a powerful technique for visual assessment of classification accuracy.

Figure 43: The Interactive Class Tool Dialog

13. Enable individual classes and assess their quality. Note that the Options menu contains many useful tools for editing and visualizing results and for calculating statistics.

14. Modify the transparency of the overlay by choosing Options > Class Transparency from the Interactive Class Tool dialog. In the Class Tool Transparency field, enter 50, then click OK. This results in a 50% transparent overlay in the display.

15. When finished with the Interactive Class Tool dialog, close it.

16. From the #1 Display group menu bar of the display that contains the before QuickBird image, select Overlay > Classification. Select the minimum distance results, before_mindist_majority.dat, then click OK. In the Interactive Class Tool dialog, assess the minimum distance results relative to the SAM results.

Question: Which classification method appears to give the best results? Keep in mind the purpose of this analysis, which is the quantification of change due to the tsunami.

17. You’ll see that the two methods give a different spatial distribution of soil pixels. The minimum distance method classifies many more pixels as soil in agricultural areas. This is a realistic result.

However, we're more interested in the brighter sandy soil pixels found along the coast. The Spectral Angle Mapper algorithm does a better job of mapping those pixels. Therefore, you'll use the SAM results for further change detection analysis.

18. Close all open display groups.

Review

• Multispectral analysis is based on examination of pixel-level spectral signatures for determination of the materials that exist within the pixel.


• Multispectral classification is a technique that can be used to map material locations within an image.

• Multispectral classification falls into two broad categories: unsupervised and supervised.

• The utility of supervised classification for materials mapping depends on the quality of training data supplied, and analyst input regarding appropriate threshold values.

What You Will Learn in this Section

In this section you will learn:

• How to perform change detection analysis on the tsunami imagery

Temporal resolution refers to the frequency of sampling and, in the context of remote sensing, determines how often a point on the Earth's surface is imaged by an instrument. It can be measured as the repeat time of the platform: how long it takes to complete one series of orbits so that the instrument returns to its original starting position. However, because most orbits overlap slightly, the frequency with which a location can be imaged is often greater than the absolute repeat time. In addition, satellites in a polar orbit have greater overlap with increasing latitude, increasing the probability that a location can be imaged more frequently. Also, many space-borne sensors now have pointing capabilities, meaning that they can point off nadir and image locations on adjacent flight paths (which is the case for the QuickBird tsunami data used in this analysis). For polar orbiting satellites, repeat time can vary from several days to weeks. Satellite sensors in geostationary orbit, continuously monitoring one area on the Earth's surface, can have repeat times measured in minutes.

Temporal resolution plays an essential role in many defense and intelligence applications, for obvious reasons. In this exercise, we’ll examine the two preprocessed QuickBird images to quantify the destruction caused by the great tsunami of December 26, 2004.

Change Detection Analysis

Exercise 9: Visually Assess Change between the Two Dates

1. Load a near-infrared color composite Band (4, 3, 2) RGB of before_drk_sub.dat into a new display.

This image shows the state of the western shore of Aceh Province, Indonesia on April 12, 2004.

2. From the Available Bands List, load a near-infrared color composite Band (4, 3, 2) RGB of after_drk_sub.dat.

3. Link the two display groups (right-click in the Image window of Display #1 and select Link Displays).

4. Using dynamic overlay, spend some time examining the changes that took place in the region due to the great tsunami of December 26, 2004.

Next, you'll view the classification images created by Spectral Angle Mapper (SAM) on each of the datasets. Classification images in ENVI are a special file type; note that these files have a special icon in the Available Bands List. A classification image is a gray-scale image in which all pixels of a given land cover class share the same pixel value. ENVI applies a unique color (defined in the file header) to each of the data values in the classification image.
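Conceptually, the display step is just a color lookup: each class value indexes into the color table stored in the file header. A sketch with invented colors:

```python
import numpy as np

# A tiny classification image: 0 = Unclassified, 1 = Soil, 2 = Vegetation.
class_img = np.array([[0, 1],
                      [2, 1]])

# Illustrative RGB color table, standing in for the colors that ENVI
# reads from the classification file's header.
lut = np.array([[0, 0, 0],         # Unclassified -> black
                [205, 133, 63],    # Soil -> brown
                [0, 200, 0]])      # Vegetation -> green

rgb = lut[class_img]               # fancy indexing: (2, 2) -> (2, 2, 3)
```

This is why every pixel of a class renders in the same color: the underlying data holds only the class value, and the color lives in the lookup table.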


5. Load before_sam_majority.dat into Display #1.

6. Load after_sam_majority.dat into Display #2.

7. From the #2 Display group menu bar, select Tools > Cursor Location/Value. Examine data values for pixels in the two linked displays. You will see that both images are classified into just two land cover types, soil (Data = 1) and vegetation (Data = 2), plus a category for unclassified (Data = 0).

8. Keep all display groups open.

Exercise 10: Change Detection Using Classification Images

1. Load a true-color composite Band (3,2,1) RGB of before_drk_sub.dat into Display #1.

For this analysis, you’ll use the two previously generated SAM supervised classification images.

Note that the original QuickBird images are registered to one another so that they have the same number and size of pixels. This is an important requirement of change detection, and is the purpose of the preprocessing portion of the practicum. If the images you are comparing are not registered, the images must have geographic information. Then, ENVI automatically registers the two images to one another as the first step of change detection analysis.

2. From the ENVI main menu bar, select Basic Tools > Change Detection > Compute Difference Map.

3. In the Select the 'Initial State' Image dialog, select before_sam_majority.dat, then click OK. In the Select the 'Final State' Image dialog, select after_sam_majority.dat, then click OK. The Compute Difference Map Input Parameters dialog appears.

4. Set the Number of Classes to 3, set the Change Type to Simple Difference, and output the result to a file named tsunami_dif.dat. The output image, tsunami_dif.dat, is a classification image.

5. Load tsunami_dif.dat to Display #2 and link it with the true-color composite RGB of the before QuickBird image displayed in Display #1 (in the display group, right-click and select Link Displays).

6. Start the Cursor Location/Value tool (Tools > Cursor Location/Value) so that you can see the data values associated with the tsunami_dif.dat image. (Remember that actual data values are on the last line of the Cursor Location/Value tool; these are distinct from the screen values on the first line.)

7. You can see that the classification output image includes just three classes: Data: 1 {Change [+]} (those pixels that changed from unclassified to soil, unclassified to vegetation, or vegetation to soil), Data: 3 {Change [-]} (those pixels that changed from vegetation to unclassified, soil to unclassified, or soil to vegetation), and Data: 2 {No Change}.

Next, you’ll overlay the change map on an RGB color composite of the after image.

8. Load a true-color composite Band (3,2,1) RGB of after_drk_sub.dat into Display #1.

9. Overlay tsunami_dif.dat on the RGB color composite in Display #1. From the Display #1 group menu bar, select Overlay > Classification. In the Interactive Class Tool Input File dialog, select tsunami_dif.dat, then click OK.

10. Overlay both the positive (Change {+}) and negative (Change {-}) classes and try to determine how good a job the image classification coupled with the change algorithm did at highlighting change.

11. To best visualize the result of change detection using classification images, enable transparency in the #1 Interactive Class Tool (from the Options menu in the #1 Interactive Class Tool dialog, select Class Transparency). In the Class Tool dialog, set Class Tool Transparency to 50, then click OK. This allows you to see which areas were stripped of vegetation and villages, and how they match up with the change analysis.

Exercise 11: Change Detection Using Rule Images

Next you'll use raw image bands to calculate change between the two images instead of simple classification images. This is a more complicated analysis because many factors can cause apparent change in images collected at different points in time. For example, time of year can play a large role because of seasonality (an image of the southeastern U.S. acquired in July will look much different than one acquired in January). Also, if the imagery is calibrated into different units (for example, raw vs. radiance vs. reflectance), the results of a change analysis can be significantly impacted. Misregistration or differences in pixel size between images can also cause significant errors. Finally, atmospheric effects, namely variations in water vapor across the scene, can significantly impact an analysis.

Most of these factors have been controlled for this analysis. For example:

• The two images were collected at different times of the year; however, Aceh Province Indonesia is tropical, so changes in vegetation cover due to seasonality should be minimal.

• The images were atmospherically corrected to control for atmospheric scattering.

• Both images were calibrated into units of radiance.

• Image registration is adequate.

One factor that hasn't been completely controlled is viewing geometry. The after shot was acquired off-nadir. While some error associated with viewing geometry has been controlled through image registration, registration was not exact.

For this analysis you’ll use rule image bands for vegetation from the SAM classification.

1. Load the vegetation rule image from before_sam_rule.dat into Display #1.

2. Load the vegetation rule image from after_sam_rule.dat into Display #2.

Note how the changes in vegetation are immediately apparent in these two displays. Because vegetation has such a distinctive spectral signature, it is easy to discriminate in the SAM rule image.

3. From the ENVI main menu bar, select Basic Tools > Change Detection > Compute Difference Map. In the Select the 'Initial State' Image dialog, select Rule (Vegetation) from before_sam_rule.dat, then click OK. In the Select the 'Final State' Image dialog, select Rule (Vegetation) from after_sam_rule.dat, then click OK. The Compute Difference Map Input Parameters dialog appears (Figure 44).


4. Set the Number of Classes to 21 and the Change Type to Simple Difference. Select Standardize to Unit Variance, output the result to a file named tsunami_rule_dif.dat, then click OK.

Figure 44: The Compute Difference Map Input Parameters Dialog

This analysis works by subtracting the initial image from the final, and then grouping pixels into classes based on the magnitude of change. In the vegetation rule image for each date, it's easy to identify pixels where tsunami destruction has occurred because they are generally lighter (larger rule values) in the after image than the vegetation pixels they replaced. Therefore, pixels where tsunami destruction has occurred will tend to have more positive difference values and will subsequently be grouped into classes that represent greater magnitudes of change. The Standardize to Unit Variance option gives both rule images a mean of zero and unit variance, ensuring that the analysis compares apples to apples.
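The standardize-and-difference step can be sketched as follows (tiny made-up rule bands; the binning of the result into the 21 change classes is omitted here):

```python
import numpy as np

def standardized_difference(initial, final):
    """Scale both rule bands to zero mean and unit variance, then take
    final minus initial so that vegetation loss (larger 'after' rule
    values, i.e. worse matches) maps to positive values."""
    zi = (initial - initial.mean()) / initial.std()
    zf = (final - final.mean()) / final.std()
    return zf - zi

# One pixel's vegetation rule value jumps after the event (worse match),
# so it receives the most positive standardized difference.
before = np.array([[0.10, 0.11], [0.09, 0.10]])
after  = np.array([[0.10, 0.11], [0.09, 0.60]])
diff = standardized_difference(before, after)
```

Standardizing first means the comparison is in units of each band's own variability, not raw rule values, which is what makes differencing two separately computed rule images defensible.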

You’ll now compare the change image to the original QuickBird images from before and after the tsunami.

5. Load a near-infrared color composite Band (4, 3, 2) RGB of after_drk_sub.dat to Display #1.

6. Load tsunami_rule_dif.dat to Display #2.

Note that there are now many more classes in comparison to our original change analysis of the SAM classification images.

Note that some changes between pixel values are insignificant. You want to be able to visualize which pixels underwent significant change associated with the tsunami. An ideal tool for this task is the Class Color Mapping tool (Figure 45), which allows the analyst to change the colors of individual classes in a classification image.

7. From the Display group menu bar of the display that contains tsunami_rule_dif.dat (Display #2), select Tools > Color Mapping > Class Color Mapping. The #2 Class Color Mapping dialog appears (Figure 45).


8. Experiment with changing various classes to the color black. For example, select Change (-1) in the Selected Classes area. From the Color drop-down button, select Colors 1-20 > Black. The pixels in this class will turn black in Display #2. These are pixels that changed only slightly in value between the two images and are most likely not associated with tsunami destruction. Also turn the No Change pixels to black.

9. In the #2 Class Color Mapping dialog, change the classes Change (+1) through Change (+5) and Change (-1) through Change (-10) to Black. The classes Change (-1) through Change (-10) represent those pixels that got darker over time, which is generally not associated with the tsunami. This should give you a good visual indication of which pixels experienced significant change through time.

10. From the #2 Class Color Mapping dialog menu bar, select Options > Save Changes, then select File > Cancel to close the dialog.

11. Link the two open display groups (from either display group, right-click, then select Link Displays). In the Link Displays dialog, make sure all displays are toggled to Yes, then click OK.

12. Using dynamic overlay, explore the accuracy of the change map that you generated.

13. Close all opened displays in preparation for the next exercise.

Figure 45: The Class Color Mapping Tool Dialog

Exercise 12: Quantify the Area Impacted by the Tsunami

In this exercise, we will determine the amount of land area that was destroyed by the tsunami on December 26, 2004. We'll generate statistics using the change map created previously. First, we'll clean up the change map to remove classes that you determined did not represent change.

1. From the ENVI main menu bar, select Classification > Post Classification > Combine Classes. The Combine Classes Input File dialog appears.

2. Select tsunami_rule_dif.dat, then click OK.

In the Combine Classes Parameters dialog, you are going to reassign all of the original classes (21 in all) to one of two classes: Unclassified or Change (+10) (Figure 46).

3. Select Change (+10) from the Select Output Class list. From the Select Input Class list, select Change (+10) and click Add. Repeat by selecting, one at a time, Change (+9) through Change (+6) from the Select Input Class list and clicking Add. Then, select Unclassified from the Select Output Class list. From the Select Input Class list, select Change (+5) and click Add. Repeat by selecting, one at a time, all remaining classes in the Select Input Class list except for Unclassified and clicking Add. Click OK.

Figure 46: Class Reassignments for the Combine Classes Parameters Dialog

4. Toggle Remove Empty Classes? to Yes, name the output file tsunami_rule_class.dat, then click OK.

5. Load tsunami_rule_class.dat into a display.

Note that the image tsunami_rule_class.dat now comprises only two classes. Also note that some pixels are misclassified as tsunami-impacted, such as those in the ocean.

6. As a final step, clean up the classification image. From the ENVI main menu bar, select

Classification > Post Classification > Majority/Minority Analysis. The Classification Input File dialog appears.

7. Select tsunami_rule_class.dat, then click OK. The Majority/Minority Parameters dialog appears.

8. Click Select All Items, set the Analysis Method to Majority, set the Kernel Size to 7 x 7 pixels, leave the Center Pixel Weight set at 1, and output the result to a file named Final_Tsunami_Class_Map.dat.
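The Majority Analysis you just ran replaces each pixel's class with the most common class inside the kernel, which removes isolated misclassified pixels such as those in the ocean. Below is a minimal sketch of that idea in Python; it is not ENVI's actual implementation, and the function name and toy image are hypothetical.

```python
import numpy as np
from collections import Counter

def majority_filter(classes, kernel=7, center_weight=1):
    """Reassign each pixel to the most common class inside a
    kernel x kernel window (a sketch of ENVI's Majority Analysis).
    The center pixel's class is counted center_weight times."""
    pad = kernel // 2
    padded = np.pad(classes, pad, mode="edge")
    out = np.empty_like(classes)
    rows, cols = classes.shape
    for r in range(rows):
        for c in range(cols):
            window = padded[r:r + kernel, c:c + kernel].ravel().tolist()
            # extra votes for the center pixel beyond its single occurrence
            window += [classes[r, c]] * (center_weight - 1)
            out[r, c] = Counter(window).most_common(1)[0][0]
    return out

# A lone "change" pixel (1) surrounded by "unclassified" (0) is removed.
img = np.zeros((9, 9), dtype=int)
img[4, 4] = 1
cleaned = majority_filter(img, kernel=7)
```

Raising the Center Pixel Weight makes the filter more conservative, since each pixel then needs more neighboring votes to be overturned.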



9. Load Final_Tsunami_Class_Map.dat to Display #2.

10. Calculate the land area that was impacted by the tsunami. From the ENVI main menu bar, select

Classification > Post Classification > Class Statistics. In the Classification Input File dialog, select Final_Tsunami_Class_Map.dat, then click OK. In the Statistics Input File dialog, click Final_Tsunami_Class_Map.dat, then click OK.

11. In the Class Selection dialog, select the class Change (+10), then click OK. In the Compute Statistics Parameters dialog, use the defaults, then click OK.

Question: How many square kilometers were affected by the tsunami in this area?
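Class Statistics arrives at an area figure by multiplying the pixel count of the selected class by the ground area of one pixel. A sketch of that arithmetic with made-up numbers (the 30 m pixel size and the class map here are hypothetical, not taken from the exercise data):

```python
import numpy as np

# Hypothetical class map: 0 = Unclassified, 1 = Change (+10)
class_map = np.zeros((1000, 1000), dtype=np.uint8)
class_map[:250, :400] = 1            # pretend 100,000 pixels changed

pixel_size_m = 30.0                  # assumed ground sample distance
changed_pixels = int(np.count_nonzero(class_map == 1))
area_km2 = changed_pixels * (pixel_size_m ** 2) / 1e6

print(changed_pixels, area_km2)      # 100000 90.0
```

With these assumed numbers, 100,000 changed pixels at 900 m² each give 90 km²; substituting the real pixel size and class count reproduces the report's area column.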

12. Close all opened dialogs and display groups in preparation for the next exercise.

Review

• Change detection is a powerful tool for determining changes at specific locations through time.

• There are several algorithms for change detection analysis in ENVI—both map-based and statistics-based.

• Several types of change analysis were completed:

o Change detection using classification images from the before and after periods

o Change detection using raw rule images from supervised multispectral analysis

o Statistical analysis using maps derived from change analysis

What You Will Learn in this Section

In this section you will learn how to:

• Assess changes associated with the tsunami

• Create and overlay vector files for areas affected by the tsunami

• Digitize roads in the imagery

• Display village boundaries

• Add annotations

Synthesizing Results with Post-processing Tools

In this final section of the tsunami change analysis, you'll learn about tools in ENVI for synthesizing results and creating output for use in decision-making processes. Commonly, geographic information derived from imagery needs to interface with vector-based GIS software packages. In this chapter, you'll learn how to save raster imagery to vector files. You'll also learn about image chipping and creating annotated output using ENVI's QuickMap utility.

Exercise 13: Visually Assess Changes Associated with the Tsunami and Spatially Subset the Area of Interest

1. Load true-color composites Band (3,2,1) RGB for both the before_drk_sub.dat and after_drk_sub.dat images to new displays.

2. Link the two display groups (from either display group, right-click, then select Link Displays).


3. Start the Pixel Locator. From the #1 Display group menu bar, select Tools > Pixel Locator. The #1 Pixel Locator dialog appears.

4. Enter Sample = 2122, Line = 6167, then click Apply.

This village will be the central command for this area of the coastline. Your task is to create a map of the area showing both previously existing houses and infrastructure, with the extent of tsunami damage overlaid.

The first step in creating our output product is to subset the original before image to our area of interest for use as a backdrop.

5. From the ENVI main menu bar, select Basic Tools > Resize Data. The Resize Data Input File dialog appears.

6. Select before_drk_sub.dat, then click Spatial Subset. The Select Spatial Subset dialog appears.

7. Enter Samples 1711 to 2860 (NS: 1150) and Lines 2964 to 4763 (NL: 1800). Click OK, then OK again. The Resize Data Parameters dialog appears.


8. Output the result to a file named before_village.dat, then click OK.

9. Load before_village.dat to Display #1 as a true-color composite Band (3,2,1) RGB.

Exercise 14: Create Vector Files for Areas Affected by the Tsunami and Overlay Them on Imagery

1. Create a vector format file for areas impacted by the tsunami. From the ENVI main menu bar, select

Classification > Post Classification > Classification to Vector. The Raster to Vector Input Band dialog appears.

2. Select the band for Final_Tsunami_Class_Map.dat, then click OK. The Raster to Vector Parameters dialog appears.

3. Select Change (+10) as the class to vectorize, and output the result as a Single Layer to a file called tsunami.evf. Click OK.

After processing, the new vector layer appears in the Available Vectors List. The Available Vectors List is much like the Available Bands List in ENVI. It is the place where any open vectors are managed. You can load vectors to selected display groups or to ENVI vector windows, close vectors, edit vector projections and names, and so forth.

4. Load the RTV (Final_Tsunami_Class_Map.dat) tsunami vector into Display #1. In the Available Vectors List, click Select All Layers, then click Load Selected. In the Load Vectors dialog, select Display #1, then click OK. The #1 Vector Parameters dialog appears.

Note that, as with the ROI Tool dialog, the Vector Parameters dialog has four radio buttons at the top. These specify which display group window is the active window in which to create vectors.


5. Note that Image is the currently selected window. In the Display #1 Image window, try to drag and drop the Zoom Box. You’ll notice that a cursor appears and that it snaps to the overlaid vector layer.

6. In the #1 Vector Parameters dialog, select the Zoom radio button.

7. In the Display #1 Image window, drag the Zoom box to a location so that the Zoom window contains some vectors. Left-click in the Display #1 Zoom window. You should see that your mouse cursor now snaps to vectors in the Zoom window.

8. In the #1 Vector Parameters dialog, double left-click on the layer name. This toggles the layer off. Double left-click again to re-display the layer.

9. You can edit layer appearance for any displayed vector. In the #1 Vector Parameters dialog, select Edit > Edit Layer Properties. The Edit Vector Layers dialog appears.

10. Right-click on the Color box to select a new color. Select Items 1:20 > Yellow. Change the Line Attribute Thick parameter to 3. Click Preview to display the changes. The currently displayed vector is a polygon. In the Polygon Fill field, select Line with a Space of 0.25. Click OK.

Exercise 15: Create a New Roads Vector Layer by Digitizing off the Imagery

1. From the #1 Vector Parameters dialog menu bar, select File > Create New Layer. The New Vector Layers Parameters dialog appears.

2. Name the layer Main Road – Pre Tsunami. Output the new vector layer to a file named PreTsunamiRoad.evf, then click OK.

3. You want to be able to digitize the road without the tsunami layer getting in the way. Deactivate the tsunami layer by double-clicking on the layer name in the #1 Vector Parameters dialog.

4. In the #1 Vector Parameters dialog, make Main Road – Pre Tsunami the active layer. You can do this by selecting its name in the Available Vectors Layers area, or by choosing Options > Select Active Layer > Main Road – Pre Tsunami from the Vector Parameters dialog menu bar.

5. From the #1 Vector Parameters dialog menu bar, select Mode > Add New Vectors.

6. Set the type of feature you want to digitize. Because you're creating a roads layer, select Mode > Polyline.

7. Change the color of the new roads vector to Red by right-clicking on the Current Layer color box and selecting Items 1:20 > Red.

8. Set the Window radio button to Image.

Find the main road that runs through the village at the center of the image. Follow it to the top of the image.

9. In the Image window, left-click on the road. You will notice that your cursor now has a line attached to it that connects to the point you clicked on the road. Position your cursor at a second


point on the road and left-click again. You should see the new vector appear as a red line connecting your two points.

Note: If you make a mistake, you can back up one node by middle-clicking.

10. Continue to digitize the road. When you run out of space in the Image window, move the Image box in the Scroll window to a new location.

Note: There are two main roads in the image. For now, digitize one or the other.

11. When you get to the end of the road, right-click twice to close the new vector. The second right-click brings up a context menu. Select Accept New Polyline.

12. Another, faster way to digitize a linear feature is to use the Intelligent Digitizer. From the #1 Vector Parameters dialog menu bar, select Mode > Intelligent Digitizer. This tool allows you to follow roads with fewer points by automatically curving the new vector to follow bends in the road.

13. Add the other road to the Main Road – Pre Tsunami layer. For now, it will not intersect the first line.

14. When you are done digitizing the roads, make sure you save your changes. From the #1 Vector

Parameters dialog menu bar, select Edit > Save Changes Made to Layer.

Next you’ll join the two road vectors so that they form one individual polyline. This is accomplished by splitting the continuous vector into two vectors, then joining the dangling vector to one of the newly split vectors.

15. From the #1 Vector Parameters dialog menu bar, select Mode > Edit Existing Vectors.

16. Where the two roads should intersect, left-click the continuous vector that you want to attach the dangling vector to; you should see the line and individual nodes become highlighted. Right-click on one node close to the intersection. Select Mark Node. Right-click again on the marked node and select Split Vector. This divides the continuous vector into two individual vectors. Right-click once again and select Accept Changes.

17. Left-click on the dangling vector to highlight it. Left-click on one of the vectors that you want to attach the dangling vector to. It should be highlighted. Right-click and select Join Vectors. Right-click a final time and select Accept Changes.

18. From the #1 Vector Parameters dialog menu bar, select Edit > Save Changes Made to Layer.

Exercise 16: Display the Village Boundaries Layer and Modify Parameters

1. From the #1 Vector Parameters dialog menu bar, select Mode > Cursor Query.

2. From the #1 Vector Parameters dialog menu bar, select File > Open Vector File. The Select Vector Filename dialog appears.

3. Navigate to the envimil\tsunami directory, select villages.evf, then click Open. The Villages vector layer is added to the #1 Vector Parameters dialog and displayed in Display #1.


4. Edit the appearance of the Villages and Roads vector layers.

5. From the #1 Vector Parameters dialog menu bar, select Edit > Edit Layer Properties. The Edit Vector Layers dialog appears.

6. Change the Villages layer color to Cyan, and set Polygon Fill to Solid. Change the Main Road – Pre Tsunami vector layer line Thickness to 3. Click OK.

Currently, the vector layers in Display #1 are not in the correct order.

7. From the #1 Vector Parameters dialog menu bar, select Options > Arrange Layer Order. In the Vector Layer Ordering dialog, drag and drop Villages into position 1, RTV (Final_Tsunami_Class_Map.dat) into position 2, and Main Road – Pre Tsunami into position 3. Click OK.

Exercise 17: Image Annotations

1. From the #1 Display group menu bar, select Overlay > Annotation. The #1 Annotation dialog appears.

Use annotations to overlay text, legends, and scale bars on your imagery.

Note that the Annotation dialog, like the ROI Tool and Vector Parameters dialogs, allows you to select which window of the three-window display group you would like to work in.

2. In the #1 Annotation dialog, select the Image window as the active window by clicking the Image radio button.

3. From the #1 Annotation dialog menu bar, select Object > Rectangle.

4. In the Display #1 Image window, left-click and drag the mouse to create a rectangle. When you release the left mouse button, note that the newly created rectangle has a diamond handle in its center. You can left-click on the handle and move the rectangle to a new location. Once you're satisfied with its location, right-click to set its position.

5. You can modify the annotation's appearance after setting it in place. From the #1 Annotation dialog menu bar, select Object > Selection/Edit. In the active window, left-click and drag to draw a red box around the object that you want to edit. When you release the left mouse button, the diamond handle for the object that you selected should be highlighted. You can then edit the object in the #1 Annotation dialog. If you want to delete the object, select Selected > Delete from the #1 Annotation dialog menu bar.

Spend some additional time using the Annotation tool.

You can add gridlines to your image to make a map. From the Display group menu bar, select Overlay > Gridlines. The #1 Gridline Parameters dialog appears. Note that you can toggle a pixel grid, map grid, and geographic grid on or off, and control the grid spacing. From the Gridline Parameters dialog menu bar Options menu, you can set grid attributes, such as fonts, font sizes, and grid line colors and thicknesses. You can also change the border whitespace thickness from the Options menu by selecting Set Display Borders.

You can save display groups with annotation graphics, vectors, ROIs, and data stretches to external-format graphics files.

6. From the #1 Display group menu bar, select File > Save Image As > Image File.

7. In the Output Display to Image File dialog, you can choose the bit depth (8-bit gray scale or 24-bit color), which layers you want to have as overlays, any spatial subsets, and the output image type. If you are happy with your current display, use the default settings. In the Output File Type field, select BMP, and output the result to a file named Aceh_Province.bmp.

As an example of what types of annotations are possible, you can open aceh.bmp from the envimil/tsunami directory.

8. Once you are satisfied with the annotation tools and image output, close all displays and all open files.

Review

• ENVI’s overlay capabilities allow simple presentation of complex analyses.

• You can create vector layers from classification images which can be used for vector analysis, presentation of results, or for export to other software.

• ENVI’s annotation tools provide rich functionality for presentation of results.

• There are many options for saving information as displayed in an ENVI Image Display, from common raster graphics formats like BMP and JPG, to vector PostScript.


Chapter 7:

SPEAR Tools

What You Will Learn in this Chapter ......................................................................................... 138

Terrain Categorization............................................................................................................... 138

Change Detection Analysis ...................................................................................................... 145

Chapter Review......................................................................................................................... 147



What You Will Learn in this Chapter

Use the Spectral Processing Exploitation and Analysis Resource (SPEAR) tools to highlight specific features in your imagery. Each tool has a wizard that guides you through step-by-step processes. Each wizard provides a series of panels in which you set the parameters that ENVI uses to process the imagery.

Each panel includes basic instructions on the left side of the panel. Detailed instructions for each wizard are located in ENVI Help when you click Help. In this chapter you will:

• Select input and output files in a SPEAR module

• Perform an in-scene atmospheric correction

• Run TERCAT

• Perform automatic registration

• Apply two methods for change detection

Terrain Categorization

The Terrain Categorization (TERCAT) tool creates an output product in which pixels with similar spectral properties are clumped into classes. These classes may be either user-defined, or automatically generated by the classification algorithm. The TERCAT tool provides all of the standard ENVI classification algorithms, plus an additional algorithm called Winner Takes All.
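The Winner Takes All idea described above can be sketched as a per-pixel vote across the individual classification results. This is a simplified illustration, not ENVI's implementation; in particular, the tie-breaking by neighboring pixels described later in this chapter is omitted here.

```python
import numpy as np

def winner_takes_all(stack):
    """Per-pixel vote across several classification results: the class
    chosen by the most classifiers wins at each pixel.

    stack: (n_classifiers, rows, cols) array of class codes.
    Returns (winner, votes): the winning class and the number of
    classifiers that agreed with it at each pixel."""
    classes = np.unique(stack)
    # count, for each candidate class, how many classifiers chose it
    counts = np.stack([(stack == c).sum(axis=0) for c in classes])
    idx = counts.argmax(axis=0)                    # index of winning class
    winner = classes[idx]
    votes = np.take_along_axis(counts, idx[None], axis=0)[0]
    return winner, votes

# Three 1x3 "classifier" outputs over the same pixels
a = np.array([[[1, 2, 3]], [[1, 2, 1]], [[1, 3, 3]]])
w, v = winner_takes_all(a)
```

Dividing `votes` by the number of classifiers gives a confidence map analogous to the Winner Takes All (Probability) image: with three classifiers, the lowest possible confidence for a winning class is 1/3, which matches the 0.33 value noted in the exercise.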

Exercise #1: Supervised Terrain Classification (TERCAT)

1. From the ENVI main menu bar, select File > Open Image File.

2. Navigate to the envimil\Quickbird directory and open the SKorea_sub image. The file will appear in the Available Bands List.

3. Start the SPEAR Tools by selecting Spectral > SPEAR Tools > TERCAT.


First TERCAT panel



4. The ENVI TERCAT wizard displays the File Selection panel. Click Select Input File, choose the file SKorea_sub, and then click OK. The input image to SPEAR should be a multispectral file in any format readable by ENVI. If the wavelengths are not listed in the image header, a series of Select Band dialogs appear. SPEAR allows the user to process only a portion of the scene, using the Select Subset option. This launches the ENVI Basic Tools > Resize Data (Spatial/Spectral) module. For this exercise we will use the entire scene. By default, the SPEAR output files are saved to the same directory and use the same root name as the input file, minus any extension. You can change the directory and/or root filename by clicking the Select Output Root Name button. Keep the checkbox next to Show informational dialogs between steps checked. Click Next. The Atmospheric Correction panel appears.

5. Atmospheric Correction is available within many of the SPEAR tools. For most spectral processing applications, working with atmospherically-corrected data produces the most accurate results.

Atmospheric correction methods available in SPEAR include Dark Object Subtraction, Flat Field Calibration, Internal Average Relative Reflectance, Log Residuals, and Empirical Line Calibration. The Dark Object Subtraction routine is appropriate for multispectral data. Select Show Advanced Options. Within the Dark Object Subtraction Parameters panel, change the search area to Entire scene, and uncheck Use ignore data value. Click Next.
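Dark object subtraction rests on a simple model: atmospheric scattering adds a roughly constant offset to every pixel in a band, so the darkest pixel's value approximates that offset and can be subtracted. A sketch with toy data follows; real implementations, including a restricted search area like the one configured above, may use something more robust than the raw per-band minimum.

```python
import numpy as np

def dark_object_subtraction(cube):
    """Per-band dark object subtraction (sketch): assume the darkest
    pixel in each band should be ~0, so its value approximates the
    additive scattered-light (haze) signal and is subtracted.

    cube: (bands, rows, cols) array of radiance/DN values."""
    dark = cube.min(axis=(1, 2), keepdims=True)   # per-band dark value
    return cube - dark

# Two tiny bands with artificial haze offsets of 10 and 25
cube = np.array([[[10, 14], [12, 30]],
                 [[25, 40], [27, 25]]], dtype=float)
corrected = dark_object_subtraction(cube)
print(corrected.min(axis=(1, 2)))    # [0. 0.]
```

Because shorter wavelengths scatter more, the subtracted dark value is typically largest in the blue band and smallest in the near-infrared.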

6. In the Method Selection panel that appears you will select the classification methods to use.

Unsupervised methods do not require training data to create a TERCAT, and the resulting classes will not be labeled. Supervised methods require you to train the algorithms by creating regions of interest (ROIs) that include representative pixels of the desired classes. Check only the Maximum Likelihood, Spectral Angle Mapper, and Minimum Distance algorithms for this exercise. Also select Winner Takes All (TERCAT).

7. Click Next. A display showing the scattered light-corrected SKorea_sub image appears. Click OK after reading the important message about selecting ROIs (Figure 2).

The SPEAR Tools have important steps highlighted to assure that the user receives the best results possible.


8. The ROI Tool also opened with the image. Make sure the ROI_Type is set to Polygon and click the Image radio button to draw polygons in the main display window. Move the red box in the Scroll window to find an area that contains urban materials, forests, rivers, and crop fields (Figure 3). Click to draw polygon vertices over an urban area. One right-click will close the polygon, and a diamond-shaped handle will appear that allows you to reposition the polygon if you wish. If you aren't satisfied with the polygon at this point (with the diamond handle visible), a middle-click will erase it. A second right-click will close and fill the polygon. Click twice in the ROI Name block for this red region and change the name to urban. Click New Region and draw a polygon for a forest ROI. Repeat the process to define training sites for the river and crop fields. Edit the ROI names appropriately. If you want to change the color of an ROI, right-click in the Color box and select a color from the pull-down list.

Note: It is important that you do not mix the materials you are trying to classify in the same ROI.


ROIs drawn in the main display window.


9. Click on New Region in the ROI Tool. Change the name of this ROI to roads. Click ROI_Type and select either Polyline or Point. Click on the Zoom radio button so that you can draw in it for detailed work.

10. Re-position the main display window and the Zoom window so that sections of road are visible in the Zoom window. Draw a polyline and/or place points along sections of road. Try to avoid mixing other materials in the roads ROI.

11. Return to the TERCAT panel, click Next, then click OK when the ROI Selection panel appears.

12. Each TERCAT method has its own set of advanced parameters that you can adjust. To view the advanced parameters, click Show Advanced Options, and then click the tab for each algorithm to view the possible parameters. In most cases, using the default values produces satisfactory results. Select Output rule images?. You can use these to modify thresholds later. Click Next to run the classifications.

13. After processing is complete, two dynamically linked display groups appear: the natural color composite is the reference image (on the left), and the Maximum Likelihood classification results are on the right. Add an additional display to the group by selecting Spectral Angle Mapper from the Results Display #2 drop-down list (Figure 4). Roam around in the displays to compare results. Left-click in a display to see another display. To cycle through each display, hold the left mouse button down in one display while you click with the middle mouse button. Black pixels are unclassified.


Selecting results to examine.

14. Right-click in any of the display windows and select Cursor Location/Value. Move the mouse over an image to see the names of the classes assigned to any given pixel.

15. After comparing the Maximum Likelihood and Spectral Angle Mapper results, replace one of them with the Minimum Distance result.

Which result maps the forests better? The river? Urban areas?

16. In the TERCAT Examine Results panel, select Winner Takes All (TERCAT) from the Results Display #1 drop-down list.

Does this image show fairly good results? In the case of a tie between algorithms, the dominant class of the neighboring pixels is used to classify the pixel in question.

After inspecting the results, you may wish to change something and re-run the classifications. One thing you may see is that some holding ponds are classified as roads. If you have time, you could rectify this by clicking the Prev button to go back and add a training site for the holding ponds.

17. In addition to the TERCAT product, ENVI creates a Winner Takes All (Probability) image. This image indicates the level of confidence in each pixel's classification, as determined by the number of classification methods that agreed on the winning class. Load the Winner Takes All (Probability) map using the Results Display #2 drop-down list and query the pixels with the Cursor Location/Value tool. Carefully examine areas with the darkest pixels (lowest probability, here 0.33) and consider revising the training ROIs. You might be able to improve the

results by adding more polygons for a particular class to better encompass its variability.

18. A number of post-classification processing techniques available in ENVI are available directly from the SPEAR GUI. These methods may be used to “clean” the TERCAT results by removing spurious pixels that may not be significant for the terrain classification. You can also generate class statistics from your TERCAT, such as mean spectra, area covered by each class, and so on. In this step, select the TERCAT(s) that the desired operation should be performed on from the TERCATs to process list. Then select the class(es) that the operation should be performed on and the desired operation from the Processing to perform drop-down list. Depending on the method chosen, parameters may appear in the area immediately below the Processing to perform drop-down list. Select the Majority Analysis process for the Maximum Likelihood result. De-select Unclassified from the Classes to process list. Click Go after making the appropriate selections.

19. When processing is complete, click OK in the Information dialog. Load the Maximum Likelihood (MAJORITY) result into Results Display #2 and compare the results.

Maximum Likelihood result on the left and Maximum Likelihood (MAJORITY) on the right.

20. Select Class Statistics from the Processing to perform drop-down list and click Go. To view the statistics for any class, select that class from the Stats for drop-down list.


Class Statistics dialog.

21. When you are finished examining results, click Next in the Examine Results panel, then click Finish to exit the wizard.

22. If you have time, you might try the Rule Classifier on some of the rule images, as described in the Tsunami chapter. Because smaller values are better matches with the Minimum Distance and Spectral Angle Mapper classifiers, the Classify By toggle should be set to Minimum Value. For Maximum Likelihood, in which larger values are better matches, it should be set to Maximum Value.


Change Detection Analysis

SPEAR has three relative change detection methods: Transform, Subtractive, and Two-Color Multi-View (2CMV). Change detection analysis is used to highlight changes in imagery collected over the same area at different times. The Transform method uses Principal Components Analysis, Minimum Noise Fraction, or Independent Component Analysis to highlight changed areas. The Subtractive method computes the Normalized Difference Vegetation Index (NDVI) and band ratios for each input image, then subtracts the results to create difference images.
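The subtractive idea can be sketched with NDVI alone: compute the index for each date from its red and near-infrared bands, then difference the two dates. The reflectance values below are hypothetical, and the real tool also differences band ratios, which this sketch omits.

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

# Hypothetical red/NIR reflectances for two dates over two pixels:
# pixel 0 is unchanged vegetation, pixel 1 loses its vegetation.
red_t1 = np.array([[0.10, 0.10]]); nir_t1 = np.array([[0.50, 0.50]])
red_t2 = np.array([[0.10, 0.40]]); nir_t2 = np.array([[0.50, 0.20]])

# Subtractive-style difference image: vegetation loss shows up
# as strongly negative values.
d_ndvi = ndvi(red_t2, nir_t2) - ndvi(red_t1, nir_t1)
```

Thresholding such a difference image (e.g., flagging strongly negative pixels) is one simple way to turn it into a change map.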

Exercise #2: Change Detection Analysis, Subtractive Method

1. For this exercise, you will use two images created in the previous chapter: before_drk_sub.dat and after_drk_sub.dat. If you do not have them, you may use time#1.dat and time#2.dat in the tsunami directory. Time#1.dat is the before scene and time#2.dat is the after scene. If you need to use them, open them in ENVI.

2. Start the SPEAR Tools by selecting Spectral > SPEAR Tools > Change Detection.

The SPEAR Change Detection workflow panel.


3. Click Select Time #1 File, choose before_drk_sub.dat (or time#1.dat) for the base image, and then click OK. To preserve as much information as possible, if the scenes have different resolutions, use the highest-resolution image for image #1. The Auto Tie Point Matching Band dialog appears. Keep the default band selected for tie point matching, and click OK. Click Select Time #2 File, choose after_drk_sub.dat (or time#2.dat) for the warp image, then click OK. The Auto Tie Point Matching Band dialog appears again. Keep the default band selected for tie point matching, and click OK.

4. For output, make sure the path is envimil/enviout with the Output Root Name of change. Click Next. The Method Selection panel appears with co-registration parameters.


5. In the Co-Registration Parameters panel, select Images already co-registered. Click Next. The images are checked for where actual data exists and are subsetted to match; this process may take a couple of minutes. The Check Co-Registration panel appears, and two new images, change_time1_coreg and change_time2_coreg, are loaded into ENVI and displayed. Check them by clicking in one to see the other in a dynamic overlay. You may also use Flicker, Blend, and Swipe in the Auto-Flicker panel to make the comparison. Click Next.

6. For Change Detection Methods, select Two-Color Multiview (2CMV) and Image Transform. Click the Show Advanced Options tab to view other options for the various change detection routines. For Transform type, we will use the Principal Components default. Principal Component Analysis is recommended unless you have specific noise information for your data set. Click Next. Processing may take a few minutes. During this time, the after scene is normalized to the before scene, the two data sets are layer-stacked, and the PCA is run.
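The transform step just described, layer-stacking the two dates and running a PCA, can be sketched for one band per date. Because unchanged terrain is highly correlated between dates, the common signal loads on the major component and the change concentrates in a minor component (toy data below; in real scenes the change band varies, as the exercise notes).

```python
import numpy as np

rng = np.random.default_rng(0)

# Two co-registered single-band "dates": mostly identical terrain,
# plus a block of pixels that changed between acquisitions.
before = rng.normal(100.0, 20.0, (50, 50))
after = before.copy()
after[10:20, 10:20] += 60.0                     # the change

# Layer-stack the dates and run a PCA across the two "bands".
x = np.stack([before.ravel(), after.ravel()])   # (2, n_pixels)
x = x - x.mean(axis=1, keepdims=True)
cov = np.cov(x)
vals, vecs = np.linalg.eigh(cov)                # ascending eigenvalues
pcs = vecs.T @ x                                # component scores

# The component with the SMALLEST eigenvalue is approximately the
# between-date difference direction, so it isolates the change.
pc_change = pcs[0].reshape(50, 50)
```

The changed block stands out as strongly bright or dark in `pc_change` (the sign depends on the arbitrary eigenvector orientation), mirroring how change appears in one of SPEAR's PC bands.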

7. The Examine Results panel appears along with three display groups: in the display on the left, channel 1 of the Time #1 image is displayed as red, and channel 1 of the Time #2 image as both green and blue. The color in the display depends on the relative brightness of changed objects in the two scenes. Red areas are bright in the before scene but dark in the after. Cyan areas are bright in the after scene but dark in the before. The 2CMV image can only display one band at a time. To select another band (2-4), click on the See 2CMV for drop-down list, make a selection, then click Load Image. You can toggle the colors displayed (Normal or Reversed) using the 2CMV Colors option. The other two displays contain natural color images of the before scene and the normalized after scene.
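The 2CMV coloring can be sketched directly: stretch one band from each date and assign time #1 to red and time #2 to both green and blue. The simple min/max stretch below is a stand-in for ENVI's display stretch, and the function name and toy bands are hypothetical.

```python
import numpy as np

def two_color_multiview(band_t1, band_t2):
    """Build a 2CMV display array (sketch): time #1 drives the red
    channel, time #2 drives green and blue, so pixels brighter before
    show red and pixels brighter after show cyan."""
    def stretch(b):  # simple 0-255 min/max stretch
        b = b.astype(float)
        return ((b - b.min()) / (b.max() - b.min()) * 255).astype(np.uint8)
    r = stretch(band_t1)
    gb = stretch(band_t2)
    return np.dstack([r, gb, gb])   # (rows, cols, 3) RGB image

t1 = np.array([[0, 100], [200, 50]])
t2 = np.array([[0, 100], [50, 200]])
rgb = two_color_multiview(t1, t2)
# Pixel (1, 0) was bright before, dark after -> red dominates;
# pixel (1, 1) brightened -> cyan dominates.
```

Swapping which date feeds the red channel reproduces the Normal/Reversed toggle of the 2CMV Colors option.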

8. At this point, you may save the results using Export to NITF and/or Save to Graphic.

9. In the Examine Results panel, select the other Image Transform method by clicking the pull-down list for See results for.

10. The display group on the left now shows PC Band 1 from the change_pca result. A histogram window also appears, allowing customized stretching to highlight bright and dark areas in the transform results. One of the transform result bands highlights changes; however, it can be any one of the transformed bands. Typically, changes are in the second or third band, but it varies depending on scene content and the amount of change. The display groups are dynamically linked to allow you to compare and see which PC Band contains the change information. To change which band is displayed, select the desired band in the Examine Results panel, then click Load PCA Image. Try selecting PC Band 4 and loading it.

11. Changed areas are highlighted as bright and/or dark pixels in one of the transformed bands. Dark areas should correspond to features that decreased from before to after; bright areas should correspond to features that increased. Drag the dotted vertical bars in the histogram window to show contrast only in these bright and dark pixels to aid in visualizing just these areas. If the histogram plot is too low to see any structure, middle-click in the plot area to increase the scale. If the highlighted pixels relate to changes, exploit the result as desired. For example, make ROIs of the bright and dark areas in the change PC Band.

You may then overlay the ROIs on a single band image and output as a geotiff, and/or convert the ROIs to vectors.

12. When you are finished examining results, click Next in the Examine Results panel, then click Finish to exit the wizard.

Chapter Review

• SPEAR Tools offer a way to work with ENVI tools following a workflow scenario.

• TERCAT performs classifications using either user-defined or automatically generated classes.

• We applied two ways to perform a change detection analysis using the Change Detection tool.


Chapter 8: RX Anomaly Detection, Target Mapping, and Material Identification

What You Will Learn in this Chapter ......................................................................................... 151

The RX Anomaly Detection Algorithm in SPEAR Tools............................................................ 151

The RX Anomaly Detection Algorithm in THOR........................................................................ 159

Chapter Review......................................................................................................................... 169


What You Will Learn in this Chapter

In this chapter you will learn:

• About the RX Anomaly Detection algorithm and its application

• How to threshold detected targets using the interactive stretching tool

• How to tag detected targets as good or bad anomalies

• How to export results as vector files and .kml files for Google Earth

The RX Anomaly Detection Algorithm in SPEAR Tools

The ENVI RX Anomaly Detection tool uses the popular RXD anomaly detection algorithm in its classic form to extract anomalous features from spectral images. This ENVI tool provides a one-button solution for identifying anomalies in images, making it ideal for ENVI users of all abilities. Users need only select the input file, and the tool highlights the pixels that are different from the general image background. No expert knowledge of ENVI or image processing is required to quickly process images and create accurate, easy-to-interpret results.
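In its classic global form, the RX detector scores each pixel by the Mahalanobis distance of its spectrum from the scene mean, so pixels spectrally unlike the background receive high scores. The following is a minimal sketch of that idea, not ENVI's implementation; the toy data cube is hypothetical.

```python
import numpy as np

def rx_global(cube):
    """Classic global RX: Mahalanobis distance of each pixel spectrum
    from the scene mean.  cube is (rows, cols, bands); returns a
    (rows, cols) anomaly-score image."""
    rows, cols, bands = cube.shape
    x = cube.reshape(-1, bands).astype(float)
    mu = x.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(x, rowvar=False))
    d = x - mu
    scores = np.einsum('ij,jk,ik->i', d, cov_inv, d)  # d . C^-1 . d per pixel
    return scores.reshape(rows, cols)

# Toy scene: random background with one spectrally different pixel
rng = np.random.default_rng(1)
cube = rng.normal(size=(20, 20, 4))
cube[5, 5] += 8.0                       # an anomalous pixel
scores = rx_global(cube)
# The anomalous pixel receives the highest RX score
```

Because the score is relative to the whole-scene statistics, changing the Mean Source changes what counts as "background"; the Global option used in this exercise corresponds to the scene-wide mean and covariance above.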

Exercise 1: Use RX Detection to Detect Anomalous Pixels in a Multispectral Desert Scene

The satellite image you will process with the RX Anomaly Detection algorithm is a QuickBird image from northern Libya. (See the file DG_Demo_License.txt in the envimil\Libya directory for usage information.) Using the RX Detection algorithm, you'll be able to detect man-made objects in the desert landscape.

1. From the ENVI main menu bar, select File > Open Image File. The Enter Data Filenames dialog appears.

2. Navigate to the envimil\Libya directory, select Libya.dat, then click Open.

3. From the ENVI main menu bar, select Spectral > SPEAR Tools > Anomaly Detection.

4. Click the Select Input File button. Then select Libya.dat from the Select Input File dialog and click OK. Because wavelengths are not specified in the header file for this data set, you are asked to select the proper channels for the wavelengths. For Select BLUE band, click Band 1 and click OK. For Select GREEN band, click Band 2 and click OK. For Select RED band, click Band 3 and click OK. For Select NIR band, click Band 4 and click OK.

5. Click the Select Output Root Name button. Browse to the envimil\output directory, type Libya_anom for the output file name, and click Open.

6. Click the Next button to proceed.

7. For Anomaly Detection Parameters, leave the default selection of RXD as the Algorithm and Global as the Mean Source. Note that for scenes with little vegetation you can choose to have extra output generated that ignores vegetation. Click the Next button to proceed.

8. After processing is complete, two displays will appear. One display shows a natural color composite of the input scene and the other shows the gray scale result. The two displays are linked, so click in one display to see the corresponding area in the other.

The interactive contrast stretch control for the gray scale result also appears. This tool will be familiar from a previous chapter. To review, the Interactive Stretching dialog contains two image histograms: the one on the left is a histogram of the original data values as they exist in the data file, while the histogram on the right is generated after the image data has been byte-scaled into the range of values acceptable for the computer monitor (0 to 255). Also recall that image histograms often contain features associated with pixels of interest in the image. In the case of this RXD Anomaly image, anomalous pixels have high values relative to the rest of the pixels in the image. Move the dotted vertical line in the interactive contrast stretch control panel to change thresholds. If you haven't set ENVI's preferences to have this tool automatically apply changes, click Options > Auto Apply. Now any changes you make in the Interactive Stretching dialog will be applied without you having to click the Apply button.

Figure 47: Interactive Stretching Dialog
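The byte scaling behind the two histograms can be sketched as a simple linear stretch (a sketch with hypothetical values, not ENVI's code):

```python
import numpy as np

def linear_stretch(data, low, high):
    """Byte-scale data so values at or below `low` map to 0, values at
    or above `high` map to 255, with a linear ramp between.  Raising
    `low` (like dragging the stretch bar to the right) sends more
    background pixels to 0, isolating the high-valued anomalies."""
    scaled = (np.asarray(data, dtype=float) - low) / (high - low)
    return np.clip(scaled * 255.0, 0, 255).astype(np.uint8)

vals = np.array([0.0, 25.0, 50.419, 75.0, 100.0])
out = linear_stretch(vals, low=50.419, high=100.0)
# Pixels at or below the 50.419 threshold go to 0; only the
# strongest anomalies remain bright
```

This is why moving the stretch bars in the Input Histogram directly controls which anomalies remain visible in the display.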

9. You may notice that the two vertical dotted bars in the histogram are next to one another and that they are locked, meaning if you move one, the other moves with it (click Options to see that Lock Stretch Bars is selected). You can toggle Lock Stretch Bars off and on by double-clicking in the left-hand histogram window. Click on the vertical dotted bars and move them to the right. You should see that fewer pixels are highlighted. The strongest anomalies are at the extreme right side of the Input Histogram. Drag the dotted vertical bars until you see a value of 50.419 in the Stretch text box (Figure 47). This threshold value eliminates most of the smaller, less significant anomalies.

10. To help you evaluate what the anomalies are, click in one of the displays to use dynamic overlays to investigate the two images.

11. In the Anomaly Detection panel, click the Retrieve Value button to bring in the Stretch minimum value (50.419) you specified in the Interactive Stretching dialog. Note that you can choose to display a False Color Composite as the Reference Display. If you click the green arrowhead, you will start the Flicker option as another way to view dynamic overlays. Click the red button to stop the Flicker. Note that you can choose Swipe and Blend as comparison tools and that you can change the speed from Slow to Fast. You can also save an animated .gif of the flicker, swipe, or blend comparison by clicking the floppy disk button.


12. Click the Next button to proceed. The Manage Anomalies step of the Anomaly Detection workflow is next. The images are redisplayed and an Available Vectors List appears. A temporary vector layer is listed there, and polygons are drawn in the Natural Color Composite around the anomalies that are above the threshold you set in the previous step. The Manage Anomalies step allows you to tag the anomalies as Good (ones to keep track of) or Bad. Pending means you have not yet made a decision on an anomaly.

13. Dozens of anomalies are listed with their ID number, the number of Pixels they cover, and their Strength, which represents how far they stand out statistically from the background. Click the Sort by pull-down menu and select Pixels. Then select Strength.

14. Click on the tab labeled 1, which now represents the strongest anomaly. Both displays will update to show that anomaly in the middle of the Zoom window. Look in the Zoom window of the Natural Color Composite (or click in the Zoom window of the gray scale anomaly image) to see if you can tell what the anomaly is. You will see a blue vector outlining this anomaly. Blue means that this is the anomaly you have selected and are currently investigating.

You will look for buildings in the image and tag them as good anomalies.

15. The first (strongest) anomaly appears to be a structure. In the Natural Color Composite image, some of its pixels are blue. Click on the green GOOD flag button to tag this anomaly. The vector for this anomaly turns green, and the displays automatically update to the next strongest anomaly. Look at the Zoom windows to help you identify what this anomaly is. If it appears to be dark (perhaps some kind of holding pond), click on the red BAD flag button to tag it. The vector for this anomaly turns red, and the displays automatically update to the next strongest anomaly. Pending anomalies are yellow.


16. Go through the first 7 (strongest) anomalies. Tag them green if they appear blue in the Natural Color Composite. Leave them yellow (PENDING) if they appear bright white. Tag them red if they appear to be darker holding ponds. Your resulting list may look something like the one below (ignore the ID column and focus on the Pixels and Strength columns).

Note: Besides automatically moving to the next anomaly when you tag one, you can move to other anomalies by clicking their number or by clicking the Next arrow button with the blue background just above the anomalies list.


17. After tagging the first 7 (strongest) anomalies, click the Next button at the bottom of the Manage Anomalies panel to proceed. The Export Vectors step is next. Make sure the Good and Bad anomalies are selected in the Select Anomaly Type(s) to Export list. Then click the Select Vector Rootname button. For output, browse to the envimil\enviout directory, type Libya_anom for the output root name, and click Open.

18. Click the Next button to proceed. The Good and Bad Anomalies vector layers will appear in the Available Vectors List. Click the Finish button.

Next, you'll use supervised classification to try to map pixels that match the Good anomaly.


Exercise 2: Use Training Sites in a Supervised Classification

In this exercise you’ll use pixels flagged as anomalous by RXD in the previous exercise to look for similar materials in the scene.

1. Display Libya.dat with Band 3 as Red, Band 2 as Green, and Band 1 as Blue.

2. From the main ENVI menu, select Classification > Supervised > Minimum Distance. Select the Libya.dat file and click OK.

3. In the Minimum Distance Parameters dialog, select both EVF: Good Anomalies and EVF: Bad Anomalies. You will work with the rule images later, but you can try setting some thresholds for the classification result. For Max stdev from Mean, type in 1.

4. For Max Distance Error type in 50. Click the Preview button.

5. If you don't see any pixels highlighted in the Classification Preview window to the right, you may need to view a different part of the scene. To do this, click Change View, then in the Select Spatial Subset dialog click Image and move the red box to the top left part of the image, avoiding the dark corner of the scene. Then click OK twice to update the Classification Preview.

6. Output the result to a file named Libya_anom_mindist.dat, and output the rule images to a file named Libya_anom_mindist_rule. Click OK to start the process.

7. In the Available Bands List, right-click on the Min Dist band of Libya_anom_mindist.dat and select Load Band to New Display.

8. Right-click in one of the two displays and select Link Displays. Make sure that both displays are toggled to Yes and click OK. Move around the displays, clicking in one to see the other, and evaluate the classification.

The rule images can often be used to improve your classification, so we will evaluate them next.


9. From the Available Bands List, load the Rule (Good Anomalies) band of Libya_anom_mindist_rule to a new display, which should be Display #3.

10. Link Display #3 with the other two displays. In the #3 display group, right-click and select Link Displays. In the Link Displays dialog, toggle Display #3 to Yes. If you have an anomaly of interest centered in the other displays, then in the pull-down menu for Link Size / Position, select Display #1 or Display #2, then click OK.

As you may recall, the Minimum Distance algorithm measures the Euclidean distance between unknown pixels in the image and the known training signature in a multi-dimensional scatter plot where the number of dimensions is equal to the number of bands. Smaller distances indicate closer matches. In the Minimum Distance rule images, smaller distances are represented by lower numbers, so darker pixels are closer matches to the target signature. This is counter-intuitive because we most often think of mapped targets as being brighter in the image.
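The rule-image computation just described reduces to a per-pixel Euclidean distance. A minimal sketch follows (the class mean and pixel values are hypothetical, for illustration only):

```python
import numpy as np

def min_distance_rule(pixels, class_mean):
    """Euclidean distance from each pixel spectrum (rows of `pixels`)
    to a training-class mean.  Smaller values mean closer matches,
    which is why dark pixels in the rule image are the best matches."""
    return np.linalg.norm(pixels - class_mean, axis=1)

mean_good = np.array([120.0, 95.0, 80.0, 140.0])  # hypothetical class mean
pixels = np.array([
    [121.0, 96.0, 79.0, 141.0],   # nearly identical -> small distance
    [30.0, 200.0, 10.0, 60.0],    # very different   -> large distance
])
rule = min_distance_rule(pixels, mean_good)
# rule[0] is small (good match); rule[1] is large (poor match)
```

Reversing the color ramp, as the next step does, simply flips this convention so the best matches display as bright.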

11. In Display #3, reverse the color ramp (make dark pixels bright and bright pixels dark) so that pixels mapped as anomalies appear bright. From the #3 Display group menu bar, select Tools > Color Mapping > ENVI Color Tables. The ENVI Color Tables dialog appears with the first color table, B-W Linear, applied.

12. Move the Stretch Bottom slider to the right, then move the Stretch Top slider to the left. You can darken more of the background by moving the Stretch Bottom slider back to the left. However, another way to do this is to use the Interactive Stretching tool, which you will use next.


13. Right-click in #3 Display and select Interactive Stretching. The #3 Display Interactive Stretching dialog appears.

14. If you have not set ENVI preferences to have this tool automatically applied, select Options > Auto Apply.

15. Left-click on the right-hand vertical dotted line in the Input Histogram and drag it to the left. As you move it, you will see more and more of the background go dark. When the two vertical dotted lines are close together, have them move in tandem by selecting Options > Lock Stretch Bars (another way to toggle Lock Stretch Bars off and on is to double-click in the Input Histogram window). Now click on one of the vertical dotted lines and drag them both to the left. At some point you will get close to the threshold you set for the classification.


16. Use image linking to explore the result of the stretched minimum distance rule image relative to the RX classification in Display #2. You should have three images linked. To cycle through the three displays so that you see different overlays, hold the left mouse button down in the rule image and click the middle mouse button. When you see the classification result as the overlay, release both mouse buttons, then left-click as you would normally to compare the two displays. Move around the scene to compare the thresholded rule image to the classification result.

17. Close Display #3. Next you will determine the best threshold for each type of anomaly using the Rule Image Classifier.

18. From the ENVI main menu bar, select Classification > Post Classification > Rule Classifier. The Rule Image Classifier file input dialog appears.

19. Select Libya_anom_mindist_rule, then click OK. The Rule Image Classifier Tool dialog appears.

20. With the Rule Image Classifier, it is vital that you consider whether dark or bright pixels represent better matches to your target. Remember that with the Minimum Distance classifier, the dark areas (smaller values) are better matches. Therefore you must toggle Classify by to Minimum Value.


21. Click Hist for Rule (Good Anomalies). Because the closest matches for the target when using the minimum distance classifier are those pixels with the smallest values, you will focus on the left hand side of the histogram. Zoom into the lower left corner of the histogram by drawing a box with the middle mouse button.

22. Left-click in the Histogram dialog and find a conservative value that defines the threshold between the close matches for Good Anomalies and all the rest of the pixels in the image. The example above shows that the slope of the histogram starts to increase around a data value of 58.7. Enter 59 in the Thresh field for Rule (Good Anomalies) as a conservative value.

23. Uncheck the On box for Rule (Bad Anomalies). You can work with this rule image later if you have time. Click Quick Apply. Link this new classification display to Display #1 (right-click in Display #1 or Display #3 and select Link Displays), and examine the new classification result relative to the true color image. The pixels that match Good Anomalies appear to be areas of construction of man-made features.

24. To save this result, select Save to File in the Rule Image Classifier Tool dialog. In the Output Rule Classification Filename dialog, enter good_anomalies.dat, then click OK.

25. The new classification result will be listed in the Available Bands List. Load this result into Display #2.

26. If you have time, check the On box for Rule (Bad Anomalies) and find a good threshold for Bad Anomalies by evaluating the histogram for that rule image.

27. Close the Rule Classifier tool.

Exercise 3: Synthesize Results: Overlay Classifications and Annotations and Output to Graphics

In this exercise you'll synthesize your results and create a final output image. The first step is to create an ENVI annotation layer that highlights all the Good Anomalies.

1. From the #2 Display group menu bar (the display containing the classification image), select Overlay > Annotation. The #2 Annotation dialog appears.

2. From the #2 Annotation dialog menu bar, select Object > Ellipse. Make the Image window active for creating annotations by selecting the Image radio button.


3. Position your Image window so that you can see several groups of classified pixels. In the Display #2 Image window, left-click on one of the classified groups of pixels and create an ellipse around it by dragging the mouse. When you release the mouse button, you will see a red diamond handle at the center of the newly created annotation, which indicates you can edit the annotation (if you left-click in the Image window again, the ellipse changes shape; you can also click on the red handle and move it to a new location). To fix the annotation in place, right-click in the Image window.

4. Continue adding annotations to the image, denoting the mapped anomalies. As an example of what you might create, open the bitmap file Libya_Targets.bmp in the envimil\Libya directory. However, this result was created using looser thresholds, so more anomalies are mapped.

Note: If you are adding annotation when you don't want to, turn annotation control off by clicking the Off radio button.

5. To edit annotation, select Object > Selection/Edit from the Annotation dialog menu bar. In the Image window, left-click and drag the dashed rectangle selection tool so that it surrounds the annotation object. You should see the diamond handle reappear, indicating you can edit the object. You may wish to change the color of the circles by right-clicking the color box in the Annotation dialog and choosing from the pull-down list.

6. When you are finished with your annotation, save it to an annotation file. From the #2 Annotation dialog menu bar, select File > Save Annotation. In the Output Annotation Filename dialog, type good_anomalies.ann, then click OK. Next we will overlay the annotation on the true color image.

7. You should still have the true color image in Display #1. Select Overlay > Grid Lines.

8. In the #1 Grid Line Parameters dialog for the Geographic grid, set the Spacing to 0 degrees 0 minutes and 20 seconds. Click Apply.

9. If you close the Grid Line Parameters dialog, the grid will disappear, so move the dialog to the side or minimize it.

10. From the menu for Display #1, select Overlay > Annotation. Then select File > Restore Annotation. Browse to the envimil\enviout directory, where your good_anomalies.ann file should be. Select it and click Open.

11. You may make additional edits by clicking Object > Selection/Edit from the Annotation dialog menu bar. Then in the Image window, left-click and drag the dashed rectangle selection tool around an annotation object. You should see the diamond handle reappear, indicating you can edit the object.

12. When you are finished editing your annotation, re-save it. From the Annotation dialog menu bar, select File > Save Annotation. In the Output Annotation Filename dialog, type good_anomalies.ann, then click OK.

13. To save an image with your annotation burned in, from the #1 Display group menu bar select File > Save Image As > Image File. In the Output Display to Image File dialog, select TIFF/GeoTIFF from the Output File Type drop-down list. Give your file a name and output it to the envimil\enviout directory.

14. Close all display groups and open files.

The RX Anomaly Detection Algorithm in THOR

The ENVI RX Anomaly Detection tool in THOR is similar to the tool in SPEAR, but it adds a tool to compare unknown spectra to spectral libraries in order to identify materials. Some expert knowledge is useful to aid in interpretation.

Exercise 5: Use RX Detection to Detect Anomalous Pixels in a Hyperspectral Scene

The data set for this exercise was collected by the AVIRIS hyperspectral sensor, an airborne instrument. The data set is from Cuprite, Nevada.

1. From the ENVI main menu bar, select FileOpen Image File. The Enter Data Filenames dialog appears.

2. Navigate to the envimil\AVIRIS directory, select cup95_at.int, then click Open.

3. From the ENVI main menu bar, select Spectral > THOR Workflows > Anomaly Detection. THOR, like SPEAR Tools, has a help panel for each step. You can open and close this panel by clicking the black arrowhead.

4. Click the Select Input File button. Then select cup95_at.int from the Select Input File dialog and click OK.

5. You will select a spectral subset with which to work. The idea here is to only look at wavelengths where potential targets have identifiable reflectance features. Also, it is good practice to eliminate bands that are noisy or contain artifacts. Select a Spectral Subset by clicking on the Graphically button.

6. In the THOR Spectral Subsetting dialog there is a plot window with a representative spectrum on the left and a window to view individual bands on the right. You can "paint" bands Good or Bad by clicking the appropriate button and dragging the cursor across the plot. Bad bands will not be used in the analysis. The Bad Bands button is active right away. Move the cursor in the plot to about Band 6 at 2.04 micrometers, then click the left mouse button and drag all the way to the left. You will have painted those bands pink, tagging them as bad so they will not be used in the workflow. The bad bands here contain a carbon dioxide artifact.

7. Next, position the cursor near Band 44 at 2.419 micrometers, then click the left mouse button and drag all the way to the right. You will have painted those bands pink as well, tagging them as bad. These bands, the last channels of the data set, tend to be noisy and do not contain any spectral features of interest.

8. Click OK when finished with the Spectral Subset selection which should look like the figure below.


9. Click the Select Output Root Name button. Browse to the envimil\output directory, type cuprite_anom for the output file name, and click Open.

10. Click the Next button to proceed.

11. The first step in the THOR workflow is Atmospheric Correction. This step removes the effects of atmospheric gas absorptions, atmospheric scattering, and the shape of the solar irradiance curve. The goal of this step is to get the data to reflectance, or something that approximates reflectance. Reflectance spectra from the data can then be compared to spectral libraries to help identify materials based on their reflectance properties. In this case the data have already been converted to reflectance, so leave the Atmospheric Correction Method as None / Already corrected. Click the Next button to proceed. You may notice that a subset of the data will now be added to the Available Bands List.

12. The Dimensionality Reduction and Band Selection panel can be used to reduce the number of bands to process by applying an image transform. Image transforms attempt to pack signals into a reduced number of bands. The transformed bands are combinations of the original bands. By reducing the data in an intelligent manner, you can shorten processing times. From the Method pull-down menu select Image Transform. The Manual band selection method is similar to the spectral subset tool you worked with previously.

13. For Transform Method, select Independent Component Analysis. Click the Transform Params button to view options. Note that you can reduce the Number of Output IC Bands and get sampling statistics from a spatial subset. From the Masking Method pull-down menu, you can select a user-specified value for pixels that will not be evaluated (such as 0). Or you can specify a mask file if you have created one to mask out areas or features you are not interested in, such as clouds. For now, leave the defaults and click OK to close the ICA Parameters dialog.

14. In the Dimensionality Reduction panel click on the green Run Transform button.

15. When the ICA transform process is finished, there are a number of tools that you can use to select a reduced number of transformed bands. You can select them graphically, as you did with the spectral subset tool previously, or you can select them from a list. To help you evaluate the ICA bands, you can double-click a band or click Display Band to open the selected band in a display. Animate Bands will launch ENVI's animation tool. For ICA, you can click Plot 2D Coherence to see a plot of the spatial coherence for each transformed band (ICA bands that have some signal will generally have values above 0 or slightly negative).

Additionally, there is a useful tool that shows you how much weight each original band had on a particular ICA band. If you select Plot ICA Band Weights and then select an ICA Band from the ICA dialog, you will see a plot showing how much of a contribution each original band made to that ICA band. This is helpful because if you know at what wavelength your target has some unique reflectance feature, you can see which ICA band was most affected by the original band at that wavelength. For now you will process all bands, so click the Next button to proceed. Click Yes in the ENVI Question dialog about how many ICA bands to use.

16. For Anomaly Detection Parameters, leave the default selection of RX as the Algorithm. Click the Next button to proceed.

17. After processing is complete, the result will be listed in the Available Bands List and the Rule Thresholding panel will appear. The THOR Viewer will also appear, together with the Anomalies Histogram. You can select which image to display in the THOR Viewer (Natural Color, False Color, SWIR Color, etc.) from the THOR Viewer Controls in the Rule Thresholding panel. You can also choose which ICA band(s) to display in the THOR Viewer, or you can display the rule image from the RX processing.

The Anomalies Histogram panel shows the histogram plot for the rule image. The shaded portion of the histogram shows the portion being excluded from the anomaly overlay, and the unshaded portion represents the detected anomalies. Click and drag the threshold bar to adjust the threshold.


The Anomalies Histogram controls include check boxes to Hide the anomalies or make them Flicker so that you can see the underlying base image. You can change the color of the Regions of Interest (ROIs) that cover the anomalous pixels by clicking the ROI color box with either the left or right mouse button. You can also adjust the Opacity of the ROIs by moving the slider.

18. Adjust the anomalies threshold slider so that 0.1% of the pixels in the scene are selected as anomalies. This will be a DN of around 140.
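Choosing a threshold so that a fixed fraction of pixels is flagged corresponds to taking a high percentile of the rule image. A short sketch (the rule-image values are simulated, not the Cuprite result):

```python
import numpy as np

def anomaly_threshold(rule_image, fraction=0.001):
    """Return the rule-image value above which `fraction` of the
    pixels (here 0.1%) are flagged as anomalies."""
    return np.percentile(rule_image, 100.0 * (1.0 - fraction))

# Simulated rule image: 100,000 pixels with a roughly normal background
rng = np.random.default_rng(2)
rule = rng.normal(loc=100.0, scale=15.0, size=100_000)
t = anomaly_threshold(rule, fraction=0.001)
mask = rule > t
# About 0.1% of pixels exceed the threshold
```

Dragging the threshold bar in the Anomalies Histogram is the interactive equivalent of picking this percentile.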

19. Click the Next button to proceed.

20. The Spatial Filtering step allows you to exclude small anomalies with Sieve and to group clusters of anomalies together with Clump. Clump and Sieve are typically used with noisy results from multispectral data analysis and should be used with caution on hyperspectral data processing results. Skip this step by checking Do not perform spatial filtering. Click the Next button to proceed.

21. The Review Detections step of the Anomaly Detection workflow is next. The anomalies are redisplayed in the THOR Viewer. They are yellow, which means their evaluation is Pending. You can choose what base image to use underneath the ROIs by clicking the pull-down menu for Reference Image. Dozens of anomalies are listed with their ID number, the number of Pixels they cover, and their Strength, which represents how far they stand out statistically from the background. By default, the anomalies are listed in order of appearance in the display (top left to bottom right). Click the Sort by pull-down menu and select Pixels. Then select Strength. Scroll back to the top of the anomalies list to see the anomaly now listed as number 1.


22. Click on the tab labeled 1, which now represents the strongest anomaly. The THOR Viewer will update to show that anomaly in the cross hairs. Click the Plot Spectrum button to view the average spectrum for that anomaly ROI.


23. The first (strongest) anomaly shows a strong absorption feature around 2.34 micrometers. To compare this spectrum to a spectral library to help you identify the material, click the Identify Material button. The Material Identification panel will appear with a list of spectral libraries that have been searched and a plot showing the Unknown ROI spectrum in red and the matching spectral library spectrum in green. The blue curve across the bottom shows the importance of each band in identifying the material. Note that there is a spike below the strong absorption feature near 2.34 micrometers.

24. Check the box for Edit bad bands. This will allow you to select portions of the spectrum that you wish to exclude. If you focus on just the wavelengths that contain the absorption feature, you will increase the accuracy of the identification. Use the Bad Bands button to reduce the number of Good Bands as shown below. If you make a mistake and paint too many bands Bad, click the Good Bands button and repaint. Then click Update Results.


25. The list of matches at the bottom of the panel will update. You should see Concrete and Calcite listed as the best matches. Calcite is the mineral that makes up limestone, a common rock type. Limestone is often a component in concrete, so it is no surprise to see the calcite absorption feature in concrete. If you click on the row number for one of the listed matches, the spectral plot will update, allowing you to evaluate the match.

26. Material Identification uses the Spectral Angle Mapper routine to compare spectra, so you see a column for Spectral Angle. The smaller the angle, the better the match. Note that you can choose which spectral libraries to use in the comparison by highlighting them in the Select Libraries to Search window. Close the Material Identification panel.
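The spectral angle itself is straightforward to compute: it is the angle between the two spectra treated as vectors in n-band space. A minimal sketch follows (the function name and sample spectra are illustrative, not ENVI code); note that scaling a spectrum does not change its angle, which is why SAM is relatively insensitive to overall illumination differences:

```python
import numpy as np

def spectral_angle(unknown, reference):
    """Angle (radians) between two spectra treated as vectors in
    n-band space, the quantity reported by Spectral Angle Mapper."""
    unknown = np.asarray(unknown, dtype=float)
    reference = np.asarray(reference, dtype=float)
    cos_angle = np.dot(unknown, reference) / (
        np.linalg.norm(unknown) * np.linalg.norm(reference))
    return float(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

# A spectrum matched against a brighter copy of itself gives an angle
# near zero; a smaller angle means a better (shape-based) match.
s = np.array([0.21, 0.35, 0.40, 0.12])
angle_same = spectral_angle(s, 2.5 * s)            # ~0: same shape
angle_diff = spectral_angle(s, [0.40, 0.12, 0.21, 0.35])  # larger: different shape
```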

27. With the row for the strongest anomaly selected, click on the red Bad button. THOR will update so that the second strongest anomaly is selected and shown in the THOR Viewer. Click on the Plot Spectrum button to see the average spectrum for that ROI. It looks to be the same material, so click on the red Bad button to tag it.

28. Select the anomaly listed as number 7, then click on Plot Spectrum. It may look like the figure below, with an absorption feature (low point in the spectrum) near 2.12 micrometers.


29. Click on Identify Material. You may see orthoclase listed as the best match. However, inspection of the two spectra shows that orthoclase has its lowest minimum near 2.20 micrometers, whereas the Unknown has its lowest minimum near 2.12 micrometers.

30. Focus only on the wavelengths where you see the absorption feature in the Unknown by clicking on the box for Edit bad bands. Then use the Bad Bands button to reduce the number of Good Bands as shown below. Then click Update Results.

31. You may see hydroxyapophyllite and buddingtonite listed as the top two choices now. However, hydroxyapophyllite has a very deep absorption feature near 2.4 micrometers that the Unknown does not. Click on the listing for buddingtonite to see that plot in the Material Identification window. This is a fairly close match. Click on the Export to ENVI Plot button to display the Unknown and buddingtonite in a typical plot window. This window allows you to click on a spectrum to see at what wavelengths the absorptions have their lowest point. Buddingtonite appears to be the best match for this Unknown. It is certainly possible that this mineral can occur in the volcanic rocks in this area. Buddingtonite is sometimes associated with gold deposits.

32. In the listing of all anomalies, click on the green Good flag button to tag this anomaly.


33. Check other anomalies if you have time. When you are finished click the Next button to proceed.

34. In the Export Detections panel, select Good and Bad targets and click Export to ROIs. A display appears together with a ROI Tool. The anomalies are listed in the ROI Tool. The main area for the Good anomaly is near the center of the scene. Before you rush out to this area with a pick and shovel to dig for gold, know that this area has already been sampled and there is very little gold here.

35. Save the ROIs by clicking File → Save ROIs in the ROI Tool. Click Select All Items, then browse to the envimil\enviout directory and type in Cuprite_anom.roi for the output file name. Click OK to save the file. You can save the image with the ROIs overlain by clicking File → Save Image As → Image File in the display.

36. Close the display window and click the Next button to proceed. Processing is now complete so click Finish. Close all plots, displays, and dialogs that may still be open.

37. If you have time, re-run THOR Anomaly Detection using a different hyperspectral data set. Or you could use the anomaly ROIs to look for similar materials using a supervised classification tool.

Chapter Review

• The RXD algorithm is an effective way to discriminate objects that are anomalous in multispectral and hyperspectral images.

• In the multispectral exercise, anomalous pixels were discriminated by RXD; you then classified the image using the Minimum Distance classifier.

• The hyperspectral anomaly detection tool found rare materials in that scene. Then the Material Identification tool was run to identify materials by comparing their spectra to spectral libraries.


Chapter 9: Image Sharpening

What You Will Learn in this Chapter ......................................................................................... 172

RGB Sharpening ....................................................................................................................... 172

Opening and Viewing Exercise Data......................................................................................... 172

RGB Image Sharpening ............................................................................................................ 173

Spectral Sharpening.................................................................................................................. 174

Chapter Review......................................................................................................................... 180


What You Will Learn in this Chapter

In this chapter you will learn how to:

• Apply panchromatic sharpening for improved spatial resolution of an RGB display

• Apply spectral sharpening for improved spatial and spectral resolution

RGB Sharpening

ENVI’s image sharpening tools are extremely useful for improving the visual and spectral interpretability of imagery. Many imaging systems acquire high spatial resolution panchromatic (gray scale) imagery coincident with lower resolution multispectral bands. The high spatial resolution panchromatic imagery can be used with a number of algorithms to improve the spatial resolution of the multispectral bands through a process known as sharpening.

The sharpening algorithms in ENVI generally fall into two classes: those that merge an RGB color image (3-band byte-scaled imagery only) with a high-resolution gray scale image, and those that merge any number of spectral bands with a high-resolution gray scale image.

ENVI has two RGB image sharpening techniques: the HSV transform and the color normalized (Brovey) transform. To use these algorithms, the images must either be georeferenced or have the same image dimensions. The RGB input bands for the sharpening also must be stretched byte data or selected from an open color display.
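The color normalized (Brovey) transform is commonly described as scaling each input band by the ratio of the pan band to the mean of the bands, so spatial detail comes from the pan band while the band ratios (the color) are preserved. Below is a minimal sketch under that description, assuming the RGB bands have already been resampled to the pan grid; the function name and pixel values are illustrative, not ENVI's implementation:

```python
import numpy as np

def brovey_sharpen(r, g, b, pan):
    """Sketch of the color normalized (Brovey) transform: each band is
    scaled by the ratio of the pan band to the mean of the three bands,
    so brightness detail comes from pan while band ratios are kept."""
    mean = (r + g + b) / 3.0
    mean = np.where(mean == 0, 1e-12, mean)  # guard against divide-by-zero
    return r * pan / mean, g * pan / mean, b * pan / mean

# Uniform test patch: mean of the bands is 90, pan is 180, so every
# band is scaled by a factor of 2 while the color ratios are unchanged.
r = np.full((4, 4), 60.0)
g = np.full((4, 4), 120.0)
b = np.full((4, 4), 90.0)
pan = np.full((4, 4), 180.0)
r2, g2, b2 = brovey_sharpen(r, g, b, pan)
print(r2[0, 0], g2[0, 0], b2[0, 0])   # 120.0 240.0 180.0
```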

The sharpening algorithms mentioned above use data transforms whereby the data space is altered, as in a principal components rotation or a color model transform.

Opening and Viewing Exercise Data

The data used in this exercise is the same as that used in the earlier chapter “Electromagnetic Spectrum Image Display and Analysis.”

Exercise 1: Open and View Panchromatic and Multispectral Imagery

1. From the ENVI main menu bar, select File → Open Image File. The Enter Data Filenames dialog appears.

2. Navigate to the envimil\Quickbird directory and open boneyard_pan.dat. This is the panchromatic (one color, or gray scale) image that is collected coincident with 4-band multispectral QuickBird imagery. In the Available Bands List, click the plus sign next to the Map Info icon. This contains useful map projection information for the image, as well as the pixel size. Note the pixel size of 0.6 meters.

3. Load the panchromatic band into a new display. In Display #1, take a few moments to explore the image.

Next we’ll compare this image with the multispectral bands.

4. From the Available Bands List File menu bar, select Open Image File. The Enter Data Filenames dialog appears.

5. Navigate to the envimil\Quickbird directory, select boneyard_mul.dat, then click Open.

6. From the Available Bands List, load Band (4,3,2) RGB into a new display.

7. In the Display #2 Image window, right-click and select Geographic Link. In the Geographic Link dialog, toggle both displays to On, then click OK.

8. Move the Zoom box in the Display #2 Image window and notice the difference in spatial resolution between the two displays.

RGB Image Sharpening

Exercise 2: Apply HSV Sharpening

HSV Sharpening works through color model transforms.

Color models are different ways of defining individual colors through spatial coordinate systems. The default color model in ENVI is RGB, or the cubic color model, because it represents the way that humans perceive color. In the RGB color model, red, green, blue, cyan, yellow, magenta, black, and white all occupy corners of the cube, and any color is defined by a 3D coordinate within the cube. Color can, however, be modeled in different ways. For example, the Hue, Saturation, Value (HSV) color space models color (hue) on a circle whose radius is proportional to the saturation (purity) of the color. Value is a measure of the brightness of the color.

HSV Sharpening works by first transforming the displayed RGB bands to the HSV color space. Next, the high-resolution image is histogram matched to the value image. The HSV transformed image is then resampled to match the spatial resolution of the panchromatic image. Finally, the high-resolution panchromatic image is substituted for the value image, and the three-band HSV image is transformed back into the RGB color space.
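That sequence can be sketched in a few lines of numpy using matplotlib's color space converters. This is not ENVI's implementation: a simple mean/standard deviation match stands in for histogram matching, the nearest-neighbor upsample is a simplification, and the function and array names are illustrative:

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def hsv_sharpen(rgb, pan, scale):
    """Minimal HSV pan-sharpening sketch. `rgb` is (rows, cols, 3) in 0-1,
    `pan` is (rows*scale, cols*scale) in 0-1, and `scale` is the integer
    resolution ratio. A mean/std match stands in for histogram matching."""
    hsv = rgb_to_hsv(rgb)
    v = hsv[..., 2]
    # Match the pan band's statistics to the value channel
    pan_matched = (pan - pan.mean()) / (pan.std() + 1e-12) * v.std() + v.mean()
    # Upsample hue and saturation to the pan grid (nearest neighbor)
    hsv_hi = hsv.repeat(scale, axis=0).repeat(scale, axis=1)
    # Substitute the matched pan band for the value channel, then invert
    hsv_hi[..., 2] = np.clip(pan_matched, 0.0, 1.0)
    return hsv_to_rgb(hsv_hi)

rgb = np.random.rand(16, 16, 3)
pan = np.random.rand(64, 64)
sharp = hsv_sharpen(rgb, pan, scale=4)
print(sharp.shape)   # (64, 64, 3)
```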

1. From the ENVI main menu bar, select Transform → Image Sharpening → HSV. The Select Input RGB dialog appears.

2. Select Display #2, then click OK. This tells ENVI that you plan to use the currently displayed band combination in the HSV transform.

3. In the High Resolution Input File dialog, select the Pan band from boneyard_pan.dat, then click OK. The HSV Sharpening Parameters dialog appears.

4. Set Nearest Neighbor as the Resampling type. Enter the output filename as HSV_sharp.dat, then click OK.

5. When processing is completed, load the result into a new display as an RGB Color composite. (Be sure to match up the HSV Sharp R band to the red color channel, and so on.)

6. From the #3 Display menu bar, select Tools → Link → Geographic Link. The Geographic Link dialog appears.

7. Toggle all Displays to On, then click OK.

Question: How does the HSV Sharpened image look relative to the original RGB near-infrared color composite?

8. From the #2 Display group menu bar, select Tools → Pixel Locator. The #2 Pixel Locator dialog appears.

9. Enter Sample = 1140, Line = 215, then click Apply.

Question: Why do you think the vegetation appears much brighter in the original multispectral near-infrared color composite in Display #2?

Figure 48 shows the spectral response functions for the four multispectral QuickBird bands and the panchromatic band. A spectral response function simply shows the portion of the electromagnetic spectrum that an image band samples. Note that the panchromatic band collects radiation across the visible and near-infrared, but that its peak intensity occurs in the transition between green and red.

Also recall that vegetation is one of the strongest natural reflectors of near-infrared radiation. The vegetated pixels aren’t as bright in the panchromatic band as they are in the near-infrared band that makes up the near-infrared color composite.

Figure 48: Spectral Response Functions for the 5 QuickBird Bands

10. Close Display #3 that contains the HSV sharpened image and the #2 Pixel Locator dialog. Keep all other display groups and dialogs open for the next exercise.

Spectral Sharpening

There are three algorithms that can be used for multi-band sharpening: Gram-Schmidt, PC and CN spectral sharpening. These techniques are generally preferable if your goal is to do any type of radiometric analysis of your panchromatic sharpened results because the spectral information from all channels is preserved.

One caveat is that pixel-level spectral signatures are modified by pan sharpening, an important consideration if your spectral mapping application relies on the exploitation of subtle variations in spectral signatures.

These modifications are, in most cases, statistically insignificant; however, you should be aware that these techniques can bias results.

As with the RGB sharpening techniques, the images used must either be georeferenced or have the same image dimensions. If the images are georeferenced, ENVI coregisters the images prior to performing the sharpening.

Exercise 3: Apply Gram-Schmidt Spectral Sharpening

Gram-Schmidt Spectral Sharpening works through a data space transform, much like the color model transform previously discussed for HSV sharpening. First, a panchromatic band is simulated from the lower spatial resolution spectral bands. Second, a Gram-Schmidt transformation is performed on the simulated panchromatic band and the spectral bands, where the simulated panchromatic band is employed as the first band. A Gram-Schmidt transform is a method for orthogonalizing a set of vectors in an inner product space, and could conceptually be compared to a principal components transform. Third, the high spatial resolution panchromatic band is swapped with the first Gram-Schmidt band. Finally, the inverse Gram-Schmidt transform is applied to form the pan-sharpened spectral bands.
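The orthogonalization at the heart of the transform can be sketched as follows. This is the textbook Gram-Schmidt procedure, not ENVI's full sharpening implementation (the band substitution and inverse transform are omitted), and the sample vectors are purely illustrative:

```python
import numpy as np

def gram_schmidt(vectors):
    """Classical Gram-Schmidt orthonormalization: each vector has its
    projections onto the previously processed vectors removed, then is
    normalized to unit length."""
    basis = []
    for v in vectors:
        w = np.asarray(v, dtype=float).copy()
        for b in basis:
            w -= np.dot(w, b) * b
        basis.append(w / np.linalg.norm(w))
    return np.array(basis)

# Treat each band as a vector of pixel values; the simulated pan band
# goes first, so every later component is orthogonal to it.
sim_pan = np.array([1.0, 2.0, 3.0, 4.0])
band = np.array([2.0, 1.0, 0.0, 5.0])
basis = gram_schmidt([sim_pan, band])
print(np.dot(basis[0], basis[1]))   # ~0: the components are orthogonal
```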

1. From the ENVI main menu bar, select Transform → Image Sharpening → Gram-Schmidt Spectral Sharpening. The Select Low Spatial Resolution Multi Band Input File dialog appears.

2. Select boneyard_mul.dat, then click OK. The Select High Spatial Resolution Pan Input Band dialog appears.

3. Select the Pan band from boneyard_pan.dat, then click OK. The Gram Schmidt Spectral Sharpen Parameters dialog appears.

4. Use the default selection of Average of Low Resolution Multispectral File in the Select Method for Low Resolution Pan area. Set Resampling to Bilinear.

Bilinear resampling will result in a smoother looking output image. If nearest neighbor had been used, artifacts of the larger multispectral pixels would have been apparent in the output image.

5. Enter the output filename as boneyard_GS_sharp, then click OK.

Note: Spectral sharpening is computationally intensive and will take a minute or two to run.

6. From the Available Bands List, load a near-infrared color composite Band (4,3,2) RGB into a new display.

7. Take a few minutes to explore the image in Display #3. Look at the Scroll windows between Displays #2 and #3. They should look very similar even though the two images have widely disparate spatial resolutions.

Next, you’ll compare some spectral signatures between the original and sharpened images.

8. From the #3 Display group menu bar, select Tools → Link → Geographic Link. The Geographic Link dialog appears.

9. Ensure Displays #2 and #3 are On, and Display #1 is Off. Click OK.

10. From the #2 Display group menu bar, select Tools → Pixel Locator. The #2 Pixel Locator dialog appears.

11. Enter Sample = 1144, Line = 213, then click Apply.

12. From the #2 Display group menu bar, select Tools → Profiles → Z Profile (Spectrum). The #2 Spectral Profile plot appears.

13. From the #3 Display group menu bar, select Tools → Profiles → Z Profile (Spectrum). The #3 Spectral Profile plot appears.

14. From the #3 Spectral Profile dialog menu bar, select Edit → Data Parameters. The Data Parameters dialog appears.

15. Change the color of the signature by right-clicking on the color box and choosing Items 1:20 → Red. Click Cancel.

16. In the #3 Spectral Profile dialog, right-click and select Plot Key. Left-click and drag and drop the legend name for the signature in the #3 Spectral Profile dialog into the #2 Spectral Profile dialog.

Note the subtle difference in signatures caused by spectral sharpening.

17. Close all display groups and dialogs.


Exercise 4: SPEAR Tools Pan Sharpening

Panchromatic images provide high spatial resolution detail that may be useful for observing the presence of a vehicle, but may not be used to determine the color of that vehicle. Conversely, lower spatial resolution multispectral images cannot as readily be used to determine that the object is a vehicle, but may be used to observe the color of the object. By merging high spatial resolution data with the color characteristics of the multispectral data, an analyst may be able to determine both the presence and color of objects. SPEAR Tools offers the Gram-Schmidt Spectral Sharpening routine, among others, through a workflow scenario with help panels to guide the user through the process.

1. Navigate to File→Open Image File from the main ENVI menu. The Enter Data Filenames dialog appears.

2. Navigate to envimil\quickbird and hold down CTRL while clicking on the following two datasets:

a. kandahar_ms_subset

b. kandahar_pan_subset

3. Click on the Open button. The two selected datasets are loaded into the Available Bands List. kandahar_ms_subset is a 4-band QuickBird multispectral image of Kandahar, Afghanistan, and kandahar_pan_subset is a QuickBird panchromatic image.

4. To view a true-color composite of the multispectral image, right-click on the file name kandahar_ms_subset and choose Load True Color.

5. Right-click on the panchromatic image band name (Band 1: Orthorectified) and select Load Band to New Display. A new ENVI display group will open with the gray scale panchromatic image.

6. The two images cover overlapping geographic areas, so they can be geographically linked. Link the two images by right-clicking in one of the displays and selecting Geographic Link…. In the Geographic Link dialog, turn both displays On and click OK.

7. Navigate around the panchromatic scene either by dragging the red box for the zoom window around in the full resolution window, or by dragging the red box for the full resolution window around in the scroll window. Observe how the multispectral image is updated accordingly. As the cursor is moved around in one image, the position of both zoom windows updates. Close both displays when you are finished.

8. From the ENVI menu select Spectral→SPEAR Tools→Pan Sharpening. The Pan Sharpening interface appears.


File Selection is the current step for our workflow. All SPEAR interfaces provide a simple help system located on the left side of the interface.

9. Click on the button labeled Select High Res File… and select Band 1 of the panchromatic image. Click OK.

10. Click on the Select Low Res File… button and select the multispectral image (note the File Information in the panel on the right). Click OK.

11. The Low Res Band Match dialog offers you a choice of which low res band to use for matching. A later step will give you the option of automatically co-registering the two image datasets. To do this, SPEAR will need to know which multispectral image band you would like to use to auto-generate tie points to match the panchromatic scene. It is a good practice to use the red band when attempting to automatically co-register data sets. The red band is band 3. Choose Resize(Orthorectified (Band 3) and click OK.

12. There are additional options available in the File Selection step. If you only wanted to perform pan sharpening on a small spatial subset, you could click on the Select Subset button and use ENVI’s spatial subset utility to define a specific geographic region on which to perform the Pan Sharpening process. Click on the Select Output File… button and navigate to envimil\enviout. Name the output file kandahar_panSharp, then click Open.

13. Click Next to proceed to the next step in the workflow.

14. To perform Pan Sharpening, the two images being used must be co-registered. The purpose of co-registration is to have particular ground features in both images at the same geographic location. If both images are orthorectified, they are essentially already co-registered. If both images are not orthorectified, they will likely need to be co-registered. SPEAR provides two different methods for performing co-registration: manual and automatic tie-point generation.


To perform the co-registration process, you will be using ENVI’s auto tie point generation capability. Make sure the Select tie points automatically radio button is selected.

15. ENVI’s auto tie point generation tool attempts to identify similar features in both images using both the approximate geographic location and the actual observed pixel data. The first step in the auto tie point generation process is to determine some seed points to help the auto tie point tool get started.

Seed points are generated solely on the geographic position and do not take into account the pixel data.

These seed points are merely a starting point for the auto tie point tool and may not be used in the actual co-registration process. Make sure Use seed points is selected. Click on Auto-Generate Seed Points, and 4 points equally spaced throughout the image are generated.

Click Show Advanced Parameters at the bottom of the interface to view other available parameters.

You can click on the Help button at the bottom of the interface to get more information on these.

After the seed points have been generated, the Method Selection panel should look similar to the following image.

16. Under the Advanced Parameters section, reduce the Number of Tie Points down to 25. This scene is a small subset without a lot of relief. Click Next to proceed to the next step. After the tie points are generated, the two images will be redisplayed and the Ground Control Points dialog and Image to Image GCP List appear. They allow you to review the tie points and remove potentially erroneous points prior to performing the co-registration process.

17. Each tie point that is generated has a certain error associated with it based on how well it fits relative to other tie points. To obtain the most accurate co-registration possible, it is best to remove tie points with high RMS (Root Mean Squared) error.


All of the tie points in the Image to Image GCP List are colored based on their RMS error. To see this error you may need to click on the corner or edge of the Image to Image GCP List and drag it to make it wider. The RMS column is on the right hand side. If a tie point has an RMS error value of greater than 5 it is colored red. If it is between 3 and 5 it is colored yellow. If it is below 3 it is green. Click Options→Order Points by Error to put the points with the highest error at the top of the list.

18. Click on the number for individual points on the left margin of the list to have the displays update to show the point in the middle of each Zoom window. This allows you to judge whether a point is accurate or not. Click on the number for each point with a high RMS value and observe the position in each image. If you do not like the placement of a point, you can either click Delete or you can reposition a tie point by placing the same object under the cross hairs in the zoom windows of both displays and clicking Update. The overall RMS error for all tie points is listed at the bottom of the Ground Control Points dialog. A good overall RMS error is 2.0 pixels. In areas where you have significant topography, some individual points may have an error greater than 2.0 but still appear to be accurately located. However, the area in this scene is fairly flat, so you will set a higher standard for each point.

19. For this exercise you will only use tie points that individually have an RMS error below 2.0. Back in the main Review Tie Points panel, type 2.0 into the Maximum allowable RMS per GCP: text box and click Apply. All tie points with an RMS above 2.0 are turned off and placed at the bottom of the Image to Image GCP List.
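The per-point screening that SPEAR performs can be sketched in a few lines of numpy: each tie point's RMS error reflects the distance between where the warp model predicts the point should fall and where it was actually placed. The coordinates below are made up for illustration, not taken from this dataset:

```python
import numpy as np

# Hypothetical tie points: where the warp model predicts each GCP should
# fall in the base image vs. where it was actually placed (pixels).
predicted = np.array([[100.2, 50.1], [220.0, 81.5], [310.4, 160.0], [40.0, 300.9]])
actual    = np.array([[100.0, 50.0], [223.1, 79.0], [310.0, 160.5], [40.2, 301.0]])

# Per-point error is the Euclidean length of the residual vector
residuals = predicted - actual
rms = np.sqrt((residuals ** 2).sum(axis=1))

# Keep only tie points under the 2.0-pixel threshold used in the exercise;
# the second point (~3.98 pixels of error) is dropped.
keep = rms < 2.0
print(keep)   # [ True False  True  True]
```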

20. In addition to manipulating tie points, this SPEAR interface allows you to determine the image warp method and the pixel value interpolation method. Warp methods include RST (Rotation, Scaling, and Translation), Polynomial, and Triangulation. Interpolation methods include Cubic Convolution, Bilinear, and Nearest Neighbor. Make sure the Method is set to Polynomial and the Interpolation is set to Cubic Convolution, then click Next. The multispectral image will now be co-registered to the pan image. This may take some time depending on the size of the images.

21. When the image registration step is finished, you will be prompted to check the accuracy of the co-registration process. The original panchromatic image and the co-registered multispectral scene are loaded into new displays and linked. Click in either the pan scene or the multispectral scene to view the other scene. If objects line up properly between the two images, the co-registration was accurate. You can also click on the green arrowhead to Flicker (or Swipe, or Blend) between the two scenes. Press the pause button or the stop button to end the comparison. Pressing the floppy disk button will allow you to save an animated .gif file.

22. If you are satisfied with the co-registration result, the final parameter you must determine is the actual pan sharpening algorithm you would like to use. Many different techniques are available, including Gram-Schmidt, Principal Components, Hue, Saturation, Value (HSV), and Color Normalized (Brovey). Each method has its own benefits and shortcomings. They are discussed in the panel on the left side of the Check Co-Registration dialog. Gram-Schmidt is a popular technique that is recommended for most scenarios.

Confirm that Sharpening method is set to Gram-Schmidt and click Next. The progress dialog for the pan sharpening process will appear and the image will be pan sharpened.

23. When the process is finished, the pan sharpened result will be displayed. From the Processing Complete panel, you can write the final pan sharpened product to a NITF file, export the result to an ArcGIS Geodatabase, or compute spectral quality indices. Click on Export Image to NITF. All of the necessary NITF metadata will be taken from the original Pan and multispectral datasets to populate the metadata fields of the output NITF file. You can also add various tags yourself. Click OK in the NITF Metadata Editor, then click Save in the output file name dialog.

24. Click Finish to exit the Pan Sharpening workflow. Then close all displays.

Chapter Review

• Panchromatic sharpening is a very useful technique for improving the spatial and spectral interpretability of an image.

• Image data space manipulations like color model and principal components transforms are the basis behind different sharpening algorithms.

• RGB sharpening techniques are useful for improving the visual interpretability of imagery.

• Spectral sharpening techniques are useful for improving the visual interpretability of a multispectral dataset while maintaining its spectral integrity.


Chapter 10: Topographic Analysis for Mission Planning

What You Will Learn in this Chapter ......................................................................................... 182

Explore the Image Data ............................................................................................................ 182

Topographic Analysis................................................................................................................ 183

Chapter Review......................................................................................................................... 195


What You Will Learn in this Chapter

In this chapter you will learn:

• How to apply a trafficability study in an arid landscape using topographic slope and texture measures

• How to create an image mask highlighting trafficable areas in the image

• How to create a polygon vector file showing untrafficable areas

• How to create a 3D overlay of high resolution imagery over a digital elevation model with vector overlay

• How to create a fly through of the 3D surface view and save it to an MPEG file

In this practicum, you’ll use data fusion to assess land surface trafficability. The scenario is that a portion of an armored battalion needs to cross the terrain in the image from the Northwest to the Southeast. Two datasets are used:

• A QuickBird multispectral image will be analyzed for surface roughness

• Slope measurements will be derived from a digital elevation model of the region.

These two measures are combined to find those areas in the image with low slope and relatively smooth surface. Your task is to determine the best path across the image.

Explore the Image Data

Exercise 1: Open and Explore the Datasets to use in the Analysis

1. From the ENVI main menu bar, select File → Open Image File. The Enter Data Filenames dialog appears.

2. Navigate to the envimil\terrain directory, select mongolia_QB.dat, then click Open.

3. From the Available Bands List, right-click on the file name and select Load CIR. A near-infrared color composite (4,3,2) RGB image will display. Take a few moments to explore the image.

This image was acquired by DigitalGlobe’s QuickBird satellite over the Gobi desert in south central Mongolia. As you can see, the region is sparsely populated and extremely arid. Very little vegetation is visible in the image.

4. Open the digital elevation model you’ll use in this exercise. From the ENVI main menu bar, select File → Open Image File. In the terrain directory, select mongolia_dem.dat, then click Open.

5. From the Available Bands List, right-click on Band 1 for mongolia_dem.dat and select Load Band to New Display.

6. Link the two display groups (right-click in either display group, then select Link Displays).

An ENVI annotation file was previously created that shows the starting and ending locations for the armored battalion.

7. From the #1 Display group menu bar, select Overlay → Annotation. The #1 Annotation dialog appears.

8. Select File → Restore Annotation. In the Enter Annotation Filename dialog, select start-finish.ann, then click Open. Use the red image box in the Scroll window to examine the two annotations. These give you a general idea of the path that the battalion will need to take.

9. Close the #1 Annotation dialog.

Topographic Analysis

In a DEM, the Z value of each (x, y) location represents elevation. ENVI’s topographic modeling tools can calculate parametric information including slope, aspect, and shaded relief (assuming Lambertian surfaces).

A plane is fit to a kernel of a user-defined size centered over each pixel, and the slope and aspect of the plane are calculated. A root mean squared (RMS) error image, which indicates the planarity of the surface within the kernel, can also be generated.

Slope is expressed in degrees from horizontal. Aspect is expressed in degrees from north (0 degrees is North, 90 degrees is East, 180 degrees is South, and 270 degrees is West).

The sun elevation and azimuth must be specified to produce a shaded relief image. ENVI can calculate these parameters, given the month, day, year, time, latitude, and longitude. Shaded-relief images help visualize topography by simulating sunlight. Areas of direct sunlight are bright, and shadowed areas are dark.
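The slope, aspect, and shaded-relief computations described above can be sketched outside ENVI. The following is a minimal NumPy illustration of the idea using centered finite differences rather than ENVI’s kernel plane fit; the function name and the coordinate conventions (square pixels, x = east, y = north) are assumptions for this sketch:

```python
import numpy as np

def slope_aspect_hillshade(dem, pixel_size=1.0, sun_az=135.0, sun_elev=45.0):
    """Slope (degrees from horizontal), aspect (degrees clockwise from
    north), and Lambertian shaded relief from a DEM, via centered finite
    differences. Assumes square pixels with x = east and y = north."""
    dzdy, dzdx = np.gradient(dem.astype(float), pixel_size)
    slope = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))
    # Aspect is the azimuth of the downslope direction (-gradient).
    aspect = np.degrees(np.arctan2(-dzdx, -dzdy)) % 360.0
    # Shaded relief: cosine of the angle between surface normal and sun.
    el, az = np.radians(sun_elev), np.radians(sun_az)
    sr, ar = np.radians(slope), np.radians(aspect)
    shade = np.sin(el) * np.cos(sr) + np.cos(el) * np.sin(sr) * np.cos(az - ar)
    return slope, aspect, np.clip(shade, 0.0, 1.0)
```

A uniform ramp rising to the east, for example, yields a 45-degree slope everywhere and an aspect of 270 degrees (the downslope direction is due west).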

Exercise 2: Generate Slope, Aspect, and Shaded Relief Image

1. From the ENVI main menu bar, select Topographic → Topographic Modeling. The Topo Model Input DEM dialog appears.

2. Select mongolia_dem.dat, then click OK. The Topo Model Parameters dialog appears.

3. Set the Topographic Kernel Size to 9. In the Select Topographic Measures to Compute field, select only Slope, Aspect, and Shaded Relief.

4. Notice that when you selected Shaded Relief, the Compute Sun Elevation and Azimuth button became active. Click Compute Sun Elevation and Azimuth. The Compute Sun Elevation dialog appears.

5. Enter the date, time and latitude/longitude of the QuickBird image acquisition, as seen in Figure 49.

Once you’ve entered the correct parameters, click OK.

Figure 49: The Compute Sun Elevation Parameters Dialog


6. Enter the output filename mongolia_topo.dat, then click OK.

In the Available Bands List note that individual bands are output for each topographic parameter computed.

7. From the Available Bands List, load the Slope band as a Gray Scale image to a new display.

8. From the #1 Display group menu bar, select Tools → Cursor Location/Value. The Cursor Location/Value dialog appears.

9. Move your cursor around Display #1 and note various pixel data values for bright and dark pixels.

Recall the difference between actual image data values and byte-scaled screen values. The actual slope data values are on the last line of the Cursor Location/Value tool and represent the steepness of individual pixels in degrees from horizontal.

10. In the Available Bands List, right-click on the Aspect band from mongolia_topo.dat and select Load Band to Current Display. Again, note the aspect data values for individual pixels. Aspect is the direction that individual pixels face, measured in degrees from North.

11. Load the Shaded Relief band from mongolia_topo.dat into Display #1. This image simulates topographic shadowing on the landscape with a specified sun position.

12. Close Display #1.

Exercise 3: Use RX Detection to Find Man-Made Structures

The goal of this exercise is to locate encampments that you should avoid in your trek across the scene.
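The global RXD score used in this exercise is, in essence, the squared Mahalanobis distance of each pixel spectrum from the scene mean. A minimal sketch of that classic formulation (not ENVI’s implementation; the function name and the small covariance ridge are assumptions):

```python
import numpy as np

def rx_global(cube):
    """Global RX anomaly score: squared Mahalanobis distance of each
    pixel spectrum from the scene mean. cube shape: (rows, cols, bands)."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    mu = X.mean(axis=0)
    # Scene covariance; a small ridge keeps the inverse well-defined.
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(bands)
    d = X - mu
    scores = np.einsum('ij,jk,ik->i', d, np.linalg.inv(cov), d)
    return scores.reshape(rows, cols)
```

Pixels whose spectra sit far from the background distribution receive high scores, which is why thresholding the stretched result isolates anomalies such as man-made structures.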

1. From the ENVI main menu bar, select Spectral → SPEAR Tools → Anomaly Detection.

2. Click the Select Input File button. Then select mongolia_QB.dat from the Select Input File dialog and click OK.

3. Click the Select Output Root Name button. Browse to the envimil\output directory, type Mongolia_anom for the output file name, and click Open.

4. Click the Next button to proceed.

5. For Anomaly Detection Parameters, leave the default selection of RXD as the Algorithm and Global as the Mean Source. Select the check box to Suppress vegetation anomalies. Click the Next button to proceed.

6. After processing is complete, two displays will appear. One display shows a natural color composite of the input scene and the other shows the gray scale result. The two displays are linked, so clicking in one display updates the view in the other.

The interactive contrast stretch control for the gray scale result also appears. If you haven’t set ENVI’s preferences to have this tool automatically apply changes, click Options → Auto Apply.

Anomalous pixels have high values. The dotted vertical lines are locked together when this tool appears, and they are positioned at a threshold of around 99.9% (most of the data are to the left of the vertical lines). Move the locked vertical lines to the right to a DN of around 30, which corresponds to a percentage of 99.99%. More pixels are now dark.


7. In the Anomaly Detection panel, click the Retrieve Value button to bring in the Stretch minimum value (~30) you specified. Note that you can choose to display a False Color Composite as the Reference Display.

8. Click the Next button to proceed. The images are redisplayed and an Available Vectors List appears. A temporary vector layer is listed there, and polygons are drawn in the Natural Color Composite around the anomalies that are above the threshold you set in the previous step.

9. Click on the Sort by pull-down menu and select Pixels. Then select Strength.

10. Click on the tab labeled 1, which now represents the strongest anomaly. Both displays will update to show that anomaly in the middle of the zoom window. Look in the Zoom window of the Natural Color Composite (or click in the Zoom window of the gray scale anomaly image) to see if you can tell what the anomaly is. You will see a blue vector outlining this anomaly. Blue means that this is the one you have selected and are currently investigating. You are going to look for man-made structures in the image and tag these as bad anomalies that you should avoid.


11. Looking at the full resolution and zoom windows for the color image, the first (strongest) anomaly appears to be part of an encampment. To help evaluate it, you can look for it in Google Earth if you have access to it. To do this, right-click in the image to bring up the Cursor Location/Value tool. Then position the Cursor Location/Value tool so it overlaps the highlighted structure in the Zoom window as shown below. You are doing this so that you can move the cursor from the Zoom window to the Cursor Location/Value tool without changing coordinates significantly. Position the cursor over the structure so that you see its coordinates, then move the cursor into the Cursor Location/Value tool and left-click as you highlight the coordinates. Use a keyboard key combination (Ctrl+C) to copy the coordinates, then paste (Ctrl+V) them into the Fly To text block in Google Earth.

12. After you have evaluated this structure, click on the red BAD flag button to tag it. The vector for this anomaly turns red and the displays automatically update to go to the next strongest anomaly. Look at the Zoom windows to help you identify what each anomaly is. Some bright anomalies appear to be single yurts in Google Earth. We will assume that these are friendly nomads; you can either ignore them or tag them Good. Some anomalies are dark. These are either springs or dark shadows. Ignore the dark anomalies. Remember that anomalies that are Pending are yellow. Move down the list of anomalies. Tag clusters of anomalies like the one pictured above as Bad (note that several anomalies make up the encampment).


13. It turns out there are only two encampments that you have to avoid. Going through the top 25 anomalies (according to Strength) should allow you to tag most of their structures. They are circled in the figure below and have these coordinates:

a. 44°5'36.84"N, 106°22'12.34"E

b. 44°5'52.62"N, 106°21'29.12"E

14. You can type the coordinates into the Pixel Locator tool to find the encampments if you wish. Your final anomalies list may look similar to the one below. Click the Next button to proceed.

15. In the Export Vectors step, select only the Bad anomalies. Then click on the Select Vector Rootname button. For output, browse to the envimil\enviout directory, type mongolia_camps for the output root name, and click Open.


16. Click the Next button to proceed. The Bad Anomalies vector layers will appear in the Available Vectors List. Click the Finish button. If displays are still open, close them all except for the color composite of Mongolia.

Exercise 4: Line of Sight Calculator

Use the Line of Sight Calculator to calculate which pixels can be seen from a specific location within any file that has an associated DEM. The pixels that can be seen are output as an ROI.
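Conceptually, a line-of-sight test samples the DEM along the ray between the observer and a candidate pixel and checks whether intervening terrain rises above the sight line. A simplified single-target sketch (the function and parameter names are assumptions; ENVI’s actual algorithm may differ, e.g. in how it handles Earth curvature):

```python
import numpy as np

def line_of_sight(dem, obs, target, obs_height=4.0):
    """Return True if `target` (row, col) is visible from `obs` (row, col)
    given an elevation array `dem` and an observer height above ground.
    Samples elevations along the straight line between the two cells and
    tests each against the interpolated sight line."""
    r0, c0 = obs
    r1, c1 = target
    n = int(max(abs(r1 - r0), abs(c1 - c0)))
    if n == 0:
        return True
    eye = dem[r0, c0] + obs_height
    tgt = dem[r1, c1]
    for i in range(1, n):                      # interior samples only
        t = i / n
        r = int(round(r0 + t * (r1 - r0)))
        c = int(round(c0 + t * (c1 - c0)))
        sight = eye + t * (tgt - eye)          # elevation of sight line
        if dem[r, c] > sight:                  # terrain blocks the ray
            return False
    return True
```

Repeating this test for every pixel within the distance limit would produce the visible-pixel set that ENVI outputs as an ROI.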

1. Display the mongolia_QB.dat image as a color composite if it is not displayed already. From the display menu, select Tools → Line of Sight Calculator. The Line of Sight Calculator dialog appears. The current pixel location is listed in the Sample and Line text boxes.

2. From the Line of Sight Calculator dialog, select Options → Map Coordinates. Click the up and down arrows to toggle to Latitude and Longitude input. Type in the coordinates from Step 13a in the previous exercise (44°5'36.84"N, 106°22'12.34"E) and click Apply.

3. The Select Line of Sight Input DEM Band dialog appears. Select Band 1 from mongolia_dem.dat and click OK.

4. In the Line of Sight Parameters dialog, enter 2000 for the Distance Limit (Meters) and enter 4 for the Elevation Above Point. Click OK.

5. An ROI is created that shows which pixels can be seen from the designated pixel. The ROI is labeled LOS in the ROI Tool dialog and is overlaid on your image. Repeat Steps 2 through 4 using the coordinates from Step 13b in the exercise above (44°5'52.62"N, 106°21'29.12"E).

6. Save these ROIs by selecting File → Save ROIs from the ROI Tool. Then click Select All Items, type in a file name of camp.roi, and click OK. Close the Line of Sight Calculator.

Texture Analysis

In this particular region, topography is not extreme. From the slope image calculated in the previous exercise, most slopes fall in a range between 2 and 10 degrees; however, many areas in the image may not be trafficable because of surface roughness, or broken topography. To get a sense of surface roughness, you’ll calculate a texture image. Many images contain regions characterized by variation in brightness rather than any unique value of brightness. Texture refers to the spatial variation of image tone as a function of scale. In this exercise, you’ll use textural filters based on occurrence measures.

Occurrence measures use the number of occurrences of each gray level within the processing window for the texture calculations. Five different texture filters are available in ENVI: data range, mean, variance, entropy, and skewness. For this exercise you’ll use the variance occurrence filter to derive an image corresponding to surface roughness. For this analysis, we’re assuming that large variance within a moving window in an image is associated with a rough surface that may not be trafficable.
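The variance occurrence measure is simply the variance of the gray levels inside a moving window. A minimal NumPy sketch of that idea (illustrative only; the function name is an assumption, and ENVI’s Occurrence Measures filter handles edges and output sizing differently):

```python
import numpy as np

def local_variance(band, win=5):
    """Variance occurrence texture: variance of the gray levels in each
    win x win window. The output is smaller than the input by win - 1 in
    each dimension because this sketch does no edge padding."""
    windows = np.lib.stride_tricks.sliding_window_view(
        band.astype(float), (win, win))
    return windows.var(axis=(-2, -1))
```

A perfectly uniform surface gives zero variance everywhere, while broken terrain with rapidly changing brightness gives high values, which is why variance serves as a surrogate for surface roughness here.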

Exercise 4: Calculate an Occurrence Measure Texture Image

1. From the ENVI main menu bar, select Filter → Texture → Occurrence Measures. The Texture Input File dialog appears.

2. Select mongolia_QB.dat, then click Spectral Subset. The File Spectral Subset dialog appears.

3. Select Band 3 (red), then click OK, and OK again. The Occurrence Texture Parameters dialog appears.

4. Deselect all Textures to Compute except for Variance. Set the Processing Window to Rows = 5, Cols = 5. Output the result to a file named mongolia_texture. Click OK to start processing.


5. From the Available Bands List, load mongolia_texture as a Gray Scale image to a new display.

6. If the QuickBird scene is not displayed, load a near-infrared color composite (Bands 4, 3, 2 as RGB) of mongolia_QB.dat into a new display.

7. Right-click in a display group and select Link Displays. In the Link Displays dialog, click OK.

Using dynamic overlay, examine which portions of the image have high variance (and by extension, uneven topography).

8. Close all display groups.

Image Masking

A mask is a binary image that consists of values of 0 and 1 only. When a mask is used with a processing function, the pixels with values of 1 are processed, while the pixels with values of 0 are not included in any calculations.

Image masks can be defined using a data value, data ranges, ROIs, annotation, or vector files. In this exercise, data ranges define the masks. Mask bands can be applied during several ENVI functions, including statistics, classification, unmixing, matched filtering, continuum removal, and spectral feature fitting.

In the following exercise, you will create a mask that will highlight areas in the image that are not trafficable by finding all pixels that have a relatively high slope or a high texture variance measure.
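The mask you are about to build amounts to a boolean combination of the slope and variance layers. A hedged sketch of that combination (the function name is illustrative, the OR logic follows the description above, and the thresholds are the ones used in this exercise):

```python
import numpy as np

def untrafficable_mask(slope_deg, variance, slope_min=20.0, var_min=1000.0):
    """Binary mask flagging pixels whose slope OR texture variance meets
    its threshold, mirroring the two data ranges imported in this
    exercise. 1 = untrafficable, 0 = otherwise."""
    return ((slope_deg >= slope_min) | (variance >= var_min)).astype(np.uint8)
```

Because the selected areas are turned on (value 1), the result can later be relabeled as a two-class classification image and overlaid on the original scene.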

Exercise 6: Build an Image Mask

1. From the ENVI main menu bar, select Basic Tools → Masking → Build Mask. The Mask Definition dialog appears.

2. Set Sample = 4000 and Line = 4000. This is the size of the Mongolia scene. You can get the number of samples and lines by clicking on any band of the Mongolia scene in the Available Bands List; the dimensions are given at the bottom of the list.

3. From the Mask Definition dialog menu bar, select Options → Import Data Range. The Select Input for Mask Data Range dialog appears.

4. Select mongolia_topo.dat, then click Spectral Subset. The File Spectral Subset dialog appears.

5. Select the Slope band, then click OK. The Select Input for Mask Data Range dialog appears. Click OK. The Input for Data Range Mask dialog appears.

6. Set the Data Min Value to 20, then click OK. This tells ENVI that you want to find all pixels with a slope of 20 degrees or greater.

7. In the Mask Definition dialog, select Options → Import Data Range. The Input for Data Range Mask dialog appears.

8. Click Select New Input. The Select Input for Mask Data Range dialog appears.

9. Select the Variance band from mongolia_texture, then click OK. The Input for Data Range Mask dialog appears.

10. Set the Data Min Value to 1000, then click OK. This value corresponds to relatively high variance in the scene, which is our surrogate for surface roughness.

11. Now we will add the ROIs from the Line of Sight Calculator as another area to avoid. In the Mask Definition dialog, click Options → Import ROIs… and select the LOS ROIs you generated previously. Click OK.


12. Open the Options menu and make sure Selected Areas “On” is checked. Normally you would turn off the areas you wish to have masked out; however, here you will turn this mask into a classification image that you can overlay on top of the original scene.

13. Output the result to a file named Mong_not_trafficable_mask, then click Apply.

14. From the Available Bands List, load the newly created Mask Band into a new display.

The options for creating a layer that can be overlaid on the original QuickBird image include ROIs, Annotations, Classification Images, and Vector Layers. In this case, a classification image provides the best visual model of untrafficable areas because the transparency of classification overlays can be adjusted.

15. In the Available Bands List, right-click on the filename Mong_not_trafficable_mask and select Edit Header.

16. In the Header Info dialog, click File Type and select ENVI Classification. The Classification Parameters dialog appears. Set the Number of Classes to 2, then click OK. You can change class colors in the Class Color Map Editing dialog, if desired. Click OK in the Class Color Map Editing dialog, then click OK in the Header Info dialog.

Note that the file type icon changes to a classification image in the Available Bands List.

17. Close all display groups.

Classification Overlay

Exercise 7: Overlay the Classification Image on the Original QuickBird Image

1. From the Available Bands List, load a near-infrared color composite (Bands 4, 3, 2 as RGB) of the original QuickBird image (mongolia_QB.dat) into a new display.

2. From the #1 Display group menu bar, select Overlay → Classification. The Interactive Class Tool Input File dialog appears.

3. Select Mong_not_trafficable_mask and note that its File Type is specified as ENVI Classification in the File Information text block. Then click OK.

4. In the #1 Interactive Class Tool dialog, click the On check box for Class #1. Those pixels that were flagged as not trafficable in the analysis should now be highlighted in Display #1.

5. In the #1 Interactive Class Tool dialog, select Options → Class Transparency. The Class Tool Transparency dialog appears.

6. Set the transparency to 50, then click OK. Note that this allows you to see areas flagged as not trafficable, while still showing underlying terrain.

Recall that the objective of this practicum is to analyze the region to find a trafficable route across the image. Next you’ll restore the annotation that shows the starting and ending locations for your route.

7. From the #1 Display group menu bar, select Overlay → Annotation. The #1 Annotation dialog appears.

8. From the #1 Annotation dialog menu bar, select File → Restore Annotation. The Enter Annotation Filename dialog appears.

9. Navigate to the envimil\terrain directory, select start-finish.ann, then click Open.


In the next exercise, you’ll visualize the QuickBird image overlaid on the digital elevation model in three dimensions. The ENVI 3D SurfaceView tool doesn’t allow for multi-layer raster overlay. Therefore, you’ll need to “burn” the classification and annotation overlays into the QuickBird scene by saving Display #1 as a new image.

1. Adjust the contrast stretch of Display #1 if you wish, then select File → Save Image As → Image File. The Output Display to Image File dialog appears.

Use this image-saving technique any time you wish to save an image just as it appears in a display. This is different from saving the actual data from the data file, as imagery in an image display has been byte-scaled to the range of available gray levels on the monitor.

2. Click Change Graphic Overlay Selections. The Change Graphics Overlay Options dialog appears. Note that you can choose to add or remove different types of layers, such as ROIs, vectors, or annotations. Leave the items as they are and click OK.

3. Make sure the Output File Type is set to ENVI. Output the result to a file named mongolia_QB_overlay.dat, then click OK.

In the Available Bands List, note that the new image has only three bands: the three bands of the near-infrared color composite previously displayed.

4. The bands for the result should automatically be listed after the R, G, and B radio buttons in the Available Bands List. Click Load RGB to load the new image to Display #1.

5. Close the #1 Interactive Class Tool and the #1 Annotation Tool.

3D Surface View

In the previous section, you were able to see the areas that are not easily trafficable in the QuickBird scene. In the next few exercises, you will use ENVI’s 3D SurfaceView tool to visualize the relationship between the landscape and trafficability. The 3D SurfaceView tool allows you to drape the QuickBird image over a 3D surface generated from the DEM. It also allows you to zoom in and out and rotate the resulting 3D view, and to create animated “fly-through” sequences. Exploring the view in this way should help you visualize whether different potential routes through the image appear more trafficable than others.

Exercise 8: 3D SurfaceView

1. From the display menu, select Tools → 3D SurfaceView (or from the main ENVI menu bar you can select Topographic → 3D SurfaceView). The Associated DEM Input File dialog appears.

2. Select Band 1 of mongolia_dem.dat as the DEM, then click OK. The 3D SurfaceView Input Parameters dialog appears.

The entire image is used as the overlay image on the DEM unless both the image and DEM files are georeferenced. If both the files are georeferenced, then only the part of the image that overlaps with the DEM is used. If a spatial subset is chosen for the DEM, then the georeferenced image is automatically subset to match. The spatial resolutions of the two files do not need to be the same.

Note: It is possible to permanently associate a DEM with an image file by editing the file’s ENVI header. This facilitates functions like the 3D SurfaceView.

3. Select 64 and 512 for the DEM Resolution. This is the number of pixels that will be used along the longest dimension of the DEM in the 3D SurfaceView window. Use the lowest resolution (64) while determining the best flight path; then you can switch to the higher resolution to display your final fly-through sequence. Higher DEM resolutions significantly slow the display and should only be used on fast machines.


4. In the DEM min plot value field, enter 1150. This specifies that DEM values lower than 1150 meters are not plotted.

5. Leave all other parameters at their default values, then click OK. The 3D SurfaceView window appears. You can drag a corner of the 3D SurfaceView window to make it larger.

6. Use the mouse to interactively rotate, translate, and zoom into the surface. The mouse button functions are listed in Table 4.

Table 4: Mouse Button Functions in the 3D SurfaceView Window

• Left: Click and drag to rotate about the x/y axes.

• Middle: Click and drag to translate the image.

• Right: Click and drag to the right to zoom in; click and drag to the left to zoom out.

7. From the 3D SurfaceView window menu bar, select Options → Surface Controls. The 3D SurfaceView Controls dialog appears. This dialog allows you to fine-tune the rotation, translation, and zoom position of the plot (see Figure 50).

8. Use the arrows and plus and minus buttons in the 3D SurfaceView Controls dialog to change the view a small amount at a time. Then try typing in relatively large numbers in the Inc field and use the arrows to jump quickly from one view to another.

Figure 50: The 3D SurfaceView Controls Dialog. The Inc field sets the increment used to change a parameter with each click.

Note: You can reset the 3D view to its original position using the 3D SurfaceView window menu bar, Options → Reset View option.

9. Experiment with viewing different potential routes across the image from all sides.


10. While exploring the best routes of travel, you may wish to compare the elevation values for certain features in the image. To see the x, y, and z (elevation) values, open the Cursor Location/Value tool from Display #1 and place the cursor in the 3D SurfaceView window over a feature of interest. The x, y, and elevation (z) values reported by the Cursor Location/Value tool are calculated from the 3D model and are approximate. Close the Cursor Location/Value tool.

11. Explore each of these options available from the 3D SurfaceView menu bar:

• Change the color of the background from white to black by selecting Options → Change Background Color. The resulting dialog allows custom RGB mixing or use of preset system colors.

• Smooth the 3D view so that the draped image appears less pixelated when you zoom in, by selecting Options → Bilinear Interpolation.

Exercise 9: Advanced 3D SurfaceView Options

1. From the 3D SurfaceView window, select Options → Motion Controls. The 3D SurfaceView Motion Controls dialog appears.

2. The Motion is set to User Defined Views by default. From the 3D SurfaceView Motion dialog menu bar, select Options to view the motion selection.

3. Position your image in the 3D SurfaceView window, then click Add to add your view to the animation. The view is listed as Flight Path View #1 in the 3D SurfaceView Motion dialog.

4. Move around in the 3D SurfaceView window and click Add in the 3D SurfaceView Motion Controls dialog to add several views to the sequence.

Figure 51: The 3D SurfaceView Motion Controls Dialog. Select a flight path, then click one of the buttons to replace, delete, or clear that path from the list. The Frames field sets the total number of frames calculated for the entire animation; a larger number slows the animation, a smaller number speeds it up.

5. After you have added several views, click Play Sequence to view your animated fly-through.

6. Continue adjusting the plot and adding, deleting, or replacing views until you have five or six views that you like in your animation.

7. Increase the number of Frames to make the flight sequence smoother and last longer. Try values as high as 500.

8. View the Options menu in the SurfaceView Motion Controls dialog to try the continuous loop play sequence and other parameters.


9. You can create a fly-through animation from the 3D SurfaceView window by drawing an annotation line on the original image display and then importing that line to be used as a flight path. Go to the Image window containing the mongolia_QB_overlay.dat composite, then select Overlay → Annotation from the display group menu bar.

10. From the Annotation dialog menu bar, select Object → Polyline.

11. Select Scroll as the active window. Now you can draw annotation in that window.

12. To draw a flight path in the Scroll window, either place several vertices for the polyline with a series of left-clicks or click and drag to draw a line. When you are finished drawing the line, right-click. A diamond-shaped symbol appears near the center of the finished line, indicating that the object is no longer being drawn. It may be re-positioned or deleted at this point. Right-click again to accept the line (or middle-click to delete the line).

13. Turn the Annotation tool off by clicking the Off radio button.

14. Once an annotation line has been drawn and accepted, it can be imported into the 3D SurfaceView window. From the 3D SurfaceView window containing the RGB mongolia_QB_overlay.dat image, select Options → Motion Controls.

15. From the 3D SurfaceView Motion Controls dialog menu, select Options → Motion: Annotation Flight Path.

16. Select the Input Annotation from Display option, then click OK. A new 3D SurfaceView Motion Controls dialog appears and the flight path is drawn in the 3D SurfaceView window.

17. If you don’t see your annotation flight path listed, select File → Input Annotation from Display. Information about the imported annotation is listed in the field at the top of the Motion Controls dialog.

Figure 52: The 3D SurfaceView Motion Controls Dialog. Toggle between Flight Elevation and Flight Clearance. The Flight Smooth Factor smooths the flight path by using a running average of points (enter the number of points to use in the average).

18. In the Flight Smooth Factor field, enter 50. This smooths out the flight annotation line so that the flight is not so jerky.
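The Flight Smooth Factor behaves like a running average over the flight-path vertices. An illustrative sketch of that smoothing (the function name is an assumption; ENVI’s exact smoothing may differ, especially near the endpoints):

```python
import numpy as np

def smooth_path(points, factor=50):
    """Running-average smoothing of a flight path: each vertex becomes
    the mean of up to `factor` + 1 neighboring vertices (the averaging
    window shrinks near the endpoints)."""
    pts = np.asarray(points, dtype=float)
    half = factor // 2
    out = np.empty_like(pts)
    for i in range(len(pts)):
        lo, hi = max(0, i - half), min(len(pts), i + half + 1)
        out[i] = pts[lo:hi].mean(axis=0)
    return out
```

A larger factor averages over more vertices, so sharp jogs in the drawn line are flattened out; that is also why very high factors can pull the path below terrain in areas of significant relief.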

19. In the Flight Clearance field, enter 30 meters (the clearance value uses the same units as the DEM, which in this case is meters).


20. In the Flight Look Angles area, set the Up/Down field value to -10 (you will be looking 10 degrees down from horizontal) and the Left/Right value to 0 (you will be looking straight ahead).

21. Click Play Sequence to view the animation. You may need to adjust your parameters to give a more appropriate view for your particular annotation flight line. For example, try a Flight Smooth Factor of 100. With higher factors you may find yourself flying underneath the surface in areas of significant relief! If that’s the case, you could increase the Flight Clearance value. To make a longer, slower flight, increase the number of Frames to 1000 or 2000.

You can save the surface animation to MPEG format via the File menu in the 3D SurfaceView window. Saving to MPEG can be time-intensive, as each frame can take from several to many seconds to save. For this exercise, you won’t save your animation to MPEG. You can restore a previously saved version from disk, located in envimil\terrain as travel_path.mpg. To open the file, double-click it in Windows Explorer.

22. You can view the surface of your 3D view panoramically as if you were standing in the image.

From the 3D SurfaceView window menu bar, select Options → Position Controls. The 3D SurfaceView Position Controls dialog appears.

23. Enter Sample = 1850 and Line = 2222. This is a location at the top of the mountain near the center of the image.

24. Set the Azimuth to 270 degrees. This sets the view directly West.

25. Set the Height Above Ground to 30 meters. This is the distance from which you will look down on the image. Again, the height units are the same as the DEM elevation units.

Figure 53: The 3D SurfaceView Position Controls Dialog. Toggle between Pixel Coord and Map Coord for georeferenced images. The Azimuth control “looks around” from the specified point (0 = North, 90 = East, 180 = South, 270 = West), and the Elevation control determines the elevation angle (0 = horizontal; a negative value looks down).

26. Drag the Elevation and Azimuth slider bars to look around the image.

27. When finished, close all open files and dialog windows.

Chapter Review

• Topographic analysis can be used to discern terrain features such as slope, aspect, etc.

• Texture analysis can be used to analyze surface roughness.


• Image masking is an indispensable technique for working with some but not all pixels in an image.

• Classification overlay is an effective way of visualizing regions of interest in a display.

• ENVI’s 3D SurfaceView utility is useful for visualizing any surface in three dimensions.

