Kings College of Engineering
Question Bank
Sub. Code/Name: CE1304 Fundamentals of Remote sensing and GIS
Year/Sem: III / V
1.What is remote sensing?
Remote sensing is the science and art of obtaining information about an
object, area, or phenomenon through the analysis of data acquired by a device that
is not in contact with the object, area, or phenomenon under investigation.
2.What are all the applications of remote sensing?
In many respects, remote sensing can be thought of as a reading process.
Using various sensors, we remotely collect data that may be analyzed to obtain
information about the objects, areas, or phenomena being investigated. The
remotely collected data can be of many forms, including variations in force
distributions, acoustic wave distributions, or electromagnetic energy distributions.
3.Write the physics of remote sensing ?
Visible light is only one of many forms of electromagnetic energy. Radio waves,
heat, ultraviolet rays, and X-rays are other familiar forms. All this energy is
inherently similar and radiates in accordance with basic wave theory. This theory
describes electromagnetic energy as traveling in harmonic, sinusoidal fashion at
the “velocity of light” c. The distance from one wave peak to the next is the
wavelength λ, and the number of peaks passing a fixed point in space per unit time
is the wave frequency ν.
From basic physics, waves obey the general equation
c = νλ
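As a quick numerical illustration of the wave equation, a short Python sketch (the frequency below is an illustrative value, not from the text):

```python
# Wave equation: c = nu * lambda, so wavelength = c / frequency.
c = 3.0e8           # speed of light in m/s (approximate)
frequency = 5.0e14  # Hz -- an illustrative value for green light

wavelength = c / frequency
print(wavelength)   # 6e-07 m, i.e. 0.6 micrometres
```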
4.What are the Components of Remote Sensing?
The basic components of remote sensing are: an energy source (illumination), the
radiation and its interaction with the atmosphere, interaction with the target,
recording of the energy by the sensor, transmission and processing of the data,
and interpretation, analysis and application.
5.What is Electromagnetic radiation?
Electromagnetic (EM) radiation is a self-propagating wave in space or
through matter. EM radiation has an electric and magnetic field component
which oscillate in phase perpendicular to each other and to the direction of
energy propagation.
6.Write the type of Electromagnetic radiation?
Electromagnetic radiation is classified into types according to the
frequency of the wave. These types include (in order of increasing frequency):
radio waves, microwaves, terahertz radiation, infrared radiation, visible light,
ultraviolet radiation, X-rays and gamma rays.
7.Draw the quantum theory interaction?
A quantum theory of the interaction between electromagnetic radiation
and matter such as electrons is described by the theory of quantum electrodynamics.
8.Write about refraction?
In refraction, a wave crossing from one medium to another of different
density alters its speed and direction upon entering the new medium. The ratio
of the refractive indices of the media determines the degree of refraction, and is
summarized by Snell's law. Light disperses into a visible spectrum when shone
through a prism because of refraction.
9.Draw the Wave model?
10.Write Planck’s equation?
The frequency of the wave is proportional to the magnitude of the
particle's energy. Moreover, because photons are emitted and absorbed by
charged particles, they act as transporters of energy. The energy per photon can
be calculated by Planck's equation:
E = hf
where E is the energy, h is Planck's constant, and f is frequency.
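A minimal Python illustration of Planck's equation E = hf (the constant is rounded and the frequency is an illustrative value):

```python
# Photon energy from Planck's equation E = h * f.
h = 6.626e-34  # Planck's constant in joule-seconds (rounded)
f = 5.0e14     # frequency in Hz (illustrative value)

E = h * f      # energy per photon in joules
print(E)       # about 3.3e-19 J
```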
11.What is Black body ?
By definition a black body is a material that absorbs all the radiant energy that strikes
it. A black body also radiates the maximum amount of energy, which is dependent on
the kinetic temperature.
12.Write Stefan Boltzman law?
According to the Stefan-Boltzmann law, the radiant flux of a
black body, Fb, at a kinetic temperature, Tkin, is Fb = σ * Tkin^4,
where σ is the Stefan-Boltzmann constant, 5.67×10⁻¹² W·cm⁻²·K⁻⁴.
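A short Python sketch of the Stefan-Boltzmann computation, using the constant as quoted in the text and an assumed temperature of 300 K:

```python
# Radiant flux of a black body: Fb = sigma * Tkin**4.
sigma = 5.67e-12   # Stefan-Boltzmann constant in W cm^-2 K^-4 (as in the text)
t_kin = 300.0      # kinetic temperature in kelvin (illustrative value)

flux = sigma * t_kin ** 4
print(flux)        # about 0.046 W per square centimetre
```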
13.What is emissivity?
Emissivity is a measure of the ability of a material to both radiate and absorb energy.
Materials with a high emissivity absorb and radiate large proportions of incident and
kinetic energy, respectively (and vice-versa).
14.Write Wein’s Displacement law?
For an object at a constant temperature, the radiant power peak refers to the
wavelength at which the maximum amount of energy is radiated, which is expressed
as λmax. The sun, with a surface temperature of almost 6000 K, has its peak at 0.48 µm
(the wavelength of yellow light). The average surface temperature of the earth is 290 K
(17°C), which is also called the ambient temperature; the peak concentration of
energy emitted from the earth is at 9.7 µm. This shift to longer wavelengths with
decreasing temperature is described by Wien’s
displacement law, which states:
λmax = 2,897 µm·K / Trad
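Wien's displacement law can be sketched in Python (2897.8 µm·K is the commonly tabulated value of the constant; the function name is our own):

```python
# Wien's displacement law: peak wavelength (micrometres) = 2897.8 / T (kelvin).
def wien_peak_um(t_kelvin):
    """Wavelength of peak radiant emittance in micrometres."""
    return 2897.8 / t_kelvin

print(wien_peak_um(6000.0))  # sun: about 0.48 micrometres (visible light)
print(wien_peak_um(290.0))   # earth: about 10 micrometres (thermal infrared)
```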
15.Write Planck’s Law?
The primary law governing blackbody radiation is the Planck Radiation
Law, which governs the intensity of radiation emitted by unit surface area into a fixed
direction (solid angle) from the blackbody as a function of wavelength for a fixed
temperature. The Planck Law can be expressed through the following equation:
B(λ, T) = (2hc²/λ⁵) · 1/(e^(hc/λkT) − 1)
where h is Planck's constant, c is the speed of light, k is Boltzmann's constant,
λ is the wavelength, and T is the absolute temperature.
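A sketch of the Planck radiation law in SI units (constants rounded; the function name is our own):

```python
import math

# Spectral radiance of a black body (Planck's law), SI units.
H = 6.626e-34   # Planck's constant, J s
C = 2.998e8     # speed of light, m/s
K = 1.381e-23   # Boltzmann's constant, J/K

def planck_radiance(wavelength_m, t_kelvin):
    """Radiance in W m^-2 sr^-1 per metre of wavelength."""
    exponent = H * C / (wavelength_m * K * t_kelvin)
    return (2.0 * H * C ** 2 / wavelength_m ** 5) / math.expm1(exponent)

# A 6000 K body (like the sun) radiates far more at 0.5 um than a 300 K body.
print(planck_radiance(0.5e-6, 6000.0) > planck_radiance(0.5e-6, 300.0))  # True
```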
16.What is Scattering?
Scattering occurs when particles or large gas molecules present in the atmosphere
interact with and cause the electromagnetic radiation to be redirected from its original
path. How much scattering takes place depends on several factors including the
wavelength of the radiation, the abundance of particles or gases, and the distance the
radiation travels through the atmosphere. There are three types of scattering which
take place.
17.What are the types of scattering?
(i) Rayleigh scattering occurs when particles are very small compared to the
wavelength of the radiation.
(ii) Mie scattering
It occurs when the particles are just about the same size as the wavelength of the
radiation.
(iii) Non Selective Scattering
The final scattering mechanism of importance is called nonselective scattering. This
occurs when the particles are much larger than the wavelength of the radiation.
18.What is Atmospheric Windows?
The areas of the spectrum which are not severely influenced by atmospheric
absorption and thus, are useful to remote sensors, are called atmospheric windows.
1. Discuss spectral signatures and their role in identifying objects, with suitable
2. Explain the working principle of remote sensing.
3. With a suitable diagram, explain the electromagnetic spectrum and its
characteristics used in remote sensing.
4. Explain the different types of interactions of EMR with the atmosphere.
1.What are passive sensors?
Passive sensors can only be used to detect energy when the naturally
occurring energy is available. For all reflected energy, this can only take place
during the time when the sun is illuminating the Earth. There is no reflected
energy available from the sun at night. Energy that is naturally emitted (such as
thermal infrared) can be detected day or night, as long as the amount of energy is
large enough to be recorded.
2.What are active sensors?
Active sensors, on the other hand, provide their own energy source for illumination. The
sensor emits radiation which is directed toward the target to be investigated. The
radiation reflected from that target is detected and measured by the sensor.
3.Write the advantages of active sensors?
Advantages for active sensors include the ability to obtain measurements
anytime, regardless of the time of day or season. Active sensors can be used for
examining wavelengths that are not sufficiently provided by the sun, such as
microwaves, or to better control the way a target is illuminated. However, active
systems require the generation of a fairly large amount of energy to adequately
illuminate targets. Some examples of active sensors are a laser fluorosensor and a
synthetic aperture radar (SAR).
4. What are the types of Platforms?
The vehicle or carrier on which a remote sensor is borne is called the platform. The
typical platforms are satellite and aircraft, but they can also include radio controlled
airplanes, balloons, pigeons, and kites for low altitude remote sensing, as well as ladder
and cherry pickers for ground investigation.
5.Differentiate Geostationary orbit and Polar sun synchronous orbit.
Geostationary orbit
High altitude (36,000km)
Remains in same position above the Earth
Used by meteorological and communications satellites
Sees the Earth's disk (between a third and a quarter of the Earth's surface)
High temporal frequency (c.30 mins typical)
Polar sun synchronous orbit
Low altitude (200-1000km)
Goes close to poles
Higher spatial resolution than geostationary
Lower temporal resolution than geostationary
6. What is Resolution?
In general resolution is defined as the ability of an entire remote-sensing system,
including lens antennae, display, exposure, processing, and other factors, to render a
sharply defined image. It is the resolving power of the sensor to detect the smallest
meaningful elemental area in different spectral bands at a defined gray level at regular intervals.
7.What are the elements of resolution?
The four elements of resolutions are Spatial, Spectral, Radiometric and Temporal.
8. Write short notes about Spatial resolution.
It is the minimum elemental area the sensor can detect or measure. The resolution
element is called pixel (picture element).
Example: IRS LISS-I - 72.5 m; LISS-II - 36.25 m;
Landsat MSS - 80 m; Landsat TM - 30 m
9. Write short notes about Spectral resolution.
It refers to the sensing and recording power of the sensor in different
bands of EMR. The sensors can observe an object separately in different bands or colors.
Examples: IRS - 4 bands; Landsat MSS - 4 bands; Landsat TM - 7 bands;
SPOT - 4 bands
It is the ability of the sensor to distinguish the finer variations of the reflected
radiation from different objects.
10. Write short notes on Radiometric resolutions.
It is the smallest amount of energy that can be detected by the sensor and
differentiated on a defined scale. It is recorded as a digital number (DN) for
different bands of the satellite. The radiometric value of the pixel is the average of the
values coming from every part of the pixel.
Example: IRS - 128 gray levels; Landsat MSS - 64; Landsat TM - 256; SPOT - 256
(it is to be noted that '0' is also a value in the gray scale).
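The relation between bit depth and the number of gray levels can be illustrated in Python (the satellite pairings in the comments follow the examples above):

```python
# Number of gray levels for a given radiometric resolution (bit depth):
def gray_levels(bits):
    """DN range is 0 .. 2**bits - 1, giving 2**bits distinct levels."""
    return 2 ** bits

print(gray_levels(6))  # 64  (e.g. Landsat MSS)
print(gray_levels(7))  # 128 (e.g. IRS)
print(gray_levels(8))  # 256 (e.g. Landsat TM, SPOT)
```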
11. Write short notes on Temporal resolution.
It is the time interval between two successive surveys of a particular place
of the earth by the sensor or satellite.
Examples: IRS - 22 days; Landsat - 16/18 days; SPOT - 16 days.
12. Write the types of Microwave Sensors?
Active microwave sensors are generally divided into two distinct categories: imaging
and non-imaging. The most common form of imaging active microwave sensor is RADAR.
13.What is RADAR?
RADAR is an acronym for RAdio Detection And Ranging, which essentially
characterizes the function and operation of a radar sensor. The sensor transmits a
microwave (radio) signal towards the target and detects the backscattered portion of the signal.
14. What are the types of DATA products?
The data for all the sensors of IRS -1C/1D are supplied on digital media like
a) Computer compatible tapes (CCTs)
b) Cartridge tapes
c) Floppies
d) CD-ROM products
1.What is resolution of a sensor? Describe all sensor resolutions.
2.Write short notes on the Indian remote sensing programme.
3.What is the role of a scanner in remote sensing and describe the different types of
scanners used in remote sensing.
4.Discuss the thermal infrared in remote sensing?
5. Give details and examples about platforms and sensors.
6. What are the two types of sensors? Discuss in detail.
1.What is image interpretation?
Image interpretation is defined as the extraction of qualitative and quantitative
information in the form of a map, about the shape, location, structure, function, quality,
condition, relationship of and between objects, etc., by using human knowledge or experience.
2. What are all the Types of image interpretation?
Photo interpretation, photographic interpretation and image interpretation are the
terms used for visual image interpretation.
3. What is Visual Image interpretation?
Visual image interpretation is the act of examining photographs/images for the purpose
of identifying objects and judging their significance.
4. What is Photo interpretation?
Photo interpretation is defined as the process of identifying objects or conditions in aerial
photographs and determining their meaning or significance.
4.What is image reading?
Image reading is an elemental form of image interpretation. It corresponds to simple
identification of objects using such elements as shape, size, pattern, tone, texture, color,
shadow and other associated relationships. Image reading is usually implemented with
interpretation keys with respect to each object.
5.What is image measurement?
Image measurement is the extraction of physical quantities, such as length, location,
height, density, temperature and so on, by using reference data or calibration data
deductively or inductively.
6. What is image analysis?
Image analysis is the understanding of the relationship between interpreted information
and the actual status or phenomenon, and to evaluate the situation.
7. What is thematic map?
Extracted information will be finally represented in a map form called an interpretation
map or a thematic map.
8. What are the Image interpretation elements ?
The eight elements of image interpretation are shape, size, tone, shadow, texture,
site, pattern and association.
9. What is Digital Image Processing?
Digital Image Processing is a collection of techniques for the manipulation of digital
images by computers. The raw data received from the imaging sensors on the satellite
platforms contains flaws and deficiencies. To overcome these flaws and deficiencies
in order to restore the original quality of the data, it needs to undergo several
steps of processing. This will vary from image to image depending on the type of
image format, the initial condition of the image, the information of interest and
the composition of the image.
10. What are the general steps of image processing?
The three steps of image processing are ,
• Pre-processing
• Display and enhancement
• Information extraction
11.Write about pre processing?
Preprocessing prepares the data for subsequent analysis by attempting to correct or
compensate for systematic errors.
12. What is Image Enhancement?
These operations are carried out to improve the interpretability of the image by
increasing the apparent contrast among various features in the scene. The enhancement
techniques depend mainly upon two factors: the digital data (i.e., the spectral bands
and resolution) and the objectives of interpretation.
14. Write the objectives of interpretation?
Because an image enhancement technique often drastically alters the original numeric
data, it is normally used only for visual (manual) interpretation and not for further
numeric analysis. Common enhancements include image reduction,
image rectification, image magnification, transect extraction, contrast adjustments, band
ratioing, spatial filtering, Fourier transformations, principal component analysis and
texture transformation.
15.What is digital image?
Digital Image is the matrix of “Digital Numbers”. A digital image is composed
of thousands of pixels. Each pixel represents the brightness of a small region on the
earth's surface. Digital image processing involves the manipulation and interpretation
of digital images with the aid of a computer.
16.What is filtering?
Filtering means the smoothening of an image using different masks or kernels.
17.What is spatial filtering?
Spatial filtering can be described as selectively emphasizing or suppressing
information at different spatial scales over an image. A spatial operation
consists of changing the value of each pixel according to the values of the
pixels in its neighborhood.
18.What is convolution?
A convolution is an integral which expresses the amount of overlap of one function g as it
is shifted over another function f.
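A minimal pure-Python sketch of convolution as a sliding-window sum (the 4×4 image and mean mask are invented test data; with a symmetric kernel such as a mean filter, flipping the kernel makes no difference):

```python
# Naive spatial filtering by convolution over the valid region of the image:
# each output pixel is the kernel-weighted sum of its neighbourhood.
def convolve2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            total = 0.0
            for m in range(kh):
                for n in range(kw):
                    total += image[i + m][j + n] * kernel[m][n]
            row.append(total)
        out.append(row)
    return out

ones = [[1, 1, 1],
        [1, 1, 1],
        [1, 1, 1]]          # box mask; dividing by 9 gives a mean filter
image = [[0, 0, 0, 0],
         [0, 9, 9, 0],
         [0, 9, 9, 0],
         [0, 0, 0, 0]]

box = convolve2d(image, ones)                       # windowed sums
smoothed = [[v / 9 for v in row] for row in box]    # 3x3 mean filter
print(smoothed)  # [[4.0, 4.0], [4.0, 4.0]]
```

Each 3×3 window of this image contains four 9s, so every smoothed value is 36/9 = 4.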
1.Write a detailed description on the elements of visual interpretation, quoting suitable
examples for each.
2.Give a detailed description on how the flaws and deficiencies in remote sensing data
can be removed.
3.Describe the different digital image processing techniques used.
4.Give a detailed description on image classification and analysis of remotely sensed
data. What is the use of classifying an image?
1.What is map?
A map is usually considered to be a drawing to scale of the whole or a part of
the surface of the earth on a plane surface; it is a manually or mechanically drawn
picture of the earth showing the location and distribution of various natural and
cultural phenomena. A map is a symbolic representation of an area.
2.Write the two types of maps?
The two types are topographic maps and thematic maps.
3.Write about topographical map?
It is a reference tool, showing the outlines of selected natural and man-made features of
the Earth
– often acts as a frame for other information
"Topography" refers to the shape of the surface, represented by contours and/or shading,
but topographic maps also show roads and other prominent features.
4.Write about thematic map?
It is a tool to communicate geographical themes such as the distribution of population
and densities, climatic variables, land use, etc.
5.What are the thematic maps in GIS?
a) choropleth map
b) area class map
c) isopleth map
6.What are the characteristics of map?
• maps are often stylized, generalized or abstracted, requiring careful interpretation
• usually out of date
• show only a static situation - one slice in time
• often highly elegant/artistic
• easy to use to answer certain types of questions:
– how do I get there from here?
– what is at this point?
• difficult or time-consuming to answer other types:
– what is the area of this lake?
– what places can I see from this TV tower?
– what does that thematic map show at the point I'm interested in on this
topographic map?
7.Write the necessity of map projection?
Projection is necessary because spatial entities must be located in two
dimensions. The method by which the "world is laid flat" makes this possible,
but the process introduces error into spatial data. The character of spatial
data varies depending on the projection method chosen: shape and distance
are distorted, because the world is spherical in shape and visualizing it
in two dimensions on a flat surface is difficult.
8.Write the types of map projection?
1.Cylindrical projection 2. Azimuthal projection
3. Conical projection
9.Write few lines about cylindrical projection?
Countries near the equator appear in true relative proportion.
Distance increases between countries located towards the top and bottom of the image.
The view of the poles is very distorted
Area for the most part is preserved
10.Write few lines about conical projection?
Area is distorted.
Distance is very distorted towards the bottom of the image.
Scale for the most part is preserved
11.Write few lines about azimuthal projection?
Only a part of the earth surface is visible.
The view will be of half the globe or less.
Distortion will occur at all four edges.
Distance for the most part is preserved.
12.What is referencing system?
Referencing system is used to locate a feature on the earth’s surface or a two
dimension representation of this surface such as a map.
13.What are the methods of spatial referencing systems?
Several methods of spatial referencing exist, all of which can be grouped into three categories:
Geographical co-ordinate system
Rectangular co-ordinate system
Non-co-ordinate system
14. What is Geographic Co-Ordinate System?
This is one of the true co-ordinate systems. The location of any point on the earth's
surface can be defined by reference to latitude and longitude.
15.What is QTM?
The quaternary triangular mesh referencing system tries to deal with
irregularities in the earth's surface.
16.What is GIS?
It is a computer-based information system that primarily aims at collecting,
classifying, crosschecking, manipulating, interpreting, retrieving and displaying
data which are spatially referenced to the earth, in an appealing way.
17.What are the components of GIS?
The Computer System (Hardware and Operating System)
The Software
Spatial Data
Data Management and analysis procedures
The People to operate the GIS
18.What are the GIS softwares used?
Standard GIS software packages include ARC/INFO, ArcView, MapInfo, GRASS and IDRISI.
1.What is map projection? Explain the different types of map projections with their
characteristics.
2.Explain in detail on the different types of data utilized in GIS technology.
3.Explain the different classification of maps.
4.Explain DBMS, with emphasis on the different types of DBMS used in GIS.
1.What is Data model?
Data Models: Vector and Raster
Spatial data in GIS has two primary data formats: raster and vector.
Raster uses a grid cell structure, whereas vector is more like a drawn map.
Raster and Vector Data
Vector format has points, lines, polygons that appear normal, much like a map.
Raster format generalizes the scene into a grid of cells, each with a code to
indicate the feature being depicted. The cell is the minimum mapping unit.
Raster has generalized reality: all of the features in the cell area are reduced to a
single cell identity.
2.What is raster data?
Raster is a method for the storage, processing and display of spatial data.
Each area is divided into rows and columns, which form a regular grid structure.
Each cell must be rectangular in shape, but not necessarily square.
Each cell within this matrix contains location co-ordinates as well as an attribute
value. The origin of rows and column is at the upper left corner of the grid.
Rows function as the "y" coordinate and columns as the "x" coordinate in a two-dimensional
system. A cell is defined by its location in terms of rows and columns.
3.What is vector data?
Vector data uses two dimensional Cartesian coordinates to store the shape of
spatial entity. Vector based features are treated as discrete geometric objects over
the space.
• In the vector data base, the point is the basic building block from which all spatial
entities are constructed.
• The simplest vector spatial entity, the point, is represented by a single x,y coordinate
pair. Line and area entities are constructed by connecting a series of points into chains
and polygons.
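A small Python sketch of vector entities built from x,y coordinate pairs (all coordinates invented; `polygon_area` is our own helper using the shoelace formula):

```python
# Vector entities as x,y coordinate pairs (hypothetical coordinates).
point = (2.0, 3.0)                              # a single coordinate pair
line = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]     # a chain of points
polygon = [(0.0, 0.0), (4.0, 0.0), (4.0, 3.0),
           (0.0, 3.0), (0.0, 0.0)]              # a closed ring of points

def polygon_area(ring):
    """Area of a closed polygon ring via the shoelace formula."""
    total = 0.0
    for (x1, y1), (x2, y2) in zip(ring, ring[1:]):
        total += x1 * y2 - x2 * y1
    return abs(total) / 2.0

print(polygon_area(polygon))  # 12.0 (a 4 x 3 rectangle)
```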
4. What is Raster?
Because a raster cell's value or code represents all of the features within the grid
cell, raster does not maintain true size, shape, or location for individual features.
Even where "nothing" exists (no data), the cells must be coded.
5.What is Vector?
Vectors are data elements describing position and direction. In GIS, vector is the
map-like drawing of features, without the generalizing effect of a raster grid.
Therefore, shape is better retained. Vector is much more spatially accurate than
the raster format.
6.What is raster coding?
In the data entry process, maps can be digitized or scanned at a selected cell size
and each cell assigned a code or value.
The cell size can be adjusted according to the grid structure or by ground units,
also termed resolution.
There are three basic and one advanced scheme for assigning cell codes.
Presence/Absence: the most basic method; a feature is recorded if any part of it
occurs in the cell space.
7. What is Cell Center?
The cell center involves reading only the center of the cell and assigning the code
accordingly. Not good for points or lines.
8.What is Dominant Area?
To assign the cell code to the feature with the largest (dominant) share of the cell. This is
suitable primarily for polygons.
9.What is Percent Coverage?
A more advanced method. To separate each feature for coding into individual themes and
then assign values that show its percent cover in each cell.
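The cell-coding schemes described above can be sketched in Python (the cover fractions are invented test data; the function names are our own):

```python
# Hypothetical sketches of two raster cell-coding schemes.
def presence_absence(cover_fraction):
    """Code the cell 1 if any part of the feature falls in it, else 0."""
    return 1 if cover_fraction > 0 else 0

def dominant_area(cover_by_feature):
    """Code the cell with the feature holding the largest share of it."""
    return max(cover_by_feature, key=cover_by_feature.get)

# Shares of one cell's area, per feature (invented values summing to 1).
cell = {"forest": 0.55, "water": 0.30, "urban": 0.15}

print(presence_absence(cell["water"]))  # 1
print(dominant_area(cell))              # forest
```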
10.What are the different methods of data input?
Key board entry
Manual digitizing
Automatic digitizing
Automatic line follower
Electronic data transfer
11.What is digitizing?
Digitizing is the most common method employed in encoding data from a paper map. Its types are:
Manual digitizing
Automatic digitizing
Automatic line follower
12.Write the errors in digitizing?
Scale and resolution of the source/base map.
Quality of the equipment and the software used.
Incorrect registration.
A shaky hand.
Line thickness.
Under shoot.
Polygonal knot.
Psychological errors.
13.What is scanning?
A scanner is a piece of hardware (a light-sensitive device) for converting an
analogue source document into digital raster format.
Most commonly used method.
When raster data are to be encoded, scanning is the most appropriate option.
There are three different types of scanners available in usage : Flat-bed scanners (a PC peripheral).
Rotating drum scanners.
Large format feed scanners
14.Write the important components of scanner?
A light source.
A background.
A lens.
15.Write the practical problems in scanning?
Possibility of optical distortion associated with the usage of flat bed scanners.
Automatic scanning of unwanted information.
Selection of appropriate scanning tolerance to ensure important data are encoded,
and background data ignored.
The format of files produced and the input of data into G.I.S. software.
The amount of editing required to produce data suitable for analysis.
1.What is data model ?Enumerate different types of GIS data.
2.Write short notes on:
(i) Overlaying
(ii) Buffering and GIS
3.What are the possible techniques best adopted for better storage of raster data that
would avoid repetition of characters?
4.Explain on the different methods of data input in GIS.
1. Describe in detail the electromagnetic spectrum.
2. Write in detail about energy interaction with the atmosphere.
3. Give details and examples about platforms and sensors.
4. What are the two types of sensors? Discuss in detail.
5. Discuss in detail image interpretation keys and techniques.
Elements of Visual Interpretation
As we noted in the previous section, analysis of remote sensing imagery
involves the identification of various targets in an image, and those targets may
be environmental or artificial features which consist of points, lines, or areas.
Targets may be defined in terms of the way they reflect or emit radiation. This
radiation is measured and recorded by a sensor, and ultimately is depicted as an
image product such as an air photo or a satellite image.
What makes interpretation of imagery more difficult than the everyday
visual interpretation of our surroundings? For one, we lose our sense of depth
when viewing a two-dimensional image, unless we can view it stereoscopically
so as to simulate the third dimension of height. Indeed, interpretation benefits
greatly in many applications when images are viewed in stereo, as visualization
(and therefore, recognition) of targets is enhanced dramatically. Viewing objects
from directly above also provides a very different perspective than what we are
familiar with. Combining an unfamiliar perspective with a very different scale
and lack of recognizable detail can make even the most familiar object
unrecognizable in an image. Finally, we are used to seeing only the visible
wavelengths, and the imaging of wavelengths outside of this window is more
difficult for us to comprehend.
Recognizing targets is the key to interpretation and information
extraction. Observing the differences between targets and their backgrounds
involves comparing different targets based on any, or all, of the visual elements
of tone, shape, size, pattern, texture, shadow, and association. Visual
interpretation using these elements is often a part of our daily lives, whether we
are conscious of it or not. Examining satellite images on the weather report, or
following high speed chases by views from a helicopter are all familiar examples
of visual image interpretation. Identifying targets in remotely sensed images
based on these visual elements allows us to further interpret and analyze. The
nature of each of these interpretation elements is described below, along with an
image example of each.
Tone refers to the relative brightness or colour of objects in an image.
Generally, tone is the fundamental element for distinguishing between different
targets or features. Variations in tone also allow the elements of shape, texture,
and pattern of objects to be distinguished.
Shape refers to the general form, structure, or outline of individual objects.
Shape can be a very distinctive clue for interpretation. Straight edge shapes
typically represent urban or agricultural (field) targets, while natural features,
such as forest edges, are generally more irregular in shape, except where man
has created a road or clear cuts. Farm or crop land irrigated by rotating sprinkler
systems would appear as circular shapes.
Size of objects in an image is a function of scale. It is important to assess
the size of a target relative to other objects in a scene, as well as the absolute size,
to aid in the interpretation of that target. A quick approximation of target size
can direct interpretation to an appropriate result more quickly. For example, if an
interpreter had to distinguish zones of land use, and had identified an area with
a number of buildings in it, large buildings such as factories or warehouses
would suggest commercial property, whereas small buildings would indicate
residential use.
Pattern refers to the spatial arrangement of visibly discernible objects.
Typically an orderly repetition of similar tones and textures will produce a
distinctive and ultimately recognizable pattern. Orchards with evenly spaced
trees, and urban streets with regularly spaced houses, are good examples of pattern.
Texture refers to the arrangement and frequency of tonal variation in
particular areas of an image. Rough textures would consist of a mottled tone
where the grey levels change abruptly in a small area, whereas smooth textures
would have very little tonal variation. Smooth textures are most often the result
of uniform, even surfaces, such as fields, asphalt, or grasslands. A target with a
rough surface and irregular structure, such as a forest canopy, results in a rough
textured appearance. Texture is one of the most important elements for
distinguishing features in radar imagery.
Shadow is also helpful in interpretation as it may provide an idea of the
profile and relative height of a target or targets which may make identification
easier. However, shadows can also reduce or eliminate interpretation in their
area of influence, since targets within shadows are much less (or not at all)
discernible from their surroundings. Shadow is also useful for enhancing or
identifying topography and landforms, particularly in radar imagery.
Association takes into account the relationship between other recognizable
objects or features in proximity to the target of interest. The identification of
features that one would expect to associate with other features may provide
information to facilitate identification. In the example given above, commercial
properties may be associated with proximity to major transportation routes,
whereas residential areas would be associated with schools, playgrounds, and
sports fields. In our example, a lake is associated with boats, a marina, and
adjacent recreational land.
Digital Image Processing
In today's world of advanced technology where most remote sensing data
are recorded in digital format, virtually all image interpretation and analysis
involves some element of digital processing. Digital image processing may
involve numerous procedures including formatting and correcting of the data,
digital enhancement to facilitate better visual interpretation, or even automated
classification of targets and features entirely by computer. In order to process
remote sensing imagery digitally, the data must be recorded and available in a
digital form suitable for storage on a computer tape or disk. Obviously, the other
requirement for digital image processing is a computer system, sometimes
referred to as an image analysis system, with the appropriate hardware and
software to process the data. Several commercially available software systems
have been developed specifically for remote sensing image processing and analysis.
For discussion purposes, most of the common image processing functions
available in image analysis systems can be categorized into the following four
groups:
Preprocessing
Image Enhancement
Image Transformation
Image Classification and Analysis
Preprocessing functions involve those operations that are normally required
prior to the main data analysis and extraction of information, and are generally
grouped as radiometric or geometric corrections. Radiometric corrections include
correcting the data for sensor irregularities and unwanted sensor or atmospheric
noise, and converting the data so they accurately represent the reflected or
emitted radiation measured by the sensor. Geometric corrections include
correcting for geometric distortions due to sensor-Earth geometry variations, and
conversion of the data to real world coordinates (e.g. latitude and longitude) on
the Earth's surface.
The objective of the second group of image processing functions, grouped
under the term image enhancement, is solely to improve the appearance of the
imagery to assist in visual interpretation and analysis. Examples of enhancement
functions include contrast stretching to increase the tonal distinction between
various features in a scene, and spatial filtering to enhance (or suppress) specific
spatial patterns in an image.
Image transformations are operations similar in concept to those for image
enhancement. However, unlike image enhancement operations which are
normally applied only to a single channel of data at a time, image
transformations usually involve combined processing of data from multiple
spectral bands. Arithmetic operations (i.e. subtraction, addition, multiplication,
division) are performed to combine and transform the original bands into "new"
images which better display or highlight certain features in the scene. We will
look at some of these operations including various methods of spectral or band
ratioing, and a procedure called principal components analysis which is used to
more efficiently represent the information in multichannel imagery.
Image classification and analysis operations are used to digitally identify
and classify pixels in the data. Classification is usually performed on multichannel data sets (A) and this process assigns each pixel in an image to a
particular class or theme (B) based on statistical characteristics of the pixel
brightness values. There are a variety of approaches taken to perform digital
classification. We will briefly describe the two generic approaches which are
used most often, namely supervised and unsupervised classification.
In the following sections we will describe each of these four categories of
digital image processing functions in more detail.
Pre-processing operations, sometimes referred to as image restoration and
rectification, are intended to correct for sensor- and platform-specific radiometric
and geometric distortions of data. Radiometric corrections may be necessary due
to variations in scene illumination and viewing geometry, atmospheric
conditions, and sensor noise and response. Each of these will vary depending on
the specific sensor and platform used to acquire the data and the conditions
during data acquisition. Also, it may be desirable to convert and/or calibrate the
data to known (absolute) radiation or reflectance units to facilitate comparison
between data.
Variations in illumination and viewing geometry between images (for
optical sensors) can be corrected by modeling the geometric relationship and
distance between the area of the Earth's surface imaged, the sun, and the sensor.
This is often required so as to be able to more readily compare images collected
by different sensors at different dates or times, or to mosaic multiple images
from a single sensor while maintaining uniform illumination conditions from
scene to scene.
As we learned in Chapter 1, scattering of radiation occurs as it passes
through and interacts with the atmosphere. This scattering may reduce, or
attenuate, some of the energy illuminating the surface. In addition, the
atmosphere will further attenuate the signal propagating from the target to the
sensor. Various methods of atmospheric correction can be applied ranging from
detailed modeling of the atmospheric conditions during data acquisition, to
simple calculations based solely on the image data. An example of the latter
method is to examine the observed brightness values (digital numbers), in an
area of shadow or for a very dark object (such as a large clear lake - A) and
determine the minimum value (B). The correction is applied by subtracting the
minimum observed value, determined for each specific band, from all pixel
values in each respective band. Since scattering is wavelength dependent the
minimum values will vary from band to band. This method is based on the
assumption that the reflectance from these features, if the atmosphere is clear,
should be very small, if not zero. If we observe values much greater than zero,
then they are considered to have resulted from atmospheric scattering.
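The dark-object subtraction just described can be sketched in a few lines of NumPy; the three-band array below is invented purely for illustration, with the per-band minimum standing in for the haze value measured over a dark lake:

```python
import numpy as np

# Hypothetical 3-band image (band, row, col); each band carries an
# additive haze offset, larger at shorter wavelengths because
# scattering is wavelength dependent.
image = np.array([
    [[60, 62], [75, 90]],   # band 1: minimum (dark lake pixel) = 60
    [[40, 45], [50, 70]],   # band 2: minimum = 40
    [[15, 20], [30, 55]],   # band 3: minimum = 15
])

# Dark-object subtraction: find the minimum DN in each band and
# subtract it from every pixel of that band.
haze = image.min(axis=(1, 2), keepdims=True)
corrected = image - haze
```

Because the minimum is taken per band, each band ends up with its own correction, mirroring the band-to-band variation noted above.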
Noise in an image may be due to irregularities or errors that occur in the
sensor response and/or data recording and transmission. Common forms of
noise include systematic striping or banding and dropped lines. Both of these
effects should be corrected before further enhancement or classification is
performed. Striping was common in early Landsat MSS data due to variations
and drift in the response over time of the six MSS detectors. The 'drift' was
different for each of the six detectors, causing the same brightness to be
represented differently by each detector. The overall appearance was thus a
'striped' effect. The corrective process made a relative correction among the six
sensors to bring their apparent values in line with each other. Dropped lines
occur when there are systems errors which result in missing or defective data
along a scan line. Dropped lines are normally 'corrected' by replacing the line
with the pixel values in the line above or below, or with the average of the two.
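A minimal sketch of the dropped-line repair, assuming the index of the defective scan line is already known:

```python
import numpy as np

# Single-band image with a dropped scan line (row 2 recorded as zeros).
img = np.array([
    [10, 12, 14],
    [20, 22, 24],
    [ 0,  0,  0],   # dropped line
    [40, 42, 44],
], dtype=float)

bad_row = 2
# Replace the dropped line with the average of the lines above and below.
img[bad_row] = (img[bad_row - 1] + img[bad_row + 1]) / 2
```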
For many quantitative applications of remote sensing data, it is necessary
to convert the digital numbers to measurements in units which represent the
actual reflectance or emittance from the surface. This is done based on detailed
knowledge of the sensor response and the way in which the analog signal (i.e.
the reflected or emitted radiation) is converted to a digital number, called analog-to-digital (A-to-D) conversion. By solving this relationship in the reverse
direction, the absolute radiance can be calculated for each pixel, so that
comparisons can be accurately made over time and between different sensors.
We learned earlier that all remote sensing imagery is inherently subject to
geometric distortions. These distortions may be due to several factors, including:
the perspective of the sensor optics; the motion of the scanning system; the
motion of the platform; the platform altitude, attitude, and velocity; the terrain
relief; and, the curvature and rotation of the Earth. Geometric corrections are
intended to compensate for these distortions so that the geometric representation
of the imagery will be as close as possible to the real world. Many of these
variations are systematic, or predictable in nature and can be accounted for by
accurate modeling of the sensor and platform motion and the geometric
relationship of the platform with the Earth. Other unsystematic, or random,
errors cannot be modeled and corrected in this way. Therefore, geometric
registration of the imagery to a known ground coordinate system must be
performed.
The geometric registration process involves identifying the image
coordinates (i.e. row, column) of several clearly discernible points, called ground
control points (or GCPs), in the distorted image (A - A1 to A4), and matching
them to their true positions in ground coordinates (e.g. latitude, longitude). The
true ground coordinates are typically measured from a map (B - B1 to B4), either
in paper or digital format. This is image-to-map registration. Once several well-distributed GCP pairs have been identified, the coordinate information is
processed by the computer to determine the proper transformation equations to
apply to the original (row and column) image coordinates to map them into their
new ground coordinates. Geometric registration may also be performed by
registering one (or more) images to another image, instead of to geographic
coordinates. This is called image-to-image registration and is often done prior to
performing various image transformation procedures, which will be discussed in
section 4.6, or for multitemporal image comparison.
In order to actually geometrically correct the original distorted image, a
procedure called resampling is used to determine the digital values to place in
the new pixel locations of the corrected output image. The resampling process
calculates the new pixel values from the original digital pixel values in the
uncorrected image. There are three common methods for resampling: nearest
neighbour, bilinear interpolation, and cubic convolution. Nearest neighbour
resampling uses the digital value from the pixel in the original image which is
nearest to the new pixel location in the corrected image. This is the simplest
method and does not alter the original values, but may result in some pixel
values being duplicated while others are lost. This method also tends to result in
a disjointed or blocky image appearance. Bilinear interpolation resampling takes
a weighted average of four pixels in the original image nearest to the new pixel
location. The averaging process alters the original pixel values and creates
entirely new digital values in the output image. This may be undesirable if
further processing and analysis, such as classification based on spectral response,
is to be done. If this is the case, resampling may best be done after the
classification process. Cubic convolution resampling goes even further to
calculate a distance weighted average of a block of sixteen pixels from the
original image which surround the new output pixel location. As with bilinear
interpolation, this method results in completely new pixel values. However,
these two methods both produce images which have a much sharper appearance
and avoid the blocky appearance of the nearest neighbour method.
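Of the three resampling methods, nearest neighbour is the simplest to sketch. The toy transform below (a plain 2x upsampling) stands in for the real transformation equations derived from the GCPs:

```python
import numpy as np

src = np.array([
    [ 0, 10, 20],
    [30, 40, 50],
    [60, 70, 80],
], dtype=float)

def nearest_neighbour(src, rows, cols, transform):
    """Resample: for each output pixel, invert the geometric transform
    to find the fractional source location, then take the nearest
    original pixel value (original DNs are preserved, not averaged)."""
    out = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            sr, sc = transform(r, c)          # fractional source coords
            ir = min(max(int(round(sr)), 0), src.shape[0] - 1)
            ic = min(max(int(round(sc)), 0), src.shape[1] - 1)
            out[r, c] = src[ir, ic]
    return out

# Upsample 3x3 -> 6x6: output (r, c) maps back to source (r/2, c/2).
out = nearest_neighbour(src, 6, 6, lambda r, c: (r / 2.0, c / 2.0))
```

Note how source values are duplicated into several output pixels, which is exactly the blocky effect described above; bilinear interpolation would instead average the four nearest source pixels, creating new values.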
Image Enhancement
Enhancements are used to make it easier for visual interpretation and
understanding of imagery. The advantage of digital imagery is that it allows us
to manipulate the digital pixel values in an image. Although radiometric
corrections for illumination, atmospheric influences, and sensor characteristics
may be done prior to distribution of data to the user, the image may still not be
optimized for visual interpretation. Remote sensing devices, particularly those
operated from satellite platforms, must be designed to cope with levels of
target/background energy which are typical of all conditions likely to be
encountered in routine use. With large variations in spectral response from a
diverse range of targets (e.g. forest, deserts, snowfields, water, etc.) no generic
radiometric correction could optimally account for and display the optimum
brightness range and contrast for all targets. Thus, for each application and each
image, a custom adjustment of the range and distribution of brightness values is
usually necessary.
In raw imagery, the useful data often populates only a small portion of the
available range of digital values (commonly 8 bits or 256 levels). Contrast
enhancement involves changing the original values so that more of the available
range is used, thereby increasing the contrast between targets and their
backgrounds. The key to understanding contrast enhancements is to understand
the concept of an image histogram. A histogram is a graphical representation of
the brightness values that comprise an image. The brightness values (i.e. 0-255)
are displayed along the x-axis of the graph. The frequency of occurrence of each
of these values in the image is shown on the y-axis.
By manipulating the range of digital values in an image, graphically
represented by its histogram, we can apply various enhancements to the data.
There are many different techniques and methods of enhancing contrast and
detail in an image; we will cover only a few common ones here. The simplest
type of enhancement is a linear contrast stretch. This involves identifying lower
and upper bounds from the histogram (usually the minimum and maximum
brightness values in the image) and applying a transformation to stretch this
range to fill the full range. In our example, the minimum value (occupied by
actual data) in the histogram is 84 and the maximum value is 153. These 70 levels
occupy less than one-third of the full 256 levels available. A linear stretch
uniformly expands this small range to cover the full range of values from 0 to
255. This enhances the contrast in the image with light toned areas appearing
lighter and dark areas appearing darker, making visual interpretation much
easier. This graphic illustrates the increase in contrast in an image before (top)
and after (bottom) a linear contrast stretch.
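The 84-to-153 example works out as follows; the five sample DNs are hypothetical, but the minimum and maximum match the figures quoted above:

```python
import numpy as np

# Raw band whose useful data occupy only DNs 84..153, i.e. less than
# one-third of the available 0..255 range.
raw = np.array([84, 100, 120, 140, 153], dtype=float)

lo, hi = raw.min(), raw.max()          # 84 and 153
# Linear stretch: map [lo, hi] uniformly onto the full [0, 255] range.
stretched = np.round((raw - lo) / (hi - lo) * 255).astype(np.uint8)
```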
A uniform distribution of the input range of values across the full range
may not always be an appropriate enhancement, particularly if the input range is
not uniformly distributed. In this case, a histogram-equalized stretch may be
better. This stretch assigns more display values (range) to the frequently
occurring portions of the histogram. In this way, the detail in these areas will be
better enhanced relative to those areas of the original histogram where values
occur less frequently. In other cases, it may be desirable to enhance the contrast
in only a specific portion of the histogram. For example, suppose we have an
image of the mouth of a river, and the water portions of the image occupy the
digital values from 40 to 76 out of the entire image histogram. If we wished to
enhance the detail in the water, perhaps to see variations in sediment load, we
could stretch only that small portion of the histogram represented by the water
(40 to 76) to the full grey level range (0 to 255). All pixels below or above these
values would be assigned to 0 and 255, respectively, and the detail in these areas
would be lost. However, the detail in the water would be greatly enhanced.
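The river-mouth example can be sketched the same way: stretch only the 40-to-76 water range to the full grey-level range and clip everything outside it (the sample DNs below are invented):

```python
import numpy as np

raw = np.array([10, 40, 58, 76, 200], dtype=float)

lo, hi = 40.0, 76.0   # water DNs from the example above
# Stretch [lo, hi] to [0, 255]; pixels below lo or above hi are clipped
# to 0 or 255, sacrificing their detail to enhance detail in the water.
out = np.clip((raw - lo) / (hi - lo) * 255, 0, 255).astype(np.uint8)
```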
Spatial filtering encompasses another set of digital processing functions
which are used to enhance the appearance of an image. Spatial filters are
designed to highlight or suppress specific features in an image based on their
spatial frequency. Spatial frequency is related to the concept of image texture,
which we discussed in section 4.2. It refers to the frequency of the variations in
tone that appear in an image. "Rough" textured areas of an image, where the
changes in tone are abrupt over a small area, have high spatial frequencies, while
"smooth" areas with little variation in tone over several pixels, have low spatial
frequencies. A common filtering procedure involves moving a 'window' of a few
pixels in dimension (e.g. 3x3, 5x5, etc.) over each pixel in the image, applying a
mathematical calculation using the pixel values under that window, and
replacing the central pixel with the new value. The window is moved along in
both the row and column dimensions one pixel at a time and the calculation is
repeated until the entire image has been filtered and a "new" image has been
generated. By varying the calculation performed and the weightings of the
individual pixels in the filter window, filters can be designed to enhance or
suppress different types of features.
A low-pass filter is designed to emphasize larger, homogeneous areas of
similar tone and reduce the smaller detail in an image. Thus, low-pass filters
generally serve to smooth the appearance of an image. Average and median
filters, often used for radar imagery, are examples of low-pass filters. High-pass
filters do the opposite and serve to sharpen the appearance of fine detail in an
image. One implementation of a high-pass filter first applies a low-pass filter to
an image and then subtracts the result from the original, leaving behind only the
high spatial frequency information. Directional, or edge detection filters are
designed to highlight linear features, such as roads or field boundaries. These
filters can also be designed to enhance features which are oriented in specific
directions. These filters are useful in applications such as geology, for the
detection of linear geologic structures.
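The moving-window procedure, a 3x3 averaging (low-pass) kernel, and the subtract-the-low-pass form of high-pass filtering described above can be sketched as follows (the image is a toy example with one bright pixel):

```python
import numpy as np

def filter3x3(img, kernel):
    """Slide a 3x3 window over every interior pixel and replace the
    central pixel with the weighted sum of its neighbourhood; border
    pixels are left unchanged in this sketch."""
    out = img.astype(float).copy()
    for r in range(1, img.shape[0] - 1):
        for c in range(1, img.shape[1] - 1):
            out[r, c] = np.sum(img[r-1:r+2, c-1:c+2] * kernel)
    return out

img = np.array([
    [10, 10, 10, 10],
    [10, 10, 10, 10],
    [10, 10, 90, 10],   # one high-spatial-frequency detail
    [10, 10, 10, 10],
], dtype=float)

low_kernel = np.full((3, 3), 1.0 / 9.0)   # equal weights: averaging
low = filter3x3(img, low_kernel)          # low-pass: smooths the spike
high = img - low                          # high-pass: keeps only detail
```

Changing the kernel weights (e.g. to directional gradients) yields the edge-detection filters mentioned above, without changing the window mechanics.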
Image Transformations
Image transformations typically involve the manipulation of multiple
bands of data, whether from a single multispectral image or from two or more
images of the same area acquired at different times (i.e. multitemporal image
data). Either way, image transformations generate "new" images from two or
more sources which highlight particular features or properties of interest, better
than the original input images.
Basic image transformations apply simple arithmetic operations to the
image data. Image subtraction is often used to identify changes that have
occurred between images collected on different dates. Typically, two images
which have been geometrically registered are used with the pixel (brightness)
values in one image (1) being subtracted from the pixel values in the other (2).
Scaling the resultant image (3) by adding a constant (127 in this case) to the
output values will result in a suitable 'difference' image. In such an image, areas
where there has been little or no change (A) between the original images, will
have resultant brightness values around 127 (mid-grey tones), while those areas
where significant change has occurred (B) will have values higher or lower than
127 - brighter or darker depending on the 'direction' of change in reflectance
between the two images . This type of image transform can be useful for
mapping changes in urban development around cities and for identifying areas
where deforestation is occurring, as in this example.
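A minimal sketch of the difference image, using two invented 2x2 "dates" and the +127 scaling constant quoted above:

```python
import numpy as np

# Two geometrically registered images of the same area, different dates.
date1 = np.array([[100, 100], [ 50, 200]], dtype=int)
date2 = np.array([[102, 100], [150,  60]], dtype=int)

# Subtract, then add 127 so "no change" sits at mid-grey; clip to the
# displayable 0..255 range.
diff = np.clip(date2 - date1 + 127, 0, 255)
```

Unchanged pixels land near 127, while increases and decreases in reflectance appear as bright and dark tones respectively.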
Image division or spectral ratioing is one of the most common transforms
applied to image data. Image ratioing serves to highlight subtle variations in the
spectral responses of various surface covers. By ratioing the data from two
different spectral bands, the resultant image enhances variations in the slopes of
the spectral reflectance curves between the two different spectral ranges that may
otherwise be masked by the pixel brightness variations in each of the bands. The
following example illustrates the concept of spectral ratioing. Healthy vegetation
reflects strongly in the near-infrared portion of the spectrum while absorbing
strongly in the visible red. Other surface types, such as soil and water, show near
equal reflectances in both the near-infrared and red portions. Thus, a ratio image
of Landsat MSS Band 7 (Near-Infrared - 0.8 to 1.1 µm) divided by Band 5 (Red - 0.6 to 0.7 µm) would result in ratios much greater than 1.0 for vegetation, and
ratios around 1.0 for soil and water. Thus the discrimination of vegetation from
other surface cover types is significantly enhanced. Also, we may be better able
to identify areas of unhealthy or stressed vegetation, which show low near-infrared reflectance, as the ratios would be lower than for healthy green
vegetation.
Another benefit of spectral ratioing is that, because we are looking at
relative values (i.e. ratios) instead of absolute brightness values, variations in
scene illumination as a result of topographic effects are reduced. Thus, although
the absolute reflectances for forest covered slopes may vary depending on their
orientation relative to the sun's illumination, the ratio of their reflectances
between the two bands should always be very similar. More complex ratios
involving the sums of and differences between spectral bands for various
sensors, have been developed for monitoring vegetation conditions. One widely
used image transform is the Normalized Difference Vegetation Index (NDVI)
which has been used to monitor vegetation conditions on continental and global
scales using the Advanced Very High Resolution Radiometer (AVHRR) sensor
onboard the NOAA series of satellites.
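Both the simple band ratio and the NDVI reduce to one line each; the NIR and red DNs below are invented to mimic the vegetation, soil, and water behaviour described above:

```python
import numpy as np

# Hypothetical near-infrared and red DNs for three cover types.
nir = np.array([200.0, 60.0, 20.0])   # vegetation, soil, water
red = np.array([ 40.0, 55.0, 25.0])

ratio = nir / red                     # simple band ratio (e.g. MSS 7 / 5)
ndvi = (nir - red) / (nir + red)      # Normalized Difference Vegetation Index
```

Vegetation yields a ratio well above 1.0 (strongly positive NDVI), soil sits near 1.0 (NDVI near zero), and water falls below 1.0 (negative NDVI).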
Different bands of multispectral data are often highly correlated and thus
contain similar information. For example, Landsat MSS Bands 4 and 5 (green and
red, respectively) typically have similar visual appearances since reflectances for
the same surface cover types are almost equal. Image transformation techniques
based on complex processing of the statistical characteristics of multi-band data
sets can be used to reduce this data redundancy and correlation between bands.
One such transform is called principal components analysis. The objective of this
transformation is to reduce the dimensionality (i.e. the number of bands) in the
data, and compress as much of the information in the original bands into fewer
bands. The "new" bands that result from this statistical procedure are called
components. This process attempts to maximize (statistically) the amount of
information (or variance) from the original data into the least number of new
components. As an example of the use of principal components analysis, a seven
band Thematic Mapper (TM) data set may be transformed such that the first
three principal components contain over 90 percent of the information in the
original seven bands. Interpretation and analysis of these three bands of data,
combining them either visually or digitally, is simpler and more efficient than
trying to use all of the original seven bands. Principal components analysis, and
other complex transforms, can be used either as an enhancement technique to
improve visual interpretation or to reduce the number of bands to be used as
input to digital classification procedures, discussed in the next section.
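Principal components analysis can be sketched as an eigen-decomposition of the band covariance matrix; the two highly correlated "bands" below are toy data standing in for, say, MSS green and red:

```python
import numpy as np

# Two correlated bands flattened to pixel vectors: shape (bands, pixels).
band_a = np.array([10.0, 20.0, 30.0, 40.0])
band_b = np.array([11.0, 19.0, 31.0, 39.0])
data = np.stack([band_a, band_b])

# Principal components are the eigenvectors of the band covariance
# matrix, ordered by decreasing eigenvalue (variance).
cov = np.cov(data)
eigvals, eigvecs = np.linalg.eigh(cov)        # ascending order
order = np.argsort(eigvals)[::-1]
centered = data - data.mean(axis=1, keepdims=True)
components = eigvecs[:, order].T @ centered   # "new" bands

# Fraction of total variance captured by each component.
explained = eigvals[order] / eigvals.sum()
```

With strongly correlated inputs, nearly all of the variance ends up in the first component, which is exactly the redundancy reduction described above.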
Image Classification and Analysis
A human analyst attempting to classify features in an image uses the
elements of visual interpretation to identify homogeneous groups of pixels
which represent various features or land cover classes of interest. Digital image
classification uses the spectral information represented by the digital numbers in
one or more spectral bands, and attempts to classify each individual pixel based
on this spectral information. This type of classification is termed spectral pattern
recognition. In either case, the objective is to assign all pixels in the image to
particular classes or themes (e.g. water, coniferous forest, deciduous forest, corn,
wheat, etc.). The resulting classified image is comprised of a mosaic of pixels,
each of which belongs to a particular theme, and is essentially a thematic "map" of
the original image.
When talking about classes, we need to distinguish between information
classes and spectral classes. Information classes are those categories of interest
that the analyst is actually trying to identify in the imagery, such as different
kinds of crops, different forest types or tree species, different geologic units or
rock types, etc. Spectral classes are groups of pixels that are uniform (or near-similar) with respect to their brightness values in the different spectral channels
of the data. The objective is to match the spectral classes in the data to the
information classes of interest. Rarely is there a simple one-to-one match between
these two types of classes. Rather, unique spectral classes may appear which do
not necessarily correspond to any information class of particular use or interest
to the analyst. Alternatively, a broad information class (e.g. forest) may contain a
number of spectral sub-classes with unique spectral variations. Using the forest
example, spectral sub-classes may be due to variations in age, species, and
density, or perhaps as a result of shadowing or variations in scene illumination.
It is the analyst's job to decide on the utility of the different spectral classes and
their correspondence to useful information classes.
Common classification procedures can be broken down into two broad
subdivisions based on the method used: supervised classification and
unsupervised classification. In a supervised classification, the analyst identifies
in the imagery homogeneous representative samples of the different surface
cover types (information classes) of interest. These samples are referred to as
training areas. The selection of appropriate training areas is based on the
analyst's familiarity with the geographical area and their knowledge of the actual
surface cover types present in the image. Thus, the analyst is "supervising" the
categorization of a set of specific classes. The numerical information in all
spectral bands for the pixels comprising these areas is used to "train" the
computer to recognize spectrally similar areas for each class. The computer uses
a special program or algorithm (of which there are several variations), to
determine the numerical "signatures" for each training class. Once the computer
has determined the signatures for each class, each pixel in the image is compared
to these signatures and labeled as the class it most closely "resembles" digitally.
Thus, in a supervised classification we are first identifying the information
classes, which are then used to determine the spectral classes that represent them.
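One simple variant of this scheme is a minimum-distance-to-means classifier: the "signature" is just the mean of each training area, and a pixel is labeled with the nearest mean (real systems use richer statistics). The training pixels below are hypothetical:

```python
import numpy as np

# Training pixels (rows = pixels, cols = two spectral bands) for two
# hypothetical classes, sampled from analyst-chosen training areas.
training = {
    "water":  np.array([[20, 10], [22, 12], [18,  9]], dtype=float),
    "forest": np.array([[60, 90], [65, 95], [58, 88]], dtype=float),
}

# "Train": here the spectral signature is simply each class mean vector.
signatures = {name: px.mean(axis=0) for name, px in training.items()}

def classify(pixel, signatures):
    """Label a pixel with the class whose signature it most closely
    resembles (smallest Euclidean distance in spectral space)."""
    return min(signatures,
               key=lambda n: np.linalg.norm(pixel - signatures[n]))

label = classify(np.array([21.0, 11.0]), signatures)
```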
Unsupervised classification in essence reverses the supervised
classification process. Spectral classes are grouped first, based solely on the
numerical information in the data, and are then matched by the analyst to
information classes (if possible). Programs, called clustering algorithms, are used
to determine the natural (statistical) groupings or structures in the data. Usually,
the analyst specifies how many groups or clusters are to be looked for in the
data. In addition to specifying the desired number of classes, the analyst may
also specify parameters related to the separation distance among the clusters and
the variation within each cluster. This iterative clustering process may
result in some clusters that the analyst will want to subsequently
combine, or clusters that should be broken down further - each of these requiring
a further application of the clustering algorithm. Thus, unsupervised
classification is not completely without human intervention. However, it does
not start with a pre-determined set of classes as in a supervised classification.
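A minimal k-means-style sketch of such a clustering algorithm (the deterministic initialization is only for illustration; real implementations randomize and iterate until convergence criteria are met):

```python
import numpy as np

# Pixel brightness vectors (two bands) with two natural groupings.
pixels = np.array([[10.0, 12], [11, 13], [9, 11],
                   [80, 85], [82, 88], [79, 84]])

def kmeans(pixels, k, iters=10):
    """Alternate two steps: assign each pixel to its nearest cluster
    centre, then recompute each centre as the mean of its pixels."""
    centres = pixels[:k].copy()   # deterministic start for this sketch
    for _ in range(iters):
        dists = np.linalg.norm(pixels[:, None] - centres[None], axis=2)
        labels = dists.argmin(axis=1)
        centres = np.array([pixels[labels == i].mean(axis=0)
                            for i in range(k)])
    return labels, centres

labels, centres = kmeans(pixels, k=2)
```

The analyst would then inspect the resulting clusters and match them to information classes, merging or splitting them where necessary, as described above.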
Data Integration and Analysis
In the early days of analog remote sensing when the only remote sensing
data source was aerial photography, the capability for integration of data from
different sources was limited. Today, with most data available in digital format
from a wide array of sensors, data integration is a common method used for
interpretation and analysis. Data integration fundamentally involves the
combining or merging of data from multiple sources in an effort to extract better
and/or more information. This may include data that are multitemporal,
multiresolution, multisensor, or multi-data type in nature.
Multitemporal data integration has already been alluded to in section 4.6
when we discussed image subtraction. Imagery collected at different times is
integrated to identify areas of change. Multitemporal change detection can be
achieved through simple methods such as these, or by other more complex
approaches such as multiple classification comparisons or classifications using
integrated multitemporal data sets. Multiresolution data merging is useful for a
variety of applications. The merging of data of a higher spatial resolution with
data of lower resolution can significantly sharpen the spatial detail in an image
and enhance the discrimination of features. SPOT data are well suited to this
approach as the 10 metre panchromatic data can be easily merged with the 20
metre multispectral data. Additionally, the multispectral data serve to retain
good spectral resolution while the panchromatic data provide the improved
spatial resolution.
Data from different sensors may also be merged, bringing in the concept
of multisensor data fusion. An excellent example of this technique is the
combination of multispectral optical data with radar imagery. These two diverse
spectral representations of the surface can provide complementary information.
The optical data provide detailed spectral information useful for discriminating
between surface cover types, while the radar imagery highlights the structural
detail in the image.
Applications of multisensor data integration generally require that the
data be geometrically registered, either to each other or to a common geographic
coordinate system or map base. This also allows other ancillary (supplementary)
data sources to be integrated with the remote sensing data. For example,
elevation data in digital form, called Digital Elevation or Digital Terrain Models
(DEMs/DTMs), may be combined with remote sensing data for a variety of
purposes. DEMs/DTMs may be useful in image classification, as effects due to
terrain and slope variability can be corrected, potentially increasing the accuracy
of the resultant classification. DEMs/DTMs are also useful for generating three-dimensional perspective views by draping remote sensing imagery over the
elevation data, enhancing visualization of the area imaged.
Combining data of different types and from different sources, such as we
have described above, is the pinnacle of data integration and analysis. In a digital
environment where all the data sources are geometrically registered to a
common geographic base, the potential for information extraction is extremely
wide. This is the concept for analysis within a digital Geographical Information
System (GIS) database. Any data source which can be referenced spatially can be
used in this type of environment. A DEM/DTM is just one example of this kind
of data. Other examples could include digital maps of soil type, land cover
classes, forest species, road networks, and many others, depending on the
application. The results from a classification of a remote sensing data set in map
format, could also be used in a GIS as another data source to update existing map
data. In essence, by analyzing diverse data sets together, it is possible to extract
better and more accurate information in a synergistic manner than by using a
single data source alone. There are a myriad of potential applications and
analyses possible. In the next and final chapter, we will
look at examples of various applications of remote sensing data, many involving
the integration of data from different sources.
1) What are the important elements of visual interpretation?
2) Describe image classification and analysis.
3) Explain in detail the image enhancement techniques.
4) Describe image transformation.
5) What are the important keys for image interpretation?
A map is a graphic representation of a portion of the earth's surface drawn
to scale. It uses colors, symbols, and labels to represent features found on the
ground. The ideal representation would be realized if every feature of the area
being mapped could be shown in true shape. Obviously this is impossible, and
an attempt to plot each feature true to scale would result in a product impossible
to read even with the aid of a magnifying glass.
Therefore, to be understandable, features must be represented by
conventional signs and symbols. To be legible, many of these must be
exaggerated in size, often far beyond the actual ground limits of the feature
represented. On a 1:250,000 scale map, the prescribed symbol for a building
covers an area about 500 feet square on the ground; a road symbol is equivalent
to a road about 520 feet wide on the ground; the symbol for a single-track
railroad (the length of a cross-tie) is equivalent to a railroad cross-tie about 1,000
feet on the ground.
The portrayal of many features requires similar exaggeration. Therefore, the
selection of features to be shown, as well as their portrayal, is in accord with the
guidance established by the Defense Mapping Agency.
A map is a visual representation of an area: a symbolic depiction
highlighting relationships between elements of that space, such as objects,
regions, and themes.
Many maps are static two-dimensional, geometrically accurate representations of
three-dimensional space, while others are dynamic or interactive, even three-dimensional. Although most commonly used to depict geography, maps may
represent any space, real or imagined, without regard to context or scale; e.g.
Brain mapping, DNA mapping, and extraterrestrial mapping.
A map provides information on the existence, the location of, and the
distance between ground features, such as populated places and routes of travel
and communication. It also indicates variations in terrain, heights of natural
features, and the extent of vegetation cover. With our military forces dispersed
throughout the world, it is necessary to rely on maps to provide information to
our combat elements and to resolve logistical operations far from our shores.
Soldiers and materials must be transported, stored, and placed into operation at
the proper time and place. Much of this planning must be done by using maps.
Therefore, any operation requires a supply of maps; however, the finest maps
available are worthless unless the map user knows how to read them
Thematic map
A map that displays the spatial distribution of an attribute that relates to a single
topic, theme, or subject of discourse. Usually, a thematic map displays a single
attribute (a "univariate map") such as soil type, vegetation, geology, land use, or
landownership. For attributes such as soil type or land use ("nominal" variables),
shaded maps that highlight regions ("polygons") by employing different colors or
patterns are generally preferred. For other attributes (like population density, a
"metric" variable), a shaded map in which each shade corresponds to a range of values is used.
Thematic maps are used to display geographical concepts such as density,
distribution, relative magnitudes, gradients, spatial relationships and
movements. Also called geographic, special purpose, distribution, parametric, or
planimetric maps.
A thematic map displays spatial pattern of a theme or series of attributes.
In contrast to reference maps which show many geographic features (forests,
roads, political boundaries), thematic maps emphasize spatial variation of one or
a small number of geographic distributions. These distributions may be physical
phenomena such as climate or human characteristics such as population density
and health issues. These types of maps are sometimes referred to as graphic
essays that portray spatial variations and interrelationships of geographical
distributions. Location, of course, is also important to provide a reference base of
where selected phenomena are occurring. Barbara B. Petchenik described the
difference as "in place, about space." While general reference maps show where
something is in space, thematic maps tell a story about that place.
A topographic map is a type of map characterized by large-scale detail
and quantitative representation of relief, usually using contour lines in modern
mapping, but historically using a variety of methods. Traditional definitions
require a topographic map to show both natural and man-made features.
The Centre for Topographic Information provides this definition of a
topographic map:
"A topographic map is a detailed and accurate graphic representation of cultural
and natural features on the ground."
However, in the vernacular and day to day world, the representation of
relief (contours) is popularly held to define the genre, such that even small-scale
maps showing relief are commonly (and erroneously, in the technical sense)
called "topographic." According to cartographers Kraak and Ormeling,
"Traditionally, the main division of maps is into topographic and thematic maps.
Topographic maps supply a general image of the earth's surface: roads,
rivers, buildings, often the nature of the vegetation, the relief and the names of
the various mapped objects." The study or discipline of topography, while
interested in relief, is actually a much broader field of study which takes into
account all natural and man made features of terrain.
Topographic maps show a 3 dimensional world in 2 dimensions by using
contour lines. Many people have trouble reading these maps because
mountains and valleys are represented with concentric circles and lines. Many
hikers use topographic maps, especially in areas where there are no roads with
signs. Geologists depend on topographic maps to record the types of rocks.
Engineers use topographic maps when they are planning roads, buildings, or
other human–made structures. Imagine designing a city without considering
where hills and valleys are located!
A geologic map or geological map is a special-purpose map made to show
geological features.
The stratigraphic contour lines are drawn on the surface of a selected deep
stratum, so that they can show the topographic trends of the strata under the
ground. It is not always possible to properly show this when the strata are
extremely fractured, mixed, affected by discontinuities, or otherwise disturbed.
Strike and dip symbols consist of (at minimum) a long line, a number, and
a short line which are used to indicate tilted beds. The long line is the strike line,
which shows the true horizontal direction along the bed, the number is the dip or
number of degrees of tilt above horizontal, and the short line is the dip line,
which shows the direction of tilt.
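Under the right-hand-rule convention (one of several conventions in use; treated here as an assumption), the dip direction lies 90 degrees clockwise from the strike azimuth, so the symbol's numbers can be derived from one another:

```python
def dip_direction(strike_deg):
    """Dip direction under the right-hand rule: 90 degrees clockwise of strike."""
    return (strike_deg + 90) % 360

def strike_dip_label(strike_deg, dip_deg):
    """Conventional azimuth notation, e.g. '030/45' for strike 030, dip 45."""
    return f"{strike_deg:03d}/{dip_deg:02d}"
```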
A geologic map is a map of the different types of rocks that are on the
surface of the Earth. By mapping different rock types, geologists can determine
the relationships between different rock formations which can then be used to
find mineral resources, oil, and gravel deposits. Also, you want to know what
type of rock you are building on or else you might have a Leaning Tower of Pisa
or a pile of rubble after a strong earthquake.
3. Geographic maps
Cartography, or map-making, is the study and, often, the practice of crafting
representations of the Earth upon a flat surface; one who makes maps is
called a cartographer.
Road maps are perhaps the most widely used maps today, and form a
subset of navigational maps, which also include aeronautical and nautical charts,
railroad network maps, and hiking and bicycling maps. In terms of quantity, the
largest number of drawn map sheets is probably made up by local surveys,
carried out by municipalities, utilities, tax assessors, emergency services
providers, and other local agencies. Many national surveying projects have been
carried out by the military, such as the British Ordnance Survey (now a civilian
government agency internationally renowned for its comprehensively detailed maps).
A map can also be any document giving information as to where or what
something is.
Scientists involved in the study of animals, plants, and other living
organisms use maps to illustrate where these groups live or migrate. It is
important to many zoologists to know where the organisms that they study live
and where they move to. People who monitor endangered species need to know
if the ranges of migration have become larger or smaller through time.
These types of maps include maps that look at human activity in urban
and metropolitan areas and the environment in which we all live. Maps that
illustrate physiographic features such as forests, grassland, woodland, tundra,
grazing land, ocean floors, and ocean sediments could be included in this large category.
Meteorological maps that show climate, weather and wind are types of
environmental maps. Meteorologists, oceanographers, geographers, city
planners, and many other professionals depend greatly on these maps to record
and forecast conditions in their specific fields.
6.Orthophoto maps
These maps show land features using color-enhanced photographic
images which have been processed to show detail in true position. They may or
may not include contours. Because imagery naturally depicts an area in a more
true-to-life manner than the conventional line map, the orthophoto map provides
an excellent portrayal of extensive areas of sand, marsh, or flat agricultural areas.
7.Physical maps
Physical maps show the earth's landforms and bodies of water. The maps
use lines, shading, tints, spot elevations, and different colors to show elevation.
This kind of map often has some road, city and cultural information but mostly
functions as a view of the land surface. Often these maps make very attractive
framed pieces for the den or office.
8.Political maps
Political maps show boundaries that divide one political entity from
another, such as townships, counties, cities, and states. Some maps emphasize
the boundaries by printing the areas of each political division in different colors,
for example world maps usually show each country in a different color.
A political map can be made of any area from the local county, municipal levels
all the way up to the world level. In general, most maps are political with far
fewer being produced as physical maps.
9.Relief maps: Shaded Relief and Raised Relief
Relief maps are maps that show relief data using contour lines, colors, or shading.
Shaded relief maps show topographic features by using shading to simulate the
appearance of sunlight and shadows; steep mountains will have dark shadows.
Raised-relief maps are three-dimensional plastic or vinyl maps portraying the
physical features of a region; they can have as much as 2-3 inches of vertical
relief, which makes them striking to look at but fragile and difficult to ship.
10.Road maps
Michelin in France and Gulf Oil in America produced the first road maps
to encourage people to travel more, thus consuming more tires and oil. Such
maps were usually free until the oil crisis of 1973, when service stations began to charge for them.
A road map is published primarily to assist travelers in moving from one place
to another. Some road maps show only interstate highways, while others show a
detailed network of roads, including the back roads. Generally, only large-scale
maps - such as a topographic map, a Gem Trek map, Trails Illustrated map, or a
DeLorme Atlas and Gazetteer - will show unimproved roads.
Some road maps specify distances between various points on the map. Others
show various cultural geography features such as colleges and universities,
airports, museums, historical sights, and information to make a journey more enjoyable.
You will discover several publishers that have produced entire series of road
maps for given regions; examples include the Michelin series for France.
Road atlases are frequently a good choice for a traveler who is going to be
covering a large region. There are two main types of road atlases: state or
national atlases, and city street atlases.
Geographic information system
A geographic information system (GIS), also known as a geographical
information system or geospatial information system, is any system for
capturing, storing, analyzing, managing and presenting data and associated
attributes which are spatially referenced to Earth. In certain countries such as
Canada, GIS is better known as Geomatics. Another definition is: "GIS is a
system or tool or computer based methodology to collect, store, manipulate,
retrieve and analyse spatially (georeferenced) data."
In the strictest sense, it is any information system capable of integrating,
storing, editing, analyzing, sharing, and displaying geographically referenced
information. In a more generic sense, GIS is a tool that allows users to create
interactive queries (user created searches), analyze the spatial information, edit
data, maps, and present the results of all these operations. Geographic
information science is the science underlying the geographic concepts,
applications and systems, taught in degree and GIS Certificate programs at many
universities.Geographic information system technology can be used for scientific
investigations, resource management, asset management, environmental impact
assessment, urban planning, cartography, criminology, history, sales, marketing,
and logistics. For example, GIS might allow emergency planners to easily
calculate emergency response times in the event of a natural disaster, GIS might
be used to find wetlands that need protection from pollution, or GIS can be used
by a company to site a new business location to take advantage of a previously
underserved market.
History of development
About 15,500 years ago, on the walls of caves near Lascaux, France, Cro-Magnon hunters drew pictures of the animals they hunted. Associated with the
animal drawings are track lines and tallies thought to depict migration routes.
While simplistic in comparison to modern technologies, these early records
mimic the two-element structure of modern geographic information systems, an
image associated with attribute information.
In 1854, John Snow depicted a cholera outbreak in London using points to
represent the locations of some individual cases, possibly the earliest use of the
geographic method. His study of the distribution of cholera led to the source of
the disease, a contaminated water pump within the heart of the cholera outbreak.
E. W. Gilbert's version (1958) of John Snow's 1855 map of the Soho cholera
outbreak showing the clusters of cholera cases in the London epidemic of 1854
While the basic elements of topology and theme existed previously in
cartography, the John Snow map was unique, using cartographic methods not
only to depict but also to analyze clusters of geographically dependent
phenomena for the first time.
The early 20th century saw the development of "photo lithography" where
maps were separated into layers. Computer hardware development spurred by
nuclear weapon research would lead to general purpose computer "mapping"
applications by the early 1960s.
The year 1962 saw the development of the world's first true operational
GIS in Ottawa Ontario, Canada by the federal Department of Forestry and Rural
Development. Developed by Dr. Roger Tomlinson, it was called the "Canada
Geographic Information System" (CGIS) and was used to store, analyze, and manipulate data
collected for the Canada Land Inventory (CLI)—an initiative to determine the
land capability for rural Canada by mapping information about soils, agriculture,
recreation, wildlife, waterfowl, forestry, and land use at a scale of 1:50,000. A
rating classification factor was also added to permit analysis.
CGIS was the world's first "system" and was an improvement over
"mapping" applications as it provided capabilities for overlay, measurement, and
digitizing/scanning. It supported a national coordinate system that spanned the
continent, coded lines as "arcs" having a true embedded topology, and it stored
the attribute and locational information in separate files. As a result of this,
Tomlinson has become known as the "father of GIS," particularly for his use of
overlays in promoting the spatial analysis of convergent geographic data. CGIS
lasted into the 1990s and built the largest digital land resource database in
Canada. It was developed as a mainframe based system in support of federal and
provincial resource planning and management. Its strength was continent-wide
analysis of complex data sets. The CGIS was never available in a commercial form.
In 1964, Howard T Fisher formed the Laboratory for Computer Graphics
and Spatial Analysis at the Harvard Graduate School of Design (LCGSA 1965-1991), where a number of important theoretical concepts in spatial data handling
were developed, and which by the 1970s had distributed seminal software code
and systems, such as 'SYMAP', 'GRID', and 'ODYSSEY' -- which served as literal
and inspirational sources for subsequent commercial development -- to
universities, research centers, and corporations worldwide.
By the early 1980s, M&S Computing (later Intergraph), Environmental
Systems Research Institute (ESRI) and CARIS (Computer Aided Resource
Information System) emerged as commercial vendors of GIS software,
successfully incorporating many of the CGIS features, combining the first
generation approach to separation of spatial and attribute information with a
second generation approach to organizing attribute data into database structures.
In parallel, the development of a public domain GIS was begun in 1982 by the
U.S. Army Construction Engineering Research Laboratory (USA-CERL) in
Champaign, Illinois, a branch of the U.S. Army Corps of Engineers, to meet the
need of the United States military for software for land management and
environmental planning. The later 1980s and 1990s industry growth were
spurred on by the growing use of GIS on Unix workstations and the personal
computer. By the end of the 20th century, the rapid growth in various systems
had been consolidated and standardized on relatively few platforms and users
were beginning to export the concept of viewing GIS data over the Internet,
requiring data format and transfer standards. More recently, there is a growing
number of free, open source GIS packages which run on a range of operating
systems and can be customized to perform specific tasks.
Components of a GIS
A GIS can be divided into five components: People, Data, Hardware,
Software, and Procedures. All of these components need to be in balance for the
system to be successful. No one part can run without the other.
People are the component that actually makes the GIS work. They
include a plethora of positions including GIS managers, database administrators,
application specialists, systems analysts, and programmers. They are responsible
for maintenance of the geographic database and provide technical support.
People also need to be educated to make decisions on what type of system to
use. People associated with a GIS can be categorized into: viewers, general users,
and GIS specialists.
Viewers are the public at large whose only need is to browse a
geographic database for referential material. These constitute the largest class of users.
General Users are people who use GIS in conducting business, performing
professional services, and making decisions. They include facility managers,
resource managers, planners, scientists, engineers, lawyers, business
entrepreneurs, etc.
GIS specialists are the people who make the GIS work. They include GIS
managers, database administrators, application specialists, systems analysts, and
programmers. They are responsible for the maintenance of the geographic
database and the provision of technical support to the other two classes of users.
(Lo, 2002)
Procedures include how the data will be retrieved, input into the system,
stored, managed, transformed, analyzed, and finally presented in a final output.
The procedures are the steps taken to answer the question that needs to be resolved.
The ability of a GIS to perform spatial analysis and answer these questions is
what differentiates this type of system from other information systems.
The transformation processes includes such tasks as adjusting the
coordinate system, setting a projection, correcting any digitized errors in a data
set, and converting data from vector to raster or raster to vector. (Carver, 1998).
Hardware consists of the technical equipment needed to run a GIS
including a computer system with enough power to run the software, enough
memory to store large amounts of data, and input and output devices such as
scanners, digitizers, GPS data loggers, media disks, and printers. (Carver, 1998).
There are many different GIS software packages available today. All
packages must be capable of data input, storage, management, transformation,
analysis, and output, but the appearance, methods, resources, and ease of use of
the various systems may be very different. Today’s software packages are
capable of allowing both graphical and descriptive data to be stored in a single
database, known as the object-relational model. Before this innovation, the
geo-relational model was used. In this model, graphical and descriptive data sets
were handled separately. The modern packages usually come with a set of tools
that can be customized to the user's needs (Lo, 2002).
The producers and the main products of GIS Software are the following:
Environmental Systems Research Institute ( ESRI ): ArcInfo, ArcView.
Autodesk: AutoCAD Map
Clark Labs: IDRISI
International Institute for Aerospace Survey and Earth Sciences: ILWIS
Mapinfo Corporation: Mapinfo.
Bentley Systems: Microstation.
PCI Geomatics: PAMAP
Perhaps the most time consuming and costly aspect of initiating a GIS is creating
a database. There are several things to consider before acquiring geographic
data. It is crucial to check the quality of the data before obtaining it. Errors in
the data set can add many unpleasant and costly hours to implementing a GIS
and the results and conclusions of the GIS analysis most likely will be wrong.
Several guidelines to look at include:
Lineage – This is a description of the source material from which the data
were derived, and the methods of derivation, including all transformations
involved in producing the final digital files. This should include all dates of the
source material and updates and changes made to it. (Guptill, 1995)
Positional Accuracy – This is the closeness of an entity in an appropriate
coordinate system to that entity’s true position in the system. The positional
accuracy includes measures of the horizontal and vertical accuracy of the
features in the data set. (Guptill, 1995)
Attribute Accuracy – An attribute is a fact about some location, set of
locations, or features on the surface of the earth. This information often includes
measurements of some sort, such as temperature or elevation or a label of a place
name. The source of error usually lies within the collection of these facts. It is
vital to the analysis aspects of a GIS that this information be accurate.
Logical Consistency - Deals with the logical rules of structure and attribute
rules for spatial data and describes the compatibility of a datum with other data
in a data set. There are several different mathematical theories and models used
to test logical consistency such as metric and incidence tests, topological and
order related tests. These consistency checks should be run at different stages in
the handling of spatial data. (Guptill, 1995).
Completeness – This is a check to see if relevant data is missing with regards
to the features and the attributes. This could deal with either omission errors or
spatial rules such as minimum width or area that may limit the information.
(Guptill, 1995) (Chrisman,1999).
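The completeness and logical-consistency checks described above can be automated against a dataset's attribute table. A minimal sketch (field names and the allowed land-use domain are hypothetical):

```python
def check_quality(records, required_fields, allowed_landuse):
    """Flag records with missing fields (completeness) or attribute
    values outside the permitted domain (logical consistency)."""
    problems = []
    for i, rec in enumerate(records):
        for field in required_fields:
            if rec.get(field) is None:
                problems.append((i, f"missing {field}"))
        lu = rec.get("landuse")
        if lu is not None and lu not in allowed_landuse:
            problems.append((i, f"invalid landuse {lu!r}"))
    return problems

parcels = [
    {"id": 1, "landuse": "forest"},
    {"id": 2, "landuse": "lava"},   # not in the attribute domain
    {"id": 3, "landuse": None},     # missing value
]
issues = check_quality(parcels, ["id", "landuse"], {"forest", "urban", "water"})
```

Running such checks before loading data into the GIS catches errors that would otherwise corrupt every downstream analysis.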
Also known as geospatial data or geographic information it is the data or
information that identifies the geographic location of features and boundaries on
Earth, such as natural or constructed features, oceans, and more. Spatial data is
usually stored as coordinates and topology, and is data that can be mapped.
Spatial data is often accessed, manipulated or analyzed through Geographic
Information Systems (GIS).
Definition 1:
The conversion or abstraction of the earth and its properties to a database
that defines location and properties of individual features of interest.
Definition 2:
Duplicating the real world in the computer by collecting information
about things and where these things are located.
Spatial Data is:
An inventory of assets - Landcover, Landuse and other natural resources
can be considered assets.
A ‘Snapshot’ in time - Information loses value if not maintained.
A “living document” type of resource if you choose to keep it up to date.
Spatial Data = Spatial (Where) + Data (What)
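The "Spatial (Where) + Data (What)" split can be modelled directly: each feature pairs a coordinate with a dictionary of attributes. A sketch (class and field names are illustrative, not from any particular GIS):

```python
from dataclasses import dataclass, field

@dataclass
class Feature:
    x: float  # "where": easting or longitude
    y: float  # "where": northing or latitude
    attributes: dict = field(default_factory=dict)  # "what"

# A well located by coordinates, carrying its descriptive data
well = Feature(x=452300.0, y=5411250.0,
               attributes={"type": "well", "depth_m": 35.0})
```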
Non-spatial data may be joined to geocoded files with matching attributes
and displayed as regular maps. This is common in Geographic Information
Systems (GIS). For example, census information such as race or income, which is
not inherently spatial, can be displayed as maps. Unfortunately, non-spatial data
often has no corresponding geocoded representation; yet valuable information
may still be derived if the right representation can be found. By drawing on
cartographic metaphors and representing non-spatial data as maps, or
"information maps," the information in non-spatial data can be "spatialized,"
analyzed, browsed, and processed using GIS and cartographic methods, then
shared on the web using internet map servers.
Additional non-spatial data can also be stored besides the spatial data
represented by the coordinates of a vector geometry or the position of a raster
cell. In vector data, the additional data are attributes of the object. For example, a
forest inventory polygon may also have an identifier value and information
about tree species. In raster data the cell value can store attribute information,
but it can also be used as an identifier that can relate to records in another table.
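The raster-cell-as-identifier idea is a simple lookup: the cell stores a code, and a separate table maps each code to its attribute record. A sketch (the codes and land-cover classes are hypothetical):

```python
# 3x3 land-cover raster; each cell stores a class code, not the class itself
raster = [
    [1, 1, 2],
    [1, 3, 2],
    [3, 3, 2],
]

# Attribute table keyed on the cell value
landcover = {
    1: {"class": "forest", "runoff_coeff": 0.10},
    2: {"class": "water",  "runoff_coeff": 0.00},
    3: {"class": "urban",  "runoff_coeff": 0.85},
}

def cell_attributes(row, col):
    """Resolve a cell's stored code to its record in the attribute table."""
    return landcover[raster[row][col]]
```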
Spatial analysis with GIS
Given the vast range of spatial analysis techniques that have been
developed over the past half century, any summary or review can only cover the
subject to a limited depth. This is a rapidly changing field, and GIS packages are
increasingly including analytical tools as standard built-in facilities or as optional
toolsets, add-ins or 'analysts'. In many instances such facilities are provided by
the original software suppliers (commercial vendors or collaborative non
commercial development teams), whilst in other cases facilities have been
developed and are provided by third parties. Furthermore, many products offer
software development kits (SDKs), programming languages and language
support, scripting facilities and/or special interfaces for developing one’s own
analytical tools or variants. The website Geospatial Analysis and associated
book/ebook attempt to provide a reasonably comprehensive guide to the subject.
In geometry, a line segment is a part of a line that is bounded by two
distinct end points, and contains every point on the line between its end points.
Examples of line segments include the sides of a triangle or square. More
generally, when the end points are both vertices of a polygon, the line segment is
either an edge (of that polygon) if they are adjacent vertices, or otherwise a
diagonal. When the end points both lie on a curve such as a circle, a line segment
is called a chord (of that curve).
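Because a segment is fully determined by its two end points, its basic measures follow directly from the coordinates:

```python
import math

def segment_length(p, q):
    """Euclidean distance between the two end points."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

def midpoint(p, q):
    """Point halfway along the segment."""
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
```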
In geometry a polygon is traditionally a plane figure that is bounded by a
closed path or circuit, composed of a finite sequence of straight line segments
(i.e., by a closepolygonal chain). These segments are called its edges or sides, and
the points where two edges meet are the polygon's vertices or corners. The interior
of the polygon is sometimes called its body. A polygon is a 2-dimensional
example of the more general polytope in any number of dimensions.
Usually two edges meeting at a corner are required to form an angle that
is not straight (180°); otherwise, the line segments will be considered parts of a
single edge.
The basic geometrical notion has been adapted in various ways to suit particular
purposes. For example in the computer graphics (image generation) field, the
term polygon has taken on a slightly altered meaning, more related to the way
the shape is stored and manipulated within the computer.
Data representation
GIS data represents real world objects (roads, land use, elevation) with
digital data. Real world objects can be divided into two abstractions: discrete
objects (a house) and continuous fields (rain fall amount or elevation). There are
two broad methods used to store data in a GIS for both abstractions: Raster and Vector.
A simple vector map, using each of the vector elements: points for wells,
lines for rivers, and a polygon for the lake.
In a GIS, geographical features are often expressed as vectors, by
considering those features as geometrical shapes. Different geographical features
are expressed by different types of geometry:
Zero-dimensional points are used for geographical features that can
best be expressed by a single point reference; in other words, simple
location. For example, the locations of wells, peak elevations, features of
interest or trailheads. Points convey the least amount of information of
these file types. Points can also be used to represent areas when displayed
at a small scale. For example, cities on a map of the world would be
represented by points rather than polygons. No measurements are
possible with point features.
Lines or polylines
One-dimensional lines or polylines are used for linear features such
as rivers, roads, railroads, trails, and topographic lines. Again, as with
point features, linear features displayed at a small scale are represented
as lines rather than as polygons. Line features can measure distance.
Two-dimensional polygons are used for geographical features that cover a
particular area of the earth's surface. Such features may include lakes,
park boundaries, buildings, city boundaries, or land uses. Polygons
convey the most information of the file types. Polygon features
can measure perimeter and area.
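The perimeter and area measurements available for polygon features can be computed from the vertex list alone; the area uses the standard shoelace formula (planar projected coordinates assumed):

```python
import math

def perimeter(vertices):
    """Sum of edge lengths; the polygon closes back to the first vertex."""
    n = len(vertices)
    return sum(
        math.hypot(vertices[(i + 1) % n][0] - vertices[i][0],
                   vertices[(i + 1) % n][1] - vertices[i][1])
        for i in range(n)
    )

def area(vertices):
    """Shoelace formula; valid for any simple (non-self-intersecting) polygon."""
    n = len(vertices)
    s = sum(vertices[i][0] * vertices[(i + 1) % n][1]
            - vertices[(i + 1) % n][0] * vertices[i][1]
            for i in range(n))
    return abs(s) / 2

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
```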
Each of these geometries is linked to a row in a database that describes
their attributes. For example, a database that describes lakes may contain a lake's
depth, water quality, pollution level. This information can be used to make a
map to describe a particular attribute of the dataset. For example, lakes could be
coloured depending on level of pollution. Different geometries can also be
compared. For example, the GIS could be used to identify all wells (point
geometry) that are within 1-mile (1.6 km) of a lake (polygon geometry) that has a
high level of pollution.
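The wells-near-a-polluted-lake query above combines a distance test with an attribute filter. A minimal sketch, measuring to lake centroids for simplicity (a real GIS would measure to the polygon boundary; all names and coordinates are hypothetical):

```python
import math

wells = [("W1", (0.0, 0.0)), ("W2", (3.0, 0.0)), ("W3", (0.5, 0.5))]
lakes = [
    {"name": "Lake A", "centroid": (1.0, 0.0), "pollution": "high"},
    {"name": "Lake B", "centroid": (3.0, 0.1), "pollution": "low"},
]

def wells_near_polluted_lakes(wells, lakes, radius=1.6):  # ~1 mile in km
    """Select wells within `radius` of any lake whose pollution is high."""
    hits = set()
    for lake in lakes:
        if lake["pollution"] != "high":
            continue
        cx, cy = lake["centroid"]
        for name, (x, y) in wells:
            if math.hypot(x - cx, y - cy) <= radius:
                hits.add(name)
    return sorted(hits)
```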
Vector features can be made to respect spatial integrity through the
application of topology rules such as 'polygons must not overlap'. Vector data
can also be used to represent continuously varying phenomena. Contour lines
and triangulated irregular networks (TIN) are used to represent elevation or
other continuously changing values. TINs record values at point locations, which
are connected by lines to form an irregular mesh of triangles. The faces of the
triangles represent the terrain surface.
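Because a TIN stores elevation only at triangle vertices, the elevation at any point inside a triangle is interpolated, commonly with barycentric weights. A sketch (vertex coordinates are hypothetical):

```python
def tin_elevation(p, tri):
    """Interpolate elevation at point p inside a triangle given as three
    (x, y, z) vertices, using barycentric coordinates."""
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = tri
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    w1 = ((y2 - y3) * (p[0] - x3) + (x3 - x2) * (p[1] - y3)) / det
    w2 = ((y3 - y1) * (p[0] - x3) + (x1 - x3) * (p[1] - y3)) / det
    w3 = 1.0 - w1 - w2
    return w1 * z1 + w2 * z2 + w3 * z3

triangle = ((0, 0, 100.0), (10, 0, 120.0), (0, 10, 140.0))
```

At a vertex the interpolated value equals that vertex's elevation; at the centroid it is the mean of the three.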
A raster data type is, in essence, any type of digital image. Anyone who is
familiar with digital photography will recognize the pixel as the smallest
individual unit of an image. A combination of these pixels will create an image,
distinct from the commonly used scalable vector graphics which are the basis of
the vector model. While a digital image is concerned with the output as
representation of reality, in a photograph or art transferred to computer, the
raster data type will reflect an abstraction of reality. Aerial photos are one
commonly used form of raster data, with only one purpose, to display a detailed
image on a map or for the purposes of digitization. Other raster data sets will
contain information regarding elevation (a DEM) or reflectance of a particular
wavelength of light (e.g., LANDSAT).
Digital elevation model, map (image), and vector data
Raster data type consists of rows and columns of cells, with each cell
storing a single value. Raster data can be images (raster images) with each pixel
(or cell) containing a color value. Additional values recorded for each cell may be
a discrete value, such as land use, a continuous value, such as temperature, or a
null value if no data is available. While a raster cell stores a single value, it can be
extended by using raster bands to represent RGB (red, green, blue) colors,
colormaps (a mapping between a thematic code and RGB value), or an extended
attribute table with one row for each unique cell value. The resolution of the
raster data set is its cell width in ground units.
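With resolution defined as cell width in ground units, a ground coordinate maps to a row and column by integer division from the raster origin. A sketch assuming the common upper-left-origin convention (an assumption; the coordinates below are illustrative):

```python
def world_to_cell(x, y, origin_x, origin_y, cell_size):
    """Map a ground coordinate to (row, col); rows increase downward
    from the upper-left origin, as in most raster formats."""
    col = int((x - origin_x) // cell_size)
    row = int((origin_y - y) // cell_size)
    return row, col

# 30 m resolution raster with its upper-left corner at (500000, 4200000)
row, col = world_to_cell(500095.0, 4199955.0, 500000.0, 4200000.0, 30.0)
```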
Raster data is stored in various formats; from a standard file-based
structure of TIF, JPEG, etc. to binary large object (BLOB) data stored directly in a
relational database management system (RDBMS) similar to other vector-based
feature classes. Database storage, when properly indexed, typically allows for
quicker retrieval of the raster data but can require storage of millions of
significantly-sized records.
Advantages and disadvantages
There are advantages and disadvantages to using a raster or vector data
model to represent reality. Raster data sets record a value for all points in the
area covered which may require more storage space than representing data in a
vector format that can store data only where needed. Raster data also allows easy
implementation of overlay operations, which are more difficult with vector data.
Vector data can be displayed as vector graphics used on traditional maps,
whereas raster data will appear as an image that, depending on the resolution of
the raster file, may have a blocky appearance for object boundaries. Vector data
can be easier to register, scale, and re-project. This can simplify combining vector
layers from different sources. Vector data are more compatible with a relational
database environment: they can be stored in a relational table as a normal
column and processed using a multitude of operators.
The file size for vector data is usually much smaller for storage and
sharing than raster data. Image or raster data can be 10 to 100 times larger than
vector data depending on the resolution. Another advantage of vector data is it
can be easily updated and maintained. For example, a new highway is added.
The raster image will have to be completely reproduced, but the vector data,
"roads," can be easily updated by adding the missing road segment. In addition,
vector data allow much more analysis capability especially for "networks" such
as roads, power, rail, telecommunications, etc. For example, vector data
attributed with the characteristics of roads, ports, and airfields allows the
analyst to query for the best route or method of transportation. The analyst can
query the vector data for the largest port with an airfield within 60 miles and
a connecting road that is at least a two-lane highway. Raster data will not
have all the characteristics of the features it displays.
There are five kinds of data to be represented in a GIS, see figure 1.
Point features
Eg., location of soil samples, boreholes, manholes, rain gauges, burst
water mains, pumping stations, trees, buildings. Points are nodes with no
extent and are often referred to as zero-dimensional features. One method of
storing a point feature in a GIS is as a table in the database management system:
Point ID    X coordinate    Y coordinate    Attribute pointer
Point 1
Point 2
Point 3
Here the attribute pointer is a link into another database table where other
data about that point is kept, for example that it represents an access chamber
to a sewer system with properties such as date of construction, condition, size,
material, etc. This is a link into a full database management system, so further
relations are permitted from this point.
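The point table and its attribute pointer can be sketched as two linked lookup tables. All IDs, coordinates and attribute values below are hypothetical, chosen only to mirror the structure described in the text.

```python
# Point features stored as a table: ID, X, Y, and a pointer (a foreign
# key) into a separate attribute table -- the two-table structure
# described in the text. All values are hypothetical.
points = {
    "Point 1": {"x": 525000.0, "y": 181000.0, "attr_id": 101},
    "Point 2": {"x": 525040.0, "y": 181025.0, "attr_id": 102},
}

# The attribute pointer links into another table holding the full record
# for the feature, e.g. an access chamber to a sewer system.
attributes = {
    101: {"feature": "access chamber", "built": 1978, "condition": "good"},
    102: {"feature": "access chamber", "built": 1985, "condition": "poor"},
}

# Following the pointer is a join: from the spatial record to its data.
rec = attributes[points["Point 1"]["attr_id"]]
print(rec["built"])  # 1978
```

In a real GIS the second table lives in the DBMS, so further relational joins from the attribute record are possible, exactly as the text notes.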
Linear features
Eg. roads (on small scale maps), rivers, pipe lines, power lines, elevation
contours. The nodes are linked with arcs, each with a number of vertices (the
simple arc is a straight line). Between vertices the arc is usually considered a
straight line, but curved links are possible. Line data can be non-branching
lines, or tree or network structures; in a network there can be more than one
route between two nodes. This data is one-dimensional, that is, it has no
thickness, and care must be taken in the definition of the system that a loop is
not confused with a polygon. A simple structure for a line feature or network is:
Line reference    Attribute pointer    Arcs
                                       arc 1
                                       arc 2

Arc reference    X coordinate    Y coordinate
Node 1
Node 2
Vertex 3
Area features
Areas (polygons) with common properties, e.g. pressure zones,
catchments, contributing areas, soil association mapping units, climate zones,
administrative district areas, buildings and other land cover. The polygon
consists of a number of arcs or linear features that form a closed loop without
crossing over one another. The arcs are usually straight between vertices but may
be curved.
Polygon reference    Attribute pointer

Point ID    X coordinate    Y coordinate
Point 1

Simple polygon structure.
The simple polygon representation shown above where a quadrangle is
represented, as used in CAD (or DXF format), is of little use in GIS. The 3 major
problems with simple polygons are:
1. The boundary between two polygons needs to be stored twice. There is
always a possibility that the nodes for each boundary polygon are in slightly
different positions, resulting in artificial gaps between polygons, or slivers
where an area is assigned to two or more polygons (see figure 3). These
problems are particularly acute when manually digitising.
2. When manually digitising it is possible to accidentally cross over from one
polygon to another, creating a totally false polygon, or to pass from one
node to another in incorrect order, giving rise to a weird polygon (see
figure 4).
3. Complex geographical objects are difficult to represent, for example
islands or disjointed polygons (see figure 5). If we consider an example
from urban drainage where a garden area is completely surrounded by
car park, as, for example, at a prestige office complex, then if we calculate
the area of simple polygons the area of the car park will include,
erroneously, the area of the gardens and grossly overestimate the
impermeable area.
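The garden-inside-car-park example can be made concrete with the shoelace formula for polygon area: treating the garden as an island and subtracting its area gives the correct impermeable area, which the simple-polygon model would overestimate. The coordinates below are hypothetical.

```python
# Shoelace formula for the area of a simple polygon; an "island" (here a
# garden inside a car park) is handled by subtracting its area from the
# enclosing polygon's -- the correction the simple-polygon model misses.
def shoelace_area(ring):
    """Area of a closed ring given as [(x, y), ...] vertices."""
    n = len(ring)
    s = 0.0
    for i in range(n):
        x1, y1 = ring[i]
        x2, y2 = ring[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

car_park = [(0, 0), (100, 0), (100, 60), (0, 60)]  # hypothetical coords
garden = [(40, 20), (60, 20), (60, 40), (40, 40)]

impermeable = shoelace_area(car_park) - shoelace_area(garden)
print(impermeable)  # 5600.0 (6000 minus the 400 garden area)
```

Treating the car park as a simple polygon alone would report 6000 square units of impermeable surface, a gross overestimate.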
In GIS, therefore, area data is represented as a topological structure in one of
a number of ways. The Arc/Info method of storing this information is shown in
figure 7. A separate list is used to hold information about islands and disjointed
structures. Different themes can be represented on the same coverage and there
is no requirement that polygons do not overlap. For example, a single coverage
may contain polygons representing land cover, whereas another may contain
the contributing areas to inlet nodes of a storm water drainage system, see
figure 6. The polygons naturally overlap, and the intersection of these polygons
provides one of the main uses of GIS; this is known as overlay, to reflect the
graphical process of overlaying one theme upon another.
Surfaces
Actual or potential surfaces, e.g. ground elevation, variation of mean
annual temperature, spatial distributions of rainfall, population densities. These
are discussed in detail in the section on the digital elevation model (DEM).
Temporal elements
Eg. changes in land use over time, changes to a pipe network, rainfall
records or streamflow records. These are not well represented in current GIS
technology, but newer object-oriented GIS should make this more readily
available.
Raster Representation
Figure 6 shows two polygons intersecting. The numerical calculation
required to compute either the intersection or the union of the two polygons is
quite intensive. The whole process is made much simpler if the polygons are all
the same shape and size, preferably rectangular. This use of rectangular polygons is
known as a cell, grid or raster representation and provides one of the simplest
representations for GIS and spatial statistical modelling. Figure 7 shows the same
polygon data represented as a vector and as a raster. Note that the individual cell
values can be either numbers for computation, such as elevations or pointers to a
database with further attributes.
The ease of programming raster GIS systems and their low computational
overheads make them very suitable for natural or environmental modelling.
The size of cells used in GIS modelling requires careful thought before data entry
and modelling can begin. I have used cells of 1m square for urban drainage work
where we were only interested in a small catchment and 250m square for land
evaluation where we were studying the whole of Ghana.
There is always error in the representation of real world structures as small cells
and it is important to realise the trade off between small cells that accurately
represent the real world but carry a lot of computational overhead and large cells
that are much more efficient but introduce large errors. Fortunately computers
are getting more powerful and disk drives much larger every year so these
problems become less important and we can select cell sizes to represent the
natural variation we observe. For example a soil association boundary will never
be known on the ground to better than 50m accuracy, therefore using any cell
size less than 50m is pointless. My recommendations on cell size are as follows:
Application                                    Recommended cell size
Data derived from 1:50 000 maps
Data derived from 1:10 000 maps
Data derived from 1:1250 maps
Any modelling with satellite remote sensing    resolution of the sensor (often 30m)
Nation-wide land evaluation
Studies involving geodemographics
Physically based rainfall runoff modelling     20-40m (it is debatable whether it
                                               is truly physically based at this
                                               resolution, but this will allow
                                               realistic computation times)
Flood plain studies
Most GIS that use raster data have some means of compressing the data,
using either run-length encoding, quad trees or any of the lossless schemes used
in computer graphics. Unless you intend to write your own modules (and one of
the big attractions of raster GIS is that you can), the compression technique is
irrelevant to the user. It does mean, however, that raster GIS databases can be
as small as their vector counterparts.
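Run-length encoding, the first scheme mentioned above, is easy to sketch: long runs of identical cell values in a raster row collapse to (value, count) pairs, and decoding restores the row exactly, so no information is lost. The row of cell values below is hypothetical.

```python
# Run-length encoding of one raster row: runs of identical cell values
# compress to (value, count) pairs. The scheme is lossless -- decoding
# reproduces the original row exactly.
def rle_encode(row):
    out = []
    for v in row:
        if out and out[-1][0] == v:
            out[-1][1] += 1          # extend the current run
        else:
            out.append([v, 1])       # start a new run
    return [tuple(p) for p in out]

def rle_decode(pairs):
    return [v for v, n in pairs for _ in range(n)]

row = [3, 3, 3, 3, 7, 7, 1, 1, 1, 1, 1, 1]
packed = rle_encode(row)
print(packed)  # [(3, 4), (7, 2), (1, 6)]
assert rle_decode(packed) == row   # lossless round trip
```

Thematic rasters (land use, soil class) compress very well this way because neighbouring cells tend to share values; continuous surfaces such as elevation compress less.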
With some raster GIS all overlays must be carried out with identically
sized cells and all resampling must be carried out manually before the overlay
modelling begins. With other GIS the resampling is carried out dynamically to
either the largest grid size of all the overlays in the model or some user specified
grid size.
Raster and vector GIS are traditionally compared and the author states his
preference for one or the other, but most modern GIS have vector and raster
components which can often be interlinked seamlessly. Some tasks are easier to
carry out in one form than the other; for example, cadastral work requires the
accuracy and precision of a vector GIS, whereas determining the water
requirements of a region can best be done using a raster representation.
GIS software comes in a variety of packages. The two main types, as
already described, are the vector based system and the raster based system. More
modern systems permit the total integration of raster and vector data, allowing
the advantages of both methods to be enjoyed, with few of the disadvantages.
Vector systems are often supported by traditional DataBase Management
Systems (DBMS). The most common conform to the relational model, see Avison
(1992). Arc-Info, the most widely used vector GIS package, follows this approach,
Info being a relational DBMS in its own right. The relational model is the basis of
most DBMS used in organisations and businesses. This underlies the vector
model's principal use as an asset or resource inventory system. A DBMS should
allow access to appropriate parts of the database to different types of user, and
prevent unauthorised viewing or changing. It should also maintain data
concurrency, provide archive facilities and present a simple interface to the user
for manipulating the data.
Raster systems generally do not employ such strict data management.
They have developed from image processing systems and are often used by a
single user. Clearly these are generalisations, and many packages will embody
aspects of both systems.
The most up-to-date systems are described as 'object oriented'. The
distinction of object oriented systems is that all data items are described as being
of one or more object type; e.g. a linear feature, a point, a vector polygon, a
regular raster, a raster cell, a TIN, a DEM, etc. In addition to storing the
description of the object, the methods of displaying, plotting and general
manipulation are also carried with the object type; this is known as
encapsulation.
Objects are hierarchical; rivers, roads and pipes will be objects that are
descended from the linear object, each will, therefore, have the properties,
behaviour and methods inherited from the linear feature, such as length.
However they will each have behaviour and properties that are distinct; roads
will have classes (i.e. 'A' roads and motorways); pipes and roads will not be able
to connect to form a network.
The object oriented paradigm is currently of great interest to the computer
science community. Object oriented programming languages, databases and, of
course, GIS are under development, (see Worboys et al, 1990). There are several
advantages that are stressed by advocates of the object oriented approach;
(i) it is intuitive as people naturally think in terms of objects;
(ii) by specifying behaviour, inconsistencies in the database can be reduced, for
example sewers and water mains objects exhibit different behaviour and should
not be part of the same network;
(iii) developing applications is easy; by having a hierarchical structure new
objects are easily created.
There are a variety of ways of storing geographical data and different ways of
processing the data. The choice of data structure is largely dictated by the use
to which the data is to be put, the capabilities of the GIS being used and, to a
large extent, the existing data formats.
Figure 7(a) Simple vector representation, using the topologic model presented by
Dangermond (1982), more complex structures are used to improve access times.
(b) Raster representation, a raster layer is required for each attribute to be
stored.
1) Define a map and explain the different types of maps.
2) Define geographical information systems and explain how they help to create
digital maps.
3) What are the components of GIS?
4) Write in detail about spatial and non-spatial data.
5) Write about vector and raster database structures.
Perhaps the initial GIS analysis that any user undertakes is the retrieval
and/or reclassification of data. Retrieval operations occur on both spatial and
attribute data. Often data is selected by an attribute subset and viewed
graphically. Retrieval involves the selective search, manipulation, and output of
data without the requirement to modify the geographic location of the features.
The ability to query and retrieve data based on some user-defined criteria
is a necessary feature of the data storage and retrieval subsystem.
Data retrieval involves the capability to easily select data for graphic or
attribute editing, updating, querying, analysis and/or display.
The ability to retrieve data is based on the unique structure of the DBMS
and command interfaces are commonly provided with the software. Most GIS
software also provides a programming subroutine library, or macro language, so
the user can write their own specific data retrieval routines if required.
Querying is the capability to retrieve data, usually a data subset, based on some
user defined formula. These data subsets are often referred to as logical views.
Often the querying is closely linked to the data manipulation and analysis
subsystem. Many GIS software offerings have attempted to standardize their
querying capability by use of the Structured Query Language (SQL). This is especially
true with systems that make use of an external relational DBMS. Through the use
of SQL, GIS software can interface to a variety of different DBMS packages. This
approach provides the user with the flexibility to select their own DBMS. This
has direct implications if the organization has an existing DBMS that is being
used to satisfy other business requirements. Often it is desirable for the same
DBMS to be utilized in the GIS applications. This notion of integrating the GIS
software to utilize an existing DBMS through standards is referred to as corporate
or enterprise GIS. With the migration of GIS technology from being a research tool
to being a decision support tool there is a requirement for it to be totally
integrated with existing corporate activities, including accounting, reporting, and
business functions.
There is a definite trend in the GIS marketplace towards a generic interface with
external relational DBMSs. The use of an external DBMS, linked via an SQL
interface, is becoming the norm. Such flexibility is a strong selling point for
any GIS, and SQL is quickly becoming a standard in the GIS software marketplace.
Spatial analysis
In statistics, spatial analysis or spatial statistics includes any of the formal
techniques which study entities using their topological, geometric, or geographic
properties. The phrase properly refers to a variety of techniques, many still in
their early development, using different analytic approaches and applied in
fields as diverse as astronomy, with its studies of the placement of galaxies in the
cosmos, to chip fabrication engineering, with its use of 'place and route'
algorithms to build complex wiring structures. The phrase is often used in a
more restricted sense to describe techniques applied to structures at the human
scale, most notably in the analysis of geographic data. The phrase is even
sometimes used to refer to a specific technique in a single area of research, for
example, to describe geostatistics.
The history of spatial analysis starts with early mapping, surveying and
geography at the beginning of history, although the techniques of spatial analysis
were not formalized until the later part of the twentieth century. Modern spatial
analysis focuses on computer based techniques because of the large amount of
data, the power of modern statistical and geographic information science (GIS)
software, and the complexity of the computational modeling. Spatial analytic
techniques have been developed in geography, biology, epidemiology, statistics,
geographic information science, remote sensing, computer science, mathematics,
and scientific modelling.
Complex issues arise in spatial analysis, many of which are neither clearly
defined nor completely resolved, but form the basis for current research. The
most fundamental of these is the problem of defining the spatial location of the
entities being studied. For example, a study on human health could describe the
spatial position of humans with a point placed where they live, or with a point
located where they work, or by using a line to describe their weekly trips; each
choice has dramatic effects on the techniques which can be used for the analysis
and on the conclusions which can be obtained. Other issues in spatial analysis
include the limitations of mathematical knowledge, the assumptions required by
existing statistical techniques, and problems in computer based calculations.
Classification of the techniques of spatial analysis is difficult because of
the large number of different fields of research involved, the different
fundamental approaches which can be chosen, and the many forms the data can
take.
Common errors in spatial analysis
The fundamental issues in spatial analysis lead to numerous problems in
analysis including bias, distortion and outright errors in the conclusions reached.
These issues are often interlinked but various attempts have been made to
separate out particular issues from each other.
The locational fallacy
The locational fallacy is a phrase used to describe an error due to the
particular spatial characterization chosen for the elements of study, in
particular the choice of placement for the spatial presence of the element.
Spatial characterizations may be simplistic or even wrong. Studies of
humans often reduce the spatial existence of humans to a single point, for
instance their home address. This can easily lead to poor analysis, for example,
when considering disease transmission which can happen at work or at school
and therefore far from the home.
The spatial characterization may implicitly limit the subject of study. For
example, the spatial analysis of 'crime' data has recently become popular but
these studies can only describe the particular kinds of crime which can be
described spatially. This leads to many maps of assault but not to any maps of
embezzlement, with political consequences for the conceptualization of crime
and the design of policies to address the issue.
The atomic fallacy
This describes errors due to treating elements as separate 'atoms' outside of their
spatial context.
The ecological fallacy
The ecological fallacy describes errors due to performing analyses on
aggregate data when trying to reach conclusions on the individual units. It is
closely related to the Modifiable Areal Unit Problem.
The modifiable areal unit problem
The modifiable areal unit problem (MAUP) is an issue in the analysis of
spatial data arranged in zones, where the conclusion depends on the particular
shape of the zones used in the analysis.
Spatial analysis and modeling often involves aggregate spatial units such as
census tracts and traffic analysis zones. These units may reflect data collection
and/or modeling convenience rather than homogeneous, cohesive regions in the
real world. The spatial units are therefore arbitrary or modifiable and contain
artifacts related to the degree of spatial aggregation or the placement of zone
boundaries. The problem arises because it is known that results derived from an
analysis of these zones depend directly on the zones being studied. It has been
shown that the aggregation of point data into zones of different shape can lead
to opposite conclusions.
Various solutions have been proposed to address the MAUP, including
repeated analysis and graphical techniques, but the issue cannot yet be
considered solved. One strategy is to assess its effects in a sensitivity
analysis by changing the aggregation or boundaries and comparing results from
the analysis and modeling under these different schemes. A second strategy is to
develop optimal spatial units for the analysis.
GIS analysis functions use the spatial and non-spatial attribute data to
answer questions about the real world. It is the spatial analysis functions that
distinguish GIS from other information systems.
When using GIS to address real-world problems, you will come up against
the question of which analysis function to use to solve them. In this case, you
should be aware that using functions wisely will lead to high-quality
information produced from the GIS, and individual analysis functions
must be used in the context of a complete analysis strategy. (Stan Aronoff, 1989)
1. Spatial Data Functions
Spatial data refers to information about the location and shape of, and
relationships among, geographic features, usually stored as coordinates and
topology. Spatial data functions are used to transform spatial data files, such as
digitized map, edit them, and assess their accuracy. They are mainly concerned
with the spatial data.
Format Transformations
Format is the pattern into which data are systematically arranged for use
on a computer. Format transformations are used to get data into acceptable GIS
format. Digital Files must be transformed into the data format used by the GIS,
such as transforming from raster to vector data structure. Raster data often
require no re-formatting; vector data often require topology to be built from
coordinate data, such as arc/node translations. Transformation can be very
costly and time-consuming with poor coordinate data.
Geometric Transformations
Geometric transformations are used to assign ground coordinates to a
map or data layer within the GIS or to adjust one data layer so it can be correctly
overlayed on another of the same area. The procedure used to accomplish this
correction is termed registration.
Two approaches are used in registration: the adjustment of absolute
positions and the adjustment of relative positions. Relative position refers to
the location of features in relation to other features. Rubber sheeting
(registration by relative position) is the procedure of using "slave" and
"master" mathematical transformations to adjust coverage features in a
nonuniform manner; links representing from- and to-locations are used to
define the adjustment. It needs easily identifiable, accurate, well-distributed
control points. Absolute position is the location in relation to the ground, that
is, to a geographic coordinate system. Registration by absolute position is done
layer by layer; its advantage is that it does not propagate errors.
Projection Transformations
Map projection is a mathematical transformation that is used to represent
a spherical surface on a flat map. The transformation assigns to each location on
a spherical surface a unique location on a 2-dimensional map.
Map projections always cause some distortion: of area, shape, distance, or
direction. A GIS commonly supports several projections and has software to
transform data from one projection to another. The map projection most
commonly used for mapping at scales of 1:500,000 or larger in North America
is the UTM (Universal Transverse Mercator) projection. For maps of
continental extent, the Albers, Lambert's Azimuthal, and Polyconic projections
are commonly used.
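A projection transformation is just a mathematical mapping from the sphere to the plane. As an illustration (not UTM, and not the text's projections), the spherical Mercator formulas can be coded directly; a production GIS would use a projection library such as pyproj with a proper ellipsoid model.

```python
import math

# A minimal sketch of a projection transformation: spherical Mercator,
# mapping (longitude, latitude) in degrees to flat x/y in metres.
# Illustrative only -- real GIS use library code and ellipsoid models.
R = 6378137.0  # sphere radius (the WGS84 semi-major axis, in metres)

def mercator(lon_deg, lat_deg):
    x = R * math.radians(lon_deg)
    y = R * math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2))
    return x, y

# Distortion grows with latitude: equal latitude steps map to ever
# larger y steps, which is why Mercator exaggerates polar areas.
x1, y1 = mercator(0.0, 30.0)
x2, y2 = mercator(0.0, 60.0)
print(y2 > 2 * y1)  # True
```

The comparison at the end shows the area distortion the text mentions: doubling the latitude more than doubles the projected northing.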
Conflation
Conflation is the procedure of reconciling the positions of corresponding
features in different data layers. Conflation functions are used to reconcile these
differences so that the corresponding features overlay precisely. This is
important when data from several data layers are used in an analysis.
Edge Matching
Edge matching is a procedure to adjust the position of features extending
across map sheet boundaries. This function ensures that all features that cross
adjacent map sheets have the same edge locations. Links are used when
matching features in adjacent coverages.
Editing Functions
Editing functions are used to add, delete, and change the geographic
position of features. Sliver or splinter polygons are thin polygons that occur
along the borders of polygons following digitizing and the topological
overlay of two or more coverages.
Address Matching is a mechanism for relating two files using address as
the relate item. Geographic coordinates and attributes can be transferred from
one address to the other. For example, a data file containing student addresses
can be matched to a street coverage that contains addresses creating a point
coverage of where the students live.
Line Coordinate Thinning
The Thinning function reviews all the coordinate data in a file, identifies
and then removes unnecessary coordinates. Depending on scale, the number of
coordinate pairs can often be significantly reduced without a perceived loss of
detail.
This function is used to reduce the quantity of coordinate data that must
be stored by the GIS. Coordinate thinning, by reducing the number of coordinate
points, reduces the size of the data file, thereby reducing the volume of data to be
stored and processed in the GIS.
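The text does not name a thinning algorithm; the classic choice for this task is Douglas-Peucker, sketched below. Vertices closer than a tolerance to the chord between retained endpoints are judged unnecessary and dropped. The line coordinates are hypothetical.

```python
# Line coordinate thinning with the Douglas-Peucker algorithm (one
# common choice; the text does not prescribe a specific method).
def perp_dist(p, a, b):
    """Perpendicular distance of point p from the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    return abs(dy * px - dx * py + bx * ay - by * ax) / (dx * dx + dy * dy) ** 0.5

def thin(points, tol):
    if len(points) < 3:
        return points
    # find the vertex farthest from the chord between the endpoints
    idx, dmax = 0, 0.0
    for i in range(1, len(points) - 1):
        d = perp_dist(points[i], points[0], points[-1])
        if d > dmax:
            idx, dmax = i, d
    if dmax <= tol:                    # all intermediate vertices redundant
        return [points[0], points[-1]]
    left = thin(points[:idx + 1], tol)   # keep the far vertex, recurse
    right = thin(points[idx:], tol)
    return left[:-1] + right

line = [(0, 0), (1, 0.05), (2, -0.04), (3, 0.03), (4, 0)]
print(thin(line, 0.1))  # [(0, 0), (4, 0)] -- near-collinear vertices removed
```

The tolerance controls the trade-off the text describes: a larger tolerance gives a smaller file but a coarser line.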
2. Attribute Data Functions
Attribute data relates to the description of map items. It is typically
stored in tabular format and linked to the feature by a user-assigned identifier
(e.g., the attributes of a well might include depth and gallons per minute).
Retrieval (selective search)
Retrieval operations on the spatial and attribute data involve the selective
search, manipulation, and output of data without the need to modify the
geographic location of features or to create new spatial entities. These operations
work with the spatial elements as they were entered in the database.
Information from database tables can be accessed directly through the
map, or new maps can be created using information in the tabular database. Both
graphic and tabular data must be stored in formats the computer can recognize
and retrieve.
Classification is the procedure of identifying a set of features as belonging
to a group and defining patterns. Some form of classification function is
provided in every GIS. In a raster-based GIS, numerical values are often used to
indicate classes. Classification is important because it defines patterns. One of the
important functions of a GIS is to assist in recognizing new patterns.
Classification is done using single data layers, as well as with multiple data
layers as part of an overlay operation.
Generalization, also called map dissolve, is the process of making a classification
less detailed by combining classes. Generalization is often used to reduce the
level of classification detail to make an underlying pattern more apparent.
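In a raster GIS, generalization of this kind reduces to reclassifying cell values through a lookup table. The class codes and names below are hypothetical (1-2 for two forest types, 3-4 for two agricultural types), chosen only to illustrate the dissolve.

```python
# Generalization ("map dissolve") in a raster GIS: combine detailed
# classes into broader ones by reclassifying cell values through a
# lookup table. Codes and class names are hypothetical.
reclass = {1: "forest", 2: "forest", 3: "agriculture", 4: "agriculture"}

detailed = [
    [1, 2, 3],
    [2, 4, 4],
]
generalized = [[reclass[v] for v in row] for row in detailed]
print(generalized[0])  # ['forest', 'forest', 'agriculture']
```

After the dissolve, the two forest sub-classes form a single contiguous pattern, making the underlying structure more apparent, as the text describes.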
Verification is a procedure for checking the values of attributes for all records in
a database against their correct values. (Keith C. Clarke, 1997)
3. Integrated Analysis of Spatial and Attribute Data
Overlay is a GIS operation in which layers with a common, registered map base
are joined on the basis of their occupation of space. (Keith C. Clarke, 1997).
The overlay function creates composite maps by combining diverse data sets.
The overlay function can perform simple operations such as laying a road map
over a map of local wetlands, or more sophisticated operations such as
multiplying and adding map attributes of different value to determine averages
and co-occurrences.
Raster and vector models differ significantly in the way overlay operations are
implemented. Overlay operations are usually performed more efficiently in
raster-based systems. In many GISs a hybrid approach is used that takes
advantage of the capabilities of both data models. A vector-based system may
implement some functions in the raster domain by performing a vector-to-raster
conversion on the input data, doing the processing as a raster operation, and
converting the raster result back to a vector file.
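With identically registered grids, a raster overlay is a cell-by-cell operation, which is why it is so efficient. The sketch below runs a hypothetical suitability query (forested cells on gentle slopes) over two small invented layers.

```python
import numpy as np

# Raster overlay: with identically registered grids, combining layers
# is a cell-by-cell operation. A hypothetical suitability query:
# cells that are forested AND on slopes of less than 10 per cent.
forest = np.array([[1, 1, 0],
                   [0, 1, 1]])          # 1 = forest, 0 = other
slope = np.array([[2.0, 12.0, 3.0],
                  [4.0, 5.0, 20.0]])    # per cent slope

suitable = (forest == 1) & (slope < 10.0)
print(suitable.astype(int))
# [[1 0 0]
#  [0 1 0]]
```

More sophisticated overlays, such as the multiplying and adding of attribute values mentioned above, replace the boolean operators with arithmetic ones but follow the same cell-by-cell pattern.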
Region Wide Overlay: "Cookie Cutter Approach"
The region wide, or "cookie cutter," approach to overlay analysis allows natural
features, such as forest stand boundaries or soil polygons, to become the spatial
area(s) which will be analyzed on another map.
For example ( see figures above): given two data sets, forest patches and slope,
what is the area-weighted average slope within each separate patch of forest? To
answer this question, the GIS overlays each patch of forest from the forest patch
data set onto the slope map and then calculates the area-weighted average slope
for each individual forest patch.
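The cookie-cutter question above is a zonal statistic. In a raster sketch with equal-sized cells, the area-weighted average slope within a patch reduces to the plain mean of the slope cells in that zone; the patch IDs and slope values below are hypothetical.

```python
import numpy as np

# "Cookie cutter" overlay as zonal statistics: forest-patch IDs define
# the zones, and we average the slope cells falling in each zone. With
# equal-sized cells the area-weighted mean is simply the cell mean.
patch = np.array([[1, 1, 2],
                  [1, 2, 2]])           # patch IDs (hypothetical)
slope = np.array([[2.0, 4.0, 10.0],
                  [6.0, 12.0, 8.0]])    # per cent slope (hypothetical)

for zone in np.unique(patch):
    mean_slope = slope[patch == zone].mean()
    print(zone, mean_slope)
# 1 4.0   (cells 2, 4, 6)
# 2 10.0  (cells 10, 12, 8)
```

The boolean mask `patch == zone` is the raster analogue of cutting the slope map with each forest-patch "cookie cutter".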
Topological Overlay:
Co-Occurrence mapping in a vector GIS is accomplished by topological
overlaying. Any number of maps may be overlayed to show features occurring at
the same location. To accomplish this, the GIS first stacks maps on top of one
another and finds all new intersecting lines. Second, new nodes (point features
where three or more arcs, or lines, come together) are set at these new
intersections. Lastly, the topologic structure of the data is rebuilt and the
multifactor attributes are attached to the new area features.
Neighborhood Function
Neighborhood functions analyze the relationship between an object and
similar surrounding objects. For example, this function can be used to analyze
which kinds of land use lie next to a given kind of land use in a certain area.
This type of analysis is often used in image processing. A new map is created by
computing the value assigned to a location as a function of the independent
values surrounding that location. Neighborhood functions are particularly
valuable in evaluating the character of a local area.
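A moving-window (focal) mean is the simplest example of such a function: each output cell is the mean of the 3x3 window of input cells around it. The grid values below are hypothetical, and edge cells here use the partial window that fits inside the grid.

```python
import numpy as np

# A neighbourhood function: each output cell is a function (here the
# mean) of the 3x3 window of input cells around it -- a moving-window
# operation typical of raster GIS and image processing.
def focal_mean(grid):
    g = np.asarray(grid, dtype=float)
    out = np.empty_like(g)
    rows, cols = g.shape
    for r in range(rows):
        for c in range(cols):
            # clip the window at the grid edges (partial windows there)
            win = g[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            out[r, c] = win.mean()
    return out

grid = [[1, 2, 3],
        [4, 5, 6],
        [7, 8, 9]]
print(focal_mean(grid)[1, 1])  # 5.0 -- mean of all nine neighbours
```

Replacing the mean with a maximum, a majority vote, or a count of matching land-use codes gives the other common neighbourhood operations.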
Point-in-Polygon and Line-In-Polygon
Point-in-Polygon is a topological overlay procedure which determines the
spatial coincidence of points and polygons. Points are assigned the attributes of
the polygons within which they fall. For example, this function can be used to
analyze an address and find out if it (point) is located within a certain zip code
area (polygon).
Line-in-Polygon is a spatial operation in which lines in one coverage are
overlaid with polygons of another coverage to determine which lines, or portions
of lines, are contained within the polygons. Polygon attributes are associated
with corresponding lines in the resulting line coverage. For example, this
function can be used to find out who will be affected when putting in a new
powerline in an area.
In a vector-based GIS, the identification of points and lines contained
within a polygon area is a specialized search function. In a raster-based GIS, it is
essentially an overlay operation, with the polygons in one data layer and the
points and/or lines in a second data layer.
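The point-in-polygon test itself is usually implemented by ray casting (the text does not name an algorithm): count how many polygon edges a horizontal ray from the point crosses; an odd count means the point is inside. The zip-code boundary below is a hypothetical square.

```python
# Point-in-polygon by ray casting: an odd number of edge crossings on a
# horizontal ray from the point means the point lies inside the polygon.
def point_in_polygon(pt, poly):
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):                 # edge spans the ray's y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:                      # crossing is to the right
                inside = not inside
    return inside

zipcode = [(0, 0), (10, 0), (10, 10), (0, 10)]  # hypothetical boundary
print(point_in_polygon((3, 4), zipcode))   # True  -- address in the area
print(point_in_polygon((12, 4), zipcode))  # False -- address outside
```

Assigning polygon attributes to points is then a matter of running this test for each point against each candidate polygon, exactly the address-in-zip-code example above.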
Topographic Functions
Topography refers to the surface characteristics with continuously
changing value over an area such as elevations, aeromagnetics, noise levels,
income levels, and pollution levels. The topography of a land surface can be
represented in a GIS by digital elevation data. An alternative form of
representation is the Triangulated Irregular Network, or TIN, used in vector-based systems.
Topographic functions are used to calculate values that describe the topography
at a specific geographic location or in the vicinity of the location. The two most
commonly used terrain parameters are the slope and aspect, which are calculated
using the elevation data of the neighbouring points.
Slope is the measure of change in surface value over distance, expressed in
degrees or as a percentage. For example, a rise of 2 meters over a distance of 100
meters describes a 2% slope with an angle of 1.15°. Mathematically, slope is
referred to as the first derivative of the surface. The maximum slope is termed
the gradient. In a raster-format DEM, another grid can be created in which each
cell holds the slope at that position; the maximum difference can then be found
and the gradient determined. Aspect is the direction that a surface faces.
Aspect is defined by the horizontal and vertical angles that the surface faces. In a
raster format DEM, another grid can be created for aspect and a number can be
assigned to a specific direction.
Sun intensity is the combination of slope and aspect. Illumination portrays the
effect of shining a light onto a 3-dimensional surface. (Stan Aronoff, 1989).
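The slope and aspect calculations described above can be sketched as follows. The formulas are standard, but the function name and the east/north gradient convention are choices made for this illustration; real GIS packages estimate the gradients from a 3x3 window of DEM cells:

```python
import math

def slope_and_aspect(dz_dx, dz_dy):
    """Slope and aspect from the two surface gradients (dz/dx eastward,
    dz/dy northward). Slope is returned both as a percentage and in
    degrees; aspect is the compass bearing of the steepest downhill
    direction, measured clockwise from north."""
    rise_over_run = math.hypot(dz_dx, dz_dy)
    slope_percent = 100.0 * rise_over_run
    slope_degrees = math.degrees(math.atan(rise_over_run))
    # Downhill direction is (-dz_dx, -dz_dy); atan2(east, north) gives
    # a clockwise-from-north bearing.
    aspect = math.degrees(math.atan2(-dz_dx, -dz_dy)) % 360.0
    return slope_percent, slope_degrees, aspect

# The example in the text: a rise of 2 m over 100 m, here due east
pct, deg, asp = slope_and_aspect(2.0 / 100.0, 0.0)
print(round(pct, 1), round(deg, 2), asp)  # 2.0 1.15 270.0 (faces west)
```

Run over every interior cell of a gridded DEM, this yields the slope and aspect grids the text describes.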
Thiessen Polygons
Thiessen or voronoi polygons define individual areas of influence around
each of a set of points. Thiessen polygons are polygons whose boundaries define
the area that is closest to each point relative to all other points. Thiessen polygons
are generated from a set of points. They are mathematically defined by the
perpendicular bisectors of the lines between all points. A TIN structure is used to
create Thiessen polygons.
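A raster approximation of Thiessen polygons can be sketched by assigning each cell to its nearest point, since the perpendicular-bisector boundaries are exactly where the nearest point changes. The rain-gauge locations below are hypothetical:

```python
def thiessen_raster(points, rows, cols):
    """Raster approximation of Thiessen (Voronoi) polygons: every cell
    is labelled with the index of its nearest generating point.
    Illustrative only; a vector GIS builds the true polygon boundaries
    from perpendicular bisectors via a TIN (Delaunay triangulation)."""
    grid = []
    for r in range(rows):
        row = []
        for c in range(cols):
            # squared Euclidean distance to each point; take the nearest
            nearest = min(range(len(points)),
                          key=lambda i: (points[i][0] - c) ** 2 +
                                        (points[i][1] - r) ** 2)
            row.append(nearest)
        grid.append(row)
    return grid

# Hypothetical rain gauges at (x, y) cell coordinates
gauges = [(1, 1), (6, 1), (3, 6)]
zones = thiessen_raster(gauges, rows=8, cols=8)
print(zones[1][1], zones[1][6], zones[6][3])  # 0 1 2: each gauge owns its cell
```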
Interpolation is the procedure of predicting unknown values using the
known values at neighboring locations. The quality of the interpolation results
depends on the accuracy, number, and distribution of the known points used in
the calculation, and on how well the mathematical function models the surface.
Most GIS's provide the capability to build complex models by combining
primitive analytical functions. Systems vary as to the complexity provided for
spatial modelling, and the specific functions that are available. However, most
systems provide a standard set of primitive analytical functions that are
accessible to the user in some logical manner.
A geodatabase is a database that is in some way referenced to locations on the
earth. Coupled with this data is usually data known as attribute data. Attribute
data is generally defined as additional information that can be tied to spatial
data. Geodatabases are grouped into two different types: vector and raster.
Most GIS software applications mainly focus on the usage and manipulation of
vector geodatabases, with added components to work with raster-based datasets.
Vector data
Vector data is split into three types: polygon, line (or arc) and point data.
Polygon data is used to represent areas. Polygon features are most commonly
distinguished using a thematic mapping symbology (color schemes), patterns,
or, in the case of numeric gradation, a color gradation scheme.
In this view of a polygon-based dataset, the frequency of fire in an area is
depicted using a graduated color symbology.
Line (or arc) data is used to represent linear features. Common examples would
be road centerlines and hydrology. Symbology most commonly used to
distinguish arc features from one another are line types (solid lines versus
dashed lines) and combinations of colors and line thicknesses. In the example
below, roads are distinguished from the stream network by designating the roads
as a solid black line and the hydrology as a dashed blue line.
Point data is most commonly used to represent nonadjacent features. Examples
would be schools, points of interest, and in the example below, bridge and
culvert locations.
Point features are
also used to represent abstract points. For instance, point locations could
represent city locations or place names.
Both line and point feature data represent polygon data at a much smaller scale.
They help reduce clutter by simplifying data locations. As the features are
zoomed in, the point location of a school is more realistically represented by a
series of building footprints showing the physical location of the campus. Line
features of a street centerline file only represent the physical location of the
street. If a higher degree of spatial resolution is needed, a street curbwidth file
would be used to show the width of the road as well as any features such as
medians and right-of-ways (or sidewalks).
Raster Data
Raster data are cell-based spatial datasets. There are also three types of
raster datasets: thematic data, spectral data, and pictures.
This example of a thematic raster dataset is called a Digital Elevation
Model (DEM). Each cell represents a 30 m pixel with an elevation value
assigned to it. The area shown is the Topanga Watershed in California and
gives the viewer an understanding of the topography of the region.
This image shows a portion of Topanga, California taken from a USGS DOQ.
Each cell contains one value representing the dominant value of that cell.
Raster datasets are intrinsic to most spatial analysis. Data analysis such as
extracting slope and aspect from Digital Elevation Models occurs with raster
datasets. Spatial hydrology modeling such as extracting watersheds and flow
lines also uses a raster-based system. Spectral data presents aerial or satellite
imagery which is then often used to derive vegetation geologic information by
classifying the spectral signatures of each type of feature.
Vegetation classification raster data. The vegetation data was derived
from NDVI classification of a satellite image.
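NDVI itself is a simple band ratio, NDVI = (NIR - Red) / (NIR + Red), exploiting the fact that healthy vegetation reflects strongly in the near-infrared and absorbs red light. A minimal per-pixel sketch, with illustrative reflectance values (not from any real scene):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel:
    (NIR - Red) / (NIR + Red), ranging from -1 to +1. Higher values
    indicate denser, healthier vegetation."""
    if nir + red == 0:
        return 0.0  # guard against division by zero on dark pixels
    return (nir - red) / (nir + red)

# Illustrative reflectances: dense vegetation vs. bare soil
print(round(ndvi(0.50, 0.08), 2))  # 0.72: strong vegetation signal
print(round(ndvi(0.20, 0.18), 2))  # 0.05: near zero, little vegetation
```

Classifying the resulting NDVI grid into ranges produces a vegetation map like the one described above.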
The effect that results from converting spatial data location information into a
cell-based raster format is called stairstepping. The name derives from the image
of exactly that: the square cells along the borders of different value types look
like a staircase viewed from the side.
Unlike vector data, raster data is formed by each cell receiving the value of
the feature that dominates the cell. The stairstepping look comes from the
transition of the cells from one value to another. In the image above the dark
green cell represents chamise vegetation. This means that the dominant feature in
that cell area was chamise vegetation. Other features such as developed land,
water or other vegetation types may be present on the ground in that area. As the
feature in the cell becomes more dominantly urban, the cell is attributed the
value for developed land, hence the pink shading.
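The dominant-value rule described above can be sketched as follows. Summarising a cell from point samples is a simplification of true area-weighted vector-to-raster conversion, and the class names are illustrative:

```python
from collections import Counter

def dominant_value(cell_samples):
    """Raster cell coding: the cell takes the value of the feature that
    dominates it. Here a cell is summarised from a list of ground
    samples falling inside it; ties resolve to the first-seen value."""
    return Counter(cell_samples).most_common(1)[0][0]

# Hypothetical samples inside one cell: mostly chamise, some urban/water
cell = ["chamise", "chamise", "urban", "chamise", "water"]
print(dominant_value(cell))  # "chamise" dominates, so the cell is chamise
```

As the mix of samples shifts toward developed land, the same rule flips the cell to the urban class, which is exactly the transition that produces stairstepping along boundaries.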
GIS and modeling
There are several ways of coupling a model with a GIS:
Loose coupling - GIS is used to prepare data and display results, loosely
coupled to the modeling code.
Tight coupling - the model and the GIS work off the same database, typically
through a component-based software architecture.
Embedded - the model is written in the GIS's scripting language, which can
cause performance problems for dynamic models.
Modeling Tools in GIS
Modeling lies at the very core of analytical applications in GIS. Species
habitat modeling, soil erosion modeling, vulnerability modeling, and so on - all
have the common element of deriving new maps of the likely occurrence or
magnitude of some phenomenon based on an established relation between
existing map layers. Given the importance of this activity, it is not surprising then
to see that GIS continues to evolve in its modeling tools. The latest developments,
however, promise to take GIS to dramatic new levels of functionality.
The earliest modeling tools were macro-scripting languages (e.g.,
Arc/Info AML, ERDAS EML and IDRISI IML). Macro languages allow one to
develop and save a sequence of GIS operational commands, either as sequences
of command line statements or through the use of a special-purpose macro
language, in some cases incorporating some of the control and interface design
elements of a programming language (e.g., ArcView Avenue). Macro languages
can be very powerful, but the sequences are often tedious to construct. In
addition, each scripting language tends to be system specific, requiring a
substantial investment of learning when several systems are used.
A typical map calculator tool. Map calculators are excellent for the
implementation of models that can be expressed as equations using the
operations typically associated with a scientific calculator.
The next tool to be developed was the map calculator (Figure 1). Using the
analogy of a scientific calculator, these tools offer the ability to enter and save
algebraic or logical equations using map layers as variables. Map calculators are
popular because of their immediate familiarity and the ease with which complex
equations can be entered and executed. However, they typically do not offer the
functionality of a macro scripting language, being limited to the kinds of
operations found on a scientific calculator.
Most recently we have seen the development of two new
modeling approaches that offer not only the ability to
automate complex tasks, but the promise of profoundly changing the nature of
GIS modeling itself. These are the COM client/server model and the graphical
modeling medium.
COM is Microsoft's acronym for the Component Object Model - a
language-independent specification for the manner in which software
components communicate. COM is an outgrowth of the developments in Object
Linking and Embedding (OLE) and the component model that underlies most
visual programming languages (VBX/OCX/ActiveX). Today it provides the very
foundation of Windows software development. Significantly, we are seeing the
transformation of most major GIS software systems into COM servers.
A typical sequence in accessing the exposed procedures of a COM server. Once
the server (IDRISI32 in this example) has been registered with your
programming software (in this case, Microsoft's Visual Basic for Applications),
typing the server reference followed by a dot is enough to list the available
properties and methods (top illustration). Then, when a method has been
selected, code completion lists the parameters and their data types (bottom
illustration).
A COM server, such as IDRISI32 or the latest release of ArcGIS, is an application
that exposes elements of its functionality to other applications (clients). Through
the standardized interfaces of COM technology, it is possible to use a visual
programming language such as Visual Basic, Delphi or Visual C++ to write
programs that control the server application like a puppet. Although there is
some investment in learning one of these languages, the payoff is substantial:
the ability to create complex models with customized interfaces based on the
capabilities of the host GIS software (Figures 2 and 3). Further, since the interface
is standard across many applications (unlike system-specific scripting
languages), it is possible to marshal the capabilities of several tools in a single
model. The advantages for third-party software developers are clear. However,
for the individual user and agency workgroup, the potential is also
significant. With only the most fundamental knowledge of visual programming
(something that can be gained from a self-help book in a couple of days) it is
possible to construct models of a complexity that would be almost inconceivable
to do by hand.
The land cover change prediction module illustrated here was developed as a
COM client program - a program that makes calls to modules in the COM server
to do the actual work. In this case, the program code associated with the OK
button simply serves to direct the sequence of operations among standard
IDRISI32 modules exposed by the COM interface. The model is complex (a
typical run might involve more than two thousand GIS operations), but the
programming knowledge required to develop it is minimal.
While COM gets at the internals of the system, graphical modeling tools go in a
very different direction (Figure 4). By placing an additional layer onto the top of
the system they provide a very simple and powerful medium for expressing the
relationship between operations that form the sequence of a model. They also
offer a very flexible means of model development: add a step; test it; add
another; test it; modify it if necessary; and so on.
Graphical modeling environments provide a very direct means of visualizing
the sequence of operations in a model. This example illustrates a cellular
automata process of urban growth using a feedback loop (the red link). The
result is a dynamic model - a major new phase in GIS model development. The
continued development of control structures in graphical modeling
environments (such as conditional branches of control and iteration structures)
suggests that graphical programming environments may rival current
programming environments for many modeling activities.
Current graphical modeling environments in GIS are largely confined to the tree
structure of traditional cartographic modeling. However, it is perhaps not
surprising that we are beginning to see the introduction of alternative control
structures. For example, the illustration in Figure 4 shows the use of a Dynalink
in IDRISI32 - a feedback loop that replaces input layers with the outputs of
previous iterations. The result is a form of dynamic modeling - a somewhat new
but extremely important development in the history of GIS modeling.
Perhaps the surprising thing about graphical modeling in GIS is that the
direction in which these tools are heading promises to replicate much of the
power of COM-based visual programming in the not too distant future. We are
already seeing the introduction of feedback and iteration structures that allow for
elementary dynamic modeling and highly flexible iteration and batch
processes. Moreover, the introduction of true graphical conditional branches of
control and other basic elements of programming languages is not far off.
Coupled with suitable interface development tools, it is not beyond reason to
expect that these may replace conventional programming languages for model
development for the majority of GIS professionals.
Digital elevation model
3D rendering of a DEM of Tithonium Chasma on Mars
A digital elevation model (DEM) is a digital representation of ground surface
topography or terrain. It is also widely known as a digital terrain model (DTM).
A DEM can be represented as a raster (a grid of squares) or as a triangular
irregular network. DEMs are commonly built using remote sensing techniques,
however, they may also be built from land surveying. DEMs are used often in
geographic information systems, and are the most common basis for digitallyproduced relief maps.
Digital elevation models may be prepared in a number of ways, but they
are frequently obtained by remote sensing rather than direct survey. One
powerful technique for generating digital elevation models is interferometric
synthetic aperture radar; two passes of a radar satellite (such as RADARSAT-1)
suffice to generate a digital elevation map tens of kilometers on a side with a
resolution of around ten meters. One also obtains an image of the surface cover.
Another powerful technique for generating a Digital Elevation Model is
using the digital image correlation method. It uses two optical images acquired
from different angles during the same pass of an airplane or an Earth
Observation satellite (such as the HRS instrument of SPOT5).
Older methods of generating DEMs often involve interpolating digital
contour maps that may have been produced by direct survey of the land surface;
this method is still used in mountain areas, where interferometry is not always
satisfactory. Note that the contour data or any other sampled elevation datasets
(by GPS or ground survey) are not DEMs, but may be considered digital terrain
models. A DEM implies that elevation is available continuously at each location
in the study area.
The quality of a DEM is a measure of how accurate the elevation is at each pixel
(absolute accuracy) and how accurately the morphology is presented (relative
accuracy). Several factors play an important role in the quality of DEM-derived
products:
terrain roughness;
sampling density (elevation data collection method);
grid resolution or pixel size;
interpolation algorithm;
vertical resolution;
terrain analysis algorithm.
Methods for obtaining elevation data used to create DEMs include:
Real Time Kinematic GPS
stereo photogrammetry
Common uses of DEMs include:
extracting terrain parameters
modeling water flow or mass movement (for example avalanches)
creation of relief maps
rendering of 3D visualizations.
creation of physical models (including raised-relief maps)
rectification of aerial photography or satellite imagery.
reduction (terrain correction) of gravity measurements (gravimetry,
physical geodesy).
terrain analyses in geomorphology and physical geography
Integration with GIS
Instant Access to Critical Information
Laserfiche Integration Express-GIS unites Laserfiche solutions with ESRI
ArcMap 8.x, allowing users to select map elements - parcels, streets, water
mains, for example - and immediately access associated documents. An
intelligent search tool helps users pinpoint the specific type of document needed
in seconds. Police department case files, historical maps, work orders, business
licenses and other documents become instantly available to GIS users in support
of effective decision making organization-wide.
Complement Homeland Security Initiatives
Laserfiche solutions form the archival and retrieval core of information-related
homeland security initiatives. Working in conjunction with ESRI GIS,
Laserfiche enhances emergency response and preparedness with ready access to
building plans, hazardous materials reports and other documents essential to a
rapid, effective response. Laserfiche Integration Express-GIS is the key, making
available this paperbound information that otherwise would remain inaccessible
to dispatchers, wireless-equipped first responders and other field personnel.
A Streamlined Solution for IT
As demand for information access increases, IT staff are charged with
making disparate systems work together to solve real-world problems.
Laserfiche Integration Express-GIS is a complete, packaged integration solution.
It delivers unified information access benefits to staff who rely on GIS and
document management solutions without taxing IT resources with excessive
customization work.
Integration Express-GIS Highlights
1. Improve service by bridging the gap between document management and
GIS.
2. Access supporting documents directly within the ESRI interface.
3. Enhance homeland security effectiveness and empower first-responders with
instant information access.
4. Conserve IT resources with this packaged integration solution.
5. Leverage standard-setting Laserfiche solutions to deliver information access
and protection benefits.
1) Write in detail about spatial analysis.
2) Define data retrieval.
3) How can GIS technology be used for modeling?
4) Define DEM (digital elevation model) and discuss it briefly.
5) How can GIS techniques be used for artificial intelligence?
6) How can cost and path analysis be derived in GIS?
As we learned in the section on sensors, each one was designed with a
specific purpose. With optical sensors, the design focuses on the spectral bands
to be collected. With radar imaging, the incidence angle and microwave band
used plays an important role in defining which applications the sensor is best
suited for.
Each application itself has specific demands for spectral resolution,
spatial resolution, and temporal resolution.
To review, spectral resolution refers to the width or range of each spectral
band being recorded. As an example, panchromatic imagery (sensing a broad
range of all visible wavelengths) will not be as sensitive to vegetation stress as a
narrow band in the red wavelengths, where chlorophyll strongly absorbs
electromagnetic energy.
Spatial resolution refers to the discernible detail in the image. Detailed
mapping of wetlands requires far finer spatial resolution than does the regional
mapping of physiographic areas.
Temporal resolution refers to the time interval between images. There are
applications requiring data repeatedly and often, such as oil spill, forest fire, and
sea ice motion monitoring. Some applications only require seasonal imaging
(crop identification, forest insect infestation, and wetland monitoring), and some
need imaging only once (geology structural mapping). Obviously, the most
time-critical applications also demand fast turnaround for image processing and
delivery - getting useful imagery quickly into the user's hands.
In a case where repeated imaging is required, the revisit frequency of a
sensor is important (how long before it can image the same spot on the Earth
again) and the reliability of successful data acquisition. Optical sensors have
limitations in cloudy environments, where the targets may be obscured from
view. In some areas of the world, particularly the tropics, this is virtually a
permanent condition. Polar areas also suffer from inadequate solar illumination,
for months at a time. Radar provides reliable data, because the sensor provides
its own illumination, and has long wavelengths to penetrate cloud, smoke, and
fog, ensuring that the target won't be obscured by weather conditions or poorly
illuminated.
Often it takes more than a single sensor to adequately address all of the
requirements for a given application. The combined use of multiple sources of
information is called integration. Additional data that can aid in the analysis or
interpretation of the data is termed "ancillary" data.
The applications of remote sensing described in this chapter are
representative, but not exhaustive. We do not touch, for instance, on the wide
area of research and practical application in weather and climate analysis, but
focus on applications tied to the surface of the Earth. The reader should also note
that there are a number of other applications that are practiced but are very
specialized in nature, and not covered here (e.g. terrain trafficability analysis,
archeological investigations, route and utility corridor planning, etc.).
Multiple sources of information
Each band of information collected from a sensor contains important and
unique data. We know that different wavelengths of incident energy are affected
differently by each target - they are absorbed, reflected or transmitted in different
proportions. The appearance of targets can easily change over time, sometimes
within seconds. In many applications, using information from several different
sources ensures that target identification or information extraction is as accurate
as possible. The following describe ways of obtaining far more information about
a target or area, than with one band from a sensor.
The use of multiple bands of spectral information attempts to exploit
different and independent "views" of the targets so as to make their identification
as confident as possible. Studies have been conducted to determine the optimum
spectral bands for analyzing specific targets, such as insect damaged trees.
Different sensors often provide complementary information, and when
integrated together, can facilitate interpretation and classification of imagery.
Examples include combining high resolution panchromatic imagery with coarse
resolution multispectral imagery, or merging actively and passively sensed data.
A specific example is the integration of SAR imagery with multispectral imagery.
SAR data adds the expression of surficial topography and relief to an otherwise
flat image. The multispectral image contributes meaningful colour information
about the composition or cover of the land surface. This type of image is often
used in geology, where lithology or mineral composition is represented by the
spectral component, and the structure is represented by the radar component.
Information from multiple images taken over a period of time is referred
to as multitemporal information. Multitemporal may refer to images taken days,
weeks, or even years apart. Monitoring land cover change or growth in urban
areas requires images from different time periods. Calibrated data, with careful
controls on the quantitative aspect of the spectral or backscatter response, is
required for proper monitoring activities. With uncalibrated data, a classification
of the older image is compared to a classification from the recent image, and
changes in the class boundaries are delineated. Another valuable multitemporal
tool is the observation of vegetation phenology (how the vegetation changes
throughout the growing season), which requires data at frequent intervals
throughout the growing season.
'Multitemporal information' is acquired from the interpretation of images
taken over the same area, but at different times. The time difference between the
images is chosen so as to be able to monitor some dynamic event. Some
catastrophic events (landslides, floods, fires, etc.) would need a time difference
counted in days, while much slower-paced events (glacier melt, forest regrowth,
etc.) would require years. This type of application also requires consistency in
illumination conditions (solar angle or radar imaging geometry) to provide
consistent and comparable classification results.
The ultimate in critical (and quantitative) multitemporal analysis depends
on calibrated data. Only by relating the brightnesses seen in the image to
physical units, can the images be precisely compared, and thus the nature and
magnitude of the observed changes be determined.
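The post-classification comparison described for uncalibrated data can be sketched as a cell-by-cell comparison of two classified rasters; the class codes and grid layout below are illustrative:

```python
def change_map(old_classes, new_classes):
    """Post-classification change detection: compare an older and a
    recent classified raster cell by cell and report the cells whose
    class changed, with the from/to classes. A toy sketch of the
    uncalibrated-data approach; both grids must share the same shape."""
    changes = []
    for r, (old_row, new_row) in enumerate(zip(old_classes, new_classes)):
        for c, (old_v, new_v) in enumerate(zip(old_row, new_row)):
            if old_v != new_v:
                changes.append((r, c, old_v, new_v))
    return changes

# Hypothetical 2x2 classifications from two dates: urban growth
older = [["forest", "forest"], ["water", "crop"]]
recent = [["forest", "urban"], ["water", "urban"]]
print(change_map(older, recent))
# [(0, 1, 'forest', 'urban'), (1, 1, 'crop', 'urban')]
```

With calibrated data, the comparison can instead be made on the physical quantities themselves, which is why calibration enables quantitative change analysis.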
Agriculture plays a dominant role in economies of both developed and
undeveloped countries. Whether agriculture represents a substantial trading
industry for an economically strong country or simply sustenance for a hungry,
overpopulated one, it plays a significant role in almost every nation. The
production of food is important to everyone and producing food in a
cost-effective manner is the goal of every farmer, large-scale farm manager and
regional agricultural agency. A farmer needs to be informed to be efficient, and
that includes having the knowledge and information products to forge a viable
strategy for farming operations. These tools will help him understand the health
of his crop, extent of infestation or stress damage, or potential yield and soil
conditions. Commodity brokers are also very interested in how well farms are
producing, as yield (both quantity and quality) estimates for all products control
price and worldwide trading.
Satellite and airborne images are used as mapping tools to classify crops,
examine their health and viability, and monitor farming practices. Agricultural
applications of remote sensing include the following:
crop type classification
crop condition assessment
crop yield estimation
mapping of soil characteristics
mapping of soil management practices
compliance monitoring (farming practices)
Crop Type Mapping
Identifying and mapping crops is important for a number of reasons. Maps
of crop type are created by national and multinational agricultural agencies,
insurance agencies, and regional agricultural boards to prepare an inventory of
what was grown in certain areas and when. This serves the purpose of
forecasting grain supplies (yield prediction), collecting crop production statistics,
facilitating crop rotation records, mapping soil productivity, identification of
factors influencing crop stress, assessment of crop damage due to storms and
drought, and monitoring farming activity.
Key activities include identifying the crop types and delineating their
extent (often measured in acres). Traditional methods of obtaining this
information are census and ground surveying. In order to standardize
measurements however, particularly for multinational agencies and consortiums,
remote sensing can provide common data collection and information extraction
methods.
Remote sensing offers an efficient and reliable means of collecting the
information required, in order to map crop type and acreage. Besides providing a
synoptic view, remote sensing can provide structure information about the
health of the vegetation. The spectral reflection of a field will vary with respect to
changes in the phenology (growth), stage type, and crop health, and thus can be
measured and monitored by multispectral sensors. Radar is sensitive to the
structure, alignment, and moisture content of the crop, and thus can provide
complementary information to the optical data. Combining the information from
these two types of sensors increases the information available for distinguishing
each target class and its respective signature, and thus there is a better chance of
performing a more accurate classification.
Interpretations from remotely sensed data can be input to a geographic
information system (GIS) and crop rotation systems, and combined with
ancillary data, to provide information on ownership, management practices, etc.
Crop identification and mapping benefit from the use of multitemporal
imagery to facilitate classification by taking into account changes in reflectance as
a function of plant phenology (stage of growth). This in turn requires calibrated
sensors, and frequent repeat imaging throughout the growing season. For
example, crops like canola may be easier to identify when they are flowering,
because of both the spectral reflectance change, and the timing of the flowering.
Multisensor data are also valuable for increasing classification accuracies
by contributing more information than a sole sensor could provide. VIR sensing
contributes information relating to the chlorophyll content of the plants and the
canopy structure, while radar provides information relating to plant structure
and moisture. In areas of persistent cloud cover or haze, radar is an excellent tool
for observing and distinguishing crop type due to its active sensing capabilities
and long wavelengths, capable of penetrating through atmospheric water
vapour.
Although the principles of identifying crop type are the same, the scale of
observation in Europe and Southeast Asia is considerably smaller than in North
America, primarily due to smaller field parcel sizes. Cloud cover in Europe and
tropical countries also usually limits the feasibility of using high-resolution
optical sensors. In these cases high-resolution radar would have a strong role.
The sizable leaves of tropical agricultural crops (cocoa, banana, and oil
palm) have distinct radar signatures. Banana leaves in particular are
characterized by bright backscatter (represented by "B" in image). Monitoring
stages of rice growth is a key application in tropical areas, particularly Asian
countries. Radar is very sensitive to surface roughness, and the development of
rice paddies provides a dramatic change in brightness, from the low returns from
smooth water surfaces in flooded paddies to the high return of the emergent
rice crop.
The countries involved in the European Communities (EC) are using remote
sensing to help fulfil the requirements and mandate of the EC Agricultural
Policy, which is common to all members. The requirements are to delineate,
identify, and measure the extent of important crops throughout Europe, and to
provide an early forecast of production early in the season. Standardized
procedures for collecting this data are based on remote sensing technology,
developed and defined through the MARS project (Monitoring Agriculture by
Remote Sensing).
The project uses many types of remotely sensed data, from low resolution
NOAA-AVHRR, to high-resolution radar, and numerous sources of ancillary
data. These data are used to classify crop type over a regional scale to conduct
regional inventories, assess vegetation condition, estimate potential yield, and
finally to predict similar statistics for other areas and compare results.
Multisource data such as VIR and radar were introduced into the project to
increase classification accuracy. Radar provides very different information
than the VIR sensors, particularly vegetation structure, which proves valuable
when attempting to differentiate between crop type.
One of the key applications within this project is the operational use of high
resolution optical and radar data to confirm conditions claimed by a farmer
when he requests aid or compensation. The use of remote sensing identifies
potential areas of non-compliance or suspicious circumstances, which can then
be investigated by other, more direct methods.
As part of the Integrated Administration and Control System (IACS),
remote sensing data supports the development and management of databases,
which include cadastral information, declared land use, and parcel
measurement. This information is considered when applications are received for
area subsidies.
This is an example of a truly successful, operational crop identification
and monitoring application of remote sensing.
Crop Monitoring & Damage Assessment
Assessment of the health of a crop, as well as early detection of crop
infestations, is critical in ensuring good agricultural productivity. Stress
associated with, for example, moisture deficiencies, insects, fungal and weed
infestations, must be detected early enough to provide an opportunity for the
farmer to mitigate their effects. This process requires that remote sensing imagery be
provided on a frequent basis (at a minimum, weekly) and be delivered to the
farmer quickly, usually within 2 days.
Also, crops do not generally grow evenly across the field and
consequently crop yield can vary greatly from one spot in the field to another.
These growth differences may be a result of soil nutrient deficiencies or other
forms of stress. Remote sensing allows the farmer to identify areas within a field
which are experiencing difficulties, so that he can apply, for instance, the correct
type and amount of fertilizer, pesticide or herbicide. Using this approach, the
farmer not only improves the productivity from his land, but also reduces his
farm input costs and minimizes environmental impacts.
There are many people involved in the trading, pricing, and selling of
crops that never actually set foot in a field. They need information regarding
crop health worldwide to set prices and to negotiate trade agreements. Many of
these people rely on products such as a crop assessment index to compare
growth rates and productivity between years and to see how well each country's
agricultural industry is producing. This type of information can also help target
locations of future problems, for instance the famine in Ethiopia in the mid-1980s,
caused by a significant drought which destroyed many crops. Identifying such
areas facilitates planning and directing humanitarian aid and relief efforts.
Remote sensing has a number of attributes that lend themselves to monitoring
the health of crops. One advantage of optical (VIR) sensing is that it can see
beyond the visible wavelengths into the infrared, where wavelengths are highly
sensitive to crop vigour as well as crop stress and crop damage. Remote sensing
imagery also gives the required spatial overview of the land. Recent advances in
communication and technology allow a farmer to observe images of his fields
and make timely decisions about managing the crops. Remote sensing can aid in
identifying crops affected by conditions that are too dry or wet, affected by
insect, weed or fungal infestations, or weather-related damage. Images can be
obtained throughout the growing season to not only detect problems, but also to
monitor the success of the treatment. In the example image given here, a tornado
has destroyed/damaged crops southwest of Winnipeg, Manitoba.
Healthy vegetation contains large quantities of chlorophyll, the substance
that gives most vegetation its distinctive green colour. In referring to healthy
crops, reflectance in the blue and red parts of the spectrum is low since
chlorophyll absorbs this energy. In contrast, reflectance in the green and
near-infrared spectral regions is high. Stressed or damaged crops experience a
decrease in chlorophyll content and changes to the internal leaf structure. The
reduction in chlorophyll content results in a decrease in reflectance in the green
region and internal leaf damage results in a decrease in near-infrared reflectance.
These reductions in green and infrared reflectance provide early detection of
crop stress. Examining the ratio of reflected infrared to red wavelengths is an
excellent measure of vegetation health. This is the premise behind some
vegetation indices, such as the normalized differential vegetation index (NDVI)
(Chapter 4). Healthy plants have a high NDVI value because of their high
reflectance of infrared light, and relatively low reflectance of red light. Phenology
and vigour are the main factors affecting NDVI. An excellent example is the
difference between irrigated crops and non-irrigated land. The irrigated crops
appear bright green in a real-colour simulated image. The darker areas are dry
rangeland with minimal vegetation. In a CIR (colour infrared simulated) image,
where infrared reflectance is displayed in red, the healthy vegetation appears
bright red, while the rangeland remains quite low in reflectance.
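The infrared-to-red contrast behind the NDVI can be written directly from its definition, NDVI = (NIR - red) / (NIR + red). A minimal sketch with illustrative reflectance values (not taken from any real scene):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - red) / (NIR + red)."""
    return (nir - red) / (nir + red)

# Illustrative per-pixel reflectances (fractions), chosen to show the contrast:
healthy = ndvi(nir=0.50, red=0.08)   # high NIR, low red -> high NDVI
stressed = ndvi(nir=0.30, red=0.15)  # reduced NIR, raised red -> lower NDVI
print(round(healthy, 2), round(stressed, 2))  # -> 0.72 0.33
```

The normalization by (NIR + red) makes the index less sensitive to overall brightness differences, which is one reason ratio-based indices are preferred over raw band values.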
Examining variations in crop growth within one field is possible. Areas of
consistently healthy and vigorous crop would appear uniformly bright. Stressed
vegetation would appear dark amongst the brighter, healthier crop areas. If the
data is georeferenced, and if the farmer has a GPS (Global Positioning System) unit,
he can find the exact area of the problem very quickly, by matching the
coordinates of his location to that on the image.
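Matching a GPS reading to a location in a georeferenced image, as described above, amounts to inverting the image's geotransform. A minimal sketch for a north-up image with square pixels; the origin coordinates and pixel size are hypothetical:

```python
def map_to_pixel(x, y, origin_x, origin_y, pixel_size):
    """Convert a projected ground coordinate (x, y) to (col, row) in a
    north-up georeferenced image. Assumes a simple affine geotransform
    with square pixels and no rotation."""
    col = int((x - origin_x) / pixel_size)
    row = int((origin_y - y) / pixel_size)  # row index increases southward
    return col, row

# Hypothetical UTM-style values: image origin (500000, 5500000), 30 m pixels
print(map_to_pixel(500600, 5499100, 500000, 5500000, 30))  # -> (20, 30)
```

Real imagery may carry rotation terms in its geotransform, in which case the full 2x2 affine matrix must be inverted rather than this simplified form.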
Detecting damage and monitoring crop health requires high-resolution imagery
and multispectral imaging capabilities. One of the most critical factors in making
imagery useful to farmers is a quick turnaround time from data acquisition to
distribution of crop information. Receiving an image that reflects crop conditions
of two weeks earlier does not help real time management nor damage mitigation.
Images are also required at specific times during the growing season, and on a
frequent basis.
Remote sensing doesn't replace the field work performed by farmers to monitor
their fields, but it does direct them to the areas in need of immediate attention.
Efficient agricultural practices are a global concern, and other countries share
many of the same requirements as Canada in terms of monitoring crop health by
means of remote sensing. In many cases, however, the scale of interest is smaller:
smaller fields in Europe and Asia dictate higher resolution systems and smaller
areal coverage. Canada, the USA, and Russia, amongst others, have more
expansive areas devoted to agriculture, and have developed, or are in the process
of developing crop information systems (see below). In this situation, regional
coverage and lower resolution data (say: 1km) can be used. The lower resolution
facilitates computer efficiency by minimizing storage space, processing efforts
and memory requirements.
As an example of an international crop monitoring application, date palms are
the prospective subject of an investigation to determine if remote sensing
methods can detect damage from the red palm weevil in the Middle East. In the
Arabian Peninsula, dates are extremely popular and date crops are one of the
region's most important agricultural products. Infestation by the weevil could
quickly devastate the palm crops and swallow a commodity worth hundreds of
millions of dollars. Remote sensing techniques will be used to examine the health
of the date crops through spectral analysis of the vegetation. Infested areas
appear yellow to the naked eye, and will show a smaller near infrared reflectance
and a higher red reflectance on the remotely sensed image data than the healthy
crop areas. Authorities are hoping to identify areas of infestation and provide
measures to eradicate the weevil and save the remaining healthy crops.
Canadian Crop Information System: A composite crop index map is created each
week, derived from composited NOAA-AVHRR data. Based on the NDVI, the
index shows the health of crops in the prairie regions of Manitoba through to
Alberta. These indices are produced weekly, and can be compared with indices
of past years to compare crop growth and health.
In 1988, severe drought conditions were prevalent across the prairies. Using
NDVI values from NOAA AVHRR data, a drought area analysis determined the
status of drought effects on crops across the affected area. Red and yellow areas
indicate those crops in a weakened and stressed state, while green indicates
healthy crop conditions. Note that most of the healthy crops are those in the
cooler locations, such as in the northern Alberta (Peace River) and the higher
elevations (western Alberta). Non-cropland areas (dry rangeland and forested
land) are indicated in black, within the analysis region.
Forestry
Forests are a valuable resource providing food, shelter, wildlife habitat, fuel, and
daily supplies such as medicinal ingredients and paper. Forests play an
important role in balancing the Earth's CO2 supply and exchange, acting as a key
link between the atmosphere, geosphere, and hydrosphere. Tropical rainforests,
in particular, house an immense diversity of species, more capable of adapting
to, and therefore surviving, changing environmental conditions than
monoculture forests. This diversity also provides habitat for numerous animal
species and is an important source of medicinal ingredients. The main issues
concerning forest management are depletion due to natural causes (fires and
infestations) or human activity (clear-cutting, burning, land conversion), and
monitoring of health and growth for effective commercial exploitation and environmental protection.
Humans generally consider the products of forests useful, rather than the forests
themselves, and so extracting wood is a wide-spread and historical practice,
virtually global in scale. Depletion of forest resources has long term effects on
climate, soil conservation, biodiversity, and hydrological regimes, and thus is a
vital concern of environmental monitoring activities. Commercial forestry is an
important industry throughout the world. Forests are cropped and re-harvested,
and new areas are continually sought to provide new sources of lumber.
With increasing pressure to conserve native and virgin forest areas, and
unsustainable forestry practices limiting the remaining areas of potential cutting,
the companies involved in extracting wood supplies need to be more efficient,
economical, and aware of sustainable forestry practices. Ensuring that there is a
healthy regeneration of trees where forests are extracted will ensure a future for
the commercial forestry firms, as well as adequate wood supplies to meet the
demands of a growing population.
Non-commercial sources of forest depletion include removal for agriculture
(pasture and crops), urban development, droughts, desert encroachment, loss of
ground water, insect damage, fire and other natural phenomena (disease,
typhoons). In some areas of the world, particularly in the tropics, (rain) forests
are covering what might be considered the most valuable commodity - viable
agricultural land. Forests are burned or clear-cut to facilitate access to, and use of,
the land. This practice often occurs when the perceived need for long term
sustainability is overwhelmed by short-term sustenance goals. Not only is the
depletion of species-rich forests a problem, affecting the local and regional
hydrological regime, the smoke caused by the burning trees pollutes the
atmosphere, adding more CO2, and furthering the greenhouse effect.
Of course, monitoring the health of forests is crucial for sustainability and
conservation issues. Depletion of key species such as mangrove in
environmentally sensitive coastline areas, removal of key support or shade trees
from a potential crop tree, or disappearance of a large biota acting as a CO2
reservoir all affect humans and society in a negative way, and more effort is
being made to monitor and enforce regulations and plans to protect these areas.
International and domestic forestry applications where remote sensing
can be utilized include sustainable development, biodiversity, land title and
tenure (cadastre), monitoring deforestation, reforestation monitoring and
managing, commercial logging operations, shoreline and watershed protection,
biophysical monitoring (wildlife habitat assessment), and other environmental monitoring applications.
General forest cover information is valuable to developing countries with
limited previous knowledge of their forestry resources. General cover type
mapping, shoreline and watershed mapping and monitoring for protection,
monitoring of cutting practices and regeneration, and forest fire/burn mapping
are global needs which are currently being addressed by Canadian and foreign
agencies and companies employing remote sensing technology as part of their
information solutions in foreign markets.
Forestry applications of remote sensing include the following:
1) reconnaissance mapping:
Objectives to be met by national forest/environment agencies include
forest cover updating, depletion monitoring, and measuring biophysical
properties of forest stands.
forest cover type discrimination
agroforestry mapping
2) Commercial forestry:
Of importance to commercial forestry companies and to resource
management agencies are inventory and mapping applications: collecting
harvest information, updating of inventory information for timber supply,
broad forest type, vegetation density, and biomass measurements.
clear cut mapping / regeneration assessment
burn delineation
infrastructure mapping / operations support
forest inventory
biomass estimation
species inventory
3) Environmental monitoring
Conservation authorities are concerned with monitoring the quantity,
health, and diversity of the Earth's forests.
deforestation (rainforest, mangrove colonies)
species inventory
watershed protection (riparian strips)
coastal protection (mangrove forests)
forest health and vigour
Canadian requirements for forestry application information differ
considerably from international needs, due in part to contrasts in tree size,
species diversity (monoculture vs. species rich forest), and agroforestry practices.
The level of accuracy and resolution of data required to address respective
forestry issues differs accordingly. Canadian agencies have extensive a priori
knowledge of their forestry resources and present inventory and mapping needs
are often capably addressed by available data sources.
For Canadian applications requirements, high accuracy (for accurate
information content), multispectral information, fine resolution, and data
continuity are the most important. There are requirements for large volumes of
data, and reliable observations for seasonal coverage. There is a need to balance
spatial resolution with the required accuracy and costs of the data. Resolution
capabilities of 10 m to 30 m are deemed adequate for forest cover mapping,
identifying and monitoring clearcuts, burn and fire mapping, collecting forest
harvest information, and identifying general forest damage. Spatial coverage of
100 - 10000 km2 is appropriate for district to provincial scale forest cover and
clear cut mapping, whereas 1-100 km2 coverage is the most appropriate for site
specific vegetation density and volume studies.
Tropical forest managers will be most concerned with having a reliable data
source, capable of imaging during critical time periods, and therefore
unhindered by atmospheric conditions.
Hydrology
Hydrology is the study of water on the Earth's surface, whether flowing
above ground, frozen in ice or snow, or retained by soil. Hydrology is inherently
related to many other applications of remote sensing, particularly forestry,
agriculture and land cover, since water is a vital component in each of these
disciplines. Most hydrological processes are dynamic, not only between years,
but also within and between seasons, and therefore require frequent
observations. Remote sensing offers a synoptic view of the spatial distribution
and dynamics of hydrological phenomena, often unattainable by traditional
ground surveys. Radar has brought a new dimension to hydrological studies
with its active sensing capabilities, allowing the time window of image
acquisition to include inclement weather conditions or seasonal or diurnal darkness.
Examples of hydrological applications include:
wetlands mapping and monitoring,
soil moisture estimation,
snow pack monitoring / delineation of extent,
measuring snow thickness,
determining snow-water equivalent,
river and lake ice monitoring,
flood mapping and monitoring,
glacier dynamics monitoring (surges, ablation)
river /delta change detection
drainage basin mapping and watershed modelling
irrigation canal leakage detection
irrigation scheduling
Flood Delineation & Mapping
A natural phenomenon in the hydrological cycle is flooding. Flooding is
necessary to replenish soil fertility by periodically adding nutrients and fine
grained sediment; however, it can also cause loss of life, temporary destruction
of animal habitat and permanent damage to urban and rural infrastructure.
Inland floods can result from disruption to natural or man-made dams,
catastrophic melting of ice and snow (jökulhlaups in Iceland), rain, river ice jams,
and/or excessive runoff in the spring.
Remote sensing techniques are used to measure and monitor the areal extent of
the flooded areas , to efficiently target rescue efforts and to provide quantifiable
estimates of the amount of land and infrastructure affected. Incorporating
remotely sensed data into a GIS allows for quick calculations and assessments of
water levels, damage, and areas facing potential flood danger. Users of this type
of data include flood forecast agencies, hydropower companies, conservation
authorities, city planning and emergency response departments, and insurance
companies (for flood compensation). The identification and mapping of
floodplains, abandoned river channels, and meanders are important for planning
and transportation routing.
Many of these users of remotely sensed data need the information during a
crisis and therefore require "near-real time turnaround". Turnaround time is less
demanding for those involved in hydrologic modelling, calibration/validation
studies, damage assessment and the planning of flood mitigation. Flooding
conditions are relatively short term and generally occur during inclement
weather, so optical sensors, although typically having high information content
for this purpose, can not penetrate through the cloud cover to view the flooded
region below. For these reasons, active SAR sensors are particularly valuable for
flood monitoring. RADARSAT in particular offers a high turnaround interval,
from when the data is acquired by the sensor, to when the image is delivered to
the user on the ground. The land / water interface is quite easily discriminated
with SAR data, allowing the flood extent to be delineated and mapped. The SAR
data is most useful when integrated with a pre-flood image, to highlight the
flood-affected areas, and then presented in a GIS with cadastral and road
network information.
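The delineation workflow described above, thresholding low SAR backscatter to find smooth water and differencing against a pre-flood image, can be sketched as follows; the backscatter values (in dB) and the threshold are illustrative, not calibrated, and a real workflow would operate on full georeferenced images:

```python
def flood_mask(pre, post, water_threshold):
    """Label pixels that are water in the post-flood SAR image but were
    land in the pre-flood image. Smooth water returns little energy to the
    sensor, so backscatter below the threshold is treated as water.
    Permanent water bodies (water in both images) are excluded."""
    mask = []
    for b_pre, b_post in zip(pre, post):
        newly_flooded = b_post < water_threshold <= b_pre
        mask.append(newly_flooded)
    return mask

pre  = [-6.0, -7.5, -15.0, -5.0]   # dB: fields, plus one permanent lake (-15)
post = [-6.5, -14.2, -15.5, -4.8]  # second pixel is now smooth open water
print(flood_mask(pre, post, -12.0))  # -> [False, True, False, False]
```

Differencing against the pre-flood scene is what separates newly flooded land from permanent water bodies, which is why the integration with an archive image is emphasized above.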
Requirements for this application are similar the world over. Flooding can affect
many areas of the world, whether coastal or inland, and many of the conditions
for imaging are the same. Radar provides excellent water/land discrimination
and is reliable for imaging despite most atmospheric limitations.
In 1997, the worst Canadian flood of the 20th century inundated prairie fields
and towns in the states of Minnesota, North Dakota, and the Canadian province
of Manitoba. By May 5th, 25,000 residents of Manitoba had been evacuated from
their homes, with 10,000 more on alert. The watershed of the Red River, flowing
north from the United States into Canada, received unusually high winter
snowfalls and heavy precipitation in April. These factors, combined with the
northward flow into colder ground areas and very flat terrain beyond the
immediate floodplain, caused record flooding conditions, with tremendous
damage to homes and property, in addition to wildlife and livestock casualties.
For weeks emergency response teams, area residents, and the media monitored
the extent of the flood, with some input from remote sensing techniques. It is
impossible to imagine the scale of flooding from a ground perspective, and even
video and photographs from aircraft are unable to show the full extent.
Spectacular satellite images however, have shown the river expand from a 200 m
wide ribbon, to a body of water measuring more than 40 km across. Towns
protected by sand-bag dikes, were dry islands in the midst of what was
described as the "Red Sea". Many other towns weren't as fortunate, and home
and business owners were financially devastated by their losses.
Insurance agents faced their own flood of claims for property, businesses, and
crops ruined or damaged by the Red River flood. To quickly assess who is
eligible for compensation, the insurance companies can rely on remotely sensed
data to delineate the flood extent, and GIS databases to immediately identify
whose land was directly affected. City and town planners could also use the
images to study potential locations for future dike reinforcement and
construction, as well as residential planning.
Both NOAA-AVHRR and RADARSAT images captured the scale and extent of
the flood. The AVHRR sensors onboard the NOAA satellites provided small-scale
views of the entire flood area from Lakes Manitoba and Winnipeg south to
the North Dakota - South Dakota border. Some of the best images are those taken
at night in the thermal infrared wavelengths, where the cooler land appears dark
and the warmer water (A) appears white. Manmade dikes, such as the Brunkild
Dike (B), were quickly built to prevent the flow of water into southern Winnipeg.
Dikes are apparent on the image as very regular straight boundaries between the
land and floodwater. Although the city of Winnipeg (C) is not clearly defined,
the Winnipeg floodway (D) immediately to the east, paralleling the Red River at
the northeast end of the flood waters, is visible since it is full of water. The
floodway was designed to divert excess water flow from the Red River outside of
the city limits. In this case, the volume of water was simply too great for the
floodway to carry it all, and much of the flow backed up and spread across the surrounding land.
RADARSAT provided some excellent views of the flood, because of its ability to
image in darkness or cloudy weather conditions, and its sensitivity to the
land/water differences. In this image, the flood water (A) completely surrounds
the town of Morris (B), visible as a bright patch within the dark flood water. The
flooded areas appear dark on radar imagery because very little of the incident
microwave energy directed toward the smooth water surface returns back to the
sensor. The town however, has many angular (corner) reflectors primarily in the
form of buildings, which cause the incident energy to "bounce" back to the sensor.
Transportation routes can still be observed. A railroad, on its raised bed, can be
seen amidst the water just above (C), trending southwest - northeast. Farmland
relatively unaffected by the flood (D) is quite variable in its backscatter response.
This is due to differences in each field's soil moisture and surface roughness.
Soil Moisture
Soil moisture is an important measure in determining crop yield potential in
Canada and in drought-affected parts of the world (Africa) and for watershed
modelling. The moisture content generally refers to the water contained in the
upper 1-2 m of soil, which can potentially evaporate into the atmosphere. Early
detection of dry conditions which could lead to crop damage, or are indicative of
potential drought, is important for amelioration efforts and forecasting potential
crop yields, which in turn can serve to warn farmers, prepare humanitarian aid
to affected areas, or give international commodities traders a competitive
advantage. Soil moisture conditions may also serve as a warning for subsequent
flooding if the soil has become too saturated to hold any further runoff or
precipitation. Soil moisture content is an important parameter in watershed
modelling that ultimately provides information on hydroelectric and irrigation
capacity. In areas of active deforestation, soil moisture estimates help predict
amounts of run-off, evaporation rates, and soil erosion.
Why remote sensing? Remote sensing offers a means of measuring soil
moisture across a wide area instead of at discrete point locations that are
inherent with ground measurements. RADAR is effective for obtaining
qualitative imagery and quantitative measurements, because radar backscatter
response is affected by soil moisture, in addition to topography, surface
roughness and amount and type of vegetative cover. Keeping the latter elements
static, multitemporal radar images can show the change in soil moisture over
time. The radar is actually sensitive to the soil's dielectric constant, a property
that changes in response to the amount of water in the soil.
Users of soil moisture information from remotely sensed data include
agricultural marketing and administrative boards, commodity brokers, large
scale farming managers, conservation authorities, and hydroelectric power companies.
Obviously, a sensor must be sensitive to moisture conditions, and radar
satisfies this requirement better than optical sensors. Frequent and regular
(repeated) imaging is required during the growing season to follow the change in
moisture conditions, and a quick turnaround is required for a farmer to respond
to unsuitable conditions (excessive moisture or dryness) in a timely manner.
Using high resolution images, a farmer can target irrigation efforts more
accurately. Regional coverage allows an overview of soil and growing conditions
of interest to agricultural agencies and authorities.
Data requirements to address this application are similar around the world,
except that higher resolution data may be necessary in areas such as Europe and
Southeast Asia, where field and land parcel sizes are substantially smaller than in
North America.
As with most Canadian prairie provinces, the topography of Saskatchewan is
quite flat. The region is dominated by black and brown chernozemic soil
characterized by a thick dark organic horizon, ideal for growing cereal crops
such as wheat. More recently, canola has been introduced as an alternative to
cereal crops.
Shown here is a radar image acquired July 7, 1992 by the European Space
Agency (ESA) ERS-1 satellite. This synoptic image of an area near Melfort,
Saskatchewan details the effects of a localized precipitation event on the
microwave backscatter recorded by the sensor. Areas where precipitation has
recently occurred can be seen as a bright tone (bottom half) and those areas
unaffected by the event generally appear darker (upper half). This is a result of
the complex dielectric constant which is a measure of the electrical properties of
surface materials. The dielectric property of a material influences its ability to
absorb microwave energy, and therefore critically affects the scattering of
microwave energy.
The magnitude of the radar backscatter is proportional to the dielectric constant
of the surface. For dry, naturally occurring materials, this is in the range of 3-8,
and may reach values as high as 80 for wet surfaces. Therefore the amount of
moisture in the surface material directly affects the amount of backscattering. For
example, the lower the dielectric constant, the more incident energy is absorbed,
and the darker the object appears on the image.
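The relationship described here, a higher dielectric constant (wetter surface) giving stronger backscatter with all else (roughness, slope, cover) held equal, can be sketched as a simple ranking. The dielectric values below are representative of the 3-8 (dry) to near-80 (wet) range quoted above, not measurements, and this is an illustrative ordering, not a scattering model:

```python
def by_expected_brightness(dielectric_constants):
    """Rank surfaces from brightest to darkest expected radar return,
    assuming backscatter increases monotonically with the dielectric
    constant and all other factors are held equal (illustrative only)."""
    return sorted(dielectric_constants,
                  key=dielectric_constants.get, reverse=True)

surfaces = {"dry field": 4.0, "moist field": 20.0, "saturated field": 60.0}
print(by_expected_brightness(surfaces))
# -> ['saturated field', 'moist field', 'dry field']
```

This ordering matches the Melfort example above, where recently rained-on fields appear bright and dry fields appear dark in the ERS-1 image.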
Land Cover & Land Use
Although the terms land cover and land use are often used
interchangeably, their actual meanings are quite distinct. Land cover refers to the
surface cover on the ground, whether vegetation, urban infrastructure, water,
bare soil or other. Identifying, delineating and mapping land cover is important
for global monitoring studies, resource management, and planning activities.
Identification of land cover establishes the baseline from which monitoring
activities (change detection) can be performed, and provides the ground cover
information for baseline thematic maps.
Land use refers to the purpose the land serves, for example, recreation,
wildlife habitat, or agriculture. Land use applications involve both baseline
mapping and subsequent monitoring, since timely information is required to
know what current quantity of land is in what type of use and to identify the
land use changes from year to year. This knowledge will help develop strategies
to balance conservation, conflicting uses, and developmental pressures. Issues
driving land use studies include the removal or disturbance of productive land,
urban encroachment, and depletion of forests.
It is important to distinguish this difference between land cover and land
use, and the information that can be ascertained from each. The properties
measured with remote sensing techniques relate to land cover, from which land
use can be inferred, particularly with ancillary data or a priori knowledge.
Land cover / use studies are multidisciplinary in nature, and thus the
participants involved in such work are numerous and varied, ranging from
international wildlife and conservation foundations, to government researchers,
and forestry companies. Regional (in Canada, provincial) government agencies
have an operational need for land cover inventory and land use monitoring, as it
is within their mandate to manage the natural resources of their respective
regions. In addition to facilitating sustainable management of the land, land
cover and use information may be used for planning, monitoring, and evaluation
of development, industrial activity, or reclamation. Detection of long term
changes in land cover may reveal a response to a shift in local or regional
climatic conditions, the basis of terrestrial global monitoring.
Ongoing negotiations of aboriginal land claims have generated a need for
more stringent knowledge of land information in those areas, ranging from
cartographic to thematic information.
Resource managers involved in parks, oil, timber, and mining companies,
are concerned with both land use and land cover, as are local resource inventory
or natural resource agencies. Changes in land cover will be examined by
environmental monitoring researchers, conservation authorities, and
departments of municipal affairs, with interests varying from tax assessment to
reconnaissance vegetation mapping. Governments are also concerned with the
general protection of national resources, and become involved in publicly
sensitive activities involving land use conflicts.
Land use applications of remote sensing include the following:
natural resource management
wildlife habitat protection
baseline mapping for GIS input
urban expansion / encroachment
routing and logistics planning for seismic / exploration / resource
extraction activities
damage delineation (tornadoes, flooding, volcanic, seismic, fire)
legal boundaries for tax and property evaluation
target detection - identification of landing strips, roads, clearings, bridges,
land/water interface
Land Use Change (Rural / Urban)
As the Earth's population increases and national economies continue to move
away from agriculture based systems, cities will grow and spread. The urban
sprawl often infringes upon viable agricultural or productive forest land, neither
of which can resist or deflect the overwhelming momentum of urbanization. City
growth is an indicator of industrialization (development) and generally has a
negative impact on the environmental health of a region.
The change in land use from rural to urban is monitored to estimate
populations, predict and plan direction of urban sprawl for developers, and
monitor adjacent environmentally sensitive areas or hazards. Temporary refugee
settlements and tent cities can be monitored and population amounts and
densities estimated.
Analyzing agricultural vs. urban land use is important for ensuring that
development does not encroach on valuable agricultural land, and to likewise
ensure that agriculture is occurring on the most appropriate land and will not
degrade due to improper adjacent development or infrastructure.
With multi-temporal analyses, remote sensing gives a unique perspective of
how cities evolve. The key element for mapping rural to urban landuse change is
the ability to discriminate between rural uses (farming, pasture forests) and
urban use (residential, commercial, recreational). Remote sensing methods can be
employed to classify types of land use in a practical, economical and repetitive
fashion, over large areas.
Requirements for rural / urban change detection and mapping
applications are 1) high resolution to obtain detailed information, and 2)
multispectral optical data to make fine distinctions among various land use classes.
Sensors operating in the visible and infrared portion of the spectrum are
the most useful data sources for land use analysis. While many urban features
can be detected on radar and other imagery (usually because of high reflectivity),
VIR data at high resolution permits fine distinction among more subtle land
cover/use classes. This would permit a confident identification of the urban
fringe and the transition to rural land usage. Optical imagery acquired during
winter months is also useful for roughly delineating urban areas vs. non-urban.
Cities appear in dramatic contrast to smooth textured snow covered fields.
Radar sensors also have some use for urban/rural delineation
applications, due to the ability of the imaging geometry to enhance
anthropogenic features, such as buildings, in the manner of corner reflectors. The
optimum geometric arrangement between the sensor and urban area is an
orientation of linear features parallel to the sensor movement, perpendicular to
the incoming incident EM energy.
Generally, this type of application does not require a high turnaround
rate, or a frequent acquisition schedule.
Throughout the world, requirements for rural/urban delineation will differ
according to the prevalent atmospheric conditions. Areas with frequently cloudy
skies may require the penetrating ability of radar, while areas with clear
conditions can use airphoto, optical satellite, or radar data. While land use
practices for both rural and urban areas differ significantly in various
parts of the world, the primary requirement for applying remote sensing
techniques (other than the cloud-cover issue) is the need for fine spatial
resolution.
This image of land cover change provides multitemporal information in the
form of urban growth mapping. The colours represent urban land cover for two
different years. The green delineates those areas of urban cover in 1973, and the
pink, urban areas for 1985. This image dramatically shows the change in
expansion of existing urban areas, and the clearing of new land for settlements
over a 12-year period. This type of information would be used for upgrading
government services, planning for increased transportation routes, etc.
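The two-date comparison described above can be sketched as a simple per-pixel rule. The masks and class codes here are hypothetical, not taken from the example image:

```python
# Sketch: mapping urban growth from two classified urban masks
# (hypothetical data). Pixels urban in the later image but not the
# earlier one represent new urban expansion.

def urban_growth(urban_early, urban_late):
    """Label each pixel: 0 = non-urban, 1 = urban in the earlier year,
    2 = new urban by the later year."""
    growth = []
    for row_a, row_b in zip(urban_early, urban_late):
        out = []
        for a, b in zip(row_a, row_b):
            if a:          # urban in the earlier image (green in the example)
                out.append(1)
            elif b:        # urban only in the later image (pink: expansion)
                out.append(2)
            else:
                out.append(0)
        growth.append(out)
    return growth

mask_1973 = [[1, 0], [0, 0]]
mask_1985 = [[1, 1], [0, 1]]
print(urban_growth(mask_1973, mask_1985))  # [[1, 2], [0, 2]]
```

Overlaying the result on a base map gives exactly the kind of two-colour growth product described in the text.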
Land Cover / Biomass Mapping
Land cover mapping serves as a basic inventory of land resources for all
levels of government, environmental agencies, and private industry throughout
the world. Whether regional or local in scope, remote sensing offers a means of
acquiring and presenting land cover data in a timely manner. Land cover
includes everything from crop type, ice and snow, to major biomes including
tundra, boreal or rainforest, and barren land.
Regional land cover mapping is performed by almost anyone who is interested
in obtaining an inventory of land resources, to be used as a baseline map for
future monitoring and land management. Programs are conducted around the
world to observe regional crop conditions as well as investigating climatic
change on a regional level through biome monitoring. Biomass mapping
provides quantifiable estimates of vegetation cover, and biophysical information
such as leaf area index (LAI), net primary productivity (NPP) and total biomass
accumulations (TBA) measurements - important parameters for measuring the
health of our forests, for example.
Few methods are as practical and cost-efficient for obtaining a timely
regional overview of land cover as remote sensing techniques. Remote sensing
data are capable of capturing changes in plant phenology (growth) throughout
the growing season, whether relating to changes in chlorophyll content
(detectable with VIR) or structural changes (via radar). For regional mapping,
continuous spatial coverage over large areas is required. It would be difficult to
detect regional trends with point source data. Remote sensing fulfills this
requirement, as well as providing multispectral, multisource, and multitemporal
information for an accurate classification of land cover. The multisource example
image shows the benefit of increased information content when two data sources
are integrated. On the left is TM data, and on the right it has been merged with
airborne SAR.
For continental and global scale vegetation studies, moderate resolution
data (1km) is appropriate, since it requires less storage space and processing
effort, a significant consideration when dealing with very large area projects. Of
course the requirements depend entirely on the scope of the application. Wetland
mapping, for instance, demands a critical acquisition period and high spatial resolution.
Coverage demand will be very large for regional types of surveying. One
way to adequately cover a large area and retain high resolution, is to create
mosaics of the area from a number of scenes.
Land cover information may be time sensitive. The identification of crops,
canola for instance, may require imaging on specific days of flowering, so a
reliable acquisition schedule is needed. Multi-temporal data are preferred for
capturing changes in phenology throughout the growing season. This
information may be used in the classification process to more accurately
discriminate vegetation types based on their growing characteristics.
While optical data are best for land cover mapping, radar imagery is a
good replacement in very cloudy areas.
Case study (example) NBIOME:
A major initiative of the Canada Centre for Remote Sensing is the development
of an objective, reproducible classification of Canada's landcover. This
classification methodology is used to produce a baseline map of the major
biomes and land cover in Canada, which can then be compared against
subsequent classifications to observe changes in cover. These changes may relate
to regional climatic or anthropogenic changes affecting the landscape.
The classification is based on NOAA-AVHRR LAC (Local Area Coverage)
(1km) data. The coarse resolution is required to ensure efficient processing and
storage of the data when dealing with such a large coverage area. Before
classification, cloud-cover-reduced composites of the Canadian landmass, each
spanning a 10-day period, are created. In each composite, the value used for a
given pixel is the most cloud-free observation of the ten days, determined by
the highest normalized difference vegetation index (NDVI) value, since low
NDVI is indicative of cloud cover (low infrared reflectance, high visible
reflectance). The data also undergo a procedure to minimize atmospheric,
bidirectional, and contamination effects.
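The maximum-NDVI compositing rule can be sketched for a single pixel as follows; the reflectance values are illustrative assumptions, not AVHRR measurements:

```python
# Sketch of maximum-NDVI compositing over a 10-day period (hypothetical
# values). For each pixel, the observation with the highest NDVI is assumed
# to be the least cloud-affected.

def ndvi(red, nir):
    """Normalized difference vegetation index from red and near-IR reflectance."""
    return (nir - red) / (nir + red)

def max_ndvi_composite(daily_obs):
    """daily_obs: list of (red, nir) pairs for one pixel over the period.
    Returns the (red, nir) observation with the highest NDVI."""
    return max(daily_obs, key=lambda rn: ndvi(*rn))

# Three days of observations; day 2 is cloudy (high visible, low NIR):
obs = [(0.05, 0.40), (0.30, 0.32), (0.04, 0.45)]
print(max_ndvi_composite(obs))  # (0.04, 0.45) -- the clearest observation
```

Applying this rule independently at every pixel builds the cloud-cover-reduced composite described above.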
The composites consist of four channels, mean reflectance of AVHRR
channels 1 and 2, NDVI and area under the (temporal NDVI) curve. 16
composites (in 1993) were included in a customized land cover classification
procedure (named: classification by progressive generalization), which is neither
a supervised nor unsupervised methodology, but incorporates aspects of both.
The classification approach is based on finding dominant spectral clusters and
progressively merging them. Eventually the clusters are labelled with the
appropriate land cover classes. The benefit is that the classification is more
objective than a supervised approach, while avoiding analyst-controlled
clustering parameters that could alter the results.
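The progressive-merging idea can be illustrated with a deliberately simplified sketch: start from many spectral cluster means and repeatedly merge the closest pair until a target number remains. The actual CCRS "classification by progressive generalization" procedure is more elaborate; this only shows the concept, with made-up reflectance values:

```python
# Hedged sketch of progressive cluster merging (concept only, one spectral
# band). Repeatedly merge the two spectrally closest clusters until the
# desired number of generalized clusters remains.

def merge_clusters(means, target):
    """means: list of cluster mean reflectances; merge nearest pairs."""
    means = sorted(means)
    while len(means) > target:
        # find the adjacent pair with the smallest spectral distance
        i = min(range(len(means) - 1), key=lambda k: means[k + 1] - means[k])
        merged = (means[i] + means[i + 1]) / 2.0  # equal weighting, for simplicity
        means[i:i + 2] = [merged]
    return means

print(merge_clusters([0.10, 0.12, 0.40, 0.45, 0.80], 3))
```

In practice the merging would operate on multi-band cluster statistics with proper cluster-size weighting, and the surviving clusters would then be labelled with land cover classes.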
The result of this work is an objective, reproducible classification of
Canada's land cover.
Digital Elevation Models
The availability of digital elevation models (DEMs) is critical for performing
geometric and radiometric corrections for terrain on remotely sensed imagery,
and allows the generation of contour lines and terrain models, thus providing
another source of information for analysis.
Present mapping programs are rarely implemented with only planimetric
considerations. The demand for digital elevation models is growing with
increasing use of GIS and with increasing evidence of improvement in
information extracted using elevation data (for example, in discriminating
wetlands, flood mapping, and forest management). The incorporation of
elevation and terrain data is crucial to many applications, particularly if radar
data is being used, to compensate for foreshortening and layover effects, and
slope induced radiometric effects. Elevation data is used in the production of
popular topographic maps.
Elevation data, integrated with imagery is also used for generating
perspective views, useful for tourism, route planning, to optimize views for
developments, to lessen visibility of forest clearcuts from major transportation
routes, and even golf course planning and development. Elevation models are
integrated into the guidance systems of cruise missiles to steer them over the terrain.
Resource management, telecommunications planning, and military
mapping are some of the applications associated with DEMs.
There are a number of ways to generate elevation models. One is to
create point data sets by collecting elevation data from altimeter or Global
Positioning System (GPS) data, and then interpolating between the points. This is
extremely time and effort consuming. Traditional surveying is also very time
consuming and limits the timeliness of regional scale mapping.
Generating DEMs from remotely sensed data can be cost effective and
efficient. A variety of sensors and methodologies to generate such models are
available and proven for mapping applications. Two primary methods of
generating elevation data are 1) stereogrammetry techniques using airphotos
(photogrammetry), VIR imagery, or radar data (radargrammetry), and 2) radar interferometry.
Stereogrammetry involves the extraction of elevation information from
stereo overlapping images, typically airphotos, SPOT imagery, or radar. To give
an example, stereo pairs of airborne SAR data are used to find point elevations,
using the concept of parallax. Contours (lines of equal elevation) can be traced
along the images by operators constantly viewing the images in stereo.
The potential of radar interferometric techniques to measure terrain
height, and to detect and measure minute changes in elevation and horizontal
base, is quickly becoming recognized.
Interferometry involves the gathering of precise elevation data using
successive passes (or dual antenna reception) of spaceborne or airborne SAR.
Subsequent images from nearly the same track are acquired and instead of
examining the amplitude images, the phase information of the returned signals is
compared. The phase images are coregistered, and the difference in phase value
for each pixel is measured and displayed as an interferogram. A computation of
phase "unwrapping" or phase integration, and geometric rectification are
performed to determine altitude values. High accuracies have been achieved in
demonstrations using both airborne (in the order of a few centimetres) and
spaceborne data (in the order of 10m).
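The sensitivity of interferometric phase to terrain height is often summarized by the "height of ambiguity": the elevation change that produces one full 2-pi fringe in the interferogram. A minimal sketch, with roughly ERS-like parameters assumed purely for illustration:

```python
import math

# Hedged sketch: repeat-pass InSAR height of ambiguity,
#   h_a = lambda * R * sin(theta) / (2 * B_perp)
# where lambda is the radar wavelength, R the slant range, theta the
# incidence angle, and B_perp the perpendicular baseline. The numbers
# below are illustrative (ERS-like), not taken from the text.

def height_of_ambiguity(wavelength, slant_range, incidence_deg, baseline_perp):
    """Elevation difference corresponding to one interferometric fringe."""
    return (wavelength * slant_range *
            math.sin(math.radians(incidence_deg)) / (2.0 * baseline_perp))

# C-band (5.66 cm), 850 km slant range, 23 deg incidence, 100 m baseline:
h_a = height_of_ambiguity(0.0566, 850e3, 23.0, 100.0)
print(round(h_a, 1))  # roughly 94 m per fringe
```

A longer baseline shrinks the height of ambiguity, making the interferogram more sensitive to topography; surface-deformation measurements instead exploit the phase change between acquisitions at (near) zero baseline.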
Primary applications of interferometry include high quality DEM
generation, monitoring of surface deformations (measurement of land
subsidence due to natural processes, gas removal, or groundwater extraction;
volcanic inflation prior to eruption; relative earth movements caused by
earthquakes), and hazard assessment and monitoring of natural landscape
features and fabricated structures, such as dams. This type of data would be
useful for insurance companies who could better measure damage due to natural
disasters, and for hydrology-specialty companies and researchers interested in
routine monitoring of ice jams for bridge safety, and changes in mass balance of
glaciers or volcano growth prior to an eruption.
From elevation models, contour lines can be generated for topographic
maps; slope and aspect models can be created for integration into (land cover)
thematic classification datasets or used as a sole data source; or the model itself
can be used to orthorectify remote sensing imagery and generate perspective views.
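As a sketch of deriving slope from a DEM, the widely used Horn finite-difference method on a 3x3 window can be written as follows; the grid values and cell size are illustrative assumptions:

```python
import math

# Sketch: slope from a 3x3 DEM window using Horn's finite-difference
# gradients (the method used by common GIS slope tools). Elevations in
# metres; cell_size is the grid spacing in metres. Values are hypothetical.

def slope_degrees(window, cell_size):
    """window: 3x3 list of elevations; returns slope in degrees."""
    (a, b, c), (d, e, f), (g, h, i) = window
    dz_dx = ((c + 2 * f + i) - (a + 2 * d + g)) / (8 * cell_size)
    dz_dy = ((g + 2 * h + i) - (a + 2 * b + c)) / (8 * cell_size)
    return math.degrees(math.atan(math.hypot(dz_dx, dz_dy)))

# Uniform 1 m rise per 30 m cell toward the east:
win = [[100, 101, 102],
       [100, 101, 102],
       [100, 101, 102]]
print(round(slope_degrees(win, 30.0), 2))  # 1.91 degrees
```

Applying this window at every interior DEM cell yields the slope raster used in thematic classification or radiometric terrain correction.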
The basic data requirement for both stereogrammetric and
interferometric techniques is that the target site has been imaged two times, with
the sensor imaging positions separated to give two different viewing angles.
In virtually all DEM and topographic map generation applications,
cartographic accuracy is the important limiting factor. Turnaround time is not
critical and repeat frequency is dependent on whether the application involves
change detection, and what the temporal scope of the study is.
Aerial photography is the primary data source for DEM generation in
Canada for national topographic mapping. For other applications of DEMs, there
are additional satellite sources such as SPOT, with its pointable sensors and 10m
panchromatic spatial resolution, producing adequate height information at scales
smaller than 1:50,000.
The height accuracy requirement for 1:50,000 mapping in Canada is
between 5 and 20 m. In developing countries it is typically 20 m. The original
elevation information used in the Canadian National Topographic Series Maps
was provided from photogrammetric techniques.
In foreign markets, airborne radar mapping is most suited for
approximately 1:50,000 scale topographic mapping. Spaceborne radar systems
will be able to provide data for the generation of coarser DEMs through
radargrammetry, in areas of cloud cover and with less stringent accuracy
requirements. Stereo data in most modes of operation will be available because
of the flexible incidence angles, allowing most areas to be captured during
subsequent passes. Interferometry from airborne and spaceborne systems should
meet many mapping requirements.
1) How can GIS techniques be applied to natural resources?
2) How can an agriculture map be created using GIS? What are the spatial
features of an agricultural resource?
3) Write in detail about water resources applications using GIS techniques.
4) How can an aerial survey be carried out for surface water bodies?
5) How can GIS techniques be implemented for waste management?
6) Define and describe LIS.